Science.gov

Sample records for adjusted poisson regression

  1. Modelling of filariasis in East Java with Poisson regression and generalized Poisson regression models

    NASA Astrophysics Data System (ADS)

    Darnah

    2016-04-01

    Poisson regression is used when the response variable is count data based on the Poisson distribution. The Poisson distribution assumes equidispersion, that is, variance equal to the mean. In practice, count data are often overdispersed or underdispersed, making Poisson regression inappropriate: it may underestimate the standard errors and overstate the significance of the regression parameters, and consequently give misleading inference about them. This paper suggests the generalized Poisson regression model for handling overdispersion and underdispersion in the Poisson regression model. The Poisson regression model and the generalized Poisson regression model are applied to the number of filariasis cases in East Java. Based on the Poisson regression model, the factors influencing filariasis are the percentage of families who do not practice clean and healthy living and the percentage of families who do not have a healthy house. The Poisson regression model exhibits overdispersion, so we use generalized Poisson regression. The best generalized Poisson regression model shows that the factor influencing filariasis is the percentage of families who do not have a healthy house. The model is interpreted as follows: each additional 1 percent of families without a healthy house adds one filariasis patient.
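A minimal sketch of the workflow this abstract describes, using simulated overdispersed counts rather than the East Java filariasis data: fit an ordinary Poisson regression, check the Pearson dispersion statistic, and refit with a generalized Poisson model when dispersion departs from 1. All variable names are illustrative.

```python
# Sketch: Poisson vs generalized Poisson regression under overdispersion.
import numpy as np
import statsmodels.api as sm
from statsmodels.discrete.discrete_model import GeneralizedPoisson

rng = np.random.default_rng(0)
n = 500
x = rng.uniform(0, 1, size=(n, 2))              # hypothetical district-level covariates
X = sm.add_constant(x)
mu = np.exp(0.5 + 1.2 * x[:, 0] + 0.8 * x[:, 1])
y = rng.negative_binomial(n=2, p=2 / (2 + mu))  # counts with mean mu, extra variance

pois = sm.GLM(y, X, family=sm.families.Poisson()).fit()
# Pearson chi-square divided by residual df well above 1 signals overdispersion
print("dispersion:", pois.pearson_chi2 / pois.df_resid)

gp = GeneralizedPoisson(y, X, p=1).fit(disp=False)
print("Poisson AIC:", pois.aic, " generalized Poisson AIC:", gp.aic)
```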

  2. Estimation of count data using mixed Poisson, generalized Poisson and finite Poisson mixture regression models

    NASA Astrophysics Data System (ADS)

    Zamani, Hossein; Faroughi, Pouya; Ismail, Noriszura

    2014-06-01

    This study relates the Poisson, mixed Poisson (MP), generalized Poisson (GP) and finite Poisson mixture (FPM) regression models through the mean-variance relationship, and suggests the application of these models for overdispersed count data. As an illustration, the regression models are fitted to the US skin care count data. The results indicate that the FPM regression model is the best model since it provides the largest log likelihood and the smallest AIC, followed by the Poisson-inverse Gaussian (PIG), GP and negative binomial (NB) regression models. The results also show that NB, PIG and GP regression models provide similar results.

  3. Background stratified Poisson regression analysis of cohort data

    PubMed Central

    Richardson, David B; Langholz, Bryan

    2012-01-01

    Background stratified Poisson regression is an approach that has been used in the analysis of data derived from a variety of epidemiologically important studies of radiation-exposed populations, including uranium miners, nuclear industry workers, and atomic bomb survivors. We describe a novel approach to fit Poisson regression models that adjust for a set of covariates through background stratification while directly estimating the radiation-disease association of primary interest. The approach makes use of an expression for the Poisson likelihood that treats the coefficients for stratum-specific indicator variables as ‘nuisance’ variables and avoids the need to explicitly estimate the coefficients for these stratum-specific parameters. Log-linear models, as well as other general relative rate models, are accommodated. This approach is illustrated using data from the Life Span Study of Japanese atomic bomb survivors and data from a study of underground uranium miners. The point estimate and confidence interval obtained from this ‘conditional’ regression approach are identical to the values obtained using unconditional Poisson regression with model terms for each background stratum. Moreover, it is shown that the proposed approach allows estimation of background stratified Poisson regression models of non-standard form, such as models that parameterize latency effects, as well as regression models in which the number of strata is large, thereby overcoming the limitations of previously available statistical software for fitting background stratified Poisson regression models. PMID:22193911

  4. Background stratified Poisson regression analysis of cohort data.

    PubMed

    Richardson, David B; Langholz, Bryan

    2012-03-01

    Background stratified Poisson regression is an approach that has been used in the analysis of data derived from a variety of epidemiologically important studies of radiation-exposed populations, including uranium miners, nuclear industry workers, and atomic bomb survivors. We describe a novel approach to fit Poisson regression models that adjust for a set of covariates through background stratification while directly estimating the radiation-disease association of primary interest. The approach makes use of an expression for the Poisson likelihood that treats the coefficients for stratum-specific indicator variables as 'nuisance' variables and avoids the need to explicitly estimate the coefficients for these stratum-specific parameters. Log-linear models, as well as other general relative rate models, are accommodated. This approach is illustrated using data from the Life Span Study of Japanese atomic bomb survivors and data from a study of underground uranium miners. The point estimate and confidence interval obtained from this 'conditional' regression approach are identical to the values obtained using unconditional Poisson regression with model terms for each background stratum. Moreover, it is shown that the proposed approach allows estimation of background stratified Poisson regression models of non-standard form, such as models that parameterize latency effects, as well as regression models in which the number of strata is large, thereby overcoming the limitations of previously available statistical software for fitting background stratified Poisson regression models. PMID:22193911
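In notation chosen here for illustration (the paper's own symbols may differ): let $n_{jk}$ and $t_{jk}$ be the event count and person-time in cell $k$ of background stratum $j$, and model the rate as $\lambda_{jk} = e^{\alpha_j}\,\rho_{jk}(\beta)$, where $\rho_{jk}(\beta)$ is the relative rate model of primary interest. Conditioning on the stratum totals $n_{j\cdot}$ yields a likelihood in which the stratum parameters $\alpha_j$ cancel:

$$L_c(\beta) \propto \prod_{j}\prod_{k}\left(\frac{t_{jk}\,\rho_{jk}(\beta)}{\sum_{l} t_{jl}\,\rho_{jl}(\beta)}\right)^{n_{jk}},$$

so $\beta$ can be estimated without fitting a coefficient per stratum; maximizing $L_c$ reproduces the unconditional point estimates and confidence intervals noted in the abstract.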

  5. Partial covariate adjusted regression

    PubMed Central

    Şentürk, Damla; Nguyen, Danh V.

    2008-01-01

    Covariate adjusted regression (CAR) is a recently proposed adjustment method for regression analysis where both the response and predictors are not directly observed (Şentürk and Müller, 2005). The available data has been distorted by unknown functions of an observable confounding covariate. CAR provides consistent estimators for the coefficients of the regression between the variables of interest, adjusted for the confounder. We develop a broader class of partial covariate adjusted regression (PCAR) models to accommodate both distorted and undistorted (adjusted/unadjusted) predictors. The PCAR model allows for unadjusted predictors, such as age, gender and demographic variables, which are common in the analysis of biomedical and epidemiological data. The available estimation and inference procedures for CAR are shown to be invalid for the proposed PCAR model. We propose new estimators and develop new inference tools for the more general PCAR setting. In particular, we establish the asymptotic normality of the proposed estimators and propose consistent estimators of their asymptotic variances. Finite sample properties of the proposed estimators are investigated using simulation studies and the method is also illustrated with a Pima Indians diabetes data set. PMID:20126296

  6. Analyzing Historical Count Data: Poisson and Negative Binomial Regression Models.

    ERIC Educational Resources Information Center

    Beck, E. M.; Tolnay, Stewart E.

    1995-01-01

    Asserts that traditional approaches to multivariate analysis, including standard linear regression techniques, ignore the special character of count data. Explicates three suitable alternatives to standard regression techniques, a simple Poisson regression, a modified Poisson regression, and a negative binomial model. (MJP)

  7. Poisson Regression Analysis of Illness and Injury Surveillance Data

    SciTech Connect

    Frome E.L., Watkins J.P., Ellis E.D.

    2012-12-12

    The Department of Energy (DOE) uses illness and injury surveillance to monitor morbidity and assess the overall health of the work force. Data collected from each participating site include health events and a roster file with demographic information. The source data files are maintained in a relational database and are used to obtain stratified tables of health event counts and person-time at risk that serve as the starting point for Poisson regression analysis. The explanatory variables that define these tables are age, gender, occupational group, and time. Typical response variables of interest are the number of absences due to illness or injury, i.e., the response variable is a count. Poisson regression methods are used to describe the effect of the explanatory variables on the health event rates using a log-linear main effects model. Results of fitting the main effects model are summarized in tabular and graphical form, and interpretation of model parameters is provided. An analysis of deviance table is used to evaluate the importance of each of the explanatory variables on the event rate of interest and to determine if interaction terms should be considered in the analysis. Although Poisson regression methods are widely used in the analysis of count data, there are situations in which over-dispersion occurs. This could be due to lack-of-fit of the regression model, extra-Poisson variation, or both. A score test statistic and regression diagnostics are used to identify over-dispersion. A quasi-likelihood method of moments procedure is used to evaluate and adjust for extra-Poisson variation when necessary. Two examples are presented using respiratory disease absence rates at two DOE sites to illustrate the methods and interpretation of the results. In the first example the Poisson main effects model is adequate. In the second example the score test indicates considerable over-dispersion and a more detailed analysis attributes the over-dispersion to extra-Poisson variation.
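A small sketch of the kind of adjustment mentioned above, assuming a stratified table of event counts with person-years at risk (the DOE data are not public; the numbers and strata below are invented): fit a log-linear Poisson main-effects model, then rescale the variance by the Pearson dispersion estimate, a moment-based quasi-likelihood correction for extra-Poisson variation.

```python
# Sketch: log-linear Poisson rates with a quasi-likelihood dispersion adjustment.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# hypothetical stratified counts of illness absences and person-years at risk
df = pd.DataFrame({
    "events": [12, 30, 45, 9, 25, 40],
    "pyears": [1000, 1800, 2100, 950, 1700, 2000],
    "age":    ["<40", "40-55", "55+", "<40", "40-55", "55+"],
    "gender": ["F", "F", "F", "M", "M", "M"],
})
m = smf.glm("events ~ age + gender", data=df,
            family=sm.families.Poisson(),
            offset=np.log(df["pyears"])).fit()

# refit with the scale estimated by Pearson chi-square: standard errors are
# inflated by sqrt(Pearson chi2 / df) when extra-Poisson variation is present
m_quasi = smf.glm("events ~ age + gender", data=df,
                  family=sm.families.Poisson(),
                  offset=np.log(df["pyears"])).fit(scale="X2")
print(m_quasi.bse / m.bse)  # constant inflation factor across coefficients
```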

  8. Geographically weighted Poisson regression for disease association mapping.

    PubMed

    Nakaya, T; Fotheringham, A S; Brunsdon, C; Charlton, M

    2005-09-15

    This paper describes geographically weighted Poisson regression (GWPR) and its semi-parametric variant as a new statistical tool for analysing disease maps arising from spatially non-stationary processes. The method is a type of conditional kernel regression which uses a spatial weighting function to estimate spatial variations in Poisson regression parameters. It enables us to draw surfaces of local parameter estimates which depict spatial variations in the relationships between disease rates and socio-economic characteristics. The method therefore can be used to test the general assumption made, often without question, in the global modelling of spatial data that the processes being modelled are stationary over space. Equally, it can be used to identify parts of the study region in which 'interesting' relationships might be occurring and where further investigation might be warranted. Such exceptions can easily be missed in traditional global modelling and therefore GWPR provides disease analysts with an important new set of statistical tools. We demonstrate the GWPR approach applied to a data set of working-age deaths in the Tokyo metropolitan area, Japan. The results indicate that there are significant spatial variations (that is, variation beyond that expected from random sampling) in the relationships between working-age mortality and occupational segregation and between working-age mortality and unemployment throughout the Tokyo metropolitan area and that, consequently, the application of traditional 'global' models would yield misleading results. PMID:16118814
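A schematic GWPR sketch on simulated data (not the authors' implementation, and ignoring their semi-parametric variant and bandwidth selection): at each location, fit a Poisson regression with Gaussian kernel weights that decay with distance, yielding a surface of local parameter estimates.

```python
# Sketch: geographically weighted Poisson regression via kernel-weighted GLMs.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 200
coords = rng.uniform(0, 10, size=(n, 2))       # area centroids
x = rng.normal(size=(n, 1))                    # e.g. an unemployment covariate
beta1 = 0.3 + 0.05 * coords[:, 0]              # effect drifts across space
y = rng.poisson(np.exp(0.2 + beta1 * x[:, 0]))
X = sm.add_constant(x)

def gwpr_at(i, bandwidth=2.0):
    d = np.linalg.norm(coords - coords[i], axis=1)
    w = np.exp(-0.5 * (d / bandwidth) ** 2)    # Gaussian kernel weights
    res = sm.GLM(y, X, family=sm.families.Poisson(), var_weights=w).fit()
    return res.params                          # local (intercept, slope)

local_slopes = np.array([gwpr_at(i)[1] for i in range(n)])
print(local_slopes.min(), local_slopes.max())  # spatial variation in the effect
```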

  9. Collision prediction models using multivariate Poisson-lognormal regression.

    PubMed

    El-Basyouny, Karim; Sayed, Tarek

    2009-07-01

    This paper advocates the use of multivariate Poisson-lognormal (MVPLN) regression to develop models for collision count data. The MVPLN approach presents an opportunity to incorporate the correlations across collision severity levels and their influence on safety analyses. The paper introduces a new multivariate hazardous location identification technique, which generalizes the univariate posterior probability of excess that has been commonly proposed and applied in the literature. In addition, the paper presents an alternative approach for quantifying the effect of the multivariate structure on the precision of expected collision frequency. The MVPLN approach is compared with the independent (separate) univariate Poisson-lognormal (PLN) models with respect to model inference, goodness-of-fit, identification of hot spots and precision of expected collision frequency. The MVPLN is modeled using the WinBUGS platform, which facilitates computation of posterior distributions as well as providing a goodness-of-fit measure for model comparisons. The results indicate that the estimates of the extra Poisson variation parameters were considerably smaller under MVPLN, leading to higher precision. The improvement in precision is due mainly to the fact that MVPLN accounts for the correlation between the latent variables representing property damage only (PDO) and injuries plus fatalities (I+F). This correlation was estimated at 0.758, which is highly significant, suggesting that higher PDO rates are associated with higher I+F rates, as the collision likelihood for both types is likely to rise due to similar deficiencies in roadway design and/or other unobserved factors. In terms of goodness-of-fit, the MVPLN model provided a better fit than the independent univariate models. The multivariate hazardous location identification results demonstrated that some hazardous locations could be overlooked if the analysis was restricted to the univariate models. PMID:19540972

  10. Mixed-effects Poisson regression analysis of adverse event reports

    PubMed Central

    Gibbons, Robert D.; Segawa, Eisuke; Karabatsos, George; Amatya, Anup K.; Bhaumik, Dulal K.; Brown, C. Hendricks; Kapur, Kush; Marcus, Sue M.; Hur, Kwan; Mann, J. John

    2008-01-01

    A new statistical methodology is developed for the analysis of spontaneous adverse event (AE) reports from post-marketing drug surveillance data. The method involves both empirical Bayes (EB) and fully Bayes estimation of rate multipliers for each drug within a class of drugs, for a particular AE, based on a mixed-effects Poisson regression model. Both parametric and semiparametric models for the random-effect distribution are examined. The method is applied to data from Food and Drug Administration (FDA)’s Adverse Event Reporting System (AERS) on the relationship between antidepressants and suicide. We obtain point estimates and 95 per cent confidence (posterior) intervals for the rate multiplier for each drug (e.g. antidepressants), which can be used to determine whether a particular drug has an increased risk of association with a particular AE (e.g. suicide). Confidence (posterior) intervals that do not include 1.0 provide evidence for either significant protective or harmful associations of the drug and the adverse effect. We also examine EB, parametric Bayes, and semiparametric Bayes estimators of the rate multipliers and associated confidence (posterior) intervals. Results of our analysis of the FDA AERS data revealed that newer antidepressants are associated with lower rates of suicide adverse event reports compared with older antidepressants. We recommend improvements to the existing AERS system, which are likely to improve its public health value as an early warning system. PMID:18404622

  11. Estimation of adjusted rate differences using additive negative binomial regression.

    PubMed

    Donoghoe, Mark W; Marschner, Ian C

    2016-08-15

    Rate differences are an important effect measure in biostatistics and provide an alternative perspective to rate ratios. When the data are event counts observed during an exposure period, adjusted rate differences may be estimated using an identity-link Poisson generalised linear model, also known as additive Poisson regression. A problem with this approach is that the assumption of equality of mean and variance rarely holds in real data, which often show overdispersion. An additive negative binomial model is the natural alternative to account for this; however, standard model-fitting methods are often unable to cope with the constrained parameter space arising from the non-negativity restrictions of the additive model. In this paper, we propose a novel solution to this problem using a variant of the expectation-conditional maximisation-either algorithm. Our method provides a reliable way to fit an additive negative binomial regression model and also permits flexible generalisations using semi-parametric regression functions. We illustrate the method using a placebo-controlled clinical trial of fenofibrate treatment in patients with type II diabetes, where the outcome is the number of laser therapy courses administered to treat diabetic retinopathy. An R package is available that implements the proposed method. Copyright © 2016 John Wiley & Sons, Ltd. PMID:27073156
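A sketch of the identity-link Poisson starting point the abstract describes, on simulated data; the authors' ECME-based additive negative binomial fit itself is more involved and lives in their R package. Folding the exposure time into the design matrix makes the coefficients adjusted rate differences. The link class name may vary across statsmodels versions.

```python
# Sketch: additive Poisson regression (identity link) for rate differences.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 300
treat = rng.integers(0, 2, size=n)   # placebo vs treatment (hypothetical arms)
t = rng.uniform(1, 5, size=n)        # exposure time in years
rate = 0.4 - 0.15 * treat            # additive rate difference of -0.15/year
y = rng.poisson(rate * t)

# mean = t * (b0 + b1*treat): multiplying each design column by exposure time
# makes the identity-link coefficients rates and rate differences per year
X = np.column_stack([t, t * treat])
m = sm.GLM(y, X,
           family=sm.families.Poisson(link=sm.families.links.Identity())).fit()
print(m.params)  # roughly [0.4, -0.15]: baseline rate and rate difference
```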

  12. Weather adjustment using seemingly unrelated regression

    SciTech Connect

    Noll, T.A.

    1995-05-01

    Seemingly unrelated regression (SUR) is a system estimation technique that accounts for time-contemporaneous correlation between individual equations within a system of equations. SUR is suited to weather adjustment estimations when the estimation is: (1) composed of a system of equations and (2) the system of equations represents either different weather stations, different sales sectors or a combination of different weather stations and different sales sectors. SUR utilizes the cross-equation error values to develop more accurate estimates of the system coefficients than are obtained using ordinary least-squares (OLS) estimation. SUR estimates can be generated using a variety of statistical software packages including MicroTSP and SAS.
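A self-contained two-step SUR sketch (feasible GLS) on simulated data for two hypothetical weather stations: estimate each equation by OLS, estimate the contemporaneous error covariance from the residuals, then reweight the stacked system. This is the mechanism the abstract credits for SUR's efficiency gain over ordinary least squares.

```python
# Sketch: two-step SUR (FGLS) exploiting cross-equation error correlation.
import numpy as np
from scipy.linalg import block_diag

rng = np.random.default_rng(3)
T = 120                                                   # e.g. monthly observations
X1 = np.column_stack([np.ones(T), rng.normal(size=T)])    # station 1 weather terms
X2 = np.column_stack([np.ones(T), rng.normal(size=T)])    # station 2 weather terms
E = rng.multivariate_normal([0, 0], [[1.0, 0.7], [0.7, 1.0]], size=T)
y1 = X1 @ [2.0, 1.5] + E[:, 0]
y2 = X2 @ [1.0, -0.5] + E[:, 1]

# step 1: equation-by-equation OLS, residual covariance across equations
b1 = np.linalg.lstsq(X1, y1, rcond=None)[0]
b2 = np.linalg.lstsq(X2, y2, rcond=None)[0]
R = np.column_stack([y1 - X1 @ b1, y2 - X2 @ b2])
Sigma = R.T @ R / T

# step 2: FGLS on the stacked system; Cov(y) = Sigma kron I_T
X = block_diag(X1, X2)
y = np.concatenate([y1, y2])
W = np.kron(np.linalg.inv(Sigma), np.eye(T))
beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
print(beta)                                               # SUR coefficient estimates
```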

  13. The relationship between truck accidents and geometric design of road sections: Poisson versus negative binomial regressions

    SciTech Connect

    Miaou, Shaw-Pin

    1993-07-01

    This paper evaluates the performance of Poisson and negative binomial (NB) regression models in establishing the relationship between truck accidents and geometric design of road sections. Three types of models are considered: Poisson regression, zero-inflated Poisson (ZIP) regression, and NB regression. The maximum likelihood (ML) method is used to estimate the unknown parameters of these models. Two other feasible estimators for estimating the dispersion parameter in the NB regression model are also examined: a moment estimator and a regression-based estimator. These models and estimators are evaluated based on their (1) estimated regression parameters, (2) overall goodness-of-fit, (3) estimated relative frequency of truck accident involvements across road sections, (4) sensitivity to the inclusion of short road sections, and (5) estimated total number of truck accident involvements. Data from the Highway Safety Information System (HSIS) are employed to examine the performance of these models in developing such relationships. The evaluation results suggest that the NB regression model estimated using the moment and regression-based methods should be used with caution. Also, under the ML method, the estimated regression parameters from all three models are quite consistent and no particular model outperforms the other two in terms of the estimated relative frequencies of truck accident involvements across road sections. It is recommended that the Poisson regression model be used as an initial model for developing the relationship. If the overdispersion of accident data is found to be moderate or high, both the NB and ZIP regression models could be explored. Overall, the ZIP regression model appears to be a serious candidate model when data exhibit excess zeros due, e.g., to underreporting.
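A minimal model-comparison sketch in the spirit of the abstract, on simulated counts with structural zeros rather than the HSIS data: fit Poisson, NB, and ZIP models by maximum likelihood and compare AIC.

```python
# Sketch: compare Poisson, negative binomial, and ZIP fits by AIC.
import numpy as np
import statsmodels.api as sm
from statsmodels.discrete.discrete_model import Poisson, NegativeBinomial
from statsmodels.discrete.count_model import ZeroInflatedPoisson

rng = np.random.default_rng(4)
n = 400
x = rng.normal(size=(n, 1))                        # e.g. a geometric design variable
X = sm.add_constant(x)
mu = np.exp(0.3 + 0.6 * x[:, 0])
y = rng.poisson(mu) * (rng.uniform(size=n) > 0.3)  # about 30% structural zeros

fits = {
    "Poisson": Poisson(y, X).fit(disp=False),
    "NB2": NegativeBinomial(y, X).fit(disp=False),
    "ZIP": ZeroInflatedPoisson(y, X).fit(disp=False),  # constant-only inflation part
}
for name, res in fits.items():
    print(name, round(res.aic, 1))                 # ZIP should win on these data
```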

  14. Fuzzy classifier based support vector regression framework for Poisson ratio determination

    NASA Astrophysics Data System (ADS)

    Asoodeh, Mojtaba; Bagheripour, Parisa

    2013-09-01

    Poisson ratio is considered one of the most important rock mechanical properties of hydrocarbon reservoirs. Determination of this parameter through laboratory measurement is time-, cost-, and labor-intensive. Furthermore, laboratory measurements do not provide continuous data along the reservoir intervals. Hence, a fast, accurate, and inexpensive way of determining Poisson ratio which produces continuous data over the whole reservoir interval is desirable. For this purpose, the support vector regression (SVR) method based on statistical learning theory (SLT) was employed as a supervised learning algorithm to estimate Poisson ratio from conventional well log data. SVR is capable of accurately extracting the implicit knowledge contained in conventional well logs and converting the gained knowledge into Poisson ratio data. The structural risk minimization (SRM) principle, which is embedded in the SVR structure in addition to the empirical risk minimization (ERM) principle, provides a robust model for finding a quantitative formulation between conventional well log data and Poisson ratio. Although satisfying results were obtained from an individual SVR model, it had flaws of overestimation at low Poisson ratios and underestimation at high Poisson ratios. These errors were eliminated through implementation of a fuzzy classifier based SVR (FCBSVR). The FCBSVR significantly improved the accuracy of the final prediction. This strategy was successfully applied to data from carbonate reservoir rocks of an Iranian oil field. Results indicate that SVR-predicted Poisson ratio values are in good agreement with measured values.
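An illustrative plain-SVR pipeline on synthetic well-log features (the paper's fuzzy classifier based SVR adds a classification stage not shown here): scale the inputs and regress Poisson's ratio with an RBF-kernel SVR.

```python
# Sketch: support vector regression of Poisson's ratio from well-log features.
import numpy as np
from sklearn.svm import SVR
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(5)
n = 300
logs = rng.normal(size=(n, 3))     # stand-ins for e.g. sonic, density, neutron logs
pr = 0.25 + 0.03 * np.tanh(logs @ [0.8, -0.5, 0.3]) + rng.normal(0, 0.005, n)

X_tr, X_te, y_tr, y_te = train_test_split(logs, pr, random_state=0)
model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.002))
model.fit(X_tr, y_tr)
print("R^2 on held-out samples:", model.score(X_te, y_te))
```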

  15. Poisson, Poisson-gamma and zero-inflated regression models of motor vehicle crashes: balancing statistical fit and theory.

    PubMed

    Lord, Dominique; Washington, Simon P; Ivan, John N

    2005-01-01

    This paper examines how crash data give rise to the "excess" zeros frequently observed in crash data. It is shown that the Poisson and other mixed probabilistic structures are approximations assumed for modeling the motor vehicle crash process. Furthermore, it is demonstrated that under certain (fairly common) circumstances excess zeros are observed, and that these circumstances arise from low exposure and/or inappropriate selection of time/space scales and not an underlying dual-state process. In conclusion, carefully selecting the time/space scales for analysis, including an improved set of explanatory variables and/or unobserved heterogeneity effects in count regression models, or applying small-area statistical methods (observations with low exposure) represent the most defensible modeling approaches for datasets with a preponderance of zeros. PMID:15607273

  16. Asbestos exposure and cancer mortality among petroleum refinery workers: a Poisson regression analysis of updated data.

    PubMed

    Montanaro, Fabio; Ceppi, Marcello; Puntoni, Riccardo; Silvano, Stefania; Gennaro, Valerio

    2004-04-01

    The authors investigated the relationship between asbestos exposure and respiratory cancer mortality among maintenance workers and other blue-collar workers at an Italian oil refinery. The cohort contained 931 men, 29,511 person-years, and 489 deaths. Poisson regression analysis using white-collar workers as an internal referent group provided relative risk estimates (RRs) for the main causes of death, adjusted for age, age at hiring, calendar period, length of exposure, and latency. Among maintenance workers, RRs for all tumors (RR = 1.50), digestive system cancers (RR = 1.41), lung cancers (RR = 1.53), and nonmalignant respiratory diseases (RR = 1.71) were significantly increased (p < 0.05); no significant excess in all-cause mortality was found among maintenance workers (RR = 1.12) or other blue-collar workers (RR = 1.01). Results confirm the increased risk of death from respiratory diseases and cancer among maintenance workers exposed to asbestos, whereas other smoking-related (circulatory system) diseases were not statistically different among groups. PMID:16189991

  17. Effect of air pollution on lung cancer: A Poisson regression model based on vital statistics

    SciTech Connect

    Tango, Toshiro

    1994-11-01

    This article describes a Poisson regression model for time trends of mortality to detect the long-term effects of common levels of air pollution on lung cancer, in which the adjustment for cigarette smoking is not always necessary. The main hypothesis to be tested in the model is that if the long-term and common-level air pollution had an effect on lung cancer, the death rate from lung cancer could be expected to increase gradually at a higher rate in the region with relatively high levels of air pollution than in the region with low levels, and that this trend would not be expected for other control diseases in which cigarette smoking is a risk factor. Using this approach, we analyzed the trend of mortality in females aged 40 to 79, from lung cancer and two control diseases, ischemic heart disease and cerebrovascular disease, based on vital statistics in 23 wards of the Tokyo metropolitan area for 1972 to 1988. Ward-specific mean levels per day of SO₂ and NO₂ from 1974 through 1976 estimated by Makino (1978) were used as the ward-specific exposure measure of air pollution. No data on tobacco consumption in each ward is available. Our analysis supported the existence of long-term effects of air pollution on lung cancer. 14 refs., 5 figs., 2 tabs.

  18. A marginalized zero-inflated Poisson regression model with overall exposure effects.

    PubMed

    Long, D Leann; Preisser, John S; Herring, Amy H; Golin, Carol E

    2014-12-20

    The zero-inflated Poisson (ZIP) regression model is often employed in public health research to examine the relationships between exposures of interest and a count outcome exhibiting many zeros, in excess of the amount expected under sampling from a Poisson distribution. The regression coefficients of the ZIP model have latent class interpretations, which correspond to a susceptible subpopulation at risk for the condition with counts generated from a Poisson distribution and a non-susceptible subpopulation that provides the extra or excess zeros. The ZIP model parameters, however, are not well suited for inference targeted at marginal means, specifically, in quantifying the effect of an explanatory variable in the overall mixture population. We develop a marginalized ZIP model approach for independent responses to model the population mean count directly, allowing straightforward inference for overall exposure effects and empirical robust variance estimation for overall log-incidence density ratios. Through simulation studies, the performance of maximum likelihood estimation of the marginalized ZIP model is assessed and compared with other methods of estimating overall exposure effects. The marginalized ZIP model is applied to a recent study of a motivational interviewing-based safer sex counseling intervention, designed to reduce unprotected sexual act counts. PMID:25220537
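In generic notation (close to, though not necessarily identical to, the authors' parameterization), the ZIP mixture is

$$P(Y_i = 0) = \psi_i + (1-\psi_i)\,e^{-\mu_i}, \qquad P(Y_i = y) = (1-\psi_i)\,\frac{e^{-\mu_i}\mu_i^{y}}{y!}, \quad y \ge 1,$$

with overall (marginal) mean $\nu_i = (1-\psi_i)\,\mu_i$. The marginalized ZIP places the regression directly on this mean, $\log \nu_i = \mathbf{x}_i^{\top}\boldsymbol{\alpha}$, so each $\exp(\alpha_k)$ is an overall incidence density ratio for the whole mixture population rather than a latent-class effect.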

  19. Electronic monitoring device event modelling on an individual-subject basis using adaptive Poisson regression.

    PubMed

    Knafl, George J; Fennie, Kristopher P; Bova, Carol; Dieckhaus, Kevin; Williams, Ann B

    2004-03-15

    An adaptive approach to Poisson regression modelling is presented for analysing event data from electronic devices monitoring medication-taking. The emphasis is on applying this approach to data for individual subjects although it also applies to data for multiple subjects. This approach provides for visualization of adherence patterns as well as for objective comparison of actual device use with prescribed medication-taking. Example analyses are presented using data on openings of electronic pill bottle caps monitoring adherence of subjects with HIV undergoing highly active antiretroviral therapies. The modelling approach consists of partitioning the observation period, computing grouped event counts/rates for intervals in this partition, and modelling these event counts/rates in terms of elapsed time after entry into the study using Poisson regression. These models are based on adaptively selected sets of power transforms of elapsed time determined by rule-based heuristic search through arbitrary sets of parametric models, thereby effectively generating a smooth non-parametric regression fit to the data. Models are compared using k-fold likelihood cross-validation. PMID:14981675

  20. Poisson regression analysis of mortality among male workers at a thorium-processing plant

    SciTech Connect

    Liu, Zhiyuan; Lee, Tze-San; Kotek, T.J.

    1991-12-31

    Analyses of mortality among a cohort of 3119 male workers employed between 1915 and 1973 at a thorium-processing plant were updated to the end of 1982. Of the whole group, 761 men were deceased and 2161 men were still alive, while 197 men were lost to follow-up. A total of 250 deaths was added to the 511 deaths observed in the previous study. The standardized mortality ratio (SMR) for all causes of death was 1.12 with 95% confidence interval (CI) of 1.05-1.21. The SMRs were also significantly increased for all malignant neoplasms (SMR = 1.23, 95% CI = 1.04-1.43) and lung cancer (SMR = 1.36, 95% CI = 1.02-1.78). Poisson regression analysis was employed to evaluate the joint effects of job classification, duration of employment, time since first employment, age and year at first employment on mortality of all malignant neoplasms and lung cancer. A comparison of internal and external analyses with the Poisson regression model was also conducted and showed no obvious difference in fitting the data on lung cancer mortality of the thorium workers. The results of the multivariate analysis showed that there was no significant effect of all the study factors on mortality due to all malignant neoplasms and lung cancer. Therefore, further study is needed for the former thorium workers.
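A small sketch of the SMR computation reported above: observed deaths divided by expected deaths, with an exact Poisson confidence interval obtained from chi-square quantiles. The expected count below is back-calculated from the reported SMR purely for illustration.

```python
# Sketch: standardized mortality ratio with an exact Poisson 95% CI.
from scipy.stats import chi2

def smr_ci(observed, expected, alpha=0.05):
    smr = observed / expected
    # exact Poisson limits for the observed count, divided by the expected count
    lo = chi2.ppf(alpha / 2, 2 * observed) / (2 * expected)
    hi = chi2.ppf(1 - alpha / 2, 2 * (observed + 1)) / (2 * expected)
    return smr, lo, hi

# illustration only: 761 observed deaths, expected chosen so SMR is about 1.12
print(smr_ci(observed=761, expected=761 / 1.12))
```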

  1. Mixed-effects Poisson regression analysis of adverse event reports: the relationship between antidepressants and suicide.

    PubMed

    Gibbons, Robert D; Segawa, Eisuke; Karabatsos, George; Amatya, Anup K; Bhaumik, Dulal K; Brown, C Hendricks; Kapur, Kush; Marcus, Sue M; Hur, Kwan; Mann, J John

    2008-05-20

    A new statistical methodology is developed for the analysis of spontaneous adverse event (AE) reports from post-marketing drug surveillance data. The method involves both empirical Bayes (EB) and fully Bayes estimation of rate multipliers for each drug within a class of drugs, for a particular AE, based on a mixed-effects Poisson regression model. Both parametric and semiparametric models for the random-effect distribution are examined. The method is applied to data from Food and Drug Administration (FDA)'s Adverse Event Reporting System (AERS) on the relationship between antidepressants and suicide. We obtain point estimates and 95 per cent confidence (posterior) intervals for the rate multiplier for each drug (e.g. antidepressants), which can be used to determine whether a particular drug has an increased risk of association with a particular AE (e.g. suicide). Confidence (posterior) intervals that do not include 1.0 provide evidence for either significant protective or harmful associations of the drug and the adverse effect. We also examine EB, parametric Bayes, and semiparametric Bayes estimators of the rate multipliers and associated confidence (posterior) intervals. Results of our analysis of the FDA AERS data revealed that newer antidepressants are associated with lower rates of suicide adverse event reports compared with older antidepressants. We recommend improvements to the existing AERS system, which are likely to improve its public health value as an early warning system. PMID:18404622

  2. Association between large strongyle genera in larval cultures--using rare-event Poisson regression.

    PubMed

    Cao, X; Vidyashankar, A N; Nielsen, M K

    2013-09-01

    Decades of intensive anthelmintic treatment have caused equine large strongyles to become quite rare, while the cyathostomins have developed resistance to several drug classes. The larval culture has been associated with low to moderate negative predictive values for detecting Strongylus vulgaris infection. It is unknown whether detection of other large strongyle species can be statistically associated with presence of S. vulgaris. This remains a statistical challenge because of the rare occurrence of large strongyle species. This study used a modified Poisson regression to analyse a dataset for associations between S. vulgaris infection and simultaneous occurrence of Strongylus edentatus and Triodontophorus spp. In 663 horses on 42 Danish farms, the individual prevalences of S. vulgaris, S. edentatus and Triodontophorus spp. were 12%, 3% and 12%, respectively. Both S. edentatus and Triodontophorus spp. were significantly associated with S. vulgaris infection, with relative risks above 1. Further, S. edentatus was associated with use of selective therapy on the farms, as well as negatively associated with anthelmintic treatment carried out within 6 months prior to the study. The findings illustrate that occurrence of S. vulgaris in larval cultures can be interpreted as indicative of other large strongyles being likely to be present. PMID:23731556
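A minimal sketch of rare-event (modified) Poisson regression on simulated data: a Poisson GLM applied to a binary outcome with a robust sandwich variance, so the exponentiated coefficient is a relative risk. The prevalences loosely mimic the abstract; the dataset itself is invented.

```python
# Sketch: modified Poisson regression (relative risk for a rare binary outcome).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(6)
n = 663
edentatus = rng.binomial(1, 0.03, n)            # S. edentatus detected (rare)
X = sm.add_constant(edentatus.astype(float))
p = 0.10 * np.where(edentatus == 1, 2.0, 1.0)   # true relative risk of 2
vulgaris = rng.binomial(1, p)                   # S. vulgaris detected

# Poisson GLM on 0/1 outcome; HC0 sandwich errors fix the misspecified variance
m = sm.GLM(vulgaris, X, family=sm.families.Poisson()).fit(cov_type="HC0")
print("relative risk:", np.exp(m.params[1]))
```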

  3. Longevity Is Linked to Mitochondrial Mutation Rates in Rockfish: A Test Using Poisson Regression.

    PubMed

    Hua, Xia; Cowman, Peter; Warren, Dan; Bromham, Lindell

    2015-10-01

    The mitochondrial theory of ageing proposes that the cumulative effect of biochemical damage in mitochondria causes mitochondrial mutations and plays a key role in ageing. Numerous studies have applied comparative approaches to test one of the predictions of the theory: That the rate of mitochondrial mutations is negatively correlated with longevity. Comparative studies face three challenges in detecting correlates of mutation rate: Covariation of mutation rates between species due to ancestry, covariation between life-history traits, and difficulty obtaining accurate estimates of mutation rate. We address these challenges using a novel Poisson regression method to examine the link between mutation rate and lifespan in rockfish (Sebastes). This method has better performance than traditional sister-species comparisons when sister species are too recently diverged to give reliable estimates of mutation rate. Rockfish are an ideal model system: They have long life spans with indeterminate growth and little evidence of senescence, which minimizes the confounding tradeoffs between lifespan and fecundity. We show that lifespan in rockfish is negatively correlated to rate of mitochondrial mutation, but not the rate of nuclear mutation. The life history of rockfish allows us to conclude that this relationship is unlikely to be driven by the tradeoffs between longevity and fecundity, or by the frequency of DNA replications in the germline. Instead, the relationship is compatible with the hypothesis that mutation rates are reduced by selection in long-lived taxa to reduce the chance of mitochondrial damage over its lifespan, consistent with the mitochondrial theory of ageing. PMID:26048547

  4. Modeling both of the number of pausibacillary and multibacillary leprosy patients by using bivariate Poisson regression

    NASA Astrophysics Data System (ADS)

    Winahju, W. S.; Mukarromah, A.; Putri, S.

    2015-03-01

    Leprosy is a chronic infectious disease caused by the leprosy bacterium (Mycobacterium leprae). Leprosy has become an important public health problem in Indonesia because its morbidity is quite high. According to WHO data from 2014, in 2012 Indonesia had the highest number of new leprosy patients after India and Brazil, contributing 18,994 cases (8.7% of the world total). This automatically places Indonesia as the country with the highest leprosy morbidity among ASEAN countries. The province that contributes most to the number of leprosy patients in Indonesia is East Java. There are two kinds of leprosy: pausibacillary and multibacillary. The morbidity of multibacillary leprosy is higher than that of pausibacillary leprosy. This paper discusses modeling the numbers of multibacillary and pausibacillary leprosy patients as response variables. These responses are count variables, so modeling is conducted using the bivariate Poisson regression method. The units of observation are in East Java, and the predictors involved are environment, demography, and poverty. The model uses data from 2012, and the result indicates that all predictors have a significant influence.

  5. What's the Risk? A Simple Approach for Estimating Adjusted Risk Measures from Nonlinear Models Including Logistic Regression

    PubMed Central

    Kleinman, Lawrence C; Norton, Edward C

    2009-01-01

    Objective To develop and validate a general method (called regression risk analysis) to estimate adjusted risk measures from logistic and other nonlinear multiple regression models. We show how to estimate standard errors for these estimates. These measures could supplant various approximations (e.g., adjusted odds ratio [AOR]) that may diverge, especially when outcomes are common. Study Design Regression risk analysis estimates were compared with internal standards as well as with Mantel–Haenszel estimates, Poisson and log-binomial regressions, and a widely used (but flawed) equation to calculate adjusted risk ratios (ARR) from AOR. Data Collection Data sets produced using Monte Carlo simulations. Principal Findings Regression risk analysis accurately estimates ARR and differences directly from multiple regression models, even when confounders are continuous, distributions are skewed, outcomes are common, and effect size is large. It is statistically sound and intuitive, and has properties favoring it over other methods in many cases. Conclusions Regression risk analysis should be the new standard for presenting findings from multiple regression analysis of dichotomous outcomes for cross-sectional, cohort, and population-based case–control studies, particularly when outcomes are common or effect size is large. PMID:18793213
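A short sketch of one way to compute an adjusted risk ratio directly from a logistic model, by averaging predicted risks with exposure set to 1 versus 0 for everyone (marginal standardization, in the spirit of the regression risk analysis described above); standard errors, which the paper derives analytically, could instead be bootstrapped. Data are simulated.

```python
# Sketch: adjusted risk ratio and risk difference from a logistic model.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)
n = 2000
df = pd.DataFrame({"exposed": rng.binomial(1, 0.4, n),
                   "age": rng.normal(50, 10, n)})
logit_p = -1.0 + 0.7 * df["exposed"] + 0.02 * (df["age"] - 50)
df["y"] = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

m = smf.logit("y ~ exposed + age", data=df).fit(disp=False)
r1 = m.predict(df.assign(exposed=1)).mean()   # average risk if everyone exposed
r0 = m.predict(df.assign(exposed=0)).mean()   # average risk if no one exposed
print("adjusted risk ratio:", r1 / r0, " risk difference:", r1 - r0)
```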

  6. Analyzing Seasonal Variations in Suicide With Fourier Poisson Time-Series Regression: A Registry-Based Study From Norway, 1969-2007.

    PubMed

    Bramness, Jørgen G; Walby, Fredrik A; Morken, Gunnar; Røislien, Jo

    2015-08-01

    Seasonal variation in the number of suicides has long been acknowledged. It has been suggested that this seasonality has declined in recent years, but studies have generally used statistical methods incapable of confirming this. We examined all suicides occurring in Norway during 1969-2007 (more than 20,000 suicides in total) to establish whether seasonality decreased over time. Fitting of additive Fourier Poisson time-series regression models allowed for formal testing of a possible linear decrease in seasonality, or a reduction at a specific point in time, while adjusting for a possible smooth nonlinear long-term change without having to categorize time into discrete yearly units. The models were compared using Akaike's Information Criterion and analysis of variance. A model with a seasonal pattern was significantly superior to a model without one. There was a reduction in seasonality during the period. The model assuming a linear decrease in seasonality and the model assuming a change at a specific point in time were both superior to a model assuming constant seasonality, thus confirming by formal statistical testing that the magnitude of the seasonality in suicides has diminished. The additive Fourier Poisson time-series regression model would also be useful for studying other temporal phenomena with seasonal components. PMID:26081677
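A compact sketch of the idea on simulated monthly counts: encode seasonality with sine/cosine (Fourier) terms in a Poisson regression and test it by comparing against the model without those terms. A log-link version is shown for simplicity; the paper's models are additive and include smooth long-term trends.

```python
# Sketch: Fourier terms for seasonality in a Poisson time-series regression.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(8)
months = np.arange(12 * 39)                 # monthly series spanning 1969-2007
t = months / 12.0
mu = np.exp(3.0 + 0.01 * t + 0.15 * np.cos(2 * np.pi * months / 12))
df = pd.DataFrame({"y": rng.poisson(mu), "t": t,
                   "s1": np.sin(2 * np.pi * months / 12),
                   "c1": np.cos(2 * np.pi * months / 12)})

seasonal = smf.glm("y ~ t + s1 + c1", data=df, family=sm.families.Poisson()).fit()
flat = smf.glm("y ~ t", data=df, family=sm.families.Poisson()).fit()
print("AIC with seasonality:", seasonal.aic, " without:", flat.aic)
```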

  7. Bayesian semi-parametric analysis of Poisson change-point regression models: application to policy making in Cali, Colombia

    PubMed Central

    Park, Taeyoung; Krafty, Robert T.; Sánchez, Alvaro I.

    2012-01-01

    A Poisson regression model with an offset assumes a constant baseline rate after accounting for measured covariates, which may lead to biased estimates of coefficients in an inhomogeneous Poisson process. To correctly estimate the effect of time-dependent covariates, we propose a Poisson change-point regression model with an offset that allows a time-varying baseline rate. When the nonconstant pattern of a log baseline rate is modeled with a nonparametric step function, the resulting semi-parametric model involves a model component of varying dimension and thus requires a sophisticated varying-dimensional inference to obtain correct estimates of model parameters of fixed dimension. To fit the proposed varying-dimensional model, we devise a state-of-the-art MCMC-type algorithm based on partial collapse. The proposed model and methods are used to investigate an association between daily homicide rates in Cali, Colombia and policies that restrict the hours during which the legal sale of alcoholic beverages is permitted. While simultaneously identifying the latent changes in the baseline homicide rate which correspond to the incidence of sociopolitical events, we explore the effect of policies governing the sale of alcohol on homicide rates and seek a policy that balances the economic and cultural dependencies on alcohol sales to the health of the public. PMID:23393408

  8. Zero-inflated generalized Poisson regression mixture model for mapping quantitative trait loci underlying count trait with many zeros.

    PubMed

    Cui, Yuehua; Yang, Wenzhao

    2009-01-21

    Phenotypes measured in counts are commonly observed in nature. Statistical methods for mapping quantitative trait loci (QTL) underlying count traits are documented in the literature. The majority of them assume that the count phenotype follows a Poisson distribution, with appropriate techniques being applied to handle data dispersion. When a count trait has a genetic basis, "naturally occurring" zero status also reflects the underlying gene effects. Simply ignoring or mishandling the zero data may lead to wrong QTL inference. In this article, we propose an interval mapping approach for mapping QTL underlying count phenotypes containing many zeros. The effects of QTLs on the zero-inflated count trait are modelled through the zero-inflated generalized Poisson regression mixture model, which can handle the zero inflation and Poisson dispersion in the same distribution. We implement the approach using the EM algorithm with the Newton-Raphson algorithm embedded in the M-step, and provide a genome-wide scan for testing and estimating the QTL effects. The performance of the proposed method is evaluated through extensive simulation studies. Extensions to composite and multiple interval mapping are discussed. The utility of the developed approach is illustrated through a mouse F2 intercross data set. Significant QTLs are detected to control mouse cholesterol gallstone formation. PMID:18977361

  9. Predictors of the number of under-five malnourished children in Bangladesh: application of the generalized poisson regression model

    PubMed Central

    2013-01-01

    Background Malnutrition is one of the principal causes of child mortality in developing countries including Bangladesh. According to our knowledge, most of the available studies, that addressed the issue of malnutrition among under-five children, considered the categorical (dichotomous/polychotomous) outcome variables and applied logistic regression (binary/multinomial) to find their predictors. In this study malnutrition variable (i.e. outcome) is defined as the number of under-five malnourished children in a family, which is a non-negative count variable. The purposes of the study are (i) to demonstrate the applicability of the generalized Poisson regression (GPR) model as an alternative of other statistical methods and (ii) to find some predictors of this outcome variable. Methods The data is extracted from the Bangladesh Demographic and Health Survey (BDHS) 2007. Briefly, this survey employs a nationally representative sample which is based on a two-stage stratified sample of households. A total of 4,460 under-five children is analysed using various statistical techniques namely Chi-square test and GPR model. Results The GPR model (as compared to the standard Poisson regression and negative Binomial regression) is found to be justified to study the above-mentioned outcome variable because of its under-dispersion (variance < mean) property. Our study also identify several significant predictors of the outcome variable namely mother’s education, father’s education, wealth index, sanitation status, source of drinking water, and total number of children ever born to a woman. Conclusions Consistencies of our findings in light of many other studies suggest that the GPR model is an ideal alternative of other statistical models to analyse the number of under-five malnourished children in a family. Strategies based on significant predictors may improve the nutritional status of children in Bangladesh. PMID:23297699

  10. Marginal regression models for clustered count data based on zero-inflated Conway-Maxwell-Poisson distribution with applications.

    PubMed

    Choo-Wosoba, Hyoyoung; Levy, Steven M; Datta, Somnath

    2016-06-01

    Community water fluoridation is an important public health measure to prevent dental caries, but it continues to be somewhat controversial. The Iowa Fluoride Study (IFS) is a longitudinal study on a cohort of Iowa children that began in 1991. The main purposes of this study (http://www.dentistry.uiowa.edu/preventive-fluoride-study) were to quantify fluoride exposures from both dietary and nondietary sources and to associate longitudinal fluoride exposures with dental fluorosis (spots on teeth) and dental caries (cavities). We analyze a subset of the IFS data by a marginal regression model with a zero-inflated version of the Conway-Maxwell-Poisson distribution for count data exhibiting excessive zeros and a wide range of dispersion patterns. In general, we introduce two estimation methods for fitting a ZICMP marginal regression model. Finite sample behaviors of the estimators and the resulting confidence intervals are studied using extensive simulation studies. We apply our methodologies to the dental caries data. Our novel modeling incorporating zero inflation, clustering, and overdispersion sheds some new light on the effect of community water fluoridation and other factors. We also include a second application of our methodology to a genomic (next-generation sequencing) dataset that exhibits underdispersion. PMID:26575079

  11. Assessing Longitudinal Change: Adjustment for Regression to the Mean Effects

    ERIC Educational Resources Information Center

    Rocconi, Louis M.; Ethington, Corinna A.

    2009-01-01

    Pascarella (J Coll Stud Dev 47:508-520, 2006) has called for an increase in use of longitudinal data with pretest-posttest design when studying effects on college students. However, such designs that use multiple measures to document change are vulnerable to an important threat to internal validity, regression to the mean. Herein, we discuss a…

  12. Procedures for adjusting regional regression models of urban-runoff quality using local data

    USGS Publications Warehouse

    Hoos, Anne B.; Lizarraga, Joy S.

    1996-01-01

    Statistical operations termed model-adjustment procedures can be used to incorporate local data into existing regression models to improve the prediction of urban-runoff quality. Each procedure is a form of regression analysis in which the local data base is used as a calibration data set; the resulting adjusted regression models can then be used to predict storm-runoff quality at unmonitored sites. Statistical tests of the calibration data set guide selection among proposed procedures.

  13. Coercively Adjusted Auto Regression Model for Forecasting in Epilepsy EEG

    PubMed Central

    Kim, Sun-Hee; Faloutsos, Christos; Yang, Hyung-Jeong

    2013-01-01

    Recently, data with complex characteristics such as epilepsy electroencephalography (EEG) time series have emerged. Epilepsy EEG data have special characteristics including nonlinearity, nonnormality, and nonperiodicity. Therefore, it is important to find a suitable forecasting method that covers these special characteristics. In this paper, we propose a coercively adjusted autoregression (CA-AR) method that forecasts future values from a multivariable epilepsy EEG time series. We use the technique of random coefficients, which forcefully adjusts the coefficients with −1 and 1. The fractal dimension is used to determine the order of the CA-AR model. We applied the CA-AR method, reflecting the special characteristics of the data, to forecast future values of epilepsy EEG data. Experimental results show that, compared to previous methods, the proposed method forecasts faster and more accurately. PMID:23710252

  14. Adjustment of regional regression equations for urban storm-runoff quality using at-site data

    USGS Publications Warehouse

    Barks, C.S.

    1996-01-01

    Regional regression equations have been developed to estimate urban storm-runoff loads and mean concentrations using a national data base. Four statistical methods using at-site data to adjust the regional equation predictions were developed to provide better local estimates. The four adjustment procedures are a single-factor adjustment, a regression of the observed data against the predicted values, a regression of the observed values against the predicted values and additional local independent variables, and a weighted combination of a local regression with the regional prediction. Data collected at five representative storm-runoff sites during 22 storms in Little Rock, Arkansas, were used to verify, and, when appropriate, adjust the regional regression equation predictions. Comparison of observed values of storm-runoff loads and mean concentrations to the predicted values from the regional regression equations for nine constituents (chemical oxygen demand, suspended solids, total nitrogen as N, total ammonia plus organic nitrogen as N, total phosphorus as P, dissolved phosphorus as P, total recoverable copper, total recoverable lead, and total recoverable zinc) showed large prediction errors ranging from 63 percent to more than several thousand percent. Prediction errors for 6 of the 18 regional regression equations were less than 100 percent and could be considered reasonable for water-quality prediction equations. The regression adjustment procedure was used to adjust five of the regional equation predictions to improve the predictive accuracy. For seven of the regional equations the observed and the predicted values are not significantly correlated. Thus neither the unadjusted regional equations nor any of the adjustments were appropriate. The mean of the observed values was used as a simple estimator when the regional equation predictions and adjusted predictions were not appropriate.
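A small numeric sketch of two of the four adjustment procedures described above, with invented calibration numbers: the single-factor adjustment scales the regional predictions by the mean observed-to-predicted ratio, and the regression adjustment regresses observed values on the regional predictions.

```python
# Sketch: adjusting regional regression predictions with at-site data.
import numpy as np

P = np.array([12.0, 30.0, 22.0, 8.0, 15.0])     # regional-equation predictions
obs = np.array([20.0, 41.0, 30.0, 15.0, 24.0])  # observed local loads (invented)

# single-factor adjustment: scale predictions by the mean observed/predicted ratio
f = np.mean(obs / P)
adj_single = f * P

# regression adjustment: regress observed on predicted, then apply at new sites
b, a = np.polyfit(P, obs, 1)                    # slope, intercept
adj_regress = a + b * P
print("factor:", f, " slope/intercept:", (b, a))
```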

  15. Comparing the cancer in Ninawa during three periods (1980-1990, 1991-2000, 2001-2010) using Poisson regression

    PubMed Central

    AL-Hashimi, Muzahem Mohammed Yahya; Wang, XiangJun

    2013-01-01

    Background: Iraq fought three wars in three consecutive decades: the Iran-Iraq war (1980-1988), the Persian Gulf War in 1991, and the Iraq War in 2003. From the nineties of the last century up to the present time, there have been anecdotal reports of an increase in cancer in Ninawa, as in all provinces of Iraq, possibly as a result of exposure to depleted uranium used by American troops in the last two wars. This paper deals with cancer incidence in Ninawa, the most important province in Iraq, where many of its sons were soldiers in the Iraqi army and participated in the wars. Materials and Methods: The data were derived from the Directorate of Health in Ninawa. The data were divided into three sub-periods: 1980-1990, 1991-2000, and 2001-2010. The analyses are performed using Poisson regression. The response variable is the cancer incidence count. Cancer cases, age, sex, and years were considered as the explanatory variables. The logarithm of the population of Ninawa is used as an offset. The aim of this paper is to model the cancer incidence data and estimate the cancer incidence rate ratio (IRR) to illustrate the changes in cancer incidence that occurred in Ninawa over these three periods. Results: There is evidence of a reduction in the cancer IRR in Ninawa in the third period as well as in the second period. Our analyses found that breast cancer remained the most common cancer, while cancer of the lung, trachea, and bronchus remained second in spite of decreasing dramatically. There were modest increases in incidence of cancers of the prostate, penis, and other male genitals over the duration of the study period, and stability in incidence of colon cancer in the second and third periods. There were modest increases in incidence of placenta and metastatic tumors, while the highest increase was in leukemia in the third period relative to the second period, but not to the first period. The cancer IRR in men differed from that of females by more than 33% in the first period and more than 39% in the second period.

  16. Comparison of the Properties of Regression and Categorical Risk-Adjustment Models

    PubMed Central

    Averill, Richard F.; Muldoon, John H.; Hughes, John S.

    2016-01-01

    Clinical risk-adjustment, the ability to standardize the comparison of individuals with different health needs, is based upon 2 main alternative approaches: regression models and clinical categorical models. In this article, we examine the impact of the differences in the way these models are constructed on end user applications. PMID:26945302

  17. Using Wherry's Adjusted R Squared and Mallow's C(p) for Model Selection from All Possible Regressions.

    ERIC Educational Resources Information Center

    Olejnik, Stephen; Mills, Jamie; Keselman, Harvey

    2000-01-01

    Evaluated the use of Mallow's C(p) and Wherry's adjusted R squared (R. Wherry, 1931) statistics to select a final model from a pool of model solutions using computer generated data. Neither statistic identified the underlying regression model any better than, and usually less well than, the stepwise selection method, which itself was poor for…

  18. Procedures for adjusting regional regression models of urban-runoff quality using local data

    USGS Publications Warehouse

    Hoos, A.B.; Sisolak, J.K.

    1993-01-01

    Statistical operations termed model-adjustment procedures (MAPs) can be used to incorporate local data into existing regression models to improve the prediction of urban-runoff quality. Each MAP is a form of regression analysis in which the local data base is used as a calibration data set. Regression coefficients are determined from the local data base, and the resulting 'adjusted' regression models can then be used to predict storm-runoff quality at unmonitored sites. The response variable in the regression analyses is the observed load or mean concentration of a constituent in storm runoff for a single storm. The set of explanatory variables used in the regression analyses is different for each MAP, but always includes the predicted value of load or mean concentration from a regional regression model. The four MAPs examined in this study were: single-factor regression against the regional model prediction, P (termed MAP-1F-P), regression against P (termed MAP-R-P), regression against P and additional local variables (termed MAP-R-P+nV), and a weighted combination of P and a local-regression prediction (termed MAP-W). The procedures were tested by means of split-sample analysis, using data from three cities included in the Nationwide Urban Runoff Program: Denver, Colorado; Bellevue, Washington; and Knoxville, Tennessee. The MAP that provided the greatest predictive accuracy for the verification data set differed among the three test data bases and among model types (MAP-W for Denver and Knoxville, MAP-1F-P and MAP-R-P for Bellevue load models, and MAP-R-P+nV for Bellevue concentration models) and, in many cases, was not clearly indicated by the values of standard error of estimate for the calibration data set. A scheme to guide MAP selection, based on exploratory data analysis of the calibration data set, is presented and tested. The MAPs were tested for sensitivity to the size of a calibration data set. As expected, predictive accuracy of all MAPs for

  19. A multivariate Poisson-lognormal regression model for prediction of crash counts by severity, using Bayesian methods.

    PubMed

    Ma, Jianming; Kockelman, Kara M; Damien, Paul

    2008-05-01

    Numerous efforts have been devoted to investigating crash occurrence as related to roadway design features, environmental factors and traffic conditions. However, most of the research has relied on univariate count models; that is, traffic crash counts at different levels of severity are estimated separately, which may neglect shared information in unobserved error terms, reduce efficiency in parameter estimates, and lead to potential biases in sample databases. This paper offers a multivariate Poisson-lognormal (MVPLN) specification that simultaneously models crash counts by injury severity. The MVPLN specification allows for a more general correlation structure as well as overdispersion. This approach addresses several questions that are difficult to answer when estimating crash counts separately. Thanks to recent advances in crash modeling and Bayesian statistics, parameter estimation is done within the Bayesian paradigm, using a Gibbs Sampler and the Metropolis-Hastings (M-H) algorithms for crashes on Washington State rural two-lane highways. Estimation results from the MVPLN approach show statistically significant correlations between crash counts at different levels of injury severity. The non-zero diagonal elements suggest overdispersion in crash counts at all levels of severity. The results lend themselves to several recommendations for highway safety treatments and design policies. For example, wide lanes and shoulders are key for reducing crash frequencies, as are longer vertical curves. PMID:18460364
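
    The key distributional idea, Poisson counts whose log rates share correlated normal errors, is easy to illustrate by simulation (a toy illustration only, not the paper's Gibbs/M-H estimator):

        import numpy as np

        rng = np.random.default_rng(3)
        mu = np.log([5.0, 2.0, 0.5])             # baseline rates for 3 severities
        cov = np.array([[0.4, 0.2, 0.1],
                        [0.2, 0.5, 0.2],
                        [0.1, 0.2, 0.6]])        # correlated lognormal errors
        eps = rng.multivariate_normal(np.zeros(3), cov, size=10_000)
        counts = rng.poisson(np.exp(mu + eps))

        print(counts.mean(axis=0), counts.var(axis=0))  # variance > mean: overdispersion
        print(np.corrcoef(counts, rowvar=False))        # cross-severity correlation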

  20. Regularized logistic regression with adjusted adaptive elastic net for gene selection in high dimensional cancer classification.

    PubMed

    Algamal, Zakariya Yahya; Lee, Muhammad Hisyam

    2015-12-01

    Cancer classification and gene selection in high-dimensional data have been popular research topics in genetics and molecular biology. Recently, adaptive regularized logistic regression using the elastic net regularization, which is called the adaptive elastic net, has been successfully applied in high-dimensional cancer classification to tackle both estimating the gene coefficients and performing gene selection simultaneously. The adaptive elastic net originally used elastic net estimates as the initial weight; however, this weight may not be preferable, for two reasons: first, the elastic net estimator is biased in selecting genes; second, it does not perform well when the pairwise correlations between variables are not high. Adjusted adaptive regularized logistic regression (AAElastic) is proposed to address these issues and encourage grouping effects simultaneously. The real data results indicate that AAElastic is more consistent in selecting genes than the three competing regularization methods. Additionally, the classification performance of AAElastic is comparable to the adaptive elastic net and better than the other regularization methods. Thus, we can conclude that AAElastic is a reliable adaptive regularized logistic regression method in the field of high-dimensional cancer classification. PMID:26520484
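
    The adaptive-weighting idea can be sketched with scikit-learn's elastic-net logistic regression: fit an initial model, derive per-gene weights from its coefficients, and refit on rescaled features so weakly supported genes are penalized more. This is a generic adaptive elastic net sketch, not the paper's AAElastic estimator, and all data are simulated.

        import numpy as np
        from sklearn.datasets import make_classification
        from sklearn.linear_model import LogisticRegression

        X, y = make_classification(n_samples=120, n_features=500,
                                   n_informative=10, random_state=0)

        enet = LogisticRegression(penalty="elasticnet", solver="saga",
                                  l1_ratio=0.5, C=1.0, max_iter=5000).fit(X, y)

        w = 1.0 / (np.abs(enet.coef_.ravel()) + 1e-6)     # adaptive weights
        adaptive = LogisticRegression(penalty="elasticnet", solver="saga",
                                      l1_ratio=0.5, C=1.0,
                                      max_iter=5000).fit(X / w, y)

        # Rescaling leaves the selected gene set unchanged; coefficients are on
        # the transformed scale and would be divided by w to map back.
        print(np.flatnonzero(adaptive.coef_.ravel()).size)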

  1. The Spatial Distribution of Hepatitis C Virus Infections and Associated Determinants—An Application of a Geographically Weighted Poisson Regression for Evidence-Based Screening Interventions in Hotspots

    PubMed Central

    Kauhl, Boris; Heil, Jeanne; Hoebe, Christian J. P. A.; Schweikart, Jürgen; Krafft, Thomas; Dukers-Muijrers, Nicole H. T. M.

    2015-01-01

    Background Hepatitis C Virus (HCV) infections are a major cause of liver disease. A large proportion of these infections remain hidden to care because of their mostly asymptomatic nature. Population-based screening and screening targeted at behavioural risk groups have not proven effective in revealing these hidden infections, so more practically applicable approaches to targeting screening are necessary. Geographic Information Systems (GIS) and spatial epidemiological methods may provide a more feasible basis for screening interventions through the identification of hotspots as well as demographic and socio-economic determinants. Methods The analysed data included all HCV tests (n = 23,800) performed in the southern area of the Netherlands between 2002 and 2008. HCV positivity was defined as a positive immunoblot or polymerase chain reaction test. Population data were matched to the geocoded HCV test data. The spatial scan statistic was applied to detect areas with elevated HCV risk. We applied global regression models to determine associations between population-based determinants and HCV risk. Geographically weighted Poisson regression models were then constructed to determine local differences in the association between HCV risk and population-based determinants. Results HCV prevalence varied geographically and clustered in urban areas. The main populations at risk were middle-aged males, non-western immigrants, and divorced persons. Socio-economic determinants consisted of one-person households, persons with low income, and mean property value. However, the association between HCV risk and demographic as well as socio-economic determinants displayed strong regional and intra-urban differences. Discussion The detection of local hotspots in our study may serve as a basis for prioritization of areas for future targeted interventions. Demographic and socio-economic determinants associated with HCV risk show regional differences underlining that a one

  2. Short-Term Effects of Climatic Variables on Hand, Foot, and Mouth Disease in Mainland China, 2008–2013: A Multilevel Spatial Poisson Regression Model Accounting for Overdispersion

    PubMed Central

    Yang, Fang; Yang, Min; Hu, Yuehua; Zhang, Juying

    2016-01-01

    Background Hand, Foot, and Mouth Disease (HFMD) is a worldwide infectious disease. In China, many provinces have reported HFMD cases, especially the southern and southwestern provinces. Many studies have found a strong association between the incidence of HFMD and climatic factors such as temperature, rainfall, and relative humidity. However, few studies have analyzed cluster effects between various geographical units. Methods The nonlinear relationships and lag effects between weekly HFMD cases and climatic variables were estimated for the period 2008–2013 using a polynomial distributed lag model. An extra-Poisson multilevel spatial polynomial model was used to model the relationship between weekly HFMD incidence and climatic variables after accounting for cluster effects, the provincial correlation structure of HFMD incidence, and overdispersion. Smoothing spline methods were used to detect threshold effects between climatic factors and HFMD incidence. Results HFMD incidence was spatially heterogeneous across provinces, and the scale measure of overdispersion was 548.077. After controlling for long-term trends, spatial heterogeneity, and overdispersion, temperature was highly associated with HFMD incidence. Weekly average temperature and weekly temperature difference showed approximately inverse-V-shaped and V-shaped relationships with HFMD incidence, respectively. The lag effects for weekly average temperature and weekly temperature difference were 3 weeks and 2 weeks. Highly spatially correlated HFMD incidence was detected in northern, central, and southern provinces. Temperature explained most of the variation in HFMD incidence in southern and northeastern provinces. After adjustment for temperature, eastern and northern provinces still had high variation in HFMD incidence. Conclusion We found a relatively strong association between weekly HFMD incidence and weekly average temperature. The association between the HFMD incidence and climatic

  3. A Control Chart Based on Cluster-Regression Adjustment for Retrospective Monitoring of Individual Characteristics

    PubMed Central

    Ong, Hong Choon; Alih, Ekele

    2015-01-01

    The tendency for experimental and industrial variables to include a certain proportion of outliers has become the rule rather than the exception. These clusters of outliers, if left undetected, can distort the mean and the covariance matrix of the Hotelling's T2 multivariate control charts constructed to monitor individual quality characteristics. The effect of this distortion is that the resulting control chart becomes unreliable, exhibiting masking and swamping, phenomena in which an out-of-control process is erroneously declared in-control or an in-control process is erroneously declared out-of-control. To handle these problems, this article proposes a control chart based on cluster-regression adjustment for retrospective monitoring of individual quality characteristics in a multivariate setting. The performance of the proposed method is investigated through Monte Carlo simulation experiments and historical datasets. The results indicate that the proposed method improves on state-of-the-art methods in terms of outlier detection as well as keeping the masking and swamping rates under control. PMID:25923739

  4. Effect of Nutritional Habits on Dental Caries in Permanent Dentition among Schoolchildren Aged 10–12 Years: A Zero-Inflated Generalized Poisson Regression Model Approach

    PubMed Central

    ALMASI, Afshin; RAHIMIFOROUSHANI, Abbas; ESHRAGHIAN, Mohammad Reza; MOHAMMAD, Kazem; PASDAR, Yahya; TARRAHI, Mohammad Javad; MOGHIMBEIGI, Abbas; AHMADI JOUYBARI, Touraj

    2016-01-01

    Background: The aim of this study was to assess the associations between nutrition and dental caries in permanent dentition among schoolchildren. Methods: A cross-sectional survey was undertaken on 698 schoolchildren aged 10 to 12 yr from a random sample of primary schools in Kermanshah, western Iran, in 2014. The study was based on data obtained from a questionnaire containing information on nutritional habits and the outcome of the decayed/missing/filled teeth (DMFT) index. The association between predictors and dental caries was modeled using the Zero-Inflated Generalized Poisson (ZIGP) regression model. Results: Fourteen percent of the children were caries free. The model showed that in female children, the odds of being in the caries-susceptible subgroup were 1.23 (95% CI: 1.08–1.51) times those of boys (P=0.041). Additionally, the mean caries count in children who consumed fizzy soft beverages and sweet biscuits more than once daily was 1.41 (95% CI: 1.19–1.63) and 1.27 (95% CI: 1.18–1.37) times, respectively, that of children who consumed them less than three times a week or never. Conclusions: Girls were at a higher risk of caries than boys. Since our study showed that nutritional status may have a significant effect on caries in permanent teeth, we recommend that health promotion activities in schools emphasize healthful eating practices, especially limiting sugar-containing beverages to occasional consumption between meals. PMID:27141498
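
    statsmodels exposes a zero-inflated generalized Poisson model directly; a hedged sketch with simulated stand-in data (not the Kermanshah survey):

        import numpy as np
        import statsmodels.api as sm
        from statsmodels.discrete.count_model import ZeroInflatedGeneralizedPoisson

        rng = np.random.default_rng(5)
        n = 698
        female = rng.integers(0, 2, n)
        sugary = rng.integers(0, 2, n)            # sugary snacks/drinks > once daily
        X = sm.add_constant(np.column_stack([female, sugary]))

        latent = rng.poisson(np.exp(0.5 + 0.2 * female + 0.3 * sugary))
        dmft = np.where(rng.random(n) < 0.14, 0, latent)   # structural zeros

        fit = ZeroInflatedGeneralizedPoisson(dmft, X, exog_infl=X, p=1).fit(maxiter=500)
        print(np.exp(fit.params))   # count-part coefficients exponentiate to ratios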

  5. Poisson's ratio and crustal seismology

    SciTech Connect

    Christensen, N.I.

    1996-02-10

    This report discusses the use of Poisson's ratio to place constraints on continental crustal composition. A summary of Poisson's ratios for many common rock formations is also included, with emphasis on igneous and metamorphic rock properties.

  6. Zero-inflated Poisson regression models for QTL mapping applied to tick-resistance in a Gyr × Holstein F2 population.

    PubMed

    Silva, Fabyano Fonseca; Tunin, Karen P; Rosa, Guilherme J M; da Silva, Marcos V B; Azevedo, Ana Luisa Souza; da Silva Verneque, Rui; Machado, Marco Antonio; Packer, Irineu Umberto

    2011-10-01

    Nowadays, an important and interesting alternative in the control of tick infestation in cattle is to select resistant animals and identify the respective quantitative trait loci (QTLs) and DNA markers for later use in breeding programs. The number of ticks per animal is a discrete count trait, which could potentially follow a Poisson distribution. However, in the case of an excess of zeros, due to the occurrence of several non-infected animals, the zero-inflated Poisson (ZIP) and generalized zero-inflated Poisson (GZIP) distributions may provide a better description of the data. Thus, the objective here was to compare, through simulation, the Poisson and ZIP models (simple and generalized) with classical approaches for QTL mapping with count phenotypes under different scenarios, and to apply these approaches to a QTL study of tick resistance in an F2 cattle (Gyr × Holstein) population. It was concluded that, when working with zero-inflated data, it is advisable to use the generalized and simple ZIP models for analysis. On the other hand, when working with data that contain zeros but are not zero-inflated, the Poisson model or a data transformation approach, such as the square-root or Box-Cox transformation, is applicable. PMID:22215960

  7. Zero-inflated Poisson regression models for QTL mapping applied to tick-resistance in a Gyr × Holstein F2 population

    PubMed Central

    Silva, Fabyano Fonseca; Tunin, Karen P.; Rosa, Guilherme J.M.; da Silva, Marcos V.B.; Azevedo, Ana Luisa Souza; da Silva Verneque, Rui; Machado, Marco Antonio; Packer, Irineu Umberto

    2011-01-01

    Nowadays, an important and interesting alternative in the control of tick infestation in cattle is to select resistant animals and identify the respective quantitative trait loci (QTLs) and DNA markers for later use in breeding programs. The number of ticks per animal is a discrete count trait, which could potentially follow a Poisson distribution. However, in the case of an excess of zeros, due to the occurrence of several non-infected animals, the zero-inflated Poisson (ZIP) and generalized zero-inflated Poisson (GZIP) distributions may provide a better description of the data. Thus, the objective here was to compare, through simulation, the Poisson and ZIP models (simple and generalized) with classical approaches for QTL mapping with count phenotypes under different scenarios, and to apply these approaches to a QTL study of tick resistance in an F2 cattle (Gyr × Holstein) population. It was concluded that, when working with zero-inflated data, it is advisable to use the generalized and simple ZIP models for analysis. On the other hand, when working with data that contain zeros but are not zero-inflated, the Poisson model or a data transformation approach, such as the square-root or Box-Cox transformation, is applicable. PMID:22215960

  8. Verification and adjustment of regional regression models for urban storm-runoff quality using data collected in Little Rock, Arkansas

    USGS Publications Warehouse

    Barks, C.S.

    1995-01-01

    Storm-runoff water-quality data were used to verify and, when appropriate, adjust regional regression models previously developed to estimate urban storm-runoff loads and mean concentrations in Little Rock, Arkansas. Data collected at 5 representative sites during 22 storms from June 1992 through January 1994 compose the Little Rock data base. Comparison of observed values (O) of storm-runoff loads and mean concentrations to the predicted values (Pu) from the regional regression models for nine constituents (chemical oxygen demand, suspended solids, total nitrogen, total ammonia plus organic nitrogen as nitrogen, total phosphorus, dissolved phosphorus, total recoverable copper, total recoverable lead, and total recoverable zinc) shows large prediction errors ranging from 63 to several thousand percent. Prediction errors for six of the regional regression models are less than 100 percent and can be considered reasonable for water-quality models. Differences between O and Pu are due to variability in the Little Rock data base and error in the regional models. Where applicable, a model-adjustment procedure (termed MAP-R-P) based upon regression of O against Pu was applied to improve predictive accuracy. For 11 of the 18 regional water-quality models, O and Pu are significantly correlated; that is, much of the variation in O is explained by the regional models. Five of these 11 regional models consistently overestimate O; therefore, MAP-R-P can be used to provide a better estimate. For the remaining seven regional models, O and Pu are not significantly correlated, thus neither the unadjusted regional models nor MAP-R-P is appropriate. A simple estimator, such as the mean of the observed values, may be used if the regression models are not appropriate. Standard error of estimate of the adjusted models ranges from 48 to 130 percent. Calibration results may be biased due to the limited data set sizes in the Little Rock data base. The relatively large values of

  9. Quantile Regression Adjusting for Dependent Censoring from Semi-Competing Risks

    PubMed Central

    Li, Ruosha; Peng, Limin

    2014-01-01

    Summary In this work, we study quantile regression when the response is an event time subject to potentially dependent censoring. We consider the semi-competing risks setting, where the time to censoring remains observable after the occurrence of the event of interest. While such a scenario frequently arises in biomedical studies, most current quantile regression methods for censored data are not applicable because they generally require that the censoring time and the event time be independent. By imposing rather mild assumptions on the association structure between the time-to-event response and the censoring time variable, we propose quantile regression procedures that allow us to garner a comprehensive view of the covariate effects on the event time outcome as well as to examine the informativeness of censoring. An efficient and stable algorithm is provided for implementing the new method. We establish the asymptotic properties of the resulting estimators, including uniform consistency and weak convergence. The theoretical development may serve as a useful template for addressing estimation settings that involve stochastic integrals. Extensive simulation studies suggest that the proposed method performs well with moderate sample sizes. We illustrate the practical utility of our proposals through an application to a bone marrow transplant trial. PMID:25574152

  10. Adjustment of regional regression models of urban-runoff quality using data for Chattanooga, Knoxville, and Nashville, Tennessee

    USGS Publications Warehouse

    Hoos, Anne B.; Patel, Anant R.

    1996-01-01

    Model-adjustment procedures were applied to the combined data bases of storm-runoff quality for Chattanooga, Knoxville, and Nashville, Tennessee, to improve predictive accuracy for storm-runoff quality for urban watersheds in these three cities and throughout Middle and East Tennessee. Data for 45 storms at 15 different sites (five sites in each city) constitute the data base. Comparison of observed values of storm-runoff load and event-mean concentration to the predicted values from the regional regression models for 10 constituents shows prediction errors as large as 806,000 percent. Model-adjustment procedures, which combine the regional model predictions with local data, are applied to improve predictive accuracy. Standard error of estimate after model adjustment ranges from 67 to 322 percent. Calibration results may be biased due to sampling error in the Tennessee data base. The relatively large values of standard error of estimate for some of the constituent models, although representing a significant reduction (at least 50 percent) in prediction error compared to estimation with unadjusted regional models, may be unacceptable for some applications. The user may wish to collect additional local data for these constituents and repeat the analysis, or calibrate an independent local regression model.

  11. Adjusting for unmeasured confounding due to either of two crossed factors with a logistic regression model.

    PubMed

    Li, Li; Brumback, Babette A; Weppelmann, Thomas A; Morris, J Glenn; Ali, Afsar

    2016-08-15

    Motivated by an investigation of the effect of surface water temperature on the presence of Vibrio cholerae in water samples collected from different fixed surface water monitoring sites in Haiti in different months, we investigated methods to adjust for unmeasured confounding due to either of the two crossed factors site and month. In the process, we extended previous methods that adjust for unmeasured confounding due to one nesting factor (such as site, which nests the water samples from different months) to the case of two crossed factors. First, we developed a conditional pseudolikelihood estimator that eliminates fixed effects for the levels of each of the crossed factors from the estimating equation. Using the theory of U-Statistics for independent but non-identically distributed vectors, we show that our estimator is consistent and asymptotically normal, but that its variance depends on the nuisance parameters and thus cannot be easily estimated. Consequently, we apply our estimator in conjunction with a permutation test, and we investigate use of the pigeonhole bootstrap and the jackknife for constructing confidence intervals. We also incorporate our estimator into a diagnostic test for a logistic mixed model with crossed random effects and no unmeasured confounding. For comparison, we investigate between-within models extended to two crossed factors. These generalized linear mixed models include covariate means for each level of each factor in order to adjust for the unmeasured confounding. We conduct simulation studies, and we apply the methods to the Haitian data. Copyright © 2016 John Wiley & Sons, Ltd. PMID:26892025

  12. Methods for Adjusting U.S. Geological Survey Rural Regression Peak Discharges in an Urban Setting

    USGS Publications Warehouse

    Moglen, Glenn E.; Shivers, Dorianne E.

    2006-01-01

    A study was conducted of 78 U.S. Geological Survey gaged streams that have been subjected to varying degrees of urbanization over the last three decades. Flood-frequency analysis coupled with nonlinear regression techniques was used to generate a set of equations for converting peak discharge estimates determined from rural regression equations to a set of peak discharge estimates that represent known urbanization. Specifically, urban regression equations for the 2-, 5-, 10-, 25-, 50-, 100-, and 500-year return periods were calibrated as a function of the corresponding rural peak discharge and the percentage of impervious area in a watershed. The results of this study indicate that two sets of equations, one set based on imperviousness and one set based on population density, performed well. Both sets of equations are dependent on rural peak discharges, a measure of development (average percentage of imperviousness or average population density), and a measure of homogeneity of development within a watershed. Average imperviousness was readily determined by using geographic information system methods and commonly available land-cover data. Similarly, average population density was easily determined from census data. Thus, a key advantage to the equations developed in this study is that they do not require field measurements of watershed characteristics as did the U.S. Geological Survey urban equations developed in an earlier investigation. During this study, the U.S. Geological Survey PeakFQ program was used as an integral tool in the calibration of all equations. The scarcity of historical land-use data, however, made exclusive use of flow records necessary for the 30-year period from 1970 to 2000. Such relatively short-duration streamflow time series required a nonstandard treatment of the historical data function of the PeakFQ program in comparison to published guidelines. Thus, the approach used during this investigation does not fully comply with the

  13. Integrated analysis of transcriptomic and proteomic data of Desulfovibrio vulgaris: Zero-Inflated Poisson regression models to predict abundance of undetected proteins

    SciTech Connect

    Nie, Lei; Wu, Gang; Brockman, Fred J.; Zhang, Weiwen

    2006-05-04

    Advances in DNA microarray and proteomics technologies have enabled high-throughput measurement of mRNA expression and protein abundance. Parallel profiling of mRNA and protein on a global scale and integrative analysis of these two data types could provide additional insight into the metabolic mechanisms underlying complex biological systems. However, because protein abundance and mRNA expression are affected by many cellular and physical processes, there have been conflicting results on the correlation of these two measurements. In addition, as current proteomic methods can detect only a small fraction of proteins present in cells, no correlation study of these two data types has been done thus far at the whole-genome level. In this study, we describe a novel data-driven statistical model to integrate whole-genome microarray and proteomic data collected from Desulfovibrio vulgaris grown under three different conditions. Based on the Poisson distribution pattern of proteomic data and the fact that a large number of proteins were undetected (excess zeros), zero-inflated Poisson models were used to define the correlation pattern of mRNA and protein abundance. The models assumed that there is a probability mass at zero representing some of the undetected proteins because of technical limitations. The models thus use abundance measurements of transcripts and proteins experimentally detected as input to generate predictions of protein abundances as output for all genes in the genome. We demonstrated the statistical models by comparatively analyzing D. vulgaris grown on lactate-based versus formate-based media. Increased expression of the Ech hydrogenase and the alcohol dehydrogenase (Adh)-periplasmic Fe-only hydrogenase (Hyd) pathway for ATP synthesis was predicted for D. vulgaris grown on formate.

  14. Adjustments to de Leva-anthropometric regression data for the changes in body proportions in elderly humans.

    PubMed

    Ho Hoang, Khai-Long; Mombaur, Katja

    2015-10-15

    Dynamic modeling of the human body is an important tool for investigating the fundamentals of the biomechanics of human movement. To model the human body as a multi-body system, it is necessary to know the anthropometric parameters of the body segments. For young healthy subjects, several data sets exist that are widely used in the research community, e.g. the tables provided by de Leva. No such comprehensive anthropometric parameter sets exist for elderly people. It is, however, well known that body proportions change significantly during aging, e.g. due to degenerative effects in the spine, so that parameters for young people cannot be used to realistically simulate the dynamics of elderly people. In this study, regression equations are derived from the inertial parameters, center of mass positions, and body segment lengths provided by de Leva so that they can be adjusted to the changes in proportion of the body parts of male and female humans due to aging. Additional adjustments are made to the reference points of the parameters for the upper body segments, as these are chosen in a more practicable way in the context of creating a multi-body model in a chain structure with the pelvis as the most proximal segment. PMID:26338096

  15. The performance of automated case-mix adjustment regression model building methods in a health outcome prediction setting.

    PubMed

    Jen, Min-Hua; Bottle, Alex; Kirkwood, Graham; Johnston, Ron; Aylin, Paul

    2011-09-01

    We have previously described a system for monitoring a number of healthcare outcomes using case-mix adjustment models. It is desirable to automate the model fitting process in such a system if monitoring covers a large number of outcome measures or subgroup analyses. Our aim was to compare the performance of three different variable selection strategies: "manual"; "automated" backward elimination and re-categorisation; and including all variables at once, irrespective of their apparent importance, with automated re-categorisation. Logistic regression models for predicting in-hospital mortality and emergency readmission within 28 days were fitted to an administrative database for 78 diagnosis groups and 126 procedures from 1996 to 2006 for National Health Service hospital trusts in England. The performance of models was assessed with receiver operating characteristic (ROC) c statistics (measuring discrimination) and the Brier score (assessing average predictive accuracy). Overall, discrimination was similar for diagnoses and procedures and consistently better for mortality than for emergency readmission. Brier scores were generally low overall (showing higher accuracy) and were lower for procedures than diagnoses, with a few exceptions for emergency readmission within 28 days. Among the three variable selection strategies, the automated procedure had similar performance to the manual method in almost all cases except low-risk groups with few outcome events. For the rapid generation of multiple case-mix models we suggest applying automated modelling to reduce the time required, in particular when examining different outcomes of large numbers of procedures and diseases in routinely collected administrative health data. PMID:21556848

  16. Investigation of the association between the test day milk fat-protein ratio and clinical mastitis using a Poisson regression approach for analysis of time-to-event data.

    PubMed

    Zoche-Golob, V; Heuwieser, W; Krömker, V

    2015-09-01

    The objective of the present study was to investigate the association between the milk fat-protein ratio and the incidence rate of clinical mastitis, including repeated cases of clinical mastitis, to determine the usefulness of this association for monitoring metabolic disorders as risk factors for udder health. Herd records from 10 dairy herds of Holstein cows in Saxony, Germany, from September 2005 through 2011 (36,827 lactations of 17,657 cows) were used for statistical analysis. A mixed Poisson regression model with the weekly incidence rate of clinical mastitis as the outcome variable was fitted. The model included repeated events of the outcome, time-varying covariates, and multilevel clustering. Because the recording of clinical mastitis might have been imperfect, a probabilistic bias analysis was conducted to assess the impact of the misclassification of clinical mastitis on the conventional results. The lactational incidence of clinical mastitis was 38.2%. In 36.2% and 34.9% of the lactations, there was at least one dairy herd test day with a fat-protein ratio of <1.0 or >1.5, respectively. Misclassification of clinical mastitis was assumed to have resulted in bias towards the null. A clinical mastitis case increased the incidence rate of subsequent cases in the same cow. Fat-protein ratios of <1.0 and >1.5 were associated with higher incidence rates of clinical mastitis depending on week in milk. The effect of a fat-protein ratio >1.5 on the incidence rate of clinical mastitis increased considerably over the course of lactation, whereas the effect of a fat-protein ratio <1.0 decreased. Fat-protein ratios <1.0 or >1.5 on the preceding test days of all cows, irrespective of their time in milk, seemed to be better predictors of clinical mastitis than the first test-day results per lactation. PMID:26164530

  17. Small-Sample Adjustments for Tests of Moderators and Model Fit in Robust Variance Estimation in Meta-Regression

    ERIC Educational Resources Information Center

    Tipton, Elizabeth; Pustejovsky, James E.

    2015-01-01

    Randomized experiments are commonly used to evaluate the effectiveness of educational interventions. The goal of the present investigation is to develop small-sample corrections for multiple contrast hypothesis tests (i.e., F-tests) such as the omnibus test of meta-regression fit or a test for equality of three or more levels of a categorical…

  18. R Squared Shrinkage in Multiple Regression Research: An Empirical Evaluation of Use and Impact of Adjusted Effect Formulae.

    ERIC Educational Resources Information Center

    Thatcher, Greg W.; Henson, Robin K.

    This study examined research in training and development to determine effect size reporting practices. It focused on the reporting of corrected effect sizes in research articles using multiple regression analyses. When possible, researchers calculated corrected effect sizes and determined whether the associated shrinkage could have impacted researcher…

  19. Poisson's spot with molecules

    SciTech Connect

    Reisinger, Thomas; Holst, Bodil; Patel, Amil A.; Smith, Henry I.; Reingruber, Herbert; Fladischer, Katrin; Ernst, Wolfgang E.; Bracco, Gianangelo

    2009-05-15

    In the Poisson-spot experiment, waves emanating from a source are blocked by a circular obstacle. Due to their positive on-axis interference an image of the source (the Poisson spot) is observed within the geometrical shadow of the obstacle. In this paper we report the observation of Poisson's spot using a beam of neutral deuterium molecules. The wavelength independence and the weak constraints on angular alignment and position of the circular obstacle make Poisson's spot a promising candidate for applications ranging from the study of large molecule diffraction to patterning with molecules.

  20. Poisson's Spot with Molecules

    NASA Astrophysics Data System (ADS)

    Reisinger, Thomas; Patel, Amil; Reingruber, Herbert; Fladischer, Katrin; Ernst, Wolfgang E.; Bracco, Gianangelo; Smith, Henry I.; Holst, Bodil

    2009-03-01

    In the Poisson-Spot experiment, waves emanating from a source are blocked by a circular obstacle. Due to their positive on-axis interference an image of the source (the Poisson spot) is observed within the geometrical shadow of the obstacle. The Poisson spot is the last of the classical optics experiments to be realized with neutral matter waves. In this paper we report the observation of Poisson's Spot using a beam of neutral deuterium molecules. The wavelength-independence and the weak constraints on angular alignment and position of the circular obstacle make Poisson's spot a promising candidate for applications ranging from the study of large-molecule diffraction and coherence in atom-lasers to patterning with large molecules.

  1. Data for and adjusted regional regression models of volume and quality of urban storm-water runoff in Boise and Garden City, Idaho, 1993-94

    USGS Publications Warehouse

    Kjelstrom, L.C.

    1995-01-01

    Previously developed U.S. Geological Survey regional regression models of runoff and 11 chemical constituents were evaluated to assess their suitability for use in urban areas in Boise and Garden City. Data collected in the study area were used to develop adjusted regional models of storm-runoff volumes and mean concentrations and loads of chemical oxygen demand, dissolved and suspended solids, total nitrogen and total ammonia plus organic nitrogen as nitrogen, total and dissolved phosphorus, and total recoverable cadmium, copper, lead, and zinc. Explanatory variables used in these models were drainage area, impervious area, land-use information, and precipitation data. Mean annual runoff volume and loads at the five outfalls were estimated from 904 individual storms during 1976 through 1993. Two methods were used to compute individual storm loads. The first method used adjusted regional models of storm loads and the second used adjusted regional models for mean concentration and runoff volume. For large storms, the first method seemed to produce excessively high loads for some constituents and the second method provided more reliable results for all constituents except suspended solids. The first method provided more reliable results for large storms for suspended solids.

  2. Cumulative Poisson Distribution Program

    NASA Technical Reports Server (NTRS)

    Bowerman, Paul N.; Scheuer, Ernest M.; Nolty, Robert

    1990-01-01

    Overflow and underflow in sums prevented. Cumulative Poisson Distribution Program, CUMPOIS, one of two computer programs that make calculations involving cumulative Poisson distributions. Both programs, CUMPOIS (NPO-17714) and NEWTPOIS (NPO-17715), used independently of one another. CUMPOIS determines cumulative Poisson distribution, used to evaluate cumulative distribution function (cdf) for gamma distributions with integer shape parameters and cdf for chi-square (χ²) distributions with even degrees of freedom. Used by statisticians and others concerned with probabilities of independent events occurring over specific units of time, area, or volume. Written in C.
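
    The numerical idea, accumulating the cdf term by term in log space so that neither exp(-lambda) nor the factorial under/overflows, can be sketched in a few lines (an illustrative analogue; CUMPOIS itself is written in C):

        import math

        def poisson_cdf(k, lam):
            """P(X <= k) for X ~ Poisson(lam), accumulated in log space."""
            log_term = -lam                  # log P(X = 0)
            total = math.exp(log_term)
            for i in range(1, k + 1):
                log_term += math.log(lam) - math.log(i)   # log P(X=i) from log P(X=i-1)
                total += math.exp(log_term)
            return min(total, 1.0)

        print(poisson_cdf(1000, 1000.0))     # ~0.508; a naive 1000! would overflow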

  3. Regression Equations for Estimation of Annual Peak-Streamflow Frequency for Undeveloped Watersheds in Texas Using an L-moment-Based, PRESS-Minimized, Residual-Adjusted Approach

    USGS Publications Warehouse

    Asquith, William H.; Roussel, Meghan C.

    2009-01-01

    Annual peak-streamflow frequency estimates are needed for flood-plain management; for objective assessment of flood risk; for cost-effective design of dams, levees, and other flood-control structures; and for design of roads, bridges, and culverts. Annual peak-streamflow frequency represents the peak streamflow for nine recurrence intervals of 2, 5, 10, 25, 50, 100, 200, 250, and 500 years. Common methods for estimation of peak-streamflow frequency for ungaged or unmonitored watersheds are regression equations for each recurrence interval developed for one or more regions; such regional equations are the subject of this report. The method is based on analysis of annual peak-streamflow data from U.S. Geological Survey streamflow-gaging stations (stations). Beginning in 2007, the U.S. Geological Survey, in cooperation with the Texas Department of Transportation and in partnership with Texas Tech University, began a 3-year investigation concerning the development of regional equations to estimate annual peak-streamflow frequency for undeveloped watersheds in Texas. The investigation focuses primarily on 638 stations with 8 or more years of data from undeveloped watersheds and other criteria. The general approach is explicitly limited to the use of L-moment statistics, which are used in conjunction with a technique of multi-linear regression referred to as PRESS minimization. The approach used to develop the regional equations, which was refined during the investigation, is referred to as the 'L-moment-based, PRESS-minimized, residual-adjusted approach'. For the approach, seven unique distributions are fit to the sample L-moments of the data for each of 638 stations and trimmed means of the seven results of the distributions for each recurrence interval are used to define the station specific, peak-streamflow frequency. As a first iteration of regression, nine weighted-least-squares, PRESS-minimized, multi-linear regression equations are computed using the watershed

  4. Scaling the Poisson Distribution

    ERIC Educational Resources Information Center

    Farnsworth, David L.

    2014-01-01

    We derive the additive property of Poisson random variables directly from the probability mass function. An important application of the additive property to quality testing of computer chips is presented.
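
    The property in question: if X ~ Poisson(lambda) and Y ~ Poisson(mu) are independent, then X + Y ~ Poisson(lambda + mu). A standard derivation from the probability mass function (a sketch of the argument, not the article's exposition):

        % Convolution of the two mass functions, then the binomial theorem.
        \begin{align*}
        P(X+Y=n) &= \sum_{k=0}^{n} \frac{e^{-\lambda}\lambda^{k}}{k!}
                    \cdot \frac{e^{-\mu}\mu^{n-k}}{(n-k)!}
                  = \frac{e^{-(\lambda+\mu)}}{n!} \sum_{k=0}^{n}
                    \binom{n}{k}\lambda^{k}\mu^{n-k} \\
                 &= \frac{e^{-(\lambda+\mu)}(\lambda+\mu)^{n}}{n!}.
        \end{align*}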

  5. Penalized count data regression with application to hospital stay after pediatric cardiac surgery

    PubMed Central

    Wang, Zhu; Ma, Shuangge; Zappitelli, Michael; Parikh, Chirag; Wang, Ching-Yun; Devarajan, Prasad

    2014-01-01

    Pediatric cardiac surgery may lead to poor outcomes such as acute kidney injury (AKI) and prolonged hospital length of stay (LOS). Plasma and urine biomarkers may help with early identification and prediction of these adverse clinical outcomes. In a recent multi-center study, 311 children undergoing cardiac surgery were enrolled to evaluate multiple biomarkers for diagnosis and prognosis of AKI and other clinical outcomes. LOS is often analyzed as count data, thus Poisson regression and negative binomial (NB) regression are common choices for developing predictive models. With many correlated prognostic factors and biomarkers, variable selection is an important step. The present paper proposes new variable selection methods for Poisson and NB regression. We evaluated regularized regression through a penalized likelihood function. We first extend the elastic net (Enet) Poisson to two penalized Poisson regressions: Mnet, a combination of minimax concave and ridge penalties, and Snet, a combination of smoothly clipped absolute deviation (SCAD) and ridge penalties. Furthermore, we extend the above methods to penalized NB regression. For the Enet, Mnet, and Snet penalties (EMSnet), we develop a unified algorithm to estimate the parameters and conduct variable selection simultaneously. Simulation studies show that the proposed methods have advantages with highly correlated predictors, against some of the competing methods. Applying the proposed methods to the aforementioned data, it is discovered that early postoperative urine biomarkers including NGAL, IL18, and KIM-1 independently predict LOS, after adjusting for risk and biomarker variables. PMID:24742430
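
    Of the penalties discussed, only the elastic net is available off the shelf in statsmodels (Mnet and Snet are the paper's contributions); a minimal elastic-net Poisson sketch on simulated stand-ins for the biomarker data:

        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(11)
        n, p = 300, 20
        X = rng.normal(size=(n, p))
        beta = np.zeros(p)
        beta[:3] = [0.4, -0.3, 0.2]                  # three true predictors
        los = rng.poisson(np.exp(1.0 + X @ beta))    # length-of-stay counts

        fit = sm.GLM(los, sm.add_constant(X),
                     family=sm.families.Poisson()).fit_regularized(
                         method="elastic_net", alpha=0.05, L1_wt=0.5)
        print(np.flatnonzero(fit.params[1:]))        # indices of selected predictors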

  6. Essential Variational Poisson Cohomology

    NASA Astrophysics Data System (ADS)

    De Sole, Alberto; Kac, Victor G.

    2012-08-01

    In our recent paper "The variational Poisson cohomology" (2011) we computed the dimension of the variational Poisson cohomology $H^\bullet_K(\mathcal{V})$ for any quasiconstant coefficient $\ell \times \ell$ matrix differential operator $K$ of order $N$ with invertible leading coefficient, provided that $\mathcal{V}$ is a normal algebra of differential functions over a linearly closed differential field. In the present paper we show that, for $K$ skewadjoint, the $\mathbb{Z}$-graded Lie superalgebra $H^\bullet_K(\mathcal{V})$ is isomorphic to the finite dimensional Lie superalgebra $\widetilde{H}(N\ell, S)$. We also prove that the subalgebra of "essential" variational Poisson cohomology, consisting of classes vanishing on the Casimirs of $K$, is zero. This vanishing result has applications to the theory of bi-Hamiltonian structures and their deformations. At the end of the paper we also consider the translation invariant case.

  7. Poisson's spot with molecules

    NASA Astrophysics Data System (ADS)

    Reisinger, Thomas; Patel, Amil A.; Reingruber, Herbert; Fladischer, Katrin; Ernst, Wolfgang E.; Bracco, Gianangelo; Smith, Henry I.; Holst, Bodil

    2009-05-01

    In the Poisson-spot experiment, waves emanating from a source are blocked by a circular obstacle. Due to their positive on-axis interference an image of the source (the Poisson spot) is observed within the geometrical shadow of the obstacle. In this paper we report the observation of Poisson’s spot using a beam of neutral deuterium molecules. The wavelength independence and the weak constraints on angular alignment and position of the circular obstacle make Poisson’s spot a promising candidate for applications ranging from the study of large molecule diffraction to patterning with molecules.

  8. A regional classification scheme for estimating reference water quality in streams using land-use-adjusted spatial regression-tree analysis

    USGS Publications Warehouse

    Robertson, D.M.; Saad, D.A.; Heisey, D.M.

    2006-01-01

    Various approaches are used to subdivide large areas into regions containing streams that have similar reference or background water quality and that respond similarly to different factors. For many applications, such as establishing reference conditions, it is preferable to use physical characteristics that are not affected by human activities to delineate these regions. However, most approaches, such as ecoregion classifications, rely on land use to delineate regions or have difficulties compensating for the effects of land use. Land use not only directly affects water quality, but it is often correlated with the factors used to define the regions. In this article, we describe modifications to SPARTA (spatial regression-tree analysis), a relatively new approach applied to water-quality and environmental characteristic data to delineate zones with similar factors affecting water quality. In this modified approach, land-use-adjusted (residualized) water quality and environmental characteristics are computed for each site. Regression-tree analysis is applied to the residualized data to determine the most statistically important environmental characteristics describing the distribution of a specific water-quality constituent. Geographic information for small basins throughout the study area is then used to subdivide the area into relatively homogeneous environmental water-quality zones. For each zone, commonly used approaches are subsequently used to define its reference water quality and how its water quality responds to changes in land use. SPARTA is used to delineate zones of similar reference concentrations of total phosphorus and suspended sediment throughout the upper Midwestern part of the United States. © 2006 Springer Science+Business Media, Inc.
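
    The residualizing step described above can be sketched generically: regress the water-quality variable on land use, then let a regression tree partition the residuals on environmental characteristics (hypothetical variables; not the SPARTA implementation):

        import numpy as np
        from sklearn.linear_model import LinearRegression
        from sklearn.tree import DecisionTreeRegressor

        rng = np.random.default_rng(9)
        n = 400
        agriculture = rng.uniform(0, 1, n)       # land-use fraction per basin
        soil_perm = rng.uniform(0, 10, n)        # environmental characteristic
        tp = 0.05 + 0.3 * agriculture + 0.02 * soil_perm + rng.normal(0, 0.03, n)

        landuse_fit = LinearRegression().fit(agriculture[:, None], tp)
        resid = tp - landuse_fit.predict(agriculture[:, None])   # land-use-adjusted

        tree = DecisionTreeRegressor(max_depth=2).fit(soil_perm[:, None], resid)
        print(tree.tree_.threshold[0])           # first split suggests a zone boundary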

  9. Evaluation of a two-part regression calibration to adjust for dietary exposure measurement error in the Cox proportional hazards model: A simulation study.

    PubMed

    Agogo, George O; van der Voet, Hilko; Van't Veer, Pieter; van Eeuwijk, Fred A; Boshuizen, Hendriek C

    2016-07-01

    Dietary questionnaires are prone to measurement error, which biases the perceived association between dietary intake and risk of disease. Short-term measurements are required to adjust for this bias. For foods that are not consumed daily, the short-term measurements are often characterized by excess zeroes. Via a simulation study, the performance of a two-part calibration model that was developed for a single-replicate study design was assessed by mimicking leafy vegetable intake reports from the multicenter European Prospective Investigation into Cancer and Nutrition (EPIC) study. In part I of the fitted two-part calibration model, a logistic distribution was assumed; in part II, a gamma distribution was assumed. The model was assessed with respect to the magnitude of the correlation between the consumption probability and the consumed amount (hereafter, cross-part correlation), the number and form of covariates in the calibration model, the percentage of zero response values, and the magnitude of the measurement error in the dietary intake. From the simulation study results, transforming the dietary variable in the regression calibration to an appropriate scale was found to be the most important factor for model performance. Reducing the number of covariates in the model could be beneficial, but was not critical in large-sample studies. The performance was remarkably robust when fitting a one-part rather than a two-part model. The model performance was minimally affected by the cross-part correlation. PMID:27003183

  10. Graphical user interface for AMOS and POISSON

    SciTech Connect

    Swatloski, T.L.

    1993-03-02

    A graphical user interface (GUI) exists for building model geometry for the time-domain field code AMOS. This GUI has recently been modified to build models for, and display the results of, POISSON, the Poisson electrostatic solver maintained by the Los Alamos Accelerator Code Group. Included in the GUI is a 2-D graphics editor allowing interactive construction of the model geometry. Polygons may be created by entering points with the mouse, with text input, or by reading coordinates from a file. Circular arcs have recently been added. Once polygons are entered, points may be inserted, moved, or deleted. Materials can be assigned to polygons and are represented by different colors. The unit scale may be adjusted, as well as the viewport. A rectangular mesh may be generated for AMOS or a triangular mesh for POISSON. Potentials from POISSON are represented with a contour plot, and the designer can click anywhere on the model to display the potential value at that location. The interface was developed under the X Window System using the Motif look and feel.

  11. ADJUSTMENT FOR HETEROGENEOUS GENETIC AND NON-GENETIC (CO)VARIANCE STRUCTURES ON TEST-DAY MODELS USING A TRANSFORMATION ON RANDOM REGRESSION EFFECT REGRESSORS

    Technology Transfer Automated Retrieval System (TEKTRAN)

    A method of accounting for differences in variation in components of test-day milk production records was developed. This method could improve the accuracy of genetic evaluations. A random regression model is used to analyze the data, then a transformation is applied to the random regression coeffic...

  12. The performance of functional methods for correcting non-Gaussian measurement error within Poisson regression: corrected excess risk of lung cancer mortality in relation to radon exposure among French uranium miners.

    PubMed

    Allodji, Rodrigue S; Thiébaut, Anne C M; Leuraud, Klervi; Rage, Estelle; Henry, Stéphane; Laurier, Dominique; Bénichou, Jacques

    2012-12-30

    A broad variety of methods for measurement error (ME) correction have been developed, but these methods have rarely been applied possibly because their ability to correct ME is poorly understood. We carried out a simulation study to assess the performance of three error-correction methods: two variants of regression calibration (the substitution method and the estimation calibration method) and the simulation extrapolation (SIMEX) method. Features of the simulated cohorts were borrowed from the French Uranium Miners' Cohort in which exposure to radon had been documented from 1946 to 1999. In the absence of ME correction, we observed a severe attenuation of the true effect of radon exposure, with a negative relative bias of the order of 60% on the excess relative risk of lung cancer death. In the main scenario considered, that is, when ME characteristics previously determined as most plausible from the French Uranium Miners' Cohort were used both to generate exposure data and to correct for ME at the analysis stage, all three error-correction methods showed a noticeable but partial reduction of the attenuation bias, with a slight advantage for the SIMEX method. However, the performance of the three correction methods highly depended on the accurate determination of the characteristics of ME. In particular, we encountered severe overestimation in some scenarios with the SIMEX method, and we observed lack of correction with the three methods in some other scenarios. For illustration, we also applied and compared the proposed methods on the real data set from the French Uranium Miners' Cohort study. PMID:22996087
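
    SIMEX itself is mechanical enough to sketch: refit the model after adding extra noise at increasing multiples of the assumed measurement-error variance, then extrapolate the coefficient back to zero added error (lambda = -1). This toy version assumes classical additive normal error and simulated data, not the miners' cohort:

        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(2)
        n, sigma_u = 2000, 1.0
        x = rng.normal(size=n)                        # true exposure
        w = x + rng.normal(scale=sigma_u, size=n)     # mismeasured exposure
        y = rng.poisson(np.exp(0.2 + 0.5 * x))        # counts, true beta = 0.5

        def slope(expo):
            X = sm.add_constant(expo)
            return sm.GLM(y, X, family=sm.families.Poisson()).fit().params[1]

        lams = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
        betas = [np.mean([slope(w + rng.normal(scale=np.sqrt(l) * sigma_u, size=n))
                          for _ in range(20)]) for l in lams]

        quad = np.polyfit(lams, betas, 2)             # quadratic extrapolant
        print(betas[0], np.polyval(quad, -1.0))       # naive vs. SIMEX-corrected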

  13. Climate variations and salmonellosis transmission in Adelaide, South Australia: a comparison between regression models

    NASA Astrophysics Data System (ADS)

    Zhang, Ying; Bi, Peng; Hiller, Janet

    2008-01-01

    This is the first study to identify appropriate regression models for the association between climate variation and salmonellosis transmission. A comparison between different regression models was conducted using surveillance data in Adelaide, South Australia. By using notified salmonellosis cases and climatic variables from the Adelaide metropolitan area over the period 1990-2003, four regression methods were examined: standard Poisson regression, autoregressive adjusted Poisson regression, multiple linear regression, and a seasonal autoregressive integrated moving average (SARIMA) model. Notified salmonellosis cases in 2004 were used to test the forecasting ability of the four models. Parameter estimation, goodness-of-fit and forecasting ability of the four regression models were compared. Temperatures occurring 2 weeks prior to cases were positively associated with cases of salmonellosis. Rainfall was also inversely related to the number of cases. The comparison of the goodness-of-fit and forecasting ability suggest that the SARIMA model is better than the other three regression models. Temperature and rainfall may be used as climatic predictors of salmonellosis cases in regions with climatic characteristics similar to those of Adelaide. The SARIMA model could, thus, be adopted to quantify the relationship between climate variations and salmonellosis transmission.
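
    For contrast with the Poisson models, the preferred SARIMA approach with a climatic regressor can be sketched via statsmodels' SARIMAX (illustrative weekly series and arbitrary orders, not the Adelaide data or the study's specification):

        import numpy as np
        from statsmodels.tsa.statespace.sarimax import SARIMAX

        rng = np.random.default_rng(4)
        t = np.arange(520)                                          # ten years of weeks
        temp = 20 + 8 * np.sin(2 * np.pi * t / 52) + rng.normal(0, 2, 520)
        cases = rng.poisson(np.exp(1.5 + 0.04 * np.roll(temp, 2)))  # 2-week lag

        model = SARIMAX(cases, exog=temp, order=(1, 0, 1),
                        seasonal_order=(1, 0, 0, 52)).fit(disp=False)
        print(model.params)
        print(model.forecast(steps=4, exog=temp[-4:].reshape(-1, 1)))  # short forecast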

  14. Fractal Poisson processes

    NASA Astrophysics Data System (ADS)

    Eliazar, Iddo; Klafter, Joseph

    2008-09-01

    The Central Limit Theorem (CLT) and Extreme Value Theory (EVT) study, respectively, the stochastic limit-laws of sums and maxima of sequences of independent and identically distributed (i.i.d.) random variables via an affine scaling scheme. In this research we study the stochastic limit-laws of populations of i.i.d. random variables via nonlinear scaling schemes. The stochastic population-limits obtained are fractal Poisson processes which are statistically self-similar with respect to the scaling scheme applied, and which are characterized by two elemental structures: (i) a universal power-law structure common to all limits, and independent of the scaling scheme applied; (ii) a specific structure contingent on the scaling scheme applied. The sum-projection and the maximum-projection of the population-limits obtained are generalizations of the classic CLT and EVT results - extending them from affine to general nonlinear scaling schemes.

  15. Poisson Spot with Magnetic Levitation

    ERIC Educational Resources Information Center

    Hoover, Matthew; Everhart, Michael; D'Arruda, Jose

    2010-01-01

    In this paper we describe a unique method for obtaining the famous Poisson spot without adding obstacles to the light path, which could interfere with the effect. A Poisson spot is the interference effect from parallel rays of light diffracting around a solid spherical object, creating a bright spot in the center of the shadow.

  16. Poisson spot with magnetic levitation

    NASA Astrophysics Data System (ADS)

    Hoover, Matthew; Everhart, Michael; D'Arruda, Jose

    2010-02-01

    In this paper we describe a unique method for obtaining the famous Poisson spot without adding obstacles to the light path, which could interfere with the effect. A Poisson spot is the interference effect from parallel rays of light diffracting around a solid spherical object, creating a bright spot in the center of the shadow.

  17. How does Poisson kriging compare to the popular BYM model for mapping disease risks?

    PubMed Central

    Goovaerts, Pierre; Gebreab, Samson

    2008-01-01

    Background Geostatistical techniques are now available to account for spatially varying population sizes and spatial patterns in the mapping of disease rates. At first glance, Poisson kriging represents an attractive alternative to increasingly popular Bayesian spatial models in that: 1) it is easier to implement and less CPU intensive, and 2) it accounts for the size and shape of geographical units, avoiding the limitations of conditional auto-regressive (CAR) models commonly used in Bayesian algorithms while allowing for the creation of isopleth risk maps. Both approaches, however, have never been compared in simulation studies, and there is a need to better understand their merits in terms of accuracy and precision of disease risk estimates. Results Besag, York and Mollie's (BYM) model and Poisson kriging (point and area-to-area implementations) were applied to age-adjusted lung and cervix cancer mortality rates recorded for white females in two contrasted county geographies: 1) state of Indiana that consists of 92 counties of fairly similar size and shape, and 2) four states in the Western US (Arizona, California, Nevada and Utah) forming a set of 118 counties that are vastly different geographical units. The spatial support (i.e. point versus area) has a much smaller impact on the results than the statistical methodology (i.e. geostatistical versus Bayesian models). Differences between methods are particularly pronounced in the Western US dataset: BYM model yields smoother risk surface and prediction variance that changes mainly as a function of the predicted risk, while the Poisson kriging variance increases in large sparsely populated counties. Simulation studies showed that the geostatistical approach yields smaller prediction errors, more precise and accurate probability intervals, and allows a better discrimination between counties with high and low mortality risks. The benefit of area-to-area Poisson kriging increases as the county geography becomes more

  18. ``Regressed experts'' as a new state in teachers' professional development: lessons from Computer Science teachers' adjustments to substantial changes in the curriculum

    NASA Astrophysics Data System (ADS)

    Liberman, Neomi; Ben-David Kolikant, Yifat; Beeri, Catriel

    2012-09-01

    Due to a program reform in Israel, experienced CS high-school teachers faced the need to master and teach a new programming paradigm. This situation served as an opportunity to explore the relationship between teachers' content knowledge (CK) and their pedagogical content knowledge (PCK). This article focuses on three case studies, with emphasis on one of them. Using observations and interviews, we examine how the teachers we observed taught, and what development of their teaching occurred as a result of their teaching experience, if at all. Our findings suggest that this situation creates a new hybrid state of teachers, which we term "regressed experts." These teachers incorporate in their professional practice some elements typical of novices and some typical of experts. We also found that these teachers' experience, although established when teaching a different CK, serves as leverage to improve their knowledge and understanding of aspects of the new content.

  19. Relaxed Poisson cure rate models.

    PubMed

    Rodrigues, Josemar; Cordeiro, Gauss M; Cancho, Vicente G; Balakrishnan, N

    2016-03-01

    The purpose of this article is to make the standard promotion cure rate model (Yakovlev and Tsodikov) more flexible by assuming that the number of lesions or altered cells after a treatment follows a fractional Poisson distribution (Laskin). It is proved that the well-known Mittag-Leffler relaxation function (Berberan-Santos) is a simple way to obtain a new cure rate model that is a compromise between the promotion and geometric cure rate models allowing for superdispersion. So, the relaxed cure rate model developed here can be considered a natural and less restrictive extension of the popular Poisson cure rate model at the cost of an additional parameter, and a competitor to negative-binomial cure rate models (Rodrigues et al.). Some mathematical properties of a proper relaxed Poisson density are explored. A simulation study and an illustration of the proposed cure rate model from the Bayesian point of view are finally presented. PMID:26686485

  20. Graded geometry and Poisson reduction

    SciTech Connect

    Cattaneo, A. S.; Zambon, M.

    2009-02-02

    The main result extends the Marsden-Ratiu reduction theorem in Poisson geometry, and is proven by means of graded geometry. In this note we provide the background material about graded geometry necessary for the proof. Further, we provide an alternative algebraic proof for the main result.

  1. Calculation of the Poisson cumulative distribution function

    NASA Technical Reports Server (NTRS)

    Bowerman, Paul N.; Nolty, Robert G.; Scheuer, Ernest M.

    1990-01-01

    A method for calculating the Poisson cdf (cumulative distribution function) is presented. The method avoids computer underflow and overflow during the process. The computer program uses this technique to calculate the Poisson cdf for arbitrary inputs. An algorithm that determines the Poisson parameter required to yield a specified value of the cdf is presented.
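
    The record above describes, but does not list, the inversion algorithm. As a rough illustration of the same idea (not the NASA code; the function name is ours), the Poisson parameter that yields a specified cdf value can be found by root-finding, since the cdf is continuous and strictly decreasing in the rate:

      from scipy.optimize import brentq
      from scipy.stats import poisson

      def poisson_rate_for_cdf(n, p, lam_max=1e6):
          """Return lam with poisson.cdf(n, lam) == p, for 0 < p < 1."""
          f = lambda lam: poisson.cdf(n, lam) - p
          # f(0+) = 1 - p > 0 and f(lam_max) = -p < 0, so a root exists.
          return brentq(f, 1e-12, lam_max)

      print(poisson_rate_for_cdf(5, 0.9))   # rate giving P(X <= 5) = 0.9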

  2. A generalized gyrokinetic Poisson solver

    SciTech Connect

    Lin, Z.; Lee, W.W.

    1995-03-01

    A generalized gyrokinetic Poisson solver has been developed, which employs local operations in the configuration space to compute the polarization density response. The new technique is based on the actual physical process of gyrophase-averaging. It is useful for nonlocal simulations using general geometry equilibrium. Since it utilizes local operations rather than the global ones such as FFT, the new method is most amenable to massively parallel algorithms.

  3. Hyperbolically Patterned 3D Graphene Metamaterial with Negative Poisson's Ratio and Superelasticity.

    PubMed

    Zhang, Qiangqiang; Xu, Xiang; Lin, Dong; Chen, Wenli; Xiong, Guoping; Yu, Yikang; Fisher, Timothy S; Li, Hui

    2016-03-16

    A hyperbolically patterned 3D graphene metamaterial (GM) with negative Poisson's ratio and superelasticity is highlighted. It is synthesized by a modified hydrothermal approach and subsequent oriented freeze-casting strategy. GM presents a tunable Poisson's ratio by adjusting the structural porosity, macroscopic aspect ratio (L/D), and freeze-casting conditions. Such a GM suggests promising applications as soft actuators, sensors, robust shock absorbers, and environmental remediation. PMID:26788692

  4. A regularization corrected score method for nonlinear regression models with covariate error.

    PubMed

    Zucker, David M; Gorfine, Malka; Li, Yi; Tadesse, Mahlet G; Spiegelman, Donna

    2013-03-01

    Many regression analyses involve explanatory variables that are measured with error, and failing to account for this error is well known to lead to biased point and interval estimates of the regression coefficients. We present here a new general method for adjusting for covariate error. Our method consists of an approximate version of the Stefanski-Nakamura corrected score approach, using the method of regularization to obtain an approximate solution of the relevant integral equation. We develop the theory in the setting of classical likelihood models; this setting covers, for example, linear regression, nonlinear regression, logistic regression, and Poisson regression. The method is extremely general in terms of the types of measurement error models covered, and is a functional method in the sense of not involving assumptions on the distribution of the true covariate. We discuss the theoretical properties of the method and present simulation results in the logistic regression setting (univariate and multivariate). For illustration, we apply the method to data from the Harvard Nurses' Health Study concerning the relationship between physical activity and breast cancer mortality in the period following a diagnosis of breast cancer. PMID:23379851

  5. Logistic Regression

    NASA Astrophysics Data System (ADS)

    Grégoire, G.

    2014-12-01

    Logistic regression was originally intended to explain the relationship between the probability of an event and a set of covariables. The model's coefficients can be interpreted via the odds and odds ratio, which are presented in the introduction of the chapter. When the observations are obtained individually, we speak of binary logistic regression; when they are grouped, the logistic regression is said to be binomial. In our presentation we mainly focus on the binary case. For statistical inference the main tool is the maximum likelihood methodology: we present the Wald, Rao and likelihood-ratio results and their use to compare nested models. The problems we intend to deal with are essentially the same as in multiple linear regression: testing a global effect, individual effects, selection of variables to build a model, measures of the fit of the model, and prediction of new values. The methods are demonstrated on data sets using R. Finally we briefly consider the binomial case and the situation where we are interested in several events, that is, the polytomous (multinomial) logistic regression and the particular case of ordinal logistic regression.
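
    As a minimal sketch of the workflow the chapter describes (the chapter itself uses R; here we assume Python with statsmodels and synthetic data), one can fit a binary logistic regression, read off odds ratios and Wald tests, and compare nested models with a likelihood-ratio test:

      import numpy as np
      import statsmodels.api as sm
      from scipy.stats import chi2

      rng = np.random.default_rng(0)
      n = 500
      x1, x2 = rng.normal(size=n), rng.normal(size=n)
      p = 1.0 / (1.0 + np.exp(-(0.5 + 1.2 * x1)))   # x2 has no true effect
      y = rng.binomial(1, p)

      full = sm.Logit(y, sm.add_constant(np.column_stack([x1, x2]))).fit(disp=False)
      red = sm.Logit(y, sm.add_constant(x1)).fit(disp=False)

      print(np.exp(full.params))        # odds ratios
      print(full.pvalues)               # Wald tests, one per coefficient
      lr = 2 * (full.llf - red.llf)     # likelihood-ratio statistic
      print(lr, chi2.sf(lr, df=1))      # test of the nested comparison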

  6. Tunable negative Poisson's ratio in hydrogenated graphene.

    PubMed

    Jiang, Jin-Wu; Chang, Tienchong; Guo, Xingming

    2016-09-21

    We perform molecular dynamics simulations to investigate the effect of hydrogenation on the Poisson's ratio of graphene. It is found that the value of the Poisson's ratio of graphene can be effectively tuned from positive to negative by varying the percentage of hydrogenation. Specifically, the Poisson's ratio decreases with an increase in the percentage of hydrogenation, and reaches a minimum value of -0.04 when the percentage of hydrogenation is about 50%. The Poisson's ratio starts to increase upon a further increase of the percentage of hydrogenation. The appearance of a minimum negative Poisson's ratio in the hydrogenated graphene is attributed to the suppression of the hydrogenation-induced ripples during the stretching of graphene. Our results demonstrate that hydrogenation is a valuable approach for tuning the Poisson's ratio from positive to negative in graphene. PMID:27536878

  7. CUMPOIS- CUMULATIVE POISSON DISTRIBUTION PROGRAM

    NASA Technical Reports Server (NTRS)

    Bowerman, P. N.

    1994-01-01

    The Cumulative Poisson distribution program, CUMPOIS, is one of two programs which make calculations involving cumulative Poisson distributions. Both programs, CUMPOIS (NPO-17714) and NEWTPOIS (NPO-17715), can be used independently of one another. CUMPOIS determines the approximate cumulative binomial distribution, evaluates the cumulative distribution function (cdf) for gamma distributions with integer shape parameters, and evaluates the cdf for chi-square distributions with even degrees of freedom. It can be used by statisticians and others concerned with probabilities of independent events occurring over specific units of time, area, or volume. CUMPOIS calculates the probability that n or fewer events (i.e., cumulative) will occur within any unit when the expected number of events is given as lambda. Normally, this probability is calculated by a direct summation, from i=0 to n, of terms involving the exponential function, lambda, and inverse factorials. This approach, however, eventually fails due to underflow for sufficiently large values of n. Additionally, when the exponential term is moved outside of the summation for simplification purposes, there is a risk that the terms remaining within the summation, and the summation itself, will overflow for certain values of i and lambda. CUMPOIS eliminates these possibilities by multiplying an additional exponential factor into the summation terms and the partial sum whenever overflow/underflow situations threaten. The reciprocal of this term is then multiplied into the completed sum giving the cumulative probability. The CUMPOIS program is written in C. It was developed on an IBM AT with a numeric co-processor using Microsoft C 5.0. Because the source code is written using standard C structures and functions, it should compile correctly on most C compilers. The program format is interactive, accepting lambda and n as inputs. It has been implemented under DOS 3.2 and has a memory requirement of 26K. CUMPOIS was
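
    The rescaling trick described above has a modern equivalent: carrying the summation in log space and factoring out the largest term. A hedged Python sketch of that idea (not the original C program):

      import math

      def poisson_cdf(n, lam):
          """P(X <= n) for X ~ Poisson(lam), summed stably in log space."""
          log_terms = [i * math.log(lam) - lam - math.lgamma(i + 1)
                       for i in range(n + 1)]
          m = max(log_terms)                         # factor out the peak
          s = sum(math.exp(t - m) for t in log_terms)
          return math.exp(m) * s

      # Naive summation underflows here because exp(-800) == 0.0 in doubles.
      print(poisson_cdf(850, 800.0))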

  8. Multivariate Risk Adjustment of Primary Care Patient Panels in a Public Health Setting: A Comparison of Statistical Models.

    PubMed

    Hirozawa, Anne M; Montez-Rath, Maria E; Johnson, Elizabeth C; Solnit, Stephen A; Drennan, Michael J; Katz, Mitchell H; Marx, Rani

    2016-01-01

    We compared prospective risk adjustment models for adjusting patient panels at the San Francisco Department of Public Health. We used 4 statistical models (linear regression, two-part model, zero-inflated Poisson, and zero-inflated negative binomial) and 4 subsets of predictor variables (age/gender categories, chronic diagnoses, homelessness, and a loss to follow-up indicator) to predict primary care visit frequency. Predicted visit frequency was then used to calculate patient weights and adjusted panel sizes. The two-part model using all predictor variables performed best (R² = 0.20). This model, designed specifically for safety net patients, may prove useful for panel adjustment in other public health settings. PMID:27576054

  9. Alternative Derivations for the Poisson Integral Formula

    ERIC Educational Resources Information Center

    Chen, J. T.; Wu, C. S.

    2006-01-01

    Poisson integral formula is revisited. The kernel in the Poisson integral formula can be derived in a series form through the direct BEM free of the concept of image point by using the null-field integral equation in conjunction with the degenerate kernels. The degenerate kernels for the closed-form Green's function and the series form of Poisson…
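
    For reference, the classical formula the record revisits, stated for the unit disk (a standard result, not quoted from the paper):

      u(re^{i\theta}) = \frac{1}{2\pi} \int_{0}^{2\pi}
          \frac{1 - r^{2}}{1 - 2r\cos(\theta - \phi) + r^{2}}\,
          f(e^{i\phi})\, d\phi, \qquad 0 \le r < 1,

    which recovers the harmonic function u inside the disk from its boundary values f; the fraction under the integral is the Poisson kernel.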

  10. Metal [100] Nanowires with Negative Poisson's Ratio.

    PubMed

    Ho, Duc Tam; Kwon, Soon-Yong; Kim, Sung Youb

    2016-01-01

    When materials are stretched, lateral contraction is commonly observed. This is because Poisson's ratio, the quantity that describes the relationship between a lateral strain and the applied strain, is positive for nearly all materials. There are some reported structures and materials having negative Poisson's ratio. However, most of them are at the macroscale, and reentrant structures and rigid rotating units are the main mechanisms for their negative Poisson's ratio behavior. Here, with numerical and theoretical evidence, we show that metal [100] nanowires with asymmetric cross-sections such as rectangles or ellipses can exhibit negative Poisson's ratio behavior. Furthermore, the negative Poisson's ratio behavior can be further improved by introducing a hole inside the asymmetric nanowires. We show that the surface effect inducing the asymmetric stresses inside the nanowires is the main origin of this superior property. PMID:27282358

  11. Robust Regression.

    PubMed

    Huang, Dong; Cabral, Ricardo; De la Torre, Fernando

    2016-02-01

    Discriminative methods (e.g., kernel regression, SVM) have been extensively used to solve problems such as object recognition, image alignment and pose estimation from images. These methods typically map image features (X) to continuous (e.g., pose) or discrete (e.g., object category) values. A major drawback of existing discriminative methods is that samples are directly projected onto a subspace and hence fail to account for outliers common in realistic training sets due to occlusion, specular reflections or noise. It is important to notice that existing discriminative approaches assume the input variables X to be noise free. Thus, discriminative methods experience significant performance degradation when gross outliers are present. Despite its obvious importance, the problem of robust discriminative learning has been relatively unexplored in computer vision. This paper develops the theory of robust regression (RR) and presents an effective convex approach that uses recent advances on rank minimization. The framework applies to a variety of problems in computer vision including robust linear discriminant analysis, regression with missing data, and multi-label classification. Several synthetic and real examples with applications to head pose estimation from images, image and video classification and facial attribute classification with missing data are used to illustrate the benefits of RR. PMID:26761740

  12. Time series regression model for infectious disease and weather.

    PubMed

    Imai, Chisato; Armstrong, Ben; Chalabi, Zaid; Mangtani, Punam; Hashizume, Masahiro

    2015-10-01

    Time series regression has been developed and long used to evaluate the short-term associations of air pollution and weather with mortality or morbidity of non-infectious diseases. The application of the regression approaches from this tradition to infectious diseases, however, is less well explored and raises some new issues. We discuss and present potential solutions for five issues often arising in such analyses: changes in immune population, strong autocorrelations, a wide range of plausible lag structures and association patterns, seasonality adjustments, and large overdispersion. The potential approaches are illustrated with datasets of cholera cases and rainfall from Bangladesh and influenza and temperature in Tokyo. Though this article focuses on the application of the traditional time series regression to infectious diseases and weather factors, we also briefly introduce alternative approaches, including mathematical modeling, wavelet analysis, and autoregressive integrated moving average (ARIMA) models. Modifications proposed to standard time series regression practice include using sums of past cases as proxies for the immune population, and using the logarithm of lagged disease counts to control autocorrelation due to true contagion, both of which are motivated from "susceptible-infectious-recovered" (SIR) models. The complexity of lag structures and association patterns can often be informed by biological mechanisms and explored by using distributed lag non-linear models. For overdispersed models, alternative distribution models such as quasi-Poisson and negative binomial should be considered. Time series regression can be used to investigate dependence of infectious diseases on weather, but may need modifying to allow for features specific to this context. PMID:26188633
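
    One of the proposed modifications, a quasi-Poisson regression with the logarithm of lagged case counts as a covariate, can be sketched as follows (Python with statsmodels; the file and column names are assumptions, not from the paper):

      import numpy as np
      import pandas as pd
      import statsmodels.api as sm

      df = pd.read_csv("cases_weather.csv")            # hypothetical data
      df["log_lag_cases"] = np.log(df["cases"].shift(1) + 1)
      df = df.dropna()

      X = sm.add_constant(df[["temperature", "log_lag_cases"]])
      # scale="X2" estimates the dispersion, i.e. a quasi-Poisson fit.
      fit = sm.GLM(df["cases"], X, family=sm.families.Poisson()).fit(scale="X2")
      print(fit.summary())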

  13. Supervised Gamma Process Poisson Factorization

    SciTech Connect

    Anderson, Dylan Zachary

    2015-05-01

    This thesis develops the supervised gamma process Poisson factorization (S-GPPF) framework, a novel supervised topic model for joint modeling of count matrices and document labels. S-GPPF is fully generative and nonparametric: document labels and count matrices are modeled under a unified probabilistic framework and the number of latent topics is controlled automatically via a gamma process prior. The framework provides for multi-class classification of documents using a generative max-margin classifier. Several recent data augmentation techniques are leveraged to provide for exact inference using a Gibbs sampling scheme. The first portion of this thesis reviews supervised topic modeling and several key mathematical devices used in the formulation of S-GPPF. The thesis then introduces the S-GPPF generative model and derives the conditional posterior distributions of the latent variables for posterior inference via Gibbs sampling. The S-GPPF is shown to exhibit state-of-the-art performance for joint topic modeling and document classification on a dataset of conference abstracts, beating out competing supervised topic models. The unique properties of S-GPPF along with its competitive performance make it a novel contribution to supervised topic modeling.

  14. Poisson-Based Inference for Perturbation Models in Adaptive Spelling Training

    ERIC Educational Resources Information Center

    Baschera, Gian-Marco; Gross, Markus

    2010-01-01

    We present an inference algorithm for perturbation models based on Poisson regression. The algorithm is designed to handle unclassified input with multiple errors described by independent mal-rules. This knowledge representation provides an intelligent tutoring system with local and global information about a student, such as error classification…

  15. Negative Poisson's ratio materials via isotropic interactions.

    PubMed

    Rechtsman, Mikael C; Stillinger, Frank H; Torquato, Salvatore

    2008-08-22

    We show that under tension a classical many-body system with only isotropic pair interactions in a crystalline state can, counterintuitively, have a negative Poisson's ratio, or auxetic behavior. We derive the conditions under which the triangular lattice in two dimensions and lattices with cubic symmetry in three dimensions exhibit a negative Poisson's ratio. In the former case, the simple Lennard-Jones potential can give rise to auxetic behavior. In the latter case, a negative Poisson's ratio can be exhibited even when the material is constrained to be elastically isotropic. PMID:18764632

  16. Poisson's ratio of high-performance concrete

    SciTech Connect

    Persson, B.

    1999-10-01

    This article outlines an experimental and numerical study on Poisson's ratio of high-performance concrete subjected to air or sealed curing. Eight qualities of concrete (about 100 cylinders and 900 cubes) were studied, both young and in the mature state. The concretes contained between 5 and 10% silica fume, and two concretes in addition contained air-entrainment. Parallel studies of strength and internal relative humidity were carried out. The results indicate that Poisson's ratio of high-performance concrete is slightly smaller than that of normal-strength concrete. Analyses of the influence of maturity, type of aggregate, and moisture on Poisson's ratio are also presented. The project was carried out from 1991 to 1998.

  17. Magnetostrictive contribution to Poisson ratio of galfenol

    NASA Astrophysics Data System (ADS)

    Paes, V. Z. C.; Mosca, D. H.

    2013-09-01

    In this work we present a detailed study of the magnetostrictive contribution to the Poisson ratio of samples under applied mechanical stress. Magnetic contributions to strain and Poisson ratio for cubic materials were derived by accounting for elastic and magneto-elastic anisotropy contributions. We apply our theoretical results to a material of interest in magnetomechanics, namely galfenol (Fe1-xGax). Our results show that there is a non-negligible magnetic contribution in the linear portion of the curve of stress versus strain. The rotation of the magnetization towards the [110] crystallographic direction upon application of mechanical stress leads to auxetic behavior, i.e., a Poisson ratio with negative values. This magnetic contribution to auxetic behavior provides novel insight for the discussion of theoretical and experimental developments of materials that display unusual mechanical properties.

  18. A new inverse regression model applied to radiation biodosimetry

    PubMed Central

    Higueras, Manuel; Puig, Pedro; Ainsbury, Elizabeth A.; Rothkamm, Kai

    2015-01-01

    Biological dosimetry based on chromosome aberration scoring in peripheral blood lymphocytes enables timely assessment of the ionizing radiation dose absorbed by an individual. Here, new Bayesian-type count data inverse regression methods are introduced for situations where responses are Poisson or two-parameter compound Poisson distributed. Our Poisson models are calculated in a closed form, by means of Hermite and negative binomial (NB) distributions. For compound Poisson responses, complete and simplified models are provided. The simplified models are also expressible in a closed form and involve the use of compound Hermite and compound NB distributions. Three examples of applications are given that demonstrate the usefulness of these methodologies in cytogenetic radiation biodosimetry and in radiotherapy. We provide R and SAS codes which reproduce these examples. PMID:25663804

  19. A new bivariate negative binomial regression model

    NASA Astrophysics Data System (ADS)

    Faroughi, Pouya; Ismail, Noriszura

    2014-12-01

    This paper introduces a new form of bivariate negative binomial (BNB-1) regression which can be fitted to bivariate and correlated count data with covariates. The BNB regression discussed in this study can be fitted to bivariate and overdispersed count data with positive, zero or negative correlations. The joint p.m.f. of the BNB-1 distribution is derived from the product of two negative binomial marginals with a multiplicative factor parameter. Several testing methods were used to check the overdispersion and goodness-of-fit of the model. Application of BNB-1 regression is illustrated on a Malaysian motor insurance dataset. The results indicated that BNB-1 regression has a better fit than the bivariate Poisson and BNB-2 models with regard to the Akaike information criterion.

  20. On the Burgers-Poisson equation

    NASA Astrophysics Data System (ADS)

    Grunert, K.; Nguyen, Khai T.

    2016-09-01

    In this paper, we prove the existence and uniqueness of weak entropy solutions to the Burgers-Poisson equation for initial data in L1 (R). In addition an Oleinik type estimate is established and some criteria on local smoothness and wave breaking for weak entropy solutions are provided.

  1. Easy Demonstration of the Poisson Spot

    ERIC Educational Resources Information Center

    Gluck, Paul

    2010-01-01

    Many physics teachers have a set of slides of single, double and multiple slits to show their students the phenomena of interference and diffraction. Thomas Young's historic experiments with double slits were indeed a milestone in proving the wave nature of light. But another experiment, namely the Poisson spot, was also important historically and…

  2. Evolutionary inference via the Poisson Indel Process.

    PubMed

    Bouchard-Côté, Alexandre; Jordan, Michael I

    2013-01-22

    We address the problem of the joint statistical inference of phylogenetic trees and multiple sequence alignments from unaligned molecular sequences. This problem is generally formulated in terms of string-valued evolutionary processes along the branches of a phylogenetic tree. The classic evolutionary process, the TKF91 model [Thorne JL, Kishino H, Felsenstein J (1991) J Mol Evol 33(2):114-124] is a continuous-time Markov chain model composed of insertion, deletion, and substitution events. Unfortunately, this model gives rise to an intractable computational problem: The computation of the marginal likelihood under the TKF91 model is exponential in the number of taxa. In this work, we present a stochastic process, the Poisson Indel Process (PIP), in which the complexity of this computation is reduced to linear. The Poisson Indel Process is closely related to the TKF91 model, differing only in its treatment of insertions, but it has a global characterization as a Poisson process on the phylogeny. Standard results for Poisson processes allow key computations to be decoupled, which yields the favorable computational profile of inference under the PIP model. We present illustrative experiments in which Bayesian inference under the PIP model is compared with separate inference of phylogenies and alignments. PMID:23275296

  3. Regression: A Bibliography.

    ERIC Educational Resources Information Center

    Pedrini, D. T.; Pedrini, Bonnie C.

    Regression, another mechanism studied by Sigmund Freud, has had much research, e.g., hypnotic regression, frustration regression, schizophrenic regression, and infra-human-animal regression (often directly related to fixation). Many investigators worked with hypnotic age regression, which has a long history, going back to Russian reflexologists.…

  4. A generalized Poisson and Poisson-Boltzmann solver for electrostatic environments.

    PubMed

    Fisicaro, G; Genovese, L; Andreussi, O; Marzari, N; Goedecker, S

    2016-01-01

    The computational study of chemical reactions in complex, wet environments is critical for applications in many fields. It is often essential to study chemical reactions in the presence of applied electrochemical potentials, taking into account the non-trivial electrostatic screening coming from the solvent and the electrolytes. As a consequence, the electrostatic potential has to be found by solving the generalized Poisson and the Poisson-Boltzmann equations for neutral and ionic solutions, respectively. In the present work, solvers for both problems have been developed. A preconditioned conjugate gradient method has been implemented for the solution of the generalized Poisson equation and the linear regime of the Poisson-Boltzmann equation, allowing the minimization problem to be solved iteratively with some ten iterations of the ordinary Poisson equation solver. In addition, a self-consistent procedure enables us to solve the non-linear Poisson-Boltzmann problem. Both solvers exhibit very high accuracy and parallel efficiency and allow for the treatment of periodic, free, and slab boundary conditions. The solver has been integrated into the BigDFT and Quantum-ESPRESSO electronic-structure packages and will be released as an independent program, suitable for integration in other codes. PMID:26747797
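
    The structure of such a solver can be conveyed by a toy one-dimensional analogue (our sketch, unrelated to the BigDFT/Quantum-ESPRESSO implementation): discretize d/dx(eps(x) dphi/dx) = -4*pi*rho with finite differences and solve the resulting symmetric positive-definite system by conjugate gradients.

      import numpy as np
      from scipy.sparse import diags
      from scipy.sparse.linalg import cg

      N = 400
      h = 1.0 / (N + 1)                      # nodes at h, 2h, ..., Nh
      x = np.linspace(h, 1 - h, N)
      xm = (np.arange(N + 1) + 0.5) * h      # midpoints carry eps(x)
      eps = 1.0 + 0.5 * np.sin(2 * np.pi * xm)
      rho = np.exp(-200 * (x - 0.5) ** 2)    # localized charge density

      main = (eps[:-1] + eps[1:]) / h**2     # -(eps phi')' discretized
      off = -eps[1:-1] / h**2
      A = diags([off, main, off], [-1, 0, 1], format="csr")
      phi, info = cg(A, 4 * np.pi * rho)     # zero Dirichlet boundaries
      print(info, phi.max())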

  5. Poisson filtering of laser ranging data

    NASA Technical Reports Server (NTRS)

    Ricklefs, Randall L.; Shelus, Peter J.

    1993-01-01

    The filtering of data in a high noise, low signal strength environment is a situation encountered routinely in lunar laser ranging (LLR) and, to a lesser extent, in artificial satellite laser ranging (SLR). The use of Poisson statistics as one of the tools for filtering LLR data is described first in a historical context. The more recent application of this statistical technique to noisy SLR data is also described.

  6. Stabilities for nonisentropic Euler-Poisson equations.

    PubMed

    Cheung, Ka Luen; Wong, Sen

    2015-01-01

    We establish the stabilities and blowup results for the nonisentropic Euler-Poisson equations by the energy method. By analysing the second inertia, we show that the classical solutions of the system with attractive forces blow up in finite time in some special dimensions when the energy is negative. Moreover, we obtain the stabilities results for the system in the cases of attractive and repulsive forces. PMID:25861676

  7. First- and second-order Poisson spots

    NASA Astrophysics Data System (ADS)

    Kelly, William R.; Shirley, Eric L.; Migdall, Alan L.; Polyakov, Sergey V.; Hendrix, Kurt

    2009-08-01

    Although Thomas Young is generally given credit for being the first to provide evidence against Newton's corpuscular theory of light, it was Augustin Fresnel who first stated the modern theory of diffraction. We review the history surrounding Fresnel's 1818 paper and the role of the Poisson spot in the associated controversy. We next discuss the boundary-diffraction-wave approach to calculating diffraction effects and show how it can reduce the complexity of calculating diffraction patterns. We briefly discuss a generalization of this approach that reduces the dimensionality of integrals needed to calculate the complete diffraction pattern of any order diffraction effect. We repeat earlier demonstrations of the conventional Poisson spot and discuss an experimental setup for demonstrating an analogous phenomenon that we call a "second-order Poisson spot." Several features of the diffraction pattern can be explained simply by considering the path lengths of singly and doubly bent paths and distinguishing between first- and second-order diffraction effects related to such paths, respectively.

  8. Poisson's ratio over two centuries: challenging hypotheses

    PubMed Central

    Greaves, G. Neville

    2013-01-01

    This article explores Poisson's ratio, starting with the controversy concerning its magnitude and uniqueness in the context of the molecular and continuum hypotheses competing in the development of elasticity theory in the nineteenth century, moving on to its place in the development of materials science and engineering in the twentieth century, and concluding with its recent re-emergence as a universal metric for the mechanical performance of materials on any length scale. During these episodes France lost its scientific pre-eminence as paradigms switched from mathematical to observational, and accurate experiments became the prerequisite for scientific advance. The emergence of the engineering of metals followed, and subsequently the invention of composites—both somewhat separated from the discovery of quantum mechanics and crystallography, and illustrating the bifurcation of technology and science. Nowadays disciplines are reconnecting in the face of new scientific demands. During the past two centuries, though, the shape versus volume concept embedded in Poisson's ratio has remained invariant, but its application has exploded from its origins in describing the elastic response of solids and liquids, into areas such as materials with negative Poisson's ratio, brittleness, glass formation, and a re-evaluation of traditional materials. Moreover, the two contentious hypotheses have been reconciled in their complementarity within the hierarchical structure of materials and through computational modelling. PMID:24687094

  9. Nonlocal Poisson-Fermi model for ionic solvent

    NASA Astrophysics Data System (ADS)

    Xie, Dexuan; Liu, Jinn-Liang; Eisenberg, Bob

    2016-07-01

    We propose a nonlocal Poisson-Fermi model for ionic solvent that includes ion size effects and polarization correlations among water molecules in the calculation of electrostatic potential. It includes the previous Poisson-Fermi models as special cases, and its solution is the convolution of a solution of the corresponding nonlocal Poisson dielectric model with a Yukawa-like kernel function. The Fermi distribution is shown to be a set of optimal ionic concentration functions in the sense of minimizing an electrostatic potential free energy. Numerical results are reported to show the difference between a Poisson-Fermi solution and a corresponding Poisson solution.

  10. On the singularity of the Vlasov-Poisson system

    SciTech Connect

    Zheng, Jian; Qin, Hong

    2013-09-15

    The Vlasov-Poisson system can be viewed as the collisionless limit of the corresponding Fokker-Planck-Poisson system. It is reasonable to expect that the result of Landau damping can also be obtained from the Fokker-Planck-Poisson system when the collision frequency ν approaches zero. However, we show that the collisionless Vlasov-Poisson system is a singular limit of the collisional Fokker-Planck-Poisson system, and Landau's result can be recovered only as ν approaches zero from the positive side.

  11. On the Singularity of the Vlasov-Poisson System

    SciTech Connect

    Zheng, Jian; Qin, Hong

    2013-04-26

    The Vlasov-Poisson system can be viewed as the collisionless limit of the corresponding Fokker-Planck-Poisson system. It is reasonable to expect that the result of Landau damping can also be obtained from the Fokker-Planck-Poisson system when the collision frequency ν approaches zero. However, we show that the collisionless Vlasov-Poisson system is a singular limit of the collisional Fokker-Planck-Poisson system, and Landau's result can be recovered only as ν approaches zero from the positive side.

  12. Nonlocal Poisson-Fermi model for ionic solvent.

    PubMed

    Xie, Dexuan; Liu, Jinn-Liang; Eisenberg, Bob

    2016-07-01

    We propose a nonlocal Poisson-Fermi model for ionic solvent that includes ion size effects and polarization correlations among water molecules in the calculation of electrostatic potential. It includes the previous Poisson-Fermi models as special cases, and its solution is the convolution of a solution of the corresponding nonlocal Poisson dielectric model with a Yukawa-like kernel function. The Fermi distribution is shown to be a set of optimal ionic concentration functions in the sense of minimizing an electrostatic potential free energy. Numerical results are reported to show the difference between a Poisson-Fermi solution and a corresponding Poisson solution. PMID:27575084

  13. Adjustment disorder

    MedlinePlus

    American Psychiatric Association. Diagnostic and statistical manual of mental disorders. 5th ed. Arlington, Va: American Psychiatric Publishing. 2013. Powell AD. Grief, bereavement, and adjustment disorders. In: Stern TA, Rosenbaum ...

  14. Dual Poisson-Disk Tiling: an efficient method for distributing features on arbitrary surfaces.

    PubMed

    Li, Hongwei; Lo, Kui-Yip; Leung, Man-Kang; Fu, Chi-Wing

    2008-01-01

    This paper introduces a novel surface-modeling method to stochastically distribute features on arbitrary topological surfaces. The generated distribution of features follows the Poisson disk distribution, so we can guarantee a minimum separation between features and avoid feature overlap. With the proposed method, we can not only interactively adjust and edit features with the help of the proposed Poisson disk map, but also efficiently re-distribute features on object surfaces. The underlying mechanism is our dual tiling scheme, known as Dual Poisson-Disk Tiling. First, we compute the dual of a given surface parameterization and tile the dual surface with our specially designed dual tiles, on which the Poisson disk distribution has been pre-generated during pre-processing. By dual tiling, we can nicely avoid the problem of corner heterogeneity when tiling arbitrary parameterized surfaces, and can also reduce the tile-set complexity. Furthermore, the dual tiling scheme is non-periodic, and we can also maintain a manageable tile set. To demonstrate the applicability of this technique, we explore a number of surface-modeling applications: pattern and shape distribution, bump mapping, illustrative rendering, mold simulation, the modeling of separable features in texture and BTF, and the distribution of geometric textures in shell space. PMID:18599912

  15. Kernel Continuum Regression.

    PubMed

    Lee, Myung Hee; Liu, Yufeng

    2013-12-01

    The continuum regression technique provides an appealing regression framework connecting ordinary least squares, partial least squares and principal component regression in one family. It offers some insight on the underlying regression model for a given application. Moreover, it helps to provide deep understanding of various regression techniques. Despite the useful framework, however, the current development on continuum regression is only for linear regression. In many applications, nonlinear regression is necessary. The extension of continuum regression from linear models to nonlinear models using kernel learning is considered. The proposed kernel continuum regression technique is quite general and can handle very flexible regression model estimation. An efficient algorithm is developed for fast implementation. Numerical examples have demonstrated the usefulness of the proposed technique. PMID:24058224

  16. Ductile Titanium Alloy with Low Poisson's Ratio

    SciTech Connect

    Hao, Y. L.; Li, S. J.; Sun, B. B.; Sui, M. L.; Yang, R.

    2007-05-25

    We report a ductile β-type titanium alloy with body-centered cubic (bcc) crystal structure having a low Poisson's ratio of 0.14. The almost identical ultralow bulk and shear moduli of ≈24 GPa combined with an ultrahigh strength of ≈0.9 GPa contribute to easy crystal distortion due to much-weakened chemical bonding of atoms in the crystal, leading to significant elastic softening in tension and elastic hardening in compression. The peculiar elastic and plastic deformation behaviors of the alloy are interpreted as a result of approaching the elastic limit of the bcc crystal under applied stress.

  17. Zero-inflated regression models for radiation-induced chromosome aberration data: A comparative study.

    PubMed

    Oliveira, María; Einbeck, Jochen; Higueras, Manuel; Ainsbury, Elizabeth; Puig, Pedro; Rothkamm, Kai

    2016-03-01

    Within the field of cytogenetic biodosimetry, Poisson regression is the classical approach for modeling the number of chromosome aberrations as a function of radiation dose. However, it is common to find data that exhibit overdispersion. In practice, the assumption of equidispersion may be violated due to unobserved heterogeneity in the cell population, which will render the variance of observed aberration counts larger than their mean, and/or the frequency of zero counts greater than expected for the Poisson distribution. This phenomenon is observable for both full- and partial-body exposure, but more pronounced for the latter. In this work, different methodologies for analyzing cytogenetic chromosomal aberrations datasets are compared, with special focus on zero-inflated Poisson and zero-inflated negative binomial models. A score test for testing for zero inflation in Poisson regression models under the identity link is also developed. PMID:26461836
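
    A generic zero-inflated Poisson fit of the kind compared in the paper can be sketched with statsmodels on synthetic dose-response data (this uses the default log link, not the identity-link score test developed by the authors):

      import numpy as np
      import statsmodels.api as sm
      from statsmodels.discrete.count_model import ZeroInflatedPoisson

      rng = np.random.default_rng(1)
      n = 1000
      dose = rng.uniform(0, 4, n)                  # hypothetical doses, Gy
      lam = np.exp(-1.0 + 0.8 * dose)              # aberration rate
      extra_zero = rng.random(n) < 0.3             # e.g. partial-body exposure
      y = np.where(extra_zero, 0, rng.poisson(lam))

      fit = ZeroInflatedPoisson(y, sm.add_constant(dose),
                                exog_infl=np.ones((n, 1)),
                                inflation="logit").fit(disp=False)
      print(fit.summary())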

  18. A technique for determining the Poisson's ratio of thin films

    SciTech Connect

    Krulevitch, P.

    1996-04-18

    The theory and experimental approach for a new technique used to determine the Poisson's ratio of thin films are presented. The method involves taking the ratio of curvatures of cantilever beams and plates micromachined out of the film of interest. Curvature is induced by a through-thickness variation in residual stress, or by depositing a thin film under residual stress onto the beams and plates. This approach is made practical by the fact that the two curvatures are the only required experimental parameters, and small calibration errors cancel when the ratio is taken. To confirm the accuracy of the technique, it was tested on a 2.5 μm thick film of single crystal silicon. Micromachined beams 1 mm long by 100 μm wide and plates 700 μm by 700 μm were coated with 35 nm of gold and the curvatures were measured with a scanning optical profilometer. For the orientation tested ([100] film normal, [011] beam axis, [01̄1] contraction direction) silicon's Poisson's ratio is 0.064, and the measured result was 0.066 ± 0.043. The uncertainty in this technique is due primarily to variation in the measured curvatures, and should range from ±0.02 to ±0.04 with proper measurement technique.

  19. A Poisson model for random multigraphs

    PubMed Central

    Ranola, John M. O.; Ahn, Sangtae; Sehl, Mary; Smith, Desmond J.; Lange, Kenneth

    2010-01-01

    Motivation: Biological networks are often modeled by random graphs. A better modeling vehicle is a multigraph where each pair of nodes is connected by a Poisson number of edges. In the current model, the mean number of edges equals the product of two propensities, one for each node. In this context it is possible to construct a simple and effective algorithm for rapid maximum likelihood estimation of all propensities. Given estimated propensities, it is then possible to test statistically for functionally connected nodes that show an excess of observed edges over expected edges. The model extends readily to directed multigraphs. Here, propensities are replaced by outgoing and incoming propensities. Results: The theory is applied to real data on neuronal connections, interacting genes in radiation hybrids, interacting proteins in a literature-curated database, and letter and word pairs in seven Shakespearean plays. Availability: All data used are fully available online from their respective sites. Source code and software are available from http://code.google.com/p/poisson-multigraph/ Contact: klange@ucla.edu Supplementary information: Supplementary data are available at Bioinformatics online. PMID:20554690
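
    Our reading of the undirected model, X_ij ~ Poisson(p_i p_j), suggests a simple fixed-point iteration for the maximum likelihood propensities (a sketch from the stated model, not the authors' released code): setting the score to zero gives p_i = (sum_j x_ij) / (sum over j != i of p_j).

      import numpy as np

      def fit_propensities(X, iters=200, tol=1e-10):
          X = np.asarray(X, dtype=float)
          row = X.sum(axis=1)                  # sum_j x_ij (zero diagonal)
          p = np.full(X.shape[0], max(np.sqrt(row.mean()), 1e-9))
          for _ in range(iters):
              new = row / (p.sum() - p)        # exclude self-pairing
              converged = np.max(np.abs(new - p)) < tol
              p = new
              if converged:
                  break
          return p

      X = np.array([[0, 3, 1, 2],              # toy symmetric multigraph
                    [3, 0, 2, 4],
                    [1, 2, 0, 1],
                    [2, 4, 1, 0]])
      print(fit_propensities(X))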

  20. An empirical Bayesian and Buhlmann approach with non-homogenous Poisson process

    NASA Astrophysics Data System (ADS)

    Noviyanti, Lienda

    2015-12-01

    All general insurance companies in Indonesia have to adjust their current premium rates according to the maximum and minimum limit rates in the new regulation established by the Financial Services Authority (Otoritas Jasa Keuangan / OJK). In this research, we estimated premium rates by means of the Bayesian and the Buhlmann approach using historical claim frequency and claim severity in five risk groups. We assumed a Poisson distributed claim frequency and a Normal distributed claim severity. In particular, we used a non-homogeneous Poisson process for estimating the parameters of claim frequency. We found that the estimated premium rates are higher than the actual current rate. With regard to the OJK upper and lower limit rates, the estimates among the five risk groups vary; some are within the interval and some are outside it.

  1. Adjustable microforceps.

    PubMed

    Bao, J Y

    1991-04-01

    The commonly used microforceps have a much greater opening distance and spring resistance than needed. A piece of plastic ring or rubber band can be used to adjust the opening distance and reduce most of the spring resistance, making the user feel more comfortable and less fatigued. PMID:2051437

  2. DG Poisson algebra and its universal enveloping algebra

    NASA Astrophysics Data System (ADS)

    Lü, JiaFeng; Wang, XingTing; Zhuang, GuangBin

    2016-05-01

    In this paper, we introduce the notions of differential graded (DG) Poisson algebra and DG Poisson module. Let $A$ be any DG Poisson algebra. We construct the universal enveloping algebra of $A$ explicitly, which is denoted by $A^{ue}$. We show that $A^{ue}$ has a natural DG algebra structure and it satisfies certain universal property. As a consequence of the universal property, it is proved that the category of DG Poisson modules over $A$ is isomorphic to the category of DG modules over $A^{ue}$. Furthermore, we prove that the notion of universal enveloping algebra $A^{ue}$ is well-behaved under opposite algebra and tensor product of DG Poisson algebras. Practical examples of DG Poisson algebras are given throughout the paper including those arising from differential geometry and homological algebra.

  3. Study of non-Hodgkin's lymphoma mortality associated with industrial pollution in Spain, using Poisson models

    PubMed Central

    Ramis, Rebeca; Vidal, Enrique; García-Pérez, Javier; Lope, Virginia; Aragonés, Nuria; Pérez-Gómez, Beatriz; Pollán, Marina; López-Abente, Gonzalo

    2009-01-01

    Background Non-Hodgkin's lymphomas (NHLs) have been linked to proximity to industrial areas, but evidence regarding the health risk posed by residence near pollutant industries is very limited. The European Pollutant Emission Register (EPER) is a public register that furnishes valuable information on industries that release pollutants to air and water, along with their geographical location. This study sought to explore the relationship between NHL mortality in small areas in Spain and environmental exposure to pollutant emissions from EPER-registered industries, using three Poisson-regression-based mathematical models. Methods Observed cases were drawn from mortality registries in Spain for the period 1994–2003. Industries were grouped into the following sectors: energy; metal; mineral; organic chemicals; waste; paper; food; and use of solvents. Populations having an industry within a radius of 1, 1.5, or 2 kilometres from the municipal centroid were deemed to be exposed. Municipalities outside those radii were considered as reference populations. The relative risks (RRs) associated with proximity to pollutant industries were estimated using the following methods: Poisson Regression; mixed Poisson model with random provincial effect; and spatial autoregressive modelling (BYM model). Results Only proximity of paper industries to population centres (>2 km) could be associated with a greater risk of NHL mortality (mixed model: RR:1.24, 95% CI:1.09–1.42; BYM model: RR:1.21, 95% CI:1.01–1.45; Poisson model: RR:1.16, 95% CI:1.06–1.27). Spatial models yielded higher estimates. Conclusion The reported association between exposure to air pollution from the paper, pulp and board industry and NHL mortality is independent of the model used. Inclusion of spatial random effects terms in the risk estimate improves the study of associations between environmental exposures and mortality. The EPER could be of great utility when studying the effects of industrial pollution
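
    The base Poisson model in studies of this kind regresses observed deaths on exposure with the log of expected deaths as an offset, so the exponentiated exposure coefficient is the RR. A hedged sketch (the file and column names are our assumptions; the mixed and BYM variants are not shown):

      import numpy as np
      import pandas as pd
      import statsmodels.api as sm

      df = pd.read_csv("municipalities.csv")           # hypothetical data
      X = sm.add_constant(df[["near_paper_industry"]]) # 0/1 exposure flag
      fit = sm.GLM(df["observed"], X,
                   offset=np.log(df["expected"]),
                   family=sm.families.Poisson()).fit()
      print(np.exp(fit.params["near_paper_industry"]))          # RR
      print(np.exp(fit.conf_int().loc["near_paper_industry"]))  # 95% CI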

  4. Anisotropy of Poisson's Ratio in Transversely Isotropic Rocks

    NASA Astrophysics Data System (ADS)

    Tokmakova, S. P.

    2008-06-01

    The Poisson's ratio of shales with different clay mineralogy and porosity and for many shale rocks around the world including brine-saturated Africa shales and sands, North Sea shales, gas- and brine-saturated Canadian carbonates were estimated from the values of Thomsen's parameters. Anisotropy of Poisson's ratio for a set of TI samples with "normal" and "anomalous" polarization", with "normal" values of Poisson's ratio and auxetic were calculated.

  5. Observational Studies: Matching or Regression?

    PubMed

    Brazauskas, Ruta; Logan, Brent R

    2016-03-01

    In observational studies with an aim of assessing treatment effect or comparing groups of patients, several approaches could be used. Often, baseline characteristics of patients may be imbalanced between groups, and adjustments are needed to account for this. It can be accomplished either via appropriate regression modeling or, alternatively, by conducting a matched pairs study. The latter is often chosen because it makes groups appear to be comparable. In this article we considered these 2 options in terms of their ability to detect a treatment effect in time-to-event studies. Our investigation shows that a Cox regression model applied to the entire cohort is often a more powerful tool in detecting treatment effect as compared with a matched study. Real data from a hematopoietic cell transplantation study is used as an example. PMID:26712591
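
    The regression option discussed here, a Cox model fitted to the full cohort with adjustment covariates, can be sketched with the lifelines package (the data frame and column names are assumptions):

      import pandas as pd
      from lifelines import CoxPHFitter

      df = pd.read_csv("transplant.csv")   # time, event, treatment, age, ...
      cph = CoxPHFitter()
      cph.fit(df, duration_col="time", event_col="event")
      cph.print_summary()                  # hazard ratio for "treatment"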

  6. Shaft adjuster

    DOEpatents

    Harry, Herbert H.

    1989-01-01

    Apparatus and method for the adjustment and alignment of shafts in high power devices. A plurality of adjacent rotatable angled cylinders are positioned between a base and the shaft to be aligned which when rotated introduce an axial offset. The apparatus is electrically conductive and constructed of a structurally rigid material. The angled cylinders allow the shaft such as the center conductor in a pulse line machine to be offset in any desired alignment position within the range of the apparatus.

  7. Stochastic search with Poisson and deterministic resetting

    NASA Astrophysics Data System (ADS)

    Bhat, Uttam; De Bacco, Caterina; Redner, S.

    2016-08-01

    We investigate a stochastic search process in one, two, and three dimensions in which N diffusing searchers that all start at x_0 seek a target at the origin. Each of the searchers is also reset to its starting point, either with rate r, or deterministically, with a reset time T. In one dimension and for a small number of searchers, the search time and the search cost are minimized at a non-zero optimal reset rate (or time), while for sufficiently large N, resetting always hinders the search. In general, a single searcher leads to the minimum search cost in one, two, and three dimensions. When the resetting is deterministic, several unexpected features arise for N searchers, including the search time being independent of T as 1/T → 0 and the search cost being independent of N over a suitable range of N. Moreover, deterministic resetting typically leads to a lower search cost than in Poisson resetting.
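
    The Poisson-resetting protocol is easy to simulate. The Monte Carlo sketch below (our illustration, one searcher in one dimension) estimates the mean time for a diffusing searcher started at x_0 to reach the origin while resetting at rate r; scanning r exhibits the non-zero optimum described above.

      import numpy as np

      def mean_search_time(x0=1.0, r=1.0, D=0.5, dt=1e-3, trials=200, seed=2):
          rng = np.random.default_rng(seed)
          times = []
          for _ in range(trials):
              x, t = x0, 0.0
              while x > 0:                      # absorbing target at origin
                  if rng.random() < r * dt:     # Poisson reset event
                      x = x0
                  else:
                      x += rng.normal(0.0, np.sqrt(2 * D * dt))
                  t += dt
              times.append(t)
          return np.mean(times)

      for r in (0.5, 1.0, 2.0, 4.0):
          print(r, mean_search_time(r=r))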

  8. Surface reconstruction through poisson disk sampling.

    PubMed

    Hou, Wenguang; Xu, Zekai; Qin, Nannan; Xiong, Dongping; Ding, Mingyue

    2015-01-01

    This paper intends to generate the approximate Voronoi diagram in the geodesic metric for some unbiased samples selected from the original points. The mesh model of the seeds is then constructed on the basis of the Voronoi diagram. Rather than constructing the Voronoi diagram for all original points, the proposed strategy circumvents the obstacle that geodesic distances among neighboring points are sensitive to the definition of nearest neighbors. The reconstructed model is thus a level-of-detail representation of the original points. Hence, our main motivation is to deal with redundant scattered points. In implementation, Poisson disk sampling is used to select seeds and helps to produce the Voronoi diagram. Adaptive reconstructions can be achieved by slightly changing the uniform strategy in selecting seeds. Behaviors of this method are investigated and accuracy evaluations are done. Experimental results show the proposed method is reliable and effective. PMID:25915744
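
    The seed-selection step can be imitated with a greedy "dart throwing" pass over the point cloud, rejecting any candidate within the disk radius of an accepted seed (our sketch; it uses the Euclidean metric, whereas the paper works with geodesic distances on the surface):

      import numpy as np
      from scipy.spatial import cKDTree

      def poisson_disk_subsample(points, min_dist, seed=0):
          rng = np.random.default_rng(seed)
          tree = cKDTree(points)
          taken = np.zeros(len(points), dtype=bool)
          selected = []
          for i in rng.permutation(len(points)):
              if not taken[i]:
                  selected.append(i)
                  taken[tree.query_ball_point(points[i], min_dist)] = True
          return np.asarray(selected)

      pts = np.random.default_rng(3).random((5000, 3))  # toy point cloud
      print(len(poisson_disk_subsample(pts, min_dist=0.08)))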

  9. Periodic Poisson model for beam dynamics simulation

    NASA Astrophysics Data System (ADS)

    Dohlus, M.; Henning, Ch.

    2016-03-01

    A method is described to solve the Poisson problem for a three dimensional source distribution that is periodic into one direction. Perpendicular to the direction of periodicity a free space (or open) boundary condition is realized. In beam physics, this approach allows us to calculate the space charge field of a continualized charged particle distribution with periodic pattern. The method is based on a particle-mesh approach with equidistant grid and fast convolution with a Green's function. The periodic approach uses only one period of the source distribution, but a periodic extension of the Green's function. The approach is numerically efficient and allows the investigation of periodic- and pseudoperiodic structures with period lengths that are small compared to the source dimensions, for instance of laser modulated beams or of the evolution of micro bunch structures. Applications for laser modulated beams are given.

  10. Efficient information transfer by Poisson neurons.

    PubMed

    Kostal, Lubomir; Shinomoto, Shigeru

    2016-06-01

    Recently, it has been suggested that certain neurons with Poissonian spiking statistics may communicate by discontinuously switching between two levels of firing intensity. Such a situation resembles in many ways the optimal information transmission protocol for the continuous-time Poisson channel known from information theory. In this contribution we employ the classical information-theoretic results to analyze the efficiency of such a transmission from different perspectives, emphasising the neurobiological viewpoint. We address both the ultimate limits, in terms of the information capacity under metabolic cost constraints, and the achievable bounds on performance at rates below capacity with fixed decoding error probability. In doing so we discuss optimal values of experimentally measurable quantities that can be compared with the actual neuronal recordings in a future effort. PMID:27106184

  12. Multiple linear regression.

    PubMed

    Eberly, Lynn E

    2007-01-01

    This chapter describes multiple linear regression, a statistical approach used to describe the simultaneous associations of several variables with one continuous outcome. Important steps in using this approach include estimation and inference, variable selection in model building, and assessing model fit. The special cases of regression with interactions among the variables, polynomial regression, regressions with categorical (grouping) variables, and separate slopes models are also covered. Examples in microbiology are used throughout. PMID:18450050

  13. NCCS Regression Test Harness

    2015-09-09

    The NCCS Regression Test Harness is a software package that provides a framework to perform regression and acceptance testing on NCCS High Performance Computers. The package is written in Python, and its only dependency is a Subversion repository used to store the regression tests.

  14. Orthogonal Regression and Equivariance.

    ERIC Educational Resources Information Center

    Blankmeyer, Eric

    Ordinary least-squares regression treats the variables asymmetrically, designating a dependent variable and one or more independent variables. When it is not obvious how to make this distinction, a researcher may prefer to use orthogonal regression, which treats the variables symmetrically. However, the usual procedure for orthogonal regression is…
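
    The symmetric treatment can be sketched as total least squares: fit the line minimizing perpendicular distances via the SVD of the centered data (a standard construction, offered here as an illustration):

      import numpy as np

      rng = np.random.default_rng(4)
      x = rng.normal(size=200)
      y = 2.0 * x + rng.normal(scale=0.5, size=200)

      Z = np.column_stack([x, y]) - [x.mean(), y.mean()]
      _, _, Vt = np.linalg.svd(Z, full_matrices=False)
      a, b = Vt[-1]            # normal vector of the orthogonal-fit line
      print(-a / b)            # slope; compare with np.polyfit(x, y, 1)[0]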

  15. Unitary Response Regression Models

    ERIC Educational Resources Information Center

    Lipovetsky, S.

    2007-01-01

    The dependent variable in a regular linear regression is a numerical variable, and in a logistic regression it is a binary or categorical variable. In these models the dependent variable has varying values. However, there are problems yielding an identity output of a constant value which can also be modelled in a linear or logistic regression with…

  16. Ratios as a size adjustment in morphometrics.

    PubMed

    Albrecht, G H; Gelvin, B R; Hartman, S E

    1993-08-01

    Simple ratios in which a measurement variable is divided by a size variable are commonly used but known to be inadequate for eliminating size correlations from morphometric data. Deficiencies in the simple ratio can be alleviated by incorporating regression coefficients describing the bivariate relationship between the measurement and size variables. Recommendations have included: 1) subtracting the regression intercept to force the bivariate relationship through the origin (intercept-adjusted ratios); 2) exponentiating either the measurement or the size variable using an allometry coefficient to achieve linearity (allometrically adjusted ratios); or 3) both subtracting the intercept and exponentiating (fully adjusted ratios). These three strategies for deriving size-adjusted ratios imply different data models for describing the bivariate relationship between the measurement and size variables (i.e., the linear, simple allometric, and full allometric models, respectively). Algebraic rearrangement of the equation associated with each data model leads to a correctly formulated adjusted ratio whose expected value is constant (i.e., size correlation is eliminated). Alternatively, simple algebra can be used to derive an expected value function for assessing whether any proposed ratio formula is effective in eliminating size correlations. Some published ratio adjustments were incorrectly formulated as indicated by expected values that remain a function of size after ratio transformation. Regression coefficients incorporated into adjusted ratios must be estimated using least-squares regression of the measurement variable on the size variable. Use of parameters estimated by any other regression technique (e.g., major axis or reduced major axis) results in residual correlations between size and the adjusted measurement variable. Correctly formulated adjusted ratios, whose parameters are estimated by least-squares methods, do control for size correlations. The size-adjusted
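
    The intercept adjustment is easy to check numerically. In the sketch below (simulated data following the linear data model), the simple ratio m/s stays correlated with size because the intercept is nonzero, while the intercept-adjusted ratio (m - a)/s, with a estimated by least squares of the measurement on size, does not.

        import numpy as np

        rng = np.random.default_rng(4)
        size = rng.uniform(10, 50, 300)
        measure = 4.0 + 0.6 * size + rng.normal(0, 1.0, 300)   # m = a + b*s + noise

        a, b = np.polynomial.polynomial.polyfit(size, measure, 1)  # OLS of m on s
        simple = measure / size
        adjusted = (measure - a) / size

        print(np.corrcoef(size, simple)[0, 1])     # residual size correlation
        print(np.corrcoef(size, adjusted)[0, 1])   # approximately zero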

  17. Weighted Hurdle Regression Method for Joint Modeling of Cardiovascular Events Likelihood and Rate in the U.S. Dialysis Population

    PubMed Central

    Şentürk, Damla; Dalrymple, Lorien S.; Mu, Yi; Nguyen, Danh V.

    2014-01-01

    We propose a new weighted hurdle regression method for modeling count data, with particular interest in modeling cardiovascular events in patients on dialysis. Cardiovascular disease remains one of the leading causes of hospitalization and death in this population. Our aim is to jointly model the relationship/association between covariates and (a) the probability of cardiovascular events, a binary process and (b) the rate of events once the realization is positive - when the ‘hurdle’ is crossed - using a zero-truncated Poisson distribution. When the observation period or follow-up time, from the start of dialysis, varies among individuals, the estimated probability of positive cardiovascular events during the study period will be biased. Furthermore, when the model contains covariates, then the estimated relationship between the covariates and the probability of cardiovascular events will also be biased. These challenges are addressed with the proposed weighted hurdle regression method. Estimation for the weighted hurdle regression model is a weighted likelihood approach, where standard maximum likelihood estimation can be utilized. The method is illustrated with data from the United States Renal Data System. Simulation studies show the ability of the proposed method to successfully adjust for differential follow-up times and incorporate the effects of covariates in the weighting. PMID:24930810

  18. Weighted hurdle regression method for joint modeling of cardiovascular events likelihood and rate in the US dialysis population.

    PubMed

    Sentürk, Damla; Dalrymple, Lorien S; Mu, Yi; Nguyen, Danh V

    2014-11-10

    We propose a new weighted hurdle regression method for modeling count data, with particular interest in modeling cardiovascular events in patients on dialysis. Cardiovascular disease remains one of the leading causes of hospitalization and death in this population. Our aim is to jointly model the relationship/association between covariates and (i) the probability of cardiovascular events, a binary process, and (ii) the rate of events once the realization is positive - when the 'hurdle' is crossed - using a zero-truncated Poisson distribution. When the observation period or follow-up time, from the start of dialysis, varies among individuals, the estimated probability of positive cardiovascular events during the study period will be biased. Furthermore, when the model contains covariates, then the estimated relationship between the covariates and the probability of cardiovascular events will also be biased. These challenges are addressed with the proposed weighted hurdle regression method. Estimation for the weighted hurdle regression model is a weighted likelihood approach, where standard maximum likelihood estimation can be utilized. The method is illustrated with data from the United States Renal Data System. Simulation studies show the ability of the proposed method to successfully adjust for differential follow-up times and incorporate the effects of covariates in the weighting. PMID:24930810
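
    A bare-bones, unweighted hurdle-model sketch: a logistic model for whether any event occurs, plus a zero-truncated Poisson likelihood maximized over the positive counts. The differential-follow-up weighting that is the point of the paper is deliberately omitted, and the simulated data are illustrative only.

        import numpy as np
        from scipy.optimize import minimize
        from scipy.special import gammaln, expit
        import statsmodels.api as sm

        rng = np.random.default_rng(5)
        n = 1000
        x = rng.normal(size=n)
        X = np.column_stack([np.ones(n), x])
        any_event = rng.random(n) < expit(-0.5 + 0.8 * x)
        lam = np.exp(0.3 + 0.5 * x)
        counts = rng.poisson(lam)
        while np.any(counts == 0):              # redraw zeros -> zero-truncated
            redo = counts == 0
            counts[redo] = rng.poisson(lam[redo])
        y = np.where(any_event, counts, 0)

        hurdle_part = sm.Logit(any_event.astype(float), X).fit(disp=0)

        def ztp_negloglik(beta, X, y):          # zero-truncated Poisson, y >= 1
            lam = np.exp(X @ beta)
            return -np.sum(y * np.log(lam) - lam - gammaln(y + 1)
                           - np.log1p(-np.exp(-lam)))

        pos = y > 0
        count_part = minimize(ztp_negloglik, np.zeros(2), args=(X[pos], y[pos]))
        print(hurdle_part.params, count_part.x)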

  19. Solves Poisson's Equation in Axisymmetric Geometry on a Rectangular Mesh

    1996-09-10

    DATHETA4.0 computes the magnetostatic field produced by multiple point current sources in the presence of perfect conductors in axisymmetric geometry. DATHETA4.0 has an interactive user interface and solves Poisson's equation using the ADI method on a rectangular finite-difference mesh. DATHETA4.0 includes models specific to applied-B ion diodes.

  20. Deformation mechanisms in negative Poisson's ratio materials - Structural aspects

    NASA Technical Reports Server (NTRS)

    Lakes, R.

    1991-01-01

    Poisson's ratio in materials is governed by the following aspects of the microstructure: the presence of rotational degrees of freedom, non-affine deformation kinematics, or anisotropic structure. Several structural models are examined. The non-affine kinematics are seen to be essential for the production of negative Poisson's ratios for isotropic materials containing central force linkages of positive stiffness. Non-central forces combined with pre-load can also give rise to a negative Poisson's ratio in isotropic materials. A chiral microstructure with non-central force interaction or non-affine deformation can also exhibit a negative Poisson's ratio. Toughness and damage resistance in these materials may be affected by the Poisson's ratio itself, as well as by generalized continuum aspects associated with the microstructure.

  1. A Local Poisson Graphical Model for inferring networks from sequencing data.

    PubMed

    Allen, Genevera I; Liu, Zhandong

    2013-09-01

    Gaussian graphical models, a class of undirected graphs or Markov Networks, are often used to infer gene networks based on microarray expression data. Many scientists, however, have begun using high-throughput sequencing technologies such as RNA-sequencing or next generation sequencing to measure gene expression. As the resulting data consists of counts of sequencing reads for each gene, Gaussian graphical models are not optimal for this discrete data. In this paper, we propose a novel method for inferring gene networks from sequencing data: the Local Poisson Graphical Model. Our model assumes a Local Markov property where each variable conditional on all other variables is Poisson distributed. We develop a neighborhood selection algorithm to fit our model locally by performing a series of l1 penalized Poisson, or log-linear, regressions. This yields a fast parallel algorithm for estimating networks from next generation sequencing data. In simulations, we illustrate the effectiveness of our methods for recovering network structure from count data. A case study on breast cancer microRNAs (miRNAs), a novel application of graphical models, finds known regulators of breast cancer genes and discovers novel miRNA clusters and hubs that are targets for future research. PMID:23955777
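
    The neighborhood-selection loop can be sketched directly with statsmodels' elastic-net-regularized GLM: each gene's counts are regressed on all other genes with a pure l1 penalty, and nonzero coefficients define edges. The count matrix and penalty weight below are arbitrary stand-ins, not the paper's data or tuning.

        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(6)
        counts = rng.poisson(5, size=(200, 6))       # stand-in read-count matrix

        edges = set()
        for j in range(counts.shape[1]):
            y = counts[:, j]
            X = sm.add_constant(np.delete(counts, j, axis=1))
            fit = sm.GLM(y, X, family=sm.families.Poisson()).fit_regularized(
                alpha=0.05, L1_wt=1.0)               # lasso-penalized log-linear fit
            others = [k for k in range(counts.shape[1]) if k != j]
            for coef, k in zip(fit.params[1:], others):
                if abs(coef) > 1e-8:
                    edges.add(tuple(sorted((j, k))))  # symmetrize neighborhoods
        print(edges)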

  2. Poisson-Boltzmann-Nernst-Planck model

    NASA Astrophysics Data System (ADS)

    Zheng, Qiong; Wei, Guo-Wei

    2011-05-01

    The Poisson-Nernst-Planck (PNP) model is based on a mean-field approximation of ion interactions and continuum descriptions of concentration and electrostatic potential. It provides qualitative explanation and increasingly quantitative predictions of experimental measurements for the ion transport problems in many areas such as semiconductor devices, nanofluidic systems, and biological systems, despite many limitations. While the PNP model gives a good prediction of the ion transport phenomenon for chemical, physical, and biological systems, the number of equations to be solved and the number of diffusion coefficient profiles to be determined for the calculation directly depend on the number of ion species in the system, since each ion species corresponds to one Nernst-Planck equation and one position-dependent diffusion coefficient profile. In a complex system with multiple ion species, the PNP can be computationally expensive and parameter demanding, as experimental measurements of diffusion coefficient profiles are generally quite limited for most confined regions such as ion channels, nanostructures and nanopores. We propose an alternative model to reduce the number of Nernst-Planck equations to be solved in complex chemical and biological systems with multiple ion species by substituting Nernst-Planck equations with Boltzmann distributions of ion concentrations. As such, we solve the coupled Poisson-Boltzmann and Nernst-Planck (PBNP) equations, instead of the PNP equations. The proposed PBNP equations are derived from a total energy functional by using the variational principle. We design a number of computational techniques, including the Dirichlet to Neumann mapping, the matched interface and boundary method, and a relaxation-based iterative procedure, to ensure efficient solution of the proposed PBNP equations. Two protein molecules, cytochrome c551 and Gramicidin A, are employed to validate the proposed model under a wide range of bulk ion concentrations and external

  3. Poisson-Boltzmann-Nernst-Planck model.

    PubMed

    Zheng, Qiong; Wei, Guo-Wei

    2011-05-21

    The Poisson-Nernst-Planck (PNP) model is based on a mean-field approximation of ion interactions and continuum descriptions of concentration and electrostatic potential. It provides qualitative explanation and increasingly quantitative predictions of experimental measurements for the ion transport problems in many areas such as semiconductor devices, nanofluidic systems, and biological systems, despite many limitations. While the PNP model gives a good prediction of the ion transport phenomenon for chemical, physical, and biological systems, the number of equations to be solved and the number of diffusion coefficient profiles to be determined for the calculation directly depend on the number of ion species in the system, since each ion species corresponds to one Nernst-Planck equation and one position-dependent diffusion coefficient profile. In a complex system with multiple ion species, the PNP can be computationally expensive and parameter demanding, as experimental measurements of diffusion coefficient profiles are generally quite limited for most confined regions such as ion channels, nanostructures and nanopores. We propose an alternative model to reduce the number of Nernst-Planck equations to be solved in complex chemical and biological systems with multiple ion species by substituting Nernst-Planck equations with Boltzmann distributions of ion concentrations. As such, we solve the coupled Poisson-Boltzmann and Nernst-Planck (PBNP) equations, instead of the PNP equations. The proposed PBNP equations are derived from a total energy functional by using the variational principle. We design a number of computational techniques, including the Dirichlet to Neumann mapping, the matched interface and boundary method, and a relaxation-based iterative procedure, to ensure efficient solution of the proposed PBNP equations. Two protein molecules, cytochrome c551 and Gramicidin A, are employed to validate the proposed model under a wide range of bulk ion concentrations and external

  4. Poisson-Boltzmann-Nernst-Planck model

    SciTech Connect

    Zheng Qiong; Wei Guowei

    2011-05-21

    The Poisson-Nernst-Planck (PNP) model is based on a mean-field approximation of ion interactions and continuum descriptions of concentration and electrostatic potential. It provides qualitative explanation and increasingly quantitative predictions of experimental measurements for the ion transport problems in many areas such as semiconductor devices, nanofluidic systems, and biological systems, despite many limitations. While the PNP model gives a good prediction of the ion transport phenomenon for chemical, physical, and biological systems, the number of equations to be solved and the number of diffusion coefficient profiles to be determined for the calculation directly depend on the number of ion species in the system, since each ion species corresponds to one Nernst-Planck equation and one position-dependent diffusion coefficient profile. In a complex system with multiple ion species, the PNP can be computationally expensive and parameter demanding, as experimental measurements of diffusion coefficient profiles are generally quite limited for most confined regions such as ion channels, nanostructures and nanopores. We propose an alternative model to reduce the number of Nernst-Planck equations to be solved in complex chemical and biological systems with multiple ion species by substituting Nernst-Planck equations with Boltzmann distributions of ion concentrations. As such, we solve the coupled Poisson-Boltzmann and Nernst-Planck (PBNP) equations, instead of the PNP equations. The proposed PBNP equations are derived from a total energy functional by using the variational principle. We design a number of computational techniques, including the Dirichlet to Neumann mapping, the matched interface and boundary method, and a relaxation-based iterative procedure, to ensure efficient solution of the proposed PBNP equations. Two protein molecules, cytochrome c551 and Gramicidin A, are employed to validate the proposed model under a wide range of bulk ion concentrations and external

  5. Generalized HPC method for the Poisson equation

    NASA Astrophysics Data System (ADS)

    Bardazzi, A.; Lugni, C.; Antuono, M.; Graziani, G.; Faltinsen, O. M.

    2015-10-01

    An efficient and innovative numerical algorithm based on the use of Harmonic Polynomials on each Cell of the computational domain (HPC method) has recently been proposed by Shao and Faltinsen (2014) [1] to solve boundary value problems governed by the Laplace equation. Here, we extend the HPC method to the solution of non-homogeneous elliptic boundary value problems. The homogeneous part, i.e. the solution of the Laplace equation, is represented through harmonic polynomials, while a particular solution of the Poisson equation is provided by a bi-quadratic function. This scheme is called the generalized HPC method. The present algorithm, accurate up to 4th order, proved to be efficient, i.e. easy to implement and computationally cheap, for the solution of two-dimensional elliptic boundary value problems. Furthermore, it provides an analytical representation of the solution within each computational stencil, which allows its coupling with existing numerical algorithms within an efficient domain-decomposition strategy or within an adaptive mesh refinement algorithm.

  6. Harmonic regression and scale stability.

    PubMed

    Lee, Yi-Hsuan; Haberman, Shelby J

    2013-10-01

    Monitoring a very frequently administered educational test with a relatively short history of stable operation imposes a number of challenges. Test scores usually vary by season, and the frequency of administration of such educational tests is also seasonal. Although it is important to react to unreasonable changes in the distributions of test scores in a timely fashion, it is not a simple matter to ascertain what sort of distribution is really unusual. Many commonly used approaches for seasonal adjustment are designed for time series with evenly spaced observations that span many years and, therefore, are inappropriate for data from such educational tests. Harmonic regression, a seasonal-adjustment method, can be useful in monitoring scale stability when the number of years available is limited and when the observations are unevenly spaced. Additional forms of adjustments can be included to account for variability in test scores due to different sources of population variations. To illustrate, real data are considered from an international language assessment. PMID:24092490
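
    The idea reduces to ordinary least squares on sine/cosine terms of the seasonal cycle, which is well defined even for unevenly spaced administrations. A minimal sketch with one annual harmonic and simulated scores, not the assessment data of the paper:

        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(7)
        t = np.sort(rng.uniform(0, 3, 150))       # administration times, years
        score = 70 + 2 * np.sin(2 * np.pi * t) + rng.normal(0, 1, 150)

        X = sm.add_constant(np.column_stack([np.sin(2 * np.pi * t),
                                             np.cos(2 * np.pi * t)]))
        fit = sm.OLS(score, X).fit()
        deseasonalized = score - fit.fittedvalues + fit.params[0]
        print(fit.params)                         # level and seasonal amplitudes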

  7. Periodicity characterization of orbital prediction error and Poisson series fitting

    NASA Astrophysics Data System (ADS)

    Bai, Xian-Zong; Chen, Lei; Tang, Guo-Jin

    2012-09-01

    Publicly available Two-Line Element Sets (TLE) contain no associated error or accuracy information. The historical-data-based method is a feasible choice for those objects for which only TLE data are available. Most current TLE error analysis methods use polynomial fitting, which cannot represent the periodic characteristics. This paper presents a methodology for periodicity characterization and Poisson series fitting of orbital prediction error based on historical orbital data. As an error-fitting function, the Poisson series can describe the variation of error with respect to propagation duration and the on-orbit position of objects. The Poisson coefficient matrices of each error component are fitted using the least squares method. Effects of polynomial terms, trigonometric terms, and mixed terms of the Poisson series are discussed. Substituting the time difference and mean anomaly into the Poisson series, one can obtain the error information at a specific time. Four satellites (Cosmos-2251, GPS-62, SLOSHSAT, TelStar-10) from four orbital types (LEO, MEO, HEO, GEO, respectively) were selected as examples to demonstrate and validate the method. The results indicated that periodic characteristics exist in all three components for the four objects, especially HEO and MEO. The periodicity characterization and Poisson series fitting can improve the accuracy of orbit covariance information. The Poisson series is a general form for describing orbital prediction error; the commonly used polynomial fitting is a special case of Poisson series fitting. The Poisson coefficient matrices can be obtained before close approach analysis. This method does not require any knowledge about how the state vectors are generated, so it can handle not only TLE data but also other orbit models and elements.
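
    A Poisson series in this sense mixes a polynomial in propagation time t with trigonometric harmonics of the mean anomaly M, and the coefficients then follow from ordinary least squares. The sketch below uses invented degrees, harmonics, and error data, not the satellites analyzed in the paper.

        import numpy as np

        def poisson_design(t, M, deg=2, harmonics=2):
            cols = []
            for i in range(deg + 1):
                cols.append(t ** i)                       # pure polynomial terms
                for j in range(1, harmonics + 1):
                    cols.append(t ** i * np.cos(j * M))   # mixed terms
                    cols.append(t ** i * np.sin(j * M))
            return np.column_stack(cols)

        rng = np.random.default_rng(8)
        t = rng.uniform(0, 7, 500)                        # days of propagation
        M = rng.uniform(0, 2 * np.pi, 500)                # mean anomaly, radians
        err = 0.1 * t + 0.5 * np.sin(M) + rng.normal(0, 0.05, 500)

        coef, *_ = np.linalg.lstsq(poisson_design(t, M), err, rcond=None)
        print(coef.round(3))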

  8. Boundary Lax pairs from non-ultra-local Poisson algebras

    SciTech Connect

    Avan, Jean; Doikou, Anastasia

    2009-11-15

    We consider non-ultra-local linear Poisson algebras on a continuous line. Suitable combinations of representations of these algebras yield representations of novel generalized linear Poisson algebras or 'boundary' extensions. They are parametrized by a boundary scalar matrix and depend, in addition, on the choice of an antiautomorphism. The new algebras are the classical-linear counterparts of the known quadratic quantum boundary algebras. For any choice of parameters, the non-ultra-local contribution of the original Poisson algebra disappears. We also systematically construct the associated classical Lax pair. The classical boundary principal chiral model is examined as a physical example.

  9. Poisson Ratio of Epitaxial Germanium Films Grown on Silicon

    NASA Astrophysics Data System (ADS)

    Bharathan, Jayesh; Narayan, Jagdish; Rozgonyi, George; Bulman, Gary E.

    2013-01-01

    An accurate knowledge of elastic constants of thin films is important in understanding the effect of strain on material properties. We have used residual thermal strain to measure the Poisson ratio of Ge films grown on Si ⟨001⟩ substrates, using the sin²ψ method and high-resolution x-ray diffraction. The Poisson ratio of the Ge films was measured to be 0.25, compared with the bulk value of 0.27. Our study indicates that use of Poisson ratio instead of bulk compliance values yields a more accurate description of the state of in-plane strain present in the film.

  10. On classification of discrete, scalar-valued Poisson brackets

    NASA Astrophysics Data System (ADS)

    Parodi, E.

    2012-10-01

    We address the problem of classifying discrete differential-geometric Poisson brackets (dDGPBs) of any fixed order on a target space of dimension 1. We prove that these Poisson brackets (PBs) are in one-to-one correspondence with the intersection points of certain projective hypersurfaces. In addition, they can be reduced to a cubic PB of the standard Volterra lattice by discrete Miura-type transformations. Finally, by improving a lattice consolidation procedure, we obtain new families of non-degenerate, vector-valued and first-order dDGPBs that can be considered in the framework of admissible Lie-Poisson group theory.

  11. Prediction in Multiple Regression.

    ERIC Educational Resources Information Center

    Osborne, Jason W.

    2000-01-01

    Presents the concept of prediction via multiple regression (MR) and discusses the assumptions underlying multiple regression analyses. Also discusses shrinkage, cross-validation, and double cross-validation of prediction equations and describes how to calculate confidence intervals around individual predictions. (SLD)

  12. Improved Regression Calibration

    ERIC Educational Resources Information Center

    Skrondal, Anders; Kuha, Jouni

    2012-01-01

    The likelihood for generalized linear models with covariate measurement error cannot in general be expressed in closed form, which makes maximum likelihood estimation taxing. A popular alternative is regression calibration which is computationally efficient at the cost of inconsistent estimation. We propose an improved regression calibration…

  13. Morse-Smale Regression

    PubMed Central

    Gerber, Samuel; Rübel, Oliver; Bremer, Peer-Timo; Pascucci, Valerio; Whitaker, Ross T.

    2012-01-01

    This paper introduces a novel partition-based regression approach that incorporates topological information. Partition-based regression typically introduces a quality-of-fit-driven decomposition of the domain. The emphasis in this work is on a topologically meaningful segmentation. Thus, the proposed regression approach is based on a segmentation induced by a discrete approximation of the Morse-Smale complex. This yields a segmentation with partitions corresponding to regions of the function with a single minimum and maximum that are often well approximated by a linear model. This approach yields regression models that are amenable to interpretation and have good predictive capacity. Typically, regression estimates are quantified by their geometrical accuracy. For the proposed regression, an important aspect is the quality of the segmentation itself. Thus, this paper introduces a new criterion that measures the topological accuracy of the estimate. The topological accuracy provides a complementary measure to the classical geometrical error measures and is very sensitive to over-fitting. The Morse-Smale regression is compared to state-of-the-art approaches in terms of geometry and topology and yields comparable or improved fits in many cases. Finally, a detailed study on climate-simulation data demonstrates the application of the Morse-Smale regression. Supplementary materials are available online and contain an implementation of the proposed approach in the R package msr, an analysis and simulations on the stability of the Morse-Smale complex approximation and additional tables for the climate-simulation study. PMID:23687424

  14. Morse–Smale Regression

    SciTech Connect

    Gerber, Samuel; Rubel, Oliver; Bremer, Peer -Timo; Pascucci, Valerio; Whitaker, Ross T.

    2012-01-19

    This paper introduces a novel partition-based regression approach that incorporates topological information. Partition-based regression typically introduces a quality-of-fit-driven decomposition of the domain. The emphasis in this work is on a topologically meaningful segmentation. Thus, the proposed regression approach is based on a segmentation induced by a discrete approximation of the Morse–Smale complex. This yields a segmentation with partitions corresponding to regions of the function with a single minimum and maximum that are often well approximated by a linear model. This approach yields regression models that are amenable to interpretation and have good predictive capacity. Typically, regression estimates are quantified by their geometrical accuracy. For the proposed regression, an important aspect is the quality of the segmentation itself. Thus, this article introduces a new criterion that measures the topological accuracy of the estimate. The topological accuracy provides a complementary measure to the classical geometrical error measures and is very sensitive to overfitting. The Morse–Smale regression is compared to state-of-the-art approaches in terms of geometry and topology and yields comparable or improved fits in many cases. Finally, a detailed study on climate-simulation data demonstrates the application of the Morse–Smale regression. Supplementary Materials are available online and contain an implementation of the proposed approach in the R package msr, an analysis and simulations on the stability of the Morse–Smale complex approximation, and additional tables for the climate-simulation study.

  15. Two-part zero-inflated negative binomial regression model for quantitative trait loci mapping with count trait.

    PubMed

    Moghimbeigi, Abbas

    2015-05-01

    Poisson regression models provide a standard framework for quantitative trait locus (QTL) mapping of count traits. In practice, however, count traits are often over-dispersed relative to the Poisson distribution. In these situations, zero-inflated Poisson (ZIP), zero-inflated generalized Poisson (ZIGP) and zero-inflated negative binomial (ZINB) regression may be useful for QTL mapping of count traits. Genetic variables added to the negative binomial part of the model may also affect the extra zeros. In this study, to overcome these challenges, I apply a two-part ZINB model. The EM algorithm, with the Newton-Raphson method in the M-step, is used to estimate the parameters. An application of the two-part ZINB model for QTL mapping is considered, detecting associations between gallstone formation and marker genotypes. PMID:25728790
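
    statsmodels ships a zero-inflated negative binomial model that fits both parts by maximum likelihood (not the EM-with-Newton-Raphson scheme of the paper); a sketch with a simulated binary marker genotype:

        import numpy as np
        import statsmodels.api as sm
        from statsmodels.discrete.count_model import ZeroInflatedNegativeBinomialP

        rng = np.random.default_rng(9)
        n = 500
        g = rng.integers(0, 2, n).astype(float)     # hypothetical marker genotype
        structural_zero = rng.random(n) < 0.3       # excess zeros
        y = np.where(structural_zero, 0, rng.negative_binomial(2, 0.4, n))

        X = sm.add_constant(g)
        fit = ZeroInflatedNegativeBinomialP(y, X, exog_infl=X).fit(maxiter=500,
                                                                   disp=0)
        print(fit.params)    # inflation part, count part, dispersion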

  16. Negative Poisson's ratios for extreme states of matter

    PubMed

    Baughman; Dantas; Stafstrom; Zakhidov; Mitchell; Dubin

    2000-06-16

    Negative Poisson's ratios are predicted for body-centered-cubic phases that likely exist in white dwarf cores and neutron star outer crusts, as well as those found for vacuumlike ion crystals, plasma dust crystals, and colloidal crystals (including certain virus crystals). The existence of this counterintuitive property, which means that a material laterally expands when stretched, is experimentally demonstrated for very low density crystals of trapped ions. At very high densities, the large predicted negative and positive Poisson's ratios might be important for understanding the asteroseismology of neutron stars and white dwarfs and the effect of stellar stresses on nuclear reaction rates. Giant Poisson's ratios are both predicted and observed for highly strained coulombic photonic crystals, suggesting possible applications of large, tunable Poisson's ratios for photonic crystal devices. PMID:10856209

  17. Tuning the Poisson's Ratio of Biomaterials for Investigating Cellular Response

    PubMed Central

    Meggs, Kyle; Qu, Xin; Chen, Shaochen

    2013-01-01

    Cells sense and respond to mechanical forces, regardless of whether the source is from a normal tissue matrix, an adjacent cell or a synthetic substrate. In recent years, cell response to surface rigidity has been extensively studied by modulating the elastic modulus of poly(ethylene glycol) (PEG)-based hydrogels. In the context of biomaterials, Poisson's ratio, another fundamental material property parameter, has not been explored, primarily because of challenges involved in tuning the Poisson's ratio in biological scaffolds. Two-photon polymerization is used to fabricate suspended web structures that exhibit positive and negative Poisson's ratio (NPR), based on analytical models. NPR webs demonstrate biaxial expansion/compression behavior, as one or multiple cells apply local forces and move the structures. Unusual cell division on NPR structures is also demonstrated. This methodology can be used to tune the Poisson's ratio of several photocurable biomaterials and could have potential implications in the field of mechanobiology. PMID:24076754

  18. Modeling laser velocimeter signals as triply stochastic Poisson processes

    NASA Technical Reports Server (NTRS)

    Mayo, W. T., Jr.

    1976-01-01

    Previous models of laser Doppler velocimeter (LDV) systems have not adequately described dual-scatter signals in a manner useful for analysis and simulation of low-level photon-limited signals. At low photon rates, an LDV signal at the output of a photomultiplier tube is a compound nonhomogeneous filtered Poisson process, whose intensity function is another (slower) Poisson process with the nonstationary rate and frequency parameters controlled by a random flow (slowest) process. In the present paper, generalized Poisson shot noise models are developed for low-level LDV signals. Theoretical results useful in detection error analysis and simulation are presented, along with measurements of burst amplitude statistics. Computer generated simulations illustrate the difference between Gaussian and Poisson models of low-level signals.
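
    Simulating photon arrivals from such a model is straightforward with Lewis-Shedler thinning: draw candidate events from a homogeneous Poisson process at the peak rate and accept each with probability rate(t)/rate_max. The Doppler-burst rate below is a made-up example, not a calibrated LDV signal.

        import numpy as np

        def burst_rate(t, f_dopp=50e3, t0=0.5e-3, width=0.2e-3, peak=2e6):
            envelope = np.exp(-0.5 * ((t - t0) / width) ** 2)   # Gaussian pedestal
            return peak * envelope * 0.5 * (1 + np.cos(2 * np.pi * f_dopp * t))

        def thin(rate, rate_max, t_end, rng):
            arrivals, t = [], 0.0
            while True:
                t += rng.exponential(1.0 / rate_max)            # candidate event
                if t > t_end:
                    return np.asarray(arrivals)
                if rng.random() < rate(t) / rate_max:           # thinning step
                    arrivals.append(t)

        photons = thin(burst_rate, rate_max=2e6, t_end=1e-3,
                       rng=np.random.default_rng(10))
        print(len(photons), "photoelectron events")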

  19. Boosted Beta Regression

    PubMed Central

    Schmid, Matthias; Wickler, Florian; Maloney, Kelly O.; Mitchell, Richard; Fenske, Nora; Mayr, Andreas

    2013-01-01

    Regression analysis with a bounded outcome is a common problem in applied statistics. Typical examples include regression models for percentage outcomes and the analysis of ratings that are measured on a bounded scale. In this paper, we consider beta regression, which is a generalization of logit models to situations where the response is continuous on the interval (0,1). Consequently, beta regression is a convenient tool for analyzing percentage responses. The classical approach to fit a beta regression model is to use maximum likelihood estimation with subsequent AIC-based variable selection. As an alternative to this established - yet unstable - approach, we propose a new estimation technique called boosted beta regression. With boosted beta regression, estimation and variable selection can be carried out simultaneously in a highly efficient way. Additionally, both the mean and the variance of a percentage response can be modeled using flexible nonlinear covariate effects. As a consequence, the new method accounts for common problems such as overdispersion and non-binomial variance structures. PMID:23626706
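
    The "classical approach" mentioned above is compact enough to sketch: a logit link for the mean, a constant precision phi, and direct maximization of the beta log-likelihood. The boosting variant proposed in the paper is not reproduced here, and all data are simulated.

        import numpy as np
        from scipy.optimize import minimize
        from scipy.special import expit
        from scipy.stats import beta as beta_dist

        rng = np.random.default_rng(11)
        n = 400
        x = rng.normal(size=n)
        mu, phi = expit(-0.5 + 1.0 * x), 20.0
        y = rng.beta(mu * phi, (1 - mu) * phi)       # percentage-type response

        def negloglik(theta):
            b0, b1, log_phi = theta
            m, p = expit(b0 + b1 * x), np.exp(log_phi)
            return -beta_dist.logpdf(y, m * p, (1 - m) * p).sum()

        print(minimize(negloglik, x0=[0.0, 0.0, 1.0]).x)   # b0, b1, log(phi)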

  20. Lamb wave propagation in negative Poisson's ratio composites

    NASA Astrophysics Data System (ADS)

    Remillat, Chrystel; Wilcox, Paul; Scarpa, Fabrizio

    2008-03-01

    Lamb wave propagation is evaluated for cross-ply laminate composites exhibiting through-the-thickness negative Poisson's ratio. The laminates are mechanically modeled using the Classical Laminate Theory, while the propagation of Lamb waves is investigated using a combination of semi-analytical models and Finite Element time-stepping techniques. The auxetic laminates exhibit well spaced bending, shear and symmetric fundamental modes, while featuring normal stresses for the A0 mode three times lower than in composite laminates with positive Poisson's ratio.

  1. Bicrossed products induced by Poisson vector fields and their integrability

    NASA Astrophysics Data System (ADS)

    Djiba, Samson Apourewagne; Wade, Aïssa

    2016-01-01

    First we show that, associated to any Poisson vector field E on a Poisson manifold (M,π), there is a canonical Lie algebroid structure on the first jet bundle J1M which depends only on the cohomology class of E. We then introduce the notion of a cosymplectic groupoid and we discuss the integrability of the first jet bundle into a cosymplectic groupoid. Finally, we give applications to Atiyah classes and L∞-algebras.

  2. Classification of linearly compact simple Nambu-Poisson algebras

    NASA Astrophysics Data System (ADS)

    Cantarini, Nicoletta; Kac, Victor G.

    2016-05-01

    We introduce the notion of a universal odd generalized Poisson superalgebra associated with an associative algebra A, by generalizing a construction made in the work of De Sole and Kac [Jpn. J. Math. 8, 1-145 (2013)]. By making use of this notion we give a complete classification of simple linearly compact (generalized) n-Nambu-Poisson algebras over an algebraically closed field of characteristic zero.

  3. George: Gaussian Process regression

    NASA Astrophysics Data System (ADS)

    Foreman-Mackey, Daniel

    2015-11-01

    George is a fast and flexible library, implemented in C++ with Python bindings, for Gaussian Process regression. It is useful for accounting for correlated noise in astronomical datasets, including those for transiting exoplanet discovery and characterization and for stellar population modeling.

  4. Multivariate Regression with Calibration

    PubMed Central

    Liu, Han; Wang, Lie; Zhao, Tuo

    2014-01-01

    We propose a new method named calibrated multivariate regression (CMR) for fitting high dimensional multivariate regression models. Compared to existing methods, CMR calibrates the regularization for each regression task with respect to its noise level so that it is simultaneously tuning insensitive and achieves an improved finite-sample performance. Computationally, we develop an efficient smoothed proximal gradient algorithm which has a worst-case iteration complexity O(1/ε), where ε is a pre-specified numerical accuracy. Theoretically, we prove that CMR achieves the optimal rate of convergence in parameter estimation. We illustrate the usefulness of CMR by thorough numerical simulations and show that CMR consistently outperforms other high dimensional multivariate regression methods. We also apply CMR on a brain activity prediction problem and find that CMR is as competitive as the handcrafted model created by human experts. PMID:25620861

  5. Comparing regression methods for the two-stage clonal expansion model of carcinogenesis.

    PubMed

    Kaiser, J C; Heidenreich, W F

    2004-11-15

    In the statistical analysis of cohort data with risk estimation models, both Poisson and individual likelihood regressions are widely used methods of parameter estimation. In this paper, their performance has been tested with the biologically motivated two-stage clonal expansion (TSCE) model of carcinogenesis. To exclude the inevitable uncertainties of existing data, cohorts with simple individual exposure histories have been created by Monte Carlo simulation. To reproduce some properties of atomic bomb survivors and radon-exposed mine workers, both acute and protracted exposure patterns have been generated. The capacity of the two regression methods to retrieve a priori known model parameters from the simulated cohort data has then been compared. For simple models with smooth hazard functions, the parameter estimates from both methods come close to their true values. However, for models with strongly discontinuous functions, which are generated by the cell mutation process of transformation, the Poisson regression method fails to produce reliable estimates. This behaviour is explained by the construction of class averages during data stratification, whereby some indispensable information on the individual exposure history is destroyed. It could not be repaired by countermeasures such as the refinement of Poisson classes or a more adequate choice of Poisson groups. Although such a choice might still exist, we were unable to discover it. In contrast, the individual likelihood regression technique was found to work reliably for all considered versions of the TSCE model. PMID:15490436
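
    For orientation, grouped Poisson regression of cohort data usually means a log-linear model for stratum event counts with a log person-years offset, as in this simulated sketch; the TSCE hazard itself is not implemented here, and all rates are invented.

        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(12)
        dose = np.repeat([0.0, 0.5, 1.0, 2.0], 5)     # stratum doses, Gy
        pyr = rng.uniform(1e3, 5e3, dose.size)        # person-years per stratum
        cases = rng.poisson(1e-3 * (1 + 0.8 * dose) * pyr)

        fit = sm.GLM(cases, sm.add_constant(dose), family=sm.families.Poisson(),
                     offset=np.log(pyr)).fit()
        print(np.exp(fit.params))                     # baseline rate, rate ratio/Gy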

  6. Regression versus No Regression in the Autistic Disorder: Developmental Trajectories

    ERIC Educational Resources Information Center

    Bernabei, P.; Cerquiglini, A.; Cortesi, F.; D' Ardia, C.

    2007-01-01

    Developmental regression is a complex phenomenon which occurs in 20-49% of the autistic population. Aim of the study was to assess possible differences in the development of regressed and non-regressed autistic preschoolers. We longitudinally studied 40 autistic children (18 regressed, 22 non-regressed) aged 2-6 years. The following developmental…

  7. Semiclassical Limits of Ore Extensions and a Poisson Generalized Weyl Algebra

    NASA Astrophysics Data System (ADS)

    Cho, Eun-Hee; Oh, Sei-Qwon

    2016-07-01

    We observe [Launois and Lecoutre, Trans. Am. Math. Soc. 368:755-785, 2016, Proposition 4.1] that Poisson polynomial extensions appear as semiclassical limits of a class of Ore extensions. As an application, a Poisson generalized Weyl algebra A 1, considered as a Poisson version of the quantum generalized Weyl algebra, is constructed and its Poisson structures are studied. In particular, a necessary and sufficient condition for A 1 to be Poisson simple is obtained, and it is established that the Poisson endomorphisms of A 1 are Poisson analogues of the endomorphisms of the quantum generalized Weyl algebra.

  8. Semiclassical Limits of Ore Extensions and a Poisson Generalized Weyl Algebra

    NASA Astrophysics Data System (ADS)

    Cho, Eun-Hee; Oh, Sei-Qwon

    2016-05-01

    We observe [Launois and Lecoutre, Trans. Am. Math. Soc. 368:755-785, 2016, Proposition 4.1] that Poisson polynomial extensions appear as semiclassical limits of a class of Ore extensions. As an application, a Poisson generalized Weyl algebra A 1, considered as a Poisson version of the quantum generalized Weyl algebra, is constructed and its Poisson structures are studied. In particular, a necessary and sufficient condition for A 1 to be Poisson simple is obtained, and it is established that the Poisson endomorphisms of A 1 are Poisson analogues of the endomorphisms of the quantum generalized Weyl algebra.

  9. Practical Session: Logistic Regression

    NASA Astrophysics Data System (ADS)

    Clausel, M.; Grégoire, G.

    2014-12-01

    An exercise is proposed to illustrate logistic regression. One investigates the different risk factors in the occurrence of coronary heart disease. It has been proposed in Chapter 5 of the book by D.G. Kleinbaum and M. Klein, "Logistic Regression", Statistics for Biology and Health, Springer Science+Business Media, LLC (2010) and also by D. Chessel and A.B. Dufour in Lyon 1 (see Sect. 6 of http://pbil.univ-lyon1.fr/R/pdf/tdr341.pdf). This example is based on data given in the file evans.txt coming from http://www.sph.emory.edu/dkleinb/logreg3.htm#data.
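
    A sketch of the exercise's model in Python rather than R; the variable names (chd, cat, age, chl) follow the Evans County data set, but the rows below are simulated, not the real evans.txt.

        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf
        from scipy.special import expit

        rng = np.random.default_rng(13)
        n = 600
        df = pd.DataFrame({
            "cat": rng.integers(0, 2, n).astype(float),   # high catecholamine
            "age": rng.uniform(40, 75, n),
            "chl": rng.normal(210, 40, n),                # cholesterol
        })
        p = expit(-6 + 0.6 * df["cat"] + 0.04 * df["age"] + 0.005 * df["chl"])
        df["chd"] = (rng.random(n) < p).astype(float)     # coronary heart disease

        fit = smf.logit("chd ~ cat + age + chl", data=df).fit(disp=0)
        print(np.exp(fit.params))                         # odds ratios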

  10. Electrostatic forces in the Poisson-Boltzmann systems

    PubMed Central

    Xiao, Li; Cai, Qin; Ye, Xiang; Wang, Jun; Luo, Ray

    2013-01-01

    Continuum modeling of electrostatic interactions based upon numerical solutions of the Poisson-Boltzmann equation has been widely used in structural and functional analyses of biomolecules. A limitation of the numerical strategies is that it is conceptually difficult to incorporate these types of models into molecular mechanics simulations, mainly because of the issue in assigning atomic forces. In this theoretical study, we first derived the Maxwell stress tensor for molecular systems obeying the full nonlinear Poisson-Boltzmann equation. We further derived formulations of analytical electrostatic forces given the Maxwell stress tensor and discussed the relations of the formulations with those published in the literature. We showed that the formulations derived from the Maxwell stress tensor require a weaker condition for its validity, applicable to nonlinear Poisson-Boltzmann systems with a finite number of singularities such as atomic point charges and the existence of discontinuous dielectric as in the widely used classical piece-wise constant dielectric models. PMID:24028101

  11. Detection of Gaussian signals in Poisson-modulated interference.

    PubMed

    Streit, R L

    2000-10-01

    Passive broadband detection of target signals by an array of hydrophones in the presence of multiple discrete interferers is analyzed under Gaussian statistics and low signal-to-noise ratio conditions. A nonhomogeneous Poisson-modulated interference process is used to model the ensemble of possible arrival directions of the discrete interferers. Closed-form expressions are derived for the recognition differential of the passive-sonar equation in the presence of Poisson-modulated interference. The interference-compensated recognition differential differs from the classical recognition differential by an additive positive term that depends on the interference-to-noise ratio, the directionality of the Poisson-modulated interference, and the array beam pattern. PMID:11051502

  12. A spectral Poisson solver for kinetic plasma simulation

    NASA Astrophysics Data System (ADS)

    Szeremley, Daniel; Obberath, Jens; Brinkmann, Ralf

    2011-10-01

    Plasma resonance spectroscopy is a well established plasma diagnostic method, realized in several designs. One of these designs is the multipole resonance probe (MRP). In its idealized - geometrically simplified - version it consists of two dielectrically shielded, hemispherical electrodes to which an RF signal is applied. A numerical tool is under development which is capable of simulating the dynamics of the plasma surrounding the MRP in electrostatic approximation. In this contribution we concentrate on the specialized Poisson solver for that tool. The plasma is represented by an ensemble of point charges. By expanding both the charge density and the potential into spherical harmonics, a largely analytical solution of the Poisson problem can be employed. For a practical implementation, the expansion must be appropriately truncated. With this spectral solver we are able to efficiently solve the Poisson equation in a kinetic plasma simulation without the need of introducing a spatial discretization.

  13. The Poisson-Boltzmann model for tRNA

    PubMed Central

    Gruziel, Magdalena; Grochowski, Pawel; Trylska, Joanna

    2008-01-01

    Using the tRNA molecule as an example, we evaluate the applicability of the Poisson-Boltzmann model to highly charged systems such as nucleic acids. In particular, we describe the effect of explicit crystallographic divalent ions and water molecules, ionic strength of the solvent, and the linear approximation to the Poisson-Boltzmann equation on the electrostatic potential and electrostatic free energy. We calculate and compare typical similarity indices and measures, such as the Hodgkin index and root mean square deviation. Finally, we introduce a modification to the nonlinear Poisson-Boltzmann equation, which accounts in a simple way for the finite size of mobile ions, by applying a cutoff in the concentration formula for ionic distribution at regions of high electrostatic potentials. We test the influence of this ionic concentration cutoff on the electrostatic properties of tRNA. PMID:18432617
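
    The proposed concentration cutoff is a one-line modification of the Boltzmann factor; a sketch with illustrative numbers (kT in eV at room temperature, potential in volts, an arbitrary concentration cap):

        import numpy as np

        KT = 0.0259                                  # eV at ~300 K

        def ion_concentration(phi, c0, z, c_max):
            c = c0 * np.exp(-z * phi / KT)           # Boltzmann distribution
            return np.minimum(c, c_max)              # cutoff: finite ion size

        phi = np.linspace(-0.3, 0.3, 7)              # electrostatic potential, V
        print(ion_concentration(phi, c0=0.1, z=+1, c_max=5.0))   # mol/L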

  14. Blocked Shape Memory Effect in Negative Poisson's Ratio Polymer Metamaterials.

    PubMed

    Boba, Katarzyna; Bianchi, Matteo; McCombe, Greg; Gatt, Ruben; Griffin, Anselm C; Richardson, Robert M; Scarpa, Fabrizio; Hamerton, Ian; Grima, Joseph N

    2016-08-10

    We describe a new class of negative Poisson's ratio (NPR) open cell PU-PE foams produced by blocking the shape memory effect in the polymer. Contrary to classical NPR open cell thermoset and thermoplastic foams, which return to the conventional phase after reheating (and therefore limit their use in technological applications), this new class of cellular solids has a permanent negative Poisson's ratio behavior, generated through multiple shape memory (mSM) treatments that lead to a fixity of the topology of the cell foam. The mSM-NPR foams have Poisson's ratio values similar to the auxetic foams prior to their return to the conventional phase, but compressive stress-strain curves similar to those of conventional foams. The results show that by manipulating the shape memory effect in polymer microstructures it is possible to obtain new classes of materials with unusual deformation mechanisms. PMID:27377708

  15. Explorations in Statistics: Regression

    ERIC Educational Resources Information Center

    Curran-Everett, Douglas

    2011-01-01

    Learning about statistics is a lot like learning about science: the learning is more meaningful if you can actively explore. This seventh installment of "Explorations in Statistics" explores regression, a technique that estimates the nature of the relationship between two things for which we may only surmise a mechanistic or predictive connection.…

  16. Modern Regression Discontinuity Analysis

    ERIC Educational Resources Information Center

    Bloom, Howard S.

    2012-01-01

    This article provides a detailed discussion of the theory and practice of modern regression discontinuity (RD) analysis for estimating the effects of interventions or treatments. Part 1 briefly chronicles the history of RD analysis and summarizes its past applications. Part 2 explains how in theory an RD analysis can identify an average effect of…

  17. CORRELATION AND REGRESSION

    EPA Science Inventory

    Webcast entitled Statistical Tools for Making Sense of Data, by the National Nutrient Criteria Support Center, N-STEPS (Nutrients-Scientific Technical Exchange Partnership). The section "Correlation and Regression" provides an overview of these two techniques in the context of nut...

  18. Multiple linear regression analysis

    NASA Technical Reports Server (NTRS)

    Edwards, T. R.

    1980-01-01

    Program rapidly selects best-suited set of coefficients. User supplies only vectors of independent and dependent data and specifies confidence level required. Program uses stepwise statistical procedure for relating minimal set of variables to set of observations; final regression contains only most statistically significant coefficients. Program is written in FORTRAN IV for batch execution and has been implemented on NOVA 1200.

  19. Mechanisms of neuroblastoma regression

    PubMed Central

    Brodeur, Garrett M.; Bagatell, Rochelle

    2014-01-01

    Recent genomic and biological studies of neuroblastoma have shed light on the dramatic heterogeneity in the clinical behaviour of this disease, which spans from spontaneous regression or differentiation in some patients, to relentless disease progression in others, despite intensive multimodality therapy. This evidence also suggests several possible mechanisms to explain the phenomena of spontaneous regression in neuroblastomas, including neurotrophin deprivation, humoral or cellular immunity, loss of telomerase activity and alterations in epigenetic regulation. A better understanding of the mechanisms of spontaneous regression might help to identify optimal therapeutic approaches for patients with these tumours. Currently, the most druggable mechanism is the delayed activation of developmentally programmed cell death regulated by the tropomyosin receptor kinase A pathway. Indeed, targeted therapy aimed at inhibiting neurotrophin receptors might be used in lieu of conventional chemotherapy or radiation in infants with biologically favourable tumours that require treatment. Alternative approaches consist of breaking immune tolerance to tumour antigens or activating neurotrophin receptor pathways to induce neuronal differentiation. These approaches are likely to be most effective against biologically favourable tumours, but they might also provide insights into treatment of biologically unfavourable tumours. We describe the different mechanisms of spontaneous neuroblastoma regression and the consequent therapeutic approaches. PMID:25331179

  20. Bayesian ARTMAP for regression.

    PubMed

    Sasu, L M; Andonie, R

    2013-10-01

    Bayesian ARTMAP (BA) is a recently introduced neural architecture which uses a combination of Fuzzy ARTMAP competitive learning and Bayesian learning. Training is generally performed online, in a single epoch. During training, BA creates input data clusters as Gaussian categories, and also infers the conditional probabilities between input patterns and categories, and between categories and classes. During prediction, BA uses Bayesian posterior probability estimation. So far, BA has been used only for classification. The goal of this paper is to analyze the efficiency of BA for regression problems. Our contributions are: (i) we generalize the BA algorithm using the clustering functionality of both ART modules, and name it BA for Regression (BAR); (ii) we prove that BAR is a universal approximator with the best approximation property. In other words, BAR approximates arbitrarily well any continuous function (universal approximation) and, for every given continuous function, there is one in the set of BAR approximators situated at minimum distance (best approximation); (iii) we experimentally compare the online trained BAR with several neural models, on the following standard regression benchmarks: CPU Computer Hardware, Boston Housing, Wisconsin Breast Cancer, and Communities and Crime. Our results show that BAR is an appropriate tool for regression tasks, both for theoretical and practical reasons. PMID:23665468

  1. Validation of the Poisson Stochastic Radiative Transfer Model

    NASA Technical Reports Server (NTRS)

    Zhuravleva, Tatiana; Marshak, Alexander

    2004-01-01

    A new approach to validation of the Poisson stochastic radiative transfer method is proposed. In contrast to other validations of stochastic models, the main parameter of the Poisson model responsible for cloud geometrical structure - the cloud aspect ratio - is determined entirely by matching measurements and calculations of the direct solar radiation. If measurements of the direct solar radiation are unavailable, it was shown that there is a range of aspect ratios that allows the stochastic model to accurately approximate the average measurements of surface downward and cloud top upward fluxes. Realizations of the fractionally integrated cascade model are taken as a prototype of real measurements.

  2. Distributional properties of the three-dimensional Poisson Delaunay cell

    SciTech Connect

    Muche, L.

    1996-07-01

    This paper gives distributional properties of geometrical characteristics of the Delaunay tessellation generated by a stationary Poisson point process in ℝ³. The considerations are based on a well-known formula given by Miles which describes the size and shape of the "typical" three-dimensional Poisson Delaunay cell. The results are the probability density functions for its volume, the area, and the perimeter of one of its faces, the angle spanned in a face by two of its edges, and the length of an edge. These probability density functions are given in integral form. Formulas for higher moments of these characteristics are given explicitly.

  3. A Study of Poisson's Ratio in the Yield Region

    NASA Technical Reports Server (NTRS)

    Gerard, George; Wildhorn, Sorrel

    1952-01-01

    In the yield region of the stress-strain curve the variation in Poisson's ratio from the elastic to the plastic value is most pronounced. This variation was studied experimentally by a systematic series of tests on several aluminum alloys. The tests were conducted under simple tensile and compressive loading along three orthogonal axes. A theoretical variation of Poisson's ratio for an orthotropic solid was obtained from dilatational considerations. The assumptions used in deriving the theory were examined by use of the test data and were found to be in reasonable agreement with experimental evidence.

  4. Distributional properties of the three-dimensional Poisson Delaunay cell

    NASA Astrophysics Data System (ADS)

    Muche, Lutz

    1996-07-01

    This paper gives distributional properties of geometrical characteristics of the Delaunay tessellation generated by a stationary Poisson point process in ℝ3. The considerations are based on a well-known formula given by Miles which describes the size and shape of the "typical" three-dimensional Poisson Delaunay cell. The results are the probability density functions for its volume, the area, and the perimeter of one of its faces, the angle spanned in a face by two of its edges, and the length of an edge. These probability density functions are given in integral form. Formulas for higher moments of these characteristics are given explicitly.
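
    These integral-form densities invite a Monte Carlo cross-check: simulate a Poisson sample in a box, build the Delaunay tessellation, and inspect the empirical cell-volume distribution. A rough sketch (edge effects near the box boundary are ignored, so it only approximates the "typical" cell):

        import numpy as np
        from scipy.spatial import Delaunay

        rng = np.random.default_rng(14)
        pts = rng.random((rng.poisson(2000), 3))      # Poisson process in unit cube

        cells = pts[Delaunay(pts).simplices]          # (n_cells, 4, 3) tetrahedra
        edges = cells[:, 1:] - cells[:, :1]
        volumes = np.abs(np.linalg.det(edges)) / 6.0  # tetrahedron volume formula
        print(volumes.mean(), volumes.std())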

  5. Ridge Regression: A Regression Procedure for Analyzing Correlated Independent Variables.

    ERIC Educational Resources Information Center

    Rakow, Ernest A.

    Ridge regression is presented as an analytic technique to be used when predictor variables in a multiple linear regression situation are highly correlated, a situation which may result in unstable regression coefficients and difficulties in interpretation. Ridge regression avoids the problem of selection of variables that may occur in stepwise…

  6. Ridge Regression Signal Processing

    NASA Technical Reports Server (NTRS)

    Kuhl, Mark R.

    1990-01-01

    The introduction of the Global Positioning System (GPS) into the National Airspace System (NAS) necessitates the development of Receiver Autonomous Integrity Monitoring (RAIM) techniques. In order to guarantee a certain level of integrity, a thorough understanding of modern estimation techniques applied to navigational problems is required. The extended Kalman filter (EKF) is derived and analyzed under poor geometry conditions. It was found that the performance of the EKF is difficult to predict, since the EKF is designed for a Gaussian environment. A novel approach is implemented which incorporates ridge regression to explain the behavior of an EKF in the presence of dynamics under poor geometry conditions. The basic principles of ridge regression theory are presented, followed by the derivation of a linearized recursive ridge estimator. Computer simulations are performed to confirm the underlying theory and to provide a comparative analysis of the EKF and the recursive ridge estimator.
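
    The ridge estimate itself is one line of linear algebra. The sketch below shows how a small penalty k stabilizes the coefficients when two regressors are nearly collinear, the same failure mode as poor satellite geometry; the data are invented.

        import numpy as np

        def ridge(X, y, k):
            # Closed form: beta = (X'X + k*I)^{-1} X'y
            return np.linalg.solve(X.T @ X + k * np.eye(X.shape[1]), X.T @ y)

        rng = np.random.default_rng(15)
        x1 = rng.normal(size=100)
        X = np.column_stack([x1, x1 + 1e-3 * rng.normal(size=100)])  # near-collinear
        y = X @ np.array([1.0, 1.0]) + rng.normal(0, 0.1, 100)
        print(ridge(X, y, k=0.0))    # unstable ordinary least squares
        print(ridge(X, y, k=1.0))    # shrunken, stable ridge solution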

  7. Fast Censored Linear Regression

    PubMed Central

    HUANG, YIJIAN

    2013-01-01

    Weighted log-rank estimating function has become a standard estimation method for the censored linear regression model, or the accelerated failure time model. Well established statistically, the estimator defined as a consistent root has, however, rather poor computational properties because the estimating function is neither continuous nor, in general, monotone. We propose a computationally efficient estimator through an asymptotics-guided Newton algorithm, in which censored quantile regression methods are tailored to yield an initial consistent estimate and a consistent derivative estimate of the limiting estimating function. We also develop fast interval estimation with a new proposal for sandwich variance estimation. The proposed estimator is asymptotically equivalent to the consistent root estimator and barely distinguishable in samples of practical size. However, computation time is typically reduced by two to three orders of magnitude for point estimation alone. Illustrations with clinical applications are provided. PMID:24347802

  8. Adjustment of lifetime risks of space radiation-induced cancer by the healthy worker effect and cancer misclassification.

    PubMed

    Peterson, Leif E; Kovyrshina, Tatiana

    2015-12-01

    Background. The healthy worker effect (HWE) is a source of bias in occupational studies of mortality among workers caused by use of comparative disease rates based on public data, which include mortality of unhealthy members of the public who are screened out of the workplace. For the US astronaut corps, the HWE is assumed to be strong due to the rigorous medical selection and surveillance. This investigation focused on the effect of correcting for HWE on projected lifetime risk estimates for radiation-induced cancer mortality and incidence. Methods. We performed radiation-induced cancer risk assessment using Poisson regression of cancer mortality and incidence rates among Hiroshima and Nagasaki atomic bomb survivors. Regression coefficients were used for generating risk coefficients for the excess absolute, transfer, and excess relative models. Excess lifetime risks (ELR) for radiation exposure and baseline lifetime risks (BLR) were adjusted for the HWE using standardized mortality ratios (SMR) for aviators and nuclear workers who were occupationally exposed to ionizing radiation. We also adjusted lifetime risks by cancer mortality misclassification among atomic bomb survivors. Results. For all cancers combined ("Nonleukemia"), the effect of adjusting the all-cause hazard rate by the simulated quantiles of the all-cause SMR resulted in a mean difference (not percent difference) in ELR of 0.65%, a mean difference of 4% for mortality BLR, and a mean change of 6.2% in BLR for incidence. The effect of adjusting the excess (radiation-induced) cancer rate or baseline cancer hazard rate by simulated quantiles of cancer-specific SMRs resulted in a mean difference of [Formula: see text] in the all-cancer mortality ELR and a mean difference of [Formula: see text] in the mortality BLR. For incidence, by contrast, the effect of adjusting by cancer-specific SMRs resulted in a mean change of [Formula: see text] for the all-cancer BLR. Only cancer mortality risks were adjusted by
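
    The core of the SMR adjustment can be sketched as rescaling the all-cause hazard before integrating a cause-specific lifetime risk; every rate below is invented for illustration and is not taken from the study.

        import numpy as np

        ages = np.arange(40, 101)                      # attained age, years
        h_all = 1e-4 * np.exp(0.085 * (ages - 40))     # all-cause hazard (made up)
        h_cancer = 0.2 * h_all                         # cause-specific hazard

        def baseline_lifetime_risk(h_all, h_cause, smr=1.0):
            survival = np.exp(-np.cumsum(smr * h_all)) # SMR-scaled all-cause survival
            return np.sum(survival * h_cause)

        print(baseline_lifetime_risk(h_all, h_cancer, smr=1.0))   # unadjusted
        print(baseline_lifetime_risk(h_all, h_cancer, smr=0.8))   # healthy-worker SMR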

  9. Some applications of the fractional Poisson probability distribution

    SciTech Connect

    Laskin, Nick

    2009-11-15

    Physical and mathematical applications of the recently invented fractional Poisson probability distribution have been presented. As a physical application, a new family of quantum coherent states has been introduced and studied. As mathematical applications, we have developed the fractional generalization of Bell polynomials, Bell numbers, and Stirling numbers of the second kind. The appearance of fractional Bell polynomials is natural if one evaluates the diagonal matrix element of the evolution operator in the basis of newly introduced quantum coherent states. Fractional Stirling numbers of the second kind have been introduced and applied to evaluate the skewness and kurtosis of the fractional Poisson probability distribution function. A representation of the Bernoulli numbers in terms of fractional Stirling numbers of the second kind has been found. In the limit case when the fractional Poisson probability distribution becomes the Poisson probability distribution, all of the above listed developments and implementations turn into the well-known results of the quantum optics and the theory of combinatorial numbers.

  10. On supermatrix models, Poisson geometry, and noncommutative supersymmetric gauge theories

    SciTech Connect

    Klimčík, Ctirad

    2015-12-15

    We construct a new supermatrix model which represents a manifestly supersymmetric noncommutative regularisation of the UOSp(2|1) supersymmetric Schwinger model on the supersphere. Our construction is much simpler than those already existing in the literature and it was found by using Poisson geometry in a substantial way.

  11. Application of Poisson random effect models for highway network screening.

    PubMed

    Jiang, Ximiao; Abdel-Aty, Mohamed; Alamili, Samer

    2014-02-01

    In recent years, Bayesian random effect models that account for the temporal and spatial correlations of crash data became popular in traffic safety research. This study employs random effect Poisson Log-Normal models for crash risk hotspot identification. Both the temporal and spatial correlations of crash data were considered. Potential for Safety Improvement (PSI) was adopted as a measure of the crash risk. Using the fatal and injury crashes that occurred on urban 4-lane divided arterials from 2006 to 2009 in the Central Florida area, the random effect approaches were compared to the traditional Empirical Bayesian (EB) method and the conventional Bayesian Poisson Log-Normal model. A series of method examination tests were conducted to evaluate the performance of different approaches. These tests include the previously developed site consistency test, method consistency test, total rank difference test, and the modified total score test, as well as the newly proposed total safety performance measure difference test. Results show that the Bayesian Poisson model accounting for both temporal and spatial random effects (PTSRE) outperforms the model with only temporal random effects, and both are superior to the conventional Poisson Log-Normal model (PLN) and the EB model in the fitting of crash data. Additionally, the method evaluation tests indicate that the PTSRE model is significantly superior to the PLN model and the EB model in consistently identifying hotspots during successive time periods. The results suggest that the PTSRE model is a superior alternative for road site crash risk hotspot identification. PMID:24269863

  12. Negative Poisson's Ratio in Single-Layer Graphene Ribbons.

    PubMed

    Jiang, Jin-Wu; Park, Harold S

    2016-04-13

    The Poisson's ratio characterizes the resultant strain in the lateral direction for a material under longitudinal deformation. Though negative Poisson's ratios (NPR) are theoretically possible within continuum elasticity, they are most frequently observed in engineered materials and structures, as they are not intrinsic to many materials. In this work, we report NPR in single-layer graphene ribbons, which results from the compressive edge stress induced warping of the edges. The effect is robust, as the NPR is observed for graphene ribbons with widths smaller than about 10 nm, and for tensile strains smaller than about 0.5% with NPR values reaching as large as -1.51. The NPR is explained analytically using an inclined plate model, which is able to predict the Poisson's ratio for graphene sheets of arbitrary size. The inclined plate model demonstrates that the NPR is governed by the interplay between the width (a bulk property), and the warping amplitude of the edge (an edge property), which eventually yields a phase diagram determining the sign of the Poisson's ratio as a function of the graphene geometry. PMID:26986994

  13. Wide-area traffic: The failure of Poisson modeling

    SciTech Connect

    Paxson, V.; Floyd, S.

    1994-08-01

    Network arrivals are often modeled as Poisson processes for analytic simplicity, even though a number of traffic studies have shown that packet interarrivals are not exponentially distributed. The authors evaluate 21 wide-area traces, investigating a number of wide-area TCP arrival processes (session and connection arrivals, FTPDATA connection arrivals within FTP sessions, and TELNET packet arrivals) to determine the error introduced by modeling them using Poisson processes. The authors find that user-initiated TCP session arrivals, such as remote-login and file-transfer, are well-modeled as Poisson processes with fixed hourly rates, but that other connection arrivals deviate considerably from Poisson; that modeling TELNET packet interarrivals as exponential grievously underestimates the burstiness of TELNET traffic, but using the empirical Tcplib[DJCME92] interarrivals preserves burstiness over many time scales; and that FTPDATA connection arrivals within FTP sessions come bunched into "connection bursts", the largest of which are so large that they completely dominate FTPDATA traffic. Finally, they offer some preliminary results regarding how the findings relate to the possible self-similarity of wide-area traffic.
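
    A hedged numpy sketch of the burstiness phenomenon the authors measure: exponential interarrivals keep the variance-to-mean ratio of counts near 1 at every aggregation scale, whereas heavy-tailed interarrivals (a crude stand-in for measured TELNET traffic, not the Tcplib distribution itself) grow burstier as the window widens.

```python
import numpy as np

rng = np.random.default_rng(1)

def dispersion(arrivals, width, horizon):
    """Variance-to-mean ratio of counts in windows of the given width."""
    counts, _ = np.histogram(arrivals, bins=np.arange(0.0, horizon, width))
    return counts.var() / counts.mean()

horizon = 10_000.0
# Poisson arrivals: i.i.d. exponential interarrivals with unit mean
expo = np.cumsum(rng.exponential(1.0, size=20_000))
# bursty arrivals: Pareto(1.5) interarrivals rescaled to unit mean
a = 1.5
pareto = np.cumsum((rng.pareto(a, size=20_000) + 1.0) * (a - 1.0) / a)

for width in (1.0, 10.0, 100.0):
    print(f"window {width:6.1f}  "
          f"Poisson {dispersion(expo[expo < horizon], width, horizon):5.2f}  "
          f"heavy-tailed {dispersion(pareto[pareto < horizon], width, horizon):8.2f}")
```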

  14. On covariant Poisson brackets in classical field theory

    SciTech Connect

    Forger, Michael; Salles, Mário O.

    2015-10-15

    How to give a natural geometric definition of a covariant Poisson bracket in classical field theory has for a long time been an open problem—as testified by the extensive literature on “multisymplectic Poisson brackets,” together with the fact that all these proposals suffer from serious defects. On the other hand, the functional approach does provide a good candidate which has come to be known as the Peierls–De Witt bracket and whose construction in a geometrical setting is now well understood. Here, we show how the basic “multisymplectic Poisson bracket” already proposed in the 1970s can be derived from the Peierls–De Witt bracket, applied to a special class of functionals. This relation allows us to trace back most (if not all) of the problems encountered in the past to ambiguities (the relation between differential forms on multiphase space and the functionals they define is not one-to-one) and also to the fact that this class of functionals does not form a Poisson subalgebra.

  15. Vectorized multigrid Poisson solver for the CDC CYBER 205

    NASA Technical Reports Server (NTRS)

    Barkai, D.; Brandt, M. A.

    1984-01-01

    The full multigrid (FMG) method is applied to the two dimensional Poisson equation with Dirichlet boundary conditions. This has been chosen as a relatively simple test case for examining the efficiency of fully vectorizing the multigrid method. Data structure and programming considerations and techniques are discussed, accompanied by performance details.

  16. Subsonic Flow for the Multidimensional Euler-Poisson System

    NASA Astrophysics Data System (ADS)

    Bae, Myoungjean; Duan, Ben; Xie, Chunjing

    2016-04-01

    We establish the existence and stability of subsonic potential flow for the steady Euler-Poisson system in a multidimensional nozzle of a finite length when prescribing the electric potential difference on a non-insulated boundary from a fixed point at the exit, and prescribing the pressure at the exit of the nozzle. The Euler-Poisson system for subsonic potential flow can be reduced to a nonlinear elliptic system of second order. In this paper, we develop a technique to achieve a priori C^{1,α} estimates of solutions to a quasi-linear second order elliptic system with mixed boundary conditions in a multidimensional domain enclosed by a Lipschitz continuous boundary. In particular, we discovered a special structure of the Euler-Poisson system which enables us to obtain C^{1,α} estimates of the velocity potential and the electric potential functions, and this leads us to establish structural stability of subsonic flows for the Euler-Poisson system under perturbations of various data.

  17. 3D soft metamaterials with negative Poisson's ratio.

    PubMed

    Babaee, Sahab; Shim, Jongmin; Weaver, James C; Chen, Elizabeth R; Patel, Nikita; Bertoldi, Katia

    2013-09-25

    Buckling is exploited to design a new class of three-dimensional metamaterials with negative Poisson's ratio. A library of auxetic building blocks is identified and procedures are defined to guide their selection and assembly. The auxetic properties of these materials are demonstrated both through experiments and finite element simulations and exhibit excellent qualitative and quantitative agreement. PMID:23878067

  18. Negative poisson's ratio in single-layer black phosphorus.

    PubMed

    Jiang, Jin-Wu; Park, Harold S

    2014-01-01

    The Poisson's ratio is a fundamental mechanical property that relates the resulting lateral strain to applied axial strain. Although this value can theoretically be negative, it is positive for nearly all materials, though negative values have been observed in so-called auxetic structures. However, nearly all auxetic materials are bulk materials whose microstructure has been specifically engineered to generate a negative Poisson's ratio. Here we report using first-principles calculations the existence of a negative Poisson's ratio in a single-layer, two-dimensional material, black phosphorus. In contrast to engineered bulk auxetics, this behaviour is intrinsic for single-layer black phosphorus, and originates from its puckered structure, where the pucker can be regarded as a re-entrant structure that is comprised of two coupled orthogonal hinges. As a result of this atomic structure, a negative Poisson's ratio is observed in the out-of-plane direction under uniaxial deformation in the direction parallel to the pucker. PMID:25131569

  19. Void-containing materials with tailored Poisson's ratio

    NASA Astrophysics Data System (ADS)

    Goussev, Olga A.; Richner, Peter; Rozman, Michael G.; Gusev, Andrei A.

    2000-10-01

    Assuming square, hexagonal, and random packed arrays of nonoverlapping identical parallel cylindrical voids dispersed in an aluminum matrix, we have calculated numerically the concentration dependence of the transverse Poisson's ratios. It was shown that the transverse Poisson's ratio of the hexagonal and random packed arrays approached 1 upon increasing the concentration of voids while the ratio of the square packed array along the principal continuation directions approached 0. Experimental measurements were carried out on rectangular aluminum bricks with identical cylindrical holes drilled in square and hexagonal packed arrays. Experimental results were in good agreement with numerical predictions. We then demonstrated, based on the numerical and experimental results, that by varying the spatial arrangement of the holes and their volume fraction, one can design and manufacture voided materials with a tailored Poisson's ratio between 0 and 1. In practice, those with a high Poisson's ratio, i.e., close to 1, can be used to amplify the lateral responses of the structures while those with a low one, i.e., close to 0, can largely attenuate the lateral responses and can therefore be used in situations where stringent lateral stability is needed.

  20. An Advanced Manipulator For Poisson Series With Numerical Coefficients

    NASA Astrophysics Data System (ADS)

    Biscani, Francesco; Casotto, S.

    2006-06-01

    The availability of an efficient and featureful manipulator for Poisson series with numerical coefficients is a standard need for celestial mechanicians and has arisen during our work on the analytical development of the Tide-Generating-Potential (TGP). In the harmonic expansion of the TGP the Poisson series appearing in the theories of motion of the celestial bodies are subjected to a wide set of mathematical operations, ranging from simple additions and multiplications to more sophisticated operations on Legendre polynomials and spherical harmonics with Poisson series as arguments. To perform these operations we have developed an algebraic manipulator, called Piranha, structured as an object-oriented multi-platform C++ library. Piranha handles series with real and complex coefficients, and operates with an arbitrary degree of precision. It supports advanced features such as trigonometric operations and the generation of special functions from Poisson series. Piranha is provided with a proof-of-concept, multi-platform GUI, which serves as a testbed and benchmark for the library. We describe Piranha's architecture and characteristics, what it accomplishes currently and how it will be extended in the future (e.g., to handle series with symbolic coefficients in a consistent fashion with its current design).

  1. Orthogonal Regression: A Teaching Perspective

    ERIC Educational Resources Information Center

    Carr, James R.

    2012-01-01

    A well-known approach to linear least squares regression is that which involves minimizing the sum of squared orthogonal projections of data points onto the best fit line. This form of regression is known as orthogonal regression, and the linear model that it yields is known as the major axis. A similar method, reduced major axis regression, is…

  2. Correlation and simple linear regression.

    PubMed

    Eberly, Lynn E

    2007-01-01

    This chapter highlights important steps in using correlation and simple linear regression to address scientific questions about the association of two continuous variables with each other. These steps include estimation and inference, assessing model fit, the connection between regression and ANOVA, and study design. Examples in microbiology are used throughout. This chapter provides a framework that is helpful in understanding more complex statistical techniques, such as multiple linear regression, linear mixed effects models, logistic regression, and proportional hazards regression. PMID:18450049

  3. Incremental hierarchical discriminant regression.

    PubMed

    Weng, Juyang; Hwang, Wey-Shiuan

    2007-03-01

    This paper presents incremental hierarchical discriminant regression (IHDR) which incrementally builds a decision tree or regression tree for very high-dimensional regression or decision spaces by an online, real-time learning system. Biologically motivated, it is an approximate computational model for automatic development of associative cortex, with both bottom-up sensory inputs and top-down motor projections. At each internal node of the IHDR tree, information in the output space is used to automatically derive the local subspace spanned by the most discriminating features. Embedded in the tree is a hierarchical probability distribution model used to prune very unlikely cases during the search. The number of parameters in the coarse-to-fine approximation is dynamic and data-driven, enabling the IHDR tree to automatically fit data with unknown distribution shapes (thus, it is difficult to select the number of parameters up front). The IHDR tree dynamically assigns long-term memory to avoid the loss-of-memory problem typical with a global-fitting learning algorithm for neural networks. A major challenge for an incrementally built tree is that the number of samples varies arbitrarily during the construction process. An incrementally updated probability model, called the sample-size-dependent negative-log-likelihood (SDNLL) metric, is used to deal with large sample-size cases, small sample-size cases, and unbalanced sample-size cases, measured among different internal nodes of the IHDR tree. We report experimental results for four types of data: synthetic data to visualize the behavior of the algorithms, large face image data, continuous video stream from robot navigation, and publicly available data sets that use human defined features. PMID:17385628

  4. Steganalysis using logistic regression

    NASA Astrophysics Data System (ADS)

    Lubenko, Ivans; Ker, Andrew D.

    2011-02-01

    We advocate Logistic Regression (LR) as an alternative to the Support Vector Machine (SVM) classifiers commonly used in steganalysis. LR offers more information than traditional SVM methods - it estimates class probabilities as well as providing a simple classification - and can be adapted more easily and efficiently for multiclass problems. Like SVM, LR can be kernelised for nonlinear classification, and it shows comparable classification accuracy to SVM methods. This work is a case study, comparing accuracy and speed of SVM and LR classifiers in detection of LSB Matching and other related spatial-domain image steganography, through the state-of-the-art 686-dimensional SPAM feature set, in three image sets.
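
    The probability output that distinguishes LR from a margin-only SVM decision is a one-liner in practice. A minimal, hedged scikit-learn sketch with synthetic stand-in features (not the 686-dimensional SPAM set used in the paper):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# synthetic two-class data standing in for cover/stego feature vectors
rng = np.random.default_rng(2)
n, d = 2000, 50
X = rng.normal(size=(n, d))
y = (X[:, :5].sum(axis=1) + 0.5 * rng.normal(size=n) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

print("accuracy:", clf.score(X_te, y_te))
# unlike a hard SVM decision, LR also yields class probabilities directly
print("P(stego | x):", clf.predict_proba(X_te[:3])[:, 1])
```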

  5. A new method for dealing with measurement error in explanatory variables of regression models.

    PubMed

    Freedman, Laurence S; Fainberg, Vitaly; Kipnis, Victor; Midthune, Douglas; Carroll, Raymond J

    2004-03-01

    We introduce a new method, moment reconstruction, of correcting for measurement error in covariates in regression models. The central idea is similar to regression calibration in that the values of the covariates that are measured with error are replaced by "adjusted" values. In regression calibration the adjusted value is the expectation of the true value conditional on the measured value. In moment reconstruction the adjusted value is the variance-preserving empirical Bayes estimate of the true value conditional on the outcome variable. The adjusted values thereby have the same first two moments and the same covariance with the outcome variable as the unobserved "true" covariate values. We show that moment reconstruction is equivalent to regression calibration in the case of linear regression, but leads to different results for logistic regression. For case-control studies with logistic regression and covariates that are normally distributed within cases and controls, we show that the resulting estimates of the regression coefficients are consistent. In simulations we demonstrate that for logistic regression, moment reconstruction carries less bias than regression calibration, and for case-control studies is superior in mean-square error to the standard regression calibration approach. Finally, we give an example of the use of moment reconstruction in linear discriminant analysis and a nonstandard problem where we wish to adjust a classification tree for measurement error in the explanatory variables. PMID:15032787

  6. Poisson structures for lifts and periodic reductions of integrable lattice equations

    NASA Astrophysics Data System (ADS)

    Kouloukas, Theodoros E.; Tran, Dinh T.

    2015-02-01

    We introduce and study suitable Poisson structures for four-dimensional maps derived as lifts and specific periodic reductions of integrable lattice equations. These maps are Poisson with respect to these structures and the corresponding integrals are in involution.

  7. A linear regression solution to the spatial autocorrelation problem

    NASA Astrophysics Data System (ADS)

    Griffith, Daniel A.

    The Moran Coefficient spatial autocorrelation index can be decomposed into orthogonal map pattern components. This decomposition relates it directly to standard linear regression, in which corresponding eigenvectors can be used as predictors. This paper reports comparative results between these linear regressions and their auto-Gaussian counterparts for the following georeferenced data sets: Columbus (Ohio) crime, Ottawa-Hull median family income, Toronto population density, southwest Ohio unemployment, Syracuse pediatric lead poisoning, Glasgow standard mortality rates, and a small remotely sensed image of the High Peak district. This methodology is extended to auto-logistic and auto-Poisson situations, with selected data analyses including percentage of urban population across Puerto Rico, and the frequency of SIDS cases across North Carolina. These data analytic results suggest that this approach to georeferenced data analysis offers considerable promise.
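
    For reference, the index being decomposed can be computed in a few lines. A minimal sketch of the Moran Coefficient with a toy adjacency matrix (the eigenvector decomposition used as regression predictors in the paper is not shown):

```python
import numpy as np

def morans_i(x, W):
    """Moran Coefficient I = (n / sum(W)) * (z' W z) / (z' z), z = x - mean."""
    z = x - x.mean()
    return (len(x) / W.sum()) * (z @ W @ z) / (z @ z)

# six regions on a line with rook-style adjacency (illustrative only)
W = np.zeros((6, 6))
for i in range(5):
    W[i, i + 1] = W[i + 1, i] = 1.0

print(morans_i(np.array([1.0, 1.2, 1.1, 5.0, 5.2, 4.9]), W))  # clustered: I > 0
print(morans_i(np.array([1.0, 5.0, 1.2, 5.2, 1.1, 4.9]), W))  # interleaved: I < 0
```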

  8. Improved hospital-level risk adjustment for surveillance of healthcare-associated bloodstream infections: a retrospective cohort study

    PubMed Central

    2009-01-01

    Background To allow direct comparison of bloodstream infection (BSI) rates between hospitals for performance measurement, observed rates need to be risk adjusted according to the types of patients cared for by the hospital. However, attribute data on all individual patients are often unavailable and hospital-level risk adjustment needs to be done using indirect indicator variables of patient case mix, such as hospital level. We aimed to identify medical services associated with high or low BSI rates, and to evaluate the services provided by the hospital as indicators that can be used for more objective hospital-level risk adjustment. Methods From February 2001 to December 2007, 1719 monthly BSI counts were available from 18 hospitals in Queensland, Australia. BSI outcomes were stratified into four groups: overall BSI (OBSI), Staphylococcus aureus BSI (STAPH), intravascular device-related S. aureus BSI (IVD-STAPH) and methicillin-resistant S. aureus BSI (MRSA). Twelve services were considered as candidate risk-adjustment variables. For OBSI, STAPH and IVD-STAPH, we developed generalized estimating equation Poisson regression models that accounted for autocorrelation in longitudinal counts. Due to a lack of autocorrelation, a standard logistic regression model was specified for MRSA. Results Four risk services were identified for OBSI: AIDS (IRR 2.14, 95% CI 1.20 to 3.82), infectious diseases (IRR 2.72, 95% CI 1.97 to 3.76), oncology (IRR 1.60, 95% CI 1.29 to 1.98) and bone marrow transplants (IRR 1.52, 95% CI 1.14 to 2.03). Four protective services were also found. A similar but smaller group of risk and protective services were found for the other outcomes. Acceptable agreement between observed and fitted values was found for the OBSI and STAPH models but not for the IVD-STAPH and MRSA models. However, the IVD-STAPH and MRSA models successfully discriminated between hospitals with higher and lower BSI rates. Conclusion The high model goodness-of-fit and the higher

  9. Ridge regression processing

    NASA Technical Reports Server (NTRS)

    Kuhl, Mark R.

    1990-01-01

    Current navigation requirements depend on a geometric dilution of precision (GDOP) criterion. As long as the GDOP stays below a specific value, navigation requirements are met. The GDOP will exceed the specified value when the measurement geometry becomes too collinear. A new signal processing technique, called Ridge Regression Processing, can reduce the effects of nearly collinear measurement geometry; thereby reducing the inflation of the measurement errors. It is shown that the Ridge signal processor gives a consistently better mean squared error (MSE) in position than the Ordinary Least Mean Squares (OLS) estimator. The applicability of this technique is currently being investigated to improve the following areas: receiver autonomous integrity monitoring (RAIM), coverage requirements, availability requirements, and precision approaches.
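
    A toy numpy sketch of the effect described here: on a nearly collinear geometry matrix, the biased ridge estimate (G'G + kI)^{-1} G'y achieves lower mean squared error than OLS. All numbers are illustrative assumptions, not the navigation model in the report.

```python
import numpy as np

rng = np.random.default_rng(3)

# nearly collinear "measurement geometry": almost parallel rows
G = np.array([[1.00, 0.99],
              [1.00, 1.01],
              [0.99, 1.00]])
x_true = np.array([2.0, -1.0])

def ridge(G, y, k):
    """Ridge estimate (G'G + kI)^{-1} G'y; k = 0 recovers ordinary least squares."""
    return np.linalg.solve(G.T @ G + k * np.eye(G.shape[1]), G.T @ y)

mse = {0.0: 0.0, 0.01: 0.0}
for _ in range(10_000):
    y = G @ x_true + 0.1 * rng.normal(size=3)
    for k in mse:
        mse[k] += np.sum((ridge(G, y, k) - x_true) ** 2)

for k, total in mse.items():
    print(f"k = {k}: MSE = {total / 10_000:.3f}")  # ridge (k > 0) wins here
```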

  10. Reference manual for the POISSON/SUPERFISH Group of Codes

    SciTech Connect

    Not Available

    1987-01-01

    The POISSON/SUPERFISH Group codes were set up to solve two separate problems: the design of magnets and the design of rf cavities in a two-dimensional geometry. The first stage of either problem is to describe the layout of the magnet or cavity in a way that can be used as input to solve the generalized Poisson equation for magnets or the Helmholtz equations for cavities. The computer codes require that the problems be discretized by replacing the differentials (dx,dy) by finite differences (ΔX,ΔY). Instead of defining the function everywhere in a plane, the function is defined only at a finite number of points on a mesh in the plane.
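
    The discretization step described above is the classical 5-point stencil. A minimal, illustrative sketch (POISSON/SUPERFISH use irregular meshes and far more sophisticated solvers; this only shows the finite-difference idea on a uniform square mesh):

```python
import numpy as np

# solve d2u/dx2 + d2u/dy2 = -f on the unit square, u = 0 on the boundary,
# by replacing derivatives with 5-point finite differences and relaxing
# with Jacobi sweeps
n, h = 33, 1.0 / 32
f = np.ones((n, n))       # unit source term
u = np.zeros((n, n))      # boundary frame stays at the Dirichlet value 0

for _ in range(2000):
    u[1:-1, 1:-1] = 0.25 * (u[:-2, 1:-1] + u[2:, 1:-1] +
                            u[1:-1, :-2] + u[1:-1, 2:] +
                            h * h * f[1:-1, 1:-1])

print("peak potential:", u.max())   # ~0.0737 for this classic test problem
```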

  11. Correlation between supercooled liquid relaxation and glass Poisson's ratio.

    PubMed

    Sun, Qijing; Hu, Lina; Zhou, Chao; Zheng, Haijiao; Yue, Yuanzheng

    2015-10-28

    We report on a correlation between the supercooled liquid (SL) relaxation and glass Poisson's ratio (v) by comparing the activation energy ratio (r) of the α and the slow β relaxations and the v values for both metallic and nonmetallic glasses. Poisson's ratio v generally increases with an increase in the ratio r and this relation can be described by the empirical function v = 0.5 - A*exp(-B*r), where A and B are constants. This correlation might imply that glass plasticity is associated with the competition between the α and the slow β relaxations in SLs. The underlying physics of this correlation lies in the heredity of the structural heterogeneity from liquid to glass. This work gives insights into both the microscopic mechanism of glass deformation through the SL dynamics and the complex structural evolution during liquid-glass transition. PMID:26520524
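
    The empirical relation quoted here is straightforward to fit to tabulated (r, v) pairs. A hedged scipy sketch with hypothetical data points standing in for the glass measurements in the paper:

```python
import numpy as np
from scipy.optimize import curve_fit

def poisson_ratio(r, A, B):
    """Empirical relation v = 0.5 - A * exp(-B * r) from the abstract."""
    return 0.5 - A * np.exp(-B * r)

# hypothetical activation-energy ratios and Poisson's ratios (illustrative)
r = np.array([1.0, 1.5, 2.0, 3.0, 4.0, 6.0])
v = np.array([0.28, 0.31, 0.34, 0.38, 0.41, 0.45])

(A, B), _ = curve_fit(poisson_ratio, r, v, p0=(0.3, 0.3))
print(f"A = {A:.3f}, B = {B:.3f}")
```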

  12. Image deconvolution under Poisson noise using SURE-LET approach

    NASA Astrophysics Data System (ADS)

    Xue, Feng; Liu, Jiaqi; Meng, Gang; Yan, Jing; Zhao, Min

    2015-10-01

    We propose an image deconvolution algorithm for data contaminated by Poisson noise. By minimizing Stein's unbiased risk estimate (SURE), the SURE-LET method was first proposed to deal with Gaussian noise corruption. Our key contribution is to demonstrate that the SURE-LET approach is also applicable to Poisson-noisy images and to propose an efficient algorithm. The formulation of SURE requires knowledge of the Gaussian noise variance. We experimentally found a simple and direct link between the noise variance estimated by the median absolute difference (MAD) method and the optimal one that leads to the best deconvolution performance in terms of mean squared error (MSE). Extensive experiments show that this optimal noise variance works satisfactorily for a wide range of natural images.
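
    The MAD-based noise estimate the method starts from can be sketched in a few lines; here a crude Haar-like diagonal detail stands in for the wavelet transform, so this is illustrative only.

```python
import numpy as np

def mad_sigma(img):
    """Robust Gaussian noise-level estimate: median absolute deviation of the
    finest diagonal detail, divided by 0.6745 (the MAD of a unit Gaussian)."""
    d = (img[1::2, 1::2] - img[1::2, ::2] -
         img[::2, 1::2] + img[::2, ::2]) / 2.0   # removes smooth image content
    return np.median(np.abs(d - np.median(d))) / 0.6745

rng = np.random.default_rng(4)
clean = np.outer(np.linspace(0, 1, 256), np.linspace(0, 1, 256))
print(mad_sigma(clean + 0.05 * rng.normal(size=clean.shape)))  # close to 0.05
```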

  13. Quantized Nambu-Poisson manifolds and n-Lie algebras

    SciTech Connect

    DeBellis, Joshua; Saemann, Christian; Szabo, Richard J.

    2010-12-15

    We investigate the geometric interpretation of quantized Nambu-Poisson structures in terms of noncommutative geometries. We describe an extension of the usual axioms of quantization in which classical Nambu-Poisson structures are translated to n-Lie algebras at quantum level. We demonstrate that this generalized procedure matches an extension of Berezin-Toeplitz quantization yielding quantized spheres, hyperboloids, and superspheres. The extended Berezin quantization of spheres is closely related to a deformation quantization of n-Lie algebras as well as the approach based on harmonic analysis. We find an interpretation of Nambu-Heisenberg n-Lie algebras in terms of foliations of R^n by fuzzy spheres, fuzzy hyperboloids, and noncommutative hyperplanes. Some applications to the quantum geometry of branes in M-theory are also briefly discussed.

  14. MODELING PAVEMENT DETERIORATION PROCESSES BY POISSON HIDDEN MARKOV MODELS

    NASA Astrophysics Data System (ADS)

    Nam, Le Thanh; Kaito, Kiyoyuki; Kobayashi, Kiyoshi; Okizuka, Ryosuke

    In pavement management, it is important to estimate lifecycle cost, which is composed of the expenses for repairing local damage, including potholes, and repairing and rehabilitating the surface and base layers of pavements, including overlays. In this study, a model is produced under the assumption that the deterioration process of pavement is a complex one that includes local damage, which occurs frequently, and the deterioration of the surface and base layers of pavement, which progresses slowly. The variation in pavement soundness is expressed by the Markov deterioration model, and the Poisson hidden Markov deterioration model, in which the frequency of local damage depends on the distribution of pavement soundness, is formulated. In addition, the authors suggest a model estimation method using the Markov Chain Monte Carlo (MCMC) method, and attempt to demonstrate the applicability of the proposed Poisson hidden Markov deterioration model by studying concrete application cases.

  15. Correlation between supercooled liquid relaxation and glass Poisson's ratio

    NASA Astrophysics Data System (ADS)

    Sun, Qijing; Hu, Lina; Zhou, Chao; Zheng, Haijiao; Yue, Yuanzheng

    2015-10-01

    We report on a correlation between the supercooled liquid (SL) relaxation and glass Poisson's ratio (v) by comparing the activation energy ratio (r) of the α and the slow β relaxations and the v values for both metallic and nonmetallic glasses. Poisson's ratio v generally increases with an increase in the ratio r and this relation can be described by the empirical function v = 0.5 - A*exp(-B*r), where A and B are constants. This correlation might imply that glass plasticity is associated with the competition between the α and the slow β relaxations in SLs. The underlying physics of this correlation lies in the heredity of the structural heterogeneity from liquid to glass. This work gives insights into both the microscopic mechanism of glass deformation through the SL dynamics and the complex structural evolution during liquid-glass transition.

  16. Invariants and labels for Lie-Poisson Systems

    SciTech Connect

    Thiffeault, J.L.; Morrison, P.J.

    1998-04-01

    Reduction is a process that uses symmetry to lower the order of a Hamiltonian system. The new variables in the reduced picture are often not canonical: there are no clear variables representing positions and momenta, and the Poisson bracket obtained is not of the canonical type. Specifically, we give two examples that give rise to brackets of the noncanonical Lie-Poisson form: the rigid body and the two-dimensional ideal fluid. From these simple cases, we then use the semidirect product extension of algebras to describe more complex physical systems. The Casimir invariants in these systems are examined, and some are shown to be linked to the recovery of information about the configuration of the system. We discuss a case in which the extension is not a semidirect product, namely compressible reduced MHD, and find for this case that the Casimir invariants lend partial information about the configuration of the system.

  17. Finite-size effects and percolation properties of Poisson geometries.

    PubMed

    Larmier, C; Dumonteil, E; Malvagi, F; Mazzolo, A; Zoia, A

    2016-07-01

    Random tessellations of the space represent a class of prototype models of heterogeneous media, which are central in several applications in physics, engineering, and life sciences. In this work, we investigate the statistical properties of d-dimensional isotropic Poisson geometries by resorting to Monte Carlo simulation, with special emphasis on the case d=3. We first analyze the behavior of the key features of these stochastic geometries as a function of the dimension d and the linear size L of the domain. Then, we consider the case of Poisson binary mixtures, where the polyhedra are assigned two labels with complementary probabilities. For this latter class of random geometries, we numerically characterize the percolation threshold, the strength of the percolating cluster, and the average cluster size. PMID:27575099

  18. Improved Poisson solver for cfa/magnetron simulation

    SciTech Connect

    Dombrowski, G.E.

    1996-12-31

    E_dc, the static field of a device having vane-shaped anodes, has been determined by application of Hockney's method, which in turn uses Buneman's cyclic reduction. This result can be used for both cfa and magnetrons, but does not solve the general space-charge fields. As pointed out by Hockney, the matrix of coupling capacitive factors between the vane-defining mesh points can also be used to solve the Poisson equation for the entire cathode-anode domain. Space-charge fields of electrons between anode electrodes can now be determined. This technique also computes the Ramo function for the entire region. This method has been applied to the magnetron. Extension to the cfa with many different space-charge bunches does not appear to be practicable. Calculations for the type 4J50 magnetron by the various degrees of accuracy in solving the Poisson equation are compared with experimental measurements.

  19. Quantized Nambu-Poisson manifolds and n-Lie algebras

    NASA Astrophysics Data System (ADS)

    DeBellis, Joshua; Sämann, Christian; Szabo, Richard J.

    2010-12-01

    We investigate the geometric interpretation of quantized Nambu-Poisson structures in terms of noncommutative geometries. We describe an extension of the usual axioms of quantization in which classical Nambu-Poisson structures are translated to n-Lie algebras at quantum level. We demonstrate that this generalized procedure matches an extension of Berezin-Toeplitz quantization yielding quantized spheres, hyperboloids, and superspheres. The extended Berezin quantization of spheres is closely related to a deformation quantization of n-Lie algebras as well as the approach based on harmonic analysis. We find an interpretation of Nambu-Heisenberg n-Lie algebras in terms of foliations of R^n by fuzzy spheres, fuzzy hyperboloids, and noncommutative hyperplanes. Some applications to the quantum geometry of branes in M-theory are also briefly discussed.

  20. Intrinsic Negative Poisson's Ratio for Single-Layer Graphene.

    PubMed

    Jiang, Jin-Wu; Chang, Tienchong; Guo, Xingming; Park, Harold S

    2016-08-10

    Negative Poisson's ratio (NPR) materials have drawn significant interest because the enhanced toughness, shear resistance, and vibration absorption that typically are seen in auxetic materials may enable a range of novel applications. In this work, we report that single-layer graphene exhibits an intrinsic NPR, which is robust and independent of its size and temperature. The NPR arises due to the interplay between two intrinsic deformation pathways (one with positive Poisson's ratio, the other with NPR), which correspond to the bond stretching and angle bending interactions in graphene. We propose an energy-based deformation pathway criteria, which predicts that the pathway with NPR has lower energy and thus becomes the dominant deformation mode when graphene is stretched by a strain above 6%, resulting in the NPR phenomenon. PMID:27408994

  1. Self-Attracting Poisson Clouds in an Expanding Universe

    NASA Astrophysics Data System (ADS)

    Bertoin, Jean

    We consider the following elementary model for clustering by ballistic aggregation in an expanding universe. At the initial time, there is a doubly infinite sequence of particles lying in a one-dimensional universe that is expanding at constant rate. We suppose that each particle p attracts points at a certain rate a(p)/2 depending only on p, and when two particles, say p and q, collide by the effect of attraction, they merge as a single particle p*q. The main purpose of this work is to point at the following remarkable property of Poisson clouds in these dynamics. Under certain technical conditions, if at the initial time the system is distributed according to a spatially stationary Poisson cloud with intensity μ0, then at any time t > 0, the system will again have a Poissonian distribution, now with intensity μt, where the family of intensities (μt, t ≥ 0) solves a generalization of Smoluchowski's coagulation equation.

  2. A comparison between simulation and poisson-boltzmann fields

    NASA Astrophysics Data System (ADS)

    Pettitt, B. Montgomery; Valdeavella, C. V.

    1999-11-01

    The electrostatic potentials from molecular dynamics (MD) trajectories and Poisson-Boltzmann calculations on a tetrapeptide are compared to understand the validity of the resulting free energy surface. The Tuftsin peptide with sequence, Thr-Lys-Pro-Arg, in water is used for the comparison. The results obtained from the analysis of the MD trajectories for the total electrostatic potential at points on a grid using the Ewald technique are compared with the solution to the Poisson-Boltzmann (PB) equation averaged over the same set of configurations. The latter was solved using an optimal set of dielectric constant parameters. Structural averaging of the field over the MD simulation was examined in the context of the PB results. The detailed spatial variation of the electrostatic potential on the molecular surface is not qualitatively reproducible from MD to PB. Implications of using such field calculations and the implied free energies are discussed.

  3. New method for blowup of the Euler-Poisson system

    NASA Astrophysics Data System (ADS)

    Kwong, Man Kam; Yuen, Manwai

    2016-08-01

    In this paper, we provide a new method for establishing the blowup of C² solutions for the pressureless Euler-Poisson system with attractive forces for R^N (N ≥ 2) with ρ(0, x_0) > 0 and Ω_0^{ij}(x_0) = (1/2)[∂_i u_j(0, x_0) − ∂_j u_i(0, x_0)] = 0 at some point x_0 ∈ R^N. By applying the generalized Hubble transformation div u(t, x_0(t)) = N ȧ(t)/a(t) to a reduced Riccati differential inequality derived from the system, we simplify the inequality into the Emden equation ä(t) = −λ/a(t)^{N−1}, with a(0) = 1 and ȧ(0) = div u(0, x_0)/N. Known results on its blowup set allow us to easily obtain the blowup conditions of the Euler-Poisson system.

  4. Finite-size effects and percolation properties of Poisson geometries

    NASA Astrophysics Data System (ADS)

    Larmier, C.; Dumonteil, E.; Malvagi, F.; Mazzolo, A.; Zoia, A.

    2016-07-01

    Random tessellations of the space represent a class of prototype models of heterogeneous media, which are central in several applications in physics, engineering, and life sciences. In this work, we investigate the statistical properties of d-dimensional isotropic Poisson geometries by resorting to Monte Carlo simulation, with special emphasis on the case d = 3. We first analyze the behavior of the key features of these stochastic geometries as a function of the dimension d and the linear size L of the domain. Then, we consider the case of Poisson binary mixtures, where the polyhedra are assigned two labels with complementary probabilities. For this latter class of random geometries, we numerically characterize the percolation threshold, the strength of the percolating cluster, and the average cluster size.

  5. Filtering with Marked Point Process Observations via Poisson Chaos Expansion

    SciTech Connect

    Sun Wei; Zeng Yong; Zhang Shu

    2013-06-15

    We study a general filtering problem with marked point process observations. The motivation comes from modeling financial ultra-high frequency data. First, we rigorously derive the unnormalized filtering equation with marked point process observations under mild assumptions, especially relaxing the bounded condition of stochastic intensity. Then, we derive the Poisson chaos expansion for the unnormalized filter. Based on the chaos expansion, we establish the uniqueness of solutions of the unnormalized filtering equation. Moreover, we derive the Poisson chaos expansion for the unnormalized filter density under additional conditions. To explore the computational advantage, we further construct a new consistent recursive numerical scheme based on the truncation of the chaos density expansion for a simple case. The new algorithm divides the computations into those containing solely system coefficients and those including the observations, and assigns the former to off-line computation.

  6. Tensorial Basis Spline Collocation Method for Poisson's Equation

    NASA Astrophysics Data System (ADS)

    Plagne, Laurent; Berthou, Jean-Yves

    2000-01-01

    This paper aims to describe the tensorial basis spline collocation method applied to Poisson's equation. In the case of a localized 3D charge distribution in vacuum, this direct method based on a tensorial decomposition of the differential operator is shown to be competitive with both iterative BSCM and FFT-based methods. We emphasize the O(h⁴) and O(h⁶) convergence of TBSCM for cubic and quintic splines, respectively. We describe the implementation of this method on a distributed memory parallel machine. Performance measurements on a Cray T3E are reported. Our code exhibits high performance and good scalability: As an example, a 27 Gflops performance is obtained when solving Poisson's equation on a 256³ non-uniform 3D Cartesian mesh by using 128 T3E-750 processors. This represents 215 Mflops per processor.

  7. Computing measures of explained variation for logistic regression models.

    PubMed

    Mittlböck, M; Schemper, M

    1999-01-01

    The proportion of explained variation (R²) is frequently used in the general linear model but in logistic regression no standard definition of R² exists. We present a SAS macro which calculates two R² measures based on Pearson and on deviance residuals for logistic regression. Also, adjusted versions for both measures are given, which should prevent the inflation of R² in small samples. PMID:10195643
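
    One common pair of definitions behind such measures is sketched below with synthetic data; the adjusted, small-sample versions provided by the SAS macro are not reproduced here.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(5)
X = rng.normal(size=(500, 3))
y = (X @ np.array([1.0, -1.0, 0.5]) + rng.logistic(size=500) > 0).astype(int)

p = LogisticRegression().fit(X, y).predict_proba(X)[:, 1]
p0 = np.full_like(p, y.mean())                  # null model: intercept only

def deviance(y, p):
    return -2.0 * np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))

def pearson_ss(y, p):
    return np.sum((y - p) ** 2 / (p * (1 - p)))

print("R2 (deviance):", 1 - deviance(y, p) / deviance(y, p0))
print("R2 (Pearson): ", 1 - pearson_ss(y, p) / pearson_ss(y, p0))
```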

  8. Binomial and Poisson Mixtures, Maximum Likelihood, and Maple Code

    SciTech Connect

    Bowman, Kimiko o; Shenton, LR

    2006-01-01

    The bias, variance, and skewness of maximum likelihood estimators are considered for binomial and Poisson mixture distributions. The moments considered are asymptotic, and they are assessed using the Maple code. The question of the existence of solutions and Karl Pearson's study are mentioned, along with the problem of a valid sample space. Large samples to reduce variances are not unusual; this also applies to the size of the asymptotic skewness.

  9. Events in time: Basic analysis of Poisson data

    SciTech Connect

    Engelhardt, M.E.

    1994-09-01

    The report presents basic statistical methods for analyzing Poisson data, such as the number of events in some period of time. It gives point estimates, confidence intervals, and Bayesian intervals for the rate of occurrence per unit of time. It shows how to compare subsets of the data, both graphically and by statistical tests, and how to look for trends in time. It presents a compound model when the rate of occurrence varies randomly. Examples and SAS programs are given.
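
    The basic point and interval estimates described here are short to compute. A hedged scipy sketch of the standard exact (chi-square based) interval for a Poisson occurrence rate; the event count and exposure below are illustrative.

```python
from scipy.stats import chi2

def poisson_rate_ci(x, t, conf=0.95):
    """Point estimate and exact confidence interval for the occurrence rate
    of a Poisson process, given x events observed over exposure t."""
    a = 1.0 - conf
    lower = chi2.ppf(a / 2, 2 * x) / (2 * t) if x > 0 else 0.0
    upper = chi2.ppf(1 - a / 2, 2 * (x + 1)) / (2 * t)
    return x / t, (lower, upper)

# e.g. 5 events in 10 unit-time periods (illustrative numbers)
print(poisson_rate_ci(5, 10.0))   # about (0.5, (0.162, 1.167))
```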

  10. Studying Resist Stochastics with the Multivariate Poisson Propagation Model

    DOE PAGES Beta

    Naulleau, Patrick; Anderson, Christopher; Chao, Weilun; Bhattarai, Suchit; Neureuther, Andrew

    2014-01-01

    Progress in the ultimate performance of extreme ultraviolet resists has arguably decelerated in recent years, suggesting an approach to stochastic limits both in photon counts and material parameters. Here we report on the performance of a variety of leading extreme ultraviolet resists, both with and without chemical amplification. The measured performance is compared to stochastic modeling results using the Multivariate Poisson Propagation Model. The results show that the best materials are indeed nearing modeled performance limits.

  11. On third Poisson structure of KdV equation

    SciTech Connect

    Gorsky, A.; Marshakov, A.; Orlov, A.

    1995-12-01

    The third Poisson structure of the KdV equation in terms of canonical 'free fields' and the reduced WZNW model is discussed. We prove that it is "diagonalized" in the Lagrange variables which were used before in the formulation of 2d gravity. We propose a quantum path integral for the KdV equation based on this representation.

  12. Poisson reduction for nonholonomic mechanical systems with symmetry

    NASA Astrophysics Data System (ADS)

    Wang Sang Koon; Marsden, Jerrold E.

    1998-10-01

    This paper continues the work of Koon and Marsden [10] that began the comparison of the Hamiltonian and Lagrangian formulations of nonholonomic systems. Because of the necessary replacement of conservation laws with the momentum equation, it is natural to let the value of momentum be a variable and for this reason it is natural to take a Poisson viewpoint. Some of this theory has been started in van der Schaft and Maschke [24]. We build on their work, further develop the theory of nonholonomic Poisson reduction, and tie this theory to other work in the area. We use this reduction procedure to organize nonholonomic dynamics into a reconstruction equation, a nonholonomic momentum equation and the reduced Lagrange-d'Alembert equations in Hamiltonian form. We also show that these equations are equivalent to those given by the Lagrangian reduction methods of Bloch, Krishnaprasad, Marsden and Murray [4]. Because of the results of Koon and Marsden [10], this is also equivalent to the results of Bates and Śniatycki [2], obtained by nonholonomic symplectic reduction. Two interesting complications make this effort especially interesting. First of all, as we have mentioned, symmetry need not lead to conservation laws but rather to a momentum equation. Second, the natural Poisson bracket fails to satisfy the Jacobi identity. In fact, the so-called Jacobiizer (the cyclic sum that vanishes when the Jacobi identity holds), or equivalently, the Schouten bracket, is an interesting expression involving the curvature of the underlying distribution describing the nonholonomic constraints. The Poisson reduction results in this paper are important for the future development of the stability theory for nonholonomic mechanical systems with symmetry, as begun by Zenkov, Bloch and Marsden [25]. In particular, they should be useful for the development of the powerful block diagonalization properties of the energy-momentum method developed by Simo, Lewis and Marsden [23].

  13. A Poisson-lognormal conditional-autoregressive model for multivariate spatial analysis of pedestrian crash counts across neighborhoods.

    PubMed

    Wang, Yiyi; Kockelman, Kara M

    2013-11-01

    This work examines the relationship between 3-year pedestrian crash counts across Census tracts in Austin, Texas, and various land use, network, and demographic attributes, such as land use balance, residents' access to commercial land uses, sidewalk density, lane-mile densities (by roadway class), and population and employment densities (by type). The model specification allows for region-specific heterogeneity, correlation across response types, and spatial autocorrelation via a Poisson-based multivariate conditional auto-regressive (CAR) framework and is estimated using Bayesian Markov chain Monte Carlo methods. Least-squares regression estimates of walk-miles traveled per zone serve as the exposure measure. Here, the Poisson-lognormal multivariate CAR model outperforms an aspatial Poisson-lognormal multivariate model and a spatial model (without cross-severity correlation), both in terms of fit and inference. Positive spatial autocorrelation emerges across neighborhoods, as expected (due to latent heterogeneity or missing variables that trend in space, resulting in spatial clustering of crash counts). In comparison, the positive aspatial, bivariate cross correlation of severe (fatal or incapacitating) and non-severe crash rates reflects latent covariates that have impacts across severity levels but are more local in nature (such as lighting conditions and local sight obstructions), along with spatially lagged cross correlation. Results also suggest greater mixing of residences and commercial land uses is associated with higher pedestrian crash risk across different severity levels, ceteris paribus, presumably since such access produces more potential conflicts between pedestrian and vehicle movements. Interestingly, network densities show variable effects, and sidewalk provision is associated with lower severe-crash rates. PMID:24036167

  14. Numerical methods for the Poisson-Fermi equation in electrolytes

    NASA Astrophysics Data System (ADS)

    Liu, Jinn-Liang

    2013-08-01

    The Poisson-Fermi equation proposed by Bazant, Storey, and Kornyshev [Phys. Rev. Lett. 106 (2011) 046102] for ionic liquids is applied to and numerically studied for electrolytes and biological ion channels in three-dimensional space. This is a fourth-order nonlinear PDE that deals with both steric and correlation effects of all ions and solvent molecules involved in a model system. The Fermi distribution follows from classical lattice models of configurational entropy of finite size ions and solvent molecules and hence avoids the long-standing problem of unphysical divergence predicted by the Gouy-Chapman model at large potentials due to the Boltzmann distribution of point charges. The equation reduces to Poisson-Boltzmann if the correlation length vanishes. A simplified matched interface and boundary method exhibiting optimal convergence is first developed for this equation by using a gramicidin A channel model that illustrates challenging issues associated with the geometric singularities of molecular surfaces of channel proteins in realistic 3D simulations. Various numerical methods then follow to tackle a range of numerical problems concerning the fourth-order term, nonlinearity, stability, efficiency, and effectiveness. The most significant feature of the Poisson-Fermi equation, namely, its inclusion of steric and correlation effects, is demonstrated by showing good agreement with Monte Carlo simulation data for a charged wall model and an L type calcium channel model.

  15. Blind beam-hardening correction from Poisson measurements

    NASA Astrophysics Data System (ADS)

    Gu, Renliang; Dogandžić, Aleksandar

    2016-02-01

    We develop a sparse image reconstruction method for Poisson-distributed polychromatic X-ray computed tomography (CT) measurements under the blind scenario where the material of the inspected object and the incident energy spectrum are unknown. We employ our mass-attenuation spectrum parameterization of the noiseless measurements and express the mass-attenuation spectrum as a linear combination of B-spline basis functions of order one. A block coordinate-descent algorithm is developed for constrained minimization of a penalized Poisson negative log-likelihood (NLL) cost function, where constraints and penalty terms ensure nonnegativity of the spline coefficients and nonnegativity and sparsity of the density map image; the image sparsity is imposed using a convex total-variation (TV) norm penalty term. This algorithm alternates between a Nesterov's proximal-gradient (NPG) step for estimating the density map image and a limited-memory Broyden-Fletcher-Goldfarb-Shanno with box constraints (L-BFGS-B) step for estimating the incident-spectrum parameters. To accelerate convergence of the density-map NPG steps, we apply function restart and a step-size selection scheme that accounts for varying local Lipschitz constants of the Poisson NLL. Real X-ray CT reconstruction examples demonstrate the performance of the proposed scheme.

  16. Assessment of Linear Finite-Difference Poisson-Boltzmann Solvers

    PubMed Central

    Wang, Jun; Luo, Ray

    2009-01-01

    CPU time and memory usage are two vital issues that any numerical solvers for the Poisson-Boltzmann equation have to face in biomolecular applications. In this study we systematically analyzed the CPU time and memory usage of five commonly used finite-difference solvers with a large and diversified set of biomolecular structures. Our comparative analysis shows that modified incomplete Cholesky conjugate gradient and geometric multigrid are the most efficient in the diversified test set. For the two efficient solvers, our test shows that their CPU times increase approximately linearly with the numbers of grids. Their CPU times also increase almost linearly with the negative logarithm of the convergence criterion at very similar rate. Our comparison further shows that geometric multigrid performs better in the large set of tested biomolecules. However, modified incomplete Cholesky conjugate gradient is superior to geometric multigrid in molecular dynamics simulations of tested molecules. We also investigated other significant components in numerical solutions of the Poisson-Boltzmann equation. It turns out that the time-limiting step is the free boundary condition setup for the linear systems for the selected proteins if the electrostatic focusing is not used. Thus, development of future numerical solvers for the Poisson-Boltzmann equation should balance all aspects of the numerical procedures in realistic biomolecular applications. PMID:20063271

  17. Assessment of linear finite-difference Poisson-Boltzmann solvers.

    PubMed

    Wang, Jun; Luo, Ray

    2010-06-01

    CPU time and memory usage are two vital issues that any numerical solvers for the Poisson-Boltzmann equation have to face in biomolecular applications. In this study, we systematically analyzed the CPU time and memory usage of five commonly used finite-difference solvers with a large and diversified set of biomolecular structures. Our comparative analysis shows that modified incomplete Cholesky conjugate gradient and geometric multigrid are the most efficient in the diversified test set. For the two efficient solvers, our test shows that their CPU times increase approximately linearly with the numbers of grids. Their CPU times also increase almost linearly with the negative logarithm of the convergence criterion at very similar rate. Our comparison further shows that geometric multigrid performs better in the large set of tested biomolecules. However, modified incomplete Cholesky conjugate gradient is superior to geometric multigrid in molecular dynamics simulations of tested molecules. We also investigated other significant components in numerical solutions of the Poisson-Boltzmann equation. It turns out that the time-limiting step is the free boundary condition setup for the linear systems for the selected proteins if the electrostatic focusing is not used. Thus, development of future numerical solvers for the Poisson-Boltzmann equation should balance all aspects of the numerical procedures in realistic biomolecular applications. PMID:20063271

  18. Matrix decomposition graphics processing unit solver for Poisson image editing

    NASA Astrophysics Data System (ADS)

    Lei, Zhao; Wei, Li

    2012-10-01

    In recent years, gradient-domain methods have been widely discussed in the image processing field, including seamless cloning and image stitching. These algorithms are commonly carried out by solving a large sparse linear system: the Poisson equation. However, solving the Poisson equation is a computational and memory intensive task which makes it not suitable for real-time image editing. A new matrix decomposition graphics processing unit (GPU) solver (MDGS) is proposed to settle the problem. A matrix decomposition method is used to distribute the work among GPU threads, so that MDGS will take full advantage of the computing power of current GPUs. Additionally, MDGS is a hybrid solver (combining both direct and iterative techniques) and has a two-level architecture. These enable MDGS to generate solutions identical to those of the common Poisson methods and achieve a high convergence rate in most cases. This approach is advantageous in terms of parallelizability, enabling real-time image processing, low memory consumption, and wide applicability.
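
    The sparse linear system at the heart of such methods is the discrete Poisson equation with the source patch's Laplacian as guidance field. A minimal CPU sketch using a direct sparse solve (MDGS distributes and hybridizes this step on the GPU; the function below is purely illustrative):

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def poisson_blend(src, dst):
    """Seamless cloning over the full interior of `dst`: keep `dst` on the
    boundary (Dirichlet) and match the Laplacian of `src` inside."""
    h, w = src.shape
    A = sp.lil_matrix((h * w, h * w))
    b = np.zeros(h * w)
    idx = lambda i, j: i * w + j
    for i in range(h):
        for j in range(w):
            k = idx(i, j)
            if i in (0, h - 1) or j in (0, w - 1):
                A[k, k] = 1.0
                b[k] = dst[i, j]          # boundary pixels stay untouched
            else:
                A[k, k] = 4.0
                for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                    A[k, idx(i + di, j + dj)] = -1.0
                # guidance field: Laplacian of the source patch
                b[k] = 4 * src[i, j] - (src[i - 1, j] + src[i + 1, j] +
                                        src[i, j - 1] + src[i, j + 1])
    return spla.spsolve(A.tocsr(), b).reshape(h, w)

rng = np.random.default_rng(6)
print(poisson_blend(rng.random((16, 16)), rng.random((16, 16))).shape)
```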

  19. A generalized Poisson solver for first-principles device simulations

    NASA Astrophysics Data System (ADS)

    Bani-Hashemian, Mohammad Hossein; Brück, Sascha; Luisier, Mathieu; VandeVondele, Joost

    2016-01-01

    Electronic structure calculations of atomistic systems based on density functional theory involve solving the Poisson equation. In this paper, we present a plane-wave based algorithm for solving the generalized Poisson equation subject to periodic or homogeneous Neumann conditions on the boundaries of the simulation cell and Dirichlet type conditions imposed at arbitrary subdomains. In this way, source, drain, and gate voltages can be imposed across atomistic models of electronic devices. Dirichlet conditions are enforced as constraints in a variational framework giving rise to a saddle point problem. The resulting system of equations is then solved using a stationary iterative method in which the generalized Poisson operator is preconditioned with the standard Laplace operator. The solver can make use of any sufficiently smooth function modelling the dielectric constant, including density dependent dielectric continuum models. For all the boundary conditions, consistent derivatives are available and molecular dynamics simulations can be performed. The convergence behaviour of the scheme is investigated and its capabilities are demonstrated.

  20. Poisson-like spiking in circuits with probabilistic synapses.

    PubMed

    Moreno-Bote, Rubén

    2014-07-01

    Neuronal activity in cortex is variable both spontaneously and during stimulation, and it has the remarkable property of being Poisson-like over a broad range of firing rates, from virtually zero to hundreds of spikes per second. The mechanisms underlying cortical-like spiking variability over such a broad continuum of rates are currently unknown. We show that neuronal networks endowed with probabilistic synaptic transmission, a well-documented source of variability in cortex, robustly generate Poisson-like variability over several orders of magnitude in their firing rate without fine-tuning of the network parameters. Other sources of variability, such as random synaptic delays or spike generation jittering, do not lead to Poisson-like variability at high rates because they cannot be sufficiently amplified by recurrent neuronal networks. We also show that probabilistic synapses predict Fano factor constancy of synaptic conductances. Our results suggest that synaptic noise is a robust and sufficient mechanism for the type of variability found in cortex. PMID:25032705

  1. A generalized Poisson solver for first-principles device simulations.

    PubMed

    Bani-Hashemian, Mohammad Hossein; Brück, Sascha; Luisier, Mathieu; VandeVondele, Joost

    2016-01-28

    Electronic structure calculations of atomistic systems based on density functional theory involve solving the Poisson equation. In this paper, we present a plane-wave based algorithm for solving the generalized Poisson equation subject to periodic or homogeneous Neumann conditions on the boundaries of the simulation cell and Dirichlet type conditions imposed at arbitrary subdomains. In this way, source, drain, and gate voltages can be imposed across atomistic models of electronic devices. Dirichlet conditions are enforced as constraints in a variational framework giving rise to a saddle point problem. The resulting system of equations is then solved using a stationary iterative method in which the generalized Poisson operator is preconditioned with the standard Laplace operator. The solver can make use of any sufficiently smooth function modelling the dielectric constant, including density dependent dielectric continuum models. For all the boundary conditions, consistent derivatives are available and molecular dynamics simulations can be performed. The convergence behaviour of the scheme is investigated and its capabilities are demonstrated. PMID:26827208

  2. Improved central confidence intervals for the ratio of Poisson means

    NASA Astrophysics Data System (ADS)

    Cousins, R. D.

    The problem of confidence intervals for the ratio of two unknown Poisson means was "solved" decades ago, but a closer examination reveals that the standard solution is far from optimal from the frequentist point of view. We construct a more powerful set of central confidence intervals, each of which is a (typically proper) subinterval of the corresponding standard interval. They also provide upper and lower confidence limits which are more restrictive than the standard limits. The construction follows Neyman's original prescription, though discreteness of the Poisson distribution and the presence of a nuisance parameter (one of the unknown means) lead to slightly conservative intervals. Philosophically, the issue of the appropriateness of the construction method is similar to the issue of conditioning on the margins in 2×2 contingency tables. From a frequentist point of view, the new set maintains (over) coverage of the unknown true value of the ratio of means at each stated confidence level, even though the new intervals are shorter than the old intervals by any measure (except for two cases where they are identical). As an example, when the number 2 is drawn from each Poisson population, the 90% CL central confidence interval on the ratio of means is (0.169, 5.196), rather than (0.108, 9.245). In the cited literature, such confidence intervals have applications in numerous branches of pure and applied science, including agriculture, wildlife studies, manufacturing, medicine, reliability theory, and elementary particle physics.
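
    The standard central interval that the paper improves upon has a compact construction: conditioned on x + y = n, the count x is binomial with p = λ1/(λ1 + λ2), so a Clopper-Pearson interval for p maps to one for the ratio R = p/(1 - p). A short SciPy sketch reproduces the standard 90% interval quoted above:

```python
# The standard interval via the conditional-binomial construction; for
# x = y = 2 at 90% CL this reproduces the (0.108, 9.245) quoted above.
from scipy.stats import beta

def standard_ratio_interval(x, y, cl=0.90):
    """Central CI for lambda1/lambda2 given Poisson counts x and y."""
    a = (1.0 - cl) / 2.0
    p_lo = beta.ppf(a, x, y + 1) if x > 0 else 0.0        # Clopper-Pearson
    p_hi = beta.ppf(1.0 - a, x + 1, y) if y > 0 else 1.0
    r_lo = p_lo / (1.0 - p_lo)
    r_hi = p_hi / (1.0 - p_hi) if p_hi < 1.0 else float("inf")
    return r_lo, r_hi

print(standard_ratio_interval(2, 2))   # ~(0.108, 9.245)
```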

  3. Novel negative Poisson's ratio behavior induced by an elastic instability

    NASA Astrophysics Data System (ADS)

    Bertoldi, Katia; Reis, Pedro; Willshaw, Stephen; Mullin, Tom

    2010-03-01

    When materials are compressed along a particular axis they are most commonly observed to expand in directions orthogonal to the applied load. The property that characterizes this behavior is Poisson's ratio, defined as the ratio between the negative transverse and longitudinal strains. Materials with a negative Poisson's ratio will contract in the transverse direction when compressed, and demonstration of practical examples is relatively recent. A significant challenge in the fabrication of auxetic materials is that it usually involves embedding structures with intricate geometries within a host matrix. As such, the manufacturing process has been a bottleneck in the practical development towards applications. Here we exploit elastic instabilities to create novel effects within materials with periodic microstructure, and we show that they may lead to negative Poisson's ratio behavior in 2D periodic structures; notably, the effect occurs only under compression. The uncomplicated manufacturing process of the samples, together with the robustness of the observed phenomena, suggests that this may form the basis of a practical method for constructing planar auxetic materials over a wide range of length-scales.

  4. Image denoising in mixed Poisson-Gaussian noise.

    PubMed

    Luisier, Florian; Blu, Thierry; Unser, Michael

    2011-03-01

    We propose a general methodology (PURE-LET) to design and optimize a wide class of transform-domain thresholding algorithms for denoising images corrupted by mixed Poisson-Gaussian noise. We express the denoising process as a linear expansion of thresholds (LET) that we optimize by relying on a purely data-adaptive unbiased estimate of the mean-squared error (MSE), derived in a non-Bayesian framework (PURE: Poisson-Gaussian unbiased risk estimate). We provide a practical approximation of this theoretical MSE estimate for the tractable optimization of arbitrary transform-domain thresholding. We then propose a pointwise estimator for undecimated filterbank transforms, which consists of subband-adaptive thresholding functions with signal-dependent thresholds that are globally optimized in the image domain. We finally demonstrate the potential of the proposed approach through extensive comparisons with state-of-the-art techniques that are specifically tailored to the estimation of Poisson intensities. We also present denoising results obtained on real images of low-count fluorescence microscopy. PMID:20840902

  5. Recursive Algorithm For Linear Regression

    NASA Technical Reports Server (NTRS)

    Varanasi, S. V.

    1988-01-01

    Order of model determined easily. Linear-regression algorithm includes recursive equations for coefficients of model of increased order. Algorithm eliminates duplicative calculations and facilitates search for minimum order of linear-regression model fitting set of data satisfactorily.
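
    The brief's idea of searching for the minimum satisfactory model order can be illustrated with a short sketch. The version below simply re-fits at each order with NumPy rather than updating coefficients recursively, and the stopping rule is an illustrative choice.

```python
# A re-fitting sketch of the order search (the brief updates coefficients
# recursively; this version simply re-fits at each order). The 5% stopping
# rule is an illustrative choice.
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 50)
y = 1.0 - 2.0 * t + 3.0 * t**2 + 0.01 * rng.standard_normal(t.size)  # true order 2

prev_rss = np.inf
for order in range(8):
    coef = np.polynomial.polynomial.polyfit(t, y, order)
    rss = np.sum((np.polynomial.polynomial.polyval(t, coef) - y) ** 2)
    if rss > 0.95 * prev_rss:          # negligible improvement: stop
        print("minimum satisfactory order:", order - 1)
        break
    prev_rss = rss
```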

  6. ADJUSTABLE DOUBLE PULSE GENERATOR

    DOEpatents

    Gratian, J.W.; Gratian, A.C.

    1961-08-01

    A modulator pulse source having adjustable pulse width and adjustable pulse spacing is described. The generator consists of a cross coupled multivibrator having adjustable time constant circuitry in each leg, an adjustable differentiating circuit in the output of each leg, a mixing and rectifying circuit for combining the differentiated pulses and generating in its output a resultant sequence of negative pulses, and a final amplifying circuit for inverting and square-topping the pulses. (AEC)

  7. Adjustable sutures in children.

    PubMed

    Engel, J Mark; Guyton, David L; Hunter, David G

    2014-06-01

    Although adjustable sutures are considered a standard technique in adult strabismus surgery, most surgeons are hesitant to attempt the technique in children, who are believed to be unlikely to cooperate for postoperative assessment and adjustment. Interest in using adjustable sutures in pediatric patients has increased with the development of surgical techniques specific to infants and children. This workshop briefly reviews the literature supporting the use of adjustable sutures in children and presents the approaches currently used by three experienced strabismus surgeons. PMID:24924284

  8. Auxetic materials with large negative Poisson's ratios based on highly oriented carbon nanotube structures

    NASA Astrophysics Data System (ADS)

    Chen, Luzhuo; Liu, Changhong; Wang, Jiaping; Zhang, Wei; Hu, Chunhua; Fan, Shoushan

    2009-06-01

    Auxetic materials with large negative Poisson's ratios are fabricated from highly oriented carbon nanotube structures. Poisson's ratios as low as -0.50 are obtained. Furthermore, negative Poisson's ratios can be maintained in carbon nanotube/polymer composites when the nanotubes are embedded, while the composites show much better mechanical properties, including larger strain-to-failure (˜22%) compared to the pristine nanotube thin film (˜3%). A theoretical model is developed to predict the Poisson's ratios. It indicates that the large negative Poisson's ratios are caused by the realignment of curved nanotubes during stretching, and the theoretical predictions agree well with the experimental results.

  9. Multinomial logistic regression ensembles.

    PubMed

    Lee, Kyewon; Ahn, Hongshik; Moon, Hojin; Kodell, Ralph L; Chen, James J

    2013-05-01

    This article proposes a method for multiclass classification problems using ensembles of multinomial logistic regression models. A multinomial logit model is used as a base classifier in ensembles from random partitions of predictors. The multinomial logit model can be applied to each mutually exclusive subset of the feature space without variable selection. By combining multiple models the proposed method can handle a huge database without a constraint needed for analyzing high-dimensional data, and the random partition can improve the prediction accuracy by reducing the correlation among base classifiers. The proposed method is implemented using R, and the performance including overall prediction accuracy, sensitivity, and specificity for each category is evaluated on two real data sets and simulation data sets. To investigate the quality of prediction in terms of sensitivity and specificity, the area under the receiver operating characteristic (ROC) curve (AUC) is also examined. The performance of the proposed model is compared to a single multinomial logit model and it shows a substantial improvement in overall prediction accuracy. The proposed method is also compared with other classification methods such as the random forest, support vector machines, and random multinomial logit model. PMID:23611203
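
    Although the paper's implementation is in R, the ensemble construction is easy to sketch with scikit-learn: multinomial logit base classifiers are fit on random disjoint partitions of the predictors and their predicted probabilities averaged. The partition sizes and repetition counts below are arbitrary choices.

```python
# A scikit-learn sketch of the ensemble (the paper's implementation is in
# R): multinomial logit base classifiers on random disjoint partitions of
# the predictors, with predicted probabilities averaged.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=600, n_features=40, n_informative=12,
                           n_classes=3, n_clusters_per_class=1, random_state=0)
rng = np.random.default_rng(0)
probs = []
for _ in range(25):                              # 25 random partitions
    blocks = rng.permutation(X.shape[1]).reshape(5, -1)
    for block in blocks:                         # 5 disjoint predictor subsets
        clf = LogisticRegression(max_iter=1000).fit(X[:, block], y)
        probs.append(clf.predict_proba(X[:, block]))
pred = np.mean(probs, axis=0).argmax(axis=1)
print("ensemble training accuracy:", (pred == y).mean())
```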

  10. Bayesian Spatial Quantile Regression

    PubMed Central

    Reich, Brian J.; Fuentes, Montserrat; Dunson, David B.

    2013-01-01

    Tropospheric ozone is one of the six criteria pollutants regulated by the United States Environmental Protection Agency under the Clean Air Act and has been linked with several adverse health effects, including mortality. Due to the strong dependence on weather conditions, ozone may be sensitive to climate change and there is great interest in studying the potential effect of climate change on ozone, and how this change may affect public health. In this paper we develop a Bayesian spatial model to predict ozone under different meteorological conditions, and use this model to study spatial and temporal trends and to forecast ozone concentrations under different climate scenarios. We develop a spatial quantile regression model that does not assume normality and allows the covariates to affect the entire conditional distribution, rather than just the mean. The conditional distribution is allowed to vary from site-to-site and is smoothed with a spatial prior. For extremely large datasets our model is computationally infeasible, and we develop an approximate method. We apply the approximate version of our model to summer ozone from 1997–2005 in the Eastern U.S., and use deterministic climate models to project ozone under future climate conditions. Our analysis suggests that holding all other factors fixed, an increase in daily average temperature will lead to the largest increase in ozone in the Industrial Midwest and Northeast. PMID:23459794

  11. Bayesian Spatial Quantile Regression.

    PubMed

    Reich, Brian J; Fuentes, Montserrat; Dunson, David B

    2011-03-01

    Tropospheric ozone is one of the six criteria pollutants regulated by the United States Environmental Protection Agency under the Clean Air Act and has been linked with several adverse health effects, including mortality. Due to the strong dependence on weather conditions, ozone may be sensitive to climate change and there is great interest in studying the potential effect of climate change on ozone, and how this change may affect public health. In this paper we develop a Bayesian spatial model to predict ozone under different meteorological conditions, and use this model to study spatial and temporal trends and to forecast ozone concentrations under different climate scenarios. We develop a spatial quantile regression model that does not assume normality and allows the covariates to affect the entire conditional distribution, rather than just the mean. The conditional distribution is allowed to vary from site-to-site and is smoothed with a spatial prior. For extremely large datasets our model is computationally infeasible, and we develop an approximate method. We apply the approximate version of our model to summer ozone from 1997-2005 in the Eastern U.S., and use deterministic climate models to project ozone under future climate conditions. Our analysis suggests that holding all other factors fixed, an increase in daily average temperature will lead to the largest increase in ozone in the Industrial Midwest and Northeast. PMID:23459794

  12. Canonical variate regression.

    PubMed

    Luo, Chongliang; Liu, Jin; Dey, Dipak K; Chen, Kun

    2016-07-01

    In many fields, multi-view datasets, measuring multiple distinct but interrelated sets of characteristics on the same set of subjects, together with data on certain outcomes or phenotypes, are routinely collected. The objective in such a problem is often two-fold: both to explore the association structures of multiple sets of measurements and to develop a parsimonious model for predicting the future outcomes. We study a unified canonical variate regression framework to tackle the two problems simultaneously. The proposed criterion integrates multiple canonical correlation analysis with predictive modeling, balancing between the association strength of the canonical variates and their joint predictive power on the outcomes. Moreover, the proposed criterion seeks multiple sets of canonical variates simultaneously to enable the examination of their joint effects on the outcomes, and is able to handle multivariate and non-Gaussian outcomes. An efficient algorithm based on variable splitting and Lagrangian multipliers is proposed. Simulation studies show the superior performance of the proposed approach. We demonstrate the effectiveness of the proposed approach in an [Formula: see text] intercross mice study and an alcohol dependence study. PMID:26861909

  13. Counting people with low-level features and Bayesian regression.

    PubMed

    Chan, Antoni B; Vasconcelos, Nuno

    2012-04-01

    An approach to the problem of estimating the size of inhomogeneous crowds, which are composed of pedestrians that travel in different directions, without using explicit object segmentation or tracking is proposed. Instead, the crowd is segmented into components of homogeneous motion, using the mixture of dynamic-texture motion model. A set of holistic low-level features is extracted from each segmented region, and a function that maps features into estimates of the number of people per segment is learned with Bayesian regression. Two Bayesian regression models are examined. The first is a combination of Gaussian process regression with a compound kernel, which accounts for both the global and local trends of the count mapping but is limited by the real-valued outputs that do not match the discrete counts. We address this limitation with a second model, which is based on a Bayesian treatment of Poisson regression that introduces a prior distribution on the linear weights of the model. Since exact inference is analytically intractable, a closed-form approximation is derived that is computationally efficient and kernelizable, enabling the representation of nonlinear functions. An approximate marginal likelihood is also derived for kernel hyperparameter learning. The two regression-based crowd counting methods are evaluated on a large pedestrian data set, containing very distinct camera views, pedestrian traffic, and outliers, such as bikes or skateboarders. Experimental results show that regression-based counts are accurate regardless of the crowd size, outperforming the count estimates produced by state-of-the-art pedestrian detectors. Results on 2 h of video demonstrate the efficiency and robustness of the regression-based crowd size estimation over long periods of time. PMID:22020684
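
    The Bayesian Poisson regression component can be sketched as a Poisson log-likelihood combined with a Gaussian prior on the linear weights. The NumPy example below computes only the MAP estimate by Newton's method, not the paper's closed-form approximate posterior or its kernelization; the prior variance is an illustrative choice.

```python
# A sketch of the Bayesian Poisson regression ingredient: Gaussian prior on
# the linear weights, MAP estimate by Newton's method.
import numpy as np

rng = np.random.default_rng(0)
n, d = 400, 5
X = rng.standard_normal((n, d))
w_true = 0.5 * rng.standard_normal(d)
y = rng.poisson(np.exp(X @ w_true))

tau2 = 10.0                                # prior variance of each weight
w = np.zeros(d)
for _ in range(25):
    mu = np.exp(X @ w)
    grad = X.T @ (y - mu) - w / tau2       # gradient of the log-posterior
    H = -X.T @ (mu[:, None] * X) - np.eye(d) / tau2
    step = np.linalg.solve(H, grad)
    w -= step                              # Newton update: w - H^{-1} grad
    if np.linalg.norm(step) < 1e-10:
        break
print("MAP weights:", np.round(w, 3), " truth:", np.round(w_true, 3))
```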

  14. Linear regression in astronomy. I

    NASA Technical Reports Server (NTRS)

    Isobe, Takashi; Feigelson, Eric D.; Akritas, Michael G.; Babu, Gutti Jogesh

    1990-01-01

    Five methods for obtaining linear regression fits to bivariate data with unknown or insignificant measurement errors are discussed: ordinary least-squares (OLS) regression of Y on X, OLS regression of X on Y, the bisector of the two OLS lines, orthogonal regression, and 'reduced major-axis' regression. These methods have been used by various researchers in observational astronomy, most importantly in cosmic distance scale applications. Formulas for calculating the slope and intercept coefficients and their uncertainties are given for all the methods, including a new general form of the OLS variance estimates. The accuracy of the formulas was confirmed using numerical simulations. The applicability of the procedures is discussed with respect to their mathematical properties, the nature of the astronomical data under consideration, and the scientific purpose of the regression. It is found that, for problems needing symmetrical treatment of the variables, the OLS bisector performs significantly better than orthogonal or reduced major-axis regression.
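
    Three of the five fits are simple enough to state in code. The NumPy sketch below computes OLS(Y|X), OLS(X|Y) expressed as a Y-on-X slope, and the OLS bisector using the slope formulas of Isobe et al. (1990) on synthetic data:

```python
# OLS(Y|X), OLS(X|Y) expressed as a Y-on-X slope, and the OLS bisector,
# using the slope formulas of Isobe et al. (1990).
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(200)
y = 2.0 * x + rng.standard_normal(200)     # intrinsic scatter, no measurement errors

xm, ym = x - x.mean(), y - y.mean()
b1 = np.sum(xm * ym) / np.sum(xm**2)       # OLS slope of Y on X
b2 = np.sum(ym**2) / np.sum(xm * ym)       # OLS of X on Y, as a Y-on-X slope
b3 = (b1 * b2 - 1.0 + np.sqrt((1.0 + b1**2) * (1.0 + b2**2))) / (b1 + b2)
print(f"OLS(Y|X)={b1:.3f}  OLS(X|Y)={b2:.3f}  bisector={b3:.3f}")
```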

  15. Psychological Adjustment in Young Korean American Adolescents and Parental Warmth

    PubMed Central

    Kim, Eunjung

    2008-01-01

    Problem: The relation between parental warmth and psychological adjustment is not known for young Korean American adolescents. Methods: 103 adolescents' perceived parental warmth and psychological adjustment were assessed using, respectively, the Parental Acceptance-Rejection Questionnaire and the Child Personality Assessment Questionnaire. Findings: Low perceived maternal and paternal warmth were positively related to adolescents' overall poor psychological adjustment and almost all of its attributes. When maternal and paternal warmth were entered simultaneously into the regression equation, only low maternal warmth was related to adolescents' poor psychological adjustment. Conclusion: Perceived parental warmth is important in predicting young adolescents' psychological adjustment as suggested in the parental acceptance-rejection theory. PMID:19885379

  16. Risk-adjusted monitoring of survival times

    SciTech Connect

    Sego, Landon H.; Reynolds, Marion R.; Woodall, William H.

    2009-02-26

    We consider the monitoring of clinical outcomes, where each patient has a different risk of death prior to undergoing a health care procedure. We propose a risk-adjusted survival time CUSUM chart (RAST CUSUM) for monitoring clinical outcomes where the primary endpoint is a continuous, time-to-event variable that may be right censored. Risk adjustment is accomplished using accelerated failure time regression models. We compare the average run length performance of the RAST CUSUM chart to the risk-adjusted Bernoulli CUSUM chart, using data from cardiac surgeries to motivate the details of the comparison. The comparisons show that the RAST CUSUM chart is more efficient at detecting a sudden decrease in the odds of death than the risk-adjusted Bernoulli CUSUM chart, especially when the fraction of censored observations is not too high. We also discuss the implementation of a prospective monitoring scheme using the RAST CUSUM chart.

  17. Using regression models to determine the poroelastic properties of cartilage.

    PubMed

    Chung, Chen-Yuan; Mansour, Joseph M

    2013-07-26

    The feasibility of determining biphasic material properties using regression models was investigated. A transversely isotropic poroelastic finite element model of stress relaxation was developed and validated against known results. This model was then used to simulate load intensity for a wide range of material properties. Linear regression equations for load intensity as a function of the five independent material properties were then developed for nine time points (131, 205, 304, 390, 500, 619, 700, 800, and 1000s) during relaxation. These equations illustrate the effect of individual material property on the stress in the time history. The equations at the first four time points, as well as one at a later time (five equations) could be solved for the five unknown material properties given computed values of the load intensity. Results showed that four of the five material properties could be estimated from the regression equations to within 9% of the values used in simulation if time points up to 1000s are included in the set of equations. However, reasonable estimates of the out of plane Poisson's ratio could not be found. Although all regression equations depended on permeability, suggesting that true equilibrium was not realized at 1000s of simulation, it was possible to estimate material properties to within 10% of the expected values using equations that included data up to 800s. This suggests that credible estimates of most material properties can be obtained from tests that are not run to equilibrium, which is typically several thousand seconds. PMID:23796400
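
    The inverse step described, recovering the material properties from computed load intensities, amounts to solving a small linear system once the regression coefficients are known. A sketch with stand-in coefficients (random values, not the paper's):

```python
# The inverse step in miniature: with regression coefficients A and
# intercepts c relating five material properties to load intensity at five
# time points, the properties follow from one linear solve.
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))            # one row per selected time point
c = rng.standard_normal(5)
props_true = rng.uniform(0.1, 1.0, 5)
load = A @ props_true + c                  # "computed" load intensities

props = np.linalg.solve(A, load - c)
print("recovered:", np.allclose(props, props_true))
```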

  18. An Implementation of Bayesian Adaptive Regression Splines (BARS) in C with S and R Wrappers.

    PubMed

    Wallstrom, Garrick; Liebner, Jeffrey; Kass, Robert E

    2008-06-01

    BARS (DiMatteo, Genovese, and Kass 2001) uses the powerful reversible-jump MCMC engine to perform spline-based generalized nonparametric regression. It has been shown to work well in terms of having small mean-squared error in many examples (smaller than known competitors), as well as producing visually-appealing fits that are smooth (filtering out high-frequency noise) while adapting to sudden changes (retaining high-frequency signal). However, BARS is computationally intensive. The original implementation in S was too slow to be practical in certain situations, and was found to handle some data sets incorrectly. We have implemented BARS in C for the normal and Poisson cases, the latter being important in neurophysiological and other point-process applications. The C implementation includes all needed subroutines for fitting Poisson regression, manipulating B-splines (using code created by Bates and Venables), and finding starting values for Poisson regression (using code for density estimation created by Kooperberg). The code utilizes only freely-available external libraries (LAPACK and BLAS) and is otherwise self-contained. We have also provided wrappers so that BARS can be used easily within S or R. PMID:19777145

  19. Evaluating differential effects using regression interactions and regression mixture models

    PubMed Central

    Van Horn, M. Lee; Jaki, Thomas; Masyn, Katherine; Howe, George; Feaster, Daniel J.; Lamont, Andrea E.; George, Melissa R. W.; Kim, Minjung

    2015-01-01

    Research increasingly emphasizes understanding differential effects. This paper focuses on understanding regression mixture models, a relatively new statistical method for assessing differential effects, by comparing results to those obtained using an interaction term in linear regression. The research questions which each model answers, their formulation, and their assumptions are compared using Monte Carlo simulations and real data analysis. The capabilities of regression mixture models are described and specific issues to be addressed when conducting regression mixtures are proposed. The paper aims to clarify the role that regression mixtures can take in the estimation of differential effects and to increase awareness of the benefits and potential pitfalls of this approach. Regression mixture models are shown to be a potentially effective exploratory method for finding differential effects when these effects can be defined by a small number of classes of respondents who share a typical relationship between a predictor and an outcome. It is also shown that the comparison between regression mixture models and interactions becomes substantially more complex as the number of classes increases. It is argued that regression interactions are well suited for direct tests of specific hypotheses about differential effects and regression mixtures provide a useful approach for exploring effect heterogeneity given adequate samples and study design. PMID:26556903

  20. Polarizable Atomic Multipole Solutes in a Poisson-Boltzmann Continuum

    PubMed Central

    Schnieders, Michael J.; Baker, Nathan A.; Ren, Pengyu; Ponder, Jay W.

    2008-01-01

    Modeling the change in the electrostatics of organic molecules upon moving from vacuum into solvent, due to polarization, has long been an interesting problem. In vacuum, experimental values for the dipole moments and polarizabilities of small, rigid molecules are known to high accuracy; however, it has generally been difficult to determine these quantities for a polar molecule in water. A theoretical approach introduced by Onsager used vacuum properties of small molecules, including polarizability, dipole moment and size, to predict experimentally known permittivities of neat liquids via the Poisson equation. Since this important advance in understanding the condensed phase, a large number of computational methods have been developed to study solutes embedded in a continuum via numerical solutions to the Poisson-Boltzmann equation (PBE). Only recently have the classical force fields used for studying biomolecules begun to include explicit polarization in their functional forms. Here we describe the theory underlying a newly developed Polarizable Multipole Poisson-Boltzmann (PMPB) continuum electrostatics model, which builds on the Atomic Multipole Optimized Energetics for Biomolecular Applications (AMOEBA) force field. As an application of the PMPB methodology, results are presented for several small folded proteins studied by molecular dynamics in explicit water as well as embedded in the PMPB continuum. The dipole moment of each protein increased on average by a factor of 1.27 in explicit water and 1.26 in continuum solvent. The essentially identical electrostatic response in both models suggests that PMPB electrostatics offers an efficient alternative to sampling explicit solvent molecules for a variety of interesting applications, including binding energies, conformational analysis, and pKa prediction. Introduction of 150 mM salt lowered the electrostatic solvation energy between 2–13 kcal/mole, depending on the formal charge of the protein, but had only a

  1. Brain, music, and non-Poisson renewal processes

    NASA Astrophysics Data System (ADS)

    Bianco, Simone; Ignaccolo, Massimiliano; Rider, Mark S.; Ross, Mary J.; Winsor, Phil; Grigolini, Paolo

    2007-06-01

    In this paper we show that both music composition and brain function, as revealed by the electroencephalogram (EEG) analysis, are renewal non-Poisson processes living in the nonergodic dominion. To reach this important conclusion we process the data with the minimum spanning tree method, so as to detect significant events, thereby building a sequence of times, which is the time series to analyze. Then we show that in both cases, EEG and music composition, these significant events are the signature of a non-Poisson renewal process. This conclusion is reached using a technique of statistical analysis recently developed by our group, the aging experiment (AE). First, we find that in both cases the distances between two consecutive events are described by nonexponential histograms, thereby proving the non-Poisson nature of these processes. The corresponding survival probabilities Ψ(t) are well fitted by stretched exponentials [Ψ(t) ∝ exp(-(γt)^α), with 0.5 < α < 1]. The second step rests on the adoption of AE, which shows that these are renewal processes. We show that the stretched exponential, due to its renewal character, is the emerging tip of an iceberg, whose underwater part has slow tails with an inverse power law structure with power index μ = 1 + α. Adopting the AE procedure we find that both EEG and music composition yield μ < 2. On the basis of the recently discovered complexity matching effect, according to which a complex system S with μ_S < 2 responds only to a complex driving signal P with μ_P ⩽ μ_S, we conclude that the results of our analysis may explain the influence of music on the human brain.
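
    Waiting times with survival Ψ(t) = exp(-(γt)^α) are Weibull distributed, so the non-Poisson renewal sequence described above is straightforward to simulate. The NumPy sketch below contrasts its waiting-time variability with a Poisson process of the same mean rate; parameters are illustrative.

```python
# Waiting times with survival Psi(t) = exp(-(gamma*t)**alpha) are Weibull
# distributed. The coefficient of variation is 1 for a Poisson process and
# exceeds 1 for 0.5 < alpha < 1.
import numpy as np

rng = np.random.default_rng(0)
alpha, gamma, n = 0.7, 1.0, 200_000
t_renewal = rng.weibull(alpha, n) / gamma            # stretched-exponential survival
t_poisson = rng.exponential(t_renewal.mean(), n)     # Poisson process, same mean rate

for name, t in (("non-Poisson renewal", t_renewal), ("Poisson", t_poisson)):
    print(name, "waiting-time CV:", round(t.std() / t.mean(), 3))
```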

  2. On population size estimators in the Poisson mixture model.

    PubMed

    Mao, Chang Xuan; Yang, Nan; Zhong, Jinhua

    2013-09-01

    Estimating population sizes via capture-recapture experiments has numerous applications. The Poisson mixture model can be adopted for those applications with a single list in which individuals appear one or more times. We compare several nonparametric estimators, including the Chao estimator, the Zelterman estimator, two jackknife estimators and the bootstrap estimator. The target parameter of the Chao estimator is a lower bound of the population size. Those of the other four estimators are not lower bounds, and they may produce lower confidence limits for the population size with poor coverage probabilities. A simulation study is reported and two examples are investigated. PMID:23865502
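
    The Chao lower bound is the simplest of the compared estimators: with f_k the number of individuals observed exactly k times in the single list, the population size is bounded below by S_obs + f1²/(2 f2). A NumPy sketch on a simulated Poisson mixture (gamma-mixed rates, a synthetic stand-in):

```python
# The Chao lower bound in miniature: N >= S_obs + f1**2 / (2*f2), with f_k
# the number of individuals seen exactly k times.
import numpy as np

rng = np.random.default_rng(0)
N = 1000
rates = rng.gamma(2.0, 0.5, N)             # heterogeneous Poisson rates
counts = rng.poisson(rates)
observed = counts[counts > 0]

f1 = np.sum(observed == 1)
f2 = np.sum(observed == 2)
chao = observed.size + f1**2 / (2.0 * f2)
print(f"observed {observed.size}, Chao lower bound {chao:.0f}, truth {N}")
```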

  3. A Poisson process approximation for generalized K-S confidence regions

    NASA Technical Reports Server (NTRS)

    Arsham, H.; Miller, D. R.

    1982-01-01

    One-sided confidence regions for continuous cumulative distribution functions are constructed using empirical cumulative distribution functions and the generalized Kolmogorov-Smirnov distance. The band width of such regions becomes narrower in the right or left tail of the distribution. To avoid tedious computation of confidence levels and critical values, an approximation based on the Poisson process is introduced. This approximation provides a conservative confidence region; moreover, the approximation error decreases monotonically to 0 as sample size increases. Critical values necessary for implementation are given. Applications are made to the areas of risk analysis, investment modeling, reliability assessment, and analysis of fault tolerant systems.

  4. Poisson's Ratio and the Densification of Glass under High Pressure

    SciTech Connect

    Rouxel, T.; Ji, H.; Hammouda, T.; Moreac, A.

    2008-06-06

    Because of a relatively low atomic packing density (C_g), glasses experience significant densification under high hydrostatic pressure. Poisson's ratio (ν) is correlated to C_g and typically varies from 0.15 for glasses with low C_g, such as amorphous silica, to 0.38 for close-packed atomic networks such as those in bulk metallic glasses. Pressure experiments were conducted up to 25 GPa at 293 K on silica, soda-lime-silica, chalcogenide, and bulk metallic glasses. We show from these high-pressure data that there is a direct correlation between ν and the maximum post-decompression density change.

  5. Numerical Poisson-Boltzmann Model for Continuum Membrane Systems.

    PubMed

    Botello-Smith, Wesley M; Liu, Xingping; Cai, Qin; Li, Zhilin; Zhao, Hongkai; Luo, Ray

    2013-01-01

    Membrane protein systems are important computational research topics due to their roles in rational drug design. In this study, we developed a continuum membrane model utilizing a level set formulation under the numerical Poisson-Boltzmann framework within the AMBER molecular mechanics suite for applications such as protein-ligand binding affinity and docking pose predictions. Two numerical solvers were adapted for periodic systems to alleviate possible edge effects. Validation on systems ranging from organic molecules to membrane proteins of up to 200 residues demonstrated good numerical properties. This lays the foundation for sophisticated models with variable dielectric treatments and second-order accurate modeling of solvation interactions. PMID:23439886

  6. Fission meter and neutron detection using poisson distribution comparison

    SciTech Connect

    Rowland, Mark S; Snyderman, Neal J

    2014-11-18

    A neutron detector system and method for discriminating fissile material from non-fissile material wherein a digital data acquisition unit collects data at high rate, and in real-time processes large volumes of data directly into information that a first responder can use to discriminate materials. The system comprises counting neutrons from the unknown source and detecting excess grouped neutrons to identify fission in the unknown source. Comparison of the observed neutron count distribution with a Poisson distribution is performed to distinguish fissile material from non-fissile material.
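
    The statistical heart of the method, comparing gated neutron counts against a Poisson reference, can be illustrated with a simple dispersion (variance-to-mean) check; the simulation below is a synthetic stand-in for detector data, with fission chains modelled as a compound Poisson process.

```python
# A dispersion-check sketch: gate counts in time windows and compare the
# variance-to-mean ratio with the Poisson value of 1. Correlated
# fission-chain neutrons push the ratio above 1.
import numpy as np

rng = np.random.default_rng(0)
gates = 10_000

background = rng.poisson(3.0, gates)                 # non-fissile: pure Poisson
bursts = rng.poisson(1.0, gates)                     # number of chains per gate
fissile = np.array([rng.poisson(3.0, b).sum() for b in bursts])

for name, c in (("non-fissile", background), ("fissile", fissile)):
    print(name, "variance/mean:", round(c.var() / c.mean(), 2))   # ~1 vs >> 1
```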

  7. Theory of multicolor lattice gas - A cellular automaton Poisson solver

    NASA Technical Reports Server (NTRS)

    Chen, H.; Matthaeus, W. H.; Klein, L. W.

    1990-01-01

    In the present class of cellular automaton models, a quiescent hydrodynamic lattice gas carries multiple-valued passive labels termed 'colors', and lattice collisions change individual particle colors while preserving net color. The rigorous proofs of the multicolor lattice gases' essential features are rendered more tractable by an equivalent subparticle representation in which the color is represented by underlying two-state 'spins'. Schemes for the introduction of Dirichlet and Neumann boundary conditions are described, and two illustrative numerical test cases are used to verify the theory. The lattice gas model is equivalent to a solution of the Poisson equation.

  8. The Poisson equation at second order in relativistic cosmology

    SciTech Connect

    Hidalgo, J.C.; Christopherson, Adam J.; Malik, Karim A. E-mail: Adam.Christopherson@nottingham.ac.uk

    2013-08-01

    We calculate the relativistic constraint equation which relates the curvature perturbation to the matter density contrast at second order in cosmological perturbation theory. This relativistic "second order Poisson equation" is presented in a gauge where the hydrodynamical inhomogeneities coincide with their Newtonian counterparts exactly for a perfect fluid with constant equation of state. We use this constraint to introduce primordial non-Gaussianity in the density contrast in the framework of General Relativity. We then derive expressions that can be used as the initial conditions of N-body codes for structure formation which probe the observable signature of primordial non-Gaussianity in the statistics of the evolved matter density field.

  9. Linear regression in astronomy. II

    NASA Technical Reports Server (NTRS)

    Feigelson, Eric D.; Babu, Gutti J.

    1992-01-01

    A wide variety of least-squares linear regression procedures used in observational astronomy, particularly investigations of the cosmic distance scale, are presented and discussed. The classes of linear models considered are (1) unweighted regression lines, with bootstrap and jackknife resampling; (2) regression solutions when measurement error, in one or both variables, dominates the scatter; (3) methods to apply a calibration line to new data; (4) truncated regression models, which apply to flux-limited data sets; and (5) censored regression models, which apply when nondetections are present. For the calibration problem we develop two new procedures: a formula for the intercept offset between two parallel data sets, which propagates slope errors from one regression to the other; and a generalization of the Working-Hotelling confidence bands to nonstandard least-squares lines. They can provide improved error analysis for Faber-Jackson, Tully-Fisher, and similar cosmic distance scale relations.

  10. Quantile regression for climate data

    NASA Astrophysics Data System (ADS)

    Marasinghe, Dilhani Shalika

    Quantile regression is a developing statistical tool which is used to explain the relationship between response and predictor variables. This thesis describes two examples from climatology using quantile regression. Our main goal is to estimate derivatives of a conditional mean and/or conditional quantile function. We introduce a method to handle autocorrelation in the framework of quantile regression and apply it to the temperature data. We also examine some properties of the tornado data, which are non-normally distributed. Although quantile regression provides a more comprehensive view, least-squares regression is preferable for the temperature analysis when the residuals satisfy the normality and constant-variance assumptions. When normality and constant variance cannot be assumed, quantile regression is the better candidate for estimating the derivative.
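
    A minimal working example of conditional-quantile estimation, in the spirit of the thesis, using statsmodels' QuantReg on heteroscedastic synthetic data (all values below are illustrative):

```python
# Median and upper-quantile fits with statsmodels' QuantReg. The 0.9-quantile
# slope exceeds the median slope because the noise scale grows with x.
import numpy as np
import statsmodels.api as sm
from statsmodels.regression.quantile_regression import QuantReg

rng = np.random.default_rng(0)
x = rng.uniform(0.0, 10.0, 500)
y = 1.0 + 0.5 * x + (0.2 + 0.3 * x) * rng.standard_normal(500)

X = sm.add_constant(x)
for q in (0.5, 0.9):
    res = QuantReg(y, X).fit(q=q)
    print(f"q={q}: intercept={res.params[0]:.3f}, slope={res.params[1]:.3f}")
```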

  11. Some Poisson structures and Lax equations associated with the Toeplitz lattice and the Schur lattice

    NASA Astrophysics Data System (ADS)

    Lemarie, Caroline

    2016-01-01

    The Toeplitz lattice is a Hamiltonian system whose Poisson structure is known. In this paper, we unveil the origins of this Poisson structure and derive from it the associated Lax equations for this lattice. We first construct a Poisson subvariety H_n of GL_n(C), which we view as a real or complex Poisson-Lie group whose Poisson structure comes from a quadratic R-bracket on gl_n(C) for a fixed R-matrix. The existence of Hamiltonians, associated to the Toeplitz lattice for the Poisson structure on H_n, combined with the properties of the quadratic R-bracket allow us to give explicit formulas for the Lax equation. Then we derive from it the integrability in the sense of Liouville of the Toeplitz lattice. When we view the lattice as being defined over R, we can construct a Poisson subvariety H_n^τ of U_n which is itself a Poisson-Dirac subvariety of GL_n^R(C). We then construct a Hamiltonian for the Poisson structure induced on H_n^τ, corresponding to another system which derives from the Toeplitz lattice: the modified Schur lattice. Thanks to the properties of Poisson-Dirac subvarieties, we give an explicit Lax equation for the new system and derive from it a Lax equation for the Schur lattice. We also deduce the integrability in the sense of Liouville of the modified Schur lattice.

  12. Transfer Learning Based on Logistic Regression

    NASA Astrophysics Data System (ADS)

    Paul, A.; Rottensteiner, F.; Heipke, C.

    2015-08-01

    In this paper we address the problem of classification of remote sensing images in the framework of transfer learning with a focus on domain adaptation. The main novel contribution is a method for transductive transfer learning in remote sensing on the basis of logistic regression. Logistic regression is a discriminative probabilistic classifier of low computational complexity, which can deal with multiclass problems. This research area deals with methods that solve problems in which labelled training data sets are assumed to be available only for a source domain, while classification is needed in the target domain with different, yet related characteristics. Classification takes place with a model of weight coefficients for hyperplanes which separate features in the transformed feature space. In terms of logistic regression, our domain adaptation method adjusts the model parameters by iterative labelling of the target test data set. These labelled data features are iteratively added to the current training set which, at the beginning, only contains source features; simultaneously, a number of source features are deleted from the current training set. Experimental results based on a test series with synthetic and real data constitute a first proof of concept of the proposed method.
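
    The adaptation loop described, iteratively swapping confidently self-labelled target samples into the training set while removing source samples, can be sketched with scikit-learn. The shift model and sample counts below are assumptions for illustration, not the paper's settings.

```python
# A sketch of the described loop: fit on the source domain, pseudo-label
# the most confident target samples into the training set, drop source
# samples, and repeat.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
Xs = rng.standard_normal((300, 2))                     # source domain
ys = (Xs[:, 0] + Xs[:, 1] > 0.0).astype(int)
Xt = rng.standard_normal((300, 2)) + 0.7               # shifted target domain
yt = (Xt[:, 0] + Xt[:, 1] > 1.4).astype(int)           # used for evaluation only

X_train, y_train = Xs.copy(), ys.copy()
for _ in range(10):
    clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    proba = clf.predict_proba(Xt)
    pick = np.argsort(proba.max(axis=1))[-50:]         # most confident targets
    # add pseudo-labelled target samples; drop the 25 oldest training
    # samples (source samples first). Duplicate picks possible in a sketch.
    X_train = np.vstack([X_train[25:], Xt[pick]])
    y_train = np.concatenate([y_train[25:], proba.argmax(axis=1)[pick]])
print("target accuracy after adaptation:", (clf.predict(Xt) == yt).mean())
```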

  13. Evaluating Differential Effects Using Regression Interactions and Regression Mixture Models

    ERIC Educational Resources Information Center

    Van Horn, M. Lee; Jaki, Thomas; Masyn, Katherine; Howe, George; Feaster, Daniel J.; Lamont, Andrea E.; George, Melissa R. W.; Kim, Minjung

    2015-01-01

    Research increasingly emphasizes understanding differential effects. This article focuses on understanding regression mixture models, which are relatively new statistical methods for assessing differential effects by comparing results to using an interactive term in linear regression. The research questions which each model answers, their…

  14. Retro-regression--another important multivariate regression improvement.

    PubMed

    Randić, M

    2001-01-01

    We review the serious problem associated with instabilities of the coefficients of regression equations, referred to as the MRA (multivariate regression analysis) "nightmare of the first kind". This is manifested when in a stepwise regression a descriptor is included or excluded from a regression. The consequence is an unpredictable change of the coefficients of the descriptors that remain in the regression equation. We follow with consideration of an even more serious problem, referred to as the MRA "nightmare of the second kind", arising when optimal descriptors are selected from a large pool of descriptors. This process typically causes at different steps of the stepwise regression a replacement of several previously used descriptors by new ones. We describe a procedure that resolves these difficulties. The approach is illustrated on boiling points of nonanes which are considered (1) by using an ordered connectivity basis; (2) by using an ordering resulting from application of greedy algorithm; and (3) by using an ordering derived from an exhaustive search for optimal descriptors. A novel variant of multiple regression analysis, called retro-regression (RR), is outlined showing how it resolves the ambiguities associated with both "nightmares" of the first and the second kind of MRA. PMID:11410035

  15. Controlling Type I Error Rates in Assessing DIF for Logistic Regression Method Combined with SIBTEST Regression Correction Procedure and DIF-Free-Then-DIF Strategy

    ERIC Educational Resources Information Center

    Shih, Ching-Lin; Liu, Tien-Hsiang; Wang, Wen-Chung

    2014-01-01

    The simultaneous item bias test (SIBTEST) method regression procedure and the differential item functioning (DIF)-free-then-DIF strategy are applied to the logistic regression (LR) method simultaneously in this study. These procedures are used to adjust the effects of matching true score on observed score and to better control the Type I error…

  16. Polyelectrolyte Microcapsules: Ion Distributions from a Poisson-Boltzmann Model

    NASA Astrophysics Data System (ADS)

    Tang, Qiyun; Denton, Alan R.; Rozairo, Damith; Croll, Andrew B.

    2014-03-01

    Recent experiments have shown that polystyrene-polyacrylic-acid-polystyrene (PS-PAA-PS) triblock copolymers in a solvent mixture of water and toluene can self-assemble into spherical microcapsules. Suspended in water, the microcapsules have a toluene core surrounded by an elastomer triblock shell. The longer, hydrophilic PAA blocks remain near the outer surface of the shell, becoming charged through dissociation of OH functional groups in water, while the shorter, hydrophobic PS blocks form a networked (glass or gel) structure. Within a mean-field Poisson-Boltzmann theory, we model these polyelectrolyte microcapsules as spherical charged shells, assuming different dielectric constants inside and outside the capsule. By numerically solving the nonlinear Poisson-Boltzmann equation, we calculate the radial distribution of anions and cations and the osmotic pressure within the shell as a function of salt concentration. Our predictions, which can be tested by comparison with experiments, may guide the design of microcapsules for practical applications, such as drug delivery. This work was supported by the National Science Foundation under Grant No. DMR-1106331.

  17. A Tubular Biomaterial Construct Exhibiting a Negative Poisson's Ratio.

    PubMed

    Lee, Jin Woo; Soman, Pranav; Park, Jeong Hun; Chen, Shaochen; Cho, Dong-Woo

    2016-01-01

    Developing functional small-diameter vascular grafts is an important objective in tissue engineering research. In this study, we address the problem of compliance mismatch by designing and developing a 3D tubular construct that has a negative Poisson's ratio ν_xy (NPR). NPR constructs have the unique ability to expand transversely when pulled axially, thereby resulting in a highly-compliant tubular construct. In this work, we used projection stereolithography to 3D-print a planar NPR sheet composed of photosensitive poly(ethylene) glycol diacrylate biomaterial. We used a step-lithography exposure and a stitch process to scale up the projection printing process, and used the cut-missing rib unit design to develop a centimeter-scale NPR sheet, which was rolled up to form a tubular construct. The constructs had Poisson's ratios of -0.6 ≤ ν_xy ≤ -0.1. The NPR construct also supports higher cellular adhesion than does the construct that has positive ν_xy. Our NPR design offers a significant advance in the development of highly-compliant vascular grafts. PMID:27232181

  18. Assessment of elliptic solvers for the pressure Poisson equation

    NASA Astrophysics Data System (ADS)

    Strodtbeck, J. P.; Polly, J. B.; McDonough, J. M.

    2008-11-01

    It is well known that as much as 80% of the total arithmetic needed for a solution of the incompressible Navier--Stokes equations can be expended for solving the pressure Poisson equation, and this has long been one of the prime motivations for study of elliptic solvers. In recent years various Krylov-subspace methods have begun to receive wide use because of their rapid convergence rates and automatic generation of iteration parameters. However, it is actually total floating-point arithmetic operations that must be of concern when selecting a solver for CFD, and not simply required number of iterations. In the present study we recast speed of convergence for typical CFD pressure Poisson problems in terms of CPU time spent on floating-point arithmetic and demonstrate that in many cases simple successive-overrelaxation (SOR) methods are as effective as some of the popular Krylov-subspace techniques such as BiCGStab(l) provided optimal SOR iteration parameters are employed; furthermore, SOR procedures require significantly less memory. We then describe some techniques for automatically predicting optimal SOR parameters.
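
    The point about SOR is easy to demonstrate: for the 5-point Laplacian on the unit square, the optimal relaxation parameter is known in closed form, ω = 2/(1 + sin(πh)). A plain NumPy sketch of optimally-tuned SOR on a model pressure-Poisson problem:

```python
# Optimally-tuned SOR on a model problem: -lap(u) = 1 on the unit square
# with u = 0 on the boundary. For the 5-point Laplacian the optimal
# relaxation parameter is omega = 2 / (1 + sin(pi*h)).
import numpy as np

n = 65                                     # grid points including the boundary
h = 1.0 / (n - 1)
omega = 2.0 / (1.0 + np.sin(np.pi * h))
u = np.zeros((n, n))
f = np.ones((n, n))

for sweep in range(1000):
    diff = 0.0
    for i in range(1, n - 1):              # Gauss-Seidel sweep with over-relaxation
        for j in range(1, n - 1):
            gs = 0.25 * (u[i-1, j] + u[i+1, j] + u[i, j-1] + u[i, j+1]
                         + h * h * f[i, j])
            diff = max(diff, abs(gs - u[i, j]))
            u[i, j] += omega * (gs - u[i, j])
    if diff < 1e-10:
        break
print("sweeps:", sweep + 1, " centre value:", u[n // 2, n // 2])
```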

  19. The multisensor PHD filter: II. Erroneous solution via Poisson magic

    NASA Astrophysics Data System (ADS)

    Mahler, Ronald

    2009-05-01

    The theoretical foundation for the probability hypothesis density (PHD) filter is the FISST multitarget differential and integral calculus. The "core" PHD filter presumes a single sensor. Theoretically rigorous formulas for the multisensor PHD filter can be derived using the FISST calculus, but are computationally intractable. A less theoretically desirable solution-the iterated-corrector approximation-must be used instead. Recently, it has been argued that an "elementary" methodology, the "Poisson-intensity approach," renders FISST obsolete. It has further been claimed that the iterated-corrector approximation is suspect, and in its place an allegedly superior "general multisensor intensity filter" has been proposed. In this and a companion paper I demonstrate that it is these claims which are erroneous. The companion paper introduces formulas for the actual "general multisensor intensity filter." In this paper I demonstrate that (1) the "general multisensor intensity filter" fails in important special cases; (2) it will perform badly in even the easiest multitarget tracking problems; and (3) these rather serious missteps suggest that the "Poisson-intensity approach" is inherently faulty.

  20. Poisson's equation solution of Coulomb integrals in atoms and molecules

    NASA Astrophysics Data System (ADS)

    Weatherford, Charles A.; Red, Eddie; Joseph, Dwayne; Hoggan, Philip

    The integral bottleneck in evaluating molecular energies arises from the two-electron contributions. These are difficult and time-consuming to evaluate, especially over exponential type orbitals, used here to ensure the correct behaviour of atomic orbitals. In this work, it is shown that the two-centre Coulomb integrals involved can be expressed as one-electron kinetic-energy-like integrals. This is accomplished using the fact that the Coulomb operator is a Green's function of the Laplacian. The ensuing integrals may be further simplified by defining Coulomb forms for the one-electron potential satisfying Poisson's equation therein. A sum of overlap integrals with the atomic orbital energy eigenvalue as a factor is then obtained to give the Coulomb energy. The remaining questions of translating orbitals involved in three and four centre integrals and the evaluation of exchange energy are also briefly discussed. The summation coefficients in Coulomb forms are evaluated using the LU decomposition. This algorithm is highly parallel. The Poisson method may be used to calculate Coulomb energy integrals efficiently. For a single processor, gains of CPU time for a given chemical accuracy exceed a factor of 40. This method lends itself to evaluation on a parallel computer.

  1. Examining the spatially non-stationary associations between the second demographic transition and infant mortality: A Poisson GWR approach

    PubMed Central

    Yang, Tse-Chuan; Shoff, Carla; Matthews, Stephen A.

    2014-01-01

    Based on ecological studies, second demographic transition (SDT) theorists concluded that some areas in the US were in the vanguard of the SDT compared to others, implying spatial nonstationarity may be inherent in the SDT process. Linking the SDT to the infant mortality literature, we set out to answer two related questions: Are the main components of the SDT, specifically marriage postponement, cohabitation, and divorce, associated with infant mortality? If yes, do these associations vary across the US? We applied global Poisson and geographically weighted Poisson regression (GWPR) models, a place-specific analytic approach, to county-level data in the contiguous US. After accounting for the racial/ethnic and socioeconomic compositions of counties and prenatal care utilization, we found (1) marriage postponement was negatively related to infant mortality in the southwestern states, but positively associated with infant mortality in parts of Indiana, Kentucky, and Tennessee, (2) cohabitation rates were positively related to infant mortality, and this relationship was stronger in California, coastal Virginia, and the Carolinas than other areas, and (3) a positive association between divorce rates and infant mortality in southwestern and northeastern areas of the US. These spatial patterns suggested that the associations between the SDT and infant mortality were stronger in the areas in the vanguard of the SDT than in others. The comparison between global Poisson and GWPR results indicated that a place-specific spatial analysis not only fit the data better, but also provided insights into understanding the non-stationarity of the associations between the SDT and infant mortality. PMID:25383259
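
    The essence of GWPR is a sequence of locally weighted Poisson fits. The statsmodels sketch below fits a Poisson GLM at two focal locations with Gaussian-kernel spatial weights on synthetic data whose slope varies in space; the bandwidth and data-generating choices are illustrative, not the study's.

```python
# Locally weighted Poisson fits, the building block of GWPR: at each focal
# location, observations are down-weighted by a Gaussian kernel of distance
# before a Poisson GLM is fit.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 400
coords = rng.uniform(0.0, 1.0, (n, 2))           # e.g., county centroids
x = rng.standard_normal(n)
beta = 0.2 + 0.6 * coords[:, 0]                  # effect strengthens across space
y = rng.poisson(np.exp(0.5 + beta * x))
X = sm.add_constant(x)

bandwidth = 0.2
for focal in (np.array([0.1, 0.5]), np.array([0.9, 0.5])):
    d2 = np.sum((coords - focal) ** 2, axis=1)
    w = np.exp(-d2 / (2.0 * bandwidth**2))       # Gaussian spatial kernel
    res = sm.GLM(y, X, family=sm.families.Poisson(), var_weights=w).fit()
    print("focal", focal, "local slope:", round(res.params[1], 3))
```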

  2. Examining the spatially non-stationary associations between the second demographic transition and infant mortality: A Poisson GWR approach.

    PubMed

    Yang, Tse-Chuan; Shoff, Carla; Matthews, Stephen A

    2013-01-01

    Based on ecological studies, second demographic transition (SDT) theorists concluded that some areas in the US were in the vanguard of the SDT compared to others, implying spatial nonstationarity may be inherent in the SDT process. Linking the SDT to the infant mortality literature, we set out to answer two related questions: Are the main components of the SDT, specifically marriage postponement, cohabitation, and divorce, associated with infant mortality? If yes, do these associations vary across the US? We applied global Poisson and geographically weighted Poisson regression (GWPR) models, a place-specific analytic approach, to county-level data in the contiguous US. After accounting for the racial/ethnic and socioeconomic compositions of counties and prenatal care utilization, we found (1) marriage postponement was negatively related to infant mortality in the southwestern states, but positively associated with infant mortality in parts of Indiana, Kentucky, and Tennessee, (2) cohabitation rates were positively related to infant mortality, and this relationship was stronger in California, coastal Virginia, and the Carolinas than other areas, and (3) a positive association between divorce rates and infant mortality in southwestern and northeastern areas of the US. These spatial patterns suggested that the associations between the SDT and infant mortality were stronger in the areas in the vanguard of the SDT than in others. The comparison between global Poisson and GWPR results indicated that a place-specific spatial analysis not only fit the data better, but also provided insights into understanding the non-stationarity of the associations between the SDT and infant mortality. PMID:25383259

  3. Precision Efficacy Analysis for Regression.

    ERIC Educational Resources Information Center

    Brooks, Gordon P.

    When multiple linear regression is used to develop a prediction model, sample size must be large enough to ensure stable coefficients. If the derivation sample size is inadequate, the model may not predict well for future subjects. The precision efficacy analysis for regression (PEAR) method uses a cross-validity approach to select sample sizes…

  4. Ecological Regression and Voting Rights.

    ERIC Educational Resources Information Center

    Freedman, David A.; And Others

    1991-01-01

    The use of ecological regression in voting rights cases is discussed in the context of a lawsuit against Los Angeles County (California) in 1990. Ecological regression assumes that systematic voting differences between precincts are explained by ethnic differences. An alternative neighborhood model is shown to lead to different conclusions. (SLD)

  5. Logistic Regression: Concept and Application

    ERIC Educational Resources Information Center

    Cokluk, Omay

    2010-01-01

    The main focus of logistic regression analysis is classification of individuals in different groups. The aim of the present study is to explain basic concepts and processes of binary logistic regression analysis intended to determine the combination of independent variables which best explain the membership in certain groups called dichotomous…

  6. Adjusting the Chain Gear

    NASA Astrophysics Data System (ADS)

    Koloc, Z.; Korf, J.; Kavan, P.

    This adjustment (modification) concerns gear chains that transmit motion between sprocket wheels on parallel shafts. The purpose of the chain gear adjustment is to remove unwanted effects by using a chain guide on the links (a sliding guide rail), ensuring a smooth fit of the chain rollers into the wheel tooth gaps.

  7. Adjustment to Recruit Training.

    ERIC Educational Resources Information Center

    Anderson, Betty S.

    The thesis examines problems of adjustment encountered by new recruits entering the military services. Factors affecting adjustment are discussed: the recruit training staff and environment, recruit background characteristics, the military's image, the changing values and motivations of today's youth, and the recruiting process. Sources of…

  8. Fungible weights in logistic regression.

    PubMed

    Jones, Jeff A; Waller, Niels G

    2016-06-01

    In this article we develop methods for assessing parameter sensitivity in logistic regression models. To set the stage for this work, we first review Waller's (2008) equations for computing fungible weights in linear regression. Next, we describe 2 methods for computing fungible weights in logistic regression. To demonstrate the utility of these methods, we compute fungible logistic regression weights using data from the Centers for Disease Control and Prevention's (2010) Youth Risk Behavior Surveillance Survey, and we illustrate how these alternate weights can be used to evaluate parameter sensitivity. To make our work accessible to the research community, we provide R code (R Core Team, 2015) that will generate both kinds of fungible logistic regression weights. PMID:26651981

  9. [Regression grading in gastrointestinal tumors].

    PubMed

    Tischoff, I; Tannapfel, A

    2012-02-01

    Preoperative neoadjuvant chemoradiation therapy is a well-established and essential part of the interdisciplinary treatment of gastrointestinal tumors. Neoadjuvant treatment leads to regressive changes in tumors. To evaluate the histological tumor response, different scoring systems describing regressive changes are used, known as tumor regression grading. Tumor regression grading is usually based on the presence of residual vital tumor cells in proportion to the total tumor size. Currently, no nationally or internationally accepted grading systems exist. In general, common guidelines should be used in the pathohistological diagnostics of tumors after neoadjuvant therapy. In particular, the standard tumor grading will be replaced by tumor regression grading. Furthermore, tumors after neoadjuvant treatment are marked with the prefix "y" in the TNM classification. PMID:22293790

  10. Impact of BAC limit reduction on different population segments: a Poisson fixed effect analysis.

    PubMed

    Kaplan, Sigal; Prato, Carlo Giacomo

    2007-11-01

    Over the past few decades, several countries enacted the reduction of the legal blood alcohol concentration (BAC) limit, often alongside administrative license revocation or suspension, to battle drinking-and-driving behavior. Several researchers investigated the effectiveness of these policies by applying different analysis procedures, while assuming population homogeneity in responding to these laws. The present analysis focuses on evaluating the impact of the BAC limit reduction on different population segments. Poisson regression models, adapted to account for possible observation dependence over time and state-specific effects, are estimated to measure the reduction in the number of alcohol-related accidents and fatalities for single-vehicle accidents in 22 U.S. jurisdictions over a period of 15 years starting in 1990. Model estimates demonstrate that, for alcohol-related single-vehicle crashes, (i) BAC laws are more effective in reducing the number of casualties than the number of accidents, (ii) women and the elderly exhibit higher law compliance than men and the young adult and adult populations, respectively, and (iii) the presence of passengers in the vehicle enhances the driver's sense of responsibility. PMID:17920837
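
    A minimal sketch of the modeling idea (a simulated panel loosely shaped like the study's 22 jurisdictions over 15 years; the enactment year, effect sizes, and the statsmodels library are assumptions): jurisdiction dummies absorb state-specific effects while a policy indicator captures the BAC-limit change.

    ```python
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    # Fixed-effect Poisson regression for panel counts on simulated data:
    # C(state) adds jurisdiction dummies; "law" flags years after a
    # hypothetical BAC-limit reduction.
    rng = np.random.default_rng(2)
    df = pd.DataFrame({
        "state": np.repeat(np.arange(22), 15),
        "year": np.tile(np.arange(1990, 2005), 22),
    })
    df["law"] = (df["year"] >= 1997).astype(int)   # hypothetical enactment year
    state_fe = rng.normal(scale=0.3, size=22)[df["state"]]
    rate = np.exp(3.0 + state_fe - 0.15 * df["law"].to_numpy())
    df["crashes"] = rng.poisson(rate)

    fit = smf.poisson("crashes ~ law + C(state)", data=df).fit(disp=False)
    print(fit.params["law"])    # estimated log rate-ratio of the BAC law
    ```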

  11. Non-linear properties of metallic cellular materials with a negative Poisson's ratio

    NASA Technical Reports Server (NTRS)

    Choi, J. B.; Lakes, R. S.

    1992-01-01

    Negative Poisson's ratio copper foam was prepared and characterized experimentally. The transformation into re-entrant foam was accomplished by applying sequential permanent compressions above the yield point to achieve a triaxial compression. The Poisson's ratio of the re-entrant foam depended on strain and attained a relative minimum at strains near zero. Poisson's ratio as small as -0.8 was achieved. The strain dependence of properties occurred over a narrower range of strain than in the polymer foams studied earlier. Annealing of the foam resulted in a slightly greater magnitude of negative Poisson's ratio and greater toughness at the expense of a decrease in the Young's modulus.

  12. Application of the Hyper-Poisson Generalized Linear Model for Analyzing Motor Vehicle Crashes.

    PubMed

    Khazraee, S Hadi; Sáez-Castillo, Antonio Jose; Geedipally, Srinivas Reddy; Lord, Dominique

    2015-05-01

    The hyper-Poisson distribution can handle both over- and underdispersion, and its generalized linear model formulation allows the dispersion of the distribution to be observation-specific and dependent on model covariates. This study's objective is to examine the potential applicability of a newly proposed generalized linear model framework for the hyper-Poisson distribution in analyzing motor vehicle crash count data. The hyper-Poisson generalized linear model was first fitted to intersection crash data from Toronto, characterized by overdispersion, and then to crash data from railway-highway crossings in Korea, characterized by underdispersion. The results of this study are promising. When fitted to the Toronto data set, the goodness-of-fit measures indicated that the hyper-Poisson model with a variable dispersion parameter provided a statistical fit as good as the traditional negative binomial model. The hyper-Poisson model was also successful in handling the underdispersed data from Korea; the model performed as well as the gamma probability model and the Conway-Maxwell-Poisson model previously developed for the same data set. The advantages of the hyper-Poisson model studied in this article are noteworthy. Unlike the negative binomial model, which has difficulties in handling underdispersed data, the hyper-Poisson model can handle both over- and underdispersed crash data. Although not a major issue for the Conway-Maxwell-Poisson model, the effect of each variable on the expected mean of crashes is easily interpretable in the case of this new model. PMID:25385093
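
    In one common parameterization (the Bardwell-Crow form, used here as an assumption), the hyper-Poisson pmf is P(Y = y) = λ^y / ((γ)_y · ₁F₁(1; γ; λ)), where (γ)_y is the rising factorial; γ = 1 recovers the Poisson, γ > 1 gives overdispersion and γ < 1 underdispersion. A minimal sketch with SciPy:

    ```python
    import numpy as np
    from scipy.special import hyp1f1, poch
    from scipy.stats import poisson

    def hyperpoisson_pmf(y, lam, gamma):
        """Hyper-Poisson pmf: hyp1f1(1, gamma, lam) is the normalizing
        constant and poch(gamma, y) the rising factorial (gamma)_y."""
        return lam ** y / (poch(gamma, y) * hyp1f1(1.0, gamma, lam))

    y = np.arange(8)
    print(hyperpoisson_pmf(y, lam=2.0, gamma=1.0))  # gamma = 1 ...
    print(poisson.pmf(y, 2.0))                      # ... matches Poisson(2)
    print(hyperpoisson_pmf(y, lam=2.0, gamma=2.5))  # overdispersed alternative
    ```

    Letting γ depend on covariates through a link function is what turns this pmf into the observation-specific-dispersion GLM studied in the paper.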

  13. Practical Session: Simple Linear Regression

    NASA Astrophysics Data System (ADS)

    Clausel, M.; Grégoire, G.

    2014-12-01

    Two exercises are proposed to illustrate simple linear regression. The first one is based on the famous Galton data set on heredity. We use the lm R command and get coefficient estimates, the residual standard error, R2, residuals… In the second example, devoted to data on the vapor tension of mercury, we fit a simple linear regression, predict values, and anticipate multiple linear regression. This practical session is an excerpt from practical exercises proposed by A. Dalalyan at ENPC (see Exercises 1 and 2 of http://certis.enpc.fr/~dalalyan/Download/TP_ENPC_4.pdf).

  14. Splines for Diffeomorphic Image Regression

    PubMed Central

    Singh, Nikhil; Niethammer, Marc

    2016-01-01

    This paper develops a method for splines on diffeomorphisms for image regression. In contrast to previously proposed methods to capture image changes over time, such as geodesic regression, the method can capture more complex spatio-temporal deformations. In particular, it is a first step towards capturing periodic motions for example of the heart or the lung. Starting from a variational formulation of splines the proposed approach allows for the use of temporal control points to control spline behavior. This necessitates the development of a shooting formulation for splines. Experimental results are shown for synthetic and real data. The performance of the method is compared to geodesic regression. PMID:25485370

  15. SLIT ADJUSTMENT CLAMP

    DOEpatents

    McKenzie, K.R.

    1959-07-01

    An electrode support which permits accurate alignment and adjustment of the electrode in a plurality of planes and about a plurality of axes in a calutron is described. The support will align the slits in the electrode with the slits of an ionizing chamber so as to provide for the egress of ions. The support comprises an insulator, a leveling plate carried by the insulator and having diametrically opposed attaching screws screwed to the plate and the insulator and diametrically opposed adjusting screws for bearing against the insulator, and an electrode associated with the plate for adjustment therewith.

  16. CMS Frailty Adjustment Model

    PubMed Central

    Kautter, John; Pope, Gregory C.

    2004-01-01

    The authors document the development of the CMS frailty adjustment model, a Medicare payment approach that adjusts payments to a Medicare managed care organization (MCO) according to the functional impairment of its community-residing enrollees. Beginning in 2004, this approach is being applied to certain organizations, such as Program of All-Inclusive Care for the Elderly (PACE), that specialize in providing care to the community-residing frail elderly. In the future, frailty adjustment could be extended to more Medicare managed care organizations. PMID:25372243

  17. Optical Signal Processing: Poisson Image Restoration and Shearing Interferometry

    NASA Technical Reports Server (NTRS)

    Hong, Yie-Ming

    1973-01-01

    Optical signal processing can be performed in either digital or analog systems. Digital computers and coherent optical systems are discussed as they are used in optical signal processing. Topics include: image restoration; phase-object visualization; image contrast reversal; optical computation; image multiplexing; and fabrication of spatial filters. Digital optical data processing deals with restoration of images degraded by signal-dependent noise. When the input data of an image restoration system are the numbers of photoelectrons received from various areas of a photosensitive surface, the data are Poisson distributed with mean values proportional to the illuminance of the incoherently radiating object and background light. Optical signal processing using coherent optical systems is also discussed. Following a brief review of the pertinent details of Ronchi's diffraction grating interferometer, moire effect, carrier-frequency photography, and achromatic holography, two new shearing interferometers based on them are presented. Both interferometers can produce variable shear.

  18. Deformations of non-semisimple Poisson pencils of hydrodynamic type

    NASA Astrophysics Data System (ADS)

    Della Vedova, Alberto; Lorenzoni, Paolo; Savoldi, Andrea

    2016-09-01

    We study the deformations of two-component non-semisimple Poisson pencils of hydrodynamic type associated with Balinskiĭ–Novikov algebras. We show that in most cases the second order deformations are parametrized by two functions of a single variable. We find that one function is invariant with respect to the subgroup of Miura transformations preserving the dispersionless limit, and the other function is related to a one-parameter family of truncated structures. In two exceptional cases the second order deformations are parametrized by four functions. Among these, two are invariant and two are related to a two-parameter family of truncated structures. We also study the lift of the deformations of n-component semisimple structures. This example suggests that deformations of non-semisimple pencils corresponding to the lifted invariant parameters are unobstructed.

  19. An alternating minimization method for blind deconvolution from Poisson data

    NASA Astrophysics Data System (ADS)

    Prato, Marco; La Camera, Andrea; Bonettini, Silvia

    2014-10-01

    Blind deconvolution is a particularly challenging inverse problem since information on both the desired target and the acquisition system has to be inferred from the measured data. When the collected data are affected by Poisson noise, this problem is typically addressed by the minimization of the Kullback-Leibler divergence, in which the unknowns are sought in particular feasible sets depending on the a priori information provided by the specific application. If these sets are separated, then the resulting constrained minimization problem can be addressed with an inexact alternating strategy. In this paper we apply this optimization tool to the problem of reconstructing astronomical images from adaptive optics systems, and we show that the proposed approach succeeds in providing very good results in the blind deconvolution of nondense stellar clusters.

  20. Note on the Poisson structure of the damped oscillator

    SciTech Connect

    Hone, A. N. W.; Senthilvelan, M.

    2009-10-15

    The damped harmonic oscillator is one of the most studied systems with respect to the problem of quantizing dissipative systems. Recently Chandrasekar et al. [J. Math. Phys. 48, 032701 (2007)] applied the Prelle-Singer method to construct conserved quantities and an explicit time-independent Lagrangian and Hamiltonian structure for the damped oscillator. Here we describe the associated Poisson bracket which generates the continuous flow, pointing out that there is a subtle problem of definition on the whole phase space. The action-angle variables for the system are also presented, and we further explain how to extend these considerations to the discrete setting. Some implications for the quantum case are briefly mentioned.

  1. Analytical stress intensity solution for the Stable Poisson Loaded specimen

    NASA Astrophysics Data System (ADS)

    Ghosn, Louis J.; Calomino, Anthony M.; Brewer, David N.

    1993-04-01

    An analytical calibration of the Stable Poisson Loaded (SPL) specimen is presented. The specimen configuration is similar to the ASTM E-561 compact-tension specimen with displacement controlled wedge loading used for R-curve determination. The crack mouth opening displacements (CMODs) are produced by the diametral expansion of an axially compressed cylindrical pin located in the wake of a machined notch. Due to the unusual loading configuration, a three-dimensional finite element analysis was performed with gap elements simulating the contact between the pin and specimen. In this report, stress intensity factors, CMODs, and crack displacement profiles are reported for different crack lengths and different contacting conditions. It was concluded that the computed stress intensity factor decreases sharply with increasing crack length, thus making the SPL specimen configuration attractive for fracture testing of brittle, high modulus materials.

  2. Numerical calibration of the stable poisson loaded specimen

    NASA Astrophysics Data System (ADS)

    Ghosn, Louis J.; Calomino, Anthony M.; Brewer, Dave N.

    1992-10-01

    An analytical calibration of the Stable Poisson Loaded (SPL) specimen is presented. The specimen configuration is similar to the ASTM E-561 compact-tension specimen with displacement controlled wedge loading used for R-Curve determination. The crack mouth opening displacements (CMOD's) are produced by the diametral expansion of an axially compressed cylindrical pin located in the wake of a machined notch. Due to the unusual loading configuration, a three-dimensional finite element analysis was performed with gap elements simulating the contact between the pin and specimen. In this report, stress intensity factors, CMOD's, and crack displacement profiles are reported for different crack lengths and different contacting conditions. It was concluded that the computed stress intensity factor decreases sharply with increasing crack length, thus making the SPL specimen configuration attractive for fracture testing of brittle, high modulus materials.

  3. Application of the sine-Poisson equation in solar magnetostatics

    NASA Technical Reports Server (NTRS)

    Webb, G. M.; Zank, G. P.

    1990-01-01

    Solutions of the sine-Poisson equations are used to construct a class of isothermal magnetostatic atmospheres, with one ignorable coordinate corresponding to a uniform gravitational field in a plane geometry. The distributed current in the model (j) is directed along the x-axis, where x is the horizontal ignorable coordinate; (j) varies as the sine of the magnetostatic potential and falls off exponentially with distance vertical to the base with an e-folding distance equal to the gravitational scale height. Solutions for the magnetostatic potential A corresponding to the one-soliton, two-soliton, and breather solutions of the sine-Gordon equation are studied. Depending on the values of the free parameters in the soliton solutions, horizontally periodic magnetostatic structures are obtained possessing either a single X-type neutral point, multiple neutral X-points, or no X-points at all.

  4. Poisson's ratios of auxetic and other technological materials.

    PubMed

    Ballato, Arthur

    2010-01-01

    Poisson's ratio, the ratio of lateral contraction to longitudinal extension in a thin, linearly elastic rod, has a long and interesting history. For isotropic bodies, it can theoretically range from +1/2 to -1; the experimental gamut for anisotropic materials is even larger. The ratio is positive for all combinations of directions in most crystals. But as far back as the 1800s, Voigt and others found that negative values were encountered for some materials, a property now called auxeticity. Here we examine this property from the point of view of crystal stability and compute extrema of the ratio for various interesting and technologically important materials. Potential applications of the auxetic property are mentioned. PMID:20040420

  5. The relative risk in a cohort study with Poisson cases.

    PubMed

    Mulder, P G

    1988-01-01

    This paper deals with making statistical inference about the relative risk (or risk ratio) in a cohort (or prospective) study with dichotomous exposure when the number of cases is a Poisson distributed variable. The exact procedure for testing the null hypothesis for the relative risk and the exact computation of its confidence interval for a single 2 X 2 table is presented. Maximum likelihood methods and the homogeneity test are presented for the common risk ratio when data is stratified in several 2 X 2 tables. These methods are based upon a sufficient statistic and therefore are considered proper statistical alternatives to the more descriptive epidemiological measures such as (in)directly standardized mortality (morbidity) ratios. All computations can be done on a programmable pocket calculator. With the HP-41 CV more than 70 strata can be distinguished. PMID:3180748
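
    The conditioning trick behind the exact procedure is standard: if the exposed and unexposed case counts are independent Poisson variables, then given their total, the exposed count is binomial, so exact tests and confidence intervals for the relative risk reduce to binomial ones. A sketch with SciPy and made-up numbers (the counts and person-time values are hypothetical):

    ```python
    from scipy.stats import binomtest

    # Exact inference for the rate ratio (RR) in a cohort with Poisson case
    # counts: conditional on the total, the exposed count is binomial with
    # p = RR*PT1 / (RR*PT1 + PT0), where PT denotes person-time.
    cases_exp, cases_unexp = 28, 15
    pt_exp, pt_unexp = 1200.0, 1000.0        # hypothetical person-years

    p_null = pt_exp / (pt_exp + pt_unexp)    # p under H0: RR = 1
    result = binomtest(cases_exp, cases_exp + cases_unexp, p_null)
    print(result.pvalue)

    # Map the exact binomial CI for p back to an RR interval.
    ci = result.proportion_ci(confidence_level=0.95)
    to_rr = lambda p: (p / (1.0 - p)) * (pt_unexp / pt_exp)
    print(to_rr(ci.low), to_rr(ci.high))
    ```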

  6. Numerical calibration of the stable poisson loaded specimen

    NASA Technical Reports Server (NTRS)

    Ghosn, Louis J.; Calomino, Anthony M.; Brewer, Dave N.

    1992-01-01

    An analytical calibration of the Stable Poisson Loaded (SPL) specimen is presented. The specimen configuration is similar to the ASTM E-561 compact-tension specimen with displacement controlled wedge loading used for R-Curve determination. The crack mouth opening displacements (CMOD's) are produced by the diametral expansion of an axially compressed cylindrical pin located in the wake of a machined notch. Due to the unusual loading configuration, a three-dimensional finite element analysis was performed with gap elements simulating the contact between the pin and specimen. In this report, stress intensity factors, CMOD's, and crack displacement profiles are reported for different crack lengths and different contacting conditions. It was concluded that the computed stress intensity factor decreases sharply with increasing crack length, thus making the SPL specimen configuration attractive for fracture testing of brittle, high modulus materials.

  7. A bivariate survival model with compound Poisson frailty

    PubMed Central

    Wienke, A.; Ripatti, S.; Palmgren, J.; Yashin, A.

    2015-01-01

    A correlated frailty model is suggested for analysis of bivariate time-to-event data. The model is an extension of the correlated power variance function (PVF) frailty model (correlated three-parameter frailty model). It is based on a bivariate extension of the compound Poisson frailty model in univariate survival analysis. It allows for a non-susceptible fraction (of zero frailty) in the population, overcoming the common assumption in survival analysis that all individuals are susceptible to the event under study. The model contains the correlated gamma frailty model and the correlated inverse Gaussian frailty model as special cases. A maximum likelihood estimation procedure for the parameters is presented and its properties are studied in a small simulation study. This model is applied to breast cancer incidence data of Swedish twins. The proportion of women susceptible to breast cancer is estimated to be 15 per cent. PMID:19856276

  8. Beyond Poisson-Boltzmann: Numerical Sampling of Charge Density Fluctuations.

    PubMed

    Poitevin, Frédéric; Delarue, Marc; Orland, Henri

    2016-07-01

    We present a method aimed at sampling charge density fluctuations in Coulomb systems. The derivation follows from a functional integral representation of the partition function in terms of charge density fluctuations. Starting from the mean-field solution given by the Poisson-Boltzmann equation, an original approach is proposed to numerically sample fluctuations around it, through the propagation of a Langevin-like stochastic partial differential equation (SPDE). The diffusion tensor of the SPDE can be chosen so as to avoid the numerical complexity linked to long-range Coulomb interactions, effectively rendering the theory completely local. A finite-volume implementation of the SPDE is described, and the approach is illustrated with preliminary results on the study of a system made of two like-charge ions immersed in a bath of counterions. PMID:27075231

  9. Nonstationary elementary-field light randomly triggered by Poisson impulses.

    PubMed

    Fernández-Pousa, Carlos R

    2013-05-01

    A stochastic theory of nonstationary light describing the random emission of elementary pulses is presented. The emission is governed by a nonhomogeneous Poisson point process determined by a time-varying emission rate. The model describes, in the appropriate limits, stationary, cyclostationary, locally stationary, and pulsed radiation, and reduces to a Gaussian theory in the limit of dense emission rate. The first- and second-order coherence theories are solved after the computation of second- and fourth-order correlation functions by use of the characteristic function. The ergodicity of second-order correlations under various types of detectors is explored and a number of observables, including optical spectrum, amplitude, and intensity correlations, are analyzed. PMID:23695325
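
    The emission process described above is simple to simulate. The classic recipe is Lewis-Shedler thinning (the raised-cosine rate below is an arbitrary illustration, not the paper's model): generate a homogeneous Poisson process at the maximum rate, then keep each point with probability proportional to the instantaneous rate.

    ```python
    import numpy as np

    def simulate_nhpp(rate, rate_max, t_end, rng):
        """Simulate a nonhomogeneous Poisson point process on [0, t_end]
        by thinning: draw homogeneous arrivals at rate_max, keep each
        point t with probability rate(t) / rate_max."""
        t, events = 0.0, []
        while True:
            t += rng.exponential(1.0 / rate_max)
            if t > t_end:
                return np.array(events)
            if rng.uniform() < rate(t) / rate_max:
                events.append(t)

    rng = np.random.default_rng(3)
    # Hypothetical time-varying emission rate: a raised-cosine modulation.
    rate = lambda t: 50.0 * (1.0 + np.cos(2 * np.pi * t)) / 2.0
    pulses = simulate_nhpp(rate, rate_max=50.0, t_end=10.0, rng=rng)
    ```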

  10. Numerical Solution of the Gyrokinetic Poisson Equation in TEMPEST

    NASA Astrophysics Data System (ADS)

    Dorr, Milo; Cohen, Bruce; Cohen, Ronald; Dimits, Andris; Hittinger, Jeffrey; Kerbel, Gary; Nevins, William; Rognlien, Thomas; Umansky, Maxim; Xiong, Andrew; Xu, Xueqiao

    2006-10-01

    The gyrokinetic Poisson (GKP) model in the TEMPEST continuum gyrokinetic edge plasma code yields the electrostatic potential due to the charge density of electrons and an arbitrary number of ion species, including the effects of gyroaveraging in the limit kρ ≪ 1. The TEMPEST equations are integrated as a differential algebraic system involving a nonlinear system solve via Newton-Krylov iteration. The GKP preconditioner block is inverted using a multigrid preconditioned conjugate gradient (CG) algorithm. Electrons are treated as kinetic or adiabatic. The Boltzmann relation in the adiabatic option employs flux surface averaging to maintain neutrality within field lines and is solved self-consistently with the GKP equation. A decomposition procedure circumvents the near singularity of the GKP Jacobian block that otherwise degrades CG convergence.

  11. Identifying Seismicity Levels via Poisson Hidden Markov Models

    NASA Astrophysics Data System (ADS)

    Orfanogiannaki, K.; Karlis, D.; Papadopoulos, G. A.

    2010-08-01

    Poisson Hidden Markov models (PHMMs) are introduced to model temporal seismicity changes. In a PHMM the unobserved sequence of states is a finite-state Markov chain, and the distribution of the observation at any time is Poisson with a rate depending only on the current state of the chain. Thus, PHMMs allow a region to have a varying seismicity rate. We applied the PHMM to model earthquake frequencies in the seismogenic area of Killini, Ionian Sea, Greece, in the period between 1990 and 2006. Simulations of data from the assumed model showed that it describes the true data quite well. The earthquake catalogue is dominated by main shocks occurring in 1993, 1997 and 2002. The time plot of PHMM seismicity states not only reproduces the three seismicity clusters but also quantifies the seismicity level and reveals the strength of the serial dependence of the events at any point in time. Foreshock activity becomes quite evident before the three sequences, with a gradual transition to states of cascade seismicity. Traditional analysis, based on the determination of highly significant changes of seismicity rates, failed to recognize foreshocks before the 1997 main shock due to the low number of events preceding that main shock. PHMM thus performs better than traditional analysis, since the transition from one state to another depends not only on the total number of events involved but also on the current state of the system. Therefore, PHMM recognizes significant changes of seismicity soon after they start, which is of particular importance for real-time recognition of foreshock activities and other seismicity changes.
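
    The likelihood machinery of a PHMM is compact enough to sketch. Below, a scaled forward algorithm evaluates a two-state model in NumPy/SciPy; the rates, transition matrix, and monthly counts are made-up illustrations, not the Killini catalogue.

    ```python
    import numpy as np
    from scipy.stats import poisson

    def phmm_loglik(counts, trans, rates, init):
        """Log-likelihood of a Poisson HMM via the scaled forward algorithm:
        hidden states follow a Markov chain with matrix `trans`, and each
        observed count is Poisson with the current state's rate."""
        alpha = init * poisson.pmf(counts[0], rates)
        loglik = np.log(alpha.sum())
        alpha /= alpha.sum()
        for c in counts[1:]:
            alpha = (alpha @ trans) * poisson.pmf(c, rates)
            loglik += np.log(alpha.sum())
            alpha /= alpha.sum()
        return loglik

    # Two hypothetical states: background (2 events/month), cascade (15).
    trans = np.array([[0.95, 0.05],
                      [0.20, 0.80]])
    counts = np.array([1, 3, 2, 14, 18, 11, 2, 1])
    print(phmm_loglik(counts, trans,
                      rates=np.array([2.0, 15.0]),
                      init=np.array([0.9, 0.1])))
    ```

    Maximizing this quantity over the rates and transition probabilities (e.g., by EM) and decoding the state sequence is what produces the seismicity-level plot described above.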

  12. Abstract Expression Grammar Symbolic Regression

    NASA Astrophysics Data System (ADS)

    Korns, Michael F.

    This chapter examines the use of Abstract Expression Grammars to perform the entire Symbolic Regression process without the use of Genetic Programming per se. The techniques explored produce a symbolic regression engine which has absolutely no bloat, which allows total user control of the search space and output formulas, and which is faster and more accurate than the engines produced in our previous papers using Genetic Programming. The genome is an all-vector structure with four chromosomes plus additional epigenetic and constraint vectors, allowing total user control of the search space and the final output formulas. A combination of specialized compiler techniques, genetic algorithms, particle swarm, aged layered populations, plus discrete and continuous differential evolution are used to produce an improved symbolic regression system. Nine base test cases, from the literature, are used to test the improvement in speed and accuracy. The improved results indicate that these techniques move us a big step closer toward future industrial strength symbolic regression systems.

  13. Multiple Regression and Its Discontents

    ERIC Educational Resources Information Center

    Snell, Joel C.; Marsh, Mitchell

    2012-01-01

    Multiple regression is part of a larger statistical strategy originated by Gauss. The authors raise questions about the theory and suggest some changes that would make room for Mandelbrot and Serendipity.

  14. Time-Warped Geodesic Regression

    PubMed Central

    Hong, Yi; Singh, Nikhil; Kwitt, Roland; Niethammer, Marc

    2016-01-01

    We consider geodesic regression with parametric time-warps. This allows, for example, to capture saturation effects as typically observed during brain development or degeneration. While highly-flexible models to analyze time-varying image and shape data based on generalizations of splines and polynomials have been proposed recently, they come at the cost of substantially more complex inference. Our focus in this paper is therefore to keep the model and its inference as simple as possible while allowing to capture expected biological variation. We demonstrate that by augmenting geodesic regression with parametric time-warp functions, we can achieve comparable flexibility to more complex models while retaining model simplicity. In addition, the time-warp parameters provide useful information of underlying anatomical changes as demonstrated for the analysis of corpora callosa and rat calvariae. We exemplify our strategy for shape regression on the Grassmann manifold, but note that the method is generally applicable for time-warped geodesic regression. PMID:25485368

  15. Marginalized zero-inflated negative binomial regression with application to dental caries.

    PubMed

    Preisser, John S; Das, Kalyan; Long, D Leann; Divaris, Kimon

    2016-05-10

    The zero-inflated negative binomial regression model (ZINB) is often employed in diverse fields such as dentistry, health care utilization, highway safety, and medicine to examine relationships between exposures of interest and overdispersed count outcomes exhibiting many zeros. The regression coefficients of ZINB have latent class interpretations for a susceptible subpopulation at risk for the disease/condition under study with counts generated from a negative binomial distribution and for a non-susceptible subpopulation that provides only zero counts. The ZINB parameters, however, are not well-suited for estimating overall exposure effects, specifically, in quantifying the effect of an explanatory variable in the overall mixture population. In this paper, a marginalized zero-inflated negative binomial regression (MZINB) model for independent responses is proposed to model the population marginal mean count directly, providing straightforward inference for overall exposure effects based on maximum likelihood estimation. Through simulation studies, the finite sample performance of MZINB is compared with marginalized zero-inflated Poisson, Poisson, and negative binomial regression. The MZINB model is applied in the evaluation of a school-based fluoride mouthrinse program on dental caries in 677 children. PMID:26568034
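
    For the latent-class model that MZINB marginalizes, a standard ZINB fit is available off the shelf; the sketch below uses statsmodels on simulated data. Note that this fits the conventional ZINB with latent-class coefficients, not the paper's marginalized MZINB.

    ```python
    import numpy as np
    import statsmodels.api as sm
    from statsmodels.discrete.count_model import ZeroInflatedNegativeBinomialP

    # Toy zero-inflated counts: a non-susceptible fraction contributes only
    # structural zeros; the susceptible subpopulation is negative binomial.
    rng = np.random.default_rng(4)
    n = 500
    x = rng.normal(size=n)
    susceptible = rng.uniform(size=n) < 0.7
    mu = np.exp(0.5 + 0.4 * x)
    size = 5.0
    y = np.where(susceptible,
                 rng.negative_binomial(size, size / (size + mu)), 0)

    X = sm.add_constant(x)
    fit = ZeroInflatedNegativeBinomialP(y, X, exog_infl=X, p=2).fit(
        maxiter=200, disp=False)
    print(fit.params)   # count-model and inflation-model coefficients
    ```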

  16. Basis Selection for Wavelet Regression

    NASA Technical Reports Server (NTRS)

    Wheeler, Kevin R.; Lau, Sonie (Technical Monitor)

    1998-01-01

    A wavelet basis selection procedure is presented for wavelet regression. Both the basis and the threshold are selected using cross-validation. The method includes the capability of incorporating prior knowledge on the smoothness (or shape of the basis functions) into the basis selection procedure. The results of the method are demonstrated on sampled functions widely used in the wavelet regression literature. The results of the method are contrasted with other published methods.
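
    A minimal stand-in for the procedure (soft thresholding with the universal threshold instead of the paper's cross-validated basis and threshold selection; the PyWavelets package and the "db4" basis are assumptions):

    ```python
    import numpy as np
    import pywt

    # Wavelet regression sketch: decompose noisy samples, shrink the detail
    # coefficients, reconstruct the smooth estimate.
    rng = np.random.default_rng(7)
    t = np.linspace(0.0, 1.0, 512)
    y = np.sin(8 * np.pi * t) * (t > 0.3) + 0.2 * rng.normal(size=t.size)

    coeffs = pywt.wavedec(y, "db4", level=5)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745   # MAD noise estimate
    thresh = sigma * np.sqrt(2.0 * np.log(y.size))   # universal threshold
    coeffs = [coeffs[0]] + [pywt.threshold(c, thresh, "soft")
                            for c in coeffs[1:]]
    y_hat = pywt.waverec(coeffs, "db4")
    ```

    Cross-validating the wavelet family and the threshold, as the paper does, amounts to repeating this reconstruction over held-out samples and candidate bases.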

  17. Regression methods for spatial data

    NASA Technical Reports Server (NTRS)

    Yakowitz, S. J.; Szidarovszky, F.

    1982-01-01

    The kriging approach, a parametric regression method used by hydrologists and mining engineers, among others, also provides an error estimate for the integral of the regression function. The kriging method is explored and some of its statistical characteristics are described. The Watson method and theory are extended so that the kriging features are displayed. Theoretical and computational comparisons of the kriging and Watson approaches are offered.

  18. Wrong Signs in Regression Coefficients

    NASA Technical Reports Server (NTRS)

    McGee, Holly

    1999-01-01

    When using parametric cost estimation, it is important to note the possibility of the regression coefficients having the wrong sign. A wrong sign is defined as a sign on the regression coefficient opposite to the researcher's intuition and experience. Some possible causes for the wrong sign discussed in this paper are a small range of x's, leverage points, missing variables, multicollinearity, and computational error. Additionally, techniques for determining the cause of the wrong sign are given.
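
    Of the causes listed above, multicollinearity is the easiest to demonstrate: when two predictors are nearly identical, OLS splits their joint effect almost arbitrarily, and one coefficient can come out negative even though both truly act positively. A small simulated illustration:

    ```python
    import numpy as np

    # "Wrong sign" via multicollinearity: x2 is nearly a copy of x1, so the
    # individual coefficients are poorly identified even though the joint
    # fit of y on (x1, x2) is excellent.
    rng = np.random.default_rng(5)
    n = 200
    x1 = rng.normal(size=n)
    x2 = x1 + 0.02 * rng.normal(size=n)        # highly collinear with x1
    y = 1.0 + 2.0 * x1 + 1.9 * x2 + rng.normal(size=n)

    X = np.column_stack([np.ones(n), x1, x2])
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    print(beta)                      # the x1/x2 split is unstable; rerunning
                                     # with other seeds often flips a sign
    print(np.corrcoef(x2, y)[0, 1])  # yet x2 alone correlates positively with y
    ```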

  19. Remotely Adjustable Hydraulic Pump

    NASA Technical Reports Server (NTRS)

    Kouns, H. H.; Gardner, L. D.

    1987-01-01

    Outlet pressure adjusted to match varying loads. Electrohydraulic servo has positioned sleeve in leftmost position, adjusting outlet pressure to maximum value. Sleeve in equilibrium position, with control land covering control port. For lowest pressure setting, sleeve shifted toward right by increased pressure on sleeve shoulder from servovalve. Pump used in aircraft and robots, where hydraulic actuators repeatedly turned on and off, changing pump load frequently and over wide range.

  1. Shrinkage regression-based methods for microarray missing value imputation

    PubMed Central

    2013-01-01

    Background Missing values commonly occur in microarray data, which usually contain more than 5% missing values, with up to 90% of genes affected. Inaccurate missing value estimation reduces the power of downstream microarray data analyses. Many types of methods have been developed to estimate missing values. Among them, the regression-based methods are very popular and have been shown to perform better than the other types of methods in many testing microarray datasets. Results To further improve the performance of the regression-based methods, we propose shrinkage regression-based methods. Our methods take advantage of the correlation structure in the microarray data and select similar genes for the target gene by Pearson correlation coefficients. In addition, our methods incorporate the least squares principle, utilize a shrinkage estimation approach to adjust the coefficients of the regression model, and then use the new coefficients to estimate missing values. Simulation results show that the proposed methods provide more accurate missing value estimation in six testing microarray datasets than the existing regression-based methods do. Conclusions Imputation of missing values is a very important aspect of microarray data analyses because most of the downstream analyses require a complete dataset. Therefore, exploring accurate and efficient methods for estimating missing values has become an essential issue. Since our proposed shrinkage regression-based methods can provide accurate missing value estimation, they are competitive alternatives to the existing regression-based methods. PMID:24565159

  2. Weighted triangulation adjustment

    USGS Publications Warehouse

    Anderson, Walter L.

    1969-01-01

    The variation of coordinates method is employed to perform a weighted least squares adjustment of horizontal survey networks. Geodetic coordinates are required for each fixed and adjustable station. A preliminary inverse geodetic position computation is made for each observed line. Weights associated with each observed equation for direction, azimuth, and distance are applied in the formation of the normal equations in the least squares adjustment. The number of normal equations that may be solved is twice the number of new stations and less than 150. When the normal equations are solved, shifts are produced at adjustable stations. Previously computed correction factors are applied to the shifts and a most probable geodetic position is found for each adjustable station. Final azimuths and distances are computed. These may be written onto magnetic tape for subsequent computation of state plane or grid coordinates. Input consists of punch cards containing project identification, program options, and position and observation information. Results listed include preliminary and final positions, residuals, observation equations, solution of the normal equations showing magnitudes of shifts, and a plot of each adjusted and fixed station. During processing, data sets containing irrecoverable errors are rejected and the type of error is listed. The computer resumes processing of additional data sets. Other conditions cause warning errors to be issued, and processing continues with the current data set.

  3. Permeation through an open channel: Poisson-Nernst-Planck theory of a synthetic ionic channel.

    PubMed Central

    Chen, D; Lear, J; Eisenberg, B

    1997-01-01

    The synthetic channel [acetyl-(LeuSerSerLeuLeuSerLeu)3-CONH2]6 (pore diameter approximately 8 Å, length approximately 30 Å) is a bundle of six alpha-helices with blocked termini. This simple channel has complex properties, which are difficult to explain, even qualitatively, by traditional theories: its single-channel currents rectify in symmetrical solutions and its selectivity (defined by reversal potential) is a sensitive function of the bathing solution. These complex properties can be fit quantitatively if the channel has fixed charge at its ends, forming a kind of macrodipole bracketing a central charged region, and the shielding of the fixed charges is described by the Poisson-Nernst-Planck (PNP) equations. PNP fits current-voltage relations measured in 15 solutions with an r.m.s. error of 3.6% using four adjustable parameters: the diffusion coefficients in the channel's pore, DK = 2.1 × 10⁻⁶ and DCl = 2.6 × 10⁻⁷ cm²/s; and the fixed charge at the ends of the channel of ±0.12e (with unequal densities, 0.71 M = 0.021 e/Å on the N-side and -1.9 M = -0.058 e/Å on the C-side). The fixed charge in the central region is 0.31e (with density P2 = 0.47 M = 0.014 e/Å). In contrast to traditional theories, PNP computes the electric field in the open channel from all of the charges in the system, by a rapid and accurate numerical procedure. In essence, PNP is a theory of the shielding of fixed (i.e., permanent) charge of the channel by mobile charge and by the ionic atmosphere in and near the channel's pore. The theory fits a wide range of data because the ionic contents and potential profile in the channel change significantly with experimental conditions, as they must, if the channel simultaneously satisfies the Poisson and Nernst-Planck equations and boundary conditions. Qualitatively speaking, the theory shows that small changes in the ionic atmosphere of the channel (i.e., shielding) make big changes in the potential profile and even bigger changes in flux, because

  4. Incorporating Dipolar Solvents with Variable Density in Poisson-Boltzmann Electrostatics

    PubMed Central

    Azuara, Cyril; Orland, Henri; Bon, Michael; Koehl, Patrice; Delarue, Marc

    2008-01-01

    We describe a new way to calculate the electrostatic properties of macromolecules that goes beyond the classical Poisson-Boltzmann treatment with only a small extra CPU cost. The solvent region is no longer modeled as a homogeneous dielectric medium but rather as an assembly of self-orienting interacting dipoles of variable density. The method effectively unifies both the Poisson-centric view and the Langevin Dipole model. The model results in a variable dielectric constant ε(r) in the solvent region and also in a variable solvent density ρ(r) that depends on the nature of the closest exposed solute atoms. The model was calibrated using small molecules and ions solvation data with only two adjustable parameters, namely the size and dipolar moment of the solvent. Hydrophobicity scales derived from the solvent density profiles agree very well with independently derived hydrophobicity scales, both at the atomic or residue level. Dimerization interfaces in homodimeric proteins or lipid-binding regions in membrane proteins clearly appear as poorly solvated patches on the solute accessible surface. Comparison of the thermally averaged solvent density of this model with the one derived from molecular dynamics simulations shows qualitative agreement on a coarse-grained level. Because this calculation is much more

  5. Modeling Repeated Count Data: Some Extensions of the Rasch Poisson Counts Model.

    ERIC Educational Resources Information Center

    Duijn, Marijtje A. J. van; Jansen, Margo G. H.

    1995-01-01

    The Rasch Poisson Counts Model, a unidimensional latent trait model for tests that postulates that intensity parameters are products of test difficulty and subject ability parameters, is expanded into the Dirichlet-Gamma-Poisson model that takes into account variation between subjects and interaction between subjects and tests. (SLD)

  6. Comment on: ‘A Poisson resampling method for simulating reduced counts in nuclear medicine images’

    NASA Astrophysics Data System (ADS)

    de Nijs, Robin

    2015-07-01

    In order to be able to calculate half-count images from already acquired data, White and Lawson published their method based on Poisson resampling. They verified their method experimentally by measurements with a Co-57 flood source. In this comment their results are reproduced and confirmed by a direct numerical simulation in Matlab. Not only Poisson resampling, but also two direct redrawing methods were investigated. Redrawing methods were based on a Poisson and a Gaussian distribution. Mean, standard deviation, skewness and excess kurtosis half-count/full-count ratios were determined for all methods, and compared to the theoretical values for a Poisson distribution. Statistical parameters showed the same behavior as in the original note and showed the superiority of the Poisson resampling method. Rounding off before saving of the half count image had a severe impact on counting statistics for counts below 100. Only Poisson resampling was not affected by this, while Gaussian redrawing was less affected by it than Poisson redrawing. Poisson resampling is the method of choice, when simulating half-count (or less) images from full-count images. It simulates correctly the statistical properties, also in the case of rounding off of the images.
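
    The statistical point is easy to verify directly: thinning recorded Poisson counts binomially (keeping each event with probability 1/2) yields counts that are again exactly Poisson with half the mean, whereas redrawing from a Poisson with half the measured count inflates the variance. A small NumPy/SciPy check (the image size and mean count are arbitrary choices):

    ```python
    import numpy as np
    from scipy.stats import binom, poisson

    rng = np.random.default_rng(6)
    full = rng.poisson(lam=40.0, size=(128, 128))     # "full-count" image

    # Resampling by binomial thinning: marginally Poisson(20), exactly.
    half_resampled = binom.rvs(full, 0.5, random_state=rng)
    # Redrawing from Poisson(counts/2): variance exceeds the mean (~30 vs 20).
    half_redrawn = poisson.rvs(full / 2.0, random_state=rng)

    for img in (half_resampled, half_redrawn):
        print(img.mean(), img.var())   # Poisson statistics need mean == var
    ```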

  7. Urinary arsenic concentration adjustment factors and malnutrition.

    PubMed

    Nermell, Barbro; Lindberg, Anna-Lena; Rahman, Mahfuzar; Berglund, Marika; Persson, Lars Ake; El Arifeen, Shams; Vahter, Marie

    2008-02-01

    This study aims at evaluating the suitability of adjusting urinary concentrations of arsenic, or any other urinary biomarker, for variations in urine dilution by creatinine and specific gravity in a malnourished population. We measured the concentrations of metabolites of inorganic arsenic, creatinine and specific gravity in spot urine samples collected from 1466 individuals, 5-88 years of age, in Matlab, rural Bangladesh, where arsenic-contaminated drinking water and malnutrition are prevalent (about 30% of the adults had body mass index (BMI) below 18.5 kg/m²). The urinary concentrations of creatinine were low; on average 0.55 g/L in the adolescents and adults and about 0.35 g/L in the 5-12 years old children. Therefore, adjustment by creatinine gave much higher numerical values for the urinary arsenic concentrations than did the corresponding data expressed as microg/L, adjusted by specific gravity. As evaluated by multiple regression analyses, urinary creatinine, adjusted by specific gravity, was more affected by body size, age, gender and season than was specific gravity. Furthermore, urinary creatinine was found to be significantly associated with urinary arsenic, which further disqualifies the creatinine adjustment. PMID:17900556

  8. Interpretation of Standardized Regression Coefficients in Multiple Regression.

    ERIC Educational Resources Information Center

    Thayer, Jerome D.

    The extent to which standardized regression coefficients (beta values) can be used to determine the importance of a variable in an equation was explored. The beta value and the part correlation coefficient--also called the semi-partial correlation coefficient and reported in squared form as the incremental "r squared"--were compared for variables…
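
    The two statistics compared in the study are easy to compute side by side: betas are OLS coefficients on z-scored variables, while a predictor's squared part correlation is the increment in R² when it enters the equation last. A small simulated comparison:

    ```python
    import numpy as np

    rng = np.random.default_rng(8)
    n = 300
    X = rng.normal(size=(n, 2))
    y = 0.6 * X[:, 0] + 0.3 * X[:, 1] + rng.normal(size=n)

    z = lambda a: (a - a.mean(axis=0)) / a.std(axis=0)
    Xz, yz = z(X), z(y)
    beta = np.linalg.lstsq(Xz, yz, rcond=None)[0]     # standardized betas

    def r2(A, b):
        resid = b - A @ np.linalg.lstsq(A, b, rcond=None)[0]
        return 1.0 - resid.var() / b.var()

    # Squared part correlation: R^2 gain from adding each predictor last.
    part_sq = [r2(Xz, yz) - r2(Xz[:, [1 - j]], yz) for j in (0, 1)]
    print(beta, part_sq)
    ```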

  9. Demosaicing Based on Directional Difference Regression and Efficient Regression Priors.

    PubMed

    Wu, Jiqing; Timofte, Radu; Van Gool, Luc

    2016-08-01

    Color demosaicing is a key image processing step aiming to reconstruct the missing pixels from a recorded raw image. On the one hand, numerous interpolation methods focusing on spatial-spectral correlations have proved computationally very efficient, but they yield poor image quality and strong visible artifacts. On the other hand, optimization strategies, such as learned simultaneous sparse coding and sparsity and adaptive principal component analysis-based algorithms, were shown to greatly improve image quality compared with that delivered by interpolation methods, but unfortunately are computationally heavy. In this paper, we propose efficient regression priors as a novel, fast post-processing algorithm that learns the regression priors offline from training data. We also propose an independent efficient demosaicing algorithm based on directional difference regression, and introduce its enhanced version based on fused regression. We achieve an image quality comparable to that of the state-of-the-art methods for three benchmarks, while being order(s) of magnitude faster. PMID:27254866

  10. Interquantile Shrinkage in Regression Models

    PubMed Central

    Jiang, Liewen; Wang, Huixia Judy; Bondell, Howard D.

    2012-01-01

    Conventional analysis using quantile regression typically focuses on fitting the regression model at different quantiles separately. However, in situations where the quantile coefficients share some common feature, joint modeling of multiple quantiles to accommodate the commonality often leads to more efficient estimation. One example of common features is that a predictor may have a constant effect over one region of quantile levels but varying effects in other regions. To automatically perform estimation and detection of the interquantile commonality, we develop two penalization methods. When the quantile slope coefficients indeed do not change across quantile levels, the proposed methods will shrink the slopes towards constant and thus improve the estimation efficiency. We establish the oracle properties of the two proposed penalization methods. Through numerical investigations, we demonstrate that the proposed methods lead to estimations with competitive or higher efficiency than the standard quantile regression estimation in finite samples. Supplemental materials for the article are available online. PMID:24363546

  11. Survival Data and Regression Models

    NASA Astrophysics Data System (ADS)

    Grégoire, G.

    2014-12-01

    We start this chapter by introducing some basic elements for the analysis of censored survival data. Then we focus on right censored data and develop two types of regression models. The first one concerns the so-called accelerated failure time (AFT) models, which are parametric models where a function of a parameter depends linearly on the covariables. The second one is a semiparametric model, where the covariables enter in a multiplicative form in the expression of the hazard rate function. The main statistical tool for analysing these regression models is the maximum likelihood methodology; although we recall some essential results of ML theory, we refer to the chapter "Logistic Regression" for a more detailed presentation.

  12. Fast Poisson noise removal by biorthogonal Haar domain hypothesis testing

    NASA Astrophysics Data System (ADS)

    Zhang, B.; Fadili, M. J.; Starck, J.-L.; Digel, S. W.

    2008-07-01

    Methods based on hypothesis tests (HTs) in the Haar domain are widely used to denoise Poisson count data. Facing large datasets or real-time applications, Haar-based denoisers have to use the decimated transform to meet limited-memory or computation-time constraints. Unfortunately, for regular underlying intensities, decimation yields discontinuous estimates and strong “staircase” artifacts. In this paper, we propose to combine the HT framework with the decimated biorthogonal Haar (Bi-Haar) transform instead of the classical Haar. The Bi-Haar filter bank is normalized such that the p-values of Bi-Haar coefficients (pBH) provide a good approximation to those of Haar (pH) for high-intensity settings or large scales; for low-intensity settings and small scales, we show that pBH is essentially upper-bounded by pH. Thus, we may apply the Haar-based HTs to Bi-Haar coefficients to control a prefixed false positive rate. By doing so, we benefit from the regular Bi-Haar filter bank to gain a smooth estimate while always maintaining a low computational complexity. A Fisher-approximation-based threshold implementing the HTs is also established. The efficiency of this method is illustrated on an example of hyperspectral-source-flux estimation.

  13. Performance of Nonlinear Finite-Difference Poisson-Boltzmann Solvers.

    PubMed

    Cai, Qin; Hsieh, Meng-Juei; Wang, Jun; Luo, Ray

    2010-01-12

    We implemented and optimized seven finite-difference solvers for the full nonlinear Poisson-Boltzmann equation in biomolecular applications, including four relaxation methods, one conjugate gradient method, and two inexact Newton methods. The performance of the seven solvers was extensively evaluated with a large number of nucleic acids and proteins. Worth noting is the inexact Newton method in our analysis. We investigated the role of linear solvers in its performance by incorporating the incomplete Cholesky conjugate gradient and the geometric multigrid into its inner linear loop. We tailored and optimized both linear solvers for faster convergence rate. In addition, we explored strategies to optimize the successive over-relaxation method to reduce its convergence failures without too much sacrifice in its convergence rate. Specifically we attempted to adaptively change the relaxation parameter and to utilize the damping strategy from the inexact Newton method to improve the successive over-relaxation method. Our analysis shows that the nonlinear methods accompanied with a functional-assisted strategy, such as the conjugate gradient method and the inexact Newton method, can guarantee convergence in the tested molecules. Especially the inexact Newton method exhibits impressive performance when it is combined with highly efficient linear solvers that are tailored for its special requirement. PMID:24723843
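
    At the core of every solver compared above sits a sparse linear Poisson-type solve. As a point of reference (a toy 1-D finite-difference Laplacian with Dirichlet boundaries; the nonlinear Boltzmann term and the Newton/relaxation loops of the paper are omitted), the conjugate gradient building block looks like this in SciPy:

    ```python
    import numpy as np
    from scipy.sparse import diags
    from scipy.sparse.linalg import cg

    # Finite-difference Poisson solve by conjugate gradient: A is the
    # symmetric positive definite 1-D Laplacian with Dirichlet boundaries.
    n = 200
    h = 1.0 / (n + 1)
    A = diags([-np.ones(n - 1), 2.0 * np.ones(n), -np.ones(n - 1)],
              [-1, 0, 1]) / h**2
    rho = np.ones(n)              # toy source (charge density) term

    phi, info = cg(A, rho)
    assert info == 0              # info == 0 signals CG convergence
    ```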

  14. Continental crust composition constrained by measurements of crustal Poisson's ratio

    NASA Astrophysics Data System (ADS)

    Zandt, George; Ammon, Charles J.

    1995-03-01

    Deciphering the geological evolution of the Earth's continental crust requires knowledge of its bulk composition and global variability. The main uncertainties are associated with the composition of the lower crust. Seismic measurements probe the elastic properties of the crust at depth, from which composition can be inferred. Of particular note is Poisson's ratio, σ; this elastic parameter can be determined uniquely from the ratio of P- to S-wave seismic velocity, and provides a better diagnostic of crustal composition than either P- or S-wave velocity alone [1]. Previous attempts to measure σ have been limited by difficulties in obtaining coincident P- and S-wave data sampling the entire crust [2]. Here we report 76 new estimates of crustal σ spanning all of the continents except Antarctica. We find that, on average, σ increases with the age of the crust. Our results strongly support the presence of a mafic lower crust beneath cratons, and suggest either a uniformitarian craton formation process involving delamination of the lower crust during continental collisions, followed by magmatic underplating, or a model in which crust formation processes have changed since the Precambrian era.
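
    The unique mapping from the velocity ratio to σ mentioned above is, for an isotropic elastic solid, σ = (r² − 2) / (2(r² − 1)) with r = Vp/Vs. A one-function check (the 1.87 example value is illustrative, not from the paper):

    ```python
    import numpy as np

    def poisson_ratio(vp_over_vs):
        """Poisson's ratio from r = Vp/Vs for an isotropic elastic solid:
        sigma = (r**2 - 2) / (2 * (r**2 - 1))."""
        r2 = np.asarray(vp_over_vs, dtype=float) ** 2
        return (r2 - 2.0) / (2.0 * (r2 - 1.0))

    print(poisson_ratio(np.sqrt(3.0)))   # 0.25, the classic "Poisson solid"
    print(poisson_ratio(1.87))           # ~0.30, typical of more mafic rocks
    ```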

  15. Error propagation in PIV-based Poisson pressure calculations

    NASA Astrophysics Data System (ADS)

    Pan, Zhao; Whitehead, Jared; Thomson, Scott; Truscott, Tadd

    2015-11-01

    After more than 20 years of development, PIV has become a standard non-invasive velocity field measurement technique, and it promises to make PIV-based pressure calculations possible. However, the errors inherent in PIV velocity fields propagate through integration and contaminate the calculated pressure field. We propose an analysis that shows how the uncertainties in the velocity field propagate to the pressure field through the Poisson equation. First we model the dynamics of error propagation using boundary value problems (BVPs). Next, the L2-norm and/or L∞-norm are utilized as the measure of error in the velocity and pressure fields. Finally, using analysis techniques including the maximum principle and the Poincaré inequality, the error in the pressure field can be bounded in terms of the error level of the data by considering the well-posedness of the BVPs. Specifically, we examine whether and how the error in the pressure field depends continuously on the BVP data. Factors such as flow field geometry, boundary conditions, and velocity field noise levels will be discussed analytically.

  16. The Poisson Gamma distribution for wind speed data

    NASA Astrophysics Data System (ADS)

    Çakmakyapan, Selen; Özel, Gamze

    2016-04-01

    Wind energy is one of the most significant clean alternative energy sources and among the most rapidly developing renewable energy sources in the world. For the evaluation of wind energy potential, probability density functions (pdfs) are usually used to model wind speed distributions. The selection of an appropriate pdf reduces the wind power estimation error and also captures the wind speed characteristics. In the literature, different pdfs have been used to model wind speed data for wind energy applications. In this study, we propose a new probability distribution to model wind speed data. First, we define the new probability distribution, named the Poisson-Gamma (PG) distribution, and analyze wind speed data sets obtained from the Turkish State Meteorological Service. Then we model the data sets with the Exponential, Weibull, Lomax, three-parameter Burr, Gumbel, Gamma, and Rayleigh distributions, which are commonly used to model wind speed data, as well as the PG distribution. Finally, we compare the fitted distributions to select the best model and demonstrate that the PG distribution models the data sets best.

  17. Poisson process approximation for sequence repeats, and sequencing by hybridization.

    PubMed

    Arratia, R; Martin, D; Reinert, G; Waterman, M S

    1996-01-01

    Sequencing by hybridization is a tool to determine a DNA sequence from the unordered list of all l-tuples contained in this sequence; typical numbers for l are l = 8, 10, 12. For theoretical purposes we assume that the multiset of all l-tuples is known. This multiset determines the DNA sequence uniquely if none of the so-called Ukkonen transformations are possible. These transformations require repeats of (l-1)-tuples in the sequence, with these repeats occurring in certain spatial patterns. We model DNA as an i.i.d. sequence. We first prove Poisson process approximations for the process of indicators of all leftmost long repeats allowing self-overlap and for the process of indicators of all leftmost long repeats without self-overlap. Using the Chen-Stein method, we get bounds on the error of these approximations. As a corollary, we approximate the distribution of longest repeats. In the second step we analyze the spatial patterns of the repeats. Finally we combine these two steps to prove an approximation for the probability that a random sequence is uniquely recoverable from its list of l-tuples. For all our results we give some numerical examples including error bounds. PMID:8891959

  18. A Boussinesq-scaled, pressure-Poisson water wave model

    NASA Astrophysics Data System (ADS)

    Donahue, Aaron S.; Zhang, Yao; Kennedy, Andrew B.; Westerink, Joannes J.; Panda, Nishant; Dawson, Clint

    2015-02-01

    Through the use of Boussinesq scaling we develop and test a model for resolving non-hydrostatic pressure profiles in nonlinear wave systems over varying bathymetry. A Green-Nagdhi type polynomial expansion is used to resolve the pressure profile along the vertical axis, this is then inserted into the pressure-Poisson equation, retaining terms up to a prescribed order and solved using a weighted residual approach. The model shows rapid convergence properties with increasing order of polynomial expansion which can be greatly improved through the application of asymptotic rearrangement. Models of Boussinesq scaling of the fully nonlinear O (μ2) and weakly nonlinear O (μN) are presented, the analytical and numerical properties of O (μ2) and O (μ4) models are discussed. Optimal basis functions in the Green-Nagdhi expansion are determined through manipulation of the free-parameters which arise due to the Boussinesq scaling. The optimal O (μ2) model has dispersion accuracy equivalent to a Padé [2,2] approximation with one extra free-parameter. The optimal O (μ4) model obtains dispersion accuracy equivalent to a Padé [4,4] approximation with two free-parameters which can be used to optimize shoaling or nonlinear properties. In comparison to experimental results the O (μ4) model shows excellent agreement to experimental data.

  19. Partial least squares Cox regression for genome-wide data.

    PubMed

    Nygård, Ståle; Borgan, Ornulf; Lingjaerde, Ole Christian; Størvold, Hege Leite

    2008-06-01

    Most methods for survival prediction from high-dimensional genomic data combine the Cox proportional hazards model with some technique of dimension reduction, such as partial least squares regression (PLS). Applying PLS to the Cox model is not entirely straightforward, and multiple approaches have been proposed. The method of Park et al. (Bioinformatics 18(Suppl. 1):S120-S127, 2002) uses a reformulation of the Cox likelihood into a Poisson-type likelihood, thereby enabling estimation by iteratively reweighted partial least squares for generalized linear models. We propose a modification of the method of Park et al. (2002) such that estimates of the baseline hazard and the gene effects are obtained in separate steps. The resulting method has several advantages over the method of Park et al. (2002) and other existing Cox PLS approaches, as it allows for estimation of survival probabilities for new patients, enables a less memory-demanding estimation procedure, and allows for incorporation of lower-dimensional non-genomic variables like disease grade and tumor thickness. We also propose to combine our Cox PLS method with an initial gene selection step in which genes are ordered by their Cox score and only the highest-ranking k% of the genes are retained, obtaining a so-called supervised partial least squares regression method. In simulations, both the unsupervised and the supervised version outperform other Cox PLS methods. PMID:18188699
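
    A minimal sketch (assuming the lifelines package and hypothetical column names, not the authors' implementation) of the supervised pre-selection step: rank genes by their univariate Cox score and retain the highest-ranking fraction before the PLS step.

        import numpy as np
        import pandas as pd
        from lifelines import CoxPHFitter

        rng = np.random.default_rng(0)
        genes = [f"g{i}" for i in range(20)]
        df = pd.DataFrame(rng.normal(size=(100, 20)), columns=genes)
        df["time"] = rng.exponential(scale=np.exp(-0.8 * df["g0"]))  # g0 prognostic
        df["event"] = 1  # no censoring, for simplicity

        def cox_score_ranking(data, gene_cols, keep_frac=0.10):
            scores = {}
            for g in gene_cols:
                cph = CoxPHFitter()
                cph.fit(data[["time", "event", g]],
                        duration_col="time", event_col="event")
                scores[g] = abs(cph.summary.loc[g, "z"])  # univariate Wald z-score
            ranked = sorted(scores, key=scores.get, reverse=True)
            return ranked[:max(1, int(keep_frac * len(gene_cols)))]

        print(cox_score_ranking(df, genes))  # g0 should rank first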

  20. The Covariance Adjustment Approaches for Combining Incomparable Cox Regressions Caused by Unbalanced Covariates Adjustment: A Multivariate Meta-Analysis Study

    PubMed Central

    Dehesh, Tania; Zare, Najaf; Ayatollahi, Seyyed Mohammad Taghi

    2015-01-01

    Background. The univariate meta-analysis (UM) procedure, a technique that provides a single overall result, has become increasingly popular. Neglecting the existence of other concomitant covariates in the models leads to loss of treatment efficiency. Our aim was to propose four new approximation approaches for the covariance matrix of the coefficients, which is not readily available for the multivariate generalized least squares (MGLS) method as a multivariate meta-analysis approach. Methods. We evaluated the efficiency of four new approaches, including zero correlation (ZC), common correlation (CC), estimated correlation (EC), and multivariate multilevel correlation (MMC), with respect to estimation bias, mean square error (MSE), and 95% probability coverage of the confidence interval (CI) in the synthesis of Cox proportional hazards model coefficients in a simulation study. Results. Comparing the results of the simulation study on the MSE, bias, and CI of the estimated coefficients indicated that the MMC approach was the most accurate procedure compared to the EC, CC, and ZC procedures. The precision ranking of the four approaches according to all of the above settings was MMC ≥ EC ≥ CC ≥ ZC. Conclusion. This study highlights the advantages of MGLS meta-analysis over the UM approach. The results suggest the use of the MMC procedure to overcome the lack of information for having a complete covariance matrix of the coefficients. PMID:26413142

  1. Rural to Urban Adjustment

    ERIC Educational Resources Information Center

    Abramson, Jane A.

    Personal interviews with 100 former farm operators living in Saskatoon, Saskatchewan, were conducted in an attempt to understand the nature of the adjustment process caused by migration from rural to urban surroundings. Requirements for inclusion in the study were that respondents had owned or operated a farm for at least 3 years, had left their…

  2. Self adjusting inclinometer

    DOEpatents

    Hunter, Steven L.

    2002-01-01

    An inclinometer utilizing synchronous demodulation for high resolution and electronic offset adjustment provides a wide dynamic range without any moving components. A device encompassing a tiltmeter and accompanying electronic circuitry provides quasi-leveled tilt sensors that detect highly resolved tilt change without signal saturation.

  3. Self Adjusting Sunglasses

    NASA Technical Reports Server (NTRS)

    1986-01-01

    Corning Glass Works' Serengeti Driver sunglasses are unique in that their lenses self-adjust and filter light while suppressing glare. They eliminate more than 99% of the ultraviolet rays in sunlight. The frames are based on the NASA Anthropometric Source Book.

  4. Universal Poisson Statistics of mRNAs with Complex Decay Pathways.

    PubMed

    Thattai, Mukund

    2016-01-19

    Messenger RNA (mRNA) dynamics in single cells are often modeled as a memoryless birth-death process with a constant probability per unit time that an mRNA molecule is synthesized or degraded. This predicts a Poisson steady-state distribution of mRNA number, in close agreement with experiments. This is surprising, since mRNA decay is known to be a complex process. The paradox is resolved by realizing that the Poisson steady state generalizes to arbitrary mRNA lifetime distributions. A mapping between mRNA dynamics and queueing theory highlights an identifiability problem: a measured Poisson steady state is consistent with a large variety of microscopic models. Here, I provide a rigorous and intuitive explanation for the universality of the Poisson steady state. I show that the mRNA birth-death process and its complex decay variants all take the form of the familiar Poisson law of rare events, under a nonlinear rescaling of time. As a corollary, not only steady-states but also transients are Poisson distributed. Deviations from the Poisson form occur only under two conditions, promoter fluctuations leading to transcriptional bursts or nonindependent degradation of mRNA molecules. These results place severe limits on the power of single-cell experiments to probe microscopic mechanisms, and they highlight the need for single-molecule measurements. PMID:26743048
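
    A minimal Gillespie-style sketch of the memoryless birth-death model the abstract starts from (rate constants are illustrative); at steady state the copy-number mean and variance should both approach k_syn/k_deg, the Poisson signature.

        import numpy as np

        rng = np.random.default_rng(1)

        def birth_death_mrna(k_syn=10.0, k_deg=1.0, t_end=500.0, dt=1.0):
            """Exact stochastic simulation of mRNA synthesis and degradation."""
            t, n = 0.0, 0
            grid = np.arange(0.0, t_end, dt)
            out, gi = [], 0
            while gi < len(grid):
                rate = k_syn + k_deg * n
                t_next = t + rng.exponential(1.0 / rate)
                while gi < len(grid) and grid[gi] < t_next:
                    out.append(n)  # record the state on a fixed time grid
                    gi += 1
                n += 1 if rng.random() < k_syn / rate else -1
                t = t_next
            return np.array(out[50:])  # drop the initial transient

        s = birth_death_mrna()
        print(f"mean={s.mean():.2f} var={s.var():.2f} (Poisson: both near 10)")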

  5. Cactus: An Introduction to Regression

    ERIC Educational Resources Information Center

    Hyde, Hartley

    2008-01-01

    When the author first used "VisiCalc," the author thought it a very useful tool when he had the formulas. But how could he design a spreadsheet if there was no known formula for the quantities he was trying to predict? A few months later, the author relates he learned to use multiple linear regression software and suddenly it all clicked into…

  6. Regression modelling of Dst index

    NASA Astrophysics Data System (ADS)

    Parnowski, Aleksei

    We developed a new approach to the problem of real-time space weather indices forecasting using readily available data from ACE and a number of ground stations. It is based on the regression modelling method [1-3], which combines the benefits of empirical and statistical approaches. Mathematically it is based upon partial regression analysis and Monte Carlo simulations to deduce the empirical relationships in the system. The typical elapsed time per forecast is a few seconds on an average PC. This technique can be easily extended to other indices like AE and Kp. The proposed system can also be useful for investigating physical phenomena related to interactions between the solar wind and the magnetosphere; it has already helped uncover two new geoeffective parameters. 1. Parnowski A.S. Regression modeling method of space weather prediction // Astrophysics Space Science. — 2009. — V. 323, 2. — P. 169-180. doi:10.1007/s10509-009-0060-4 [arXiv:0906.3271] 2. Parnovskiy A.S. Regression Modeling and its Application to the Problem of Prediction of Space Weather // Journal of Automation and Information Sciences. — 2009. — V. 41, 5. — P. 61-69. doi:10.1615/JAutomatInfScien.v41.i5.70 3. Parnowski A.S. Statistically predicting Dst without satellite data // Earth, Planets and Space. — 2009. — V. 61, 5. — P. 621-624.

  7. Fungible Weights in Multiple Regression

    ERIC Educational Resources Information Center

    Waller, Niels G.

    2008-01-01

    Every set of alternate weights (i.e., nonleast squares weights) in a multiple regression analysis with three or more predictors is associated with an infinite class of weights. All members of a given class can be deemed "fungible" because they yield identical "SSE" (sum of squared errors) and R[superscript 2] values. Equations for generating…

  8. Spontaneous regression of breast cancer.

    PubMed

    Lewison, E F

    1976-11-01

    The dramatic but rare regression of a verified case of breast cancer in the absence of adequate, accepted, or conventional treatment has been observed and documented by clinicians over the course of many years. In my practice limited to diseases of the breast, over the past 25 years I have observed 12 patients with a unique and unusual clinical course valid enough to be regarded as spontaneous regression of breast cancer. These 12 patients, with clinically confirmed breast cancer, had temporary arrest or partial remission of their disease in the absence of complete or adequate treatment. In most of these cases, spontaneous regression could not be equated ultimately with permanent cure. Three of these case histories are summarized, and patient characteristics of pertinent clinical interest in the remaining case histories are presented and discussed. Despite widespread doubt and skepticism, there is ample clinical evidence to confirm the fact that spontaneous regression of breast cancer is a rare phenomenon but is real and does occur. PMID:799758

  9. Regression Models of Atlas Appearance

    PubMed Central

    Rohlfing, Torsten; Sullivan, Edith V.; Pfefferbaum, Adolf

    2010-01-01

    Models of object appearance based on principal components analysis provide powerful and versatile tools in computer vision and medical image analysis. A major shortcoming is that they rely entirely on the training data to extract principal modes of appearance variation and ignore underlying variables (e.g., subject age, gender). This paper introduces an appearance modeling framework based instead on generalized multi-linear regression. The training of regression appearance models is controlled by independent variables. This makes it straightforward to create model instances for specific values of these variables, which is akin to model interpolation. We demonstrate the new framework by creating an appearance model of the human brain from MR images of 36 subjects. Instances of the model created for different ages are compared with average shape atlases created from age-matched sub-populations. Relative tissue volumes vs. age in models are also compared with tissue volumes vs. subject age in the original images. In both experiments, we found excellent agreement between the regression models and the comparison data. We conclude that regression appearance models are a promising new technique for image analysis, with one potential application being the representation of a continuum of mutually consistent, age-specific atlases of the human brain. PMID:19694260

  10. Correlation Weights in Multiple Regression

    ERIC Educational Resources Information Center

    Waller, Niels G.; Jones, Jeff A.

    2010-01-01

    A general theory on the use of correlation weights in linear prediction has yet to be proposed. In this paper we take initial steps in developing such a theory by describing the conditions under which correlation weights perform well in population regression models. Using OLS weights as a comparison, we define cases in which the two weighting…

  11. Quantile Regression with Censored Data

    ERIC Educational Resources Information Center

    Lin, Guixian

    2009-01-01

    The Cox proportional hazards model and the accelerated failure time model are frequently used in survival data analysis. They are powerful, yet have limitation due to their model assumptions. Quantile regression offers a semiparametric approach to model data with possible heterogeneity. It is particularly powerful for censored responses, where the…

  13. Ridge Regression for Interactive Models.

    ERIC Educational Resources Information Center

    Tate, Richard L.

    1988-01-01

    An exploratory study of the value of ridge regression for interactive models is reported. Assuming that the linear terms in a simple interactive model are centered to eliminate non-essential multicollinearity, a variety of common models, representing both ordinal and disordinal interactions, are shown to have "orientations" that are favorable to…

  14. Hierarchical Adaptive Regression Kernels for Regression with Functional Predictors

    PubMed Central

    Woodard, Dawn B.; Crainiceanu, Ciprian; Ruppert, David

    2013-01-01

    We propose a new method for regression using a parsimonious and scientifically interpretable representation of functional predictors. Our approach is designed for data that exhibit features such as spikes, dips, and plateaus whose frequency, location, size, and shape varies stochastically across subjects. We propose Bayesian inference of the joint functional and exposure models, and give a method for efficient computation. We contrast our approach with existing state-of-the-art methods for regression with functional predictors, and show that our method is more effective and efficient for data that include features occurring at varying locations. We apply our methodology to a large and complex dataset from the Sleep Heart Health Study, to quantify the association between sleep characteristics and health outcomes. Software and technical appendices are provided in online supplemental materials. PMID:24293988

  15. Poisson-Lie T-duals of the bi-Yang-Baxter models

    NASA Astrophysics Data System (ADS)

    Klimčík, Ctirad

    2016-09-01

    We prove the conjecture of Sfetsos, Siampos and Thompson that suitable analytic continuations of the Poisson-Lie T-duals of the bi-Yang-Baxter sigma models coincide with the recently introduced generalized λ-models. We then generalize this result by showing that the analytic continuation of a generic σ-model of "universal WZW-type" introduced by Tseytlin in 1993 is nothing but the Poisson-Lie T-dual of a generic Poisson-Lie symmetric σ-model introduced by Klimčík and Ševera in 1995.

  16. Universal Negative Poisson Ratio of Self-Avoiding Fixed-Connectivity Membranes

    SciTech Connect

    Bowick, M.; Cacciuto, A.; Thorleifsson, G.; Travesset, A.

    2001-10-01

    We determine the Poisson ratio of self-avoiding fixed-connectivity membranes, modeled as impenetrable plaquettes, to be σ = -0.37(6), in statistical agreement with the Poisson ratio of phantom fixed-connectivity membranes σ = -0.32(4). Together with the equality of critical exponents, this result implies a unique universality class for fixed-connectivity membranes. Our findings thus establish that physical fixed-connectivity membranes provide a wide class of auxetic (negative Poisson ratio) materials with significant potential applications in materials science.

  17. Universal negative poisson ratio of self-avoiding fixed-connectivity membranes.

    PubMed

    Bowick, M; Cacciuto, A; Thorleifsson, G; Travesset, A

    2001-10-01

    We determine the Poisson ratio of self-avoiding fixed-connectivity membranes, modeled as impenetrable plaquettes, to be sigma = -0.37(6), in statistical agreement with the Poisson ratio of phantom fixed-connectivity membranes sigma = -0.32(4). Together with the equality of critical exponents, this result implies a unique universality class for fixed-connectivity membranes. Our findings thus establish that physical fixed-connectivity membranes provide a wide class of auxetic (negative Poisson ratio) materials with significant potential applications in materials science. PMID:11580677

  18. Blow-up conditions for two dimensional modified Euler-Poisson equations

    NASA Astrophysics Data System (ADS)

    Lee, Yongki

    2016-09-01

    The multi-dimensional Euler-Poisson system describes the dynamic behavior of many important physical flows, yet as a hyperbolic system its solution can blow-up for some initial configurations. This article strives to advance our understanding on the critical threshold phenomena through the study of a two-dimensional modified Euler-Poisson system with a modified Riesz transform where the singularity at the origin is removed. We identify upper-thresholds for finite time blow-up of solutions for the modified Euler-Poisson equations with attractive/repulsive forcing.

  19. 3D Regression Heat Map Analysis of Population Study Data.

    PubMed

    Klemm, Paul; Lawonn, Kai; Glaßer, Sylvia; Niemann, Uli; Hegenscheid, Katrin; Völzke, Henry; Preim, Bernhard

    2016-01-01

    Epidemiological studies comprise heterogeneous data about a subject group to define disease-specific risk factors. These data contain information (features) about a subject's lifestyle, medical status as well as medical image data. Statistical regression analysis is used to evaluate these features and to identify feature combinations indicating a disease (the target feature). We propose an analysis approach of epidemiological data sets by incorporating all features in an exhaustive regression-based analysis. This approach combines all independent features w.r.t. a target feature. It provides a visualization that reveals insights into the data by highlighting relationships. The 3D Regression Heat Map, a novel 3D visual encoding, acts as an overview of the whole data set. It shows all combinations of two to three independent features with a specific target disease. Slicing through the 3D Regression Heat Map allows for the detailed analysis of the underlying relationships. Expert knowledge about disease-specific hypotheses can be included into the analysis by adjusting the regression model formulas. Furthermore, the influences of features can be assessed using a difference view comparing different calculation results. We applied our 3D Regression Heat Map method to a hepatic steatosis data set to reproduce results from a data mining-driven analysis. A qualitative analysis was conducted on a breast density data set. We were able to derive new hypotheses about relations between breast density and breast lesions with breast cancer. With the 3D Regression Heat Map, we present a visual overview of epidemiological data that allows for the first time an interactive regression-based analysis of large feature sets with respect to a disease. PMID:26529689
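
    A minimal sketch (synthetic data and statsmodels, not the authors' visualization code) of the exhaustive regression idea underlying the heat map: fit one logistic model per combination of two or three independent features against a binary target and store a goodness measure for each cell.

        from itertools import combinations
        import numpy as np
        import pandas as pd
        import statsmodels.api as sm

        rng = np.random.default_rng(0)
        df = pd.DataFrame(rng.normal(size=(300, 5)), columns=list("abcde"))
        target = (df["a"] + 0.5 * df["c"] + rng.normal(size=300) > 0).astype(int)

        results = {}
        for r in (2, 3):
            for combo in combinations(df.columns, r):
                X = sm.add_constant(df[list(combo)])
                fit = sm.Logit(target, X).fit(disp=0)
                results[combo] = fit.prsquared  # pseudo-R^2 as the cell value

        best = max(results, key=results.get)
        print(best, round(results[best], 3))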

  20. Regression Verification Using Impact Summaries

    NASA Technical Reports Server (NTRS)

    Backes, John; Person, Suzette J.; Rungta, Neha; Thachuk, Oksana

    2013-01-01

    Regression verification techniques are used to prove equivalence of syntactically similar programs. Checking equivalence of large programs, however, can be computationally expensive. Existing regression verification techniques rely on abstraction and decomposition techniques to reduce the computational effort of checking equivalence of the entire program. These techniques are sound but not complete. In this work, we propose a novel approach to improve scalability of regression verification by classifying the program behaviors generated during symbolic execution as either impacted or unimpacted. Our technique uses a combination of static analysis and symbolic execution to generate summaries of impacted program behaviors. The impact summaries are then checked for equivalence using an off-the-shelf decision procedure. We prove that our approach is both sound and complete for sequential programs, with respect to the depth bound of symbolic execution. Our evaluation on a set of sequential C artifacts shows that reducing the size of the summaries can help reduce the cost of software equivalence checking. Various reduction, abstraction, and compositional techniques have been developed to help scale software verification techniques to industrial-sized systems. Although such techniques have greatly increased the size and complexity of systems that can be checked, analysis of large software systems remains costly. Regression analysis techniques, e.g., regression testing [16], regression model checking [22], and regression verification [19], restrict the scope of the analysis by leveraging the differences between program versions. These techniques are based on the idea that if code is checked early in development, then subsequent versions can be checked against a prior (checked) version, leveraging the results of the previous analysis to reduce analysis cost of the current version. Regression verification addresses the problem of proving equivalence of closely related program

  1. 3DGRAPE - THREE DIMENSIONAL GRIDS ABOUT ANYTHING BY POISSON'S EQUATION

    NASA Technical Reports Server (NTRS)

    Sorenson, R. L.

    1994-01-01

    The ability to treat arbitrary boundary shapes is one of the most desirable characteristics of a method for generating grids. 3DGRAPE is designed to make computational grids in or about almost any shape. These grids are generated by the solution of Poisson's differential equations in three dimensions. The program automatically finds its own values for inhomogeneous terms which give near-orthogonality and controlled grid cell height at boundaries. Grids generated by 3DGRAPE have been applied to both viscous and inviscid aerodynamic problems, and to problems in other fluid-dynamic areas. 3DGRAPE uses zones to solve the problem of warping one cube into the physical domain in real-world computational fluid dynamics problems. In a zonal approach, a physical domain is divided into regions, each of which maps into its own computational cube. It is believed that even the most complicated physical region can be divided into zones, and since it is possible to warp a cube into each zone, a grid generator which is oriented to zones and allows communication across zonal boundaries (where appropriate) solves the problem of topological complexity. 3DGRAPE expects to read in already-distributed x,y,z coordinates on the bodies of interest, coordinates which will remain fixed during the entire grid-generation process. The 3DGRAPE code makes no attempt to fit given body shapes and redistribute points thereon. Body-fitting is a formidable problem in itself. The user must either be working with some simple analytical body shape, upon which a simple analytical distribution can be easily effected, or must have available some sophisticated stand-alone body-fitting software. 3DGRAPE does not require the user to supply the block-to-block boundaries nor the shapes of the distribution of points. 3DGRAPE will typically supply those block-to-block boundaries simply as surfaces in the elliptic grid. Thus at block-to-block boundaries the following conditions are obtained: (1) grid lines will

  2. Psychosocial adjustment to ALS: a longitudinal study

    PubMed Central

    Matuz, Tamara; Birbaumer, Niels; Hautzinger, Martin; Kübler, Andrea

    2015-01-01

    For the current study the Lazarian stress-coping theory and the appendant model of psychosocial adjustment to chronic illness and disabilities (Pakenham, 1999) have shaped the foundation for identifying determinants of adjustment to ALS. We aimed to investigate the evolution of psychosocial adjustment to ALS and to determine its long-term predictors. A longitudinal study design with four measurement time points was therefore used to assess patients' quality of life, depression, and stress-coping model related aspects, such as illness characteristics, social support, cognitive appraisals, and coping strategies during a period of 2 years. Regression analyses revealed that 55% of the variance of severity of depressive symptoms and 47% of the variance in quality of life at T2 was accounted for by all the T1 predictor variables taken together. On the level of individual contributions, protective buffering and appraisal of own coping potential accounted for a significant percentage of the variance in severity of depressive symptoms, whereas problem management coping strategies explained variance in quality of life scores. Illness characteristics at T2 did not explain any variance of either adjustment outcome. Overall, the pattern of the longitudinal results indicated stable depressive symptoms and quality of life indices, reflecting a successful adjustment to the disease across the four measurement time points during a period of about two years. Empirical evidence is provided for the predictive value of social support, cognitive appraisals, and coping strategies, but not illness parameters such as severity and duration, for adaptation to ALS. The current study contributes to a better conceptualization of adjustment, allowing us to provide evidence-based support beyond medical and physical intervention for people with ALS. PMID:26441696

  3. Improving phylogenetic regression under complex evolutionary models.

    PubMed

    Mazel, Florent; Davies, T Jonathan; Georges, Damien; Lavergne, Sébastien; Thuiller, Wilfried; Peres-Neto, Pedro R

    2016-02-01

    Phylogenetic Generalized Least Square (PGLS) is the tool of choice among phylogenetic comparative methods to measure the correlation between species features such as morphological and life-history traits or niche characteristics. In its usual form, it assumes that the residual variation follows a homogenous model of evolution across the branches of the phylogenetic tree. Since a homogenous model of evolution is unlikely to be realistic in nature, we explored the robustness of the phylogenetic regression when this assumption is violated. We did so by simulating a set of traits under various heterogeneous models of evolution, and evaluating the statistical performance (type I error [the percentage of tests based on samples that incorrectly rejected a true null hypothesis] and power [the percentage of tests that correctly rejected a false null hypothesis]) of classical phylogenetic regression. We found that PGLS has good power but unacceptable type I error rates. This finding is important since this method has been increasingly used in comparative analyses over the last decade. To address this issue, we propose a simple solution based on transforming the underlying variance-covariance matrix to adjust for model heterogeneity within PGLS. We suggest that heterogeneous rates of evolution might be particularly prevalent in large phylogenetic trees, while most current approaches assume a homogenous rate of evolution. Our analysis demonstrates that overlooking rate heterogeneity can result in inflated type I errors, thus misleading comparative analyses. We show that it is possible to correct for this bias even when the underlying model of evolution is not known a priori. PMID:27145604

  4. Precision adjustable stage

    DOEpatents

    Cutburth, Ronald W.; Silva, Leonard L.

    1988-01-01

    An improved mounting stage of the type used for the detection of laser beams is disclosed. A stage center block is mounted on each of two opposite sides by a pair of spaced ball bearing tracks which provide stability as well as simplicity. The use of the spaced ball bearing pairs in conjunction with an adjustment screw which also provides support eliminates extraneous stabilization components and permits maximization of the area of the center block laser transmission hole.

  5. Adjustable Autonomy Testbed

    NASA Technical Reports Server (NTRS)

    Malin, Jane T.; Schrenkenghost, Debra K.

    2001-01-01

    The Adjustable Autonomy Testbed (AAT) is a simulation-based testbed located in the Intelligent Systems Laboratory in the Automation, Robotics and Simulation Division at NASA Johnson Space Center. The purpose of the testbed is to support evaluation and validation of prototypes of adjustable autonomous agent software for control and fault management for complex systems. The AAT project has developed prototype adjustable autonomous agent software and human interfaces for cooperative fault management. This software builds on current autonomous agent technology by altering the architecture, components and interfaces for effective teamwork between autonomous systems and human experts. Autonomous agents include a planner, flexible executive, low level control and deductive model-based fault isolation. Adjustable autonomy is intended to increase the flexibility and effectiveness of fault management with an autonomous system. The test domain for this work is control of advanced life support systems for habitats for planetary exploration. The CONFIG hybrid discrete event simulation environment provides flexible and dynamically reconfigurable models of the behavior of components and fluids in the life support systems. Both discrete event and continuous (discrete time) simulation are supported, and flows and pressures are computed globally. This provides fast dynamic simulations of interacting hardware systems in closed loops that can be reconfigured during operations scenarios, producing complex cascading effects of operations and failures. Current object-oriented model libraries support modeling of fluid systems, and models have been developed of physico-chemical and biological subsystems for processing advanced life support gases. In FY01, water recovery system models will be developed.

  6. Regression analysis of networked data

    PubMed Central

    Zhou, Yan; Song, Peter X.-K.

    2016-01-01

    This paper concerns regression methodology for assessing relationships between multi-dimensional response variables and covariates that are correlated within a network. To address analytical challenges associated with the integration of network topology into the regression analysis, we propose a hybrid quadratic inference method that uses both prior and data-driven correlations among network nodes. A Godambe information-based tuning strategy is developed to allocate weights between the prior and data-driven network structures, so the estimator is efficient. The proposed method is conceptually simple and computationally fast, and has appealing large-sample properties. It is evaluated by simulation, and its application is illustrated using neuroimaging data from an association study of the effects of iron deficiency on auditory recognition memory in infants. PMID:27279658

  7. Adolescent suicide attempts and adult adjustment

    PubMed Central

    Brière, Frédéric N.; Rohde, Paul; Seeley, John R.; Klein, Daniel; Lewinsohn, Peter M.

    2014-01-01

    Background Adolescent suicide attempts are disproportionally prevalent and frequently of low severity, raising questions regarding their long-term prognostic implications. In this study, we examined whether adolescent attempts were associated with impairments related to suicidality, psychopathology, and psychosocial functioning in adulthood (objective 1) and whether these impairments were better accounted for by concurrent adolescent confounders (objective 2). Method 816 adolescents were assessed using interviews and questionnaires at four time points from adolescence to adulthood. We examined whether lifetime suicide attempts in adolescence (by T2, mean age 17) predicted adult outcomes (by T4, mean age 30) using linear and logistic regressions in unadjusted models (objective 1) and adjusting for sociodemographic background, adolescent psychopathology, and family risk factors (objective 2). Results In unadjusted analyses, adolescent suicide attempts predicted poorer adjustment on all outcomes, except those related to social role status. After adjustment, adolescent attempts remained predictive of axis I and II psychopathology (anxiety disorder, antisocial and borderline personality disorder symptoms), global and social adjustment, risky sex, and psychiatric treatment utilization. However, adolescent attempts no longer predicted most adult outcomes, notably suicide attempts and major depressive disorder. Secondary analyses indicated that associations did not differ by sex and attempt characteristics (intent, lethality, recurrence). Conclusions Adolescent suicide attempters are at high risk of protracted and wide-ranging impairments, regardless of the characteristics of their attempt. Although attempts specifically predict (and possibly influence) several outcomes, results suggest that most impairments reflect the confounding contributions of other individual and family problems or vulnerabilities in adolescent attempters. PMID:25421360

  8. Activity of Excitatory Neuron with Delayed Feedback Stimulated with Poisson Stream is Non-Markov

    NASA Astrophysics Data System (ADS)

    Vidybida, Alexander K.

    2015-09-01

    For a class of excitatory spiking neuron models with delayed feedback fed with a Poisson stochastic process, it is proven that the stream of output interspike intervals cannot be presented as a Markov process of any order.

  9. On bounds in Poisson approximation for distributions of independent negative-binomial distributed random variables.

    PubMed

    Hung, Tran Loc; Giang, Le Truong

    2016-01-01

    Using the Stein-Chen method some upper bounds in Poisson approximation for distributions of row-wise triangular arrays of independent negative-binomial distributed random variables are established in this note. PMID:26844026

  10. Particle trapping: A key requisite of structure formation and stability of Vlasov–Poisson plasmas

    SciTech Connect

    Schamel, Hans

    2015-04-15

    Particle trapping is shown to control the existence of undamped coherent structures in Vlasov–Poisson plasmas and thereby affects the onset of plasma instability beyond the realm of linear Landau theory.

  11. A Hands-on Activity for Teaching the Poisson Distribution Using the Stock Market

    ERIC Educational Resources Information Center

    Dunlap, Mickey; Studstill, Sharyn

    2014-01-01

    The number of increases a particular stock makes over a fixed period follows a Poisson distribution. This article discusses using this easily-found data as an opportunity to let students become involved in the data collection and analysis process.
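
    A minimal classroom-style sketch in Python (the weekly counts below are invented, not market data): estimate the Poisson rate from the counts of increases and compare observed frequencies with the fitted pmf.

        import numpy as np
        from scipy import stats

        ups_per_week = np.array([2, 3, 1, 4, 2, 3, 3, 2, 5, 1, 2, 3, 4, 2, 3])
        lam = ups_per_week.mean()  # maximum likelihood estimate of the rate

        for k in range(6):
            obs = np.mean(ups_per_week == k)
            exp = stats.poisson.pmf(k, lam)
            print(f"k={k}: observed {obs:.2f}, Poisson({lam:.2f}) predicts {exp:.2f}")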

  12. Accurate Young's modulus measurement based on Rayleigh wave velocity and empirical Poisson's ratio

    NASA Astrophysics Data System (ADS)

    Li, Mingxia; Feng, Zhihua

    2016-07-01

    This paper presents a method for Young's modulus measurement based on Rayleigh wave speed. The error in Poisson's ratio has a weak influence on the measurement of Young's modulus based on Rayleigh wave speed, and Poisson's ratio varies minimally within a given material; thus, we can accurately estimate Young's modulus from the surface wave speed and a rough Poisson's ratio. We numerically analysed three methods using Rayleigh, longitudinal, and transversal wave speeds, respectively, and the error in Poisson's ratio shows the least influence on the result in the method involving Rayleigh wave speed. An experiment was performed and proved the feasibility of this method. The device for measuring speed can be small, and no sample pretreatment is needed. Hence, developing a portable instrument based on this method is possible. This method makes a good compromise between usability and precision.
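
    A minimal sketch of the computation described (not the authors' exact procedure): invert a common Rayleigh-speed approximation, c_R ≈ c_S (0.87 + 1.12 ν) / (1 + ν), to get the shear wave speed, then Young's modulus via E = 2 rho c_S^2 (1 + ν). The numbers are illustrative steel-like values.

        def young_from_rayleigh(c_r, rho, nu):
            """Estimate Young's modulus from Rayleigh speed and empirical nu."""
            c_s = c_r * (1.0 + nu) / (0.87 + 1.12 * nu)  # invert the approximation
            g = rho * c_s ** 2                           # shear modulus G = rho c_s^2
            return 2.0 * g * (1.0 + nu)                  # E = 2 G (1 + nu)

        # c_R ~ 2950 m/s, rho = 7850 kg/m^3, nu ~ 0.29 gives E near 200 GPa
        print(f"E = {young_from_rayleigh(2950.0, 7850.0, 0.29) / 1e9:.0f} GPa")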

  13. Efficiency optimization of a fast Poisson solver in beam dynamics simulation

    NASA Astrophysics Data System (ADS)

    Zheng, Dawei; Pöplau, Gisela; van Rienen, Ursula

    2016-01-01

    Calculating the solution of Poisson's equation for the space charge force is still the major time consumer in beam dynamics simulations and calls for further improvement. In this paper, we summarize a classical fast Poisson solver in beam dynamics simulations: the integrated Green's function method. We introduce three optimization steps for the classical Poisson solver routine: using the reduced integrated Green's function instead of the integrated Green's function; using the discrete cosine transform instead of the discrete Fourier transform for the Green's function; and using a novel fast convolution routine instead of an explicitly zero-padded convolution. The new Poisson solver routine preserves the advantages of fast computation and high accuracy. This provides a fast routine for high performance calculation of the space charge effect in accelerators.
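
    For orientation, here is a minimal Python sketch of the baseline the paper optimizes: a free-space 2D Poisson solve by zero-padded FFT convolution with the Green's function (scipy's fftconvolve performs the zero-padded convolution internally; the singular self-cell is crudely regularized here, which the integrated Green's function treats properly).

        import numpy as np
        from scipy.signal import fftconvolve

        n, h = 128, 1.0 / 128                    # grid size and spacing
        x = (np.arange(n) - n // 2) * h
        X, Y = np.meshgrid(x, x, indexing="ij")

        rho = np.exp(-(X**2 + Y**2) / 0.01)      # smooth test charge density

        r = np.hypot(X, Y)
        r[r == 0] = 0.5 * h                      # regularize the singular cell
        G = -np.log(r) / (2.0 * np.pi)           # 2D kernel for lap(phi) = -rho

        phi = fftconvolve(rho, G, mode="same") * h * h  # discrete convolution sum
        print(phi.shape, float(phi.max()))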

  14. Global Existence for the Vlasov-Poisson System in Bounded Domains

    NASA Astrophysics Data System (ADS)

    Hwang, Hyung Ju; Velázquez, Juan J. L.

    2010-03-01

    In this paper we prove global existence for solutions of the Vlasov-Poisson system in convex bounded domains with specular boundary conditions and with a prescribed outward electrical field at the boundary.

  15. Mediating Effects of Relationships with Mentors on College Adjustment

    ERIC Educational Resources Information Center

    Lenz, A. Stephen

    2014-01-01

    This study examined the relationship between student adjustment to college and relational health with peers, mentors, and the community. Data were collected from 80 undergraduate students completing their first semester of course work at a large university in the mid-South. A series of simultaneous multiple regression analyses indicated that…

  16. The Impact of Financial Sophistication on Adjustable Rate Mortgage Ownership

    ERIC Educational Resources Information Center

    Smith, Hyrum; Finke, Michael S.; Huston, Sandra J.

    2011-01-01

    The influence of a financial sophistication scale on adjustable-rate mortgage (ARM) borrowing is explored. Descriptive statistics and regression analysis using recent data from the Survey of Consumer Finances reveal that ARM borrowing is driven by both the least and most financially sophisticated households but for different reasons. Less…

  17. Effects of Relational Authenticity on Adjustment to College

    ERIC Educational Resources Information Center

    Lenz, A. Stephen; Holman, Rachel L.; Lancaster, Chloe; Gotay, Stephanie G.

    2016-01-01

    The authors examined the association between relational health and student adjustment to college. Data were collected from 138 undergraduate students completing their 1st semester at a large university in the mid-southern United States. Regression analysis indicated that higher levels of relational authenticity were a predictor of success during…

  18. On deformations of one-dimensional Poisson structures of hydrodynamic type with degenerate metric

    NASA Astrophysics Data System (ADS)

    Savoldi, Andrea

    2016-06-01

    We provide a complete list of two- and three-component Poisson structures of hydrodynamic type with degenerate metric, and study their homogeneous deformations. In the non-degenerate case any such deformation is trivial, that is, can be obtained via Miura transformations. We demonstrate that in the degenerate case this class of deformations is non-trivial, and depends on a certain number of arbitrary functions. This shows that the second Poisson-Lichnerowicz cohomology group does not vanish.

  19. Young's moduli and Poisson's ratios of curvilinear anisotropic hexagonal and rhombohedral nanotubes. Nanotubes-auxetics

    NASA Astrophysics Data System (ADS)

    Goldstein, R. V.; Gorodtsov, V. A.; Lisovenko, D. S.

    2013-09-01

    The study of materials with unusual mechanical properties has attracted a lot of attention in view of new possibilities for their application. One of these properties is a negative Poisson's ratio, which is commonly found in crystalline materials (materials with linear anisotropy). However, until now the capabilities of negative Poisson's ratios in tubular crystals (materials with curvilinear anisotropy), e.g., in today's popular nanotubes, have not been studied.

  20. Quality Reporting of Multivariable Regression Models in Observational Studies

    PubMed Central

    Real, Jordi; Forné, Carles; Roso-Llorach, Albert; Martínez-Sánchez, Jose M.

    2016-01-01

    Controlling for confounders is a crucial step in analytical observational studies, and multivariable models are widely used as statistical adjustment techniques. However, the validation of the assumptions of the multivariable regression models (MRMs) should be made clear in scientific reporting. The objective of this study is to review the quality of statistical reporting of the most commonly used MRMs (logistic, linear, and Cox regression) that were applied in analytical observational studies published between 2003 and 2014 by journals indexed in MEDLINE. We reviewed a representative sample of articles indexed in MEDLINE (n = 428) with observational design and use of MRMs (logistic, linear, and Cox regression). We assessed the quality of reporting of: model assumptions and goodness-of-fit, interactions, sensitivity analysis, crude and adjusted effect estimates, and specification of more than one adjusted model. The tests of underlying assumptions or goodness-of-fit of the MRMs used were described in 26.2% (95% CI: 22.0–30.3) of the articles, and 18.5% (95% CI: 14.8–22.1) reported the interaction analysis. Reporting of all items assessed was higher in articles published in journals with a higher impact factor. A low percentage of articles indexed in MEDLINE that used multivariable techniques provided information demonstrating rigorous application of the model selected as an adjustment method. Given the importance of these methods to the final results and conclusions of observational studies, greater rigor is required in reporting the use of MRMs in the scientific literature. PMID:27196467

  1. Modeling motor vehicle crashes using Poisson-gamma models: examining the effects of low sample mean values and small sample size on the estimation of the fixed dispersion parameter.

    PubMed

    Lord, Dominique

    2006-07-01

    There has been considerable research conducted on the development of statistical models for predicting crashes on highway facilities. Despite numerous advancements made for improving the estimation tools of statistical models, the most common probabilistic structure used for modeling motor vehicle crashes remains the traditional Poisson and Poisson-gamma (or Negative Binomial) distribution; when crash data exhibit over-dispersion, the Poisson-gamma model is usually the model of choice most favored by transportation safety modelers. Crash data collected for safety studies often have the unusual attributes of being characterized by low sample mean values. Studies have shown that the goodness-of-fit of statistical models produced from such datasets can be significantly affected. This issue has been defined as the "low mean problem" (LMP). Despite recent developments on methods to circumvent the LMP and test the goodness-of-fit of models developed using such datasets, no work has so far examined how the LMP affects the fixed dispersion parameter of Poisson-gamma models used for modeling motor vehicle crashes. The dispersion parameter plays an important role in many types of safety studies and should, therefore, be reliably estimated. The primary objective of this research project was to verify whether the LMP affects the estimation of the dispersion parameter and, if so, to determine the magnitude of the problem. The secondary objective consisted of determining the effects of an unreliably estimated dispersion parameter on common analyses performed in highway safety studies. To accomplish the objectives of the study, a series of Poisson-gamma distributions were simulated using different values describing the mean, the dispersion parameter, and the sample size. Three estimators commonly used by transportation safety modelers for estimating the dispersion parameter of Poisson-gamma models were evaluated: the method of moments, the weighted regression, and the maximum
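
    The instability the abstract studies is easy to reproduce. A minimal Python sketch (illustrative parameter values): simulate Poisson-gamma counts with a low mean and a small sample, then estimate the dispersion parameter alpha (Var = mu + alpha mu^2) by the method of moments.

        import numpy as np

        rng = np.random.default_rng(42)
        mu, alpha, n_sites, reps = 0.5, 1.0, 50, 2000  # low mean, small samples

        est = []
        for _ in range(reps):
            lam = rng.gamma(shape=1.0 / alpha, scale=alpha * mu, size=n_sites)
            y = rng.poisson(lam)                 # Poisson-gamma (neg. binomial)
            m, v = y.mean(), y.var(ddof=1)
            est.append((v - m) / m**2 if m > 0 else np.nan)

        est = np.array(est)
        print(f"median alpha-hat = {np.nanmedian(est):.2f}, "
              f"share with alpha-hat <= 0: {np.nanmean(est <= 0):.2f}")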

  2. On the Determination of Poisson Statistics for Haystack Radar Observations of Orbital Debris

    NASA Technical Reports Server (NTRS)

    Stokely, Christopher L.; Benbrook, James R.; Horstman, Matt

    2007-01-01

    A convenient and powerful method is used to determine if radar detections of orbital debris are observed according to Poisson statistics. This is done by analyzing the time interval between detection events. For Poisson statistics, the probability distribution of the time interval between events is shown to be an exponential distribution. This distribution is a special case of the Erlang distribution that is used in estimating traffic loads on telecommunication networks. Poisson statistics form the basis of many orbital debris models but the statistical basis of these models has not been clearly demonstrated empirically until now. Interestingly, during the fiscal year 2003 observations with the Haystack radar in a fixed staring mode, there are no statistically significant deviations observed from that expected with Poisson statistics, either independent or dependent of altitude or inclination. One would potentially expect some significant clustering of events in time as a result of satellite breakups, but the presence of Poisson statistics indicates that such debris disperse rapidly with respect to Haystack's very narrow radar beam. An exception to Poisson statistics is observed in the months following the intentional breakup of the Fengyun satellite in January 2007.
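
    A minimal sketch of the interval test described (simulated detection epochs stand in for the radar data): under a Poisson process the gaps between events are exponential, which can be screened with a Kolmogorov-Smirnov statistic. Note that estimating the scale from the same data makes the plain KS p-value only approximate; a Lilliefors-style correction or a bootstrap would be more rigorous.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(7)
        event_times = np.cumsum(rng.exponential(scale=120.0, size=400))  # seconds

        gaps = np.diff(event_times)
        D, p = stats.kstest(gaps, "expon", args=(0.0, gaps.mean()))
        print(f"KS D = {D:.3f}, approximate p = {p:.3f}")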

  3. Regression Discontinuity Design: Simulation and Application in Two Cardiovascular Trials with Continuous Outcomes.

    PubMed

    van Leeuwen, Nikki; Lingsma, Hester F; de Craen, Anton J M; Nieboer, Daan; Mooijaart, Simon P; Richard, Edo; Steyerberg, Ewout W

    2016-07-01

    In epidemiology, the regression discontinuity design has received increasing attention recently and might be an alternative to randomized controlled trials (RCTs) to evaluate treatment effects. In regression discontinuity, treatment is assigned above a certain threshold of an assignment variable for which the treatment effect is adjusted in the analysis. We performed simulations and a validation study in which we used treatment effect estimates from an RCT as the reference for a prospectively performed regression discontinuity study. We estimated the treatment effect using linear regression adjusting for the assignment variable both as linear terms and restricted cubic spline and using local linear regression models. In the first validation study, the estimated treatment effect from a cardiovascular RCT was -4.0 mmHg blood pressure (95% confidence interval: -5.4, -2.6) at 2 years after inclusion. The estimated effect in regression discontinuity was -5.9 mmHg (95% confidence interval: -10.8, -1.0) with restricted cubic spline adjustment. Regression discontinuity showed different, local effects when analyzed with local linear regression. In the second RCT, regression discontinuity treatment effect estimates on total cholesterol level at 3 months after inclusion were similar to RCT estimates, but at least six times less precise. In conclusion, regression discontinuity may provide similar estimates of treatment effects to RCT estimates, but requires the assumption of a global treatment effect over the range of the assignment variable. In addition to a risk of bias due to wrong assumptions, researchers need to weigh better recruitment against the substantial loss in precision when considering a study with regression discontinuity versus RCT design. PMID:27031038
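
    A minimal sketch of the adjustment described, on simulated data (not the trial data): assign treatment above a threshold of the assignment variable and read the effect off the treatment indicator after a linear adjustment.

        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(3)
        assign = rng.normal(160.0, 15.0, size=1000)   # assignment variable
        treated = (assign > 160.0).astype(float)      # deterministic threshold rule
        outcome = 0.3 * assign - 4.0 * treated + rng.normal(0.0, 5.0, size=1000)

        X = sm.add_constant(np.column_stack([treated, assign]))
        fit = sm.OLS(outcome, X).fit()
        print(f"estimated treatment effect: {fit.params[1]:.2f} (true -4.0)")

    Replacing the linear term with a restricted cubic spline or a local linear fit near the threshold gives the paper's alternative specifications.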

  4. Hydrodynamic limit of Wigner-Poisson kinetic theory: Revisited

    SciTech Connect

    Akbari-Moghanjoughi, M.

    2015-02-15

    In this paper, we revisit the hydrodynamic limit of the Langmuir wave dispersion relation based on the Wigner-Poisson model in connection with that obtained directly from the original Lindhard dielectric function based on the random-phase-approximation. It is observed that the (fourth-order) expansion of the exact Lindhard dielectric constant correctly reduces to the hydrodynamic dispersion relation with an additional term of fourth-order, beside that caused by the quantum diffraction effect. It is also revealed that the generalized Lindhard dielectric theory accounts for the recently discovered Shukla-Eliasson attractive potential (SEAP). However, the expansion of the exact Lindhard static dielectric function leads to a k^4 term of different magnitude than that obtained from the linearized quantum hydrodynamics model. It is shown that a correction factor of 1/9 should be included in the term arising from the quantum Bohm potential of the momentum balance equation in the fluid model in order for a correct plasma dielectric response treatment. Finally, it is observed that the long-range oscillatory screening potential (Friedel oscillations) of type cos(2k_F r)/r^3, which is a consequence of the divergence of the dielectric function at the point k = 2k_F in a quantum plasma, arises due to the finiteness of the Fermi wavenumber and is smeared out in the limit of very high electron number-densities, typical of white dwarfs and neutron stars. In the very low electron number-density regime, typical of semiconductors and metals, where the Friedel oscillation wavelength becomes much larger compared to the interparticle distances, the SEAP appears with a much deeper potential valley. It is remarked that the fourth-order approximate Lindhard dielectric constant approaches that of the linearized quantum hydrodynamics in the limit of very high electron number-density. By evaluation of the imaginary part of the Lindhard dielectric function, it is shown that the

  5. Multilevel Methods for the Poisson-Boltzmann Equation

    NASA Astrophysics Data System (ADS)

    Holst, Michael Jay

    We consider the numerical solution of the Poisson-Boltzmann equation (PBE), a three-dimensional second order nonlinear elliptic partial differential equation arising in biophysics. This problem has several interesting features impacting numerical algorithms, including discontinuous coefficients representing material interfaces, rapid nonlinearities, and three spatial dimensions. Similar equations occur in various applications, including nuclear physics, semiconductor physics, population genetics, astrophysics, and combustion. In this thesis, we study the PBE, discretizations, and develop multilevel-based methods for approximating the solutions of these types of equations. We first outline the physical model and derive the PBE, which describes the electrostatic potential of a large complex biomolecule lying in a solvent. We next study the theoretical properties of the linearized and nonlinear PBE using standard function space methods; since this equation has not been previously studied theoretically, we provide existence and uniqueness proofs in both the linearized and nonlinear cases. We also analyze box-method discretizations of the PBE, establishing several properties of the discrete equations which are produced. In particular, we show that the discrete nonlinear problem is well-posed. We study and develop linear multilevel methods for interface problems, based on algebraic enforcement of Galerkin or variational conditions, and on coefficient averaging procedures. Using a stencil calculus, we show that in certain simplified cases the two approaches are equivalent, with different averaging procedures corresponding to different prolongation operators. We also develop methods for nonlinear problems based on a nonlinear multilevel method, and on linear multilevel methods combined with a globally convergent damped-inexact-Newton method. We derive a necessary and sufficient descent condition for the inexact-Newton direction, enabling the development of extremely

  6. Heteroscedastic transformation cure regression models.

    PubMed

    Chen, Chyong-Mei; Chen, Chen-Hsin

    2016-06-30

    Cure models have been applied to analyze clinical trials with cures and age-at-onset studies with nonsusceptibility. Lu and Ying (On semiparametric transformation cure model. Biometrika 2004; 91:331-343. DOI: 10.1093/biomet/91.2.331) developed a general class of semiparametric transformation cure models, which assumes that the failure times of uncured subjects, after an unknown monotone transformation, follow a regression model with homoscedastic residuals. However, it cannot deal with frequently encountered heteroscedasticity, which may result from dispersed ranges of failure time span among uncured subjects' strata. To tackle the phenomenon, this article presents semiparametric heteroscedastic transformation cure models. The cure status and the failure time of an uncured subject are fitted by a logistic regression model and a heteroscedastic transformation model, respectively. Unlike the approach of Lu and Ying, we derive score equations from the full likelihood for estimating the regression parameters in the proposed model. The similar martingale difference function to their proposal is used to estimate the infinite-dimensional transformation function. Our proposed estimating approach is intuitively applicable and can be conveniently extended to other complicated models when the maximization of the likelihood may be too tedious to be implemented. We conduct simulation studies to validate large-sample properties of the proposed estimators and to compare with the approach of Lu and Ying via the relative efficiency. The estimating method and the two relevant goodness-of-fit graphical procedures are illustrated by using breast cancer data and melanoma data. Copyright © 2016 John Wiley & Sons, Ltd. PMID:26887342

  7. Regression analysis of cytopathological data

    SciTech Connect

    Whittemore, A.S.; McLarty, J.W.; Fortson, N.; Anderson, K.

    1982-12-01

    Epithelial cells from the human body are frequently labelled according to one of several ordered levels of abnormality, ranging from normal to malignant. The label of the most abnormal cell in a specimen determines the score for the specimen. This paper presents a model for the regression of specimen scores against continuous and discrete variables, such as host exposure to carcinogens. Application to data and tests for adequacy of model fit are illustrated using sputum specimens obtained from a cohort of former asbestos workers.

  8. Continuously adjustable Pulfrich spectacles

    NASA Astrophysics Data System (ADS)

    Jacobs, Ken; Karpf, Ron

    2011-03-01

    A number of Pulfrich 3-D movies and TV shows have been produced, but the standard implementation has inherent drawbacks. The movie and TV industries have correctly concluded that the standard Pulfrich 3-D implementation is not a useful 3-D technique. Continuously Adjustable Pulfrich Spectacles (CAPS) is a new implementation of the Pulfrich effect that allows any scene containing movement in a standard 2-D movie, which is most scenes, to be optionally viewed in 3-D using inexpensive viewing specs. Recent scientific results in the fields of human perception, optoelectronics, video compression and video format conversion are translated into a new implementation of Pulfrich 3-D. CAPS uses these results to continuously adjust to the movie so that the viewing spectacles always conform to the optical density that optimizes the Pulfrich stereoscopic illusion. CAPS instantly provides 3-D immersion to any moving scene in any 2-D movie. Without the glasses, the movie will appear as a normal 2-D image. CAPS works on any viewing device and with any distribution medium. CAPS is appropriate for viewing Internet-streamed movies in 3-D.

  9. Subsea adjustable choke valves

    SciTech Connect

    Cyvas, M.K.

    1989-08-01

    With emphasis on deepwater wells and marginal offshore fields growing, the search for reliable subsea production systems has become a high priority. A reliable subsea adjustable choke is essential to the realization of such a system, and recent advances are producing the degree of reliability required. Technological developments have been primarily in (1) trim material (including polycrystalline diamond), (2) trim configuration, (3) computer programs for trim sizing, (4) component materials, and (5) diver/remote-operated-vehicle (ROV) interfaces. These five facets are overviewed and progress to date is reported. A 15- to 20-year service life for adjustable subsea chokes is now a reality. Another factor vital to efficient use of these technological developments is to involve the choke manufacturer and ROV/diver personnel in initial system conceptualization. In this manner, maximum benefit can be derived from the latest technology. Major areas of development still required and under way are listed, and the paper closes with a tabulation of successful subsea choke installations in recent years.

  10. Multiatlas segmentation as nonparametric regression.

    PubMed

    Awate, Suyash P; Whitaker, Ross T

    2014-09-01

    This paper proposes a novel theoretical framework to model and analyze the statistical characteristics of a wide range of segmentation methods that incorporate a database of label maps or atlases; such methods are termed label fusion or multiatlas segmentation. We model these multiatlas segmentation problems as nonparametric regression problems in the high-dimensional space of image patches. We analyze the nonparametric estimator's convergence behavior, which characterizes expected segmentation error as a function of the size of the multiatlas database. We show that this error has an analytic form involving several parameters that are fundamental to the specific segmentation problem (determined by the chosen anatomical structure, imaging modality, registration algorithm, and label-fusion algorithm). We describe how to estimate these parameters and show that several human anatomical structures exhibit the trends modeled analytically. We use these parameter estimates to optimize the regression estimator. We show that the expected error for large database sizes is well predicted by models learned on small databases. Thus, a few expert segmentations can help predict the database sizes required to keep the expected error below a specified tolerance level. Such cost-benefit analysis is crucial for deploying clinical multiatlas segmentation systems. PMID:24802528

  11. On the validity of the Poisson assumption in sampling nanometer-sized aerosols

    SciTech Connect

    Damit, Brian E; Wu, Dr. Chang-Yu; Cheng, Mengdawn

    2014-01-01

    A Poisson process is traditionally believed to apply to the sampling of aerosols. For a constant aerosol concentration, it is assumed that a Poisson process describes the fluctuation in the measured concentration because aerosols are stochastically distributed in space. Recent studies, however, have shown that sampling of micrometer-sized aerosols has non-Poissonian behavior with positive correlations. The validity of the Poisson assumption for nanometer-sized aerosols has not been examined and thus was tested in this study. Its validity was tested for four particle sizes - 10 nm, 25 nm, 50 nm and 100 nm - by sampling from indoor air with a DMA-CPC setup to obtain a time series of particle counts. Five metrics were calculated from the data: pair-correlation function (PCF), time-averaged PCF, coefficient of variation, probability of measuring a concentration at least 25% greater than average, and posterior distributions from Bayesian inference. To identify departures from Poissonian behavior, these metrics were also calculated for 1,000 computer-generated Poisson time series with the same mean as the experimental data. For nearly all comparisons, the experimental data fell within the range of 80% of the Poisson-simulation values. Essentially, the metrics for the experimental data were indistinguishable from a simulated Poisson process. The greater influence of Brownian motion for nanometer-sized aerosols may explain the Poissonian behavior observed for smaller aerosols. Although the Poisson assumption was found to be valid in this study, it must be carefully applied as the results here do not definitively prove applicability in all sampling situations.
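
    The comparison strategy of the study, computing a metric on the measured counts and locating it within the distribution of that metric over simulated Poisson series of equal mean, can be sketched for one of the five metrics (the coefficient of variation). This is a minimal illustration, not the authors' analysis pipeline.
```python
import numpy as np

rng = np.random.default_rng(0)

def poisson_reference_band(counts, n_sim=1000, coverage=0.80):
    """Locate the observed coefficient of variation (CV) of a count time
    series within the central 80% band of CVs from simulated Poisson
    series with the same mean."""
    counts = np.asarray(counts, dtype=float)
    cv_obs = counts.std(ddof=1) / counts.mean()
    sims = rng.poisson(counts.mean(), size=(n_sim, counts.size))
    cv_sim = sims.std(axis=1, ddof=1) / sims.mean(axis=1)
    lo, hi = np.quantile(cv_sim, [(1 - coverage) / 2, (1 + coverage) / 2])
    return cv_obs, (lo, hi)

# A genuinely Poisson series should fall inside the band.
print(poisson_reference_band(rng.poisson(12.0, size=600)))
```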

  12. 77 FR 40387 - Price Adjustment

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-07-09

    ... Price Adjustment AGENCY: Postal Regulatory Commission. ACTION: Notice. SUMMARY: The Commission is noticing a recently filed Postal Service request to adjust prices for several market dominant products... announcing its intent to adjust prices for several market dominant products within First-Class Mail...

  13. Practical Session: Multiple Linear Regression

    NASA Astrophysics Data System (ADS)

    Clausel, M.; Grégoire, G.

    2014-12-01

    Three exercises are proposed to illustrate linear regression. The first investigates the influence of several factors on atmospheric pollution; it was proposed by D. Chessel and A.B. Dufour at Lyon 1 (see Sect. 6 of http://pbil.univ-lyon1.fr/R/pdf/tdr33.pdf) and is based on data from 20 U.S. cities. Exercise 2 is an introduction to model selection, whereas Exercise 3 provides a first example of analysis of variance. Exercises 2 and 3 were proposed by A. Dalalyan at ENPC (see Exercises 2 and 3 of http://certis.enpc.fr/~dalalyan/Download/TP_ENPC_5.pdf).
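
    For readers without access to the linked exercise sheets, a multiple linear regression of the kind used in Exercise 1 can be run in a few lines. The data below are synthetic stand-ins, not the actual pollution measurements from the 20 U.S. cities.
```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 20                                        # 20 cities
temp = rng.normal(12, 4, n)                   # hypothetical predictors
industry = rng.normal(450, 120, n)
so2 = 30 - 1.5 * temp + 0.04 * industry + rng.normal(0, 4, n)

X = sm.add_constant(np.column_stack([temp, industry]))
fit = sm.OLS(so2, X).fit()
print(fit.summary())                          # coefficients, R^2, F-test
```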

  14. Adjustment of selection index coefficients and polygenic variance to improve regressions and reliability of genomic evaluations

    Technology Transfer Automated Retrieval System (TEKTRAN)

    In multi-step genomic evaluations, direct genomic values (DGV) are computed using either marker effects or genomic relationships among the genotyped animals, and information from non-genotyped ancestors is included later by selection index. The DGV, the traditional evaluation (EBV), and a subset bre...

  15. A flexible count data regression model for risk analysis.

    PubMed

    Guikema, Seth D; Coffelt, Jeremy P

    2008-02-01

    In many cases, risk and reliability analyses involve estimating the probabilities of discrete events such as hardware failures and occurrences of disease or death. There is often additional information in the form of explanatory variables that can be used to help estimate the likelihood of different numbers of events in the future through the use of an appropriate regression model, such as a generalized linear model. However, existing generalized linear models (GLM) are limited in their ability to handle the types of variance structures often encountered in using count data in risk and reliability analysis. In particular, standard models cannot handle both underdispersed data (variance less than the mean) and overdispersed data (variance greater than the mean) in a single coherent modeling framework. This article presents a new GLM based on a reformulation of the Conway-Maxwell Poisson (COM) distribution that is useful for both underdispersed and overdispersed count data and demonstrates this model by applying it to the assessment of electric power system reliability. The results show that the proposed COM GLM can provide fits to data as good as those of the commonly used existing models for overdispersed data sets, while outperforming these commonly used models for underdispersed data sets. PMID:18304118
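
    The COM-Poisson distribution underlying the proposed GLM has probability mass proportional to λ^y/(y!)^ν, where ν < 1 yields overdispersion, ν > 1 underdispersion, and ν = 1 recovers the Poisson. A minimal sketch of the pmf with a truncated normalizing constant follows; the article itself works with a reparameterized GLM form, which is not reproduced here.
```python
import numpy as np
from scipy.special import gammaln

def com_poisson_pmf(y, lam, nu, y_max=200):
    """COM-Poisson pmf, P(Y = y) proportional to lam**y / (y!)**nu,
    with the normalizing constant truncated at y_max."""
    ys = np.arange(y_max + 1)
    log_Z = np.logaddexp.reduce(ys * np.log(lam) - nu * gammaln(ys + 1))
    return np.exp(y * np.log(lam) - nu * gammaln(y + 1) - log_Z)

# Underdispersion demo (nu > 1): variance well below the mean.
ys = np.arange(60)
p = com_poisson_pmf(ys, lam=10.0, nu=2.0)
mean = (ys * p).sum()
print(mean, ((ys - mean) ** 2 * p).sum())   # variance < mean
```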

  16. Ultra-soft 100 nm thick zero Poisson's ratio film with 60% reversible compressibility

    NASA Astrophysics Data System (ADS)

    Nguyen, Chieu; Szalewski, Steve; Saraf, Ravi

    2013-03-01

    Squeezing films of most solids, liquids and granular materials causes dilation in the lateral dimension, which is characterized by a positive Poisson's ratio. Auxetic materials, such as special foams, crumpled graphite, zeolites, spectrin/actin membranes, and carbon nanotube laminates, shrink instead; i.e., their Poisson's ratio is negative. As a result of the Poisson effect, the force needed to squeeze an amorphous material, such as a viscous thin-film coating adhered to a rigid surface, increases by over a million-fold as the thickness decreases from 10 μm to 100 nm, due to the constraint on lateral deformations and off-plane relaxation. We demonstrate ultra-soft, 100 nm films of polymer/nanoparticle composite adhered to 1.25 cm diameter glass that can be reversibly squeezed to over 60% strain between rigid plates, requiring very low stresses below 100 kPa. Unlike in non-zero Poisson's ratio materials, stiffness decreases with thickness, and the stress distribution is uniform over the film, as mapped electro-optically. The high deformability at very low stresses is explained by considering the reentrant cellular structure found in cork and the wings of beetles, which have a Poisson's ratio near zero.

  17. A Novel Method for the Accurate Evaluation of Poisson's Ratio of Soft Polymer Materials

    PubMed Central

    Lee, Jae-Hoon; Lee, Sang-Soo; Chang, Jun-Dong; Thompson, Mark S.; Kang, Dong-Joong; Park, Sungchan

    2013-01-01

    A new method with a simple algorithm was developed to accurately measure Poisson's ratio of soft materials such as polyvinyl alcohol hydrogel (PVA-H) with a custom experimental apparatus consisting of a tension device, a micro X-Y stage, an optical microscope, and a charge-coupled device camera. In the proposed method, the initial positions of the four vertices of an arbitrarily selected quadrilateral from the sample surface were first measured to generate a 2D 1st-order 4-node quadrilateral element for finite element numerical analysis. Next, minimum and maximum principal strains were calculated from differences between the initial and deformed shapes of the quadrilateral under tension. Finally, Poisson's ratio of PVA-H was determined by the ratio of minimum principal strain to maximum principal strain. This novel method has an advantage in the accurate evaluation of Poisson's ratio despite misalignment between specimens and experimental devices. In this study, Poisson's ratio of PVA-H was 0.44 ± 0.025 (n = 6) for 2.6–47.0% elongations, with a tendency to decrease with increasing elongation. The current evaluation method of Poisson's ratio with a simple measurement system can be incorporated into a real-time automated vision-tracking system to accurately evaluate the material properties of various soft materials. PMID:23737733

  18. Species abundance in a forest community in South China: A case of poisson lognormal distribution

    USGS Publications Warehouse

    Yin, Z.-Y.; Ren, H.; Zhang, Q.-M.; Peng, S.-L.; Guo, Q.-F.; Zhou, G.-Y.

    2005-01-01

    Case studies on the Poisson lognormal distribution of species abundance have been rare, especially in forest communities. We propose a numerical method to fit the Poisson lognormal to the species abundance data at an evergreen mixed forest in the Dinghushan Biosphere Reserve, South China. Plants in the tree, shrub and herb layers in 25 quadrats of 20 m × 20 m, 5 m × 5 m, and 1 m × 1 m were surveyed. Results indicated that: (i) for each layer, the observed species abundance with a similarly small median, mode, and a variance larger than the mean was reverse J-shaped and followed well the zero-truncated Poisson lognormal; (ii) the coefficient of variation, skewness and kurtosis of abundance, and the two Poisson lognormal parameters (μ and σ) for the shrub layer were closer to those for the herb layer than to those for the tree layer; and (iii) from the tree to the shrub to the herb layer, σ and the coefficient of variation decreased, whereas diversity increased. We suggest that: (i) the species abundance distributions in the three layers reflect the overall community characteristics; (ii) the Poisson lognormal can describe the species abundance distribution in diverse communities with a few abundant species but many rare species; and (iii) 1/σ should be an alternative measure of diversity.
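
    A Poisson lognormal pmf of the kind fitted here can be evaluated numerically by integrating the Poisson likelihood over a lognormal intensity; the zero-truncated version divides out the probability of the unobservable zero class. Below is a sketch using Gauss-Hermite quadrature; the authors' own numerical method is not reproduced here.
```python
import numpy as np
from scipy.special import gammaln

def poisson_lognormal_pmf(k, mu, sigma, n_grid=100):
    """Poisson-lognormal pmf: a Poisson count whose intensity is
    lognormal(mu, sigma), integrated out by Gauss-Hermite quadrature."""
    x, w = np.polynomial.hermite_e.hermegauss(n_grid)   # N(0,1) nodes/weights
    lam = np.exp(mu + sigma * x)                        # lognormal intensities
    k = np.atleast_1d(k)[:, None]
    log_pois = k * np.log(lam) - lam - gammaln(k + 1)
    return (w * np.exp(log_pois)).sum(axis=1) / np.sqrt(2 * np.pi)

def zero_truncated_pmf(k, mu, sigma):
    """Zero-truncated version used for species counts, where species with
    zero individuals in the sample are unobservable."""
    p0 = poisson_lognormal_pmf(0, mu, sigma)
    return poisson_lognormal_pmf(k, mu, sigma) / (1.0 - p0)

print(zero_truncated_pmf(np.arange(1, 6), mu=0.5, sigma=1.2))
```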

  19. Psychosocial Predictors of Adjustment among First Year College of Education Students

    ERIC Educational Resources Information Center

    Salami, Samuel O.

    2011-01-01

    The purpose of this study was to examine the contribution of psychological and social factors to the prediction of adjustment to college. A total of 250 first year students from colleges of education in Kwara State, Nigeria, completed measures of self-esteem, emotional intelligence, stress, social support and adjustment. Regression analyses…

  20. The Impact of Statistically Adjusting for Rater Effects on Conditional Standard Errors of Performance Ratings

    ERIC Educational Resources Information Center

    Raymond, Mark R.; Harik, Polina; Clauser, Brian E.

    2011-01-01

    Prior research indicates that the overall reliability of performance ratings can be improved by using ordinary least squares (OLS) regression to adjust for rater effects. The present investigation extends previous work by evaluating the impact of OLS adjustment on standard errors of measurement ("SEM") at specific score levels. In addition, a…

  1. Influence of Parenting Styles on the Adjustment and Academic Achievement of Traditional College Freshmen.

    ERIC Educational Resources Information Center

    Hickman, Gregory P.; Bartholomae, Suzanne; McKenry, Patrick C.

    2000-01-01

    Examines the relationship between parenting styles and academic achievement and adjustment of traditional college freshmen (N=101). Multiple regression models indicate that authoritative parenting style was positively related to student's academic adjustment. Self-esteem was significantly predictive of social, personal-emotional, goal…

  2. Poisson-Fokker-Planck model for biomolecules translocation through nanopore driven by electroosmotic flow

    NASA Astrophysics Data System (ADS)

    Lin, XiaoHui; Zhang, ChiBin; Gu, Jun; Jiang, ShuYun; Yang, JueKuan

    2014-11-01

    A non-continuous electroosmotic flow model (PFP model) is built based on the Poisson equation, the Fokker-Planck equation and the Navier-Stokes equations, and used to predict DNA molecule translocation through a nanopore. The PFP model discards the continuum assumption of ion translocation and considers ions as discrete particles. In addition, this model includes the contributions of the Coulomb electrostatic potential between ions, Brownian motion of ions and viscous friction to ion transport. No ionic diffusion coefficient or other phenomenological parameters are needed in the PFP model. It is worth noting that the PFP model can describe non-equilibrium electroosmotic transport of ions in a channel of a size comparable with the mean free path of the ions. A modified clustering method is proposed for the numerical solution of the PFP model, and the ion current through a nanopore with a radius of 1 nm is simulated using the modified clustering method. The external electric field, wall charge density of the nanopore, surface charge density of the DNA, as well as the ion average number density, influence the electroosmotic velocity profile of the electrolyte solution, the velocity of DNA translocation through the nanopore and the ion current blockade. Results show that the ion average number density of the electrolyte and the surface charge density of the nanopore have a significant effect on the translocation velocity of DNA and the ion current blockade. The translocation velocity of DNA is proportional to the surface charge density of the nanopore, and is inversely proportional to the ion average number density of the electrolyte solution. Thus, the translocation velocity of DNA can be controlled to improve the accuracy of sequencing by adjusting the external electric field, the ion average number density of the electrolyte and the surface charge density of the nanopore. Ion current decreases when the ion average number density is larger than the critical value and increases when the ion average number density is lower than the

  3. Residuals and regression diagnostics: focusing on logistic regression.

    PubMed

    Zhang, Zhongheng

    2016-05-01

    Up to now I have introduced most steps in regression model building and validation. The last step is to check whether there are observations that have a significant impact on model coefficients and specification. The article first describes plotting Pearson residuals against predictors. Such plots are helpful in identifying non-linearity and provide hints on how to transform predictors. Next, I focus on outlier, leverage and influence observations that may have a significant impact on model building. An outlier is an observation whose response value is unusual conditional on its covariate pattern. A leverage point is an observation whose covariate pattern lies far from the bulk of the regressor space. Influence is the product of outlierness and leverage: when an influential observation is dropped from the model, there is a significant shift in the coefficients. Summary statistics for outlier, leverage and influence are studentized residuals, hat values and Cook's distance. They can be easily visualized with graphs and formally tested using the car package. PMID:27294091
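
    The three summary statistics named above can be recomputed by hand for a logistic fit. The article itself works in R with the car package, so the following Python sketch only illustrates the definitions: hat values from the weighted design matrix, studentized Pearson residuals, and a Cook's distance analogue.
```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 200
X = sm.add_constant(rng.normal(size=(n, 3)))
y = rng.binomial(1, 1 / (1 + np.exp(-X @ np.array([-0.5, 1.0, -1.0, 0.5]))))

fit = sm.GLM(y, X, family=sm.families.Binomial()).fit()
mu = fit.fittedvalues
W = mu * (1 - mu)                                   # IRLS weights
WX = X * np.sqrt(W)[:, None]
h = np.diag(WX @ np.linalg.inv(WX.T @ WX) @ WX.T)   # hat values (leverage)
r_stud = (y - mu) / np.sqrt(W * (1 - h))            # studentized Pearson residuals
cooks = r_stud**2 * h / ((1 - h) * X.shape[1])      # Cook's distance analogue
print(np.argsort(cooks)[-5:])                       # most influential observations
```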

  4. Determinants of adjustment for children of divorcing parents.

    PubMed

    Oppenheimer, K; Prinz, R J; Bella, B S

    1990-01-01

    Family physicians frequently see children and parents when they are adjusting to marital separation. This study examined how well child adjustment at school could be determined from an assessment of interspousal relations, maternal functioning, and child perception variables. Teachers evaluated adaptive functioning, social withdrawal, and aggressive behavior at school for a carefully selected sample of 22 boys and 24 girls (ages 7-12) whose parents had been separated for two to 18 months. Regression analyses indicated that boys' overall school adjustment was associated with better maternal parenting skills, lower child fear of abandonment, less blaming of father for the separation, and positive parental verbal attributions toward the other parent. Girls with better overall school adjustment reported less blaming of their mothers and a higher rate of positive attributions by mother about father. These findings suggest concepts family physicians can use in working with families to minimize the effect of divorce on children. PMID:2323490

  5. Semiparametric regression during 2003–2007*

    PubMed Central

    Ruppert, David; Wand, M.P.; Carroll, Raymond J.

    2010-01-01

    Semiparametric regression is a fusion between parametric regression and nonparametric regression that integrates low-rank penalized splines, mixed model and hierarchical Bayesian methodology – thus allowing more streamlined handling of longitudinal and spatial correlation. We review progress in the field over the five-year period between 2003 and 2007. We find semiparametric regression to be a vibrant field with substantial involvement and activity, continual enhancement and widespread application. PMID:20305800

  6. Building Regression Models: The Importance of Graphics.

    ERIC Educational Resources Information Center

    Dunn, Richard

    1989-01-01

    Points out reasons for using graphical methods to teach simple and multiple regression analysis. Argues that a graphically oriented approach has considerable pedagogic advantages in the exposition of simple and multiple regression. Shows that graphical methods may play a central role in the process of building regression models. (Author/LS)

  7. Regression Analysis by Example. 5th Edition

    ERIC Educational Resources Information Center

    Chatterjee, Samprit; Hadi, Ali S.

    2012-01-01

    Regression analysis is a conceptually simple method for investigating relationships among variables. Carrying out a successful application of regression analysis, however, requires a balance of theoretical results, empirical rules, and subjective judgment. "Regression Analysis by Example, Fifth Edition" has been expanded and thoroughly…

  8. Bayesian Unimodal Density Regression for Causal Inference

    ERIC Educational Resources Information Center

    Karabatsos, George; Walker, Stephen G.

    2011-01-01

    Karabatsos and Walker (2011) introduced a new Bayesian nonparametric (BNP) regression model. Through analyses of real and simulated data, they showed that the BNP regression model outperforms other parametric and nonparametric regression models of common use, in terms of predictive accuracy of the outcome (dependent) variable. The other,…

  9. Standards for Standardized Logistic Regression Coefficients

    ERIC Educational Resources Information Center

    Menard, Scott

    2011-01-01

    Standardized coefficients in logistic regression analysis have the same utility as standardized coefficients in linear regression analysis. Although there has been no consensus on the best way to construct standardized logistic regression coefficients, there is now sufficient evidence to suggest a single best approach to the construction of a…

  10. Developmental Regression in Autism Spectrum Disorders

    ERIC Educational Resources Information Center

    Rogers, Sally J.

    2004-01-01

    The occurrence of developmental regression in autism is one of the more puzzling features of this disorder. Although several studies have documented the validity of parental reports of regression using home videos, accumulating data suggest that most children who demonstrate regression also demonstrated previous, subtle, developmental differences.…

  11. Racial identity and reflected appraisals as influences on Asian Americans' racial adjustment.

    PubMed

    Alvarez, A N; Helms, J E

    2001-08-01

    J. E. Helms's (1990) racial identity psychodiagnostic model was used to examine the contribution of racial identity schemas and reflected appraisals to the development of healthy racial adjustment of Asian American university students (N = 188). Racial adjustment was operationally defined as collective self-esteem and awareness of anti-Asian racism. Multiple regression analyses suggested that racial identity schemas and reflected appraisals were significantly predictive of Asian Americans' racial adjustment. Implications for counseling and future research are discussed. PMID:11506069

  12. Existence and uniqueness of two dimensional Euler-Poisson system and WKB approximation to the nonlinear Schrödinger-Poisson system

    NASA Astrophysics Data System (ADS)

    Masaki, Satoshi; Ogawa, Takayoshi

    2015-12-01

    In this paper, we study a dispersive Euler-Poisson system in two-dimensional Euclidean space. Our aim is to show unique existence and the zero-dispersion limit of the time-local weak solution. Since one may not use the dispersive structure in the zero-dispersion limit, the lack of the critical embedding H^1 ↪ L^∞ becomes a bottleneck when reducing the regularity. We hence employ an estimate on the best constant of the Gagliardo-Nirenberg inequality. By this argument, a reasonable convergence rate for the zero-dispersion limit is deduced with a slight loss. We also consider the semiclassical limit problem of the Schrödinger-Poisson system in two dimensions.

  13. Nonparametric Bayesian Segmentation of a Multivariate Inhomogeneous Space-Time Poisson Process.

    PubMed

    Ding, Mingtao; He, Lihan; Dunson, David; Carin, Lawrence

    2012-12-01

    A nonparametric Bayesian model is proposed for segmenting time-evolving multivariate spatial point process data. An inhomogeneous Poisson process is assumed, with a logistic stick-breaking process (LSBP) used to encourage piecewise-constant spatial Poisson intensities. The LSBP explicitly favors spatially contiguous segments, and infers the number of segments based on the observed data. The temporal dynamics of the segmentation and of the Poisson intensities are modeled with exponential correlation in time, implemented in the form of a first-order autoregressive model for uniformly sampled discrete data, and via a Gaussian process with an exponential kernel for general temporal sampling. We consider and compare two different inference techniques: a Markov chain Monte Carlo sampler, which has relatively high computational complexity; and an approximate and efficient variational Bayesian analysis. The model is demonstrated with a simulated example and a real example of space-time crime events in Cincinnati, Ohio, USA. PMID:23741284

  14. Stationary response of multi-degree-of-freedom vibro-impact systems to Poisson white noises

    NASA Astrophysics Data System (ADS)

    Wu, Y.; Zhu, W. Q.

    2008-01-01

    The stationary response of multi-degree-of-freedom (MDOF) vibro-impact (VI) systems to random pulse trains is studied. The system is formulated as a stochastically excited and dissipated Hamiltonian system. The constraints are modeled as non-linear springs according to the Hertz contact law. The random pulse trains are modeled as Poisson white noises. The approximate stationary probability density function (PDF) for the response of MDOF dissipated Hamiltonian systems to Poisson white noises is obtained by solving the fourth-order generalized Fokker-Planck-Kolmogorov (FPK) equation using perturbation approach. As examples, two-degree-of-freedom (2DOF) VI systems under external and parametric Poisson white noise excitations, respectively, are investigated. The validity of the proposed approach is confirmed by using the results obtained from Monte Carlo simulation. It is shown that the non-Gaussian behaviour depends on the product of the mean arrival rate of the impulses and the relaxation time of the oscillator.

  15. Mean-square state and parameter estimation for stochastic linear systems with Gaussian and Poisson noises

    NASA Astrophysics Data System (ADS)

    Basin, M.; Maldonado, J. J.; Zendejo, O.

    2016-07-01

    This paper proposes a new mean-square filter and parameter estimator design for linear stochastic systems with unknown parameters over linear observations, where the unknown parameters are considered as combinations of Gaussian and Poisson white noises. The problem is treated by reducing the original problem to a filtering problem for an extended state vector that includes the parameters as additional states, modelled as combinations of independent Gaussian and Poisson processes. The solution to this filtering problem is based on the mean-square filtering equations for incompletely polynomial states confused with Gaussian and Poisson noises over linear observations. The resulting mean-square filter serves as an identifier for the unknown parameters. Finally, a simulation example shows the effectiveness of the proposed mean-square filter and parameter estimator.

  16. Resonant ultrasound spectroscopy of cylinders over the full range of Poisson's ratio

    NASA Astrophysics Data System (ADS)

    Jaglinski, Tim; Lakes, Roderic S.

    2011-03-01

    Mode structure maps for freely vibrating cylinders over a range of Poisson's ratio, ν, are desirable for the design and interpretation of experiments using resonant ultrasound spectroscopy (RUS). The full range of isotropic ν (-1 to +0.5) is analyzed here using a finite element method to accommodate materials with a negative Poisson's ratio. The fundamental torsional mode has the lowest frequency provided ν is between about -0.24 and +0.5. For any ν, the torsional mode can be identified utilizing the polarization sensitivity of the shear transducers. RUS experimental results for materials with Poisson's ratio +0.3, +0.16, and -0.3 and a previous numerical study for ν = 0.33 are compared with the present analysis. Interpretation of results is easiest if the length/diameter ratio of the cylinder is close to 1. Slight material anisotropy leads to splitting of the higher modes but not of the fundamental torsion mode.

  17. On the influence of reflective boundary conditions on the statistics of Poisson-Kac diffusion processes

    NASA Astrophysics Data System (ADS)

    Giona, Massimiliano; Brasiello, Antonio; Crescitelli, Silvestro

    2016-05-01

    We analyze the influence of reflective boundary conditions on the statistics of Poisson-Kac diffusion processes, and specifically how they modify the Poissonian switching-time statistics. After addressing simple cases such as diffusion in a channel, and the switching statistics in the presence of a polarization potential, we thoroughly study Poisson-Kac diffusion in fractal domains. Diffusion in fractal spaces highlights neatly how the modification in the switching-time statistics associated with reflections against a complex and fractal boundary induces new emergent features of Poisson-Kac diffusion leading to a transition from a regular behavior at shorter timescales to emerging anomalous diffusion properties controlled by walk dimensionality of the fractal set.

  18. A special relation between Young's modulus, Rayleigh-wave velocity, and Poisson's ratio.

    PubMed

    Malischewsky, Peter G; Tuan, Tran Thanh

    2009-12-01

    Bayon et al. [(2005). J. Acoust. Soc. Am. 117, 3469-3477] described a method for the determination of Young's modulus by measuring the Rayleigh-wave velocity and the ellipticity of Rayleigh waves, and found a peculiar almost linear relation between a non-dimensional quantity connecting Young's modulus, Rayleigh-wave velocity and density, and Poisson's ratio. The analytical reason for this special behavior remained unclear. It is demonstrated here that this behavior is a simple consequence of the mathematical form of the Rayleigh-wave velocity as a function of Poisson's ratio. The consequences for auxetic materials (those materials for which Poisson's ratio is negative) are discussed, as well as the determination of the shear and bulk moduli. PMID:20000895
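
    A plausible way to see the near-linearity, assuming the non-dimensional quantity is E/(ρc_R²) and using Viktorov's classical approximation c_R ≈ c_S(0.87 + 1.12ν)/(1 + ν) together with E = 2ρc_S²(1 + ν), is to tabulate 2(1 + ν)³/(0.87 + 1.12ν)² over ν. Both the choice of quantity and the approximation are assumptions here, not taken from the paper.
```python
import numpy as np

nu = np.linspace(0.0, 0.5, 6)
q = 2 * (1 + nu) ** 3 / (0.87 + 1.12 * nu) ** 2   # assumed E / (rho * c_R**2)
print(np.round(q, 3))            # values climb almost linearly with nu
print(np.round(np.diff(q), 3))   # nearly constant increments
```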

  19. Heterogeneous PVA hydrogels with micro-cells of both positive and negative Poisson's ratios.

    PubMed

    Ma, Yanxuan; Zheng, Yudong; Meng, Haoye; Song, Wenhui; Yao, Xuefeng; Lv, Hexiang

    2013-07-01

    Many models describing the deformation of general foam or auxetic materials are based on the assumption of homogeneity and order within the materials. However, non-uniform heterogeneity is often an inherent feature of many porous materials and composites, yet difficult to measure. In this work, inspired by the structures of auxetic materials, porous PVA hydrogels with internal inby-concave pores (IICP) or interconnected pores (ICP) were designed and processed. The deformation of the PVA hydrogels under compression was tested and their Poisson's ratio was characterized. The results indicated that the size, shape and distribution of the pores in the hydrogel matrix had a strong influence on the local Poisson's ratio, which varied from positive to negative at the micro-scale. The size dependency of the local Poisson's ratio reflected and quantified the uniformity and heterogeneity of the micro-porous structures in the PVA hydrogels. PMID:23648366

  20. Pointwise estimates of solutions for the multi-dimensional bipolar Euler-Poisson system

    NASA Astrophysics Data System (ADS)

    Wu, Zhigang; Li, Yeping

    2016-06-01

    In the paper, we consider a multi-dimensional bipolar hydrodynamic model from semiconductor devices and plasmas. This system takes the form of Euler-Poisson with electric field and frictional damping added to the momentum equations. By making a new analysis on Green's functions for the Euler system with damping and the Euler-Poisson system with damping, we obtain pointwise estimates of the solution for the multi-dimensional bipolar Euler-Poisson system. As a by-product, we extend the decay rates of the densities ρ_i (i = 1, 2) in the usual L²-norm to the L^p-norm with p ≥ 1, and the time-decay rates of the momenta m_i (i = 1, 2) in the L²-norm to the L^p-norm with p > 1; all of the decay rates here are optimal.

  1. Delay Adjusted Incidence Infographic

    Cancer.gov

    This Infographic shows the National Cancer Institute SEER Incidence Trends. The graphs show the Average Annual Percent Change (AAPC) 2002-2011. For Men, Thyroid: 5.3*, Liver & IBD: 3.6*, Melanoma: 2.3*, Kidney: 2.0*, Myeloma: 1.9*, Pancreas: 1.2*, Leukemia: 0.9*, Oral Cavity: 0.5, Non-Hodgkin Lymphoma: 0.3*, Esophagus: -0.1, Brain & ONS: -0.2*, Bladder: -0.6*, All Sites: -1.1*, Stomach: -1.7*, Larynx: -1.9*, Prostate: -2.1*, Lung & Bronchus: -2.4*, and Colon & Rectum: -3.0*. For Women, Thyroid: 5.8*, Liver & IBD: 2.9*, Myeloma: 1.8*, Kidney: 1.6*, Melanoma: 1.5, Corpus & Uterus: 1.3*, Pancreas: 1.1*, Leukemia: 0.6*, Brain & ONS: 0, Non-Hodgkin Lymphoma: -0.1, All Sites: -0.1, Breast: -0.3, Stomach: -0.7*, Oral Cavity: -0.7*, Bladder: -0.9*, Ovary: -0.9*, Lung & Bronchus: -1.0*, Cervix: -2.4*, and Colon & Rectum: -2.7*. * AAPC is significantly different from zero (p<.05). Rates were adjusted for reporting delay in the registry. www.cancer.gov Source: Special section of the Annual Report to the Nation on the Status of Cancer, 1975-2011.

  2. Poisson-based self-organizing feature maps and hierarchical clustering for serial analysis of gene expression data.

    PubMed

    Wang, Haiying; Zheng, Huiru; Azuaje, Francisco

    2007-01-01

    Serial analysis of gene expression (SAGE) is a powerful technique for global gene expression profiling, allowing simultaneous analysis of thousands of transcripts without prior structural and functional knowledge. Pattern discovery and visualization have become fundamental approaches to analyzing such large-scale gene expression data. From the pattern discovery perspective, clustering techniques have received great attention. However, due to the statistical nature of SAGE data (i.e., underlying distribution), traditional clustering techniques may not be suitable for SAGE data analysis. Based on the adaptation and improvement of Self-Organizing Maps and hierarchical clustering techniques, this paper presents two new clustering algorithms, namely, PoissonS and PoissonHC, for SAGE data analysis. Tested on synthetic and experimental SAGE data, these algorithms demonstrate several advantages over traditional pattern discovery techniques. The results indicate that, by incorporating statistical properties of SAGE data, PoissonS and PoissonHC, as well as a hybrid approach (neuro-hierarchical approach) based on the combination of PoissonS and PoissonHC, offer significant improvements in pattern discovery and visualization for SAGE data. Moreover, a user-friendly platform, which may improve and accelerate SAGE data mining, was implemented. The system is freely available on request from the authors for nonprofit use. PMID:17473311

  3. Impacts of floods on dysentery in Xinxiang city, China, during 2004–2010: a time-series Poisson analysis

    PubMed Central

    Ni, Wei; Ding, Guoyong; Li, Yifei; Li, Hongkai; Jiang, Baofa

    2014-01-01

    Background Xinxiang, a city in Henan Province, suffered from frequent floods due to persistent and heavy precipitation from 2004 to 2010. In the same period, dysentery was a common public health problem in Xinxiang, with the proportion of reported cases being the third highest among all the notified infectious diseases. Objectives We focused on the dysentery disease consequences of different degrees of floods and examined the association between floods and the morbidity of dysentery on the basis of longitudinal data during the study period. Design A time-series Poisson regression model was used to examine the relationship between ten floods of different degrees and the monthly morbidity of dysentery from 2004 to 2010 in Xinxiang. Relative risks (RRs) of moderate and severe floods on the morbidity of dysentery were calculated in this paper. In addition, we estimated the attributable contributions of moderate and severe floods to the morbidity of dysentery. Results A total of 7591 cases of dysentery were notified in Xinxiang during the study period. The effect of floods on dysentery was shown with a 0-month lag. Regression analysis showed that the risk of moderate and severe floods on the morbidity of dysentery was 1.55 (95% CI: 1.42–1.67) and 1.74 (95% CI: 1.56–1.94), respectively. The attributable risk proportions (ARPs) of moderate and severe floods to the morbidity of dysentery were 35.53 and 42.48%, respectively. Conclusions This study confirms that floods have significantly increased the risk of dysentery in the study area. In addition, severe floods have a higher proportional contribution to the morbidity of dysentery than moderate floods. Public health action should be taken to avoid and control the potential risk of dysentery epidemics after floods. PMID:25098726
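
    A time-series Poisson regression of this general shape, monthly counts regressed on a categorical flood-severity variable with seasonal and trend controls, can be sketched as follows. All variable names and data are hypothetical stand-ins, and exp() of a flood coefficient plays the role of the RRs quoted above.
```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
df = pd.DataFrame({
    "cases": rng.poisson(90, 84),                              # monthly counts, 2004-2010
    "flood": rng.choice([0, 1, 2], 84, p=[0.85, 0.10, 0.05]),  # 0 none, 1 moderate, 2 severe
    "month": np.tile(np.arange(1, 13), 7),
    "t": np.arange(84),
})
fit = smf.glm("cases ~ C(flood) + C(month) + t", data=df,
              family=sm.families.Poisson()).fit()
print(np.exp(fit.params.filter(like="flood")))   # RRs for the flood severity levels
```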

  4. Estimating equivalence with quantile regression.

    PubMed

    Cade, Brian S

    2011-01-01

    Equivalence testing and corresponding confidence interval estimates are used to provide more enlightened statistical statements about parameter estimates by relating them to intervals of effect sizes deemed to be of scientific or practical importance rather than just to an effect size of zero. Equivalence tests and confidence interval estimates are based on a null hypothesis that a parameter estimate is either outside (inequivalence hypothesis) or inside (equivalence hypothesis) an equivalence region, depending on the question of interest and assignment of risk. The former approach, often referred to as bioequivalence testing, is often used in regulatory settings because it reverses the burden of proof compared to a standard test of significance, following a precautionary principle for environmental protection. Unfortunately, many applications of equivalence testing focus on establishing average equivalence by estimating differences in means of distributions that do not have homogeneous variances. I discuss how to compare equivalence across quantiles of distributions using confidence intervals on quantile regression estimates that detect differences in heterogeneous distributions missed by focusing on means. I used one-tailed confidence intervals based on inequivalence hypotheses in a two-group treatment-control design for estimating bioequivalence of arsenic concentrations in soils at an old ammunition testing site and bioequivalence of vegetation biomass at a reclaimed mining site. Two-tailed confidence intervals based both on inequivalence and equivalence hypotheses were used to examine quantile equivalence for negligible trends over time for a continuous exponential model of amphibian abundance. PMID:21516905
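
    The quantile-by-quantile equivalence comparison described above can be sketched with off-the-shelf quantile regression: fit a group indicator at several quantiles and check whether the confidence interval of the group difference lies inside an equivalence region. The data and the bound delta below are hypothetical stand-ins, not the arsenic or vegetation data of the study.
```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(5)
df = pd.DataFrame({
    "y": np.concatenate([rng.lognormal(1.0, 0.6, 80),    # control
                         rng.lognormal(1.1, 0.9, 80)]),  # treatment
    "grp": np.repeat([0, 1], 80),
})
delta = 1.0                                # hypothetical equivalence bound
for tau in (0.25, 0.50, 0.75, 0.90):
    fit = smf.quantreg("y ~ grp", df).fit(q=tau)
    lo, hi = fit.conf_int(alpha=0.10).loc["grp"]   # 90% CI ~ two one-sided 5% tests
    verdict = "equivalent" if -delta < lo and hi < delta else "not shown"
    print(f"tau={tau}: diff={fit.params['grp']:.2f} CI=({lo:.2f}, {hi:.2f}) {verdict}")
```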

  5. Streamflow forecasting using functional regression

    NASA Astrophysics Data System (ADS)

    Masselot, Pierre; Dabo-Niang, Sophie; Chebana, Fateh; Ouarda, Taha B. M. J.

    2016-07-01

    Streamflow, as a natural phenomenon, is continuous in time and so are the meteorological variables which influence its variability. In practice, it can be of interest to forecast the whole flow curve instead of points (daily or hourly). To this end, this paper introduces functional linear models and adapts them to hydrological forecasting. More precisely, functional linear models are regression models based on curves instead of single values. They allow one to consider the whole process instead of a limited number of time points or features. We apply these models to analyse the flow volume and the whole streamflow curve during a given period by using precipitation curves. The functional model is shown to lead to encouraging results. The potential of functional linear models to detect special features that would otherwise have been hard to see is pointed out. The functional model is also compared to the artificial neural network approach, and the advantages and disadvantages of both models are discussed. Finally, future research directions involving the functional model in hydrology are presented.

  6. Insulin resistance: regression and clustering.

    PubMed

    Yoon, Sangho; Assimes, Themistocles L; Quertermous, Thomas; Hsiao, Chin-Fu; Chuang, Lee-Ming; Hwu, Chii-Min; Rajaratnam, Bala; Olshen, Richard A

    2014-01-01

    In this paper we try to define insulin resistance (IR) precisely for a group of Chinese women. Our definition deliberately does not depend upon body mass index (BMI) or age, although in other studies, with particular random effects models quite different from models used here, BMI accounts for a large part of the variability in IR. We accomplish our goal through application of Gauss mixture vector quantization (GMVQ), a technique for clustering that was developed for application to lossy data compression. Defining data come from measurements that play major roles in medical practice. A precise statement of what the data are is in Section 1. Their family structures are described in detail. They concern levels of lipids and the results of an oral glucose tolerance test (OGTT). We apply GMVQ to residuals obtained from regressions of outcomes of an OGTT and lipids on functions of age and BMI that are inferred from the data. A bootstrap procedure developed for our family data supplemented by insights from other approaches leads us to believe that two clusters are appropriate for defining IR precisely. One cluster consists of women who are IR, and the other of women who seem not to be. Genes and other features are used to predict cluster membership. We argue that prediction with "main effects" is not satisfactory, but prediction that includes interactions may be. PMID:24887437

  7. Measurement of Poisson's ratio of nonmetallic materials by laser holographic interferometry

    NASA Astrophysics Data System (ADS)

    Zhu, Jian T.

    1991-12-01

    By means of an off-axis collimated plane-wave coherent light arrangement and a pure-bending loading device, Poisson's ratio values of CFRP (carbon fiber-reinforced plastic plates, lay-up 0 degree(s), 90 degree(s)), GFRP (glass fiber-reinforced plastic plates, radial direction) and PMMA (polymethyl methacrylate, x, y directions) have been measured. By virtue of this study, the ministry standard for the Ministry of Aeronautical Industry (Testing method for the measurement of Poisson's ratio of non-metallic materials by laser holographic interferometry) has been published. The measurement process is fast and simple, and the results are reliable and accurate.

  8. On Poisson's ratio for metal matrix composite laminates. [aluminum boron composites

    NASA Technical Reports Server (NTRS)

    Herakovich, C. T.; Shuart, M. J.

    1978-01-01

    The definition of Poisson's ratio for the nonlinear behavior of metal matrix composite laminates is discussed, and experimental results for tensile and compressive loading of five different boron-aluminum laminates are presented. It is shown that there may be a considerable difference in the value of Poisson's ratio as defined by a total-strain or an incremental-strain definition. It is argued that the incremental definition is more appropriate for nonlinear material behavior. Results from a (0) laminate indicate that the incremental definition provides a precursor to failure which is not evident if the total-strain definition is used.
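
    The contrast between the two definitions is easy to see numerically: for a nonlinear transverse-strain response, the total-strain ratio lags the incremental (tangent) ratio, which is why the incremental definition can act as a precursor to failure. The strain record below is made up for illustration.
```python
import numpy as np

eps_ax = np.linspace(0.0, 0.02, 50)          # axial strain record
eps_tr = -0.28 * eps_ax - 1.5 * eps_ax**2    # made-up nonlinear transverse strain

nu_total = -eps_tr[1:] / eps_ax[1:]          # total-strain definition
nu_incr = -np.gradient(eps_tr, eps_ax)       # incremental (tangent) definition
print(nu_total[-1], nu_incr[-1])             # incremental rises faster near failure
```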

  9. iAPBS: a programming interface to Adaptive Poisson-Boltzmann Solver (APBS).

    PubMed

    Konecny, Robert; Baker, Nathan A; McCammon, J Andrew

    2012-07-26

    The Adaptive Poisson-Boltzmann Solver (APBS) is a state-of-the-art suite for performing Poisson-Boltzmann electrostatic calculations on biomolecules. The iAPBS package provides a modular programmatic interface to the APBS library of electrostatic calculation routines. The iAPBS interface library can be linked with a FORTRAN or C/C++ program thus making all of the APBS functionality available from within the application. Several application modules for popular molecular dynamics simulation packages - Amber, NAMD and CHARMM are distributed with iAPBS allowing users of these packages to perform implicit solvent electrostatic calculations with APBS. PMID:22905037

  10. Incorporation of solvation effects into the fragment molecular orbital calculations with the Poisson-Boltzmann equation

    NASA Astrophysics Data System (ADS)

    Watanabe, Hirofumi; Okiyama, Yoshio; Nakano, Tatsuya; Tanaka, Shigenori

    2010-11-01

    We developed the FMO-PB method, which incorporates solvation effects into the fragment molecular orbital calculation with the Poisson-Boltzmann equation. This method retains good accuracy in energy calculations with reduced computational time. We calculated the solvation free energies for polyalanines, the Alpha-1 peptide, tryptophan cage, and the complex of the estrogen receptor with 17β-estradiol to show the applicability of this method to practical systems. From the calculated results, it has been confirmed that the FMO-PB method is useful for large biomolecules in solution. We also discussed the electric charges which are used in solving the Poisson-Boltzmann equation.

  11. Linear stability of stationary solutions of the Vlasov-Poisson system in three dimensions

    SciTech Connect

    Batt, J.; Rein, G.; Morrison, P.J.

    1993-03-01

    Rigorous results on the stability of stationary solutions of the Vlasov-Poisson system are obtained in both the plasma physics and stellar dynamics contexts. It is proven that stationary solutions in the plasma physics (stellar dynamics) case are linearly stable if they are decreasing (increasing) functions of the local, i.e. particle, energy. The main tool in the analysis is the free energy of the system, a conserved quantity. In addition, an appropriate global existence result is proven for the linearized Vlasov-Poisson system and the existence of stationary solutions that satisfy the above stability condition is established.

  12. Dielectric Boundary Forces in Numerical Poisson-Boltzmann Methods: Theory and Numerical Strategies.

    PubMed

    Cai, Qin; Ye, Xiang; Wang, Jun; Luo, Ray

    2011-10-01

    Continuum modeling of electrostatic interactions based upon the numerical solutions of the Poisson-Boltzmann equation has been widely adopted in biomolecular applications. To extend their applications to molecular dynamics and energy minimization, robust and efficient methodologies to compute solvation forces must be developed. In this study, we have first reviewed the theory for the computation of dielectric boundary forces based on the definition of the Maxwell stress tensor. This is followed by a new formulation of the dielectric boundary force suitable for the finite-difference Poisson-Boltzmann methods. We have validated the new formulation with idealized analytical systems and realistic molecular systems. PMID:22125339

  13. Dielectric boundary force in numerical Poisson-Boltzmann methods: Theory and numerical strategies

    NASA Astrophysics Data System (ADS)

    Cai, Qin; Ye, Xiang; Wang, Jun; Luo, Ray

    2011-10-01

    Continuum modeling of electrostatic interactions based upon the numerical solutions of the Poisson-Boltzmann equation has been widely adopted in biomolecular applications. To extend their applications to molecular dynamics and energy minimization, robust and efficient methodologies to compute solvation forces must be developed. In this study, we have first reviewed the theory for the computation of dielectric boundary force based on the definition of the Maxwell stress tensor. This is followed by a new formulation of the dielectric boundary force suitable for the finite-difference Poisson-Boltzmann methods. We have validated the new formulation with idealized analytical systems and realistic molecular systems.

  14. iAPBS: a programming interface to Adaptive Poisson-Boltzmann Solver

    SciTech Connect

    Konecny, Robert; Baker, Nathan A.; McCammon, J. A.

    2012-07-26

    The Adaptive Poisson-Boltzmann Solver (APBS) is a state-of-the-art suite for performing Poisson-Boltzmann electrostatic calculations on biomolecules. The iAPBS package provides a modular programmatic interface to the APBS library of electrostatic calculation routines. The iAPBS interface library can be linked with a Fortran or C/C++ program thus making all of the APBS functionality available from within the application. Several application modules for popular molecular dynamics simulation packages -- Amber, NAMD and CHARMM are distributed with iAPBS allowing users of these packages to perform implicit solvent electrostatic calculations with APBS.

  15. A Poisson equation formulation for pressure calculations in penalty finite element models for viscous incompressible flows

    NASA Technical Reports Server (NTRS)

    Sohn, J. L.; Heinrich, J. C.

    1990-01-01

    The calculation of pressures when the penalty-function approximation is used in finite-element solutions of laminar incompressible flows is addressed. A Poisson equation for the pressure is formulated that involves third derivatives of the velocity field. The second derivatives appearing in the weak formulation of the Poisson equation are calculated from the C0 velocity approximation using a least-squares method. The present scheme is shown to be efficient, free of spurious oscillations, and accurate. Examples of applications are given and compared with results obtained using mixed formulations.

  16. Solution of the nonlinear Poisson-Boltzmann equation: Application to ionic diffusion in cementitious materials

    SciTech Connect

    Arnold, J.; Kosson, D.S.; Garrabrants, A.; Meeussen, J.C.L.; Sloot, H.A. van der

    2013-02-15

    A robust numerical solution of the nonlinear Poisson-Boltzmann equation for asymmetric polyelectrolyte solutions in discrete pore geometries is presented. Comparisons to the linearized approximation of the Poisson-Boltzmann equation reveal that the assumptions leading to linearization may not be appropriate for the electrochemical regime in many cementitious materials. Implications of the electric double layer on both partitioning of species and on diffusive release are discussed. The influence of the electric double layer on anion diffusion relative to cation diffusion is examined.

  17. Improved blowup theorems for the Euler-Poisson equations with attractive forces

    NASA Astrophysics Data System (ADS)

    Li, Rui; Lin, Xing; Ma, Zongwei

    2016-07-01

    Our discussion here mainly focuses on the formation of singularities for solutions to the N-dimensional Euler-Poisson equations with attractive forces, in radial symmetry. Motivated by the integration method of Yuen, we prove two blow-up results under the conditions that the solutions have compact radius R(t) or have no compact support restriction, which generalize the ones Yuen obtained in 2011 [M. W. Yuen, "Blowup for the C1 solution of the Euler-Poisson equations of gaseous stars in RN," J. Math. Anal. Appl. 383, 627-633 (2011)].

  18. Noise parameter estimation for poisson corrupted images using variance stabilization transforms.

    PubMed

    Jin, Xiaodan; Xu, Zhenyu; Hirakawa, Keigo

    2014-03-01

    Noise is present in all images captured by real-world image sensors. The Poisson distribution is said to model the stochastic nature of the photon arrival process and agrees with the distribution of measured pixel values. We propose a method for estimating unknown noise parameters from Poisson-corrupted images using properties of variance stabilization. With a significantly lower computational complexity and improved stability, the proposed estimation technique yields noise parameters that are comparable in accuracy to the state-of-the-art methods. PMID:24723530
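
    For the pure-Poisson case, the variance-stabilization property exploited by such estimators is that of the Anscombe transform: after mapping counts through 2√(x + 3/8), the variance is approximately 1 regardless of the underlying intensity. A quick numerical check follows; this is not the authors' estimator.
```python
import numpy as np

def anscombe(x):
    """Anscombe transform: stabilizes Poisson variance to ~1."""
    return 2.0 * np.sqrt(x + 3.0 / 8.0)

rng = np.random.default_rng(6)
for lam in (5, 20, 100):
    z = anscombe(rng.poisson(lam, 100_000))
    print(lam, np.var(z).round(3))   # ~1.0 at every intensity
```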

  19. Reentrant Origami-Based Metamaterials with Negative Poisson's Ratio and Bistability.

    PubMed

    Yasuda, H; Yang, J

    2015-05-01

    We investigate the unique mechanical properties of reentrant 3D origami structures based on the Tachi-Miura polyhedron (TMP). We explore the potential usage as mechanical metamaterials that exhibit tunable negative Poisson's ratio and structural bistability simultaneously. We show analytically and experimentally that the Poisson's ratio changes from positive to negative and vice versa during its folding motion. In addition, we verify the bistable mechanism of the reentrant 3D TMP under rigid origami configurations without relying on the buckling motions of planar origami surfaces. This study forms a foundation in designing and constructing TMP-based metamaterials in the form of bellowslike structures for engineering applications. PMID:26001009

  20. The noncommutative Poisson bracket and the deformation of the family algebras

    SciTech Connect

    Wei, Zhaoting

    2015-07-15

    Family algebras were introduced by Kirillov in 2000. In this paper, we study the noncommutative Poisson bracket P on the classical family algebra C_τ(g). We show that P controls the first-order 1-parameter formal deformation from C_τ(g) to Q_τ(g), where the latter is the quantum family algebra. Moreover, we prove that the noncommutative Poisson bracket is in fact a Hochschild 2-coboundary, and therefore the deformation is infinitesimally trivial. In the last part of this paper, we discuss the relation between Mackey's analogue and the quantization problem of the family algebras.

  1. Developmental regression in autism spectrum disorder

    PubMed Central

    Al Backer, Nouf Backer

    2015-01-01

    The occurrence of developmental regression in autism spectrum disorder (ASD) is one of the most puzzling phenomena of this disorder. Little is known about the nature and mechanism of developmental regression in ASD. About one-third of young children with ASD lose some skills during the preschool period, usually speech, but sometimes nonverbal communication, social or play skills are also affected. There is considerable evidence suggesting that most children who demonstrate regression also had previous, subtle, developmental differences. It is difficult to predict the prognosis of autistic children with developmental regression. It seems that the earlier development of social, language, and attachment behaviors followed by regression does not predict the later recovery of skills or better developmental outcomes. The underlying mechanisms that lead to regression in autism are unknown. The role of subclinical epilepsy in the developmental regression of children with autism remains unclear. PMID:27493417

  2. A Survey of UML Based Regression Testing

    NASA Astrophysics Data System (ADS)

    Fahad, Muhammad; Nadeem, Aamer

    Regression testing is the process of ensuring software quality by analyzing whether changed parts behave as intended and unchanged parts are not affected by the modifications. Since it is a costly process, many techniques have been proposed in the research literature to suggest how testers can build a regression test suite from an existing test suite with minimum cost. In this paper, we discuss the advantages and drawbacks of using UML diagrams for regression testing and observe that UML models help in identifying changes for regression test selection effectively. We survey the existing UML-based regression testing techniques and provide an analysis matrix to give quick insight into the prominent features of the literature. We discuss open research issues, such as managing and reducing the size of the regression test suite and prioritizing test cases under strict schedules and resources, that remain to be addressed for UML-based regression testing.

  3. A Negative Binomial Regression Model for Accuracy Tests

    ERIC Educational Resources Information Center

    Hung, Lai-Fa

    2012-01-01

    Rasch used a Poisson model to analyze errors and speed in reading tests. An important property of the Poisson distribution is that the mean and variance are equal. However, in social science research, it is very common for the variance to be greater than the mean (i.e., the data are overdispersed). This study embeds the Rasch model within an…
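
    Although this record is truncated, the overdispersion issue it raises is easy to illustrate. The sketch below (synthetic data; the gamma-mixed rates and all parameter values are illustrative assumptions, not from the study) fits a Poisson GLM and a negative binomial GLM with statsmodels; the dispersion statistic flags the violated mean-variance assumption. The NB dispersion parameter is fixed at its true value here for simplicity; estimating it would require the discrete NegativeBinomial model instead.

        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(0)
        n = 500
        x = rng.normal(size=n)
        X = sm.add_constant(x)
        mu = np.exp(0.5 + 0.8 * x)
        # Gamma-mixed rates make Var(y) > E(y): overdispersed counts
        y = rng.poisson(mu * rng.gamma(shape=2.0, scale=0.5, size=n))

        pois = sm.GLM(y, X, family=sm.families.Poisson()).fit()
        nb = sm.GLM(y, X, family=sm.families.NegativeBinomial(alpha=0.5)).fit()

        print(pois.pearson_chi2 / pois.df_resid)   # well above 1: overdispersion
        print(pois.bse, nb.bse)                    # Poisson standard errors are too small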

  4. Random-Effects Regression Models for Clustered Data with an Example From Smoking Prevention Research.

    ERIC Educational Resources Information Center

    Hedeker, Donald; And Others

    1994-01-01

    Proposes random-effects regression model for analysis of clustered data. Suggests model assumes some dependency of within-cluster data. Model adjusts effects for resulting dependency from data clustering. Describes maximum marginal likelihood solution. Discusses available statistical software. Demonstrates model via investigation involving…

  5. About solvability of some boundary value problems for Poisson equation in a ball

    NASA Astrophysics Data System (ADS)

    Koshanova, Maira D.; Usmanov, Kairat I.; Turmetov, Batirkhan Kh.

    2016-08-01

    In the present paper, we study properties of some integro-differential operators of fractional order. As an application of the properties of these operators, we examine, for the Poisson equation, questions of solvability of a fractional analogue of the Neumann problem and of analogues of periodic boundary value problems for circular domains. Exact conditions for the solvability of these problems are found.

  6. Poisson ratio and excess low-frequency vibrational states in glasses

    NASA Astrophysics Data System (ADS)

    Duval, Eugène; Deschamps, Thierry; Saviot, Lucien

    2013-08-01

    In glasses, starting from the dependence of Angell's fragility on the Poisson ratio [V. N. Novikov and A. P. Sokolov, Nature 431, 961 (2004), 10.1038/nature02947] and the dependence of the Poisson ratio on the atomic packing density [G. N. Greaves, A. L. Greer, R. S. Lakes, and T. Rouxel, Nature Mater. 10, 823 (2011), 10.1038/nmat3134], we propose that the heterogeneities are predominantly density fluctuations in strong glasses (lower Poisson ratio) and shear elasticity fluctuations in fragile glasses (higher Poisson ratio). Because the excess of low-frequency vibrational modes relative to the Debye regime (the boson peak) is strongly connected to these fluctuations, we propose that they are breathing-like (with change of volume) in strong glasses and shear-like (without change of volume) in fragile glasses. As verification, it is confirmed that the excess modes in the strong silica glass are predominantly breathing-like. Moreover, it is shown that the excess breathing-like modes in a strong polymeric glass are replaced by shear-like modes under hydrostatic pressure as the glass becomes more compact.

  7. Measurement of Young's modulus and Poisson's ratio of human hair using optical techniques

    NASA Astrophysics Data System (ADS)

    Hu, Zhenxing; Li, Gaosheng; Xie, Huimin; Hua, Tao; Chen, Pengwan; Huang, Fenglei

    2009-12-01

    Human hair is a complex nanocomposite fiber whose physical appearance and mechanical strength are governed by a variety of factors such as ethnicity, cleaning, grooming, chemical treatments, and environment. Characterization of the mechanical properties of hair is essential to develop better cosmetic products and advance biological and cosmetic science; hence, the behavior of hair under tension is of interest to beauty care science. Human hair fibers experience tensile forces as they are groomed and styled. Previous research on the tensile testing of human hair focused mainly on the longitudinal direction, measuring properties such as elastic modulus, yield strength, breaking strength, and strain at break after different treatments. In this research, an experiment for evaluating the mechanical properties of human hair, such as Young's modulus and Poisson's ratio, was designed and conducted. The principle of the experimental instrument is presented, and the testing system used to evaluate Young's modulus and Poisson's ratio is introduced. The range of Poisson's ratio of hair from the same person was evaluated. Experiments were conducted to test the mechanical properties after acid, aqueous alkali, and neutral solution treatment of human hair, and Young's modulus and Poisson's ratio were interpreted on the basis of these results. The results can be useful for hair treatment and cosmetic product development.

  8. Measurement of Young's modulus and Poisson's ratio of human hair using optical techniques

    NASA Astrophysics Data System (ADS)

    Hu, Zhenxing; Li, Gaosheng; Xie, Huimin; Hua, Tao; Chen, Pengwan; Huang, Fenglei

    2010-03-01

    Human hair is a complex nanocomposite fiber whose physical appearance and mechanical strength are governed by a variety of factors such as ethnicity, cleaning, grooming, chemical treatments, and environment. Characterization of the mechanical properties of hair is essential to develop better cosmetic products and advance biological and cosmetic science; hence, the behavior of hair under tension is of interest to beauty care science. Human hair fibers experience tensile forces as they are groomed and styled. Previous research on the tensile testing of human hair focused mainly on the longitudinal direction, measuring properties such as elastic modulus, yield strength, breaking strength, and strain at break after different treatments. In this research, an experiment for evaluating the mechanical properties of human hair, such as Young's modulus and Poisson's ratio, was designed and conducted. The principle of the experimental instrument is presented, and the testing system used to evaluate Young's modulus and Poisson's ratio is introduced. The range of Poisson's ratio of hair from the same person was evaluated. Experiments were conducted to test the mechanical properties after acid, aqueous alkali, and neutral solution treatment of human hair, and Young's modulus and Poisson's ratio were interpreted on the basis of these results. The results can be useful for hair treatment and cosmetic product development.

  9. Continuum description of the Poisson's ratio of ligament and tendon under finite deformation.

    PubMed

    Swedberg, Aaron M; Reese, Shawn P; Maas, Steve A; Ellis, Benjamin J; Weiss, Jeffrey A

    2014-09-22

    Ligaments and tendons undergo volume loss when stretched along the primary fiber axis, which is evident by the large, strain-dependent Poisson's ratios measured during quasi-static tensile tests. Continuum constitutive models that have been used to describe ligament material behavior generally assume incompressibility, which does not reflect the volumetric material behavior seen experimentally. We developed a strain energy equation that describes large, strain dependent Poisson's ratios and nonlinear, transversely isotropic behavior using a novel method to numerically enforce the desired volumetric behavior. The Cauchy stress and spatial elasticity tensors for this strain energy equation were derived and implemented in the FEBio finite element software (www.febio.org). As part of this objective, we derived the Cauchy stress and spatial elasticity tensors for a compressible transversely isotropic material, which to our knowledge have not appeared previously in the literature. Elastic simulations demonstrated that the model predicted the nonlinear, upwardly concave uniaxial stress-strain behavior while also predicting a strain-dependent Poisson's ratio. Biphasic simulations of stress relaxation predicted a large outward fluid flux and substantial relaxation of the peak stress. Thus, the results of this study demonstrate that the viscoelastic behavior of ligaments and tendons can be predicted by modeling fluid movement when combined with a large Poisson's ratio. Further, the constitutive framework provides the means for accurate simulations of ligament volumetric material behavior without the need to resort to micromechanical or homogenization methods, thus facilitating its use in large scale, whole joint models. PMID:25134434

  10. Continuum Description of the Poisson's Ratio of Ligament and Tendon Under Finite Deformation

    PubMed Central

    Swedberg, Aaron M.; Reese, Shawn P.; Maas, Steve A.; Ellis, Benjamin J.; Weiss, Jeffrey A.

    2014-01-01

    Ligaments and tendons undergo volume loss when stretched along the primary fiber axis, which is evident by the large, strain-dependent Poisson's ratios measured during quasi-static tensile tests. Continuum constitutive models that have been used to describe ligament material behavior generally assume incompressibility, which does not reflect the volumetric material behavior seen experimentally. We developed a strain energy equation that describes large, strain dependent Poisson's ratios and nonlinear, transversely isotropic behavior using a novel method to numerically enforce the desired volumetric behavior. The Cauchy stress and spatial elasticity tensors for this strain energy equation were derived and implemented in the FEBio finite element software (www.febio.org). As part of this objective, we derived the Cauchy stress and spatial elasticity tensors for a compressible transversely isotropic material, which to our knowledge have not appeared previously in the literature. Elastic simulations demonstrated that the model predicted the nonlinear, upwardly concave uniaxial stress-strain behavior while also predicting a strain-dependent Poisson's ratio. Biphasic simulations of stress relaxation predicted a large outward fluid flux and substantial relaxation of the peak stress. Thus, the results of this study demonstrate that the viscoelastic behavior of ligaments and tendons can be predicted by modeling fluid movement when combined with a large Poisson's ratio. Further, the constitutive framework provides the means for accurate simulations of ligament volumetric material behavior without the need to resort to micromechanical or homogenization methods, thus facilitating its use in large scale, whole joint models. PMID:25134434

  11. Multi-Parameter Linear Least-Squares Fitting to Poisson Data One Count at a Time

    NASA Technical Reports Server (NTRS)

    Wheaton, W.; Dunklee, A.; Jacobson, A.; Ling, J.; Mahoney, W.; Radocinski, R.

    1993-01-01

    A standard problem in gamma-ray astronomy data analysis is the decomposition of a set of observed counts, described by Poisson statistics, according to a given multi-component linear model, with underlying physical count rates or fluxes which are to be estimated from the data.

  12. Lie-Poisson integrators for rigid body dynamics in the solar system

    NASA Astrophysics Data System (ADS)

    Touma, J.; Wisdom, J.

    1994-03-01

    The n-body mapping method of Wisdom & Holman (1991) is generalized to encompass rotational dynamics. The Lie-Poisson structure of rigid body dynamics is discussed. Integrators which preserve that structure are derived for the motion of a free rigid body and for the motion of rigid bodies interacting gravitationally with mass points.
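
    The structure-preserving idea can be sketched independently of the paper's own integrator: split the free-rigid-body Hamiltonian into three one-axis terms whose flows are exact rotations of the angular momentum vector, so the Casimir |m| is conserved to round-off. The sketch below is a generic Strang-split Lie-Poisson integrator under assumed moments of inertia and sign convention dm/dt = m x omega; it is not the Touma-Wisdom mapping itself.

        import numpy as np

        I = np.array([1.0, 2.0, 3.0])    # principal moments of inertia (arbitrary values)

        def axis_rotation(m, i, angle):
            """Exact flow of the sub-Hamiltonian m_i^2 / (2 I_i): a rotation of the
            angular momentum m about principal axis i."""
            j, k = (i + 1) % 3, (i + 2) % 3
            c, s = np.cos(angle), np.sin(angle)
            out = m.copy()
            out[j] = c * m[j] + s * m[k]
            out[k] = -s * m[j] + c * m[k]
            return out

        def strang_step(m, dt):
            """Symmetric composition of exact sub-flows; preserves |m| to round-off."""
            for i, h in ((0, dt / 2), (1, dt / 2), (2, dt), (1, dt / 2), (0, dt / 2)):
                m = axis_rotation(m, i, (m[i] / I[i]) * h)
            return m

        m = np.array([1.0, 0.1, 0.1])
        norm0 = np.linalg.norm(m)
        for _ in range(10_000):
            m = strang_step(m, 0.01)
        print(np.linalg.norm(m) - norm0)   # ~1e-16: the Casimir |m| is conserved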

  13. Recovering doping profiles in semiconductor devices with the Boltzmann-Poisson model

    NASA Astrophysics Data System (ADS)

    Cheng, Yingda; Gamba, Irene M.; Ren, Kui

    2011-05-01

    We investigate numerically an inverse problem related to the Boltzmann-Poisson system of equations for transport of electrons in semiconductor devices. The objective of the (ill-posed) inverse problem is to recover the doping profile of a device, represented as a source function in the mathematical model, from its current-voltage characteristics. To reduce the degree of ill-posedness of the inverse problem, we parameterize the unknown doping profile function to limit the number of unknowns. We show by numerical examples that the reconstruction of a few low moments of the doping profile is possible when relatively accurate time-dependent or time-independent measurements are available, although the latter reconstruction is less accurate than the former. We also compare reconstructions from the Boltzmann-Poisson (BP) model to those from the classical drift-diffusion-Poisson (DDP) model, assuming that measurements are generated with the BP model. We show that the two types of reconstructions can differ significantly in regimes where the drift-diffusion-Poisson equation fails to model the physics accurately. However, when the noise present in the measured data is high, no difference in the reconstructions can be observed.

  14. A Portrait of Poisson: A Fish out of Water Who Found His Calling.

    ERIC Educational Resources Information Center

    Geller, B.; Bruk, Y.

    1991-01-01

    Presents a brief historical sketch of the life and work of one of the founders of modern mathematical physics. Discusses three problem-solving applications of the Poisson distribution with examples from elementary probability theory. Provides background on two of his noteworthy results from the physics of oscillations and the deformation of rigid…

  15. Updating a Classic: "The Poisson Distribution and the Supreme Court" Revisited

    ERIC Educational Resources Information Center

    Cole, Julio H.

    2010-01-01

    W. A. Wallis studied vacancies in the US Supreme Court over a 96-year period (1837-1932) and found that the distribution of the number of vacancies per year could be characterized by a Poisson model. This note updates this classic study.
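
    The flavor of the classic calculation is easy to reproduce. The sketch below assumes a rate of about one vacancy every two years, the figure usually quoted from Wallis's 1837-1932 data (treat the value as an illustrative assumption):

        from scipy.stats import poisson

        lam = 0.5   # assumed vacancies per year, roughly Wallis's estimate
        for k in range(4):
            print(k, round(poisson.pmf(k, lam), 4))
        print(1.0 - poisson.pmf(0, lam))   # P(at least one vacancy in a year) ~ 0.39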

  16. The Dependent Poisson Race Model and Modeling Dependence in Conjoint Choice Experiments

    ERIC Educational Resources Information Center

    Ruan, Shiling; MacEachern, Steven N.; Otter, Thomas; Dean, Angela M.

    2008-01-01

    Conjoint choice experiments are used widely in marketing to study consumer preferences amongst alternative products. We develop a class of choice models, belonging to the class of Poisson race models, that describe a "random utility" which lends itself to a process-based description of choice. The models incorporate a dependence structure which…
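
    A minimal simulation conveys the race mechanism, though only for the independent case (the paper's contribution is the dependent structure, which this sketch omits). Each alternative accumulates Poisson events at its own rate, and the first accumulator to reach a threshold determines the choice; the time to threshold is then gamma distributed. Rates and threshold below are arbitrary illustrative values.

        import numpy as np

        rng = np.random.default_rng(1)

        def poisson_race(rates, threshold, n_trials=100_000):
            """Independent Poisson race: the first accumulator to collect
            `threshold` events wins; time to threshold is Gamma(threshold, 1/rate)."""
            rates = np.asarray(rates, dtype=float)
            times = rng.gamma(shape=threshold, scale=1.0 / rates,
                              size=(n_trials, len(rates)))
            winners = times.argmin(axis=1)
            choice_probs = np.bincount(winners, minlength=len(rates)) / n_trials
            return choice_probs, times.min(axis=1).mean()

        probs, mean_rt = poisson_race([3.0, 2.0], threshold=10)
        print(probs, mean_rt)   # the higher-rate option is chosen more often, and faster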

  17. Poisson Growth Mixture Modeling of Intensive Longitudinal Data: An Application to Smoking Cessation Behavior

    ERIC Educational Resources Information Center

    Shiyko, Mariya P.; Li, Yuelin; Rindskopf, David

    2012-01-01

    Intensive longitudinal data (ILD) have become increasingly common in the social and behavioral sciences; count variables, such as the number of daily smoked cigarettes, are frequently used outcomes in many ILD studies. We demonstrate a generalized extension of growth mixture modeling (GMM) to Poisson-distributed ILD for identifying qualitatively…

  18. Relative and Absolute Error Control in a Finite-Difference Method Solution of Poisson's Equation

    ERIC Educational Resources Information Center

    Prentice, J. S. C.

    2012-01-01

    An algorithm for error control (absolute and relative) in the five-point finite-difference method applied to Poisson's equation is described. The algorithm is based on discretization of the domain of the problem by means of three rectilinear grids, each of different resolution. We discuss some hardware limitations associated with the algorithm,…
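
    For readers unfamiliar with the five-point discretization itself, here is a plain single-grid Jacobi sketch on the unit square with Dirichlet boundaries; it is not the article's three-grid error-control algorithm, and the grid size and iteration count are arbitrary choices.

        import numpy as np

        n = 33                               # grid points per side, h = 1/(n - 1)
        h = 1.0 / (n - 1)
        x = np.linspace(0.0, 1.0, n)
        X, Y = np.meshgrid(x, x, indexing="ij")
        f = -2.0 * np.pi**2 * np.sin(np.pi * X) * np.sin(np.pi * Y)   # exact u = sin(pi x) sin(pi y)

        u = np.zeros((n, n))                 # zero Dirichlet boundary values
        for _ in range(5000):                # Jacobi sweeps on the five-point stencil
            u[1:-1, 1:-1] = 0.25 * (u[2:, 1:-1] + u[:-2, 1:-1] +
                                    u[1:-1, 2:] + u[1:-1, :-2] - h**2 * f[1:-1, 1:-1])

        err = np.abs(u - np.sin(np.pi * X) * np.sin(np.pi * Y)).max()
        print(err)                           # O(h^2) once the iteration has converged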

  19. 78 FR 62712 - Rate Adjustment

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-10-22

    ... Rate Adjustment AGENCY: Postal Regulatory Commission. ACTION: Notice. SUMMARY: The Commission is noticing a recent Postal Service filing seeking postal rate adjustments based on exigent circumstances... On September 26, 2013, the Postal Service filed an exigent rate request with the Commission...

  20. Adjustable holder for transducer mounting

    NASA Technical Reports Server (NTRS)

    Deotsch, R. C.

    1980-01-01

    Positioning of acoustic sensor, strain gage, or similar transducer is facilitated by adjustable holder. Developed for installation on Space Shuttle, it includes springs for maintaining uniform load on transducer with adjustable threaded cap for precisely controlling position of sensor with respect to surrounding structure.

  1. Spousal Adjustment to Myocardial Infarction.

    ERIC Educational Resources Information Center

    Ziglar, Elisa J.

    This paper reviews the literature on the stresses and coping strategies of spouses of patients with myocardial infarction (MI). It attempts to identify specific problem areas of adjustment for the spouse and to explore the effects of spousal adjustment on patient recovery. Chapter one provides an overview of the importance in examining the…

  2. Mood Adjustment via Mass Communication.

    ERIC Educational Resources Information Center

    Knobloch, Silvia

    2003-01-01

    Proposes and experimentally tests mood adjustment approach, complementing mood management theory. Discusses how results regarding self-exposure across time show that patterns of popular music listening among a group of undergraduate students differ with initial mood and anticipation, lending support to mood adjustment hypotheses. Describes how…

  3. Prevalence, correlates, and costs of patients with poor adjustment to mixed cancers.

    PubMed

    Butler, Lorna; Downe-Wamboldt, Barbara; Melanson, Patricia; Coulter, Lynn; Keefe, Janice; Singleton, Jerome; Bell, David

    2006-01-01

    Approximately 2% to 3% of the Canadian population has experienced cancer. The literature indicates that adjustment to chronic illness is often poor, and individuals with poor adjustment have been found to use disproportionately more health services. The purpose of this study was to determine the prevalence, correlates, and costs associated with poor adjustment to mixed cancers. A consecutive sample (n = 171) of breast, lung, and prostate cancer patients at the Nova Scotia Regional Cancer Center were surveyed. Twenty-eight percent of the cancer group showed fair to poor adjustment to illness based on the Psychological Adjustment to Illness Self-Report Scale raw score. Poor adjustment was moderately correlated with depression (r = 0.50, P < .0001) and evasive coping (r = 0.38, P < .0001) and unrelated to demographic variables. Depression explained 25% of the variance in poor adjustment to illness in regression analysis. Cancer patients with fair to poor adjustment to illness had statistically significantly higher annual healthcare expenditures (P < .002) than those with good adjustment. The expenditure findings agree with previous literature on chronic illness. The prevalence of fair to poor adjustment in this cancer population using the Psychological Adjustment to Illness Self-Report Scale is similar to that reported for chronic illness to date, suggesting that only those with better adjustment consented to this study. PMID:16557115

  4. Microscopic dynamics perspective on the relationship between Poisson's ratio and ductility of metallic glasses

    NASA Astrophysics Data System (ADS)

    Ngai, K. L.; Wang, Li-Min; Liu, Riping; Wang, W. H.

    2014-01-01

    In metallic glasses a clear correlation has been established between plasticity or ductility and the Poisson's ratio ν_Poisson, or alternatively the ratio of the elastic bulk modulus to the shear modulus, K/G. Such a correlation between these two macroscopic mechanical properties is intriguing and is challenging to explain from the dynamics on a microscopic level. A recent experimental study has found a connection of ductility to the secondary β-relaxation in metallic glasses. The strain rate and temperature dependencies of the ductile-brittle transition are similar to those of the reciprocal of the secondary β-relaxation time, τ_β. Moreover, a metallic glass is more ductile if the relaxation strength of the β-relaxation is larger and τ_β is shorter. The findings indicate the β-relaxation is related to and instrumental for ductility. On the other hand, K/G or ν_Poisson is related to the effective Debye-Waller factor (i.e., the non-ergodicity parameter), f_0, characterizing the dynamics of a structural unit inside a cage formed by other units, and manifested as the nearly constant loss shown in the frequency-dependent susceptibility. We make the connection of f_0 to the non-exponentiality parameter n in the Kohlrausch stretched exponential correlation function of the structural α-relaxation, φ(t) = exp[-(t/τ_α)^(1-n)]. This connection follows from the fact that both f_0 and n are determined by the inter-particle potential, and 1/f_0 or (1 - f_0) and n both increase with anharmonicity of the potential. A well tested result from the Coupling Model is used to show that τ_β is completely determined by τ_α and n. From the string of relations, (i) K/G or ν_Poisson with 1/f_0 or (1 - f_0), (ii) 1/f_0 or (1 - f_0) with n, and (iii) τ_α and n with τ_β, we arrive at the desired relation between K/G or ν_Poisson and τ_β. On combining this relation with that between ductility and τ_β, we have finally an explanation of the empirical correlation between

  5. Regression approaches in the test-negative study design for assessment of influenza vaccine effectiveness.

    PubMed

    Bond, H S; Sullivan, S G; Cowling, B J

    2016-06-01

    Influenza vaccination is the most practical means available for preventing influenza virus infection and is widely used in many countries. Because vaccine components and circulating strains frequently change, it is important to continually monitor vaccine effectiveness (VE). The test-negative design is frequently used to estimate VE. In this design, patients meeting the same clinical case definition are recruited and tested for influenza; those who test positive are the cases and those who test negative form the comparison group. When determining VE in these studies, the typical approach has been to use logistic regression, adjusting for potential confounders. Because vaccine coverage and influenza incidence change throughout the season, time is included among these confounders. While most studies use unconditional logistic regression, adjusting for time, an alternative approach is to use conditional logistic regression, matching on time. Here, we used simulation data to examine the potential for both regression approaches to permit accurate and robust estimates of VE. In situations where vaccine coverage changed during the influenza season, the conditional model and unconditional models adjusting for categorical week and using a spline function for week provided more accurate estimates. We illustrated the two approaches on data from a test-negative study of influenza VE against hospitalization in children in Hong Kong which resulted in the conditional logistic regression model providing the best fit to the data. PMID:26732691
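
    The unconditional approach with calendar time as a categorical confounder can be sketched as follows (synthetic data; all rates, coefficients, and variable names are illustrative assumptions, and the conditional-logistic alternative discussed in the abstract is not shown). VE is recovered as one minus the odds ratio for vaccination.

        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        rng = np.random.default_rng(2)
        n = 2000
        week = rng.integers(1, 21, size=n)            # calendar week of presentation
        vacc = rng.binomial(1, 0.2 + 0.02 * week)     # coverage rises over the season
        # True VE = 50%: vaccination halves the odds of testing positive
        logit = -0.5 + 0.1 * np.sin(week / 3.0) + np.log(0.5) * vacc
        flu = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit)))

        df = pd.DataFrame({"flu": flu, "vacc": vacc, "week": week})
        fit = smf.logit("flu ~ vacc + C(week)", data=df).fit(disp=0)
        print("VE estimate:", 1.0 - np.exp(fit.params["vacc"]))   # near 0.5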

  6. Regression in schizophrenia and its therapeutic value.

    PubMed

    Yazaki, N

    1992-03-01

    Using the regression evaluation scale, 25 schizophrenic patients were classified into three groups: Dissolution/autism (DAUG), Dissolution-attachment (DATG), and Non-regression (NRG). The regression of DAUG was of the type in which autism occurred when destructiveness emerged, while the regression of DATG was of the type in which attachment occurred when destructiveness emerged. This suggests that the regressive phenomena are an actualized form of the approach complex. In order to determine the factors distinguishing these two groups, I investigated psychiatric symptoms, mother-child relationships, premorbid personalities, and therapeutic interventions. I believe that these factors form a continuity in which they interrelatedly determine the regressive state. Foremost among them, I stressed the importance of the mother-child relationship. PMID:1353128

  7. Data Mining within a Regression Framework

    NASA Astrophysics Data System (ADS)

    Berk, Richard A.

    Regression analysis can imply a far wider range of statistical procedures than often appreciated. In this chapter, a number of common Data Mining procedures are discussed within a regression framework. These include non-parametric smoothers, classification and regression trees, bagging, and random forests. In each case, the goal is to characterize one or more of the distributional features of a response conditional on a set of predictors.
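
    Two of the procedures named can be shown in a few lines with scikit-learn (synthetic data; the model settings are arbitrary illustrative choices). Both estimators characterize the conditional mean of the response given the predictors, which is Berk's framing of these methods as regression.

        import numpy as np
        from sklearn.ensemble import RandomForestRegressor
        from sklearn.tree import DecisionTreeRegressor

        rng = np.random.default_rng(3)
        X = rng.uniform(-2.0, 2.0, size=(400, 2))
        y = np.sin(X[:, 0]) + 0.5 * X[:, 1] ** 2 + rng.normal(scale=0.2, size=400)

        tree = DecisionTreeRegressor(max_depth=3).fit(X, y)         # one regression tree
        forest = RandomForestRegressor(n_estimators=200).fit(X, y)  # bagged, randomized trees

        x_new = np.array([[0.5, -1.0]])
        print(tree.predict(x_new), forest.predict(x_new))  # estimates of E[y | x_new]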

  8. LRGS: Linear Regression by Gibbs Sampling

    NASA Astrophysics Data System (ADS)

    Mantz, Adam B.

    2016-02-01

    LRGS (Linear Regression by Gibbs Sampling) implements a Gibbs sampler to solve the problem of multivariate linear regression with uncertainties in all measured quantities and intrinsic scatter. LRGS extends an algorithm by Kelly (2007) that used Gibbs sampling for performing linear regression in fairly general cases in two ways: generalizing the procedure for multiple response variables, and modeling the prior distribution of covariates using a Dirichlet process.
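
    To convey the Gibbs-sampling idea (though not LRGS itself, which additionally handles measurement errors, intrinsic scatter, and a Dirichlet-process covariate model), here is a stripped-down conjugate sampler for ordinary linear regression on synthetic data:

        import numpy as np

        rng = np.random.default_rng(4)
        n = 200
        x = rng.normal(size=n)
        X = np.column_stack([np.ones(n), x])
        y = X @ np.array([1.0, 2.0]) + rng.normal(scale=0.5, size=n)

        XtX = X.T @ X
        beta_hat = np.linalg.solve(XtX, X.T @ y)
        XtX_inv = np.linalg.inv(XtX)

        sig2 = 1.0
        draws = []
        for it in range(3000):
            # beta | sig2, y ~ Normal(beta_hat, sig2 * (X'X)^(-1))   (flat prior on beta)
            beta = rng.multivariate_normal(beta_hat, sig2 * XtX_inv)
            # sig2 | beta, y ~ Inverse-Gamma(n/2, RSS(beta)/2)       (Jeffreys-type prior)
            rss = np.sum((y - X @ beta) ** 2)
            sig2 = 1.0 / rng.gamma(shape=n / 2.0, scale=2.0 / rss)
            if it >= 500:                      # discard burn-in draws
                draws.append([beta[0], beta[1], np.sqrt(sig2)])
        print(np.mean(draws, axis=0))          # near the true (1.0, 2.0, 0.5)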

  9. Fitts’ Law in Early Postural Adjustments

    PubMed Central

    Bertucco, M.; Cesari, P.; Latash, M.L

    2012-01-01

    We tested a hypothesis that the classical relation between movement time and index of difficulty (ID) in quick pointing action (Fitts’ Law) reflects processes at the level of motor planning. Healthy subjects stood on a force platform and performed quick and accurate hand movements into targets of different size located at two distances. The movements were associated with early postural adjustments that are assumed to reflect motor planning processes. The short distance did not require trunk rotation, while the long distance did. As a result, movements over the long distance were associated with substantial Coriolis forces. Movement kinematics and contact forces and moments recorded by the platform were studied. Movement time scaled with ID for both movements. However, the data could not be fitted with a single regression: Movements over the long distance had a larger intercept corresponding to movement times about 140 ms longer than movements over the shorter distance. The magnitude of postural adjustments prior to movement initiation scaled with ID for both short and long distances. Our results provide strong support for the hypothesis that Fitts’ Law emerges at the level of motor planning, not at the level of corrections of ongoing movements. They show that, during natural movements, changes in movement distance may lead to changes in the relation between movement time and ID, for example when the contribution of different body segments to the movement varies and when the action of Coriolis force may require an additional correction of the movement trajectory. PMID:23211560

  10. Geodesic least squares regression on information manifolds

    SciTech Connect

    Verdoolaege, Geert

    2014-12-05

    We present a novel regression method targeted at situations with significant uncertainty on both the dependent and independent variables or with non-Gaussian distribution models. Unlike the classic regression model, the conditional distribution of the response variable suggested by the data need not be the same as the modeled distribution. Instead they are matched by minimizing the Rao geodesic distance between them. This yields a more flexible regression method that is less constrained by the assumptions imposed through the regression model. As an example, we demonstrate the improved resistance of our method against some flawed model assumptions and we apply this to scaling laws in magnetic confinement fusion.

  11. Quantile regression applied to spectral distance decay

    USGS Publications Warehouse

    Rocchini, D.; Cade, B.S.

    2008-01-01

    Remotely sensed imagery has long been recognized as a powerful support for characterizing and estimating biodiversity. Spectral distance among sites has proven to be a powerful approach for detecting species composition variability. Regression analysis of species similarity versus spectral distance allows us to quantitatively estimate the amount of turnover in species composition with respect to spectral and ecological variability. In classical regression analysis, the residual sum of squares is minimized for the mean of the dependent variable distribution. However, many ecological data sets are characterized by a high number of zeroes that add noise to the regression model. Quantile regressions can be used to evaluate trend in the upper quantiles rather than a mean trend across the whole distribution of the dependent variable. In this letter, we used ordinary least squares (OLS) and quantile regressions to estimate the decay of species similarity versus spectral distance. The achieved decay rates were statistically nonzero (p < 0.01), considering both OLS and quantile regressions. Nonetheless, the OLS regression estimate of the mean decay rate was only half the decay rate indicated by the upper quantiles. Moreover, the intercept value, representing the similarity reached when the spectral distance approaches zero, was very low compared with the intercepts of the upper quantiles, which detected high species similarity when habitats are more similar. In this letter, we demonstrated the power of using quantile regressions applied to spectral distance decay to reveal species diversity patterns otherwise lost or underestimated by OLS regression. © 2008 IEEE.
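
    The OLS-versus-upper-quantile contrast is easy to reproduce with statsmodels on synthetic zero-inflated similarity data (all numbers below are illustrative assumptions, not the letter's data):

        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        rng = np.random.default_rng(5)
        n = 300
        dist = rng.uniform(0.0, 10.0, size=n)                 # spectral distance
        sim = np.clip(0.9 - 0.08 * dist + rng.normal(0.0, 0.05, n), 0.0, 1.0)
        noisy = rng.random(n) < 0.3                           # zero-inflated noise floor
        sim[noisy] = rng.uniform(0.0, 0.1, size=noisy.sum())

        df = pd.DataFrame({"sim": sim, "dist": dist})
        ols_slope = smf.ols("sim ~ dist", data=df).fit().params["dist"]
        q90_slope = smf.quantreg("sim ~ dist", data=df).fit(q=0.9).params["dist"]
        print(ols_slope, q90_slope)   # the upper-quantile decay is markedly steeper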

  12. Image segmentation via piecewise constant regression

    NASA Astrophysics Data System (ADS)

    Acton, Scott T.; Bovik, Alan C.

    1994-09-01

    We introduce a novel unsupervised image segmentation technique that is based on piecewise constant (PICO) regression. Given an input image, a PICO output image for a specified feature size (scale) is computed via nonlinear regression. The regression effectively provides the constant region segmentation of the input image that has a minimum deviation from the input image. PICO regression-based segmentation avoids the problems of region merging, poor localization, region boundary ambiguity, and region fragmentation. Additionally, our segmentation method is particularly well-suited for corrupted (noisy) input data. An application to segmentation and classification of remotely sensed imagery is provided.

  13. Hybrid fuzzy regression with trapezoidal fuzzy data

    NASA Astrophysics Data System (ADS)

    Razzaghnia, T.; Danesh, S.; Maleki, A.

    2011-12-01

    This research deals with a method for hybrid fuzzy least-squares regression. The extension of symmetric triangular fuzzy coefficients to asymmetric trapezoidal fuzzy coefficients is considered an effective measure for removing unnecessary fuzziness from the linear fuzzy model. First, a trapezoidal fuzzy variable is applied to derive a bivariate regression model. Next, normal equations are formulated to solve the four parts of the hybrid regression coefficients, and the model is extended to multiple regression analysis. Finally, the method is compared with Y.-H. O. Chang's model.

  14. Executive functions, depressive symptoms, and college adjustment in women.

    PubMed

    Wingo, Jana; Kalkut, Erica; Tuminello, Elizabeth; Asconape, Josefina; Han, S Duke

    2013-01-01

    Many students have difficulty adjusting to college, and the contributions of academic and relational factors have been considered in previous research. In particular, depression commonly emerges among college women at this time and could be related to poor adjustment to college. This study examined the relationship between executive functions, depressive symptoms, and college adjustment in college women. Seventy-seven female participants from a midsize urban university completed the Wechsler Abbreviated Scale of Intelligence, College Adjustment Scale, Beck Depression Inventory-Second Edition, Behavior Rating Inventory of Executive Function-Adult Version, and four subtests from the Delis-Kaplan Executive Function System: the Trail-Making Test, Design Fluency Test, Verbal Fluency Test, and Color-Word Interference Test. After controlling for IQ score, hierarchical regression analyses showed that subjective and objective measures of executive functioning and depressive symptoms were significantly related to college adjustment problems in academic, relational, and psychological areas. The current study provides evidence for a relationship between cognitive abilities, psychiatric symptoms, and college adjustment. PMID:23397999

  15. Exploring Mexican American adolescent romantic relationship profiles and adjustment.

    PubMed

    Moosmann, Danyel A V; Roosa, Mark W

    2015-08-01

    Although Mexican Americans are the largest ethnic minority group in the nation, knowledge is limited regarding this population's adolescent romantic relationships. This study explored whether 12th grade Mexican Americans' (N = 218; 54% female) romantic relationship characteristics, cultural values, and gender created unique latent classes and if so, whether they were linked to adjustment. Latent class analyses suggested three profiles including, relatively speaking, higher, satisfactory, and lower quality romantic relationships. Regression analyses indicated these profiles had distinct associations with adjustment. Specifically, adolescents with higher and satisfactory quality romantic relationships reported greater future family expectations, higher self-esteem, and fewer externalizing symptoms than those with lower quality romantic relationships. Similarly, adolescents with higher quality romantic relationships reported greater academic self-efficacy and fewer sexual partners than those with lower quality romantic relationships. Overall, results suggested higher quality romantic relationships were most optimal for adjustment. Future research directions and implications are discussed. PMID:26141198

  16. Integrating Risk Adjustment and Enrollee Premiums in Health Plan Payment

    PubMed Central

    McGuire, Thomas G.; Glazer, Jacob; Newhouse, Joseph P.; Normand, Sharon-Lise; Shi, Julie; Sinaiko, Anna D.; Zuvekas, Samuel

    2013-01-01

    In two important health policy contexts – private plans in Medicare and the new state-run “Exchanges” created as part of the Affordable Care Act (ACA) – plan payments come from two sources: risk-adjusted payments from a Regulator and premiums charged to individual enrollees. This paper derives principles for integrating risk-adjusted payments and premium policy in individual health insurance markets based on fitting total plan payments to health plan costs per person as closely as possible. A least squares regression including both health status and variables used in premiums reveals the weights a Regulator should put on risk adjusters when markets determine premiums. We apply the methods to an Exchange-eligible population drawn from the Medical Expenditure Panel Survey (MEPS). PMID:24308878

  17. Integrating risk adjustment and enrollee premiums in health plan payment.

    PubMed

    McGuire, Thomas G; Glazer, Jacob; Newhouse, Joseph P; Normand, Sharon-Lise; Shi, Julie; Sinaiko, Anna D; Zuvekas, Samuel H

    2013-12-01

    In two important health policy contexts - private plans in Medicare and the new state-run "Exchanges" created as part of the Affordable Care Act (ACA) - plan payments come from two sources: risk-adjusted payments from a Regulator and premiums charged to individual enrollees. This paper derives principles for integrating risk-adjusted payments and premium policy in individual health insurance markets based on fitting total plan payments to health plan costs per person as closely as possible. A least squares regression including both health status and variables used in premiums reveals the weights a Regulator should put on risk adjusters when markets determine premiums. We apply the methods to an Exchange-eligible population drawn from the Medical Expenditure Panel Survey (MEPS). PMID:24308878

  18. Detecting Randomness: the Sensitivity of Statistical Tests to Deviations from a Constant Rate Poisson Process

    NASA Astrophysics Data System (ADS)

    Michael, A. J.

    2012-12-01

    Detecting trends in the rate of sporadic events is a problem for earthquakes and other natural hazards such as storms, floods, or landslides. I use synthetic events to judge the tests used to address this problem in seismology and consider their application to other hazards. Recent papers have analyzed the record of magnitude ≥7 earthquakes since 1900 and concluded that the events are consistent with a constant rate Poisson process plus localized aftershocks (Michael, GRL, 2011; Shearer and Stark, PNAS, 2012; Daub et al., GRL, 2012; Parsons and Geist, BSSA, 2012). Each paper removed localized aftershocks and then used a different suite of statistical tests to test the null hypothesis that the remaining data could be drawn from a constant rate Poisson process. The methods include KS tests between event times or inter-event times and predictions from a Poisson process, the autocorrelation function on inter-event times, and two tests on the number of events in time bins: the Poisson dispersion test and the multinomial chi-square test. The range of statistical tests gives us confidence in the conclusions, which are robust with respect to the choice of tests and parameters. But which tests are optimal and how sensitive are they to deviations from the null hypothesis? The latter point was raised by Dimer (arXiv, 2012), who suggested that the lack of consideration of Type 2 errors prevents these papers from being able to place limits on the degree of clustering and rate changes that could be present in the global seismogenic process. I produce synthetic sets of events that deviate from a constant rate Poisson process using a variety of statistical simulation methods including Gamma distributed inter-event times and random walks. The sets of synthetic events are examined with the statistical tests described above. Preliminary results suggest that with 100 to 1000 events, a data set that does not reject the Poisson null hypothesis could have a variability that is 30% to
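
    One of the tests named above, the KS test of inter-event times against an exponential distribution, can be sketched quickly (synthetic data; the gamma shape parameter for the clustered alternative is an arbitrary illustrative choice):

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(6)

        # Null case: constant-rate Poisson process, exponential inter-event times
        gaps_poisson = rng.exponential(scale=1.0, size=500)
        # Clustered case: bursty gamma inter-event times with the same mean rate
        gaps_cluster = rng.gamma(shape=0.3, scale=1.0 / 0.3, size=500)

        for gaps in (gaps_poisson, gaps_cluster):
            # KS test against an exponential with the rate fitted from the data
            # (strictly, estimating the scale calls for a Lilliefors-type correction)
            stat, p = stats.kstest(gaps, "expon", args=(0.0, gaps.mean()))
            print(round(stat, 3), p)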

  19. Regression calibration method for correcting measurement-error bias in nutritional epidemiology.

    PubMed

    Spiegelman, D; McDermott, A; Rosner, B

    1997-04-01

    Regression calibration is a statistical method for adjusting point and interval estimates of effect obtained from regression models commonly used in epidemiology for bias due to measurement error in assessing nutrients or other variables. Previous work developed regression calibration for use in estimating odds ratios from logistic regression. We extend this here to estimating incidence rate ratios from Cox proportional hazards models and regression slopes from linear-regression models. Regression calibration is appropriate when a gold standard is available in a validation study and a linear measurement error with constant variance applies or when replicate measurements are available in a reliability study and linear random within-person error can be assumed. In this paper, the method is illustrated by correction of rate ratios describing the relations between the incidence of breast cancer and dietary intakes of vitamin A, alcohol, and total energy in the Nurses' Health Study. An example using linear regression is based on estimation of the relation between ultradistal radius bone density and dietary intakes of caffeine, calcium, and total energy in the Massachusetts Women's Health Study. Software implementing these methods uses SAS macros. PMID:9094918
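
    The core attenuation correction in the validation-study setting can be sketched as follows (synthetic data; the paper's full method also propagates the calibration uncertainty into the interval estimates, which this sketch omits):

        import numpy as np

        rng = np.random.default_rng(7)
        n = 1000
        x = rng.normal(size=n)                       # true exposure (gold standard)
        w = x + rng.normal(scale=0.8, size=n)        # error-prone surrogate measurement
        y = 1.0 + 0.5 * x + rng.normal(scale=0.5, size=n)

        naive_slope = np.polyfit(w, y, 1)[0]         # attenuated by measurement error
        # Calibration slope lambda from a validation subsample (first 200 subjects,
        # where the gold standard is assumed to be observed)
        lam = np.polyfit(w[:200], x[:200], 1)[0]
        print(naive_slope, naive_slope / lam)        # corrected estimate near 0.5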

  20. Deriving the Regression Equation without Using Calculus

    ERIC Educational Resources Information Center

    Gordon, Sheldon P.; Gordon, Florence S.

    2004-01-01

    Probably the one "new" mathematical topic that is most responsible for modernizing courses in college algebra and precalculus over the last few years is the idea of fitting a function to a set of data in the sense of a least squares fit. Whether it be simple linear regression or nonlinear regression, this topic opens the door to applying the…
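
    The closed-form result such a derivation arrives at is the familiar one, computable without any calculus machinery (the data below are invented for illustration):

        import numpy as np

        x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
        y = np.array([2.1, 3.9, 6.2, 8.1, 9.8])

        xbar, ybar = x.mean(), y.mean()
        b = np.sum((x - xbar) * (y - ybar)) / np.sum((x - xbar) ** 2)   # slope
        a = ybar - b * xbar                                             # intercept
        print(a, b)   # agrees with np.polyfit(x, y, 1) to round-off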

  1. Regression Analysis and the Sociological Imagination

    ERIC Educational Resources Information Center

    De Maio, Fernando

    2014-01-01

    Regression analysis is an important aspect of most introductory statistics courses in sociology but is often presented in contexts divorced from the central concerns that bring students into the discipline. Consequently, we present five lesson ideas that emerge from a regression analysis of income inequality and mortality in the USA and Canada.

  2. Illustration of Regression towards the Means

    ERIC Educational Resources Information Center

    Govindaraju, K.; Haslett, S. J.

    2008-01-01

    This article presents a procedure for generating a sequence of data sets which will yield exactly the same fitted simple linear regression equation y = a + bx. Unless rescaled, the generated data sets will have progressively smaller variability for the two variables, and the associated response and covariate will "regress" towards their…
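
    One way to construct such a sequence (a sketch consistent with the description above, though not necessarily the article's exact procedure) is to shrink the residuals about the fitted line; since residuals are orthogonal to the design, the OLS fit is unchanged while the variability drops:

        import numpy as np

        rng = np.random.default_rng(8)
        x = rng.uniform(0.0, 10.0, size=30)
        y = 2.0 + 1.5 * x + rng.normal(scale=3.0, size=30)

        slope, intercept = np.polyfit(x, y, 1)
        yhat = intercept + slope * x
        for c in (1.0, 0.5, 0.1):
            # Shrinking residuals toward the fitted line leaves the OLS fit
            # unchanged while variability around the line shrinks
            y_c = yhat + c * (y - yhat)
            print(c, np.polyfit(x, y_c, 1))   # same (slope, intercept) every time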

  3. Stepwise versus Hierarchical Regression: Pros and Cons

    ERIC Educational Resources Information Center

    Lewis, Mitzi

    2007-01-01

    Multiple regression is commonly used in social and behavioral data analysis. In multiple regression contexts, researchers are very often interested in determining the "best" predictors in the analysis. This focus may stem from a need to identify those predictors that are supportive of theory. Alternatively, the researcher may simply be interested…

  4. Cross-Validation, Shrinkage, and Multiple Regression.

    ERIC Educational Resources Information Center

    Hynes, Kevin

    One aspect of multiple regression--the shrinkage of the multiple correlation coefficient on cross-validation is reviewed. The paper consists of four sections. In section one, the distinction between a fixed and a random multiple regression model is made explicit. In section two, the cross-validation paradigm and an explanation for the occurrence…

  5. Principles of Quantile Regression and an Application

    ERIC Educational Resources Information Center

    Chen, Fang; Chalhoub-Deville, Micheline

    2014-01-01

    Newer statistical procedures are typically introduced to help address the limitations of those already in practice or to deal with emerging research needs. Quantile regression (QR) is introduced in this paper as a relatively new methodology, which is intended to overcome some of the limitations of least squares mean regression (LMR). QR is more…

  6. Regression Analysis: Legal Applications in Institutional Research

    ERIC Educational Resources Information Center

    Frizell, Julie A.; Shippen, Benjamin S., Jr.; Luna, Andrew L.

    2008-01-01

    This article reviews multiple regression analysis, describes how its results should be interpreted, and instructs institutional researchers on how to conduct such analyses using an example focused on faculty pay equity between men and women. The use of multiple regression analysis will be presented as a method with which to compare salaries of…

  7. Dealing with Outliers: Robust, Resistant Regression

    ERIC Educational Resources Information Center

    Glasser, Leslie

    2007-01-01

    Least-squares linear regression is the best of statistics and it is the worst of statistics. The reasons for this paradoxical claim, arising from possible inapplicability of the method and the excessive influence of "outliers", are discussed and substitute regression methods based on median selection, which is both robust and resistant, are…
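
    A method of the median-based kind alluded to above is Theil-Sen regression, available in scipy; the sketch below (synthetic data with planted outliers) shows its resistance where least squares fails:

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(9)
        x = np.linspace(0.0, 10.0, 40)
        y = 1.0 + 2.0 * x + rng.normal(scale=0.5, size=40)
        y[-3:] += 25.0                          # three gross outliers

        ls_slope = np.polyfit(x, y, 1)[0]       # least squares, pulled up by the outliers
        ts_slope = stats.theilslopes(y, x)[0]   # median of pairwise slopes: robust, resistant
        print(ls_slope, ts_slope)               # Theil-Sen stays near the true slope of 2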

  8. A Practical Guide to Regression Discontinuity

    ERIC Educational Resources Information Center

    Jacob, Robin; Zhu, Pei; Somers, Marie-Andrée; Bloom, Howard

    2012-01-01

    Regression discontinuity (RD) analysis is a rigorous nonexperimental approach that can be used to estimate program impacts in situations in which candidates are selected for treatment based on whether their value for a numeric rating exceeds a designated threshold or cut-point. Over the last two decades, the regression discontinuity approach has…
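
    A minimal sharp-RD sketch conveys the estimation step (synthetic data; the bandwidth of 0.5 and all coefficients are arbitrary choices, not from the guide):

        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        rng = np.random.default_rng(12)
        n = 1000
        r = rng.uniform(-1.0, 1.0, size=n)           # numeric rating, cutoff at 0
        t = (r >= 0).astype(int)                     # sharp assignment rule
        y = 2.0 + 1.5 * r + 0.8 * t + rng.normal(scale=0.5, size=n)

        df = pd.DataFrame({"y": y, "r": r, "t": t})
        # Local linear fit with separate slopes on each side of the cutoff
        local = df[np.abs(df["r"]) < 0.5]
        fit = smf.ols("y ~ t + r + t:r", data=local).fit()
        print(fit.params["t"])                       # impact estimate, near the true 0.8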

  9. Sulphasalazine and regression of rheumatoid nodules.

    PubMed

    Englert, H J; Hughes, G R; Walport, M J

    1987-03-01

    The regression of small rheumatoid nodules was noted in four patients after starting sulphasalazine therapy. This coincided with an improvement in synovitis and also falls in erythrocyte sedimentation rate (ESR) and C reactive protein (CRP). The relation between the nodule regression and the sulphasalazine therapy is discussed. PMID:2883940

  10. A Simulation Investigation of Principal Component Regression.

    ERIC Educational Resources Information Center

    Allen, David E.

    Regression analysis is one of the more common analytic tools used by researchers. However, multicollinearity between the predictor variables can cause problems in using the results of regression analyses. Problems associated with multicollinearity include entanglement of relative influences of variables due to reduced precision of estimation,…
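
    The principal component regression remedy named in the title can be sketched with scikit-learn (synthetic collinear predictors; the component count is chosen by hand here rather than by any selection rule):

        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.linear_model import LinearRegression
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler

        rng = np.random.default_rng(11)
        n = 200
        z = rng.normal(size=n)
        # Two nearly collinear predictors plus one independent predictor
        X = np.column_stack([z, z + 0.05 * rng.normal(size=n), rng.normal(size=n)])
        y = 1.0 + 2.0 * z + 0.5 * X[:, 2] + rng.normal(scale=0.3, size=n)

        pcr = make_pipeline(StandardScaler(), PCA(n_components=2), LinearRegression())
        pcr.fit(X, y)
        print(pcr.predict(X[:3]))   # regression on leading components avoids the
                                    # unstable coefficients caused by collinearity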

  11. Three-Dimensional Modeling in Linear Regression.

    ERIC Educational Resources Information Center

    Herman, James D.

    Linear regression examines the relationship between one or more independent (predictor) variables and a dependent variable. By using a particular formula, regression determines the weights needed to minimize the error term for a given set of predictors. With one predictor variable, the relationship between the predictor and the dependent variable…

  12. Higher order asymptotics for negative binomial regression inferences from RNA-sequencing data.

    PubMed

    Di, Yanming; Emerson, Sarah C; Schafer, Daniel W; Kimbrel, Jeffrey A; Chang, Jeff H

    2013-03-01

    RNA sequencing (RNA-Seq) is the current method of choice for characterizing transcriptomes and quantifying gene expression changes. This next generation sequencing-based method provides unprecedented depth and resolution. The negative binomial (NB) probability distribution has been shown to be a useful model for frequencies of mapped RNA-Seq reads and consequently provides a basis for statistical analysis of gene expression. Negative binomial exact tests are available for two-group comparisons but do not extend to negative binomial regression analysis, which is important for examining gene expression as a function of explanatory variables and for adjusted group comparisons accounting for other factors. We address the adequacy of available large-sample tests for the small sample sizes typically available from RNA-Seq studies and consider a higher-order asymptotic (HOA) adjustment to likelihood ratio tests. We demonstrate that 1) the HOA-adjusted likelihood ratio test is practically indistinguishable from the exact test in situations where the exact test is available, 2) the type I error of the HOA test matches the nominal specification in regression settings we examined via simulation, and 3) the power of the likelihood ratio test does not appear to be affected by the HOA adjustment. This work helps clarify the accuracy of the unadjusted likelihood ratio test and the degree of improvement available with the HOA adjustment. Furthermore, the HOA test may be preferable even when the exact test is available because it does not require ad hoc library size adjustments. PMID:23502340

  13. To adjust or not to adjust for baseline when analyzing repeated binary responses? The case of complete data when treatment comparison at study end is of interest.

    PubMed

    Jiang, Honghua; Kulkarni, Pandurang M; Mallinckrodt, Craig H; Shurzinske, Linda; Molenberghs, Geert; Lipkovich, Ilya

    2015-01-01

    The benefits of adjusting for baseline covariates are not as straightforward with repeated binary responses as with continuous response variables. Therefore, in this study, we compared different methods for analyzing repeated binary data through simulations when the outcome at the study endpoint is of interest. Methods compared included chi-square, Fisher's exact test, covariate adjusted/unadjusted logistic regression (Adj.logit/Unadj.logit), covariate adjusted/unadjusted generalized estimating equations (Adj.GEE/Unadj.GEE), covariate adjusted/unadjusted generalized linear mixed model (Adj.GLMM/Unadj.GLMM). All these methods preserved the type I error close to the nominal level. Covariate adjusted methods improved power compared with the unadjusted methods because of the increased treatment effect estimates, especially when the correlation between the baseline and outcome was strong, even though there was an apparent increase in standard errors. Results of the Chi-squared test were identical to those for the unadjusted logistic regression. Fisher's exact test was the most conservative test regarding the type I error rate and also with the lowest power. Without missing data, there was no gain in using a repeated measures approach over a simple logistic regression at the final time point. Analysis of results from five phase III diabetes trials of the same compound was consistent with the simulation findings. Therefore, covariate adjusted analysis is recommended for repeated binary data when the study endpoint is of interest. PMID:25866149

  14. Adjustable Induction-Heating Coil

    NASA Technical Reports Server (NTRS)

    Ellis, Rod; Bartolotta, Paul

    1990-01-01

    Improved design for induction-heating work coil facilitates optimization of heating in different metal specimens. Three segments adjusted independently to obtain desired distribution of temperature. Reduces time needed to achieve required temperature profiles.

  15. Understanding the changes in ductility and Poisson's ratio of metallic glasses during annealing from microscopic dynamics

    SciTech Connect

    Wang, Z.; Ngai, K. L.; Wang, W. H.

    2015-07-21

    In the paper K. L. Ngai et al. [J. Chem. Phys. 140, 044511 (2014)], the empirical correlation of ductility with the Poisson's ratio, ν_Poisson, found in metallic glasses was theoretically explained by microscopic dynamic processes which link, on the one hand, ductility, and on the other hand the Poisson's ratio. Specifically, the dynamic processes are the primitive relaxation in the Coupling Model, which is the precursor of the Johari-Goldstein β-relaxation, and the caged-atom dynamics characterized by the effective Debye-Waller factor f_0 or, equivalently, the nearly constant loss (NCL) in susceptibility. All these processes and the parameters characterizing them are accessible experimentally except f_0 or the NCL of caged atoms; thus, so far, the experimental verification of the explanation of the correlation between ductility and Poisson's ratio is incomplete. In the experimental part of this paper, we report dynamic mechanical measurements of the NCL of the metallic glass La60Ni15Al25 as-cast, and the changes induced by annealing at temperature below T_g. The observed monotonic decrease of the NCL with aging time, reflecting the corresponding increase of f_0, correlates with the decrease of ν_Poisson. This is an important observation because such measurements, not made before, provide the missing link in confirming by experiment the explanation of the correlation of ductility with ν_Poisson. On aging the metallic glass, also observed in the isochronal loss spectra is the shift of the β-relaxation to higher temperatures and reduction of the relaxation strength. These concomitant changes of the β-relaxation and NCL are the root cause of embrittlement by aging the metallic glass. The NCL of caged atoms is terminated by the onset of the primitive relaxation in the Coupling Model, which is generally supported by experiments. From this relation, the monotonic decrease of the NCL with aging time is caused by the slowing down

  16. A Fast Poisson Solver with Periodic Boundary Conditions for GPU Clusters in Various Configurations

    NASA Astrophysics Data System (ADS)

    Rattermann, Dale Nicholas

    Fast Poisson solvers using the Fast Fourier Transform on uniform grids are especially suited for parallel implementation, making them appropriate for portability on graphical processing unit (GPU) devices. The goal of the following work was to implement, test, and evaluate a fast Poisson solver for periodic boundary conditions for use on a variety of GPU configurations. The solver used in this research was FLASH, an immersed-boundary-based method, which is well suited for complex, time-dependent geometries, has robust adaptive mesh refinement/de-refinement capabilities to capture evolving flow structures, and has been successfully implemented on conventional, parallel supercomputers. However, these solvers are still computationally costly to employ, and the total solver time is dominated by the solution of the pressure Poisson equation using state-of-the-art multigrid methods. FLASH improves the performance of its multigrid solvers by integrating a parallel FFT solver on a uniform grid during a coarse level. This hybrid solver could then be theoretically improved by replacing the highly-parallelizable FFT solver with one that utilizes GPUs, and, thus, was the motivation for my research. In the present work, the CPU-utilizing parallel FFT solver (PFFT) used in the base version of FLASH for solving the Poisson equation on uniform grids has been modified to enable parallel execution on CUDA-enabled GPU devices. New algorithms have been implemented to replace the Poisson solver that decompose the computational domain and send each new block to a GPU for parallel computation. One-dimensional (1-D) decomposition of the computational domain minimizes the amount of network traffic involved in this bandwidth-intensive computation by limiting the amount of all-to-all communication required between processes. Advanced techniques have been incorporated and implemented in a GPU-centric code design, while allowing end users the flexibility of parameter control at runtime in
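
    The FFT kernel of such a solver is compact enough to sketch on a single device with numpy (no domain decomposition, GPUs, or FLASH specifics; grid size and right-hand side are illustrative):

        import numpy as np

        n = 64
        L = 2.0 * np.pi
        x = np.arange(n) * (L / n)
        X, Y = np.meshgrid(x, x, indexing="ij")
        f = -2.0 * np.sin(X) * np.sin(Y)             # so the exact u = sin(x) sin(y)

        k = 2.0 * np.pi * np.fft.fftfreq(n, d=L / n) # integer wavenumbers for L = 2*pi
        KX, KY = np.meshgrid(k, k, indexing="ij")
        k2 = KX**2 + KY**2
        k2[0, 0] = 1.0                               # avoid dividing by zero below

        u_hat = np.fft.fft2(f) / (-k2)
        u_hat[0, 0] = 0.0                            # pin the free constant: zero-mean u
        u = np.real(np.fft.ifft2(u_hat))
        print(np.abs(u - np.sin(X) * np.sin(Y)).max())   # ~1e-15 for this smooth f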

  17. The mechanical influences of the graded distribution in the cross-sectional shape, the stiffness and Poisson's ratio of palm branches.

    PubMed

    Liu, Wangyu; Wang, Ningling; Jiang, Xiaoyong; Peng, Yujian

    2016-07-01

    The branching system plays an important role in maintaining the survival of palm trees. Due to the nature of monocots, no additional vascular bundles can be added in the palm tree tissue as it ages. Therefore, the changing of the cross-sectional area in the palm branch creates a graded distribution in the mechanical properties of the tissue. In the present work, this graded distribution in the tissue mechanical properties from sheath to petiole were studied with a multi-scale modeling approach. Then, the entire palm branch was reconstructed and analyzed using finite element methods. The variation of the elastic modulus can lower the level of mechanical stress in the sheath and also allow the branch to have smaller values of pressure on the other branches. Under impact loading, the enhanced frictional dissipation at the surfaces of adjacent branches benefits from the large Poisson's ratio of the sheath tissue. These findings can help to link the wind resistance ability of palm trees to their graded materials distribution in the branching system. PMID:26807774

  18. Efficient Levenberg-Marquardt minimization of the maximum likelihood estimator for Poisson deviates

    SciTech Connect

    Laurence, T; Chromy, B

    2009-11-10

    Histograms of counted events are Poisson distributed, but are typically fitted without justification using nonlinear least squares fitting. The more appropriate maximum likelihood estimator (MLE) for Poisson distributed data is seldom used. We extend the use of the Levenberg-Marquardt algorithm commonly used for nonlinear least squares minimization for use with the MLE for Poisson distributed data. In so doing, we remove any excuse for not using this more appropriate MLE. We demonstrate the use of the algorithm and the superior performance of the MLE using simulations and experiments in the context of fluorescence lifetime imaging. Scientists commonly form histograms of counted events from their data, and extract parameters by fitting to a specified model. Assuming that the probability of occurrence for each bin is small, event counts in the histogram bins will be distributed according to the Poisson distribution. We develop here an efficient algorithm for fitting event counting histograms using the maximum likelihood estimator (MLE) for Poisson distributed data, rather than the non-linear least squares measure. This algorithm is a simple extension of the common Levenberg-Marquardt (L-M) algorithm, is simple to implement, quick and robust. Fitting using a least squares measure is most common, but it is the maximum likelihood estimator only for Gaussian-distributed data. Non-linear least squares methods may be applied to event counting histograms in cases where the number of events is very large, so that the Poisson distribution is well approximated by a Gaussian. However, it is not easy to satisfy this criterion in practice - which requires a large number of events. It has been well-known for years that least squares procedures lead to biased results when applied to Poisson-distributed data; a recent paper providing extensive characterization of these biases in exponential fitting is given. The more appropriate measure based on the maximum likelihood estimator (MLE
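
    The Poisson MLE objective itself is easy to demonstrate with a general-purpose minimizer (rather than the authors' modified Levenberg-Marquardt); the decay-plus-background model and all values below are illustrative assumptions in the spirit of the fluorescence-lifetime example:

        import numpy as np
        from scipy.optimize import minimize

        rng = np.random.default_rng(10)
        t = np.linspace(0.0, 10.0, 100)             # histogram bin centers

        def model(p, t):
            amp, tau, bg = p
            return amp * np.exp(-t / tau) + bg      # decay-plus-background model

        counts = rng.poisson(model((200.0, 2.0, 5.0), t))   # Poisson bin counts

        def nll(p):
            mu = model(p, t)
            if np.any(mu <= 0.0):
                return np.inf
            # Poisson negative log-likelihood, dropping the constant log(k!) term
            return np.sum(mu - counts * np.log(mu))

        fit = minimize(nll, x0=(100.0, 1.0, 1.0), method="Nelder-Mead")
        print(fit.x)   # near (200, 2, 5), avoiding the bias of Gaussian least squares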

  19. A regression tree approach to identifying subgroups with differential treatment effects.

    PubMed

    Loh, Wei-Yin; He, Xu; Man, Michael

    2015-05-20

    In the fight against hard-to-treat diseases such as cancer, it is often difficult to discover new treatments that benefit all subjects. For regulatory agency approval, it is more practical to identify subgroups of subjects for whom the treatment has an enhanced effect. Regression trees are natural for this task because they partition the data space. We briefly review existing regression tree algorithms. Then, we introduce three new ones that are practically free of selection bias and are applicable to data from randomized trials with two or more treatments, censored response variables, and missing values in the predictor variables. The algorithms extend the generalized unbiased interaction detection and estimation (GUIDE) approach by using three key ideas: (i) treatment as a linear predictor, (ii) chi-squared tests to detect residual patterns and lack of fit, and (iii) proportional hazards modeling via Poisson regression. Importance scores with thresholds for identifying influential variables are obtained as by-products. A bootstrap technique is used to construct confidence intervals for the treatment effects in each node. The methods are compared using real and simulated data. PMID:25656439

  20. A regression tree approach to identifying subgroups with differential treatment effects

    PubMed Central

    Loh, Wei-Yin; He, Xu; Man, Michael

    2015-01-01

    In the fight against hard-to-treat diseases such as cancer, it is often difficult to discover new treatments that benefit all subjects. For regulatory agency approval, it is more practical to identify subgroups of subjects for whom the treatment has an enhanced effect. Regression trees are natural for this task because they partition the data space. We briefly review existing regression tree algorithms. Then we introduce three new ones that are practically free of selection bias and are applicable to data from randomized trials with two or more treatments, censored response variables, and missing values in the predictor variables. The algorithms extend the GUIDE approach by using three key ideas: (i) treatment as a linear predictor, (ii) chi-squared tests to detect residual patterns and lack of fit, and (iii) proportional hazards modeling via Poisson regression. Importance scores with thresholds for identifying influential variables are obtained as by-products. A bootstrap technique is used to construct confidence intervals for the treatment effects in each node. The methods are compared using real and simulated data. PMID:25656439