Multilevel Models for Binary Data
ERIC Educational Resources Information Center
Powers, Daniel A.
2012-01-01
The methods and models for categorical data analysis cover considerable ground, ranging from regression-type models for binary, binomial, and count data to ordered and unordered polytomous variables, as well as regression models that mix qualitative and continuous data. This article focuses on methods for binary or binomial data, which are…
Flexible link functions in nonparametric binary regression with Gaussian process priors.
Li, Dan; Wang, Xia; Lin, Lizhen; Dey, Dipak K
2016-09-01
In many scientific fields, it is a common practice to collect a sequence of 0-1 binary responses from a subject across time, space, or a collection of covariates. Researchers are interested in finding out how the expected binary outcome is related to covariates, and aim at better prediction in the future 0-1 outcomes. Gaussian processes have been widely used to model nonlinear systems; in particular to model the latent structure in a binary regression model allowing nonlinear functional relationship between covariates and the expectation of binary outcomes. A critical issue in modeling binary response data is the appropriate choice of link functions. Commonly adopted link functions such as probit or logit links have fixed skewness and lack the flexibility to allow the data to determine the degree of the skewness. To address this limitation, we propose a flexible binary regression model which combines a generalized extreme value link function with a Gaussian process prior on the latent structure. Bayesian computation is employed in model estimation. Posterior consistency of the resulting posterior distribution is demonstrated. The flexibility and gains of the proposed model are illustrated through detailed simulation studies and two real data examples. Empirical results show that the proposed model outperforms a set of alternative models, which only have either a Gaussian process prior on the latent regression function or a Dirichlet prior on the link function. © 2015, The International Biometric Society.
Binary logistic regression-Instrument for assessing museum indoor air impact on exhibits.
Bucur, Elena; Danet, Andrei Florin; Lehr, Carol Blaziu; Lehr, Elena; Nita-Lazar, Mihai
2017-04-01
This paper presents a new way to assess the environmental impact on historical artifacts using binary logistic regression. The prediction of the impact on the exhibits under certain pollution scenarios (environmental impact) was calculated by a mathematical model based on binary logistic regression; it identifies those environmental parameters, out of a multitude of possible parameters, that have a significant impact on the exhibits and ranks them according to the severity of their effect. Air quality (NO₂, SO₂, O₃ and PM₂.₅) and microclimate (temperature, humidity) monitoring data from a case study conducted within exhibition and storage spaces of the Romanian National Aviation Museum Bucharest were used for developing and validating the binary logistic regression method and the mathematical model. The logistic regression analysis was applied to 794 data combinations (715 to develop the model and 79 to validate it) using the Statistical Package for the Social Sciences (SPSS 20.0). The results demonstrated that, of the six parameters taken into consideration, four have a significant effect upon the exhibits, in the following order: O₃ > PM₂.₅ > NO₂ > humidity, followed at a considerable distance by the effects of SO₂ and temperature. The mathematical model developed in this study correctly predicted 95.1% of the cumulated effect of the environmental parameters upon the exhibits. Moreover, this model could also be used in the decision-making process regarding the preventive preservation measures that should be implemented within the exhibition space and the best measures for pollution reduction.
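As an illustration of the kind of analysis described above, the following is a minimal Python (statsmodels) sketch of a binary logistic regression that ranks environmental parameters by their effect on a binary impact outcome. All data, column names, and coefficients are simulated placeholders, not the museum study's data.

```python
# Minimal sketch of a binary logistic regression ranking environmental parameters,
# using simulated stand-ins for the monitoring data described above.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 794
monitoring = pd.DataFrame({
    "O3": rng.gamma(2.0, 15.0, n),        # ug/m3, simulated placeholders
    "PM25": rng.gamma(2.0, 10.0, n),
    "NO2": rng.gamma(2.0, 12.0, n),
    "SO2": rng.gamma(2.0, 5.0, n),
    "humidity": rng.normal(50, 10, n),    # %
    "temperature": rng.normal(21, 3, n),  # deg C
})
# Simulated binary "impact on exhibits" outcome, for illustration only.
eta = (-6 + 0.06 * monitoring["O3"] + 0.05 * monitoring["PM25"]
       + 0.03 * monitoring["NO2"] + 0.02 * monitoring["humidity"])
monitoring["impact"] = rng.binomial(1, 1 / (1 + np.exp(-eta)))

X = sm.add_constant(monitoring.drop(columns="impact"))
fit = sm.Logit(monitoring["impact"], X).fit(disp=0)
print(fit.summary())                      # Wald tests rank parameter importance
print("correctly classified:",
      np.mean((fit.predict(X) > 0.5) == monitoring["impact"]))
```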
Preserving Institutional Privacy in Distributed binary Logistic Regression.
Wu, Yuan; Jiang, Xiaoqian; Ohno-Machado, Lucila
2012-01-01
Privacy is becoming a major concern when sharing biomedical data across institutions. Although methods for protecting the privacy of individual patients have been proposed, it is not clear how to protect institutional privacy, which is often a critical concern of data custodians. Building upon our previous work, Grid Binary LOgistic REgression (GLORE), we developed an Institutional Privacy-preserving Distributed binary Logistic Regression model (IPDLR) that considers both individual and institutional privacy when building a logistic regression model in a distributed manner. We tested our method using both simulated and clinical data, showing how it is possible to protect the privacy of individuals and of institutions using a distributed strategy.
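The GLORE/IPDLR algorithms themselves are not reproduced here, but the core idea of fitting one logistic regression across institutions by exchanging only aggregate statistics can be sketched as a distributed Newton-Raphson loop. The data and site structure below are simulated assumptions.

```python
# Sketch of distributed Newton-Raphson for logistic regression: each site shares
# only its local gradient and Hessian (aggregate statistics), never row-level data.
# This illustrates the general idea behind GLORE-style methods, not the IPDLR protocol.
import numpy as np

def local_grad_hess(X, y, beta):
    p = 1.0 / (1.0 + np.exp(-X @ beta))
    grad = X.T @ (y - p)                      # score contribution of this site
    hess = -(X * (p * (1 - p))[:, None]).T @ X
    return grad, hess

rng = np.random.default_rng(1)
beta_true = np.array([-0.5, 1.0, -1.5])
sites = []
for _ in range(3):                            # three hypothetical institutions
    X = np.column_stack([np.ones(200), rng.normal(size=(200, 2))])
    y = rng.binomial(1, 1 / (1 + np.exp(-X @ beta_true)))
    sites.append((X, y))

beta = np.zeros(3)
for _ in range(25):                           # Newton iterations at the coordinator
    grads, hesses = zip(*(local_grad_hess(X, y, beta) for X, y in sites))
    beta = beta - np.linalg.solve(sum(hesses), sum(grads))
print("pooled estimate:", beta)
```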
NASA Astrophysics Data System (ADS)
Lusiana, Evellin Dewi
2017-12-01
The parameters of a binary probit regression model are commonly estimated by the Maximum Likelihood Estimation (MLE) method. However, the MLE method has limitations if the binary data contain separation. Separation is the condition in which one or several independent variables exactly separate the categories of the binary response. It causes the MLE estimators to become non-convergent, so that they cannot be used in modelling. One way to resolve separation is to use Firth's approach instead. This research has two aims: first, to compare the chance of separation occurring in a binary probit regression model between the MLE method and Firth's approach; second, to compare the performance of the binary probit regression estimators obtained by the MLE method and by Firth's approach using the RMSE criterion. Both comparisons are performed by simulation under different sample sizes. The results showed that the chance of separation occurring with the MLE method is higher than with Firth's approach for small sample sizes. For larger sample sizes, the probability decreases and is roughly the same for the MLE method and Firth's approach. Meanwhile, Firth's estimators have smaller RMSE than the MLE estimators, especially for smaller sample sizes, while for larger sample sizes the RMSEs are not much different. This means that Firth's estimators outperformed the MLE estimators.
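A small simulation sketch of the separation problem described above: with a single continuous covariate, quasi-complete separation can be detected by checking whether the two outcome groups overlap on the covariate. The sample sizes, effect sizes, and detection rule below are illustrative assumptions, not the study's design.

```python
# How often separation occurs for a single continuous covariate in small samples,
# which breaks maximum likelihood estimation of a probit (or logit) model.
import numpy as np
from scipy.stats import norm

def separated(x, y):
    # quasi-complete separation with one covariate: the two outcome groups
    # do not overlap on x (some cut point classifies perfectly)
    return x[y == 1].min() >= x[y == 0].max() or x[y == 0].min() >= x[y == 1].max()

rng = np.random.default_rng(2)
for n in (15, 30, 60, 120):
    count = 0
    for _ in range(2000):
        x = rng.normal(size=n)
        y = rng.binomial(1, norm.cdf(0.5 + 1.0 * x))   # probit data-generating model
        if y.min() == y.max() or separated(x, y):
            count += 1
    print(f"n={n}: separation rate ~ {count / 2000:.3f}")
```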
Predicting the occurrence of wildfires with binary structured additive regression models.
Ríos-Pena, Laura; Kneib, Thomas; Cadarso-Suárez, Carmen; Marey-Pérez, Manuel
2017-02-01
Wildfires are one of the main environmental problems facing societies today, and in the case of Galicia (north-west Spain), they are the main cause of forest destruction. This paper used binary structured additive regression (STAR) for modelling the occurrence of wildfires in Galicia. Binary STAR models are a recent contribution to the classical logistic regression and binary generalized additive models. Their main advantage lies in their flexibility for modelling non-linear effects, while simultaneously incorporating spatial and temporal variables directly, thereby making it possible to reveal possible relationships among the variables considered. The results showed that the occurrence of wildfires depends on many covariates which display variable behaviour across space and time, and which largely determine the likelihood of ignition of a fire. The joint possibility of working on spatial scales with a resolution of 1 × 1 km cells and mapping predictions in a colour range makes STAR models a useful tool for plotting and predicting wildfire occurrence. Lastly, it will facilitate the development of fire behaviour models, which can be invaluable when it comes to drawing up fire-prevention and firefighting plans. Copyright © 2016 Elsevier Ltd. All rights reserved.
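Binary STAR models of the kind used in the paper are usually fitted with specialized software; a rough Python stand-in for the same idea, a binomial GAM with smooth covariate and spatial effects, might look like the sketch below. The variable names, penalties, and data are hypothetical.

```python
# Rough stand-in for a binary structured additive regression (STAR) model: a binomial
# GAM with smooth effects of a covariate and of the spatial coordinates.
import numpy as np
import statsmodels.api as sm
from statsmodels.gam.api import GLMGam, BSplines

rng = np.random.default_rng(3)
n = 1000
temp = rng.normal(20, 5, n)                  # hypothetical covariate per 1x1 km cell
utm_x = rng.uniform(0, 100, n)               # hypothetical cell coordinates
utm_y = rng.uniform(0, 100, n)
eta = -1 + 0.01 * (temp - 20) ** 2 + 0.02 * (utm_x - 50)
fire = rng.binomial(1, 1 / (1 + np.exp(-eta)))   # simulated ignition indicator

smoother = BSplines(np.column_stack([temp, utm_x, utm_y]),
                    df=[8, 8, 8], degree=[3, 3, 3])
model = GLMGam(fire, exog=np.ones((n, 1)), smoother=smoother,
               family=sm.families.Binomial(), alpha=[1.0, 1.0, 1.0])
res = model.fit()
print(res.summary())
```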
Asghari, Mehdi Poursheikhali; Hayatshahi, Sayyed Hamed Sadat; Abdolmaleki, Parviz
2012-01-01
From both the structural and functional points of view, β-turns play important biological roles in proteins. In the present study, a novel two-stage hybrid procedure has been developed to identify β-turns in proteins. Binary logistic regression was used, for the first time, to select significant sequence parameters for the identification of β-turns via a re-substitution test procedure. The sequence parameters consisted of 80 amino acid positional occurrences and 20 amino acid percentages in the sequence. Among these parameters, the most significant ones selected by the binary logistic regression model were the percentages of Gly and Ser and the occurrence of Asn in position i+2; these parameters have the strongest effect on the constitution of a β-turn sequence. A neural network model was then constructed and fed with the parameters selected by binary logistic regression to build a hybrid predictor. The networks were trained and tested on a non-homologous dataset of 565 protein chains. Applying a nine-fold cross-validation test on the dataset, the network reached an overall accuracy (Qtotal) of 74%, which is comparable with the results of other β-turn prediction methods. In conclusion, this study shows that the parameter selection ability of binary logistic regression together with the prediction capability of neural networks leads to more precise models for identifying β-turns in proteins. PMID:27418910
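A schematic sketch of the two-stage hybrid idea, logistic-regression screening followed by a neural network, is shown below with scikit-learn. The selection criterion used here (absolute coefficient size) and the simulated features stand in for the paper's significance-based selection and real sequence parameters.

```python
# Schematic two-stage hybrid: logistic regression to screen sequence-derived
# features, then a neural network trained on the retained features.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(4)
X = rng.random((565, 100))                 # 80 positional occurrences + 20 percentages (simulated)
y = rng.binomial(1, 0.25, 565)             # 1 = beta-turn, 0 = non-turn (simulated)

screen = LogisticRegression(max_iter=1000).fit(X, y)
keep = np.argsort(np.abs(screen.coef_[0]))[-3:]   # retain the most influential features
print("selected feature indices:", keep)

hybrid = MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000, random_state=0)
acc = cross_val_score(hybrid, X[:, keep], y, cv=9, scoring="accuracy")
print("9-fold CV accuracy:", acc.mean())
```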
Unitary Response Regression Models
ERIC Educational Resources Information Center
Lipovetsky, S.
2007-01-01
The dependent variable in a regular linear regression is a numerical variable, and in a logistic regression it is a binary or categorical variable. In these models the dependent variable has varying values. However, there are problems yielding an identity output of a constant value which can also be modelled in a linear or logistic regression with…
A comparison of methods for the analysis of binomial clustered outcomes in behavioral research.
Ferrari, Alberto; Comelli, Mario
2016-12-01
In behavioral research, data consisting of a per-subject proportion of "successes" and "failures" over a finite number of trials often arise. Such clustered binary data are usually non-normally distributed, which can distort inference if the usual general linear model is applied and the sample size is small. A number of more advanced methods are available, but they are often technically challenging, and a comparative assessment of their performance in behavioral setups has not been carried out. We studied the performance of several methods applicable to the analysis of proportions, namely linear regression, Poisson regression, beta-binomial regression and Generalized Linear Mixed Models (GLMMs). We report on a simulation study evaluating the power and Type I error rate of these models in hypothetical scenarios met by behavioral researchers, and we describe results from the application of these methods to data from real experiments. Our results show that, while GLMMs are powerful instruments for the analysis of clustered binary outcomes, beta-binomial regression can outperform them in a range of scenarios. Linear regression gave results consistent with the nominal level of significance, but was overall less powerful. Poisson regression, instead, mostly led to anticonservative inference. GLMMs and beta-binomial regression are generally more powerful than linear regression; yet linear regression is robust to model misspecification in some conditions, whereas Poisson regression suffers heavily from violations of its assumptions when used to model proportion data. We conclude by providing directions for behavioral scientists dealing with clustered binary data and small sample sizes. Copyright © 2016 Elsevier B.V. All rights reserved.
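Two of the compared approaches can be sketched directly with statsmodels on per-subject success/failure counts; beta-binomial regression and GLMMs need specialized estimation and are omitted here. The treatment indicator and effect sizes below are assumptions for illustration.

```python
# Linear regression on raw proportions versus a binomial GLM on (successes, failures)
# counts, two of the methods compared above, on simulated clustered binary data.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(5)
n_subj, trials = 30, 20
group = rng.binomial(1, 0.5, n_subj)                 # hypothetical treatment indicator
p_subj = 1 / (1 + np.exp(-(-0.5 + 0.8 * group + rng.normal(0, 0.6, n_subj))))
successes = rng.binomial(trials, p_subj)

X = sm.add_constant(group.astype(float))
ols = sm.OLS(successes / trials, X).fit()            # normal-theory inference on proportions
glm = sm.GLM(np.column_stack([successes, trials - successes]), X,
             family=sm.families.Binomial()).fit()    # binomial GLM on counts
print("OLS slope p-value:", ols.pvalues[1])
print("GLM slope p-value:", glm.pvalues[1])
```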
Casero-Alonso, V; López-Fidalgo, J; Torsney, B
2017-01-01
Binary response models are used in many real applications. For these models the Fisher information matrix (FIM) is proportional to the FIM of a weighted simple linear regression model. The same is also true when the weight function has a finite integral. Thus, optimal designs for one binary model are also optimal for the corresponding weighted linear regression model. The main objective of this paper is to provide a tool for the construction of MV-optimal designs, minimizing the maximum of the variances of the estimates, for a general design space. MV-optimality is a potentially difficult criterion because of its nondifferentiability at equal variance designs. A methodology for obtaining MV-optimal designs where the design space is a compact interval [a, b] will be given for several standard weight functions. The methodology will allow us to build a user-friendly computer tool based on Mathematica to compute MV-optimal designs. Some illustrative examples will show a representation of MV-optimal designs in the Euclidean plane, taking a and b as the axes. The applet will be explained using two relevant models. In the first one the case of a weighted linear regression model is considered, where the weight function is directly chosen from a typical family. In the second example a binary response model is assumed, where the probability of the outcome is given by a typical probability distribution. Practitioners can use the provided applet to identify the solution and to know the exact support points and design weights. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
ERIC Educational Resources Information Center
Chen, Chau-Kuang
2005-01-01
Logistic and Cox regression methods are practical tools used to model the relationships between certain student learning outcomes and their relevant explanatory variables. The logistic regression model fits an S-shaped curve into a binary outcome with data points of zero and one. The Cox regression model allows investigators to study the duration…
ERIC Educational Resources Information Center
Monahan, Patrick O.; McHorney, Colleen A.; Stump, Timothy E.; Perkins, Anthony J.
2007-01-01
Previous methodological and applied studies that used binary logistic regression (LR) for detection of differential item functioning (DIF) in dichotomously scored items either did not report an effect size or did not employ several useful measures of DIF magnitude derived from the LR model. Equations are provided for these effect size indices.…
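The nested-model LR DIF procedure referred to above can be sketched as follows; the matching score, the simulated uniform DIF, and the change in McFadden pseudo-R² used as an effect-size index are illustrative choices, not the exact indices proposed by the authors.

```python
# LR DIF sketch for one dichotomous item: nested logistic models (score; + group;
# + group x score) compared by likelihood-ratio tests, with a pseudo-R^2 change
# reported as one common effect-size measure.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from scipy.stats import chi2

rng = np.random.default_rng(6)
n = 2000
d = pd.DataFrame({
    "score": rng.normal(0, 1, n),                    # matching criterion (e.g., rest score)
    "group": rng.binomial(1, 0.5, n),                # 0 = reference, 1 = focal
})
eta = 1.2 * d["score"] + 0.4 * d["group"]            # simulated uniform DIF
d["item"] = rng.binomial(1, 1 / (1 + np.exp(-eta)))

m1 = smf.logit("item ~ score", d).fit(disp=0)
m2 = smf.logit("item ~ score + group", d).fit(disp=0)
m3 = smf.logit("item ~ score * group", d).fit(disp=0)

lr_uniform = 2 * (m2.llf - m1.llf)                   # uniform DIF test (1 df)
lr_total = 2 * (m3.llf - m1.llf)                     # overall DIF test (2 df)
print("uniform DIF p =", chi2.sf(lr_uniform, 1))
print("total DIF p   =", chi2.sf(lr_total, 2))
print("delta pseudo-R2 =", m3.prsquared - m1.prsquared)   # effect-size index
```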
Kupek, Emil
2006-03-15
Structural equation modelling (SEM) has been increasingly used in medical statistics for solving a system of related regression equations. However, a great obstacle for its wider use has been its difficulty in handling categorical variables within the framework of generalised linear models. A large data set with a known structure among two related outcomes and three independent variables was generated to investigate the use of Yule's transformation of odds ratio (OR) into Q-metric by (OR-1)/(OR+1) to approximate Pearson's correlation coefficients between binary variables whose covariance structure can be further analysed by SEM. Percent of correctly classified events and non-events was compared with the classification obtained by logistic regression. The performance of SEM based on Q-metric was also checked on a small (N = 100) random sample of the data generated and on a real data set. SEM successfully recovered the generated model structure. SEM of real data suggested a significant influence of a latent confounding variable which would have not been detectable by standard logistic regression. SEM classification performance was broadly similar to that of the logistic regression. The analysis of binary data can be greatly enhanced by Yule's transformation of odds ratios into estimated correlation matrix that can be further analysed by SEM. The interpretation of results is aided by expressing them as odds ratios which are the most frequently used measure of effect in medical statistics.
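Yule's transformation is simple to compute from a 2×2 cross-tabulation; a minimal sketch with a hypothetical table:

```python
# Yule's transformation of a 2x2 table's odds ratio into the Q metric,
# used above as an approximate correlation between two binary variables.
import numpy as np

def yules_q(table):
    (a, b), (c, d) = np.asarray(table, dtype=float)
    odds_ratio = (a * d) / (b * c)
    return (odds_ratio - 1) / (odds_ratio + 1)

# hypothetical cross-tabulation of two binary variables
print(yules_q([[40, 10],
               [15, 35]]))   # ~0.81, suggesting a strong positive association
```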
Guo, Canyong; Luo, Xuefang; Zhou, Xiaohua; Shi, Beijia; Wang, Juanjuan; Zhao, Jinqi; Zhang, Xiaoxia
2017-06-05
Vibrational spectroscopic techniques such as infrared, near-infrared and Raman spectroscopy have become popular for detecting and quantifying polymorphism in pharmaceutics since they are fast and non-destructive. This study assessed the ability of three vibrational spectroscopic techniques combined with multivariate analysis to quantify a low-content undesired polymorph within a binary polymorphic mixture. Partial least squares (PLS) regression and support vector machine (SVM) regression were employed to build quantitative models. Fusidic acid, a steroidal antibiotic, was used as the model compound. It was found that PLS regression performed slightly better than SVM regression for all three spectroscopic techniques. Root mean square errors of prediction (RMSEP) ranged from 0.48% to 1.17% for diffuse reflectance FTIR spectroscopy, 1.60-1.93% for diffuse reflectance FT-NIR spectroscopy and 1.62-2.31% for Raman spectroscopy. The results indicate that diffuse reflectance FTIR spectroscopy offers significant advantages in providing accurate measurement of polymorphic content in the fusidic acid binary mixtures, while Raman spectroscopy is the least accurate technique for quantitative analysis of polymorphs. Copyright © 2017 Elsevier B.V. All rights reserved.
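A schematic of the chemometric comparison, PLS regression versus support vector regression judged by RMSEP, is sketched below with scikit-learn on toy spectra; the data are simulated and not the fusidic acid measurements.

```python
# PLS regression versus support vector regression for predicting minor-polymorph
# content from spectra, compared by root mean square error of prediction (RMSEP).
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.svm import SVR
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(7)
content = rng.uniform(0, 10, 60)                    # % undesired polymorph (simulated)
wavenumbers = np.linspace(0, 1, 500)
spectra = (content[:, None] * np.exp(-((wavenumbers - 0.4) / 0.05) ** 2)
           + rng.normal(0, 0.05, (60, 500)))        # toy spectra, not real FTIR data

X_tr, X_te, y_tr, y_te = train_test_split(spectra, content, random_state=0)
for name, model in [("PLS", PLSRegression(n_components=5)), ("SVR", SVR(C=10.0))]:
    pred = model.fit(X_tr, y_tr).predict(X_te).ravel()
    rmsep = np.sqrt(np.mean((pred - y_te) ** 2))
    print(f"{name}: RMSEP = {rmsep:.2f} %")
```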
NASA Astrophysics Data System (ADS)
Kamaruddin, Ainur Amira; Ali, Zalila; Noor, Norlida Mohd.; Baharum, Adam; Ahmad, Wan Muhamad Amir W.
2014-07-01
Logistic regression analysis examines the influence of various factors on a dichotomous outcome by estimating the probability of the event's occurrence. Logistic regression, also called a logit model, is a statistical procedure used to model dichotomous outcomes. In the logit model the log odds of the dichotomous outcome is modeled as a linear combination of the predictor variables. The log odds ratio in logistic regression describes the probabilistic relationship between the variables and the outcome. In conducting logistic regression, selection procedures are used to select important predictor variables; diagnostics are used to check that assumptions are valid, including independence of errors, linearity in the logit for continuous variables, absence of multicollinearity, and lack of strongly influential outliers; and a test statistic is calculated to determine the aptness of the model. This study used the binary logistic regression model to investigate overweight and obesity among rural secondary school students on the basis of their demographic profile, medical history, diet and lifestyle. The results indicate that overweight and obesity among students are influenced by obesity in the family and by the interaction between a student's ethnicity and routine meal intake. The odds of a student being overweight or obese are higher for a student with a family history of obesity and for a non-Malay student who frequently takes routine meals, compared with a Malay student.
Esserman, Denise A.; Moore, Charity G.; Roth, Mary T.
2009-01-01
Older community dwelling adults often take multiple medications for numerous chronic diseases. Non-adherence to these medications can have a large public health impact. Therefore, the measurement and modeling of medication adherence in the setting of polypharmacy is an important area of research. We apply a variety of different modeling techniques (standard linear regression; weighted linear regression; adjusted linear regression; naïve logistic regression; beta-binomial (BB) regression; generalized estimating equations (GEE)) to binary medication adherence data from a study in a North Carolina based population of older adults, where each medication an individual was taking was classified as adherent or non-adherent. In addition, through simulation we compare these different methods based on Type I error rates, bias, power, empirical 95% coverage, and goodness of fit. We find that estimation and inference using GEE is robust to a wide variety of scenarios and we recommend using this in the setting of polypharmacy when adherence is dichotomously measured for multiple medications per person. PMID:20414358
Introduction to the use of regression models in epidemiology.
Bender, Ralf
2009-01-01
Regression modeling is one of the most important statistical techniques used in analytical epidemiology. By means of regression models the effect of one or several explanatory variables (e.g., exposures, subject characteristics, risk factors) on a response variable such as mortality or cancer can be investigated. From multiple regression models, adjusted effect estimates can be obtained that take the effect of potential confounders into account. Regression methods can be applied in all epidemiologic study designs so that they represent a universal tool for data analysis in epidemiology. Different kinds of regression models have been developed in dependence on the measurement scale of the response variable and the study design. The most important methods are linear regression for continuous outcomes, logistic regression for binary outcomes, Cox regression for time-to-event data, and Poisson regression for frequencies and rates. This chapter provides a nontechnical introduction to these regression models with illustrating examples from cancer research.
ERIC Educational Resources Information Center
Davidson, J. Cody
2016-01-01
Mathematics is the most common subject area of remedial need, and the majority of remedial math students never pass a college-level credit-bearing math class. The majority of studies that investigate this phenomenon are conducted at community colleges and use some type of regression model; however, none have used a continuation ratio model. The…
NASA Astrophysics Data System (ADS)
Martínez-Fernández, J.; Chuvieco, E.; Koutsias, N.
2013-02-01
Humans are responsible for most forest fires in Europe, but anthropogenic factors behind these events are still poorly understood. We tried to identify the driving factors of human-caused fire occurrence in Spain by applying two different statistical approaches. Firstly, assuming stationary processes for the whole country, we created models based on multiple linear regression and binary logistic regression to find factors associated with fire density and fire presence, respectively. Secondly, we used geographically weighted regression (GWR) to better understand and explore the local and regional variations of those factors behind human-caused fire occurrence. The number of human-caused fires occurring within a 25-yr period (1983-2007) was computed for each of the 7638 Spanish mainland municipalities, creating a binary variable (fire/no fire) to develop logistic models, and a continuous variable (fire density) to build standard linear regression models. A total of 383 657 fires were registered in the study dataset. The binary logistic model, which estimates the probability of having/not having a fire, successfully classified 76.4% of the total observations, while the ordinary least squares (OLS) regression model explained 53% of the variation of the fire density patterns (adjusted R2 = 0.53). Both approaches confirmed, in addition to forest and climatic variables, the importance of variables related with agrarian activities, land abandonment, rural population exodus and developmental processes as underlying factors of fire occurrence. For the GWR approach, the explanatory power of the GW linear model for fire density using an adaptive bandwidth increased from 53% to 67%, while for the GW logistic model the correctly classified observations improved only slightly, from 76.4% to 78.4%, but significantly according to the corrected Akaike Information Criterion (AICc), from 3451.19 to 3321.19. The results from GWR indicated a significant spatial variation in the local parameter estimates for all the variables and an important reduction of the autocorrelation in the residuals of the GW linear model. Despite the fitting improvement of local models, GW regression, more than an alternative to "global" or traditional regression modelling, seems to be a valuable complement to explore the non-stationary relationships between the response variable and the explanatory variables. The synergy of global and local modelling provides insights into fire management and policy and helps further our understanding of the fire problem over large areas while at the same time recognizing its local character.
Chen, Wansu; Shi, Jiaxiao; Qian, Lei; Azen, Stanley P
2014-06-26
To estimate relative risks or risk ratios for common binary outcomes, the most popular model-based methods are the robust (also known as modified) Poisson and the log-binomial regression. Of the two methods, it is believed that the log-binomial regression yields more efficient estimators because it is maximum likelihood based, while the robust Poisson model may be less affected by outliers. Evidence to support the robustness of robust Poisson models in comparison with log-binomial models is very limited. In this study a simulation was conducted to evaluate the performance of the two methods in several scenarios where outliers existed. The findings indicate that for data coming from a population where the relationship between the outcome and the covariate was in a simple form (e.g. log-linear), the two models yielded comparable biases and mean square errors. However, if the true relationship contained a higher order term, the robust Poisson models consistently outperformed the log-binomial models even when the level of contamination is low. The robust Poisson models are more robust (or less sensitive) to outliers compared to the log-binomial models when estimating relative risks or risk ratios for common binary outcomes. Users should be aware of the limitations when choosing appropriate models to estimate relative risks or risk ratios.
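The two relative-risk models being compared can be written in a few lines with statsmodels: a binomial GLM with a log link and a Poisson GLM with a sandwich (robust) covariance. The exposure prevalence and true risk ratio below are assumptions for illustration.

```python
# Log-binomial GLM versus robust (modified) Poisson GLM for a common binary outcome,
# both estimating the same risk ratio on simulated data.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(8)
n = 5000
x = rng.binomial(1, 0.4, n)                          # exposure indicator
p = 0.15 * np.where(x == 1, 2.0, 1.0)                # true risk ratio = 2
y = rng.binomial(1, p)
X = sm.add_constant(x.astype(float))

log_binomial = sm.GLM(y, X, family=sm.families.Binomial(
    link=sm.families.links.Log())).fit()
robust_poisson = sm.GLM(y, X, family=sm.families.Poisson()).fit(cov_type="HC1")

print("log-binomial RR  :", np.exp(log_binomial.params[1]))
print("robust Poisson RR:", np.exp(robust_poisson.params[1]))
```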
Cook, James P; Mahajan, Anubha; Morris, Andrew P
2017-02-01
Linear mixed models are increasingly used for the analysis of genome-wide association studies (GWAS) of binary phenotypes because they can efficiently and robustly account for population stratification and relatedness through inclusion of random effects for a genetic relationship matrix. However, the utility of linear (mixed) models in the context of meta-analysis of GWAS of binary phenotypes has not been previously explored. In this investigation, we present simulations to compare the performance of linear and logistic regression models under alternative weighting schemes in a fixed-effects meta-analysis framework, considering designs that incorporate variable case-control imbalance, confounding factors and population stratification. Our results demonstrate that linear models can be used for meta-analysis of GWAS of binary phenotypes, without loss of power, even in the presence of extreme case-control imbalance, provided that one of the following schemes is used: (i) effective sample size weighting of Z-scores or (ii) inverse-variance weighting of allelic effect sizes after conversion onto the log-odds scale. Our conclusions thus provide essential recommendations for the development of robust protocols for meta-analysis of binary phenotypes with linear models.
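Scheme (i), effective-sample-size weighting of per-study Z-scores, can be sketched as follows; the study counts and Z-scores are hypothetical.

```python
# Effective-sample-size weighted Z-score meta-analysis for studies with
# variable case-control imbalance.
import numpy as np

def n_effective(n_cases, n_controls):
    # effective sample size of a case-control study
    return 4.0 / (1.0 / n_cases + 1.0 / n_controls)

def meta_z(z_scores, n_cases, n_controls):
    w = np.sqrt(n_effective(np.asarray(n_cases, float), np.asarray(n_controls, float)))
    return np.sum(w * np.asarray(z_scores)) / np.sqrt(np.sum(w ** 2))

# three hypothetical studies with very different case-control imbalance
print(meta_z(z_scores=[2.1, 1.4, 2.6],
             n_cases=[500, 2000, 150],
             n_controls=[5000, 2500, 10000]))
```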
No rationale for 1 variable per 10 events criterion for binary logistic regression analysis.
van Smeden, Maarten; de Groot, Joris A H; Moons, Karel G M; Collins, Gary S; Altman, Douglas G; Eijkemans, Marinus J C; Reitsma, Johannes B
2016-11-24
Ten events per variable (EPV) is a widely advocated minimal criterion for sample size considerations in logistic regression analysis. Of three previous simulation studies that examined this minimal EPV criterion, only one supports the use of a minimum of 10 EPV. In this paper, we examine the reasons for the substantial differences between these extensive simulation studies. The current study uses Monte Carlo simulations to evaluate small-sample bias, coverage of confidence intervals and mean square error of logit coefficients. Logistic regression models fitted by maximum likelihood and by a modified estimation procedure, known as Firth's correction, are compared. The results show that, besides EPV, the problems associated with low EPV depend on other factors such as the total sample size. It is also demonstrated that simulation results can be dominated by even a few simulated data sets for which the prediction of the outcome by the covariates is perfect ('separation'). We reveal that different approaches for identifying and handling separation lead to substantially different simulation results. We further show that Firth's correction can be used to improve the accuracy of regression coefficients and alleviate the problems associated with separation. The current evidence supporting EPV rules for binary logistic regression is weak. Given our findings, there is an urgent need for new research to provide guidance on sample size considerations for binary logistic regression analysis.
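Firth's correction is available in several packages (for example the R package logistf); a minimal NumPy sketch of the penalized-score iteration, run on a tiny separated data set where ordinary maximum likelihood would diverge, is given below as an illustration only.

```python
# Minimal sketch of Firth's penalized-likelihood correction for logistic regression
# (Jeffreys-prior penalty), which keeps coefficient estimates finite under separation.
import numpy as np

def firth_logit(X, y, n_iter=50, tol=1e-8):
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-X @ beta))
        W = p * (1 - p)
        XtWX = X.T @ (X * W[:, None])
        XtWX_inv = np.linalg.inv(XtWX)
        # hat-matrix diagonals of W^(1/2) X (X'WX)^(-1) X' W^(1/2)
        h = np.einsum("ij,jk,ik->i", X * W[:, None], XtWX_inv, X)
        score = X.T @ (y - p + h * (0.5 - p))        # Firth-adjusted score
        step = XtWX_inv @ score
        beta = beta + step
        if np.max(np.abs(step)) < tol:
            break
    return beta

# tiny completely separated example: ordinary ML diverges, Firth stays finite
x = np.array([-2.0, -1.5, -1.0, -0.5, 0.5, 1.0, 1.5, 2.0])
y = np.array([0, 0, 0, 0, 1, 1, 1, 1])
X = np.column_stack([np.ones_like(x), x])
print(firth_logit(X, y))
```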
The intermediate endpoint effect in logistic and probit regression
MacKinnon, DP; Lockwood, CM; Brown, CH; Wang, W; Hoffman, JM
2010-01-01
Background An intermediate endpoint is hypothesized to be in the middle of the causal sequence relating an independent variable to a dependent variable. The intermediate variable is also called a surrogate or mediating variable and the corresponding effect is called the mediated, surrogate endpoint, or intermediate endpoint effect. Clinical studies are often designed to change an intermediate or surrogate endpoint and through this intermediate change influence the ultimate endpoint. In many intermediate endpoint clinical studies the dependent variable is binary, and logistic or probit regression is used. Purpose The purpose of this study is to describe a limitation of a widely used approach to assessing intermediate endpoint effects and to propose an alternative method, based on products of coefficients, that yields more accurate results. Methods The intermediate endpoint model for a binary outcome is described for a true binary outcome and for a dichotomization of a latent continuous outcome. Plots of true values and a simulation study are used to evaluate the different methods. Results Distorted estimates of the intermediate endpoint effect and incorrect conclusions can result from the application of widely used methods to assess the intermediate endpoint effect. The same problem occurs for the proportion of an effect explained by an intermediate endpoint, which has been suggested as a useful measure for identifying intermediate endpoints. A solution to this problem is given based on the relationship between latent variable modeling and logistic or probit regression. Limitations More complicated intermediate variable models are not addressed in the study, although the methods described in the article can be extended to these more complicated models. Conclusions Researchers are encouraged to use an intermediate endpoint method based on the product of regression coefficients. A common method based on difference in coefficient methods can lead to distorted conclusions regarding the intermediate effect. PMID:17942466
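The product-of-coefficients idea, and the distortion that can affect the difference-in-coefficients approach when the final outcome is modelled by logistic regression, can be illustrated with a small simulation; the effect sizes below are arbitrary assumptions and no standard-error computation is shown.

```python
# Product-of-coefficients estimate (a*b) of an intermediate-endpoint effect versus the
# difference-in-coefficients estimate (c - c') when the final outcome is binary.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(9)
n = 5000
x = rng.binomial(1, 0.5, n).astype(float)                 # randomized treatment
m = 0.5 * x + rng.normal(size=n)                          # intermediate endpoint
y = rng.binomial(1, 1 / (1 + np.exp(-(-0.5 + 0.7 * m))))  # binary final endpoint

a = sm.OLS(m, sm.add_constant(x)).fit().params[1]                                  # X -> M
b = sm.Logit(y, sm.add_constant(np.column_stack([x, m]))).fit(disp=0).params[2]    # M -> Y | X
c = sm.Logit(y, sm.add_constant(x)).fit(disp=0).params[1]                          # total effect of X
c_adj = sm.Logit(y, sm.add_constant(np.column_stack([x, m]))).fit(disp=0).params[1]

print("product of coefficients a*b :", a * b)
print("difference c - c'           :", c - c_adj)   # distorted: the logit scale changes between models
```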
NASA Astrophysics Data System (ADS)
Bouffon, T.; Rice, R.; Bales, R.
2006-12-01
The spatial distributions of snow water equivalent (SWE) and snow depth within 1, 4, and 16 km² grid elements around two automated snow pillows in a forested and an open-forested region of the Upper Merced River Basin (2,800 km²) of Yosemite National Park were characterized using field observations and analyzed using binary regression trees. Snow surveys were conducted at the forested site during the accumulation and ablation seasons, while at the open-forested site a survey was performed only during the accumulation season. An average of 130 snow depth and 7 snow density measurements were made on each survey within the 4 km² grid. Snow depth was distributed using binary regression trees and geostatistical methods based on physiographic parameters (e.g. elevation, slope, vegetation, aspect). Results for the forested region indicate that the snow pillow overestimated average SWE within the 1, 4, and 16 km² areas by 34 percent during ablation; during accumulation the snow pillow provided a good estimate of the modeled mean grid SWE, although it is suspected that the snow pillow was underestimating SWE. At the open-forested site, during accumulation, the snow pillow estimate was 28 percent greater than the modeled grid-element mean. In addition, the binary regression trees indicate that the independent variables of vegetation, slope, and aspect are the most influential parameters for the snow depth distribution. The binary regression tree and multivariate linear regression models explain about 60 percent of the initial variance for snow depth and 80 percent for density, respectively. This short-term study provides motivation and direction for the installation of a distributed snow measurement network to fill the information gap in basin-wide SWE and snow depth measurements. Guided by these results, a distributed snow measurement network was installed in Fall 2006 at Gin Flat in the Upper Merced River Basin with the specific objective of measuring accumulation and ablation across topographic variables, with the aim of providing guidance for future larger-scale observation network designs.
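A regression tree for snow depth as a function of physiographic variables can be sketched with scikit-learn; the survey values below are simulated stand-ins and the tree settings are arbitrary.

```python
# Sketch of a binary regression tree distributing snow depth over physiographic
# variables (elevation, slope, aspect, vegetation).
import numpy as np
from sklearn.tree import DecisionTreeRegressor, export_text

rng = np.random.default_rng(10)
n = 130
elevation = rng.uniform(2000, 2400, n)        # m
slope = rng.uniform(0, 35, n)                 # degrees
aspect = rng.uniform(0, 360, n)               # degrees
vegetation = rng.binomial(1, 0.5, n)          # 1 = forested, 0 = open
depth = (0.002 * (elevation - 2000) + 0.4 * vegetation
         - 0.01 * slope + rng.normal(0, 0.15, n))    # simulated snow depth, m

X = np.column_stack([elevation, slope, aspect, vegetation])
tree = DecisionTreeRegressor(max_depth=3, min_samples_leaf=10).fit(X, depth)
print("variance explained on the survey points (R^2):", tree.score(X, depth))
print(export_text(tree, feature_names=["elevation", "slope", "aspect", "vegetation"]))
```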
Multiple Logistic Regression Analysis of Cigarette Use among High School Students
ERIC Educational Resources Information Center
Adwere-Boamah, Joseph
2011-01-01
A binary logistic regression analysis was performed to predict high school students' cigarette smoking behavior from selected predictors from 2009 CDC Youth Risk Behavior Surveillance Survey. The specific target student behavior of interest was frequent cigarette use. Five predictor variables included in the model were: a) race, b) frequency of…
Szekér, Szabolcs; Vathy-Fogarassy, Ágnes
2018-01-01
Logistic-regression-based propensity score matching is a widely used method in case-control studies for selecting the individuals of the control group. This method creates a suitable control group if all factors affecting the output variable are known. However, if relevant latent variables exist as well, which are not taken into account during the calculations, the quality of the control group is uncertain. In this paper, we present a statistics-based study in which we try to determine the relationship between the accuracy of the logistic regression model and the uncertainty of the dependent variable of the control group defined by propensity score matching. Our analyses show that there is a linear correlation between the fit of the logistic regression model and the uncertainty of the output variable. In certain cases, a latent binary explanatory variable can result in a relative error of up to 70% in the prediction of the outcome variable. The observed phenomenon calls analysts' attention to an important point that must be taken into account when drawing conclusions.
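A minimal sketch of logistic-regression propensity score matching, the procedure whose sensitivity to latent variables is studied above; the confounders and treatment model are simulated assumptions.

```python
# Logistic-regression propensity score matching: fit the treatment model, then
# pick the nearest control on the propensity score for every case.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(11)
n = 2000
X = rng.normal(size=(n, 4))                            # observed confounders
treated = rng.binomial(1, 1 / (1 + np.exp(-(X[:, 0] - 0.5 * X[:, 1]))))

ps = LogisticRegression(max_iter=1000).fit(X, treated).predict_proba(X)[:, 1]
case_idx = np.flatnonzero(treated == 1)
ctrl_idx = np.flatnonzero(treated == 0)

nn = NearestNeighbors(n_neighbors=1).fit(ps[ctrl_idx].reshape(-1, 1))
_, pos = nn.kneighbors(ps[case_idx].reshape(-1, 1))
matched_controls = ctrl_idx[pos.ravel()]               # 1:1 matching with replacement
print("mean PS, cases           :", ps[case_idx].mean())
print("mean PS, matched controls:", ps[matched_controls].mean())
```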
Locally Weighted Score Estimation for Quantile Classification in Binary Regression Models
Rice, John D.; Taylor, Jeremy M. G.
2016-01-01
One common use of binary response regression methods is classification based on an arbitrary probability threshold dictated by the particular application. Since this is given to us a priori, it is sensible to incorporate the threshold into our estimation procedure. Specifically, for the linear logistic model, we solve a set of locally weighted score equations, using a kernel-like weight function centered at the threshold. The bandwidth for the weight function is selected by cross validation of a novel hybrid loss function that combines classification error and a continuous measure of divergence between observed and fitted values; other possible cross-validation functions based on more common binary classification metrics are also examined. This work has much in common with robust estimation, but differs from previous approaches in this area in its focus on prediction, specifically classification into high- and low-risk groups. Simulation results are given showing the reduction in error rates that can be obtained with this method when compared with maximum likelihood estimation, especially under certain forms of model misspecification. Analysis of a melanoma data set is presented to illustrate the use of the method in practice. PMID:28018492
Li, Chengxian; Huang, Zhe; Huang, Bicheng; Liu, Changfeng; Li, Chengming; Huang, Yaqin
2014-01-01
Cr(VI) adsorption in a binary mixture Cr(VI)-Ni(II) using the hierarchical porous carbon prepared from pig bone (HPC) was investigated. The various factors affecting adsorption of Cr(VI) ions from aqueous solutions such as initial concentration, pH, temperature and contact time were analyzed. The results showed excellent efficiency of Cr(VI) adsorption by HPC. The kinetics and isotherms for Cr(VI) adsorption from a binary mixture Cr(VI)-Ni(II) by HPC were studied. The adsorption equilibrium described by the Langmuir isotherm model is better than that described by the Freundlich isotherm model for the binary mixture in this study. The maximum adsorption capacity was reliably found to be as high as 192.68 mg/g in the binary mixture at pH 2. On fitting the experimental data to both pseudo-first- and second-order equations, the regression analysis of the second-order equation gave a better R² value.
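Fitting the Langmuir and Freundlich isotherms by non-linear least squares and comparing their fit can be sketched with SciPy; the equilibrium data below are hypothetical, not the reported measurements.

```python
# Fit the Langmuir and Freundlich isotherms to equilibrium adsorption data by
# non-linear least squares and compare R^2 values.
import numpy as np
from scipy.optimize import curve_fit

def langmuir(Ce, qmax, KL):
    return qmax * KL * Ce / (1 + KL * Ce)

def freundlich(Ce, KF, n):
    return KF * Ce ** (1.0 / n)

Ce = np.array([5, 10, 20, 40, 80, 160.0])            # mg/L, hypothetical equilibrium concentrations
qe = np.array([60, 95, 135, 165, 185, 192.0])        # mg/g adsorbed, hypothetical

for name, f, p0 in [("Langmuir", langmuir, (200, 0.05)),
                    ("Freundlich", freundlich, (30, 2))]:
    popt, _ = curve_fit(f, Ce, qe, p0=p0)
    r2 = 1 - np.sum((qe - f(Ce, *popt)) ** 2) / np.sum((qe - qe.mean()) ** 2)
    print(f"{name}: params={popt}, R^2={r2:.3f}")
```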
Tang, Yongqiang
2018-04-30
The controlled imputation method refers to a class of pattern mixture models that have been commonly used as sensitivity analyses of longitudinal clinical trials with nonignorable dropout in recent years. These pattern mixture models assume that participants in the experimental arm after dropout have similar response profiles to the control participants or have worse outcomes than otherwise similar participants who remain on the experimental treatment. In spite of its popularity, the controlled imputation has not been formally developed for longitudinal binary and ordinal outcomes partially due to the lack of a natural multivariate distribution for such endpoints. In this paper, we propose 2 approaches for implementing the controlled imputation for binary and ordinal data based respectively on the sequential logistic regression and the multivariate probit model. Efficient Markov chain Monte Carlo algorithms are developed for missing data imputation by using the monotone data augmentation technique for the sequential logistic regression and a parameter-expanded monotone data augmentation scheme for the multivariate probit model. We assess the performance of the proposed procedures by simulation and the analysis of a schizophrenia clinical trial and compare them with the fully conditional specification, last observation carried forward, and baseline observation carried forward imputation methods. Copyright © 2018 John Wiley & Sons, Ltd.
Muddukrishna, B S; Pai, Vasudev; Lobo, Richard; Pai, Aravinda
2017-11-22
In the present study, five important binary fingerprinting techniques were used to model novel flavones for the selective inhibition of Tankyrase I. From the fingerprints used: the fingerprint atom pairs resulted in a statistically significant 2D QSAR model using a kernel-based partial least square regression method. This model indicates that the presence of electron-donating groups positively contributes to activity, whereas the presence of electron withdrawing groups negatively contributes to activity. This model could be used to develop more potent as well as selective analogues for the inhibition of Tankyrase I. Schematic representation of 2D QSAR work flow.
Austin, Peter C
2018-01-01
The use of the Cox proportional hazards regression model is widespread. A key assumption of the model is that of proportional hazards. Analysts frequently test the validity of this assumption using statistical significance testing. However, the statistical power of such assessments is frequently unknown. We used Monte Carlo simulations to estimate the statistical power of two different methods for detecting violations of this assumption. When the covariate was binary, we found that a model-based method had greater power than a method based on cumulative sums of martingale residuals. Furthermore, the parametric nature of the distribution of event times had an impact on power when the covariate was binary. Statistical power to detect a strong violation of the proportional hazards assumption was low to moderate even when the number of observed events was high. In many data sets, power to detect a violation of this assumption is likely to be low to modest.
NASA Astrophysics Data System (ADS)
Liu, Jianzhong; Kern, Petra S.; Gerberick, G. Frank; Santos-Filho, Osvaldo A.; Esposito, Emilio X.; Hopfinger, Anton J.; Tseng, Yufeng J.
2008-06-01
In previous studies we have developed categorical QSAR models for predicting skin-sensitization potency based on 4D-fingerprint (4D-FP) descriptors and in vivo murine local lymph node assay (LLNA) measures. Only 4D-FP derived from the ground state (GMAX) structures of the molecules were used to build the QSAR models. In this study we have generated 4D-FP descriptors from the first excited state (EMAX) structures of the molecules. The GMAX, EMAX and the combined ground and excited state 4D-FP descriptors (GEMAX) were employed in building categorical QSAR models. Logistic regression (LR) and partial least square coupled logistic regression (PLS-CLR), found to be effective model building for the LLNA skin-sensitization measures in our previous studies, were used again in this study. This also permitted comparison of the prior ground state models to those involving first excited state 4D-FP descriptors. Three types of categorical QSAR models were constructed for each of the GMAX, EMAX and GEMAX datasets: a binary model (2-state), an ordinal model (3-state) and a binary-binary model (two-2-state). No significant differences exist among the LR 2-state model constructed for each of the three datasets. However, the PLS-CLR 3-state and 2-state models based on the EMAX and GEMAX datasets have higher predictivity than those constructed using only the GMAX dataset. These EMAX and GMAX categorical models are also more significant and predictive than corresponding models built in our previous QSAR studies of LLNA skin-sensitization measures.
Henrard, S; Speybroeck, N; Hermans, C
2015-11-01
Haemophilia is a rare genetic haemorrhagic disease characterized by partial or complete deficiency of coagulation factor VIII, for haemophilia A, or IX, for haemophilia B. As in any other medical research domain, the field of haemophilia research is increasingly concerned with finding factors associated with binary or continuous outcomes through multivariable models. Traditional models include multiple logistic regressions, for binary outcomes, and multiple linear regressions for continuous outcomes. Yet these regression models are at times difficult to implement, especially for non-statisticians, and can be difficult to interpret. The present paper sought to didactically explain how, why, and when to use classification and regression tree (CART) analysis for haemophilia research. The CART method is non-parametric and non-linear, based on the repeated partitioning of a sample into subgroups based on a certain criterion. Breiman developed this method in 1984. Classification trees (CTs) are used to analyse categorical outcomes and regression trees (RTs) to analyse continuous ones. The CART methodology has become increasingly popular in the medical field, yet only a few examples of studies using this methodology specifically in haemophilia have to date been published. Two examples using CART analysis and previously published in this field are didactically explained in details. There is increasing interest in using CART analysis in the health domain, primarily due to its ease of implementation, use, and interpretation, thus facilitating medical decision-making. This method should be promoted for analysing continuous or categorical outcomes in haemophilia, when applicable. © 2015 John Wiley & Sons Ltd.
Retargeted Least Squares Regression Algorithm.
Zhang, Xu-Yao; Wang, Lingfeng; Xiang, Shiming; Liu, Cheng-Lin
2015-09-01
This brief presents a framework of retargeted least squares regression (ReLSR) for multicategory classification. The core idea is to directly learn the regression targets from data other than using the traditional zero-one matrix as regression targets. The learned target matrix can guarantee a large margin constraint for the requirement of correct classification for each data point. Compared with the traditional least squares regression (LSR) and a recently proposed discriminative LSR models, ReLSR is much more accurate in measuring the classification error of the regression model. Furthermore, ReLSR is a single and compact model, hence there is no need to train two-class (binary) machines that are independent of each other. The convex optimization problem of ReLSR is solved elegantly and efficiently with an alternating procedure including regression and retargeting as substeps. The experimental evaluation over a range of databases identifies the validity of our method.
Logistic regression for circular data
NASA Astrophysics Data System (ADS)
Al-Daffaie, Kadhem; Khan, Shahjahan
2017-05-01
This paper considers the relationship between a binary response and a circular predictor. It develops the logistic regression model by employing the linear-circular regression approach. The maximum likelihood method is used to estimate the parameters. The Newton-Raphson numerical method is used to find the estimated values of the parameters. A data set from weather records of Toowoomba city is analysed by the proposed methods. Moreover, a simulation study is considered. The R software is used for all computations and simulations.
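The linear-circular approach can be sketched by letting the circular predictor enter the linear predictor through its sine and cosine components; the weather-like data below are simulated, not the Toowoomba records.

```python
# Logistic regression with a circular predictor entering through sin/cos components.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(12)
n = 500
theta = rng.uniform(0, 2 * np.pi, n)                  # e.g., wind direction in radians
eta = -0.2 + 1.0 * np.cos(theta) + 0.5 * np.sin(theta)
rain = rng.binomial(1, 1 / (1 + np.exp(-eta)))        # hypothetical binary response

X = sm.add_constant(np.column_stack([np.cos(theta), np.sin(theta)]))
fit = sm.Logit(rain, X).fit(disp=0)                   # ML via Newton-type iterations
print(fit.params)                                     # intercept, cosine, sine coefficients
```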
A new method for constructing networks from binary data
NASA Astrophysics Data System (ADS)
van Borkulo, Claudia D.; Borsboom, Denny; Epskamp, Sacha; Blanken, Tessa F.; Boschloo, Lynn; Schoevers, Robert A.; Waldorp, Lourens J.
2014-08-01
Network analysis is entering fields where network structures are unknown, such as psychology and the educational sciences. A crucial step in the application of network models lies in the assessment of network structure. Current methods either have serious drawbacks or are only suitable for Gaussian data. In the present paper, we present a method for assessing network structures from binary data. Although models for binary data are infamous for their computational intractability, we present a computationally efficient model for estimating network structures. The approach, which is based on Ising models as used in physics, combines logistic regression with model selection based on a Goodness-of-Fit measure to identify relevant relationships between variables that define connections in a network. A validation study shows that this method succeeds in revealing the most relevant features of a network for realistic sample sizes. We apply our proposed method to estimate the network of depression and anxiety symptoms from symptom scores of 1108 subjects. Possible extensions of the model are discussed.
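The nodewise strategy described above (Ising-model estimation from binary data) can be sketched with L1-penalized logistic regressions; the Goodness-of-Fit (EBIC-style) tuning used in the paper is replaced here by a fixed penalty for brevity, and the data are simulated.

```python
# Nodewise network estimation for binary data: regress each variable on all others
# with an L1-penalized logistic regression, then keep an edge only if both
# directions select it (AND rule).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(13)
n, p = 1108, 6
X = rng.binomial(1, 0.5, (n, p)).astype(float)
X[:, 1] = rng.binomial(1, np.where(X[:, 0] == 1, 0.8, 0.2))   # plant one true edge 0-1

coef = np.zeros((p, p))
for j in range(p):
    others = np.delete(np.arange(p), j)
    lasso = LogisticRegression(penalty="l1", solver="liblinear", C=0.1)
    lasso.fit(X[:, others], X[:, j])
    coef[j, others] = lasso.coef_[0]

adjacency = (coef != 0) & (coef.T != 0)               # AND rule for edge inclusion
print(adjacency.astype(int))
```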
Binary logistic regression modelling: Measuring the probability of relapse cases among drug addict
NASA Astrophysics Data System (ADS)
Ismail, Mohd Tahir; Alias, Siti Nor Shadila
2014-07-01
For many years Malaysia has faced drug addiction issues. The most serious case is the relapse phenomenon among treated drug addicts (drug addicts who have undergone the rehabilitation programme at the Narcotic Addiction Rehabilitation Centre, PUSPEN). Thus, the main objective of this study is to find the most significant factors that contribute to relapse. Binary logistic regression analysis was employed to model the relationship between the independent variables (predictors) and the dependent variable. The dependent variable is the status of the drug addict: either relapse (Yes, coded as 1) or not (No, coded as 0). The predictors involved are age, age at first taking drugs, family history, education level, family crisis, community support and self-motivation. The total sample size is 200, with data provided by AADK (National Antidrug Agency). The findings of the study revealed that age and self-motivation are statistically significant with respect to relapse.
Dinç, Erdal; Ozdemir, Abdil
2005-01-01
A multivariate chromatographic calibration technique was developed for the quantitative analysis of binary mixtures of enalapril maleate (EA) and hydrochlorothiazide (HCT) in tablets in the presence of losartan potassium (LST). The mathematical algorithm of the multivariate chromatographic calibration technique is based on linear regression equations constructed from the relationship between concentration and peak area at a set of five wavelengths. The algorithm of this calibration model, which has a simple mathematical form, is briefly described. This approach is a powerful mathematical tool for optimum chromatographic multivariate calibration and for eliminating fluctuations arising from instrumental and experimental conditions. The multivariate chromatographic calibration involves reducing the multivariate linear regression functions to a univariate data set. Validation of the model was carried out by analyzing various synthetic binary mixtures and by using the standard addition technique. The developed calibration technique was applied to the analysis of real pharmaceutical tablets containing EA and HCT. The results obtained were compared with those obtained by a classical HPLC method. It was observed that the proposed multivariate chromatographic calibration gives better results than classical HPLC.
Logic regression and its extensions.
Schwender, Holger; Ruczinski, Ingo
2010-01-01
Logic regression is an adaptive classification and regression procedure, initially developed to reveal interacting single nucleotide polymorphisms (SNPs) in genetic association studies. In general, this approach can be used in any setting with binary predictors, when the interaction of these covariates is of primary interest. Logic regression searches for Boolean (logic) combinations of binary variables that best explain the variability in the outcome variable, and thus, reveals variables and interactions that are associated with the response and/or have predictive capabilities. The logic expressions are embedded in a generalized linear regression framework, and thus, logic regression can handle a variety of outcome types, such as binary responses in case-control studies, numeric responses, and time-to-event data. In this chapter, we provide an introduction to the logic regression methodology, list some applications in public health and medicine, and summarize some of the direct extensions and modifications of logic regression that have been proposed in the literature. Copyright © 2010 Elsevier Inc. All rights reserved.
Determination of riverbank erosion probability using Locally Weighted Logistic Regression
NASA Astrophysics Data System (ADS)
Ioannidou, Elena; Flori, Aikaterini; Varouchakis, Emmanouil A.; Giannakis, Georgios; Vozinaki, Anthi Eirini K.; Karatzas, George P.; Nikolaidis, Nikolaos
2015-04-01
Riverbank erosion is a natural geomorphologic process that affects the fluvial environment. The most important issue concerning riverbank erosion is the identification of the vulnerable locations. An alternative to the usual hydrodynamic models to predict vulnerable locations is to quantify the probability of erosion occurrence. This can be achieved by identifying the underlying relations between riverbank erosion and the geomorphological or hydrological variables that prevent or stimulate erosion. Thus, riverbank erosion can be determined by a regression model using independent variables that are considered to affect the erosion process. The impact of such variables may vary spatially, therefore, a non-stationary regression model is preferred instead of a stationary equivalent. Locally Weighted Regression (LWR) is proposed as a suitable choice. This method can be extended to predict the binary presence or absence of erosion based on a series of independent local variables by using the logistic regression model. It is referred to as Locally Weighted Logistic Regression (LWLR). Logistic regression is a type of regression analysis used for predicting the outcome of a categorical dependent variable (e.g. binary response) based on one or more predictor variables. The method can be combined with LWR to assign weights to local independent variables of the dependent one. LWR allows model parameters to vary over space in order to reflect spatial heterogeneity. The probabilities of the possible outcomes are modelled as a function of the independent variables using a logistic function. Logistic regression measures the relationship between a categorical dependent variable and, usually, one or several continuous independent variables by converting the dependent variable to probability scores. Then, a logistic regression is formed, which predicts success or failure of a given binary variable (e.g. erosion presence or absence) for any value of the independent variables. The erosion occurrence probability can be calculated in conjunction with the model deviance regarding the independent variables tested. The most straightforward measure for goodness of fit is the G statistic. It is a simple and effective way to study and evaluate the Logistic Regression model efficiency and the reliability of each independent variable. The developed statistical model is applied to the Koiliaris River Basin on the island of Crete, Greece. Two datasets of river bank slope, river cross-section width and indications of erosion were available for the analysis (12 and 8 locations). Two different types of spatial dependence functions, exponential and tricubic, were examined to determine the local spatial dependence of the independent variables at the measurement locations. The results show a significant improvement when the tricubic function is applied as the erosion probability is accurately predicted at all eight validation locations. Results for the model deviance show that cross-section width is more important than bank slope in the estimation of erosion probability along the Koiliaris riverbanks. The proposed statistical model is a useful tool that quantifies the erosion probability along the riverbanks and can be used to assist managing erosion and flooding events. Acknowledgements This work is part of an on-going THALES project (CYBERSENSORS - High Frequency Monitoring System for Integrated Water Resources Management of Rivers). 
The project has been co-financed by the European Union (European Social Fund - ESF) and Greek national funds through the Operational Program "Education and Lifelong Learning" of the National Strategic Reference Framework (NSRF) - Research Funding Program: THALES. Investing in knowledge society through the European Social Fund.
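As a rough illustration of the locally weighted logistic regression described above, the sketch below weights observations by a tricube kernel of their distance to a target riverbank site and performs a weighted logistic fit there. The library (scikit-learn), variable names, bandwidth and toy data are assumptions made for illustration, not the authors' implementation.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def tricube(d, bandwidth):
    """Tricube spatial weight: near one close to the target site, zero beyond the bandwidth."""
    u = np.clip(np.abs(d) / bandwidth, 0.0, 1.0)
    return (1.0 - u**3) ** 3

def lwlr_predict(X, y, coords, target_coord, bandwidth=500.0):
    """Fit a distance-weighted logistic regression for one target location
    and return the local erosion probability there."""
    d = np.linalg.norm(coords - target_coord, axis=1)   # distances to the target site
    w = tricube(d, bandwidth)                           # local tricube weights
    model = LogisticRegression()
    model.fit(X, y, sample_weight=w)                    # weighted logistic fit
    x0 = X[np.argmin(d)].reshape(1, -1)                 # covariates at (or nearest to) the site
    return model.predict_proba(x0)[0, 1]

# Toy example: bank slope and cross-section width as covariates, binary erosion indicator.
rng = np.random.default_rng(0)
coords = rng.uniform(0, 5000, size=(12, 2))
X = np.column_stack([rng.uniform(10, 45, 12), rng.uniform(5, 40, 12)])
y = rng.integers(0, 2, 12)
print(lwlr_predict(X, y, coords, coords[0]))
```

An exponential spatial dependence function, as also examined in the study, could be substituted by replacing the tricube weight function.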
Goltz, Annemarie; Janowitz, Deborah; Hannemann, Anke; Nauck, Matthias; Hoffmann, Johanna; Seyfart, Tom; Völzke, Henry; Terock, Jan; Grabe, Hans Jörgen
2018-06-19
Depression and obesity are widespread and closely linked. Brain-derived neurotrophic factor (BDNF) and vitamin D are both assumed to be associated with depression and obesity. Little is known about the interplay between vitamin D and BDNF. We explored the putative associations and interactions between serum BDNF and vitamin D levels with depressive symptoms and abdominal obesity in a large population-based cohort. Data were obtained from the population-based Study of Health in Pomerania (SHIP)-Trend (n = 3,926). The associations of serum BDNF and vitamin D levels with depressive symptoms (measured using the Patient Health Questionnaire) were assessed with binary and multinomial logistic regression models. The associations of serum BDNF and vitamin D levels with obesity (measured by the waist-to-hip ratio [WHR]) were assessed with binary logistic and linear regression models with restricted cubic splines. Logistic regression models revealed inverse associations of vitamin D with depression (OR = 0.966; 95% CI 0.951-0.981) and obesity (OR = 0.976; 95% CI 0.967-0.985). No linear association of serum BDNF with depression or obesity was found. However, linear regression models revealed a U-shaped association of BDNF with WHR (p < 0.001). Vitamin D was inversely associated with depression and obesity. BDNF was associated with abdominal obesity, but not with depression. At the population level, our results support the relevant roles of vitamin D and BDNF in mental and physical health-related outcomes. © 2018 S. Karger AG, Basel.
NASA Astrophysics Data System (ADS)
Wulandari, S. P.; Salamah, M.; Rositawati, A. F. D.
2018-04-01
Food security is the condition in which food provision is managed well at every level, from the country down to the individual. Indonesia is one of the countries committed to making food security a main priority. However, food needs are often met without regard to nutritional standards or the health condition of family members, so the fulfilment of food needs should also take into account diseases suffered by family members, one of which is pulmonary tuberculosis. For these reasons, this research was conducted to identify the factors that influence the food security status of households affected by pulmonary tuberculosis in the coastal area of Surabaya, using the binary logistic regression method. The analysis shows that the wife's latest education, house occupancy density and house ventilation area significantly affect the food security status of households affected by pulmonary tuberculosis in the coastal area of Surabaya: households in which the wife's education level is university/equivalent, the occupancy density meets the standard of 8 m2/person and the ventilation area is 10% of the floor area have an estimated probability of 0.911089 of being food secure, while the probability of being food insecure is 0.088911. The fitted model for household food security status is adequate, and the overall classification accuracy is 71.8%.
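The two reported probabilities are complementary fitted values from the same logistic model; in general, for a covariate profile x (a standard relation, not specific to this study),
\[
\hat P(\text{food secure}\mid x)=\frac{\exp(x^{\top}\hat\beta)}{1+\exp(x^{\top}\hat\beta)},\qquad
\hat P(\text{food insecure}\mid x)=1-\hat P(\text{food secure}\mid x),
\]
which is why 0.911089 and 0.088911 sum to one.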
NASA Astrophysics Data System (ADS)
Danandeh Mehr, Ali; Nourani, Vahid; Hrnjica, Bahrudin; Molajou, Amir
2017-12-01
The effectiveness of genetic programming (GP) for solving regression problems in hydrology has been recognized in recent studies. However, its capability to solve classification problems has not been sufficiently explored so far. This study develops and applies a novel classification-forecasting model, namely Binary GP (BGP), for teleconnection studies between sea surface temperature (SST) variations and maximum monthly rainfall (MMR) events. The BGP integrates certain types of data pre-processing and post-processing methods with conventional GP engine to enhance its ability to solve both regression and classification problems simultaneously. The model was trained and tested using SST series of Black Sea, Mediterranean Sea, and Red Sea as potential predictors as well as classified MMR events at two locations in Iran as predictand. Skill of the model was measured in regard to different rainfall thresholds and SST lags and compared to that of the hybrid decision tree-association rule (DTAR) model available in the literature. The results indicated that the proposed model can identify potential teleconnection signals of surrounding seas beneficial to long-term forecasting of the occurrence of the classified MMR events.
Balk, Benjamin; Elder, Kelly
2000-01-01
We model the spatial distribution of snow across a mountain basin using an approach that combines binary decision tree and geostatistical techniques. In April 1997 and 1998, intensive snow surveys were conducted in the 6.9‐km2 Loch Vale watershed (LVWS), Rocky Mountain National Park, Colorado. Binary decision trees were used to model the large‐scale variations in snow depth, while the small‐scale variations were modeled through kriging interpolation methods. Binary decision trees related depth to the physically based independent variables of net solar radiation, elevation, slope, and vegetation cover type. These decision tree models explained 54–65% of the observed variance in the depth measurements. The tree‐based modeled depths were then subtracted from the measured depths, and the resulting residuals were spatially distributed across LVWS through kriging techniques. The kriged estimates of the residuals were added to the tree‐based modeled depths to produce a combined depth model. The combined depth estimates explained 60–85% of the variance in the measured depths. Snow densities were mapped across LVWS using regression analysis. Snow‐covered area was determined from high‐resolution aerial photographs. Combining the modeled depths and densities with a snow cover map produced estimates of the spatial distribution of snow water equivalence (SWE). This modeling approach offers improvement over previous methods of estimating SWE distribution in mountain basins.
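A minimal sketch of the tree-plus-kriging workflow described above: fit a binary-split regression tree to snow depth, interpolate the residuals over space, and add the two surfaces. scikit-learn's Gaussian-process regressor is used here as a stand-in for ordinary kriging, and all variable names, covariates and parameter values are illustrative assumptions rather than the authors' implementation.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(1)
n = 200
coords = rng.uniform(0, 1000, size=(n, 2))                       # survey point locations (m)
X = np.column_stack([rng.uniform(2500, 4000, n),                  # elevation
                     rng.uniform(0, 40, n),                       # slope
                     rng.uniform(0, 300, n)])                     # net solar radiation
depth = 0.002 * X[:, 0] - 0.03 * X[:, 1] + rng.normal(0, 0.3, n)  # synthetic snow depth

# 1) Large-scale structure: binary-split regression tree on physiographic variables.
tree = DecisionTreeRegressor(max_depth=4).fit(X, depth)
resid = depth - tree.predict(X)

# 2) Small-scale structure: spatially interpolate the tree residuals
#    (Gaussian-process regression plays the role of kriging here).
gp = GaussianProcessRegressor(kernel=RBF(200.0) + WhiteKernel(0.1)).fit(coords, resid)

# 3) Combined estimate at a location = tree prediction + interpolated residual.
x_new, coord_new = X[:1], coords[:1]
combined = tree.predict(x_new) + gp.predict(coord_new)
print(combined)
```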
Lim, Jongguk; Kim, Giyoung; Mo, Changyeun; Kim, Moon S; Chao, Kuanglin; Qin, Jianwei; Fu, Xiaping; Baek, Insuck; Cho, Byoung-Kwan
2016-05-01
Illegal use of nitrogen-rich melamine (C3H6N6) to boost perceived protein content of food products such as milk, infant formula, frozen yogurt, pet food, biscuits, and coffee drinks has caused serious food safety problems. Conventional methods to detect melamine in foods, such as Enzyme-linked immunosorbent assay (ELISA), High-performance liquid chromatography (HPLC), and Gas chromatography-mass spectrometry (GC-MS), are sensitive but they are time-consuming, expensive, and labor-intensive. In this research, near-infrared (NIR) hyperspectral imaging technique combined with regression coefficient of partial least squares regression (PLSR) model was used to detect melamine particles in milk powders easily and quickly. NIR hyperspectral reflectance imaging data in the spectral range of 990-1700nm were acquired from melamine-milk powder mixture samples prepared at various concentrations ranging from 0.02% to 1%. PLSR models were developed to correlate the spectral data (independent variables) with melamine concentration (dependent variables) in melamine-milk powder mixture samples. PLSR models applying various pretreatment methods were used to reconstruct the two-dimensional PLS images. PLS images were converted to the binary images to detect the suspected melamine pixels in milk powder. As the melamine concentration was increased, the numbers of suspected melamine pixels of binary images were also increased. These results suggested that NIR hyperspectral imaging technique and the PLSR model can be regarded as an effective tool to detect melamine particles in milk powders. Copyright © 2016 Elsevier B.V. All rights reserved.
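The workflow in this abstract (calibrate a PLSR model on spectra, reconstruct a two-dimensional prediction image, threshold it to a binary melamine map) can be sketched as follows; the band count, image size and threshold value are illustrative assumptions, not the study's settings.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(2)
n_train, n_bands = 120, 200                        # calibration spectra over the 990-1700 nm range
X_train = rng.normal(size=(n_train, n_bands))
y_train = rng.uniform(0.0, 1.0, n_train)           # melamine concentration (%)

pls = PLSRegression(n_components=8).fit(X_train, y_train)

# Hyperspectral image: rows x cols pixels, each pixel carrying a full NIR spectrum.
rows, cols = 50, 50
cube = rng.normal(size=(rows, cols, n_bands))
pred = pls.predict(cube.reshape(-1, n_bands)).reshape(rows, cols)  # 2-D PLS prediction image

binary_map = pred > 0.02                           # threshold flags suspected melamine pixels
print(binary_map.sum(), "suspected melamine pixels")
```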
HYPOTHESIS TESTING FOR HIGH-DIMENSIONAL SPARSE BINARY REGRESSION
Mukherjee, Rajarshi; Pillai, Natesh S.; Lin, Xihong
2015-01-01
In this paper, we study the detection boundary for minimax hypothesis testing in the context of high-dimensional, sparse binary regression models. Motivated by genetic sequencing association studies for rare variant effects, we investigate the complexity of the hypothesis testing problem when the design matrix is sparse. We observe a new phenomenon in the behavior of detection boundary which does not occur in the case of Gaussian linear regression. We derive the detection boundary as a function of two components: a design matrix sparsity index and signal strength, each of which is a function of the sparsity of the alternative. For any alternative, if the design matrix sparsity index is too high, any test is asymptotically powerless irrespective of the magnitude of signal strength. For binary design matrices with the sparsity index that is not too high, our results are parallel to those in the Gaussian case. In this context, we derive detection boundaries for both dense and sparse regimes. For the dense regime, we show that the generalized likelihood ratio is rate optimal; for the sparse regime, we propose an extended Higher Criticism Test and show it is rate optimal and sharp. We illustrate the finite sample properties of the theoretical results using simulation studies. PMID:26246645
Terza, Joseph V; Bradford, W David; Dismuke, Clara E
2008-01-01
Objective To investigate potential bias in the use of the conventional linear instrumental variables (IV) method for the estimation of causal effects in inherently nonlinear regression settings. Data Sources Smoking Supplement to the 1979 National Health Interview Survey, National Longitudinal Alcohol Epidemiologic Survey, and simulated data. Study Design Potential bias from the use of the linear IV method in nonlinear models is assessed via simulation studies and real world data analyses in two commonly encountered regression settings: (1) models with a nonnegative outcome (e.g., a count) and a continuous endogenous regressor; and (2) models with a binary outcome and a binary endogenous regressor. Principal Findings The simulation analyses show that substantial bias in the estimation of causal effects can result from applying the conventional IV method in inherently nonlinear regression settings. Moreover, the bias is not attenuated as the sample size increases. This point is further illustrated in the survey data analyses, in which IV-based estimates of the relevant causal effects diverge substantially from those obtained with appropriate nonlinear estimation methods. Conclusions We offer this research as a cautionary note to those who would opt for the use of linear specifications in inherently nonlinear settings involving endogeneity. PMID:18546544
Risk of Recurrence in Operated Parasagittal Meningiomas: A Logistic Binary Regression Model.
Escribano Mesa, José Alberto; Alonso Morillejo, Enrique; Parrón Carreño, Tesifón; Huete Allut, Antonio; Narro Donate, José María; Méndez Román, Paddy; Contreras Jiménez, Ascensión; Pedrero García, Francisco; Masegosa González, José
2018-02-01
Parasagittal meningiomas arise from the arachnoid cells of the angle formed between the superior sagittal sinus (SSS) and the brain convexity. In this retrospective study, we focused on factors that predict early recurrence and recurrence times. We reviewed 125 patients with parasagittal meningiomas operated on from 1985 to 2014. We studied the following variables: age, sex, location, laterality, histology, surgeons, invasion of the SSS, Simpson removal grade, follow-up time, angiography, embolization, radiotherapy, recurrence and recurrence time, reoperation, neurologic deficit, degree of dependency, and patient status at the end of follow-up. Patients ranged in age from 26 to 81 years (mean 57.86 years; median 60 years). There were 44 men (35.2%) and 81 women (64.8%). There were 57 patients with neurologic deficits (45.2%). The most common presenting symptom was motor deficit. World Health Organization grade I tumors were identified in 104 patients (84.6%), and the majority were the meningothelial type. Recurrence was detected in 34 cases. Time of recurrence was 9 to 336 months (mean: 84.4 months; median: 79.5 months). Male sex was identified as an independent risk factor for recurrence, with a relative risk of 2.7 (95% confidence interval 1.21-6.15), P = 0.014. Kaplan-Meier curves for recurrence had statistically significant differences depending on sex, age, histologic type, and World Health Organization histologic grade. A binary logistic regression model was fitted (Hosmer-Lemeshow test, P > 0.05); sex, tumor size, and histologic type were used in this model. Male sex is an independent risk factor for recurrence that, associated with other factors such as tumor size and histologic type, explains 74.5% of all cases in a binary regression model. Copyright © 2017 Elsevier Inc. All rights reserved.
Binary dislocation junction formation and strength in hexagonal close-packed crystals
Wu, Chi -Chin; Aubry, Sylvie; Arsenlis, Athanasios; ...
2015-12-17
This work examines binary dislocation interactions, junction formation and junction strengths in hexagonal close-packed (hcp) crystals. Through a line-tension model and dislocation dynamics (DD) simulations, the interaction and dissociation of different sets of binary junctions are investigated, involving one dislocation on the (011¯0) prismatic plane and a second dislocation on one of the following planes: (0001) basal, (11¯00) prismatic, (11¯01) primary pyramidal, or (2¯112) secondary pyramidal. Varying pairs of Burgers vectors are chosen from among the common types: the basal type <a> = 1/3<112¯0>, the prismatic type <c> = <0001>, and the pyramidal type ...
Predicting Social Trust with Binary Logistic Regression
ERIC Educational Resources Information Center
Adwere-Boamah, Joseph; Hufstedler, Shirley
2015-01-01
This study used binary logistic regression to predict social trust with five demographic variables from a national sample of adult individuals who participated in The General Social Survey (GSS) in 2012. The five predictor variables were respondents' highest degree earned, race, sex, general happiness and the importance of personally assisting…
Two-Part and Related Regression Models for Longitudinal Data
Farewell, V.T.; Long, D.L.; Tom, B.D.M.; Yiu, S.; Su, L.
2017-01-01
Statistical models that involve a two-part mixture distribution are applicable in a variety of situations. Frequently, the two parts are a model for the binary response variable and a model for the outcome variable that is conditioned on the binary response. Two common examples are zero-inflated or hurdle models for count data and two-part models for semicontinuous data. Recently, there has been particular interest in the use of these models for the analysis of repeated measures of an outcome variable over time. The aim of this review is to consider motivations for the use of such models in this context and to highlight the central issues that arise with their use. We examine two-part models for semicontinuous and zero-heavy count data, and we also consider models for count data with a two-part random effects distribution. PMID:28890906
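As a concrete, simplified illustration of the two-part idea for semicontinuous data: one model for whether the outcome is zero and a second model for its magnitude given that it is positive. The sketch below uses a logistic fit for the binary part and a linear fit on the log scale for the positive part; it is a generic cross-sectional example, not the longitudinal mixed-effects models reviewed in the paper, and all names and data are illustrative.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression, LinearRegression

rng = np.random.default_rng(3)
n = 500
X = rng.normal(size=(n, 2))
nonzero = rng.binomial(1, 0.4, n)                                # which outcomes are positive
y = nonzero * np.exp(0.5 * X[:, 0] + rng.normal(0, 0.3, n))      # semicontinuous outcome

# Part 1: probability of a non-zero outcome.
part1 = LogisticRegression().fit(X, (y > 0).astype(int))

# Part 2: magnitude of the outcome, modelled on the log scale, given that it is positive.
pos = y > 0
part2 = LinearRegression().fit(X[pos], np.log(y[pos]))

# Overall mean prediction combines both parts:
# E[y|x] ~ P(y>0|x) * E[y | y>0, x]  (ignoring the log back-transformation correction).
p_pos = part1.predict_proba(X)[:, 1]
mean_pred = p_pos * np.exp(part2.predict(X))
print(mean_pred[:5])
```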
Tan, Chuen Seng; Støer, Nathalie C; Chen, Ying; Andersson, Marielle; Ning, Yilin; Wee, Hwee-Lin; Khoo, Eric Yin Hao; Tai, E-Shyong; Kao, Shih Ling; Reilly, Marie
2017-01-01
The control of confounding is an area of extensive epidemiological research, especially in the field of causal inference for observational studies. Matched cohort and case-control study designs are commonly implemented to control for confounding effects without specifying the functional form of the relationship between the outcome and confounders. This paper extends the commonly used regression models in matched designs for binary and survival outcomes (i.e. conditional logistic and stratified Cox proportional hazards) to studies of continuous outcomes through a novel interpretation and application of logit-based regression models from the econometrics and marketing research literature. We compare the performance of the maximum likelihood estimators using simulated data and propose a heuristic argument for obtaining the residuals for model diagnostics. We illustrate our proposed approach with two real data applications. Our simulation studies demonstrate that our stratification approach is robust to model misspecification and that the distribution of the estimated residuals provides a useful diagnostic when the strata are of moderate size. In our applications to real data, we demonstrate that parity and menopausal status are associated with percent mammographic density, and that the mean level and variability of inpatient blood glucose readings vary between medical and surgical wards within a national tertiary hospital. Our work highlights how the same class of regression models, available in most statistical software, can be used to adjust for confounding in the study of binary, time-to-event and continuous outcomes.
Xu, Yun; Muhamadali, Howbeer; Sayqal, Ali; Dixon, Neil; Goodacre, Royston
2016-10-28
Partial least squares (PLS) is one of the most commonly used supervised modelling approaches for analysing multivariate metabolomics data. PLS is typically employed as either a regression model (PLS-R) or a classification model (PLS-DA). However, in metabolomics studies it is common to investigate multiple, potentially interacting, factors simultaneously following a specific experimental design. Such data often cannot be considered as a "pure" regression or a classification problem. Nevertheless, these data have often still been treated as a regression or classification problem and this could lead to ambiguous results. In this study, we investigated the feasibility of designing a hybrid target matrix Y that better reflects the experimental design than simple regression or binary class membership coding commonly used in PLS modelling. The new design of Y coding was based on the same principle used by structural modelling in machine learning techniques. Two real metabolomics datasets were used as examples to illustrate how the new Y coding can improve the interpretability of the PLS model compared to classic regression/classification coding.
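The central idea, a target matrix Y that encodes the full experimental design rather than a single regression target or one-hot class labels, can be illustrated roughly as below. The factor names and the particular coding are invented for illustration and are not the structured coding used in the paper.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(4)
n_samples, n_metabolites = 60, 300
X = rng.normal(size=(n_samples, n_metabolites))          # metabolite intensity matrix

# Two experimental factors measured on the same samples:
strain = rng.integers(0, 3, n_samples)                   # categorical factor (3 strains)
time_h = rng.choice([0, 6, 12, 24], n_samples)           # quantitative factor (sampling time)

# Hybrid target matrix: dummy columns for the categorical factor plus a scaled
# continuous column for the quantitative factor, instead of a single y vector.
Y = np.column_stack([(strain == k).astype(float) for k in range(3)] +
                    [time_h / time_h.max()])

pls = PLSRegression(n_components=5).fit(X, Y)
scores = pls.transform(X)                                # sample scores reflecting both factors
print(scores.shape)
```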
Are math readiness and personality predictive of first-year retention in engineering?
Moses, Laurie; Hall, Cathy; Wuensch, Karl; De Urquidi, Karen; Kauffmann, Paul; Swart, William; Duncan, Steve; Dixon, Gene
2011-01-01
On the basis of J. G. Borkowski, L. K. Chan, and N. Muthukrishna's model of academic success (2000), the present authors hypothesized that freshman retention in an engineering program would be related to not only basic aptitude but also affective factors. Participants were 129 college freshmen with engineering as their stated major. Aptitude was measured by SAT verbal and math scores, high school grade-point average (GPA), and an assessment of calculus readiness. Affective factors were assessed by the NEO-Five Factor Inventory (FFI; P. T. Costa & R. R. McCrae, 2007), and the Nowicki-Duke Locus of Control (LOC) scale (S. Nowicki & M. Duke, 1974). A binary logistic regression analysis found that calculus readiness and high school GPA were predictive of retention. Scores on the Neuroticism and Openness subscales from the NEO-FFI and LOC were correlated with retention status, but Openness was the only affective factor with a significant unique effect in the binary logistic regression. Results of the study lend modest support to Borkowski's model.
[Willingness of Patients with Obesity to Use New Media in Rehabilitation Aftercare].
Dorow, M; Löbner, M; Stein, J; Kind, P; Markert, J; Keller, J; Weidauer, E; Riedel-Heller, S G
2017-06-01
Digital media offer new possibilities in rehabilitation aftercare. This study investigates the rehabilitants' willingness to use new media (SMS, internet, social networks) in rehabilitation aftercare and factors that are associated with the willingness to use media-based aftercare. 92 rehabilitants (patients with obesity) filled in a questionnaire on the willingness to use new media in rehabilitation aftercare. In order to identify influencing factors, binary logistic regression models were calculated. Three quarters of the rehabilitants (76.1%) reported that they would be willing to use new media in rehabilitation aftercare. The binary logistic regression model yielded two factors that were associated with the willingness to use media-based aftercare: the possession of a smartphone and the willingness to receive telephone counseling for aftercare. The majority of the rehabilitants were willing to use new media in rehabilitation aftercare. The reasons for refusal of media-based aftercare need to be examined more closely. © Georg Thieme Verlag KG Stuttgart · New York.
NASA Astrophysics Data System (ADS)
Lu, Lin; Chang, Yunlong; Li, Yingmin; He, Youyou
2013-05-01
A transverse magnetic field was introduced to the arc plasma in the process of welding stainless steel tubes by high-speed Tungsten Inert Gas (TIG) arc welding without filler wire. The influence of the external magnetic field on welding quality was investigated. Nine sets of parameters were designed by means of an orthogonal experiment. The tensile strength of the welding joint and the form factor of the weld were regarded as the main measures of welding quality. A binary (two-variable) quadratic nonlinear regression equation was established with magnetic induction and Ar gas flow rate as the conditions. The residual standard deviation was calculated to assess the accuracy of the regression model. The results showed that the regression model was correct and effective in calculating the tensile strength and aspect ratio of the weld. Two 3D regression models were then constructed, and the effect of magnetic induction on welding quality was investigated.
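The binary quadratic regression equation referred to above is a quadratic response surface in the two factors, magnetic induction B and Ar flow rate Q; in generic form (coefficient symbols chosen for illustration) it reads
\[
y=\beta_0+\beta_1 B+\beta_2 Q+\beta_3 B^2+\beta_4 Q^2+\beta_5 BQ+\varepsilon,
\]
where y is the joint tensile strength or the weld form factor, the coefficients are estimated by least squares, and the residual standard deviation measures the quality of the fit.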
Deletion Diagnostics for Alternating Logistic Regressions
Preisser, John S.; By, Kunthel; Perin, Jamie; Qaqish, Bahjat F.
2013-01-01
Deletion diagnostics are introduced for the regression analysis of clustered binary outcomes estimated with alternating logistic regressions, an implementation of generalized estimating equations (GEE) that estimates regression coefficients in a marginal mean model and in a model for the intracluster association given by the log odds ratio. The diagnostics are developed within an estimating equations framework that recasts the estimating functions for association parameters based upon conditional residuals into equivalent functions based upon marginal residuals. Extensions of earlier work on GEE diagnostics follow directly, including computational formulae for one-step deletion diagnostics that measure the influence of a cluster of observations on the estimated regression parameters and on the overall marginal mean or association model fit. The diagnostic formulae are evaluated with simulations studies and with an application concerning an assessment of factors associated with health maintenance visits in primary care medical practices. The application and the simulations demonstrate that the proposed cluster-deletion diagnostics for alternating logistic regressions are good approximations of their exact fully iterated counterparts. PMID:22777960
Jarnevich, Catherine S.; Talbert, Marian; Morisette, Jeffrey T.; Aldridge, Cameron L.; Brown, Cynthia; Kumar, Sunil; Manier, Daniel; Talbert, Colin; Holcombe, Tracy R.
2017-01-01
Evaluating the conditions where a species can persist is an important question in ecology both to understand tolerances of organisms and to predict distributions across landscapes. Presence data combined with background or pseudo-absence locations are commonly used with species distribution modeling to develop these relationships. However, there is not a standard method to generate background or pseudo-absence locations, and method choice affects model outcomes. We evaluated combinations of both model algorithms (simple and complex generalized linear models, multivariate adaptive regression splines, Maxent, boosted regression trees, and random forest) and background methods (random, minimum convex polygon, and continuous and binary kernel density estimator (KDE)) to assess the sensitivity of model outcomes to choices made. We evaluated six questions related to model results, including five beyond the common comparison of model accuracy assessment metrics (biological interpretability of response curves, cross-validation robustness, independent data accuracy and robustness, and prediction consistency). For our case study with cheatgrass in the western US, random forest was least sensitive to background choice and the binary KDE method was least sensitive to model algorithm choice. While this outcome may not hold for other locations or species, the methods we used can be implemented to help determine appropriate methodologies for particular research questions.
Irvine, Kathryn M.; Thornton, Jamie; Backus, Vickie M.; Hohmann, Matthew G.; Lehnhoff, Erik A.; Maxwell, Bruce D.; Michels, Kurt; Rew, Lisa
2013-01-01
Commonly in environmental and ecological studies, species distribution data are recorded as presence or absence throughout a spatial domain of interest. Field based studies typically collect observations by sampling a subset of the spatial domain. We consider the effects of six different adaptive and two non-adaptive sampling designs and choice of three binary models on both predictions to unsampled locations and parameter estimation of the regression coefficients (species–environment relationships). Our simulation study is unique compared to others to date in that we virtually sample a true known spatial distribution of a nonindigenous plant species, Bromus inermis. The census of B. inermis provides a good example of a species distribution that is both sparsely (1.9 % prevalence) and patchily distributed. We find that modeling the spatial correlation using a random effect with an intrinsic Gaussian conditionally autoregressive prior distribution was equivalent or superior to Bayesian autologistic regression in terms of predicting to un-sampled areas when strip adaptive cluster sampling was used to survey B. inermis. However, inferences about the relationships between B. inermis presence and environmental predictors differed between the two spatial binary models. The strip adaptive cluster designs we investigate provided a significant advantage in terms of Markov chain Monte Carlo chain convergence when trying to model a sparsely distributed species across a large area. In general, there was little difference in the choice of neighborhood, although the adaptive king was preferred when transects were randomly placed throughout the spatial domain.
Estimating the Probability of Rare Events Occurring Using a Local Model Averaging.
Chen, Jin-Hua; Chen, Chun-Shu; Huang, Meng-Fan; Lin, Hung-Chih
2016-10-01
In statistical applications, logistic regression is a popular method for analyzing binary data accompanied by explanatory variables. But when one of the two outcomes is rare, the estimation of model parameters has been shown to be severely biased and hence estimating the probability of rare events occurring based on a logistic regression model would be inaccurate. In this article, we focus on estimating the probability of rare events occurring based on logistic regression models. Instead of selecting a best model, we propose a local model averaging procedure based on a data perturbation technique applied to different information criteria to obtain different probability estimates of rare events occurring. Then an approximately unbiased estimator of Kullback-Leibler loss is used to choose the best one among them. We design complete simulations to show the effectiveness of our approach. For illustration, a necrotizing enterocolitis (NEC) data set is analyzed. © 2016 Society for Risk Analysis.
ERIC Educational Resources Information Center
Valenti, Alix; Schneider, Marguerite
2012-01-01
This paper utilizes the behavioral agency model to investigate why many formerly public companies have been converted to privately held corporations. Using a matched pairs sample and categorical binary regression, and controlling for effects found in previous studies, we explore how the equity ownership of those entrusted to manage firms, the…
An Examination of Master's Student Retention & Completion
ERIC Educational Resources Information Center
Barry, Melissa; Mathies, Charles
2011-01-01
This study was conducted at a research-extensive public university in the southeastern United States. It examined the retention and completion of master's degree students across numerous disciplines. Results were derived from a series of descriptive statistics, T-tests, and a series of binary logistic regression models. The findings from binary…
Use of generalized ordered logistic regression for the analysis of multidrug resistance data.
Agga, Getahun E; Scott, H Morgan
2015-10-01
Statistical analysis of antimicrobial resistance data largely focuses on individual antimicrobial's binary outcome (susceptible or resistant). However, bacteria are becoming increasingly multidrug resistant (MDR). Statistical analysis of MDR data is mostly descriptive often with tabular or graphical presentations. Here we report the applicability of generalized ordinal logistic regression model for the analysis of MDR data. A total of 1,152 Escherichia coli, isolated from the feces of weaned pigs experimentally supplemented with chlortetracycline (CTC) and copper, were tested for susceptibilities against 15 antimicrobials and were binary classified into resistant or susceptible. The 15 antimicrobial agents tested were grouped into eight different antimicrobial classes. We defined MDR as the number of antimicrobial classes to which E. coli isolates were resistant ranging from 0 to 8. Proportionality of the odds assumption of the ordinal logistic regression model was violated only for the effect of treatment period (pre-treatment, during-treatment and post-treatment); but not for the effect of CTC or copper supplementation. Subsequently, a partially constrained generalized ordinal logistic model was built that allows for the effect of treatment period to vary while constraining the effects of treatment (CTC and copper supplementation) to be constant across the levels of MDR classes. Copper (Proportional Odds Ratio [Prop OR]=1.03; 95% CI=0.73-1.47) and CTC (Prop OR=1.1; 95% CI=0.78-1.56) supplementation were not significantly associated with the level of MDR adjusted for the effect of treatment period. MDR generally declined over the trial period. In conclusion, generalized ordered logistic regression can be used for the analysis of ordinal data such as MDR data when the proportionality assumptions for ordered logistic regression are violated. Published by Elsevier B.V.
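The partially constrained generalized ordinal logistic model described above can be written schematically as (a standard partial proportional odds formulation, with symbols chosen for illustration)
\[
\log\frac{P(Y>j\mid x,z)}{P(Y\le j\mid x,z)}=\alpha_j+x^{\top}\beta+z^{\top}\gamma_j,\qquad j=0,1,\dots,7,
\]
where x holds the CTC and copper treatment terms constrained to a common effect β across MDR levels, and z holds the treatment-period terms whose effects γ_j are allowed to vary by level.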
Face Alignment via Regressing Local Binary Features.
Ren, Shaoqing; Cao, Xudong; Wei, Yichen; Sun, Jian
2016-03-01
This paper presents a highly efficient and accurate regression approach for face alignment. Our approach has two novel components: 1) a set of local binary features and 2) a locality principle for learning those features. The locality principle guides us to learn a set of highly discriminative local binary features for each facial landmark independently. The obtained local binary features are used to jointly learn a linear regression for the final output. This approach achieves the state-of-the-art results when tested on the most challenging benchmarks to date. Furthermore, because extracting and regressing local binary features are computationally very cheap, our system is much faster than previous methods. It achieves over 3000 frames per second (FPS) on a desktop or 300 FPS on a mobile phone for locating a few dozens of landmarks. We also study a key issue that is important but has received little attention in the previous research, which is the face detector used to initialize alignment. We investigate several face detectors and perform quantitative evaluation on how they affect alignment accuracy. We find that an alignment friendly detector can further greatly boost the accuracy of our alignment method, reducing the error up to 16% relatively. To facilitate practical usage of face detection/alignment methods, we also propose a convenient metric to measure how good a detector is for alignment initialization.
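A rough sketch of the two-stage idea in this abstract: per-landmark trees generate sparse binary (leaf-indicator) features, which are then fed to a single global linear regression for the shape update. The patch representation, tree settings and ridge regressor below are simplifications for illustration, not the paper's exact pipeline.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.preprocessing import OneHotEncoder
from sklearn.linear_model import Ridge

rng = np.random.default_rng(5)
n_imgs, n_landmarks, patch_dim = 300, 5, 32
patches = rng.normal(size=(n_landmarks, n_imgs, patch_dim))   # local appearance around each landmark
offsets = rng.normal(size=(n_imgs, 2 * n_landmarks))          # target shape increments (dx, dy per landmark)

# Stage 1: learn local binary features independently for each landmark
# (the leaf each sample falls into becomes one binary indicator).
leaf_codes = []
for l in range(n_landmarks):
    forest = RandomForestRegressor(n_estimators=10, max_depth=4, random_state=0)
    forest.fit(patches[l], offsets[:, 2 * l:2 * l + 2])
    leaf_codes.append(forest.apply(patches[l]))               # (n_imgs, n_trees) leaf indices

binary_features = OneHotEncoder().fit_transform(np.hstack(leaf_codes))  # sparse binary matrix

# Stage 2: one global linear regression maps the concatenated binary features
# to the full shape update for all landmarks jointly.
global_reg = Ridge(alpha=1.0).fit(binary_features, offsets)
print(binary_features.shape, global_reg.predict(binary_features[:1]).shape)
```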
Gretchen G. Moisen; Elizabeth A. Freeman; Jock A. Blackard; Tracey S. Frescino; Niklaus E. Zimmermann; Thomas C. Edwards
2006-01-01
Many efforts are underway to produce broad-scale forest attribute maps by modelling forest class and structure variables collected in forest inventories as functions of satellite-based and biophysical information. Typically, variants of classification and regression trees implemented in Rulequest's© See5 and Cubist (for binary and continuous responses,...
The Mantel-Haenszel Procedure Revisited: Models and Generalizations
Fidler, Vaclav; Nagelkerke, Nico
2013-01-01
Several statistical methods have been developed for adjusting the Odds Ratio of the relation between two dichotomous variables X and Y for some confounders Z. With the exception of the Mantel-Haenszel method, commonly used methods, notably binary logistic regression, are not symmetrical in X and Y. The classical Mantel-Haenszel method however only works for confounders with a limited number of discrete strata, which limits its utility, and appears to have no basis in statistical models. Here we revisit the Mantel-Haenszel method and propose an extension to continuous and vector valued Z. The idea is to replace the observed cell entries in strata of the Mantel-Haenszel procedure by subject specific classification probabilities for the four possible values of (X,Y) predicted by a suitable statistical model. For situations where X and Y can be treated symmetrically we propose and explore the multinomial logistic model. Under the homogeneity hypothesis, which states that the odds ratio does not depend on Z, the logarithm of the odds ratio estimator can be expressed as a simple linear combination of three parameters of this model. Methods for testing the homogeneity hypothesis are proposed. The relationship between this method and binary logistic regression is explored. A numerical example using survey data is presented. PMID:23516463
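For reference, the classical Mantel-Haenszel odds-ratio estimator that this proposal generalises is, for strata k with 2x2 cell counts a_k, b_k, c_k, d_k and stratum totals n_k,
\[
\widehat{\mathrm{OR}}_{\mathrm{MH}}=\frac{\sum_k a_k d_k/n_k}{\sum_k b_k c_k/n_k};
\]
the extension replaces these observed cell counts with model-based classification probabilities, so that Z may be continuous or vector valued.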
Austin, Peter C.; Stryhn, Henrik; Leckie, George; Merlo, Juan
2017-01-01
Multilevel data occur frequently in many research areas like health services research and epidemiology. A suitable way to analyze such data is through the use of multilevel regression models. These models incorporate cluster‐specific random effects that allow one to partition the total variation in the outcome into between‐cluster variation and between‐individual variation. The magnitude of the effect of clustering provides a measure of the general contextual effect. When outcomes are binary or time‐to‐event in nature, the general contextual effect can be quantified by measures of heterogeneity like the median odds ratio or the median hazard ratio, respectively, which can be calculated from a multilevel regression model. Outcomes that are integer counts denoting the number of times that an event occurred are common in epidemiological and medical research. The median (incidence) rate ratio in multilevel Poisson regression for counts that corresponds to the median odds ratio or median hazard ratio for binary or time‐to‐event outcomes respectively is relatively unknown and is rarely used. The median rate ratio is the median relative change in the rate of the occurrence of the event when comparing identical subjects from 2 randomly selected different clusters that are ordered by rate. We also describe how the variance partition coefficient, which denotes the proportion of the variation in the outcome that is attributable to between‐cluster differences, can be computed with count outcomes. We illustrate the application and interpretation of these measures in a case study analyzing the rate of hospital readmission in patients discharged from hospital with a diagnosis of heart failure. PMID:29114926
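For a multilevel model with a normally distributed random intercept of variance σ² on the linear-predictor scale, the median odds ratio, and analogously the median rate ratio in the Poisson case, takes the well-known form
\[
\mathrm{MOR}=\exp\!\left(\sqrt{2\sigma^{2}}\,\Phi^{-1}(0.75)\right)\approx\exp(0.954\,\sigma),
\]
with the median rate ratio given by the same expression applied to the cluster variance of the multilevel Poisson model.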
Predicting the "graduate on time (GOT)" of PhD students using binary logistics regression model
NASA Astrophysics Data System (ADS)
Shariff, S. Sarifah Radiah; Rodzi, Nur Atiqah Mohd; Rahman, Kahartini Abdul; Zahari, Siti Meriam; Deni, Sayang Mohd
2016-10-01
The Malaysian government has recently set a new goal to produce 60,000 Malaysian PhD holders by the year 2023. As Malaysia's largest institution of higher learning in terms of size and population, offering more than 500 academic programmes in a conducive and vibrant environment, UiTM has taken several initiatives to fill the gap. Increasing the number of PhD graduates is a challenging process. On many occasions it has been observed that reaching the target set is even more daunting than expected and that implementation has been far too idealistic, with progress further slowed as the attrition rate increases. This study aims to apply models that incorporate several factors to predict the number of PhD students who will complete their PhD studies on time. A binary logistic regression model is proposed and applied to the dataset to determine this number. The results show that only 6.8% of the 2014 PhD students are predicted to graduate on time, and the results are compared with the actual number for validation purposes.
Attitudes towards Participation in Business Development Programmes: An Ethnic Comparison in Sweden
ERIC Educational Resources Information Center
Abbasian, Saeid; Yazdanfar, Darush
2015-01-01
Purpose: The aim of the study is to investigate whether there are any differences between the attitudes towards participation in development programmes of entrepreneurs who are immigrants and those who are native-born. Design/methodology/approach: Several statistical methods, including a binary logistic regression model, were used to analyse a…
2004 Carolyn Sherif Award Address: Heart Disease and Gender Inequity
ERIC Educational Resources Information Center
Travis, Cheryl Brown
2005-01-01
Individual patient records from the National Hospital Discharge Survey for 1988 and 1998 comprising approximately 10 million cases were the basis for a binary logistic regression model to predict coronary artery bypass graft. Patterns in 1988 and in 1998 indicated a dramatic and pernicious gender discrepancy in medical decisions involving bypass…
NASA Astrophysics Data System (ADS)
Basak, Subhash C.; Mills, Denise; Hawkins, Douglas M.
2008-06-01
A hierarchical classification study was carried out based on a set of 70 chemicals—35 which produce allergic contact dermatitis (ACD) and 35 which do not. This approach was implemented using a regular ridge regression computer code, followed by conversion of regression output to binary data values. The hierarchical descriptor classes used in the modeling include topostructural (TS), topochemical (TC), and quantum chemical (QC), all of which are based solely on chemical structure. The concordance, sensitivity, and specificity are reported. The model based on the TC descriptors was found to be the best, while the TS model was extremely poor.
glmnetLRC f/k/a lrc package: Logistic Regression Classification
DOE Office of Scientific and Technical Information (OSTI.GOV)
2016-06-09
Methods for fitting and predicting logistic regression classifiers (LRC) with an arbitrary loss function using elastic net or best subsets. This package adds additional model fitting features to the existing glmnet and bestglm R packages. This package was created to perform the analyses described in Amidan BG, Orton DJ, LaMarche BL, et al. 2014. Signatures for Mass Spectrometry Data Quality. Journal of Proteome Research. 13(4), 2215-2222. It makes the model fitting available in the glmnet and bestglm packages more general by identifying optimal model parameters via cross validation with a customizable loss function. It also identifies the optimal threshold for binary classification.
Multicomponent ionic liquid CMC prediction.
Kłosowska-Chomiczewska, I E; Artichowicz, W; Preiss, U; Jungnickel, C
2017-09-27
We created a model to predict the CMC of ILs based on 704 experimental values published in 43 publications since 2000. Our model was able to predict the CMC of a variety of ILs in binary or ternary systems in the presence of salt or alcohol. The molecular volume of the IL (V_m), solvent-accessible surface (Ŝ), solvation enthalpy (Δ_solv G^∞), concentration of salt (C_s) or alcohol (C_a) and their molecular volumes (V_ms and V_ma, respectively) were chosen as descriptors, and Kernel Support Vector Machine (KSVM) and Evolutionary Algorithm (EA) regression methodologies were used to create the models. Data were split into training and validation sets (80/20) and subjected to bootstrap aggregation. KSVM provided a better fit, with an average R^2 of 0.843 and MSE of 0.608, whereas EA resulted in an R^2 of 0.794 and MSE of 0.973. The sensitivity analysis showed that V_m and Ŝ have the highest impact on IL micellization in both binary and ternary systems; surprisingly, however, in the presence of alcohol V_m becomes insignificant/irrelevant. Whether a descriptor stabilizes or destabilizes micelles depends upon the additives. Previous attempts at modelling the CMC of ILs were generally limited to a small number of ILs in simplified (binary) systems. Here, however, we show successful prediction of the CMC over a range of different systems (binary and ternary).
Risk estimation using probability machines.
Dasgupta, Abhijit; Szymczak, Silke; Moore, Jason H; Bailey-Wilson, Joan E; Malley, James D
2014-03-01
Logistic regression has been the de facto, and often the only, model used in the description and analysis of relationships between a binary outcome and observed features. It is widely used to obtain the conditional probabilities of the outcome given predictors, as well as predictor effect size estimates using conditional odds ratios. We show how statistical learning machines for binary outcomes, provably consistent for the nonparametric regression problem, can be used to provide both consistent conditional probability estimation and conditional effect size estimates. Effect size estimates from learning machines leverage our understanding of counterfactual arguments central to the interpretation of such estimates. We show that, if the data generating model is logistic, we can recover accurate probability predictions and effect size estimates with nearly the same efficiency as a correct logistic model, both for main effects and interactions. We also propose a method using learning machines to scan for possible interaction effects quickly and efficiently. Simulations using random forest probability machines are presented. The models we propose make no assumptions about the data structure, and capture the patterns in the data by just specifying the predictors involved and not any particular model structure. So they do not run the same risks of model mis-specification and the resultant estimation biases as a logistic model. This methodology, which we call a "risk machine", will share properties from the statistical machine that it is derived from.
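A minimal sketch of the "probability machine" idea with a random forest: estimate conditional probabilities non-parametrically, then read off an effect size from the counterfactual contrast of predicted probabilities with a binary predictor switched on versus off. This illustrates the concept only; the data, forest settings and effect measure (a risk difference) are assumptions, not the authors' implementation.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(6)
n = 2000
exposure = rng.binomial(1, 0.5, n)                       # binary predictor of interest
covars = rng.normal(size=(n, 3))
logit = -0.5 + 1.0 * exposure + covars @ np.array([0.8, -0.4, 0.2])
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))            # binary outcome from a logistic model

X = np.column_stack([exposure, covars])
rf = RandomForestClassifier(n_estimators=500, min_samples_leaf=20, random_state=0).fit(X, y)

# Counterfactual contrast: predicted probability with exposure set to 1 vs 0 for everyone.
X1, X0 = X.copy(), X.copy()
X1[:, 0], X0[:, 0] = 1, 0
risk_diff = rf.predict_proba(X1)[:, 1] - rf.predict_proba(X0)[:, 1]
print("average risk difference:", risk_diff.mean())
```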
Detecting nonsense for Chinese comments based on logistic regression
NASA Astrophysics Data System (ADS)
Zhuolin, Ren; Guang, Chen; Shu, Chen
2016-07-01
To understand cyber citizens' opinions accurately from Chinese news comments, a clear definition of nonsense is presented, and a detection model based on logistic regression (LR) is proposed. The detection of nonsense can be treated as a binary classification problem. Besides traditional lexical features, we propose three kinds of features in terms of emotion, structure and relevance. With these features, we train an LR model and demonstrate its effectiveness in understanding Chinese news comments. We find that each of the proposed features significantly improves the result. In our experiments, we achieve a prediction accuracy of 84.3%, improving on the 77.3% baseline by 7 percentage points.
Zhang, Chao; Jia, Pengli; Yu, Liu; Xu, Chang
2018-05-01
Dose-response meta-analysis (DRMA) is widely applied to investigate the dose-specific relationship between independent and dependent variables. Such methods have been in use for over 30 years and are increasingly employed in healthcare and clinical decision-making. In this article, we give an overview of the methodology used in DRMA. We summarize the commonly used regression models and the pooling methods in DRMA, and we use an example to illustrate how to conduct a DRMA with these methods. Five regression models (linear regression, piecewise regression, natural polynomial regression, fractional polynomial regression, and restricted cubic spline regression) are illustrated for fitting the dose-response relationship, and two types of pooling approaches, the one-stage approach and the two-stage approach, are illustrated for pooling the dose-response relationship across studies. The example showed similar results among these models. Several dose-response meta-analysis methods can be used to investigate the relationship between exposure level and the risk of an outcome; however, the methodology of DRMA still needs to be improved. © 2018 Chinese Cochrane Center, West China Hospital of Sichuan University and John Wiley & Sons Australia, Ltd.
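In the two-stage approach mentioned above, a dose-response coefficient is first estimated within each study and the study-specific estimates are then pooled; with inverse-variance (fixed-effect) weighting this is simply
\[
\hat\beta_{\text{pooled}}=\frac{\sum_i w_i\,\hat\beta_i}{\sum_i w_i},\qquad w_i=\frac{1}{\widehat{\operatorname{Var}}(\hat\beta_i)},
\]
whereas the one-stage approach fits a single dose-response model (for example a spline) to all studies at once with study-level terms. This is the generic form of the pooling step, not a formula specific to this article.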
Smooth Scalar-on-Image Regression via Spatial Bayesian Variable Selection
Goldsmith, Jeff; Huang, Lei; Crainiceanu, Ciprian M.
2013-01-01
We develop scalar-on-image regression models when images are registered multidimensional manifolds. We propose a fast and scalable Bayes inferential procedure to estimate the image coefficient. The central idea is the combination of an Ising prior distribution, which controls a latent binary indicator map, and an intrinsic Gaussian Markov random field, which controls the smoothness of the nonzero coefficients. The model is fit using a single-site Gibbs sampler, which allows fitting within minutes for hundreds of subjects with predictor images containing thousands of locations. The code is simple and is provided in less than one page in the Appendix. We apply this method to a neuroimaging study where cognitive outcomes are regressed on measures of white matter microstructure at every voxel of the corpus callosum for hundreds of subjects. PMID:24729670
Austin, Peter C; Lee, Douglas S; Steyerberg, Ewout W; Tu, Jack V
2012-01-01
In biomedical research, the logistic regression model is the most commonly used method for predicting the probability of a binary outcome. While many clinical researchers have expressed an enthusiasm for regression trees, this method may have limited accuracy for predicting health outcomes. We aimed to evaluate the improvement that is achieved by using ensemble-based methods, including bootstrap aggregation (bagging) of regression trees, random forests, and boosted regression trees. We analyzed 30-day mortality in two large cohorts of patients hospitalized with either acute myocardial infarction (N = 16,230) or congestive heart failure (N = 15,848) in two distinct eras (1999–2001 and 2004–2005). We found that both the in-sample and out-of-sample prediction of ensemble methods offered substantial improvement in predicting cardiovascular mortality compared to conventional regression trees. However, conventional logistic regression models that incorporated restricted cubic smoothing splines had even better performance. We conclude that ensemble methods from the data mining and machine learning literature increase the predictive performance of regression trees, but may not lead to clear advantages over conventional logistic regression models for predicting short-term mortality in population-based samples of subjects with cardiovascular disease. PMID:22777999
ERIC Educational Resources Information Center
Albaqshi, Amani Mohammed H.
2017-01-01
Functional Data Analysis (FDA) has attracted substantial attention for the last two decades. Within FDA, classifying curves into two or more categories is consistently of interest to scientists, but multi-class prediction within FDA is challenged in that most classification tools have been limited to binary response applications. The functional…
The logistic model for predicting the non-gonoactive Aedes aegypti females.
Reyes-Villanueva, Filiberto; Rodríguez-Pérez, Mario A
2004-01-01
To estimate, using logistic regression, the likelihood of occurrence of a non-gonoactive Aedes aegypti female, previously fed human blood, in relation to body size and collection method. This study was conducted in Monterrey, Mexico, between 1994 and 1996. Ten samplings of 60 Ae. aegypti females each were carried out in three dengue endemic areas: six of biting females, two of emerging mosquitoes, and two of indoor resting females. Gravid females, as well as those with blood in the gut, were removed. Mosquitoes were taken to the laboratory and engorged on human blood. After 48 hours, ovaries were dissected to register whether they were gonoactive or non-gonoactive. Wing length in mm was used as an indicator of body size. The logistic regression model was used to assess the likelihood of non-gonoactivity, as a binary variable, in relation to wing length and collection method. Of the 600 females, 164 (27%) remained non-gonoactive, with a wing-length range of 1.9-3.2 mm, almost equal to that of all females (1.8-3.3 mm). The logistic regression model showed a significant likelihood of a female remaining non-gonoactive (Y=1). The collection method did not influence the binary response, but there was an inverse relationship between non-gonoactivity and wing length. Dengue vector populations from Monterrey, Mexico display a wide range of body sizes. Logistic regression was a useful tool to estimate the likelihood of an engorged female remaining non-gonoactive. The need for a second blood meal is present in any female, but small mosquitoes are more likely to bite again within a 2-day interval in order to attain egg maturation. The English version of this paper is also available at: http://www.insp.mx/salud/index.html.
Quantitative Structure – Property Relationship Modeling of Remote Liposome Loading Of Drugs
Cern, Ahuva; Golbraikh, Alexander; Sedykh, Aleck; Tropsha, Alexander; Barenholz, Yechezkel; Goldblum, Amiram
2012-01-01
Remote loading of liposomes by trans-membrane gradients is used to achieve therapeutically efficacious intra-liposome concentrations of drugs. We have developed Quantitative Structure Property Relationship (QSPR) models of remote liposome loading for a dataset including 60 drugs studied in 366 loading experiments internally or elsewhere. Both experimental conditions and computed chemical descriptors were employed as independent variables to predict the initial drug/lipid ratio (D/L) required to achieve high loading efficiency. Both binary (to distinguish high vs. low initial D/L) and continuous (to predict real D/L values) models were generated using advanced machine learning approaches and five-fold external validation. The external prediction accuracy for binary models was as high as 91–96%; for continuous models the mean coefficient R2 for regression between predicted versus observed values was 0.76–0.79. We conclude that QSPR models can be used to identify candidate drugs expected to have high remote loading capacity while simultaneously optimizing the design of formulation experiments. PMID:22154932
NASA Astrophysics Data System (ADS)
Nong, Yu; Du, Qingyun; Wang, Kun; Miao, Lei; Zhang, Weiwei
2008-10-01
Urban growth modeling, one of the most important aspects of land use and land cover change studies, has attracted substantial attention because it helps to explain the mechanisms of land use change and thus informs relevant policy-making. This study applied multinomial logistic regression to model urban growth in Jiayu county of Hubei province, China, to discover the relationship between urban growth and its driving forces, with biophysical and socio-economic factors selected as independent variables. This type of regression is similar to binary logistic regression, but it is more general because the dependent variable is not restricted to two categories, as it was in previous studies. The multinomial model can simulate the competition among multiple land uses: urban land, bare land, cultivated land and orchard land. Taking the urban land use type as the reference category, parameters can be estimated and interpreted as odds ratios. A probability map is generated from the model to predict where urban growth will occur.
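A schematic of multinomial logistic estimation with urban land as the reference category, with odds ratios obtained by exponentiating the coefficients. The land-use codes, driver variables and library choice (statsmodels) are placeholders for illustration, not the study's actual data or software.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(7)
n = 1000
# Driving-force covariates (e.g. slope, distance to road, population density) - illustrative only.
drivers = rng.normal(size=(n, 3))
# Land-use outcome coded 0 = urban (reference), 1 = bare, 2 = cultivated, 3 = orchard.
land_use = rng.integers(0, 4, n)

X = sm.add_constant(drivers)
fit = sm.MNLogit(land_use, X).fit(disp=False)

odds_ratios = np.exp(fit.params)   # each column contrasts one land-use class with the urban reference
print(odds_ratios)
```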
The As-Cu-Ni System: A Chemical Thermodynamic Model for Ancient Recycling
NASA Astrophysics Data System (ADS)
Sabatini, Benjamin J.
2015-12-01
This article is the first thermodynamically reasoned ancient metal system assessment intended for use by archaeologists and archaeometallurgists to aid in the interpretation of remelted/recycled copper alloys composed of arsenic and copper, and arsenic, copper, and nickel. These models are meant to fulfill two main purposes: first, to be applied toward the identification of progressive and regressive temporal changes in artifact chemistry that would have occurred due to recycling, and second, to provide thermodynamic insight into why such metal combinations existed in antiquity. Built on well-established thermodynamics, these models were created using a combination of custom-written software and published binary thermodynamic systems data adjusted to within the boundary conditions of 1200°C and 1 atm. Using these parameters, the behavior of each element and their likelihood of loss in the binaries As-Cu, As-Ni, Cu-Ni, and ternary As-Cu-Ni, systems, under assumed ancient furnace conditions, was determined.
Tarafder, Sumit; Toukir Ahmed, Md; Iqbal, Sumaiya; Tamjidul Hoque, Md; Sohel Rahman, M
2018-03-14
Accessible surface area (ASA) of a protein residue is an effective feature for protein structure prediction, binding region identification, fold recognition problems, etc. Improving the prediction of ASA by the application of effective feature variables is a challenging but explorable task to consider, especially in the field of machine learning. Among the existing predictors of ASA, REGAd³p is a highly accurate ASA predictor based on regularized exact regression with a polynomial kernel of degree 3. In this work, we present a new predictor, RBSURFpred, which extends REGAd³p in several dimensions by incorporating 58 physicochemical, evolutionary and structural properties into 9-tuple peptides via Chou's general PseAAC, which allowed us to obtain higher accuracies in predicting both real-valued and binary ASA. We have compared RBSURFpred for both real and binary space predictions with state-of-the-art predictors, such as REGAd³p and SPIDER2. We have also carried out a rigorous analysis of the performance of RBSURFpred in terms of different amino acids and their properties, and also with biologically relevant case studies. This performance establishes RBSURFpred as a useful tool for the community. Copyright © 2018 Elsevier Ltd. All rights reserved.
Burgette, Lane F; Reiter, Jerome P
2013-06-01
Multinomial outcomes with many levels can be challenging to model. Information typically accrues slowly with increasing sample size, yet the parameter space expands rapidly with additional covariates. Shrinking all regression parameters towards zero, as often done in models of continuous or binary response variables, is unsatisfactory, since setting parameters equal to zero in multinomial models does not necessarily imply "no effect." We propose an approach to modeling multinomial outcomes with many levels based on a Bayesian multinomial probit (MNP) model and a multiple shrinkage prior distribution for the regression parameters. The prior distribution encourages the MNP regression parameters to shrink toward a number of learned locations, thereby substantially reducing the dimension of the parameter space. Using simulated data, we compare the predictive performance of this model against two other recently-proposed methods for big multinomial models. The results suggest that the fully Bayesian, multiple shrinkage approach can outperform these other methods. We apply the multiple shrinkage MNP to simulating replacement values for areal identifiers, e.g., census tract indicators, in order to protect data confidentiality in public use datasets.
A Method for Calculating the Probability of Successfully Completing a Rocket Propulsion Ground Test
NASA Technical Reports Server (NTRS)
Messer, Bradley
2007-01-01
Propulsion ground test facilities face the daily challenge of scheduling multiple customers into limited facility space and successfully completing their propulsion test projects. Over the last decade NASA's propulsion test facilities have performed hundreds of tests, collected thousands of seconds of test data, and exceeded the capabilities of numerous test facility and test article components. A logistic regression mathematical modeling technique has been developed to predict the probability of successfully completing a rocket propulsion test. A logistic regression model is a mathematical modeling approach that can be used to describe the relationship of several independent predictor variables X1, X2, ..., Xk to a binary or dichotomous dependent variable Y, where Y can only be one of two possible outcomes, in this case Success or Failure of accomplishing a full-duration test. The use of logistic regression modeling is not new; however, modeling propulsion ground test facilities using logistic regression is both a new and unique application of the statistical technique. Results from this type of model provide project managers with insight and confidence into the effectiveness of rocket propulsion ground testing.
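The logistic model described above can be written as P(Y=1) = 1 / (1 + exp(-(b0 + b1*X1 + ... + bk*Xk))). A minimal sketch, assuming synthetic predictors rather than the facility and test-article variables actually used:

```python
# Fit a binary logistic regression to simulated success/failure data and
# predict the success probability for a new case; the predictors X1-X3 are
# hypothetical stand-ins, not the NASA ground-test variables.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 200
X = rng.normal(size=(n, 3))                       # X1..X3: hypothetical predictors
eta = -0.5 + X @ np.array([1.0, -0.8, 0.3])
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-eta)))   # 1 = full-duration success

res = sm.Logit(y, sm.add_constant(X)).fit(disp=False)
x_new = np.array([[1.0, 0.2, -0.1, 0.5]])         # [const, X1, X2, X3]
print("coefficients:", np.round(res.params, 2))
print("P(success) for a new test:", np.round(res.predict(x_new), 3))
```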
Schäffer, Beat; Pieren, Reto; Mendolia, Franco; Basner, Mathias; Brink, Mark
2017-05-01
Noise exposure-response relationships are used to estimate the effects of noise on individuals or a population. Such relationships may be derived from independent or repeated binary observations, and modeled by different statistical methods. Depending on the method by which they were established, their application in population risk assessment or estimation of individual responses may yield different results, i.e., predict "weaker" or "stronger" effects. As far as the present body of literature on noise effect studies is concerned, however, the underlying statistical methodology used to establish exposure-response relationships has not always received sufficient attention. This paper gives an overview of two statistical approaches (subject-specific and population-averaged logistic regression analysis) to establish noise exposure-response relationships from repeated binary observations, and their appropriate applications. The considerations are illustrated with data from three noise effect studies, estimating also the magnitude of differences in results when applying exposure-response relationships derived from the two statistical approaches. Depending on the underlying data set and the probability range of the binary variable it covers, the two approaches yield similar to very different results. The adequate choice of a specific statistical approach and its application in subsequent studies, both depending on the research question, are therefore crucial.
ERIC Educational Resources Information Center
Lichtenberger, Eric; George-Jackson, Casey
2013-01-01
This study examined how various individual, family, and school level contextual factors impact the likelihood of planning to major in one of the science, technology, engineering, or mathematics (STEM) fields for high school students. A binary logistic regression model was developed to determine the extent to which each of the covariates helped to…
ERIC Educational Resources Information Center
Obasaju, Mayowa A.; Palin, Frances L.; Jacobs, Carli; Anderson, Page; Kaslow, Nadine J.
2009-01-01
An ecological model is used to explore the moderating effects of community-level variables on the relation between childhood sexual, physical, and emotional abuse and adult intimate partner violence (IPV) within a sample of 98 low-income African American women. Results from hierarchical binary logistic regression analyses show that…
Yusuf, O B; Bamgboye, E A; Afolabi, R F; Shodimu, M A
2014-09-01
The logistic regression model is widely used in health research for descriptive and predictive purposes. Unfortunately, most researchers are sometimes not aware that the underlying principles of the technique have failed when the algorithm for maximum likelihood does not converge. Young researchers, particularly postgraduate students, may not know why separation problems, whether quasi-complete or complete, occur, how to identify them and how to fix them. This study was designed to critically evaluate convergence issues in articles that employed logistic regression analysis published in the African Journal of Medicine and Medical Sciences between 2004 and 2013. Problems of quasi-complete or complete separation were described and were illustrated with the National Demographic and Health Survey dataset. A critical evaluation of articles that employed logistic regression was conducted. A total of 581 articles was reviewed, of which 40 (6.9%) used binary logistic regression. Twenty-four (60.0%) stated the use of the logistic regression model in the methodology, while none of the articles assessed model fit. Only 3 (12.5%) properly described the procedures. Of the 40 that used the logistic regression model, the problem of convergence occurred in 6 (15.0%) of the articles. Logistic regression tended to be poorly reported in the studies published between 2004 and 2013. Our findings showed that the procedure may not be well understood by researchers, since very few described the process in their reports, and researchers may be totally unaware of the problem of convergence or how to deal with it.
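A hedged toy example of the separation problem discussed above, unrelated to the survey data: when a covariate perfectly predicts the outcome, maximum likelihood does not converge, and depending on the statsmodels version the fit either raises an error or returns extreme coefficients with a convergence warning.

```python
# Illustrate complete separation with a tiny invented dataset: the single
# covariate x perfectly separates y at x = 5, so the MLE does not exist.
import numpy as np
import statsmodels.api as sm

x = np.arange(10, dtype=float)
y = (x >= 5).astype(int)            # outcome perfectly separated at x = 5
X = sm.add_constant(x)

try:
    res = sm.Logit(y, X).fit(maxiter=100, disp=False)
    print("converged:", res.mle_retvals["converged"])
    print("coefficients:", res.params)   # extreme values signal separation
except Exception as err:                  # e.g. a perfect-separation error
    print("separation detected:", err)
```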
Sze, N N; Wong, S C; Lee, C Y
2014-12-01
In the past several decades, many countries have set quantified road safety targets to motivate transport authorities to develop systematic road safety strategies and measures and to facilitate the achievement of continuous road safety improvement. Studies have been conducted to evaluate the association between the setting of quantified road safety targets and road fatality reduction, in both the short and long run, by comparing road fatalities before and after the implementation of a quantified road safety target. However, not much work has been done to evaluate whether the quantified road safety targets are actually achieved. In this study, we used a binary logistic regression model to examine the factors - including vehicle ownership, fatality rate, and national income, in addition to level of ambition and duration of target - that contribute to a target's success. We analyzed 55 quantified road safety targets set by 29 countries from 1981 to 2009, and the results indicate that targets that are still in progress and those with lower levels of ambition had a higher likelihood of eventually being achieved. Moreover, possible interaction effects on the association between level of ambition and the likelihood of success are also revealed. Copyright © 2014 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Lombardo, L.; Cama, M.; Maerker, M.; Parisi, L.; Rotigliano, E.
2014-12-01
This study aims at comparing the performances of Binary Logistic Regression (BLR) and Boosted Regression Trees (BRT) methods in assessing landslide susceptibility for multiple-occurrence regional landslide events within the Mediterranean region. A test area was selected in the north-eastern sector of Sicily (southern Italy), corresponding to the catchments of the Briga and the Giampilieri streams, both stretching for a few kilometres from the Peloritan ridge (eastern Sicily, Italy) to the Ionian sea. This area was struck on the 1st October 2009 by an extreme climatic event resulting in thousands of rapid shallow landslides, mainly of the debris flow and debris avalanche types, involving the weathered layer of a low- to high-grade metamorphic bedrock. Exploiting the same set of predictors and the 2009 landslide archive, BLR- and BRT-based susceptibility models were obtained for the two catchments separately, adopting a random partition (RP) technique for validation; in addition, the models trained in one of the two catchments (Briga) were tested in predicting the landslide distribution in the other (Giampilieri), adopting a spatial partition (SP) based validation procedure. All the validation procedures were based on multi-fold tests so as to evaluate and compare the reliability of the fitting, the prediction skill, the coherence in the predictor selection and the precision of the susceptibility estimates. All the models obtained with the two methods produced very high predictive performances, with a general congruence between BLR and BRT in the predictor importance. In particular, the research highlighted that BRT models reached a higher prediction performance than BLR models for RP-based modelling, whilst for the SP-based models the difference in predictive skill between the two methods dropped drastically, converging to a similarly excellent performance. However, when looking at the precision of the probability estimates, BLR produced more robust models in terms of selected predictors and coefficients, as well as of the dispersion of the estimated probabilities around the mean value for each mapped pixel. This difference in behaviour could be interpreted as the result of overfitting effects, which affect decision-tree classification more heavily than logistic regression techniques.
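A hedged sketch of the kind of BLR-versus-BRT comparison described above, using scikit-learn on synthetic data (the features are stand-ins for terrain predictors, not the Sicilian inventory) and cross-validated AUC as the performance measure:

```python
# Compare binary logistic regression with boosted trees by 5-fold ROC-AUC
# on simulated data; this mirrors the method comparison, not the study itself.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=2000, n_features=10, n_informative=5,
                           random_state=0)          # stand-in terrain predictors
models = [("BLR", LogisticRegression(max_iter=1000)),
          ("BRT", GradientBoostingClassifier())]

for name, clf in models:
    auc = cross_val_score(clf, X, y, cv=5, scoring="roc_auc")
    print(name, "mean AUC:", round(auc.mean(), 3))
```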
NASA Astrophysics Data System (ADS)
Ozdemir, Adnan
2011-07-01
The purpose of this study is to produce a groundwater spring potential map of the Sultan Mountains in central Turkey, based on a logistic regression method within a Geographic Information System (GIS) environment. Using field surveys, the locations of the springs (440 springs) were determined in the study area. In this study, 17 spring-related factors were used in the analysis: geology, relative permeability, land use/land cover, precipitation, elevation, slope, aspect, total curvature, plan curvature, profile curvature, wetness index, stream power index, sediment transport capacity index, distance to drainage, distance to fault, drainage density, and fault density map. The coefficients of the predictor variables were estimated using binary logistic regression analysis and were used to calculate the groundwater spring potential for the entire study area. The accuracy of the final spring potential map was evaluated based on the observed springs. The accuracy of the model was evaluated by calculating the relative operating characteristics. The area value of the relative operating characteristic curve model was found to be 0.82. These results indicate that the model is a good estimator of the spring potential in the study area. The spring potential map shows that the areas of very low, low, moderate and high groundwater spring potential classes are 105.586 km² (28.99%), 74.271 km² (19.906%), 101.203 km² (27.14%), and 90.05 km² (24.671%), respectively. The interpretations of the potential map showed that stream power index, relative permeability of lithologies, geology, elevation, aspect, wetness index, plan curvature, and drainage density play major roles in spring occurrence and distribution in the Sultan Mountains. The logistic regression approach has not yet been used to delineate groundwater potential zones. In this study, the logistic regression method was used to locate potential zones for groundwater springs in the Sultan Mountains. The evolved model was found to be in strong agreement with the available groundwater spring test data. Hence, this method can be used routinely in groundwater exploration under favourable conditions.
ERIC Educational Resources Information Center
Mabula, Salyungu
2015-01-01
This study investigated the performance of secondary school students in Mathematics at the Selected Secondary Schools in Mtwara Municipality and Ilemela District by Absenteeism, Conduct, Type of School and Gender as explanatory Factors. The data used in the study was collected from documented records of 250 form three students with 1:1 gender…
Stochastic model search with binary outcomes for genome-wide association studies.
Russu, Alberto; Malovini, Alberto; Puca, Annibale A; Bellazzi, Riccardo
2012-06-01
The spread of case-control genome-wide association studies (GWASs) has stimulated the development of new variable selection methods and predictive models. We introduce a novel Bayesian model search algorithm, Binary Outcome Stochastic Search (BOSS), which addresses the model selection problem when the number of predictors far exceeds the number of binary responses. Our method is based on a latent variable model that links the observed outcomes to the underlying genetic variables. A Markov Chain Monte Carlo approach is used for model search and to evaluate the posterior probability of each predictor. BOSS is compared with three established methods (stepwise regression, logistic lasso, and elastic net) in a simulated benchmark. Two real case studies are also investigated: a GWAS on the genetic bases of longevity, and the type 2 diabetes study from the Wellcome Trust Case Control Consortium. Simulations show that BOSS achieves higher precisions than the reference methods while preserving good recall rates. In both experimental studies, BOSS successfully detects genetic polymorphisms previously reported to be associated with the analyzed phenotypes. BOSS outperforms the other methods in terms of F-measure on simulated data. In the two real studies, BOSS successfully detects biologically relevant features, some of which are missed by univariate analysis and the three reference techniques. The proposed algorithm is an advance in the methodology for model selection with a large number of features. Our simulated and experimental results showed that BOSS proves effective in detecting relevant markers while providing a parsimonious model.
NASA Astrophysics Data System (ADS)
Singh, Neetu; Balomajumder, Chandrajit
2017-10-01
In this study, simultaneous removal of phenol and cyanide by the microorganism S. odorifera (MTCC 5700) immobilized onto a coconut shell activated carbon (CSAC) surface was studied in a batch reactor for mono- and binary-component aqueous solutions. Activated carbon was derived from coconut shell by a chemical activation method. Ferric chloride (FeCl3), used as a surface modification agent, was applied to the biomass. Optimum biosorption conditions were obtained as a function of biosorbent dosage, pH, temperature, contact time and initial phenol and cyanide concentration. To define the equilibrium isotherms, experimental data were analyzed by five mono-component and six binary-component isotherm models. The maximum uptake capacities of phenol and cyanide onto the CSAC biosorbent surface were 450.02 and 2.58 mg/g, respectively. Nonlinear regression analysis was used to determine the best-fit model on the basis of error functions and also to calculate the parameters involved in the kinetic and isotherm models. The kinetic study results revealed that the fractal-like mixed first-second order and Brouser-Weron-Sototlongo models for phenol and cyanide were capable of offering an accurate explanation of the biosorption kinetics. According to the experimental results, CSAC with the immobilized bacterium S. odorifera (MTCC 5700) seems to be an effective alternative biosorbent for the elimination of phenol and cyanide from binary-component aqueous solutions.
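As a hedged sketch of the nonlinear-regression step described above, the snippet below fits a Langmuir isotherm q = q_max*K*C/(1 + K*C) to invented equilibrium data with scipy's curve_fit and reports a sum-of-squared-errors criterion; the actual study compared several mono- and binary-component isotherm and kinetic models.

```python
# Nonlinear least-squares fit of a Langmuir isotherm to hypothetical data;
# the concentrations and uptakes below are invented for illustration only.
import numpy as np
from scipy.optimize import curve_fit

def langmuir(C, q_max, K):
    return q_max * K * C / (1.0 + K * C)

C_eq = np.array([5.0, 10, 25, 50, 100, 200])       # mg/L (hypothetical)
q_eq = np.array([1.8, 3.1, 5.6, 7.9, 9.4, 10.3])   # mg/g (hypothetical)

popt, pcov = curve_fit(langmuir, C_eq, q_eq, p0=[10.0, 0.05])
sse = np.sum((q_eq - langmuir(C_eq, *popt))**2)     # an error function for model choice
print("q_max, K =", np.round(popt, 3), "SSE =", round(sse, 4))
```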
Quantitative structure-property relationship modeling of remote liposome loading of drugs.
Cern, Ahuva; Golbraikh, Alexander; Sedykh, Aleck; Tropsha, Alexander; Barenholz, Yechezkel; Goldblum, Amiram
2012-06-10
Remote loading of liposomes by trans-membrane gradients is used to achieve therapeutically efficacious intra-liposome concentrations of drugs. We have developed Quantitative Structure Property Relationship (QSPR) models of remote liposome loading for a data set including 60 drugs studied in 366 loading experiments internally or elsewhere. Both experimental conditions and computed chemical descriptors were employed as independent variables to predict the initial drug/lipid ratio (D/L) required to achieve high loading efficiency. Both binary (to distinguish high vs. low initial D/L) and continuous (to predict real D/L values) models were generated using advanced machine learning approaches and 5-fold external validation. The external prediction accuracy for binary models was as high as 91-96%; for continuous models the mean coefficient R(2) for regression between predicted versus observed values was 0.76-0.79. We conclude that QSPR models can be used to identify candidate drugs expected to have high remote loading capacity while simultaneously optimizing the design of formulation experiments. Copyright © 2011 Elsevier B.V. All rights reserved.
2014-01-01
Background Meta-regression is becoming increasingly used to model study level covariate effects. However this type of statistical analysis presents many difficulties and challenges. Here two methods for calculating confidence intervals for the magnitude of the residual between-study variance in random effects meta-regression models are developed. A further suggestion for calculating credible intervals using informative prior distributions for the residual between-study variance is presented. Methods Two recently proposed and, under the assumptions of the random effects model, exact methods for constructing confidence intervals for the between-study variance in random effects meta-analyses are extended to the meta-regression setting. The use of Generalised Cochran heterogeneity statistics is extended to the meta-regression setting and a Newton-Raphson procedure is developed to implement the Q profile method for meta-analysis and meta-regression. WinBUGS is used to implement informative priors for the residual between-study variance in the context of Bayesian meta-regressions. Results Results are obtained for two contrasting examples, where the first example involves a binary covariate and the second involves a continuous covariate. Intervals for the residual between-study variance are wide for both examples. Conclusions Statistical methods, and R computer software, are available to compute exact confidence intervals for the residual between-study variance under the random effects model for meta-regression. These frequentist methods are almost as easily implemented as their established counterparts for meta-analysis. Bayesian meta-regressions are also easily performed by analysts who are comfortable using WinBUGS. Estimates of the residual between-study variance in random effects meta-regressions should be routinely reported and accompanied by some measure of their uncertainty. Confidence and/or credible intervals are well-suited to this purpose. PMID:25196829
Optimization of binary thermodynamic and phase diagram data
NASA Astrophysics Data System (ADS)
Bale, Christopher W.; Pelton, A. D.
1983-03-01
An optimization technique based upon least squares regression is presented to permit the simultaneous analysis of diverse experimental binary thermodynamic and phase diagram data. Coefficients of polynomial expansions for the enthalpy and excess entropy of binary solutions are obtained which can subsequently be used to calculate the thermodynamic properties or the phase diagram. In an interactive computer-assisted analysis employing this technique, one can critically analyze a large number of diverse data in a binary system rapidly, in a manner which is fully self-consistent thermodynamically. Examples of applications to the Bi-Zn, Cd-Pb, PbCl2-KCl, LiCl-FeCl2, and Au-Ni binary systems are given.
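The general idea can be illustrated, under stated assumptions, with an ordinary least-squares fit of a Redlich-Kister-type polynomial expansion of the excess Gibbs energy to invented binary-solution data; the original technique instead optimizes enthalpy and excess-entropy polynomial coefficients against combined thermodynamic and phase-diagram measurements.

```python
# Fit coefficients L_i of G_ex = x(1-x) * sum_i L_i (1-2x)^i by linear least
# squares; the mole-fraction grid and G_ex values are invented for illustration.
import numpy as np

x = np.linspace(0.1, 0.9, 9)                       # mole fraction (hypothetical grid)
G_ex = np.array([-610, -1010, -1290, -1430, -1450,
                 -1350, -1140, -820, -400.0])      # J/mol (invented data)

# Design matrix for a 3-term Redlich-Kister-type expansion
A = np.column_stack([x * (1 - x) * (1 - 2 * x)**i for i in range(3)])
L, *_ = np.linalg.lstsq(A, G_ex, rcond=None)
print("fitted L0, L1, L2:", np.round(L, 1))
```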
Katsarov, Plamen; Gergov, Georgi; Alin, Aylin; Pilicheva, Bissera; Al-Degs, Yahya; Simeonov, Vasil; Kassarova, Margarita
2018-03-01
The prediction power of partial least squares (PLS) and multivariate curve resolution-alternating least squares (MCR-ALS) methods have been studied for simultaneous quantitative analysis of the binary drug combination - doxylamine succinate and pyridoxine hydrochloride. Analysis of first-order UV overlapped spectra was performed using different PLS models - classical PLS1 and PLS2 as well as partial robust M-regression (PRM). These linear models were compared to MCR-ALS with equality and correlation constraints (MCR-ALS-CC). All techniques operated within the full spectral region and extracted maximum information for the drugs analysed. The developed chemometric methods were validated on external sample sets and were applied to the analyses of pharmaceutical formulations. The obtained statistical parameters were satisfactory for calibration and validation sets. All developed methods can be successfully applied for simultaneous spectrophotometric determination of doxylamine and pyridoxine both in laboratory-prepared mixtures and commercial dosage forms.
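A minimal PLS2 calibration sketch in the spirit of the study, using simulated overlapped spectra (two Gaussian pseudo-bands standing in for the measured UV spectra of the two drugs); none of the data are the study's measurements.

```python
# Build synthetic overlapped spectra for a two-component mixture and fit a
# PLS2 calibration that predicts both concentrations simultaneously.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
wl = np.linspace(220, 320, 200)                      # wavelengths, nm
band1 = np.exp(-0.5 * ((wl - 260) / 12)**2)          # pseudo-spectrum, drug 1
band2 = np.exp(-0.5 * ((wl - 290) / 15)**2)          # pseudo-spectrum, drug 2

C = rng.uniform(0, 10, size=(40, 2))                 # concentrations of both drugs
A = C @ np.vstack([band1, band2]) + rng.normal(0, 0.01, (40, wl.size))

pls = PLSRegression(n_components=2).fit(A, C)
print("R^2 on calibration set:", round(r2_score(C, pls.predict(A)), 4))
```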
Rogers, Paul; Stoner, Julie
2016-01-01
Regression models for correlated binary outcomes are commonly fit using a Generalized Estimating Equations (GEE) methodology. GEE uses the Liang and Zeger sandwich estimator to produce unbiased standard error estimators for regression coefficients in large sample settings even when the covariance structure is misspecified. The sandwich estimator performs optimally in balanced designs when the number of participants is large, and there are few repeated measurements. The sandwich estimator is not without drawbacks; its asymptotic properties do not hold in small sample settings. In these situations, the sandwich estimator is biased downwards, underestimating the variances. In this project, a modified form for the sandwich estimator is proposed to correct this deficiency. The performance of this new sandwich estimator is compared to the traditional Liang and Zeger estimator as well as alternative forms proposed by Morel, Pan and Mancl and DeRouen. The performance of each estimator was assessed with 95% coverage probabilities for the regression coefficient estimators using simulated data under various combinations of sample sizes and outcome prevalence values with an Independence (IND), Autoregressive (AR) and Compound Symmetry (CS) correlation structure. This research is motivated by investigations involving rare-event outcomes in aviation data. PMID:26998504
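A hedged sketch of a GEE fit for correlated binary outcomes with the robust (Liang-Zeger sandwich) covariance, on simulated clustered data rather than the aviation data that motivated the work; the small-sample corrections compared in the abstract are not shown.

```python
# Simulate repeated binary outcomes within subjects and fit a GEE with an
# exchangeable working correlation; robust sandwich SEs are the default.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_subj, n_rep = 50, 4
df = pd.DataFrame({
    "id": np.repeat(np.arange(n_subj), n_rep),
    "x": rng.normal(size=n_subj * n_rep),
})
u = np.repeat(rng.normal(scale=0.7, size=n_subj), n_rep)   # subject-level effect
p = 1.0 / (1.0 + np.exp(-(-1.0 + 0.8 * df["x"] + u)))
df["y"] = rng.binomial(1, p)

model = smf.gee("y ~ x", groups="id", data=df,
                family=sm.families.Binomial(),
                cov_struct=sm.cov_struct.Exchangeable())
print(model.fit().summary())
```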
Worku, Yohannes; Muchie, Mammo
2012-01-01
Objective. The objective was to investigate factors that affect the efficient management of solid waste produced by commercial businesses operating in the city of Pretoria, South Africa. Methods. Data was gathered from 1,034 businesses. Efficiency in solid waste management was assessed by using a structural time-based model designed for evaluating efficiency as a function of the length of time required to manage waste. Data analysis was performed using statistical procedures such as frequency tables, Pearson's chi-square tests of association, and binary logistic regression analysis. Odds ratios estimated from logistic regression analysis were used for identifying key factors that affect efficiency in the proper disposal of waste. Results. The study showed that 857 of the 1,034 businesses selected for the study (83%) were found to be sufficiently efficient with regard to the proper collection and disposal of solid waste. Based on odds ratios estimated from binary logistic regression analysis, efficiency in the proper management of solid waste was significantly influenced by 4 predictor variables. These 4 influential predictor variables are lack of adherence to waste management regulations, wrong perception, failure to provide customers with enough trash cans, and operation of businesses by employed managers, in decreasing order of importance. PMID:23209483
Wendling, T; Jung, K; Callahan, A; Schuler, A; Shah, N H; Gallego, B
2018-06-03
There is growing interest in using routinely collected data from health care databases to study the safety and effectiveness of therapies in "real-world" conditions, as it can provide complementary evidence to that of randomized controlled trials. Causal inference from health care databases is challenging because the data are typically noisy, high dimensional, and most importantly, observational. It requires methods that can estimate heterogeneous treatment effects while controlling for confounding in high dimensions. Bayesian additive regression trees, causal forests, causal boosting, and causal multivariate adaptive regression splines are off-the-shelf methods that have shown good performance for estimation of heterogeneous treatment effects in observational studies of continuous outcomes. However, it is not clear how these methods would perform in health care database studies where outcomes are often binary and rare and data structures are complex. In this study, we evaluate these methods in simulation studies that recapitulate key characteristics of comparative effectiveness studies. We focus on the conditional average effect of a binary treatment on a binary outcome using the conditional risk difference as an estimand. To emulate health care database studies, we propose a simulation design where real covariate and treatment assignment data are used and only outcomes are simulated based on nonparametric models of the real outcomes. We apply this design to 4 published observational studies that used records from 2 major health care databases in the United States. Our results suggest that Bayesian additive regression trees and causal boosting consistently provide low bias in conditional risk difference estimates in the context of health care database studies. Copyright © 2018 John Wiley & Sons, Ltd.
Temperature dependence of nucleation rate in a binary solid solution
NASA Astrophysics Data System (ADS)
Wang, H. Y.; Philippe, T.; Duguay, S.; Blavette, D.
2012-12-01
The influence of regression (partial dissolution) effects on the temperature dependence of nucleation rate in a binary solid solution has been studied theoretically. The results of the analysis are compared with the predictions of the simplest Volmer-Weber theory. Regression effects are shown to have a strong influence on the shape of the curve of nucleation rate versus temperature. The temperature TM at which the maximum rate of nucleation occurs is found to be lowered, particularly for low interfacial energy (coherent precipitation) and high-mobility species (e.g. interstitial atoms).
Multivariate meta-analysis using individual participant data
Riley, R. D.; Price, M. J.; Jackson, D.; Wardle, M.; Gueyffier, F.; Wang, J.; Staessen, J. A.; White, I. R.
2016-01-01
When combining results across related studies, a multivariate meta-analysis allows the joint synthesis of correlated effect estimates from multiple outcomes. Joint synthesis can improve efficiency over separate univariate syntheses, may reduce selective outcome reporting biases, and enables joint inferences across the outcomes. A common issue is that within-study correlations needed to fit the multivariate model are unknown from published reports. However, provision of individual participant data (IPD) allows them to be calculated directly. Here, we illustrate how to use IPD to estimate within-study correlations, using a joint linear regression for multiple continuous outcomes and bootstrapping methods for binary, survival and mixed outcomes. In a meta-analysis of 10 hypertension trials, we then show how these methods enable multivariate meta-analysis to address novel clinical questions about continuous, survival and binary outcomes; treatment–covariate interactions; adjusted risk/prognostic factor effects; longitudinal data; prognostic and multiparameter models; and multiple treatment comparisons. Both frequentist and Bayesian approaches are applied, with example software code provided to derive within-study correlations and to fit the models. PMID:26099484
Logistic Regression: Concept and Application
ERIC Educational Resources Information Center
Cokluk, Omay
2010-01-01
The main focus of logistic regression analysis is classification of individuals in different groups. The aim of the present study is to explain basic concepts and processes of binary logistic regression analysis intended to determine the combination of independent variables which best explain the membership in certain groups called dichotomous…
On the potential of models for location and scale for genome-wide DNA methylation data
2014-01-01
Background With the help of epigenome-wide association studies (EWAS), increasing knowledge on the role of epigenetic mechanisms such as DNA methylation in disease processes is obtained. In addition, EWAS aid the understanding of behavioral and environmental effects on DNA methylation. In terms of statistical analysis, specific challenges arise from the characteristics of methylation data. First, methylation β-values represent proportions with skewed and heteroscedastic distributions. Thus, traditional modeling strategies assuming a normally distributed response might not be appropriate. Second, recent evidence suggests that not only mean differences but also variability in site-specific DNA methylation associates with diseases, including cancer. The purpose of this study was to compare different modeling strategies for methylation data in terms of model performance and performance of downstream hypothesis tests. Specifically, we used the generalized additive models for location, scale and shape (GAMLSS) framework to compare beta regression with Gaussian regression on raw, binary logit and arcsine square root transformed methylation data, with and without modeling a covariate effect on the scale parameter. Results Using simulated and real data from a large population-based study and an independent sample of cancer patients and healthy controls, we show that beta regression does not outperform competing strategies in terms of model performance. In addition, Gaussian models for location and scale showed an improved performance compared to models for location only. The best performance was observed for the Gaussian model on binary logit transformed β-values, referred to as M-values. Our results further suggest that models for location and scale are specifically sensitive towards violations of the distribution assumption and towards outliers in the methylation data. Therefore, a resampling procedure is proposed as a mode of inference and shown to diminish the type I error rate in practically relevant settings. We apply the proposed method in an EWAS of BMI and age and reveal strong associations of age with methylation variability that are validated in an independent sample. Conclusions Models for location and scale are promising tools for EWAS that may help to understand the influence of environmental factors and disease-related phenotypes on methylation variability and its role during disease development. PMID:24994026
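The binary-logit ("M-value") and arcsine square-root transforms mentioned above are simple to compute; a minimal illustration on simulated beta-values (not real methylation data):

```python
# Transform methylation-like beta-values in (0, 1) to M-values (base-2 logit)
# and to the arcsine square-root scale; inputs are random draws for illustration.
import numpy as np

rng = np.random.default_rng(0)
beta = rng.beta(a=2, b=5, size=10)            # simulated beta-values

m_values = np.log2(beta / (1 - beta))         # binary logit (M-value) transform
asin_sqrt = np.arcsin(np.sqrt(beta))          # arcsine square-root transform
print("M-values:  ", np.round(m_values, 3))
print("asin(sqrt):", np.round(asin_sqrt, 3))
```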
Gan, Zhaoyu; Diao, Feici; Wei, Qinling; Wu, Xiaoli; Cheng, Minfeng; Guan, Nianhong; Zhang, Ming; Zhang, Jinbei
2011-11-01
A correct and timely diagnosis of bipolar depression remains a big challenge for clinicians. This study aimed to develop a clinical-characteristic-based model to predict the diagnosis of bipolar disorder among patients with current major depressive episodes. A prospective study was carried out on 344 patients with current major depressive episodes, with 268 completing 1-year follow-up. Data were collected through structured interviews. Univariate binary logistic regression was conducted to select potential predictive variables among 19 initial variables, and then multivariate binary logistic regression was performed to analyze the combination of risk factors and build a predictive model. A receiver operating characteristic (ROC) curve was plotted. Of the 19 initial variables, 13 were preliminarily selected, and a forward stepwise procedure then produced a final model consisting of 6 variables: age at first onset, maximum duration of depressive episodes, somatalgia, hypersomnia, diurnal variation of mood, and irritability. The correct prediction rate of this model was 78% (95%CI: 75%-86%) and the area under the ROC curve was 0.85 (95%CI: 0.80-0.90). The cut-off point for age at first onset was 28.5 years old, while the cut-off point for maximum duration of depressive episodes was 7.5 months. The limitations of this study include the small sample size, relatively short follow-up period and lack of treatment information. Our predictive model, based on six clinical characteristics of major depressive episodes, proves to be robust and can help differentiate bipolar depression from unipolar depression. Copyright © 2011 Elsevier B.V. All rights reserved.
Dai, Xiaoping; Han, Yuping; Zhang, Xiaohong; Hu, Wei; Huang, Liangji; Duan, Wenpei; Li, Siyi; Liu, Xiaolu; Wang, Qian
2017-09-01
A better understanding of willingness to separate waste and waste separation behaviour can aid the design and improvement of waste management policies. Based on intercept questionnaire survey data from undergraduate students and residents in Zhengzhou City, China, this article compared factors affecting the willingness and behaviour of students and residents to participate in waste separation using two binary logistic regression models. Improvement opportunities for waste separation were also discussed. Binary logistic regression results indicate that knowledge of and attitude to waste separation and acceptance of waste education significantly affect the willingness of undergraduate students to separate waste, while demographic factors, such as gender, age, education level, and income, significantly affect the willingness of residents to do so. Presence of waste-specific bins and attitude to waste separation are drivers of waste separation behaviour for both students and residents. Improved education about waste separation and improved facilities are effective in stimulating waste separation, and charging for unsorted waste may be an effective way to improve it in Zhengzhou.
A model of the evaporation of binary-fuel clusters of drops
NASA Technical Reports Server (NTRS)
Harstad, K.; Bellan, J.
1991-01-01
A formulation has been developed to describe the evaporation of dense or dilute clusters of binary-fuel drops. The binary fuel is assumed to be made of a solute and a solvent whose volatility is much lower than that of the solute. Convective flow effects, inducing a circulatory motion inside the drops, are taken into account, as well as turbulence external to the cluster volume. Results obtained with this model show that, similar to the conclusions for single isolated drops, the evaporation of the volatile is controlled by liquid mass diffusion when the cluster is dilute. In contrast, when the cluster is dense, the evaporation of the volatile is controlled by surface layer stripping, that is, by the regression rate of the drop, which is in fact controlled by the evaporation rate of the solvent. These conclusions are in agreement with existing experimental observations. Parametric studies show that these conclusions remain valid with changes in ambient temperature, initial slip velocity between drops and gas, initial drop size, initial cluster size, initial liquid mass fraction of the solute, and various combinations of solvent and solute. The implications of these results for computationally intensive combustor calculations are discussed.
Jović, Ozren; Smolić, Tomislav; Primožič, Ines; Hrenar, Tomica
2016-04-19
The aim of this study was to investigate the feasibility of FTIR-ATR spectroscopy coupled with the multivariate numerical methodology for qualitative and quantitative analysis of binary and ternary edible oil mixtures. Four pure oils (extra virgin olive oil, high oleic sunflower oil, rapeseed oil, and sunflower oil), as well as their 54 binary and 108 ternary mixtures, were analyzed using FTIR-ATR spectroscopy in combination with principal component and discriminant analysis, partial least-squares, and principal component regression. It was found that the composition of all 166 samples can be excellently represented using only the first three principal components describing 98.29% of total variance in the selected spectral range (3035-2989, 1170-1140, 1120-1100, 1093-1047, and 930-890 cm⁻¹). Factor scores in 3D space spanned by these three principal components form a tetrahedral-like arrangement: pure oils being at the vertices, binary mixtures at the edges, and ternary mixtures on the faces of a tetrahedron. To confirm the validity of results, we applied several cross-validation methods. Quantitative analysis was performed by minimization of root-mean-square error of cross-validation values regarding the spectral range, derivative order, and choice of method (partial least-squares or principal component regression), which resulted in excellent predictions for test sets (R² > 0.99 in all cases). Additionally, experimentally more demanding gas chromatography analysis of fatty acid content was carried out for all specimens, confirming the results obtained by FTIR-ATR coupled with principal component analysis. However, FTIR-ATR provided a considerably better model for prediction of mixture composition than gas chromatography, especially for high oleic sunflower oil.
Wei, QianQian; Chen, XuePing; Zheng, ZhenZhen; Huang, Rui; Guo, XiaoYan; Cao, Bei; Zhao, Bi; Shang, Hui-Fang
2014-12-01
Despite growing interest, the frequency and characteristics of frontal lobe functional and behavioral deficits in Chinese people with amyotrophic lateral sclerosis (ALS), as well as their impact on the survival of ALS patients, remain unknown. The Chinese version of the frontal assessment battery (FAB) and frontal behavioral inventory (FBI) were used to evaluate 126 sporadic ALS patients and 50 healthy controls. The prevalence of frontal lobe dysfunction was 32.5%. The most notably impaired domain of the FAB was lexical fluency (30.7%). The binary logistic regression model revealed that an onset age older than 45 years (OR 5.976, P = 0.002) and a lower educational level (OR 0.858, P = 0.002) were potential determinants of an abnormal FAB. Based on the FBI score, 46.0% of patients showed varied degrees of frontal behavioral changes. The most commonly impaired neurobehavioral domains were irritability (25.4%), logopenia (20.6%) and apathy (19.0%). The binary logistic regression model revealed that the ALS Functional Rating Scale-Revised (ALSFRS-R) score (OR 0.127, P = 0.001) was a potential determinant of an abnormal FBI. Frontal functional impairment and the severity of frontal behavioral changes were not associated with the survival status or the progression of ALS in the Cox proportional hazards model and multivariate regression analyses, respectively. Frontal lobe dysfunction and frontal behavioral changes are common in Chinese ALS patients. Frontal lobe dysfunction may be related to onset age and educational level. The severity of frontal behavioral changes may be associated with the ALSFRS-R. However, frontal functional impairment and frontal behavioral changes do not worsen the progression or survival of ALS.
A comparison of multiple imputation methods for incomplete longitudinal binary data.
Yamaguchi, Yusuke; Misumi, Toshihiro; Maruo, Kazushi
2018-01-01
Longitudinal binary data are commonly encountered in clinical trials. Multiple imputation is an approach for obtaining valid estimates of treatment effects under a missing-at-random mechanism. Although there are a variety of multiple imputation methods for longitudinal binary data, only a limited number of studies have reported on the relative performance of these methods. Moreover, for the treatment effect throughout a period, an endpoint that has often been used in clinical evaluations of specific disease areas, no definitive investigations comparing the methods have been available. We conducted an extensive simulation study to examine the comparative performance of six multiple imputation methods available in the SAS MI procedure for longitudinal binary data, where two endpoints, responder rates at a specified time point and throughout a period, were assessed. The simulation study suggested that results from the naive approaches of single imputation with non-responders and complete case analysis could be very sensitive to missing data. The multiple imputation methods using a monotone method and a full conditional specification with a logistic regression imputation model were recommended for obtaining unbiased and robust estimates of the treatment effect. The methods are illustrated with data from a mental health study.
Wang, Qingliang; Li, Xiaojie; Hu, Kunpeng; Zhao, Kun; Yang, Peisheng; Liu, Bo
2015-05-12
To explore the risk factors of portal hypertensive gastropathy (PHG) in patients with hepatitis B-associated cirrhosis and to establish a logistic regression model for noninvasive prediction, the clinical data of 234 hospitalized patients with hepatitis B-associated cirrhosis from March 2012 to March 2014 were analyzed retrospectively. The dependent variable was the occurrence of PHG, while the independent variables were screened by binary logistic analysis. Multivariate logistic regression was used for further analysis of the significant noninvasive independent variables. A logistic regression model was established and the odds ratio was calculated for each factor. The accuracy, sensitivity and specificity of the model were evaluated by the receiver operating characteristic (ROC) curve. According to univariate logistic regression, the risk factors included hepatic dysfunction, albumin (ALB), bilirubin (TB), prothrombin time (PT), platelet count (PLT), white blood cell count (WBC), portal vein diameter, spleen index, splenic vein diameter, diameter ratio, PLT to spleen volume ratio, esophageal varices (EV) and gastric varices (GV). Multivariate analysis showed that hepatic dysfunction (X1), TB (X2), PLT (X3) and splenic vein diameter (X4) were the major factors for the occurrence of PHG. The established regression model was Logit P=-2.667+2.186X1-2.167X2+0.725X3+0.976X4. The accuracy of the model for PHG was 79.1%, with a sensitivity of 77.2% and a specificity of 80.8%. Hepatic dysfunction, TB, PLT and splenic vein diameter are risk factors for PHG, and the noninvasive predictive logistic regression model was Logit P=-2.667+2.186X1-2.167X2+0.725X3+0.976X4.
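The reported equation can be turned into a predicted probability via the inverse logit. In the sketch below, the coding of X1-X4 is not fully specified in the abstract, so the input values are purely illustrative:

```python
# Convert the published linear predictor into a probability of PHG;
# the example patient coding (1/0 values) is hypothetical.
import math

def phg_probability(x1, x2, x3, x4):
    logit_p = -2.667 + 2.186 * x1 - 2.167 * x2 + 0.725 * x3 + 0.976 * x4
    return 1.0 / (1.0 + math.exp(-logit_p))

print(round(phg_probability(1, 0, 1, 1), 3))   # hypothetical patient
```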
Karabatsos, George
2017-02-01
Most of applied statistics involves regression analysis of data. In practice, it is important to specify a regression model that has minimal assumptions which are not violated by data, to ensure that statistical inferences from the model are informative and not misleading. This paper presents a stand-alone and menu-driven software package, Bayesian Regression: Nonparametric and Parametric Models, constructed from MATLAB Compiler. Currently, this package gives the user a choice from 83 Bayesian models for data analysis. They include 47 Bayesian nonparametric (BNP) infinite-mixture regression models; 5 BNP infinite-mixture models for density estimation; and 31 normal random effects models (HLMs), including normal linear models. Each of the 78 regression models handles either a continuous, binary, or ordinal dependent variable, and can handle multi-level (grouped) data. All 83 Bayesian models can handle the analysis of weighted observations (e.g., for meta-analysis), and the analysis of left-censored, right-censored, and/or interval-censored data. Each BNP infinite-mixture model has a mixture distribution assigned one of various BNP prior distributions, including priors defined by either the Dirichlet process, Pitman-Yor process (including the normalized stable process), beta (two-parameter) process, normalized inverse-Gaussian process, geometric weights prior, dependent Dirichlet process, or the dependent infinite-probits prior. The software user can mouse-click to select a Bayesian model and perform data analysis via Markov chain Monte Carlo (MCMC) sampling. After the sampling completes, the software automatically opens text output that reports MCMC-based estimates of the model's posterior distribution and model predictive fit to the data. Additional text and/or graphical output can be generated by mouse-clicking other menu options. This includes output of MCMC convergence analyses, and estimates of the model's posterior predictive distribution, for selected functionals and values of covariates. The software is illustrated through the BNP regression analysis of real data.
A Comparison of Methods for Nonparametric Estimation of Item Characteristic Curves for Binary Items
ERIC Educational Resources Information Center
Lee, Young-Sun
2007-01-01
This study compares the performance of three nonparametric item characteristic curve (ICC) estimation procedures: isotonic regression, smoothed isotonic regression, and kernel smoothing. Smoothed isotonic regression, employed along with an appropriate kernel function, provides better estimates and also satisfies the assumption of strict…
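A hedged sketch of the isotonic-regression approach to ICC estimation named above, on data simulated from a logistic (2PL-like) item rather than the study's data: the estimated probability of a correct response is constrained to be non-decreasing in ability.

```python
# Nonparametric ICC estimate via isotonic regression on simulated 0/1 responses.
import numpy as np
from sklearn.isotonic import IsotonicRegression

rng = np.random.default_rng(0)
theta = rng.normal(size=1000)                          # ability proxies
p_true = 1.0 / (1.0 + np.exp(-1.2 * (theta - 0.3)))    # true (unknown) ICC
y = rng.binomial(1, p_true)                            # observed binary responses

iso = IsotonicRegression(y_min=0.0, y_max=1.0, out_of_bounds="clip")
iso.fit(theta, y)
print(np.round(iso.predict([-2, -1, 0, 1, 2]), 2))     # estimated ICC at selected abilities
```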
A global goodness-of-fit statistic for Cox regression models.
Parzen, M; Lipsitz, S R
1999-06-01
In this paper, a global goodness-of-fit test statistic for a Cox regression model, which has an approximate chi-squared distribution when the model has been correctly specified, is proposed. Our goodness-of-fit statistic is global and has power to detect if interactions or higher order powers of covariates in the model are needed. The proposed statistic is similar to the Hosmer and Lemeshow (1980, Communications in Statistics A10, 1043-1069) goodness-of-fit statistic for binary data as well as Schoenfeld's (1980, Biometrika 67, 145-153) statistic for the Cox model. The methods are illustrated using data from a Mayo Clinic trial in primary billiary cirrhosis of the liver (Fleming and Harrington, 1991, Counting Processes and Survival Analysis), in which the outcome is the time until liver transplantation or death. The are 17 possible covariates. Two Cox proportional hazards models are fit to the data, and the proposed goodness-of-fit statistic is applied to the fitted models.
Stochastic model search with binary outcomes for genome-wide association studies
Malovini, Alberto; Puca, Annibale A; Bellazzi, Riccardo
2012-01-01
Objective The spread of case–control genome-wide association studies (GWASs) has stimulated the development of new variable selection methods and predictive models. We introduce a novel Bayesian model search algorithm, Binary Outcome Stochastic Search (BOSS), which addresses the model selection problem when the number of predictors far exceeds the number of binary responses. Materials and methods Our method is based on a latent variable model that links the observed outcomes to the underlying genetic variables. A Markov Chain Monte Carlo approach is used for model search and to evaluate the posterior probability of each predictor. Results BOSS is compared with three established methods (stepwise regression, logistic lasso, and elastic net) in a simulated benchmark. Two real case studies are also investigated: a GWAS on the genetic bases of longevity, and the type 2 diabetes study from the Wellcome Trust Case Control Consortium. Simulations show that BOSS achieves higher precisions than the reference methods while preserving good recall rates. In both experimental studies, BOSS successfully detects genetic polymorphisms previously reported to be associated with the analyzed phenotypes. Discussion BOSS outperforms the other methods in terms of F-measure on simulated data. In the two real studies, BOSS successfully detects biologically relevant features, some of which are missed by univariate analysis and the three reference techniques. Conclusion The proposed algorithm is an advance in the methodology for model selection with a large number of features. Our simulated and experimental results showed that BOSS proves effective in detecting relevant markers while providing a parsimonious model. PMID:22534080
The cross-validated AUC for MCP-logistic regression with high-dimensional data.
Jiang, Dingfeng; Huang, Jian; Zhang, Ying
2013-10-01
We propose a cross-validated area under the receiver operating characteristic (ROC) curve (CV-AUC) criterion for tuning parameter selection for penalized methods in sparse, high-dimensional logistic regression models. We use this criterion in combination with the minimax concave penalty (MCP) method for variable selection. The CV-AUC criterion is specifically designed for optimizing the classification performance for binary outcome data. To implement the proposed approach, we derive an efficient coordinate descent algorithm to compute the MCP-logistic regression solution surface. Simulation studies are conducted to evaluate the finite sample performance of the proposed method and its comparison with existing methods including the Akaike information criterion (AIC), Bayesian information criterion (BIC) and Extended BIC (EBIC). The model selected based on the CV-AUC criterion tends to have a larger predictive AUC and smaller classification error than those with tuning parameters selected using the AIC, BIC or EBIC. We illustrate the application of the MCP-logistic regression with the CV-AUC criterion on three microarray datasets from studies that attempt to identify genes related to cancers. Our simulation studies and data examples demonstrate that the CV-AUC is an attractive method for tuning parameter selection for penalized methods in high-dimensional logistic regression models.
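Scikit-learn has no MCP penalty, so the following is only a hedged analogue of the CV-AUC tuning idea, using an L1-penalized logistic regression whose regularization strength is chosen to maximize cross-validated ROC-AUC on simulated sparse high-dimensional data:

```python
# Choose the penalty strength by cross-validated AUC; L1 stands in for MCP here.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegressionCV

X, y = make_classification(n_samples=200, n_features=500, n_informative=10,
                           random_state=0)
clf = LogisticRegressionCV(Cs=20, cv=5, penalty="l1", solver="liblinear",
                           scoring="roc_auc").fit(X, y)
print("chosen C:", clf.C_[0], "nonzero coefficients:", int(np.sum(clf.coef_ != 0)))
```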
[Developing a predictive model for the caregiver strain index].
Álvarez-Tello, Margarita; Casado-Mejía, Rosa; Praena-Fernández, Juan Manuel; Ortega-Calvo, Manuel
Home care of patients with multiple morbidities is an increasingly common occurrence. The caregiver strain index is a tool, in the form of a questionnaire, designed to measure the perceived burden of those who care for their family members. The aim of this study is to construct a diagnostic nomogram of informal caregiver burden using data from a predictive model. The model was drawn up using binary logistic regression with the questionnaire items as dichotomous factors. The dependent variable was the final score obtained with the questionnaire, categorised in accordance with the literature. Scores between 0 and 6 were labelled "no" (no caregiver stress) and scores of 7 or greater "yes". The R statistical software, version 3.1.1, was used. To construct confidence intervals for the ROC curve, 2000 bootstrap replicates were used. A sample of 67 caregivers was obtained. A diagnostic nomogram was constructed together with its calibration graph (scaled Brier score = 0.686, Nagelkerke R² = 0.791) and the corresponding ROC curve (area under the curve = 0.962). The predictive model generated using binary logistic regression and the nomogram contain four items (1, 4, 5 and 9) of the questionnaire. R plotting functions provide a very good solution for validating a model like this. The area under the ROC curve (0.96; 95% CI: 0.941-0.994) achieves a high discriminative value. Calibration also shows a high goodness of fit, suggesting that the model may be clinically useful in community nursing and geriatric establishments. Copyright © 2015 SEGG. Publicado por Elsevier España, S.L.U. All rights reserved.
The purpose of this report is to provide a reference manual that could be used by investigators for making informed use of logistic regression using two methods (standard logistic regression and MARS). The details for analyses of relationships between a dependent binary response ...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rudzok, S., E-mail: susanne.rudzok@ufz.d; Schlink, U., E-mail: uwe.schlink@ufz.d; Herbarth, O., E-mail: olf.herbarth@medizin.uni-leipzig.d
2010-05-01
The interaction of drugs and non-therapeutic xenobiotics plays a central role in human health risk assessment. Still, available data are rare. Two different models have been established to predict mixture toxicity from single-dose data, namely the concentration addition (CA) and independent action (IA) models. However, chemicals can also act synergistically or antagonistically, or show dose-level-dependent or dose-ratio-dependent deviations. In the present study we used the MIXTOX model (EU project ENV4-CT97-0507), which incorporates these algorithms, to assess effects of binary mixtures in the human hepatoma cell line HepG2. These cells possess a liver-like enzyme pattern and a variety of xenobiotic-metabolizing enzymes (phases I and II). We tested binary mixtures of the metal nickel, the anti-inflammatory drug diclofenac, and the antibiotic agent irgasan and compared the experimental data to the mathematical models. Cell viability was determined by three different methods: the MTT, AlamarBlue and NRU assays. The compounds were tested separately and in combinations. We could show that the metal nickel is the dominant component in the mixture, producing antagonism at low dose levels and synergism at high dose levels in combination with diclofenac or irgasan when using the NRU and AlamarBlue assays. The dose-response surface of irgasan and diclofenac indicated concentration addition. The experimental data could be described by the algorithms with a regression of up to 90%, revealing the HepG2 cell line and the MIXTOX model as valuable tools for risk assessment of binary mixtures for cytotoxic endpoints. However, the model failed to predict a specific mode of action, the CYP1A1 enzyme activity.
Intermediate and advanced topics in multilevel logistic regression analysis
Merlo, Juan
2017-01-01
Multilevel data occur frequently in health services, population and public health, and epidemiologic research. In such research, binary outcomes are common. Multilevel logistic regression models allow one to account for the clustering of subjects within clusters of higher‐level units when estimating the effect of subject and cluster characteristics on subject outcomes. A search of the PubMed database demonstrated that the use of multilevel or hierarchical regression models is increasing rapidly. However, our impression is that many analysts simply use multilevel regression models to account for the nuisance of within‐cluster homogeneity that is induced by clustering. In this article, we describe a suite of analyses that can complement the fitting of multilevel logistic regression models. These ancillary analyses permit analysts to estimate the marginal or population‐average effect of covariates measured at the subject and cluster level, in contrast to the within‐cluster or cluster‐specific effects arising from the original multilevel logistic regression model. We describe the interval odds ratio and the proportion of opposed odds ratios, which are summary measures of effect for cluster‐level covariates. We describe the variance partition coefficient and the median odds ratio which are measures of components of variance and heterogeneity in outcomes. These measures allow one to quantify the magnitude of the general contextual effect. We describe an R² measure that allows analysts to quantify the proportion of variation explained by different multilevel logistic regression models. We illustrate the application and interpretation of these measures by analyzing mortality in patients hospitalized with a diagnosis of acute myocardial infarction. © 2017 The Authors. Statistics in Medicine published by John Wiley & Sons Ltd. PMID:28543517
Intermediate and advanced topics in multilevel logistic regression analysis.
Austin, Peter C; Merlo, Juan
2017-09-10
Multilevel data occur frequently in health services, population and public health, and epidemiologic research. In such research, binary outcomes are common. Multilevel logistic regression models allow one to account for the clustering of subjects within clusters of higher-level units when estimating the effect of subject and cluster characteristics on subject outcomes. A search of the PubMed database demonstrated that the use of multilevel or hierarchical regression models is increasing rapidly. However, our impression is that many analysts simply use multilevel regression models to account for the nuisance of within-cluster homogeneity that is induced by clustering. In this article, we describe a suite of analyses that can complement the fitting of multilevel logistic regression models. These ancillary analyses permit analysts to estimate the marginal or population-average effect of covariates measured at the subject and cluster level, in contrast to the within-cluster or cluster-specific effects arising from the original multilevel logistic regression model. We describe the interval odds ratio and the proportion of opposed odds ratios, which are summary measures of effect for cluster-level covariates. We describe the variance partition coefficient and the median odds ratio, which are measures of components of variance and heterogeneity in outcomes. These measures allow one to quantify the magnitude of the general contextual effect. We describe an R² measure that allows analysts to quantify the proportion of variation explained by different multilevel logistic regression models. We illustrate the application and interpretation of these measures by analyzing mortality in patients hospitalized with a diagnosis of acute myocardial infarction. © 2017 The Authors. Statistics in Medicine published by John Wiley & Sons Ltd.
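Two of the summary measures described above have simple closed forms for a two-level random-intercept logistic model and can be computed from the estimated between-cluster variance on the log-odds scale. The sketch below, with a hypothetical variance value, uses the latent-variable variance partition coefficient VPC = σ²_u/(σ²_u + π²/3) and the median odds ratio MOR = exp(√(2σ²_u)·Φ⁻¹(0.75)).

```python
import numpy as np
from scipy.stats import norm

def variance_partition_coefficient(sigma2_u):
    """Latent-scale VPC for a two-level logistic model:
    between-cluster variance over total (cluster + logistic residual) variance."""
    return sigma2_u / (sigma2_u + np.pi ** 2 / 3.0)

def median_odds_ratio(sigma2_u):
    """MOR: median odds ratio when comparing two randomly chosen clusters,
    higher-propensity cluster versus lower-propensity cluster."""
    return np.exp(np.sqrt(2.0 * sigma2_u) * norm.ppf(0.75))

sigma2_u = 0.35  # hypothetical random-intercept variance from a fitted model
print(f"VPC = {variance_partition_coefficient(sigma2_u):.3f}")
print(f"MOR = {median_odds_ratio(sigma2_u):.2f}")
```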
Accounting for informatively missing data in logistic regression by means of reassessment sampling.
Lin, Ji; Lyles, Robert H
2015-05-20
We explore the 'reassessment' design in a logistic regression setting, where a second wave of sampling is applied to recover a portion of the missing data on a binary exposure and/or outcome variable. We construct a joint likelihood function based on the original model of interest and a model for the missing data mechanism, with emphasis on non-ignorable missingness. The estimation is carried out by numerical maximization of the joint likelihood function with close approximation of the accompanying Hessian matrix, using sharable programs that take advantage of general optimization routines in standard software. We show how likelihood ratio tests can be used for model selection and how they facilitate direct hypothesis testing for whether missingness is at random. Examples and simulations are presented to demonstrate the performance of the proposed method. Copyright © 2015 John Wiley & Sons, Ltd.
NASA Astrophysics Data System (ADS)
Karami, K.; Mohebi, R.
2007-08-01
We introduce a new method to derive the orbital parameters of spectroscopic binary stars by nonlinear least squares of (O-C). Using the measured radial velocity data of the four double-lined spectroscopic binary systems AI Phe, GM Dra, HD 93917 and V502 Oph, we derived both the orbital and combined spectroscopic elements of these systems. Our numerical results are in good agreement with those obtained using the method of Lehmann-Filhés.
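A minimal sketch of the nonlinear least-squares idea: fit a Keplerian radial-velocity curve to one component by minimizing the (O-C) residuals, solving Kepler's equation numerically at each epoch. The observations, starting values, and parameter choices below are hypothetical and unrelated to the systems analysed in the paper.

```python
import numpy as np
from scipy.optimize import least_squares

def kepler_solve(M, e, tol=1e-10):
    """Solve Kepler's equation E - e*sin(E) = M by Newton iteration."""
    E = M.copy()
    for _ in range(50):
        dE = (E - e * np.sin(E) - M) / (1.0 - e * np.cos(E))
        E -= dE
        if np.max(np.abs(dE)) < tol:
            break
    return E

def radial_velocity(t, period, t_peri, e, omega, K, gamma):
    """Keplerian RV of one component: V = gamma + K*[cos(nu + omega) + e*cos(omega)]."""
    M = 2.0 * np.pi * (t - t_peri) / period
    E = kepler_solve(np.mod(M, 2.0 * np.pi), e)
    nu = 2.0 * np.arctan2(np.sqrt(1.0 + e) * np.sin(E / 2.0),
                          np.sqrt(1.0 - e) * np.cos(E / 2.0))
    return gamma + K * (np.cos(nu + omega) + e * np.cos(omega))

def o_minus_c(params, t, v_obs):
    period, t_peri, e, omega, K, gamma = params
    return v_obs - radial_velocity(t, period, t_peri, e, omega, K, gamma)

# Hypothetical observations (days, km/s) and starting values
t_obs = np.linspace(0.0, 40.0, 25)
v_obs = radial_velocity(t_obs, 24.6, 3.0, 0.19, 1.9, 49.0, 0.6) \
        + np.random.default_rng(1).normal(0.0, 0.5, t_obs.size)
x0 = [25.0, 2.0, 0.1, 2.0, 45.0, 0.0]
fit = least_squares(o_minus_c, x0, args=(t_obs, v_obs),
                    bounds=([1, -50, 0, -np.pi, 0, -100],
                            [100, 50, 0.9, np.pi, 200, 100]))
print(dict(zip(["P", "T0", "e", "omega", "K", "gamma"], fit.x.round(3))))
```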
Falk Delgado, Alberto; Falk Delgado, Anna
2017-07-26
To describe the prevalence and types of conflicts of interest (COI) in published randomized controlled trials (RCTs) with a binary primary outcome in general medical journals, and to assess the association between conflicts of interest and favorable outcome. Parallel-group RCTs with a binary primary outcome published in three general medical journals during 2013-2015 were identified. COI type, funding source, and outcome were extracted. A binomial logistic regression model was fitted to assess the association of COI and funding source with outcome. A total of 509 consecutive parallel-group RCTs were included in the study. COI was reported in 74% of mixed-funded RCTs and in 99% of for-profit funded RCTs. Stock ownership was reported in none of the non-profit RCTs, in 7% of mixed-funded RCTs, and in 50% of for-profit funded RCTs. Employees of the funding company were among the authors in 11% of mixed-funded RCTs and in 76% of for-profit RCTs. Multivariable logistic regression revealed that stock ownership in the funding company among any of the authors was associated with a favorable outcome (odds ratio = 3.53; 95% confidence interval = 1.59-7.86; p < 0.01). COI in for-profit funded RCTs is extensive. Because the factors related to COI are not fully independent, the multivariable analysis should be interpreted cautiously. However, after multivariable adjustment, only stock ownership in the funding company among authors was associated with a favorable outcome.
A general equation to obtain multiple cut-off scores on a test from multinomial logistic regression.
Bersabé, Rosa; Rivas, Teresa
2010-05-01
The authors derive a general equation to compute multiple cut-offs on a total test score in order to classify individuals into more than two ordinal categories. The equation is derived from the multinomial logistic regression (MLR) model, which is an extension of the binary logistic regression (BLR) model to accommodate polytomous outcome variables. From this analytical procedure, cut-off scores are established at the test score (the predictor variable) at which an individual is as likely to be in category j as in category j+1 of an ordinal outcome variable. The application of the complete procedure is illustrated by an example with data from an actual study on eating disorders. In this example, two cut-off scores on the Eating Attitudes Test (EAT-26) scores are obtained in order to classify individuals into three ordinal categories: asymptomatic, symptomatic and eating disorder. Diagnoses were made from the responses to a self-report (Q-EDD) that operationalises DSM-IV criteria for eating disorders. Alternatives to the MLR model to set multiple cut-off scores are discussed.
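Under a baseline-category multinomial logit with a single total-score predictor, the cut-off between adjacent categories is the score at which the two category probabilities are equal, which reduces to a ratio of coefficient differences. The sketch below uses hypothetical coefficients, not the EAT-26 estimates from the study.

```python
import numpy as np

def adjacent_cutoff(alpha_j, beta_j, alpha_k, beta_k):
    """Test score x* at which P(Y=j|x) = P(Y=k|x) under a baseline-category
    multinomial logit: alpha_j + beta_j*x = alpha_k + beta_k*x."""
    return (alpha_j - alpha_k) / (beta_k - beta_j)

# Hypothetical MLR coefficients (category 0 = asymptomatic is the baseline):
# log[P(Y=1)/P(Y=0)] = a1 + b1*x  (symptomatic)
# log[P(Y=2)/P(Y=0)] = a2 + b2*x  (eating disorder)
a1, b1 = -4.0, 0.25
a2, b2 = -9.0, 0.45

cut_01 = -a1 / b1                      # P(Y=1) = P(Y=0)  =>  a1 + b1*x = 0
cut_12 = adjacent_cutoff(a1, b1, a2, b2)
print(f"cut-off asymptomatic/symptomatic: {cut_01:.1f}")
print(f"cut-off symptomatic/eating disorder: {cut_12:.1f}")
```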
Is parenting style a predictor of suicide attempts in a representative sample of adolescents?
Donath, Carolin; Graessel, Elmar; Baier, Dirk; Bleich, Stefan; Hillemacher, Thomas
2014-04-26
Suicidal ideation and suicide attempts are serious but not rare conditions in adolescents. However, there are several research and practical suicide-prevention initiatives that discuss the possibility of preventing serious self-harm. Profound knowledge about risk and protective factors is therefore necessary. The aim of this study is a) to clarify the role of parenting behavior and parenting styles in adolescents' suicide attempts and b) to identify other statistically significant and clinically relevant risk and protective factors for suicide attempts in a representative sample of German adolescents. In the years 2007/2008, a representative written survey of N = 44,610 students in the 9th grade of different school types in Germany was conducted. In this survey, the lifetime prevalence of suicide attempts was investigated as well as potential predictors including parenting behavior. A three-step statistical analysis was carried out: I) As basic model, the association between parenting and suicide attempts was explored via binary logistic regression controlled for age and sex. II) The predictive values of 13 additional potential risk/protective factors were analyzed with single binary logistic regression analyses for each predictor alone. Non-significant predictors were excluded in Step III. III) In a multivariate binary logistic regression analysis, all significant predictor variables from Step II and the parenting styles were included after testing for multicollinearity. Three parental variables showed a relevant association with suicide attempts in adolescents - (all protective): mother's warmth and father's warmth in childhood and mother's control in adolescence (Step I). In the full model (Step III), Authoritative parenting (protective: OR: .79) and Rejecting-Neglecting parenting (risk: OR: 1.63) were identified as significant predictors (p < .001) for suicidal attempts. Seven further variables were interpreted to be statistically significant and clinically relevant: ADHD, female sex, smoking, Binge Drinking, absenteeism/truancy, migration background, and parental separation events. Parenting style does matter. While children of Authoritative parents profit, children of Rejecting-Neglecting parents are put at risk - as we were able to show for suicide attempts in adolescence. Some of the identified risk factors contribute new knowledge and potential areas of intervention for special groups such as migrants or children diagnosed with ADHD.
Hughes, James P.; Haley, Danielle F.; Frew, Paula M.; Golin, Carol E.; Adimora, Adaora A; Kuo, Irene; Justman, Jessica; Soto-Torres, Lydia; Wang, Jing; Hodder, Sally
2015-01-01
Purpose Reductions in risk behaviors are common following enrollment in HIV prevention studies. We develop methods to quantify the proportion of change in risk behaviors that can be attributed to regression to the mean versus study participation and other factors. Methods A novel model that incorporates both regression to the mean and study participation effects is developed for binary measures. The model is used to estimate the proportion of change in the prevalence of “unprotected sex in the past 6 months” that can be attributed to study participation versus regression to the mean in a longitudinal cohort of women at risk for HIV infection who were recruited from ten US communities with high rates of HIV and poverty. HIV risk behaviors were evaluated using audio computer-assisted self-interviews at baseline and every 6 months for up to 12 months. Results The prevalence of “unprotected sex in the past 6 months” declined from 96% at baseline to 77% at 12 months. However, this change could be almost completely explained by regression to the mean. Conclusions Analyses that examine changes over time in cohorts selected for high or low risk behaviors should account for regression to the mean effects. PMID:25883065
Smith, E M D; Jorgensen, A L; Beresford, M W
2017-10-01
Background Lupus nephritis (LN) affects up to 80% of juvenile-onset systemic lupus erythematosus (JSLE) patients. The value of commonly available biomarkers, such as anti-dsDNA antibodies, complement (C3/C4), ESR and full blood count parameters in the identification of active LN remains uncertain. Methods Participants from the UK JSLE Cohort Study, aged <16 years at diagnosis, were categorized as having active or inactive LN according to the renal domain of the British Isles Lupus Assessment Group score. Classic biomarkers: anti-dsDNA, C3, C4, ESR, CRP, haemoglobin, total white cells, neutrophils, lymphocytes, platelets and immunoglobulins were assessed for their ability to identify active LN using binary logistic regression modeling, with stepAIC function applied to select a final model. Receiver-operating curve analysis was used to assess diagnostic accuracy. Results A total of 370 patients were recruited; 191 (52%) had active LN and 179 (48%) had inactive LN. Binary logistic regression modeling demonstrated a combination of ESR, C3, white cell count, neutrophils, lymphocytes and IgG to be best for the identification of active LN (area under the curve 0.724). Conclusions At best, combining common classic blood biomarkers of lupus activity using multivariate analysis provides a 'fair' ability to identify active LN. Urine biomarkers were not included in these analyses. These results add to the concern that classic blood biomarkers are limited in monitoring discrete JSLE manifestations such as LN.
Islam Mondal, Md. Nazrul; Nasir Ullah, Md. Monzur Morshad; Khan, Md. Nuruzzaman; Islam, Mohammad Zamirul; Islam, Md. Nurul; Moni, Sabiha Yasmin; Hoque, Md. Nazrul; Rahman, Md. Mashiur
2015-01-01
Background: Reproductive health (RH) is a critical component of women’s health and overall well-being around the world, especially in developing countries. We examine the factors that determine knowledge of RH care among female university students in Bangladesh. Methods: Data on 300 female students were collected from Rajshahi University, Bangladesh through a structured questionnaire using purposive sampling technique. The data were used for univariate analysis, to carry out the description of the variables; bivariate analysis was used to examine the associations between the variables; and finally, multivariate analysis (binary logistic regression model) was used to examine and fit the model and interpret the parameter estimates, especially in terms of odds ratios. Results: The results revealed that more than one-third (34.3%) respondents do not have sufficient knowledge of RH care. The χ2-test identified the significant (p < 0.05) associations between respondents’ knowledge of RH care with respondents’ age, education, family type, watching television; and knowledge about pregnancy, family planning, and contraceptive use. Finally, the binary logistic regression model identified respondents’ age, education, family type; and knowledge about family planning, and contraceptive use as the significant (p < 0.05) predictors of RH care. Conclusions and Global Health Implications: Knowledge of RH care among female university students was found unsatisfactory. Government and concerned organizations should promote and strengthen various health education programs to focus on RH care especially for the female university students in Bangladesh. PMID:27622005
Depression and incident dementia. An 8-year population-based prospective study.
Luppa, Melanie; Luck, Tobias; Ritschel, Franziska; Angermeyer, Matthias C; Villringer, Arno; Riedel-Heller, Steffi G
2013-01-01
The aim of the study was to investigate the impact of depression (categorical diagnosis; major depression, MD) and depressive symptoms (dimensional diagnosis and symptom patterns) on incident dementia in the German general population. Within the Leipzig Longitudinal Study of the Aged (LEILA 75+), a representative sample of 1,265 individuals aged 75 years and older were interviewed every 1.5 years over 8 years (mean observation time 4.3 years; mean number of visits 4.2). Cox proportional hazards and binary logistic regressions were used to estimate the effect of baseline depression and depressive symptoms on incident dementia. The incidence of dementia was 48 per 1,000 person-years (95% confidence interval (CI) 45-51). Depressive symptoms (Hazard ratio HR 1.03, 95% CI 1.01-1.05), and in particular mood-related symptoms (HR 1.08, 95% CI 1.03-1.14), showed a significant impact on the incidence of dementia only in univariate analysis, but not after adjustment for cognitive and functional impairment. MD showed only a significant impact on incidence of dementia in Cox proportional hazards regression, but not in binary logistic regression models. The present study using different diagnostic measures of depression on future dementia found no clear significant associations of depression and incident dementia. Further in-depth investigation would help to understand the nature of depression in the context of incident dementia.
Jacob, Michelle M.; Gonzales, Kelly L.; Calhoun, Darren; Beals, Janette; Muller, Clemma Jacobsen; Goldberg, Jack; Nelson, Lonnie; Welty, Thomas K.; Howard, Barbara V.
2013-01-01
Aims The aims of this paper are to examine the relationship between psychological trauma symptoms and Type 2 diabetes prevalence, glucose control, and treatment modality among 3,776 American Indians in Phase V of the Strong Heart Family Study. Methods This cross-sectional analysis measured psychological trauma symptoms using the National Anxiety Disorder Screening Day instrument, diabetes by American Diabetes Association criteria, and treatment modality by four categories: no medication, oral medication only, insulin only, or both oral medication and insulin. We used binary logistic regression to evaluate the association between psychological trauma symptoms and diabetes prevalence. We used ordinary least squares regression to evaluate the association between psychological trauma symptoms and glucose control. We used binary logistic regression to model the association of psychological trauma symptoms with treatment modality. Results Neither diabetes prevalence (22-31%; p = 0.19) nor control (8.0-8.6; p = 0.25) varied significantly by psychological trauma symptoms categories. However, diabetes treatment modality was associated with psychological trauma symptoms categories, as people with greater burden used either no medication, or both oral and insulin medications (odds ratio = 3.1, p < 0.001). Conclusions The positive relationship between treatment modality and psychological trauma symptoms suggests future research investigate patient and provider treatment decision making. PMID:24051029
NASA Astrophysics Data System (ADS)
Ramesh, S. T.; Rameshbabu, N.; Gandhimathi, R.; Nidheesh, P. V.; Srikanth Kumar, M.
2012-09-01
Removal of heavy metals is very important with respect to environmental considerations. This study investigated the sorption of copper (Cu) and zinc (Zn) in single and binary aqueous systems onto laboratory-prepared hydroxyapatite (HA) surfaces. Batch experiments were carried out using synthetic HA at 30 °C. Parameters that influence adsorption, such as contact time, adsorbent dosage and pH of solution, were investigated. The maximum adsorption was found at contact times of 12 and 9 h, HA dosages of 0.4 and 0.7 g/l, and pH of 6 and 8 for Cu and Zn, respectively, in the single system. Adsorption kinetics data were analyzed using the pseudo-first-order, pseudo-second-order and intraparticle diffusion models. The results indicated that the adsorption kinetic data were best described by the pseudo-second-order model. Langmuir and Freundlich isotherm models were applied to analyze the adsorption data, and the Langmuir isotherm was found to be applicable to this adsorption system, in terms of relatively high regression values. The removal capacity of HA was found to be 125 mg of Cu/g and 30.3 mg of Zn/g in the single system, and 50 mg of Cu/g and 15.16 mg of Zn/g in the binary system. The results indicated that the HA used in this work proved to be an effective material for removing Cu and Zn from aqueous solutions.
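A minimal sketch of the kinetic and isotherm fitting described above, using nonlinear least squares on hypothetical batch data: the pseudo-second-order model q(t) = k₂qₑ²t/(1 + k₂qₑt), the Langmuir isotherm qₑ = q_max·K_L·C_e/(1 + K_L·C_e), and the Freundlich isotherm qₑ = K_F·C_e^(1/n).

```python
import numpy as np
from scipy.optimize import curve_fit

def pseudo_second_order(t, qe, k2):
    """q(t) = k2*qe^2*t / (1 + k2*qe*t)"""
    return k2 * qe ** 2 * t / (1.0 + k2 * qe * t)

def langmuir(Ce, qmax, KL):
    """qe = qmax*KL*Ce / (1 + KL*Ce)"""
    return qmax * KL * Ce / (1.0 + KL * Ce)

def freundlich(Ce, KF, n):
    """qe = KF * Ce^(1/n)"""
    return KF * Ce ** (1.0 / n)

# Hypothetical batch data: contact time (h) vs uptake (mg/g), and
# equilibrium concentration (mg/L) vs equilibrium uptake (mg/g)
t = np.array([0.5, 1, 2, 4, 6, 9, 12, 24])
q_t = np.array([38, 62, 85, 103, 112, 119, 122, 124])
Ce = np.array([2, 5, 10, 20, 40, 80, 150])
q_e = np.array([45, 72, 92, 108, 117, 122, 124])

(qe_fit, k2_fit), _ = curve_fit(pseudo_second_order, t, q_t, p0=[120, 0.01])
(qmax, KL), _ = curve_fit(langmuir, Ce, q_e, p0=[125, 0.1])
(KF, n), _ = curve_fit(freundlich, Ce, q_e, p0=[30, 3])

def r_squared(y, y_hat):
    ss_res = np.sum((y - y_hat) ** 2)
    return 1.0 - ss_res / np.sum((y - np.mean(y)) ** 2)

print(f"pseudo-second-order: qe={qe_fit:.1f} mg/g, k2={k2_fit:.4f}")
print(f"Langmuir:   qmax={qmax:.1f} mg/g, R^2={r_squared(q_e, langmuir(Ce, qmax, KL)):.3f}")
print(f"Freundlich: 1/n={1/n:.2f},  R^2={r_squared(q_e, freundlich(Ce, KF, n)):.3f}")
```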
Thakur, Jyoti; Pahuja, Sharvan Kumar; Pahuja, Roop
2017-01-01
In 2005, an international pediatric sepsis consensus conference defined systemic inflammatory response syndrome (SIRS) for children <18 years of age, but excluded premature infants. In 2012, Hofer et al. investigated the predictive power of SIRS for term neonates. In this paper, we examined the accuracy of SIRS in predicting sepsis in neonates, irrespective of their gestational age (i.e., pre-term, term, and post-term). We also created two prediction models, named Model A and Model B, using binary logistic regression. Both models performed better than SIRS. We also developed an android application so that physicians can easily use Model A and Model B in real-world scenarios. The sensitivity, specificity, positive likelihood ratio (PLR) and negative likelihood ratio (NLR) in cases of SIRS were 16.15%, 95.53%, 3.61, and 0.88, respectively, whereas they were 29.17%, 97.82%, 13.36, and 0.72, respectively, in the case of Model A, and 31.25%, 97.30%, 11.56, and 0.71, respectively, in the case of Model B. All models were significant with p < 0.001. PMID:29257099
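The reported performance measures follow directly from a 2×2 classification table. In the sketch below the cell counts are hypothetical, chosen only so that the derived metrics roughly reproduce the Model A figures quoted above.

```python
def diagnostic_metrics(tp, fn, fp, tn):
    """Sensitivity, specificity and likelihood ratios from a 2x2 table."""
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    plr = sens / (1.0 - spec)     # positive likelihood ratio
    nlr = (1.0 - sens) / spec     # negative likelihood ratio
    return sens, spec, plr, nlr

# Hypothetical 2x2 counts: true sepsis cases split into detected (tp) and missed (fn),
# non-sepsis neonates split into false alarms (fp) and correct negatives (tn)
sens, spec, plr, nlr = diagnostic_metrics(tp=28, fn=68, fp=10, tn=449)
print(f"sensitivity={sens:.2%} specificity={spec:.2%} PLR={plr:.2f} NLR={nlr:.2f}")
```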
Regression analysis for solving diagnosis problem of children's health
NASA Astrophysics Data System (ADS)
Cherkashina, Yu A.; Gerget, O. M.
2016-04-01
The paper presents the results of research devoted to the application of statistical techniques, namely regression analysis, to assess the health status of children in the neonatal period based on medical data (hemostatic parameters, blood test parameters, gestational age, vascular endothelial growth factor) measured at 3-5 days of life. A detailed description of the studied medical data is given. A binary logistic regression procedure is discussed. Basic results of the research are presented: a classification table of predicted versus observed values is shown, and the overall percentage of correct classification is determined. Regression coefficients are calculated and the regression equation is written from them. Based on the results of the logistic regression, ROC analysis was performed; the sensitivity and specificity of the model are calculated and ROC curves are constructed. These techniques allow the diagnosis of children's health with a high quality of recognition. The results contribute to the development of evidence-based medicine and are of high practical importance in the professional activity of the author.
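A minimal sketch of the workflow described (binary logistic regression, classification table, percentage correct, and ROC analysis), using scikit-learn on synthetic data rather than the neonatal measurements.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix, roc_auc_score, roc_curve

# Synthetic stand-in for the neonatal data: a few continuous predictors
# (e.g. hemostasis, blood counts, VEGF, gestational age) and a binary outcome.
rng = np.random.default_rng(0)
n = 300
X = rng.normal(size=(n, 4))
logits = 0.8 * X[:, 0] - 1.1 * X[:, 1] + 0.5 * X[:, 2]
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-logits)))

model = LogisticRegression().fit(X, y)
prob = model.predict_proba(X)[:, 1]

print("classification table:\n", confusion_matrix(y, prob >= 0.5))
print("overall % correct:", np.mean((prob >= 0.5) == y))
print("ROC AUC:", roc_auc_score(y, prob))
fpr, tpr, thresholds = roc_curve(y, prob)
# sensitivity = tpr and specificity = 1 - fpr at each threshold on the ROC curve
```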
Workie, Demeke Lakew; Zike, Dereje Tesfaye; Fenta, Haile Mekonnen; Mekonnen, Mulusew Admasu
2017-09-01
Unintended pregnancy related to unmet need is a worldwide problem that affects societies. The main objective of this study was to identify the prevalence and determinants of unmet need for family planning among women aged 15-49 in Ethiopia. Data were drawn from round 4 of the Performance Monitoring and Accountability 2020/Ethiopia survey, conducted in April 2016 among 7,494 women using two-stage stratified sampling. Bivariable and multivariable binary logistic regression models accounting for the complex sampling design were fitted. The prevalence of unmet need for family planning was 16.2% in Ethiopia. Women aged 15-24 years were 2.266 times more likely to have unmet need for family planning than women above 35 years. Women who were currently married were about 8 times more likely to have unmet need for family planning than never-married women. Women who had no under-five child had 0.125 times the odds of unmet need for family planning compared with those who had more than two children under five. The key determinants of unmet need for family planning in Ethiopia were residence, age, marital status, education, household members, birth events, and number of under-five children. Thus, the Government of Ethiopia should take immediate steps to address the causes of the high unmet need for family planning among women.
Multivariate meta-analysis using individual participant data.
Riley, R D; Price, M J; Jackson, D; Wardle, M; Gueyffier, F; Wang, J; Staessen, J A; White, I R
2015-06-01
When combining results across related studies, a multivariate meta-analysis allows the joint synthesis of correlated effect estimates from multiple outcomes. Joint synthesis can improve efficiency over separate univariate syntheses, may reduce selective outcome reporting biases, and enables joint inferences across the outcomes. A common issue is that within-study correlations needed to fit the multivariate model are unknown from published reports. However, provision of individual participant data (IPD) allows them to be calculated directly. Here, we illustrate how to use IPD to estimate within-study correlations, using a joint linear regression for multiple continuous outcomes and bootstrapping methods for binary, survival and mixed outcomes. In a meta-analysis of 10 hypertension trials, we then show how these methods enable multivariate meta-analysis to address novel clinical questions about continuous, survival and binary outcomes; treatment-covariate interactions; adjusted risk/prognostic factor effects; longitudinal data; prognostic and multiparameter models; and multiple treatment comparisons. Both frequentist and Bayesian approaches are applied, with example software code provided to derive within-study correlations and to fit the models. © 2014 The Authors. Research Synthesis Methods published by John Wiley & Sons, Ltd.
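The bootstrap idea for within-study correlations can be sketched for a single trial with one continuous and one binary outcome: resample patients, re-estimate both treatment effects, and correlate the estimates across replicates. The data-generating values and sample size below are hypothetical.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)
n = 400
treat = rng.integers(0, 2, n)
sbp_change = -5.0 * treat + rng.normal(0.0, 12.0, n)                   # continuous outcome
event = rng.binomial(1, 1.0 / (1.0 + np.exp(-(-1.0 - 0.4 * treat))))   # binary outcome
X = sm.add_constant(treat)

def effects(idx):
    """Treatment effect on each outcome, estimated on one bootstrap sample."""
    beta_cont = sm.OLS(sbp_change[idx], X[idx]).fit().params[1]
    beta_bin = sm.Logit(event[idx], X[idx]).fit(disp=False).params[1]
    return beta_cont, beta_bin

boot = np.array([effects(rng.integers(0, n, n)) for _ in range(200)])
within_corr = np.corrcoef(boot[:, 0], boot[:, 1])[0, 1]
print("bootstrap within-study correlation:", round(within_corr, 3))
```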
Akkus, Zeki; Camdeviren, Handan; Celik, Fatma; Gur, Ali; Nas, Kemal
2005-09-01
To determine the risk factors of osteoporosis using a multiple binary logistic regression method and to assess the risk variables for osteoporosis, which is a major and growing health problem in many countries. We present a case-control study consisting of 126 postmenopausal healthy women as the control group and 225 postmenopausal osteoporotic women as the case group. The study was carried out in the Department of Physical Medicine and Rehabilitation, Dicle University, Diyarbakir, Turkey between 1999-2002. The data from the 351 participants were collected using a standard questionnaire that contains 43 variables. A multiple logistic regression model was then used to evaluate the data and to find the best regression model. We classified 80.1% (281/351) of the participants correctly using the regression model. Furthermore, the specificity value of the model was 67% (84/126) of the control group, while the sensitivity value was 88% (197/225) of the case group. Using the Kolmogorov-Smirnov test, we found the distribution of the standardized residuals for the final model to be exponential (p=0.193). The receiver operating characteristic curve was found to be successful in predicting patients at risk for osteoporosis. This study suggests that low levels of dietary calcium intake, physical activity, and education, and a longer duration of menopause are independent predictors of the risk of low bone density in our population. Adequate dietary calcium intake in combination with maintaining daily physical activity, increasing educational level, and decreasing the birth rate and the duration of breast-feeding may contribute to healthy bones and play a role in the practical prevention of osteoporosis in Southeast Anatolia. In addition, the findings of the present study indicate that the use of a multivariate statistical method such as multiple logistic regression in osteoporosis, which may be influenced by many variables, is better than univariate statistical evaluation.
Logistic Mixed Models to Investigate Implicit and Explicit Belief Tracking.
Lages, Martin; Scheel, Anne
2016-01-01
We investigated the proposition of a two-systems Theory of Mind in adults' belief tracking. A sample of N = 45 participants predicted the choice of one of two opponent players after observing several rounds in an animated card game. Three matches of this card game were played, and initial gaze direction on target and subsequent choice predictions were recorded for each belief task and participant. We conducted logistic regressions with mixed effects on the binary data and developed Bayesian logistic mixed models to infer implicit and explicit mentalizing in true belief and false belief tasks. Although logistic regressions with mixed effects predicted the data well, a Bayesian logistic mixed model with latent task- and subject-specific parameters gave a better account of the data. As expected, explicit choice predictions suggested a clear understanding of true and false beliefs (TB/FB). Surprisingly, however, model parameters for initial gaze direction also indicated belief tracking. We discuss why task-specific parameters for initial gaze directions are different from choice predictions yet reflect second-order perspective taking.
Reduction from cost-sensitive ordinal ranking to weighted binary classification.
Lin, Hsuan-Tien; Li, Ling
2012-05-01
We present a reduction framework from ordinal ranking to binary classification. The framework consists of three steps: extracting extended examples from the original examples, learning a binary classifier on the extended examples with any binary classification algorithm, and constructing a ranker from the binary classifier. Based on the framework, we show that a weighted 0/1 loss of the binary classifier upper-bounds the mislabeling cost of the ranker, both error-wise and regret-wise. Our framework allows not only the design of good ordinal ranking algorithms based on well-tuned binary classification approaches, but also the derivation of new generalization bounds for ordinal ranking from known bounds for binary classification. In addition, our framework unifies many existing ordinal ranking algorithms, such as perceptron ranking and support vector ordinal regression. When compared empirically on benchmark data sets, some of our newly designed algorithms enjoy advantages in terms of both training speed and generalization performance over existing algorithms. In addition, the newly designed algorithms lead to better cost-sensitive ordinal ranking performance, as well as improved listwise ranking performance.
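A simplified sketch of the reduction idea for K ordered ranks under an absolute cost: each threshold k yields a binary question "is the rank above k?", and the predicted rank is obtained by counting positive answers. For brevity this version trains one logistic classifier per threshold rather than a single classifier on weighted extended examples, so it follows the framework only in spirit and does not enforce monotonicity across thresholds.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def to_binary_problems(X, y, K):
    """For each threshold k = 1..K-1, the binary label is 1 if y > k."""
    return [(X, (y > k).astype(int)) for k in range(1, K)]

def fit_threshold_classifiers(X, y, K):
    return [LogisticRegression().fit(Xk, yk) for Xk, yk in to_binary_problems(X, y, K)]

def predict_rank(clfs, X):
    """Rank = 1 + number of thresholds the example is predicted to exceed."""
    votes = np.sum([clf.predict(X) for clf in clfs], axis=0)
    return 1 + votes

# Synthetic ordinal data with K = 4 ranks driven by a latent score
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
latent = X @ np.array([1.0, -0.7, 0.4]) + rng.normal(scale=0.5, size=500)
y = np.digitize(latent, bins=[-1.0, 0.0, 1.0]) + 1   # ranks 1..4

clfs = fit_threshold_classifiers(X, y, K=4)
y_hat = predict_rank(clfs, X)
print("mean absolute rank error:", np.mean(np.abs(y_hat - y)))
```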
Siebers, Nina; Kruse, Jens; Eckhardt, Kai-Uwe; Hu, Yongfeng; Leinweber, Peter
2012-07-01
Cadmium (Cd) has a high toxicity, and resolving its speciation in soil is challenging but essential for estimating the environmental risk. In this study partial least-squares (PLS) regression was tested for its capability to deconvolute Cd L₃-edge X-ray absorption near-edge structure (XANES) spectra of multi-compound mixtures. For this, a library of Cd reference compound spectra and a spectrum of a soil sample were acquired. A good coefficient of determination (R²) of Cd compounds in mixtures was obtained for the PLS model using binary and ternary mixtures of various Cd reference compounds, proving the validity of this approach. In order to describe complex systems like soil, multi-compound mixtures of a variety of Cd compounds must be included in the PLS model. The obtained PLS regression model was then applied to a highly Cd-contaminated soil, revealing Cd₃(PO₄)₂ (36.1%), Cd(NO₃)₂·4H₂O (24.5%), Cd(OH)₂ (21.7%), CdCO₃ (17.1%) and CdCl₂ (0.4%). These preliminary results proved that PLS regression is a promising approach for a direct determination of Cd speciation in the solid phase of a soil sample.
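A minimal sketch of the calibration idea with scikit-learn's PLSRegression: build synthetic mixture spectra as linear combinations of reference spectra, fit a PLS model mapping spectra to component fractions, and predict the composition of an "unknown". The spectra here are synthetic Gaussian stand-ins, not XANES data.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(42)
energy = np.linspace(0.0, 1.0, 200)          # arbitrary energy grid

def gaussian(center, width):
    return np.exp(-0.5 * ((energy - center) / width) ** 2)

# Synthetic "reference spectra" for three compounds (stand-ins for Cd species)
refs = np.vstack([gaussian(0.3, 0.05) + 0.4 * gaussian(0.6, 0.08),
                  gaussian(0.45, 0.06),
                  gaussian(0.7, 0.05) + 0.3 * gaussian(0.2, 0.1)])

# Calibration set: random binary/ternary mixtures plus noise
fractions = rng.dirichlet(np.ones(3), size=60)
spectra = fractions @ refs + rng.normal(0.0, 0.01, size=(60, energy.size))

pls = PLSRegression(n_components=5).fit(spectra, fractions)

# "Unknown" sample with true composition 0.5 / 0.3 / 0.2
unknown = np.array([0.5, 0.3, 0.2]) @ refs + rng.normal(0.0, 0.01, energy.size)
print("estimated fractions:", pls.predict(unknown.reshape(1, -1)).round(3))
```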
ATLS Hypovolemic Shock Classification by Prediction of Blood Loss in Rats Using Regression Models.
Choi, Soo Beom; Choi, Joon Yul; Park, Jee Soo; Kim, Deok Won
2016-07-01
In our previous study, our input data set consisted of 78 rats, the blood loss in percent as a dependent variable, and 11 independent variables (heart rate, systolic blood pressure, diastolic blood pressure, mean arterial pressure, pulse pressure, respiration rate, temperature, perfusion index, lactate concentration, shock index, and new index (lactate concentration/perfusion)). The machine learning methods for multicategory classification were applied to a rat model in acute hemorrhage to predict the four Advanced Trauma Life Support (ATLS) hypovolemic shock classes for triage in our previous study. However, multicategory classification is much more difficult and complicated than binary classification. We introduce a simple approach for classifying ATLS hypovolaemic shock class by predicting blood loss in percent using support vector regression and multivariate linear regression (MLR). We also compared the performance of the classification models using absolute and relative vital signs. The accuracies of support vector regression and MLR models with relative values by predicting blood loss in percent were 88.5% and 84.6%, respectively. These were better than the best accuracy of 80.8% of the direct multicategory classification using the support vector machine one-versus-one model in our previous study for the same validation data set. Moreover, the simple MLR models with both absolute and relative values could provide possibility of the future clinical decision support system for ATLS classification. The perfusion index and new index were more appropriate with relative changes than absolute values.
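A minimal sketch of the two-step idea: regress blood loss in percent on (relative) vital signs, then map the prediction to an ATLS class. The data are synthetic, and the class boundaries of 15/30/40% blood loss are assumed here as the commonly quoted ATLS thresholds rather than taken from the paper.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.svm import SVR

def atls_class(blood_loss_pct):
    """Map predicted blood loss (%) to ATLS class I-IV (thresholds assumed 15/30/40%)."""
    return np.digitize(blood_loss_pct, bins=[15.0, 30.0, 40.0]) + 1

# Synthetic stand-in: relative changes in vital signs vs. blood loss in percent
rng = np.random.default_rng(3)
n = 78
X = rng.normal(size=(n, 5))        # e.g. relative HR, SBP, pulse pressure, lactate, perfusion index
blood_loss = np.clip(25 + 10 * X[:, 0] - 8 * X[:, 1] + rng.normal(0, 3, n), 0, 55)

mlr = LinearRegression().fit(X, blood_loss)
svr = SVR(kernel="rbf", C=10.0).fit(X, blood_loss)

for name, model in [("MLR", mlr), ("SVR", svr)]:
    pred_class = atls_class(model.predict(X))
    true_class = atls_class(blood_loss)
    print(name, "class accuracy:", np.mean(pred_class == true_class).round(3))
```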
Li, Baoyue; Lingsma, Hester F; Steyerberg, Ewout W; Lesaffre, Emmanuel
2011-05-23
Logistic random effects models are a popular tool to analyze multilevel (also called hierarchical) data with a binary or ordinal outcome. Here, we aim to compare different statistical software implementations of these models. We used individual patient data from 8509 patients in 231 centers with moderate and severe Traumatic Brain Injury (TBI) enrolled in eight Randomized Controlled Trials (RCTs) and three observational studies. We fitted logistic random effects regression models with the 5-point Glasgow Outcome Scale (GOS) as outcome, both dichotomized and ordinal, with center and/or trial as random effects, and with age, motor score, pupil reactivity or trial as covariates. We then compared the implementations of frequentist and Bayesian methods to estimate the fixed and random effects. Frequentist approaches included R (lme4), Stata (GLLAMM), SAS (GLIMMIX and NLMIXED), MLwiN ([R]IGLS) and MIXOR; Bayesian approaches included WinBUGS, MLwiN (MCMC), the R package MCMCglmm and the SAS experimental procedure MCMC. Three data sets (the full data set and two sub-datasets) were analysed using essentially two logistic random effects models, with either one random effect for the center or two random effects for center and trial. For the ordinal outcome in the full data set, a proportional odds model with a random center effect was also fitted. The packages gave similar parameter estimates for both the fixed and random effects and for the binary (and ordinal) models for the main study, and when based on a relatively large number of level-1 (patient level) data compared to the number of level-2 (hospital level) data. However, when based on a relatively sparse data set, i.e. when the numbers of level-1 and level-2 data units were about the same, the frequentist and Bayesian approaches showed somewhat different results. The software implementations differ considerably in flexibility, computation time, and usability. There are also differences in the availability of additional tools for model evaluation, such as diagnostic plots. The experimental SAS (version 9.2) procedure MCMC appeared to be inefficient. On relatively large data sets, the different software implementations of logistic random effects regression models produced similar results. Thus, for a large data set there seems to be no explicit preference (provided there is no preference from a philosophical point of view) for either a frequentist or Bayesian approach (if based on vague priors). The choice of a particular implementation may largely depend on the desired flexibility and the usability of the package. For small data sets the random effects variances are difficult to estimate. In the frequentist approaches the MLE of this variance was often estimated as zero, with a standard error that was either zero or could not be determined, while for Bayesian methods the estimates could depend on the chosen "non-informative" prior of the variance parameter. The starting value for the variance parameter may also be critical for the convergence of the Markov chain.
The crux of the method: assumptions in ordinary least squares and logistic regression.
Long, Rebecca G
2008-10-01
Logistic regression has increasingly become the tool of choice when analyzing data with a binary dependent variable. While resources relating to the technique are widely available, clear discussions of why logistic regression should be used in place of ordinary least squares regression are difficult to find. The current paper compares and contrasts the assumptions of ordinary least squares with those of logistic regression and explains why logistic regression's looser assumptions make it adept at handling violations of the more important assumptions in ordinary least squares.
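A small illustration of one of the points above: applying ordinary least squares to a binary outcome (the linear probability model) can produce fitted "probabilities" outside [0, 1], whereas logistic regression is bounded by construction. Synthetic data; statsmodels is assumed.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(7)
x = rng.normal(size=500)
p = 1.0 / (1.0 + np.exp(-(0.5 + 2.0 * x)))   # true probabilities
y = rng.binomial(1, p)
X = sm.add_constant(x)

ols = sm.OLS(y, X).fit()                      # linear probability model
logit = sm.Logit(y, X).fit(disp=False)

ols_fit = ols.predict(X)
logit_fit = logit.predict(X)
print("OLS fitted values outside [0,1]:", np.mean((ols_fit < 0) | (ols_fit > 1)).round(3))
print("logit fitted values outside [0,1]:", np.mean((logit_fit < 0) | (logit_fit > 1)))
print("logit coefficients:", logit.params.round(2))
```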
Bayesian Analysis of High Dimensional Classification
NASA Astrophysics Data System (ADS)
Mukhopadhyay, Subhadeep; Liang, Faming
2009-12-01
Modern data mining and bioinformatics have presented an important playground for statistical learning techniques, where the number of input variables is possibly much larger than the sample size of the training data. In supervised learning, logistic regression or probit regression can be used to model a binary output and form perceptron classification rules based on Bayesian inference. In these cases, there is considerable interest in searching for sparse models in the high-dimensional regression/classification setup. We first discuss two common challenges for analyzing high-dimensional data. The first is the curse of dimensionality: the complexity of many existing algorithms scales exponentially with the dimensionality of the space, so the algorithms soon become computationally intractable and therefore inapplicable in many real applications. The second is multicollinearity among the predictors, which severely slows down the algorithms. In order to make Bayesian analysis operational in high dimensions we propose a novel Hierarchical Stochastic Approximation Monte Carlo (HSAMC) algorithm, which overcomes the curse of dimensionality and the multicollinearity of predictors in high dimensions, and possesses a self-adjusting mechanism to avoid local minima separated by high energy barriers. Models and methods are illustrated by simulations inspired by the field of genomics. Numerical results indicate that HSAMC can work as a general model selection sampler in high-dimensional complex model spaces.
Forbes, Andrew B; Akram, Muhammad; Pilcher, David; Cooper, Jamie; Bellomo, Rinaldo
2015-02-01
Cluster randomised crossover trials have been utilised in recent years in the health and social sciences. Methods for analysis have been proposed; however, for binary outcomes, these have received little assessment of their appropriateness. In addition, methods for determination of sample size are currently limited to balanced cluster sizes both between clusters and between periods within clusters. This article aims to extend this work to unbalanced situations and to evaluate the properties of a variety of methods for analysis of binary data, with a particular focus on the setting of potential trials of near-universal interventions in intensive care to reduce in-hospital mortality. We derive a formula for sample size estimation for unbalanced cluster sizes, and apply it to the intensive care setting to demonstrate the utility of the cluster crossover design. We conduct a numerical simulation of the design in the intensive care setting and for more general configurations, and we assess the performance of three cluster summary estimators and an individual-data estimator based on binomial-identity-link regression. For settings similar to the intensive care scenario involving large cluster sizes and small intra-cluster correlations, the sample size formulae developed and analysis methods investigated are found to be appropriate, with the unweighted cluster summary method performing well relative to the more optimal but more complex inverse-variance weighted method. More generally, we find that the unweighted and cluster-size-weighted summary methods perform well, with the relative efficiency of each largely determined systematically from the study design parameters. Performance of individual-data regression is adequate with small cluster sizes but becomes inefficient for large, unbalanced cluster sizes. When outcome prevalences are 6% or less and the within-cluster-within-period correlation is 0.05 or larger, all methods display sub-nominal confidence interval coverage, with the less prevalent the outcome the worse the coverage. As with all simulation studies, conclusions are limited to the configurations studied. We confined attention to detecting intervention effects on an absolute risk scale using marginal models and did not explore properties of binary random effects models. Cluster crossover designs with binary outcomes can be analysed using simple cluster summary methods, and sample size in unbalanced cluster size settings can be determined using relatively straightforward formulae. However, caution needs to be applied in situations with low prevalence outcomes and moderate to high intra-cluster correlations. © The Author(s) 2014.
Rupert, Michael G.; Cannon, Susan H.; Gartner, Joseph E.
2003-01-01
Logistic regression was used to predict the probability of debris flows occurring in areas recently burned by wildland fires. Multiple logistic regression is conceptually similar to multiple linear regression because statistical relations between one dependent variable and several independent variables are evaluated. In logistic regression, however, the dependent variable is transformed to a binary variable (debris flow did or did not occur), and the actual probability of the debris flow occurring is statistically modeled. Data from 399 basins located within 15 wildland fires that burned during 2000-2002 in Colorado, Idaho, Montana, and New Mexico were evaluated. More than 35 independent variables describing the burn severity, geology, land surface gradient, rainfall, and soil properties were evaluated. The models were developed as follows: (1) Basins that did and did not produce debris flows were delineated from National Elevation Data using a Geographic Information System (GIS). (2) Data describing the burn severity, geology, land surface gradient, rainfall, and soil properties were determined for each basin. These data were then downloaded to a statistics software package for analysis using logistic regression. (3) Relations between the occurrence/non-occurrence of debris flows and burn severity, geology, land surface gradient, rainfall, and soil properties were evaluated and several preliminary multivariate logistic regression models were constructed. All possible combinations of independent variables were evaluated to determine which combination produced the most effective model. The multivariate model that best predicted the occurrence of debris flows was selected. (4) The multivariate logistic regression model was entered into a GIS, and a map showing the probability of debris flows was constructed. The most effective model incorporates the percentage of each basin with slope greater than 30 percent, percentage of land burned at medium and high burn severity in each basin, particle size sorting, average storm intensity (millimeters per hour), soil organic matter content, soil permeability, and soil drainage. The results of this study demonstrate that logistic regression is a valuable tool for predicting the probability of debris flows occurring in recently-burned landscapes.
Hughes, James P; Haley, Danielle F; Frew, Paula M; Golin, Carol E; Adimora, Adaora A; Kuo, Irene; Justman, Jessica; Soto-Torres, Lydia; Wang, Jing; Hodder, Sally
2015-06-01
Reductions in risk behaviors are common following enrollment in human immunodeficiency virus (HIV) prevention studies. We develop methods to quantify the proportion of change in risk behaviors that can be attributed to regression to the mean versus study participation and other factors. A novel model that incorporates both regression to the mean and study participation effects is developed for binary measures. The model is used to estimate the proportion of change in the prevalence of "unprotected sex in the past 6 months" that can be attributed to study participation versus regression to the mean in a longitudinal cohort of women at risk for HIV infection who were recruited from ten U.S. communities with high rates of HIV and poverty. HIV risk behaviors were evaluated using audio computer-assisted self-interviews at baseline and every 6 months for up to 12 months. The prevalence of "unprotected sex in the past 6 months" declined from 96% at baseline to 77% at 12 months. However, this change could be almost completely explained by regression to the mean. Analyses that examine changes over time in cohorts selected for high- or low- risk behaviors should account for regression to the mean effects. Copyright © 2015 Elsevier Inc. All rights reserved.
Genome-wide regression and prediction with the BGLR statistical package.
Pérez, Paulino; de los Campos, Gustavo
2014-10-01
Many modern genomic data analyses require implementing regressions where the number of parameters (p, e.g., the number of marker effects) exceeds sample size (n). Implementing these large-p-with-small-n regressions poses several statistical and computational challenges, some of which can be confronted using Bayesian methods. This approach allows integrating various parametric and nonparametric shrinkage and variable selection procedures in a unified and consistent manner. The BGLR R-package implements a large collection of Bayesian regression models, including parametric variable selection and shrinkage methods and semiparametric procedures (Bayesian reproducing kernel Hilbert spaces regressions, RKHS). The software was originally developed for genomic applications; however, the methods implemented are useful for many nongenomic applications as well. The response can be continuous (censored or not) or categorical (either binary or ordinal). The algorithm is based on a Gibbs sampler with scalar updates and the implementation takes advantage of efficient compiled C and Fortran routines. In this article we describe the methods implemented in BGLR, present examples of the use of the package, and discuss practical issues emerging in real-data analysis. Copyright © 2014 by the Genetics Society of America.
Applied Statistics: From Bivariate through Multivariate Techniques [with CD-ROM
ERIC Educational Resources Information Center
Warner, Rebecca M.
2007-01-01
This book provides a clear introduction to widely used topics in bivariate and multivariate statistics, including multiple regression, discriminant analysis, MANOVA, factor analysis, and binary logistic regression. The approach is applied and does not require formal mathematics; equations are accompanied by verbal explanations. Students are asked…
London Measure of Unplanned Pregnancy: guidance for its use as an outcome measure
Hall, Jennifer A; Barrett, Geraldine; Copas, Andrew; Stephenson, Judith
2017-01-01
Background The London Measure of Unplanned Pregnancy (LMUP) is a psychometrically validated measure of the degree of intention of a current or recent pregnancy. The LMUP is increasingly being used worldwide, and can be used to evaluate family planning or preconception care programs. However, beyond recommending the use of the full LMUP scale, there is no published guidance on how to use the LMUP as an outcome measure. Ordinal logistic regression has been recommended informally, but studies published to date have all used binary logistic regression and dichotomized the scale at different cut points. There is thus a need for evidence-based guidance to provide a standardized methodology for multivariate analysis and to enable comparison of results. This paper makes recommendations for the regression method for analysis of the LMUP as an outcome measure. Materials and methods Data collected from 4,244 pregnant women in Malawi were used to compare five regression methods: linear, logistic with two cut points, and ordinal logistic with either the full or grouped LMUP score. The recommendations were then tested on the original UK LMUP data. Results There were small but no important differences in the findings across the regression models. Logistic regression resulted in the largest loss of information, and assumptions were violated for the linear and ordinal logistic regression. Consequently, robust standard errors were used for linear regression and a partial proportional odds ordinal logistic regression model attempted. The latter could only be fitted for grouped LMUP score. Conclusion We recommend the linear regression model with robust standard errors to make full use of the LMUP score when analyzed as an outcome measure. Ordinal logistic regression could be considered, but a partial proportional odds model with grouped LMUP score may be required. Logistic regression is the least-favored option, due to the loss of information. For logistic regression, the cut point for un/planned pregnancy should be between nine and ten. These recommendations will standardize the analysis of LMUP data and enhance comparability of results across studies. PMID:28435343
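The recommended analysis, linear regression on the full LMUP score with robust standard errors, can be sketched with statsmodels using a heteroscedasticity-consistent covariance estimator. The data frame, covariates, and coefficients below are hypothetical, not the Malawi or UK data.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical data: LMUP score (0-12) as the outcome, with age and parity as covariates
rng = np.random.default_rng(11)
n = 500
df = pd.DataFrame({
    "age": rng.integers(15, 45, n),
    "parity": rng.integers(0, 6, n),
})
df["lmup"] = np.clip(np.round(4 + 0.15 * df["age"] - 0.8 * df["parity"]
                              + rng.normal(0, 2.5, n)), 0, 12)

# OLS on the full score with heteroscedasticity-robust (HC1) standard errors
model = smf.ols("lmup ~ age + parity", data=df).fit(cov_type="HC1")
print(model.summary().tables[1])
```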
Investigation of shipping accident injury severity and mortality.
Weng, Jinxian; Yang, Dong
2015-03-01
Shipping movements are operated in a complex and high-risk environment. Fatal shipping accidents are the nightmares of seafarers. Using ten years of worldwide ship accident data, this study develops a binary logistic regression model and a zero-truncated binomial regression model to predict the probability of fatal shipping accidents and the corresponding mortalities. The model results show that both the probability of fatal accidents and the number of mortalities are greater for collision, fire/explosion, contact, grounding, and sinking accidents occurring in adverse weather and darkness conditions. Sinking has the largest effect on the increase in fatal accident probability and mortalities. The results also show that a larger number of mortalities is associated with shipping accidents occurring far away from the coastal area/harbor/port. In addition, cruise ships are found to have more mortalities than non-cruise ships. The results of this study are beneficial for policy-makers in proposing efficient strategies to prevent fatal shipping accidents. Copyright © 2015 Elsevier Ltd. All rights reserved.
Croker, Denise M; Hennigan, Michelle C; Maher, Anthony; Hu, Yun; Ryder, Alan G; Hodnett, Benjamin K
2012-04-07
Diffraction and spectroscopic methods were evaluated for quantitative analysis of binary powder mixtures of FII(6.403) and FIII(6.525) piracetam. The two polymorphs of piracetam could be distinguished using powder X-ray diffraction (PXRD), Raman and near-infrared (NIR) spectroscopy. The results demonstrated that Raman and NIR spectroscopy are most suitable for quantitative analysis of this polymorphic mixture. When the spectra are treated with the combination of multiplicative scatter correction (MSC) and second-derivative data pretreatments, the partial least squares (PLS) regression models gave root mean square errors of calibration (RMSEC) of 0.94 and 0.99%, respectively. FIII(6.525) demonstrated some preferred orientation in PXRD analysis, making PXRD the least preferred method of quantification. Copyright © 2012 Elsevier B.V. All rights reserved.
2013-01-01
Background Malnutrition is one of the principal causes of child mortality in developing countries, including Bangladesh. To our knowledge, most of the available studies that address the issue of malnutrition among under-five children consider categorical (dichotomous/polychotomous) outcome variables and apply logistic regression (binary/multinomial) to find their predictors. In this study the malnutrition variable (i.e., the outcome) is defined as the number of under-five malnourished children in a family, which is a non-negative count variable. The purposes of the study are (i) to demonstrate the applicability of the generalized Poisson regression (GPR) model as an alternative to other statistical methods and (ii) to find some predictors of this outcome variable. Methods The data are extracted from the Bangladesh Demographic and Health Survey (BDHS) 2007. Briefly, this survey employs a nationally representative sample based on a two-stage stratified sample of households. A total of 4,460 under-five children are analysed using various statistical techniques, namely the Chi-square test and the GPR model. Results The GPR model (as compared to the standard Poisson regression and negative binomial regression) is found to be justified for the above-mentioned outcome variable because of its under-dispersion (variance < mean) property. Our study also identifies several significant predictors of the outcome variable, namely mother's education, father's education, wealth index, sanitation status, source of drinking water, and total number of children ever born to a woman. Conclusions The consistency of our findings in light of many other studies suggests that the GPR model is an ideal alternative to other statistical models for analysing the number of under-five malnourished children in a family. Strategies based on significant predictors may improve the nutritional status of children in Bangladesh. PMID:23297699
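A sketch of the modeling step, assuming statsmodels' GeneralizedPoisson implementation (its last fitted parameter is the dispersion term, with negative values indicating under-dispersion). The covariates and data-generating mechanism below are synthetic stand-ins, not the BDHS data.

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.discrete.discrete_model import GeneralizedPoisson

# Synthetic stand-in: number of malnourished under-five children per family (0-3),
# with mother's education (years) and a wealth indicator as covariates
rng = np.random.default_rng(5)
n = 2000
educ = rng.integers(0, 13, n)
wealth = rng.integers(0, 2, n)
mu = np.exp(0.2 - 0.06 * educ - 0.3 * wealth)
counts = np.minimum(rng.poisson(mu), 3)      # capping the count induces under-dispersion

X = sm.add_constant(np.column_stack([educ, wealth]))
gp = GeneralizedPoisson(counts, X).fit(disp=False)
print(gp.summary())
# The last element of gp.params is the dispersion parameter; values below zero
# correspond to under-dispersion (variance < mean), the case motivating GPR here.
print("estimated dispersion parameter:", gp.params[-1].round(3))
```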
NASA Astrophysics Data System (ADS)
Cao, Zhoujian; Han, Wen-Biao
2017-08-01
Binary black hole systems are among the most important sources for gravitational wave detection. They are also good objects for theoretical research in general relativity. A gravitational waveform template is important for data analysis. The effective-one-body-numerical-relativity (EOBNR) model has played an essential role in the LIGO data analysis. For future space-based gravitational wave detection, many binary systems will admit some orbital eccentricity. At the same time, eccentric binaries are also an interesting topic for theoretical study in general relativity. In this paper, we construct the first eccentric binary waveform model based on an effective-one-body-numerical-relativity framework. Our basic assumption in the model construction is that the involved eccentricity is small. We have compared our eccentric EOBNR model to the circular one used in the LIGO data analysis. We have also tested our eccentric EOBNR model against another recently proposed eccentric binary waveform model, against numerical relativity simulation results, and against perturbation approximation results for extreme mass ratio binary systems. Compared to numerical relativity simulations with an eccentricity as large as about 0.2, the overlap factor for our eccentric EOBNR model is better than 0.98 for all tested cases, including spinless and spinning binaries, and equal-mass and unequal-mass binaries. Hopefully, our eccentric model can be the starting point for developing a faithful template for future space-based gravitational wave detectors.
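The overlap (match) statistic used to compare waveform models can be sketched as a normalized inner product maximized over relative time shift. The version below assumes a flat (white) noise spectrum, omits maximization over phase and any detector PSD weighting, and uses toy chirp-like signals rather than EOBNR output.

```python
import numpy as np

def overlap(h1, h2):
    """Match between two equal-length waveforms: normalized inner product
    maximized over relative time shift (white-noise approximation, no PSD weighting)."""
    H1, H2 = np.fft.rfft(h1), np.fft.rfft(h2)
    # Circular cross-correlation over all time shifts via the inverse FFT
    corr = np.fft.irfft(H1 * np.conj(H2), n=len(h1))
    norm = np.sqrt(np.sum(h1 ** 2) * np.sum(h2 ** 2))
    return np.max(np.abs(corr)) / norm

# Two toy chirp-like signals differing by a small frequency drift (stand-ins for
# waveforms from two template models; not actual EOBNR output)
t = np.linspace(0.0, 1.0, 4096)
h_a = np.sin(2 * np.pi * (30 * t + 40 * t ** 2))
h_b = np.sin(2 * np.pi * (30 * t + 41 * t ** 2))
print("overlap:", round(overlap(h_a, h_b), 4))
```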
Wilson, Asa B; Kerr, Bernard J; Bastian, Nathaniel D; Fulton, Lawrence V
2012-01-01
From 1980 to 1999, rural designated hospitals closed at a disproportionally high rate. In response to this emergent threat to healthcare access in rural settings, the Balanced Budget Act of 1997 made provisions for the creation of a new rural hospital--the critical access hospital (CAH). The conversion to CAH and the associated cost-based reimbursement scheme significantly slowed the closure rate of rural hospitals. This work investigates which methods can ensure the long-term viability of small hospitals. This article uses a two-step design to focus on a hypothesized relationship between technical efficiency of CAHs and a recently developed set of financial monitors for these entities. The goal is to identify the financial performance measures associated with efficiency. The first step uses data envelopment analysis (DEA) to differentiate efficient from inefficient facilities within a data set of 183 CAHs. Determining DEA efficiency is an a priori categorization of hospitals in the data set as efficient or inefficient. In the second step, DEA efficiency is the categorical dependent variable (efficient = 0, inefficient = 1) in the subsequent binary logistic regression (LR) model. A set of six financial monitors selected from the array of 20 measures were the LR independent variables. We use a binary LR to test the null hypothesis that recently developed CAH financial indicators had no predictive value for categorizing a CAH as efficient or inefficient, (i.e., there is no relationship between DEA efficiency and fiscal performance).
ERIC Educational Resources Information Center
Osborne, Jason W.
2012-01-01
Logistic regression is slowly gaining acceptance in the social sciences, and fills an important niche in the researcher's toolkit: being able to predict important outcomes that are not continuous in nature. While OLS regression is a valuable tool, it cannot routinely be used to predict outcomes that are binary or categorical in nature. These…
Missing Data in Alcohol Clinical Trials with Binary Outcomes
Hallgren, Kevin A.; Witkiewitz, Katie; Kranzler, Henry R.; Falk, Daniel E.; Litten, Raye Z.; O’Malley, Stephanie S.; Anton, Raymond F.
2017-01-01
Background Missing data are common in alcohol clinical trials for both continuous and binary endpoints. Approaches to handle missing data have been explored for continuous outcomes, yet no studies have compared missing data approaches for binary outcomes (e.g., abstinence, no heavy drinking days). The present study compares approaches to modeling binary outcomes with missing data in the COMBINE study. Method We included participants in the COMBINE Study who had complete drinking data during treatment and who were assigned to active medication or placebo conditions (N=1146). Using simulation methods, missing data were introduced under common scenarios with varying sample sizes and amounts of missing data. Logistic regression was used to estimate the effect of naltrexone (vs. placebo) in predicting any drinking and any heavy drinking outcomes at the end of treatment using four analytic approaches: complete case analysis (CCA), last observation carried forward (LOCF), the worst-case scenario of missing equals any drinking or heavy drinking (WCS), and multiple imputation (MI). In separate analyses, these approaches were compared when drinking data were manually deleted for those participants who discontinued treatment but continued to provide drinking data. Results WCS produced the greatest amount of bias in treatment effect estimates. MI usually yielded less biased estimates than WCS and CCA in the simulated data, and performed considerably better than LOCF when estimating treatment effects among individuals who discontinued treatment. Conclusions Missing data can introduce bias in treatment effect estimates in alcohol clinical trials. Researchers should utilize modern missing data methods, including MI, and avoid WCS and CCA when analyzing binary alcohol clinical trial outcomes. PMID:27254113
Missing Data in Alcohol Clinical Trials with Binary Outcomes.
Hallgren, Kevin A; Witkiewitz, Katie; Kranzler, Henry R; Falk, Daniel E; Litten, Raye Z; O'Malley, Stephanie S; Anton, Raymond F
2016-07-01
Missing data are common in alcohol clinical trials for both continuous and binary end points. Approaches to handle missing data have been explored for continuous outcomes, yet no studies have compared missing data approaches for binary outcomes (e.g., abstinence, no heavy drinking days). This study compares approaches to modeling binary outcomes with missing data in the COMBINE study. We included participants in the COMBINE study who had complete drinking data during treatment and who were assigned to active medication or placebo conditions (N = 1,146). Using simulation methods, missing data were introduced under common scenarios with varying sample sizes and amounts of missing data. Logistic regression was used to estimate the effect of naltrexone (vs. placebo) in predicting any drinking and any heavy drinking outcomes at the end of treatment using 4 analytic approaches: complete case analysis (CCA), last observation carried forward (LOCF), the worst case scenario (WCS) of missing equals any drinking or heavy drinking, and multiple imputation (MI). In separate analyses, these approaches were compared when drinking data were manually deleted for those participants who discontinued treatment but continued to provide drinking data. WCS produced the greatest amount of bias in treatment effect estimates. MI usually yielded less biased estimates than WCS and CCA in the simulated data and performed considerably better than LOCF when estimating treatment effects among individuals who discontinued treatment. Missing data can introduce bias in treatment effect estimates in alcohol clinical trials. Researchers should utilize modern missing data methods, including MI, and avoid WCS and CCA when analyzing binary alcohol clinical trial outcomes. Copyright © 2016 by the Research Society on Alcoholism.
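The contrast between complete case analysis and multiple imputation for a binary endpoint can be sketched as follows. This is a toy simulation, not the COMBINE analysis; scikit-learn's IterativeImputer with posterior sampling stands in for a full MICE implementation, the imputed binary outcome is simply rounded back to 0/1, and all variable names and effect sizes are placeholders.

```python
# Minimal sketch contrasting complete-case analysis with a simple multiple-
# imputation analysis for a binary endpoint, pooling estimates with Rubin's rules.
import numpy as np
import statsmodels.api as sm
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer

rng = np.random.default_rng(1)
n = 1146
treat = rng.binomial(1, 0.5, n)                     # naltrexone vs placebo (toy)
drinks_wk = rng.normal(10 - 2 * treat, 4, n)        # drinking during treatment
heavy = rng.binomial(1, 1 / (1 + np.exp(-(0.2 * drinks_wk - 1.5)))).astype(float)
heavy[rng.random(n) < 0.25] = np.nan                # 25% missing end-of-treatment outcome

X_full = np.column_stack([treat, drinks_wk, heavy])

# Complete-case analysis
cc = X_full[~np.isnan(heavy)]
cca = sm.Logit(cc[:, 2], sm.add_constant(cc[:, :2])).fit(disp=False)

# Multiple imputation with m imputed data sets
m, betas, variances = 20, [], []
for i in range(m):
    imp = IterativeImputer(sample_posterior=True, random_state=i)
    Xi = imp.fit_transform(X_full)
    yi = (Xi[:, 2] > 0.5).astype(int)               # round imputed outcome back to 0/1
    fit = sm.Logit(yi, sm.add_constant(Xi[:, :2])).fit(disp=False)
    betas.append(fit.params[1])
    variances.append(fit.bse[1] ** 2)

b = np.mean(betas)
within, between = np.mean(variances), np.var(betas, ddof=1)
se = np.sqrt(within + (1 + 1 / m) * between)        # Rubin's rules
print(f"CCA log-OR {cca.params[1]:.3f}, MI log-OR {b:.3f} (SE {se:.3f})")
```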
Is parenting style a predictor of suicide attempts in a representative sample of adolescents?
2014-01-01
Background: Suicidal ideation and suicide attempts are serious but not rare conditions in adolescents. However, there are several research and practical suicide-prevention initiatives that discuss the possibility of preventing serious self-harm. Profound knowledge about risk and protective factors is therefore necessary. The aim of this study is a) to clarify the role of parenting behavior and parenting styles in adolescents' suicide attempts and b) to identify other statistically significant and clinically relevant risk and protective factors for suicide attempts in a representative sample of German adolescents. Methods: In the years 2007/2008, a representative written survey of N = 44,610 students in the 9th grade of different school types in Germany was conducted. In this survey, the lifetime prevalence of suicide attempts was investigated as well as potential predictors including parenting behavior. A three-step statistical analysis was carried out: I) As the basic model, the association between parenting and suicide attempts was explored via binary logistic regression controlled for age and sex. II) The predictive values of 13 additional potential risk/protective factors were analyzed with single binary logistic regression analyses for each predictor alone. Non-significant predictors were excluded in Step III. III) In a multivariate binary logistic regression analysis, all significant predictor variables from Step II and the parenting styles were included after testing for multicollinearity. Results: Three parental variables, all protective, showed a relevant association with suicide attempts in adolescents: mother's warmth and father's warmth in childhood and mother's control in adolescence (Step I). In the full model (Step III), Authoritative parenting (protective: OR = 0.79) and Rejecting-Neglecting parenting (risk: OR = 1.63) were identified as significant predictors (p < .001) of suicide attempts. Seven further variables were interpreted to be statistically significant and clinically relevant: ADHD, female sex, smoking, binge drinking, absenteeism/truancy, migration background, and parental separation events. Conclusions: Parenting style does matter. While children of Authoritative parents profit, children of Rejecting-Neglecting parents are put at risk, as we were able to show for suicide attempts in adolescence. Some of the identified risk factors contribute new knowledge and potential areas of intervention for special groups such as migrants or children diagnosed with ADHD. PMID:24766881
Decision tree modeling using R.
Zhang, Zhongheng
2016-08-01
In the machine learning field, the decision tree learner is powerful and easy to interpret. It employs a recursive binary partitioning algorithm that splits the sample on the partitioning variable with the strongest association with the response variable. The process continues until some stopping criteria are met. In the example, I focus on the conditional inference tree, which incorporates tree-structured regression models into conditional inference procedures. Because a single tree is sensitive to small changes in the training data, the random forests procedure is introduced to address this problem. The sources of diversity for random forests are random sampling and the restricted set of input variables available at each split. Finally, I introduce R functions to perform model-based recursive partitioning. This method incorporates recursive partitioning into conventional parametric model building.
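The workflow above uses R's conditional inference trees and random forests (party/partykit). A rough Python parallel with scikit-learn's CART-style trees, shown below, illustrates the same two ideas, a single interpretable tree and an ensemble that stabilizes it; it is not the conditional inference procedure itself, and the data are synthetic.

```python
# Grow a single tree and a random forest on a toy binary outcome and compare.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = make_classification(n_samples=500, n_features=6, n_informative=3, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_tr, y_tr)
print(export_text(tree))                      # human-readable splits
print("single tree accuracy:", tree.score(X_te, y_te))

forest = RandomForestClassifier(n_estimators=500, max_features="sqrt",
                                random_state=0).fit(X_tr, y_tr)
print("random forest accuracy:", forest.score(X_te, y_te))
print("feature importances:", np.round(forest.feature_importances_, 3))
```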
The use of auxiliary variables in capture-recapture and removal experiments
Pollock, K.H.; Hines, J.E.; Nichols, J.D.
1984-01-01
The dependence of animal capture probabilities on auxiliary variables is an important practical problem which has not been considered in the development of estimation procedures for capture-recapture and removal experiments. In this paper the linear logistic binary regression model is used to relate the probability of capture to continuous auxiliary variables. The auxiliary variables could be environmental quantities such as air or water temperature, or characteristics of individual animals, such as body length or weight. Maximum likelihood estimators of the population parameters are considered for a variety of models which all assume a closed population. Testing between models is also considered. The models can also be used when one auxiliary variable is a measure of the effort expended in obtaining the sample.
America's Democracy Colleges: The Civic Engagement of Community College Students
ERIC Educational Resources Information Center
Angeli Newell, Mallory
2014-01-01
This study explored the civic engagement of current two- and four-year students to examine whether differences exist between the groups and what may explain the differences. Using binary logistic regression and ordinary least squares regression, it was found that community-based engagement was lower for two- than four-year students, though…
Jebamalar, Angelin A; Prabhat; Balakrishnapillai, Agiesh K; Parmeswaran, Narayanan; Dhiman, Pooja; Rajendiran, Soundravally
2016-07-01
To evaluate the diagnostic role of cerebrospinal fluid (CSF) ferritin and the albumin index (AI = CSF albumin/serum albumin × 1000) in differentiating acute bacterial meningitis (ABM) from acute viral meningitis (AVM) in children. The study included 42 cases each of ABM and AVM in the pediatric age group. Receiver operating characteristic (ROC) analysis was carried out for CSF ferritin and AI, and binary logistic regression was also performed. CSF ferritin and AI were found to be significantly higher in ABM than in AVM. The model obtained using AI and CSF ferritin along with conventional criteria performed better than existing models.
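A minimal sketch of the kind of ROC analysis described above, run on simulated ferritin values rather than the study's data; the cut-off is chosen by Youden's J purely for illustration.

```python
# Illustrative ROC analysis for a single CSF marker (toy data, not the study's).
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

rng = np.random.default_rng(2)
bacterial = np.concatenate([np.ones(42), np.zeros(42)])      # 42 ABM, 42 AVM
ferritin = np.concatenate([rng.lognormal(3.5, 0.6, 42),      # higher in ABM
                           rng.lognormal(2.5, 0.6, 42)])

fpr, tpr, thresholds = roc_curve(bacterial, ferritin)
j = tpr - fpr                                                # Youden's J
best = np.argmax(j)
print(f"AUC = {roc_auc_score(bacterial, ferritin):.2f}, "
      f"cut-off ~ {thresholds[best]:.1f} "
      f"(sens {tpr[best]:.2f}, spec {1 - fpr[best]:.2f})")
```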
Schmid, Matthias; Küchenhoff, Helmut; Hoerauf, Achim; Tutz, Gerhard
2016-02-28
Survival trees are a popular alternative to parametric survival modeling when there are interactions between the predictor variables or when the aim is to stratify patients into prognostic subgroups. A limitation of classical survival tree methodology is that most algorithms for tree construction are designed for continuous outcome variables. Hence, classical methods might not be appropriate if failure time data are measured on a discrete time scale (as is often the case in longitudinal studies where data are collected, e.g., quarterly or yearly). To address this issue, we develop a method for discrete survival tree construction. The proposed technique is based on the result that the likelihood of a discrete survival model is equivalent to the likelihood of a regression model for binary outcome data. Hence, we modify tree construction methods for binary outcomes such that they result in optimized partitions for the estimation of discrete hazard functions. By applying the proposed method to data from a randomized trial in patients with filarial lymphedema, we demonstrate how discrete survival trees can be used to identify clinically relevant patient groups with similar survival behavior. Copyright © 2015 John Wiley & Sons, Ltd.
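The equivalence the authors build on, that the likelihood of a discrete survival model equals the likelihood of a binary regression on person-period data, can be illustrated with a small simulation: expand each subject into one row per discrete interval at risk and fit any binary-outcome learner to the 0/1 event indicator. The sketch below uses a plain logistic fit on simulated quarterly follow-up; a tree learner could be dropped in the same way. All names and effect sizes are placeholders.

```python
# Person-period expansion: discrete hazard estimated by binary regression.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(3)
n, max_t = 200, 8                               # 8 quarterly intervals
x = rng.normal(size=n)                          # one covariate
haz = 1 / (1 + np.exp(-(-2.0 + 0.8 * x)))       # per-interval hazard

rows = []
for i in range(n):
    for t in range(1, max_t + 1):
        event = rng.random() < haz[i]
        rows.append({"id": i, "t": t, "x": x[i], "event": int(event)})
        if event:
            break                               # subject leaves the risk set
pp = pd.DataFrame(rows)                         # person-period data

X = sm.add_constant(pp[["t", "x"]])             # crude linear effect of interval
fit = sm.Logit(pp["event"], X).fit(disp=False)
print(fit.params)                               # discrete-hazard (logit) coefficients
```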
Sufficient Dimension Reduction for Longitudinally Measured Predictors
Pfeiffer, Ruth M.; Forzani, Liliana; Bura, Efstathia
2013-01-01
We propose a method to combine several predictors (markers) that are measured repeatedly over time into a composite marker score without assuming a model and only requiring a mild condition on the predictor distribution. Assuming that the first and second moments of the predictors can be decomposed into a time and a marker component via a Kronecker product structure that accommodates the longitudinal nature of the predictors, we develop first moment sufficient dimension reduction techniques to replace the original markers with linear transformations that contain sufficient information for the regression of the predictors on the outcome. These linear combinations can then be combined into a score that has better predictive performance than the score built under a general model that ignores the longitudinal structure of the data. Our methods can be applied to either continuous or categorical outcome measures. In simulations we focus on binary outcomes and show that our method outperforms existing alternatives using the AUC, the area under the receiver operating characteristic (ROC) curve, as a summary measure of the discriminatory ability of a single continuous diagnostic marker for binary disease outcomes. PMID:22161635
Filius, Anika; Scheltens, Marjan; Bosch, Hans G.; van Doorn, Pieter A.; Stam, Henk J.; Hovius, Steven E.R.; Amadio, Peter C.; Selles, Ruud W.
2015-01-01
Dynamics of structures within the carpal tunnel may alter in carpal tunnel syndrome (CTS) due to fibrotic changes and increased carpal tunnel pressure. Ultrasound can visualize these potential changes, making it a potentially accurate diagnostic tool. To study this, we imaged the carpal tunnel of 113 patients and 42 controls. CTS severity was classified according to validated clinical and nerve conduction study (NCS) classifications. Transversal and longitudinal displacement and shape changes were calculated for the median nerve, tendons and surrounding tissue. To assess diagnostic value, binary logistic regression modeling was applied. Reduced longitudinal nerve displacement (p≤0.019), increased nerve cross-sectional area (p≤0.006) and perimeter (p≤0.007), and a trend of relatively changed tendon displacements were seen in patients. Changes were more convincing when CTS was classified as more severe. Binary logistic modeling to diagnose CTS using ultrasound showed a sensitivity of 70-71% and specificity of 80-84%. In conclusion, CTS patients have altered dynamics of structures within the carpal tunnel. PMID:25865180
Selenium in irrigated agricultural areas of the western United States
Nolan, B.T.; Clark, M.L.
1997-01-01
A logistic regression model was developed to predict the likelihood that Se exceeds the USEPA chronic criterion for aquatic life (5 µg/L) in irrigated agricultural areas of the western USA. Preliminary analysis of explanatory variables used in the model indicated that surface-water Se concentration increased with increasing dissolved solids (DS) concentration and with the presence of Upper Cretaceous, mainly marine sediment. The presence or absence of Cretaceous sediment was the major variable affecting Se concentration in surface-water samples from the National Irrigation Water Quality Program. Median Se concentration was 14 µg/L in samples from areas underlain by Cretaceous sediments and < 1 µg/L in samples from areas underlain by non-Cretaceous sediments. Wilcoxon rank sum tests indicated that elevated Se concentrations in samples from areas with Cretaceous sediments, irrigated areas, and from closed lakes and ponds were statistically significant. Spearman correlations indicated that Se was positively correlated with a binary geology variable (0.64) and DS (0.45). Logistic regression models indicated that the concentration of Se in surface water was almost certain to exceed the Environmental Protection Agency aquatic-life chronic criterion of 5 µg/L when DS was greater than 3000 mg/L in areas with Cretaceous sediments. The 'best' logistic regression model correctly predicted Se exceedances and nonexceedances 84.4% of the time, and model sensitivity was 80.7%. A regional map of Cretaceous sediment showed the location of potential problem areas. The map and logistic regression model are tools that can be used to determine the potential for Se contamination of irrigated agricultural areas in the western USA.
Computational intelligence models to predict porosity of tablets using minimum features
Khalid, Mohammad Hassan; Kazemi, Pezhman; Perez-Gandarillas, Lucia; Michrafy, Abderrahim; Szlęk, Jakub; Jachowicz, Renata; Mendyk, Aleksander
2017-01-01
The effects of different formulations and manufacturing process conditions on the physical properties of a solid dosage form are of importance to the pharmaceutical industry. It is vital to have in-depth understanding of the material properties and governing parameters of its processes in response to different formulations. Understanding the mentioned aspects will allow tighter control of the process, leading to implementation of quality-by-design (QbD) practices. Computational intelligence (CI) offers an opportunity to create empirical models that can be used to describe the system and predict future outcomes in silico. CI models can help explore the behavior of input parameters, unlocking deeper understanding of the system. This research endeavor presents CI models to predict the porosity of tablets created by roll-compacted binary mixtures, which were milled and compacted under systematically varying conditions. CI models were created using tree-based methods, artificial neural networks (ANNs), and symbolic regression trained on an experimental data set and screened using root-mean-square error (RMSE) scores. The experimental data were composed of proportion of microcrystalline cellulose (MCC) (in percentage), granule size fraction (in micrometers), and die compaction force (in kilonewtons) as inputs and porosity as an output. The resulting models show impressive generalization ability, with ANNs (normalized root-mean-square error [NRMSE] = 1%) and symbolic regression (NRMSE = 4%) as the best-performing methods, also exhibiting reliable predictive behavior when presented with a challenging external validation data set (best achieved symbolic regression: NRMSE = 3%). Symbolic regression demonstrates the transition from the black box modeling paradigm to more transparent predictive models. Predictive performance and feature selection behavior of CI models hint at the most important variables within this factor space. PMID:28138223
Computational intelligence models to predict porosity of tablets using minimum features.
Khalid, Mohammad Hassan; Kazemi, Pezhman; Perez-Gandarillas, Lucia; Michrafy, Abderrahim; Szlęk, Jakub; Jachowicz, Renata; Mendyk, Aleksander
2017-01-01
The effects of different formulations and manufacturing process conditions on the physical properties of a solid dosage form are of importance to the pharmaceutical industry. It is vital to have in-depth understanding of the material properties and governing parameters of its processes in response to different formulations. Understanding the mentioned aspects will allow tighter control of the process, leading to implementation of quality-by-design (QbD) practices. Computational intelligence (CI) offers an opportunity to create empirical models that can be used to describe the system and predict future outcomes in silico. CI models can help explore the behavior of input parameters, unlocking deeper understanding of the system. This research endeavor presents CI models to predict the porosity of tablets created by roll-compacted binary mixtures, which were milled and compacted under systematically varying conditions. CI models were created using tree-based methods, artificial neural networks (ANNs), and symbolic regression trained on an experimental data set and screened using root-mean-square error (RMSE) scores. The experimental data were composed of proportion of microcrystalline cellulose (MCC) (in percentage), granule size fraction (in micrometers), and die compaction force (in kilonewtons) as inputs and porosity as an output. The resulting models show impressive generalization ability, with ANNs (normalized root-mean-square error [NRMSE] = 1%) and symbolic regression (NRMSE = 4%) as the best-performing methods, also exhibiting reliable predictive behavior when presented with a challenging external validation data set (best achieved symbolic regression: NRMSE = 3%). Symbolic regression demonstrates the transition from the black box modeling paradigm to more transparent predictive models. Predictive performance and feature selection behavior of CI models hint at the most important variables within this factor space.
The extension of total gain (TG) statistic in survival models: properties and applications.
Choodari-Oskooei, Babak; Royston, Patrick; Parmar, Mahesh K B
2015-07-01
The results of multivariable regression models are usually summarized in the form of parameter estimates for the covariates, goodness-of-fit statistics, and the relevant p-values. These statistics do not inform us about whether covariate information will lead to any substantial improvement in prediction. Predictive ability measures can be used for this purpose since they provide important information about the practical significance of prognostic factors. R²-type indices are the most familiar forms of such measures in survival models, but they all have limitations and none is widely used. In this paper, we extend the total gain (TG) measure, proposed for a logistic regression model, to survival models and explore its properties using simulations and real data. TG is based on the binary regression quantile plot, otherwise known as the predictiveness curve. Standardised TG ranges from 0 (no explanatory power) to 1 ('perfect' explanatory power). The results of our simulations show that unlike many of the other R²-type predictive ability measures, TG is independent of random censoring. It increases as the effect of a covariate increases and can be applied to different types of survival models, including models with time-dependent covariate effects. We also apply TG to quantify the predictive ability of multivariable prognostic models developed in several disease areas. Overall, TG performs well in our simulation studies and can be recommended as a measure to quantify the predictive ability in survival models.
Kernel analysis of partial least squares (PLS) regression models.
Shinzawa, Hideyuki; Ritthiruangdej, Pitiporn; Ozaki, Yukihiro
2011-05-01
An analytical technique based on kernel matrix representation is demonstrated to provide further chemically meaningful insight into partial least squares (PLS) regression models. The kernel matrix condenses essential information about scores derived from PLS or principal component analysis (PCA). Thus, it becomes possible to establish the proper interpretation of the scores. A PLS model for the total nitrogen (TN) content in multiple Thai fish sauces is built with a set of near-infrared (NIR) transmittance spectra of the fish sauce samples. The kernel analysis of the scores effectively reveals that the variation of the spectral feature induced by the change in protein content is substantially associated with the total water content and the protein hydration. Kernel analysis is also carried out on a set of time-dependent infrared (IR) spectra representing transient evaporation of ethanol from a binary mixture solution of ethanol and oleic acid. A PLS model to predict the elapsed time is built with the IR spectra and the kernel matrix is derived from the scores. The detailed analysis of the kernel matrix provides penetrating insight into the interaction between the ethanol and the oleic acid.
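The abstract does not spell out the exact kernel construction used, so the sketch below is only one plausible reading: fit a PLS model to synthetic spectra and form the Gram ("kernel") matrix of its X-scores. The spectra, concentrations, and number of components are all invented for illustration.

```python
# Fit a PLS model to toy spectra and form the Gram matrix of its latent scores.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(4)
n_samples, n_wavelengths = 40, 200
concentration = rng.uniform(0, 1, n_samples)                   # e.g. total nitrogen (toy)
pure = np.sin(np.linspace(0, 6, n_wavelengths))                # toy pure-component spectrum
spectra = np.outer(concentration, pure) + rng.normal(0, 0.02, (n_samples, n_wavelengths))

pls = PLSRegression(n_components=3).fit(spectra, concentration)
T = pls.transform(spectra)                                     # latent-variable (X) scores
K = T @ T.T                                                    # score Gram / kernel matrix
print("kernel matrix shape:", K.shape)
print("R^2 of the PLS fit:", round(pls.score(spectra, concentration), 3))
```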
Logistic Mixed Models to Investigate Implicit and Explicit Belief Tracking
Lages, Martin; Scheel, Anne
2016-01-01
We investigated the proposition of a two-systems Theory of Mind in adults' belief tracking. A sample of N = 45 participants predicted the choice of one of two opponent players after observing several rounds in an animated card game. Three matches of this card game were played, and initial gaze direction on target and subsequent choice predictions were recorded for each belief task and participant. We conducted logistic regressions with mixed effects on the binary data and developed Bayesian logistic mixed models to infer implicit and explicit mentalizing in true belief and false belief tasks. Although logistic regressions with mixed effects predicted the data well, a Bayesian logistic mixed model with latent task- and subject-specific parameters gave a better account of the data. As expected, explicit choice predictions suggested a clear understanding of true and false beliefs (TB/FB). Surprisingly, however, model parameters for initial gaze direction also indicated belief tracking. We discuss why task-specific parameters for initial gaze directions are different from choice predictions yet reflect second-order perspective taking. PMID:27853440
Predicting outcome in severe traumatic brain injury using a simple prognostic model.
Sobuwa, Simpiwe; Hartzenberg, Henry Benjamin; Geduld, Heike; Uys, Corrie
2014-06-17
Several studies have made it possible to predict outcome in severe traumatic brain injury (TBI), making such predictions a useful aid for clinical decision-making in the emergency setting. However, reliable predictive models are lacking for resource-limited prehospital settings such as those in developing countries like South Africa. To develop a simple predictive model for severe TBI using clinical variables in a South African prehospital setting. All consecutive patients admitted at two level-one centres in Cape Town, South Africa, for severe TBI were included. A binary logistic regression model was used, which included three predictor variables: oxygen saturation (SpO₂), Glasgow Coma Scale (GCS) and pupil reactivity. The Glasgow Outcome Scale was used to assess outcome on hospital discharge. A total of 74.4% of the outcomes were correctly predicted by the logistic regression model. The model demonstrated SpO₂ (p=0.019), GCS (p=0.001) and pupil reactivity (p=0.002) as independently significant predictors of outcome in severe TBI. Odds ratios of a good outcome were 3.148 (SpO₂ ≥ 90%), 5.108 (GCS 6-8) and 4.405 (pupils bilaterally reactive). This model is potentially useful for effective predictions of outcome in severe TBI.
Dahlstrom, Kristina R; Anderson, Karen S; Field, Matthew S; Chowell, Diego; Ning, Jing; Li, Nan; Wei, Qingyi; Li, Guojun; Sturgis, Erich M
2017-12-15
Because of the current epidemic of human papillomavirus (HPV)-related oropharyngeal cancer (OPC), a screening strategy is urgently needed. The presence of serum antibodies to HPV-16 early (E) antigens is associated with an increased risk for OPC. The purpose of this study was to evaluate the diagnostic accuracy of antibodies to a panel of HPV-16 E antigens in screening for OPC. This case-control study included 378 patients with OPC, 153 patients with nonoropharyngeal head and neck cancer (non-OPC), and 782 healthy control subjects. The tumor HPV status was determined with p16 immunohistochemistry and HPV in situ hybridization. HPV-16 E antibody levels in serum were identified with an enzyme-linked immunosorbent assay. A trained binary logistic regression model based on the combination of all E antigens was predefined and applied to the data set. The sensitivity and specificity of the assay for distinguishing HPV-related OPC from controls were calculated. Logistic regression analysis was used to calculate odds ratios with 95% confidence intervals for the association of head and neck cancer with the antibody status. Of the 378 patients with OPC, 348 had p16-positive OPC. HPV-16 E antibody levels were significantly higher among patients with p16-positive OPC but not among patients with non-OPC or among controls. Serology showed high sensitivity and specificity for HPV-related OPC (binary classifier: 83% sensitivity and 99% specificity for p16-positive OPC). A trained binary classification algorithm that incorporates information about multiple E antibodies has high sensitivity and specificity and may be advantageous for risk stratification in future screening trials. Cancer 2017;123:4886-94. © 2017 American Cancer Society.
Cascaded face alignment via intimacy definition feature
NASA Astrophysics Data System (ADS)
Li, Hailiang; Lam, Kin-Man; Chiu, Man-Yau; Wu, Kangheng; Lei, Zhibin
2017-09-01
Recent years have witnessed the emerging popularity of regression-based face aligners, which directly learn mappings between facial appearance and shape-increment manifolds. We propose a random-forest based, cascaded regression model for face alignment using a locally lightweight feature, namely the intimacy definition feature. This feature is more discriminative than the pose-indexed feature, more efficient than the histogram of oriented gradients and scale-invariant feature transform features, and more compact than the local binary feature (LBF). Experimental validation of our algorithm shows that our approach achieves state-of-the-art performance when tested on some challenging datasets. Compared with the LBF-based algorithm, our method achieves about twice the speed and a 20% improvement in alignment accuracy, and reduces the memory requirement by an order of magnitude.
2011-01-01
Background: Logistic random effects models are a popular tool to analyze multilevel (also called hierarchical) data with a binary or ordinal outcome. Here, we aim to compare different statistical software implementations of these models. Methods: We used individual patient data from 8509 patients in 231 centers with moderate and severe Traumatic Brain Injury (TBI) enrolled in eight Randomized Controlled Trials (RCTs) and three observational studies. We fitted logistic random effects regression models with the 5-point Glasgow Outcome Scale (GOS) as outcome, both dichotomized and ordinal, with center and/or trial as random effects, and with age, motor score, pupil reactivity or trial as covariates. We then compared the implementations of frequentist and Bayesian methods to estimate the fixed and random effects. Frequentist approaches included R (lme4), Stata (GLLAMM), SAS (GLIMMIX and NLMIXED), MLwiN ([R]IGLS) and MIXOR; Bayesian approaches included WinBUGS, MLwiN (MCMC), the R package MCMCglmm and the SAS experimental procedure MCMC. Three data sets (the full data set and two sub-datasets) were analysed using essentially two logistic random effects models, with either one random effect for the center or two random effects for center and trial. For the ordinal outcome in the full data set, a proportional odds model with a random center effect was also fitted. Results: The packages gave similar parameter estimates for both the fixed and random effects and for the binary (and ordinal) models for the main study, and when based on a relatively large number of level-1 (patient level) data compared to the number of level-2 (hospital level) data. However, when based on a relatively sparse data set, i.e., when the numbers of level-1 and level-2 data units were about the same, the frequentist and Bayesian approaches showed somewhat different results. The software implementations differ considerably in flexibility, computation time, and usability. There are also differences in the availability of additional tools for model evaluation, such as diagnostic plots. The experimental SAS (version 9.2) procedure MCMC appeared to be inefficient. Conclusions: On relatively large data sets, the different software implementations of logistic random effects regression models produced similar results. Thus, for a large data set there seems to be no explicit preference for either a frequentist or a Bayesian approach (if based on vague priors and if there is no preference on philosophical grounds). The choice of a particular implementation may largely depend on the desired flexibility and the usability of the package. For small data sets the random effects variances are difficult to estimate. In the frequentist approaches the MLE of this variance was often estimated as zero, with a standard error that was either zero or could not be determined, while for Bayesian methods the estimates could depend on the chosen "non-informative" prior for the variance parameter. The starting value for the variance parameter may also be critical for the convergence of the Markov chain. PMID:21605357
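None of the packages compared in the study is shown here; as a minimal Python-side illustration of the model class itself (a logistic random-intercept model with centre as the random effect, fitted by variational Bayes), the following sketch uses statsmodels on simulated data. The covariate names and effect sizes are hypothetical.

```python
# Logistic random-intercept model (dichotomized outcome, centre random effect),
# fitted by variational Bayes with statsmodels; purely a toy illustration.
import numpy as np
import pandas as pd
from statsmodels.genmod.bayes_mixed_glm import BinomialBayesMixedGLM

rng = np.random.default_rng(5)
n_centres, per_centre = 30, 50
centre = np.repeat(np.arange(n_centres), per_centre)
u = rng.normal(0, 0.7, n_centres)                    # random centre effects
age = rng.normal(35, 12, centre.size)
motor = rng.integers(1, 7, centre.size)
lin = -0.5 + 0.02 * (age - 35) - 0.3 * (motor - 3) + u[centre]
y = rng.binomial(1, 1 / (1 + np.exp(-lin)))

df = pd.DataFrame({"y": y, "age": age, "motor": motor, "centre": centre})
model = BinomialBayesMixedGLM.from_formula(
    "y ~ age + motor", {"centre": "0 + C(centre)"}, df)
result = model.fit_vb()                              # variational Bayes fit
print(result.summary())
```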
Pedersen, Nicklas Juel; Jensen, David Hebbelstrup; Lelkaitis, Giedrius; Kiss, Katalin; Charabi, Birgitte; Specht, Lena; von Buchwald, Christian
2017-01-01
It is challenging to identify at diagnosis those patients with early oral squamous cell carcinoma (OSCC) who have a poor prognosis and those who have a high risk of harboring occult lymph node metastases. The aim of this study was to develop a standardized and objective digital scoring method to evaluate the predictive value of tumor budding. We developed a semi-automated image-analysis algorithm, Digital Tumor Bud Count (DTBC), to evaluate tumor budding. The algorithm was tested in 222 consecutive patients with early-stage OSCC, and the major endpoints were overall survival (OS) and progression-free survival (PFS). We subsequently constructed and cross-validated a binary logistic regression model and evaluated its clinical utility by decision curve analysis. A high DTBC was an independent predictor of both poor OS and PFS in a multivariate Cox regression model. The logistic regression model was able to identify patients with occult lymph node metastases with an area under the curve (AUC) of 0.83 (95% CI: 0.78–0.89, P < 0.001) and a 10-fold cross-validated AUC of 0.79. Compared to other known histopathological risk factors, the DTBC had a higher diagnostic accuracy. The proposed, novel risk model could be used as a guide to identify patients who would benefit from an up-front neck dissection. PMID:28212555
Formation enthalpies for transition metal alloys using machine learning
NASA Astrophysics Data System (ADS)
Ubaru, Shashanka; Miedlar, Agnieszka; Saad, Yousef; Chelikowsky, James R.
2017-06-01
The enthalpy of formation is an important thermodynamic property. Developing fast and accurate methods for its prediction is of practical interest in a variety of applications. Material informatics techniques based on machine learning have recently been introduced in the literature as an inexpensive means of exploiting materials data, and can be used to examine a variety of thermodynamics properties. We investigate the use of such machine learning tools for predicting the formation enthalpies of binary intermetallic compounds that contain at least one transition metal. We consider certain easily available properties of the constituting elements complemented by some basic properties of the compounds, to predict the formation enthalpies. We show how choosing these properties (input features) based on a literature study (using prior physics knowledge) seems to outperform machine learning based feature selection methods such as sensitivity analysis and LASSO (least absolute shrinkage and selection operator) based methods. A nonlinear kernel based support vector regression method is employed to perform the predictions. The predictive ability of our model is illustrated via several experiments on a dataset containing 648 binary alloys. We train and validate the model using the formation enthalpies calculated using a model by Miedema, which is a popular semiempirical model used for the prediction of formation enthalpies of metal alloys.
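A hedged sketch of the modeling approach described above, kernel support vector regression on a few elemental descriptors, using placeholder features and a synthetic target rather than the article's descriptor set or Miedema-model enthalpies.

```python
# Kernel (RBF) support vector regression for a toy formation-enthalpy problem.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(6)
n_alloys = 648                                   # data-set size quoted in the abstract
X = np.column_stack([
    rng.uniform(1.2, 2.5, n_alloys),             # electronegativity difference (placeholder)
    rng.uniform(20, 90, n_alloys),               # mean atomic number (placeholder)
    rng.uniform(0.05, 0.35, n_alloys),           # atomic-size mismatch (placeholder)
])
enthalpy = -30 * X[:, 0] + 0.1 * X[:, 1] + 50 * X[:, 2] ** 2 + rng.normal(0, 3, n_alloys)

model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.5))
scores = cross_val_score(model, X, enthalpy, cv=5, scoring="neg_mean_absolute_error")
print("cross-validated MAE (toy data):", round(-scores.mean(), 2))
```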
The effect of migration on social capital and depression among older adults in China.
Li, Qiuju; Zhou, Xudong; Ma, Sha; Jiang, Minmin; Li, Lu
2017-12-01
An estimated 9 million elderly people accompanied their adult children to urban areas in China, raising concerns about their social capital and mental health following re-location. The aim of this study was to examine the effect of migration on social capital and depression among this population. Multistage stratified cluster sampling was applied to recruit the migrant and urban elderly in Hangzhou from May to August, 2013. Data were collected from face-to-face interviews by trained college students using a standardized questionnaire. Social capital measurements included cognitive (generalized trust and reciprocity) and structural (support from individuals and social contact) aspects. Depression was measured by the Geriatric Depression Scale-30 (GDS-30). Chi-square tests and binary logistic regression models were used for analysis. A total of 1248 migrant elderly and 1322 urban elderly were eligible for analysis. After adjusting for a range of confounding factors, binary logistic regression models revealed that migrant elderly reported significantly lower levels of generalized trust [OR = 1.34, 95% CI (1.10-1.64)], reciprocity [OR = 1.55, 95% CI (1.29-1.87)], support from individuals [OR = 1.96, 95% CI (1.61-2.38)] and social contact [OR = 3.27, 95% CI (2.70-3.97)]. In the fully adjusted model, migrant elderly were more likely to be mentally unhealthy [OR = 1.85, 95% CI (1.44-2.36)] compared with urban elderly. Migrant elderly suffered from lower mental health status and social capital than their urban counterparts in the host city. Attention should focus on improving the social capital and mental health of this growing population.
Wu, Zheyang; Zhao, Hongyu
2012-01-01
For more fruitful discoveries of genetic variants associated with diseases in genome-wide association studies, it is important to know whether joint analysis of multiple markers is more powerful than the commonly used single-marker analysis, especially in the presence of gene-gene interactions. This article provides a statistical framework to rigorously address this question through analytical power calculations for common model search strategies to detect binary trait loci: marginal search, exhaustive search, forward search, and two-stage screening search. Our approach incorporates linkage disequilibrium, random genotypes, and correlations among score test statistics of logistic regressions. We derive analytical results under two power definitions: the power of finding all the associated markers and the power of finding at least one associated marker. We also consider two types of error controls: the discovery number control and the Bonferroni type I error rate control. After demonstrating the accuracy of our analytical results by simulations, we apply them to consider a broad genetic model space to investigate the relative performances of different model search strategies. Our analytical study provides rapid computation as well as insights into the statistical mechanism of capturing genetic signals under different genetic models including gene-gene interactions. Even though we focus on genetic association analysis, our results on the power of model selection procedures are clearly very general and applicable to other studies.
Wu, Zheyang; Zhao, Hongyu
2013-01-01
For more fruitful discoveries of genetic variants associated with diseases in genome-wide association studies, it is important to know whether joint analysis of multiple markers is more powerful than the commonly used single-marker analysis, especially in the presence of gene-gene interactions. This article provides a statistical framework to rigorously address this question through analytical power calculations for common model search strategies to detect binary trait loci: marginal search, exhaustive search, forward search, and two-stage screening search. Our approach incorporates linkage disequilibrium, random genotypes, and correlations among score test statistics of logistic regressions. We derive analytical results under two power definitions: the power of finding all the associated markers and the power of finding at least one associated marker. We also consider two types of error controls: the discovery number control and the Bonferroni type I error rate control. After demonstrating the accuracy of our analytical results by simulations, we apply them to consider a broad genetic model space to investigate the relative performances of different model search strategies. Our analytical study provides rapid computation as well as insights into the statistical mechanism of capturing genetic signals under different genetic models including gene-gene interactions. Even though we focus on genetic association analysis, our results on the power of model selection procedures are clearly very general and applicable to other studies. PMID:23956610
Marginal and Random Intercepts Models for Longitudinal Binary Data with Examples from Criminology
ERIC Educational Resources Information Center
Long, Jeffrey D.; Loeber, Rolf; Farrington, David P.
2009-01-01
Two models for the analysis of longitudinal binary data are discussed: the marginal model and the random intercepts model. In contrast to the linear mixed model (LMM), the two models for binary data are not subsumed under a single hierarchical model. The marginal model provides group-level information whereas the random intercepts model provides…
Spertus, Jacob V; Normand, Sharon-Lise T
2018-04-23
High-dimensional data provide many potential confounders that may bolster the plausibility of the ignorability assumption in causal inference problems. Propensity score methods are powerful causal inference tools, which are popular in health care research and are particularly useful for high-dimensional data. Recent interest has surrounded a Bayesian treatment of propensity scores in order to flexibly model the treatment assignment mechanism and summarize posterior quantities while incorporating variance from the treatment model. We discuss methods for Bayesian propensity score analysis of binary treatments, focusing on modern methods for high-dimensional Bayesian regression and the propagation of uncertainty. We introduce a novel and simple estimator for the average treatment effect that capitalizes on conjugacy of the beta and binomial distributions. Through simulations, we show the utility of horseshoe priors and Bayesian additive regression trees paired with our new estimator, while demonstrating the importance of including variance from the treatment regression model. An application to cardiac stent data with almost 500 confounders and 9000 patients illustrates approaches and facilitates comparison with existing alternatives. As measured by a falsifiability endpoint, we improved confounder adjustment compared with past observational research of the same problem. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
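What follows is not the authors' estimator; it is only a toy illustration of how beta-binomial conjugacy can be used once units are grouped by an estimated propensity score: within each stratum and treatment arm the outcome probability is drawn from its Beta posterior, and stratum-weighted differences are averaged. The data-generating model, priors, and stratum definitions are all assumptions made for the sketch.

```python
# Toy Bayesian stratified estimate of a binary-treatment effect via
# propensity-score quintiles and Beta-binomial posterior draws.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(7)
n, p = 5000, 20
X = rng.normal(size=(n, p))
treat = rng.binomial(1, 1 / (1 + np.exp(-(X[:, 0] - 0.5 * X[:, 1]))))
y = rng.binomial(1, 1 / (1 + np.exp(-(-1 + 0.8 * treat + 0.5 * X[:, 0]))))

ps = LogisticRegression(max_iter=1000).fit(X, treat).predict_proba(X)[:, 1]
strata = np.digitize(ps, np.quantile(ps, [0.2, 0.4, 0.6, 0.8]))  # quintile strata 0..4

draws = []
for _ in range(2000):
    ate = 0.0
    for s in range(5):
        m = strata == s
        for arm, sign in ((1, +1), (0, -1)):
            grp = m & (treat == arm)
            # Beta(1,1) prior + binomial likelihood -> Beta posterior draw
            ate += sign * rng.beta(1 + y[grp].sum(), 1 + (1 - y[grp]).sum()) * m.mean()
    draws.append(ate)

print("posterior mean ATE:", np.round(np.mean(draws), 3),
      "95% interval:", np.round(np.quantile(draws, [0.025, 0.975]), 3))
```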
Estimation of Subpixel Snow-Covered Area by Nonparametric Regression Splines
NASA Astrophysics Data System (ADS)
Kuter, S.; Akyürek, Z.; Weber, G.-W.
2016-10-01
Measurement of the areal extent of snow cover with high accuracy plays an important role in hydrological and climate modeling. Remotely-sensed data acquired by earth-observing satellites offer great advantages for timely monitoring of snow cover. However, the main obstacle is the tradeoff between the temporal and spatial resolution of satellite imagery. Soft or subpixel classification of low- or moderate-resolution satellite images is a preferred technique to overcome this problem. The most frequently employed snow cover fraction methods applied to Moderate Resolution Imaging Spectroradiometer (MODIS) data have evolved from spectral unmixing and empirical Normalized Difference Snow Index (NDSI) methods to the latest machine learning-based artificial neural networks (ANNs). This study demonstrates the implementation of subpixel snow-covered area estimation based on the state-of-the-art nonparametric spline regression method, namely, Multivariate Adaptive Regression Splines (MARS). MARS models were trained by using MODIS top of atmospheric reflectance values of bands 1-7 as predictor variables. Reference percentage snow cover maps were generated from higher spatial resolution Landsat ETM+ binary snow cover maps. A multilayer feed-forward ANN with one hidden layer trained with backpropagation was also employed to estimate the percentage snow-covered area on the same data set. The results indicated that the developed MARS model performed better than the ANN model.
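MARS is not part of scikit-learn, so the sketch below uses an additive penalized-spline regression (SplineTransformer plus ridge, without MARS-style interaction terms) as a crude stand-in, alongside a small feed-forward ANN to echo the comparison described above; the band reflectances and snow fractions are synthetic.

```python
# Compare an additive-spline regression with a one-hidden-layer ANN on a toy
# band-reflectance -> snow-fraction problem.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import SplineTransformer, StandardScaler

rng = np.random.default_rng(8)
n = 2000
bands = rng.uniform(0, 1, (n, 7))                      # toy reflectances for MODIS bands 1-7
ndsi = (bands[:, 3] - bands[:, 5]) / (bands[:, 3] + bands[:, 5] + 1e-6)
snow_frac = np.clip(0.5 + 0.8 * np.tanh(3 * ndsi) + rng.normal(0, 0.05, n), 0, 1)

spline_model = make_pipeline(SplineTransformer(n_knots=6, degree=3), Ridge(alpha=1.0))
ann_model = make_pipeline(StandardScaler(),
                          MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0))

for name, model in [("additive splines", spline_model), ("one-hidden-layer ANN", ann_model)]:
    rmse = -cross_val_score(model, bands, snow_frac, cv=5,
                            scoring="neg_root_mean_squared_error").mean()
    print(f"{name}: CV RMSE = {rmse:.3f}")
```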
Predicting the Risk of Breakthrough Urinary Tract Infections: Primary Vesicoureteral Reflux.
Hidas, Guy; Billimek, John; Nam, Alexander; Soltani, Tandis; Kelly, Maryellen S; Selby, Blake; Dorgalli, Crystal; Wehbi, Elias; McAleer, Irene; McLorie, Gordon; Greenfield, Sheldon; Kaplan, Sherrie H; Khoury, Antoine E
2015-11-01
We constructed a risk prediction instrument stratifying patients with primary vesicoureteral reflux into groups according to their 2-year probability of breakthrough urinary tract infection. Demographic and clinical information was retrospectively collected in children diagnosed with primary vesicoureteral reflux and followed for 2 years. Bivariate and binary logistic regression analyses were performed to identify factors associated with breakthrough urinary tract infection. The final regression model was used to compute an estimation of the 2-year probability of breakthrough urinary tract infection for each subject. Accuracy of the binary classifier for breakthrough urinary tract infection was evaluated using receiver operating characteristic (ROC) curve analysis. Three distinct risk groups were identified. The model was then validated in a prospective cohort. In a total of 252 patients, bivariate analyses showed that high grade (IV or V) vesicoureteral reflux (OR 9.4, 95% CI 3.8-23.5, p <0.001), presentation after urinary tract infection (OR 5.3, 95% CI 1.1-24.7, p = 0.034) and female gender (OR 2.6, 95% CI 0.097-7.11, p <0.054) were important risk factors for breakthrough urinary tract infection. Subgroup analysis revealed bladder and bowel dysfunction was a significant risk factor more pronounced in low grade (I to III) vesicoureteral reflux (OR 2.8, p = 0.018). The estimation model was applied for prospective validation, which demonstrated predicted vs actual 2-year breakthrough urinary tract infection rates of 19% vs 21%. Stratifying the patients into 3 risk groups based on parameters in the risk model showed 2-year risk for breakthrough urinary tract infection was 8.6%, 26.0% and 62.5% in the low, intermediate and high risk groups, respectively. This proposed risk stratification and probability model allows prediction of 2-year risk of patient breakthrough urinary tract infection to better inform parents of possible outcomes and treatment strategies. Copyright © 2015 American Urological Association Education and Research, Inc. Published by Elsevier Inc. All rights reserved.
Rispo, Antonio; Imperatore, Nicola; Testa, Anna; Bucci, Luigi; Luglio, Gaetano; De Palma, Giovanni Domenico; Rea, Matilde; Nardone, Olga Maria; Caporaso, Nicola; Castiglione, Fabiana
2018-03-08
In the management of Crohn's Disease (CD) patients, having a simple score combining clinical, endoscopic and imaging features to predict the risk of surgery could help to tailor treatment more effectively. AIMS: To prospectively evaluate the one-year risk factors for surgery in refractory/severe CD and to generate a risk matrix for predicting the probability of surgery at one year. CD patients needing a disease re-assessment at our tertiary IBD centre underwent clinical, laboratory, endoscopy and bowel sonography (BS) examinations within one week. The optimal cut-off values in predicting surgery were identified using ROC curves for the Simple Endoscopic Score for CD (SES-CD), bowel wall thickness (BWT) at BS, and small bowel CD extension at BS. Binary logistic regression and Cox's regression were then carried out. Finally, the probabilities of surgery were calculated for selected baseline levels of covariates and results were arranged in a prediction matrix. Of 100 CD patients, 30 underwent surgery within one year. SES-CD ≥ 9 (OR 15.3; p<0.001), BWT ≥ 7 mm (OR 15.8; p<0.001), small bowel CD extension at BS ≥ 33 cm (OR 8.23; p<0.001) and stricturing/penetrating behavior (OR 4.3; p<0.001) were the only independent factors predictive of surgery at one year based on binary logistic and Cox's regressions. Our matrix model combined these risk factors, and the probability of surgery ranged from 0.48% to 87.5% (sixteen combinations). Our risk matrix combining clinical, endoscopic and ultrasonographic findings can accurately predict the one-year risk of surgery in patients with severe/refractory CD requiring a disease re-evaluation. This tool could be of value in clinical practice, serving as the basis for a tailored management of CD patients.
Park, Seon-Cheol; Lee, Min-Soo; Shinfuku, Naotaka; Sartorius, Norman; Park, Yong Chon
2015-09-01
The purpose of this study was to investigate whether there were gender-specific depressive symptom profiles or gender-specific patterns of psychotropic agent usage in Asian patients with depression. Clinical data from the Research on Asian Psychotropic Prescription Patterns for Antidepressants study (1171 depressed patients) were used to determine gender differences by analysis of covariance for continuous variables and by logistic regression analysis for discrete variables. In addition, a binary logistic regression model was fitted to identify independent clinical correlates of the gender-specific pattern of psychotropic drug usage. Men were more likely than women to have loss of interest (adjusted odds ratio = 1.379, p = 0.009), fatigue (adjusted odds ratio = 1.298, p = 0.033) and concurrent substance abuse (adjusted odds ratio = 3.793, p = 0.008), but gender differences in other symptom profiles and clinical features were not significant. Men were also more likely than women to be prescribed adjunctive therapy with a second-generation antipsychotic (adjusted odds ratio = 1.320, p = 0.044). However, men were less likely than women to have suicidal thoughts/acts (adjusted odds ratio = 0.724, p = 0.028). Binary logistic regression models revealed that lower age (odds ratio = 0.986, p = 0.027) and current hospitalization (odds ratio = 3.348, p < 0.0001) were independent clinical correlates of the use of second-generation antipsychotics as adjunctive therapy for treating depressed Asian men. Unique gender-specific symptom profiles and gender-specific patterns of psychotropic drug usage can be identified in Asian patients with depression. Hence, ethnic and cultural influences on the gender preponderance of depression should be considered in the clinical psychiatry of Asian patients. © The Royal Australian and New Zealand College of Psychiatrists 2015.
A Maximum Likelihood Approach to Functional Mapping of Longitudinal Binary Traits
Wang, Chenguang; Li, Hongying; Wang, Zhong; Wang, Yaqun; Wang, Ningtao; Wang, Zuoheng; Wu, Rongling
2013-01-01
Despite their importance in biology and biomedicine, genetic mapping of binary traits that change over time has not been well explored. In this article, we develop a statistical model for mapping quantitative trait loci (QTLs) that govern longitudinal responses of binary traits. The model is constructed within the maximum likelihood framework, by which the association between binary responses is modeled in terms of conditional log odds-ratios. With this parameterization, the maximum likelihood estimates (MLEs) of marginal mean parameters are robust to the misspecification of time dependence. We implement an iterative procedure to obtain the MLEs of QTL genotype-specific parameters that define longitudinal binary responses. The usefulness of the model was validated by analyzing a real example in rice. Simulation studies were performed to investigate the statistical properties of the model, showing that the model has power to identify and map specific QTLs responsible for the temporal pattern of binary traits. PMID:23183762
Collinearity and Causal Diagrams: A Lesson on the Importance of Model Specification.
Schisterman, Enrique F; Perkins, Neil J; Mumford, Sunni L; Ahrens, Katherine A; Mitchell, Emily M
2017-01-01
Correlated data are ubiquitous in epidemiologic research, particularly in nutritional and environmental epidemiology where mixtures of factors are often studied. Our objectives are to demonstrate how highly correlated data arise in epidemiologic research and provide guidance, using a directed acyclic graph approach, on how to proceed analytically when faced with highly correlated data. We identified three fundamental structural scenarios in which high correlation between a given variable and the exposure can arise: intermediates, confounders, and colliders. For each of these scenarios, we evaluated the consequences of increasing correlation between the given variable and the exposure on the bias and variance for the total effect of the exposure on the outcome using unadjusted and adjusted models. We derived closed-form solutions for continuous outcomes using linear regression and empirically present our findings for binary outcomes using logistic regression. For models properly specified, total effect estimates remained unbiased even when there was almost perfect correlation between the exposure and a given intermediate, confounder, or collider. In general, as the correlation increased, the variance of the parameter estimate for the exposure in the adjusted models increased, while in the unadjusted models, the variance increased to a lesser extent or decreased. Our findings highlight the importance of considering the causal framework under study when specifying regression models. Strategies that do not take into consideration the causal structure may lead to biased effect estimation for the original question of interest, even under high correlation.
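A small simulation in the spirit of the continuous-outcome case described above: as a confounder Z becomes more correlated with the exposure X, the adjusted estimate stays unbiased while its standard error grows, whereas the unadjusted estimate is biased. The data-generating values below are illustrative only, not the paper's scenarios.

```python
# Adjusted vs. unadjusted linear models as corr(X, Z) increases, Z a confounder.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(9)
n = 5000
for rho in (0.0, 0.5, 0.9, 0.99):
    z = rng.normal(size=n)
    x = rho * z + np.sqrt(1 - rho**2) * rng.normal(size=n)   # corr(X, Z) ~ rho
    y = 1.0 * x + 1.0 * z + rng.normal(size=n)               # Z confounds X -> Y
    unadj = sm.OLS(y, sm.add_constant(x)).fit()
    adj = sm.OLS(y, sm.add_constant(np.column_stack([x, z]))).fit()
    print(f"rho={rho:4.2f}  unadjusted b={unadj.params[1]:.2f} (SE {unadj.bse[1]:.3f})"
          f"  adjusted b={adj.params[1]:.2f} (SE {adj.bse[1]:.3f})")
```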
NASA Astrophysics Data System (ADS)
Eldridge, J. J.; Stanway, E. R.; Xiao, L.; McClelland, L. A. S.; Taylor, G.; Ng, M.; Greis, S. M. L.; Bray, J. C.
2017-11-01
The Binary Population and Spectral Synthesis suite of binary stellar evolution models and synthetic stellar populations provides a framework for the physically motivated analysis of both the integrated light from distant stellar populations and the detailed properties of those nearby. We present a new version 2.1 data release of these models, detailing the methodology by which Binary Population and Spectral Synthesis incorporates binary mass transfer and its effect on stellar evolution pathways, as well as the construction of simple stellar populations. We present key tests of the latest Binary Population and Spectral Synthesis model suite, demonstrating its ability to reproduce the colours and derived properties of resolved stellar populations, including well-constrained eclipsing binaries. We consider observational constraints on the ratio of massive star types and the distribution of stellar remnant masses. We describe the identification of supernova progenitors in our models, and demonstrate good agreement with the properties of observed progenitors. We also test our models against photometric and spectroscopic observations of unresolved stellar populations, both in the local and distant Universe, finding that binary models provide a self-consistent explanation for observed galaxy properties across a broad redshift range. Finally, we carefully describe the limitations of our models, and areas where we expect to see significant improvement in future versions.
Chen, Han; Wang, Chaolong; Conomos, Matthew P.; Stilp, Adrienne M.; Li, Zilin; Sofer, Tamar; Szpiro, Adam A.; Chen, Wei; Brehm, John M.; Celedón, Juan C.; Redline, Susan; Papanicolaou, George J.; Thornton, Timothy A.; Laurie, Cathy C.; Rice, Kenneth; Lin, Xihong
2016-01-01
Linear mixed models (LMMs) are widely used in genome-wide association studies (GWASs) to account for population structure and relatedness, for both continuous and binary traits. Motivated by the failure of LMMs to control type I errors in a GWAS of asthma, a binary trait, we show that LMMs are generally inappropriate for analyzing binary traits when population stratification leads to violation of the LMM’s constant-residual variance assumption. To overcome this problem, we develop a computationally efficient logistic mixed model approach for genome-wide analysis of binary traits, the generalized linear mixed model association test (GMMAT). This approach fits a logistic mixed model once per GWAS and performs score tests under the null hypothesis of no association between a binary trait and individual genetic variants. We show in simulation studies and real data analysis that GMMAT effectively controls for population structure and relatedness when analyzing binary traits in a wide variety of study designs. PMID:27018471
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stevenson, Simon; Ohme, Frank; Fairhurst, Stephen, E-mail: simon.stevenson@ligo.org
2015-09-01
The coalescence of compact binaries containing neutron stars or black holes is one of the most promising signals for advanced ground-based laser interferometer gravitational-wave (GW) detectors, with the first direct detections expected over the next few years. The rate of binary coalescences and the distribution of component masses is highly uncertain, and population synthesis models predict a wide range of plausible values. Poorly constrained parameters in population synthesis models correspond to poorly understood astrophysics at various stages in the evolution of massive binary stars, the progenitors of binary neutron star and binary black hole systems. These include effects such as supernova kick velocities, parameters governing the energetics of common envelope evolution and the strength of stellar winds. Observing multiple binary black hole systems through GWs will allow us to infer details of the astrophysical mechanisms that lead to their formation. Here we simulate GW observations from a series of population synthesis models including the effects of known selection biases, measurement errors and cosmology. We compare the predictions arising from different models and show that we will be able to distinguish between them with observations (or the lack of them) from the early runs of the advanced LIGO and Virgo detectors. This will allow us to narrow down the large parameter space for binary evolution models.
Sahoo, Debasis; Deck, Caroline; Yoganandan, Narayan; Willinger, Rémy
2013-12-01
A composite material model for the skull, taking damage into account, is implemented in the Strasbourg University finite element head model (SUFEHM) in order to enhance the existing skull mechanical constitutive law. The skull behavior is validated in terms of fracture patterns and contact forces by reconstructing 15 experimental cases. The new SUFEHM skull model is capable of reproducing skull fracture precisely. The composite skull model is validated not only for maximum forces, but also, for the first time, against actual force-time curves from PMHS under lateral impact. Skull strain energy is found to be a pertinent parameter for predicting skull fracture, and based on statistical (binary logistic regression) analysis it is observed that a 50% risk of skull fracture occurs at a skull strain energy of 544.0 mJ. © 2013 Elsevier Ltd. All rights reserved.
van de Kassteele, Jan; Zwakhals, Laurens; Breugelmans, Oscar; Ameling, Caroline; van den Brink, Carolien
2017-07-01
Local policy makers increasingly need information on health-related indicators at smaller geographic levels like districts or neighbourhoods. Although more large data sources have become available, direct estimates of the prevalence of a health-related indicator cannot be produced for neighbourhoods for which only small samples or no samples are available. Small area estimation provides a solution, but unit-level models for binary-valued outcomes that can handle both non-linear effects of the predictors and spatially correlated random effects in a unified framework are rarely encountered. We used data on 26 binary-valued health-related indicators collected on 387,195 persons in the Netherlands. We associated the health-related indicators at the individual level with a set of 12 predictors obtained from national registry data. We formulated a structured additive regression model for small area estimation. The model captured potential non-linear relations between the predictors and the outcome through additive terms in a functional form using penalized splines and included a term that accounted for spatially correlated heterogeneity between neighbourhoods. The registry data were used to predict individual outcomes which in turn are aggregated into higher geographical levels, i.e. neighbourhoods. We validated our method by comparing the estimated prevalences with observed prevalences at the individual level and by comparing the estimated prevalences with direct estimates obtained by weighting methods at municipality level. We estimated the prevalence of the 26 health-related indicators for 415 municipalities, 2599 districts and 11,432 neighbourhoods in the Netherlands. We illustrate our method on overweight data and show that there are distinct geographic patterns in the overweight prevalence. Calibration plots show that the estimated prevalences agree very well with observed prevalences at the individual level. The estimated prevalences agree reasonably well with the direct estimates at the municipal level. Structured additive regression is a useful tool to provide small area estimates in a unified framework. We are able to produce valid nationwide small area estimates of 26 health-related indicators at neighbourhood level in the Netherlands. The results can be used for local policy makers to make appropriate health policy decisions.
Beyond the Binary: Dexterous Teaching and Knowing in Mathematics Education
ERIC Educational Resources Information Center
Adam, Raoul; Chigeza, Philemon
2015-01-01
This paper identifies binary oppositions in the discourse of mathematics education and introduces a binary-epistemic model for (re)conceptualising these oppositions and the epistemic-pedagogic problems they represent. The model is attentive to the contextual relationships between pedagogically relevant binaries (e.g., traditional/progressive,…
On the frequency of close binary systems among very low-mass stars and brown dwarfs
NASA Astrophysics Data System (ADS)
Maxted, P. F. L.; Jeffries, R. D.
2005-09-01
We have used Monte Carlo simulation techniques and published radial velocity surveys to constrain the frequency of very low-mass star (VLMS) and brown dwarf (BD) binary systems and their separation (a) distribution. Gaussian models for the separation distribution with a peak at a = 4 au and 0.6 ≤ σ_log(a/au) ≤ 1.0 correctly predict the number of observed binaries, yielding a close (a < 2.6 au) binary frequency of 17-30 per cent and an overall VLMS/BD binary frequency of 32-45 per cent. We find that the available N-body models of VLMS/BD formation from dynamically decaying protostellar multiple systems are excluded at >99 per cent confidence because they predict too few close binary VLMS/BDs. The large number of close binaries and high overall binary frequency are also very inconsistent with recent smoothed particle hydrodynamical modelling and argue against a dynamical origin for VLMS/BDs.
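A toy Monte Carlo in the spirit of this separation-distribution model is sketched below: separations are drawn from a Gaussian in log(a) peaked at 4 au, and the close-binary fraction (a < 2.6 au) is computed. The Gaussian width is taken from the quoted range, while the overall binary frequency is an assumed input rather than a fitted quantity.

```python
# Illustrative Monte Carlo over a Gaussian log-separation distribution.
import numpy as np

rng = np.random.default_rng(42)
n_binaries = 100_000
sigma_log_a = 0.8                      # within the quoted 0.6-1.0 range
log_a = rng.normal(np.log10(4.0), sigma_log_a, n_binaries)
a = 10 ** log_a                        # separations in au

close_fraction_among_binaries = np.mean(a < 2.6)
overall_binary_frequency = 0.38        # assumed overall VLMS/BD binary frequency
close_binary_frequency = overall_binary_frequency * close_fraction_among_binaries
print(f"close (a < 2.6 au) binaries: {close_binary_frequency:.1%} of all VLMS/BDs")
```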
A unifying framework for marginalized random intercept models of correlated binary outcomes
Swihart, Bruce J.; Caffo, Brian S.; Crainiceanu, Ciprian M.
2013-01-01
We demonstrate that many current approaches for marginal modeling of correlated binary outcomes produce likelihoods that are equivalent to the copula-based models herein. These general copula models of underlying latent threshold random variables yield likelihood-based models for marginal fixed effects estimation and interpretation in the analysis of correlated binary data with exchangeable correlation structures. Moreover, we propose a nomenclature and set of model relationships that substantially elucidates the complex area of marginalized random intercept models for binary data. A diverse collection of didactic mathematical and numerical examples is given to illustrate concepts. PMID:25342871
Evaluating uses of data mining techniques in propensity score estimation: a simulation study.
Setoguchi, Soko; Schneeweiss, Sebastian; Brookhart, M Alan; Glynn, Robert J; Cook, E Francis
2008-06-01
In propensity score modeling, it is a standard practice to optimize the prediction of exposure status based on the covariate information. In a simulation study, we examined in what situations analyses based on various types of exposure propensity score (EPS) models using data mining techniques such as recursive partitioning (RP) and neural networks (NN) produce unbiased and/or efficient results. We simulated data for a hypothetical cohort study (n = 2000) with a binary exposure/outcome and 10 binary/continuous covariates with seven scenarios differing by non-linear and/or non-additive associations between exposure and covariates. EPS models used logistic regression (LR) (all possible main effects), RP1 (without pruning), RP2 (with pruning), and NN. We calculated c-statistics (C), standard errors (SE), and bias of exposure-effect estimates from outcome models for the PS-matched dataset. Data mining techniques yielded higher C than LR (mean: NN, 0.86; RP1, 0.79; RP2, 0.72; and LR, 0.76). SE tended to be greater in models with higher C. Overall bias was small for each strategy, although NN estimates tended to be the least biased. C was not correlated with the magnitude of bias (correlation coefficient [COR] = -0.3, p = 0.1) but was correlated with increased SE (COR = 0.7, p < 0.001). Effect estimates from EPS models by simple LR were generally robust. NN models generally provided the least numerically biased estimates. C was not associated with the magnitude of bias but was associated with increased SE.
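The sketch below mimics this comparison on simulated data: an exposure with a non-linear, non-additive dependence on covariates is generated, then a main-effects logistic regression and a depth-limited decision tree (a simple stand-in for pruned recursive partitioning) are fitted as propensity models and compared by c-statistic. The data-generating coefficients and tree settings are illustrative assumptions, not the original simulation design.

```python
# Sketch of exposure-propensity-score estimation scored by the c-statistic.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 2000
X = rng.normal(size=(n, 10))
# Non-additive, non-linear true exposure model (one of the scenario styles).
logit = 0.5 * X[:, 0] - 0.7 * X[:, 1] * X[:, 2] + 0.4 * X[:, 3] ** 2 - 0.5
exposure = rng.binomial(1, 1 / (1 + np.exp(-logit)))

lr = LogisticRegression(max_iter=1000).fit(X, exposure)            # main effects only
tree = DecisionTreeClassifier(max_depth=4, min_samples_leaf=50).fit(X, exposure)

for name, model in [("logistic regression", lr), ("depth-limited tree", tree)]:
    ps = model.predict_proba(X)[:, 1]                               # estimated propensity score
    print(name, "c-statistic:", round(roc_auc_score(exposure, ps), 3))
```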
Modeling the rate of HIV testing from repeated binary data amidst potential never-testers.
Rice, John D; Johnson, Brent A; Strawderman, Robert L
2018-01-04
Many longitudinal studies with a binary outcome measure involve a fraction of subjects with a homogeneous response profile. In our motivating data set, a study on the rate of human immunodeficiency virus (HIV) self-testing in a population of men who have sex with men (MSM), a substantial proportion of the subjects did not self-test during the follow-up study. The observed data in this context consist of a binary sequence for each subject indicating whether or not that subject experienced any events between consecutive observation time points, so subjects who never self-tested were observed to have a response vector consisting entirely of zeros. Conventional longitudinal analysis is not equipped to handle questions regarding the rate of events (as opposed to the odds, as in the classical logistic regression model). With the exception of discrete mixture models, such methods are also not equipped to handle settings in which there may exist a group of subjects for whom no events will ever occur, i.e. a so-called "never-responder" group. In this article, we model the observed data assuming that events occur according to some unobserved continuous-time stochastic process. In particular, we consider the underlying subject-specific processes to be Poisson conditional on some unobserved frailty, leading to a natural focus on modeling event rates. Specifically, we propose to use the power variance function (PVF) family of frailty distributions, which contains both the gamma and inverse Gaussian distributions as special cases and allows for the existence of a class of subjects having zero frailty. We generalize a computational algorithm developed for a log-gamma random intercept model (Conaway, 1990. A random effects model for binary data. Biometrics 46, 317-328) to compute the exact marginal likelihood, which is then maximized to obtain estimates of model parameters. We conduct simulation studies, exploring the performance of the proposed method in comparison with competitors. Applying the PVF as well as a Gaussian random intercept model and a corresponding discrete mixture model to our motivating data set, we conclude that the group assigned to receive follow-up messages via SMS was self-testing at a significantly lower rate than the control group, but that there is no evidence to support the existence of a group of never-testers. © The Author 2017. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
NASA Astrophysics Data System (ADS)
Anyalebechi, P. N.
Reported experimentally determined values of hydrogen solubility in liquid and solid Al-H and Al-H-X (where X = Cu, Si, Zn, Mg, Li, Fe or Ti) systems have been critically reviewed and analyzed in terms of Wagner's interaction parameter. An attempt has been made to use Wagner's interaction parameter and statistical linear regression models derived from reported hydrogen solubility limits for binary aluminum alloys to predict the hydrogen solubility limits in liquid and solid (commercial) multicomponent aluminum alloys. Reasons for the observed poor agreement between the predicted and experimentally determined hydrogen solubility limits are discussed.
NASA Astrophysics Data System (ADS)
Jacobson, Seth A.; Marzari, Francesco; Rossi, Alessandro; Scheeres, Daniel J.
2016-10-01
From the results of a comprehensive asteroid population evolution model, we conclude that the YORP-induced rotational fission hypothesis is consistent with the observed population statistics of small asteroids in the main belt including binaries and contact binaries. These conclusions rest on the asteroid rotation model of Marzari et al. ([2011] Icarus, 214, 622-631), which incorporates both the YORP effect and collisional evolution. This work adds to that model the rotational fission hypothesis, described in detail within, and the binary evolution model of Jacobson et al. ([2011a] Icarus, 214, 161-178) and Jacobson et al. ([2011b] The Astrophysical Journal Letters, 736, L19). Our complete asteroid population evolution model is highly constrained by these and other previous works, and therefore it has only two significant free parameters: the ratio of low to high mass ratio binaries formed after rotational fission events and the mean strength of the binary YORP (BYORP) effect. We successfully reproduce characteristic statistics of the small asteroid population: the binary fraction, the fast binary fraction, the steady-state mass ratio fraction and the contact binary fraction. We find that in order for the model to best match observations, rotational fission produces high mass ratio (> 0.2) binary components four to eight times as frequently as low mass ratio (< 0.2) components, where the mass ratio is the mass of the secondary component divided by the mass of the primary component. This is consistent with the post-rotational fission binary system mass ratio being drawn from either a flat or a positive and shallow distribution, since the high mass ratio bin is four times the size of the low mass ratio bin; this is in contrast to the observed steady-state binary mass ratio, which has a negative and steep distribution. This can be understood in the context of the BYORP-tidal equilibrium hypothesis, which predicts that low mass ratio binaries survive for a significantly longer period of time than high mass ratio systems. We also find that the mean of the log-normal BYORP coefficient distribution is μ_B ≳ 10^-2, which is consistent with estimates from shape modeling (McMahon and Scheeres, 2012a).
[Overload in the informal caregivers of patients with multiple comorbidities in an urban area].
Álvarez-Tello, Margarita; Casado-Mejía, Rosa; Ortega-Calvo, Manuel; Ruiz-Arias, Esperanza
2012-01-01
The aim of the study was, to determine the profile of the family caregiver of patients with multiple pathologies, identify factors associated with overload, and construct predictive models using items from the Caregiver Strain Index (CSI). A cross-sectional study of caregivers of patients with multiple comorbidities who attended an urban health centre. Data were collected from health records and questionnaires (Barthel index, Pfeiffer index, and CSI). Statistical analysis was performed using measures of central tendency and dispersion, and by building multivariate models with binary logistic regression with the CSI items as predictors (program R version 2.14.0). The sample included 67 caregivers, with a mean age of 64.69 years (standard deviation=12.71, median 62 years), of whom 74.6% were women, 35.8% were wives, and 32.8% were daughters. The level of dependence of the patients cared for was total/severe in 77.6%, and moderate in 12% (Barthel), and 47.8% had some level of cognitive impairment (Pfeiffer). A CSI equal or greater than 7 was seen in 47.8% of caregivers, identifying life problems in more than 40% of them such as, restriction of social life, physical exertion, discomfort with change, bad behaviour, personal and family emotional changes, and sleep disturbances. Item 4 of the CSI, analysing the social restriction, was the one that showed a greater significance in the predictive multivariate model. Item 12 (economic burden) was the most significant with age in patients with cognitive impairment. Women tend to take the role of caregiver at an earlier age than men in the urban environment studied, and items from CSI showed that items 4 (social restrictions) and 12 (economic burden) have more significance in the predictive models constructed with Binary Logistic Regression. Copyright © 2012 Elsevier España, S.L. All rights reserved.
Kukush, Alexander; Shklyar, Sergiy; Masiuk, Sergii; Likhtarov, Illya; Kovgan, Lina; Carroll, Raymond J; Bouville, Andre
2011-02-16
With a binary response Y, the dose-response model under consideration is logistic in flavor, with pr(Y=1 | D) = R(1+R)^{-1}, R = λ_0 + EAR·D, where λ_0 is the baseline incidence rate and EAR is the excess absolute risk per gray. The calculated thyroid dose of a person i is expressed as D_i^mes = f_i Q_i^mes / M_i^mes. Here, Q_i^mes is the measured content of radioiodine in the thyroid gland of person i at time t_mes, M_i^mes is the estimate of the thyroid mass, and f_i is the normalizing multiplier. The Q_i and M_i are measured with multiplicative errors V_i^Q and V_i^M, so that Q_i^mes = Q_i^tr V_i^Q (a classical measurement error model) and M_i^tr = M_i^mes V_i^M (a Berkson measurement error model). Here, Q_i^tr is the true content of radioactivity in the thyroid gland, and M_i^tr is the true value of the thyroid mass. The error in f_i is much smaller than the errors in (Q_i^mes, M_i^mes) and is ignored in the analysis. By means of Parametric Full Maximum Likelihood and Regression Calibration (under the assumption that the data set of true doses has a lognormal distribution), Nonparametric Full Maximum Likelihood, Nonparametric Regression Calibration, and a properly tuned SIMEX method, we study the influence of measurement errors in thyroid dose on the estimates of λ_0 and EAR. A simulation study is presented based on a real sample from the epidemiological studies. The doses were reconstructed in the framework of the Ukrainian-American project on the investigation of post-Chernobyl thyroid cancers in Ukraine, and the underlying subpopulation was artificially enlarged in order to increase the statistical power. The true risk parameters were given the values from earlier epidemiological studies, and the binary response was then simulated according to the dose-response model.
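For concreteness, a minimal simulation of this dose-response model is sketched below: binary responses are generated from pr(Y=1 | D) = R/(1+R) with R = λ_0 + EAR·D, classical multiplicative error is added to the doses, and a naive maximum-likelihood fit that ignores the error is computed with scipy. All parameter values are illustrative placeholders, not the Chernobyl-study values, and the regression-calibration and SIMEX corrections are not reproduced here.

```python
# Sketch of the dose-response model with a naive MLE that ignores dose error.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(7)
n = 5000
lambda0_true, ear_true = 0.02, 0.2               # illustrative baseline rate and EAR per Gy
d_true = rng.lognormal(mean=-1.0, sigma=1.0, size=n)        # true doses (Gy)
r = lambda0_true + ear_true * d_true
y = rng.binomial(1, r / (1 + r))

# Classical multiplicative error on the measured dose.
d_meas = d_true * rng.lognormal(mean=0.0, sigma=0.5, size=n)


def neg_loglik(theta, dose, y):
    lam0, ear = np.exp(theta)                    # keep parameters positive
    rr = lam0 + ear * dose
    p = rr / (1 + rr)
    return -np.sum(y * np.log(p) + (1 - y) * np.log1p(-p))


fit = minimize(neg_loglik, x0=np.log([0.01, 0.01]), args=(d_meas, y),
               method="Nelder-Mead")
print("naive estimates (lambda0, EAR):", np.exp(fit.x))
```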
Automated particle identification through regression analysis of size, shape and colour
NASA Astrophysics Data System (ADS)
Rodriguez Luna, J. C.; Cooper, J. M.; Neale, S. L.
2016-04-01
Rapid point-of-care diagnostic tests and tests to provide therapeutic information are now available for a range of specific conditions, from the measurement of blood glucose levels for diabetes to card agglutination tests for parasitic infections. Due to a lack of specificity, these tests are often backed up by more conventional lab-based diagnostic methods; for example, a card agglutination test may be carried out for a suspected parasitic infection in the field and, if positive, a blood sample can then be sent to a lab for confirmation. The eventual diagnosis is often achieved by microscopic examination of the sample. In this paper we propose a computerized vision system for aiding in the diagnostic process; this system uses a novel particle recognition algorithm to improve specificity and speed during the diagnostic process. We show the detection and classification of different types of cells in a diluted blood sample using regression analysis of their size, shape and colour. The first step is to define the objects to be tracked by a Gaussian mixture model for background subtraction and binary opening and closing for noise suppression. After subtracting the objects of interest from the background, the next challenge is to predict whether a given object belongs to a certain category or not. This is a classification problem, and the output of the algorithm is a Boolean value (true/false). As such, the computer program should be able to "predict" with a reasonable level of confidence whether a given particle belongs to the kind we are looking for. We show the use of a binary logistic regression analysis with three continuous predictors: size, shape and colour histogram. The results suggest these variables could be very useful in a logistic regression equation, as they proved to have a relatively high predictive value on their own.
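A minimal sketch of the classification step follows: a binary logistic regression on three continuous predictors (size, shape, mean colour) fitted to simulated particle measurements rather than to features extracted from segmented microscope frames. The feature distributions are invented for illustration.

```python
# Sketch of particle classification from size, shape and colour predictors.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
n = 1000
is_target = rng.binomial(1, 0.4, n)
size = rng.normal(8.0, 1.0, n) + 2.0 * is_target       # scaled particle area
shape = rng.normal(0.7, 0.1, n) + 0.15 * is_target     # circularity-like measure
colour = rng.normal(0.5, 0.1, n) - 0.2 * is_target     # mean colour channel value

X = np.column_stack([size, shape, colour])
X_tr, X_te, y_tr, y_te = train_test_split(X, is_target, test_size=0.3, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

print("held-out accuracy:", round(clf.score(X_te, y_te), 3))
print("predicted class for one particle:", clf.predict([[10.2, 0.85, 0.33]])[0])
```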
Is the perceived placebo effect comparable between adults and children? A meta-regression analysis.
Janiaud, Perrine; Cornu, Catherine; Lajoinie, Audrey; Djemli, Amina; Cucherat, Michel; Kassai, Behrouz
2017-01-01
A potential larger perceived placebo effect in children compared with adults could influence the detection of the treatment effect and the extrapolation of the treatment benefit from adults to children. This study aims to explore this potential difference, using a meta-epidemiological approach. A systematic review of the literature was done to identify trials included in meta-analyses evaluating a drug intervention with separate data for adults and children. The standardized mean change and the proportion of responders (binary outcomes) were used to calculate the perceived placebo effect. A meta-regression analysis was conducted to test for the difference between adults and children of the perceived placebo effect. For binary outcomes, the perceived placebo effect was significantly more favorable in children compared with adults (β = 0.13; P = 0.001). Parallel group trials (β = -1.83; P < 0.001), subjective outcomes (β = -0.76; P < 0.001), and the disease type significantly influenced the perceived placebo effect. The perceived placebo effect is different between adults and children for binary outcomes. This difference seems to be influenced by the design, the disease, and outcomes. Calibration of new studies for children should consider cautiously the placebo effect in children.
Chen, Han; Wang, Chaolong; Conomos, Matthew P; Stilp, Adrienne M; Li, Zilin; Sofer, Tamar; Szpiro, Adam A; Chen, Wei; Brehm, John M; Celedón, Juan C; Redline, Susan; Papanicolaou, George J; Thornton, Timothy A; Laurie, Cathy C; Rice, Kenneth; Lin, Xihong
2016-04-07
Linear mixed models (LMMs) are widely used in genome-wide association studies (GWASs) to account for population structure and relatedness, for both continuous and binary traits. Motivated by the failure of LMMs to control type I errors in a GWAS of asthma, a binary trait, we show that LMMs are generally inappropriate for analyzing binary traits when population stratification leads to violation of the LMM's constant-residual variance assumption. To overcome this problem, we develop a computationally efficient logistic mixed model approach for genome-wide analysis of binary traits, the generalized linear mixed model association test (GMMAT). This approach fits a logistic mixed model once per GWAS and performs score tests under the null hypothesis of no association between a binary trait and individual genetic variants. We show in simulation studies and real data analysis that GMMAT effectively controls for population structure and relatedness when analyzing binary traits in a wide variety of study designs. Copyright © 2016 The American Society of Human Genetics. Published by Elsevier Inc. All rights reserved.
Analysis of the statistical thermodynamic model for nonlinear binary protein adsorption equilibria.
Zhou, Xiao-Peng; Su, Xue-Li; Sun, Yan
2007-01-01
The statistical thermodynamic (ST) model was used to study nonlinear binary protein adsorption equilibria on an anion exchanger. Single-component and binary protein adsorption isotherms of bovine hemoglobin (Hb) and bovine serum albumin (BSA) on DEAE Spherodex M were determined by batch adsorption experiments in 10 mM Tris-HCl buffer containing a specific NaCl concentration (0.05, 0.10, and 0.15 M) at pH 7.40. The ST model was found to depict the effect of ionic strength on the single-component equilibria well, with model parameters depending on ionic strength. Moreover, the ST model gave acceptable fitting to the binary adsorption data with the fitted single-component model parameters, leading to the estimation of the binary ST model parameter. The effects of ionic strength on the model parameters are reasonably interpreted by the electrostatic and thermodynamic theories. The effective charge of protein in adsorption phase can be separately calculated from the two categories of the model parameters, and the values obtained from the two methods are consistent. The results demonstrate the utility of the ST model for describing nonlinear binary protein adsorption equilibria.
Galaxy Rotation and Rapid Supermassive Binary Coalescence
NASA Astrophysics Data System (ADS)
Holley-Bockelmann, Kelly; Khan, Fazeel Mahmood
2015-09-01
Galaxy mergers usher the supermassive black hole (SMBH) in each galaxy to the center of the potential, where they form an SMBH binary. The binary orbit shrinks by ejecting stars via three-body scattering, but ample work has shown that in spherical galaxy models, the binary separation stalls after ejecting all the stars in its loss cone—this is the well-known final parsec problem. However, it has been shown that SMBH binaries in non-spherical galactic nuclei harden at a nearly constant rate until reaching the gravitational wave regime. Here we use a suite of direct N-body simulations to follow SMBH binary evolution in both corotating and counterrotating flattened galaxy models. For N > 500 K, we find that the evolution of the SMBH binary is convergent and is independent of the particle number. Rotation in general increases the hardening rate of SMBH binaries even more effectively than galaxy geometry alone. SMBH binary hardening rates are similar for co- and counterrotating galaxies. In the corotating case, the center of mass of the SMBH binary settles into an orbit that is in corotation resonance with the background rotating model, and the coalescence time is roughly a few 100 Myr faster than a non-rotating flattened model. We find that counterrotation drives SMBHs to coalesce on a nearly radial orbit promptly after forming a hard binary. We discuss the implications for gravitational wave astronomy, hypervelocity star production, and the effect on the structure of the host galaxy.
GALAXY ROTATION AND RAPID SUPERMASSIVE BINARY COALESCENCE
DOE Office of Scientific and Technical Information (OSTI.GOV)
Holley-Bockelmann, Kelly; Khan, Fazeel Mahmood, E-mail: k.holley@vanderbilt.edu
2015-09-10
Galaxy mergers usher the supermassive black hole (SMBH) in each galaxy to the center of the potential, where they form an SMBH binary. The binary orbit shrinks by ejecting stars via three-body scattering, but ample work has shown that in spherical galaxy models, the binary separation stalls after ejecting all the stars in its loss cone—this is the well-known final parsec problem. However, it has been shown that SMBH binaries in non-spherical galactic nuclei harden at a nearly constant rate until reaching the gravitational wave regime. Here we use a suite of direct N-body simulations to follow SMBH binary evolution in both corotating and counterrotating flattened galaxy models. For N > 500 K, we find that the evolution of the SMBH binary is convergent and is independent of the particle number. Rotation in general increases the hardening rate of SMBH binaries even more effectively than galaxy geometry alone. SMBH binary hardening rates are similar for co- and counterrotating galaxies. In the corotating case, the center of mass of the SMBH binary settles into an orbit that is in corotation resonance with the background rotating model, and the coalescence time is roughly a few 100 Myr faster than a non-rotating flattened model. We find that counterrotation drives SMBHs to coalesce on a nearly radial orbit promptly after forming a hard binary. We discuss the implications for gravitational wave astronomy, hypervelocity star production, and the effect on the structure of the host galaxy.
Frank, Laurence E; Heiser, Willem J
2008-05-01
A set of features is the basis for the network representation of proximity data achieved by feature network models (FNMs). Features are binary variables that characterize the objects in an experiment, with some measure of proximity as the response variable. Sometimes features are provided by theory and play an important role in the construction of the experimental conditions. In some research settings, the features are not known a priori. This paper shows how to generate features in this situation and how to select an adequate subset of features that takes into account a good compromise between model fit and model complexity, using a new version of least angle regression that restricts coefficients to be non-negative, called the Positive Lasso. It will be shown that features can be generated efficiently with Gray codes that are naturally linked to the FNMs. The model selection strategy makes use of the fact that the FNM can be considered as a univariate multiple regression model. A simulation study shows that the proposed strategy leads to satisfactory results if the number of objects is less than or equal to 22. If the number of objects is larger than 22, the number of features selected by our method exceeds the true number of features in some conditions.
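As a rough stand-in for the feature-selection step, the sketch below fits a non-negativity-constrained lasso (scikit-learn's Lasso with positive=True) to a simulated binary feature matrix with a proximity-like response. It is not the least-angle-regression Positive Lasso implementation or the Gray-code feature generation described in the paper.

```python
# Sketch of non-negative lasso selection over binary candidate features.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(5)
n_pairs, n_features = 150, 40
F = rng.binomial(1, 0.5, size=(n_pairs, n_features))    # candidate binary features
true_w = np.zeros(n_features)
true_w[:5] = [1.2, 0.9, 0.7, 0.5, 0.3]                  # only five features matter
proximity = F @ true_w + rng.normal(0, 0.2, n_pairs)    # proximity-like response

lasso = Lasso(alpha=0.05, positive=True).fit(F, proximity)
selected = np.flatnonzero(lasso.coef_ > 0)
print("selected features:", selected)
print("non-zero coefficients:", np.round(lasso.coef_[selected], 2))
```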
Dorazio, Robert M.
2012-01-01
Several models have been developed to predict the geographic distribution of a species by combining measurements of covariates of occurrence at locations where the species is known to be present with measurements of the same covariates at other locations where species occurrence status (presence or absence) is unknown. In the absence of species detection errors, spatial point-process models and binary-regression models for case-augmented surveys provide consistent estimators of a species’ geographic distribution without prior knowledge of species prevalence. In addition, these regression models can be modified to produce estimators of species abundance that are asymptotically equivalent to those of the spatial point-process models. However, if species presence locations are subject to detection errors, neither class of models provides a consistent estimator of covariate effects unless the covariates of species abundance are distinct and independently distributed from the covariates of species detection probability. These analytical results are illustrated using simulation studies of data sets that contain a wide range of presence-only sample sizes. Analyses of presence-only data of three avian species observed in a survey of landbirds in western Montana and northern Idaho are compared with site-occupancy analyses of detections and nondetections of these species.
Assessment of Weighted Quantile Sum Regression for Modeling Chemical Mixtures and Cancer Risk
Czarnota, Jenna; Gennings, Chris; Wheeler, David C
2015-01-01
In evaluation of cancer risk related to environmental chemical exposures, the effect of many chemicals on disease is ultimately of interest. However, because of potentially strong correlations among chemicals that occur together, traditional regression methods suffer from collinearity effects, including regression coefficient sign reversal and variance inflation. In addition, penalized regression methods designed to remediate collinearity may have limitations in selecting the truly bad actors among many correlated components. The recently proposed method of weighted quantile sum (WQS) regression attempts to overcome these problems by estimating a body burden index, which identifies important chemicals in a mixture of correlated environmental chemicals. Our focus was on assessing through simulation studies the accuracy of WQS regression in detecting subsets of chemicals associated with health outcomes (binary and continuous) in site-specific analyses and in non-site-specific analyses. We also evaluated the performance of the penalized regression methods of lasso, adaptive lasso, and elastic net in correctly classifying chemicals as bad actors or unrelated to the outcome. We based the simulation study on data from the National Cancer Institute Surveillance Epidemiology and End Results Program (NCI-SEER) case–control study of non-Hodgkin lymphoma (NHL) to achieve realistic exposure situations. Our results showed that WQS regression had good sensitivity and specificity across a variety of conditions considered in this study. The shrinkage methods had a tendency to incorrectly identify a large number of components, especially in the case of strong association with the outcome. PMID:26005323
Assessment of weighted quantile sum regression for modeling chemical mixtures and cancer risk.
Czarnota, Jenna; Gennings, Chris; Wheeler, David C
2015-01-01
In evaluation of cancer risk related to environmental chemical exposures, the effect of many chemicals on disease is ultimately of interest. However, because of potentially strong correlations among chemicals that occur together, traditional regression methods suffer from collinearity effects, including regression coefficient sign reversal and variance inflation. In addition, penalized regression methods designed to remediate collinearity may have limitations in selecting the truly bad actors among many correlated components. The recently proposed method of weighted quantile sum (WQS) regression attempts to overcome these problems by estimating a body burden index, which identifies important chemicals in a mixture of correlated environmental chemicals. Our focus was on assessing through simulation studies the accuracy of WQS regression in detecting subsets of chemicals associated with health outcomes (binary and continuous) in site-specific analyses and in non-site-specific analyses. We also evaluated the performance of the penalized regression methods of lasso, adaptive lasso, and elastic net in correctly classifying chemicals as bad actors or unrelated to the outcome. We based the simulation study on data from the National Cancer Institute Surveillance Epidemiology and End Results Program (NCI-SEER) case-control study of non-Hodgkin lymphoma (NHL) to achieve realistic exposure situations. Our results showed that WQS regression had good sensitivity and specificity across a variety of conditions considered in this study. The shrinkage methods had a tendency to incorrectly identify a large number of components, especially in the case of strong association with the outcome.
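A much-simplified weighted-quantile-sum sketch is given below: chemical concentrations are scored into quartiles, combined with non-negative weights constrained to sum to one (via a softmax reparameterization), and the resulting index enters a logistic model for a binary outcome. Estimation here is a single constrained maximum-likelihood fit with scipy; the bootstrap weight estimation and validation split of full WQS regression are omitted, and all data are simulated.

```python
# Simplified weighted-quantile-sum index for a binary outcome.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(11)
n, k = 500, 6
chem = rng.lognormal(size=(n, k))                       # correlated-chemical stand-in
# Quartile scores 0..3 for each chemical.
q = np.column_stack([np.searchsorted(np.quantile(chem[:, j], [0.25, 0.5, 0.75]),
                                     chem[:, j]) for j in range(k)])
true_w = np.array([0.5, 0.3, 0.2, 0.0, 0.0, 0.0])       # three "bad actors"
eta = -1.0 + 0.8 * (q @ true_w)
y = rng.binomial(1, 1 / (1 + np.exp(-eta)))


def neg_loglik(theta):
    b0, b1 = theta[0], theta[1]
    w = np.exp(theta[2:])
    w = w / w.sum()                                     # weights on the simplex
    p = 1 / (1 + np.exp(-(b0 + b1 * (q @ w))))
    p = np.clip(p, 1e-9, 1 - 1e-9)
    return -np.sum(y * np.log(p) + (1 - y) * np.log1p(-p))


fit = minimize(neg_loglik, x0=np.zeros(2 + k), method="Nelder-Mead",
               options={"maxiter": 20000})
w_hat = np.exp(fit.x[2:])
w_hat /= w_hat.sum()
print("estimated mixture weights:", np.round(w_hat, 2))
```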
First Higher-Multipole Model of Gravitational Waves from Spinning and Coalescing Black-Hole Binaries
NASA Astrophysics Data System (ADS)
London, Lionel; Khan, Sebastian; Fauchon-Jones, Edward; García, Cecilio; Hannam, Mark; Husa, Sascha; Jiménez-Forteza, Xisco; Kalaghatgi, Chinmay; Ohme, Frank; Pannarale, Francesco
2018-04-01
Gravitational-wave observations of binary black holes currently rely on theoretical models that predict the dominant multipoles (ℓ=2, |m|=2) of the radiation during inspiral, merger, and ringdown. We introduce a simple method to include the subdominant multipoles to binary black hole gravitational waveforms, given a frequency-domain model for the dominant multipoles. The amplitude and phase of the original model are appropriately stretched and rescaled using post-Newtonian results (for the inspiral), perturbation theory (for the ringdown), and a smooth transition between the two. No additional tuning to numerical-relativity simulations is required. We apply a variant of this method to the nonprecessing PhenomD model. The result, PhenomHM, constitutes the first higher-multipole model of spinning and coalescing black-hole binaries, and currently includes the (ℓ,|m|)=(2,2),(3,3),(4,4),(2,1),(3,2),(4,3) radiative moments. Comparisons with numerical-relativity waveforms demonstrate that PhenomHM is more accurate than dominant-multipole-only models for all binary configurations, and typically improves the measurement of binary properties.
London, Lionel; Khan, Sebastian; Fauchon-Jones, Edward; García, Cecilio; Hannam, Mark; Husa, Sascha; Jiménez-Forteza, Xisco; Kalaghatgi, Chinmay; Ohme, Frank; Pannarale, Francesco
2018-04-20
Gravitational-wave observations of binary black holes currently rely on theoretical models that predict the dominant multipoles (ℓ=2,|m|=2) of the radiation during inspiral, merger, and ringdown. We introduce a simple method to include the subdominant multipoles to binary black hole gravitational waveforms, given a frequency-domain model for the dominant multipoles. The amplitude and phase of the original model are appropriately stretched and rescaled using post-Newtonian results (for the inspiral), perturbation theory (for the ringdown), and a smooth transition between the two. No additional tuning to numerical-relativity simulations is required. We apply a variant of this method to the nonprecessing PhenomD model. The result, PhenomHM, constitutes the first higher-multipole model of spinning and coalescing black-hole binaries, and currently includes the (ℓ,|m|)=(2,2),(3,3),(4,4),(2,1),(3,2),(4,3) radiative moments. Comparisons with numerical-relativity waveforms demonstrate that PhenomHM is more accurate than dominant-multipole-only models for all binary configurations, and typically improves the measurement of binary properties.
Ko, Gene M; Garg, Rajni; Bailey, Barbara A; Kumar, Sunil
2016-01-01
Quantitative structure-activity relationship (QSAR) models can be used as a predictive tool for virtual screening of chemical libraries to identify novel drug candidates. The aims of this paper were to report the results of a study performed for descriptor selection, QSAR model development, and virtual screening for identifying novel HIV-1 integrase inhibitor drug candidates. First, three evolutionary algorithms were compared for descriptor selection: differential evolution-binary particle swarm optimization (DE-BPSO), binary particle swarm optimization, and genetic algorithms. Next, three QSAR models were developed from an ensemble of multiple linear regression, partial least squares, and extremely randomized trees models. A comparison of the performances of the three evolutionary algorithms showed that DE-BPSO has a significant improvement over the other two algorithms. QSAR models developed in this study were used in consensus as a predictive tool for virtual screening of the NCI Open Database containing 265,242 compounds to identify potential novel HIV-1 integrase inhibitors. Six compounds were predicted to be highly active (pIC50 > 6) by each of the three models. The use of a hybrid evolutionary algorithm (DE-BPSO) for descriptor selection and QSAR model development in drug design is a novel approach. Consensus modeling may provide better predictivity by taking into account a broader range of chemical properties within the data set conducive for inhibition that may be missed by an individual model. The six compounds identified provide novel drug candidate leads in the design of next generation HIV-1 integrase inhibitors targeting drug resistant mutant viruses.
Semiparametric time varying coefficient model for matched case-crossover studies.
Ortega-Villa, Ana Maria; Kim, Inyoung; Kim, H
2017-03-15
In matched case-crossover studies, it is generally accepted that the covariates on which a case and associated controls are matched cannot exert a confounding effect on independent predictors included in the conditional logistic regression model. This is because any stratum effect is removed by the conditioning on the fixed number of sets of the case and controls in the stratum. Hence, the conditional logistic regression model is not able to detect any effects associated with the matching covariates by stratum. However, some matching covariates such as time often play an important role as an effect modification leading to incorrect statistical estimation and prediction. Therefore, we propose three approaches to evaluate effect modification by time. The first is a parametric approach, the second is a semiparametric penalized approach, and the third is a semiparametric Bayesian approach. Our parametric approach is a two-stage method, which uses conditional logistic regression in the first stage and then estimates polynomial regression in the second stage. Our semiparametric penalized and Bayesian approaches are one-stage approaches developed by using regression splines. Our semiparametric one stage approach allows us to not only detect the parametric relationship between the predictor and binary outcomes, but also evaluate nonparametric relationships between the predictor and time. We demonstrate the advantage of our semiparametric one-stage approaches using both a simulation study and an epidemiological example of a 1-4 bi-directional case-crossover study of childhood aseptic meningitis with drinking water turbidity. We also provide statistical inference for the semiparametric Bayesian approach using Bayes Factors. Copyright © 2016 John Wiley & Sons, Ltd.
Influence of landscape-scale factors in limiting brook trout populations in Pennsylvania streams
Kocovsky, P.M.; Carline, R.F.
2006-01-01
Landscapes influence the capacity of streams to produce trout through their effect on water chemistry and other factors at the reach scale. Trout abundance also fluctuates over time; thus, to thoroughly understand how spatial factors at landscape scales affect trout populations, one must assess the changes in populations over time to provide a context for interpreting the importance of spatial factors. We used data from the Pennsylvania Fish and Boat Commission's fisheries management database to investigate spatial factors that affect the capacity of streams to support brook trout Salvelinus fontinalis and to provide models useful for their management. We assessed the relative importance of spatial and temporal variation by calculating variance components and comparing relative standard errors for spatial and temporal variation. We used binary logistic regression to predict the presence of harvestable-length brook trout and multiple linear regression to assess the mechanistic links between landscapes and trout populations and to predict population density. The variance in trout density among streams was equal to or greater than the temporal variation for several streams, indicating that differences among sites affect population density. Logistic regression models correctly predicted the absence of harvestable-length brook trout in 60% of validation samples. The r²-value for the linear regression model predicting density was 0.3, indicating low predictive ability. Both logistic and linear regression models supported buffering capacity against acid episodes as an important mechanistic link between landscapes and trout populations. Although our models fail to predict trout densities precisely, their success at elucidating the mechanistic links between landscapes and trout populations, in concert with the importance of spatial variation, increases our understanding of factors affecting brook trout abundance and will help managers and private groups to protect and enhance populations of wild brook trout. © Copyright by the American Fisheries Society 2006.
Pitcher, Brandon; Alaqla, Ali; Noujeim, Marcel; Wealleans, James A; Kotsakis, Georgios; Chrepa, Vanessa
2017-03-01
Cone-beam computed tomographic (CBCT) analysis allows for 3-dimensional assessment of periradicular lesions and may facilitate preoperative periapical cyst screening. The purpose of this study was to develop and assess the predictive validity of a cyst screening method based on CBCT volumetric analysis alone or combined with designated radiologic criteria. Three independent examiners evaluated 118 presurgical CBCT scans from cases that underwent apicoectomies and had an accompanying gold standard histopathological diagnosis of either a cyst or granuloma. Lesion volume, density, and specific radiologic characteristics were assessed using specialized software. Logistic regression models with histopathological diagnosis as the dependent variable were constructed for cyst prediction, and receiver operating characteristic curves were used to assess the predictive validity of the models. A conditional inference binary decision tree based on a recursive partitioning algorithm was constructed to facilitate preoperative screening. Interobserver agreement was excellent for volume and density, but it varied from poor to good for the radiologic criteria. Volume and root displacement were strong predictors for cyst screening in all analyses. The binary decision tree classifier determined that if the volume of the lesion was >247 mm³, there was 80% probability of a cyst. If volume was <247 mm³ and root displacement was present, cyst probability was 60% (78% accuracy). The good accuracy and high specificity of the decision tree classifier renders it a useful preoperative cyst screening tool that can aid in clinical decision making but not a substitute for definitive histopathological diagnosis after biopsy. Confirmatory studies are required to validate the present findings. Published by Elsevier Inc.
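The reported decision rule can be written down directly as a screening function; the 80% and 60% probabilities come from the abstract, while the residual probability for small lesions without root displacement is an assumed placeholder.

```python
# Transcription of the reported CBCT screening rule (screening aid, not a diagnosis).
def cyst_probability(volume_mm3: float, root_displacement: bool) -> float:
    """Approximate periapical-cyst probability from CBCT findings."""
    if volume_mm3 > 247:          # large lesion: ~80% cyst probability (reported)
        return 0.80
    if root_displacement:         # small lesion with root displacement: ~60% (reported)
        return 0.60
    return 0.20                   # assumed low residual probability; not stated in the abstract


print(cyst_probability(310.0, root_displacement=False))   # 0.8
print(cyst_probability(120.0, root_displacement=True))    # 0.6
```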
Shimizu, Ken; Nakaya, Naoki; Saito-Nakaya, Kumi; Akechi, Tatsuo; Ogawa, Asao; Fujisawa, Daisuke; Sone, Toshimasa; Yoshiuchi, Kazuhiro; Goto, Koichi; Iwasaki, Motoki; Tsugane, Shoichiro; Uchitomi, Yosuke
2015-05-01
Although various factors are thought to be correlated with anxiety in cancer patients, the relative importance of each factor is unknown. We tested our hypothesis that personality traits and coping styles explain anxiety in lung cancer patients to a greater extent than other factors. A total of 1334 consecutively recruited lung cancer patients were selected, and data on cancer-related variables, demographic characteristics, health behaviors, physical symptoms and psychological factors consisting of personality traits and coping styles were obtained. The participants were divided into groups with or without significant anxiety using the Hospital Anxiety and Depression Scale-Anxiety, and a binary logistic regression analysis was used to identify factors correlated with significant anxiety using a multivariate model. Among the recruited patients, 440 (33.0%) had significant anxiety. The binary logistic regression analysis revealed a coefficient of determination (overall R²) of 39.0%, and the explanation for psychological factors was much higher (30.7%) than those for cancer-related variables (1.1%), demographic characteristics (2.1%), health behaviors (0.8%) and physical symptoms (4.3%). Four specific factors remained significant in a multivariate model. A neurotic personality trait, a coping style of helplessness/hopelessness, and female sex were positively correlated with significant anxiety, while a coping style of fatalism was negatively correlated. Our hypothesis was supported, and anxiety was strongly linked with personality trait and coping style. As a clinical implication, the use of screening instruments to identify these factors and intervention for psychological crisis may be needed. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
Stein, Marco; Misselwitz, Björn; Hamann, Gerhard F; Kolodziej, Malgorzata; Reinges, Marcus H T; Uhl, Eberhard
2016-04-01
Pre-treatment with antiplatelet agents is described to be a risk factor for mortality after spontaneous intracerebral hemorrhage (ICH). However, the impact of antithrombotic agents on mortality in patients who undergo hematoma evacuation compared to conservatively treated patients with ICH remains controversial. This analysis is based on a prospective registry for quality assurance in stroke care in the State of Hesse, Germany. Patients' data were collected between January 2008 and December 2012. Only patients with the diagnosis of spontaneous ICH were included (International Classification of Diseases 10th Revision codes I61.0-I61.9). Predictors of in-hospital mortality were determined by univariate analysis. Predictors with P<0.1 were included in a binary logistic regression model. The binary logistic regression model was adjusted for age, initial Glasgow Coma Score (GCS), the presence of intraventricular hemorrhage (IVH), and pre-ICH disability prior to ictus. In 8,421 patients with spontaneous ICH, pre-treatment with oral anticoagulants or antiplatelet agents was documented in 16.3% and 25.1%, respectively. Overall in-hospital mortality was 23.2%. In-hospital mortality was decreased in operatively treated patients compared to conservatively treated patients (11.6% versus 24.0%; P<0.001). Patients with antiplatelet pre-treatment had a significantly higher risk of death during the hospital stay after hematoma evacuation (odds ratio [OR]: 2.5; 95% confidence interval [CI]: 1.24-4.97; P=0.010) compared to patients without antiplatelet pre-treatment (OR: 0.9; 95% CI: 0.79-1.09; P=0.376). In conclusion, a higher rate of in-hospital mortality after pre-treatment with antiplatelet agents in combination with hematoma evacuation after spontaneous ICH was observed in the presented cohort. Copyright © 2015 Elsevier Ltd. All rights reserved.
Determinants of preventive oral health behaviour among senior dental students in Nigeria
2013-01-01
Background To study the association between oral health behaviour of senior dental students in Nigeria and their gender, age, knowledge of preventive care, and attitudes towards preventive dentistry. Methods Questionnaires were administered to 179 senior dental students in the six dental schools in Nigeria. The questionnaire obtained information on age, gender, oral self-care, knowledge of preventive dental care and attitudes towards preventive dentistry. Attending a dental clinic for check-up by a dentist or a classmate within the last year was defined as preventive care use. Students who performed oral self-care and attended dental clinic for check-ups were noted to have complied with recommended oral self-care. Chi-square test and binary logistic regression models were used for statistical analyses. Results More male respondents agreed that the use of fluoride toothpaste was more important than the tooth brushing technique for caries prevention (P < 0.001). While the use of dental floss was very low (7.3%), females were more likely to report using dental floss (p=0.03). Older students were also more likely to comply with recommended oral self-care (p<0.001). In binary regression models, respondents who were younger (p=0.04) and those with higher knowledge of preventive dental care (p=0.008) were more likely to consume sugary snacks less than once a day. Conclusion Gender differences in the awareness of the superiority of using fluoridated toothpaste over brushing in caries prevention; and in the use of dental floss were observed. While older students were more likely to comply with recommended oral self-care measures, younger students with good knowledge of preventive dental care were more likely to consume sugary snacks less than once a day. PMID:23777298
Determinants of preventive oral health behaviour among senior dental students in Nigeria.
Folayan, Morenike O; Khami, Mohammad R; Folaranmi, Nkiru; Popoola, Bamidele O; Sofola, Oyinkan O; Ligali, Taofeek O; Esan, Ayodeji O; Orenuga, Omolola O
2013-06-18
To study the association between oral health behaviour of senior dental students in Nigeria and their gender, age, knowledge of preventive care, and attitudes towards preventive dentistry. Questionnaires were administered to 179 senior dental students in the six dental schools in Nigeria. The questionnaire obtained information on age, gender, oral self-care, knowledge of preventive dental care and attitudes towards preventive dentistry. Attending a dental clinic for check-up by a dentist or a classmate within the last year was defined as preventive care use. Students who performed oral self-care and attended dental clinic for check-ups were noted to have complied with recommended oral self-care. Chi-square test and binary logistic regression models were used for statistical analyses. More male respondents agreed that the use of fluoride toothpaste was more important than the tooth brushing technique for caries prevention (P < 0.001). While the use of dental floss was very low (7.3%), females were more likely to report using dental floss (p=0.03). Older students were also more likely to comply with recommended oral self-care (p<0.001). In binary regression models, respondents who were younger (p=0.04) and those with higher knowledge of preventive dental care (p=0.008) were more likely to consume sugary snacks less than once a day. Gender differences in the awareness of the superiority of using fluoridated toothpaste over brushing in caries prevention; and in the use of dental floss were observed. While older students were more likely to comply with recommended oral self-care measures, younger students with good knowledge of preventive dental care were more likely to consume sugary snacks less than once a day.
Emission-line diagnostics of nearby H II regions including interacting binary populations
NASA Astrophysics Data System (ADS)
Xiao, Lin; Stanway, Elizabeth R.; Eldridge, J. J.
2018-06-01
We present numerical models of the nebular emission from H II regions around young stellar populations over a range of compositions and ages. The synthetic stellar populations include both single stars and interacting binary stars. We compare these models to the observed emission lines of 254 H II regions of 13 nearby spiral galaxies and 21 dwarf galaxies drawn from archival data. The models are created using the combination of the BPASS (Binary Population and Spectral Synthesis) code with the photoionization code CLOUDY to study the differences caused by the inclusion of interacting binary stars in the stellar population. We obtain agreement with the observed emission line ratios from the nearby star-forming regions and discuss the effect of binary-star evolution pathways on the nebular ionization of H II regions. We find that at population ages above 10 Myr, single-star models rapidly decrease in flux and ionization strength, while binary-star models still produce strong flux and high [O III]/H β ratios. Our models can reproduce the metallicity of H II regions from spiral galaxies, but we find higher metallicities than previously estimated for the H II regions from dwarf galaxies. Comparing the equivalent width of H β emission between models and observations, we find that accounting for ionizing photon leakage can affect age estimates for H II regions. When it is included, the typical age derived for H II regions is 5 Myr from single-star models, and up to 10 Myr with binary-star models. This is due to the existence of binary-star evolution pathways, which produce more hot Wolf-Rayet and helium stars at older ages. For future reference, we calculate new BPASS binary maximal starburst lines as a function of metallicity, and for the total model population, and present these in Appendix A.
ERIC Educational Resources Information Center
Hidalgo, Mª Dolores; Gómez-Benito, Juana; Zumbo, Bruno D.
2014-01-01
The authors analyze the effectiveness of the R² and delta log odds ratio effect size measures when using logistic regression analysis to detect differential item functioning (DIF) in dichotomous items. A simulation study was carried out, and the Type I error rate and power estimates under conditions in which only statistical testing…
Formation and Evolution of X-ray Binaries
NASA Astrophysics Data System (ADS)
Fragkos, Anastasios
X-ray binaries - mass-transferring binary stellar systems with compact object accretors - are unique astrophysical laboratories. They carry information about many complex physical processes such as star formation, compact object formation, and evolution of interacting binaries. My thesis work involves the study of the formation and evolution of Galactic and extra-galactic X-ray binaries using both detailed and realistic simulation tools, and population synthesis techniques. I applied an innovative analysis method that allows the reconstruction of the full evolutionary history of known black hole X-ray binaries back to the time of compact object formation. This analysis takes into account all the available observationally determined properties of a system, and models in detail four of its evolutionary phases: mass transfer through the ongoing X-ray phase, tidal evolution before the onset of Roche-lobe overflow, motion through the Galactic potential after the formation of the black hole, and binary orbital dynamics at the time of core collapse. Motivated by deep extra-galactic Chandra survey observations, I worked on population synthesis models of low-mass X-ray binaries in the two elliptical galaxies NGC3379 and NGC4278. These simulations were targeted at understanding the origin of the shape and normalization of the observed X-ray luminosity functions. In a follow-up study, I proposed a physically motivated prescription for the modeling of transient neutron star low-mass X-ray binary properties, such as duty cycle, outburst duration and recurrence time. This prescription enabled the direct comparison of transient low-mass X-ray binary population synthesis models to the Chandra X-ray survey of the two ellipticals NGC3379 and NGC4278. Finally, I worked on population synthesis models of black hole X-ray binaries in the Milky Way. This work was motivated by recent developments in observational techniques for the measurement of black hole spin magnitudes in black hole X-ray binaries. The accuracy of these techniques depends on the misalignment of the black hole spin with respect to the orbital angular momentum. In black hole X-ray binaries, this misalignment can occur during the supernova explosion that forms the compact object. In this study, I presented population synthesis models of Galactic black hole X-ray binaries, and examined the distribution of misalignment angles and its dependence on the model parameters.
Use of antidementia drugs in frontotemporal lobar degeneration.
López-Pousa, Secundino; Calvó-Perxas, Laia; Lejarreta, Saioa; Cullell, Marta; Meléndez, Rosa; Hernández, Erélido; Bisbe, Josep; Perkal, Héctor; Manzano, Anna; Roig, Anna Maria; Turró-Garriga, Oriol; Vilalta-Franch, Joan; Garre-Olmo, Josep
2012-06-01
Clinical evidence indicates that acetylcholinesterase inhibitors (AChEIs) are not efficacious to treat frontotemporal lobar degeneration (FTLD). The British Association for Psychopharmacology recommends avoiding the use of AChEI and memantine in patients with FTLD. Cross-sectional design using 1092 cases with Alzheimer's disease (AD) and 64 cases with FTLD registered by the Registry of Dementias of Girona. Bivariate analyses were performed, and binary logistic regressions were used to detect variables associated with antidementia drugs consumption. The AChEIs were consumed by 57.6% and 42.2% of the patients with AD and FTLD, respectively. Memantine was used by 17.2% and 10.9% of patients with AD and FTLD, respectively. Binary logistic regressions yielded no associations with antidementia drugs consumption. There is a discrepancy regarding clinical practice and the recommendations based upon clinical evidence. The increased central nervous system drug use detected in FTLD requires multicentric studies aiming at finding the best means to treat these patients.
Doubly Robust Additive Hazards Models to Estimate Effects of a Continuous Exposure on Survival.
Wang, Yan; Lee, Mihye; Liu, Pengfei; Shi, Liuhua; Yu, Zhi; Abu Awad, Yara; Zanobetti, Antonella; Schwartz, Joel D
2017-11-01
The effect of an exposure on survival can be biased when the regression model is misspecified. Hazard difference is easier to use in risk assessment than hazard ratio and has a clearer interpretation in the assessment of effect modifications. We proposed two doubly robust additive hazards models to estimate the causal hazard difference of a continuous exposure on survival. The first model is an inverse probability-weighted additive hazards regression. The second model is an extension of the doubly robust estimator for binary exposures by categorizing the continuous exposure. We compared these with the marginal structural model and outcome regression with correct and incorrect model specifications using simulations. We applied doubly robust additive hazard models to the estimation of hazard difference of long-term exposure to PM2.5 (particulate matter with an aerodynamic diameter less than or equal to 2.5 microns) on survival using a large cohort of 13 million older adults residing in seven states of the Southeastern United States. We showed that the proposed approaches are doubly robust. We found that each 1 μg/m³ increase in annual PM2.5 exposure was associated with a causal hazard difference in mortality of 8.0 × 10 (95% confidence interval 7.4 × 10, 8.7 × 10), which was modified by age, medical history, socioeconomic status, and urbanicity. The overall hazard difference translates to approximately 5.5 (5.1, 6.0) thousand deaths per year in the study population. The proposed approaches improve the robustness of the additive hazards model and produce a novel additive causal estimate of PM2.5 on survival and several additive effect modifications, including social inequality.
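As one concrete piece of the first approach, the sketch below computes stabilized inverse-probability weights for a continuous exposure using a normal-density generalized propensity score estimated by ordinary least squares. The additive hazards fit itself and the second (categorized doubly robust) estimator are not reproduced, and the confounder structure is simulated.

```python
# Sketch: stabilized inverse-probability weights for a continuous exposure.
import numpy as np
import statsmodels.api as sm
from scipy.stats import norm

rng = np.random.default_rng(9)
n = 2000
z = rng.normal(size=(n, 3))                          # simulated confounders
pm25 = 10 + z @ np.array([1.0, -0.5, 0.3]) + rng.normal(0, 2, n)

# Generalized propensity score: exposure | confounders modeled as normal.
exposure_model = sm.OLS(pm25, sm.add_constant(z)).fit()
mu = exposure_model.fittedvalues
sigma = np.sqrt(exposure_model.scale)

dens_cond = norm.pdf(pm25, loc=mu, scale=sigma)      # f(exposure | confounders)
dens_marg = norm.pdf(pm25, loc=pm25.mean(), scale=pm25.std())
weights = dens_marg / dens_cond                      # stabilized IP weights

print("weight summary (min, mean, max):",
      np.round([weights.min(), weights.mean(), weights.max()], 2))
```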
A nonparametric multiple imputation approach for missing categorical data.
Zhou, Muhan; He, Yulei; Yu, Mandi; Hsu, Chiu-Hsieh
2017-06-06
Incomplete categorical variables with more than two categories are common in public health data. However, most of the existing missing-data methods do not use the information from nonresponse (missingness) probabilities. We propose a nearest-neighbour multiple imputation approach to impute a missing-at-random categorical outcome and to estimate the proportion of each category. The donor set for imputation is formed by measuring distances between each missing value and the non-missing values. The distance function is calculated based on a predictive score, which is derived from two working models: one fits a multinomial logistic regression for predicting the missing categorical outcome (the outcome model) and the other fits a logistic regression for predicting missingness probabilities (the missingness model). A weighting scheme is used to accommodate contributions from two working models when generating the predictive score. A missing value is imputed by randomly selecting one of the non-missing values with the smallest distances. We conduct a simulation to evaluate the performance of the proposed method and compare it with several alternative methods. A real-data application is also presented. The simulation study suggests that the proposed method performs well when missingness probabilities are not extreme under some misspecifications of the working models. However, the calibration estimator, which is also based on two working models, can be highly unstable when missingness probabilities for some observations are extremely high. In this scenario, the proposed method produces more stable and better estimates. In addition, proper weights need to be chosen to balance the contributions from the two working models and achieve optimal results for the proposed method. We conclude that the proposed multiple imputation method is a reasonable approach to dealing with missing categorical outcome data with more than two levels for assessing the distribution of the outcome. In terms of the choices for the working models, we suggest a multinomial logistic regression for predicting the missing outcome and a binary logistic regression for predicting the missingness probability.
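A compact sketch of the imputation idea described in this abstract is given below: two working models yield a weighted predictive score, distances on that score define a donor set, and each missing value is filled by a random draw from its nearest donors. The column names, the weight `w`, the scalar score summary, and the donor-set size `k` are illustrative choices, not the authors' implementation.

```python
# Sketch of nearest-neighbour imputation driven by two working models.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

def nn_impute_once(df, outcome, covariates, k=10, w=0.5, rng=None):
    rng = rng or np.random.default_rng()
    obs = df[outcome].notna()
    X = df[covariates].to_numpy()

    # Working model 1: logistic regression for the outcome
    # (fits a multinomial model when the outcome has >2 categories).
    out_model = LogisticRegression(max_iter=1000)
    out_model.fit(X[obs], df.loc[obs, outcome])
    # Crude scalar summary of the predicted category probabilities.
    score_out = out_model.predict_proba(X) @ np.arange(out_model.classes_.size)

    # Working model 2: logistic regression for the missingness indicator.
    miss_model = LogisticRegression(max_iter=1000)
    miss_model.fit(X, (~obs).astype(int))
    score_miss = miss_model.predict_proba(X)[:, 1]

    # Weighted predictive score combining the two working models.
    score = w * score_out + (1 - w) * score_miss

    imputed = df[outcome].copy()
    donor_idx = np.where(obs)[0]
    for i in np.where(~obs)[0]:
        d = np.abs(score[donor_idx] - score[i])
        donors = donor_idx[np.argsort(d)[:k]]          # k nearest donors
        imputed.iloc[i] = df[outcome].iloc[rng.choice(donors)]
    return imputed

# Multiple imputation: repeat nn_impute_once() M times and combine the
# estimated category proportions with Rubin's rules.
```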
Taylor, Stephen R; Simon, Joseph; Sampson, Laura
2017-05-05
We introduce a technique for gravitational-wave analysis, where Gaussian process regression is used to emulate the strain spectrum of a stochastic background by training on population-synthesis simulations. This leads to direct Bayesian inference on astrophysical parameters. For pulsar timing arrays specifically, we interpolate over the parameter space of supermassive black-hole binary environments, including three-body stellar scattering, and evolving orbital eccentricity. We illustrate our approach on mock data, and assess the prospects for inference with data similar to the NANOGrav 9-yr data release.
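The emulation idea can be sketched as follows: train Gaussian-process regressors on simulator outputs so the strain spectrum at new astrophysical parameters can be interpolated cheaply. The stand-in simulator, parameter ranges, and kernel choices below are illustrative assumptions, not the authors' pipeline.

```python
# Minimal sketch of GP emulation of a simulated strain spectrum.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

def run_population_synthesis(theta, n_freq=20):
    """Stand-in simulator: log10 characteristic strain at fixed frequencies
    for parameters theta = (eccentricity, stellar density)."""
    ecc0, rho_star = theta
    f = np.logspace(-9, -7, n_freq)
    return np.log10(1e-15 * (f / 1e-8) ** (-2 / 3) * (1 + ecc0) / (1 + rho_star))

# Training design: random draws over the parameter space.
rng = np.random.default_rng(1)
thetas = rng.uniform([0.0, 0.1], [0.9, 10.0], size=(50, 2))
spectra = np.array([run_population_synthesis(t) for t in thetas])

# One GP per frequency bin (a simple alternative to a multi-output GP).
kernel = ConstantKernel(1.0) * RBF(length_scale=[0.3, 3.0])
emulators = [GaussianProcessRegressor(kernel=kernel, normalize_y=True)
             .fit(thetas, spectra[:, j]) for j in range(spectra.shape[1])]

# Emulated spectrum (with uncertainty) at a new parameter point.
theta_new = np.array([[0.5, 2.0]])
pred = np.array([gp.predict(theta_new, return_std=True) for gp in emulators])
mean, std = pred[:, 0].ravel(), pred[:, 1].ravel()
```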
Covariate Imbalance and Adjustment for Logistic Regression Analysis of Clinical Trial Data
Ciolino, Jody D.; Martin, Reneé H.; Zhao, Wenle; Jauch, Edward C.; Hill, Michael D.; Palesch, Yuko Y.
2014-01-01
In logistic regression analysis for binary clinical trial data, adjusted treatment effect estimates are often not equivalent to unadjusted estimates in the presence of influential covariates. This paper uses simulation to quantify the benefit of covariate adjustment in logistic regression. However, International Conference on Harmonization guidelines suggest that covariate adjustment be pre-specified; unplanned adjusted analyses should be considered secondary. Results suggest that if adjustment is not possible or unplanned in a logistic setting, balance in continuous covariates can alleviate some (but never all) of the shortcomings of unadjusted analyses. The case of log binomial regression is also explored. PMID:24138438
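A small simulation along the lines discussed above illustrates the non-collapsibility issue: with an influential covariate, the adjusted logistic treatment effect differs from the unadjusted one even under perfect randomization. Effect sizes and sample size are arbitrary illustrative choices.

```python
# Adjusted vs. unadjusted logistic treatment effect under randomization.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n = 20000
df = pd.DataFrame({
    "treat": rng.integers(0, 2, n),          # 1:1 randomization
    "x": rng.normal(0, 1, n),                # influential baseline covariate
})
logit_p = -0.5 + 0.6 * df["treat"] + 1.5 * df["x"]
df["y"] = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

unadj = smf.logit("y ~ treat", data=df).fit(disp=0)
adj = smf.logit("y ~ treat + x", data=df).fit(disp=0)

print("unadjusted log-OR:", round(unadj.params["treat"], 3))
print("adjusted   log-OR:", round(adj.params["treat"], 3))   # closer to 0.6
```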
NASA Astrophysics Data System (ADS)
Naguib, Ibrahim A.; Darwish, Hany W.
2012-02-01
A comparison between support vector regression (SVR) and artificial neural network (ANN) multivariate regression methods is established, showing the underlying algorithm for each and comparing them to indicate their inherent advantages and limitations. In this paper we compare SVR to ANN with and without a variable selection procedure (genetic algorithm (GA)). To ground the comparison, the methods are used for the stability-indicating quantitative analysis of mixtures of mebeverine hydrochloride and sulpiride in binary mixtures as a case study, in the presence of their reported impurities and degradation products (summing up to 6 components) in raw materials and pharmaceutical dosage form, via handling the UV spectral data. For proper analysis, a 6-factor 5-level experimental design was established, resulting in a training set of 25 mixtures containing different ratios of the interfering species. An independent test set consisting of 5 mixtures was used to validate the prediction ability of the suggested models. The proposed methods (linear SVR (without GA) and linear GA-ANN) were successfully applied to the analysis of pharmaceutical tablets containing mebeverine hydrochloride and sulpiride mixtures. The results illustrate the problem of nonlinearity and how models like SVR and ANN can handle it. The methods indicate the ability of the mentioned multivariate calibration models to deconvolute the highly overlapped UV spectra of the 6-component mixtures, while using cheap and easy-to-handle instruments like the UV spectrophotometer.
Mesoscopic model for binary fluids
NASA Astrophysics Data System (ADS)
Echeverria, C.; Tucci, K.; Alvarez-Llamoza, O.; Orozco-Guillén, E. E.; Morales, M.; Cosenza, M. G.
2017-10-01
We propose a model for studying binary fluids based on the mesoscopic molecular simulation technique known as multiparticle collision, where the space and state variables are continuous, and time is discrete. We include a repulsion rule to simulate segregation processes that does not require calculation of the interaction forces between particles, so binary fluids can be described on a mesoscopic scale. The model is conceptually simple and computationally efficient; it maintains Galilean invariance and conserves the mass and energy in the system at the micro- and macro-scale, whereas momentum is conserved globally. For a wide range of temperatures and densities, the model yields results in good agreement with the known properties of binary fluids, such as the density profile, interface width, phase separation, and phase growth. We also apply the model to the study of binary fluids in crowded environments with consistent results.
Using Model Point Spread Functions to Identify Binary Brown Dwarf Systems
NASA Astrophysics Data System (ADS)
Matt, Kyle; Stephens, Denise C.; Lunsford, Leanne T.
2017-01-01
A brown dwarf (BD) is a celestial object that is not massive enough to undergo hydrogen fusion in its core. BDs can form in pairs called binaries. Due to the great distances between Earth and these BDs, they act as point sources of light, and the angular separation between binary BDs can be small enough that they appear as a single, unresolved object in images, according to the Rayleigh criterion. It is not currently possible to resolve some of these objects into separate light sources. Stephens and Noll (2006) developed a method that used model point spread functions (PSFs) to identify binary Trans-Neptunian Objects; we will use this method to identify binary BD systems in the Hubble Space Telescope archive. This method works by comparing model PSFs of single and binary sources to the observed PSFs. We also use a method to compare model spectral data for single and binary fits to determine the best parameter values for each component of the system. We describe these methods, their challenges, and other possible uses in this poster.
Modeling the binary circumstellar medium of Type IIb/L/n supernova progenitors
NASA Astrophysics Data System (ADS)
Kolb, Christopher; Blondin, John; Borkowski, Kazik; Reynolds, Stephen
2018-01-01
Circumstellar interaction in close binary systems can produce a highly asymmetric environment, particularly for systems with a mass outflow velocity comparable to the binary orbital speed. This asymmetric circumstellar medium (CSM) becomes visible after a supernova explosion, when SN radiation illuminates the gas and when SN ejecta collide with the CSM. We aim to better understand the development of this asymmetric CSM, particularly for binary systems containing a red supergiant progenitor, and to study its impact on supernova morphology. To achieve this, we model the asymmetric wind and subsequent supernova explosion in full 3D hydrodynamics using the shock-capturing hydro code VH-1 on a spherical yin-yang grid. Wind interaction is computed in a frame co-rotating with the binary system, and gas is accelerated using a radiation pressure-driven wind model where optical depth of the radiative force is dependent on azimuthally-averaged gas density. We present characterization of our asymmetric wind density distribution model by fitting a polar-to-equatorial density contrast function to free parameters such as binary separation distance, primary mass loss rate, and binary mass ratio.
Optimizing methods for linking cinematic features to fMRI data.
Kauttonen, Janne; Hlushchuk, Yevhen; Tikka, Pia
2015-04-15
One of the challenges of naturalistic neurosciences using movie-viewing experiments is how to interpret observed brain activations in relation to the multiplicity of time-locked stimulus features. As previous studies have shown less inter-subject synchronization across viewers of random video footage than story-driven films, new methods need to be developed for the analysis of less story-driven contents. To optimize the linkage between our fMRI data, collected during viewing of the deliberately non-narrative silent film 'At Land' by Maya Deren (1944), and its annotated content, we combined the method of elastic-net regularization with model-driven linear regression and the well-established data-driven independent component analysis (ICA) and inter-subject correlation (ISC) methods. In the linear regression analysis, both IC and region-of-interest (ROI) time-series were fitted with time-series of a total of 36 binary-valued and one real-valued tactile annotation of film features. The elastic-net regularization and cross-validation were applied in the ordinary least-squares linear regression in order to avoid over-fitting due to the multicollinearity of regressors; the results were compared against both the partial least-squares (PLS) regression and the un-regularized full-model regression. A non-parametric permutation testing scheme was applied to evaluate the statistical significance of the regression. We found statistically significant correlation between the annotation model and 9 ICs out of 40 ICs. Regression analysis was also repeated for a large set of cubic ROIs covering the grey matter. Both IC- and ROI-based regression analyses revealed activations in parietal and occipital regions, with additional smaller clusters in the frontal lobe. Furthermore, we found elastic-net based regression more sensitive than PLS and un-regularized regression, since it detected a larger number of significant ICs and ROIs. Along with the ISC ranking methods, our regression analysis proved a feasible method for ordering the ICs based on their functional relevance to the annotated cinematic features. The novelty of our method lies - in comparison to the hypothesis-driven manual pre-selection and observation of some individual regressors biased by choice - in applying a data-driven approach to all content features simultaneously. We found especially the combination of regularized regression and ICA useful when analyzing fMRI data obtained using a non-narrative movie stimulus with a large set of complex and correlated features. Copyright © 2015. Published by Elsevier Inc.
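The regularized-regression step can be sketched roughly as below: an ROI (or IC) time series is fitted on a matrix of annotation regressors with cross-validated elastic-net penalties, and significance is assessed with a permutation test on R². The synthetic data and the simple time-shuffling null (which ignores temporal autocorrelation) are simplifying assumptions, not the authors' exact scheme.

```python
# Elastic-net regression of an ROI time series on annotated stimulus features,
# with a permutation test on R^2.
import numpy as np
from sklearn.linear_model import ElasticNetCV
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
n_vols, n_feats = 400, 37                    # time points x annotation features
X = rng.binomial(1, 0.2, (n_vols, n_feats)).astype(float)    # binary annotations
y = X[:, :3] @ np.array([1.0, -0.8, 0.5]) + rng.normal(0, 1, n_vols)  # ROI signal

enet = ElasticNetCV(l1_ratio=[0.1, 0.5, 0.9], cv=5).fit(X, y)
r2_obs = r2_score(y, enet.predict(X))

# Permutation test: refit on shuffled responses to build a null for R^2.
n_perm = 200
r2_null = np.empty(n_perm)
for b in range(n_perm):
    y_perm = rng.permutation(y)
    fit_perm = ElasticNetCV(l1_ratio=0.5, cv=5).fit(X, y_perm)
    r2_null[b] = r2_score(y_perm, fit_perm.predict(X))
p_value = (np.sum(r2_null >= r2_obs) + 1) / (n_perm + 1)
print(f"R^2 = {r2_obs:.3f}, permutation p = {p_value:.3f}")
```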
Binary Black Hole Mergers from Globular Clusters: Implications for Advanced LIGO.
Rodriguez, Carl L; Morscher, Meagan; Pattabiraman, Bharath; Chatterjee, Sourav; Haster, Carl-Johan; Rasio, Frederic A
2015-07-31
The predicted rate of binary black hole mergers from galactic fields can vary over several orders of magnitude and is extremely sensitive to the assumptions of stellar evolution. But in dense stellar environments such as globular clusters, binary black holes form by well-understood gravitational interactions. In this Letter, we study the formation of black hole binaries in an extensive collection of realistic globular cluster models. By comparing these models to observed Milky Way and extragalactic globular clusters, we find that the mergers of dynamically formed binaries could be detected at a rate of ∼100 per year, potentially dominating the binary black hole merger rate. We also find that a majority of cluster-formed binaries are more massive than their field-formed counterparts, suggesting that Advanced LIGO could identify certain binaries as originating from dense stellar environments.
Synthetic Survey of the Kepler Field
NASA Astrophysics Data System (ADS)
Wells, Mark; Prša, Andrej
2018-01-01
In the era of large scale surveys, including LSST and Gaia, binary population studies will flourish due to the large influx of data. In addition to probing binary populations as a function of galactic latitude, under-sampled groups such as low mass binaries will be observed at an unprecedented rate. To prepare for these missions, binary population simulations need to be carried out at high fidelity. These simulations will enable the creation of simulated data and, through comparison with real data, will allow the underlying binary parameter distributions to be explored. In order for the simulations to be considered robust, they should reproduce observed distributions accurately. To this end we have developed a simulator which takes input models and creates a synthetic population of eclipsing binaries. Starting from a galactic single star model, implemented using Galaxia, a code by Sharma et al. (2011), and applying observed multiplicity, mass-ratio, period, and eccentricity distributions, as reported by Raghavan et al. (2010), Duchêne & Kraus (2013), and Moe & Di Stefano (2017), we are able to generate synthetic binary surveys that correspond to any survey cadences. In order to calibrate our input models we compare the results of our synthesized eclipsing binary survey to the Kepler Eclipsing Binary catalog.
NASA Astrophysics Data System (ADS)
Lei, Zhenxin; Zhao, Gang; Zeng, Aihua; Shen, Lihua; Lan, Zhongjian; Jiang, Dengkai; Han, Zhanwen
2016-12-01
Employing a tidally enhanced stellar wind, we studied the effects of metallicity, mass ratio of primary to secondary, tidal enhancement efficiency, and helium abundance on the formation of blue hook (BHk) stars in binaries in globular clusters (GCs). A total of 28 sets of binary models combined with different input parameters are studied. For each set of binary models, we presented the range of initial orbital periods needed to produce BHk stars in binaries. All the binary models could produce BHk stars within different ranges of initial orbital periods. We also compared our results with observations in the Teff-log g diagram of the GCs NGC 2808 and ω Cen. Most of the BHk stars in these two GCs lie well within the region predicted by our theoretical models, especially when C/N-enhanced model atmospheres are considered. We found that the mass ratio of primary to secondary and the tidal enhancement efficiency have little effect on the formation of BHk stars in binaries, while metallicity and helium abundance play important roles, especially helium abundance. Specifically, as the helium abundance increases in the binary models, the range of initial orbital periods needed to produce BHk stars becomes considerably wider, regardless of the other input parameters adopted. Our results are discussed in the context of recent observations and other theoretical models.
Are Binary Separations related to their System Mass?
NASA Astrophysics Data System (ADS)
Sterzik, M. F.; Durisen, R. H.
2004-08-01
We compile the most recent multiplicity fractions and binary separation distributions for different primary masses, including very low-mass and brown dwarf primaries, and compare them with dynamical decay models of small-N clusters. The model predictions are based on detailed numerical calculations of the internal cluster dynamics, as well as on Monte Carlo methods. Both observations and models reflect the same trends: (1) The multiplicity fraction is an increasing function of the primary mass. (2) The mean binary separations increase with the system mass, in the sense that very low-mass binaries have average separations around ≈ 4 AU, while the binary separation distribution for solar-type primaries peaks at ≈ 40 AU. M-type binary systems apparently preferentially populate intermediate separations. A similar specific energy at the time of cluster formation for all cluster masses can possibly explain this trend.
Accuracy of binary black hole waveform models for aligned-spin binaries
NASA Astrophysics Data System (ADS)
Kumar, Prayush; Chu, Tony; Fong, Heather; Pfeiffer, Harald P.; Boyle, Michael; Hemberger, Daniel A.; Kidder, Lawrence E.; Scheel, Mark A.; Szilagyi, Bela
2016-05-01
Coalescing binary black holes are among the primary science targets for second generation ground-based gravitational wave detectors. Reliable gravitational waveform models are central to detection of such systems and subsequent parameter estimation. This paper performs a comprehensive analysis of the accuracy of recent waveform models for binary black holes with aligned spins, utilizing a new set of 84 high-accuracy numerical relativity simulations. Our analysis covers comparable mass binaries (mass-ratio 1 ≤q ≤3 ), and samples independently both black hole spins up to a dimensionless spin magnitude of 0.9 for equal-mass binaries and 0.85 for unequal mass binaries. Furthermore, we focus on the high-mass regime (total mass ≳50 M⊙ ). The two most recent waveform models considered (PhenomD and SEOBNRv2) both perform very well for signal detection, losing less than 0.5% of the recoverable signal-to-noise ratio ρ , except that SEOBNRv2's efficiency drops slightly for both black hole spins aligned at large magnitude. For parameter estimation, modeling inaccuracies of the SEOBNRv2 model are found to be smaller than systematic uncertainties for moderately strong GW events up to roughly ρ ≲15 . PhenomD's modeling errors are found to be smaller than SEOBNRv2's, and are generally irrelevant for ρ ≲20 . Both models' accuracy deteriorates with increased mass ratio, and when at least one black hole spin is large and aligned. The SEOBNRv2 model shows a pronounced disagreement with the numerical relativity simulation in the merger phase, for unequal masses and simultaneously both black hole spins very large and aligned. Two older waveform models (PhenomC and SEOBNRv1) are found to be distinctly less accurate than the more recent PhenomD and SEOBNRv2 models. Finally, we quantify the bias expected from all four waveform models during parameter estimation for several recovered binary parameters: chirp mass, mass ratio, and effective spin.
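The kind of faithfulness comparison described above rests on a noise-weighted match between a model waveform and an NR waveform. The sketch below computes such a match maximized over relative time shift only (a full match would also maximize over coalescence phase), using a flat PSD and toy chirps as stand-ins for the Advanced LIGO noise curve and the actual waveforms.

```python
# Noise-weighted overlap (match) between two waveforms, maximized over a
# cyclic time shift, with a flat PSD as a placeholder for the detector noise.
import numpy as np

def match(h1, h2, dt, psd=None):
    """Return the normalized overlap of h1 and h2, maximized over time shift."""
    n = len(h1)
    f = np.fft.rfftfreq(n, dt)
    S = np.ones_like(f) if psd is None else psd(f)
    S[0] = np.inf                                     # drop the DC bin
    A = np.fft.rfft(h1) / np.sqrt(S)                  # noise-whitened spectra
    B = np.fft.rfft(h2) / np.sqrt(S)

    corr = np.fft.irfft(A * np.conj(B), n)            # overlap vs. time shift
    norm = np.sqrt(np.fft.irfft(A * np.conj(A), n)[0] *
                   np.fft.irfft(B * np.conj(B), n)[0])
    return np.max(np.abs(corr)) / norm

# Toy example: two slightly different chirp signals.
dt = 1.0 / 4096
t = np.arange(0, 4, dt)
h_nr = np.sin(2 * np.pi * (30 * t + 10.0 * t ** 2))
h_model = np.sin(2 * np.pi * (30 * t + 10.2 * t ** 2))
print("mismatch 1 - M =", 1 - match(h_nr, h_model, dt))
```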
High-mass X-ray binary populations. 1: Galactic modeling
NASA Technical Reports Server (NTRS)
Dalton, William W.; Sarazin, Craig L.
1995-01-01
Modern stellar evolutionary tracks are used to calculate the evolution of a very large number of massive binary star systems (M_tot ≥ 15 M⊙) which cover a wide range of total masses, mass ratios, and starting separations. Each binary is evolved accounting for mass and angular momentum loss through the supernova of the primary to the X-ray binary phase. Using the observed rate of star formation in our Galaxy and the properties of massive binaries, we calculate the expected high-mass X-ray binary (HMXRB) population in the Galaxy. We test various massive binary evolutionary scenarios by comparing the resulting HMXRB predictions with the X-ray observations. A major goal of this study is the determination of the fraction of matter lost from the system during the Roche lobe overflow phase. Curiously, we find that the total numbers of observable HMXRBs are nearly independent of this assumed mass-loss fraction, with any of the values tested here giving acceptable agreement between predicted and observed numbers. However, comparison of the period distribution of our HMXRB models with the observed period distribution does reveal a distinction among the various models. As a result of this comparison, we conclude that approximately 70% of the overflow matter is lost from a massive binary system during mass transfer in the Roche lobe overflow phase. We compare models constructed assuming that all X-ray emission is due to accretion onto the compact object from the donor star's wind with models that incorporate a simplified disk accretion scheme. By comparing the results of these models with observations, we conclude that the formation of disks in HMXRBs must be relatively common. We also calculate the rate of formation of double degenerate binaries, high velocity detached compact objects, and Thorne-Zytkow objects.
Accuracy of Binary Black Hole Waveform Models for Advanced LIGO
NASA Astrophysics Data System (ADS)
Kumar, Prayush; Fong, Heather; Barkett, Kevin; Bhagwat, Swetha; Afshari, Nousha; Chu, Tony; Brown, Duncan; Lovelace, Geoffrey; Pfeiffer, Harald; Scheel, Mark; Szilagyi, Bela; Simulating Extreme Spacetimes (SXS) Team
2016-03-01
Coalescing binaries of compact objects, such as black holes and neutron stars, are the primary targets for gravitational-wave (GW) detection with Advanced LIGO. Accurate modeling of the emitted GWs is required to extract information about the binary source. The most accurate solution to the general relativistic two-body problem is available in numerical relativity (NR), which is however limited in application due to computational cost. Current searches use semi-analytic models that are based in post-Newtonian (PN) theory and calibrated to NR. In this talk, I will present comparisons between contemporary models and high-accuracy numerical simulations performed using the Spectral Einstein Code (SpEC), focusing on the questions: (i) How well do models capture the binary's late inspiral, where they lack a priori accurate information from PN or NR, and (ii) How accurately do they model binaries with parameters outside their range of calibration? These results guide the choice of templates for future GW searches, and motivate future modeling efforts.
Yin, Rulan; Cao, Haixia; Fu, Ting; Zhang, Qiuxiang; Zhang, Lijuan; Li, Liren; Gu, Zhifeng
2017-07-01
The aim of this study was to assess the adherence rate and predictors of non-adherence with urate-lowering therapy (ULT) in Chinese gout patients. A cross-sectional study was administered to 125 gout patients using the Compliance Questionnaire on Rheumatology (CQR) for adherence to ULT. Patients were asked to complete the Treatment Satisfaction Questionnaire for Medication version II, Health Assessment Questionnaire, Confidence in Gout Treatment Questionnaire, Gout Knowledge Questionnaire, Patient Health Questionnaire-9, Generalized Anxiety Disorder-7, and 36-Item Short Form Health Survey. Data were analyzed by independent sample t test, rank sum test, Chi-square analysis, as well as binary stepwise logistic regression modeling. The data showed that the rate of adherence (CQR ≥80%) to ULT was 9.6% in the investigated gout patients. Adherence was associated with functional capacity, gout-related knowledge, satisfaction with medication, confidence in gout treatment, and the mental component summary. Multivariable analysis with binary stepwise logistic regression identified gout-related knowledge and satisfaction with medication effectiveness as the independent risk factors for medication non-adherence. Patients unaware of gout-related knowledge, or with low satisfaction with medication effectiveness, were more likely not to adhere to ULT. Non-adherence to ULT among gout patients is exceedingly common, particularly in patients unaware of gout-related knowledge, or with low satisfaction with medication effectiveness. These findings could help medical personnel develop useful interventions to improve gout patients' medication adherence.
Racial residential segregation and preterm birth: built environment as a mediator.
Anthopolos, Rebecca; Kaufman, Jay S; Messer, Lynne C; Miranda, Marie Lynn
2014-05-01
Racial residential segregation has been associated with preterm birth. Few studies have examined mediating pathways, in part because, with binary outcomes, indirect effects estimated from multiplicative models generally lack causal interpretation. We develop a method to estimate additive-scale natural direct and indirect effects from logistic regression. We then evaluate whether segregation operates through poor-quality built environment to affect preterm birth. To estimate natural direct and indirect effects, we derive risk differences from logistic regression coefficients. Birth records (2000-2008) for Durham, North Carolina, were linked to neighborhood-level measures of racial isolation and a composite construct of poor-quality built environment. We decomposed the total effect of racial isolation on preterm birth into direct and indirect effects. The adjusted total effect of an interquartile increase in racial isolation on preterm birth was an extra 27 preterm events per 1000 births (risk difference = 0.027 [95% confidence interval = 0.007 to 0.047]). With poor-quality built environment held at the level it would take under isolation at the 25th percentile, the direct effect of an interquartile increase in isolation was 0.022 (-0.001 to 0.042). Poor-quality built environment accounted for 35% (11% to 65%) of the total effect. Our methodology facilitates the estimation of additive-scale natural effects with binary outcomes. In this study, the total effect of racial segregation on preterm birth was partially mediated by poor-quality built environment.
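A schematic version of deriving additive-scale natural effects from logistic regressions, via standardization (g-computation), is sketched below. It simplifies the paper's setting to a binary mediator and a single confounder; the exposure reference levels, model formulas, and synthetic data are illustrative assumptions.

```python
# Additive-scale natural direct/indirect effects from logistic regressions
# via standardization; a binary mediator m and one confounder c are assumed.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

def natural_effects(df, a_low, a_high):
    # Working models: mediator given exposure, outcome given exposure + mediator.
    m_fit = smf.logit("m ~ a + c", data=df).fit(disp=0)
    y_fit = smf.logit("y ~ a + m + c", data=df).fit(disp=0)

    def risk(a_val, a_for_mediator):
        d = df.copy()
        d["a"] = a_for_mediator
        p_m = m_fit.predict(d)                          # P(M=1 | a_for_mediator, C)
        d["a"] = a_val
        d["m"] = 1
        p_y1 = y_fit.predict(d)
        d["m"] = 0
        p_y0 = y_fit.predict(d)
        return np.mean(p_y1 * p_m + p_y0 * (1 - p_m))   # standardized risk

    total = risk(a_high, a_high) - risk(a_low, a_low)
    direct = risk(a_high, a_low) - risk(a_low, a_low)   # mediator held at low-a level
    indirect = total - direct
    return total, direct, indirect

# Synthetic example data.
rng = np.random.default_rng(3)
n = 5000
df = pd.DataFrame({"a": rng.normal(0, 1, n), "c": rng.normal(0, 1, n)})
df["m"] = rng.binomial(1, 1 / (1 + np.exp(-(-1 + 0.8 * df["a"] + 0.3 * df["c"]))))
df["y"] = rng.binomial(1, 1 / (1 + np.exp(-(-2 + 0.3 * df["a"] + 0.7 * df["m"]))))
print(natural_effects(df, *np.percentile(df["a"], [25, 75])))
```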
DOE Office of Scientific and Technical Information (OSTI.GOV)
Geller, Aaron M.; Hurley, Jarrod R.; Mathieu, Robert D., E-mail: a-geller@northwestern.edu, E-mail: mathieu@astro.wisc.edu, E-mail: jhurley@astro.swin.edu.au
2013-01-01
Following on from a recently completed radial-velocity survey of the old (7 Gyr) open cluster NGC 188 in which we studied in detail the solar-type hard binaries and blue stragglers of the cluster, here we investigate the dynamical evolution of NGC 188 through a sophisticated N-body model. Importantly, we employ the observed binary properties of the young (180 Myr) open cluster M35, where possible, to guide our choices for parameters of the initial binary population. We apply pre-main-sequence tidal circularization and a substantial increase to the main-sequence tidal circularization rate, both of which are necessary to match the observed tidal circularization periods in the literature, including that of NGC 188. At 7 Gyr the main-sequence solar-type hard-binary population in the model matches that of NGC 188 in both binary frequency and distributions of orbital parameters. This agreement between the model and observations is in a large part due to the similarities between the NGC 188 and M35 solar-type binaries. Indeed, among the 7 Gyr main-sequence binaries in the model, only those with P ≳ 1000 days begin to show potentially observable evidence for modifications by dynamical encounters, even after 7 Gyr of evolution within the star cluster. This emphasizes the importance of defining accurate initial conditions for star cluster models, which we propose is best accomplished through comparisons with observations of young open clusters like M35. Furthermore, this finding suggests that observations of the present-day binaries in even old open clusters can provide valuable information on their primordial binary populations. However, despite the model's success at matching the observed solar-type main-sequence population, the model underproduces blue stragglers and produces an overabundance of long-period circular main-sequence-white-dwarf binaries as compared with the true cluster. We explore several potential solutions to the paucity of blue stragglers and conclude that the model dramatically underproduces blue stragglers through mass-transfer processes. We suggest that common-envelope evolution may have been incorrectly imposed on the progenitors of the spurious long-period circular main-sequence-white-dwarf binaries, which perhaps instead should have gone through stable mass transfer to create blue stragglers, thereby bringing both the number and binary frequency of the blue straggler population in the model into agreement with the true blue stragglers in NGC 188. Thus, improvements in the physics of mass transfer and common-envelope evolution employed in the model may in fact solve both discrepancies with the observations. This project highlights the unique accessibility of open clusters to both comprehensive observational surveys and full-scale N-body simulations, both of which have only recently matured sufficiently to enable such a project, and underscores the importance of open clusters to the study of star cluster dynamics.
Yan, Luchun; Liu, Jiemin; Qu, Chen; Gu, Xingye; Zhao, Xia
2015-01-28
In order to explore the odor interaction of binary odor mixtures, a series of odor intensity evaluation tests were performed using both individual components and binary mixtures of aldehydes. Based on the linear relation between the logarithm of odor activity value and odor intensity of individual substances, the relationship between concentrations of individual constituents and their joint odor intensity was investigated by employing a partial differential equation (PDE) model. The obtained results showed that the binary odor interaction was mainly influenced by the mixing ratio of two constituents, but not the concentration level of an odor sample. Besides, an extended PDE model was also proposed on the basis of the above experiments. Through a series of odor intensity matching tests for several different binary odor mixtures, the extended PDE model was proved effective at odor intensity prediction. Furthermore, odorants of the same chemical group and similar odor type exhibited similar characteristics in the binary odor interaction. The overall results suggested that the PDE model is a more interpretable way of demonstrating the odor interactions of binary odor mixtures.
Constraining Roche-Lobe Overflow Models Using the Hot-Subdwarf Wide Binary Population
NASA Astrophysics Data System (ADS)
Vos, Joris; Vučković, Maja
2017-12-01
One of the important issues regarding the final evolution of stars is the impact of binarity. A rich zoo of peculiar, evolved objects are born from the interaction between the loosely bound envelope of a giant, and the gravitational pull of a companion. However, binary interactions are not understood from first principles, and the theoretical models are subject to many assumptions. It is currently agreed upon that hot subdwarf stars can only be formed through binary interaction, either through common envelope ejection or stable Roche-lobe overflow (RLOF) near the tip of the red giant branch (RGB). These systems are therefore an ideal testing ground for binary interaction models. With our long term study of wide hot subdwarf (sdB) binaries we aim to improve our current understanding of stable RLOF on the RGB by comparing the results of binary population synthesis studies with the observed population. In this article we describe the current model and possible improvements, and which observables can be used to test different parts of the interaction model.
Baseline adjustments for binary data in repeated cross-sectional cluster randomized trials.
Nixon, R M; Thompson, S G
2003-09-15
Analysis of covariance models, which adjust for a baseline covariate, are often used to compare treatment groups in a controlled trial in which individuals are randomized. Such analysis adjusts for any baseline imbalance and usually increases the precision of the treatment effect estimate. We assess the value of such adjustments in the context of a cluster randomized trial with repeated cross-sectional design and a binary outcome. In such a design, a new sample of individuals is taken from the clusters at each measurement occasion, so that baseline adjustment has to be at the cluster level. Logistic regression models are used to analyse the data, with cluster level random effects to allow for different outcome probabilities in each cluster. We compare the estimated treatment effect and its precision in models that incorporate a covariate measuring the cluster level probabilities at baseline and those that do not. In two data sets, taken from a cluster randomized trial in the treatment of menorrhagia, the value of baseline adjustment is only evident when the number of subjects per cluster is large. We assess the generalizability of these findings by undertaking a simulation study, and find that increased precision of the treatment effect requires both large cluster sizes and substantial heterogeneity between clusters at baseline, but baseline imbalance arising by chance in a randomized study can always be effectively adjusted for. Copyright 2003 John Wiley & Sons, Ltd.
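The baseline-adjustment idea can be sketched as follows: the follow-up binary outcome is regressed on treatment plus the cluster-level baseline proportion. A GEE with exchangeable working correlation is used here as a convenient stand-in for the paper's random-effects logistic model; the simulated trial dimensions and effect sizes are arbitrary.

```python
# Repeated cross-sectional cluster trial: unadjusted vs. baseline-adjusted
# analysis of a binary outcome, using GEE with exchangeable correlation.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)
n_clusters, m = 20, 50                         # clusters, subjects per occasion
cluster_effect = rng.normal(0, 0.6, n_clusters)
treat = np.repeat([0, 1], n_clusters // 2)

rows = []
for c in range(n_clusters):
    p_base = 1 / (1 + np.exp(-(-0.3 + cluster_effect[c])))
    baseline = rng.binomial(1, p_base, m)              # baseline cross-section
    p_fu = 1 / (1 + np.exp(-(-0.3 + cluster_effect[c] - 0.5 * treat[c])))
    for y in rng.binomial(1, p_fu, m):                 # new follow-up sample
        rows.append({"cluster": c, "treat": treat[c],
                     "base_prop": baseline.mean(), "y": y})
df = pd.DataFrame(rows)

unadj = smf.gee("y ~ treat", groups="cluster", data=df,
                family=sm.families.Binomial(),
                cov_struct=sm.cov_struct.Exchangeable()).fit()
adj = smf.gee("y ~ treat + base_prop", groups="cluster", data=df,
              family=sm.families.Binomial(),
              cov_struct=sm.cov_struct.Exchangeable()).fit()
# Compare point estimates and standard errors of the treatment effect.
print(unadj.params["treat"], unadj.bse["treat"])
print(adj.params["treat"], adj.bse["treat"])
```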
Is the cluster environment quenching the Seyfert activity in elliptical and spiral galaxies?
NASA Astrophysics Data System (ADS)
de Souza, R. S.; Dantas, M. L. L.; Krone-Martins, A.; Cameron, E.; Coelho, P.; Hattab, M. W.; de Val-Borro, M.; Hilbe, J. M.; Elliott, J.; Hagen, A.; COIN Collaboration
2016-09-01
We developed a hierarchical Bayesian model (HBM) to investigate how the presence of Seyfert activity in galaxies relates to their environment, herein represented by the galaxy cluster mass, M200, and the normalized cluster-centric distance, r/r200. We achieved this by constructing an unbiased sample of galaxies from the Sloan Digital Sky Survey, with morphological classifications provided by the Galaxy Zoo Project. A propensity score matching approach is introduced to control for the effects of confounding variables: stellar mass, galaxy colour, and star formation rate. The connection between Seyfert activity and environmental properties in the de-biased sample is modelled within an HBM framework using the so-called logistic regression technique, suitable for the analysis of binary data (e.g. whether or not a galaxy hosts an AGN). Unlike standard ordinary least squares fitting methods, our methodology naturally allows modelling the probability of Seyfert-AGN activity in galaxies on its natural scale, i.e. as a binary variable. Furthermore, we demonstrate how an HBM can incorporate information on each particular galaxy morphological type in a unified framework. In elliptical galaxies our analysis indicates a strong correlation of Seyfert-AGN activity with r/r200, and a weaker correlation with the mass of the host cluster. In spiral galaxies these trends do not appear, suggesting that the link between Seyfert activity and the properties of spiral galaxies is independent of the environment.
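A minimal hierarchical logistic-regression sketch in PyMC, in the spirit of the model described above, is shown below: each morphological type gets its own intercept and slopes for r/r200 and log M200, partially pooled through shared hyperpriors. The synthetic data, priors, and centering choices are illustrative assumptions, not the paper's setup.

```python
# Hierarchical Bayesian logistic regression for a binary AGN indicator.
import numpy as np
import pymc as pm

rng = np.random.default_rng(11)
n = 1000
morph = rng.integers(0, 2, n)                 # 0 = elliptical, 1 = spiral
r_norm = rng.uniform(0, 2, n)                 # r / r200
log_m = rng.normal(14.5, 0.4, n)              # log10 M200
logit_true = -1.0 + np.where(morph == 0, -0.8, 0.0) * r_norm
agn = rng.binomial(1, 1 / (1 + np.exp(-logit_true)))

with pm.Model() as hbm:
    mu_a = pm.Normal("mu_a", 0.0, 2.0)
    sigma_a = pm.HalfNormal("sigma_a", 1.0)
    a = pm.Normal("a", mu_a, sigma_a, shape=2)          # per-type intercepts
    b_r = pm.Normal("b_r", 0.0, 1.0, shape=2)           # slope on r/r200
    b_m = pm.Normal("b_m", 0.0, 1.0, shape=2)           # slope on log M200
    logit_p = a[morph] + b_r[morph] * r_norm + b_m[morph] * (log_m - 14.5)
    pm.Bernoulli("agn", logit_p=logit_p, observed=agn)
    idata = pm.sample(1000, tune=1000, target_accept=0.9, progressbar=False)
```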
Decoding memory features from hippocampal spiking activities using sparse classification models.
Dong Song; Hampson, Robert E; Robinson, Brian S; Marmarelis, Vasilis Z; Deadwyler, Sam A; Berger, Theodore W
2016-08-01
To understand how memory information is encoded in the hippocampus, we build classification models to decode memory features from hippocampal CA3 and CA1 spatio-temporal patterns of spikes recorded from epilepsy patients performing a memory-dependent delayed match-to-sample task. The classification model consists of a set of B-spline basis functions for extracting memory features from the spike patterns, and a sparse logistic regression classifier for generating binary categorical output of memory features. Results show that the classification models can extract a significant amount of memory information with respect to the types of memory tasks and the categories of sample images used in the task, despite the high level of variability in prediction accuracy due to the small sample size. These results support the hypothesis that memories are encoded in hippocampal activities and have important implications for the development of hippocampal memory prostheses.
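The classifier structure described above can be sketched as follows: B-spline basis functions summarize spike timing across the trial window, and an L1-penalized (sparse) logistic regression maps those features to a binary memory label. The synthetic spike counts, number of knots, and penalty strength are illustrative placeholders.

```python
# B-spline features of binned spike counts + sparse logistic regression.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import SplineTransformer, StandardScaler

rng = np.random.default_rng(5)
n_trials, n_bins = 200, 50
rate = np.where(np.arange(n_bins) > 25, 4.0, 2.0)        # late-window rate bump
labels = rng.integers(0, 2, n_trials)                    # binary memory category
counts = rng.poisson(rate * (1 + 0.5 * labels[:, None]), (n_trials, n_bins))

# Project each trial's spike-count vector onto a B-spline basis over time.
time = np.arange(n_bins).reshape(-1, 1)
basis = SplineTransformer(n_knots=8, degree=3).fit(time)
features = counts @ basis.transform(time)                # trials x basis functions

clf = make_pipeline(
    StandardScaler(),
    LogisticRegression(penalty="l1", solver="liblinear", C=0.5),
)
clf.fit(features, labels)
print("training accuracy:", clf.score(features, labels))
```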
Close binary systems among very low-mass stars and brown dwarfs
NASA Astrophysics Data System (ADS)
Jeffries, R. D.; Maxted, P. F. L.
2005-12-01
Using Monte Carlo simulations and published radial velocity surveys we have constrained the frequency and separation (a) distribution of very low-mass star (VLM) and brown dwarf (BD) binary systems. We find that simple Gaussian extensions of the observed wide binary distribution, with a peak at 4 AU and 0.6 < σ_log(a/AU) < 1.0, correctly reproduce the observed number of close binary systems, implying a close (a < 2.6 AU) binary frequency of 17-30% and overall frequency of 32-45%. N-body models of the dynamical decay of unstable protostellar multiple systems are excluded with high confidence because they do not produce enough close binary VLMs/BDs. The large number of close binaries and high overall binary frequency are also completely inconsistent with published smoothed particle hydrodynamical modelling and argue against a dynamical origin for VLMs/BDs.
Can Emotional and Behavioral Dysregulation in Youth Be Decoded from Functional Neuroimaging?
Portugal, Liana C L; Rosa, Maria João; Rao, Anil; Bebko, Genna; Bertocci, Michele A; Hinze, Amanda K; Bonar, Lisa; Almeida, Jorge R C; Perlman, Susan B; Versace, Amelia; Schirda, Claudiu; Travis, Michael; Gill, Mary Kay; Demeter, Christine; Diwadkar, Vaibhav A; Ciuffetelli, Gary; Rodriguez, Eric; Forbes, Erika E; Sunshine, Jeffrey L; Holland, Scott K; Kowatch, Robert A; Birmaher, Boris; Axelson, David; Horwitz, Sarah M; Arnold, Eugene L; Fristad, Mary A; Youngstrom, Eric A; Findling, Robert L; Pereira, Mirtes; Oliveira, Leticia; Phillips, Mary L; Mourao-Miranda, Janaina
2016-01-01
High comorbidity among pediatric disorders characterized by behavioral and emotional dysregulation poses problems for diagnosis and treatment, and suggests that these disorders may be better conceptualized as dimensions of abnormal behaviors. Furthermore, identifying neuroimaging biomarkers related to dimensional measures of behavior may provide targets to guide individualized treatment. We aimed to use functional neuroimaging and pattern regression techniques to determine whether patterns of brain activity could accurately decode individual-level severity on a dimensional scale measuring behavioural and emotional dysregulation at two different time points. A sample of fifty-seven youth (mean age: 14.5 years; 32 males) was selected from a multi-site study of youth with parent-reported behavioral and emotional dysregulation. Participants performed a block-design reward paradigm during functional Magnetic Resonance Imaging (fMRI). Pattern regression analyses consisted of Relevance Vector Regression (RVR) and two cross-validation strategies implemented in the Pattern Recognition for Neuroimaging toolbox (PRoNTo). Medication was treated as a binary confounding variable. Decoded and actual clinical scores were compared using Pearson's correlation coefficient (r) and mean squared error (MSE) to evaluate the models. A permutation test was applied to estimate significance levels. Relevance Vector Regression identified patterns of neural activity associated with symptoms of behavioral and emotional dysregulation at the initial study screen and close to the fMRI scanning session. The correlation and the mean squared error between actual and decoded symptoms were significant at the initial study screen and close to the fMRI scanning session. However, after controlling for potential medication effects, results remained significant only for decoding symptoms at the initial study screen. Neural regions with the highest contribution to the pattern regression model included cerebellum, sensory-motor and fronto-limbic areas. The combination of pattern regression models and neuroimaging can help to determine the severity of behavioral and emotional dysregulation in youth at different time points.
Comparison of two gas chromatograph models and analysis of binary data
NASA Technical Reports Server (NTRS)
Keba, P. S.; Woodrow, P. T.
1972-01-01
The overall objective of the gas chromatograph system studies is to generate fundamental design criteria and techniques to be used in the optimum design of the system. The particular tasks currently being undertaken are the comparison of two mathematical models of the chromatograph and the analysis of binary system data. The predictions of two mathematical models, an equilibrium absorption model and a non-equilibrium absorption model, exhibit the same weaknesses in their inability to predict chromatogram spreading for certain systems. The analysis of binary data using the equilibrium absorption model confirms that, for the systems considered, superposition of predicted single component behaviors is a first order representation of actual binary data. Composition effects produce non-idealities which limit the rigorous validity of superposition.
Kumar, Dhananjay; Singh, Alpana; Gaur, J P
2008-11-01
The sorption of Cu(II) and Pb(II) by Pithophora markedly decreased as the concentration of the secondary metal ion, Cu(II) or Pb(II), increased in the binary metal solution. However, the test alga showed a greater affinity to sorb Cu(II) than Pb(II) from the binary metal solution. Mono-component Freundlich, Langmuir, Redlich-Peterson and Sips isotherms successfully predicted the sorption of Cu(II) and Pb(II) from both single and binary metal solutions. None of the tested binary sorption isotherms could realistically predict Cu(II) and Pb(II) sorption capacity and affinity of the test alga for the binary metal solutions of varying composition, which mono-component isotherms could very well accomplish. Hence, mono-component isotherm modeling at different concentrations of the secondary metal ion seems to be a better option than binary isotherms for metal sorption from binary metal solution.
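Mono-component isotherm fitting of the kind used above can be sketched with non-linear least squares, as below. The Langmuir and Freundlich forms are standard; the equilibrium data points and starting values are synthetic placeholders.

```python
# Langmuir and Freundlich isotherms fitted to single-metal sorption data.
import numpy as np
from scipy.optimize import curve_fit

def langmuir(ce, q_max, b):
    """q_e = q_max * b * Ce / (1 + b * Ce)"""
    return q_max * b * ce / (1.0 + b * ce)

def freundlich(ce, k_f, n):
    """q_e = K_F * Ce^(1/n)"""
    return k_f * ce ** (1.0 / n)

# Synthetic equilibrium data: Ce (mg/L) and sorbed amount qe (mg/g).
ce = np.array([2, 5, 10, 20, 40, 80], dtype=float)
qe = np.array([8.1, 15.2, 22.8, 29.5, 34.0, 36.2])

lang_params, _ = curve_fit(langmuir, ce, qe, p0=[40.0, 0.05])
freu_params, _ = curve_fit(freundlich, ce, qe, p0=[5.0, 2.0])
print("Langmuir  q_max, b :", lang_params)
print("Freundlich K_F, n  :", freu_params)
```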
Constraining Accreting Binary Populations in Normal Galaxies
NASA Astrophysics Data System (ADS)
Lehmer, Bret; Hornschemeier, A.; Basu-Zych, A.; Fragos, T.; Jenkins, L.; Kalogera, V.; Ptak, A.; Tzanavaris, P.; Zezas, A.
2011-01-01
X-ray emission from accreting binary systems (X-ray binaries) uniquely probes the binary phase of stellar evolution and the formation of compact objects such as neutron stars and black holes. A detailed understanding of X-ray binary systems is needed to provide physical insight into the formation and evolution of the stars involved, as well as the demographics of interesting binary remnants, such as millisecond pulsars and gravitational wave sources. Our program makes wide use of Chandra observations and complementary multiwavelength data sets (through, e.g., the Spitzer Infrared Nearby Galaxies Survey [SINGS] and the Great Observatories Origins Deep Survey [GOODS]), as well as super-computing facilities, to provide: (1) improved calibrations for correlations between X-ray binary emission and physical properties (e.g., star-formation rate and stellar mass) for galaxies in the local Universe; (2) new physical constraints on accreting binary processes (e.g., common-envelope phase and mass transfer) through the fitting of X-ray binary synthesis models to observed local galaxy X-ray binary luminosity functions; (3) observational and model constraints on the X-ray evolution of normal galaxies over the last 90% of cosmic history (since z ~ 4) from the Chandra Deep Field surveys and accreting binary synthesis models; and (4) predictions for deeper observations from forthcoming generations of X-ray telescopes (e.g., IXO, WFXT, and Gen-X) to provide a science driver for these missions. In this talk, we highlight the details of our program and discuss recent results.
2012-01-01
Background A discrete choice experiment (DCE) is a preference survey which asks participants to make a choice among product portfolios comparing the key product characteristics by performing several choice tasks. Analyzing DCE data needs to account for within-participant correlation because choices from the same participant are likely to be similar. In this study, we empirically compared some commonly-used statistical methods for analyzing DCE data while accounting for within-participant correlation based on a survey of patient preference for colorectal cancer (CRC) screening tests conducted in Hamilton, Ontario, Canada in 2002. Methods A two-stage DCE design was used to investigate the impact of six attributes on participants' preferences for CRC screening test and willingness to undertake the test. We compared six models for clustered binary outcomes (logistic and probit regressions using cluster-robust standard error (SE), random-effects and generalized estimating equation approaches) and three models for clustered nominal outcomes (multinomial logistic and probit regressions with cluster-robust SE and random-effects multinomial logistic model). We also fitted a bivariate probit model with cluster-robust SE treating the choices from two stages as two correlated binary outcomes. The rank of relative importance between attributes and the estimates of β coefficient within attributes were used to assess the model robustness. Results In total 468 participants with each completing 10 choices were analyzed. Similar results were reported for the rank of relative importance and β coefficients across models for stage-one data on evaluating participants' preferences for the test. The six attributes ranked from high to low as follows: cost, specificity, process, sensitivity, preparation and pain. However, the results differed across models for stage-two data on evaluating participants' willingness to undertake the tests. Little within-patient correlation (ICC ≈ 0) was found in stage-one data, but substantial within-patient correlation existed (ICC = 0.659) in stage-two data. Conclusions When small clustering effect presented in DCE data, results remained robust across statistical models. However, results varied when larger clustering effect presented. Therefore, it is important to assess the robustness of the estimates via sensitivity analysis using different models for analyzing clustered data from DCE studies. PMID:22348526
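Two of the model variants compared in this study, ordinary logistic regression with cluster-robust standard errors and a GEE with exchangeable working correlation, can be sketched on synthetic clustered choice data as below. The attribute columns and effect sizes are illustrative, not the actual DCE design.

```python
# Clustered binary choices: cluster-robust logit vs. exchangeable GEE.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n_sub, n_tasks = 200, 10
sub = np.repeat(np.arange(n_sub), n_tasks)
subj_effect = rng.normal(0, 1.0, n_sub)[sub]          # within-subject correlation
cost = rng.uniform(0, 1, n_sub * n_tasks)
sens = rng.uniform(0, 1, n_sub * n_tasks)
logit_p = 0.5 - 2.0 * cost + 1.0 * sens + subj_effect
df = pd.DataFrame({"subject": sub, "cost": cost, "sens": sens,
                   "choice": rng.binomial(1, 1 / (1 + np.exp(-logit_p)))})

robust = smf.logit("choice ~ cost + sens", data=df).fit(
    disp=0, cov_type="cluster", cov_kwds={"groups": df["subject"]})
gee = smf.gee("choice ~ cost + sens", groups="subject", data=df,
              family=sm.families.Binomial(),
              cov_struct=sm.cov_struct.Exchangeable()).fit()
print(robust.params, robust.bse, sep="\n")
print(gee.params, gee.bse, sep="\n")
```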
Dai, James Y.; Hughes, James P.
2012-01-01
The meta-analytic approach to evaluating surrogate end points assesses the predictiveness of treatment effect on the surrogate toward treatment effect on the clinical end point based on multiple clinical trials. Definition and estimation of the correlation of treatment effects were developed in linear mixed models and later extended to binary or failure time outcomes on a case-by-case basis. In a general regression setting that covers nonnormal outcomes, we discuss in this paper several metrics that are useful in the meta-analytic evaluation of surrogacy. We propose a unified 3-step procedure to assess these metrics in settings with binary end points, time-to-event outcomes, or repeated measures. First, the joint distribution of estimated treatment effects is ascertained by an estimating equation approach; second, the restricted maximum likelihood method is used to estimate the means and the variance components of the random treatment effects; finally, confidence intervals are constructed by a parametric bootstrap procedure. The proposed method is evaluated by simulations and applications to 2 clinical trials. PMID:22394448
QSAR Modeling and Prediction of Drug-Drug Interactions.
Zakharov, Alexey V; Varlamova, Ekaterina V; Lagunin, Alexey A; Dmitriev, Alexander V; Muratov, Eugene N; Fourches, Denis; Kuz'min, Victor E; Poroikov, Vladimir V; Tropsha, Alexander; Nicklaus, Marc C
2016-02-01
Severe adverse drug reactions (ADRs) are the fourth leading cause of fatality in the U.S. with more than 100,000 deaths per year. As up to 30% of all ADRs are believed to be caused by drug-drug interactions (DDIs), typically mediated by cytochrome P450s, possibilities to predict DDIs from existing knowledge are important. We collected data from public sources on 1485, 2628, 4371, and 27,966 possible DDIs mediated by four cytochrome P450 isoforms 1A2, 2C9, 2D6, and 3A4 for 55, 73, 94, and 237 drugs, respectively. For each of these data sets, we developed and validated QSAR models for the prediction of DDIs. As a unique feature of our approach, the interacting drug pairs were represented as binary chemical mixtures in a 1:1 ratio. We used two types of chemical descriptors: quantitative neighborhoods of atoms (QNA) and simplex descriptors. Radial basis functions with self-consistent regression (RBF-SCR) and random forest (RF) were utilized to build QSAR models predicting the likelihood of DDIs for any pair of drug molecules. Our models showed balanced accuracy of 72-79% for the external test sets with a coverage of 81.36-100% when a conservative threshold for the model's applicability domain was applied. We generated virtually all possible binary combinations of marketed drugs and employed our models to identify drug pairs predicted to be instances of DDI. More than 4500 of these predicted DDIs that were not found in our training sets were confirmed by data from the DrugBank database.
Classification of Dust Days by Satellite Remotely Sensed Aerosol Products
NASA Technical Reports Server (NTRS)
Sorek-Hammer, M.; Cohen, A.; Levy, Robert C.; Ziv, B.; Broday, D. M.
2013-01-01
Considerable progress in satellite remote sensing (SRS) of dust particles has been seen in the last decade. From an environmental health perspective, such an event detection, after linking it to ground particulate matter (PM) concentrations, can proxy acute exposure to respirable particles of certain properties (i.e. size, composition, and toxicity). Being affected considerably by atmospheric dust, previous studies in the Eastern Mediterranean, and in Israel in particular, have focused on mechanistic and synoptic prediction, classification, and characterization of dust events. In particular, a scheme for identifying dust days (DD) in Israel based on ground PM10 (particulate matter of size smaller than 10 μm) measurements has been suggested, which has been validated by compositional analysis. This scheme requires information regarding ground PM10 levels, which is naturally limited in places with sparse ground-monitoring coverage. In such cases, SRS may be an efficient and cost-effective alternative to ground measurements. This work demonstrates a new model for identifying DD and non-DD (NDD) over Israel based on an integration of aerosol products from different satellite platforms (Moderate Resolution Imaging Spectroradiometer (MODIS) and Ozone Monitoring Instrument (OMI)). Analysis of ground-monitoring data from 2007 to 2008 in southern Israel revealed 67 DD, with more than 88 percent occurring during winter and spring. A Classification and Regression Tree (CART) model that was applied to a database containing ground monitoring (the dependent variable) and SRS aerosol product (the independent variables) records revealed an optimal set of binary variables for the identification of DD. These variables are combinations of the following primary variables: the calendar month, ground-level relative humidity (RH), the aerosol optical depth (AOD) from MODIS, and the aerosol absorbing index (AAI) from OMI. A logistic regression that uses these variables, coded as binary variables, demonstrated 93.2 percent correct classifications of DD and NDD. Evaluation of the combined CART-logistic regression scheme in an adjacent geographical region (Gush Dan) demonstrated good results. Using SRS aerosol products for DD and NDD identification may enable us to distinguish between health, ecological, and environmental effects that result from exposure to these distinct particle populations.
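The two-stage scheme can be sketched roughly as follows: a shallow CART model proposes split points on the satellite aerosol products, and the resulting binary indicators feed a logistic regression that classifies dust days. The synthetic predictors, thresholds, and tree depth below are illustrative assumptions.

```python
# CART-derived binary splits feeding a logistic regression for dust days.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(8)
n = 800
df = pd.DataFrame({
    "aod": rng.gamma(2.0, 0.15, n),           # MODIS aerosol optical depth
    "aai": rng.normal(0.5, 0.6, n),           # OMI aerosol absorbing index
    "rh": rng.uniform(10, 90, n),             # ground relative humidity (%)
    "month": rng.integers(1, 13, n),
})
p_dd = 1 / (1 + np.exp(-(-4 + 6 * df["aod"] + 1.5 * df["aai"])))
df["dust_day"] = rng.binomial(1, p_dd)

# Stage 1: a shallow CART proposes thresholds on the predictors.
feature_names = np.array(["aod", "aai", "rh", "month"])
tree = DecisionTreeClassifier(max_depth=3, random_state=0)
tree.fit(df[feature_names], df["dust_day"])
internal = tree.tree_.feature >= 0            # internal (non-leaf) nodes
thresholds = dict(zip(feature_names[tree.tree_.feature[internal]],
                      tree.tree_.threshold[internal]))

# Stage 2: the binary-coded variables enter a logistic regression.
X_bin = np.column_stack([(df[f] > t).astype(int) for f, t in thresholds.items()])
logit = LogisticRegression().fit(X_bin, df["dust_day"])
print("binary splits used:", thresholds)
print("classification accuracy:", logit.score(X_bin, df["dust_day"]))
```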
Variance in binary stellar population synthesis
NASA Astrophysics Data System (ADS)
Breivik, Katelyn; Larson, Shane L.
2016-03-01
In the years preceding LISA, Milky Way compact binary population simulations can be used to inform the science capabilities of the mission. Galactic population simulation efforts generally focus on high fidelity models that require extensive computational power to produce a single simulated population for each model. Each simulated population represents an incomplete sample of the functions governing compact binary evolution, thus introducing variance from one simulation to another. We present a rapid Monte Carlo population simulation technique that can simulate thousands of populations in less than a week, thus allowing a full exploration of the variance associated with a binary stellar evolution model.
Studying Variance in the Galactic Ultra-compact Binary Population
NASA Astrophysics Data System (ADS)
Larson, Shane L.; Breivik, Katelyn
2017-01-01
In the years preceding LISA, Milky Way compact binary population simulations can be used to inform the science capabilities of the mission. Galactic population simulation efforts generally focus on high fidelity models that require extensive computational power to produce a single simulated population for each model. Each simulated population represents an incomplete sample of the functions governing compact binary evolution, thus introducing variance from one simulation to another. We present a rapid Monte Carlo population simulation technique that can simulate thousands of populations on week-long timescales, thus allowing a full exploration of the variance associated with a binary stellar evolution model.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jumper, Peter H.; Fisher, Robert T., E-mail: robert.fisher@umassd.edu
2013-05-20
The formation of brown dwarfs (BDs) poses a key challenge to star formation theory. The observed dearth of nearby (≤ 5 AU) BD companions to solar mass stars, known as the BD desert, as well as the tendency for low-mass binary systems to be more tightly bound than stellar binaries, has been cited as evidence for distinct formation mechanisms for BDs and stars. In this paper, we explore the implications of the minimal hypothesis that BDs in binary systems originate via the same fundamental fragmentation mechanism as stars, within isolated, turbulent giant molecular cloud cores. We demonstrate analytically that the scaling of specific angular momentum with turbulent core mass naturally gives rise to the BD desert, as well as wide BD binary systems. Further, we show that the turbulent core fragmentation model also naturally predicts that very low mass binary and BD/BD systems are more tightly bound than stellar systems. In addition, in order to capture the stochastic variation intrinsic to turbulence, we generate 10⁴ model turbulent cores with synthetic turbulent velocity fields to show that the turbulent fragmentation model accommodates a small fraction of binary BDs with wide separations, similar to observations. Indeed, the picture which emerges from the turbulent fragmentation model is that a single fragmentation mechanism may largely shape both stellar and BD binary distributions during formation.
Bayesian inference for unidirectional misclassification of a binary response trait.
Xia, Michelle; Gustafson, Paul
2018-03-15
When assessing association between a binary trait and some covariates, the binary response may be subject to unidirectional misclassification. Unidirectional misclassification can occur when revealing a particular level of the trait is associated with a type of cost, such as a social desirability or financial cost. The feasibility of addressing misclassification is commonly obscured by model identification issues. The current paper attempts to study the efficacy of inference when the binary response variable is subject to unidirectional misclassification. From a theoretical perspective, we demonstrate that the key model parameters possess identifiability, except for the case with a single binary covariate. From a practical standpoint, the logistic model with quantitative covariates can be weakly identified, in the sense that the Fisher information matrix may be near singular. This can make learning some parameters difficult under certain parameter settings, even with quite large samples. In other cases, the stronger identification enables the model to provide more effective adjustment for unidirectional misclassification. An extension to the Poisson approximation of the binomial model reveals the identifiability of the Poisson and zero-inflated Poisson models. For fully identified models, the proposed method adjusts for misclassification based on learning from data. For binary models where there is difficulty in identification, the method is useful for sensitivity analyses on the potential impact from unidirectional misclassification. Copyright © 2017 John Wiley & Sons, Ltd.
Global Positioning System (GPS) Precipitable Water in Forecasting Lightning at Spaceport Canaveral
NASA Technical Reports Server (NTRS)
Kehrer, Kristen C.; Graf, Brian; Roeder, William
2006-01-01
This paper evaluates the use of precipitable water (PW) from the Global Positioning System (GPS) in lightning prediction. Additional independent verification of an earlier model is performed. This earlier model used binary logistic regression with the following four predictor variables, optimally selected from a list of 23 candidate predictors: the current precipitable water value for a given time of the day, the change in GPS-PW over the past 9 hours, the K-Index, and the electric field mill value. This earlier model was not optimized for any specific forecast interval, but showed promise for 6 hour and 1.5 hour forecasts. Two new models were developed and verified. These new models were optimized for two operationally significant forecast intervals. The first model was optimized for the 0.5 hour lightning advisories issued by the 45th Weather Squadron. An additional 1.5 hours was allowed for sensor dwell, communication, calculation, analysis, and advisory decision by the forecaster. Therefore the 0.5 hour advisory model became a 2 hour forecast model for lightning within the 45th Weather Squadron advisory areas. The second model was optimized for major ground processing operations supported by the 45th Weather Squadron, which can require lightning forecasts with a lead-time of up to 7.5 hours. Using the same 1.5 hour lag as in the other new model, this became a 9 hour forecast model for lightning within 37 km (20 NM) of the 45th Weather Squadron advisory areas. The two new models were built using binary logistic regression from a list of 26 candidate predictor variables: the current GPS-PW value, the changes in GPS-PW over 0.5 hour increments from 0.5 to 12 hours, and the K-Index. The new 2 hour model found the following four predictors to be statistically significant, listed in decreasing order of contribution to the forecast: the 0.5 hour change in GPS-PW, the 7.5 hour change in GPS-PW, the current GPS-PW value, and the K-Index. The new 9 hour forecast model found the following five independent variables to be statistically significant, listed in decreasing order of contribution to the forecast: the current GPS-PW value, the 8.5 hour change in GPS-PW, the 3.5 hour change in GPS-PW, the 12 hour change in GPS-PW, and the K-Index. In both models, the GPS-PW parameters correlated better with the lightning outcome than the K-Index, a widely used thunderstorm index. Possible future improvements to this study are discussed.
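As a rough illustration of how such a forecast model might be fit, the sketch below estimates a binary logistic regression of a lightning flag on a handful of GPS-PW-style predictors. The column names, coefficients and data are invented for the example and do not come from the study.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 500
df = pd.DataFrame({
    "pw_now":   rng.gamma(4.0, 10.0, n),   # current GPS-PW (mm), hypothetical scale
    "dpw_0p5h": rng.normal(0.0, 1.5, n),   # 0.5 hour change in GPS-PW
    "dpw_7p5h": rng.normal(0.0, 3.0, n),   # 7.5 hour change in GPS-PW
    "k_index":  rng.normal(25.0, 8.0, n),  # K-Index
})
# Synthetic 0/1 outcome: lightning within the advisory area 2 hours later.
logit_p = -6 + 0.05 * df["pw_now"] + 0.4 * df["dpw_0p5h"] + 0.1 * df["k_index"]
df["lightning_2h"] = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

X = sm.add_constant(df[["pw_now", "dpw_0p5h", "dpw_7p5h", "k_index"]])
fit = sm.Logit(df["lightning_2h"], X).fit(disp=False)
print(fit.summary())
```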
Liu, Da; Xu, Ming; Niu, Dongxiao; Wang, Shoukai; Liang, Sai
2016-01-01
Traditional forecasting models fit a function approximation from independent variables to dependent variables. However, they usually run into trouble when data are presented in various formats, such as text, voice and image. This study proposes a novel image-encoded forecasting method in which the input and output are binary digital two-dimensional (2D) images transformed from decimal data. Omitting any data analysis or cleansing steps for simplicity, all raw variables were selected and converted to binary digital images as the input of a deep learning model, a convolutional neural network (CNN). Using shared weights, pooling and multiple-layer back-propagation techniques, the CNN was adopted to locate the nexus among variations in local binary digital images. Owing to computing capability originally developed for binary digital bitmap manipulation, this model has significant potential for forecasting with vast volumes of data. The model was validated on a power load forecasting dataset from the Global Energy Forecasting Competition 2012.
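A minimal sketch of the kind of network described, assuming a 32 x 32 binary-encoded input image and a 16-bit binary-encoded output (the paper's actual encoding and architecture are not specified here); written with PyTorch purely for illustration.

```python
import torch
import torch.nn as nn

# Assumed architecture, not the paper's exact network: each sample is a
# binary-encoded 2D image (0/1 pixels) and the network predicts a
# binary-encoded output vector, trained by back-propagation.
class BinaryImageCNN(nn.Module):
    def __init__(self, out_bits=16):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(8, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Linear(16 * 8 * 8, out_bits)   # 32x32 input -> 8x8 feature map

    def forward(self, x):
        return torch.sigmoid(self.head(self.features(x).flatten(1)))

x = (torch.rand(4, 1, 32, 32) > 0.5).float()          # fake binary-encoded inputs
y = (torch.rand(4, 16) > 0.5).float()                 # fake binary-encoded targets
model = BinaryImageCNN()
loss = nn.BCELoss()(model(x), y)
loss.backward()                                       # one back-propagation step
print(loss.item())
```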
Xu, Ming; Niu, Dongxiao; Wang, Shoukai; Liang, Sai
2016-01-01
Traditional forecasting models fit a function approximation from independent variables to dependent variables. However, they usually run into trouble when data are presented in various formats, such as text, voice and image. This study proposes a novel image-encoded forecasting method in which the input and output are binary digital two-dimensional (2D) images transformed from decimal data. Omitting any data analysis or cleansing steps for simplicity, all raw variables were selected and converted to binary digital images as the input of a deep learning model, a convolutional neural network (CNN). Using shared weights, pooling and multiple-layer back-propagation techniques, the CNN was adopted to locate the nexus among variations in local binary digital images. Owing to computing capability originally developed for binary digital bitmap manipulation, this model has significant potential for forecasting with vast volumes of data. The model was validated on a power load forecasting dataset from the Global Energy Forecasting Competition 2012. PMID:27281032
Nowcasting sunshine number using logistic modeling
NASA Astrophysics Data System (ADS)
Brabec, Marek; Badescu, Viorel; Paulescu, Marius
2013-04-01
In this paper, we present a formalized approach to statistical modeling of the sunshine number, a binary indicator of whether the Sun is covered by clouds, introduced previously by Badescu (Theor Appl Climatol 72:127-136, 2002). Our statistical approach is based on a Markov chain and logistic regression and yields fully specified probability models that are relatively easily identified (and their unknown parameters estimated) from a set of empirical data (observed sunshine number and sunshine stability number series). We discuss the general structure of the model and its advantages, demonstrate its performance on real data, and compare its results to the classical ARIMA approach as a competitor. Since the model parameters have a clear interpretation, we also illustrate how, e.g., their inter-seasonal stability can be tested. We conclude with an outlook on future developments oriented towards the construction of models allowing for a practically desirable smooth transition between data observed at different frequencies, and with a short discussion of the technical problems that such a goal brings.
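A minimal sketch of the Markov-chain/logistic idea: the sunshine number at time t is regressed on its value at t - 1 plus a diurnal harmonic, on synthetic data. The persistence probabilities and predictors are assumptions for illustration, not the paper's fitted model.

```python
import numpy as np
import statsmodels.api as sm

# First-order Markov logistic model: P(sunshine number = 1 at time t)
# depends on the state at t-1 (transition structure) plus an hour-of-day
# harmonic. The series below is simulated with arbitrary persistence.
rng = np.random.default_rng(7)
hours = np.tile(np.arange(24), 30).astype(float)
ssn = np.zeros(hours.size, dtype=int)
for t in range(1, hours.size):
    p = 0.8 if ssn[t - 1] == 1 else 0.3
    ssn[t] = rng.binomial(1, p)

prev = ssn[:-1]
X = sm.add_constant(np.column_stack([
    prev,
    np.sin(2 * np.pi * hours[1:] / 24),
    np.cos(2 * np.pi * hours[1:] / 24),
]))
fit = sm.Logit(ssn[1:], X).fit(disp=False)
print(fit.params)    # intercept, persistence term, diurnal harmonics
```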
R144: a very massive binary likely ejected from R136 through a binary-binary encounter
NASA Astrophysics Data System (ADS)
Oh, Seungkyung; Kroupa, Pavel; Banerjee, Sambaran
2014-02-01
R144 is a recently confirmed very massive, spectroscopic binary which appears isolated from the core of the massive young star cluster R136. The dynamical ejection hypothesis as an origin for its location has been claimed to be improbable by Sana et al. owing to its binary nature and high mass. We demonstrate here by means of direct N-body calculations that a very massive binary system can be readily ejected dynamically from an R136-like cluster through a close encounter with a very massive system. One out of four N-body cluster models produces a dynamically ejected very massive binary system with a mass comparable to that of R144. The system has a total mass of ≈355 M⊙ and is located 36.8 pc from the centre of its parent cluster, moving away from the cluster with a velocity of 57 km s⁻¹ at 2 Myr as a result of a binary-binary interaction. This implies that R144 could have been ejected from R136 through a strong encounter with another massive binary or single star. In addition, we discuss all massive binaries and single stars which are ejected dynamically from their parent cluster in the N-body models.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gaulme, P.; McKeever, J.; Rawls, M. L.
2013-04-10
Red giant stars are proving to be an incredible source of information for testing models of stellar evolution, as asteroseismology has opened up a window into their interiors. Such insights are a direct result of the unprecedented data from space missions CoRoT and Kepler as well as recent theoretical advances. Eclipsing binaries are also fundamental astrophysical objects, and when coupled with asteroseismology, binaries provide two independent methods to obtain masses and radii and exciting opportunities to develop highly constrained stellar models. The possibility of discovering pulsating red giants in eclipsing binary systems is therefore an important goal that could potentially offer very robust characterization of these systems. Until recently, only one case has been discovered with Kepler. We cross-correlate the detected red giant and eclipsing-binary catalogs from Kepler data to find possible candidate systems. Light-curve modeling and mean properties measured from asteroseismology are combined to yield specific measurements of periods, masses, radii, temperatures, eclipse timing variations, core rotation rates, and red giant evolutionary state. After using three different techniques to eliminate false positives, out of the 70 systems common to the red giant and eclipsing-binary catalogs we find 13 strong candidates (12 previously unknown) to be eclipsing binaries, one to be a non-eclipsing binary with tidally induced oscillations, and 10 more to be hierarchical triple systems, all of which include a pulsating red giant. The systems span a range of orbital eccentricities, periods, and spectral types F, G, K, and M for the companion of the red giant. One case even suggests an eclipsing binary composed of two red giant stars and another of a red giant with a δ Scuti star. The discovery of multiple pulsating red giants in eclipsing binaries provides an exciting test bed for precise astrophysical modeling, and follow-up spectroscopic observations of many of the candidate systems are encouraged. The resulting highly constrained stellar parameters will allow, for example, the exploration of how binary tidal interactions affect pulsations when compared to the single-star case.
Simulations of binary black hole mergers
NASA Astrophysics Data System (ADS)
Lovelace, Geoffrey
2017-01-01
Advanced LIGO's observations of merging binary black holes have inaugurated the era of gravitational wave astronomy. Accurate models of binary black holes and the gravitational waves they emit are helping Advanced LIGO to find as many gravitational waves as possible and to learn as much as possible about the waves' sources. These models require numerical-relativity simulations of binary black holes, because near the time when the black holes merge, all analytic approximations break down. Following breakthroughs in 2005, many research groups have built numerical-relativity codes capable of simulating binary black holes. In this talk, I will discuss current challenges in simulating binary black holes for gravitational-wave astronomy, and I will discuss the tremendous progress that has already enabled such simulations to become an essential tool for Advanced LIGO.
Spectral analysis of white ash response to emerald ash borer infestations
NASA Astrophysics Data System (ADS)
Calandra, Laura
The emerald ash borer (EAB) (Agrilus planipennis Fairmaire) is an invasive insect that has killed over 50 million ash trees in the US. The goal of this research was to establish a method to identify ash trees infested with EAB using remote sensing techniques at the leaf and tree-crown levels. First, a field-based study at the leaf level used the range of spectral bands from the WorldView-2 sensor to determine if there was a significant difference between EAB-infested white ash (Fraxinus americana) leaves and healthy leaves. Binary logistic regression models were developed using individual wavelengths and combinations of wavelengths; the most successful model included the 545 and 950 nm bands. The second half of this research employed imagery to identify healthy and EAB-infested trees, comparing pixel- and object-based methods by applying an unsupervised classification approach and a tree crown delineation algorithm, respectively. The pixel-based models attained the highest overall accuracies.
Interfacing modeling suite Physics Of Eclipsing Binaries 2.0 with a Virtual Reality Platform
NASA Astrophysics Data System (ADS)
Harriett, Edward; Conroy, Kyle; Prša, Andrej; Klassner, Frank
2018-01-01
To explore alternate methods for modeling eclipsing binary stars, we extend PHOEBE's (PHysics Of Eclipsing BinariEs) capabilities into a virtual reality (VR) environment to create an immersive and interactive experience for users. The application used is Vizard, a python-scripted VR development platform for environments such as the Cave Automatic Virtual Environment (CAVE) and other off-the-shelf VR headsets. Vizard allows all modeling to be precompiled without compromising functionality or usability. The system requires five arguments to be precomputed using PHOEBE's python front-end: the effective temperature, flux, relative intensity, vertex coordinates, and orbits; the user can opt to implement other features from PHOEBE to be accessed within the simulation as well. Here we present the method for making these data observables accessible in real time. An Oculus Rift will be available for a live showcase of various cases of VR rendering of PHOEBE binary systems, including detached and contact binary stars.
Analysis of Predominance of Sexual Reproduction and Quadruplicity of Bases by Computer Simulation
NASA Astrophysics Data System (ADS)
Dasgupta, Subinay
We have presented elsewhere a model for computer simulation of a colony of individuals reproducing sexually, by meiotic parthenogenesis and by cloning. Our algorithm takes into account food and space restrictions, and attacks by some diseases. Each individual is characterized by a string of L "base" units, each of which can be of four types (quaternary model) or two types (binary model). Our previous report was for the case of L=12 (quaternary model) and L=24 (binary model) and contained the result that the fluctuation of population was lowest for sexual reproduction with four types of base units. The present communication reports that the same conclusion also holds for L=10 (quaternary model) and L=20 (binary model), and for L=8 (quaternary model) and L=16 (binary model). This model, however, suffers from the drawback that it does not show the effect of aging. A modification of the model was attempted to remove this drawback, but the results were not encouraging.
NASA Astrophysics Data System (ADS)
Varouchakis, Emmanouil; Kourgialas, Nektarios; Karatzas, George; Giannakis, Georgios; Lilli, Maria; Nikolaidis, Nikolaos
2014-05-01
Riverbank erosion affects the river morphology and the local habitat and results in riparian land loss, damage to property and infrastructure, ultimately weakening flood defences. An important issue concerning riverbank erosion is the identification of the areas vulnerable to erosion, as it allows for predicting changes and assists with stream management and restoration. One way to predict the areas vulnerable to erosion is to determine the erosion probability by identifying the underlying relations between riverbank erosion and the geomorphological and/or hydrological variables that prevent or stimulate erosion. A statistical model for evaluating the probability of erosion based on a series of independent local variables and using logistic regression is developed in this work. The main variables affecting erosion are vegetation index (stability), the presence or absence of meanders, bank material (classification), stream power, bank height, river bank slope, riverbed slope, cross section width and water velocities (Luppi et al. 2009). In statistics, logistic regression is a type of regression analysis used for predicting the outcome of a categorical dependent variable, e.g. a binary response, based on one or more predictor variables (continuous or categorical). The probabilities of the possible outcomes are modelled as a function of the independent variables using a logistic function. Logistic regression measures the relationship between a categorical dependent variable and, usually, one or several continuous independent variables by converting the dependent variable to probability scores. A logistic regression is then formed, which predicts the success or failure of a given binary variable (e.g. 1 = "presence of erosion" and 0 = "no erosion") for any value of the independent variables. The regression coefficients are estimated by maximum likelihood estimation. The erosion occurrence probability can be calculated in conjunction with the model deviance regarding the independent variables tested (Atkinson et al. 2003). The developed statistical model is applied to the Koiliaris River Basin on the island of Crete, Greece. The aim is to determine the probability of erosion along the Koiliaris riverbanks considering a series of independent geomorphological and/or hydrological variables. Data for the river bank slope and for the river cross section width are available at ten locations along the river. The riverbank shows indications of erosion at six of the ten locations, while the other four have remained stable. Based on a recent work, measurements for the two independent variables and data regarding bank stability are available at eight different locations along the river. These locations were used as validation points for the proposed statistical model. The results show a very close agreement between the observed erosion indications and the statistical model, as the probability of erosion was accurately predicted at seven out of the eight locations. The next step is to apply the model at more locations along the riverbanks. In November 2013, stakes were inserted at selected locations in order to be able to identify the presence or absence of erosion after the winter period. In April 2014 the presence or absence of erosion will be identified and the model results will be compared to the field data. Our intent is to extend the model by increasing the number of independent variables in order to identify the key factors favouring erosion along the Koiliaris River.
We aim at developing an easy to use statistical tool that will provide a quantified measure of the erosion probability along the riverbanks, which could consequently be used to prevent erosion and flooding events. Atkinson, P. M., German, S. E., Sear, D. A. and Clark, M. J. 2003. Exploring the relations between riverbank erosion and geomorphological controls using geographically weighted logistic regression. Geographical Analysis, 35 (1), 58-82. Luppi, L., Rinaldi, M., Teruggi, L. B., Darby, S. E. and Nardi, L. 2009. Monitoring and numerical modelling of riverbank erosion processes: A case study along the Cecina River (central Italy). Earth Surface Processes and Landforms, 34 (4), 530-546. Acknowledgements This work is part of an on-going THALES project (CYBERSENSORS - High Frequency Monitoring System for Integrated Water Resources Management of Rivers). The project has been co-financed by the European Union (European Social Fund - ESF) and Greek national funds through the Operational Program "Education and Lifelong Learning" of the National Strategic Reference Framework (NSRF) - Research Funding Program: THALES. Investing in knowledge society through the European Social Fund.
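For illustration, the sketch below fits a logistic erosion-probability model to ten hypothetical bank locations using the two predictors mentioned above (bank slope and cross-section width); the numbers are invented and are not the Koiliaris measurements.

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical observations at ten bank locations; 1 = erosion indications.
slope = np.array([35, 42, 28, 50, 21, 38, 20, 30, 22, 25], dtype=float)  # bank slope, degrees
width = np.array([8, 6, 10, 5, 13, 6, 14, 9, 13, 12], dtype=float)       # cross-section width, m
eroded = np.array([1, 1, 1, 1, 1, 1, 0, 0, 0, 0])

X = sm.add_constant(np.column_stack([slope, width]))
fit = sm.Logit(eroded, X).fit(disp=False, method="bfgs")
print(fit.predict(X))    # estimated erosion probability at each location
```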
Louveau, Baptiste; De Rycke, Yann; Lafourcade, Alexandre; Saraux, Alain; Guillemin, Francis; Tubach, Florence; Fautrel, Bruno; Hajage, David
2018-05-22
Several authors have tried to predict the risk of radiographic progression in RA according to baseline characteristics, considering exposure to treatment only as a binary variable (Treated: Yes/No). This study aims to model the risk of 5-year radiographic progression taking into account both baseline characteristics and the cumulative time-varying exposure to corticosteroids or DMARDs. The study population consisted of 403 patients of the Etude et Suivi des Polyarthrites Indifférenciées Récentes cohort meeting the 1987 ACR or 2010 ACR/EULAR criteria for RA at inclusion and having complete radiographic data at baseline and 5 years. Radiographic progression was defined at 5 years as a significant increase of the Sharp/van der Heijde score (smallest detectable difference ⩾5). The best logistic regression model was selected from the following: a model including only clinico-biological baseline characteristics; a model considering baseline characteristics and treatments as binary variables; and a model considering baseline characteristics and treatments as weighted cumulative exposure variables. Radiographic progression occurred in 143 (35.5%) patients. The best model combined anti-citrullinated peptide antibody positivity, ESR, swollen joint count >14 and erosion score at baseline, as well as corticosteroids, MTX/LEF (MTX or LEF) and biologic DMARDs (bDMARDs) as weighted cumulative exposure variables. Recent cumulative exposure to high doses of corticosteroids (⩽3 months) was significantly associated with the risk of 5-year radiographic progression, and a significant protective association was highlighted for a 36-month exposure to bDMARDs. Corticosteroids and bDMARDs play an important role in radiographic progression. Accounting for treatment class and intensity of exposure is a major concern in predictive models of radiographic progression in RA patients.
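As a sketch of the weighted cumulative exposure idea, the function below sums a monthly dose history with weights that decay for older exposure; the exponential half-life weighting is an assumption for illustration, whereas the study estimates its own weight function from the data.

```python
import numpy as np

# Weighted cumulative exposure (WCE) covariate: monthly doses are summed
# with weights that down-weight older exposure. The exponential-decay
# weight and the half-life value are illustrative assumptions.
def weighted_cumulative_exposure(doses, half_life_months=6.0):
    doses = np.asarray(doses, dtype=float)        # doses[0] = most recent month
    lags = np.arange(doses.size)
    weights = 0.5 ** (lags / half_life_months)
    return float(np.sum(weights * doses))

history = [5, 5, 10, 10, 0, 0, 0, 20, 20, 20, 0, 0]   # mg/day by month, most recent first
print(weighted_cumulative_exposure(history))
```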
Black Hole Mergers in Galactic Nuclei Induced by the Eccentric Kozai–Lidov Effect
NASA Astrophysics Data System (ADS)
Hoang, Bao-Minh; Naoz, Smadar; Kocsis, Bence; Rasio, Frederic A.; Dosopoulou, Fani
2018-04-01
Nuclear star clusters around a central massive black hole (MBH) are expected to be abundant in stellar black hole (BH) remnants and BH–BH binaries. These binaries form a hierarchical triple system with the central MBH, and gravitational perturbations from the MBH can cause high-eccentricity excitation in the BH–BH binary orbit. During this process, the eccentricity may approach unity, and the pericenter distance may become sufficiently small so that gravitational-wave emission drives the BH–BH binary to merge. In this work, we construct a simple proof-of-concept model for this process, and specifically, we study the eccentric Kozai–Lidov mechanism in unequal-mass, soft BH–BH binaries. Our model is based on a set of Monte Carlo simulations for BH–BH binaries in galactic nuclei, taking into account quadrupole- and octupole-level secular perturbations, general relativistic precession, and gravitational-wave emission. For a typical steady-state number of BH–BH binaries, our model predicts a total merger rate of ∼1–3 Gpc⁻³ yr⁻¹, depending on the assumed density profile in the nucleus. Thus, our mechanism could potentially compete with other dynamical formation processes for merging BH–BH binaries, such as the interactions of stellar BHs in globular clusters or in nuclear star clusters without an MBH.
Binder, Harald; Porzelius, Christine; Schumacher, Martin
2011-03-01
Analysis of molecular data promises identification of biomarkers for improving prognostic models, thus potentially enabling better patient management. For identifying such biomarkers, risk prediction models can be employed that link high-dimensional molecular covariate data to a clinical endpoint. In low-dimensional settings, a multitude of statistical techniques already exists for building such models, e.g. allowing for variable selection or for quantifying the added value of a new biomarker. We provide an overview of techniques for regularized estimation that transfer this toward high-dimensional settings, with a focus on models for time-to-event endpoints. Techniques for incorporating specific covariate structure are discussed, as well as techniques for dealing with more complex endpoints. Employing gene expression data from patients with diffuse large B-cell lymphoma, some typical modeling issues from low-dimensional settings are illustrated in a high-dimensional application. First, the performance of classical stepwise regression is compared to stage-wise regression, as implemented by a component-wise likelihood-based boosting approach. A second issue arises when artificially transforming the response into a binary variable. The effects of the resulting loss of efficiency and potential bias in a high-dimensional setting are illustrated, and a link to competing risks models is provided. Finally, we discuss conditions for adequately quantifying the added value of high-dimensional gene expression measurements, both at the stage of model fitting and when performing evaluation. Copyright © 2011 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
A Photometric Study of the Eclipsing Binary Star PY Boötis
NASA Astrophysics Data System (ADS)
Michaels, E. J.
2016-12-01
Presented here is the first precision multi-band CCD photometry of the eclipsing binary star PY Boötis. Best-fit stellar models were determined by analyzing the light curves with the Wilson-Devinney program. Asymmetries in the light curves were interpreted as resulting from magnetic activity, which required spots to be included in the model. The resulting model is consistent with a W-type contact eclipsing binary having total eclipses.
Embedded binaries and their dense cores
NASA Astrophysics Data System (ADS)
Sadavoy, Sarah I.; Stahler, Steven W.
2017-08-01
We explore the relationship between young, embedded binaries and their parent cores, using observations within the Perseus Molecular Cloud. We combine recently published Very Large Array observations of young stars with core properties obtained from Submillimetre Common-User Bolometer Array 2 observations at 850 μm. Most embedded binary systems are found towards the centres of their parent cores, although several systems have components closer to the core edge. Wide binaries, defined as those systems with physical separations greater than 500 au, show a tendency to be aligned with the long axes of their parent cores, whereas tight binaries show no preferred orientation. We test a number of simple, evolutionary models to account for the observed populations of Class 0 and I sources, both single and binary. In the model that best explains the observations, all stars form initially as wide binaries. These binaries either break up into separate stars or else shrink into tighter orbits. Under the assumption that both stars remain embedded following binary break-up, we find a total star formation rate of 168 Myr⁻¹. Alternatively, one star may be ejected from the dense core due to binary break-up. This latter assumption results in a star formation rate of 247 Myr⁻¹. Both production rates are in satisfactory agreement with current estimates from other studies of Perseus. Future observations should be able to distinguish between these two possibilities. If our model continues to provide a good fit to other star-forming regions, then the mass fraction of dense cores that becomes stars is double what is currently believed.
Efficient logistic regression designs under an imperfect population identifier.
Albert, Paul S; Liu, Aiyi; Nansel, Tonja
2014-03-01
Motivated by actual study designs, this article considers efficient logistic regression designs where the population is identified with a binary test that is subject to diagnostic error. We consider the case where the imperfect test is obtained on all participants, while the gold standard test is measured on a small chosen subsample. Under maximum-likelihood estimation, we evaluate the optimal design in terms of sample selection as well as verification. We show that there may be substantial efficiency gains by choosing a small percentage of individuals who test negative on the imperfect test for inclusion in the sample (e.g., verifying 90% test-positive cases). We also show that a two-stage design may be a good practical alternative to a fixed design in some situations. Under optimal and nearly optimal designs, we compare maximum-likelihood and semi-parametric efficient estimators under correct and misspecified models with simulations. The methodology is illustrated with an analysis from a diabetes behavioral intervention trial. © 2013, The International Biometric Society.
A Predictive Model of Daily Seismic Activity Induced by Mining, Developed with Data Mining Methods
NASA Astrophysics Data System (ADS)
Jakubowski, Jacek
2014-12-01
The article presents the development and evaluation of a predictive classification model of daily seismic energy emissions induced by longwall mining in sector XVI of the Piast coal mine in Poland. The model uses data on tremor energy, basic characteristics of the longwall face and mined output in this sector over the period from July 1987 to March 2011. The predicted binary variable is the occurrence of a daily sum of tremor seismic energies in a longwall that is greater than or equal to the threshold value of 10⁵ J. Three data mining analytical methods were applied: logistic regression, neural networks, and stochastic gradient boosted trees. The boosted trees model was chosen as the best for the purposes of the prediction. The validation sample results showed its good predictive capability, taking the complex nature of the phenomenon into account. This may indicate the applied model's suitability for sequential, short-term prediction of mining-induced seismic activity.
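A minimal sketch of a stochastic gradient boosted trees classifier for a binary exceedance target, using scikit-learn on synthetic stand-in features (mined output, a face characteristic, previous-day energy); none of the values reflect the Piast mine data.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

# Synthetic stand-ins: the target is 1 when the daily summed tremor
# energy exceeds an arbitrary threshold. Feature meanings are assumed.
rng = np.random.default_rng(11)
n = 3000
X = np.column_stack([
    rng.normal(1000, 200, n),     # daily mined output
    rng.normal(150, 30, n),       # longwall-face characteristic (proxy)
    rng.gamma(2.0, 1e4, n),       # previous day's summed tremor energy, J
])
y = (X[:, 2] * (1 + 0.001 * (X[:, 0] - 1000)) + rng.normal(0, 2e4, n) > 5e4).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = GradientBoostingClassifier(n_estimators=300, learning_rate=0.05, max_depth=3)
clf.fit(X_tr, y_tr)
print("validation AUC:", roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))
```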
Merisalu, Eda; Männik, Georg; Põlluste, Kaja
2014-01-01
The aim of the study was to explore the role of managerial style, work environment factors and burnout in determining job satisfaction during the implementation of quality improvement activities in a dental clinic. Quantitative research was carried out using a prestructured anonymous questionnaire to survey 302 respondents in Kaarli Dental Clinic, Estonia. Dental clinic staff assessed job satisfaction, managerial style, work stress and burnout levels through the implementation period of the ISO 9000 quality management system in 2003 and annually during 2006-2009. Binary logistic regression was used to explain the impact of satisfaction with management and work organisation, knowledge about managerial activities, work environment and psychosocial stress and burnout on job satisfaction. The response rates ranged between 60% and 89.6%. Job satisfaction increased significantly from 2003 to 2006, and the percentage of very satisfied staff increased from 17% to 38% (p<0.01) over this period. In 2007, the proportion of very satisfied staff dropped to 21% before increasing again in 2008-2009 (from 24% to 35%). Binary logistic regression analysis resulted in a model that included five groups of factors: managerial support, information about results achieved and progress towards goals, work organisation and working environment, as well as factors related to career, security and planning. The average scores of emotional exhaustion showed a significant decrease, correlating negatively with job satisfaction (p<0.05). The implementation of quality improvement activities in the Kaarli Dental Clinic has improved the work environment by decreasing burnout symptoms and increased job satisfaction among staff.
NASA Astrophysics Data System (ADS)
Thibodeau, Eric; Gheribi, Aimen E.; Jung, In-Ho
2016-04-01
A structural molar volume model was developed to accurately reproduce the molar volume of molten oxides. As the non-linearity of molar volume is related to the change in structure of molten oxides, the silicate tetrahedral Q-species, calculated from the modified quasichemical model with an optimized thermodynamic database, were used as basic structural units in the present model. Experimental molar volume data for unary and binary melts in the Li2O-Na2O-K2O-MgO-CaO-MnO-PbO-Al2O3-SiO2 system were critically evaluated. The molar volumes of unary oxide components and binary Q-species, which are model parameters of the present structural model, were determined to accurately reproduce the experimental data across the entire binary composition in a wide range of temperatures. The non-linear behavior of molar volume and thermal expansivity of binary melt depending on SiO2 content are well reproduced by the present model.
Photometric Analysis and Modeling of Five Mass-Transferring Binary Systems
NASA Astrophysics Data System (ADS)
Geist, Emily; Beaky, Matthew; Jamison, Kate
2018-01-01
In overcontact eclipsing binary systems, both stellar components have overfilled their Roche lobes, resulting in a dumbbell-shaped shared envelope. Mass transfer is common in overcontact binaries, and it can be observed as a slow change in the rotation period of the system. We studied five overcontact eclipsing binary systems with evidence of period change, and thus likely mass transfer between the components, identified by Nelson (2014): V0579 Lyr, KN Vul, V0406 Lyr, V2240 Cyg, and MS Her. We used the 31-inch NURO telescope at Lowell Observatory in Flagstaff, Arizona to obtain images in B, V, R, and I filters for V0579 Lyr, and the 16-inch Meade LX200GPS telescope with attached SBIG ST-8XME CCD camera at Juniata College in Huntingdon, Pennsylvania to image KN Vul, V0406 Lyr, V2240 Cyg, and MS Her, also in B, V, R, and I. After data reduction, we created light curves for each of the systems and modeled the eclipsing binaries using the BinaryMaker3 and PHOEBE programs to determine their fundamental physical parameters for the first time. Complete light curves and preliminary models for each of these neglected eclipsing binary systems will be presented.
Classification of Stellar Orbits in Axisymmetric Galaxies
NASA Astrophysics Data System (ADS)
Li, Baile; Holley-Bockelmann, Kelly; Khan, Fazeel Mahmood
2015-09-01
It is known that two supermassive black holes (SMBHs) cannot merge in a spherical galaxy within a Hubble time; an emerging picture is that galaxy geometry, rotation, and large potential perturbations may usher the SMBH binary through the critical three-body scattering phase and ultimately drive the SMBH to coalesce. We explore the orbital content within an N-body model of a mildly flattened, non-rotating, SMBH-embedded elliptical galaxy. When used as the foundation for a study on the SMBH binary coalescence, the black holes bypassed the binary stalling often seen within spherical galaxies and merged on gigayear timescales. Using both frequency-mapping and angular momentum criteria, we identify a wealth of resonant orbits in the axisymmetric model, including saucers, that are absent from an otherwise identical spherical system and that can potentially interact with the binary. We quantified the set of orbits that could be scattered by the SMBH binary, and found that the axisymmetric model contained nearly six times the number of these potential loss cone orbits compared to our equivalent spherical model. In this flattened model, the mass of these orbits is more than three times that of the SMBH, which is consistent with what the SMBH binary needs to scatter to transition into the gravitational wave regime.
CLASSIFICATION OF STELLAR ORBITS IN AXISYMMETRIC GALAXIES
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Baile; Holley-Bockelmann, Kelly; Khan, Fazeel Mahmood, E-mail: baile.li@vanderbilt.edu, E-mail: k.holley@vanderbilt.edu, E-mail: khanfazeel.ist@gmail.com
2015-09-20
It is known that two supermassive black holes (SMBHs) cannot merge in a spherical galaxy within a Hubble time; an emerging picture is that galaxy geometry, rotation, and large potential perturbations may usher the SMBH binary through the critical three-body scattering phase and ultimately drive the SMBH to coalesce. We explore the orbital content within an N-body model of a mildly flattened, non-rotating, SMBH-embedded elliptical galaxy. When used as the foundation for a study on the SMBH binary coalescence, the black holes bypassed the binary stalling often seen within spherical galaxies and merged on gigayear timescales. Using both frequency-mapping and angular momentum criteria, we identify a wealth of resonant orbits in the axisymmetric model, including saucers, that are absent from an otherwise identical spherical system and that can potentially interact with the binary. We quantified the set of orbits that could be scattered by the SMBH binary, and found that the axisymmetric model contained nearly six times the number of these potential loss cone orbits compared to our equivalent spherical model. In this flattened model, the mass of these orbits is more than three times that of the SMBH, which is consistent with what the SMBH binary needs to scatter to transition into the gravitational wave regime.
Congdon, Peter; Lloyd, Patsy
2011-02-01
To estimate Toxocara infection rates by age, gender and ethnicity for US counties using data from the National Health and Nutrition Examination Survey (NHANES). After initial analysis to account for missing data, a binary regression model is applied to obtain relative risks of Toxocara infection for 20,396 survey subjects. The regression incorporates interplay between demographic attributes (age, ethnicity and gender), family poverty and geographic context (region, metropolitan status). Prevalence estimates for counties are then made, distinguishing between subpopulations in poverty and not in poverty. Even after allowing for elevated infection risk associated with poverty, seropositivity is elevated among Black non-Hispanics and other ethnic groups. There are also distinct effects of region. When regression results are translated into county prevalence estimates, the main influences on variation in county rates are percentages of non-Hispanic Blacks and county poverty. For targeting prevention it is important to assess implications of national survey data for small area prevalence. Using data from NHANES, the study confirms that both individual level risk factors and geographic contextual factors affect chances of Toxocara infection.
Accuracy of inference on the physics of binary evolution from gravitational-wave observations
NASA Astrophysics Data System (ADS)
Barrett, Jim W.; Gaebel, Sebastian M.; Neijssel, Coenraad J.; Vigna-Gómez, Alejandro; Stevenson, Simon; Berry, Christopher P. L.; Farr, Will M.; Mandel, Ilya
2018-04-01
The properties of the population of merging binary black holes encode some of the uncertain physics underlying the evolution of massive stars in binaries. The binary black hole merger rate and chirp-mass distribution are being measured by ground-based gravitational-wave detectors. We consider isolated binary evolution, and explore how accurately the physical model can be constrained with such observations by applying the Fisher information matrix to the merging black hole population simulated with the rapid binary-population synthesis code COMPAS. We investigate variations in four COMPAS parameters: common-envelope efficiency, kick-velocity dispersion, and mass-loss rates during the luminous blue variable and Wolf-Rayet stellar-evolutionary phases. We find that ˜1000 observations would constrain these model parameters to a fractional accuracy of a few per cent. Given the empirically determined binary black hole merger rate, we can expect gravitational-wave observations alone to place strong constraints on the physics of stellar and binary evolution within a few years. Our approach can be extended to use other observational data sets; combining observations at different evolutionary stages will lead to a better understanding of stellar and binary physics.
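A generic sketch of the Fisher-matrix step: the matrix is approximated by finite-differencing a log-likelihood around the best-fit population parameters and inverted to obtain fractional uncertainties. The Gaussian chirp-mass likelihood below is a stand-in, not the COMPAS population model.

```python
import numpy as np

# Stand-in log-likelihood: Gaussian model for observed "chirp masses"
# with parameters (mean, log of standard deviation).
def log_likelihood(theta, data):
    mu, log_sigma = theta
    sigma = np.exp(log_sigma)
    return float(np.sum(-0.5 * ((data - mu) / sigma) ** 2 - np.log(sigma)))

def fisher_matrix(loglike, theta, data, eps=1e-4):
    """Fisher (observed information) matrix via central finite differences."""
    k = len(theta)
    F = np.zeros((k, k))
    for i in range(k):
        for j in range(k):
            t = np.array(theta, dtype=float)
            def f(di, dj):
                t2 = t.copy(); t2[i] += di; t2[j] += dj
                return loglike(t2, data)
            # negative second derivative of the log-likelihood
            F[i, j] = -(f(eps, eps) - f(eps, -eps) - f(-eps, eps) + f(-eps, -eps)) / (4 * eps ** 2)
    return F

data = np.random.default_rng(3).normal(30.0, 5.0, 1000)    # synthetic observations
theta_hat = np.array([data.mean(), np.log(data.std())])
F = fisher_matrix(log_likelihood, theta_hat, data)
sigma_theta = np.sqrt(np.diag(np.linalg.inv(F)))
print("fractional uncertainties:", sigma_theta / np.abs(theta_hat))
```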
Accuracy of inference on the physics of binary evolution from gravitational-wave observations
NASA Astrophysics Data System (ADS)
Barrett, Jim W.; Gaebel, Sebastian M.; Neijssel, Coenraad J.; Vigna-Gómez, Alejandro; Stevenson, Simon; Berry, Christopher P. L.; Farr, Will M.; Mandel, Ilya
2018-07-01
The properties of the population of merging binary black holes encode some of the uncertain physics underlying the evolution of massive stars in binaries. The binary black hole merger rate and chirp-mass distribution are being measured by ground-based gravitational-wave detectors. We consider isolated binary evolution, and explore how accurately the physical model can be constrained with such observations by applying the Fisher information matrix to the merging black hole population simulated with the rapid binary-population synthesis code COMPAS. We investigate variations in four COMPAS parameters: common-envelope efficiency, kick-velocity dispersion, and mass-loss rates during the luminous blue variable and Wolf-Rayet stellar-evolutionary phases. We find that ˜1000 observations would constrain these model parameters to a fractional accuracy of a few per cent. Given the empirically determined binary black hole merger rate, we can expect gravitational-wave observations alone to place strong constraints on the physics of stellar and binary evolution within a few years. Our approach can be extended to use other observational data sets; combining observations at different evolutionary stages will lead to a better understanding of stellar and binary physics.
An, Shengli; Zhang, Yanhong; Chen, Zheng
2012-12-01
To analyze binary classification repeated measurement data with generalized estimating equations (GEE) and generalized linear mixed models (GLMMs) using SPSS 19.0. GEE and GLMM models were tested on a sample of binary classification repeated measurement data in SPSS 19.0. Compared with SAS, SPSS 19.0 allowed convenient analysis of categorical repeated measurement data using GEE and GLMMs.
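An equivalent analysis can be sketched outside SPSS; below, a GEE with a binomial family and exchangeable working correlation is fitted to synthetic repeated binary measurements using Python's statsmodels, purely to illustrate the model form.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Synthetic longitudinal data: 100 subjects, 4 visits, binary outcome.
rng = np.random.default_rng(21)
n_subj, n_visits = 100, 4
df = pd.DataFrame({
    "subject": np.repeat(np.arange(n_subj), n_visits),
    "visit":   np.tile(np.arange(n_visits), n_subj),
    "treat":   np.repeat(rng.binomial(1, 0.5, n_subj), n_visits),
})
re = np.repeat(rng.normal(0, 1, n_subj), n_visits)           # subject-level effect
p = 1 / (1 + np.exp(-(-0.5 + 0.8 * df["treat"] + 0.3 * df["visit"] + re)))
df["y"] = rng.binomial(1, p)

gee = smf.gee("y ~ treat + visit", groups="subject", data=df,
              family=sm.families.Binomial(),
              cov_struct=sm.cov_struct.Exchangeable()).fit()
print(gee.summary())
```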
Chen, Weijie; Wunderlich, Adam; Petrick, Nicholas; Gallas, Brandon D
2014-10-01
We treat multireader multicase (MRMC) reader studies for which a reader's diagnostic assessment is converted to binary agreement (1: agree with the truth state, 0: disagree with the truth state). We present a mathematical model for simulating binary MRMC data with a desired correlation structure across readers, cases, and two modalities, assuming the expected probability of agreement is equal for the two modalities (P1 = P2). This model can be used to validate the coverage probabilities of 95% confidence intervals (of P1, P2, or P1 − P2 when P1 − P2 = 0), validate the type I error of a superiority hypothesis test, and size a noninferiority hypothesis test (which assumes P1 = P2). To illustrate the utility of our simulation model, we adapt the Obuchowski-Rockette-Hillis (ORH) method for the analysis of MRMC binary agreement data. Moreover, we use our simulation model to validate the ORH method for binary data and to illustrate sizing in a noninferiority setting. Our software package is publicly available on the Google code project hosting site for use in simulation, analysis, validation, and sizing of MRMC reader studies with binary agreement data.
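A rough sketch of generating correlated binary agreement scores via a latent-Gaussian threshold construction, with reader and case random effects inducing correlation across readers and cases; this is an illustrative stand-in with assumed variance components, not the paper's simulation model.

```python
import numpy as np
from scipy.stats import norm

# Latent-Gaussian construction: latent = reader effect + case effect + noise,
# thresholded so that the marginal P(agree) hits a chosen target.
rng = np.random.default_rng(5)
n_readers, n_cases = 5, 100
p_agree = 0.8                                      # target marginal agreement probability
var_reader, var_case, var_noise = 0.3, 0.5, 1.0    # assumed variance components
total_sd = np.sqrt(var_reader + var_case + var_noise)

reader_eff = rng.normal(0, np.sqrt(var_reader), (n_readers, 1))
case_eff = rng.normal(0, np.sqrt(var_case), (1, n_cases))
noise = rng.normal(0, np.sqrt(var_noise), (n_readers, n_cases))
latent = reader_eff + case_eff + noise

threshold = norm.ppf(1 - p_agree) * total_sd       # marginal P(latent > threshold) = p_agree
agreement = (latent > threshold).astype(int)       # readers x cases binary scores
print(agreement.mean())                            # should be near 0.8
```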
Chen, Weijie; Wunderlich, Adam; Petrick, Nicholas; Gallas, Brandon D.
2014-01-01
Abstract. We treat multireader multicase (MRMC) reader studies for which a reader’s diagnostic assessment is converted to binary agreement (1: agree with the truth state, 0: disagree with the truth state). We present a mathematical model for simulating binary MRMC data with a desired correlation structure across readers, cases, and two modalities, assuming the expected probability of agreement is equal for the two modalities (P1=P2). This model can be used to validate the coverage probabilities of 95% confidence intervals (of P1, P2, or P1−P2 when P1−P2=0), validate the type I error of a superiority hypothesis test, and size a noninferiority hypothesis test (which assumes P1=P2). To illustrate the utility of our simulation model, we adapt the Obuchowski–Rockette–Hillis (ORH) method for the analysis of MRMC binary agreement data. Moreover, we use our simulation model to validate the ORH method for binary data and to illustrate sizing in a noninferiority setting. Our software package is publicly available on the Google code project hosting site for use in simulation, analysis, validation, and sizing of MRMC reader studies with binary agreement data. PMID:26158051
Ghanem, Eman; Hopfer, Helene; Navarro, Andrea; Ritzer, Maxwell S; Mahmood, Lina; Fredell, Morgan; Cubley, Ashley; Bolen, Jessica; Fattah, Rabia; Teasdale, Katherine; Lieu, Linh; Chua, Tedmund; Marini, Federico; Heymann, Hildegarde; Anslyn, Eric V
2015-05-20
Differential sensing using synthetic receptors as mimics of the mammalian senses of taste and smell is a powerful approach for the analysis of complex mixtures. Herein, we report on the effectiveness of a cross-reactive, supramolecular, peptide-based sensing array in differentiating and predicting the composition of red wine blends. Fifteen blends of Cabernet Sauvignon, Merlot and Cabernet Franc, in addition to the mono varietals, were used in this investigation. Linear Discriminant Analysis (LDA) showed a clear differentiation of blends based on tannin concentration and composition, where certain mono varietals like Cabernet Sauvignon seemed to contribute less to the overall characteristics of the blend. Partial Least Squares (PLS) Regression and cross-validation were used to build a predictive model for the responses of the receptors to eleven binary blends and the three mono varietals. The optimized model was later used to predict the percentage of each mono varietal in an independent test set composed of four tri-blends, with a 15% average error. A partial least squares regression model using the mouth-feel and taste descriptive sensory attributes of the wine blends revealed a strong correlation of the receptors to perceived astringency, which is indicative of selective binding to polyphenols in wine.
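The PLS step can be sketched as follows: receptor-array responses (X) are regressed on blend composition (fractions of three varietals) and evaluated by cross-validation. The synthetic responses and blend design below are assumptions for illustration, not the study's array data.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

# Synthetic sensing-array responses generated from random "loadings";
# composition rows are fractions of three varietals summing to 1.
rng = np.random.default_rng(8)
n_blends, n_receptors = 15, 12
composition = rng.dirichlet(np.ones(3), size=n_blends)
loadings = rng.normal(0, 1, (3, n_receptors))
X = composition @ loadings + rng.normal(0, 0.05, (n_blends, n_receptors))

pls = PLSRegression(n_components=2)
pred = cross_val_predict(pls, X, composition, cv=5)
err = np.abs(pred - composition).mean() * 100
print(f"mean absolute error in predicted varietal percentage: {err:.1f}%")
```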
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aksu, Z.; Acikel, U.; Kutsal, T.
1999-02-01
Although the biosorption of single metal ions to various kinds of microorganisms has been extensively studied and adsorption isotherms have been developed for only the single metal ion situation, very little attention has been given to the bioremoval and the expression of adsorption isotherms for multimetal ion systems. In this study the simultaneous biosorption of copper(II) and chromium(VI) to Chlorella vulgaris from a binary metal mixture was studied and compared with the single metal ion situation in a batch stirred system. The effects of pH and single- and dual-metal ion concentrations on the equilibrium uptakes were investigated. In previous studies the optimum biosorption pH had been determined as 4.0 for copper(II) and as 2.0 for chromium(VI). Multimetal ion biosorption studies were performed at these two pH values. It was observed that the equilibrium uptakes of copper(II) or chromium(VI) ions changed with the biosorption pH and the presence of the other metal ion. Adsorption isotherms were developed for both single- and dual-metal ion systems at these two pH values, and expressed by the mono- and multicomponent Langmuir and Freundlich adsorption models. Model parameters were estimated by nonlinear regression. It was seen that the adsorption equilibrium data fitted very well to the competitive Freundlich model in the concentration ranges studied.
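As a sketch of the isotherm-fitting step, the code below fits a single-component Freundlich isotherm, q = K_F * C^(1/n), to illustrative equilibrium data by nonlinear regression; the data points are invented, and the competitive (multicomponent) form would additionally include the co-ion concentration.

```python
import numpy as np
from scipy.optimize import curve_fit

# Single-component Freundlich isotherm fitted by nonlinear least squares.
def freundlich(C, K_F, n):
    return K_F * C ** (1.0 / n)

C_eq = np.array([5, 10, 25, 50, 100, 200], dtype=float)   # equilibrium concentration, mg/L
q_eq = np.array([8.1, 11.9, 18.4, 24.7, 33.0, 43.5])      # equilibrium uptake, mg metal / g biomass

params, _ = curve_fit(freundlich, C_eq, q_eq, p0=[1.0, 1.5])
K_F, n = params
print(f"K_F = {K_F:.2f}, n = {n:.2f}")
```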
Multiphase, multicomponent phase behavior prediction
NASA Astrophysics Data System (ADS)
Dadmohammadi, Younas
Accurate prediction of the phase behavior of fluid mixtures in the chemical industry is essential for designing and operating a multitude of processes. Reliable generalized predictions of phase equilibrium properties, such as pressure, temperature, and phase compositions, offer an attractive alternative to costly and time consuming experimental measurements. The main purpose of this work was to assess the efficacy of recently generalized activity coefficient models based on binary experimental data to (a) predict binary and ternary vapor-liquid equilibrium systems, and (b) characterize liquid-liquid equilibrium systems. These studies were completed using a diverse binary VLE database consisting of 916 binary and 86 ternary systems involving 140 compounds belonging to 31 chemical classes. Specifically, the following tasks were undertaken: First, a comprehensive assessment of the two common approaches (gamma-phi (γ-ϕ) and phi-phi (ϕ-ϕ)) used for determining the phase behavior of vapor-liquid equilibrium systems is presented. Both the representation and predictive capabilities of these two approaches were examined, as delineated from internal and external consistency tests of 916 binary systems. For this purpose, the universal quasi-chemical (UNIQUAC) model and the Peng-Robinson (PR) equation of state (EOS) were used in this assessment. Second, the efficacy of the recently developed generalized UNIQUAC and nonrandom two-liquid (NRTL) models for predicting multicomponent VLE systems was investigated. Third, the abilities of the recently modified NRTL models (mNRTL2 and mNRTL1) to characterize liquid-liquid equilibria (LLE) phase conditions and attributes, including phase stability, miscibility, and consolute point coordinates, were assessed. The results of this work indicate that the ϕ-ϕ approach represents the binary VLE systems considered within three times the error of the γ-ϕ approach. A similar trend was observed for the generalized model predictions using quantitative structure-property relationship (QSPR) parameter generalizations. For ternary systems, where all three constituent binary systems were available, the NRTL-QSPR, UNIQUAC-QSPR, and UNIFAC-6 models produce comparable accuracy. For systems where at least one constituent binary is missing, the UNIFAC-6 model produces larger errors than the QSPR-generalized models. In general, the LLE characterization results indicate the accuracy of the modified models in reproducing the findings of the original NRTL model.
Comparative decision models for anticipating shortage of food grain production in India
NASA Astrophysics Data System (ADS)
Chattopadhyay, Manojit; Mitra, Subrata Kumar
2018-01-01
This paper attempts to predict food shortages in advance from the analysis of rainfall during the monsoon months along with other inputs used for crop production, such as land used for cereal production, percentage of area covered under irrigation and fertiliser use. We used six binary classification data mining models, viz. logistic regression, Multilayer Perceptron, kernel lab-Support Vector Machines, linear discriminant analysis, quadratic discriminant analysis and k-Nearest Neighbors Network, and found that linear discriminant analysis and kernel lab-Support Vector Machines are equally suitable for predicting per capita food shortage, with 89.69% accuracy in overall prediction and 92.06% accuracy in predicting food shortage (true negative rate). Advance information of food shortage can help policy makers to take remedial measures in order to prevent devastating consequences arising out of food non-availability.
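A minimal sketch comparing two of the six classifiers (linear discriminant analysis and an RBF support vector machine) by cross-validation on synthetic stand-ins for the inputs named above; the feature values and resulting accuracies are illustrative only.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic stand-ins for the paper's inputs; the shortage flag is generated
# from an arbitrary linear rule plus noise.
rng = np.random.default_rng(13)
n = 400
X = np.column_stack([
    rng.normal(900, 200, n),     # monsoon rainfall, mm
    rng.uniform(20, 60, n),      # % area under irrigation
    rng.normal(120, 30, n),      # fertiliser use, kg/ha
    rng.normal(100, 10, n),      # cereal-area index
])
risk = -0.004 * X[:, 0] - 0.03 * X[:, 1] - 0.01 * X[:, 2] + 6 + rng.normal(0, 1, n)
y = (risk > 0).astype(int)       # 1 = per capita food shortage

for name, clf in [("LDA", LinearDiscriminantAnalysis()),
                  ("SVM", make_pipeline(StandardScaler(), SVC(kernel="rbf")))]:
    acc = cross_val_score(clf, X, y, cv=5).mean()
    print(f"{name}: {acc:.3f}")
```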
School bullying and traumatic dental injuries in East London adolescents.
Agel, M; Marcenes, W; Stansfeld, S A; Bernabé, E
2014-12-01
To explore the association between school bullying and traumatic dental injuries (TDI) among 15-16-year-old school children from East London. Data from phase III of the Research with East London Adolescents Community Health Survey (RELACHS), a school-based prospective study of a representative sample of adolescents, were analysed. Adolescents provided information on demographic characteristics, socioeconomic measures and frequency of bullying in school through self-administered questionnaires and were clinically examined for overjet, lip coverage and TDI. The association between school bullying and TDI was assessed using binary logistic regression models. The prevalence of TDI was 17%, while lifetime and current prevalence of bullying was 32% and 11%, respectively. The prevalence of TDI increased with a growing frequency of bullying; from 16% among adolescents who had never been bullied at school, to 21% among those who were bullied in the past but not this school term, to 22% for those who were bullied this school term. However, this association was not statistically significant either in crude or adjusted regression models. There was no evidence of an association between frequency of school bullying and TDI in this sample of 15-16-year-old adolescents in East London.
NASA Astrophysics Data System (ADS)
Hegazy, Maha A.; Lotfy, Hayam M.; Rezk, Mamdouh R.; Omran, Yasmin Rostom
2015-04-01
Smart and novel spectrophotometric and chemometric methods have been developed and validated for the simultaneous determination of a binary mixture of chloramphenicol (CPL) and dexamethasone sodium phosphate (DSP) in the presence of interfering substances without prior separation. The first method depends upon derivative subtraction coupled with constant multiplication. The second is the ratio difference method at optimum wavelengths, which were selected after applying a derivative transformation method via multiplication by a decoding spectrum in order to cancel the contribution of non-labeled interfering substances. The third method relies on partial least squares with regression model updating. The methods are so simple that they do not require any preliminary separation steps. Accuracy, precision and linearity ranges of these methods were determined. Moreover, specificity was assessed by analyzing synthetic mixtures of both drugs. The proposed methods were successfully applied for the analysis of both drugs in their pharmaceutical formulation. The obtained results have been statistically compared to those of an official spectrophotometric method, leading to the conclusion that there is no significant difference between the proposed methods and the official ones with respect to accuracy and precision.
Kudumija Slijepcevic, Marija; Jukic, Vlado; Novalic, Darko; Zarkovic-Palijan, Tija; Milosevic, Milan; Rosenzweig, Ivana
2014-04-01
To determine predictive risk factors for violent offending in patients with paranoid schizophrenia in Croatia. The cross-sectional study, including male in-patients with paranoid schizophrenia with (N=104) and without (N=102) a history of physical violence and violent offending, was conducted simultaneously in several hospitals in Croatia during a one-year period (2010-2011). Data on their sociodemographic characteristics, duration of untreated illness phase (DUP), alcohol abuse, suicidal behavior, personality features, and insight into illness were collected and compared between groups. A binary logistic regression model was used to determine the predictors of violent offending. Predictors of violent offending were older age, DUP before first contact with psychiatric services, and alcohol abuse. The regression model showed that the strongest positive predictive factor was harmful alcohol use, as determined by the AUDIT test (odds ratio 37.01; 95% confidence interval 5.20-263.24). Psychopathy, emotional stability, and conscientiousness were significant positive predictive factors, while extroversion, pleasantness, and intellect were significant negative predictive factors for violent offending. This study found an association between alcohol abuse and the risk of violent offending in paranoid schizophrenia. We hope that this finding will help improve public and mental health prevention strategies in this vulnerable patient group.
Gossett, Dana R; Deibel, Philip; Lewicky-Gaupp, Christina
2016-02-01
To estimate the relationship between a passive second stage of labor and obstetric anal sphincter injuries (OASIS). A retrospective, case-control study was undertaken of women who delivered at a tertiary-care center in Chicago, IL, USA, between November 2005 and December 2012. Cases had sustained OASIS and were matched on the basis of parity with controls who had no OASIS. Data were obtained from an electronic repository and chart review. Participants with a passive second stage of labor lasting 60 minutes or more were deemed to have "labored down." A logistic regression model to predict OASIS was created. Overall, 1629 cases were compared with 1312 controls. OASIS were recorded among 1452 (57.8%) of 2510 women who did not labor down compared with 169 (40.0%) of 423 women who labored down (P<0.001). However, in binary logistic regression, the addition of laboring down to the model only increased the predictive accuracy from 80.1% to 80.7%. When known risk factors for OASIS are accounted for, the effect of laboring down on perineal outcome is negligible. Copyright © 2015 International Federation of Gynecology and Obstetrics. Published by Elsevier Ireland Ltd. All rights reserved.
Logistic Regression and Path Analysis Method to Analyze Factors influencing Students’ Achievement
NASA Astrophysics Data System (ADS)
Noeryanti, N.; Suryowati, K.; Setyawan, Y.; Aulia, R. R.
2018-04-01
Students' academic achievement cannot be separated from the influence of two sets of factors, namely internal and external factors. The internal factors, those within the student, consist of intelligence (X1), health (X2), interest (X3), and motivation (X4). The external factors consist of the family environment (X5), school environment (X6), and society environment (X7). The objects of this research are eighth grade students of the school year 2016/2017 at SMPN 1 Jiwan Madiun, sampled by using simple random sampling. Primary data are obtained by distributing questionnaires. The method used in this study is binary logistic regression analysis, which aims to identify the internal and external factors that affect students' achievement and the direction of their effects. Path analysis was used to determine the factors that influence achievement directly, indirectly or totally. Based on the results of the binary logistic regression, the variables that affect students' achievement are interest and motivation. Based on the results obtained by path analysis, the factors that have a direct impact on students' achievement are students' interest (59%) and students' motivation (27%), while the factors that have an indirect influence on students' achievement are the family environment (97%) and the school environment (37%).
NASA Astrophysics Data System (ADS)
Wu, Hongjie; Yuan, Shifei; Zhang, Xi; Yin, Chengliang; Ma, Xuerui
2015-08-01
To improve the suitability of a lithium-ion battery model under varying scenarios, such as fluctuating temperature and SoC variation, a dynamic model whose parameters are updated in real time should be developed. In this paper, an incremental analysis-based auto-regressive exogenous (I-ARX) modeling method is proposed to eliminate the modeling error caused by the OCV effect and improve the accuracy of parameter estimation. Then, its numerical stability, modeling error, and parametric sensitivity are analyzed at different sampling rates (0.02, 0.1, 0.5 and 1 s). To identify the model parameters recursively, a bias-correction recursive least squares (CRLS) algorithm is applied. Finally, pseudo-random binary sequence (PRBS) and urban dynamic driving sequence (UDDS) profiles are applied to verify the real-time performance and robustness of the newly proposed model and algorithm. Different sampling rates (1 Hz and 10 Hz) and multiple temperature points (5, 25, and 45 °C) are covered in our experiments. The experimental and simulation results indicate that the proposed I-ARX model offers high accuracy and suitability for parameter identification without using the open-circuit voltage.
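As a rough illustration of recursive parameter identification for an ARX-type model, the sketch below implements a plain recursive least squares update with a forgetting factor (not the bias-corrected CRLS or the incremental I-ARX formulation proposed in the paper; data and parameter values are made up):

    import numpy as np

    def rls_update(theta, P, phi, y, lam=0.999):
        # One recursive least squares step with forgetting factor lam.
        # theta: parameters, P: covariance, phi: regressors, y: new measurement.
        phi = phi.reshape(-1, 1)
        K = P @ phi / (lam + phi.T @ P @ phi)          # gain vector
        theta = theta + (K * (y - phi.T @ theta)).ravel()
        P = (P - K @ phi.T @ P) / lam                  # covariance update
        return theta, P

    # Toy first-order ARX model: v_k = a*v_{k-1} + b*i_k (hypothetical battery data)
    rng = np.random.default_rng(1)
    i = rng.normal(0, 1, 500)                          # PRBS-like excitation current
    v = np.zeros(500)
    for k in range(1, 500):
        v[k] = 0.95 * v[k-1] + 0.02 * i[k] + rng.normal(0, 1e-3)

    theta, P = np.zeros(2), np.eye(2) * 1e3
    for k in range(1, 500):
        theta, P = rls_update(theta, P, np.array([v[k-1], i[k]]), v[k])
    print(theta)  # should approach the true values [0.95, 0.02]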
Prediction of cold and heat patterns using anthropometric measures based on machine learning.
Lee, Bum Ju; Lee, Jae Chul; Nam, Jiho; Kim, Jong Yeol
2018-01-01
To examine the association of body shape with cold and heat patterns, to determine which anthropometric measure is the best indicator for discriminating between the two patterns, and to investigate whether using a combination of measures can improve the predictive power to diagnose these patterns. Based on a total of 4,859 subjects (3,000 women and 1,859 men), statistical analyses using binary logistic regression were performed to assess the significance of the difference and the predictive power of each anthropometric measure, and binary logistic regression and Naive Bayes with the variable selection technique were used to assess the improvement in the predictive power of the patterns using the combined measures. In women, the strongest indicators for determining the cold and heat patterns among anthropometric measures were body mass index (BMI) and rib circumference; in men, the best indicator was BMI. In experiments using a combination of measures, the values of the area under the receiver operating characteristic curve in women were 0.776 by Naive Bayes and 0.772 by logistic regression, and the values in men were 0.788 by Naive Bayes and 0.779 by logistic regression. Individuals with a higher BMI have a tendency toward a heat pattern in both women and men. The use of a combination of anthropometric measures can slightly improve the diagnostic accuracy. Our findings can provide fundamental information for the diagnosis of cold and heat patterns based on body shape for personalized medicine.
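A minimal sketch of the kind of AUC comparison described above, using scikit-learn and synthetic data in place of the study's anthropometric measurements:

    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.naive_bayes import GaussianNB
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import roc_auc_score

    # Synthetic stand-ins for anthropometric predictors (e.g. BMI, rib circumference)
    X, y = make_classification(n_samples=2000, n_features=5, n_informative=3, random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

    for name, clf in [("logistic regression", LogisticRegression(max_iter=1000)),
                      ("Naive Bayes", GaussianNB())]:
        clf.fit(X_tr, y_tr)
        auc = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
        print(f"{name}: AUC = {auc:.3f}")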
Equilibrium, stability, and orbital evolution of close binary systems
NASA Technical Reports Server (NTRS)
Lai, Dong; Rasio, Frederic A.; Shapiro, Stuart L.
1994-01-01
We present a new analytic study of the equilibrium and stability properties of close binary systems containing polytropic components. Our method is based on the use of ellipsoidal trial functions in an energy variational principle. We consider both synchronized and nonsynchronized systems, constructing the compressible generalizations of the classical Darwin and Darwin-Riemann configurations. Our method can be applied to a wide variety of binary models where the stellar masses, radii, spins, entropies, and polytropic indices are all allowed to vary over wide ranges and independently for each component. We find that both secular and dynamical instabilities can develop before a Roche limit or contact is reached along a sequence of models with decreasing binary separation. High incompressibility always makes a given binary system more susceptible to these instabilities, but the dependence on the mass ratio is more complicated. As simple applications, we construct models of double degenerate systems and of low-mass main-sequence star binaries. We also discuss the orbital evolution of close binary systems under the combined influence of fluid viscosity and secular angular momentum losses from processes like gravitational radiation. We show that the existence of global fluid instabilities can have a profound effect on the terminal evolution of coalescing binaries. The validity of our analytic solutions is examined by means of detailed comparisons with the results of recent numerical fluid calculations in three dimensions.
Castada, Hardy Z; Wick, Cheryl; Harper, W James; Barringer, Sheryl
2015-01-15
Twelve volatile organic compounds (VOCs) have recently been identified as key compounds in Swiss cheese with split defects. It is important to know how these VOCs interact in binary mixtures and if their behavior changes with concentration in binary mixtures. Selected ion flow tube mass spectrometry (SIFT-MS) was used for the headspace analysis of VOCs commonly found in Swiss cheeses. Headspace (H/S) sampling and quantification checks using SIFT-MS and further linear regression analyses were carried out on twelve selected aqueous solutions of VOCs. Five binary mixtures of standard solutions of VOCs were also prepared and the H/S profile of each mixture was analyzed. A very good fit of linearity for the twelve VOCs (95% confidence level) confirms direct proportionality between the H/S and the aqueous concentration of the standard solutions. Henry's Law coefficients were calculated with a high degree of confidence. SIFT-MS analysis of five binary mixtures showed that the more polar compounds reduced the H/S concentration of the less polar compounds, while the addition of a less polar compound increased the H/S concentration of the more polar compound. In the binary experiment, it was shown that the behavior of a compound in the headspace can be significantly affected by the presence of another compound. Thus, the matrix effect plays a significant role in the behavior of molecules in a mixed solution. Copyright © 2014 John Wiley & Sons, Ltd.
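The proportionality check described above amounts to fitting a straight line between aqueous and headspace concentrations; a small numpy sketch with made-up calibration data (the slope plays the role of a dimensionless air-water partition coefficient, one common convention for a Henry's law constant):

    import numpy as np

    # Hypothetical calibration data for one VOC: aqueous concentration (mg/L)
    # versus measured headspace concentration (arbitrary SIFT-MS units)
    c_aq = np.array([0.5, 1.0, 2.0, 4.0, 8.0])
    c_hs = np.array([0.9, 2.1, 3.8, 8.2, 15.9])

    slope, intercept = np.polyfit(c_aq, c_hs, 1)       # least-squares line
    r = np.corrcoef(c_aq, c_hs)[0, 1]
    print(f"slope = {slope:.3f}, intercept = {intercept:.3f}, r^2 = {r**2:.4f}")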
ERIC Educational Resources Information Center
Jung, Youngoh; Schaller, James; Bellini, James
2010-01-01
In this study, the authors investigated the effects of demographic, medical, and vocational rehabilitation service variables on employment outcomes of persons living with HIV/AIDS. Binary logistic regression analyses were conducted to determine predictors of employment outcomes using two groups drawn from Rehabilitation Services Administration…
Dynamics of Volunteering in Older Europeans
ERIC Educational Resources Information Center
Hank, Karsten; Erlinghagen, Marcel
2010-01-01
Purpose: To investigate the dynamics of volunteering in the population aged 50 years or older across 11 Continental European countries. Design and Methods: Using longitudinal data from the first 2 waves of the Survey of Health, Ageing and Retirement in Europe, we run multivariate regressions on a set of binary-dependent variables indicating…
Who Benefits from Tuition Discounts at Public Universities?
ERIC Educational Resources Information Center
Hillman, Nicholas W.
2010-01-01
This article uses data from the 2004 National Postsecondary Student Aid Study to provide insight about the range of tuition discounting practices at public institutions. Specifically, it examines the characteristics of students who receive tuition discounts from public four-year colleges and universities. A binary logistic regression is applied to…
Graduate Unemployment in South Africa: Social Inequality Reproduced
ERIC Educational Resources Information Center
Baldry, Kim
2016-01-01
In this study, I examine the influence of demographic and educational characteristics of South African graduates on their employment/unemployment status. A sample of 1175 respondents who graduated between 2006 and 2012 completed an online survey. Using binary logistic regression, the strongest determinants of unemployment were the graduates' race,…
Commitment of Licensed Social Workers to Aging Practice
ERIC Educational Resources Information Center
Simons, Kelsey; Bonifas, Robin; Gammonley, Denise
2011-01-01
This study sought to identify client, professional, and employment characteristics that enhance licensed social workers' commitment to aging practice. A series of binary logistic regressions were performed using data from 181 licensed, full-time social workers who reported aging as their primary specialty area as part of the 2004 NASW's national…
First photometric study of two southern eclipsing binaries IS Tel and DW Aps
NASA Astrophysics Data System (ADS)
Özer, S.; Sürgit, D.; Erdem, A.; Öztürk, O.
2017-02-01
The paper presents the first photometric analysis of two southern eclipsing binary stars, IS Tel and DW Aps. Their V light curves from the All Sky Automated Survey were modelled using the Wilson-Devinney method. The final models indicate that these two Algol-like binary stars have detached configurations. Absolute parameters of the components of the systems were also estimated.
NASA Astrophysics Data System (ADS)
Almog, Assaf; Garlaschelli, Diego
2014-09-01
The dynamics of complex systems, from financial markets to the brain, can be monitored in terms of multiple time series of activity of the constituent units, such as stocks or neurons, respectively. While the main focus of time series analysis is on the magnitude of temporal increments, a significant piece of information is encoded into the binary projection (i.e. the sign) of such increments. In this paper we provide further evidence of this by showing strong nonlinear relations between binary and non-binary properties of financial time series. These relations are a novel quantification of the fact that extreme price increments occur more often when most stocks move in the same direction. We then introduce an information-theoretic approach to the analysis of the binary signature of single and multiple time series. Through the definition of maximum-entropy ensembles of binary matrices and their mapping to spin models in statistical physics, we quantify the information encoded into the simplest binary properties of real time series and identify the most informative property given a set of measurements. Our formalism is able to accurately replicate, and mathematically characterize, the observed binary/non-binary relations. We also obtain a phase diagram allowing us to identify, based only on the instantaneous aggregate return of a set of multiple time series, a regime where the so-called ‘market mode’ has an optimal interpretation in terms of collective (endogenous) effects, a regime where it is parsimoniously explained by pure noise, and a regime where it can be regarded as a combination of endogenous and exogenous factors. Our approach allows us to connect spin models, simple stochastic processes, and ensembles of time series inferred from partial information.
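To make the notion of a binary projection concrete, the following sketch extracts the signs of increments from simulated series (not market data) and checks how the alignment of signs relates to the magnitude of the aggregate increment:

    import numpy as np

    rng = np.random.default_rng(2)
    T, N = 1000, 50
    # Synthetic increments for N series sharing a weak common factor
    common = rng.normal(0, 1, (T, 1))
    x = 0.4 * common + rng.normal(0, 1, (T, N))

    s = np.sign(x)                        # binary projection of the increments
    coherence = np.abs(s.mean(axis=1))    # how aligned the signs are at each time step
    magnitude = np.abs(x.mean(axis=1))    # size of the aggregate increment

    # Large aggregate increments tend to coincide with strongly aligned signs
    print("corr(coherence, magnitude) =", np.corrcoef(coherence, magnitude)[0, 1])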
Hunting for brown dwarf binaries with X-Shooter
NASA Astrophysics Data System (ADS)
Manjavacas, E.; Goldman, B.; Alcalá, J. M.; Zapatero-Osorio, M. R.; Béjar, B. J. S.; Homeier, D.; Bonnefoy, M.; Smart, R. L.; Henning, T.; Allard, F.
2015-05-01
The refinement of the brown dwarf binary fraction may contribute to the understanding of substellar formation mechanisms. Peculiar brown dwarf spectra, or a discrepancy between optical and near-infrared spectral type classifications, may indicate unresolved brown dwarf binary systems. We obtained medium-resolution spectra of 22 brown dwarfs that are potential binary candidates using X-Shooter at the VLT. We aimed to select brown dwarf binary candidates. We also tested whether the BT-Settl 2014 atmospheric models reproduce the physics in the atmospheres of these objects. To find spectral binaries whose components have different spectral types, we used spectral indices and compared the selected candidates to single spectra and to combinations of two single spectra from libraries, trying to reproduce our X-Shooter spectra. We also created artificial binaries within the same spectral class and tried to find them using the same method as for brown dwarf binaries with different spectral types. We compared our spectra to the BT-Settl 2014 models. We selected six candidates possibly composed of an L plus a T brown dwarf. All candidates except one are better reproduced by a combination of two single brown dwarf spectra than by a single spectrum; the one-sided F-test discarded that object as a binary candidate. We found that we are not able to recover the artificial binaries with components of the same spectral type using the same method used for L plus T brown dwarfs. The best matches to the models gave effective temperatures between 950 K and 1900 K and gravities between 4.0 and 5.5. Some best matches corresponded to supersolar metallicity.
Observational properties of massive black hole binary progenitors
NASA Astrophysics Data System (ADS)
Hainich, R.; Oskinova, L. M.; Shenar, T.; Marchant, P.; Eldridge, J. J.; Sander, A. A. C.; Hamann, W.-R.; Langer, N.; Todt, H.
2018-01-01
Context. The first directly detected gravitational waves (GW 150914) were emitted by two coalescing black holes (BHs) with masses of ≈ 36 M⊙ and ≈ 29 M⊙. Several scenarios have been proposed to put this detection into an astrophysical context. The evolution of an isolated massive binary system is among commonly considered models. Aims: Various groups have performed detailed binary-evolution calculations that lead to BH merger events. However, the question remains open as to whether binary systems with the predicted properties really exist. The aim of this paper is to help observers to close this gap by providing spectral characteristics of massive binary BH progenitors during a phase where at least one of the companions is still non-degenerate. Methods: Stellar evolution models predict fundamental stellar parameters. Using these as input for our stellar atmosphere code (Potsdam Wolf-Rayet), we compute a set of models for selected evolutionary stages of massive merging BH progenitors at different metallicities. Results: The synthetic spectra obtained from our atmosphere calculations reveal that progenitors of massive BH merger events start their lives as O2-3V stars that evolve to early-type blue supergiants before they undergo core-collapse during the Wolf-Rayet phase. When the primary has collapsed, the remaining system will appear as a wind-fed high-mass X-ray binary. Based on our atmosphere models, we provide feedback parameters, broad band magnitudes, and spectral templates that should help to identify such binaries in the future. Conclusions: While the predicted parameter space for massive BH binary progenitors is partly realized in nature, none of the known massive binaries match our synthetic spectra of massive BH binary progenitors exactly. Comparisons of empirically determined mass-loss rates with those assumed by evolution calculations reveal significant differences. The consideration of the empirical mass-loss rates in evolution calculations will possibly entail a shift of the maximum in the predicted binary-BH merger rate to higher metallicities, that is, more candidates should be expected in our cosmic neighborhood than previously assumed.
The COBAIN (COntact Binary Atmospheres with INterpolation) Code for Radiative Transfer
NASA Astrophysics Data System (ADS)
Kochoska, Angela; Prša, Andrej; Horvat, Martin
2018-01-01
Standard binary star modeling codes make use of pre-existing solutions of the radiative transfer equation in stellar atmospheres. The various model atmospheres available today are consistently computed for single stars, under different assumptions - plane-parallel or spherical atmosphere approximation, local thermodynamical equilibrium (LTE) or non-LTE (NLTE), etc. However, they are nonetheless being applied to contact binary atmospheres by populating the surface corresponding to each component separately and neglecting any mixing that would typically occur at the contact boundary. In addition, single stellar atmosphere models do not take into account irradiance from a companion star, which can pose a serious problem when modeling close binaries. 1D atmosphere models are also solved under the assumption of an atmosphere in hydrodynamical equilibrium, which is not necessarily the case for contact atmospheres, as the potentially different densities and temperatures can give rise to flows that play a key role in the heat and radiation transfer. To resolve the issue of erroneous modeling of contact binary atmospheres using single star atmosphere tables, we have developed a generalized radiative transfer code for computation of the normal emergent intensity of a stellar surface, given its geometry and internal structure. The code uses a regular mesh of equipotential surfaces in a discrete set of spherical coordinates, which are then used to interpolate the values of the structural quantities (density, temperature, opacity) at any given point inside the mesh. The radiative transfer equation is numerically integrated in a set of directions spanning the unit sphere around each point and iterated until the intensity values for all directions and all mesh points converge within a given tolerance. We have found that this approach, albeit computationally expensive, is the only one that can reproduce the intensity distribution of the non-symmetric contact binary atmosphere and can be used with any existing or new model of the structure of contact binaries. We present results on several test objects and future prospects of the implementation in state-of-the-art binary star modeling software.
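As a schematic illustration of the kind of integration involved (a first-order formal solution along a single discretized ray with a piecewise-constant source function; this is not COBAIN's actual scheme, and all numbers are made up):

    import numpy as np

    def formal_solution(chi, S, ds, I0=0.0):
        # Integrate dI/ds = -chi * (I - S) cell by cell along one ray,
        # treating the source function S as constant within each cell.
        I = I0
        for chi_k, S_k in zip(chi, S):
            dtau = chi_k * ds                               # optical depth of the cell
            I = I * np.exp(-dtau) + S_k * (1.0 - np.exp(-dtau))
        return I

    # Toy ray through 100 cells with arbitrary opacity and source-function runs
    chi = np.linspace(1e-3, 1e-1, 100)
    S = np.linspace(1.0, 2.0, 100)
    print(formal_solution(chi, S, ds=1.0))

In a full solver this integration would be repeated over a set of directions around each mesh point and iterated until the intensities converge, as the abstract describes.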
Karim, Ahmad; Salleh, Rosli; Khan, Muhammad Khurram
2016-01-01
The botnet phenomenon in smartphones is evolving with the proliferation of mobile phone technologies, after leaving an imperative impact on personal computers. A botnet refers to a network of computers, laptops, mobile devices or tablets which is remotely controlled by cybercriminals to initiate various distributed coordinated attacks, including spam emails, ad-click fraud, Bitcoin mining, Distributed Denial of Service (DDoS), disseminating other malware, and much more. Like traditional PC-based botnets, mobile botnets have the same operational impact, except that the target audience is specific to smartphone users. Therefore, it is important to uncover this security issue prior to its widespread adaptation. We propose SMARTbot, a novel dynamic analysis framework augmented with machine learning techniques to automatically detect botnet binaries from a malicious corpus. SMARTbot is a component-based, off-device behavioral analysis framework which can generate a mobile botnet learning model by inducing the back-propagation method of artificial neural networks. Moreover, this framework can detect mobile botnet binaries with remarkable accuracy even in the case of obfuscated program code. The results conclude that a classifier model based on simple logistic regression outperforms other machine learning classifiers for botnet app detection, i.e., 99.49% accuracy is achieved. Further, from manual inspection of the botnet dataset we have extracted interesting trends in those applications. As an outcome of this research, a mobile botnet dataset is devised, which will become a benchmark for future studies. PMID:26978523
NASA Astrophysics Data System (ADS)
García-Díaz, J. Carlos
2009-11-01
Fault detection and diagnosis is an important problem in process engineering, since process equipment is subject to malfunctions during operation. Galvanized steel is a value-added product that furnishes effective performance by combining the corrosion resistance of zinc with the strength and formability of steel. Fault detection and diagnosis is an important problem in continuous hot-dip galvanizing, and the increasingly stringent quality requirements of the automotive industry have also demanded ongoing efforts in process control to make the process more robust. When faults occur, they change the relationships among the observed variables. This work compares different statistical regression models proposed in the literature for estimating the quality of galvanized steel coils on the basis of short time histories. Data for 26 batches were available. Five variables were selected for monitoring the process: the steel strip velocity, four bath temperatures and bath level. The entire data set, consisting of 48 galvanized steel coils, was divided into two sets: a training set of 25 conforming coils and a second set of 23 nonconforming coils. Logistic regression is a modeling tool in which the dependent variable is categorical; in most applications, the dependent variable is binary. The results show that logistic generalized linear models do provide good estimates of coil quality and can be useful for quality control in the manufacturing process.
Krajbich, Ian; Rangel, Antonio
2011-08-16
How do we make decisions when confronted with several alternatives (e.g., on a supermarket shelf)? Previous work has shown that accumulator models, such as the drift-diffusion model, can provide accurate descriptions of the psychometric data for binary value-based choices, and that the choice process is guided by visual attention. However, the computational processes used to make choices in more complicated situations involving three or more options are unknown. We propose a model of trinary value-based choice that generalizes what is known about binary choice, and test it using an eye-tracking experiment. We find that the model provides a quantitatively accurate description of the relationship between choice, reaction time, and visual fixation data using the same parameters that were estimated in previous work on binary choice. Our findings suggest that the brain uses similar computational processes to make binary and trinary choices.
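A toy simulation of an attention-guided accumulator for a binary choice, loosely in the spirit of the drift-diffusion framework discussed above (parameter values and the fixation-switching rule are arbitrary, not those fitted in the study):

    import numpy as np

    def simulate_trial(v_left, v_right, d=0.002, theta=0.3, sigma=0.02,
                       bound=1.0, rng=np.random.default_rng(4)):
        # A relative decision value accumulates until it hits +bound (choose left)
        # or -bound (choose right); the unattended item's value is discounted by theta.
        rdv, t = 0.0, 0
        look_left = rng.random() < 0.5
        while abs(rdv) < bound:
            if rng.random() < 0.005:          # occasionally switch fixation
                look_left = not look_left
            drift = d * (v_left - theta * v_right) if look_left \
                    else d * (theta * v_left - v_right)
            rdv += drift + sigma * rng.normal()
            t += 1
        return ("left" if rdv > 0 else "right"), t

    trials = [simulate_trial(3, 1) for _ in range(1000)]
    print("P(choose left) =", np.mean([c == "left" for c, _ in trials]))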
DOE Office of Scientific and Technical Information (OSTI.GOV)
Benvenuto, O. G.; De Vito, M. A.; Horvath, J. E., E-mail: adevito@fcaglp.unlp.edu.ar, E-mail: foton@iag.usp.br
We study the evolution of close binary systems formed by a normal (solar composition), intermediate-mass donor star together with a neutron star. We consider models including irradiation feedback and evaporation. These nonstandard ingredients deeply modify the mass-transfer stages of these binaries. While models that neglect irradiation feedback undergo continuous, long-standing mass-transfer episodes, models including these effects suffer a number of cycles of mass transfer and detachment. During mass transfer, the systems should reveal themselves as low-mass X-ray binaries (LMXBs), whereas when they are detached they behave as binary radio pulsars. We show that at these stages irradiated models are in a Roche lobe overflow (RLOF) state or in a quasi-RLOF state. Quasi-RLOF stars have radii slightly smaller than their Roche lobes. Remarkably, these conditions are attained for orbital period and donor mass values in the range corresponding to a family of binary radio pulsars known as "redbacks". Thus, redback companions should be quasi-RLOF stars. We show that the characteristics of the redback system PSR J1723-2837 are accounted for by these models. In each mass-transfer cycle these systems should switch from LMXB to binary radio pulsar states with a timescale of approximately one million years. However, there is recent and fast growing evidence of systems switching on far shorter, human timescales. This should be related to instabilities in the accretion disk surrounding the neutron star and/or radio ejection, still to be included in the model having the quasi-RLOF state as a general condition.
Juang, S-E; Huang, C-E; Chen, C-L; Wang, C-H; Huang, C-J; Cheng, K-W; Wu, S-C; Shih, T-H; Yang, S-C; Wong, Z-W; Jawan, B; Lee, Y-E
2016-05-01
Hyperkalemia, defined as a serum potassium level higher than 5 mEq/L, is common in the liver transplantation setting. Severe hyperkalemia may induce fatal cardiac arrhythmias; therefore, it should be monitored and treated accordingly. The aim of the current retrospective study is to evaluate and identify the predictive risk factors of hyperkalemia during living-donor liver transplantation (LDLT). Four hundred eighty-seven adult LDLT patients were included in the study. Intraoperative serum potassium levels were monitored at least five times during LDLT; patients with a potassium level higher than 5 mEq/L were included in group 1, and the others with normokalemia in group 2. Patients' categorical characteristics and intraoperative numeric variables with a P value <.1 were entered into a multiple binary logistic regression model. In the multivariate analysis, a P value of <.05 was taken to indicate a risk factor for the development of hyperkalemia. Fifty-one of 487 (10.4%) patients had hyperkalemia with a serum potassium level higher than 5.0 mEq/L during LDLT. Predictive factors with P < .1 in univariate analysis (Table 1), such as anesthesia time, preoperative albumin level, Model for End-stage Liver Disease score, preoperative bilirubin level, amount of blood loss, red blood cell (RBC) and fresh frozen plasma transfused, 5% albumin administered, hemoglobin at the end of surgery, and the amount of furosemide used, were further analyzed by multivariate binary regression. The results show that anesthesia time, preoperative serum albumin level, and the amount of RBC transfused are determinant risk factors in the development of hyperkalemia in our LDLT series. Prolonged anesthesia time, preoperative serum albumin level, and intraoperative RBC transfusion are three determinant factors in the development of intraoperative hyperkalemia, and close monitoring of serum potassium levels in patients with the abovementioned risk factors is recommended. Copyright © 2016 Elsevier Inc. All rights reserved.
Cazzola, Mario; Calzetta, Luigino; Matera, Maria Gabriella; Muscoli, Saverio; Rogliani, Paola; Romeo, Francesco
2015-08-01
Chronic obstructive pulmonary disease (COPD) is often associated with coronary artery disease (CAD), which represents a potential and independent risk factor for cardiovascular morbidity. Therefore, the aim of this study was to identify an algorithm for predicting the risk of CAD in COPD patients. We analyzed data from patients referred to the Cardiology ward and the Respiratory Diseases outpatient clinic of Tor Vergata University (2010-2012, 1596 records). The study population was clustered as a training population (COPD patients undergoing coronary arteriography), a control population (non-COPD patients undergoing coronary arteriography), and a test population (COPD patients whose records reported information on coronary status). The predictive model was built via causal relationships between variables, stepwise binary logistic regression, and Hosmer-Lemeshow analysis. The algorithm was validated via the split-sample validation method and receiver operating characteristic (ROC) curve analysis, and its diagnostic accuracy was assessed. In the training population the variables gender (men/women OR: 1.7, 95%CI: 1.237-2.5, P < 0.05), dyslipidemia (OR: 1.8, 95%CI: 1.2-2.5, P < 0.01) and smoking habit (OR: 1.5, 95%CI: 1.2-1.9, P < 0.001) were significantly associated with CAD in COPD patients, whereas in the control population age and diabetes were also correlated. The stepwise binary logistic regressions allowed us to build a well-fitting predictive model for the training population but not for the control population. The predictive algorithm showed a diagnostic accuracy of 81.5% (95%CI: 77.78-84.71) and an AUC of 0.81 (95%CI: 0.78-0.85) for the validation set. The proposed algorithm is effective for predicting the risk of CAD in COPD patients via a rapid, inexpensive and non-invasive approach. Copyright © 2015 Elsevier Ltd. All rights reserved.
Choudhary, Pushpa; Velaga, Nagendra R
2017-09-01
This study analysed and modelled the effects of conversation and texting (each with two difficulty levels) on the driving performance of Indian drivers, in terms of their mean speed and accident-avoiding abilities, and further explored the relationship between drivers' speed reduction strategies and their corresponding accident frequencies. One hundred drivers of three different age groups (young, mid-age and old-age) participated in the simulator study. Two sudden events typical of the Indian context, unexpected crossing of pedestrians and joining of parked vehicles from the roadside, were simulated for estimating the accident probabilities. A generalized linear mixed models approach was used for developing linear regression models for mean speed and binary logistic regression models for accident probability. The results of the models showed that the drivers significantly compensated for the increased workload by reducing their mean speed by 2.62 m/s and 5.29 m/s in the presence of conversation and texting tasks, respectively. The logistic models for accident probabilities showed that the accident probabilities increased by 3 and 4 times, respectively, when the drivers were conversing or texting on a phone while driving. Further, the relationship between the speed reduction patterns and the corresponding accident frequencies showed that all the drivers compensated differently; however, only the few drivers who compensated by reducing their speed by 30% or more were able to fully offset the increased accident risk associated with phone use. Copyright © 2017 Elsevier Ltd. All rights reserved.
Discovery and characterization of 3000+ main-sequence binaries from APOGEE spectra
NASA Astrophysics Data System (ADS)
El-Badry, Kareem; Ting, Yuan-Sen; Rix, Hans-Walter; Quataert, Eliot; Weisz, Daniel R.; Cargile, Phillip; Conroy, Charlie; Hogg, David W.; Bergemann, Maria; Liu, Chao
2018-05-01
We develop a data-driven spectral model for identifying and characterizing spatially unresolved multiple-star systems and apply it to APOGEE DR13 spectra of main-sequence stars. Binaries and triples are identified as targets whose spectra can be significantly better fit by a superposition of two or three model spectra, drawn from the same isochrone, than any single-star model. From an initial sample of ˜20 000 main-sequence targets, we identify ˜2500 binaries in which both the primary and secondary stars contribute detectably to the spectrum, simultaneously fitting for the velocities and stellar parameters of both components. We additionally identify and fit ˜200 triple systems, as well as ˜700 velocity-variable systems in which the secondary does not contribute detectably to the spectrum. Our model simplifies the process of simultaneously fitting single- or multi-epoch spectra with composite models and does not depend on a velocity offset between the two components of a binary, making it sensitive to traditionally undetectable systems with periods of hundreds or thousands of years. In agreement with conventional expectations, almost all the spectrally identified binaries with measured parallaxes fall above the main sequence in the colour-magnitude diagram. We find excellent agreement between spectrally and dynamically inferred mass ratios for the ˜600 binaries in which a dynamical mass ratio can be measured from multi-epoch radial velocities. We obtain full orbital solutions for 64 systems, including 14 close binaries within hierarchical triples. We make available catalogues of stellar parameters, abundances, mass ratios, and orbital parameters.
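The core idea, asking whether a superposition of two templates fits a spectrum better than any single template, can be sketched as a least-squares comparison (toy Gaussian-line "spectra" below, not APOGEE data or the authors' data-driven model):

    import numpy as np

    rng = np.random.default_rng(3)
    wave = np.linspace(0.0, 1.0, 500)
    template_a = 1.0 - 0.3 * np.exp(-((wave - 0.3) / 0.01) ** 2)   # toy single-star templates
    template_b = 1.0 - 0.3 * np.exp(-((wave - 0.7) / 0.01) ** 2)

    # "Observed" spectrum: a 60/40 blend of the two templates plus noise
    obs = 0.6 * template_a + 0.4 * template_b + rng.normal(0, 0.01, wave.size)

    def chi2(model, sigma=0.01):
        return np.sum((obs - model) ** 2 / sigma ** 2)

    # Best single-template scaling versus best two-template blend
    A1 = template_a[:, None]
    c1, *_ = np.linalg.lstsq(A1, obs, rcond=None)
    A2 = np.column_stack([template_a, template_b])
    c2, *_ = np.linalg.lstsq(A2, obs, rcond=None)
    print("single-star chi2:", chi2(A1 @ c1), " binary chi2:", chi2(A2 @ c2))

A large drop in chi-squared for the composite fit is the signature used to flag a candidate binary.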
Variable-Length Computerized Adaptive Testing Using the Higher Order DINA Model
ERIC Educational Resources Information Center
Hsu, Chia-Ling; Wang, Wen-Chung
2015-01-01
Cognitive diagnosis models provide profile information about a set of latent binary attributes, whereas item response models yield a summary report on a latent continuous trait. To utilize the advantages of both models, higher order cognitive diagnosis models were developed in which information about both latent binary attributes and latent…
NASA Astrophysics Data System (ADS)
Yakut, Kadri
2015-08-01
We present a detailed study of KIC 2306740, an eccentric double-lined eclipsing binary system with a pulsating component. Archive Kepler satellite data were combined with newly obtained spectroscopic data from the 4.2 m William Herschel Telescope (WHT). This allowed us to determine rather precise orbital and physical parameters of this long-period, slightly eccentric, pulsating binary system. Duplicity effects are extracted from the light curve in order to estimate pulsation frequencies from the residuals. We modelled the detached binary system assuming non-conservative evolution models with the Cambridge STARS (TWIN) code.
Sun, Xuan; Tong, Xu; Lo, Wai Ting; Mo, Dapeng; Gao, Feng; Ma, Ning; Wang, Bo; Miao, Zhongrong
2017-03-01
We aimed to explore the risk factors of subacute thrombosis (SAT) after intracranial stenting for patients with symptomatic intracranial arterial stenosis. From January to December 2013, all symptomatic intracranial arterial stenosis patients who underwent intracranial stenting in Beijing Tiantan Hospital were prospectively registered into this study. Baseline clinical features and operative data were compared between patients who developed SAT and those who did not. A binary logistic regression model was used to determine the risk factors associated with SAT. Of the 221 patients enrolled, 9 (4.1%) cases had SAT 2 to 8 days after stenting. Binary logistic analysis showed that SAT was related to tandem stenting (odds ratio [OR], 11.278; 95% confidence interval [CI], 2.422-52.519) and antiplatelet resistance (aspirin resistance: OR, 6.267; 95% CI, 1.574-24.952; clopidogrel resistance: OR, 15.526; 95% CI, 3.105-77.626; aspirin and clopidogrel resistance: OR, 12.246; 95% CI, 2.932-51.147; and aspirin or clopidogrel resistance: OR, 11.340; 95% CI, 2.282-56.344). Tandem stenting and antiplatelet resistance might contribute to the development of SAT after intracranial stenting in patients with symptomatic intracranial arterial stenosis. © 2017 American Heart Association, Inc.
Test equality in binary data for a 4 × 4 crossover trial under a Latin-square design.
Lui, Kung-Jong; Chang, Kuang-Chao
2016-10-15
When there are four or more treatments under comparison, the use of a crossover design with a complete set of treatment-receipt sequences for binary data is of limited use because of the large number of treatment-receipt sequences. Thus, we may consider use of a 4 × 4 Latin square to reduce the number of treatment-receipt sequences when comparing three experimental treatments with a control treatment. Under a distribution-free random effects logistic regression model, we develop simple procedures for testing non-equality between any of the three experimental treatments and the control treatment in a crossover trial with dichotomous responses. We further derive interval estimators in closed form for the relative effect between treatments. To evaluate the performance of these test procedures and interval estimators, we employ Monte Carlo simulation. We use data taken from a crossover trial using a 4 × 4 Latin-square design for studying four treatments to illustrate the use of the test procedures and interval estimators developed here. Copyright © 2016 John Wiley & Sons, Ltd.
Bayesian multivariate hierarchical transformation models for ROC analysis.
O'Malley, A James; Zou, Kelly H
2006-02-15
A Bayesian multivariate hierarchical transformation model (BMHTM) is developed for receiver operating characteristic (ROC) curve analysis based on clustered continuous diagnostic outcome data with covariates. Two special features of this model are that it incorporates non-linear monotone transformations of the outcomes and that multiple correlated outcomes may be analysed. The mean, variance, and transformation components are all modelled parametrically, enabling a wide range of inferences. The general framework is illustrated by focusing on two problems: (1) analysis of the diagnostic accuracy of a covariate-dependent univariate test outcome requiring a Box-Cox transformation within each cluster to map the test outcomes to a common family of distributions; (2) development of an optimal composite diagnostic test using multivariate clustered outcome data. In the second problem, the composite test is estimated using discriminant function analysis and compared to the test derived from logistic regression analysis where the gold standard is a binary outcome. The proposed methodology is illustrated on prostate cancer biopsy data from a multi-centre clinical trial.
Bayesian multivariate hierarchical transformation models for ROC analysis
O'Malley, A. James; Zou, Kelly H.
2006-01-01
SUMMARY A Bayesian multivariate hierarchical transformation model (BMHTM) is developed for receiver operating characteristic (ROC) curve analysis based on clustered continuous diagnostic outcome data with covariates. Two special features of this model are that it incorporates non-linear monotone transformations of the outcomes and that multiple correlated outcomes may be analysed. The mean, variance, and transformation components are all modelled parametrically, enabling a wide range of inferences. The general framework is illustrated by focusing on two problems: (1) analysis of the diagnostic accuracy of a covariate-dependent univariate test outcome requiring a Box–Cox transformation within each cluster to map the test outcomes to a common family of distributions; (2) development of an optimal composite diagnostic test using multivariate clustered outcome data. In the second problem, the composite test is estimated using discriminant function analysis and compared to the test derived from logistic regression analysis where the gold standard is a binary outcome. The proposed methodology is illustrated on prostate cancer biopsy data from a multi-centre clinical trial. PMID:16217836
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ramos-Mendez, J; Faddegon, B; Paganetti, H
2015-06-15
Purpose: We used TOPAS (TOPAS wraps and extends Geant4 for medical physicists) to compare Geant4 physics models with published data for neutron shielding calculations. Subsequently, we calculated the source terms and attenuation lengths (shielding data) of the total ambient dose equivalent (TADE) in concrete for neutrons produced by protons in brass. Methods: Stage 1: The Bertini and Binary nuclear models available in Geant4 were compared with published attenuation at depth of the TADE in concrete and iron. Stage 2: Shielding data of the TADE in concrete were calculated for 50–200 MeV proton beams on brass. Stage 3: Shielding data from Stage 2 were extrapolated to 235 MeV proton beams. These data were used in a point-line-source analytical model to calculate the ambient dose per unit therapeutic dose at two locations inside one treatment room at the Francis H Burr Proton Therapy Center. Finally, we compared these results with experimental data and full TOPAS simulations. Results: At larger angles (∼130°) the TADE in concrete calculated with the Bertini model was about 9 times larger than that calculated with the Binary model. The attenuation length in concrete calculated with the Binary model agreed with published data within 7%±0.4% (statistical uncertainty) for the deepest regions and 5%±0.1% for shallower regions. For iron the agreement was within 3%±0.1%. The ambient dose per therapeutic dose calculated with the Binary model, relative to the experimental data, was a ratio of 0.93±0.16 and 1.23±0.24 at the two locations. The analytical model overestimated the dose by four orders of magnitude; these differences are attributed to the complexity of the geometry. Conclusion: The Binary and Bertini models gave comparable results, with the Binary model giving the best agreement with published data at large angles. The shielding data we calculated using the Binary model are useful for fast shielding calculations with other analytical models. This work was supported by National Cancer Institute Grant R01CA140735.
Orbital synchronization capture of two binaries emitting gravitational waves
NASA Astrophysics Data System (ADS)
Seto, Naoki
2018-03-01
We study the possibility of orbital synchronization capture for a hierarchical quadrupole stellar system composed by two binaries emitting gravitational waves. Based on a simple model including the mass transfer for white dwarf binaries, we find that the capture might be realized for inter-binary distances less than their gravitational wavelength. We also discuss related intriguing phenomena such as a parasitic relation between the coupled white dwarf binaries and significant reductions of gravitational and electromagnetic radiations.
Mearelli, Filippo; Fiotti, Nicola; Altamura, Nicola; Zanetti, Michela; Fernandes, Giovanni; Burekovic, Ismet; Occhipinti, Alessandro; Orso, Daniele; Giansante, Carlo; Casarsa, Chiara; Biolo, Gianni
2014-10-01
The objective of the study was to determine the accuracy of phospholipase A2 group II (PLA2-II), interferon-gamma-inducible protein 10 (IP-10), angiopoietin-2 (Ang-2), and procalcitonin (PCT) plasma levels in early ruling in/out of sepsis among systemic inflammatory response syndrome (SIRS) patients. Biomarker levels were determined in 80 SIRS patients during the first 4 h of admission to the medical ward. The final diagnosis of sepsis or non-infective SIRS was issued according to good clinical practice. Sensitivity, specificity, positive predictive value (PPV) and negative predictive value (NPV) for sepsis diagnosis were assessed. The optimal combinations of biomarkers with clinical variables were investigated by logistic regression and decision tree (CART) analysis. PLA2-II, IP-10 and PCT, but not Ang-2, were significantly higher in septic (n = 60) than in non-infective SIRS (n = 20) patients (P ≤ 0.001, 0.027, and 0.002, respectively). The PPV and NPV of PLA2-II were 88% and 86%, respectively. The corresponding figures were 100% and 31% for IP-10, and 93% and 35% for PCT. The binary logistic regression model had 100% PPV and NPV, while the manually built and software-generated CART models reached an overall accuracy of 95% and 98%, respectively, both with 100% NPV. PLA2-II and IP-10, combined with clinical variables in heterogeneous regression or decision-tree models, may be valuable biomarkers for sepsis diagnosis in SIRS patients admitted to the medical ward (MW). Further studies are needed to introduce them into clinical practice.
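For reference, the predictive values quoted above follow from a 2x2 classification table in the usual way; a small sketch with purely hypothetical counts:

    def predictive_values(tp, fp, fn, tn):
        # Positive and negative predictive values from a 2x2 table
        return tp / (tp + fp), tn / (tn + fn)

    # Hypothetical counts for a biomarker applied to 60 septic and 20 non-infective SIRS patients
    ppv, npv = predictive_values(tp=53, fp=7, fn=7, tn=13)
    print(f"PPV = {ppv:.2f}, NPV = {npv:.2f}")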
DOE Office of Scientific and Technical Information (OSTI.GOV)
Geller, Aaron M.; Grijs, Richard de; Li, Chengyuan
2015-05-20
The two Large Magellanic Cloud star clusters, NGC 1805 and NGC 1818, are approximately the same chronological age (∼30 Myr), but show different radial trends in binary frequency. The F-type stars (1.3–2.2 M⊙) in NGC 1818 have a binary frequency that decreases toward the core, while the binary frequency for stars of similar mass in NGC 1805 is flat with radius, or perhaps bimodal (with a peak in the core). We show here, through detailed N-body modeling, that both clusters could have formed with the same primordial binary frequency and with binary orbital elements and masses drawn from the same distributions (defined from observations of open clusters and the field of our Galaxy). The observed radial trends in binary frequency for both clusters are best matched with models that have initial substructure. Furthermore, both clusters may be evolving along a very similar dynamical sequence, with the key difference that NGC 1805 is dynamically older than NGC 1818. The F-type binaries in NGC 1818 still show evidence of an initial period of rapid dynamical disruptions (which occur preferentially in the core), while NGC 1805 has already begun to recover a higher core binary frequency, owing to mass segregation (which will eventually produce a distribution in binary frequency that rises only toward the core, as is observed in old Milky Way star clusters). This recovery rate increases for higher-mass binaries, and therefore even at one age in one cluster, we predict a similar dynamical sequence in the radial distribution of the binary frequency as a function of binary primary mass.
ERIC Educational Resources Information Center
Jaubert, Jean-Noël; Privat, Romain
2014-01-01
The double-tangent construction of coexisting phases is an elegant approach to visualize all the multiphase binary systems that satisfy the equality of chemical potentials and to select the stable state. In this paper, we show how to perform the double-tangent construction of coexisting phases for binary systems modeled with the gamma-phi…
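In equation form, the double-tangent (common-tangent) condition for two coexisting phases α and β of a binary mixture is the standard textbook statement below (not quoted from the article): equality of the chemical potentials of both components is equivalent to the two phase compositions sharing a common tangent to the molar Gibbs energy curve g(x),

    \mu_i^{\alpha} = \mu_i^{\beta} \; (i = 1,2)
    \quad\Longleftrightarrow\quad
    \left.\frac{\partial g}{\partial x}\right|_{x^{\alpha}}
    = \left.\frac{\partial g}{\partial x}\right|_{x^{\beta}}
    = \frac{g(x^{\beta}) - g(x^{\alpha})}{x^{\beta} - x^{\alpha}} .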
NASA Astrophysics Data System (ADS)
Dvorkin, Irina; Barausse, Enrico
2017-10-01
Massive black hole binaries, formed when galaxies merge, are among the primary sources of gravitational waves targeted by ongoing pulsar timing array (PTA) experiments and the upcoming space-based Laser Interferometer Space Antenna (LISA) interferometer. However, their formation and merger rates are still highly uncertain. Recent upper limits on the stochastic gravitational wave background obtained by PTAs are starting to be in marginal tension with theoretical models for the pairing and orbital evolution of these systems. This tension can be resolved by assuming that these binaries are more eccentric or interact more strongly with the environment (gas and stars) than expected, or by accounting for possible selection biases in the construction of the theoretical models. However, another (pessimistic) possibility is that these binaries do not merge at all, but stall at large (˜pc) separations. We explore this extreme scenario by using a semi-analytic galaxy formation model including massive black holes (isolated and in binaries), and show that future generations of PTAs will detect the stochastic gravitational wave background from the massive black hole binary population within 10-15 yr of observations, even in the 'nightmare scenario' in which all binaries stall at the hardening radius. Moreover, we argue that this scenario is too pessimistic, because our model predicts the existence of a subpopulation of binaries with small mass ratios (q ≲ 10⁻³) that should merge within a Hubble time simply as a result of gravitational wave emission. This subpopulation will be observable with large signal-to-noise ratios by future PTAs thanks to next-generation radio telescopes such as the Square Kilometre Array or the Five-hundred-meter Aperture Spherical Telescope, and possibly by LISA.
Honda, Hidehito; Matsuka, Toshihiko; Ueda, Kazuhiro
2017-05-01
Some researchers on binary choice inference have argued that people make inferences based on simple heuristics, such as recognition, fluency, or familiarity. Others have argued that people make inferences based on available knowledge. To examine the boundary between heuristic and knowledge usage, we examine binary choice inference processes in terms of attribute substitution in heuristic use (Kahneman & Frederick, 2005). In this framework, it is predicted that people will rely on heuristic or knowledge-based inference depending on the subjective difficulty of the inference task. We conducted competitive tests of binary choice inference models representing simple heuristics (fluency and familiarity heuristics) and knowledge-based inference models. We found that a simple heuristic model (especially a familiarity heuristic model) explained inference patterns for subjectively difficult inference tasks, and that a knowledge-based inference model explained subjectively easy inference tasks. These results were consistent with the predictions of the attribute substitution framework. Issues on usage of simple heuristics and psychological processes are discussed. Copyright © 2016 Cognitive Science Society, Inc.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brown, Duncan A.; Zimmerman, Peter J.
2010-01-15
Inspiralling compact binaries are expected to circularize before their gravitational-wave signals reach the sensitive frequency band of ground-based detectors. Current searches for gravitational waves from compact binaries using the LIGO and Virgo detectors therefore use circular templates to construct matched filters. Binary formation models have been proposed which suggest that some systems detectable by the LIGO-Virgo network may have non-negligible eccentricity. We investigate the ability of the restricted 3.5 post-Newtonian order TaylorF2 template bank, used by LIGO and Virgo to search for gravitational waves from compact binaries with masses M ≤ 35 M⊙, to detect binaries with nonzero eccentricity. We model the gravitational waves from eccentric binaries using the x-model post-Newtonian formalism proposed by Hinder et al. [I. Hinder, F. Hermann, P. Laguna, and D. Shoemaker, arXiv:0806.1037v1]. We find that small residual eccentricities (e0 ≲ 0.05 at 40 Hz) do not significantly affect the ability of current LIGO searches to detect gravitational waves from coalescing compact binaries with total mass 2 M⊙ < M < 15 M⊙. For eccentricities e0 ≳ 0.1, the loss in matched filter signal-to-noise ratio due to eccentricity can be significant, and so templates which include eccentric effects will be required to perform optimal searches for such systems.
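The "loss in matched filter signal-to-noise ratio" referred to above is conventionally quantified by the fitting factor built from the noise-weighted inner product; the standard definitions (not quoted from the paper) are

    \langle h_1 | h_2 \rangle
      = 4\,\mathrm{Re}\int_0^{\infty}
        \frac{\tilde h_1(f)\,\tilde h_2^{*}(f)}{S_n(f)}\, df ,
    \qquad
    \mathrm{FF}
      = \max_{\vec{\lambda}}
        \frac{\langle h_{\mathrm{ecc}} | h_{\mathrm{circ}}(\vec{\lambda}) \rangle}
             {\sqrt{\langle h_{\mathrm{ecc}} | h_{\mathrm{ecc}} \rangle\,
                    \langle h_{\mathrm{circ}}(\vec{\lambda}) | h_{\mathrm{circ}}(\vec{\lambda}) \rangle}} ,

where S_n(f) is the detector noise power spectral density, h_ecc is the eccentric signal, h_circ the circular template, and the maximum is taken over the template parameters; 1 - FF gives the fractional SNR loss.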
Indoor Astronomy: A Model Eclipsing Binary Star System.
ERIC Educational Resources Information Center
Bloomer, Raymond H., Jr.
1979-01-01
Describes a two-hour physics laboratory experiment modeling the phenomena of eclipsing binary stars developed by the Air Force Academy as part of a week-long laboratory-oriented experience for visiting high school students. (BT)
Hayashida, Kei; Kondo, Yutaka; Hifumi, Toru; Shimazaki, Junya; Oda, Yasutaka; Shiraishi, Shinichiro; Fukuda, Tatsuma; Sasaki, Junichi; Shimizu, Keiki
2018-01-01
We sought to develop a novel risk assessment tool to predict clinical outcomes after heat-related illness. Prospective, multicenter observational study. Patients transferred to emergency hospitals in Japan with heat-related illness were registered. The sample was divided into two parts: 60% to construct the score and 40% to validate it. A binary logistic regression model was used to predict hospital admission as the primary outcome. The resulting model was transformed into a scoring system. A total of 3,001 eligible patients were analyzed. There was no difference in variables between the development and validation cohorts. Based on the results of a logistic regression model in the development phase (n = 1,805), the J-ERATO score was defined as the sum of the six binary components in the prehospital setting (respiratory rate ≥22/min, Glasgow coma scale <15, systolic blood pressure ≤100 mmHg, heart rate ≥100 bpm, body temperature ≥38°C, and age ≥65 y), for a total score ranging from 0 to 6. In the validation phase (n = 1,196), the score had excellent discrimination (C-statistic 0.84; 95% CI 0.79-0.89, p<0.0001) and calibration (P>0.2 by Hosmer-Lemeshow test). The observed proportion of hospital admission increased with increasing J-ERATO score (score = 0, 5.0%; score = 1, 15.0%; score = 2, 24.6%; score = 3, 38.6%; score = 4, 68.0%; score = 5, 85.2%; score = 6, 96.4%). Multivariate analyses showed that the J-ERATO score was an independent positive predictor of hospital admission (adjusted OR, 2.43; 95% CI, 2.06-2.87; P<0.001), intensive care unit (ICU) admission (3.73; 2.95-4.72; P<0.001) and in-hospital mortality (1.65; 1.18-2.32; P = 0.004). The J-ERATO score is simple to assess and can facilitate the identification of patients at higher risk of heat-related hospitalization. The scoring system is also significantly associated with a higher likelihood of ICU admission and in-hospital mortality after heat-related hospitalization.
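Because the six components are stated explicitly, the score itself is straightforward to compute; a small sketch (function and field names are illustrative, not from the paper):

    def j_erato_score(rr, gcs, sbp, hr, temp, age):
        # Sum of the six binary prehospital criteria defined above (range 0-6)
        return sum([
            rr >= 22,       # respiratory rate (/min)
            gcs < 15,       # Glasgow coma scale
            sbp <= 100,     # systolic blood pressure (mmHg)
            hr >= 100,      # heart rate (bpm)
            temp >= 38.0,   # body temperature (deg C)
            age >= 65,      # age (years)
        ])

    print(j_erato_score(rr=24, gcs=14, sbp=110, hr=105, temp=38.5, age=70))  # -> 5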
A Pulsar and White Dwarf in an Unexpected Orbit
NASA Astrophysics Data System (ADS)
Kohler, Susanna
2016-11-01
Astronomers have discovered a binary system consisting of a low-mass white dwarf and a millisecond pulsar, but its eccentric orbit defies all expectations of how such binaries form. [Figure caption: Observed orbital periods and binary eccentricities for binary millisecond pulsars. PSR J2234+0511 is the furthest right of the green stars that mark the five known eccentric systems. Antoniadis et al. 2016] Unusual Eccentricity: It would take a low-mass (0.4 solar masses) white dwarf over 100 billion years to form from the evolution of a single star. Since this is longer than the age of the universe, we believe that these lightweights are instead products of binary-star evolution, and indeed, we observe many of these stars to still be in binary systems. But the binary evolution that can create a low-mass white dwarf includes a period of mass transfer, in which efficient tidal dissipation damps the system's orbital eccentricity. Because of this, we would expect all systems containing low-mass white dwarfs to have circular orbits. In the past, our observations of low-mass white dwarf-millisecond pulsar binaries have all been consistent with this expectation. But a new detection has thrown a wrench in the works: the unambiguous identification of a low-mass white dwarf that's in an eccentric (e=0.13) orbit with the millisecond pulsar PSR J2234+0511. How could this system have formed? Eliminating Formation Models: Led by John Antoniadis (Dunlap Institute at University of Toronto), a team of scientists has used newly obtained optical photometry (from the Sloan Digital Sky Survey) and spectroscopy (from the Very Large Telescope in Chile) of the white dwarf to confirm the identification of this system. Antoniadis and collaborators then use measurements of the bodies' masses (0.28 and 1.4 solar masses for the white dwarf and pulsar, respectively) and velocities, and constraints on the white dwarf's temperature, radius and surface gravity, to address three proposed models for the formation of this system. [Figure caption: The 3D motion of the pulsar (black solid lines; current position marked with a diamond) in our galaxy over the past 1.5 Gyr. This motion is typical for low-mass X-ray binary descendants, favoring a binary-evolution model over a 3-body-interaction model. Antoniadis et al. 2016] In the first model, the eccentric binary was created via a dynamic three-body formation channel. This possibility is deemed unlikely, as the white-dwarf properties and all the kinematic properties of the system point to normal binary evolution. In the second model, the binary system gains its high eccentricity after mass transfer ends, when the pulsar progenitor experiences a spontaneous phase transition. The authors explore two options for this: one in which the neutron star implodes into a strange-quark star, and the other in which an over-massive white dwarf suffers a delayed collapse into a neutron star. Both cases are deemed unlikely, because the mass inferred for the pulsar progenitor is not consistent with either model. In the third model, the system forms a circumbinary disk fueled by material escaping the proto-white dwarf. After mass transfer has ended, interactions between the binary and its disk gradually increase the eccentricity of the system, pumping it up to what we observe today.
All of the properties of the system measured by Antoniadis and collaborators are thus far consistent with this model. Further observations of this system and of systems like it (several others have been detected, though not yet confirmed) will help determine whether binary evolution combined with interactions with a disk can indeed explain the formation of this unexpectedly eccentric system. Citation: John Antoniadis et al 2016 ApJ 830 36. doi:10.3847/0004-637X/830/1/36
Estimating gravitational radiation from super-emitting compact binary systems
NASA Astrophysics Data System (ADS)
Hanna, Chad; Johnson, Matthew C.; Lehner, Luis
2017-06-01
Binary black hole mergers are among the most violent events in the Universe, leading to extreme warping of spacetime and copious emission of gravitational radiation. Even though black holes are the most compact objects, they are not necessarily the most efficient emitters of gravitational radiation in binary systems. The final black hole resulting from a binary black hole merger retains a significant fraction of the premerger orbital energy and angular momentum. A nonvacuum system can in principle shed more of this energy than a black hole merger of equivalent mass. We study these super-emitters through a toy model that accounts for the possibility that the merger creates a compact object that retains a long-lived time-varying quadrupole moment. This toy model may capture the merger of (low mass) neutron stars, but it may also be used to consider more exotic compact binaries. We hope that this toy model can serve as a guide to more rigorous numerical investigations into these systems.
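For context only (these are the standard textbook expressions, not the authors' specific toy model), the gravitational-wave luminosity of a source with a time-varying mass quadrupole, and its orbit-averaged form for a circular binary of masses m1 and m2 at separation a, are:

    L_{\mathrm{GW}} = \frac{G}{5c^{5}} \left\langle \dddot{Q}_{ij}\,\dddot{Q}_{ij} \right\rangle,
    \qquad
    L_{\mathrm{circ}} = \frac{32}{5}\,\frac{G^{4}}{c^{5}}\,\frac{m_{1}^{2} m_{2}^{2} (m_{1}+m_{2})}{a^{5}}.

A merger remnant that retains a long-lived time-varying quadrupole keeps the first expression nonzero after coalescence, which is what allows a nonvacuum remnant to radiate more than the equivalent black hole merger.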
NASA Astrophysics Data System (ADS)
Valsecchi, Francesca
Binary star systems hosting black holes, neutron stars, and white dwarfs are unique laboratories for investigating both extreme physical conditions and stellar and binary evolution. Black holes and neutron stars are observed in X-ray binaries, where mass accretion from a stellar companion renders them X-ray bright. Although instruments like Chandra have revolutionized the field of X-ray binaries, our theoretical understanding of their origin and formation lags behind. Progress can be made by unravelling the evolutionary history of observed systems. As part of my thesis work, I have developed an analysis method that uses detailed stellar models and all the observational constraints of a system to reconstruct its evolutionary path. This analysis models the orbital evolution from compact-object formation to the present time, the binary orbital dynamics due to explosive mass loss and a possible kick at core collapse, and the evolution from the progenitor's Zero Age Main Sequence to compact-object formation. This method led to a theoretical model for M33 X-7, one of the most massive X-ray binaries known and originally marked as an evolutionary challenge. Compact objects are also expected gravitational wave (GW) sources. In particular, double white dwarfs are both guaranteed GW sources and observed electromagnetically. Although known systems show evidence of tidal deformation and successful GW astronomy requires realistic models of the sources, detached double white dwarfs are generally approximated as point masses. For the first time, I used realistic models to study tidally-driven periastron precession in eccentric binaries. I demonstrated that its imprint on the GW signal yields constraints on the components' masses and that the source would be misclassified if tides are neglected. Beyond this adiabatic precession, tidal dissipation creates a sink of orbital angular momentum. Its efficiency is strongest when tides are dynamic and excite the components' free oscillation modes. Accounting for this effect will determine whether our interpretation of current and future observations will constrain the sources' true physical properties. To investigate dynamic tides, I have developed CAFein, a novel code that calculates forced non-adiabatic stellar oscillations using a highly stable and efficient numerical method.
Factors Affecting Code Status in a University Hospital Intensive Care Unit
ERIC Educational Resources Information Center
Van Scoy, Lauren Jodi; Sherman, Michael
2013-01-01
The authors collected data on diagnosis, hospital course, and end-of-life preparedness in patients who died in the intensive care unit (ICU) with "full code" status (defined as receiving cardiopulmonary resuscitation), compared with those who didn't. Differences were analyzed using binary and stepwise logistic regression. They found no…
Impact of Perceived Risk and Friend Influence on Alcohol and Marijuana Use among Students
ERIC Educational Resources Information Center
Merianos, Ashley L.; Rosen, Brittany L.; Montgomery, LaTrice; Barry, Adam E.; Smith, Matthew Lee
2017-01-01
We performed a secondary analysis of Adolescent Health Risk Behavior Survey data (N=937), examining associations between lifetime alcohol and marijuana use with intrapersonal (i.e., risk perceptions) and interpersonal (e.g., peer approval and behavior) factors. Multinomial and binary logistic regression analyses contend students reporting lifetime…
Inferring Binary and Trinary Stellar Populations in Photometric and Astrometric Surveys
NASA Astrophysics Data System (ADS)
Widmark, Axel; Leistedt, Boris; Hogg, David W.
2018-04-01
Multiple stellar systems are ubiquitous in the Milky Way but are often unresolved and seen as single objects in spectroscopic, photometric, and astrometric surveys. However, modeling them is essential for developing a full understanding of large surveys such as Gaia and connecting them to stellar and Galactic models. In this paper, we address this problem by jointly fitting the Gaia and Two Micron All Sky Survey photometric and astrometric data using a data-driven Bayesian hierarchical model that includes populations of binary and trinary systems. This allows us to classify observations into singles, binaries, and trinaries, in a robust and efficient manner, without resorting to external models. We are able to identify multiple systems and, in some cases, make strong predictions for the properties of their unresolved stars. We will be able to compare such predictions with Gaia Data Release 4, which will contain astrometric identification and analysis of binary systems.
NASA Astrophysics Data System (ADS)
Shi, Yu; Wang, Yue; Xu, Shijie
2018-04-01
The motion of a massless particle in the gravity of a binary asteroid system, referred to as the restricted full three-body problem (RF3BP), is fundamental, not only for the evolution of the binary system, but also for the design of relevant space missions. In this paper, equilibrium points and associated periodic orbit families in the gravity of a binary system are investigated, with the binary (66391) 1999 KW4 as an example. The polyhedron shape model is used to describe the irregular shapes and corresponding gravity fields of the primary and secondary of (66391) 1999 KW4, which is more accurate than the ellipsoid shape model used in previous studies and provides a high-fidelity representation of the gravitational environment. Both the synchronous and non-synchronous states of the binary system are considered. For the synchronous binary system, the equilibrium points and their stability are determined, and periodic orbit families emanating from each equilibrium point are generated by using the shooting (multiple shooting) method and the homotopy method, where the homotopy function connects the circular restricted three-body problem and the RF3BP. In the non-synchronous binary system, trajectories of equivalent equilibrium points are calculated, and the associated periodic orbits are obtained by using the homotopy method, where the homotopy function connects the synchronous and non-synchronous systems. Although only the binary (66391) 1999 KW4 is considered, our methods are also applicable to other binary systems with polyhedron shape data. Our results on equilibrium points and associated periodic orbits provide general insights into the dynamical environment and orbital behaviors in proximity of small binary asteroids and enable trajectory design and mission operations in future binary system explorations.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Michael C.; Dupuy, Trent J.; Leggett, S. K., E-mail: mliu@ifa.hawaii.ed
Highly unequal-mass ratio binaries are rare among field brown dwarfs, with the mass ratio distribution of the known census described by q^(4.9±0.7). However, such systems enable a unique test of the joint accuracy of evolutionary and atmospheric models, under the constraint of coevality for the individual components (the 'isochrone test'). We carry out this test using two of the most extreme field substellar binaries currently known, the T1 + T6 ε Ind Bab binary and a newly discovered 0.14″ T2.0 + T7.5 binary, 2MASS J12095613-1004008AB, identified with Keck laser guide star adaptive optics. The latter is the most extreme tight binary resolved to date (q ≈ 0.5). Based on the locations of the binary components on the Hertzsprung-Russell (H-R) diagram, current models successfully indicate that these two systems are coeval, with internal age differences of log(age) = -0.8 ± 1.3 (-1.0^{+1.2}_{-1.3}) dex and 0.5^{+0.4}_{-0.3} (0.3^{+0.3}_{-0.4}) dex for 2MASS J1209-1004AB and ε Ind Bab, respectively, as inferred from the Lyon (Tucson) models. However, the total mass of ε Ind Bab derived from the H-R diagram (≈80 M_Jup using the Lyon models) is strongly discrepant with the reported dynamical mass. This problem, which is independent of the assumed age of the ε Ind Bab system, can be explained by a ≈50-100 K systematic error in the model atmosphere fitting, indicating slightly warmer temperatures for both components; bringing the mass determinations from the H-R diagram and the visual orbit into consistency leads to an inferred age of ≈6 Gyr for ε Ind Bab, older than previously assumed. Overall, the two T dwarf binaries studied here, along with recent results from T dwarfs in age and mass benchmark systems, yield evidence for small (≈100 K) errors in the evolutionary models and/or model atmospheres, but not significantly larger. Future parallax, resolved spectroscopy, and dynamical mass measurements for 2MASS J1209-1004AB will enable a more stringent application of the isochrone test. Finally, the binary nature of this object reduces its utility as the primary T3 near-IR spectral typing standard; we suggest SDSS J1206+2813 as a replacement.
The incidence of stellar mergers and mass gainers among massive stars
DOE Office of Scientific and Technical Information (OSTI.GOV)
De Mink, S. E.; Sana, H.; Langer, N.
2014-02-10
Because the majority of massive stars are born as members of close binary systems, populations of massive main-sequence stars contain stellar mergers and products of binary mass transfer. We simulate populations of massive stars accounting for all major binary evolution effects based on the most recent binary parameter statistics and extensively evaluate the effect of model uncertainties. Assuming constant star formation, we find that 8^{+9}_{-4}% of a sample of early-type stars are the products of a merger resulting from a close binary system. In total we find that 30^{+10}_{-15}% of massive main-sequence stars are the products of binary interaction. We show that the commonly adopted approach to minimize the effects of binaries on an observed sample by excluding systems detected as binaries through radial velocity campaigns can be counterproductive. Systems with significant radial velocity variations are mostly pre-interaction systems. Excluding them substantially enhances the relative incidence of mergers and binary products in the non-radial velocity variable sample. This poses a challenge for testing single stellar evolutionary models. It also raises the question of whether certain peculiar classes of stars, such as magnetic O stars, are the result of binary interaction and it emphasizes the need to further study the effect of binarity on the diagnostics that are used to derive the fundamental properties (star-formation history, initial mass function, mass-to-light ratio) of stellar populations nearby and at high redshift.
Dynamics of rotationally fissioned asteroids: Source of observed small asteroid systems
NASA Astrophysics Data System (ADS)
Jacobson, Seth A.; Scheeres, Daniel J.
2011-07-01
We present a model of near-Earth asteroid (NEA) rotational fission and ensuing dynamics that describes the creation of synchronous binaries and all other observed NEA systems including: doubly synchronous binaries, high- e binaries, ternary systems, and contact binaries. Our model only presupposes the Yarkovsky-O'Keefe-Radzievskii-Paddack (YORP) effect, "rubble pile" asteroid geophysics, and gravitational interactions. The YORP effect torques a "rubble pile" asteroid until the asteroid reaches its fission spin limit and the components enter orbit about each other (Scheeres, D.J. [2007]. Icarus 189, 370-385). Non-spherical gravitational potentials couple the spin states to the orbit state and chaotically drive the system towards the observed asteroid classes along two evolutionary tracks primarily distinguished by mass ratio. Related to this is a new binary process termed secondary fission - the secondary asteroid of the binary system is rotationally accelerated via gravitational torques until it fissions, thus creating a chaotic ternary system. The initially chaotic binary can be stabilized to create a synchronous binary by components of the fissioned secondary asteroid impacting the primary asteroid, solar gravitational perturbations, and mutual body tides. These results emphasize the importance of the initial component size distribution and configuration within the parent asteroid. NEAs may go through multiple binary cycles and many YORP-induced rotational fissions during their approximately 10 Myr lifetime in the inner Solar System. Rotational fission and the ensuing dynamics are responsible for all NEA systems including the most commonly observed synchronous binaries.
Alexander, Paul E; Bonner, Ashley J; Agarwal, Arnav; Li, Shelly-Anne; Hariharan, Abishek; Izhar, Zain; Bhatnagar, Neera; Alba, Carolina; Akl, Elie A; Fei, Yutong; Guyatt, Gordon H; Beyene, Joseph
2016-06-01
Prior studies of whether single-center trial estimates are larger than multi-center estimates are equivocal. We examined the extent to which single-center trials yield systematically larger effects than multi-center trials. We searched the 119 core clinical journals and the Cochrane Database of Systematic Reviews for meta-analyses (MAs) of randomized controlled trials (RCTs) published during 2012. In this meta-epidemiologic study, for binary outcomes we computed the pooled ratio of ORs (RORs), and for continuous outcomes the mean difference in standardized mean differences (SMDs); we conducted weighted random-effects meta-regression and random-effects MA modeling. Our primary analyses were restricted to MAs that included at least five RCTs and in which at least 25% of the studies used each of the single-center (SC) and multi-center (MC) designs. We identified 81 MAs for the odds ratio (OR) and 43 for the SMD outcome measures. Based on our analytic plan, our primary (core) analysis included 25 MAs/241 RCTs (binary outcome) and 18 MAs/173 RCTs (continuous outcome). In the core analysis, we found no difference in magnitude of effect between SC and MC for binary outcomes [RORs: 1.02; 95% confidence interval (CI): 0.83, 1.24; I² = 20.2%]. Effect sizes were systematically larger for SC than MC for the continuous outcome measure (mean difference in SMDs: -0.13; 95% CI: -0.21, -0.05; I² = 0%). Our results do not support prior findings of larger effects in SC than in MC trials addressing binary outcomes, but they do show a small systematic increase in effect in SC relative to MC trials addressing continuous outcomes. Authors of systematic reviews would be wise to include all trials irrespective of SC vs. MC design and to address SC vs. MC status as a possible explanation of heterogeneity (and consider sensitivity analyses). Copyright © 2015 Elsevier Inc. All rights reserved.
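The pooling step described above (combining log ratios of odds ratios across meta-analyses with random effects) can be sketched as follows; this is a generic DerSimonian-Laird implementation, not the authors' code, and the input numbers are hypothetical placeholders.

    import numpy as np

    def dersimonian_laird(y, v):
        """Random-effects pooling of effect estimates y (e.g. log ratios of odds
        ratios) with within-study variances v. Returns the pooled estimate, its
        standard error, and the between-study variance tau^2. Minimal sketch only."""
        y, v = np.asarray(y, float), np.asarray(v, float)
        w = 1.0 / v                                  # fixed-effect weights
        y_fixed = np.sum(w * y) / np.sum(w)
        Q = np.sum(w * (y - y_fixed) ** 2)           # Cochran's Q
        c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
        tau2 = max(0.0, (Q - (len(y) - 1)) / c)      # DerSimonian-Laird tau^2
        w_star = 1.0 / (v + tau2)                    # random-effects weights
        pooled = np.sum(w_star * y) / np.sum(w_star)
        se = np.sqrt(1.0 / np.sum(w_star))
        return pooled, se, tau2

    # Hypothetical log-RORs (SC vs MC) and variances for five meta-analyses:
    pooled, se, tau2 = dersimonian_laird([0.05, -0.10, 0.20, 0.00, 0.08],
                                         [0.04, 0.02, 0.05, 0.03, 0.06])
    print(np.exp(pooled), np.exp(pooled - 1.96 * se), np.exp(pooled + 1.96 * se))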
Lima-Serrano, Marta; Martínez-Montilla, José Manuel; Vargas-Martínez, Ana Magdalena; Zafra-Agea, José Antonio; Lima-Rodríguez, Joaquín Salvador
2018-02-27
To identify, from a positive health model, the variables present in primary and secondary school students who neither smoke nor intend to smoke. Cross-sectional study of 482 students from Andalusia and Catalonia using a validated questionnaire (ESFA and PASE project). Binary logistic regression analysis was performed. Those who did not intend to smoke viewed smoking unfavourably and had high self-efficacy (p<0.001). In non-consumers, the most associated variables were attitude, social model (p<0.001), and self-efficacy (p=0.005). The results show motivational factors present in students who do not smoke and do not intend to do so. Attitude and self-efficacy are strongly associated with intention and behaviour. This information might be useful for developing positive health promotion strategies from a salutogenesis approach. Copyright © 2018 SESPAS. Published by Elsevier España, S.L.U. All rights reserved.
Radial Velocities of 41 Kepler Eclipsing Binaries
NASA Astrophysics Data System (ADS)
Matson, Rachel A.; Gies, Douglas R.; Guo, Zhao; Williams, Stephen J.
2017-12-01
Eclipsing binaries are vital for directly determining stellar parameters without reliance on models or scaling relations. Spectroscopically derived parameters of detached and semi-detached binaries allow us to determine component masses that can inform theories of stellar and binary evolution. Here we present moderate resolution ground-based spectra of stars in close binary systems with and without (detected) tertiary companions observed by NASA’s Kepler mission and analyzed for eclipse timing variations. We obtain radial velocities and spectroscopic orbits for five single-lined and 35 double-lined systems, and confirm one false positive eclipsing binary. For the double-lined spectroscopic binaries, we also determine individual component masses and examine the mass ratio {M}2/{M}1 distribution, which is dominated by binaries with like-mass pairs and semi-detached classical Algol systems that have undergone mass transfer. Finally, we constrain the mass of the tertiary component for five double-lined binaries with previously detected companions.
McManus, Kathleen A.; Rhodes, Anne; Bailey, Steven; Yerkes, Lauren; Engelhard, Carolyn L.; Ingersoll, Karen S.; Stukenborg, George J.; Dillingham, Rebecca
2016-01-01
Background. With the Patient Protection and Affordable Care Act, many state AIDS Drug Assistance Programs (ADAPs) shifted their healthcare delivery model from direct medication provision to purchasing qualified health plans (QHPs). The objective of this study was to characterize the demographic and healthcare delivery factors associated with Virginia ADAP clients' QHP enrollment and to assess the relationship between QHP coverage and human immunodeficiency virus (HIV) viral suppression. Methods. The cohort included persons living with HIV who were enrolled in the Virginia ADAP (n = 3933). Data were collected from 1 January 2013 through 31 December 2014. Multivariable binary logistic regression was conducted to assess for associations with QHP enrollment and between QHP coverage and viral load (VL) suppression. Results. In the cohort, 47.1% enrolled in QHPs, and enrollment varied significantly based on demographic and healthcare delivery factors. In multivariable binary logistic regression, controlling for time, age, sex, race/ethnicity, and region, factors significantly associated with achieving HIV viral suppression included QHP coverage (adjusted odds ratio, 1.346; 95% confidence interval, 1.041–1.740; P = .02), an initially undetectable VL (2.809; 2.174–3.636; P < .001), HIV rather than AIDS disease status (1.377; 1.049–1.808; P = .02), and HIV clinic (P < .001). Conclusions. QHP coverage was associated with viral suppression, an essential outcome for individuals and for public health. Promoting QHP coverage in clinics that provide care to persons living with HIV may offer a new opportunity to increase rates of viral suppression. PMID:27143661
Payandeh, Mehrdad; Sadeghi, Masoud; Sadeghi, Edris; Madani, Seyed-Hamid
2016-01-01
In breast cancer (BC), it has been suggested that nuclear overexpression of p53 protein might be an indicator of poor prognosis. The aim of the current study was to evaluate the expression of p53 in BC in Kurdish women from the West of Iran and its correlation with other clinicopathological features. In the present retrospective study, 231 patients were investigated for estrogen receptor (ER) and progesterone receptor (PR) positivity, defined as ≥10% positive tumor cells with nuclear staining. A binary logistic regression model was selected using the Akaike Information Criterion (AIC) in stepwise selection for determination of important factors. ER, PR, human epidermal growth factor receptor 2 (HER2) and p53 were positive in 58.4%, 55.4%, 59.7% and 45% of cases, respectively. The Ki67 index was divided into two groups: 54.5% had Ki67 <20% and 45.5% had Ki67 ≥20%. Of 214 patients, 137 (64%) had lymph node metastasis, and of 186 patients, 122 (65.6%) had vascular invasion. Binary logistic regression analysis showed a significant inverse correlation of lymph node metastasis (P=0.008, OR 0.120, 95%CI 0.025-0.574) and ER status (P=0.006, OR 0.080, 95%CI 0.014-0.477), and a direct correlation of HER2 (P=0.005, OR 3.047, 95%CI 1.407-6.599), with the expression of p53. As in a number of studies, expression of p53 had an inverse correlation with lymph node metastasis and ER status and a direct correlation with HER2 status. Also, p53-positivity is more likely in triple negative BC compared to other subtypes.
Blake, Khandis R; Dixson, Barnaby J W; O'Dean, Siobhan M; Denson, Thomas F
2017-04-01
Several studies report that wearing red clothing enhances women's attractiveness and signals sexual proceptivity to men. The associated hypothesis that women will choose to wear red clothing when fertility is highest, however, has received mixed support from empirical studies. One possible cause of these mixed findings may be methodological. The current study aimed to replicate recent findings suggesting a positive association between hormonal profiles associated with high fertility (high estradiol to progesterone ratios) and the likelihood of wearing red. We compared the effect of the estradiol to progesterone ratio on the probability of wearing: red versus non-red (binary logistic regression); red versus neutral, black, blue, green, orange, multi-color, and gray (multinomial logistic regression); and each of these same colors in separate binary models (e.g., green versus non-green). Red versus non-red analyses showed a positive trend between a high estradiol to progesterone ratio and wearing red, but the effect only arose for younger women and was not robust across samples. We found no compelling evidence for ovarian hormones increasing the probability of wearing red in the other analyses. However, we did find that the probability of wearing neutral was positively associated with the estradiol to progesterone ratio, though the effect did not reach conventional levels of statistical significance. Findings suggest that although ovarian hormones may affect younger women's preference for red clothing under some conditions, the effect is not robust when differentiating amongst other colors of clothing. In addition, the effect of ovarian hormones on clothing color preference may not be specific to the color red. Copyright © 2017 Elsevier Inc. All rights reserved.
[Relationship between family functioning and lifestyle in school-age adolescents].
Lima-Serrano, Marta; Guerra-Martín, María Dolores; Lima-Rodríguez, Joaquín Salvador
Risk behaviors in adolescents can lead to serious disorders; therefore, the objectives of this work were to characterize the lifestyles of teenagers regarding substance use, sex, and road safety, and to identify socio-demographic factors associated with these. A cross-sectional, descriptive and correlational study was conducted with 204 school-age children from 12 to 17 years, in 2013. They were given a validated questionnaire about sociodemographic characteristics, family functioning, and lifestyles such as substance abuse, sexual intercourse and road safety. A descriptive and multivariate analysis was performed by using multiple linear regression in the case of quantitative dependent variables, and binary logistic regression models in the case of binary categories. Data analysis was based on SPSS 20.0, with a significance level of p<0.05. 32.4% of students had smoked, and 61.3% had drunk alcohol. 26% of adolescents aged 14-17 years had had sexual intercourse; the average age of first sexual intercourse was 14.9 years. 85.2% used condoms. 94.6% respected traffic signs, 77.5% used to wear a seat belt and 81.9% a helmet. Family functioning, as a protective factor, was the variable most frequently associated with risk behaviour: smoking (OR=7.06, p=.000), alcohol drinking (OR=3.97, p=.008), sexual intercourse (OR=3.67, p=.041), and road safety (β=1.82, p=.000). According to the results, age, gender and family functioning are the main factors associated with the adoption of risk behaviors. This information is important for the development of public health policies, for instance health promotion at schools. Copyright © 2016 Elsevier España, S.L.U. All rights reserved.
Zhang, Yiyan; Xin, Yi; Li, Qin; Ma, Jianshe; Li, Shuai; Lv, Xiaodan; Lv, Weiqi
2017-11-02
Various data mining algorithms continue to be proposed as related disciplines develop. Their applicable scopes and performances differ. Hence, finding a suitable algorithm for a dataset is becoming an important concern for biomedical researchers seeking to solve practical problems promptly. In this paper, seven established algorithms, namely, C4.5, support vector machine, AdaBoost, k-nearest neighbor, naïve Bayes, random forest, and logistic regression, were selected as the research objects. The seven algorithms were applied to the 12 most frequently accessed UCI public datasets with the task of classification, and their performances were compared through induction and analysis. The sample size, number of attributes, number of missing values, sample size of each class, correlation coefficients between variables, class entropy of the task variable, and the ratio of the sample size of the largest class to that of the smallest class were calculated to characterize the 12 research datasets. The two ensemble algorithms reach high classification accuracy on most datasets. Moreover, random forest performs better than AdaBoost on the unbalanced dataset of the multi-class task. Simple algorithms, such as the naïve Bayes and logistic regression models, are suitable for a small dataset with high correlation between the task and other non-task attribute variables. The k-nearest neighbor and C4.5 decision tree algorithms perform well on binary- and multi-class task datasets. Support vector machine is more adept on the balanced small dataset of the binary-class task. No algorithm can maintain the best performance on all datasets. The applicability of the seven data mining algorithms to datasets with different characteristics was summarized to provide a reference for biomedical researchers or beginners in different fields.
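A minimal sketch of this kind of head-to-head comparison, using scikit-learn stand-ins for the seven algorithm families on a built-in binary-class dataset (C4.5 itself is not in scikit-learn, so an entropy-based CART tree is used as a proxy); it reproduces the style of the benchmark, not the authors' exact pipeline or datasets.

    from sklearn.datasets import load_breast_cancer
    from sklearn.model_selection import cross_val_score
    from sklearn.tree import DecisionTreeClassifier
    from sklearn.svm import SVC
    from sklearn.ensemble import AdaBoostClassifier, RandomForestClassifier
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.naive_bayes import GaussianNB
    from sklearn.linear_model import LogisticRegression

    X, y = load_breast_cancer(return_X_y=True)   # stand-in binary-class dataset

    models = {
        "C4.5-like tree":      DecisionTreeClassifier(criterion="entropy"),
        "SVM":                 SVC(),
        "AdaBoost":            AdaBoostClassifier(),
        "kNN":                 KNeighborsClassifier(),
        "Naive Bayes":         GaussianNB(),
        "Random forest":       RandomForestClassifier(),
        "Logistic regression": LogisticRegression(max_iter=5000),
    }

    # 10-fold cross-validated accuracy for each algorithm family.
    for name, model in models.items():
        acc = cross_val_score(model, X, y, cv=10, scoring="accuracy").mean()
        print(f"{name:20s} {acc:.3f}")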
Correlates of anal sex roles among Malay and Chinese MSM in Kuala Lumpur, Malaysia.
Dangerfield, Derek T; Gravitt, Patti; Rompalo, Anne M; Tai, Raymond; Lim, Sin How
2016-03-01
Identifying roles for anal sex is an important issue for populations of MSM. We describe the prevalence of identifying as being 'top', 'bottom', 'versatile', or 'don't know/not applicable' among Malay and Chinese MSM in Kuala Lumpur, Malaysia, and behavioural outcomes according to these labels for sexual role identity. Data analysis was conducted on a survey administered during weekly outreach throughout Kuala Lumpur in 2012. Pearson's Chi square tests were used to compare demographic and behavioural characteristics of MSM who reported roles for anal sex. Binary logistic regression was used to explore the odds of behavioural outcomes among MSM who identified as 'bottom', 'versatile', and 'don't know' compared to MSM who reported that 'top' was their sexual role. Labels for anal sex roles were significantly associated with condom use for last anal sex. Among MSM who used labels for anal sex roles, those who identified as 'bottom' had the highest level of not using condoms for last anal sex (24.1%, p = .045). In the binary logistic regression model, identifying as 'top' was significantly associated with reporting condom use during last anal sex and consistent condom use for anal sex in the past six months (p = .039 and .017, respectively). With regard to sexual role identity, some MSM may be a part of a special subgroup of at-risk men to be targeted. Future research should evaluate the origins, meanings, and perceptions of these labels, and the developmental process of how these MSM identify with any of these categories. Research should also uncover condom use decision making with regard to these labels for sexual positioning. © The Author(s) 2016.
Disposal of children's stools and its association with childhood diarrhea in India.
Bawankule, Rahul; Singh, Abhishek; Kumar, Kaushalendra; Pedgaonkar, Sarang
2017-01-05
Children's stool disposal is often overlooked in sanitation programs of any country. Unsafe disposal of children's stools makes children susceptible to many diseases that are transmitted through the faecal-oral route. Therefore, the study aims to examine the magnitude of unsafe disposal of children's stools in India, the factors associated with it, and finally its association with childhood diarrhea. Data from the third round of the National Family Health Survey (NFHS-3), conducted in 2005-06, are used to carry out the analysis. A binary logistic regression model is used to examine the factors associated with unsafe disposal of children's stools. Binary logistic regression is also used to examine the association between unsafe disposal of children's stools and childhood diarrhea. Overall, the stools of 79% of children in India were disposed of unsafely. The urban-rural gap in the unsafe disposal of children's stools was wide. Mother's illiteracy and lack of exposure to media, the age of the child, religion and caste/tribe of the household head, wealth index, access to toilet facilities and urban-rural residence were statistically associated with unsafe disposal of stools. The odds of diarrhea in children whose stools were disposed of unsafely were estimated to be 11% higher (95% CI: 1.01-1.21) than in children whose stools were disposed of safely. An increase in the unsafe disposal of children's stools in the community also increased the risk of diarrhea in children. We found a significant statistical association between children's stool disposal and diarrhea. Therefore, gains in the reduction of childhood diarrhea can be achieved in India through the complete elimination of unsafe disposal of children's stools. The sanitation programmes currently being run in India must also focus on safe disposal of children's stools.
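For readers unfamiliar with how adjusted odds ratios and confidence intervals of the kind quoted above are obtained, a generic statsmodels sketch is shown below; the data frame, variable names, and covariates are placeholders, not the NFHS-3 variables.

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    # Placeholder data: binary outcome 'diarrhea', exposure 'unsafe_disposal' (0/1),
    # and one illustrative covariate 'urban'. Random stand-in values only.
    rng = np.random.default_rng(0)
    df = pd.DataFrame({
        "diarrhea":        rng.binomial(1, 0.12, 500),
        "unsafe_disposal": rng.binomial(1, 0.79, 500),
        "urban":           rng.binomial(1, 0.30, 500),
    })

    fit = smf.logit("diarrhea ~ unsafe_disposal + urban", data=df).fit(disp=0)
    params, conf = fit.params, fit.conf_int()
    odds_ratios = pd.DataFrame({"OR":    np.exp(params),
                                "2.5%":  np.exp(conf[0]),
                                "97.5%": np.exp(conf[1])})
    print(odds_ratios)   # exponentiated coefficients = (adjusted) odds ratios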
Liang, Han; Cheng, Jing; Shen, Xingrong; Chen, Penglai; Tong, Guixian; Chai, Jing; Li, Kaichun; Xie, Shaoyu; Shi, Yong; Wang, Debin; Sun, Yehuan
2015-02-01
This study aims at examining the effects of stressful life events on the risk of impaired fasting glucose among left-behind farmers in rural China. The study collected data about stressful life events, family history of diabetes, lifestyle, demographics and minimum anthropometrics from left-behind farmers aged 40-70 years. A calculated life event index was applied to assess the combined effects of the stressful life events experienced by the left-behind farmers, and its association with impaired fasting glucose was estimated using binary logistic regression models. The prevalence of abnormal fasting glucose was 61.4% by the American Diabetes Association (ADA) standard and 32.4% by the World Health Organization (WHO) standard. Binary logistic regression analysis revealed a coefficient of 0.033 (P<.001) by the ADA standard or 0.028 (P<.001) by the WHO standard between impaired fasting glucose and the life event index. The overall odds ratios of impaired glucose for the second, third and fourth (highest) versus the first (lowest) quartile of the life event index were 1.419 [95% CI=(1.173, 1.717)], 1.711 [95% CI=(1.413, 2.071)] and 1.957 [95% CI=(1.606, 2.385)], respectively, by the ADA standard. When more and more confounding factors were controlled for, these odds ratios remained statistically significant though they decreased to a small extent. The left-behind farmers showed a prevalence of pre-diabetes more than twice the national average, and their risk of impaired fasting glucose was positively associated with stressful life events in a dose-dependent way. Both the population studied and their life events merit special attention. Copyright © 2014 Elsevier Inc. All rights reserved.
Mass transfer in white dwarf-neutron star binaries
NASA Astrophysics Data System (ADS)
Bobrick, Alexey; Davies, Melvyn B.; Church, Ross P.
2017-05-01
We perform hydrodynamic simulations of mass transfer in binaries that contain a white dwarf and a neutron star (WD-NS binaries), and measure the specific angular momentum of material lost from the binary in disc winds. By incorporating our results within a long-term evolution model, we measure the long-term stability of mass transfer in these binaries. We find that only binaries containing helium white dwarfs (WDs) with masses less than a critical mass of MWD, crit = 0.2 M⊙ undergo stable mass transfer and evolve into ultracompact X-ray binaries. Systems with higher mass WDs experience unstable mass transfer, which leads to tidal disruption of the WD. Our low critical mass compared to the standard jet-only model of mass-loss arises from the efficient removal of angular momentum in the mechanical disc winds, which develop at highly super-Eddington mass-transfer rates. We find that the eccentricities expected for WD-NS binaries when they come into contact do not affect the loss of angular momentum, and can only affect the long-term evolution if they change on shorter time-scales than the mass-transfer rate. Our results are broadly consistent with the observed numbers of both ultracompact X-ray binaries and radio pulsars with WD companions. The observed calcium-rich gap transients are consistent with the merger rate of unstable systems with higher mass WDs.
NASA Astrophysics Data System (ADS)
Rodriguez, Carl L.; Chatterjee, Sourav; Rasio, Frederic A.
2016-04-01
The recent discovery of GW150914, the binary black hole merger detected by Advanced LIGO, has the potential to revolutionize observational astrophysics. But to fully utilize this new window into the Universe, we must compare these new observations to detailed models of binary black hole formation throughout cosmic time. Expanding upon our previous work [C. L. Rodriguez, M. Morscher, B. Pattabiraman, S. Chatterjee, C.-J. Haster, and F. A. Rasio, Phys. Rev. Lett. 115, 051101 (2015)], we study merging binary black holes formed in globular clusters using our Monte Carlo approach to stellar dynamics. We have created a new set of 52 cluster models with different masses, metallicities, and radii to fully characterize the binary black hole merger rate. These models include all the relevant dynamical processes (such as two-body relaxation, strong encounters, and three-body binary formation) and agree well with detailed direct N-body simulations. In addition, we have enhanced our stellar evolution algorithms with updated metallicity-dependent stellar wind and supernova prescriptions, allowing us to compare our results directly to the most recent population synthesis predictions for merger rates from isolated binary evolution. We explore the relationship between a cluster's global properties and the population of binary black holes that it produces. In particular, we derive a numerically calibrated relationship between the merger times of ejected black hole binaries and a cluster's mass and radius. With our improved treatment of stellar evolution, we find that globular clusters can produce a significant population of massive black hole binaries that merge in the local Universe. We explore the masses and mass ratios of these binaries as a function of redshift, and find a merger rate of ∼5 Gpc^-3 yr^-1 in the local Universe, with 80% of sources having total masses from 32 M⊙ to 64 M⊙. Under standard assumptions, approximately one out of every seven binary black hole mergers in the local Universe will have originated in a globular cluster, but we also explore the sensitivity of this result to different assumptions for binary stellar evolution. If black holes were born with significant natal kicks, comparable to those of neutron stars, then the merger rate of binary black holes from globular clusters would be comparable to that from the field, with approximately 1/2 of mergers originating in clusters. Finally, we point out that population synthesis results for the field may also be modified by dynamical interactions of binaries taking place in dense star clusters which, unlike globular clusters, dissolved before the present day.
NASA Astrophysics Data System (ADS)
Vrabec, Jadran; Kedia, Gaurav Kumar; Buchhauser, Ulrich; Meyer-Pittroff, Roland; Hasse, Hans
2009-02-01
For the design and optimization of CO2 recovery from alcoholic fermentation processes by distillation, models for vapor-liquid equilibria (VLE) are needed. Two such thermodynamic models, the Peng-Robinson equation of state (EOS) and a model based on Henry's law constants, are proposed for the ternary mixture N2 + O2 + CO2. Pure substance parameters of the Peng-Robinson EOS are taken from the literature, whereas the binary parameters of the Van der Waals one-fluid mixing rule are adjusted to experimental binary VLE data. The Peng-Robinson EOS describes both binary and ternary experimental data well, except at high pressures approaching the critical region. A molecular model is validated by simulation using binary and ternary experimental VLE data. On the basis of this model, the Henry's law constants of N2 and O2 in CO2 are predicted by molecular simulation. An easy-to-use thermodynamic model, based on those Henry's law constants, is developed to reliably describe the VLE in the CO2-rich region.
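A minimal sketch of the pure-component Peng-Robinson parameters and the van der Waals one-fluid mixing rule mentioned above; the critical constants, acentric factors, and binary interaction parameters k_ij are inputs that would come from the literature and from the fit to binary VLE data, and are not supplied here.

    import numpy as np

    R = 8.314462618  # universal gas constant, J/(mol K)

    def pr_pure(Tc, Pc, omega, T):
        """Pure-component Peng-Robinson a(T) and b from critical constants and
        acentric factor (standard correlations)."""
        kappa = 0.37464 + 1.54226 * omega - 0.26992 * omega**2
        alpha = (1.0 + kappa * (1.0 - np.sqrt(T / Tc)))**2
        a = 0.45724 * R**2 * Tc**2 / Pc * alpha
        b = 0.07780 * R * Tc / Pc
        return a, b

    def vdw_one_fluid(x, a, b, k):
        """Van der Waals one-fluid mixing rule: mixture a and b from mole
        fractions x, pure-component a_i and b_i, and binary parameters k[i][j]."""
        x, a, b = np.asarray(x), np.asarray(a), np.asarray(b)
        n = len(x)
        a_mix = sum(x[i] * x[j] * np.sqrt(a[i] * a[j]) * (1.0 - k[i][j])
                    for i in range(n) for j in range(n))
        b_mix = float(np.dot(x, b))
        return a_mix, b_mix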
A Method for Calculating the Probability of Successfully Completing a Rocket Propulsion Ground Test
NASA Technical Reports Server (NTRS)
Messer, Bradley P.
2004-01-01
Propulsion ground test facilities face the daily challenges of scheduling multiple customers into limited facility space and successfully completing their propulsion test projects. Due to budgetary and schedule constraints, NASA and industry customers are pushing to test more components, for less money, in a shorter period of time. As these new rocket engine component test programs are undertaken, the lack of technology maturity in the test articles, combined with pushing the test facilities' capabilities to their limits, tends to lead to an increase in facility breakdowns and unsuccessful tests. Over the last five years Stennis Space Center's propulsion test facilities have performed hundreds of tests, collected thousands of seconds of test data, and broken numerous test facility and test article parts. While various initiatives have been implemented to provide better propulsion test techniques and improve the quality, reliability, and maintainability of goods and parts used in the propulsion test facilities, unexpected failures during testing still occur quite regularly due to the harsh environment in which the propulsion test facilities operate. Previous attempts at modeling the lifecycle of a propulsion component test project have met with little success. Each of the attempts suffered from incomplete or inconsistent data on which to base the models. By focusing on the actual test phase of the test project rather than the formulation, design or construction phases, the quality and quantity of available data increases dramatically. A logistic regression model has been developed from the data collected over the last five years, allowing the probability of successfully completing a rocket propulsion component test to be calculated. A logistic regression model is a mathematical modeling approach that can be used to describe the relationship of several independent predictor variables X1, X2, ..., Xk to a binary or dichotomous dependent variable Y, where Y can only be one of two possible outcomes, in this case Success or Failure. Logistic regression has primarily been used in the fields of epidemiology and biomedical research, but lends itself to many other applications. As indicated, the use of logistic regression is not new; however, modeling propulsion ground test facilities using logistic regression is both a new and unique application of the statistical technique. Results from the models provide project managers with insight and confidence into the effectiveness of rocket engine component ground test projects. The initial success in modeling rocket propulsion ground test projects clears the way for more complex models to be developed in this area.
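The core relation of such a model is the inverse-logit transform of a linear predictor; a minimal sketch follows, with placeholder predictors and coefficients rather than the Stennis model's actual variables.

    import numpy as np

    def success_probability(x, beta0, beta):
        """Probability of a successful test from a fitted logistic regression:
        P(Y = Success | x) = 1 / (1 + exp(-(beta0 + beta . x)))."""
        return 1.0 / (1.0 + np.exp(-(beta0 + np.dot(beta, x))))

    # Placeholder coefficients and predictor values (e.g. test-article maturity,
    # facility utilization, planned test duration), purely for illustration:
    print(success_probability(x=[0.6, 0.8, 1.2], beta0=1.0, beta=[0.9, -1.4, 0.3]))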
Austin, Peter C; Wagner, Philippe; Merlo, Juan
2017-03-15
Multilevel data occurs frequently in many research areas like health services research and epidemiology. A suitable way to analyze such data is through the use of multilevel regression models (MLRM). MLRM incorporate cluster-specific random effects which allow one to partition the total individual variance into between-cluster variation and between-individual variation. Statistically, MLRM account for the dependency of the data within clusters and provide correct estimates of uncertainty around regression coefficients. Substantively, the magnitude of the effect of clustering provides a measure of the General Contextual Effect (GCE). When outcomes are binary, the GCE can also be quantified by measures of heterogeneity like the Median Odds Ratio (MOR) calculated from a multilevel logistic regression model. Time-to-event outcomes within a multilevel structure occur commonly in epidemiological and medical research. However, the Median Hazard Ratio (MHR) that corresponds to the MOR in multilevel (i.e., 'frailty') Cox proportional hazards regression is rarely used. Analogously to the MOR, the MHR is the median relative change in the hazard of the occurrence of the outcome when comparing identical subjects from two randomly selected different clusters that are ordered by risk. We illustrate the application and interpretation of the MHR in a case study analyzing the hazard of mortality in patients hospitalized for acute myocardial infarction at hospitals in Ontario, Canada. We provide R code for computing the MHR. The MHR is a useful and intuitive measure for expressing cluster heterogeneity in the outcome and, thereby, estimating general contextual effects in multilevel survival analysis. © 2016 The Authors. Statistics in Medicine published by John Wiley & Sons Ltd. © 2016 The Authors. Statistics in Medicine published by John Wiley & Sons Ltd.
Wagner, Philippe; Merlo, Juan
2016-01-01
Multilevel data occurs frequently in many research areas like health services research and epidemiology. A suitable way to analyze such data is through the use of multilevel regression models (MLRM). MLRM incorporate cluster‐specific random effects which allow one to partition the total individual variance into between‐cluster variation and between‐individual variation. Statistically, MLRM account for the dependency of the data within clusters and provide correct estimates of uncertainty around regression coefficients. Substantively, the magnitude of the effect of clustering provides a measure of the General Contextual Effect (GCE). When outcomes are binary, the GCE can also be quantified by measures of heterogeneity like the Median Odds Ratio (MOR) calculated from a multilevel logistic regression model. Time‐to‐event outcomes within a multilevel structure occur commonly in epidemiological and medical research. However, the Median Hazard Ratio (MHR) that corresponds to the MOR in multilevel (i.e., ‘frailty’) Cox proportional hazards regression is rarely used. Analogously to the MOR, the MHR is the median relative change in the hazard of the occurrence of the outcome when comparing identical subjects from two randomly selected different clusters that are ordered by risk. We illustrate the application and interpretation of the MHR in a case study analyzing the hazard of mortality in patients hospitalized for acute myocardial infarction at hospitals in Ontario, Canada. We provide R code for computing the MHR. The MHR is a useful and intuitive measure for expressing cluster heterogeneity in the outcome and, thereby, estimating general contextual effects in multilevel survival analysis. © 2016 The Authors. Statistics in Medicine published by John Wiley & Sons Ltd. PMID:27885709
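Assuming, as in the abstract above, a normally distributed cluster random effect with variance sigma^2 on the log-hazard scale, the MHR takes the same closed form as the MOR, MHR = exp(sqrt(2*sigma^2) * Phi^{-1}(0.75)). The authors provide R code; a one-line Python equivalent is sketched below with a placeholder variance.

    from math import exp, sqrt
    from scipy.stats import norm

    def median_hazard_ratio(sigma2):
        """MHR = exp(sqrt(2*sigma2) * Phi^{-1}(0.75)) for a normally distributed
        cluster random effect with variance sigma2 on the log-hazard scale."""
        return exp(sqrt(2.0 * sigma2) * norm.ppf(0.75))

    print(median_hazard_ratio(0.25))  # placeholder between-hospital variance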
Dinç, Erdal; Ertekin, Zehra Ceren
2016-01-01
An application of parallel factor analysis (PARAFAC) and three-way partial least squares (3W-PLS1) regression models to ultra-performance liquid chromatography-photodiode array detection (UPLC-PDA) data with co-eluted peaks in the same wavelength and time regions was described for the multicomponent quantitation of hydrochlorothiazide (HCT) and olmesartan medoxomil (OLM) in tablets. A three-way dataset of HCT and OLM in their binary mixtures containing telmisartan (IS) as an internal standard was recorded with a UPLC-PDA instrument. Firstly, the PARAFAC algorithm was applied to decompose the three-way UPLC-PDA data into chromatographic, spectral and concentration profiles in order to quantify the compounds of interest. Secondly, the 3W-PLS1 approach was used to decompose a tensor consisting of three-way UPLC-PDA data into a set of triads to build a 3W-PLS1 regression for the analysis of the same compounds in samples. For the proposed three-way analysis methods, in the regression and prediction steps, the applicability and validity of the PARAFAC and 3W-PLS1 models were checked by analyzing synthetic mixture samples, inter-day and intra-day samples, and standard addition samples containing HCT and OLM. The two three-way analysis methods, PARAFAC and 3W-PLS1, were successfully applied to the quantitative estimation of the solid dosage form containing HCT and OLM. Regression and prediction results provided by the three-way analysis were compared with those obtained by a traditional UPLC method. Copyright © 2015 Elsevier B.V. All rights reserved.
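A minimal sketch of the PARAFAC decomposition step on a three-way array (samples x elution time x wavelength), using the open-source tensorly library as a stand-in for the authors' software; the array here is random placeholder data and the mode assignments are illustrative.

    import numpy as np
    import tensorly as tl
    from tensorly.decomposition import parafac

    # Placeholder three-way array: 20 samples x 150 elution times x 80 wavelengths.
    X = tl.tensor(np.random.rand(20, 150, 80))

    # Rank-3 CP/PARAFAC model: one component per co-eluting analyte (e.g. HCT, OLM, IS).
    weights, factors = parafac(X, rank=3, n_iter_max=200)

    # Factor matrices, one per mode: concentration, chromatographic, spectral profiles.
    scores, elution_profiles, spectra = factors
    print(scores.shape, elution_profiles.shape, spectra.shape)  # (20, 3) (150, 3) (80, 3)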
Sargolzaie, Narjes; Miri-Moghaddam, Ebrahim
2014-01-01
The most common differential diagnosis of the β-thalassemia (β-thal) trait is iron deficiency anemia. Several red blood cell equations have been introduced in different studies for the differential diagnosis between β-thal trait and iron deficiency anemia. Due to genetic variation across regions, these equations cannot be useful in all populations. The aim of this study was to determine a native equation with high accuracy for the differential diagnosis of β-thal trait and iron deficiency anemia in the Sistan and Baluchestan population by logistic regression analysis. We selected 77 iron deficiency anemia and 100 β-thal trait cases. We used binary logistic regression analysis to determine the equation best predicting the probability of β-thal trait versus iron deficiency anemia in our population. We compared the diagnostic values and receiver operating characteristic (ROC) curves of this equation and another 10 published equations in discriminating β-thal trait from iron deficiency anemia. The binary logistic regression analysis yielded an equation with an area under the curve (AUC) of 0.998. Based on the ROC curves and AUC, the Green & King, England & Frazer, and Sirdah indices, respectively, had the highest accuracy after our equation. We suggest that to obtain the best equation and cut-off for each region, one needs to evaluate region-specific information, especially in areas where populations are homogeneous, to provide a specific formula for differentiating between β-thal trait and iron deficiency anemia.
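A generic sketch of the fit-and-compare-by-AUC workflow described above, using scikit-learn; the feature matrix is random placeholder data, not the study's CBC measurements, and any published index would be scored on the same ROC scale.

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score

    # Placeholder CBC features (e.g. MCV, RBC, RDW, Hb) for 177 subjects and labels
    # (1 = beta-thal trait, 0 = iron deficiency anemia); random stand-in data only.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(177, 4))
    y = rng.integers(0, 2, size=177)

    logit = LogisticRegression().fit(X, y)
    auc = roc_auc_score(y, logit.predict_proba(X)[:, 1])
    print(f"AUC of the fitted discriminant equation: {auc:.3f}")
    # Published indices (Mentzer, Green & King, etc.) computed from the same CBC
    # values would be compared on this same ROC/AUC scale.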
ERIC Educational Resources Information Center
Roberts, James S.; Laughlin, James E.
1996-01-01
A parametric item response theory model for unfolding binary or graded responses is developed. The graded unfolding model (GUM) is a generalization of the hyperbolic cosine model for binary data of D. Andrich and G. Luo (1993). Applicability of the GUM to attitude testing is illustrated with real data. (SLD)
ERIC Educational Resources Information Center
Dimitrov, Dimiter M.
2007-01-01
The validation of cognitive attributes required for correct answers on binary test items or tasks has been addressed in previous research through the integration of cognitive psychology and psychometric models using parametric or nonparametric item response theory, latent class modeling, and Bayesian modeling. All previous models, each with their…
KOI-3278: a self-lensing binary star system.
Kruse, Ethan; Agol, Eric
2014-04-18
Over 40% of Sun-like stars are bound in binary or multistar systems. Stellar remnants in edge-on binary systems can gravitationally magnify their companions, as predicted 40 years ago. By using data from the Kepler spacecraft, we report the detection of such a "self-lensing" system, in which a 5-hour pulse of 0.1% amplitude occurs every orbital period. The white dwarf stellar remnant and its Sun-like companion orbit one another every 88.18 days, a long period for a white dwarf-eclipsing binary. By modeling the pulse as gravitational magnification (microlensing) along with Kepler's laws and stellar models, we constrain the mass of the white dwarf to be ~63% of the mass of our Sun. Further study of this system, and any others discovered like it, will help to constrain the physics of white dwarfs and binary star evolution.
The Eclipsing Central Stars of the Planetary Nebulae Lo 16 and PHR J1040-5417
NASA Astrophysics Data System (ADS)
Hillwig, Todd C.; Frew, David; Jones, David; Crispo, Danielle
2017-01-01
Binary central stars of planetary nebulae are a valuable tool for understanding common envelope evolution. In these cases both the resulting close binary system and the expanding envelope (the planetary nebula) can be studied directly. In order to compare observed systems with common envelope evolution models we need to determine precise physical parameters of the binaries and the nebulae. Eclipsing central stars provide us with the best opportunity to determine high-precision values for the mass, radius, and temperature of the component stars in these close binaries. We present photometry and spectroscopy for two of these eclipsing systems: the central stars of Lo 16 and PHR 1040-5417. Using light curves and radial velocity curves along with binary modeling, we provide physical parameters for the stars in both of these systems.
Population of Nuclei Via 7Li-Induced Binary Reactions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Clark, Rodney M.; Phair, Larry W.; Descovich, M.
2005-08-08
The authors have investigated the population of nuclei formed in binary reactions involving ⁷Li beams on targets of ¹⁶⁰Gd and ¹⁸⁴W. The ⁷Li + ¹⁸⁴W data were taken in the first experiment using the LIBERACE Ge-array in combination with the STARS Si ΔE-E telescope system at the 88-Inch Cyclotron of the Lawrence Berkeley National Laboratory. By using the Wilczynski binary transfer model, in combination with a standard evaporation model, they are able to reproduce the experimental results. This is a useful method for predicting the population of neutron-rich heavy nuclei formed in binary reactions involving beams of weakly bound nuclei and will be of use in future spectroscopic studies.
Calculating Mass Diffusion in High-Pressure Binary Fluids
NASA Technical Reports Server (NTRS)
Bellan, Josette; Harstad, Kenneth
2004-01-01
A comprehensive mathematical model of mass diffusion has been developed for binary fluids at high pressures, including critical and supercritical pressures. Heretofore, diverse expressions, valid for limited parameter ranges, have been used to correlate high-pressure binary mass-diffusion-coefficient data. This model will likely be especially useful in the computational simulation and analysis of combustion phenomena in diesel engines, gas turbines, and liquid rocket engines, wherein mass diffusion at high pressure plays a major role.
Coevolution of Binaries and Circumbinary Gaseous Disks
NASA Astrophysics Data System (ADS)
Fleming, David; Quinn, Thomas R.
2018-04-01
The recent discoveries of circumbinary planets by Kepler raise questions for contemporary planet formation models. Understanding how these planets form requires characterizing their formation environment, the circumbinary protoplanetary disk, and how the disk and binary interact. The central binary excites resonances in the surrounding protoplanetary disk that drive evolution in both the binary orbital elements and in the disk. To probe how these interactions impact both binary eccentricity and disk structure evolution, we ran N-body smoothed particle hydrodynamics (SPH) simulations of gaseous protoplanetary disks surrounding binaries based on Kepler 38 for 10^4 binary orbital periods for several initial binary eccentricities. We find that nearly circular binaries weakly couple to the disk via a parametric instability and excite disk eccentricity growth. Eccentric binaries strongly couple to the disk, causing eccentricity growth for both the disk and binary. For sufficiently eccentric binaries, the disk also develops an m = 1 spiral wave launched from the 1:3 eccentric outer Lindblad resonance (EOLR). This wave corresponds to an alignment of the gas particles' longitudes of periastron. We find that in all simulations, the binary semi-major axis decays due to dissipation from the viscous disk.
A simple measure of cognitive reserve is relevant for cognitive performance in MS patients.
Della Corte, Marida; Santangelo, Gabriella; Bisecco, Alvino; Sacco, Rosaria; Siciliano, Mattia; d'Ambrosio, Alessandro; Docimo, Renato; Cuomo, Teresa; Lavorgna, Luigi; Bonavita, Simona; Tedeschi, Gioacchino; Gallo, Antonio
2018-05-04
Cognitive reserve (CR) contributes to preserving cognition despite brain damage. This theory has been applied to multiple sclerosis (MS) to explain the partial relationship between cognition and MRI markers of brain pathology. Our aim was to determine the relationship between two measures of CR and cognition in MS. One hundred and forty-seven MS patients were enrolled. Cognition was assessed using Rao's Brief Repeatable Battery and the Stroop Test. CR was measured as the vocabulary subtest score of the WAIS-R (VOC) and the number of years of formal education (EDU). Regression analysis included the raw scores on each neuropsychological (NP) test as dependent variables and demographic/clinical parameters, VOC, and EDU as independent predictors. A binary logistic regression analysis including clinical/CR parameters as covariates and the absence/presence of cognitive deficits (CD) as the dependent variable was also performed. VOC, but not EDU, was strongly correlated with performance on all ten NP tests. EDU was correlated with executive performance. The binary logistic regression showed that only the Expanded Disability Status Scale (EDSS) and VOC were independently correlated with the presence/absence of CD. The lower the VOC and/or the higher the EDSS, the higher the frequency of CD. In conclusion, our study supports the relevance of CR in subtending cognitive performance and the presence of CD in MS patients.
Figueroa, Jennifer A; Mansoor, Jim K; Allen, Roblee P; Davis, Cristina E; Walby, William F; Aksenov, Alexander A; Zhao, Weixiang; Lewis, William R; Schelegle, Edward S
2015-04-20
With ascent to altitude, certain individuals are susceptible to high altitude pulmonary edema (HAPE), which in turn can cause disability and even death. The ability to identify individuals at risk of HAPE prior to ascent is poor. The present study examined the profile of volatile organic compounds (VOC) in exhaled breath condensate (EBC) and pulmonary artery systolic pressures (PASP) before and after exposure to normobaric hypoxia (12% O2) in healthy males with and without a history of HAPE (Hx HAPE, n = 5; Control, n = 11). In addition, hypoxic ventilatory response (HVR), and PASP response to normoxic exercise were also measured. Auto-regression/partial least square regression of whole gas chromatography/mass spectrometry (GC/MS) data and binary logistic regression (BLR) of individual GC peaks and physiologic parameters resulted in models that separate individual subjects into their groups with variable success. The result of BLR analysis highlights HVR, PASP response to hypoxia and the amount of benzyl alcohol and dimethylbenzaldehyde dimethyl in expired breath as markers of HAPE history. These findings indicate the utility of EBC VOC analysis to discriminate between individuals with and without a history of HAPE and identified potential novel biomarkers that correlated with physiological responses to hypoxia.
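A minimal sketch of the partial least squares step applied to a peak-intensity matrix, using scikit-learn's PLSRegression as a stand-in for the authors' auto-regression/PLS pipeline; the data are random placeholders, not the study's GC/MS measurements.

    import numpy as np
    from sklearn.cross_decomposition import PLSRegression

    # Placeholder GC/MS peak-intensity matrix (16 subjects x 200 peaks) and a
    # binary HAPE-history label; random stand-in data only.
    rng = np.random.default_rng(1)
    X = rng.lognormal(size=(16, 200))
    y = rng.integers(0, 2, size=16).astype(float)

    pls = PLSRegression(n_components=2)
    pls.fit(X, y)
    scores = pls.transform(X)          # latent-variable scores per subject
    y_hat = pls.predict(X).ravel()     # continuous prediction, thresholded at 0.5
    print((y_hat > 0.5).astype(int))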
Adiabatic Mass Loss Model in Binary Stars
NASA Astrophysics Data System (ADS)
Ge, H. W.
2012-07-01
Rapid mass transfer in interacting binary systems is very complicated. It relates to two basic problems in binary star evolution, i.e., dynamically unstable Roche-lobe overflow and common envelope evolution. Both problems are very important and difficult to model. In this PhD thesis, we focus on the rapid mass loss process of the donor in interacting binary systems. Applications to the criterion for dynamically unstable mass transfer and to common envelope evolution are also included. Our results based on the adiabatic mass loss model could be used to improve binary evolution theory, binary population synthesis methods, and other related aspects. We build up the adiabatic mass loss model. In this model, two approximations are included. The first is that energy generation and heat flow through the stellar interior can be neglected, hence the restructuring is adiabatic. The second is that the stellar interior remains in hydrostatic equilibrium. We model this response by constructing model sequences, beginning with a donor star filling its Roche lobe at an arbitrary point in its evolution, holding its specific entropy and composition profiles fixed. These approximations are validated by comparison with time-dependent binary mass transfer calculations and with the polytropic model for low-mass zero-age main-sequence stars. In dynamical time scale mass transfer, the adiabatic response of the donor star drives it to expand beyond its Roche lobe, leading to runaway mass transfer and the formation of a common envelope with its companion star. For donor stars with surface convection zones of any significant depth, this runaway condition is encountered early in mass transfer, if at all; but for main sequence stars with radiative envelopes, it may be encountered after a prolonged phase of thermal time scale mass transfer, the so-called delayed dynamical instability. We identify the critical binary mass ratio for the onset of dynamical time scale mass transfer; if the ratio of donor to accretor masses exceeds this critical value, dynamical time scale mass transfer ensues. The grid of critical mass ratios for all stars can serve as a basic input to binary population synthesis methods, which should improve them considerably. In common envelope evolution, the dissipation of orbital energy of the binary provides the energy to eject the common envelope; the energy budget for this process essentially consists of the initial orbital energy of the binary and the initial binding energies of the binary components. We emphasize that, because stellar core and envelope contribute mutually to each other's gravitational potential energy, proper evaluation of the total energy of a star requires integration over the entire stellar interior, not the ejected envelope alone as commonly assumed. We show that the change in total energy of the donor star, as a function of its remaining mass along an adiabatic mass-loss sequence, can be calculated. This change in total energy of the donor star, combined with the requirement that both the remnant donor and its companion star fit within their respective Roche lobes, then circumscribes the energetically possible survivors of common envelope evolution. For the first time, we can calculate an accurate total energy of the donor star in common envelope evolution, whereas results obtained with the old method are inconsistent with observations.
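For orientation, the common-envelope energy budget mentioned above is often summarized by equating an efficiency parameter times the orbital energy released to the binding energy of the ejected envelope; this is the conventional alpha-lambda approximation that the thesis refines by integrating the donor's total energy instead. A minimal sketch of that conventional budget; all masses, radii and parameter values are purely illustrative, not results from the thesis.

```python
# Conventional alpha-lambda common-envelope energy budget (the approximation
# the thesis refines); solves for the post-common-envelope separation.
# All numerical values are illustrative, not results from the thesis.
G = 6.674e-8                        # cgs
Msun, Rsun = 1.989e33, 6.957e10

M_donor, M_core, M_comp = 5.0 * Msun, 1.0 * Msun, 1.0 * Msun
M_env = M_donor - M_core
R_donor = 100.0 * Rsun              # donor radius at onset of the common envelope
a_i = 300.0 * Rsun                  # initial orbital separation
alpha_ce, lam = 1.0, 0.5            # assumed efficiency and structure parameters

E_bind = -G * M_donor * M_env / (lam * R_donor)   # envelope binding energy (<0)
E_orb_i = -G * M_donor * M_comp / (2.0 * a_i)     # initial orbital energy
# Budget: alpha_ce * (E_orb_i - E_orb_f) = -E_bind  ->  solve for E_orb_f, a_f
E_orb_f = E_orb_i + E_bind / alpha_ce
a_f = -G * M_core * M_comp / (2.0 * E_orb_f)
print(f"predicted post-CE separation: {a_f / Rsun:.2f} Rsun")
```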
Risk prediction for myocardial infarction via generalized functional regression models.
Ieva, Francesca; Paganoni, Anna M
2016-08-01
In this paper, we propose a generalized functional linear regression model for a binary outcome indicating the presence/absence of a cardiac disease with multivariate functional data among the relevant predictors. In particular, the motivating aim is the analysis of electrocardiographic traces of patients whose pre-hospital electrocardiogram (ECG) has been sent to the 118 Dispatch Center of Milan (the Italian toll-free number for emergencies) by life support personnel of the basic rescue units. The statistical analysis starts with a preprocessing of ECGs treated as multivariate functional data. The signals are reconstructed from noisy observations. The biological variability is then removed by a nonlinear registration procedure based on landmarks. Thus, in order to perform a data-driven dimensional reduction, a multivariate functional principal component analysis is carried out on the variance-covariance matrix of the reconstructed and registered ECGs and their first derivatives. We use the scores of the principal component decomposition as covariates in a generalized linear model to predict the presence of the disease in a new patient. Hence, a new semi-automatic diagnostic procedure is proposed to estimate the risk of infarction (in the case of interest, the probability of being affected by Left Bundle Branch Block). The performance of this classification method is evaluated and compared with other methods proposed in the literature. Finally, the robustness of the procedure is checked via leave-j-out techniques. © The Author(s) 2013.
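A schematic of the pipeline described above (dimension reduction of the already smoothed and registered curves via principal components, followed by a logistic model on the scores), using synthetic curves; the data, number of components, and classifier settings are assumptions, not those of the paper.

```python
# Functional-PCA-style pipeline on synthetic curves: compress each signal to a
# few principal-component scores, then use the scores in a logistic classifier.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n, t = 200, 300                         # 200 synthetic "ECGs", 300 time points
grid = np.linspace(0, 1, t)
shape = rng.normal(0, 1, n)             # one latent feature drives curve shape
curves = np.sin(2 * np.pi * grid) + shape[:, None] * np.cos(2 * np.pi * grid)
curves += rng.normal(0, 0.1, (n, t))    # observation noise
y = (shape + rng.normal(0, 0.5, n) > 0).astype(int)   # synthetic binary outcome

scores = PCA(n_components=3).fit_transform(curves)    # stand-in for functional PCA
clf = LogisticRegression().fit(scores, y)
print("in-sample accuracy:", clf.score(scores, y))
```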
Burdette, Amy M; Haynes, Stacy H; Hill, Terrence D; Bartkowski, John P
2014-06-01
In this paper, we examine associations among personal religiosity, perceived infertility, and inconsistent contraceptive use among unmarried young adults (ages 18-29). The data for this investigation came from the National Survey of Reproductive and Contraceptive Knowledge (n = 1,695). We used multinomial logistic regression to model perceived infertility, adjusted probabilities to model rationales for perceived infertility, and binary logistic regression to model inconsistent contraceptive use. Evangelical Protestants were more likely than non-affiliates to believe that they were infertile. Among the young women who indicated some likelihood of infertility, evangelical Protestants were also more likely than their other Protestant or non-Christian faith counterparts to believe that they were infertile because they had unprotected sex without becoming pregnant. Although evangelical Protestants were more likely to exhibit inconsistent contraception use than non-affiliates, we were unable to attribute any portion of this difference to infertility perceptions. Whereas most studies of religion and health emphasize the salubrious role of personal religiosity, our results suggest that evangelical Protestants may be especially likely to hold misconceptions about their fertility. Because these misconceptions fail to explain higher rates of inconsistent contraception use among evangelical Protestants, additional research is needed to understand the principles and motives of this unique religious community. Copyright © 2014 Society for Adolescent Health and Medicine. Published by Elsevier Inc. All rights reserved.
Asquith, William H.; Roussel, Meghan C.
2007-01-01
Estimation of representative hydrographs from design storms, which are known as design hydrographs, provides for cost-effective, risk-mitigated design of drainage structures such as bridges, culverts, roadways, and other infrastructure. During 2001-07, the U.S. Geological Survey (USGS), in cooperation with the Texas Department of Transportation, investigated runoff hydrographs, design storms, unit hydrographs, and watershed-loss models to enhance design hydrograph estimation in Texas. Design hydrographs ideally should mimic the general volume, peak, and shape of observed runoff hydrographs. Design hydrographs commonly are estimated in part by unit hydrographs. A unit hydrograph is defined as the runoff hydrograph that results from a unit pulse of excess rainfall uniformly distributed over the watershed at a constant rate for a specific duration. A time-distributed, watershed-loss model is required for modeling by unit hydrographs. This report develops a specific time-distributed, watershed-loss model known as an initial-abstraction, constant-loss model. For this watershed-loss model, a watershed is conceptualized to have the capacity to store or abstract an absolute depth of rainfall at and near the beginning of a storm. Depths of total rainfall less than this initial abstraction do not produce runoff. The watershed also is conceptualized to have the capacity to remove rainfall at a constant rate (loss) after the initial abstraction is satisfied. Additional rainfall inputs after the initial abstraction is satisfied contribute to runoff if the rainfall rate (intensity) is larger than the constant loss. The initial-abstraction, constant-loss model thus is a two-parameter model. The initial-abstraction, constant-loss model is investigated through detailed computational and statistical analysis of observed rainfall and runoff data for 92 USGS streamflow-gaging stations (watersheds) in Texas with contributing drainage areas from 0.26 to 166 square miles. The analysis is limited to a previously described, watershed-specific, gamma distribution model of the unit hydrograph. In particular, the initial-abstraction, constant-loss model is tuned to the gamma distribution model of the unit hydrograph. A complex computational analysis of observed rainfall and runoff for the 92 watersheds was done to determine, by storm, optimal values of initial abstraction and constant loss. Optimal parameter values for a given storm were defined as those values that produced a modeled runoff hydrograph with volume equal to the observed runoff hydrograph and also minimized the residual sum of squares of the two hydrographs. Subsequently, the means of the optimal parameters were computed on a watershed-specific basis. These means for each watershed are considered the most representative, are tabulated, and are used in further statistical analyses. Statistical analyses of watershed-specific, initial abstraction and constant loss include documentation of the distribution of each parameter using the generalized lambda distribution. The analyses show that watershed development has substantial influence on initial abstraction and limited influence on constant loss. The means and medians of the 92 watershed-specific parameters are tabulated with respect to watershed development; although they have considerable uncertainty, these parameters can be used for parameter prediction for ungaged watersheds.
The statistical analyses of watershed-specific, initial abstraction and constant loss also include development of predictive procedures for estimation of each parameter for ungaged watersheds. Both regression equations and regression trees for estimation of initial abstraction and constant loss are provided. The watershed characteristics included in the regression analyses are (1) main-channel length, (2) a binary factor representing watershed development, (3) a binary factor representing watersheds with an abundance of rocky and thin-soiled terrain, and (4) curve number.
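The initial-abstraction, constant-loss model described above reduces to a few lines of arithmetic: rainfall first fills the initial abstraction, and thereafter a constant loss rate is subtracted from the rainfall in each time step, with any remainder becoming excess rainfall. A minimal sketch; the hyetograph and parameter values are illustrative, and the loss is applied per step rather than per unit time, a simplification.

```python
# Initial-abstraction, constant-loss watershed-loss model: excess rainfall per
# time step. Hyetograph and parameter values are illustrative, not the report's.
def excess_rainfall(rain, initial_abstraction=1.0, constant_loss=0.2):
    """rain: rainfall depths per time step (inches); returns excess depths."""
    remaining_ia = initial_abstraction
    excess = []
    for r in rain:
        abstracted = min(r, remaining_ia)      # fill the initial abstraction first
        remaining_ia -= abstracted
        after_ia = r - abstracted
        # constant loss removes rainfall after the abstraction is satisfied
        excess.append(max(after_ia - constant_loss, 0.0))
    return excess

print(excess_rainfall([0.3, 0.8, 1.2, 0.6, 0.1]))   # -> [0.0, 0.0, 1.0, 0.4, 0.0]
```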
The Impact of Sample Size and Other Factors When Estimating Multilevel Logistic Models
ERIC Educational Resources Information Center
Schoeneberger, Jason A.
2016-01-01
The design of research studies utilizing binary multilevel models must necessarily incorporate knowledge of multiple factors, including estimation method, variance component size, or number of predictors, in addition to sample sizes. This Monte Carlo study examined the performance of random effect binary outcome multilevel models under varying…
Currency Arbitrage Detection Using a Binary Integer Programming Model
ERIC Educational Resources Information Center
Soon, Wanmei; Ye, Heng-Qing
2011-01-01
In this article, we examine the use of a new binary integer programming (BIP) model to detect arbitrage opportunities in currency exchanges. This model showcases an excellent application of mathematics to the real world. The concepts involved are easily accessible to undergraduate students with basic knowledge in Operations Research. Through this…
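A minimal sketch of the kind of BIP formulation described above, assuming the open-source PuLP library: binary variables select directed currency exchanges, flow-conservation constraints force the selected edges into cycles, and a strictly positive optimal sum of log exchange rates signals an arbitrage opportunity. The quotes, currency set, and formulation details here are illustrative, not the authors' model.

```python
# Binary integer program for currency arbitrage detection (illustrative).
# x[(i, j)] = 1 if the exchange i -> j is used; conservation constraints make
# the chosen edges form cycles; a positive optimum in log-rates means a cycle
# of trades returning more than it started with.
import math
import pulp

rates = {("USD", "EUR"): 0.92, ("EUR", "GBP"): 0.86, ("GBP", "USD"): 1.28,
         ("USD", "GBP"): 0.78, ("EUR", "USD"): 1.09}      # hypothetical quotes
currencies = {c for pair in rates for c in pair}

prob = pulp.LpProblem("arbitrage", pulp.LpMaximize)
x = pulp.LpVariable.dicts("x", rates.keys(), cat="Binary")
prob += pulp.lpSum(math.log(r) * x[e] for e, r in rates.items())
for c in currencies:          # each currency is entered as often as it is left
    prob += (pulp.lpSum(x[(i, j)] for (i, j) in rates if j == c)
             == pulp.lpSum(x[(i, j)] for (i, j) in rates if i == c))
prob.solve(pulp.PULP_CBC_CMD(msg=False))
gain = math.exp(pulp.value(prob.objective))
print("arbitrage cycle found" if gain > 1 + 1e-9 else "no arbitrage", round(gain, 4))
```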
The Binary System Laboratory Activities Based on Students Mental Model
NASA Astrophysics Data System (ADS)
Albaiti, A.; Liliasari, S.; Sumarna, O.; Martoprawiro, M. A.
2017-09-01
Generic science skills (GSS) are required to develop student conceptions in learning the binary system. The aim of this research was to determine the improvement of students' GSS through binary system laboratory activities based on their mental models, using a hypothetical-deductive learning cycle. It was a mixed-methods embedded experimental model research design. This research involved 15 students of a university in Papua, Indonesia. An essay test of 7 items was used to analyze the improvement of students' GSS. Each item was designed to interconnect the macroscopic, sub-microscopic and symbolic levels. A student worksheet was used to explore students' mental models during the laboratory investigation. The increase in students' GSS could be seen in the N-gain for each GSS indicator. The results were then analyzed descriptively. Students' mental models and GSS improved in this study. Students interconnected the macroscopic and symbolic levels to explain binary system phenomena. Furthermore, they reconstructed their mental models by interconnecting the three levels of representation in physical chemistry. It is necessary to integrate the Physical Chemistry Laboratory into a Physical Chemistry course for effectiveness and efficiency.
A GDP-driven model for the binary and weighted structure of the International Trade Network
NASA Astrophysics Data System (ADS)
Almog, Assaf; Squartini, Tiziano; Garlaschelli, Diego
2015-01-01
Recent events such as the global financial crisis have renewed the interest in the topic of economic networks. One of the main channels of shock propagation among countries is the International Trade Network (ITN). Two important models for the ITN structure, the classical gravity model of trade (more popular among economists) and the fitness model (more popular among network scientists), are both limited to the characterization of only one representation of the ITN. The gravity model satisfactorily predicts the volume of trade between connected countries, but cannot reproduce the missing links (i.e. the topology). On the other hand, the fitness model can successfully replicate the topology of the ITN, but cannot predict the volumes. This paper tries to make an important step forward in the unification of those two frameworks, by proposing a new gross domestic product (GDP) driven model which can simultaneously reproduce the binary and the weighted properties of the ITN. Specifically, we adopt a maximum-entropy approach where both the degree and the strength of each node are preserved. We then identify strong nonlinear relationships between the GDP and the parameters of the model. This ultimately results in a weighted generalization of the fitness model of trade, where the GDP plays the role of a ‘macroeconomic fitness’ shaping the binary and the weighted structure of the ITN simultaneously. Our model mathematically explains an important asymmetry in the role of binary and weighted network properties, namely the fact that binary properties can be inferred without the knowledge of weighted ones, while the opposite is not true.
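As a rough illustration of the fitness-model ingredient that the paper generalizes, the probability of a trade link between countries i and j can be written p_ij = z g_i g_j / (1 + z g_i g_j), with rescaled GDP playing the role of the fitness g_i and z calibrated to the observed number of links. The GDP figures, link count, and calibration below are placeholders, not the paper's data or its full degree-and-strength-preserving model.

```python
# GDP-as-fitness link probabilities, calibrated so the expected number of links
# matches a target; toy GDPs and link count, not the ITN data.
import numpy as np
from scipy.optimize import brentq

gdp = np.array([21.4, 14.3, 5.1, 3.8, 2.9, 2.7, 2.1, 1.8])   # toy GDPs
g = gdp / gdp.sum()                                           # rescaled fitness
L_target = 12                                                 # toy link count

def expected_links(z):
    p = z * np.outer(g, g) / (1 + z * np.outer(g, g))
    return p[np.triu_indices_from(p, k=1)].sum()

z_star = brentq(lambda z: expected_links(z) - L_target, 1e-6, 1e9)
p = z_star * np.outer(g, g) / (1 + z_star * np.outer(g, g))
print("calibrated z:", round(z_star, 3))
print("P(link) between the two largest economies:", round(p[0, 1], 3))
```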
Self-Supervised Video Hashing With Hierarchical Binary Auto-Encoder.
Song, Jingkuan; Zhang, Hanwang; Li, Xiangpeng; Gao, Lianli; Wang, Meng; Hong, Richang
2018-07-01
Existing video hash functions are built on three isolated stages: frame pooling, relaxed learning, and binarization, which have not adequately explored the temporal order of video frames in a joint binary optimization model, resulting in severe information loss. In this paper, we propose a novel unsupervised video hashing framework dubbed self-supervised video hashing (SSVH), which is able to capture the temporal nature of videos in an end-to-end learning-to-hash fashion. We specifically address two central problems: 1) how to design an encoder-decoder architecture to generate binary codes for videos and 2) how to equip the binary codes with the ability of accurate video retrieval. We design a hierarchical binary auto-encoder to model the temporal dependencies in videos with multiple granularities, and embed the videos into binary codes with fewer computations than the stacked architecture. Then, we encourage the binary codes to simultaneously reconstruct the visual content and neighborhood structure of the videos. Experiments on two real-world data sets show that our SSVH method can significantly outperform the state-of-the-art methods and achieve the current best performance on the task of unsupervised video retrieval.
Self-Supervised Video Hashing With Hierarchical Binary Auto-Encoder
NASA Astrophysics Data System (ADS)
Song, Jingkuan; Zhang, Hanwang; Li, Xiangpeng; Gao, Lianli; Wang, Meng; Hong, Richang
2018-07-01
Existing video hash functions are built on three isolated stages: frame pooling, relaxed learning, and binarization, which have not adequately explored the temporal order of video frames in a joint binary optimization model, resulting in severe information loss. In this paper, we propose a novel unsupervised video hashing framework dubbed Self-Supervised Video Hashing (SSVH), which is able to capture the temporal nature of videos in an end-to-end learning-to-hash fashion. We specifically address two central problems: 1) how to design an encoder-decoder architecture to generate binary codes for videos; and 2) how to equip the binary codes with the ability of accurate video retrieval. We design a hierarchical binary autoencoder to model the temporal dependencies in videos with multiple granularities, and embed the videos into binary codes with fewer computations than the stacked architecture. Then, we encourage the binary codes to simultaneously reconstruct the visual content and neighborhood structure of the videos. Experiments on two real-world datasets (FCVID and YFCC) show that our SSVH method can significantly outperform the state-of-the-art methods and achieve the current best performance on the task of unsupervised video retrieval.
Dynamical evolution of a fictitious population of binary Neptune Trojans
NASA Astrophysics Data System (ADS)
Brunini, Adrián
2018-03-01
We present numerical simulations of the evolution of a synthetic population of binary Neptune Trojans under the influence of solar perturbations and tidal friction (the so-called Kozai cycles and tidal friction evolution). Our model includes the dynamical influence of the four giant planets on the heliocentric orbit of the binary centre of mass. In this paper, we explore the evolution of initially tight binaries around the Neptune L4 Lagrange point. We found that the variation of the heliocentric orbital elements due to the libration around the Lagrange point introduces significant changes in the orbital evolution of the binaries. Collisional processes would not play a significant role in the dynamical evolution of Neptune Trojans. After 4.5 × 10^9 yr of evolution, ~50 per cent of the synthetic systems end up separated as single objects, most of them with slow diurnal rotation rates. The final orbital distribution of the surviving binary systems is statistically similar to the one found for Kuiper Belt binaries when collisional evolution is not included in the model. Systems composed of a primary and a small satellite are more fragile than those composed of components of similar sizes.
Drawing Nomograms with R: applications to categorical outcome and survival data.
Zhang, Zhongheng; Kattan, Michael W
2017-05-01
Outcome prediction is a major task in clinical medicine. The standard approach to this work is to collect a variety of predictors and build a model of an appropriate type. The model is a mathematical equation that connects the outcome of interest with the predictors. The outcome for a new patient with given clinical characteristics can then be predicted with this model. However, the equation describing the relationship between predictors and outcome is often complex, and the computation requires software for practical use. Another method is the nomogram, a graphical calculating device that allows an approximate graphical computation of a mathematical function. In this article, we describe how to draw nomograms for various outcomes with the nomogram() function. A binary outcome is fit by a logistic regression model, and the outcome of interest is the probability of the event of interest. Ordinal outcome variables are also discussed. Survival analysis can be fit with a parametric model to fully describe the distribution of survival time. Statistics such as the median survival time and the survival probability up to a specific time point are taken as the outcome of interest.
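The article itself uses R's nomogram() function; as a rough, hand-rolled illustration of the underlying idea (not the rms implementation, and with made-up coefficients and ranges), each predictor's contribution to the linear predictor of a logistic model can be rescaled to a 0-100 point axis, the points summed, and the total mapped back to a predicted probability.

```python
# Hand-rolled "points" logic behind a nomogram for a fitted binary logistic
# model; coefficients, ranges, and the example patient are made up.
import numpy as np

coefs = {"age": 0.04, "sbp": 0.02, "smoker": 0.9}     # toy logistic coefficients
intercept = -6.0
ranges = {"age": (30, 90), "sbp": (90, 200), "smoker": (0, 1)}

# the predictor with the widest contribution spans 0-100 points
spans = {k: abs(b) * (ranges[k][1] - ranges[k][0]) for k, b in coefs.items()}
per_point = max(spans.values()) / 100.0

def points(name, value):
    lo, hi = ranges[name]
    base = lo if coefs[name] > 0 else hi              # value scoring 0 points
    return coefs[name] * (value - base) / per_point

patient = {"age": 64, "sbp": 150, "smoker": 1}
total = sum(points(k, v) for k, v in patient.items())
# map total points back to the linear predictor and the probability
base_lp = intercept + sum(coefs[k] * (ranges[k][0] if coefs[k] > 0 else ranges[k][1])
                          for k in coefs)
lp = base_lp + total * per_point
print(f"total points: {total:.0f}, predicted probability: {1 / (1 + np.exp(-lp)):.3f}")
```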
Marchetti, Luca; Manca, Vincenzo
2015-04-15
MpTheory Java library is an open-source project collecting a set of objects and algorithms for modeling observed dynamics by means of the Metabolic P (MP) theory, that is, a mathematical theory introduced in 2004 for modeling biological dynamics. By means of the library, it is possible to model biological systems both in continuous and in discrete time. Moreover, the library comprises a set of regression algorithms for inferring MP models starting from time series of observations. To enhance the modeling experience, besides pure Java usage, the library can be directly used within the most popular computing environments, such as MATLAB, GNU Octave, Mathematica and R. The library is open-source and licensed under the GNU Lesser General Public License (LGPL) Version 3.0. Source code, binaries and complete documentation are available at http://mptheory.scienze.univr.it. luca.marchetti@univr.it, marchetti@cosbi.eu Supplementary data are available at Bioinformatics online. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
Frolov, Alexander Vladimirovich; Vaikhanskaya, Tatjana Gennadjevna; Melnikova, Olga Petrovna; Vorobiev, Anatoly Pavlovich; Guel, Ludmila Michajlovna
2017-01-01
The development of prognostic factors for life-threatening ventricular tachyarrhythmias (VTA) and sudden cardiac death (SCD) remains a priority and highly relevant in cardiology. The development of a method of personalised prognosis based on multifactorial analysis of the risk factors associated with life-threatening heart rhythm disturbances is considered a key research and clinical task. To design a prognostic and mathematical model to define personalised risk for life-threatening VTA in patients with chronic heart failure (CHF). The study included 240 patients with CHF (mean age of 50.5 ± 12.1 years; left ventricular ejection fraction 32.8 ± 10.9%; follow-up period 36.8 ± 5.7 months). The participants received basic therapy for heart failure. The electrocardiogram (ECG) markers of myocardial electrical instability were assessed, including microvolt T-wave alternans, heart rate turbulence, heart rate deceleration, and QT dispersion. Additionally, echocardiography and Holter monitoring (HM) were performed. Cardiovascular events were considered as primary endpoints, including SCD, paroxysmal ventricular tachycardia/ventricular fibrillation (VT/VF) based on HM-ECG data, and data obtained from implantable device interrogation (CRT-D, ICD) as well as appropriate shocks. During the follow-up period, 66 (27.5%) subjects with CHF showed adverse arrhythmic events, including nine SCD events and 57 VTAs. Data from a stepwise discriminant analysis of cumulative ECG markers of myocardial electrical instability were used to build a mathematical model for preliminary VTA risk stratification. Uni- and multivariate Cox and logistic regression analyses were performed to define an individualised risk stratification model for SCD/VTA. A binary logistic regression model demonstrated a high prognostic significance of the discriminant function, with a classification sensitivity of 80.8% and specificity of 99.1% (F = 31.2; χ² = 143.2; p < 0.0001). The method of personalised risk stratification using Cox and logistic regression allows correct classification of more than 93.9% of CHF cases. The robust evidence for the prognostic significance of logistic regression in defining VTA risk supports including this method in the algorithm for follow-up and selection of the optimal treatment modality for patients with CHF.
A globally accurate theory for a class of binary mixture models
NASA Astrophysics Data System (ADS)
Dickman, Adriana G.; Stell, G.
The self-consistent Ornstein-Zernike approximation results for the 3D Ising model are used to obtain phase diagrams for binary mixtures described by decorated models, yielding the plait point, binodals, and closed-loop coexistence curves for the models proposed by Widom, Clark, Neece, and Wheeler. The results are in good agreement with series expansions and experiments.
Theoretical studies of binaries in astrophysics
NASA Astrophysics Data System (ADS)
Dischler, Johann Sebastian
This thesis introduces and summarizes four papers dealing with computer simulations of astrophysical processes involving binaries. The first part gives the rationale and theoretical background to these papers. In papers I and II a statistical approach to studying eclipsing binaries is described. By using population synthesis models for binaries, the probabilities of eclipses are calculated for different luminosity classes of binaries. These are compared with Hipparcos data and they agree well if one uses a standard input distribution for the orbit sizes. If one uses a random pairing model, where both companions are independently picked from an IMF, one finds too few eclipsing binaries by an order of magnitude. In paper III we investigate a possible scenario for the origin of the stars observed close to the centre of our galaxy, called S stars. We propose that a cluster falls radially towards the central black hole. The binaries within the cluster can then, if they have small impact parameters, be broken up by the black hole's tidal field and one of the components of the binary will be captured by the black hole. Paper IV investigates how the onset of mass transfer in eccentric binaries depends on the eccentricity. To do this we have developed a new two-phase SPH scheme where very light particles are at the outer edge of our simulated star. This enables us to get a much better resolution of the very small mass that is transferred in close binaries. Our simulations show that the minimum required distance between the stars to have mass transfer decreases with the eccentricity.
The association between second-hand smoke exposure and depressive symptoms among pregnant women.
Huang, Jingya; Wen, Guoming; Yang, Weikang; Yao, Zhenjiang; Wu, Chuan'an; Ye, Xiaohua
2017-10-01
Tobacco smoking and depression are strongly associated, but the possible association between second-hand smoke (SHS) exposure and depression is unclear. This study aimed to examine the possible relation between SHS exposure and depressive symptoms among pregnant women. A cross-sectional survey was conducted in Shenzhen, China, using a multistage sampling method. The univariable and multivariable logistic regression models were used to explore the associations between SHS exposure and depressive symptoms. Among 2176 pregnant women, 10.5% and 2.0% were classified as having probable and severe depressive symptoms. Both binary and multinomial logistic regression revealed that there were significantly increased risks of severe depressive symptoms corresponding to SHS exposure in homes or regular SHS exposure in workplaces using no exposure as reference. In addition, greater frequency of SHS exposure was significantly associated with the increased risk of severe depressive symptoms. Our findings suggest that SHS exposure is positively associated with depressive symptoms in a dose-response manner among the pregnant women. Copyright © 2017 Elsevier B.V. All rights reserved.
Pfoertner, Timo-Kolja; Andress, Hans-Juergen; Janssen, Christian
2011-08-01
The current study introduces the living standard concept as an alternative approach to measuring poverty and compares its explanatory power to an income-based poverty measure with regard to the subjective health status of the German population. Analyses are based on the German Socio-Economic Panel (2001, 2003 and 2005) and refer to binary logistic regressions of poor subjective health status with regard to each poverty condition, its duration and its causal influence from a previous time point. To calculate the discriminant power of both poverty indicators, the indicators were initially considered separately in regression models and subsequently both were included simultaneously. The analyses reveal a stronger poverty-health relationship for the living standard indicator. An inadequate living standard in 2005, longer spells of an inadequate living standard between 2001, 2003 and 2005, as well as an inadequate living standard at a previous time point are significantly more strongly associated with subjective health than income poverty. Our results challenge conventional measurements of the relationship between poverty and health, which has probably been underestimated by income measures so far.
Predictors of Gender Inequalities in the Rank of Full Professor
ERIC Educational Resources Information Center
Heijstra, Thamar; Bjarnason, Thoroddur; Rafnsdóttir, Gudbjörg Linda
2015-01-01
This article examines whether age, work-related, and family-related predictors explain differences in the academic advancement of women and men in Iceland. Survey data were analyzed by binary logistic regression. The findings show that women climb the academic career ladder at a slower pace than men. This finding puts one of the widely known…
South Texas Mexican American Use of Traditional Folk and Mainstream Alternative Therapies
ERIC Educational Resources Information Center
Martinez, Leslie N.
2009-01-01
A telephone survey was conducted with a large sample of Mexican Americans from border (n = 1,001) and nonborder (n = 1,030) regions in Texas. Patterns of traditional folk and mainstream complementary and alternative medicine (CAM) use were analyzed with two binary logistic regressions, using gender, self-rated health, confidence in medical…
Propensity of University Students in the Region of Antofagasta, Chile to Create Enterprise
ERIC Educational Resources Information Center
Romani, Gianni; Didonet, Simone; Contuliano, Sue-Hellen; Portilla, Rodrigo
2013-01-01
The authors aim to discuss the propensity or intention to create enterprise among university students in the region of Antofagasta, Chile, and to analyze the factors that influence the step from desire to intention. A total of 681 students were surveyed. The data were analyzed by binary logistic regression. The results show that curriculum is among the…
Zhang, Xinyan; Li, Bingzong; Han, Huiying; Song, Sha; Xu, Hongxia; Hong, Yating; Yi, Nengjun; Zhuang, Wenzhuo
2018-05-10
Multiple myeloma (MM), like other cancers, is caused by the accumulation of genetic abnormalities. Heterogeneity exists in the patients' response to treatments, for example, bortezomib. This urges efforts to identify biomarkers from numerous molecular features and build predictive models for identifying patients that can benefit from a certain treatment scheme. However, previous studies treated the multi-level ordinal drug response as a binary response where only responsive and non-responsive groups are considered. It is desirable to directly analyze the multi-level drug response, rather than collapsing the response into two groups. In this study, we present a novel method to identify significantly associated biomarkers and then develop an ordinal genomic classifier using the hierarchical ordinal logistic model. The proposed hierarchical ordinal logistic model employs the heavy-tailed Cauchy prior on the coefficients and is fitted by an efficient quasi-Newton algorithm. We apply our hierarchical ordinal regression approach to analyze two publicly available datasets for MM with five-level drug response and numerous gene expression measures. Our results show that our method is able to identify genes associated with the multi-level drug response and to generate powerful predictive models for predicting the multi-level response. The proposed method allows us to jointly fit numerous correlated predictors and thus build efficient models for predicting the multi-level drug response. The predictive model for the multi-level drug response can be more informative than the previous approaches. Thus, the proposed approach provides a powerful tool for predicting multi-level drug response and has an important impact on cancer studies.
Stability of binaries. Part 1: Rigid binaries
NASA Astrophysics Data System (ADS)
Sharma, Ishan
2015-09-01
We consider the stability of binary asteroids whose members are possibly granular aggregates held together by self-gravity alone. A binary is said to be stable whenever each member is orbitally and structurally stable to both orbital and structural perturbations. To this end, we extend the stability test for rotating granular aggregates introduced by Sharma (Sharma, I. [2012]. J. Fluid Mech., 708, 71-99; Sharma, I. [2013]. Icarus, 223, 367-382; Sharma, I. [2014]. Icarus, 229, 278-294) to the case of binary systems comprised of rubble members. In part I, we specialize to the case of a binary with rigid members subjected to full three-dimensional perturbations. Finally, we employ the stability test to critically appraise shape models of four suspected binary systems, viz., 216 Kleopatra, 25143 Itokawa, 624 Hektor and 90 Antiope.
Jaman, Ajmery; Latif, Mahbub A H M; Bari, Wasimul; Wahed, Abdus S
2016-05-20
In generalized estimating equations (GEE), the correlation between the repeated observations on a subject is specified with a working correlation matrix. Correct specification of the working correlation structure ensures efficient estimators of the regression coefficients. Among the criteria used, in practice, for selecting working correlation structure, Rotnitzky-Jewell, Quasi Information Criterion (QIC) and Correlation Information Criterion (CIC) are based on the fact that if the assumed working correlation structure is correct then the model-based (naive) and the sandwich (robust) covariance estimators of the regression coefficient estimators should be close to each other. The sandwich covariance estimator, used in defining the Rotnitzky-Jewell, QIC and CIC criteria, is biased downward and has a larger variability than the corresponding model-based covariance estimator. Motivated by this fact, a new criterion is proposed in this paper based on the bias-corrected sandwich covariance estimator for selecting an appropriate working correlation structure in GEE. A comparison of the proposed and the competing criteria is shown using simulation studies with correlated binary responses. The results revealed that the proposed criterion generally performs better than the competing criteria. An example of selecting the appropriate working correlation structure has also been shown using the data from Madras Schizophrenia Study. Copyright © 2015 John Wiley & Sons, Ltd.
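The criteria named above all compare the model-based (naive) and sandwich (robust) covariance estimates of the GEE coefficient estimators. A rough sketch of that comparison on simulated clustered binary data using statsmodels GEE; the attribute names cov_naive and cov_robust are assumed from the statsmodels results API, the quantity computed is a CIC-style trace criterion rather than the paper's bias-corrected proposal, and the simulated data are not the Madras study.

```python
# CIC-style comparison of working correlation structures for binary GEE: a
# structure whose naive covariance agrees with the sandwich covariance gives a
# smaller trace criterion. Simulated clustered data; statsmodels attribute
# names cov_naive / cov_robust are assumed from its GEE results API.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n_clusters, m = 100, 4
groups = np.repeat(np.arange(n_clusters), m)
x = rng.normal(size=n_clusters * m)
u = np.repeat(rng.normal(0, 0.8, n_clusters), m)     # shared cluster effect
y = rng.binomial(1, 1 / (1 + np.exp(-(0.5 * x + u))))
X = sm.add_constant(x)

for name, cs in [("independence", sm.cov_struct.Independence()),
                 ("exchangeable", sm.cov_struct.Exchangeable())]:
    res = sm.GEE(y, X, groups=groups, family=sm.families.Binomial(),
                 cov_struct=cs).fit()
    cic = np.trace(np.linalg.inv(res.cov_naive) @ res.cov_robust)
    print(f"{name}: CIC-like criterion = {cic:.3f}")  # smaller is preferred
```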
Scobie, Andrea
2011-04-01
To identify risk factors associated with self-reported medical, medication and laboratory error in eight countries. The Commonwealth Fund's 2008 International Health Policy Survey of chronically ill patients in eight countries. None. A multi-country telephone survey was conducted between 3 March and 30 May 2008 with patients in Australia, Canada, France, Germany, the Netherlands, New Zealand, the UK and the USA who self-reported being chronically ill. A bivariate analysis was performed to determine significant explanatory variables of medical, medication and laboratory error (P < 0.01) for inclusion in a binary logistic regression model. The final regression model included eight risk factors for self-reported error: age 65 and under, education level of some college or less, presence of two or more chronic conditions, high prescription drug use (four+ drugs), four or more doctors seen within 2 years, a care coordination problem, poor doctor-patient communication and use of an emergency department. Risk factors with the greatest ability to predict experiencing an error encompassed issues with coordination of care and provider knowledge of a patient's medical history. The identification of these risk factors could help policymakers and organizations to proactively reduce the likelihood of error through greater examination of system- and organization-level practices.
Tay, Richard
2016-03-01
The binary logistic model has been extensively used to analyze traffic collision and injury data where the outcome of interest has two categories. However, the assumption of a symmetric distribution may not be a desirable property in some cases, especially when there is a significant imbalance in the two categories of outcome. This study compares the standard binary logistic model with the skewed logistic model in two cases in which the symmetry assumption is violated in one but not the other case. The differences in the estimates, and thus the marginal effects obtained, are significant when the assumption of symmetry is violated. Copyright © 2015 Elsevier Ltd. All rights reserved.
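One common skewed generalization of the logit is the scobit-type link, P(y = 1 | x) = 1 − (1 + e^{xβ})^{−α}, which reduces to the ordinary logistic model at α = 1 and lets the data determine the asymmetry. A minimal maximum-likelihood sketch on synthetic imbalanced data; this is a generic illustration, not necessarily the exact specification used in the paper.

```python
# Skewed (scobit-type) logistic model fitted by maximum likelihood on synthetic
# imbalanced binary data; alpha = 1 recovers the standard logit. Generic
# illustration only, not the paper's collision data or exact specification.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(3)
n = 2000
x = rng.normal(size=n)
p_true = 1 - (1 + np.exp(1.0 * x - 2.0)) ** (-0.5)    # skewed, mostly zeros
y = rng.binomial(1, p_true)

def negloglik(theta):
    b0, b1, log_alpha = theta
    alpha = np.exp(log_alpha)                          # keep alpha > 0
    p = 1 - (1 + np.exp(b0 + b1 * x)) ** (-alpha)
    p = np.clip(p, 1e-10, 1 - 1e-10)
    return -(y * np.log(p) + (1 - y) * np.log(1 - p)).sum()

fit = minimize(negloglik, x0=np.zeros(3), method="BFGS")
b0, b1, log_alpha = fit.x
print(f"b0={b0:.2f}, b1={b1:.2f}, alpha={np.exp(log_alpha):.2f}")
```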
Estimation of the Viscosities of Liquid Sn-Based Binary Lead-Free Solder Alloys
NASA Astrophysics Data System (ADS)
Wu, Min; Li, Jinquan
2018-01-01
The viscosity of a binary Sn-based lead-free solder alloy was calculated by combining the prediction model with the Miedema model. The viscosity factor was proposed and the relationship between the viscosity and surface tension was analyzed as well. The results show that the viscosity of Sn-based lead-free solders predicted from the model is in excellent agreement with the reported values. The viscosity factor is determined by three physical parameters: atomic volume, electronic density, and electronegativity. In addition, an apparent correlation between the surface tension and viscosity of the binary Sn-based Pb-free solders was obtained based on the prediction model.
Binary Lenses in OGLE-III EWS Database. Seasons 2002-2003
NASA Astrophysics Data System (ADS)
Jaroszynski, M.; Udalski, A.; Kubiak, M.; Szymanski, M.; Pietrzynski, G.; Soszynski, I.; Zebrun, K.; Szewczyk, O.; Wyrzykowski, L.
2004-06-01
We present 15 binary lens candidates from the OGLE-III Early Warning System database for seasons 2002-2003. We also found 15 events interpreted as single mass lensing of double sources. The candidates were selected by visual inspection of the light curves. Examining the binary lens models of this and our previous study (10 caustic-crossing events of OGLE-II seasons 1997-1999), we find one case of an extreme mass ratio binary (q ≈ 0.005) and the rest in the range 0.1
Testing Ultracool Models with Precise Luminosities and Masses
NASA Astrophysics Data System (ADS)
Dupuy, Trent; Cushing, Michael; Liu, Michael; Burningham, Ben; Leggett, Sandy; Albert, Loic; Delorme, Philippe
2011-05-01
After years of patient orbital monitoring, there is a growing sample of brown dwarfs with well-determined dynamical masses, representing the gold standard for testing substellar models. A key element of our model tests to date has been the use of integrated-light photometry to provide accurate total luminosity measurements for these binaries. However, some of the ultracool binaries with the most promising orbital motion for yielding dynamical masses lack the mid-infrared photometry needed to constrain their SEDs. This is especially crucial for the latest-type binaries (spectral types >T5) that will probe the coldest temperature regimes previously untested with dynamical masses. We propose to use IRAC to obtain the needed mid-infrared photometry for a sample of binaries that are part of our ongoing orbital monitoring program with Keck laser guide star adaptive optics. The observational effort needed to characterize these binaries' luminosities using Spitzer is much less daunting than the years of orbital monitoring needed to measure precise dynamical masses, but it is equally vital for robust tests of theory.
Einstein observations of selected close binaries and shell stars
NASA Technical Reports Server (NTRS)
Guinan, E. F.; Koch, R. H.; Plavec, M. J.
1984-01-01
Several evolved close binaries and shell stars were observed with the IPC aboard the HEAO 2 Einstein Observatory. No eclipsing target was detected, and only two of the shell binaries were detected. It is argued that there is no substantial difference in L(X) for eclipsing and non-eclipsing binaries. The close binary and shell star CX Dra was detected as a moderately strong source, and the best interpretation is that the X-ray flux arises primarily from the corona of the cool member of the binary at about the level of Algol-like or RS CVn-type sources. The residual visible-band light curve of this binary has been modeled so as to conform as well as possible with this interpretation. HD 51480 was detected as a weak source. Substantial background information from IUE and ground scanner measurements is given for this binary. The positions and flux values of several accidentally detected sources are given.
Mapping quantitative trait loci for binary trait in the F2:3 design.
Zhu, Chengsong; Zhang, Yuan-Ming; Guo, Zhigang
2008-12-01
In the analysis of inheritance of quantitative traits with low heritability, an F(2:3) design that genotypes plants in F(2) and phenotypes plants in the F(2:3) progeny is often used in plant genetics. Although statistical approaches for mapping quantitative trait loci (QTL) in the F(2:3) design have been well developed, those for binary traits of biological interest and economic importance are seldom addressed. In this study, an attempt was made to map binary trait loci (BTL) in the F(2:3) design. The fundamental idea was: the F(2) plants were genotyped, all phenotypic values of each F(2:3) progeny were measured for the binary trait, and these binary trait values and the marker genotype information were used to detect BTL under the penetrance and liability models. The proposed method was verified by a series of Monte-Carlo simulation experiments. These results showed that maximum likelihood approaches under the penetrance and liability models provide accurate estimates of the effects and the locations of BTL with high statistical power, even under low heritability. Moreover, the penetrance model is as efficient as the liability model, and the F(2:3) design is more efficient than the classical F(2) design, even though only a single progeny is collected from each F(2:3) family. With the maximum likelihood approaches under the penetrance and the liability models developed in this study, we can map binary traits as we do for quantitative traits in the F(2:3) design.
Aldars-García, Laila; Berman, María; Ortiz, Jordi; Ramos, Antonio J; Marín, Sonia
2018-06-01
The probability of growth and aflatoxin B1 (AFB1) production of 20 isolates of Aspergillus flavus were studied using a full factorial design with eight water activity levels (0.84-0.98 aw) and six temperature levels (15-40 °C). Binary data obtained from growth studies were modelled using linear logistic regression analysis as a function of temperature, water activity and time for each isolate. In parallel, AFB1 was extracted at different times from newly formed colonies (up to 20 mm in diameter). Although a total of 950 AFB1 values over time for all conditions studied were recorded, they were not considered to be enough to build probability models over time, and therefore, only models at 30 days were built. The confidence intervals of the regression coefficients of the probability of growth models showed some differences among the 20 growth models. Further, to assess the growth/no growth and AFB1/no-AFB1 production boundaries, 0.05 and 0.5 probabilities were plotted at 30 days for all of the isolates. The boundaries for growth and AFB1 showed that, in general, the conditions for growth were wider than those for AFB1 production. The probability of growth and AFB1 production seemed to be less variable among isolates than AFB1 accumulation. Apart from the AFB1 production probability models, using growth probability models for AFB1 probability predictions could be, although conservative, a suitable alternative. Predictive mycology should include a number of isolates to generate data to build predictive models and take into account the genetic diversity of the species and thus make predictions as similar as possible to real fungal food contamination. Copyright © 2017 Elsevier Ltd. All rights reserved.
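A generic sketch of the kind of growth/no-growth logistic model described above, with temperature, water activity and time as predictors on simulated data; the quadratic temperature term, the coefficients, and the example prediction point are assumptions, not the authors' fitted models.

```python
# Growth / no-growth probability model: logistic regression on temperature,
# water activity and time, then prediction at a chosen condition. Simulated
# data and coefficients; not the study's isolates or fitted models.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(4)
n = 500
df = pd.DataFrame({"temp": rng.uniform(15, 40, n),
                   "aw": rng.uniform(0.84, 0.98, n),
                   "day": rng.integers(1, 31, n)})
# toy "true" process: growth favoured by high aw, mid temperatures, later times
lp = -44 + 45 * df["aw"] + 0.3 * df["temp"] - 0.006 * df["temp"] ** 2 + 0.08 * df["day"]
df["growth"] = rng.binomial(1, 1 / (1 + np.exp(-lp)))

fit = smf.logit("growth ~ temp + I(temp**2) + aw + day", data=df).fit()
new = pd.DataFrame({"temp": [25.0], "aw": [0.90], "day": [30]})
print("P(growth) at 25 C, aw 0.90, day 30:", fit.predict(new)[0])
```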
Jović, Ozren; Smrečki, Neven; Popović, Zora
2016-04-01
A novel quantitative prediction and variable selection method called interval ridge regression (iRR) is studied in this work. The method is performed on six data sets of FTIR, two data sets of UV-vis and one data set of DSC. The obtained results show that models built with ridge regression on optimal variables selected with iRR significantly outperform models built with ridge regression on all variables in both calibration (6 out of 9 cases) and validation (2 out of 9 cases). In this study, iRR is also compared with interval partial least squares regression (iPLS). iRR outperformed iPLS in validation (insignificantly in 6 out of 9 cases and significantly in one out of 9 cases for p<0.05). Also, iRR can be a fast alternative to iPLS, especially in the case of an unknown degree of complexity of the analyzed system, i.e., if the upper limit of the number of latent variables is not easily estimated for iPLS. Adulteration of hempseed (H) oil, a well-known beneficial health nutrient, is studied in this work by mixing it with cheap and widely used oils such as soybean (So) oil, rapeseed (R) oil and sunflower (Su) oil. Binary mixture sets of hempseed oil with these three oils (HSo, HR and HSu) and a ternary mixture set of H oil, R oil and Su oil (HRSu) were considered. The obtained accuracy indicates that using iRR on FTIR and UV-vis data, each particular oil can be very successfully quantified (in all 8 cases RMSEP<1.2%). This means that FTIR-ATR coupled with iRR can very rapidly and effectively determine the level of adulteration in hempseed oil (R²>0.99). Copyright © 2015 Elsevier B.V. All rights reserved.
Constructing inverse probability weights for continuous exposures: a comparison of methods.
Naimi, Ashley I; Moodie, Erica E M; Auger, Nathalie; Kaufman, Jay S
2014-03-01
Inverse probability-weighted marginal structural models with binary exposures are common in epidemiology. Constructing inverse probability weights for a continuous exposure can be complicated by the presence of outliers, and the need to identify a parametric form for the exposure and account for nonconstant exposure variance. We explored the performance of various methods to construct inverse probability weights for continuous exposures using Monte Carlo simulation. We generated two continuous exposures and binary outcomes using data sampled from a large empirical cohort. The first exposure followed a normal distribution with homoscedastic variance. The second exposure followed a contaminated Poisson distribution, with heteroscedastic variance equal to the conditional mean. We assessed six methods to construct inverse probability weights using: a normal distribution, a normal distribution with heteroscedastic variance, a truncated normal distribution with heteroscedastic variance, a gamma distribution, a t distribution (1, 3, and 5 degrees of freedom), and a quantile binning approach (based on 10, 15, and 20 exposure categories). We estimated the marginal odds ratio for a single-unit increase in each simulated exposure in a regression model weighted by the inverse probability weights constructed using each approach, and then computed the bias and mean squared error for each method. For the homoscedastic exposure, the standard normal, gamma, and quantile binning approaches performed best. For the heteroscedastic exposure, the quantile binning, gamma, and heteroscedastic normal approaches performed best. Our results suggest that the quantile binning approach is a simple and versatile way to construct inverse probability weights for continuous exposures.
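A sketch of one of the strategies named above, stabilized weights built from normal densities with the conditional density taken from a linear model of the exposure on covariates, followed by a weighted marginal model for a binary outcome. Data are simulated, and the other candidate distributions (gamma, t, truncated normal, quantile binning) are not shown.

```python
# Stabilized inverse probability weights for a continuous exposure using normal
# densities: numerator = marginal exposure density, denominator = density of the
# exposure given covariates (from a linear model). Simulated data only.
import numpy as np
import statsmodels.api as sm
from scipy.stats import norm

rng = np.random.default_rng(5)
n = 5000
c = rng.normal(size=n)                        # a single confounder
a = 0.8 * c + rng.normal(size=n)              # continuous exposure
y = rng.binomial(1, 1 / (1 + np.exp(-(0.3 * a + 0.5 * c))))   # binary outcome

den_fit = sm.OLS(a, sm.add_constant(c)).fit()                 # exposure model
den = norm.pdf(a, loc=den_fit.fittedvalues, scale=np.sqrt(den_fit.scale))
num = norm.pdf(a, loc=a.mean(), scale=a.std())                # marginal density
w = num / den                                                 # stabilized weights

# weighted (marginal structural) logistic model for the exposure effect
msm = sm.GLM(y, sm.add_constant(a), family=sm.families.Binomial(),
             freq_weights=w).fit()
print("marginal OR per unit increase in exposure:", np.exp(msm.params[1]))
```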
Predictors associated with the willingness to take human papilloma virus vaccination.
Naing, Cho; Pereira, Joanne; Abe, Tatsuki; Eh Zhen Wei, Daniel; Rahman Bajera, Ibrizah Binti Abdul; Kavinda Perera, Undugodage Heshan
2012-04-01
Human papilloma virus vaccine is considered to be the primary form of cervical cancer prevention. The objectives were (1) to determine knowledge about, and perception of, human papilloma virus infection in relation to cervical cancer, (2) to explore the intention of the community to be vaccinated with human papilloma virus vaccine, and (3) to identify variables that could predict the likelihood of uptake of the vaccine. A cross-sectional survey was carried out in a semi-urban town in Malaysia, using a pre-tested structured questionnaire. Summary statistics, Pearson chi-square tests and a binary logistic regression were used for data analysis. A total of 232 respondents were interviewed. Overall, only a few had good knowledge related to human papilloma virus (14%) or vaccination (8%). Many had the misconception that it could be transmitted through blood transfusion (57%). Sixty percent had the intention to take the vaccination. In the binary logistic model, willingness to take the vaccination was significantly associated with 'trust that vaccination would be effective for prevention of cervical cancer' (P = 0.001), 'worries for themselves' (P < 0.001) or 'their family members' (P = 0.003), and 'Indian ethnicity' (P = 0.024). The model could fairly predict the likelihood of uptake of the vaccine (Cox & Snell R² = 0.415; Nagelkerke R² = 0.561). Results indicate that intensive health education dispelling misconceptions and addressing risk perception towards human papilloma virus infection and cervical cancer would be helpful to increase the acceptability of the vaccination program.
Asfaram, Arash; Ghaedi, Mehrorang; Yousefi, Fakhri; Dastkhoon, Mehdi
2016-11-01
The manganese-impregnated zinc sulfide nanoparticles deposited on activated carbon (ZnS:Mn-NPs-AC), which were fully synthesized and characterized, were successfully applied for the simultaneous removal of malachite green (MG) and methylene blue (MB) in binary mixtures. The effects of variables such as pH (2.0-10.0), sonication time (1-5 min), adsorbent mass (0.005-0.025 g) and MB and MG concentration (4-20 mg L(-1)) on the removal efficiency were studied by central composite design (CCD) to correlate the dye removal percentage with the above-mentioned variables; the greatest influence was seen for sonication time and adsorbent mass. Sonication time, adsorbent mass and pH, unlike the dye concentrations, were positively related to the removal percentage. Multiple regression analysis of the experimental results, together with 3-D response surface and contour plots, indicates that setting the conditions at pH 7.0, 3 min sonication time, 0.025 g Mn:ZnS-NPs-AC and 15 mg L(-1) of MB and MG leads to removal efficiencies of 99.87% and 98.56% for MG and MB, respectively. The pseudo-second-order model best describes the dye adsorption behavior, while the maximum adsorption capacities of MG and MB according to the Langmuir model were 202.43 and 191.57 mg g(-1), respectively. Copyright © 2016 Elsevier B.V. All rights reserved.
Pu, Jie; Fang, Di; Wilson, Jeffrey R
2017-02-03
The analysis of correlated binary data is commonly addressed through the use of conditional models with random effects included in the systematic component, as opposed to generalized estimating equations (GEE) models that address the random component. Since the joint distribution of the observations is usually unknown, the conditional distribution is a natural approach. Our objective was to compare the fit of different binary models for correlated data on tobacco use. We advocate that the joint modeling of the mean and dispersion may at times be just as adequate. We assessed the ability of these models to account for the intraclass correlation. In so doing, we concentrated on fitting logistic regression models to address smoking behaviors. Frequentist and Bayesian hierarchical models were used to predict conditional probabilities, and the joint modeling (GLM and GAM) models were used to predict marginal probabilities. These models were fitted to National Longitudinal Study of Adolescent to Adult Health (Add Health) data on tobacco use. We found that people were less likely to smoke if they had higher income, high school or higher education, and were religious. Individuals were more likely to smoke if they had abused drugs or alcohol, spent more time on TV and video games, and had been arrested. Moreover, individuals who drank alcohol early in life were more likely to be regular smokers. Children who experienced mistreatment from their parents were more likely to use tobacco regularly. The joint modeling of the mean and dispersion models offered a flexible and meaningful method of addressing the intraclass correlation. They do not require one to identify random effects nor to distinguish one level of the hierarchy from another. Moreover, once one can identify the significant random effects, one can obtain similar results to the random coefficient models. We found that the set of marginal models accounting for extra variation through the additional dispersion submodel produced similar results with regard to inferences and predictions. Moreover, both marginal and conditional models demonstrated similar predictive power.
Bakhtiyari, Mahmood; Mehmandar, Mohammad Reza; Mirbagheri, Babak; Hariri, Gholam Reza; Delpisheh, Ali; Soori, Hamid
2014-01-01
Risk factors of human-related traffic crashes are the most important and preventable challenges for community health due to their noteworthy burden, in developing countries in particular. The present study aims to investigate the role of human risk factors in road traffic crashes in Iran. Through a cross-sectional study using the COM 114 data collection forms, the police records of almost 600,000 crashes that occurred in 2010 are investigated. The binary logistic regression and proportional odds regression models are used. The odds ratio for each risk factor is calculated. These models are adjusted for known confounding factors including age, sex and driving time. The traffic crash reports of 537,688 men (90.8%) and 54,480 women (9.2%) are analysed. The mean age is 34.1 ± 14 years. Not maintaining eyes on the road (53.7%) and losing control of the vehicle (21.4%) are the main causes of drivers' deaths in traffic crashes within cities. Not maintaining eyes on the road is also the most frequent human risk factor for road traffic crashes outside cities. Sudden lane excursion (OR = 9.9, 95% CI: 8.2-11.9) and seat belt non-compliance (OR = 8.7, CI: 6.7-10.1), exceeding authorised speed (OR = 17.9, CI: 12.7-25.1) and exceeding safe speed (OR = 9.7, CI: 7.2-13.2) are the most significant human risk factors for traffic crashes in Iran. The high mortality rate of 39 people for every 100,000 population emphasises the importance of traffic crashes in Iran. Considering the important role of human risk factors in traffic crashes, concerted efforts are required to control dangerous driving behaviours such as exceeding speed, illegal overtaking and not maintaining eyes on the road.
Individual and binary toxicity of anatase and rutile nanoparticles towards Ceriodaphnia dubia.
Iswarya, V; Bhuvaneshwari, M; Chandrasekaran, N; Mukherjee, Amitava
2016-09-01
Increasing usage of engineered nanoparticles, especially titanium dioxide (TiO2), in various commercial products has necessitated their toxicity evaluation and risk assessment, especially in the aquatic ecosystem. In the present study, a comprehensive toxicity assessment of anatase and rutile NPs (individually as well as in a binary mixture) was carried out in a freshwater matrix on Ceriodaphnia dubia under different irradiation conditions, viz. visible and UV-A. Anatase and rutile NPs produced LC50 values of about 37.04 and 48 mg/L, respectively, under visible irradiation. However, lower LC50 values of about 22.56 (anatase) and 23.76 (rutile) mg/L were noted under UV-A irradiation. A toxic unit (TU) approach was followed to determine the concentrations of binary mixtures of anatase and rutile. The binary mixture resulted in an antagonistic and an additive effect under visible and UV-A irradiation, respectively. Of the two modeling approaches used in the study, the Marking-Dawson model was found to be more appropriate than the Abbott model for the toxicity evaluation of binary mixtures. The agglomeration of NPs played a significant role in the induction of antagonistic and additive effects by the mixture, depending on the irradiation applied. TEM and zeta potential analysis confirmed the surface interactions between anatase and rutile NPs in the mixture. Maximum uptake was noticed at 0.25 total TU of the binary mixture under visible irradiation and at 1 TU of anatase NPs for UV-A irradiation. Individual NPs showed higher uptake under UV-A than under visible irradiation. In contrast, the binary mixture showed a difference in uptake pattern based on the type of irradiation applied. Copyright © 2016 Elsevier B.V. All rights reserved.
Asymmetric Planetary Nebulae VI: the conference summary
NASA Astrophysics Data System (ADS)
De Marco, O.
2014-04-01
The Asymmetric Planetary Nebulae conference series, now in its sixth edition, aims to resolve the shaping mechanism of PN. Eighty percent of PN have non-spherical shapes, and during this conference the last nails were driven into the coffin of single-star models for non-spherical PN. Binary theories abound, but observational tests are lagging. The highlight of APN6 has been the arrival of ALMA, which has allowed us to measure magnetic fields on AGB stars systematically. AGB star halos, with their spiral patterns, are now connected to PPN and PN halos. New models give us hope that binary parameters may be decoded from these images. In the post-AGB and pre-PN evolutionary phase, the naked post-AGB stars present us with an increasingly curious puzzle as complexity is added to the phenomenologies of objects in transition between the AGB and the central star regimes. Binary central stars continue to be detected, including the first detections of longer-period binaries; however, a reliable binary fraction is still elusive. Hydro models of binary interactions still fail to give us results, with the exception of the wider types of binary interactions. More promise is shown by analytical considerations and models driven by simpler, 1D simulations such as those carried out with the code MESA. Large community efforts have given us more homogeneous datasets which will yield results for years to come. Examples are the ChanPlaN and HerPlaNe collaborations that have been working with the Chandra and Herschel space telescopes, respectively. Finally, the new kid in town is the intermediate-luminosity optical transient, a new class of events that may have contributed to forming several peculiar PN and pre-PN.
NASA Astrophysics Data System (ADS)
Almandoz, M. C.; Sancho, M. I.; Blanco, S. E.
2014-01-01
The solvatochromic behavior of sulfamethoxazole (SMX) was investigated using UV-vis spectroscopy and DFT methods in neat and binary solvent mixtures. The spectral shifts of this solute were correlated with the Kamlet and Taft parameters (α, β and π*). Multiple linear regression analysis indicates that both specific hydrogen-bond interactions and non-specific dipolar interactions play an important role in the position of the absorption maxima in neat solvents. The simulated absorption spectra obtained using TD-DFT methods were in good agreement with the experimental ones. The binary mixtures consisted of cyclohexane (Cy)-ethanol (EtOH), acetonitrile (ACN)-dimethylsulfoxide (DMSO), ACN-dimethylformamide (DMF), and aqueous mixtures containing DMSO, ACN, EtOH and MeOH as co-solvents. The index of preferential solvation was calculated as a function of solvent composition, and non-ideal characteristics are observed in all binary mixtures. In the ACN-DMSO and ACN-DMF mixtures, the results show that the solvents with higher polarity and hydrogen-bond donor ability interact preferentially with the solute. In binary mixtures containing water, the SMX molecules are solvated by the organic co-solvent (DMSO or EtOH) over the whole composition range. A synergistic effect is observed in the case of ACN-H2O and MeOH-H2O, indicating that at certain concentrations the solvents interact to form association complexes, which should be more polar than the individual solvents of the mixture.
TEMPORAL CORRELATION OF CLASSIFICATIONS IN REMOTE SENSING
A bivariate binary model is developed for estimating the change in land cover from satellite images obtained at two different times. The binary classifications of a pixel at the two times are modeled as potentially correlated random variables, conditional on the true states of th...
Quick probabilistic binary image matching: changing the rules of the game
NASA Astrophysics Data System (ADS)
Mustafa, Adnan A. Y.
2016-09-01
A Probabilistic Matching Model for Binary Images (PMMBI) is presented that predicts the probability of matching binary images with any level of similarity. The model relates the number of mappings, the amount of similarity between the images and the detection confidence. We show the advantage of using a probabilistic approach to matching in similarity space as opposed to a linear search in size space. With PMMBI a complete model is available to predict the quick detection of dissimilar binary images. Furthermore, the similarity between the images can be measured to a good degree if the images are highly similar. PMMBI shows that only a few pixels need to be compared to detect dissimilarity between images, as low as two pixels in some cases. PMMBI is image size invariant; images of any size can be matched at the same quick speed. Near-duplicate images can also be detected without much difficulty. We present tests on real images that show the prediction accuracy of the model.
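The claim that only a few pixel comparisons are needed to detect dissimilarity can be illustrated with a simple independence argument. Assuming randomly sampled pixel pairs agree independently with probability equal to the image similarity s (a simplification for illustration, not the PMMBI model itself), the number of comparisons needed to flag dissimilarity with confidence c follows directly:

```python
import math

def comparisons_needed(similarity: float, confidence: float) -> int:
    """Smallest k such that P(all k sampled pixel pairs agree) = similarity**k
    falls below 1 - confidence, i.e. a mismatch is observed with probability >= confidence.
    Assumes independently, uniformly sampled pixel pairs."""
    if not 0.0 < similarity < 1.0:
        raise ValueError("similarity must be strictly between 0 and 1")
    return math.ceil(math.log(1.0 - confidence) / math.log(similarity))

# Clearly dissimilar images (50% of pixels agree by chance) are caught almost immediately:
print(comparisons_needed(0.5, 0.99))    # 7 comparisons
# Near-duplicates need far more samples before a mismatch is likely:
print(comparisons_needed(0.999, 0.99))  # about 4600 comparisons
```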
Modeling X-ray and gamma-ray emission in the intrabinary shock of pulsar binaries
NASA Astrophysics Data System (ADS)
An, H.
2017-10-01
We present the broadband SED and light curve, and a wind interaction model, for the gamma-ray binary 1FGL J1018.6-5856 (J1018), which exhibits double peaks in its X-ray light curve. Assuming that the X-ray to low-energy gamma-ray emission is produced by synchrotron radiation and the high-energy gamma rays by inverse Compton scattering in the intrabinary shock (IBS), we model the broadband SED and light curve of J1018 using a two-component model with slow electrons in the shock and fast bulk-accelerated electrons at the skin of the shock. The model explains the broadband SED and light curve of J1018 qualitatively well. In particular, modeling the synchrotron emission constrains the orbital geometry. We discuss the potential use of the model for other pulsar binaries.
Towards a Fundamental Understanding of Short Period Eclipsing Binary Systems Using Kepler Data
NASA Astrophysics Data System (ADS)
Prsa, Andrej
Kepler's ultra-high precision photometry is revolutionizing stellar astrophysics. We are seeing intrinsic phenomena on an unprecedented scale, and interpreting them is both a challenge and an exciting privilege. Eclipsing binary stars are of particular significance for stellar astrophysics because precise modeling leads to fundamental parameters of the orbiting components: masses, radii, temperatures and luminosities to better than 1-2%. On top of that, eclipsing binaries are ideal physical laboratories for studying other physical phenomena, such as asteroseismic properties, chromospheric activity, proximity effects, mass transfer in close binaries, etc. Because of the eclipses, the basic geometry is well constrained, but a follow-up spectroscopy is required to get the dynamical masses and the absolute scale of the system. A conjunction of Kepler photometry and ground- based spectroscopy is a treasure trove for eclipsing binary star astrophysics. This proposal focuses on a carefully selected set of 100 short period eclipsing binary stars. The fundamental goal of the project is to study the intrinsic astrophysical effects typical of short period binaries in great detail, utilizing Kepler photometry and follow-up spectroscopy to devise a robust and consistent set of modeling results. The complementing spectroscopy is being secured from 3 approved and fully funded programs: the NOAO 4-m echelle spectroscopy at Kitt Peak (30 nights; PI Prsa), the 10- m Hobby-Eberly Telescope high-resolution spectroscopy (PI Mahadevan), and the 2.5-m Sloan Digital Sky Survey III spectroscopy (PI Mahadevan). The targets are prioritized by the projected scientific yield. Short period detached binaries host low-mass (K- and M- type) components for which the mass-radius relationship is sparsely populated and still poorly understood, as the radii appear up to 20% larger than predicted by the population models. We demonstrate the spectroscopic detection viability in the secondary-to-primary light ratio regime of ~1-2% for the circumbinary host system Kepler-16. Semi-detached binaries are ideal targets to study the dynamical processes such as mass flow and accretion, and the associated thermal processes such as intensity variation due to distortion of the lobe-filling component and material inflow collisions with accretion disks. Overcontact binaries are very abundant, yet their evolution and radiative properties are poorly understood and conflicting theories exist to explain their population frequency and structure. In addition, we will measure eclipse timing variations for all program binaries that attest to the presence of perturbing third bodies (stellar and substellar!) or dynamical interaction between the components. By a dedicated, detailed, manual modeling of these sets of targets, we will be able to use Kepler's ultra-high precision photometry to a rewarding scientific end. Thanks to the unprecedented quality of Kepler data, this will be a highly focused effort that maximizes the scientific yield and the reliability of the results. Our team has ample experience dealing with Kepler data (PI Prsa serves as chair of the Eclipsing Binary Working Group in the Kepler Science Team), spectroscopic follow-up (Co-Is Mahadevan and Bender both have experience with radial velocity instrumentation and large spectroscopic surveys), and eclipsing binary modeling (PI Prsa and Co-I Devinney both have a long record of theoretical and computational development of modeling tools). 
The bulk of funding we are requesting is for two postdoctoral research fellows to conduct this work at 0.5 FTE/year each, for the total of 2 years.
ROTATING STARS AND THE FORMATION OF BIPOLAR PLANETARY NEBULAE. II. TIDAL SPIN-UP
DOE Office of Scientific and Technical Information (OSTI.GOV)
García-Segura, G.; Villaver, E.; Manchado, A.
We present new binary stellar evolution models that include the effects of tidal forces, rotation, and magnetic torques with the goal of testing planetary nebulae (PNs) shaping via binary interaction. We explore whether tidal interaction with a companion can spin up the asymptotic giant branch (AGB) envelope. To do so, we have selected binary systems with main-sequence masses of 2.5 M⊙ and 0.8 M⊙ and evolve them allowing initial separations of 5, 6, 7, and 8 au. The binary stellar evolution models have been computed all the way to the PNs formation phase or until Roche lobe overflow (RLOF) is reached, whichever happens first. We show that with initial separations of 7 and 8 au, the binary avoids entering RLOF, and the AGB star reaches moderate rotational velocities at the surface (∼3.5 and ∼2 km s⁻¹, respectively) during the inter-pulse phases, but after the thermal pulses it drops to a final rotational velocity of only ∼0.03 km s⁻¹. For the closest binary separations explored, 5 and 6 au, the AGB star reaches rotational velocities of ∼6 and ∼4 km s⁻¹, respectively, when the RLOF is initiated. We conclude that the detached binary models that avoid entering the RLOF phase during the AGB will not shape bipolar PNs, since the acquired angular momentum is lost via the wind during the last two thermal pulses. This study rules out tidal spin-up in non-contact binaries as a sufficient condition to form bipolar PNs.
NASA Astrophysics Data System (ADS)
Oskinova, L. M.; Huenemoerder, D. P.; Hamann, W.-R.; Shenar, T.; Sander, A. A. C.; Ignace, R.; Todt, H.; Hainich, R.
2017-08-01
The blue hypergiant Cyg OB2 12 (B3Ia+) is a representative member of the class of very massive stars in a poorly understood evolutionary stage. We obtained its high-resolution X-ray spectrum using the Chandra observatory. PoWR model atmospheres were calculated to provide realistic wind opacities and to establish the wind density structure. We find that collisional de-excitation is the dominant mechanism depopulating the metastable upper levels of the forbidden lines of the He-like ions Si XIV and Mg XII. Comparison between the model and observations reveals that X-ray emission is produced in a dense plasma, which could reside only at the photosphere or in a colliding wind zone between binary components. The observed X-ray spectra are well-fitted by thermal plasma models, with average temperatures in excess of 10 MK. The wind speed in Cyg OB2 12 is not high enough to power such high temperatures, but the collision of two winds in a binary system can be sufficient. We used archival data to investigate the X-ray properties of other blue hypergiants. In general, stars of this class are not detected as X-ray sources. We suggest that our new Chandra observations of Cyg OB2 12 can be best explained if Cyg OB2 12 is a colliding wind binary possessing a late O-type companion. This makes Cyg OB2 12 only the second binary system among the 16 known Galactic hypergiants. This low binary fraction indicates that the blue hypergiants are likely products of massive binary evolution during which they either accreted a significant amount of mass or already merged with their companions.
NASA Astrophysics Data System (ADS)
Hinder, Ian; Buonanno, Alessandra; Boyle, Michael; Etienne, Zachariah B.; Healy, James; Johnson-McDaniel, Nathan K.; Nagar, Alessandro; Nakano, Hiroyuki; Pan, Yi; Pfeiffer, Harald P.; Pürrer, Michael; Reisswig, Christian; Scheel, Mark A.; Schnetter, Erik; Sperhake, Ulrich; Szilágyi, Bela; Tichy, Wolfgang; Wardell, Barry; Zenginoğlu, Anıl; Alic, Daniela; Bernuzzi, Sebastiano; Bode, Tanja; Brügmann, Bernd; Buchman, Luisa T.; Campanelli, Manuela; Chu, Tony; Damour, Thibault; Grigsby, Jason D.; Hannam, Mark; Haas, Roland; Hemberger, Daniel A.; Husa, Sascha; Kidder, Lawrence E.; Laguna, Pablo; London, Lionel; Lovelace, Geoffrey; Lousto, Carlos O.; Marronetti, Pedro; Matzner, Richard A.; Mösta, Philipp; Mroué, Abdul; Müller, Doreen; Mundim, Bruno C.; Nerozzi, Andrea; Paschalidis, Vasileios; Pollney, Denis; Reifenberger, George; Rezzolla, Luciano; Shapiro, Stuart L.; Shoemaker, Deirdre; Taracchini, Andrea; Taylor, Nicholas W.; Teukolsky, Saul A.; Thierfelder, Marcus; Witek, Helvi; Zlochower, Yosef
2013-01-01
The Numerical-Relativity-Analytical-Relativity (NRAR) collaboration is a joint effort between members of the numerical relativity, analytical relativity and gravitational-wave data analysis communities. The goal of the NRAR collaboration is to produce numerical-relativity simulations of compact binaries and use them to develop accurate analytical templates for the LIGO/Virgo Collaboration to use in detecting gravitational-wave signals and extracting astrophysical information from them. We describe the results of the first stage of the NRAR project, which focused on producing an initial set of numerical waveforms from binary black holes with moderate mass ratios and spins, as well as one non-spinning binary configuration which has a mass ratio of 10. All of the numerical waveforms are analysed in a uniform and consistent manner, with numerical errors evaluated using an analysis code created by members of the NRAR collaboration. We compare previously-calibrated, non-precessing analytical waveforms, notably the effective-one-body (EOB) and phenomenological template families, to the newly-produced numerical waveforms. We find that when the binary's total mass is ˜100-200M⊙, current EOB and phenomenological models of spinning, non-precessing binary waveforms have overlaps above 99% (for advanced LIGO) with all of the non-precessing-binary numerical waveforms with mass ratios ⩽4, when maximizing over binary parameters. This implies that the loss of event rate due to modelling error is below 3%. Moreover, the non-spinning EOB waveforms previously calibrated to five non-spinning waveforms with mass ratio smaller than 6 have overlaps above 99.7% with the numerical waveform with a mass ratio of 10, without even maximizing on the binary parameters.
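The overlaps quoted above are normalized noise-weighted inner products maximized over time and phase shifts. The sketch below computes a white-noise (flat-spectrum) overlap between two toy chirps, maximizing over the relative time shift with an FFT and omitting phase maximization for brevity; it illustrates the quantity only and is not an NRAR analysis code.

```python
import numpy as np

def overlap(h1, h2):
    """White-noise overlap between two real waveforms, maximized over the
    relative (circular) time shift via the FFT."""
    n = len(h1)
    H1, H2 = np.fft.rfft(h1), np.fft.rfft(h2)
    corr = np.fft.irfft(H1 * np.conj(H2), n)        # inner product at every time shift
    norm = np.sqrt(np.dot(h1, h1) * np.dot(h2, h2))
    return np.max(corr) / norm

t = np.linspace(0.0, 1.0, 4096, endpoint=False)
f0, df = 30.0, 40.0
h_a = np.sin(2 * np.pi * (f0 + df * t) * t)          # toy chirp standing in for a waveform
h_b = np.sin(2 * np.pi * (f0 + 1.02 * df * t) * t)   # slightly mismatched "model"
print("overlap:", overlap(h_a, h_b))
```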
DOE Office of Scientific and Technical Information (OSTI.GOV)
Oskinova, L. M.; Hamann, W.-R.; Shenar, T.
The blue hypergiant Cyg OB2 12 (B3Ia+) is a representative member of the class of very massive stars in a poorly understood evolutionary stage. We obtained its high-resolution X-ray spectrum using the Chandra observatory. PoWR model atmospheres were calculated to provide realistic wind opacities and to establish the wind density structure. We find that collisional de-excitation is the dominant mechanism depopulating the metastable upper levels of the forbidden lines of the He-like ions Si XIV and Mg XII. Comparison between the model and observations reveals that X-ray emission is produced in a dense plasma, which could reside only at the photosphere or in a colliding wind zone between binary components. The observed X-ray spectra are well-fitted by thermal plasma models, with average temperatures in excess of 10 MK. The wind speed in Cyg OB2 12 is not high enough to power such high temperatures, but the collision of two winds in a binary system can be sufficient. We used archival data to investigate the X-ray properties of other blue hypergiants. In general, stars of this class are not detected as X-ray sources. We suggest that our new Chandra observations of Cyg OB2 12 can be best explained if Cyg OB2 12 is a colliding wind binary possessing a late O-type companion. This makes Cyg OB2 12 only the second binary system among the 16 known Galactic hypergiants. This low binary fraction indicates that the blue hypergiants are likely products of massive binary evolution during which they either accreted a significant amount of mass or already merged with their companions.
Kosegarten, Carlos E; Ramírez-Corona, Nelly; Mani-López, Emma; Palou, Enrique; López-Malo, Aurelio
2017-01-02
A Box-Behnken design was used to determine the effect of protein concentration (0, 5, or 10 g of casein/100 g), fat (0, 3, or 6 g of corn oil/100 g), water activity (a_w: 0.900, 0.945, or 0.990), pH (3.5, 5.0, or 6.5), concentration of cinnamon essential oil (CEO: 0, 200, or 400 μL/kg) and incubation temperature (15, 25, or 35 °C) on the growth of Aspergillus flavus during 50 days of incubation. The mold response under the evaluated conditions was modeled by the modified Gompertz equation, logistic regression, and a time-to-detection model. The obtained polynomial regression models allowed the significant coefficients (p < 0.05) for the linear, quadratic and interaction effects on the Gompertz equation's parameters to be identified, and adequately described (R² > 0.967) the studied mold responses. After 50 days of incubation, every tested model system was classified according to the observed response as 1 (growth) or 0 (no growth); a binary logistic regression was then utilized to model the A. flavus growth interface, allowing the probability of mold growth under selected combinations of the tested factors to be predicted. The time-to-detection model was utilized to estimate the time at which visible A. flavus growth begins. Water activity, temperature, and CEO concentration were the most important factors affecting fungal growth. There is a range of possible combinations that may induce growth, such that the incubation conditions and the amount of essential oil necessary for fungal growth inhibition strongly depend on the protein and fat concentrations as well as on the pH of the studied model systems. The probabilistic model and the time-to-detection model constitute another option to determine appropriate storage/processing conditions and to accurately predict the probability and/or the time at which A. flavus growth occurs. Copyright © 2016 Elsevier B.V. All rights reserved.
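The growth/no-growth interface described above is an ordinary binary logistic regression on the design factors. The sketch below, with simulated observations standing in for the Box-Behnken results and assumed variable names, shows how the probability of A. flavus growth at a chosen factor combination could be obtained; it is not the authors' analysis code.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 200
df = pd.DataFrame({
    "aw": rng.choice([0.900, 0.945, 0.990], n),   # water activity
    "temp": rng.choice([15, 25, 35], n),          # incubation temperature, deg C
    "ceo": rng.choice([0, 200, 400], n),          # cinnamon essential oil, uL/kg
})
# Simulated growth indicator loosely favouring high a_w, high temperature and low CEO.
logit_p = 40 * (df["aw"] - 0.93) + 0.08 * (df["temp"] - 25) - 0.006 * df["ceo"]
df["growth"] = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

fit = smf.logit("growth ~ aw + temp + ceo", data=df).fit(disp=0)

# Predicted probability of growth at a selected combination of the tested factors.
new = pd.DataFrame({"aw": [0.945], "temp": [25], "ceo": [400]})
print(fit.predict(new))
```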
On hydrodynamic phase field models for binary fluid mixtures
NASA Astrophysics Data System (ADS)
Yang, Xiaogang; Gong, Yuezheng; Li, Jun; Zhao, Jia; Wang, Qi
2018-05-01
Two classes of thermodynamically consistent hydrodynamic phase field models have been developed for binary fluid mixtures of incompressible viscous fluids of possibly different densities and viscosities. One is quasi-incompressible, while the other is incompressible. For the same binary fluid mixture of two incompressible viscous fluid components, which one is more appropriate? To answer this question, we conduct a comparative study in this paper. First, we revisit their derivation, conservation and energy dissipation properties and show that the quasi-incompressible model conserves both mass and linear momentum, while the incompressible one does not. We then show, in a linear stability analysis, that the quasi-incompressible model is sensitive to the density deviation of the fluid components, while the incompressible model is not. Second, we conduct a numerical investigation of coarsening or coalescent dynamics of protuberances using the two models. We find that they can predict quite different transient dynamics depending on the initial conditions and the density difference, although they predict essentially the same quasi-steady results in some cases. This study thus casts doubt on the applicability of the incompressible model for describing the dynamics of binary mixtures of two incompressible viscous fluids, especially when the two fluid components have a large density deviation.
Accommodating Binary and Count Variables in Mediation: A Case for Conditional Indirect Effects
ERIC Educational Resources Information Center
Geldhof, G. John; Anthony, Katherine P.; Selig, James P.; Mendez-Luck, Carolyn A.
2018-01-01
The existence of several accessible sources has led to a proliferation of mediation models in the applied research literature. Most of these sources assume endogenous variables (e.g., M, and Y) have normally distributed residuals, precluding models of binary and/or count data. Although a growing body of literature has expanded mediation models to…
ERIC Educational Resources Information Center
Honda, Hidehito; Matsuka, Toshihiko; Ueda, Kazuhiro
2017-01-01
Some researchers on binary choice inference have argued that people make inferences based on simple heuristics, such as recognition, fluency, or familiarity. Others have argued that people make inferences based on available knowledge. To examine the boundary between heuristic and knowledge usage, we examine binary choice inference processes in…
The formation of Kuiper-belt binaries through exchange reactions.
Funato, Yoko; Makino, Junichiro; Hut, Piet; Kokubo, Eiichiro; Kinoshita, Daisuke
2004-02-05
Recent observations have revealed that an unexpectedly high fraction--a few per cent--of the trans-Neptunian objects (TNOs) that inhabit the Kuiper belt are binaries. The components have roughly equal masses, with very eccentric orbits that are wider than a hundred times the radius of the primary. Standard theories of binary asteroid formation tend to produce close binaries with circular orbits, so two models have been proposed to explain the unique characteristics of the TNOs. Both models, however, require extreme assumptions regarding the size distribution of the TNOs. Here we report a mechanism that is capable of producing binary TNOs with the observed properties during the early stages of their formation and growth. The only required assumption is that the TNOs were initially formed through gravitational instabilities in the protoplanetary dust disk. The basis of the mechanism is an exchange reaction in which a binary whose primary component is much more massive than the secondary interacts with a third body, whose mass is comparable to that of the primary. The low-mass secondary component is ejected and replaced by the third body in a wide but eccentric orbit.
Assessing Shape Characteristics of Jupiter Trojans in the Kepler Campaign 6 Field
NASA Astrophysics Data System (ADS)
Sharkey, Benjamin; Ryan, Erin L.; Woodward, Charles E.
2017-10-01
We report estimates of spin pole orientations and body-centric axis ratios of nine Jupiter Trojan asteroids through convex shape models derived from Kepler K2 photometry. Our sample contains single-component as well as candidate binary systems (identified through lightcurve features). Photometric baselines on the targets covered 7 to 93 full rotation periods. By incorporating a bias against highly elongated physical shapes, spin vector orientations of single-component systems were constrained to several discrete regions. Single-component convex models failed to converge on two binary candidates while two others demonstrated pronounced tapering that may be consistent with concavities of contact binaries. Further work to create two-component models is likely necessary to constrain the candidate binary targets. We find that Kepler K2 photometry provides robust datasets capable of providing detailed information on physical shape parameters of Jupiter Trojans.
NASA Astrophysics Data System (ADS)
Jacobson, S.; Scheeres, D.; Rossi, A.; Marzari, F.; Davis, D.
2014-07-01
From the results of a comprehensive asteroid-population-evolution model, we conclude that the YORP-induced rotational-fission hypothesis has strong repercussions for the small size end of the main-belt asteroid size-frequency distribution and is consistent with observed asteroid-population statistics and with the observed sub-populations of binary asteroids, asteroid pairs and contact binaries. The foundation of this model is the asteroid-rotation model of Marzari et al. (2011) and Rossi et al. (2009), which incorporates both the YORP effect and collisional evolution. This work adds to that model the rotational fission hypothesis (i.e. when the rotation rate exceeds a critical value, erosion and binary formation occur; Scheeres 2007) and binary-asteroid evolution (Jacobson & Scheeres, 2011). The YORP-effect timescale for large asteroids with diameters D ≳ 6 km is longer than the collision timescale in the main belt, thus the frequency of large asteroids is determined by a collisional equilibrium (e.g. Bottke 2005), but for small asteroids with diameters D ≲ 6 km, the asteroid-population evolution model confirms that YORP-induced rotational fission destroys small asteroids more frequently than collisions. Therefore, the frequency of these small asteroids is determined by an equilibrium between the creation of new asteroids out of the impact debris of larger asteroids and the destruction of these asteroids by YORP-induced rotational fission. By introducing a new source of destruction that varies strongly with size, YORP-induced rotational fission alters the slope of the size-frequency distribution. Using the outputs of the asteroid-population evolution model and a 1-D collision evolution model, we can generate this new size-frequency distribution and it matches the change in slope observed by the SKADS survey (Gladman 2009). This agreement is achieved with either an accretional power-law or a truncated "Asteroids were Born Big" size-frequency distribution (Weidenschilling 2010, Morbidelli 2009). The binary-asteroid evolution model is highly constrained by the modeling done in Jacobson & Scheeres, and therefore the asteroid-population evolution model has only two significant free parameters: the ratio of low-to-high-mass-ratio binaries formed after rotational fission events and the mean strength of the binary YORP (BYORP) effect. Using this model, we successfully reproduce the observed small-asteroid sub-populations, which orthogonally constrain the two free parameters. We find the outcome of rotational fission most likely produces an initial mass-ratio fraction that is four to eight times as likely to produce high-mass-ratio systems as low-mass-ratio systems, which is consistent with rotational fission creating binary systems in a flat distribution with respect to mass ratio. We also find that the mean of the log-normal BYORP coefficient distribution is B ≈ 10⁻².
Ranasinghe, Priyanga; Perera, Yashasvi S; Lamabadusuriya, Dilusha A; Kulatunga, Supun; Jayawardana, Naveen; Rajapakse, Senaka; Katulanda, Prasad
2011-08-04
Complaints of arms, neck and shoulders (CANS) are common among computer office workers. We evaluated an aetiological model with physical/psychosocial risk factors. We invited 2,500 computer office workers to the study. Data on the prevalence and risk factors of CANS were collected with the validated Maastricht Upper Extremity Questionnaire. Workstations were evaluated with the Occupational Safety and Health Administration (OSHA) Visual Display Terminal workstation checklist. Participants' knowledge and awareness were evaluated by a set of expert-validated questions. A binary logistic regression analysis investigated relationships/correlations between risk factors and symptoms. The sample size was 2,210; the mean age was 30.8 ± 8.1 years and 50.8% were males. The 1-year prevalence of CANS was 56.9%; the commonest region of complaint was the forearm/hand (42.6%), followed by the neck (36.7%) and shoulder/arm (32.0%). Of those with CANS, 22.7% had taken treatment from a health care professional, and only in 1.1% of those seeking medical advice had an occupation-related injury been suspected/diagnosed. In addition, 9.3% reported CANS-related absenteeism from work, while 15.4% reported CANS causing disruption of normal activities. A majority of the evaluated workstations, among all participants (88.4%) and among those with CANS (91.9%), were OSHA non-compliant. In the binary logistic regression analyses, female gender, daily computer usage, incorrect body posture, bad work habits, work overload, poor social support and poor ergonomic knowledge were associated with CANS and its severity. In a multiple logistic regression analysis controlling for age, gender and duration of occupation, incorrect body posture, bad work habits and daily computer usage were significant independent predictors of CANS. The prevalence of work-related CANS among computer office workers in Sri Lanka, a developing South Asian country, is high and comparable to the prevalence in developed countries. Work-related physical factors, psychosocial factors and lack of awareness were all important associations of CANS, and effective preventive strategies need to address all three areas.
Rubio-Álvarez, Ana; Molina-Alarcón, Milagros; Arias-Arias, Ángel; Hernández-Martínez, Antonio
2018-03-01
Postpartum haemorrhage is one of the leading causes of maternal morbidity and mortality worldwide. Despite the use of uterotonic agents as a preventive measure, it remains a challenge to identify those women who are at increased risk of postpartum bleeding. The aim was to develop and validate a predictive model to assess the risk of excessive bleeding in women with vaginal birth. This was a retrospective cohort study carried out at the "Mancha-Centro Hospital" (Spain). The elaboration of the predictive model was based on a derivation cohort consisting of 2336 women between 2009 and 2011; for validation purposes, a prospective cohort of 953 women between 2013 and 2014 was employed. Women with antenatal fetal demise, multiple pregnancies and gestations under 35 weeks were excluded. We used multivariate analysis with binary logistic regression, ridge regression and areas under the receiver operating characteristic curves to determine the predictive ability of the proposed model. There were 197 (8.43%) women with excessive bleeding in the derivation cohort and 63 (6.61%) women in the validation cohort. Predictive factors in the final model were: maternal age, primiparity, duration of the first and second stages of labour, neonatal birth weight and antepartum haemoglobin levels. The predictive ability of this model in the derivation cohort was 0.90 (95% CI: 0.85-0.93), while it remained 0.83 (95% CI: 0.74-0.92) in the validation cohort. This predictive model proved to have excellent predictive ability in the derivation cohort, and its validation in a later population equally showed good predictive ability. This model can be employed to identify women with a higher risk of postpartum haemorrhage. Copyright © 2017 Elsevier Ltd. All rights reserved.
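A minimal sketch of the derivation/validation workflow described above: fit a binary logistic regression on a derivation cohort and report the area under the ROC curve on an independent validation cohort. The predictors and simulated data are placeholders, not the study's variables.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(2)

def make_cohort(n):
    # Four continuous predictors standing in for e.g. age, parity, labour duration, haemoglobin.
    X = rng.normal(size=(n, 4))
    logits = X @ np.array([0.8, -0.5, 0.6, -0.7]) - 2.5
    y = rng.binomial(1, 1 / (1 + np.exp(-logits)))
    return X, y

X_derive, y_derive = make_cohort(2336)   # derivation cohort size from the abstract
X_valid, y_valid = make_cohort(953)      # validation cohort size from the abstract

model = LogisticRegression(max_iter=1000).fit(X_derive, y_derive)

auc_derive = roc_auc_score(y_derive, model.predict_proba(X_derive)[:, 1])
auc_valid = roc_auc_score(y_valid, model.predict_proba(X_valid)[:, 1])
print(f"derivation AUC = {auc_derive:.2f}, validation AUC = {auc_valid:.2f}")
```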
Amiri, Mohammadreza; Majid, Hazreen Abdul; Hairi, FarizahMohd; Thangiah, Nithiah; Bulgiba, Awang; Su, Tin Tin
2014-01-01
The objectives are to assess the prevalence and determinants of cardiovascular disease (CVD) risk factors among residents of Community Housing Projects in metropolitan Kuala Lumpur, Malaysia. Using simple random sampling, we selected and surveyed 833 households comprising 3,722 individuals. Of the 2,360 adults, 50.5% participated in blood sampling and anthropometric measurement sessions. Univariate and bivariate data analyses and multivariate binary logistic regression were applied to identify demographic and socioeconomic determinants of having at least one CVD risk factor. While obesity (54.8%), hypercholesterolemia (51.5%), and hypertension (39.3%) were the most common CVD risk factors among the low-income respondents, smoking (16.3%), diabetes mellitus (7.8%) and alcohol consumption (1.4%) were the least prevalent. Finally, the results from the multivariate binary logistic model illustrated that, compared to the Malays, the Indians were 41% less likely to have at least one of the CVD risk factors (OR = 0.59; 95% CI: 0.37-0.93). In conclusion, the low-income individuals were at higher risk of developing CVDs. Prospective policies addressing preventive actions and increased awareness, focusing on low-income communities, are highly recommended and should consider age, gender, ethnic background, and occupation class.
Mitra, Ruchira; Chaudhuri, Surabhi; Dutta, Debjani
2017-01-01
In the present investigation, the growth kinetics of Kocuria marina DAGII during batch production of β-cryptoxanthin (β-CRX) were studied by considering the effect of glucose and maltose as single and binary substrates. The importance of a mixed substrate over a single substrate is emphasised in the present study. Different mathematical models, namely the logistic model for cell growth, the logistic mass balance equation for substrate consumption and the Luedeking-Piret model for β-CRX production, were successfully implemented. Model-based analyses of the single-substrate experiments suggested that concentrations of glucose and maltose higher than 7.5 and 10.0 g/L, respectively, inhibited growth and β-CRX production by K. marina DAGII. The Han and Levenspiel model and the Luong product inhibition model accurately described the cell growth in the glucose and maltose substrate systems, with R² values of 0.9989 and 0.9998, respectively. The effect of glucose and maltose as a binary substrate was further investigated; the binary substrate kinetics was well described using the sum-kinetics with interaction parameters model. The results of the production kinetics revealed that the presence of the binary substrate in the cultivation medium increased the biomass and β-CRX yields significantly. This study is the first detailed investigation of the kinetic behaviour of K. marina DAGII during β-CRX production. The parameters obtained in the study may be helpful for developing strategies for the commercial production of β-CRX by K. marina DAGII.
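The single-substrate framework named above (logistic growth, a logistic mass balance for the substrate and Luedeking-Piret product formation) can be written as a small ODE system. The parameter values below are arbitrary placeholders, not the fitted K. marina DAGII values.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Assumed, purely illustrative parameters
mu_max, X_max = 0.25, 3.0        # 1/h, g/L
Y_xs, m_s = 0.5, 0.01            # biomass yield on substrate, maintenance coefficient
alpha, beta = 2.0, 0.005         # Luedeking-Piret growth- and non-growth-associated terms

def rhs(t, y):
    X, S, P = y
    dXdt = mu_max * X * (1.0 - X / X_max)          # logistic growth
    dSdt = -(1.0 / Y_xs) * dXdt - m_s * X          # logistic mass balance for substrate
    dPdt = alpha * dXdt + beta * X                 # Luedeking-Piret product formation
    return [dXdt, dSdt, dPdt]

sol = solve_ivp(rhs, (0.0, 48.0), [0.1, 7.5, 0.0], dense_output=True)
X, S, P = sol.sol(48.0)
print(f"after 48 h: biomass {X:.2f} g/L, residual substrate {S:.2f} g/L, product {P:.2f} (arb. units)")
```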
Predicting drug court outcome among amphetamine-using participants.
Wu, Lora J; Altshuler, Sandra J; Short, Robert A; Roll, John M
2012-06-01
Amphetamine use and abuse carry with it substantial social costs. Although there is a perception that amphetamine users are more difficult to treat than other substance users, drug courts have been used to effectively address drug-related crimes and hold the potential to lessen the impact of amphetamine abuse through efficacious treatment and rehabilitation. The objective of this study was to identify predictors of drug court outcome among amphetamine-using participants. A drug court database was obtained (N = 540) and amphetamine-using participants (n= 341) identified. Multivariate binary regression models run for the amphetamine-using participants identified being employed and being a parent as predictive of successful completion of the program, whereas being sanctioned to jail during the program was inversely related to program completion. Copyright © 2012 Elsevier Inc. All rights reserved.
Kinetics of hydrogen peroxide decomposition by catalase: hydroxylic solvent effects.
Raducan, Adina; Cantemir, Anca Ruxandra; Puiu, Mihaela; Oancea, Dumitru
2012-11-01
The effect of water-alcohol (methanol, ethanol, propan-1-ol, propan-2-ol, ethane-1,2-diol and propane-1,2,3-triol) binary mixtures on the kinetics of hydrogen peroxide decomposition in the presence of bovine liver catalase is investigated. In all solvents, the activity of catalase is smaller than in water. The results are discussed on the basis of a simple kinetic model. The kinetic constants for product formation through enzyme-substrate complex decomposition and for inactivation of catalase are estimated. The organic solvents are characterized by several physical properties: dielectric constant (D), hydrophobicity (log P), concentration of hydroxyl groups ([OH]), polarizability (α), Kamlet-Taft parameter (β) and Kosower parameter (Z). The relationships between the initial rate, kinetic constants and medium properties are analyzed by linear and multiple linear regression.
More caregiving, less working: caregiving roles and gender difference.
Lee, Yeonjung; Tang, Fengyan
2015-06-01
This study examined the relationship of caregiving roles to labor force participation using nationally representative data from the Health and Retirement Study. The sample was composed of men and women aged 50 to 61 years (N = 5,119). Caregiving roles included caregiving for a spouse, parents, and grandchildren; a summary of the three caregiving roles was used to indicate multiple caregiving roles. Bivariate analyses using chi-square and t tests and binary logistic regression models were applied. Results show that women caregivers for parents and/or grandchildren were less likely to be in the labor force than non-caregivers and that caregiving responsibility was not related to labor force participation for the sample of men. Findings have implications for supporting family caregivers, especially women, in balancing work and caregiving commitments. © The Author(s) 2013.
Speiser, Jaime Lynn; Lee, William M; Karvellas, Constantine J
2015-01-01
Assessing prognosis for acetaminophen-induced acute liver failure (APAP-ALF) patients often presents significant challenges. King's College (KCC) has been validated on hospital admission, but little has been published on later phases of illness. We aimed to improve determinations of prognosis both at the time of and following admission for APAP-ALF using Classification and Regression Tree (CART) models. CART models were applied to US ALFSG registry data to predict 21-day death or liver transplant early (on admission) and post-admission (days 3-7) for 803 APAP-ALF patients enrolled 01/1998-09/2013. Accuracy in prediction of outcome (AC), sensitivity (SN), specificity (SP), and area under receiver-operating curve (AUROC) were compared between 3 models: KCC (INR, creatinine, coma grade, pH), CART analysis using only KCC variables (KCC-CART) and a CART model using new variables (NEW-CART). Traditional KCC yielded 69% AC, 90% SP, 27% SN, and 0.58 AUROC on admission, with similar performance post-admission. KCC-CART at admission offered predictive 66% AC, 65% SP, 67% SN, and 0.74 AUROC. Post-admission, KCC-CART had predictive 82% AC, 86% SP, 46% SN and 0.81 AUROC. NEW-CART models using MELD (Model for end stage liver disease), lactate and mechanical ventilation on admission yielded predictive 72% AC, 71% SP, 77% SN and AUROC 0.79. For later stages, NEW-CART (MELD, lactate, coma grade) offered predictive AC 86%, SP 91%, SN 46%, AUROC 0.73. CARTs offer simple prognostic models for APAP-ALF patients, which have higher AUROC and SN than KCC, with similar AC and negligibly worse SP. Admission and post-admission predictions were developed. • Prognostication in acetaminophen-induced acute liver failure (APAP-ALF) is challenging beyond admission • Little has been published regarding the use of King's College Criteria (KCC) beyond admission and KCC has shown limited sensitivity in subsequent studies • Classification and Regression Tree (CART) methodology allows the development of predictive models using binary splits and offers an intuitive method for predicting outcome, using processes familiar to clinicians • Data from the ALFSG registry suggested that CART prognosis models for the APAP population offer improved sensitivity and model performance over traditional regression-based KCC, while maintaining similar accuracy and negligibly worse specificity • KCC-CART models offered modest improvement over traditional KCC, with NEW-CART models performing better than KCC-CART particularly at late time points.
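A hedged sketch of the CART-style workflow: grow a shallow classification tree on admission variables and report the accuracy, sensitivity, specificity and AUROC quoted above. The simulated data and variable names are placeholders, not ALFSG registry data.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score, confusion_matrix, roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
n = 803                                    # cohort size from the abstract
X = np.column_stack([
    rng.normal(25, 8, n),                  # MELD-like score
    rng.lognormal(1.0, 0.6, n),            # lactate
    rng.integers(0, 2, n),                 # mechanical ventilation (0/1)
])
logits = 0.15 * (X[:, 0] - 25) + 0.5 * np.log(X[:, 1]) + 1.0 * X[:, 2] - 1.0
y = rng.binomial(1, 1 / (1 + np.exp(-logits)))   # simulated 21-day death/transplant flag

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_tr, y_tr)

pred = tree.predict(X_te)
tn, fp, fn, tp = confusion_matrix(y_te, pred).ravel()
print("accuracy   ", accuracy_score(y_te, pred))
print("sensitivity", tp / (tp + fn))
print("specificity", tn / (tn + fp))
print("AUROC      ", roc_auc_score(y_te, tree.predict_proba(X_te)[:, 1]))
```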
NASA Astrophysics Data System (ADS)
Shi, Yu; Wang, Yue; Xu, Shijie
2017-11-01
Binary systems are quite common within the populations of near-Earth asteroids, main-belt asteroids, and Kuiper belt asteroids. The dynamics of binary systems, which can be modeled as the full two-body problem, is a fundamental problem for their evolution and the design of relevant space missions. This paper proposes a new shape-based model for the mutual gravitational potential of binary asteroids, differing from prior approaches such as inertia integrals, spherical harmonics, or symmetric trace-free tensors. One asteroid is modeled as a homogeneous polyhedron, while the other is modeled as an extended rigid body with arbitrary mass distribution. Since the potential of the polyhedron is precisely described in a closed form, the mutual gravitational potential can be formulated as a volume integral over the extended body. By using Taylor expansion, the mutual potential is then derived in terms of inertia integrals of the extended body, derivatives of the polyhedron's potential, and the relative location and orientation between the two bodies. The gravitational forces and torques acting on the two bodies described in the body-fixed frame of the polyhedron are derived in the form of a second-order expansion. The gravitational model is then used to simulate the evolution of the binary asteroid (66391) 1999 KW4, and compared with previous results in the literature.
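A schematic version of the expansion described above, written under two simplifying assumptions (the extended body's reference point is its centre of mass and the series is truncated at second order); it shows the structure of the result rather than the paper's full derivation. With $U_P$ the closed-form potential field of the polyhedron, $\mathbf{R}$ the relative position, $\mathbf{A}$ the relative attitude matrix, and $\rho'$, $m'$ the extended body's density and mass:

```latex
U(\mathbf{R},\mathbf{A})
  = \int_{B'} \rho'(\mathbf{r}')\, U_P\!\left(\mathbf{R} + \mathbf{A}\mathbf{r}'\right) \mathrm{d}V'
  \;\approx\; m'\, U_P(\mathbf{R})
  \;+\; \frac{1}{2}\,
        \frac{\partial^2 U_P}{\partial x_i\, \partial x_j}\bigg|_{\mathbf{R}}
        A_{ik} A_{jl}\, J'_{kl},
\qquad
J'_{kl} = \int_{B'} \rho'(\mathbf{r}')\, r'_k\, r'_l \,\mathrm{d}V' .
```

The first-order term vanishes because the first-order inertia integrals are zero about the centre of mass; forces and torques then follow by differentiating $U$ with respect to $\mathbf{R}$ and $\mathbf{A}$.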
PHYSICS OF ECLIPSING BINARIES. II. TOWARD THE INCREASED MODEL FIDELITY
DOE Office of Scientific and Technical Information (OSTI.GOV)
Prša, A.; Conroy, K. E.; Horvat, M.
The precision of photometric and spectroscopic observations has been systematically improved in the last decade, mostly thanks to space-borne photometric missions and ground-based spectrographs dedicated to finding exoplanets. The field of eclipsing binary stars strongly benefited from this development. Eclipsing binaries serve as critical tools for determining fundamental stellar properties (masses, radii, temperatures, and luminosities), yet the models are not capable of reproducing observed data well, either because of the missing physics or because of insufficient precision. This led to a predicament where radiative and dynamical effects, insofar buried in noise, started showing up routinely in the data, but were not accounted for in the models. PHOEBE (PHysics Of Eclipsing BinariEs; http://phoebe-project.org) is an open source modeling code for computing theoretical light and radial velocity curves that addresses both problems by incorporating missing physics and by increasing the computational fidelity. In particular, we discuss triangulation as a superior surface discretization algorithm, meshing of rotating single stars, light travel time effects, advanced phase computation, volume conservation in eccentric orbits, and improved computation of local intensity across the stellar surfaces that includes the photon-weighted mode, the enhanced limb darkening treatment, the better reflection treatment, and Doppler boosting. Here we present the concepts on which PHOEBE is built and proofs of concept that demonstrate the increased model fidelity.
The Eclipsing Binary On-Line Atlas (EBOLA)
NASA Astrophysics Data System (ADS)
Bradstreet, D. H.; Steelman, D. P.; Sanders, S. J.; Hargis, J. R.
2004-05-01
In conjunction with the upcoming release of Binary Maker 3.0, an extensive on-line database of eclipsing binaries is being made available. The purposes of the atlas are to: (1) allow quick and easy access to information on published eclipsing binaries; (2) amass a consistent database of light and radial velocity curve solutions to aid in solving new systems; (3) provide invaluable querying capabilities on all of the parameters of the systems so that informative research can be quickly accomplished on a multitude of published results; (4) aid observers in establishing new observing programs based upon stars needing new light and/or radial velocity curves; (5) encourage workers to submit their published results so that others may have easy access to their work; and (6) provide a vast but easily accessible storehouse of information on eclipsing binaries to accelerate the process of understanding analysis techniques and current work in the field. The database will eventually consist of all published eclipsing binaries with light curve solutions. The following information and data will be supplied whenever available for each binary: original light curves in all bandpasses, original radial velocity observations, light curve parameters, RA and Dec, V-magnitudes, spectral types, color indices, periods, binary type, 3D representation of the system near quadrature, plots of the original light curves and synthetic models, plots of the radial velocity observations with theoretical models, and Binary Maker 3.0 data files (parameter, light curve, radial velocity). The pertinent references for each star are also given with hyperlinks directly to the papers via the NASA Abstract website for downloading, if available. In addition the Atlas has extensive searching options so that workers can specifically search for binaries with specific characteristics. The website has more than 150 systems already uploaded. The URL for the site is http://ebola.eastern.edu/.
Binary encoding of multiplexed images in mixed noise.
Lalush, David S
2008-09-01
Binary coding of multiplexed signals and images has been studied in the context of spectroscopy with models of either purely constant or purely proportional noise, and has been shown to result in improved noise performance under certain conditions. We consider the case of mixed noise in an imaging system consisting of multiple individually-controllable sources (X-ray or near-infrared, for example) shining on a single detector. We develop a mathematical model for the noise in such a system and show that the noise is dependent on the properties of the binary coding matrix and on the average number of sources used for each code. Each binary matrix has a characteristic linear relationship between the ratio of proportional-to-constant noise and the noise level in the decoded image. We introduce a criterion for noise level, which is minimized via a genetic algorithm search. The search procedure results in the discovery of matrices that outperform the Hadamard S-matrices at certain levels of mixed noise. Simulation of a seven-source radiography system demonstrates that the noise model predicts trends and rank order of performance in regions of nonuniform images and in a simple tomosynthesis reconstruction. We conclude that the model developed provides a simple framework for analysis, discovery, and optimization of binary coding patterns used in multiplexed imaging systems.
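The noise criterion described above can be explored numerically. The sketch below adopts a simple mixed-noise model (measurement variance = constant term + term proportional to the detected signal, an assumption mirroring the abstract's description), computes the mean decoded variance for a given binary coding matrix, and uses a crude random search in place of the paper's genetic algorithm.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 7                                        # number of individually controllable sources

def decoded_variance(W, x, sigma_c=1.0, sigma_p=0.1):
    """Mean variance of the decoded source intensities x for binary code W.
    Measurement i is assumed to have variance sigma_c**2 + sigma_p**2 * (W @ x)[i]."""
    Winv = np.linalg.inv(W)
    meas_var = sigma_c**2 + sigma_p**2 * (W @ x)
    # Cov(x_hat) = Winv @ diag(meas_var) @ Winv.T; keep only the diagonal.
    return float(np.mean((Winv**2) @ meas_var))

x = np.full(n, 10.0)                         # flat test signal
identity = np.eye(n)                         # one-source-at-a-time reference

best_W, best_v = None, np.inf
for _ in range(20000):                       # random search standing in for the genetic algorithm
    W = rng.integers(0, 2, size=(n, n)).astype(float)
    if abs(np.linalg.det(W)) < 0.5:          # skip singular codes
        continue
    v = decoded_variance(W, x)
    if v < best_v:
        best_W, best_v = W, v

print("single-source decoded variance:", decoded_variance(identity, x))
print("best multiplexed decoded variance found:", best_v)
```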
Binary Model for the Heartbeat Star System KIC 4142768
NASA Astrophysics Data System (ADS)
Manuel, Joseph; Hambleton, Kelly
2018-01-01
Heartbeat stars are a class of eccentric (e > 0.2) binary systems that undergo strong tidal forces. These tidal forces cause the shape of each star and the temperature across the stellar surfaces to change. This effect also generates variations in the light curve in the form of tidally-induced pulsations, which are theorized to have a significant effect on the circularization of eccentric orbits (Zahn, 1975). Using the binary modeling software PHOEBE (Prša & Zwitter 2005) on the Kepler photometric data and Keck radial velocity data for the eclipsing, heartbeat star KIC 4142768, we have determined the fundamental parameters including masses and radii. The frequency analysis of the residual data has surprisingly revealed approximately 29 pulsations with 8 being Delta Scuti pulsations, 10 being Gamma Doradus pulsations, and 11 being tidally-induced pulsations. After subtracting an initial binary model from the original, detrended photometric data, we analyzed the pulsation frequencies in the residual data. We then were able to disentangle the identified pulsations from the original data in order to conduct subsequent binary modeling. We plan to continue this study by applying asteroseismology to KIC 4142768. Through our continued investigation, we hope to extract information about the star’s internal structure and expect this will yield additional, interesting results.
The close binary frequency of Wolf-Rayet stars as a function of metallicity in M31 and M33
DOE Office of Scientific and Technical Information (OSTI.GOV)
Neugent, Kathryn F.; Massey, Philip, E-mail: kneugent@lowell.edu, E-mail: phil.massey@lowell.edu
Massive star evolutionary models generally predict the correct ratio of WC-type and WN-type Wolf-Rayet stars at low metallicities, but underestimate the ratio at higher (solar and above) metallicities. One possible explanation for this failure is that single-star models are not sufficient and that Roche-lobe overflow in close binaries is necessary to produce the 'extra' WC stars at higher metallicities. However, this would require the frequency of close massive binaries to be metallicity dependent. Here we test this hypothesis by searching for close Wolf-Rayet binaries in the high-metallicity environments of M31 and the center of M33 as well as in the lower-metallicity environments of the middle and outer regions of M33. After identifying ∼100 Wolf-Rayet binaries based on radial velocity variations, we conclude that the close binary frequency of Wolf-Rayet stars is not metallicity dependent and thus other factors must be responsible for the overabundance of WC stars at high metallicities. However, our initial identifications and observations of these close binaries have already been put to good use, as we are currently observing additional epochs for eventual orbit and mass determinations.
Estimating biodegradation half-lives for use in chemical screening.
Aronson, Dallas; Boethling, Robert; Howard, Philip; Stiteler, William
2006-06-01
Biodegradation half-lives are needed for many applications in chemical screening, but these data are not available for most chemicals. To address this, in phase one of this work we correlated the much more abundant ready and inherent biodegradation test data with measured half-lives for water and soil. In phase two, we explored the utility of the BIOWIN models (in EPI Suite) and molecular fragments for predicting half-lives. BIOWIN model output was correlated directly with measured half-lives, and new models were developed by re-regressing the BIOWIN fragments against the half-lives. All of these approaches gave the best results when used for binary (fast/slow) classification of half-lives, with accuracy generally in the 70-80% range. In the last phase, we used the collected half-life data to examine the default half-lives assigned by EPI Suite and the PBT Profiler for use as input to their level III multimedia models. It is concluded that estimated half-lives should not be used for purposes other than binning or prioritizing chemicals unless accuracy improves significantly.
NASA Astrophysics Data System (ADS)
Palou, Anna; Miró, Aira; Blanco, Marcelo; Larraz, Rafael; Gómez, José Francisco; Martínez, Teresa; González, Josep Maria; Alcalà, Manel
2017-06-01
Even though the feasibility of using near infrared (NIR) spectroscopy combined with partial least squares (PLS) regression for the prediction of physico-chemical properties of biodiesel/diesel blends has been widely demonstrated, including the whole variability of diesel samples from diverse production origins in the calibration sets remains an important challenge when constructing the models. This work presents a useful strategy for the systematic selection of calibration sets of biodiesel/diesel blend samples from diverse origins, based on a binary code, principal component analysis (PCA) and the Kennard-Stone algorithm. Results show that, using this methodology, the models can keep their robustness over time. PLS calculations have been done using specialized chemometric software as well as the software of the NIR instrument installed in the plant, and both produced RMSEP values below the reproducibility of the reference methods. The models have been proved for the on-line simultaneous determination of seven properties: density, cetane index, fatty acid methyl esters (FAME) content, cloud point, boiling point at 95% recovery, flash point and sulphur.
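A minimal sketch of the sample-selection step: project the pooled spectra with PCA and run the Kennard-Stone algorithm to pick a calibration set that spans the score space. The implementation below is generic and the simulated spectra are placeholders; the binary-code bookkeeping of sample origin mentioned above is omitted.

```python
import numpy as np
from sklearn.decomposition import PCA

def kennard_stone(X, k):
    """Indices of k samples chosen by the Kennard-Stone algorithm: start with the two
    most distant samples, then repeatedly add the sample whose minimum distance to the
    already-selected set is largest."""
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    i, j = np.unravel_index(np.argmax(d), d.shape)
    selected = [int(i), int(j)]
    while len(selected) < k:
        remaining = [p for p in range(len(X)) if p not in selected]
        min_d = d[np.ix_(remaining, selected)].min(axis=1)
        selected.append(remaining[int(np.argmax(min_d))])
    return selected

# Simulated NIR spectra from two 'origins' standing in for diverse diesel productions.
rng = np.random.default_rng(5)
spectra = np.vstack([rng.normal(0.0, 1.0, (60, 200)),
                     rng.normal(0.5, 1.2, (40, 200))])

scores = PCA(n_components=5).fit_transform(spectra)
cal_idx = kennard_stone(scores, 30)
print("calibration set indices:", cal_idx[:10], "...")
```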
Exploring stellar evolution with gravitational-wave observations
NASA Astrophysics Data System (ADS)
Dvorkin, Irina; Uzan, Jean-Philippe; Vangioni, Elisabeth; Silk, Joseph
2018-05-01
Recent detections of gravitational waves from merging binary black holes opened new possibilities to study the evolution of massive stars and black hole formation. In particular, stellar evolution models may be constrained on the basis of the differences in the predicted distribution of black hole masses and redshifts. In this work we propose a framework that combines galaxy and stellar evolution models and use it to predict the detection rates of merging binary black holes for various stellar evolution models. We discuss the prospects of constraining the shape of the time delay distribution of merging binaries using just the observed distribution of chirp masses. Finally, we consider a generic model of primordial black hole formation and discuss the possibility of distinguishing it from stellar-origin black holes.
Zhang, Hui; Lu, Naiji; Feng, Changyong; Thurston, Sally W.; Xia, Yinglin; Tu, Xin M.
2011-01-01
Summary The generalized linear mixed-effects model (GLMM) is a popular paradigm to extend models for cross-sectional data to a longitudinal setting. When applied to modeling binary responses, different software packages and even different procedures within a package may give quite different results. In this report, we describe the statistical approaches that underlie these different procedures and discuss their strengths and weaknesses when applied to fit correlated binary responses. We then illustrate these considerations by applying these procedures implemented in some popular software packages to simulated and real study data. Our simulation results indicate a lack of reliability for most of the procedures considered, which carries significant implications for applying such popular software packages in practice. PMID:21671252
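To make the point concrete, the sketch below simulates clustered binary responses and fits them two ways with statsmodels: an ordinary logistic GLM that ignores the clustering and a GEE with an exchangeable working correlation. It illustrates why accounting for the correlation changes the inference (mainly the standard errors here); it is not a reproduction of the paper's GLMM comparisons across packages.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(6)
n_subjects, n_obs = 200, 6
subject = np.repeat(np.arange(n_subjects), n_obs)
x = rng.normal(size=n_subjects * n_obs)
u = rng.normal(scale=1.5, size=n_subjects)               # subject-level random intercept
p = 1 / (1 + np.exp(-(-0.5 + 1.0 * x + u[subject])))
df = pd.DataFrame({"y": rng.binomial(1, p), "x": x, "subject": subject})

# Ordinary logistic regression: ignores the within-subject correlation.
glm = smf.logit("y ~ x", data=df).fit(disp=0)

# GEE with an exchangeable working correlation: population-averaged estimate with robust SEs.
gee = smf.gee("y ~ x", groups="subject", data=df,
              family=sm.families.Binomial(),
              cov_struct=sm.cov_struct.Exchangeable()).fit()

print("GLM: slope %.3f, SE %.3f" % (glm.params["x"], glm.bse["x"]))
print("GEE: slope %.3f, robust SE %.3f" % (gee.params["x"], gee.bse["x"]))
```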
Ghaddar, Suad; Brown, Cynthia J; Pagán, José A; Díaz, Violeta
2010-09-01
To explore the relationship between acculturation and healthy lifestyle habits in the largely Hispanic populations living in underserved communities in the United States of America along the U.S.-Mexico border. A cross-sectional study was conducted from April 2006 to June 2008 using survey data from the Alliance for a Healthy Border, a program designed to reduce health disparities in the U.S.-Mexico border region by funding nutrition and physical activity education programs at 12 federally qualified community health centers in Arizona, California, New Mexico, and Texas. The survey included questions on acculturation, diet, exercise, and demographic factors and was completed by 2,381 Alliance program participants, of whom 95.3% were Hispanic and 45.4% were under the U.S. poverty level for 2007. Chi-square (χ2) and Student's t tests were used for bivariate comparisons between acculturation and dietary and physical activity measures. Linear regression and binary logistic regression were used to control for factors associated with nutrition and exercise. Based on univariate tests and confirmed by regression analysis controlling for sociodemographic and health variables, less acculturated survey respondents reported a significantly higher frequency of fruit and vegetable consumption and healthier dietary habits than those who were more acculturated. Adjusted binary logistic regression confirmed that individuals with low language acculturation were less likely to engage in physical activity than those with moderate to high acculturation (odds ratio 0.75, 95% confidence interval 0.59-0.95). Findings confirmed an association between acculturation and healthy lifestyle habits and supported the hypothesis that acculturation in border community populations tends to decrease the practice of some healthy dietary habits while increasing exposure to and awareness of the importance of other healthy behaviors.
Gao, Hengyi; Zhu, Feng; Wang, Min; Zhang, Hang; Ye, Dawei; Yang, Jiayin; Jiang, Li; Liu, Chang; Qin, Renyi; Yan, Lunan; Xiao, Guangqin
2017-01-01
Background Advanced liver fibrosis can result in serious complications (including death) after partial hepatectomy. Preoperative percutaneous liver biopsy is an invasive and expensive method to assess liver fibrosis. We aim to establish a noninvasive model, on the basis of preoperative biomarkers, to predict liver fibrosis in hepatocellular carcinoma (HCC) patients with hepatitis B virus (HBV) infection. Methods HBV-infected liver cancer patients who had undergone hepatectomy were enrolled retrospectively and prospectively in this study. Univariate analysis was used to compare the variables of patients with mild to moderate liver fibrosis and those with severe liver fibrosis. Significant factors were then entered into a binary logistic regression analysis. Factors determined to be significant were used to establish a noninvasive model. Then the diagnostic accuracy of this novel model was examined based on sensitivity, specificity and area under the receiver-operating characteristic curve (AUC). Results This study included 2,176 HBV-infected HCC patients who had undergone partial hepatectomy (1,682 retrospective subjects and 494 prospective subjects). Regression analysis indicated that total bilirubin and prothrombin time were positively correlated with liver fibrosis. It also demonstrated that blood platelet count and fibrinogen were negatively correlated with liver fibrosis. The AUC values of the model based on these four factors for predicting significant fibrosis, advanced fibrosis and cirrhosis were 0.79-0.83, 0.83-0.85 and 0.85-0.88, respectively. Conclusion The results showed that this novel preoperative model was an excellent noninvasive method for assessing liver fibrosis in HBV-infected HCC patients. PMID:28008144
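A minimal sketch of the kind of workflow described (binary logistic regression on preoperative biomarkers, evaluated by AUC) is given below; the data are simulated, and the variable names and coefficient signs merely mirror the directions reported in the abstract.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 2000

# Simulated preoperative biomarkers (placeholder distributions, not study data).
bilirubin = rng.lognormal(mean=2.5, sigma=0.4, size=n)      # umol/L
prothrombin = rng.normal(12.5, 1.5, size=n)                 # seconds
platelets = rng.normal(180, 60, size=n)                     # 10^9/L
fibrinogen = rng.normal(3.0, 0.8, size=n)                   # g/L

# Fibrosis made positively related to bilirubin/PT and negatively related to
# platelets/fibrinogen, mirroring the directions reported in the abstract.
eta = (0.02 * bilirubin + 0.4 * (prothrombin - 12.5)
       - 0.015 * (platelets - 180) - 0.8 * (fibrinogen - 3.0) - 0.5)
fibrosis = rng.binomial(1, 1 / (1 + np.exp(-eta)))

X = np.column_stack([bilirubin, prothrombin, platelets, fibrinogen])
X_tr, X_te, y_tr, y_te = train_test_split(X, fibrosis, test_size=0.3, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
print(f"AUC on held-out data: {auc:.3f}")
```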
The modelling of heat, mass and solute transport in solidification systems
NASA Technical Reports Server (NTRS)
Voller, V. R.; Brent, A. D.; Prakash, C.
1989-01-01
The aim of this paper is to explore the range of possible one-phase models of binary alloy solidification. Starting from a general two-phase description, based on the two-fluid model, three limiting cases are identified which result in one-phase models of binary systems. Each of these models can be readily implemented in standard single phase flow numerical codes. Differences between predictions from these models are examined. In particular, the effects of the models on the predicted macro-segregation patterns are evaluated.
Modeling Spatial Relationships within a Fuzzy Framework.
ERIC Educational Resources Information Center
Petry, Frederick E.; Cobb, Maria A.
1998-01-01
Presents a model for representing and storing binary topological and directional relationships between 2-dimensional objects that is used to provide a basis for fuzzy querying capabilities. A data structure called an abstract spatial graph (ASG) is defined for the binary relationships that maintains all necessary information regarding topology and…
Using binary statistics in Taurus-Auriga to distinguish between brown dwarf formation processes
NASA Astrophysics Data System (ADS)
Marks, M.; Martín, E. L.; Béjar, V. J. S.; Lodieu, N.; Kroupa, P.; Manjavacas, E.; Thies, I.; Rebolo López, R.; Velasco, S.
2017-08-01
Context. One of the key questions of the star formation problem is whether brown dwarfs (BDs) form in the manner of stars directly from the gravitational collapse of a molecular cloud core (star-like) or whether BDs and some very low-mass stars (VLMSs) constitute a separate population that forms alongside stars comparable to the population of planets, for example through circumstellar disk (peripheral) fragmentation. Aims: For young stars in Taurus-Auriga the binary fraction has been shown to be large with little dependence on primary mass above ≈ 0.2 M⊙, while for BDs the binary fraction is < 10%. Here we investigate a case in which BDs in Taurus formed dominantly, but not exclusively, through peripheral fragmentation, which naturally results in small binary fractions. The decline of the binary frequency in the transition region between star-like formation and peripheral formation is modelled. Methods: We employed a dynamical population synthesis model in which stellar binary formation is universal with a large binary fraction close to unity. Peripheral objects form separately in circumstellar disks with a distinctive initial mass function (IMF), their own orbital parameter distributions for binaries, and small binary fractions, according to observations and expectations from smoothed particle hydrodynamics (SPH) and grid-based computations. A small amount of dynamical processing of the stellar component was accounted for as appropriate for the low-density Taurus-Auriga embedded clusters. Results: The binary fraction declines strongly in the transition region between star-like and peripheral formation, exhibiting characteristic features. The location of these features and the steepness of this trend depend on the mass limits for star-like and peripheral formation. Such a trend might be unique to low density regions, such as Taurus, which host binary populations that are largely unprocessed dynamically in which the binary fraction is large for stars down to M-dwarfs and small for BDs. Conclusions: The existence of a strong decline in the binary fraction - primary mass diagram will become verifiable in future surveys on BD and VLMS binarity in the Taurus-Auriga star-forming region. The binary fraction - primary mass diagram is a diagnostic of the (non-)continuity of star formation along the mass scale, the separateness of the stellar and BD populations, and the dominant formation channel for BDs and BD binaries in regions of low stellar density hosting dynamically unprocessed populations.
A Multidimensional Ideal Point Item Response Theory Model for Binary Data
ERIC Educational Resources Information Center
Maydeu-Olivares, Albert; Hernandez, Adolfo; McDonald, Roderick P.
2006-01-01
We introduce a multidimensional item response theory (IRT) model for binary data based on a proximity response mechanism. Under the model, a respondent at the mode of the item response function (IRF) endorses the item with probability one. The mode of the IRF is the ideal point, or in the multidimensional case, an ideal hyperplane. The model…
ERIC Educational Resources Information Center
Abrams, Laura S.; Terry, Diane; Franke, Todd M.
2011-01-01
In this study the authors examined the influence of length of participation in a community-based reentry program on the odds of reconviction in the juvenile and adult criminal justice systems. A structured telephone survey of reentry program alumni was conducted with 75 transition-age (18-25 year-old) young men. Binary logistic regression analysis…
ERIC Educational Resources Information Center
Meador, Ryan E.
2012-01-01
This study examined students who successfully applied for reinstatement after being academically dismissed for the first time in order to discover indicators of future success. This study examined 666 students' appeals filed at the DeVry University Kansas City campus between 2004 and 2009. Binary logistic regression was used to discover if a…
ERIC Educational Resources Information Center
Whipp, Joan L.; Geronime, Lara
2017-01-01
Correlation analysis was used to analyze what experiences before and during teacher preparation for 72 graduates of an urban teacher education program were associated with urban commitment, first job location, and retention in urban schools for 3 or more years. Binary logistic regression was then used to analyze whether urban K-12 schooling,…
The cost of acquiring public hunting access on family forests lands
Michael A. Kilgore; Stephanie A. Snyder; Joesph M. Schertz; Steven J. Taff
2008-01-01
To address the issue of declining access to private forest land in the United States for hunting, over 1,000 Minnesota family forest owners were surveyed to estimate the cost of acquiring non-exclusive public hunting access rights. The results indicate landowner interest in selling access rights is extremely modest. Using binary logistic regression, the mean annual...
ERIC Educational Resources Information Center
Duncan, Amie W.; Bishop, Somer L.
2015-01-01
Daily living skills standard scores on the Vineland Adaptive Behavior Scales-2nd edition were examined in 417 adolescents from the Simons Simplex Collection. All participants had at least average intelligence and a diagnosis of autism spectrum disorder. Descriptive statistics and binary logistic regressions were used to examine the prevalence and…
ERIC Educational Resources Information Center
Khowaja, Meena K.; Hazzard, Ann P.; Robins, Diana L.
2015-01-01
Parents (n = 11,845) completed the Modified Checklist for Autism in Toddlers (or its latest revision) at pediatric visits. Using sociodemographic predictors of maternal education and race, binary logistic regressions were utilized to examine differences in autism screening, diagnostic evaluation participation rates and outcomes, and reasons for…
ERIC Educational Resources Information Center
Toutkoushian, Robert K.; Hossler, Don; DesJardins, Stephen L.; McCall, Brian; Gonzalez Canche, Manuel S.
2015-01-01
Our study adds to prior work on Indiana's Twenty-first Century Scholars (TFCS) program by focusing on whether participating in--rather than completing--the program affects the likelihood of students going to college and where they initially enrolled. We first employ binary and multinomial logistic regression to obtain estimates of the impact of the…
Candel, Math J J M; Van Breukelen, Gerard J P
2010-06-30
Adjustments of sample size formulas are given for varying cluster sizes in cluster randomized trials with a binary outcome when testing the treatment effect with mixed effects logistic regression using second-order penalized quasi-likelihood estimation (PQL). Starting from first-order marginal quasi-likelihood (MQL) estimation of the treatment effect, the asymptotic relative efficiency of unequal versus equal cluster sizes is derived. A Monte Carlo simulation study shows this asymptotic relative efficiency to be rather accurate for realistic sample sizes, when employing second-order PQL. An approximate, simpler formula is presented to estimate the efficiency loss due to varying cluster sizes when planning a trial. In many cases sampling 14 per cent more clusters is sufficient to repair the efficiency loss due to varying cluster sizes. Since current closed-form formulas for sample size calculation are based on first-order MQL, planning a trial also requires a conversion factor to obtain the variance of the second-order PQL estimator. In a second Monte Carlo study, this conversion factor turned out to be 1.25 at most. (c) 2010 John Wiley & Sons, Ltd.
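A rough sense of the efficiency loss can be obtained from a commonly used design-effect approximation that inflates the required number of clusters according to the coefficient of variation (CV) of cluster sizes; this is an assumption-laden stand-in for planning purposes, not the paper's second-order PQL derivation.

```python
import numpy as np

def design_effect(mean_size: float, icc: float, cv: float = 0.0) -> float:
    """Approximate design effect for a cluster randomized trial, allowing for
    variation in cluster size via its coefficient of variation (a standard
    textbook approximation, not the paper's PQL-based formula)."""
    return 1.0 + ((1.0 + cv ** 2) * mean_size - 1.0) * icc

mean_size, icc = 20, 0.05          # assumed mean cluster size and ICC
for cv in (0.0, 0.3, 0.5, 0.7):
    deff_equal = design_effect(mean_size, icc, cv=0.0)
    deff_unequal = design_effect(mean_size, icc, cv=cv)
    # With the mean size fixed, the required number of clusters scales with
    # the design effect, so the ratio gives the extra clusters needed.
    extra = deff_unequal / deff_equal - 1.0
    print(f"CV={cv:.1f}: ~{100 * extra:.0f}% more clusters needed "
          f"than with equal cluster sizes")
```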
Li, Y.; Graubard, B. I.; Huang, P.; Gastwirth, J. L.
2015-01-01
Determining the extent of a disparity, if any, between groups of people, for example, race or gender, is of interest in many fields, including public health for medical treatment and prevention of disease. An observed difference in the mean outcome between an advantaged group (AG) and disadvantaged group (DG) can be due to differences in the distribution of relevant covariates. The Peters–Belson (PB) method fits a regression model with covariates to the AG to predict, for each DG member, their outcome measure as if they had been from the AG. The difference between the mean predicted and the mean observed outcomes of DG members is the (unexplained) disparity of interest. We focus on applying the PB method to estimate the disparity based on binary/multinomial/proportional odds logistic regression models using data collected from complex surveys with more than one DG. Estimators of the unexplained disparity, an analytic variance–covariance estimator that is based on the Taylor linearization variance–covariance estimation method, as well as a Wald test for testing a joint null hypothesis of zero for unexplained disparities between two or more minority groups and a majority group, are provided. Simulation studies with data selected from simple random sampling and cluster sampling, as well as the analyses of disparity in body mass index in the National Health and Nutrition Examination Survey 1999–2004, are conducted. Empirical results indicate that the Taylor linearization variance–covariance estimation is accurate and that the proposed Wald test maintains the nominal level. PMID:25382235
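A minimal sketch of the Peters-Belson idea for a binary outcome follows, using simulated data under simple random sampling; the survey weighting, Taylor linearization variances, and Wald test developed in the paper are not reproduced.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)

# Simulated covariates and binary outcomes for an advantaged group (AG)
# and a disadvantaged group (DG); all coefficients are arbitrary.
def simulate(n, intercept, rng):
    x = rng.normal(size=(n, 2))
    eta = intercept + 0.8 * x[:, 0] - 0.5 * x[:, 1]
    y = rng.binomial(1, 1 / (1 + np.exp(-eta)))
    return x, y

x_ag, y_ag = simulate(5000, 0.2, rng)
x_dg, y_dg = simulate(2000, -0.3, rng)   # worse outcomes not explained by x

# Peters-Belson: fit the outcome model in the AG only ...
ag_model = sm.Logit(y_ag, sm.add_constant(x_ag)).fit(disp=False)

# ... then predict what DG outcomes would be if DG members were from the AG.
p_dg_as_ag = ag_model.predict(sm.add_constant(x_dg))

# Unexplained disparity: mean predicted minus mean observed outcome in the DG.
unexplained_disparity = p_dg_as_ag.mean() - y_dg.mean()
print(f"observed DG rate      : {y_dg.mean():.3f}")
print(f"predicted (as-if-AG)  : {p_dg_as_ag.mean():.3f}")
print(f"unexplained disparity : {unexplained_disparity:.3f}")
```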
Lee, Chia Ee; Vincent-Chong, Vui King; Ramanathan, Anand; Kallarakkal, Thomas George; Karen-Ng, Lee Peng; Ghani, Wan Maria Nabillah; Rahman, Zainal Ariff Abdul; Ismail, Siti Mazlipah; Abraham, Mannil Thomas; Tay, Keng Kiong; Mustafa, Wan Mahadzir Wan; Cheong, Sok Ching; Zain, Rosnah Binti
2015-01-01
BACKGROUND: Collagen Triple Helix Repeat Containing 1 (CTHRC1) is a protein often found to be over-expressed in various types of human cancers. However, the correlation of CTHRC1 expression level with clinico-pathological characteristics and prognosis in oral cancer remains unclear. Therefore, this study aimed to determine mRNA and protein expression of CTHRC1 in oral squamous cell carcinoma (OSCC) and to evaluate the clinical and prognostic impact of CTHRC1 in OSCC. METHODS: mRNA and protein expression of CTHRC1 in OSCCs were determined by quantitative PCR and immunohistochemistry, respectively. The association between CTHRC1 and clinico-pathological parameters was evaluated by univariate and multivariate binary logistic regression analyses. The correlation between CTHRC1 protein expression and survival was analysed using Kaplan-Meier and Cox regression models. RESULTS: The study demonstrated that CTHRC1 was significantly overexpressed at the mRNA level in OSCC. Univariate analyses indicated that high expression of CTHRC1 was significantly associated with advanced pTNM stage, tumour size ≥ 4 cm and positive lymph node metastasis (LNM). However, only positive LNM remained significant after adjusting for other confounding factors in multivariate logistic regression analyses. Kaplan-Meier survival analyses and the Cox model demonstrated that high expression of CTHRC1 protein was associated with poor prognosis and is an independent prognostic factor in OSCC. CONCLUSION: This study indicated that over-expression of CTHRC1 is potentially an independent predictor of positive LNM and poor prognosis in OSCC. PMID:26664254
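The survival analyses mentioned (Kaplan-Meier curves and a multivariable Cox model) can be sketched with the lifelines package on simulated data, as below; the variable names and hazard values are invented for illustration and are not the study's data.

```python
import numpy as np
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter

rng = np.random.default_rng(3)
n = 300

# Simulated survival data: patients with "high" marker expression are given
# a worse baseline hazard (all values are arbitrary, not the study's data).
high_expr = rng.binomial(1, 0.5, n)
lnm = rng.binomial(1, 0.3 + 0.2 * high_expr)          # lymph node metastasis
hazard = 0.05 * np.exp(0.8 * high_expr + 0.5 * lnm)
event_time = rng.exponential(1 / hazard)
censor_time = rng.uniform(5, 15, n)
event = (event_time <= censor_time).astype(int)
obs_time = np.minimum(event_time, censor_time)

df = pd.DataFrame({"time": obs_time, "event": event,
                   "high_expr": high_expr, "lnm": lnm})

# Kaplan-Meier estimate by expression group.
kmf = KaplanMeierFitter()
for grp, sub in df.groupby("high_expr"):
    kmf.fit(sub["time"], sub["event"], label=f"high_expr={grp}")
    print(f"high_expr={grp}: median survival = {kmf.median_survival_time_}")

# Multivariable Cox model: is high expression prognostic after adjusting for LNM?
cph = CoxPHFitter()
cph.fit(df, duration_col="time", event_col="event")
print(cph.summary[["coef", "exp(coef)", "p"]])
```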
SNPs selection using support vector regression and genetic algorithms in GWAS
2014-01-01
Introduction This paper proposes a new methodology to simultaneously select the most relevant SNP markers for the characterization of any measurable phenotype described by a continuous variable, using Support Vector Regression with the Pearson Universal kernel as the fitness function of a binary genetic algorithm. The proposed methodology is multi-attribute, considering several markers simultaneously to explain the phenotype, and draws jointly on statistical tools, machine learning and computational intelligence. Results The suggested method showed potential in simulated database 1, with additive effects only, and in the real database. In this simulated database, with a total of 1,000 markers, 7 having a major effect on the phenotype and the other 993 SNPs representing noise, the method identified 21 markers: 5 of the 7 relevant SNPs and 16 false positives. In the real database, initially with 50,752 SNPs, the set was reduced to 3,073 markers, increasing the accuracy of the model. In simulated database 2, with additive effects and interactions (epistasis), the proposed method matched the methodology most commonly used in GWAS. Conclusions The suggested method demonstrates its effectiveness in explaining the real phenotype (PTA for milk): with the application of the wrapper based on a genetic algorithm and Support Vector Regression with the Pearson Universal kernel, many redundant markers were eliminated, increasing the prediction accuracy of the model on the real database without quality control filters. The PUK was able to replicate the performance of linear and RBF kernels. PMID:25573332
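A compact sketch of the wrapper idea follows: a binary genetic algorithm whose fitness is the cross-validated score of a Support Vector Regression trained on the currently selected markers. Scikit-learn's RBF kernel is used as a stand-in for the Pearson Universal kernel, and the simulated genotypes, causal SNP positions, population size, and rates are illustrative assumptions.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(4)

# Simulated genotypes (0/1/2 minor-allele counts) and a phenotype driven by a
# handful of causal SNPs; sizes are kept small for illustration.
n_samples, n_snps, causal = 200, 100, [3, 17, 42, 58, 90]
X = rng.integers(0, 3, size=(n_samples, n_snps)).astype(float)
y = X[:, causal] @ rng.normal(1.0, 0.2, len(causal)) + rng.normal(0, 1.0, n_samples)

def fitness(mask: np.ndarray) -> float:
    """Cross-validated R^2 of an SVR on the selected SNPs (RBF kernel used as a
    stand-in for the Pearson Universal kernel)."""
    if mask.sum() == 0:
        return -np.inf
    return cross_val_score(SVR(kernel="rbf", C=10.0), X[:, mask.astype(bool)],
                           y, cv=3, scoring="r2").mean()

# Binary genetic algorithm: tournament selection, uniform crossover, bit-flip mutation.
pop_size, n_gen, p_mut = 30, 20, 0.02
pop = rng.random((pop_size, n_snps)) < 0.1          # start with sparse masks
scores = np.array([fitness(ind) for ind in pop])

for gen in range(n_gen):
    new_pop = [pop[scores.argmax()].copy()]          # elitism
    while len(new_pop) < pop_size:
        a, b = rng.integers(0, pop_size, 2), rng.integers(0, pop_size, 2)
        p1 = pop[a[np.argmax(scores[a])]]            # tournament winners
        p2 = pop[b[np.argmax(scores[b])]]
        cross = rng.random(n_snps) < 0.5             # uniform crossover
        child = np.where(cross, p1, p2)
        child ^= rng.random(n_snps) < p_mut          # bit-flip mutation
        new_pop.append(child)
    pop = np.array(new_pop)
    scores = np.array([fitness(ind) for ind in pop])

best = pop[scores.argmax()]
print("best CV R^2:", round(float(scores.max()), 3))
print("selected SNPs:", np.flatnonzero(best))
```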
DOE Office of Scientific and Technical Information (OSTI.GOV)
Timchalk, Chuck; Poet, Torka S.
2008-05-01
Physiologically based pharmacokinetic/pharmacodynamic (PBPK/PD) models have been developed and validated for the organophosphorus (OP) insecticides chlorpyrifos (CPF) and diazinon (DZN). Based on similar pharmacokinetic and mode of action properties it is anticipated that these OPs could interact at a number of important metabolic steps including: CYP450 mediated activation/detoxification, and blood/tissue cholinesterase (ChE) binding/inhibition. We developed a binary PBPK/PD model for CPF, DZN and their metabolites based on previously published models for the individual insecticides. The metabolic interactions (CYP450) between CPF and DZN were evaluated in vitro and suggest that CPF is more substantially metabolized to its oxon metabolite than is DZN. These data are consistent with their observed in vivo relative potency (CPF>DZN). Each insecticide inhibited the other's in vitro metabolism in a concentration-dependent manner. The PBPK model code used to describe the metabolism of CPF and DZN was modified to reflect the type of inhibition kinetics (i.e. competitive vs. non-competitive). The binary model was then evaluated against previously published rodent dosimetry and ChE inhibition data for the mixture. The PBPK/PD model simulations of acute oral exposure to single- (15 mg/kg) vs. binary-mixture (15+15 mg/kg) doses of CPF and DZN at this lower dose resulted in no differences in the predicted pharmacokinetics of either the parent OPs or their respective metabolites, whereas a binary oral dose of CPF+DZN at 60+60 mg/kg did result in observable changes in the DZN pharmacokinetics. Cmax was more reasonably fit by modifying the absorption parameters. It is anticipated that at low, environmentally relevant binary doses, most likely to be encountered in occupational or environmental exposures, the pharmacokinetics will be linear and ChE inhibition dose-additive.
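The competitive CYP450 inhibition described here is commonly represented by modifying the Michaelis-Menten rate; a generic sketch with placeholder kinetic constants follows (it is not the published PBPK/PD model code).

```python
def competitive_inhibition_rate(s: float, i: float,
                                vmax: float, km: float, ki: float) -> float:
    """Michaelis-Menten rate for substrate concentration s with a competitive
    inhibitor at concentration i: v = Vmax*s / (Km*(1 + i/Ki) + s).
    All constants here are placeholders, not fitted model values."""
    return vmax * s / (km * (1.0 + i / ki) + s)

# Example: metabolism of one parent OP slows as the co-exposed OP increases.
vmax, km, ki = 10.0, 5.0, 2.0          # arbitrary units
for inhibitor in (0.0, 1.0, 5.0, 20.0):
    v = competitive_inhibition_rate(s=4.0, i=inhibitor, vmax=vmax, km=km, ki=ki)
    print(f"inhibitor={inhibitor:5.1f}  metabolic rate={v:5.2f}")
```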
Probing Ultracool Atmospheres and Substellar Interiors with Dynamical Masses
NASA Astrophysics Data System (ADS)
Dupuy, Trent
2010-09-01
After years of patient orbital monitoring, there is now a large sample of very low-mass stars and brown dwarfs with precise (5%) dynamical masses. These binaries represent the gold standard for testing substellar theoretical models. Work to date has identified problems with the model-predicted broad-band colors, effective temperatures, and possibly even luminosity evolution with age. However, our ability to test models is currently limited by how well the individual components of these highly prized binaries are characterized. To solve this problem, we propose to use NICMOS and STIS to characterize this first large sample of ultracool binaries with well-determined dynamical masses. We will use NICMOS multi-band photometry to measure the SEDs of the binary components and thereby precisely estimate their spectral types and effective temperatures. We will use STIS to obtain resolved spectroscopy of the Li I doublet at 6708 A for a subset of three binaries whose masses lie very near the theoretical mass limit for lithium burning. The STIS data will provide the first ever resolved lithium measurements for brown dwarfs of known mass, enabling a direct probe of substellar interiors. Our proposed HST observations to characterize the components of these binaries are much less daunting in comparison to the years of orbital monitoring needed to yield dynamical masses, but these HST data are equally vital for robust tests of theory.
Constraining the Radiation and Plasma Environment of the Kepler Circumbinary Habitable-zone Planets
NASA Astrophysics Data System (ADS)
Zuluaga, Jorge I.; Mason, Paul A.; Cuartas-Restrepo, Pablo A.
2016-02-01
The discovery of many planets using the Kepler telescope includes 10 planets orbiting eight binary stars. Three binaries, Kepler-16, Kepler-47, and Kepler-453, have at least one planet in the circumbinary habitable zone (BHZ). We constrain the level of high-energy radiation and the plasma environment in the BHZ of these systems. With this aim, BHZ limits in these Kepler binaries are calculated as a function of time, and the habitability lifetimes are estimated for hypothetical terrestrial planets and/or moons within the BHZ. With the time-dependent BHZ limits established, a self-consistent model is developed describing the evolution of stellar activity and radiation properties as proxies for stellar aggression toward planetary atmospheres. Modeling binary stellar rotation evolution, including the effect of tidal interaction between stars in binaries, is key to establishing the environment around these systems. We find that Kepler-16 and its binary analogs provide a plasma environment favorable for the survival of atmospheres of putative Mars-sized planets and exomoons. Tides have modified the rotation of the stars in Kepler-47, making its radiation environment less harsh in comparison to the solar system. This is a good example of the mechanism first proposed by Mason et al. Kepler-453 has an environment similar to that of the solar system with slightly better than Earth radiation conditions at the inner edge of the BHZ. These results can be reproduced and even reparameterized as stellar evolution and binary tidal models progress, using our online tool http://bhmcalc.net.
On the development and applications of automated searches for eclipsing binary stars
NASA Astrophysics Data System (ADS)
Devor, Jonathan
Eclipsing binary star systems provide the most accurate method of measuring both the masses and radii of stars. Moreover, they enable testing tidal synchronization and circularization theories, as well as constraining models of stellar structure and dynamics. With the recent availability of large-scale multi-epoch photometric datasets, we are able to study eclipsing binary stars en masse. In this thesis, we analyzed 185,445 light curves from ten TrES fields, and 218,699 light curves from the OGLE II bulge fields. In order to manage such large quantities of data, we developed a pipeline with which we systematically identified eclipsing binaries, solved for their geometric orientations, and then found their components' absolute properties. Following this analysis, we assembled catalogs of eclipsing binaries with their models, computed statistical distributions of their properties, and located rare cases for further follow-up. Of particular importance are low-mass eclipsing binaries, which are rare, yet critical for resolving the ongoing mass-radius discrepancy between theoretical models and observations. To this end, we have discovered over a dozen new low-mass eclipsing binary candidates, and spectroscopically confirmed the masses of five of them. One of these confirmed candidates, T-Lyr1-17236, is especially interesting because of its uniquely long orbital period. We examined T-Lyr1-17236 in detail and found that it is consistent with the magnetic disruption hypothesis for explaining the observed mass-radius discrepancy. Both the source code of our pipeline and the complete list of our candidates are freely available.
SALT HRS discovery of a long-period double-degenerate binary in the planetary nebula NGC 1360
NASA Astrophysics Data System (ADS)
Miszalski, B.; Manick, R.; Mikołajewska, J.; Iłkiewicz, K.; Kamath, D.; Van Winckel, H.
2018-01-01
Whether planetary nebulae (PNe) are predominantly the product of binary stellar evolution as some population synthesis models (PSM) suggest remains an open question. Around 50 short-period binary central stars (P ∼ 1 d) are known, but with only four having measured orbital periods over 10 d, our knowledge is severely incomplete. Here we report on the first discovery from a systematic Southern African Large Telescope (SALT) High Resolution Spectrograph (HRS) survey for long-period binary central stars. We find a 142 d orbital period from radial velocities of the central star of NGC 1360, HIP 16566. NGC 1360 appears to be the product of common-envelope (CE) evolution, with nebula features similar to post-CE PNe, albeit with an orbital period considerably longer than expected to be typical of post-CE PSM. The most striking feature is a newly identified ring of candidate low-ionization structures. Previous spatiokinematic modelling of the nebula gives a nebula inclination of 30° ± 10°, and assuming the binary nucleus is coplanar with the nebula, multiwavelength observations best fit a more massive, evolved white dwarf (WD) companion. A WD companion in a 142 d orbit is not the focus of many PSM, making NGC 1360 a valuable system with which to improve future PSM work. HIP 16566 is amongst many central stars in which large radial velocity variability was found by low-resolution surveys. The discovery of its binary nature may indicate that long-period binaries are more common than PSM predict.
Johnelle Sparks, P
2009-11-01
To examine disparities in low birthweight using a diverse set of racial/ethnic categories and a nationally representative sample. This research explored the degree to which sociodemographic characteristics, health care access, maternal health status, and health behaviors influence birthweight disparities among seven racial/ethnic groups. Binary logistic regression models were estimated using a nationally representative sample of singleton, normal for gestational age births from 2001 using the ECLS-B, which has an approximate sample size of 7,800 infants. The multiple variable models examine disparities in low birthweight (LBW) for seven racial/ethnic groups, including non-Hispanic white, non-Hispanic black, U.S.-born Mexican-origin Hispanic, foreign-born Mexican-origin Hispanic, other Hispanic, Native American, and Asian mothers. Race-stratified logistic regression models were also examined. In the full sample models, only non-Hispanic black mothers have an LBW disadvantage compared to non-Hispanic white mothers. Maternal WIC usage was protective against LBW in the full models. No prenatal care and adequate plus prenatal care increase the odds of LBW. In the race-stratified models, prenatal care adequacy and high maternal health risks are the only variables that influence LBW for all racial/ethnic groups. The race-stratified models highlight the different mechanisms that are important across racial/ethnic groups in determining LBW. Differences in the distribution of maternal sociodemographic, health care access, health status, and behavior characteristics by race/ethnicity demonstrate that a single empirical framework may distort associations with LBW for certain racial and ethnic groups. More attention must be given to the specific mechanisms linking maternal risk factors to poor birth outcomes for specific racial/ethnic groups.
Exploring students' patterns of reasoning
NASA Astrophysics Data System (ADS)
Matloob Haghanikar, Mojgan
As part of a collaborative study of the science preparation of elementary school teachers, we investigated the quality of students' reasoning and explored the relationship between sophistication of reasoning and the degree to which the courses were considered inquiry oriented. To probe students' reasoning, we developed open-ended written content questions with the distinguishing feature of applying recently learned concepts in a new context. We devised a protocol for developing written content questions that provided a common structure for probing and classifying students' sophistication level of reasoning. In designing our protocol, we considered several distinct criteria, and classified students' responses based on their performance for each criterion. First, we classified concepts into three types: Descriptive, Hypothetical, and Theoretical, and categorized the abstraction levels of the responses in terms of the types of concepts and the inter-relationship between the concepts. Second, we devised a rubric based on Bloom's revised taxonomy with seven traits (both knowledge types and cognitive processes) and a defined set of criteria to evaluate each trait. Along with analyzing students' reasoning, we visited universities and observed the courses in which the students were enrolled. We used the Reformed Teaching Observation Protocol (RTOP) to rank the courses with respect to characteristics that are valued in inquiry courses. We conducted logistic regression for a sample of 18 courses with about 900 students to estimate the relationship between traits of reasoning and RTOP score. In addition, we analyzed the conceptual structure of students' responses, based on conceptual classification schemes, and clustered students' responses into six categories. We derived a regression model to estimate the relationship between the sophistication of the categories of conceptual structure and RTOP scores. However, the outcome variable with six categories required a more complicated regression model, known as multinomial logistic regression, generalized from binary logistic regression. With the large amount of collected data, we found that higher cognitive processes were more likely in classes with higher measures of inquiry. However, the use of more abstract concepts with higher order conceptual structures was less prevalent in higher RTOP courses.
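The step from binary to multinomial logistic regression mentioned above can be sketched with statsmodels on simulated data, with a continuous score standing in for RTOP and a six-category outcome; all values below are illustrative assumptions.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(5)
n, n_cat = 900, 6

# Simulated RTOP-like score and a six-category conceptual-structure outcome
# whose higher categories become more likely as the score increases.
rtop = rng.uniform(20, 80, n)
logits = np.outer(rtop - 50, np.linspace(-0.05, 0.05, n_cat))
probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
category = np.array([rng.choice(n_cat, p=p) for p in probs])

# Multinomial logistic regression, generalizing binary logistic regression.
X = sm.add_constant(rtop)
model = sm.MNLogit(category, X).fit(disp=False)
print(model.params)   # one coefficient column per non-reference category
```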
Complete waveform model for compact binaries on eccentric orbits
NASA Astrophysics Data System (ADS)
Huerta, E. A.; Kumar, Prayush; Agarwal, Bhanu; George, Daniel; Schive, Hsi-Yu; Pfeiffer, Harald P.; Haas, Roland; Ren, Wei; Chu, Tony; Boyle, Michael; Hemberger, Daniel A.; Kidder, Lawrence E.; Scheel, Mark A.; Szilagyi, Bela
2017-01-01
We present a time domain waveform model that describes the inspiral, merger and ringdown of compact binary systems whose components are nonspinning, and which evolve on orbits with low to moderate eccentricity. The inspiral evolution is described using third-order post-Newtonian equations both for the equations of motion of the binary, and its far-zone radiation field. This latter component also includes instantaneous, tails and tails-of-tails contributions, and a contribution due to nonlinear memory. This framework reduces to the post-Newtonian approximant TaylorT4 at third post-Newtonian order in the zero-eccentricity limit. To improve phase accuracy, we also incorporate higher-order post-Newtonian corrections for the energy flux of quasicircular binaries and gravitational self-force corrections to the binding energy of compact binaries. This enhanced prescription for the inspiral evolution is combined with a fully analytical prescription for the merger-ringdown evolution constructed using a catalog of numerical relativity simulations. We show that this inspiral-merger-ringdown waveform model reproduces the effective-one-body model of Ref. [Y. Pan et al., Phys. Rev. D 89, 061501 (2014), 10.1103/PhysRevD.89.061501] for quasicircular black hole binaries with mass ratios between 1 and 15 in the zero-eccentricity limit over a wide range of the parameter space under consideration. Using a set of eccentric numerical relativity simulations, not used during calibration, we show that our new eccentric model reproduces the true features of eccentric compact binary coalescence throughout merger. We use this model to show that the gravitational-wave transients GW150914 and GW151226 can be effectively recovered with template banks of quasicircular, spin-aligned waveforms if the eccentricity e0 of these systems when they enter the aLIGO band at a gravitational-wave frequency of 14 Hz satisfies e0 ≤ 0.15 for GW150914 and e0 ≤ 0.1 for GW151226. We also find that varying the spin combinations of the quasicircular, spin-aligned template waveforms does not improve the recovery of nonspinning, eccentric signals when e0 ≥ 0.1. This suggests that these two signal manifolds are predominantly orthogonal.
NASA Astrophysics Data System (ADS)
Taylor, Stephen R.; Simon, Joseph; Sampson, Laura
2017-01-01
The final parsec of supermassive black-hole binary evolution is subject to the complex interplay of stellar loss-cone scattering, circumbinary disk accretion, and gravitational-wave emission, with binary eccentricity affected by all of these. The strain spectrum of gravitational-waves in the pulsar-timing band thus encodes rich information about the binary population's response to these various environmental mechanisms. Current spectral models have heretofore followed basic analytic prescriptions, and attempt to investigate these final-parsec mechanisms in an indirect fashion. Here we describe a new technique to directly probe the environmental properties of supermassive black-hole binaries through "Bayesian model-emulation". We perform black-hole binary population synthesis simulations at a restricted set of environmental parameter combinations, compute the strain spectra from these, then train a Gaussian process to learn the shape of the spectrum at any point in parameter space. We describe this technique, demonstrate its efficacy with a program of simulated datasets, then illustrate its power by directly constraining final-parsec physics in a Bayesian analysis of the NANOGrav 5-year dataset. The technique is fast, flexible, and robust.
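A toy version of the model-emulation step is sketched below: evaluate a (here, fabricated) simulator at a restricted set of parameter values, then train Gaussian processes to predict the spectrum at parameter values never simulated. The stand-in simulator and parameter ranges are assumptions, not actual population synthesis.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

def fake_simulator(env_param: np.ndarray, log_freq: np.ndarray) -> np.ndarray:
    """Stand-in for a population-synthesis run: returns log strain-spectrum
    values whose amplitude and slope depend on one environmental parameter."""
    amp = -15.0 - 0.5 * env_param
    slope = -2.0 / 3.0 + 0.1 * env_param
    return amp[:, None] + slope[:, None] * log_freq[None, :]

log_freq = np.linspace(-9, -7, 20)            # log10 GW frequency grid
train_params = np.linspace(0, 1, 8)           # restricted set of "simulations"
train_spectra = fake_simulator(train_params, log_freq)

# One GP per frequency bin, learning the spectrum shape as a function of the
# environmental parameter (a simplified stand-in for the paper's emulator).
kernel = ConstantKernel(1.0) * RBF(length_scale=0.3)
gps = [GaussianProcessRegressor(kernel=kernel, normalize_y=True)
       .fit(train_params[:, None], train_spectra[:, j])
       for j in range(len(log_freq))]

# Emulate the spectrum at a parameter value that was never simulated.
theta = np.array([[0.37]])
emulated = np.array([gp.predict(theta)[0] for gp in gps])
truth = fake_simulator(np.array([0.37]), log_freq)[0]
print("max abs emulation error:", np.abs(emulated - truth).max())
```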
Population of Nuclei Via 7Li-Induced Binary Reactions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Clark, R M; Phair, L W; Descovich, M
2005-08-09
The authors have investigated the population of nuclei formed in binary reactions involving ⁷Li beams on targets of ¹⁶⁰Gd and ¹⁸⁴W. The ⁷Li + ¹⁸⁴W data were taken in the first experiment using the LIBERACE Ge-array in combination with the STARS Si ΔE-E telescope system at the 88-Inch Cyclotron of the Lawrence Berkeley National Laboratory. By using the Wilczynski binary transfer model, in combination with a standard evaporation model, they are able to reproduce the experimental results. This is a useful method for predicting the population of neutron-rich heavy nuclei formed in binary reactions involving beams of weakly bound nuclei and will be of use in future spectroscopic studies.
Probing the Milky Way electron density using multi-messenger astronomy
NASA Astrophysics Data System (ADS)
Breivik, Katelyn; Larson, Shane
2015-04-01
Multi-messenger observations of ultra-compact binaries in both gravitational waves and electromagnetic radiation supply highly complementary information, providing new ways of characterizing the internal dynamics of these systems, as well as new probes of the galaxy itself. Electron density models, used in pulsar distance measurements via the electron dispersion measure, are currently not well constrained. Simultaneous radio and gravitational wave observations of pulsars in binaries provide a method of measuring the average electron density along the line of sight to the pulsar, thus giving a new method for constraining current electron density models. We present this method and assess its viability with simulations of the compact binary component of the Milky Way using the public domain binary evolution code, BSE. This work is supported by NASA Award NNX13AM10G.
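The underlying relation is the pulsar dispersion measure DM = ∫ n_e dl, so an independently measured distance (for example from gravitational waves) yields the average electron density along the line of sight, n̄_e = DM / d. A one-line worked example with made-up numbers follows.

```python
# Average line-of-sight electron density from a dispersion measure and an
# independently measured distance: n_e_avg = DM / d.
# The DM and distance below are made-up illustrative values.

dm_pc_cm3 = 45.0        # dispersion measure in pc cm^-3
distance_pc = 1800.0    # distance in parsecs (e.g., from a GW measurement)

n_e_avg = dm_pc_cm3 / distance_pc     # cm^-3
print(f"average electron density along the line of sight: {n_e_avg:.3f} cm^-3")
```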
Research in astrophysical processes
NASA Technical Reports Server (NTRS)
Ruderman, Malvin A.
1994-01-01
Work completed under this grant is summarized in the following areas: (1) radio pulsar turn on and evaporation of companions in very low mass x-ray binaries and in binary radio pulsar systems; (2) effects of magnetospheric pair production on the radiation from gamma-ray pulsars; (3) radiation transfer in the atmosphere of an illuminated companion star; (4) evaporation of millisecond pulsar companions; (5) formation of planets around pulsars; (6) gamma-ray bursts; (7) quasi-periodic oscillations in low mass x-ray binaries; (8) origin of high mass x-ray binaries, runaway OB stars, and the lower mass cutoff for core collapse supernovae; (9) dynamics of planetary atmospheres; (10) two point closure modeling of stationary, forced turbulence; (11) models for the general circulation of Saturn; and (12) compressible convection in stellar interiors.
NASA Astrophysics Data System (ADS)
Wu, Xiaoru; Gao, Yingyu; Ban, Chunlan; Huang, Qiang
2016-09-01
In this paper the results of a vapor-liquid equilibrium study at 100 kPa are presented for two binary systems: α-phenylethylamine(1) + toluene(2) and α-phenylethylamine(1) + cyclohexane(2). The binary VLE data of the two systems were correlated with the Wilson, NRTL, and UNIQUAC models. For each binary system the deviations between the results of the correlations and the experimental data have been calculated. For both binary systems the average relative deviations in temperature for the three models were lower than 0.99%. The average absolute deviations in vapor-phase composition (mole fractions) and in temperature T were lower than 0.0271 and 1.93 K, respectively. Thermodynamic consistency was tested for all vapor-liquid equilibrium data by the Herington method. The values calculated by the Wilson and NRTL equations satisfied the thermodynamic consistency test for both systems, while the values calculated by the UNIQUAC equation did not.
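As an illustration of the type of correlation used, the sketch below evaluates the Wilson activity-coefficient model for a generic binary system and combines it with modified Raoult's law; the interaction parameters and saturation pressures are placeholders, not the fitted values or measured data from this study.

```python
import math

def wilson_gammas(x1: float, lam12: float, lam21: float) -> tuple[float, float]:
    """Activity coefficients of a binary mixture from the Wilson model."""
    x2 = 1.0 - x1
    ln_g1 = -math.log(x1 + lam12 * x2) + x2 * (
        lam12 / (x1 + lam12 * x2) - lam21 / (x2 + lam21 * x1))
    ln_g2 = -math.log(x2 + lam21 * x1) - x1 * (
        lam12 / (x1 + lam12 * x2) - lam21 / (x2 + lam21 * x1))
    return math.exp(ln_g1), math.exp(ln_g2)

# Placeholder Wilson parameters and pure-component saturation pressures (kPa).
lam12, lam21 = 0.45, 0.85
p1_sat, p2_sat = 65.0, 110.0

for x1 in (0.1, 0.3, 0.5, 0.7, 0.9):
    g1, g2 = wilson_gammas(x1, lam12, lam21)
    # Modified Raoult's law: bubble pressure and vapor-phase composition.
    p = x1 * g1 * p1_sat + (1 - x1) * g2 * p2_sat
    y1 = x1 * g1 * p1_sat / p
    print(f"x1={x1:.1f}  gamma1={g1:.3f}  gamma2={g2:.3f}  P={p:6.1f} kPa  y1={y1:.3f}")
```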
A stellar audit: the computation of encounter rates for 47 Tucanae and omega Centauri
NASA Astrophysics Data System (ADS)
Davies, Melvyn B.; Benz, Willy
1995-10-01
Using King-Michie models, we compute encounter rates between the various stellar species in the globular clusters omega Cen and 47 Tuc. We also compute event rates for encounters between single stars and a population of primordial binaries. Using these rates, and what we have learnt from hydrodynamical simulations of encounters performed earlier, we compute the production rates of objects such as low-mass X-ray binaries (LMXBs), smothered neutron stars and blue stragglers (massive main-sequence stars). If 10 per cent of the stars are contained in primordial binaries, the production rate of interesting objects from encounters involving these binaries is as large as that from encounters between single stars. For example, encounters involving binaries produce a significant number of blue stragglers in both globular cluster models. The number of smothered neutron stars may exceed the number of LMXBs by a factor of 5-20, which may help to explain why millisecond pulsars are observed to outnumber LMXBs in globular clusters.
Rényi entropy measure of noise-aided information transmission in a binary channel.
Chapeau-Blondeau, François; Rousseau, David; Delahaies, Agnès
2010-05-01
This paper analyzes a binary channel by means of information measures based on the Rényi entropy. The analysis extends, and contains as a special case, the classic reference model of binary information transmission based on the Shannon entropy measure. The extended model is used to investigate further possibilities and properties of stochastic resonance or noise-aided information transmission. The results demonstrate that stochastic resonance occurs in the information channel and is registered by the Rényi entropy measures at any finite order, including the Shannon order. Furthermore, in definite conditions, when seeking the Rényi information measures that best exploit stochastic resonance, then nontrivial orders differing from the Shannon case usually emerge. In this way, through binary information transmission, stochastic resonance identifies optimal Rényi measures of information differing from the classic Shannon measure. A confrontation of the quantitative information measures with visual perception is also proposed in an experiment of noise-aided binary image transmission.
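A simplified numerical illustration of noise-aided transmission in a binary channel follows (subthreshold binary input plus Gaussian noise, hard threshold, then input-output mutual information). It uses the Shannon measure rather than the paper's Rényi generalization, and the signal levels, threshold, and noise values are arbitrary.

```python
import numpy as np
from scipy.stats import norm

def mutual_information_binary(p1_given_0: float, p1_given_1: float,
                              p_x1: float = 0.5) -> float:
    """Shannon mutual information (bits) of a binary-input binary-output channel."""
    def h(p):      # binary entropy, clipped to stay finite at 0 and 1
        p = np.clip(p, 1e-12, 1 - 1e-12)
        return -p * np.log2(p) - (1 - p) * np.log2(1 - p)
    p_y1 = (1 - p_x1) * p1_given_0 + p_x1 * p1_given_1
    return h(p_y1) - (1 - p_x1) * h(p1_given_0) - p_x1 * h(p1_given_1)

# Subthreshold binary signal levels and a hard threshold detector:
# without noise nothing crosses the threshold, so noise can help.
low, high, theta = 0.0, 0.8, 1.0
for sigma in (0.05, 0.2, 0.4, 0.8, 1.5, 3.0):
    p1_given_0 = norm.sf(theta - low, scale=sigma)    # P(output=1 | input=low)
    p1_given_1 = norm.sf(theta - high, scale=sigma)   # P(output=1 | input=high)
    mi = mutual_information_binary(p1_given_0, p1_given_1)
    print(f"noise sigma={sigma:4.2f}  I(X;Y)={mi:.4f} bits")
```

Running the loop shows the information peaking at an intermediate noise level, the stochastic resonance effect the abstract describes.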
A Gamma-Ray Burst Model Via Compressional Heating of Binary Neutron Stars
NASA Astrophysics Data System (ADS)
Salmonson, J. D.; Wilson, J. R.; Mathews, G. J.
1998-12-01
We present a model for gamma-ray bursts based on the compression of neutron stars in close binary systems. General relativistic (GR) simulations of close neutron star binaries have found compression of the neutron stars estimated to produce 10^53 ergs of thermal neutrinos on a timescale of seconds. The hot neutron stars will emit neutrino pairs which will partially recombine to form 10^51 to 10^52 ergs of electron-positron (e^-e^+) pair plasma. GR hydrodynamic computational modeling of the e^-e^+ plasma flow and recombination yield a gamma-ray burst in good agreement with general characteristics (duration ~10 seconds, spectrum peak energy ~100 keV, total energy ~10^51 ergs) of many observed gamma-ray bursts.
NASA Astrophysics Data System (ADS)
Liu, Michael C.; Dupuy, Trent J.; Leggett, S. K.
2010-10-01
Highly unequal-mass ratio binaries are rare among field brown dwarfs, with the mass ratio distribution of the known census described by q^(4.9±0.7). However, such systems enable a unique test of the joint accuracy of evolutionary and atmospheric models, under the constraint of coevality for the individual components (the "isochrone test"). We carry out this test using two of the most extreme field substellar binaries currently known, the T1 + T6 epsilon Ind Bab binary and a newly discovered 0.14 arcsec T2.0 + T7.5 binary, 2MASS J12095613-1004008AB, identified with Keck laser guide star adaptive optics. The latter is the most extreme tight binary resolved to date (q ≈ 0.5). Based on the locations of the binary components on the Hertzsprung-Russell (H-R) diagram, current models successfully indicate that these two systems are coeval, with internal age differences of log(age) = -0.8 ± 1.3 (-1.0 +1.2/-1.3) dex and 0.5 +0.4/-0.3 (0.3 +0.3/-0.4) dex for 2MASS J1209-1004AB and epsilon Ind Bab, respectively, as inferred from the Lyon (Tucson) models. However, the total mass of epsilon Ind Bab derived from the H-R diagram (≈ 80 M_Jup using the Lyon models) is strongly discrepant with the reported dynamical mass. This problem, which is independent of the assumed age of the epsilon Ind Bab system, can be explained by a ≈ 50-100 K systematic error in the model atmosphere fitting, indicating slightly warmer temperatures for both components; bringing the mass determinations from the H-R diagram and the visual orbit into consistency leads to an inferred age of ≈ 6 Gyr for epsilon Ind Bab, older than previously assumed. Overall, the two T dwarf binaries studied here, along with recent results from T dwarfs in age and mass benchmark systems, yield evidence for small (≈100 K) errors in the evolutionary models and/or model atmospheres, but not significantly larger. Future parallax, resolved spectroscopy, and dynamical mass measurements for 2MASS J1209-1004AB will enable a more stringent application of the isochrone test. Finally, the binary nature of this object reduces its utility as the primary T3 near-IR spectral typing standard; we suggest SDSS J1206+2813 as a replacement. Most of the data presented herein were obtained at the W. M. Keck Observatory, which is operated as a scientific partnership among the California Institute of Technology, the University of California, and the National Aeronautics and Space Administration. The Observatory was made possible by the generous financial support of the W. M. Keck Foundation.
Lotfy, Hayam Mahmoud; Hegazy, Maha A; Rezk, Mamdouh R; Omran, Yasmin Rostom
2014-05-21
Two smart and novel spectrophotometric methods, namely absorbance subtraction (AS) and amplitude modulation (AM), were developed and validated for the determination of a binary mixture of timolol maleate (TIM) and dorzolamide hydrochloride (DOR) in the presence of benzalkonium chloride without prior separation, using a unified regression equation. Additionally, simple, specific, accurate and precise spectrophotometric methods manipulating ratio spectra were developed and validated for simultaneous determination of the binary mixture, namely simultaneous ratio subtraction (SRS), ratio difference (RD), ratio subtraction (RS) coupled with extended ratio subtraction (EXRS), the constant multiplication method (CM) and mean centering of ratio spectra (MCR). The proposed spectrophotometric procedures do not require any separation steps. Accuracy, precision and linearity ranges of the proposed methods were determined, and specificity was assessed by analyzing synthetic mixtures of both drugs. The methods were applied to the drugs' pharmaceutical formulation and the results obtained were statistically compared to those of a reported spectrophotometric method. The statistical comparison showed no significant difference between the proposed methods and the reported one regarding either accuracy or precision. Copyright © 2014 Elsevier B.V. All rights reserved.
Morgan, Katy E; Forbes, Andrew B; Keogh, Ruth H; Jairath, Vipul; Kahan, Brennan C
2017-01-30
In cluster randomised cross-over (CRXO) trials, clusters receive multiple treatments in a randomised sequence over time. In such trials, there is usually correlation between patients in the same cluster. In addition, within a cluster, patients in the same period may be more similar to each other than to patients in other periods. We demonstrate that it is necessary to account for these correlations in the analysis to obtain correct Type I error rates. We then use simulation to compare different methods of analysing a binary outcome from a two-period CRXO design. Our simulations demonstrated that hierarchical models without random effects for period-within-cluster, which do not account for any extra within-period correlation, performed poorly with greatly inflated Type I errors in many scenarios. In scenarios where extra within-period correlation was present, a hierarchical model with random effects for cluster and period-within-cluster only had correct Type I errors when there were large numbers of clusters; with small numbers of clusters, the error rate was inflated. We also found that generalised estimating equations did not give correct error rates in any scenarios considered. An unweighted cluster-level summary regression performed best overall, maintaining an error rate close to 5% for all scenarios, although it lost power when extra within-period correlation was present, especially for small numbers of clusters. Results from our simulation study show that it is important to model both levels of clustering in CRXO trials, and that any extra within-period correlation should be accounted for. Copyright © 2016 John Wiley & Sons, Ltd.
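A minimal sketch of the unweighted cluster-level summary analysis for a two-period cross-over follows: compute each cluster's period-2 minus period-1 event proportion and compare the two randomized sequences. The simulation parameters are arbitrary, and the hierarchical-model and GEE comparisons from the paper are not reproduced.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

n_clusters, n_per_period = 20, 50
treat_effect, period_effect = 0.10, 0.03      # on the risk scale (arbitrary)
base = rng.normal(0.30, 0.05, n_clusters)     # cluster-level baseline risks
sequence = np.repeat([0, 1], n_clusters // 2) # 0 = control first, 1 = treatment first

def simulate_period(p):
    return rng.binomial(n_per_period, np.clip(p, 0.01, 0.99)) / n_per_period

# Period 1 and period 2 event proportions for each cluster.
p1 = simulate_period(base + treat_effect * (sequence == 1))
p2 = simulate_period(base + period_effect + treat_effect * (sequence == 0))

# Unweighted cluster-level summary: within-cluster period differences compared
# between sequences (this removes cluster effects; the treatment effect is half
# the between-sequence difference in mean period differences).
diff = p2 - p1
d_seq0, d_seq1 = diff[sequence == 0], diff[sequence == 1]
est_treatment = (d_seq0.mean() - d_seq1.mean()) / 2
t, p_value = stats.ttest_ind(d_seq0, d_seq1)

print(f"estimated treatment effect (risk difference): {est_treatment:.3f}")
print(f"t-test p-value for treatment effect: {p_value:.3f}")
```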
Wheeler, David C.; Burstyn, Igor; Vermeulen, Roel; Yu, Kai; Shortreed, Susan M.; Pronk, Anjoeka; Stewart, Patricia A.; Colt, Joanne S.; Baris, Dalsu; Karagas, Margaret R.; Schwenn, Molly; Johnson, Alison; Silverman, Debra T.; Friesen, Melissa C.
2014-01-01
Objectives Evaluating occupational exposures in population-based case-control studies often requires exposure assessors to review each study participant's reported occupational information job-by-job to derive exposure estimates. Although such assessments likely have underlying decision rules, they usually lack transparency, are time-consuming and have uncertain reliability and validity. We aimed to identify the underlying rules to enable documentation, review, and future use of these expert-based exposure decisions. Methods Classification and regression trees (CART, predictions from a single tree) and random forests (predictions from many trees) were used to identify the underlying rules from the questionnaire responses and an expert's exposure assignments for occupational diesel exhaust exposure for several metrics: binary exposure probability and ordinal exposure probability, intensity, and frequency. Data were split into training (n=10,488 jobs), testing (n=2,247), and validation (n=2,248) data sets. Results The CART and random forest models' predictions agreed with 92–94% of the expert's binary probability assignments. For ordinal probability, intensity, and frequency metrics, the two models extracted decision rules more successfully for unexposed and highly exposed jobs (86–90% and 57–85%, respectively) than for low or medium exposed jobs (7–71%). Conclusions CART and random forest models extracted decision rules and accurately predicted an expert's exposure decisions for the majority of jobs and identified questionnaire response patterns that would require further expert review if the rules were applied to other jobs in the same or a different study. This approach makes the exposure assessment process in case-control studies more transparent and creates a mechanism to efficiently replicate exposure decisions in future studies. PMID:23155187
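A schematic version of this workflow is sketched below: simulate questionnaire-style job responses and an "expert" binary exposure assignment driven by simple hidden rules, fit a single tree and a random forest, and measure agreement with the expert; the variable names and rules are invented for illustration.

```python
import numpy as np
import pandas as pd
from sklearn.tree import DecisionTreeClassifier, export_text
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(8)
n = 5000

# Invented questionnaire-style responses describing each job.
jobs = pd.DataFrame({
    "drives_truck": rng.binomial(1, 0.2, n),
    "works_near_engines": rng.binomial(1, 0.3, n),
    "hours_per_week": rng.integers(0, 60, n),
    "indoor_work": rng.binomial(1, 0.6, n),
})

# "Expert" binary diesel-exposure assignment following simple hidden rules.
expert = ((jobs["drives_truck"] == 1)
          | ((jobs["works_near_engines"] == 1) & (jobs["hours_per_week"] > 20))
          ).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(jobs, expert, test_size=0.3, random_state=0)

tree = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X_tr, y_tr)
forest = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)

print("tree agreement with expert  :", (tree.predict(X_te) == y_te).mean().round(3))
print("forest agreement with expert:", (forest.predict(X_te) == y_te).mean().round(3))

# Extracted decision rules from the single tree, for review and documentation.
print(export_text(tree, feature_names=list(jobs.columns)))
```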