Sample records for two-parameter logistic model

  1. An Evaluation of Hierarchical Bayes Estimation for the Two-Parameter Logistic Model.

    ERIC Educational Resources Information Center

    Kim, Seock-Ho

    Hierarchical Bayes procedures for the two-parameter logistic item response model were compared for estimating item parameters. Simulated data sets were analyzed using two different Bayes estimation procedures, the two-stage hierarchical Bayes estimation (HB2) and the marginal Bayesian with known hyperparameters (MB), and marginal maximum…

  2. Equal Area Logistic Estimation for Item Response Theory

    NASA Astrophysics Data System (ADS)

    Lo, Shih-Ching; Wang, Kuo-Chang; Chang, Hsin-Li

    2009-08-01

    Item response theory (IRT) models use logistic functions exclusively as item response functions (IRFs). Applications of IRT models require obtaining the set of values for the logistic function parameters that best fits an empirical data set. However, success in obtaining such a set of values does not guarantee that the constructs they represent actually exist, for the adequacy of a model is not established merely by the possibility of estimating its parameters. In this study, an equal-area-based estimation algorithm for the two-parameter logistic model is proposed. Two theorems prove that the results of the algorithm are equivalent to those of fitting the data with the logistic model. Numerical results demonstrate the stability and accuracy of the algorithm.
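
The two-parameter logistic IRF underlying this record is P(θ) = 1 / (1 + exp(−a(θ − b))). As a minimal sketch of the equal-area idea (plain Python; the parameter values are hypothetical and this is not the authors' algorithm), the area under the 2PL curve has a closed form that a numerical quadrature can be checked against:

```python
import math

def irf_2pl(theta, a, b):
    """Two-parameter logistic item response function."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def area_closed_form(lo, hi, a, b):
    """Exact area under the 2PL IRF on [lo, hi]: the antiderivative
    of 1/(1+exp(-a(t-b))) is log(1+exp(a(t-b)))/a."""
    antideriv = lambda t: math.log1p(math.exp(a * (t - b))) / a
    return antideriv(hi) - antideriv(lo)

def area_trapezoid(lo, hi, a, b, n=10_000):
    """Numerical area, the quantity an equal-area criterion would match."""
    h = (hi - lo) / n
    s = 0.5 * (irf_2pl(lo, a, b) + irf_2pl(hi, a, b))
    s += sum(irf_2pl(lo + i * h, a, b) for i in range(1, n))
    return s * h

a, b = 1.2, 0.5          # hypothetical discrimination and difficulty
exact = area_closed_form(-4.0, 4.0, a, b)
approx = area_trapezoid(-4.0, 4.0, a, b)
```

Matching such areas between an empirical curve and the model IRF is the flavour of criterion the record describes; the actual theorems and algorithm are in the paper itself.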

  3. On Interpreting the Model Parameters for the Three Parameter Logistic Model

    ERIC Educational Resources Information Center

    Maris, Gunter; Bechger, Timo

    2009-01-01

    This paper addresses two problems relating to the interpretability of the model parameters in the three parameter logistic model. First, it is shown that if the values of the discrimination parameters are all the same, the remaining parameters are nonidentifiable in a nontrivial way that involves not only ability and item difficulty, but also the…

  4. An Application of a Multidimensional Extension of the Two-Parameter Logistic Latent Trait Model.

    ERIC Educational Resources Information Center

    McKinley, Robert L.; Reckase, Mark D.

    A latent trait model is described that is appropriate for use with tests that measure more than one dimension, and its application to both real and simulated test data is demonstrated. Procedures for estimating the parameters of the model are presented. The research objectives are to determine whether the two-parameter logistic model more…

  5. Use of Robust z in Detecting Unstable Items in Item Response Theory Models

    ERIC Educational Resources Information Center

    Huynh, Huynh; Meyer, Patrick

    2010-01-01

    The first part of this paper describes the use of the robust z_R statistic to link test forms using the Rasch (or one-parameter logistic) model. The procedure is then extended to the two-parameter and three-parameter logistic and two-parameter partial credit (2PPC) models. A real set of data was used to illustrate the extension. The…

  6. Careful with Those Priors: A Note on Bayesian Estimation in Two-Parameter Logistic Item Response Theory Models

    ERIC Educational Resources Information Center

    Marcoulides, Katerina M.

    2018-01-01

    This study examined the use of Bayesian analysis methods for the estimation of item parameters in a two-parameter logistic item response theory model. Using simulated data under various design conditions with both informative and non-informative priors, the parameter recovery of Bayesian analysis methods was examined. Overall results showed that…

  7. Some Observations on the Identification and Interpretation of the 3PL IRT Model

    ERIC Educational Resources Information Center

    Azevedo, Caio Lucidius Naberezny

    2009-01-01

    The paper by Maris and Bechger (2009), entitled "On Interpreting the Model Parameters for the Three Parameter Logistic Model," addressed two important questions concerning the three-parameter logistic (3PL) item response theory (IRT) model (and, in a broader sense, all IRT models). The first one is related to the model…

  8. An Evaluation of a Markov Chain Monte Carlo Method for the Two-Parameter Logistic Model.

    ERIC Educational Resources Information Center

    Kim, Seock-Ho; Cohen, Allan S.

    The accuracy of the Markov Chain Monte Carlo (MCMC) procedure Gibbs sampling was considered for estimation of item parameters of the two-parameter logistic model. Data for the Law School Admission Test (LSAT) Section 6 were analyzed to illustrate the MCMC procedure. In addition, simulated data sets were analyzed using the MCMC, marginal Bayesian…
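
For readers who want the flavour of MCMC item-parameter estimation, here is a toy random-walk Metropolis sketch in plain Python (not the Gibbs sampler evaluated in the record): it recovers the (a, b) parameters of one simulated 2PL item while treating examinee abilities as known. The sample size, priors, and tuning constants are all hypothetical.

```python
import math
import random

random.seed(0)

def p2pl(theta, a, b):
    """Two-parameter logistic item response function."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

# Simulate one item's responses for examinees with known abilities.
true_a, true_b = 1.0, 0.0
thetas = [random.gauss(0.0, 1.0) for _ in range(500)]
resp = [1 if random.random() < p2pl(t, true_a, true_b) else 0 for t in thetas]

def log_post(a, b):
    """Bernoulli log-likelihood plus weak lognormal/normal priors.
    Exponents stay moderate here, so the naive log() calls are safe."""
    if a <= 0:
        return -math.inf
    ll = sum(math.log(p2pl(t, a, b)) if y else math.log(1.0 - p2pl(t, a, b))
             for t, y in zip(thetas, resp))
    return ll - 0.5 * math.log(a) ** 2 - 0.125 * b ** 2

a, b = 1.5, 0.5                                   # arbitrary starting point
lp = log_post(a, b)
chain = []
for _ in range(2000):
    a_prop, b_prop = a + random.gauss(0, 0.1), b + random.gauss(0, 0.1)
    lp_prop = log_post(a_prop, b_prop)
    if math.log(random.random()) < lp_prop - lp:  # Metropolis accept/reject
        a, b, lp = a_prop, b_prop, lp_prop
    chain.append((a, b))

post = chain[500:]                                # crude burn-in
a_hat = sum(x for x, _ in post) / len(post)
b_hat = sum(y for _, y in post) / len(post)
```

With many items one would alternate such updates over items and abilities, which is essentially what Gibbs-style samplers like the one evaluated here do.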

  9. An Extension of the Concept of Specific Objectivity.

    ERIC Educational Resources Information Center

    Irtel, Hans

    1995-01-01

    Comparisons of subjects are specifically objective if they do not depend on the items involved. Such comparisons are not restricted to the one-parameter logistic latent trait model but may also be defined within ordinal independence models and even within the two-parameter logistic model. (Author)

  10. Standard Errors and Confidence Intervals from Bootstrapping for Ramsay-Curve Item Response Theory Model Item Parameters

    ERIC Educational Resources Information Center

    Gu, Fei; Skorupski, William P.; Hoyle, Larry; Kingston, Neal M.

    2011-01-01

    Ramsay-curve item response theory (RC-IRT) is a nonparametric procedure that estimates the latent trait using splines, and no distributional assumption about the latent trait is required. For item parameters of the two-parameter logistic (2-PL), three-parameter logistic (3-PL), and polytomous IRT models, RC-IRT can provide more accurate estimates…

  11. Logistic Mixed Models to Investigate Implicit and Explicit Belief Tracking.

    PubMed

    Lages, Martin; Scheel, Anne

    2016-01-01

    We investigated the proposition of a two-systems Theory of Mind in adults' belief tracking. A sample of N = 45 participants predicted the choice of one of two opponent players after observing several rounds in an animated card game. Three matches of this card game were played, and initial gaze direction on target and subsequent choice predictions were recorded for each belief task and participant. We conducted logistic regressions with mixed effects on the binary data and developed Bayesian logistic mixed models to infer implicit and explicit mentalizing in true belief and false belief tasks. Although logistic regressions with mixed effects predicted the data well, a Bayesian logistic mixed model with latent task- and subject-specific parameters gave a better account of the data. As expected, explicit choice predictions suggested a clear understanding of true and false beliefs (TB/FB). Surprisingly, however, model parameters for initial gaze direction also indicated belief tracking. We discuss why task-specific parameters for initial gaze directions are different from choice predictions yet reflect second-order perspective taking.

  12. A Comparison of the One- and Three-Parameter Logistic Models on Measures of Test Efficiency.

    ERIC Educational Resources Information Center

    Benson, Jeri

    Two methods of item selection were used to select sets of 40 items from a 50-item verbal analogies test, and the resulting item sets were compared for relative efficiency. The BICAL program was used to select the 40 items having the best mean square fit to the one parameter logistic (Rasch) model. The LOGIST program was used to select the 40 items…

  13. Logistic regression for dichotomized counts.

    PubMed

    Preisser, John S; Das, Kalyan; Benecha, Habtamu; Stamm, John W

    2016-12-01

    Sometimes there is interest in a dichotomized outcome indicating whether a count variable is positive or zero. Under this scenario, the application of ordinary logistic regression may result in efficiency loss, which is quantifiable under an assumed model for the counts. In such situations, a shared-parameter hurdle model is investigated for more efficient estimation of regression parameters relating to overall effects of covariates on the dichotomous outcome, while handling count data with many zeroes. One model part provides a logistic regression containing marginal log odds ratio effects of primary interest, while an ancillary model part describes the mean count of a Poisson or negative binomial process in terms of nuisance regression parameters. Asymptotic efficiency of the logistic model parameter estimators of the two-part models is evaluated with respect to ordinary logistic regression. Simulations are used to assess the properties of the models with respect to power and Type I error, the latter investigated under both misspecified and correctly specified models. The methods are applied to data from a randomized clinical trial of three toothpaste formulations to prevent incident dental caries in a large population of Scottish schoolchildren. © The Author(s) 2014.
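
To see the kind of dichotomization this record deals with, here is a minimal plain-Python simulation (hypothetical Poisson rates; this is ordinary logistic regression on the indicator of a positive count, not the authors' shared-parameter hurdle model). With one binary covariate, the logistic-regression slope MLE is simply the log odds ratio of the 2x2 table:

```python
import math
import random

random.seed(1)

def poisson(mu):
    """Draw a Poisson(mu) count by CDF inversion (fine for small mu)."""
    k, p, u = 0, math.exp(-mu), random.random()
    cdf = p
    while u > cdf:
        k += 1
        p *= mu / k
        cdf += p
    return k

# Hypothetical count model: mean count 0.5 when x = 0 and 1.0 when x = 1.
n = 20_000
data = []
for _ in range(n):
    x = 1 if random.random() < 0.5 else 0
    y = 1 if poisson(1.0 if x else 0.5) > 0 else 0   # dichotomized outcome
    data.append((x, y))

# Logistic regression of y on x: with a single binary covariate the
# slope MLE equals the 2x2 table's log odds ratio.
n11 = sum(1 for x, y in data if x and y)
n10 = sum(1 for x, y in data if x and not y)
n01 = sum(1 for x, y in data if not x and y)
n00 = sum(1 for x, y in data if not x and not y)
beta1_hat = math.log(n11 * n00 / (n10 * n01))

# Implied true effect: P(y = 1 | x) = 1 - exp(-mu_x) under the Poisson model.
p1, p0 = 1 - math.exp(-1.0), 1 - math.exp(-0.5)
beta1_true = math.log(p1 / (1 - p1)) - math.log(p0 / (1 - p0))
```

The record's point is that this marginal log-odds effect can be estimated more efficiently by also modeling the counts; the sketch only sets up the quantity being estimated.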

  14. An Evaluation of One- and Three-Parameter Logistic Tailored Testing Procedures for Use with Small Item Pools.

    ERIC Educational Resources Information Center

    McKinley, Robert L.; Reckase, Mark D.

    A two-stage study was conducted to compare the ability estimates yielded by tailored testing procedures based on the one-parameter logistic (1PL) and three-parameter logistic (3PL) models. The first stage of the study employed real data, while the second stage employed simulated data. In the first stage, response data for 3,000 examinees were…

  16. Kinetic compensation effect in logistic distributed activation energy model for lignocellulosic biomass pyrolysis.

    PubMed

    Xu, Di; Chai, Meiyun; Dong, Zhujun; Rahman, Md Maksudur; Yu, Xi; Cai, Junmeng

    2018-06-04

    The kinetic compensation effect in the logistic distributed activation energy model (DAEM) for lignocellulosic biomass pyrolysis was investigated. The sum of square error (SSE) surface tool was used to analyze two theoretically simulated logistic DAEM processes for cellulose and xylan pyrolysis. The logistic DAEM coupled with the pattern search method for parameter estimation was used to analyze the experimental data of cellulose pyrolysis. The results showed that many parameter sets of the logistic DAEM could fit the data at different heating rates very well for both simulated and experimental processes, and a perfect linear relationship between the logarithm of the frequency factor and the mean value of the activation energy distribution was found. The parameters of the logistic DAEM can be estimated by coupling the optimization method and isoconversional kinetic methods. The results would be helpful for chemical kinetic analysis using DAEM. Copyright © 2018 Elsevier Ltd. All rights reserved.

  17. A development of logistics management models for the Space Transportation System

    NASA Technical Reports Server (NTRS)

    Carrillo, M. J.; Jacobsen, S. E.; Abell, J. B.; Lippiatt, T. F.

    1983-01-01

    A new analytic queueing approach was described which relates stockage levels, repair level decisions, and the project network schedule of prelaunch operations directly to the probability distribution of the space transportation system launch delay. Finite source population and limited repair capability were additional factors included in this logistics management model developed specifically for STS maintenance requirements. Data presently available to support logistics decisions were based on a comparability study of heavy aircraft components. A two-phase program is recommended by which NASA would implement an integrated data collection system, assemble logistics data from previous STS flights, revise extant logistics planning and resource requirement parameters using Bayes-Lin techniques, and adjust for uncertainty surrounding logistics systems performance parameters. The implementation of these recommendations can be expected to deliver more cost-effective logistics support.

  18. The Utility of IRT in Small-Sample Testing Applications.

    ERIC Educational Resources Information Center

    Sireci, Stephen G.

    The utility of modified item response theory (IRT) models in small-sample testing applications was studied. The modified models were based on the one- and two-parameter logistic models. One-, two-, and three-parameter models were also studied. Test data were from 4 years of a national certification examination for persons desiring…

  19. An inexact reverse logistics model for municipal solid waste management systems.

    PubMed

    Zhang, Yi Mei; Huang, Guo He; He, Li

    2011-03-01

    This paper proposed an inexact reverse logistics model for municipal solid waste management systems (IRWM). Waste managers, suppliers, industries and distributors were involved in strategic planning and operational execution through reverse logistics management. All the parameters were assumed to be intervals to quantify the uncertainties in the optimization process and solutions of IRWM. To solve this model, a piecewise interval programming approach was developed to deal with min-min functions in both objectives and constraints. The application of the model was illustrated through a classical municipal solid waste management case. With different cost parameters for the landfill and the WTE, two scenarios were analyzed. The IRWM could reflect the dynamic and uncertain characteristics of MSW management systems and could facilitate the generation of desired management plans. The model could be further advanced by incorporating methods for stochastic or fuzzy parameters into its framework. Designing a multi-waste, multi-echelon, multi-uncertainty reverse logistics model for waste management networks would also be worthwhile. Copyright © 2010 Elsevier Ltd. All rights reserved.

  20. The use of the logistic model in space motion sickness prediction

    NASA Technical Reports Server (NTRS)

    Lin, Karl K.; Reschke, Millard F.

    1987-01-01

    The one-equation and the two-equation logistic models were used to predict subjects' susceptibility to motion sickness in KC-135 parabolic flights using data from other ground-based motion sickness tests. The results show that the logistic models correctly predicted substantially more cases (an average of 13 percent) in the data subset used for model building. Overall, the logistic models achieved 53 to 65 percent correct predictions of the three endpoint parameters, whereas the Bayes linear discriminant procedure ranged from 48 to 65 percent correct for the cross-validation sample.

  1. Accounting for Slipping and Other False Negatives in Logistic Models of Student Learning

    ERIC Educational Resources Information Center

    MacLellan, Christopher J.; Liu, Ran; Koedinger, Kenneth R.

    2015-01-01

    Additive Factors Model (AFM) and Performance Factors Analysis (PFA) are two popular models of student learning that employ logistic regression to estimate parameters and predict performance. This is in contrast to Bayesian Knowledge Tracing (BKT) which uses a Hidden Markov Model formalism. While all three models tend to make similar predictions,…

  2. The use of generalized estimating equations in the analysis of motor vehicle crash data.

    PubMed

    Hutchings, Caroline B; Knight, Stacey; Reading, James C

    2003-01-01

    The purpose of this study was to determine if it is necessary to use generalized estimating equations (GEEs) in the analysis of seat belt effectiveness in preventing injuries in motor vehicle crashes. The 1992 Utah crash dataset was used, excluding crash participants where seat belt use was not appropriate (n=93,633). The model used in the 1996 Report to Congress [Report to congress on benefits of safety belts and motorcycle helmets, based on data from the Crash Outcome Data Evaluation System (CODES). National Center for Statistics and Analysis, NHTSA, Washington, DC, February 1996] was analyzed for all occupants with logistic regression, one level of nesting (occupants within crashes), and two levels of nesting (occupants within vehicles within crashes) to compare the use of GEEs with logistic regression. When using one level of nesting compared to logistic regression, 13 of 16 variance estimates changed more than 10%, and eight of 16 parameter estimates changed more than 10%. In addition, three of the independent variables changed from significant to insignificant (alpha=0.05). With the use of two levels of nesting, two of 16 variance estimates and three of 16 parameter estimates changed more than 10% from the variance and parameter estimates in one level of nesting. One of the independent variables changed from insignificant to significant (alpha=0.05) in the two levels of nesting model; therefore, only two of the independent variables changed from significant to insignificant when the logistic regression model was compared to the two levels of nesting model. The odds ratio of seat belt effectiveness in preventing injuries was 12% lower when a one-level nested model was used. Based on these results, we stress the need to use a nested model and GEEs when analyzing motor vehicle crash data.

  3. Parameter Estimates in Differential Equation Models for Population Growth

    ERIC Educational Resources Information Center

    Winkel, Brian J.

    2011-01-01

    We estimate the parameters present in several differential equation models of population growth, specifically logistic growth models and two-species competition models. We discuss student-evolved strategies and offer "Mathematica" code for a gradient search approach. We use historical (1930s) data from microbial studies of the Russian biologist,…
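
The article supplies Mathematica code for a gradient search; an analogous sketch of the same idea in Python might look as follows. The data here are synthetic and the search is a simple pattern/grid refinement, not one of the student-evolved strategies from the article:

```python
import math
import random

random.seed(2)

def logistic(t, r, K, n0=10.0):
    """Closed-form solution of dN/dt = r N (1 - N/K) with N(0) = n0."""
    return K / (1 + (K / n0 - 1) * math.exp(-r * t))

# Hypothetical noisy observations from a known curve (r = 0.8, K = 500).
ts = [0.5 * i for i in range(30)]
obs = [logistic(t, 0.8, 500.0) * (1 + random.gauss(0, 0.01)) for t in ts]

def sse(r, K):
    """Sum of squared errors; invalid parameter values are rejected."""
    if r <= 0 or K <= 1e-6:
        return float("inf")
    return sum((logistic(t, r, K) - y) ** 2 for t, y in zip(ts, obs))

# Compass/grid search: step to the best neighbour, shrink steps when stuck.
r_hat, K_hat, dr, dK = 1.0, 400.0, 0.5, 200.0
for _ in range(60):
    best = min(((r_hat + i * dr, K_hat + j * dK)
                for i in (-1, 0, 1) for j in (-1, 0, 1)),
               key=lambda p: sse(*p))
    if best == (r_hat, K_hat):
        dr, dK = dr / 2, dK / 2     # no neighbour improves: refine the grid
    r_hat, K_hat = best
```

A gradient-based search, as discussed in the article, replaces the neighbour scan with a step along the numerically estimated gradient of the error surface.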

  4. On Interpreting the Parameters for Any Item Response Model

    ERIC Educational Resources Information Center

    Thissen, David

    2009-01-01

    Maris and Bechger's article is an exercise in technical virtuosity and provides much to be learned by students of psychometrics. In this commentary, the author begins by making two observations. The first is that the title, "On Interpreting the Model Parameters for the Three Parameter Logistic Model," belies the generality of parts of Maris and…

  5. Scheduling Algorithm for Mission Planning and Logistics Evaluation (SAMPLE). Volume 2: Mission payloads subsystem description

    NASA Technical Reports Server (NTRS)

    Dupnick, E.; Wiggins, D.

    1980-01-01

    The scheduling algorithm for mission planning and logistics evaluation (SAMPLE) is presented. Two major subsystems are included: the mission payloads program and the set covering program. Formats and parameter definitions for the payload data set (payload model), feasible combination file, and traffic model are documented.

  6. A Note on the Item Information Function of the Four-Parameter Logistic Model

    ERIC Educational Resources Information Center

    Magis, David

    2013-01-01

    This article focuses on the four-parameter logistic (4PL) model as an extension of the usual three-parameter logistic (3PL) model, with an upper asymptote possibly different from 1. For a given item with fixed item parameters, Lord derived the value of the latent ability level that maximizes the item information function under the 3PL model. The…
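
The quantities in this record are easy to explore numerically. In a plain-Python sketch (hypothetical item parameters), the 4PL IRF is P(θ) = c + (d − c)/(1 + exp(−a(θ − b))), the item information is I(θ) = P′(θ)²/(P(1 − P)), and at d = 1 a brute-force grid maximizer should agree with Lord's closed-form 3PL result θ* = b + (1/a)·ln[(1 + √(1 + 8c))/2]:

```python
import math

def p4pl(theta, a, b, c, d):
    """Four-parameter logistic IRF: lower asymptote c, upper asymptote d."""
    return c + (d - c) / (1 + math.exp(-a * (theta - b)))

def info(theta, a, b, c, d):
    """Item information I = P'^2 / (P (1 - P)) with P' in closed form."""
    e = math.exp(-a * (theta - b))
    p = c + (d - c) / (1 + e)
    dp = a * (d - c) * e / (1 + e) ** 2
    return dp * dp / (p * (1 - p))

def argmax_info(a, b, c, d, lo=-6.0, hi=6.0, n=120_001):
    """Brute-force grid maximizer of the information function."""
    h = (hi - lo) / (n - 1)
    return max((lo + i * h for i in range(n)),
               key=lambda t: info(t, a, b, c, d))

a, b, c = 1.5, 0.2, 0.2           # hypothetical 3PL-style parameters
lord = b + math.log((1 + math.sqrt(1 + 8 * c)) / 2) / a
grid = argmax_info(a, b, c, 1.0)  # d = 1 reduces the 4PL to the 3PL
```

Magis's note derives the analogous analytic maximizer when d differs from 1; a grid search like the one above can be used to check any such closed form.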

  7. Two-echelon logistics service supply chain decision game considering quality supervision

    NASA Astrophysics Data System (ADS)

    Shi, Jiaying

    2017-10-01

    Given the increasing importance of logistics service in the supply chain, we established Stackelberg game models between a single integrator and a single subcontractor under decentralized and centralized settings, and found that the logistics service integrator, as the leader, prefers centralized decision-making, whereas logistics service subcontractors tend toward decentralized decision-making. We then analyzed why subcontractors choose to deceive and built a principal-agent game model to monitor their logistics service quality. The mixed-strategy Nash equilibrium and related parameters were discussed. The results show that strengthening supervision and coordination can improve the quality level of the logistics service supply chain.

  8. Fitting Item Response Theory Models to Two Personality Inventories: Issues and Insights.

    PubMed

    Chernyshenko, O S; Stark, S; Chan, K Y; Drasgow, F; Williams, B

    2001-10-01

    The present study compared the fit of several IRT models to two personality assessment instruments. Data from 13,059 individuals responding to the US-English version of the Fifth Edition of the Sixteen Personality Factor Questionnaire (16PF) and 1,770 individuals responding to Goldberg's 50 item Big Five Personality measure were analyzed. Various issues pertaining to the fit of the IRT models to personality data were considered. We examined two of the most popular parametric models designed for dichotomously scored items (i.e., the two- and three-parameter logistic models) and a parametric model for polytomous items (Samejima's graded response model). Also examined were Levine's nonparametric maximum likelihood formula scoring models for dichotomous and polytomous data, which were previously found to provide good fits to several cognitive ability tests (Drasgow, Levine, Tsien, Williams, & Mead, 1995). The two- and three-parameter logistic models fit some scales reasonably well but not others; the graded response model generally did not fit well. The nonparametric formula scoring models provided the best fit of the models considered. Several implications of these findings for personality measurement and personnel selection were described.

  9. Parameters Estimation of Geographically Weighted Ordinal Logistic Regression (GWOLR) Model

    NASA Astrophysics Data System (ADS)

    Zuhdi, Shaifudin; Retno Sari Saputro, Dewi; Widyaningsih, Purnami

    2017-06-01

    A regression model represents the relationship between independent and dependent variables. When the dependent variable is categorical, a logistic regression model is used to model the odds of its categories; when those categories are ordered, the model is an ordinal logistic regression. The GWOLR model is an ordinal logistic regression model influenced by the geographical location of the observation site. Parameter estimation is needed to infer population values from a sample. The purpose of this research is to estimate the parameters of a GWOLR model using R software. The estimation uses data on the number of dengue fever patients in Semarang City, with 144 villages as observation units. The research yields a local GWOLR model for each village, along with the probabilities of the dengue fever patient count categories.

  10. A novel hybrid method of beta-turn identification in protein using binary logistic regression and neural network

    PubMed Central

    Asghari, Mehdi Poursheikhali; Hayatshahi, Sayyed Hamed Sadat; Abdolmaleki, Parviz

    2012-01-01

    From both the structural and functional points of view, β-turns play important biological roles in proteins. In the present study, a novel two-stage hybrid procedure has been developed to identify β-turns in proteins. Binary logistic regression was initially used, for the first time, to select significant sequence parameters for the identification of β-turns via a re-substitution test procedure. The sequence parameters consisted of 80 amino acid positional occurrences and 20 amino acid percentages in the sequence. Among these parameters, the most significant ones selected by the binary logistic regression model were the percentages of Gly and Ser and the occurrence of Asn in position i+2; these significant parameters have the highest effect on the constitution of a β-turn sequence. A neural network model was then constructed and fed with the parameters selected by binary logistic regression to build a hybrid predictor. The networks were trained and tested on a non-homologous dataset of 565 protein chains. Applying a nine-fold cross-validation test on the dataset, the network reached an overall accuracy (Qtotal) of 74, which is comparable with the results of other β-turn prediction methods. In conclusion, this study shows that the parameter selection ability of binary logistic regression, together with the prediction capability of neural networks, leads to the development of more precise models for identifying β-turns in proteins. PMID:27418910

  12. On the Usefulness of a Multilevel Logistic Regression Approach to Person-Fit Analysis

    ERIC Educational Resources Information Center

    Conijn, Judith M.; Emons, Wilco H. M.; van Assen, Marcel A. L. M.; Sijtsma, Klaas

    2011-01-01

    The logistic person response function (PRF) models the probability of a correct response as a function of the item locations. Reise (2000) proposed to use the slope parameter of the logistic PRF as a person-fit measure. He reformulated the logistic PRF model as a multilevel logistic regression model and estimated the PRF parameters from this…

  13. Density-dependence as a size-independent regulatory mechanism.

    PubMed

    de Vladar, Harold P

    2006-01-21

    The growth function of populations is central in biomathematics. The main dogma is the existence of density-dependence mechanisms, which can be modelled with distinct functional forms that depend on the size of the population. One important class of regulatory functions is the theta-logistic, which generalizes the logistic equation. Using this model as a motivation, this paper introduces a simple dynamical reformulation that generalizes many growth functions. The reformulation consists of two equations, one for population size and one for the growth rate. Furthermore, the model shows that although the population is density-dependent, the dynamics of the growth rate depends neither on population size nor on the carrying capacity. In fact, the growth equation is uncoupled from the population size equation, and the model has only two parameters, a Malthusian parameter rho and a competition coefficient theta. Distinct sign combinations of these parameters reproduce not only the family of theta-logistics but also the von Bertalanffy, Gompertz and potential growth equations, among other possibilities. It is also shown that, except for two critical points, there is a general size-scaling relation that includes those appearing in the most important allometric theories, including the recently proposed Metabolic Theory of Ecology. With this model, several issues of general interest are discussed, such as the growth of animal populations, extinctions, cell growth and allometry, and the effect of the environment on a population.
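
The theta-logistic family discussed here is dN/dt = ρN(1 − (N/K)^θ), with θ = 1 giving the ordinary logistic equation. A minimal RK4 sketch in plain Python (hypothetical parameter values) checks the θ = 1 case against the logistic closed form:

```python
import math

def theta_logistic(n, rho, theta, K):
    """dN/dt = rho * N * (1 - (N/K)**theta); theta = 1 is the logistic."""
    return rho * n * (1 - (n / K) ** theta)

def integrate(n0, rho, theta, K, t_end, steps=10_000):
    """Classic fourth-order Runge-Kutta with a fixed step count."""
    dt, n = t_end / steps, n0
    for _ in range(steps):
        k1 = theta_logistic(n, rho, theta, K)
        k2 = theta_logistic(n + 0.5 * dt * k1, rho, theta, K)
        k3 = theta_logistic(n + 0.5 * dt * k2, rho, theta, K)
        k4 = theta_logistic(n + dt * k3, rho, theta, K)
        n += dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6
    return n

rho, K, n0 = 0.7, 100.0, 5.0
closed = K / (1 + (K / n0 - 1) * math.exp(-rho * 10.0))  # logistic solution
numeric = integrate(n0, rho, 1.0, K, 10.0)
flatter = integrate(n0, rho, 0.5, K, 200.0)              # theta < 1 still -> K
```

Other sign choices of (ρ, θ) reproduce the von Bertalanffy- and Gompertz-type behaviours the paper discusses.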

  14. Comparing the IRT Pre-equating and Section Pre-equating: A Simulation Study.

    ERIC Educational Resources Information Center

    Hwang, Chi-en; Cleary, T. Anne

    The results obtained from two basic types of test pre-equating were compared: item response theory (IRT) pre-equating and section pre-equating (SPE). The simulated data were generated from a modified three-parameter logistic model with a constant guessing parameter. Responses of two replication samples of 3000 examinees on two 72-item…

  15. Sourcing for Parameter Estimation and Study of Logistic Differential Equation

    ERIC Educational Resources Information Center

    Winkel, Brian J.

    2012-01-01

    This article offers modelling opportunities in which the phenomena of the spread of disease, perception of changing mass, growth of technology, and dissemination of information can be described by one differential equation--the logistic differential equation. It presents two simulation activities for students to generate real data, as well as…

  16. Binomial outcomes in dataset with some clusters of size two: can the dependence of twins be accounted for? A simulation study comparing the reliability of statistical methods based on a dataset of preterm infants.

    PubMed

    Sauzet, Odile; Peacock, Janet L

    2017-07-20

    The analysis of perinatal outcomes often involves datasets with some multiple births. These datasets consist mostly of independent observations plus a limited number of clusters of size two (twins) and occasionally of size three or more. This non-independence needs to be accounted for in the statistical analysis. Using simulated data based on a dataset of preterm infants, we previously investigated the performance of several approaches to the analysis of continuous outcomes in the presence of some clusters of size two. Mixed models have been developed for binomial outcomes, but very little is known about their reliability when only a limited number of small clusters are present. Using simulated data based on a dataset of preterm infants, we investigated the performance of several approaches to the analysis of binomial outcomes in the presence of some clusters of size two. Logistic models, several methods of estimation for the logistic random intercept model, and generalised estimating equations were compared. The presence of even a small percentage of twins means that a logistic regression model will underestimate all parameters, while a logistic random intercept model fails to estimate the correlation between siblings if the percentage of twins is too small and will then provide estimates similar to those of logistic regression. The method that seems to provide the best balance between estimation of the standard error and of the parameters for any percentage of twins is generalised estimating equations. This study has shown that the number of covariates and the level-two variance do not necessarily affect the performance of the various methods used to analyse datasets containing twins, but when the percentage of small clusters is too small, mixed models cannot capture the dependence between siblings.

  17. Process model comparison and transferability across bioreactor scales and modes of operation for a mammalian cell bioprocess.

    PubMed

    Craven, Stephen; Shirsat, Nishikant; Whelan, Jessica; Glennon, Brian

    2013-01-01

A Monod kinetic model, logistic equation model, and statistical regression model were developed for a Chinese hamster ovary cell bioprocess operated under three different modes of operation (batch, bolus fed-batch, and continuous fed-batch) and grown on two different bioreactor scales (3 L bench-top and 15 L pilot-scale). The Monod kinetic model was developed for all modes of operation under study and predicted cell density and glucose, glutamine, lactate, and ammonia concentrations well for the bioprocess. However, it was computationally demanding due to the large number of parameters necessary to produce a good model fit. The transferability of the Monod kinetic model structure and parameter set across bioreactor scales and modes of operation was investigated and a parameter sensitivity analysis performed. The experimentally determined parameters had the greatest influence on model performance. They changed with scale and mode of operation, but were easily calculated. The remaining parameters, which were fitted using a differential evolutionary algorithm, were not as crucial. Logistic equation and statistical regression models were investigated as alternatives to the Monod kinetic model. They were less computationally intensive to develop due to the absence of a large parameter set. However, modeling of the nutrient and metabolite concentrations proved to be troublesome due to the logistic equation model structure and the inability of both models to incorporate a feed. The complexity, computational load, and effort required for model development have to be balanced against the necessary level of model sophistication when choosing which model type to develop for a particular application. Copyright © 2012 American Institute of Chemical Engineers (AIChE).

  18. Designing a capacitated multi-configuration logistics network under disturbances and parameter uncertainty: a real-world case of a drug supply chain

    NASA Astrophysics Data System (ADS)

    Shishebori, Davood; Babadi, Abolghasem Yousefi

    2018-03-01

This study investigates the reliable multi-configuration capacitated logistics network design problem (RMCLNDP) under system disturbances, which involves locating facilities, establishing transportation links, and allocating their limited capacities to customers so as to satisfy demand at minimum expected total cost (including location costs, link-construction costs, and expected costs under normal and disturbance conditions). In addition, two types of risk are considered: (I) an uncertain environment and (II) system disturbances. A two-level mathematical model is proposed to formulate the problem. Because of the uncertain parameters of the model, an efficacious possibilistic robust optimization approach is utilized. To evaluate the model, a drug supply chain network (SCN) design is studied. Finally, an extensive sensitivity analysis was performed on the critical parameters. The results show that the proposed approach is efficient and worthwhile for analyzing real practical problems.

  19. Finding Bayesian Optimal Designs for Nonlinear Models: A Semidefinite Programming-Based Approach.

    PubMed

    Duarte, Belmiro P M; Wong, Weng Kee

    2015-08-01

    This paper uses semidefinite programming (SDP) to construct Bayesian optimal design for nonlinear regression models. The setup here extends the formulation of the optimal designs problem as an SDP problem from linear to nonlinear models. Gaussian quadrature formulas (GQF) are used to compute the expectation in the Bayesian design criterion, such as D-, A- or E-optimality. As an illustrative example, we demonstrate the approach using the power-logistic model and compare results in the literature. Additionally, we investigate how the optimal design is impacted by different discretising schemes for the design space, different amounts of uncertainty in the parameter values, different choices of GQF and different prior distributions for the vector of model parameters, including normal priors with and without correlated components. Further applications to find Bayesian D-optimal designs with two regressors for a logistic model and a two-variable generalised linear model with a gamma distributed response are discussed, and some limitations of our approach are noted.

  20. Finding Bayesian Optimal Designs for Nonlinear Models: A Semidefinite Programming-Based Approach

    PubMed Central

    Duarte, Belmiro P. M.; Wong, Weng Kee

    2014-01-01

    Summary This paper uses semidefinite programming (SDP) to construct Bayesian optimal design for nonlinear regression models. The setup here extends the formulation of the optimal designs problem as an SDP problem from linear to nonlinear models. Gaussian quadrature formulas (GQF) are used to compute the expectation in the Bayesian design criterion, such as D-, A- or E-optimality. As an illustrative example, we demonstrate the approach using the power-logistic model and compare results in the literature. Additionally, we investigate how the optimal design is impacted by different discretising schemes for the design space, different amounts of uncertainty in the parameter values, different choices of GQF and different prior distributions for the vector of model parameters, including normal priors with and without correlated components. Further applications to find Bayesian D-optimal designs with two regressors for a logistic model and a two-variable generalised linear model with a gamma distributed response are discussed, and some limitations of our approach are noted. PMID:26512159

  1. Construction of a Computerized Adaptive Testing Version of the Quebec Adaptive Behavior Scale.

    ERIC Educational Resources Information Center

    Tasse, Marc J.; And Others

    Multilog (Thissen, 1991) was used to estimate parameters of 225 items from the Quebec Adaptive Behavior Scale (QABS). A database containing actual data from 2,439 subjects was used for the parameterization procedures. The two-parameter-logistic model was used in estimating item parameters and in the testing strategy. MicroCAT (Assessment Systems…
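The two-parameter logistic (2PL) model used for the QABS calibration above, and throughout these records, gives the probability of a correct (or endorsed) response as a logistic function of ability θ with item discrimination a and difficulty b. A minimal sketch, omitting the optional scaling constant D ≈ 1.7 that some parameterizations include:

```python
import numpy as np

def irf_2pl(theta, a, b):
    """Two-parameter logistic item response function:
    P(correct | theta) = 1 / (1 + exp(-a * (theta - b)))."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

# At theta == b the success probability is exactly 0.5, and a larger
# discrimination a makes the curve steeper around the difficulty b.
thetas = np.linspace(-3.0, 3.0, 7)
probs = irf_2pl(thetas, a=1.5, b=0.0)
```

In adaptive testing, item selection typically maximizes the Fisher information of this curve at the current ability estimate, which for the 2PL is a² P(θ)(1 − P(θ)).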

  2. Stochastic dynamics and logistic population growth

    NASA Astrophysics Data System (ADS)

    Méndez, Vicenç; Assaf, Michael; Campos, Daniel; Horsthemke, Werner

    2015-06-01

    The Verhulst model is probably the best known macroscopic rate equation in population ecology. It depends on two parameters, the intrinsic growth rate and the carrying capacity. These parameters can be estimated for different populations and are related to the reproductive fitness and the competition for limited resources, respectively. We investigate analytically and numerically the simplest possible microscopic scenarios that give rise to the logistic equation in the deterministic mean-field limit. We provide a definition of the two parameters of the Verhulst equation in terms of microscopic parameters. In addition, we derive the conditions for extinction or persistence of the population by employing either the momentum-space spectral theory or the real-space Wentzel-Kramers-Brillouin approximation to determine the probability distribution function and the mean time to extinction of the population. Our analytical results agree well with numerical simulations.
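The Verhulst rate equation dN/dt = rN(1 − N/K) discussed above has the closed-form solution N(t) = K·N₀·e^{rt} / (K + N₀(e^{rt} − 1)). A short sketch comparing that solution with a forward-Euler integration of the rate equation (parameter values are illustrative):

```python
import numpy as np

def verhulst(n0, r, K, t):
    """Closed-form solution of the Verhulst equation dN/dt = r N (1 - N/K)."""
    return K * n0 * np.exp(r * t) / (K + n0 * (np.exp(r * t) - 1.0))

def euler(n0, r, K, t_max, dt=1e-3):
    """Forward-Euler integration of the same rate equation."""
    n = n0
    for _ in range(int(round(t_max / dt))):
        n += dt * r * n * (1.0 - n / K)
    return n
```

Both approaches saturate at the carrying capacity K; the stochastic microscopic models analyzed in the paper reduce to this deterministic curve only in the mean-field limit, with extinction remaining possible in the finite-population process.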

  3. Ramsay-Curve Item Response Theory for the Three-Parameter Logistic Item Response Model

    ERIC Educational Resources Information Center

    Woods, Carol M.

    2008-01-01

    In Ramsay-curve item response theory (RC-IRT), the latent variable distribution is estimated simultaneously with the item parameters of a unidimensional item response model using marginal maximum likelihood estimation. This study evaluates RC-IRT for the three-parameter logistic (3PL) model with comparisons to the normal model and to the empirical…

  4. Ability Estimation and Item Calibration Using the One and Three Parameter Logistic Models: A Comparative Study. Research Report 77-1.

    ERIC Educational Resources Information Center

    Reckase, Mark D.

    Latent trait model calibration procedures were used on data obtained from a group testing program. The one-parameter model of Wright and Panchapakesan and the three-parameter logistic model of Wingersky, Wood, and Lord were selected for comparison. These models and their corresponding estimation procedures were compared, using actual and simulated…

  5. The Impact of Three Factors on the Recovery of Item Parameters for the Three-Parameter Logistic Model

    ERIC Educational Resources Information Center

    Kim, Kyung Yong; Lee, Won-Chan

    2017-01-01

    This article provides a detailed description of three factors (specification of the ability distribution, numerical integration, and frame of reference for the item parameter estimates) that might affect the item parameter estimation of the three-parameter logistic model, and compares five item calibration methods, which are combinations of the…

  6. Reducing the Dynamical Degradation by Bi-Coupling Digital Chaotic Maps

    NASA Astrophysics Data System (ADS)

    Liu, Lingfeng; Liu, Bocheng; Hu, Hanping; Miao, Suoxia

A chaotic map realized on a computer suffers dynamical degradation. Here, a coupled chaotic model is proposed to reduce this degradation. In this model, the state variable of one digital chaotic map is used to control the parameter of the other digital map. This coupled model is universal and can be used for all chaotic maps. In this paper, two coupled models (one coupling two logistic maps, the other coupling a Chebyshev map and a Baker map) are evaluated, and numerical experiments show that the performance of these two coupled chaotic maps is greatly improved. Furthermore, a simple pseudorandom bit generator (PRBG) based on coupled digital logistic maps is proposed as an application of the method.
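The coupling idea described, one map's state variable driving the other map's parameter, can be sketched as follows. The parameter band [3.57, 4.0] (the chaotic regime of the logistic map) and the comparison-based bit extraction are our illustrative choices, not the authors' exact construction:

```python
def coupled_logistic_bits(x0=0.3, y0=0.7, n=64):
    """Sketch of bi-coupling two logistic maps: each map's state sets
    the other map's parameter, and one bit per step comes from
    comparing the two states. Parameter range and bit-extraction rule
    are illustrative, not the paper's exact scheme."""
    x, y = x0, y0
    bits = []
    for _ in range(n):
        rx = 3.57 + 0.43 * y   # state of map Y drives parameter of map X
        ry = 3.57 + 0.43 * x   # state of map X drives parameter of map Y
        x = rx * x * (1.0 - x)
        y = ry * y * (1.0 - y)
        bits.append(1 if x > y else 0)
    return bits
```

Because the parameter of each map keeps moving, short periodic orbits of a single fixed-parameter digital map are harder to fall into, which is the intuition behind the degradation reduction.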

  7. Modeling the Risk of Radiation-Induced Acute Esophagitis for Combined Washington University and RTOG Trial 93-11 Lung Cancer Patients

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huang, Ellen X.; Bradley, Jeffrey D.; El Naqa, Issam

    2012-04-01

Purpose: To construct a maximally predictive model of the risk of severe acute esophagitis (AE) for patients who receive definitive radiation therapy (RT) for non-small-cell lung cancer. Methods and Materials: The dataset includes Washington University and RTOG 93-11 clinical trial data (events/patients: 120/374, WUSTL = 101/237, RTOG9311 = 19/137). Statistical model building was performed based on dosimetric and clinical parameters (patient age, sex, weight loss, pretreatment chemotherapy, concurrent chemotherapy, fraction size). A wide range of dose-volume parameters were extracted from dearchived treatment plans, including Dx, Vx, MOHx (mean of hottest x% volume), MOCx (mean of coldest x% volume), and gEUD (generalized equivalent uniform dose) values. Results: The most significant single parameters for predicting acute esophagitis (RTOG Grade 2 or greater) were MOH85, mean esophagus dose (MED), and V30. A superior-inferior weighted dose-center position was derived but not found to be significant. Fraction size was found to be significant on univariate logistic analysis (Spearman R = 0.421, p < 0.00001) but not in multivariate logistic modeling. Cross-validation model building determined that an optimal model needed only two parameters (MOH85 and concurrent chemotherapy, robustly selected on bootstrap model rebuilding). Mean esophagus dose (MED) is preferred over MOH85, as it gives nearly the same statistical performance and is easier to compute. AE risk is given as a logistic function of (0.0688 × MED + 1.50 × ConChemo − 3.13), where MED is in Gy and ConChemo is 1 if concurrent chemotherapy was given and 0 otherwise. This model correlates with the observed risk of AE with a Spearman coefficient of 0.629 (p < 0.000001). Conclusions: Multivariate statistical model building with cross-validation suggests that a two-variable logistic model based on mean dose and the use of concurrent chemotherapy robustly predicts acute esophagitis risk in the combined WUSTL and RTOG 93-11 trial datasets.
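The reported two-variable model can be evaluated directly. The function below simply encodes the coefficients quoted in the abstract (0.0688 per Gy of mean esophagus dose, 1.50 for concurrent chemotherapy, intercept −3.13); it restates the published fit for illustration and is not a validated clinical tool.

```python
import math

def ae_risk(med_gy, concurrent_chemo):
    """Acute-esophagitis risk from the fitted two-variable logistic
    model quoted above: logistic(0.0688 * MED + 1.50 * ConChemo - 3.13),
    with MED in Gy and ConChemo in {0, 1}."""
    z = 0.0688 * med_gy + 1.50 * concurrent_chemo - 3.13
    return 1.0 / (1.0 + math.exp(-z))
```

As expected from the positive coefficients, predicted risk rises with mean esophagus dose and is higher when concurrent chemotherapy is given.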

  8. Logistic regression of family data from retrospective study designs.

    PubMed

    Whittemore, Alice S; Halpern, Jerry

    2003-11-01

We wish to study the effects of genetic and environmental factors on disease risk, using data from families ascertained because they contain multiple cases of the disease. To do so, we must account for the way participants were ascertained, and for within-family correlations in both disease occurrences and covariates. We model the joint probability distribution of the covariates of ascertained family members, given family disease occurrence and pedigree structure. We describe two such covariate models: the random effects model and the marginal model. Both models assume a logistic form for the distribution of one person's covariates that involves a vector beta of regression parameters. The components of beta in the two models have different interpretations, and they differ in magnitude when the covariates are correlated within families. We describe ascertainment assumptions needed to estimate consistently the parameters beta(RE) in the random effects model and the parameters beta(M) in the marginal model. Under the ascertainment assumptions for the random effects model, we show that conditional logistic regression (CLR) of matched family data gives a consistent estimate for beta(RE) and a consistent estimate for its covariance matrix. Under the ascertainment assumptions for the marginal model, we show that unconditional logistic regression (ULR) gives a consistent estimate for beta(M), and we give a consistent estimator for its covariance matrix. The random effects/CLR approach is simple to use and to interpret, but it can use data only from families containing both affected and unaffected members. The marginal/ULR approach uses data from all individuals, but its variance estimates require special computations. A C program to compute these variance estimates is available at http://www.stanford.edu/dept/HRP/epidemiology.
We illustrate these pros and cons by application to data on the effects of parity on ovarian cancer risk in mother/daughter pairs, and use simulations to study the performance of the estimates. Copyright 2003 Wiley-Liss, Inc.

  9. A Comparison of the Fit of Empirical Data to Two Latent Trait Models. Report No. 92.

    ERIC Educational Resources Information Center

    Hutten, Leah R.

    Goodness of fit of raw test score data were compared, using two latent trait models: the Rasch model and the Birnbaum three-parameter logistic model. Data were taken from various achievement tests and the Scholastic Aptitude Test (Verbal). A minimum sample size of 1,000 was required, and the minimum test length was 40 items. Results indicated that…

  10. Predicting risk for portal vein thrombosis in acute pancreatitis patients: A comparison of radial basis function artificial neural network and logistic regression models.

    PubMed

    Fei, Yang; Hu, Jian; Gao, Kun; Tu, Jianfeng; Li, Wei-Qin; Wang, Wei

    2017-06-01

To construct a radial basis function (RBF) artificial neural network (ANN) model to predict the incidence of acute pancreatitis (AP)-induced portal vein thrombosis (PVT). The analysis included 353 patients with AP who had been admitted between January 2011 and December 2015. An RBF ANN model and a logistic regression model were constructed based on eleven factors relevant to AP. Statistical indexes were used to evaluate the predictive value of the two models. The sensitivity, specificity, positive predictive value, negative predictive value, and accuracy of the RBF ANN model for PVT were 73.3%, 91.4%, 68.8%, 93.0%, and 87.7%, respectively. There were significant differences between the RBF ANN and logistic regression models in these parameters (P<0.05). In addition, a comparison of the areas under the receiver operating characteristic curves of the two models showed a statistically significant difference (P<0.05). The RBF ANN model is more likely than the logistic regression model to predict the occurrence of PVT induced by AP. D-dimer, AMY, Hct, and PT were important predictors of AP-induced PVT. Copyright © 2017 Elsevier Inc. All rights reserved.

  11. Age and growth parameters of shark-like batoids.

    PubMed

    White, J; Simpfendorfer, C A; Tobin, A J; Heupel, M R

    2014-05-01

Estimates of life-history parameters were made for shark-like batoids of conservation concern Rhynchobatus spp. (Rhynchobatus australiae, Rhynchobatus laevis and Rhynchobatus palpebratus) and Glaucostegus typus using vertebral ageing. The sigmoid growth functions, Gompertz and logistic, best described the growth of Rhynchobatus spp. and G. typus, providing the best statistical fit and most biologically appropriate parameters. The two-parameter logistic was the preferred model for Rhynchobatus spp., with growth parameter estimates (both sexes combined) L∞ = 2045 mm stretch total length (LST) and k = 0·41 year⁻¹. The same model was also preferred for G. typus, with growth parameter estimates (both sexes combined) L∞ = 2770 mm LST and k = 0·30 year⁻¹. Annual growth-band deposition could not be excluded in Rhynchobatus spp. using mark-recaptured individuals. Although morphologically similar, G. typus and Rhynchobatus spp. have differing life histories, with G. typus longer lived, slower growing and attaining a larger maximum size. © 2014 The Fisheries Society of the British Isles.

  12. Estimating a Logistic Discrimination Functions When One of the Training Samples Is Subject to Misclassification: A Maximum Likelihood Approach.

    PubMed

    Nagelkerke, Nico; Fidler, Vaclav

    2015-01-01

The problem of discrimination and classification is central to much of epidemiology. Here we consider the estimation of a logistic regression/discrimination function from training samples, when one of the training samples is subject to misclassification or mislabeling, e.g. diseased individuals are incorrectly classified/labeled as healthy controls. We show that this leads to a zero-inflated binomial model with a defective logistic regression or discrimination function, whose parameters can be estimated using standard statistical methods such as maximum likelihood. These parameters can be used to estimate the probability of true group membership among those, possibly erroneously, classified as controls. Two examples are analyzed and discussed. A simulation study explores properties of the maximum likelihood parameter estimates and the estimates of the number of mislabeled observations.

  13. The Benefits of Including Clinical Factors in Rectal Normal Tissue Complication Probability Modeling After Radiotherapy for Prostate Cancer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Defraene, Gilles, E-mail: gilles.defraene@uzleuven.be; Van den Bergh, Laura; Al-Mamgani, Abrahim

    2012-03-01

Purpose: To study the impact of clinical predisposing factors on rectal normal tissue complication probability modeling using the updated results of the Dutch prostate dose-escalation trial. Methods and Materials: Toxicity data of 512 patients (conformally treated to 68 Gy [n = 284] and 78 Gy [n = 228]) with complete follow-up at 3 years after radiotherapy were studied. Scored end points were rectal bleeding, high stool frequency, and fecal incontinence. Two traditional dose-based models (Lyman-Kutcher-Burman [LKB] and Relative Seriality [RS]) and a logistic model were fitted using a maximum likelihood approach. Furthermore, these model fits were improved by including the most significant clinical factors. The area under the receiver operating characteristic curve (AUC) was used to compare the discriminating ability of all fits. Results: Including clinical factors significantly increased the predictive power of the models for all end points. In the optimal LKB, RS, and logistic models for rectal bleeding and fecal incontinence, the first significant (p = 0.011-0.013) clinical factor was 'previous abdominal surgery.' As second significant (p = 0.012-0.016) factor, 'cardiac history' was included in all three rectal bleeding fits, whereas including 'diabetes' was significant (p = 0.039-0.048) in fecal incontinence modeling but only in the LKB and logistic models. High stool frequency fits only benefitted significantly (p = 0.003-0.006) from the inclusion of the baseline toxicity score. For all models, rectal bleeding fits had the highest AUC (0.77), where it was 0.63 and 0.68 for high stool frequency and fecal incontinence, respectively. LKB and logistic model fits resulted in similar values for the volume parameter. The steepness parameter was somewhat higher in the logistic model, also resulting in a slightly lower D50. Anal wall DVHs were used for fecal incontinence, whereas anorectal wall dose best described the other two end points. Conclusions: Comparable prediction models were obtained with LKB, RS, and logistic NTCP models. Including clinical factors significantly improved the predictive power of all models.

  14. Linear Logistic Test Modeling with R

    ERIC Educational Resources Information Center

    Baghaei, Purya; Kubinger, Klaus D.

    2015-01-01

    The present paper gives a general introduction to the linear logistic test model (Fischer, 1973), an extension of the Rasch model with linear constraints on item parameters, along with eRm (an R package to estimate different types of Rasch models; Mair, Hatzinger, & Mair, 2014) functions to estimate the model and interpret its parameters. The…

  15. A Short Note on Estimating the Testlet Model with Different Estimators in Mplus

    ERIC Educational Resources Information Center

    Luo, Yong

    2018-01-01

    Mplus is a powerful latent variable modeling software program that has become an increasingly popular choice for fitting complex item response theory models. In this short note, we demonstrate that the two-parameter logistic testlet model can be estimated as a constrained bifactor model in Mplus with three estimators encompassing limited- and…

  16. Confirming the validity of the CONUT system for early detection and monitoring of clinical undernutrition: comparison with two logistic regression models developed using SGA as the gold standard.

    PubMed

    González-Madroño, A; Mancha, A; Rodríguez, F J; Culebras, J; de Ulibarri, J I

    2012-01-01

To ratify previous validations of the CONUT nutritional screening tool by developing two probabilistic models using the parameters included in the CONUT, to see whether the CONUT's effectiveness could be improved. It is a two-step prospective study. In Step 1, 101 patients were randomly selected, and the SGA and CONUT were administered. With the data obtained, an unconditional logistic regression model was developed, and two variants of the CONUT were constructed: Model 1 was built by logistic regression; Model 2 was built by dividing the probabilities of undernutrition obtained in Model 1 into seven regular intervals. In Step 2, 60 patients were selected and underwent the SGA, the original CONUT, and the new models. The diagnostic efficacy of the original CONUT and the new models was tested by means of ROC curves. Samples 1 and 2 were then pooled to measure the degree of agreement between the original CONUT and the SGA, and diagnostic efficacy parameters were calculated. No statistically significant differences were found between samples 1 and 2 regarding age, sex, or medical/surgical distribution, and undernutrition rates were similar (over 40%). The AUCs for the ROC curves were 0.862 for the original CONUT, and 0.839 and 0.874 for Models 1 and 2, respectively. The kappa index for the CONUT and SGA was 0.680. The CONUT, with the original scores assigned by the authors, is as good as the mathematical models and is thus a valuable tool, highly useful and efficient for clinical undernutrition screening.

  17. Binary logistic regression-Instrument for assessing museum indoor air impact on exhibits.

    PubMed

    Bucur, Elena; Danet, Andrei Florin; Lehr, Carol Blaziu; Lehr, Elena; Nita-Lazar, Mihai

    2017-04-01

This paper presents a new way to assess the environmental impact on historical artifacts using binary logistic regression. The impact on the exhibits under given pollution scenarios (environmental impact) was predicted by a mathematical model based on binary logistic regression; the model identifies those environmental parameters, from a multitude of candidates, with a significant impact on exhibits and ranks them according to the severity of their effect. Air quality (NO2, SO2, O3, and PM2.5) and microclimate (temperature, humidity) monitoring data from a case study conducted within exhibition and storage spaces of the Romanian National Aviation Museum Bucharest were used to develop and validate the binary logistic regression method and the mathematical model. The logistic regression analysis was applied to 794 data combinations (715 to develop the model and 79 to validate it) using the Statistical Package for the Social Sciences (SPSS 20.0). The results demonstrated that, of the six parameters considered, four have a significant effect on exhibits, in the following order: O3 > PM2.5 > NO2 > humidity, followed at a considerable distance by the effects of SO2 and temperature. The mathematical model developed in this study correctly predicted 95.1% of the cumulative effect of the environmental parameters on the exhibits. Moreover, this model could also be used in the decision-making process regarding preventive preservation, establishing the best measures for pollution reduction and preservation of exhibits within the exhibition space.

  18. Partially Observed Mixtures of IRT Models: An Extension of the Generalized Partial-Credit Model

    ERIC Educational Resources Information Center

    Von Davier, Matthias; Yamamoto, Kentaro

    2004-01-01

    The generalized partial-credit model (GPCM) is used frequently in educational testing and in large-scale assessments for analyzing polytomous data. Special cases of the generalized partial-credit model are the partial-credit model--or Rasch model for ordinal data--and the two parameter logistic (2PL) model. This article extends the GPCM to the…

  19. Complementary nonparametric analysis of covariance for logistic regression in a randomized clinical trial setting.

    PubMed

    Tangen, C M; Koch, G G

    1999-03-01

In the randomized clinical trial setting, controlling for covariates is expected to produce variance reduction for the treatment parameter estimate and to adjust for random imbalances of covariates between the treatment groups. However, for the logistic regression model, variance reduction is not obviously obtained. This can lead to concerns about the assumptions of the logistic model. We introduce a complementary nonparametric method for covariate adjustment. It provides results that are usually compatible with expectations for analysis of covariance. The only assumptions required are based on randomization and sampling arguments. The resulting treatment parameter is an (unconditional) population average log-odds ratio that has been adjusted for random imbalance of covariates. Data from a randomized clinical trial are used to compare results from the traditional maximum likelihood logistic method with those from the nonparametric logistic method. We examine treatment parameter estimates, corresponding standard errors, and significance levels in models with and without covariate adjustment. In addition, we discuss differences between unconditional population average treatment parameters and conditional subpopulation average treatment parameters. Additional features of the nonparametric method, including stratified (multicenter) and multivariate (multivisit) analyses, are illustrated. Extensions of this methodology to the proportional odds model are also made.

  20. Derivation of the linear-logistic model and Cox's proportional hazard model from a canonical system description.

    PubMed

    Voit, E O; Knapp, R G

    1997-08-15

    The linear-logistic regression model and Cox's proportional hazard model are widely used in epidemiology. Their successful application leaves no doubt that they are accurate reflections of observed disease processes and their associated risks or incidence rates. In spite of their prominence, it is not a priori evident why these models work. This article presents a derivation of the two models from the framework of canonical modeling. It begins with a general description of the dynamics between risk sources and disease development, formulates this description in the canonical representation of an S-system, and shows how the linear-logistic model and Cox's proportional hazard model follow naturally from this representation. The article interprets the model parameters in terms of epidemiological concepts as well as in terms of general systems theory and explains the assumptions and limitations generally accepted in the application of these epidemiological models.

  1. Analysis Test of Understanding of Vectors with the Three-Parameter Logistic Model of Item Response Theory and Item Response Curves Technique

    ERIC Educational Resources Information Center

    Rakkapao, Suttida; Prasitpong, Singha; Arayathanitkul, Kwan

    2016-01-01

    This study investigated the multiple-choice test of understanding of vectors (TUV), by applying item response theory (IRT). The difficulty, discriminatory, and guessing parameters of the TUV items were fit with the three-parameter logistic model of IRT, using the parscale program. The TUV ability is an ability parameter, here estimated assuming…

  2. Fitting IRT Models to Dichotomous and Polytomous Data: Assessing the Relative Model-Data Fit of Ideal Point and Dominance Models

    ERIC Educational Resources Information Center

    Tay, Louis; Ali, Usama S.; Drasgow, Fritz; Williams, Bruce

    2011-01-01

    This study investigated the relative model-data fit of an ideal point item response theory (IRT) model (the generalized graded unfolding model [GGUM]) and dominance IRT models (e.g., the two-parameter logistic model [2PLM] and Samejima's graded response model [GRM]) to simulated dichotomous and polytomous data generated from each of these models.…

  3. The Prediction of Item Parameters Based on Classical Test Theory and Latent Trait Theory

    ERIC Educational Resources Information Center

    Anil, Duygu

    2008-01-01

In this study, the predictive power of experts' estimates of item characteristics, for conditions in which try-out practices cannot be applied, was examined against item characteristics computed under classical test theory and the two-parameter logistic model of latent trait theory. The study was carried out on 9914 randomly selected students…

  4. Limits on Log Cross-Product Ratios for Item Response Models. Research Report. ETS RR-06-10

    ERIC Educational Resources Information Center

    Haberman, Shelby J.; Holland, Paul W.; Sinharay, Sandip

    2006-01-01

    Bounds are established for log cross-product ratios (log odds ratios) involving pairs of items for item response models. First, expressions for bounds on log cross-product ratios are provided for unidimensional item response models in general. Then, explicit bounds are obtained for the Rasch model and the two-parameter logistic (2PL) model.…

  5. Investigation of a Nonparametric Procedure for Assessing Goodness-of-Fit in Item Response Theory

    ERIC Educational Resources Information Center

    Wells, Craig S.; Bolt, Daniel M.

    2008-01-01

    Tests of model misfit are often performed to validate the use of a particular model in item response theory. Douglas and Cohen (2001) introduced a general nonparametric approach for detecting misfit under the two-parameter logistic model. However, the statistical properties of their approach, and empirical comparisons to other methods, have not…

  6. Bayesian Analysis of Item Response Curves. Research Report 84-1. Mathematical Sciences Technical Report No. 132.

    ERIC Educational Resources Information Center

    Tsutakawa, Robert K.; Lin, Hsin Ying

    Item response curves for a set of binary responses are studied from a Bayesian viewpoint of estimating the item parameters. For the two-parameter logistic model with normally distributed ability, restricted bivariate beta priors are used to illustrate the computation of the posterior mode via the EM algorithm. The procedure is illustrated by data…

  7. Item Parameter Invariance of the Kaufman Adolescent and Adult Intelligence Test across Male and Female Samples

    ERIC Educational Resources Information Center

    Immekus, Jason C.; Maller, Susan J.

    2009-01-01

    The Kaufman Adolescent and Adult Intelligence Test (KAIT[TM]) is an individually administered test of intelligence for individuals ranging in age from 11 to 85+ years. The item response theory-likelihood ratio procedure, based on the two-parameter logistic model, was used to detect differential item functioning (DIF) in the KAIT across males and…

  8. Estimating the Probability of Rare Events Occurring Using a Local Model Averaging.

    PubMed

    Chen, Jin-Hua; Chen, Chun-Shu; Huang, Meng-Fan; Lin, Hung-Chih

    2016-10-01

    In statistical applications, logistic regression is a popular method for analyzing binary data accompanied by explanatory variables. But when one of the two outcomes is rare, the estimation of model parameters has been shown to be severely biased and hence estimating the probability of rare events occurring based on a logistic regression model would be inaccurate. In this article, we focus on estimating the probability of rare events occurring based on logistic regression models. Instead of selecting a best model, we propose a local model averaging procedure based on a data perturbation technique applied to different information criteria to obtain different probability estimates of rare events occurring. Then an approximately unbiased estimator of Kullback-Leibler loss is used to choose the best one among them. We design complete simulations to show the effectiveness of our approach. For illustration, a necrotizing enterocolitis (NEC) data set is analyzed. © 2016 Society for Risk Analysis.

  9. Application of logistic regression to case-control association studies involving two causative loci.

    PubMed

    North, Bernard V; Curtis, David; Sham, Pak C

    2005-01-01

    Models in which two susceptibility loci jointly influence the risk of developing disease can be explored using logistic regression analysis. Comparison of likelihoods of models incorporating different sets of disease model parameters allows inferences to be drawn regarding the nature of the joint effect of the loci. We have simulated case-control samples generated assuming different two-locus models and then analysed them using logistic regression. We show that this method is practicable and that, for the models we have used, it can be expected to allow useful inferences to be drawn from sample sizes consisting of hundreds of subjects. Interactions between loci can be explored, but interactive effects do not exactly correspond with classical definitions of epistasis. We have particularly examined the issue of the extent to which it is helpful to utilise information from a previously identified locus when investigating a second, unknown locus. We show that for some models conditional analysis can have substantially greater power while for others unconditional analysis can be more powerful. Hence we conclude that in general both conditional and unconditional analyses should be performed when searching for additional loci.

  10. A simple approach to power and sample size calculations in logistic regression and Cox regression models.

    PubMed

    Vaeth, Michael; Skovlund, Eva

    2004-06-15

    For a given regression problem it is possible to identify a suitably defined equivalent two-sample problem such that the power or sample size obtained for the two-sample problem also applies to the regression problem. For a standard linear regression model the equivalent two-sample problem is easily identified, but for generalized linear models and for Cox regression models the situation is more complicated. An approximately equivalent two-sample problem may, however, also be identified here. In particular, we show that for logistic regression and Cox regression models the equivalent two-sample problem is obtained by selecting two equally sized samples for which the parameters differ by a value equal to the slope times twice the standard deviation of the independent variable and further requiring that the overall expected number of events is unchanged. In a simulation study we examine the validity of this approach to power calculations in logistic regression and Cox regression models. Several different covariate distributions are considered for selected values of the overall response probability and a range of alternatives. For the Cox regression model we consider both constant and non-constant hazard rates. The results show that in general the approach is remarkably accurate even in relatively small samples. Some discrepancies are, however, found in small samples with few events and a highly skewed covariate distribution. Comparison with results based on alternative methods for logistic regression models with a single continuous covariate indicates that the proposed method is at least as good as its competitors. The method is easy to implement and therefore provides a simple way to extend the range of problems that can be covered by the usual formulas for power and sample size determination. Copyright 2004 John Wiley & Sons, Ltd.
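The rule stated in this abstract (two equally sized groups whose parameters differ by the slope times twice the covariate's standard deviation, with the overall expected number of events unchanged) can be sketched numerically for the logistic case. The helper below is an illustration under those stated assumptions: it pins the overall event probability by bisection and then applies the standard two-proportion sample-size formula; the numeric inputs are invented for the example:

```python
import math
from statistics import NormalDist

def expit(x):
    return 1.0 / (1.0 + math.exp(-x))

def two_sample_size(beta, sd_x, p_overall, alpha=0.05, power=0.8):
    """Per-group sample size for the equivalent two-sample problem:
    the two groups' log-odds differ by delta = beta * 2 * sd_x, while the
    overall event probability is held at p_overall (equal group sizes)."""
    delta = beta * 2.0 * sd_x
    # Bisect for the lower log-odds l such that the mean response is p_overall.
    lo, hi = -20.0, 20.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if 0.5 * (expit(mid) + expit(mid + delta)) < p_overall:
            lo = mid
        else:
            hi = mid
    p1, p2 = expit(lo), expit(lo + delta)
    z = NormalDist().inv_cdf
    za, zb = z(1.0 - alpha / 2.0), z(power)
    n = (za + zb) ** 2 * (p1 * (1 - p1) + p2 * (1 - p2)) / (p1 - p2) ** 2
    return math.ceil(n)

n_per_group = two_sample_size(beta=0.5, sd_x=1.0, p_overall=0.3)
```

As expected, a larger slope or a more dispersed covariate shrinks the required sample size.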

  11. To Use or Not to Use--(The One- or Three-Parameter Logistic Model) That Is the Question.

    ERIC Educational Resources Information Center

    Reckase, Mark D.

Definition of the issues related to the use of latent trait models, specifically the one- and three-parameter logistic models, in conjunction with multi-level achievement batteries forms the basis of this paper. Research results related to these issues are also documented in an attempt to provide a rational basis for model selection. The application of the…

  12. Item Vector Plots for the Multidimensional Three-Parameter Logistic Model

    ERIC Educational Resources Information Center

    Bryant, Damon; Davis, Larry

    2011-01-01

    This brief technical note describes how to construct item vector plots for dichotomously scored items fitting the multidimensional three-parameter logistic model (M3PLM). As multidimensional item response theory (MIRT) shows promise of being a very useful framework in the test development life cycle, graphical tools that facilitate understanding…

  13. Semiparametric Item Response Functions in the Context of Guessing

    ERIC Educational Resources Information Center

    Falk, Carl F.; Cai, Li

    2016-01-01

    We present a logistic function of a monotonic polynomial with a lower asymptote, allowing additional flexibility beyond the three-parameter logistic model. We develop a maximum marginal likelihood-based approach to estimate the item parameters. The new item response model is demonstrated on math assessment data from a state, and a computationally…

  14. Selected aspects of prior and likelihood information for a Bayesian classifier in a road safety analysis.

    PubMed

    Nowakowska, Marzena

    2017-04-01

    The development of the Bayesian logistic regression model classifying the road accident severity is discussed. The already exploited informative priors (method of moments, maximum likelihood estimation, and two-stage Bayesian updating), along with the original idea of a Boot prior proposal, are investigated when no expert opinion has been available. In addition, two possible approaches to updating the priors, in the form of unbalanced and balanced training data sets, are presented. The obtained logistic Bayesian models are assessed on the basis of a deviance information criterion (DIC), highest probability density (HPD) intervals, and coefficients of variation estimated for the model parameters. The verification of the model accuracy has been based on sensitivity, specificity and the harmonic mean of sensitivity and specificity, all calculated from a test data set. The models obtained from the balanced training data set have a better classification quality than the ones obtained from the unbalanced training data set. The two-stage Bayesian updating prior model and the Boot prior model, both identified with the use of the balanced training data set, outperform the non-informative, method of moments, and maximum likelihood estimation prior models. It is important to note that one should be careful when interpreting the parameters since different priors can lead to different models. Copyright © 2017 Elsevier Ltd. All rights reserved.

  15. Integrated Logistics Support Analysis of the International Space Station Alpha, Background and Summary of Mathematical Modeling and Failure Density Distributions Pertaining to Maintenance Time Dependent Parameters

    NASA Technical Reports Server (NTRS)

    Sepehry-Fard, F.; Coulthard, Maurice H.

    1995-01-01

The process of predicting the values of maintenance time dependent variable parameters such as mean time between failures (MTBF) over time must be one that will not in turn introduce uncontrolled deviation in the results of the ILS analysis, such as life cycle costs, spares calculation, etc. A minor deviation in the values of the maintenance time dependent variable parameters such as MTBF over time will have a significant impact on the logistics resource demands, International Space Station availability, and maintenance support costs. There are two types of parameters in the logistics and maintenance world: (a) fixed and (b) variable. Fixed parameters, such as cost per man hour, are relatively easy to predict and forecast. These parameters normally follow a linear path and they do not change randomly. However, the variable parameters subject to the study in this report, such as MTBF, do not follow a linear path; they normally fall within the distribution curves which are discussed in this publication. The very challenging task then becomes the utilization of statistical techniques to accurately forecast the future non-linear time dependent variable arisings and events with a high confidence level. This, in turn, shall translate into tremendous cost savings and improved availability all around.

  16. Semi-Parametric Item Response Functions in the Context of Guessing. CRESST Report 844

    ERIC Educational Resources Information Center

    Falk, Carl F.; Cai, Li

    2015-01-01

    We present a logistic function of a monotonic polynomial with a lower asymptote, allowing additional flexibility beyond the three-parameter logistic model. We develop a maximum marginal likelihood based approach to estimate the item parameters. The new item response model is demonstrated on math assessment data from a state, and a computationally…

  17. Comparison of Multidimensional Item Response Models: Multivariate Normal Ability Distributions versus Multivariate Polytomous Ability Distributions. Research Report. ETS RR-08-45

    ERIC Educational Resources Information Center

    Haberman, Shelby J.; von Davier, Matthias; Lee, Yi-Hsuan

    2008-01-01

    Multidimensional item response models can be based on multivariate normal ability distributions or on multivariate polytomous ability distributions. For the case of simple structure in which each item corresponds to a unique dimension of the ability vector, some applications of the two-parameter logistic model to empirical data are employed to…

  18. ASCAL: A Microcomputer Program for Estimating Logistic IRT Item Parameters.

    ERIC Educational Resources Information Center

    Vale, C. David; Gialluca, Kathleen A.

ASCAL is a microcomputer-based program for calibrating items according to the three-parameter logistic model of item response theory. It uses a modified multivariate Newton-Raphson procedure for estimating item parameters. This study evaluated this procedure using Monte Carlo simulation techniques. The current version of ASCAL was then compared to…

  19. An Evaluation of Three Approximate Item Response Theory Models for Equating Test Scores.

    ERIC Educational Resources Information Center

    Marco, Gary L.; And Others

    Three item response models were evaluated for estimating item parameters and equating test scores. The models, which approximated the traditional three-parameter model, included: (1) the Rasch one-parameter model, operationalized in the BICAL computer program; (2) an approximate three-parameter logistic model based on coarse group data divided…

  20. Logistic Regression with Multiple Random Effects: A Simulation Study of Estimation Methods and Statistical Packages.

    PubMed

    Kim, Yoonsang; Choi, Young-Ku; Emery, Sherry

    2013-08-01

Several statistical packages are capable of estimating generalized linear mixed models and these packages provide one or more of three estimation methods: penalized quasi-likelihood, Laplace, and Gauss-Hermite. Many studies have investigated these methods' performance for the mixed-effects logistic regression model. However, the authors focused on models with one or two random effects and assumed a simple covariance structure between them, which may not be realistic. When there are multiple correlated random effects in a model, the computation becomes intensive, and often an algorithm fails to converge. Moreover, in our analysis of smoking status and exposure to anti-tobacco advertisements, we have observed that when a model included multiple random effects, parameter estimates varied considerably from one statistical package to another even when using the same estimation method. This article presents a comprehensive review of the advantages and disadvantages of each estimation method. In addition, we compare the performances of the three methods across statistical packages via simulation, which involves two- and three-level logistic regression models with at least three correlated random effects. We apply our findings to a real dataset. Our results suggest that two packages, SAS GLIMMIX Laplace and SuperMix Gaussian quadrature, perform well in terms of accuracy, precision, convergence rates, and computing speed. We also discuss the strengths and weaknesses of the two packages in regard to sample sizes.

  2. Computerized Classification Testing under the One-Parameter Logistic Response Model with Ability-Based Guessing

    ERIC Educational Resources Information Center

    Wang, Wen-Chung; Huang, Sheng-Yun

    2011-01-01

The one-parameter logistic model with ability-based guessing (1PL-AG) has recently been developed to account for the effect of ability on guessing behavior in multiple-choice items. In this study, the authors developed algorithms for computerized classification testing under the 1PL-AG and conducted a series of simulations to evaluate their…

  3. Chaotic and stable perturbed maps: 2-cycles and spatial models

    NASA Astrophysics Data System (ADS)

    Braverman, E.; Haroutunian, J.

    2010-06-01

    As the growth rate parameter increases in the Ricker, logistic and some other maps, the models exhibit an irreversible period doubling route to chaos. If a constant positive perturbation is introduced, then the Ricker model (but not the classical logistic map) experiences period doubling reversals; the break of chaos finally gives birth to a stable two-cycle. We outline the maps which demonstrate a similar behavior and also study relevant discrete spatial models where the value in each cell at the next step is defined only by the values at the cell and its nearest neighbors. The stable 2-cycle in a scalar map does not necessarily imply 2-cyclic-type behavior in each cell for the spatial generalization of the map.
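The perturbed Ricker dynamics described here are easy to probe numerically. A minimal sketch (parameter values chosen purely for illustration; the perturbation sizes at which the reversals occur depend on r):

```python
import math

def ricker_step(x, r, c=0.0):
    """Perturbed Ricker map: x -> x * exp(r * (1 - x)) + c."""
    return x * math.exp(r * (1.0 - x)) + c

def orbit_period(r, c=0.0, x0=0.5, transient=5000, max_period=8, tol=1e-6):
    """Iterate past a transient, then report the smallest period <= max_period."""
    x = x0
    for _ in range(transient):
        x = ricker_step(x, r, c)
    window = [x]
    for _ in range(max_period):
        window.append(ricker_step(window[-1], r, c))
    for p in range(1, max_period + 1):
        if abs(window[p] - window[0]) < tol:
            return p
    return None  # no short cycle found (possibly chaotic)
```

For the unperturbed map the fixed point x = 1 is stable for r < 2 and a stable 2-cycle appears just above r = 2, while r = 3 lies in the chaotic regime; adding a large constant perturbation at r = 3 breaks the chaos (here the chosen c is large enough that the orbit settles all the way to a stable fixed point rather than the intermediate 2-cycle).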

  4. Exploring unobserved heterogeneity in bicyclists' red-light running behaviors at different crossing facilities.

    PubMed

    Guo, Yanyong; Li, Zhibin; Wu, Yao; Xu, Chengcheng

    2018-06-01

    Bicyclists running the red light at crossing facilities increase the potential of colliding with motor vehicles. Exploring the contributing factors could improve the prediction of running red-light probability and develop countermeasures to reduce such behaviors. However, individuals could have unobserved heterogeneities in running a red light, which make the accurate prediction more challenging. Traditional models assume that factor parameters are fixed and cannot capture the varying impacts on red-light running behaviors. In this study, we employed the full Bayesian random parameters logistic regression approach to account for the unobserved heterogeneous effects. Two types of crossing facilities were considered which were the signalized intersection crosswalks and the road segment crosswalks. Electric and conventional bikes were distinguished in the modeling. Data were collected from 16 crosswalks in urban area of Nanjing, China. Factors such as individual characteristics, road geometric design, environmental features, and traffic variables were examined. Model comparison indicates that the full Bayesian random parameters logistic regression approach is statistically superior to the standard logistic regression model. More red-light runners are predicted at signalized intersection crosswalks than at road segment crosswalks. Factors affecting red-light running behaviors are gender, age, bike type, road width, presence of raised median, separation width, signal type, green ratio, bike and vehicle volume, and average vehicle speed. Factors associated with the unobserved heterogeneity are gender, bike type, signal type, separation width, and bike volume. Copyright © 2018 Elsevier Ltd. All rights reserved.

  5. Satellite rainfall retrieval by logistic regression

    NASA Technical Reports Server (NTRS)

    Chiu, Long S.

    1986-01-01

The potential use of logistic regression in rainfall estimation from satellite measurements is investigated. Satellite measurements provide covariate information in terms of radiances from different remote sensors. The logistic regression technique can effectively accommodate many covariates and test their significance in the estimation. The outcome from the logistic model is the probability that the rainrate of a satellite pixel is above a certain threshold. By varying the threshold, a rainrate histogram can be obtained, from which the mean and the variance can be estimated. A logistic model is developed and applied to rainfall data collected during GATE, using as covariates the fractional rain area and a radiance measurement deduced from a microwave temperature-rainrate relation. It is demonstrated that the fractional rain area is an important covariate in the model, consistent with the use of the so-called Area Time Integral in estimating total rain volume in other studies. To calibrate the logistic model, simulated rain fields generated by rain field models with prescribed parameters are needed. A stringent test of the logistic model is its ability to recover the prescribed parameters of simulated rain fields. A rain field simulation model which preserves the fractional rain area and lognormality of rainrates as found in GATE is developed. A stochastic regression model of branching and immigration, whose solutions are lognormally distributed in some asymptotic limits, has also been developed.
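The thresholding idea in this record can be sketched: posit a logistic model for P(rainrate > t), sweep the threshold t, and integrate the resulting exceedance curve to estimate the mean rain rate (E[R] equals the integral of P(R > t) over t >= 0 for a nonnegative R). The model form and coefficients below are invented for illustration, not the GATE estimates:

```python
import math

def p_exceed(threshold, frac_area, b0=1.0, b1=3.0, b2=-0.8):
    """Assumed logistic model for P(rainrate > threshold | covariates):
    logit P = b0 + b1 * frac_area + b2 * threshold  (illustrative only)."""
    z = b0 + b1 * frac_area + b2 * threshold
    return 1.0 / (1.0 + math.exp(-z))

def mean_rainrate(frac_area, t_max=40.0, n=4000):
    """E[R] = integral of P(R > t) dt over t >= 0, via the trapezoidal rule."""
    h = t_max / n
    total = 0.5 * (p_exceed(0.0, frac_area) + p_exceed(t_max, frac_area))
    for i in range(1, n):
        total += p_exceed(i * h, frac_area)
    return total * h
```

As the abstract suggests, a larger fractional rain area raises the exceedance probabilities at every threshold and hence the estimated mean rain rate.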

  6. Fuzzy multinomial logistic regression analysis: A multi-objective programming approach

    NASA Astrophysics Data System (ADS)

    Abdalla, Hesham A.; El-Sayed, Amany A.; Hamed, Ramadan

    2017-05-01

Parameter estimation for multinomial logistic regression is usually based on maximizing the likelihood function. For large, well-balanced datasets, Maximum Likelihood (ML) estimation is a satisfactory approach. Unfortunately, ML can fail completely, or at least produce poor results in terms of estimated probabilities and confidence intervals of parameters, especially for small datasets. In this study, a new approach based on fuzzy concepts is proposed to estimate the parameters of multinomial logistic regression. The study assumes that the parameters of multinomial logistic regression are fuzzy. Based on the extension principle stated by Zadeh and Bárdossy's proposition, a multi-objective programming approach is suggested to estimate these fuzzy parameters. A simulation study is used to evaluate the performance of the new approach versus the Maximum Likelihood (ML) approach. Results show that the new proposed model outperforms ML in cases of small datasets.

  7. Item Response Theory Modeling of the Philadelphia Naming Test.

    PubMed

    Fergadiotis, Gerasimos; Kellough, Stacey; Hula, William D

    2015-06-01

    In this study, we investigated the fit of the Philadelphia Naming Test (PNT; Roach, Schwartz, Martin, Grewal, & Brecher, 1996) to an item-response-theory measurement model, estimated the precision of the resulting scores and item parameters, and provided a theoretical rationale for the interpretation of PNT overall scores by relating explanatory variables to item difficulty. This article describes the statistical model underlying the computer adaptive PNT presented in a companion article (Hula, Kellough, & Fergadiotis, 2015). Using archival data, we evaluated the fit of the PNT to 1- and 2-parameter logistic models and examined the precision of the resulting parameter estimates. We regressed the item difficulty estimates on three predictor variables: word length, age of acquisition, and contextual diversity. The 2-parameter logistic model demonstrated marginally better fit, but the fit of the 1-parameter logistic model was adequate. Precision was excellent for both person ability and item difficulty estimates. Word length, age of acquisition, and contextual diversity all independently contributed to variance in item difficulty. Item-response-theory methods can be productively used to analyze and quantify anomia severity in aphasia. Regression of item difficulty on lexical variables supported the validity of the PNT and interpretation of anomia severity scores in the context of current word-finding models.

  8. Logistic random effects regression models: a comparison of statistical packages for binary and ordinal outcomes.

    PubMed

    Li, Baoyue; Lingsma, Hester F; Steyerberg, Ewout W; Lesaffre, Emmanuel

    2011-05-23

Logistic random effects models are a popular tool to analyze multilevel (also called hierarchical) data with a binary or ordinal outcome. Here, we aim to compare different statistical software implementations of these models. We used individual patient data from 8509 patients in 231 centers with moderate and severe Traumatic Brain Injury (TBI) enrolled in eight Randomized Controlled Trials (RCTs) and three observational studies. We fitted logistic random effects regression models with the 5-point Glasgow Outcome Scale (GOS) as outcome, both dichotomized and ordinal, with center and/or trial as random effects, and with age, motor score, pupil reactivity or trial as covariates. We then compared the implementations of frequentist and Bayesian methods to estimate the fixed and random effects. Frequentist approaches included R (lme4), Stata (GLLAMM), SAS (GLIMMIX and NLMIXED), MLwiN ([R]IGLS) and MIXOR; Bayesian approaches included WinBUGS, MLwiN (MCMC), the R package MCMCglmm and the SAS experimental procedure MCMC. Three data sets (the full data set and two sub-datasets) were analysed using essentially two logistic random effects models, with either one random effect for the center or two random effects for center and trial. For the ordinal outcome in the full data set, a proportional odds model with a random center effect was also fitted. The packages gave similar parameter estimates for both the fixed and random effects and for the binary (and ordinal) models for the main study, and when based on a relatively large number of level-1 (patient level) data compared to the number of level-2 (hospital level) data. However, when based on a relatively sparse data set, i.e. when the numbers of level-1 and level-2 data units were about the same, the frequentist and Bayesian approaches showed somewhat different results. The software implementations differ considerably in flexibility, computation time, and usability. There are also differences in the availability of additional tools for model evaluation, such as diagnostic plots. The experimental SAS (version 9.2) procedure MCMC appeared to be inefficient. On relatively large data sets, the different software implementations of logistic random effects regression models produced similar results. Thus, for a large data set there seems to be no explicit preference for either a frequentist or Bayesian approach (provided the Bayesian analysis is based on vague priors and there is no preference on philosophical grounds). The choice of a particular implementation may largely depend on the desired flexibility and the usability of the package. For small data sets the random effects variances are difficult to estimate. In the frequentist approaches the MLE of this variance was often estimated as zero, with a standard error that was either zero or could not be determined, while for Bayesian methods the estimates could depend on the chosen "non-informative" prior of the variance parameter. The starting value for the variance parameter may also be critical for the convergence of the Markov chain.

  9. Mathematical circulatory system model

    NASA Technical Reports Server (NTRS)

    Lakin, William D. (Inventor); Stevens, Scott A. (Inventor)

    2010-01-01

    A system and method of modeling a circulatory system including a regulatory mechanism parameter. In one embodiment, a regulatory mechanism parameter in a lumped parameter model is represented as a logistic function. In another embodiment, the circulatory system model includes a compliant vessel, the model having a parameter representing a change in pressure due to contraction of smooth muscles of a wall of the vessel.

  10. Evolution Model and Simulation of Profit Model of Agricultural Products Logistics Financing

    NASA Astrophysics Data System (ADS)

    Yang, Bo; Wu, Yan

    2018-03-01

The agricultural products logistics financing warehouse business mainly involves three parties: agricultural production and processing enterprises, third-party logistics enterprises, and financial institutions. To enable the three parties to achieve a win-win outcome, the article first derives the replicator dynamics and evolutionarily stable strategies governing the three parties' participation in the business. It then uses the NetLogo simulation platform and a multi-agent modeling and simulation approach to build an evolutionary game simulation model, runs the model under different revenue parameters, and analyzes the simulation results, with the aim of making the tripartite participation mutually beneficial and thus promoting the smooth flow of the agricultural products logistics business.

  11. Mixture Rasch model for guessing group identification

    NASA Astrophysics Data System (ADS)

    Siow, Hoo Leong; Mahdi, Rasidah; Siew, Eng Ling

    2013-04-01

Several alternative dichotomous Item Response Theory (IRT) models have been introduced to account for the guessing effect in multiple-choice assessment. The guessing effect in these models has been considered to be item-related. In the most classic case, pseudo-guessing in the three-parameter logistic IRT model is modeled to be the same for all subjects but may vary across items. This is not realistic, because subjects can guess worse or better than the pseudo-guessing level. A derivative of the three-parameter logistic IRT model improves the situation by incorporating ability into guessing; however, it does not model non-monotone functions. This paper proposes to study guessing from a subject-related aspect, namely guessing test-taking behavior. A mixture Rasch model is employed to detect latent groups. A hybrid of the mixture Rasch and three-parameter logistic IRT models is proposed to model behavior-based guessing, reflected in the subjects' ways of responding to the items. The subjects are assumed to simply choose a response at random. An information criterion is proposed to identify the behavior-based guessing group. Results show that the proposed model selection criterion provides a promising method to identify the guessing group modeled by the hybrid model.

  12. Growth curves for ostriches (Struthio camelus) in a Brazilian population.

    PubMed

    Ramos, S B; Caetano, S L; Savegnago, R P; Nunes, B N; Ramos, A A; Munari, D P

    2013-01-01

The objective of this study was to fit growth curves using nonlinear and linear functions to describe the growth of ostriches in a Brazilian population. The data set consisted of 112 animals with BW measurements from hatching to 383 d of age. Two nonlinear growth functions (Gompertz and logistic) and a third-order polynomial function were applied. The parameters of the models were estimated using the least-squares method and the Gauss-Newton algorithm. The goodness-of-fit of the models was assessed using R(2) and the Akaike information criterion. The R(2) calculated for the logistic growth model was 0.945 for hens and 0.928 for cockerels, and for the Gompertz growth model, 0.938 for hens and 0.924 for cockerels. The third-order polynomial fit gave R(2) of 0.938 for hens and 0.924 for cockerels. Among the Akaike information criterion calculations, the logistic growth model presented the lowest values in this study, both for hens and for cockerels. Nonlinear models are more appropriate for describing the sigmoid nature of ostrich growth.
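The two nonlinear growth functions and the AIC comparison used in this study can be sketched as follows. The weights and parameter values below are synthetic (for illustration only), and the curves are simply evaluated at the generating parameters rather than refit by Gauss-Newton:

```python
import math

def logistic_growth(t, A, k, t0):
    """Logistic growth: inflection at t0 (height A/2), asymptote A."""
    return A / (1.0 + math.exp(-k * (t - t0)))

def gompertz_growth(t, A, k, t0):
    """Gompertz growth: inflection at t0 (height A/e), asymptote A."""
    return A * math.exp(-math.exp(-k * (t - t0)))

def aic(sse, n, n_params):
    """Least-squares AIC (up to an additive constant): n*ln(SSE/n) + 2k."""
    return n * math.log(sse / n) + 2 * n_params

# Synthetic "weights" from a logistic curve plus a small deterministic wobble;
# the logistic residuals are then far smaller than the Gompertz residuals,
# so the logistic curve gets the lower AIC.
ts = [float(t) for t in range(0, 400, 10)]
ys = [logistic_growth(t, 100.0, 0.03, 150.0) + math.sin(t) for t in ts]
sse_log = sum((y - logistic_growth(t, 100.0, 0.03, 150.0)) ** 2
              for t, y in zip(ts, ys))
sse_gom = sum((y - gompertz_growth(t, 100.0, 0.03, 150.0)) ** 2
              for t, y in zip(ts, ys))
```

The asymmetric inflection point (A/e rather than A/2) is what distinguishes the Gompertz curve from the logistic when both are fit to the same sigmoid data.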

  13. An Application of a Multidimensional Extension of the Two-Parameter Logistic Latent Trait Model.

    DTIC Science & Technology

    1983-08-01

    theory, models, technical issues, and applications. Review of Educational Research, 1978, 48, 467-510. Marco, G. L. Item characteristic curve...solutions to three intractable testing problems. Journal of Educational Measurement, 1977, 14, 139-160. McKinley, R. L. and Reckase, M. D. A successful...application of latent trait theory to tailored achievement testing (Research Report 80-1). Columbia: University of Missouri, Department of Educational

  14. Modeling of the devolatilization kinetics during pyrolysis of grape residues.

    PubMed

    Fiori, Luca; Valbusa, Michele; Lorenzi, Denis; Fambri, Luca

    2012-01-01

    Thermo-gravimetric analysis (TGA) was performed on grape seeds, skins, stalks, marc, vine-branches, grape seed oil and grape seeds depleted of their oil. The TGA data was modeled through Gaussian, logistic and Miura-Maki distributed activation energy models (DAEMs) and a simpler two-parameter model. All DAEMs allowed an accurate prediction of the TGA data; however, the Miura-Maki model could not account for the complete range of conversion for some substrates, while the Gaussian and logistic DAEMs suffered from the interrelation between the pre-exponential factor k0 and the mean activation energy E0--an obstacle that can be overcome by fixing the value of k0 a priori. The results confirmed the capabilities of DAEMs but also highlighted some drawbacks in their application to certain thermodegradation experimental data. Copyright © 2011 Elsevier Ltd. All rights reserved.

  15. Rank-Optimized Logistic Matrix Regression toward Improved Matrix Data Classification.

    PubMed

    Zhang, Jianguang; Jiang, Jianmin

    2018-02-01

    While existing logistic regression suffers from overfitting and often fails to consider structural information, we propose a novel matrix-based logistic regression to overcome these weaknesses. In the proposed method, 2D matrices are directly used to learn two groups of parameter vectors along each dimension without vectorization, which allows the proposed method to fully exploit the underlying structural information embedded inside the 2D matrices. Further, we add a joint [Formula: see text]-norm on two parameter matrices, which are organized by aligning each group of parameter vectors in columns. This added co-regularization term has two roles: enhancing the effect of regularization and optimizing the rank during the learning process. With our proposed fast iterative solution, we carried out extensive experiments. The results show that, in comparison to both traditional tensor-based methods and vector-based regression methods, our proposed solution achieves better performance for matrix data classification.

  16. The cross-validated AUC for MCP-logistic regression with high-dimensional data.

    PubMed

    Jiang, Dingfeng; Huang, Jian; Zhang, Ying

    2013-10-01

    We propose a cross-validated area under the receiver operating characteristic (ROC) curve (CV-AUC) criterion for tuning parameter selection for penalized methods in sparse, high-dimensional logistic regression models. We use this criterion in combination with the minimax concave penalty (MCP) method for variable selection. The CV-AUC criterion is specifically designed to optimize classification performance for binary outcome data. To implement the proposed approach, we derive an efficient coordinate descent algorithm to compute the MCP-logistic regression solution surface. Simulation studies are conducted to evaluate the finite sample performance of the proposed method and to compare it with existing methods, including the Akaike information criterion (AIC), Bayesian information criterion (BIC), and extended BIC (EBIC). The model selected based on the CV-AUC criterion tends to have a larger predictive AUC and smaller classification error than those with tuning parameters selected using the AIC, BIC, or EBIC. We illustrate the application of MCP-logistic regression with the CV-AUC criterion on three microarray datasets from studies that attempt to identify genes related to cancers. Our simulation studies and data examples demonstrate that CV-AUC is an attractive method for tuning parameter selection for penalized methods in high-dimensional logistic regression models.
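
    A minimal sketch of the AUC statistic underlying the CV-AUC criterion (not the authors' implementation): the AUC equals the Mann-Whitney probability that a randomly chosen positive case receives a higher predicted score than a randomly chosen negative case, with ties counted as 1/2.

```python
def auc(scores, labels):
    """Empirical AUC via the Mann-Whitney pairwise comparison."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Perfect separation of the two classes yields AUC = 1.0.
print(auc([0.9, 0.8, 0.3, 0.2], [1, 1, 0, 0]))
```

    For tuning, this quantity would be computed on held-out folds for each candidate penalty level, and the level maximizing the cross-validated average retained.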

  17. A Bayesian Semiparametric Item Response Model with Dirichlet Process Priors

    ERIC Educational Resources Information Center

    Miyazaki, Kei; Hoshino, Takahiro

    2009-01-01

    In Item Response Theory (IRT), item characteristic curves (ICCs) are illustrated through logistic models or normal ogive models, and the probability that examinees give the correct answer is usually a monotonically increasing function of their ability parameters. However, since only limited patterns of shapes can be obtained from logistic models…

  18. Score Equating and Item Response Theory: Some Practical Considerations.

    ERIC Educational Resources Information Center

    Cook, Linda L.; Eignor, Daniel R.

    The purposes of this paper are five-fold to discuss: (1) when item response theory (IRT) equating methods should provide better results than traditional methods; (2) which IRT model, the three-parameter logistic or the one-parameter logistic (Rasch), is the most reasonable to use; (3) what unique contributions IRT methods can offer the equating…

  19. A probabilistic cellular automata model for the dynamics of a population driven by logistic growth and weak Allee effect

    NASA Astrophysics Data System (ADS)

    Mendonça, J. R. G.

    2018-04-01

    We propose and investigate a one-parameter probabilistic mixture of one-dimensional elementary cellular automata under the guise of a model for the dynamics of a single-species unstructured population with nonoverlapping generations in which individuals have smaller probability of reproducing and surviving in a crowded neighbourhood but also suffer from isolation and dispersal. Remarkably, the first-order mean field approximation to the dynamics of the model yields a cubic map containing terms representing both logistic and weak Allee effects. The model has a single absorbing state devoid of individuals, but depending on the reproduction and survival probabilities can achieve a stable population. We determine the critical probability separating these two phases and find that the phase transition between them is in the directed percolation universality class of critical behaviour.
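
    The following is a hedged illustration, not the paper's mean-field equation: a generic cubic map in which the per-capita factor (a + b*x) is positive at zero density but increases with density (a weak Allee effect) before the logistic crowding factor (1 - x) dominates. The coefficients a = 1.2 and b = 2.0 are invented for demonstration.

```python
def cubic_map(x, a=1.2, b=2.0):
    """One iteration of an illustrative cubic map with logistic
    crowding and a weak Allee effect in the per-capita growth."""
    return x * (a + b * x) * (1.0 - x)

# Starting from a sparse population, the density grows and settles
# on a stable positive fixed point rather than the absorbing state 0.
x = 0.05
for _ in range(200):
    x = cubic_map(x)
print(round(x, 4))
```

    Lowering a below 1 makes the empty (absorbing) state attracting from low densities, mimicking the extinction phase described in the abstract.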

  20. Ensemble learning of inverse probability weights for marginal structural modeling in large observational datasets.

    PubMed

    Gruber, Susan; Logan, Roger W; Jarrín, Inmaculada; Monge, Susana; Hernán, Miguel A

    2015-01-15

    Inverse probability weights used to fit marginal structural models are typically estimated using logistic regression. However, a data-adaptive procedure may be able to better exploit information available in measured covariates. By combining predictions from multiple algorithms, ensemble learning offers an alternative to logistic regression modeling to further reduce bias in estimated marginal structural model parameters. We describe the application of two ensemble learning approaches to estimating stabilized weights: super learning (SL), an ensemble machine learning approach that relies on V-fold cross validation, and an ensemble learner (EL) that creates a single partition of the data into training and validation sets. Longitudinal data from two multicenter cohort studies in Spain (CoRIS and CoRIS-MD) were analyzed to estimate the mortality hazard ratio for initiation versus no initiation of combined antiretroviral therapy among HIV positive subjects. Both ensemble approaches produced hazard ratio estimates further away from the null, and with tighter confidence intervals, than logistic regression modeling. Computation time for EL was less than half that of SL. We conclude that ensemble learning using a library of diverse candidate algorithms offers an alternative to parametric modeling of inverse probability weights when fitting marginal structural models. With large datasets, EL provides a rich search over the solution space in less time than SL with comparable results. Copyright © 2014 John Wiley & Sons, Ltd.
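
    A minimal sketch of the stabilized weights that the ensemble learners estimate (illustrative values, not from the CoRIS data): for a treated subject the weight is P(A=1)/P(A=1|L), and for an untreated subject P(A=0)/P(A=0|L), where the conditional probabilities come from whatever model of treatment given covariates is used (logistic regression, SL, or EL).

```python
def stabilized_weights(treated, p_marginal, p_conditional):
    """treated: 0/1 treatment indicators; p_marginal: marginal
    probability of treatment; p_conditional: model-predicted
    probability of treatment given covariates, per subject."""
    ws = []
    for a, pc in zip(treated, p_conditional):
        if a == 1:
            ws.append(p_marginal / pc)
        else:
            ws.append((1.0 - p_marginal) / (1.0 - pc))
    return ws

w = stabilized_weights([1, 0, 1], 0.5, [0.8, 0.4, 0.25])
print([round(x, 3) for x in w])  # [0.625, 0.833, 2.0]
```

    Better-calibrated conditional probabilities give weights with less residual confounding, which is the motivation for replacing a single logistic model with an ensemble.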


  2. A Primer on the 2- and 3-Parameter Item Response Theory Models.

    ERIC Educational Resources Information Center

    Thornton, Artist

    Item response theory (IRT) is a useful and effective tool for item response measurement if used in the proper context. This paper discusses the sets of assumptions under which responses can be modeled while exploring the framework of the IRT models relative to response testing. The one parameter model, or one parameter logistic model, is perhaps…

  3. Development of a subway operation incident delay model using accelerated failure time approaches.

    PubMed

    Weng, Jinxian; Zheng, Yang; Yan, Xuedong; Meng, Qiang

    2014-12-01

    This study aims to develop a subway operational incident delay model using the parametric accelerated failure time (AFT) approach. Six parametric AFT models, including log-logistic, lognormal, and Weibull models with fixed and random parameters, are built based on Hong Kong subway operation incident data from 2005 to 2012. In addition, a Weibull model with gamma heterogeneity is also considered to compare model performance. The goodness-of-fit test results show that the log-logistic AFT model with random parameters is most suitable for estimating subway incident delay. The results show that a longer subway operation incident delay is highly correlated with the following factors: power cable failure, signal cable failure, turnout communication disruption, and crashes involving a casualty. Vehicle failure has the least impact on the increase in subway operation incident delay. According to these results, several possible measures, such as the use of short-distance wireless communication technology (e.g., Wifi and Zigbee), are suggested to shorten the delay caused by subway operation incidents. Finally, the temporal transferability test results show that the developed log-logistic AFT model with random parameters is stable over time. Copyright © 2014 Elsevier Ltd. All rights reserved.

  4. Logistic random effects regression models: a comparison of statistical packages for binary and ordinal outcomes

    PubMed Central

    2011-01-01

    Background Logistic random effects models are a popular tool for analyzing multilevel (also called hierarchical) data with a binary or ordinal outcome. Here, we aim to compare different statistical software implementations of these models. Methods We used individual patient data from 8509 patients in 231 centers with moderate and severe Traumatic Brain Injury (TBI) enrolled in eight Randomized Controlled Trials (RCTs) and three observational studies. We fitted logistic random effects regression models with the 5-point Glasgow Outcome Scale (GOS) as outcome, both dichotomized and ordinal, with center and/or trial as random effects, and with age, motor score, pupil reactivity, and trial as covariates. We then compared the implementations of frequentist and Bayesian methods for estimating the fixed and random effects. Frequentist approaches included R (lme4), Stata (GLLAMM), SAS (GLIMMIX and NLMIXED), MLwiN ([R]IGLS), and MIXOR; Bayesian approaches included WinBUGS, MLwiN (MCMC), the R package MCMCglmm, and the SAS experimental procedure MCMC. Three data sets (the full data set and two sub-datasets) were analysed using essentially two logistic random effects models, with either one random effect for center or two random effects for center and trial. For the ordinal outcome in the full data set, a proportional odds model with a random center effect was also fitted. Results The packages gave similar parameter estimates for both the fixed and random effects and for the binary (and ordinal) models for the main study, when based on a relatively large number of level-1 (patient level) data compared to the number of level-2 (hospital level) data. However, when based on a relatively sparse data set, i.e. when the numbers of level-1 and level-2 data units were about the same, the frequentist and Bayesian approaches showed somewhat different results. The software implementations differ considerably in flexibility, computation time, and usability. There are also differences in the availability of additional tools for model evaluation, such as diagnostic plots. The experimental SAS (version 9.2) procedure MCMC appeared to be inefficient. Conclusions On relatively large data sets, the different software implementations of logistic random effects regression models produced similar results. Thus, for a large data set there seems to be no explicit preference (provided there is no preference from a philosophical point of view) for either a frequentist or a Bayesian approach (if based on vague priors). The choice of a particular implementation may largely depend on the desired flexibility and the usability of the package. For small data sets the random effects variances are difficult to estimate. In the frequentist approaches the MLE of this variance was often estimated as zero, with a standard error that was either zero or could not be determined, while for Bayesian methods the estimates could depend on the chosen "non-informative" prior for the variance parameter. The starting value for the variance parameter may also be critical for the convergence of the Markov chain. PMID:21605357

  5. Modeling the dynamics of urban growth using multinomial logistic regression: a case study of Jiayu County, Hubei Province, China

    NASA Astrophysics Data System (ADS)

    Nong, Yu; Du, Qingyun; Wang, Kun; Miao, Lei; Zhang, Weiwei

    2008-10-01

    Urban growth modeling, one of the most important aspects of land use and land cover change studies, has attracted substantial attention because it helps to explain the mechanisms of land use change and thus informs relevant policy-making. This study applied multinomial logistic regression to model urban growth in Jiayu county of Hubei province, China, to discover the relationship between urban growth and its driving forces, with biophysical and socio-economic factors selected as independent variables. This type of regression is similar to binary logistic regression, but it is more general because the dependent variable is not restricted to two categories, as in previous studies. The multinomial model can simulate the process of multiple land use competition between urban land, bare land, cultivated land, and orchard land. Taking urban land as the reference category, parameters could be estimated as odds ratios. A probability map is generated from the model to predict where urban growth will occur.
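
    A hedged sketch of the multinomial logit form behind such a model: one coefficient vector per non-reference land-use category, with the reference category ("urban", as in the study) fixed at a linear predictor of zero. The category names follow the abstract; all coefficient and feature values are invented for illustration.

```python
import math

def multinomial_probs(x, betas):
    """x: feature vector with a leading 1 for the intercept.
    betas: dict mapping category -> coefficient vector; the reference
    category 'urban' is implicit with linear predictor 0."""
    scores = {"urban": 0.0}
    for cat, b in betas.items():
        scores[cat] = sum(bi * xi for bi, xi in zip(b, x))
    z = sum(math.exp(s) for s in scores.values())
    return {cat: math.exp(s) / z for cat, s in scores.items()}

betas = {
    "bare":       [-0.4,  0.8, -1.1],  # intercept + two invented covariates
    "cultivated": [ 0.2, -0.3,  0.5],
    "orchard":    [-1.0,  0.1,  0.9],
}
p = multinomial_probs([1.0, 0.5, 0.2], betas)
print({k: round(v, 3) for k, v in p.items()})
```

    Exponentiating a coefficient gives the odds ratio of that category versus urban land for a unit change in the covariate, which is how the parameters in the abstract are interpreted.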

  6. Item Response Theory Analyses of Parent and Teacher Ratings of the ADHD Symptoms for Recoded Dichotomous Scores

    ERIC Educational Resources Information Center

    Gomez, Rapson; Vance, Alasdair; Gomez, Andre

    2011-01-01

    Objective: The two-parameter logistic model (2PLM) was used to evaluate the psychometric properties of the inattention (IA) and hyperactivity/impulsivity (HI) symptoms. Method: To accomplish this, parents and teachers completed the Disruptive Behavior Rating Scale (DBRS) for a group of 934 primary school-aged children. Results: The results for the…

  7. The Shortened Raven Standard Progressive Matrices: Item Response Theory-Based Psychometric Analyses and Normative Data

    ERIC Educational Resources Information Center

    Van der Elst, Wim; Ouwehand, Carolijn; van Rijn, Peter; Lee, Nikki; Van Boxtel, Martin; Jolles, Jelle

    2013-01-01

    The purpose of the present study was to evaluate the psychometric properties of a shortened version of the Raven Standard Progressive Matrices (SPM) under an item response theory framework (the one- and two-parameter logistic models). The shortened Raven SPM was administered to N = 453 cognitively healthy adults aged between 24 and 83 years. The…

  8. Person Response Functions and the Definition of Units in the Social Sciences

    ERIC Educational Resources Information Center

    Engelhard, George, Jr.; Perkins, Aminah F.

    2011-01-01

    Humphry (this issue) has written a thought-provoking piece on the interpretation of item discrimination parameters as scale units in item response theory. One of the key features of his work is the description of an item response theory (IRT) model that he calls the logistic measurement function that combines aspects of two traditions in IRT that…

  9. Evaluation of Linking Methods for Placing Three-Parameter Logistic Item Parameter Estimates onto a One-Parameter Scale

    ERIC Educational Resources Information Center

    Karkee, Thakur B.; Wright, Karen R.

    2004-01-01

    Different item response theory (IRT) models may be employed for item calibration. Change of testing vendors, for example, may result in the adoption of a different model than that previously used with a testing program. To provide scale continuity and preserve cut score integrity, item parameter estimates from the new model must be linked to the…

  10. Comment on ``Correlated noise in a logistic growth model''

    NASA Astrophysics Data System (ADS)

    Behera, Anita; O'Rourke, S. Francesca C.

    2008-01-01

    We argue that the results published by Ai [Phys. Rev. E 67, 022903 (2003)] on “correlated noise in logistic growth” are not correct. Their conclusion that, for larger values of the correlation parameter λ , the cell population is peaked at x=0 , which denotes a high extinction rate, is also incorrect. We find the reverse behavior to their results, that increasing λ promotes the stable growth of tumor cells. In particular, their results for the steady-state probability, as a function of cell number, at different correlation strengths, presented in Figs. 1 and 2 of their paper show different behavior than one would expect from the simple mathematical expression for the steady-state probability. Additionally, their interpretation that at small values of cell number the steady-state probability increases as the correlation parameter is increased is also questionable. Another striking feature in their Figs. 1 and 3 is that, for the same values of the parameters λ and α , their simulation produces two different curves, both qualitatively and quantitatively.

  11. Fungible weights in logistic regression.

    PubMed

    Jones, Jeff A; Waller, Niels G

    2016-06-01

    In this article we develop methods for assessing parameter sensitivity in logistic regression models. To set the stage for this work, we first review Waller's (2008) equations for computing fungible weights in linear regression. Next, we describe 2 methods for computing fungible weights in logistic regression. To demonstrate the utility of these methods, we compute fungible logistic regression weights using data from the Centers for Disease Control and Prevention's (2010) Youth Risk Behavior Surveillance Survey, and we illustrate how these alternate weights can be used to evaluate parameter sensitivity. To make our work accessible to the research community, we provide R code (R Core Team, 2015) that will generate both kinds of fungible logistic regression weights. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  12. Modelling the growth of plants with a uniform growth logistics.

    PubMed

    Kilian, H G; Bartkowiak, D; Kazda, M; Kaufmann, D

    2014-05-21

    The increment model has previously been used to describe the growth of plants in general. Here, we examine how the same logistics enables the development of different superstructures. Data from the literature are analyzed with the increment model. Increments are growth-invariant molecular clusters, treated as heuristic particles. This approach formulates the law of mass action for multi-component systems, describing the general properties of superstructures which are optimized via relaxation processes. The daily growth patterns of hypocotyls can be reproduced implying predetermined growth invariant model parameters. In various species, the coordinated formation and death of fine roots are modeled successfully. Their biphasic annual growth follows distinct morphological programs but both use the same logistics. In tropical forests, distributions of the diameter in breast height of trees of different species adhere to the same pattern. Beyond structural fluctuations, competition and cooperation within and between the species may drive optimization. All superstructures of plants examined so far could be reproduced with our approach. With genetically encoded growth-invariant model parameters (interaction with the environment included) perfect morphological development runs embedded in the uniform logistics of the increment model. Copyright © 2014 Elsevier Ltd. All rights reserved.

  13. The compression of deaths above the mode.

    PubMed

    Thatcher, A Roger; Cheung, Siu Lan K; Horiuchi, Shiro; Robine, Jean-Marie

    2010-03-26

    Kannisto (2001) has shown that as the frequency distribution of ages at death has shifted to the right, the age distribution of deaths above the modal age has become more compressed. In order to further investigate this old-age mortality compression, we adopt the simple logistic model with two parameters, which is known to fit data on old-age mortality well (Thatcher 1999). Based on the model, we show that three key measures of old-age mortality (the modal age of adult deaths, the life expectancy at the modal age, and the standard deviation of ages at death above the mode) can be estimated fairly accurately from death rates at only two suitably chosen high ages (70 and 90 in this study). The distribution of deaths above the modal age becomes compressed when the logits of death rates fall more at the lower age than at the higher age. Our analysis of mortality time series in six countries, using the logistic model, endorsed Kannisto's conclusion. Some possible reasons for the compression are discussed.
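
    As a sketch of the two-parameter logistic mortality model the abstract relies on: the hazard is mu(x) = a*exp(b*x) / (1 + a*exp(b*x)), so logit(mu(x)) = ln(a) + b*x is linear in age, and the two parameters can be recovered exactly from death rates at just two ages (70 and 90 here). The rate values below are illustrative, not the study's data.

```python
import math

def logit(p):
    return math.log(p / (1.0 - p))

def mu(x, a, b):
    """Two-parameter logistic hazard at age x."""
    s = a * math.exp(b * x)
    return s / (1.0 + s)

def fit_from_two_ages(m70, m90):
    """Solve logit(mu) = ln(a) + b*x through the points (70, m70), (90, m90)."""
    b = (logit(m90) - logit(m70)) / (90 - 70)
    a = math.exp(logit(m70) - b * 70)
    return a, b

a, b = fit_from_two_ages(0.04, 0.18)
print(round(mu(70, a, b), 4), round(mu(90, a, b), 4))  # reproduces 0.04, 0.18
```

    Compression above the mode then corresponds to the logit of the death rate falling more at the lower age than at the higher one, which steepens b.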

  14. Transport spatial model for the definition of green routes for city logistics centers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pamučar, Dragan, E-mail: dpamucar@gmail.com; Gigović, Ljubomir, E-mail: gigoviclj@gmail.com; Ćirović, Goran, E-mail: cirovic@sezampro.rs

    This paper presents a transport spatial decision support model (TSDSM) for carrying out the optimization of green routes for city logistics centers. The TSDSM model is based on the integration of the multi-criteria method of Weighted Linear Combination (WLC) and the modified Dijkstra algorithm within a geographic information system (GIS). The GIS is used for processing spatial data. The proposed model makes it possible to plan routes for green vehicles and maximize the positive effects on the environment, which can be seen in the reduction of harmful gas emissions and an increase in the air quality in highly populated areas. The scheduling of delivery vehicles is given as a problem of optimization in terms of the parameters of: the environment, health, use of space and logistics operating costs. Each of these input parameters was thoroughly examined and broken down in the GIS into criteria which further describe them. The model presented here takes into account the fact that logistics operators have a limited number of environmentally friendly (green) vehicles available. The TSDSM was tested on a network of roads with 127 links for the delivery of goods from the city logistics center to the user. The model supports any number of available environmentally friendly or environmentally unfriendly vehicles consistent with the size of the network and the transportation requirements. - Highlights: • Model for routing light delivery vehicles in urban areas. • Optimization of green routes for city logistics centers. • The proposed model maximizes the positive effects on the environment. • The model was tested on a real network.

  15. Genomic-Enabled Prediction of Ordinal Data with Bayesian Logistic Ordinal Regression.

    PubMed

    Montesinos-López, Osval A; Montesinos-López, Abelardo; Crossa, José; Burgueño, Juan; Eskridge, Kent

    2015-08-18

    Most genomic-enabled prediction models developed so far assume that the response variable is continuous and normally distributed. The exception is the probit model, developed for ordered categorical phenotypes. In statistical applications, because of the easy implementation of the Bayesian probit ordinal regression (BPOR) model, Bayesian logistic ordinal regression (BLOR) is implemented rarely in the context of genomic-enabled prediction [sample size (n) is much smaller than the number of parameters (p)]. For this reason, in this paper we propose a BLOR model using the Pólya-Gamma data augmentation approach that produces a Gibbs sampler with similar full conditional distributions of the BPOR model and with the advantage that the BPOR model is a particular case of the BLOR model. We evaluated the proposed model by using simulation and two real data sets. Results indicate that our BLOR model is a good alternative for analyzing ordinal data in the context of genomic-enabled prediction with the probit or logit link. Copyright © 2015 Montesinos-López et al.

  16. A Numerical Study of New Logistic Map

    NASA Astrophysics Data System (ADS)

    Khmou, Youssef

    In this paper, we propose a new logistic map based on the information entropy relation and study its bifurcation diagram in comparison with that of the standard logistic map. In the first part, we compare the diagram obtained by numerical simulations with that of the standard logistic map. It is found that the structures of both diagrams are similar, where the range of the growth parameter is restricted to the interval [0,e]. In the second part, we present an application of the proposed map to traffic flow using a macroscopic model. It is found that the bifurcation diagram is an exact model of Greenberg's model of traffic flow, where the growth parameter corresponds to the optimal velocity and the random sequence corresponds to the density. In the last part, we present a second possible application of the proposed map, which consists of random number generation. The results of the analysis show that the excluded initial values of the sequences are (0,1).
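
    For reference, the standard logistic map that serves as the paper's baseline can be iterated in a few lines; this sketch shows only the classical map, not the proposed entropy-based variant.

```python
def logistic_map(r, x0, n):
    """Iterate x_{k+1} = r * x_k * (1 - x_k) for n steps from x0."""
    x = x0
    orbit = []
    for _ in range(n):
        x = r * x * (1.0 - x)
        orbit.append(x)
    return orbit

# For r = 2 the orbit settles on the fixed point 1 - 1/r = 0.5;
# for r = 4 the map is fully chaotic on (0, 1).
print(logistic_map(2.0, 0.2, 50)[-1])  # -> 0.5
```

    Sweeping r over its admissible range and plotting the late-orbit values produces the familiar bifurcation diagram against which the new map is compared.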

  17. Logistic Achievement Test Scaling and Equating with Fixed versus Estimated Lower Asymptotes.

    ERIC Educational Resources Information Center

    Phillips, S. E.

    This study compared the lower asymptotes estimated by the maximum likelihood procedures of the LOGIST computer program with those obtained via application of the Norton methodology. The study also compared the equating results from the three-parameter logistic model with those obtained from the equipercentile, Rasch, and conditional…

  18. Conditional Poisson models: a flexible alternative to conditional logistic case cross-over analysis.

    PubMed

    Armstrong, Ben G; Gasparrini, Antonio; Tobias, Aurelio

    2014-11-24

    The time-stratified case cross-over approach is a popular alternative to conventional time series regression for analysing associations between time series of environmental exposures (air pollution, weather) and counts of health outcomes. These are almost always analyzed using conditional logistic regression on data expanded to case-control (case crossover) format, but this has some limitations. In particular, adjusting for overdispersion and auto-correlation in the counts is not possible. It has been established that a Poisson model for counts with stratum indicators gives identical estimates to those from conditional logistic regression and does not have these limitations, but it is little used, probably because of the overheads of estimating many stratum parameters. The conditional Poisson model avoids estimating stratum parameters by conditioning on the total event count in each stratum, thus simplifying the computation and increasing the number of strata for which fitting is feasible compared with the standard unconditional Poisson model. Unlike the conditional logistic model, the conditional Poisson model does not require expanding the data, and can adjust for overdispersion and auto-correlation. It is available in Stata, R, and other packages. By applying these methods to real data and using simulations, we demonstrate that conditional Poisson models were simpler to code and shorter to run than conditional logistic analyses and can be fitted to larger data sets than is possible with standard Poisson models. Allowing for overdispersion or autocorrelation was possible with the conditional Poisson model, but when not required this model gave identical estimates to those from conditional logistic regression. Conditional Poisson regression models provide an alternative to case crossover analysis of stratified time series data with some advantages.
The conditional Poisson model can also be used in other contexts in which primary control for confounding is by fine stratification.

  19. Synchronization in Biochemical Substance Exchange Between Two Cells

    NASA Astrophysics Data System (ADS)

    Mihailović, Dragutin T.; Balaž, Igor

    In a previous paper [Mod. Phys. Lett. B 25 (2011) 2407-2417], Mihailović et al. introduced a simplified model of cell communication in the form of coupled difference logistic equations and investigated the stability of the exchange of signaling molecules under variability of internal and external parameters. However, that work did not address synchronization or the effect of noise on biochemical substance exchange between cells. In this paper, we consider synchronization in intercellular exchange in dependence on environmental and cell-intrinsic parameters by analyzing the largest Lyapunov exponent, cross sample entropy, and bifurcation maps.

  20. Bernoulli-Langevin Wind Speed Model for Simulation of Storm Events

    NASA Astrophysics Data System (ADS)

    Fürstenau, Norbert; Mittendorf, Monika

    2016-12-01

    We present a simple nonlinear-dynamics Langevin model for predicting the nonstationary wind speed profile during storm events that typically accompany extreme low-pressure situations. It is based on a second-degree Bernoulli equation with δ-correlated Gaussian noise and may complement stationary stochastic wind models. Transitions between increasing and decreasing wind speed, and between the (quasi-)stationary normal-wind and storm states, are induced by the sign change of the controlling time-dependent rate parameter k(t). This approach corresponds to the simplified nonlinear laser dynamics for the incoherent-to-coherent transition of light emission, which can be understood through a phase-transition analogy within equilibrium thermodynamics [H. Haken, Synergetics, 3rd ed., Springer, Berlin, Heidelberg, New York 1983/2004]. Evidence for the nonlinear-dynamics two-state approach is obtained by fitting two historical wind speed profiles (low-pressure situations "Xaver" and "Christian", 2013), taken from Meteorological Terminal Air Report weather data, with a logistic approximation (i.e., constant rate coefficients k) to the solution of our dynamical model using a sum of sigmoid functions. The analytical solution of our dynamical two-state Bernoulli equation, obtained with a sinusoidal rate ansatz k(t) of period T (= storm duration), exhibits reasonable agreement with the logistic fit to the empirical data. Noise parameter estimates of speed fluctuations are derived from empirical fit residuals and by means of a stationary solution of the corresponding Fokker-Planck equation. Numerical simulations with the Bernoulli-Langevin equation demonstrate the potential for stochastic wind speed profile modeling and predictive filtering under extreme storm events, suggested for applications in anticipative air traffic management.
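    A minimal deterministic sketch of the two-state idea, assuming illustrative (not fitted) parameter values: Euler integration of the second-degree Bernoulli equation dv/dt = k(t)v - g v^2 with a sinusoidal rate k(t) of period T reproduces the rise to a storm state while k > 0 and the relaxation back to normal wind after the sign change:

```python
import numpy as np

T, g, k0, dt = 24.0, 0.05, 0.5, 0.01     # storm duration (h), saturation, rate amplitude
t = np.arange(0.0, T, dt)
k = k0 * np.sin(2 * np.pi * t / T)       # k > 0: growth toward storm; k < 0: decay

v = np.empty_like(t)
v[0] = 5.0                               # pre-storm wind speed (m/s), illustrative
for i in range(1, len(t)):
    # Euler step of the Bernoulli equation dv/dt = k(t) v - g v**2
    v[i] = v[i - 1] + dt * (k[i - 1] * v[i - 1] - g * v[i - 1] ** 2)

print(v.max(), v[-1])                    # rises to a storm peak, then relaxes
```

Adding a δ-correlated Gaussian noise term to each Euler step would turn this into the Bernoulli-Langevin form discussed in the abstract.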

  1. Strategies for Testing Statistical and Practical Significance in Detecting DIF with Logistic Regression Models

    ERIC Educational Resources Information Center

    Fidalgo, Angel M.; Alavi, Seyed Mohammad; Amirian, Seyed Mohammad Reza

    2014-01-01

    This study examines three controversial aspects in differential item functioning (DIF) detection by logistic regression (LR) models: first, the relative effectiveness of different analytical strategies for detecting DIF; second, the suitability of the Wald statistic for determining the statistical significance of the parameters of interest; and…

  2. A Normalized Direct Approach for Estimating the Parameters of the Normal Ogive Three-Parameter Model for Ability Tests.

    ERIC Educational Resources Information Center

    Gugel, John F.

    A new method for estimating the parameters of the normal ogive three-parameter model for multiple-choice test items--the normalized direct (NDIR) procedure--is examined. The procedure is compared to a more commonly used estimation procedure, Lord's LOGIST, using computer simulations. The NDIR procedure uses the normalized (mid-percentile)…

  3. The reliable solution and computation time of variable parameters logistic model

    NASA Astrophysics Data System (ADS)

    Wang, Pengfei; Pan, Xinnong

    2018-05-01

    The study investigates the reliable computation time (RCT, denoted Tc) of a double-precision computation of a variable-parameter logistic map (VPLM). First, using the proposed method, we obtain reliable solutions for the logistic map. Second, we construct 10,000 samples of reliable experiments from a time-dependent, non-stationary-parameter VPLM and calculate the mean Tc. The results indicate that, for each different initial value, the Tc values of the VPLM generally differ. However, the mean Tc tends to a constant value when the sample number is large enough. The maximum, minimum, and probable distribution functions of Tc are also obtained, which can help identify the robustness of applying nonlinear time series theory to forecasting with the VPLM output. In addition, the Tc of fixed-parameter experiments with the logistic map is obtained, and the results suggest that this Tc matches the value predicted by the theoretical formula.
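    A simple proxy for the reliable computation time, assuming (as is standard in shadowing-time arguments) that Tc is the time needed for a machine-precision perturbation to grow to a macroscopic tolerance; the fixed parameter r = 4, the tolerances, and the sample size are illustrative choices of ours:

```python
import numpy as np

def reliable_time(x0, r=4.0, eps=1e-15, tol=1e-3, nmax=200):
    # Iterate the logistic map from x0 and from a machine-precision perturbation
    # of x0, and report the first step at which the two double-precision orbits
    # disagree by more than tol -- a simple stand-in for Tc.
    x, y = x0, x0 + eps
    for n in range(1, nmax + 1):
        x, y = r * x * (1 - x), r * y * (1 - y)
        if abs(x - y) > tol:
            return n
    return nmax

rng = np.random.default_rng(1)
tcs = [reliable_time(x0) for x0 in rng.uniform(0.05, 0.95, 500)]
mean_tc = float(np.mean(tcs))
theory = np.log(1e-3 / 1e-15) / np.log(2.0)   # ln(tol/eps)/lambda, lambda = ln 2 at r = 4
print(mean_tc, theory)
```

Individual Tc values differ by initial condition, but the sample mean settles near the Lyapunov-based estimate, mirroring the behavior reported above.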

  4. A Method of Q-Matrix Validation for the Linear Logistic Test Model

    PubMed Central

    Baghaei, Purya; Hohensinn, Christine

    2017-01-01

    The linear logistic test model (LLTM) is a well-recognized psychometric model for examining the components of difficulty in cognitive tests and validating construct theories. The plausibility of the construct model, summarized in a matrix of weights known as the Q-matrix or weight matrix, is tested by (1) comparing the fit of the LLTM with the fit of the Rasch model (RM) using the likelihood ratio (LR) test and (2) examining the correlation between the Rasch model item parameters and the LLTM-reconstructed item parameters. The problem with the LR test is that it is almost always significant and, consequently, the LLTM is rejected. The drawback of examining the correlation coefficient is that there is no cut-off value or lower bound for its magnitude. In this article we suggest a simulation method to set a minimum benchmark for the correlation between item parameters from the Rasch model and those reconstructed by the LLTM. If the cognitive model is valid, then the correlation coefficient between the RM-based item parameters and the LLTM-reconstructed item parameters derived from the theoretical weight matrix should be greater than those derived from the simulated matrices. PMID:28611721
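    The proposed benchmark can be sketched as follows, using ordinary least squares in place of a full LLTM fit and fully simulated (not empirical) weight matrices; the correlation obtained from the "theoretical" Q-matrix should exceed the benchmark from random matrices of the same kind:

```python
import numpy as np

rng = np.random.default_rng(2)
I, K = 40, 5                                  # items and cognitive operations (toy sizes)
Q = rng.integers(0, 2, size=(I, K)).astype(float)    # "theoretical" weight matrix
eta = rng.normal(size=K)                      # basic parameters (operation difficulties)
b = Q @ eta + rng.normal(scale=0.3, size=I)   # Rasch-like difficulties with some misfit

def lltm_corr(W):
    # correlation between item difficulties and their LLTM reconstruction W @ eta_hat
    eta_hat, *_ = np.linalg.lstsq(W, b, rcond=None)
    return np.corrcoef(b, W @ eta_hat)[0, 1]

r_theory = lltm_corr(Q)
r_sim = [lltm_corr(rng.integers(0, 2, size=(I, K)).astype(float)) for _ in range(200)]
print(r_theory, float(np.quantile(r_sim, 0.95)))   # benchmark from simulated matrices
```

Here the data were generated from Q, so r_theory clears the simulated benchmark; with an invalid cognitive model it would fall inside the simulated distribution.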

  5. Some Empirical Evidence for Latent Trait Model Selection.

    ERIC Educational Resources Information Center

    Hutten, Leah R.

    The results of this study suggest that for purposes of estimating ability by latent trait methods, the Rasch model compares favorably with the three-parameter logistic model. Using estimated parameters to make predictions about 25 actual number-correct score distributions with samples of 1,000 cases each, those predicted by the Rasch model fit the…

  6. Development of a Multicomponent Prediction Model for Acute Esophagitis in Lung Cancer Patients Receiving Chemoradiotherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    De Ruyck, Kim, E-mail: kim.deruyck@UGent.be; Sabbe, Nick; Oberije, Cary

    2011-10-01

    Purpose: To construct a model for the prediction of acute esophagitis in lung cancer patients receiving chemoradiotherapy by combining clinical data, treatment parameters, and genotyping profile. Patients and Methods: Data were available for 273 lung cancer patients treated with curative chemoradiotherapy. Clinical data included gender, age, World Health Organization performance score, nicotine use, diabetes, chronic disease, tumor type, tumor stage, lymph node stage, tumor location, and medical center. Treatment parameters included chemotherapy, surgery, radiotherapy technique, tumor dose, mean fractionation size, mean and maximal esophageal dose, and overall treatment time. A total of 332 genetic polymorphisms were considered in 112 candidate genes. The prediction model was obtained by lasso logistic regression for predictor selection, followed by classic logistic regression for unbiased estimation of the coefficients. Performance of the model was expressed as the area under the receiver operating characteristic curve and as the false-negative rate at the optimal point on the receiver operating characteristic curve. Results: A total of 110 patients (40%) developed acute esophagitis Grade ≥2 (Common Terminology Criteria for Adverse Events v3.0). The final model contained chemotherapy treatment, lymph node stage, mean esophageal dose, gender, overall treatment time, radiotherapy technique, rs2302535 (EGFR), rs16930129 (ENG), rs1131877 (TRAF3), and rs2230528 (ITGB2). The area under the curve was 0.87, and the false-negative rate was 16%. Conclusion: Prediction of acute esophagitis can be improved by combining clinical, treatment, and genetic factors. A multicomponent prediction model for acute esophagitis with a sensitivity of 84% was constructed with two clinical parameters, four treatment parameters, and four genetic polymorphisms.
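    The two-step estimation strategy used above (lasso logistic regression for predictor selection, then a classic logistic refit on the selected predictors) might be sketched with scikit-learn on synthetic data; the sample sizes, penalty strength, and the near-zero regularization standing in for an unpenalized refit are our assumptions, not the paper's settings:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
n, p = 500, 40                            # subjects and candidate predictors (toy sizes)
X = rng.normal(size=(n, p))
beta = np.zeros(p)
beta[:4] = [1.0, -1.0, 0.8, -0.8]         # only four predictors truly matter
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-(X @ beta))))

# step 1: lasso (L1-penalized) logistic regression for predictor selection
lasso = LogisticRegression(penalty="l1", solver="liblinear", C=0.05).fit(X, y)
selected = np.flatnonzero(lasso.coef_[0])

# step 2: refit an (effectively) unpenalized logistic model on the selected
# predictors, giving less shrunken coefficient estimates
refit = LogisticRegression(C=1e6, solver="liblinear").fit(X[:, selected], y)
print(selected, refit.coef_[0])
```

The refit step matters because lasso coefficients are biased toward zero; only the support is kept from step 1.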

  7. An extension of trust and TAM model with IDT in the adoption of the electronic logistics information system in HIS in the medical industry.

    PubMed

    Tung, Feng-Cheng; Chang, Su-Chao; Chou, Chi-Min

    2008-05-01

    Since National Health Insurance was introduced in 1995, coverage has increased from 50-60% to over 96% of the population, with a consistent satisfaction rating of about 70%. However, premiums accounted for 5.77% of GDP in 2001 and the Bureau of National Health Insurance faced pressing financial difficulties, so it reformed its expenditure systems (fee for service, capitation, case payment, and the global budget system) in order to control rising medical costs. Since the change in health insurance policy, most hospitals have attempted to reduce their operating expenses and improve efficiency. Introducing an electronic logistics information system is one way of reducing the costs of the central warehouse and the nursing stations. Hence, this study proposes a technology acceptance research model and examines how nurses' acceptance of the e-logistics information system has been affected in the medical industry. The research combines innovation diffusion theory and the technology acceptance model, adding two constructs, trust and perceived financial cost, to propose a new hybrid technology acceptance model. Taking Taiwan's medical industry as an example, the paper studies nurses' acceptance of the electronic logistics information system. The structural equation modeling technique was used to evaluate the causal model, and confirmatory factor analysis was performed to examine the reliability and validity of the measurement model. The results of the survey strongly support the new hybrid technology acceptance model in predicting nurses' intention to use the electronic logistics information system. The study shows that compatibility, perceived usefulness, perceived ease of use, and trust all have a strong positive influence on behavioral intention to use, whereas perceived financial cost has a strong negative influence.

  8. Semen molecular and cellular features: these parameters can reliably predict subsequent ART outcome in a goat model

    PubMed Central

    Berlinguer, Fiammetta; Madeddu, Manuela; Pasciu, Valeria; Succu, Sara; Spezzigu, Antonio; Satta, Valentina; Mereu, Paolo; Leoni, Giovanni G; Naitana, Salvatore

    2009-01-01

    Currently, the assessment of sperm function in a raw or processed semen sample cannot reliably predict sperm ability to withstand freezing and thawing procedures, in vivo fertility, or assisted reproductive biotechnology (ART) outcome. The aim of the present study was to investigate which parameters, among a battery of analyses, could predict subsequent spermatozoa in vitro fertilization ability, and hence blastocyst output, in a goat model. Ejaculates were obtained by artificial vagina from 3 adult goats (Capra hircus) aged 2 years (A, B, and C). To assess the predictive value of viability, computer-assisted sperm analyzer (CASA) motility parameters, and intracellular ATP concentration before and after thawing, and of DNA integrity after thawing, on subsequent embryo output in an in vitro fertility test, a logistic regression analysis was used. Individual differences in semen parameters were evident for semen viability after thawing and DNA integrity. Results of the IVF test showed that spermatozoa collected from A and B led to higher cleavage rates (p < 0.01) and blastocyst output (p < 0.05) compared with C. The logistic regression model explained 72% of the deviance (p < 0.0001), directly related to the mean percentage of rapid spermatozoa in fresh semen (p < 0.01), semen viability after thawing (p < 0.01), and two of the three comet parameters considered, i.e., tail DNA percentage and comet length (p < 0.0001). DNA integrity alone had a high predictive value for IVF outcome with frozen/thawed semen (deviance explained: 57%). The model proposed here represents one of many possible ways to explain differences in embryo output following IVF with different semen donors and may be a useful tool to select the most suitable donors for semen cryopreservation. PMID:19900288

  9. Parameter Recovery for the 1-P HGLLM with Non-Normally Distributed Level-3 Residuals

    ERIC Educational Resources Information Center

    Kara, Yusuf; Kamata, Akihito

    2017-01-01

    A multilevel Rasch model using a hierarchical generalized linear model is one approach to multilevel item response theory (IRT) modeling and is referred to as a one-parameter hierarchical generalized linear logistic model (1-P HGLLM). Although it has the flexibility to model nested structure of data with covariates, the model assumes the normality…

  10. Use of nonlinear models for describing scrotal circumference growth in Guzerat bulls raised under grazing conditions.

    PubMed

    Loaiza-Echeverri, A M; Bergmann, J A G; Toral, F L B; Osorio, J P; Carmo, A S; Mendonça, L F; Moustacas, V S; Henry, M

    2013-03-15

    The objective was to use various nonlinear models to describe scrotal circumference (SC) growth in Guzerat bulls on three farms in the state of Minas Gerais, Brazil. The nonlinear models were: Brody, Logistic, Gompertz, Richards, Von Bertalanffy, and Tanaka, where parameter A is the estimated testis size at maturity, B is the integration constant, k is a maturating index and, for the Richards and Tanaka models, m determines the inflection point. In Tanaka, A is an indefinite size of the testis, and B and k adjust the shape and inclination of the curve. A total of 7410 SC records were obtained every 3 months from 1034 bulls with ages varying between 2 and 69 months (<240 days of age = 159; 241-365 days = 451; 366-550 days = 1443; 551-730 days = 1705; and >731 days = 3652 SC measurements). Goodness of fit was evaluated by coefficients of determination (R²), error sum of squares, average prediction error (APE), and mean absolute deviation. The Richards model did not reach the convergence criterion. The R² values were similar for all models (0.68-0.69). The error sum of squares was lowest for the Tanaka model. All models fit the SC data poorly in the early and late periods. Logistic was the model which best estimated SC in the early phase (based on APE and mean absolute deviation). The Tanaka and Logistic models had the lowest APE between 300 and 1600 days of age. The Logistic model was chosen for analysis of the environmental influence on parameters A and k. Based on absolute growth rate, SC increased from 0.019 cm/d, peaking at 0.025 cm/d between 318 and 435 days of age. Farm, year, and season of birth significantly affected size of adult SC and SC growth rate. An increase in SC adult size (parameter A) was accompanied by decreased SC growth rate (parameter k). In conclusion, SC growth in Guzerat bulls was characterized by an accelerated growth phase, followed by decreased growth; this was best represented by the Logistic model. The inflection point occurred at approximately 376 days of age (mean SC of 17.9 cm). We inferred that early selection of testicular size might result in smaller testes at maturity. Copyright © 2013 Elsevier Inc. All rights reserved.
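    For the Logistic model above, SC(t) = A / (1 + B e^(-kt)) and the inflection point falls at t* = ln(B)/k, where SC(t*) = A/2. The parameter values below are hypothetical, chosen only so the inflection lands near the reported 376 days and 17.9 cm, not the paper's fitted estimates:

```python
import math

def logistic_growth(t, A, B, k):
    # Logistic growth curve SC(t) = A / (1 + B * exp(-k t)); the inflection
    # is at t* = ln(B)/k, where SC(t*) = A/2
    return A / (1.0 + B * math.exp(-k * t))

A, B, k = 35.8, 12.0, 0.0066      # hypothetical values (A/2 matches the 17.9 cm mean)
t_star = math.log(B) / k          # inflection point in days
print(round(t_star, 1), logistic_growth(t_star, A, B, k))   # second value equals A/2
```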

  11. Easy and low-cost identification of metabolic syndrome in patients treated with second-generation antipsychotics: artificial neural network and logistic regression models.

    PubMed

    Lin, Chao-Cheng; Bai, Ya-Mei; Chen, Jen-Yeu; Hwang, Tzung-Jeng; Chen, Tzu-Ting; Chiu, Hung-Wen; Li, Yu-Chuan

    2010-03-01

    Metabolic syndrome (MetS) is an important side effect of second-generation antipsychotics (SGAs). However, many SGA-treated patients with MetS remain undetected. In this study, we trained and validated artificial neural network (ANN) and multiple logistic regression models without biochemical parameters to rapidly identify MetS in patients receiving SGA treatment. A total of 383 patients with a diagnosis of schizophrenia or schizoaffective disorder (DSM-IV criteria) and SGA treatment for more than 6 months were investigated to determine whether they met the MetS criteria of the International Diabetes Federation. The data for these patients were collected between March 2005 and September 2005. The input variables of the ANN and logistic regression were limited to demographic and anthropometric data only. All models were trained by randomly selecting two-thirds of the patient data and were internally validated with the remaining one-third. The models were then externally validated with data from 69 patients from another hospital, collected between March 2008 and June 2008. The area under the receiver operating characteristic curve (AUC) was used to measure the performance of all models. Both the final ANN and logistic regression models had high accuracy (88.3% vs 83.6%), sensitivity (93.1% vs 86.2%), and specificity (86.9% vs 83.8%) for identifying MetS in the internal validation set. The mean ± SD AUC was high for both the ANN and logistic regression models (0.934 ± 0.033 vs 0.922 ± 0.035, P = .63). During external validation, high AUCs were still obtained for both models. Waist circumference and diastolic blood pressure were the common variables retained in the final ANN and logistic regression models. Our study developed accurate ANN and logistic regression models to detect MetS in patients with SGA treatment. The models are likely to provide a noninvasive tool for large-scale screening of MetS in this group of patients.
(c) 2010 Physicians Postgraduate Press, Inc.

  12. Effects of Ignoring Item Interaction on Item Parameter Estimation and Detection of Interacting Items

    ERIC Educational Resources Information Center

    Chen, Cheng-Te; Wang, Wen-Chung

    2007-01-01

    This study explores the effects of ignoring item interaction on item parameter estimation and the efficiency of using the local dependence index Q[subscript 3] and the SAS NLMIXED procedure to detect item interaction under the three-parameter logistic model and the generalized partial credit model. Through simulations, it was found that ignoring…

  13. Estimation of key parameters in adaptive neuron model according to firing patterns based on improved particle swarm optimization algorithm

    NASA Astrophysics Data System (ADS)

    Yuan, Chunhua; Wang, Jiang; Yi, Guosheng

    2017-03-01

    Estimation of ion channel parameters is crucial to the spike initiation of neurons. Biophysical neuron models have numerous ion channel parameters, but only a few of them play key roles in the firing patterns of the models, so we choose three parameters featuring adaptation in the Ermentrout neuron model to be estimated. However, the traditional particle swarm optimization (PSO) algorithm easily falls into local optima and exhibits premature convergence on some problems. In this paper, we propose an improved method that combines a concave function with a dynamic logistic chaotic map to adjust the inertia weights according to the fitness values, effectively improving the global convergence of the algorithm. The accurately predicted firing trajectories of the rebuilt model using the estimated parameters show that estimating a few important ion channel parameters suffices to establish the model, and that the proposed algorithm is effective. Estimates from two classic PSO algorithms are also compared with the improved PSO to verify that the proposed algorithm avoids local optima and quickly converges to the optimal value. The results provide important theoretical foundations for building biologically realistic neuron models.
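    A minimal sketch of a PSO variant whose inertia weight is driven by a logistic chaotic map; the weight range, acceleration constants, velocity clamp, and toy objective are our assumptions, not the paper's tuned algorithm:

```python
import numpy as np

rng = np.random.default_rng(4)

def objective(x):                 # toy stand-in for the firing-pattern error surface
    return np.sum(x ** 2, axis=1)

P, D, iters = 30, 3, 200          # particles, parameters to estimate, iterations
pos = rng.uniform(-5, 5, (P, D))
vel = np.zeros((P, D))
pbest, pbest_val = pos.copy(), objective(pos)
gbest = pbest[np.argmin(pbest_val)].copy()

z = 0.37                          # state of the logistic chaotic map
for _ in range(iters):
    z = 4.0 * z * (1.0 - z)       # chaotic sequence driving the inertia weight
    w = 0.4 + 0.5 * z             # inertia varies chaotically in (0.4, 0.9)
    r1, r2 = rng.random((P, D)), rng.random((P, D))
    vel = np.clip(w * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos), -2, 2)
    pos = pos + vel
    val = objective(pos)
    better = val < pbest_val
    pbest[better], pbest_val[better] = pos[better], val[better]
    gbest = pbest[np.argmin(pbest_val)].copy()

best_val = float(objective(gbest[None])[0])
print(best_val)                   # near the global minimum at the origin
```

The chaotic inertia sequence keeps the swarm from settling into a fixed exploration/exploitation balance, which is the intuition behind such chaos-assisted PSO variants.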

  14. Maximum sustainable yield estimates of Ladypees, Sillago sihama (Forsskål), fishery in Pakistan using the ASPIC and CEDA packages

    NASA Astrophysics Data System (ADS)

    Panhwar, Sher Khan; Liu, Qun; Khan, Fozia; Siddiqui, Pirzada J. A.

    2012-03-01

    Using the surplus production model packages ASPIC (a stock-production model incorporating covariates) and CEDA (catch-effort data analysis), we analyzed the catch and effort data of the Sillago sihama fishery in Pakistan. ASPIC estimates the parameters MSY (maximum sustainable yield), Fmsy (fishing mortality at MSY), q (catchability coefficient), K (carrying capacity, or unexploited biomass), and B1/K (the ratio of initial biomass to carrying capacity). The estimated non-bootstrapped MSY based on the logistic model was 598 t and that based on the Fox model was 415 t, showing that the Fox model estimate was more conservative than the logistic one. The R² with the logistic model (0.702) was larger than that with the Fox model (0.541), indicating a better fit. The coefficient of variation (cv) of the estimated MSY was about 0.3, except for one larger value (88.87) and one smaller value (0.173). In contrast to the ASPIC results, the R² with the Fox model (0.651-0.692) was larger than that with the Schaefer model (0.435-0.567), indicating a better fit. The key parameters of CEDA are MSY, K, q, and r (intrinsic growth rate), and the three error assumptions used with the models are normal, log-normal, and gamma. Parameter estimates from the Schaefer and Pella-Tomlinson models were similar: their MSY estimates were 398 t, 549 t, and 398 t for the normal, log-normal, and gamma error distributions, respectively. The MSY estimates from the Fox model were 381 t, 366 t, and 366 t for the same three error assumptions; the Fox model estimates were smaller than those from the Schaefer and Pella-Tomlinson models. In light of the MSY estimates of 415 t from ASPIC and 381 t from CEDA for the Fox model, MSY for S. sihama is about 400 t. As the catch in 2003 was 401 t, we suggest the fishery be kept at the current level.
Production models used here depend on the assumption that CPUE (catch per unit effort) data used in the study can reliably quantify temporal variability in population abundance, hence the modeling results would be wrong if such an assumption is not met. Because the reliability of this CPUE data in indexing fish population abundance is unknown, we should be cautious with the interpretation and use of the derived population and management parameters.
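    For reference, the analytic MSY turning points implied by the two surplus production curves can be checked numerically; r and K below are illustrative, not the fitted values for S. sihama, and the Fox form used is one common parameterization with its production maximum at B = K/e:

```python
import numpy as np

r, K = 0.4, 4000.0                     # illustrative intrinsic growth rate and capacity (t)
B = np.linspace(1.0, K, 200000)        # biomass grid

schaefer = r * B * (1.0 - B / K)       # logistic (Schaefer) surplus production
fox = r * B * np.log(K / B)            # one common Fox-type parameterization

msy_schaefer = float(schaefer.max())   # analytic value r*K/4, attained at B = K/2
msy_fox = float(fox.max())             # analytic value r*K/e, attained at B = K/e
print(msy_schaefer, msy_fox)
```

The empirical ranking of the two models' MSY estimates reported above comes from fitting each model its own (r, K); the sketch only shows where each curve peaks for a common parameter pair.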

  15. Mixture models for undiagnosed prevalent disease and interval-censored incident disease: applications to a cohort assembled from electronic health records.

    PubMed

    Cheung, Li C; Pan, Qing; Hyun, Noorie; Schiffman, Mark; Fetterman, Barbara; Castle, Philip E; Lorey, Thomas; Katki, Hormuzd A

    2017-09-30

    For cost-effectiveness and efficiency, many large-scale general-purpose cohort studies are being assembled within large health-care providers who use electronic health records. Two key features of such data are that incident disease is interval-censored between irregular visits and that there can be pre-existing (prevalent) disease. Because prevalent disease is not always immediately diagnosed, some disease diagnosed at later visits is actually undiagnosed prevalent disease. We treat prevalent disease as a point mass at time zero for clinical applications in which there is no interest in the time of prevalent disease onset. We demonstrate that the naive Kaplan-Meier cumulative risk estimator underestimates risks at early time points and overestimates later risks. We propose a general family of mixture models for undiagnosed prevalent disease and interval-censored incident disease that we call prevalence-incidence models. Parameters for parametric prevalence-incidence models, such as the logistic regression and Weibull survival (logistic-Weibull) model, are estimated by direct likelihood maximization or by the EM algorithm. Non-parametric methods are proposed to calculate cumulative risks for cases without covariates. We compare naive Kaplan-Meier, logistic-Weibull, and non-parametric estimates of cumulative risk in the cervical cancer screening program at Kaiser Permanente Northern California. Kaplan-Meier provided poor estimates, while the logistic-Weibull model was a close fit to the non-parametric estimates. Our findings support our use of logistic-Weibull models to develop the risk estimates that underlie current US risk-based cervical cancer screening guidelines. Published 2017. This article has been contributed to by US Government employees and their work is in the public domain in the USA.
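    The mixture structure described above, a point mass of undiagnosed prevalent disease at time zero plus a Weibull incident component, gives a cumulative risk that is easy to write down; the parameter values below are illustrative, not the fitted Kaiser Permanente estimates:

```python
import math

def cumulative_risk(t, p, scale, shape):
    # point mass p of undiagnosed prevalent disease at t = 0, plus a Weibull
    # survival component for interval-censored incident disease
    return p + (1.0 - p) * (1.0 - math.exp(-((t / scale) ** shape)))

p = 1.0 / (1.0 + math.exp(2.2))        # prevalence from a logistic intercept of -2.2
risks = [cumulative_risk(t, p, scale=8.0, shape=1.3) for t in (0, 1, 3, 5, 10)]
print([round(v, 4) for v in risks])    # jumps to p at t = 0, then rises smoothly
```

The jump at t = 0 is exactly what the naive Kaplan-Meier estimator misses, which is why it underestimates early risk.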

  16. Comparison of the binary logistic and skewed logistic (Scobit) models of injury severity in motor vehicle collisions.

    PubMed

    Tay, Richard

    2016-03-01

    The binary logistic model has been extensively used to analyze traffic collision and injury data where the outcome of interest has two categories. However, the assumption of a symmetric distribution may not be desirable in some cases, especially when there is a significant imbalance between the two outcome categories. This study compares the standard binary logistic model with the skewed logistic (scobit) model in two cases, in one of which the symmetry assumption is violated. The differences in the estimates, and thus in the marginal effects obtained, are significant when the assumption of symmetry is violated. Copyright © 2015 Elsevier Ltd. All rights reserved.
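    One standard form of the skewed logistic (scobit) link is F(x) = 1 - (1 + e^x)^(-alpha), which reduces to the logistic CDF at alpha = 1; the quick check below shows how the symmetry property F(-x) = 1 - F(x) holds for the logistic but fails for alpha != 1:

```python
import math

def logistic_cdf(x):
    return 1.0 / (1.0 + math.exp(-x))

def scobit_cdf(x, alpha):
    # skewed logistic (scobit): alpha = 1 recovers the symmetric logistic link
    return 1.0 - (1.0 + math.exp(x)) ** (-alpha)

x = 1.5
print(logistic_cdf(-x) + logistic_cdf(x))         # 1.0: the logistic link is symmetric
print(scobit_cdf(-x, 0.5) + scobit_cdf(x, 0.5))   # not 1.0: the scobit link is skewed
```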

  17. What IRT Can and Cannot Do

    ERIC Educational Resources Information Center

    Glas, Cees A. W.

    2009-01-01

    This author states that, while the article by Gunter Maris and Timo Bechger ("On Interpreting the Model Parameters for the Three Parameter Logistic Model," this issue) is highly interesting, the interest is not so much in the practical implications, but rather in the issue of the meaning and role of statistical models in psychometrics and…

  18. Artificial neural networks predict the incidence of portosplenomesenteric venous thrombosis in patients with acute pancreatitis.

    PubMed

    Fei, Y; Hu, J; Li, W-Q; Wang, W; Zong, G-Q

    2017-03-01

    Essentials Predicting the occurrence of portosplenomesenteric vein thrombosis (PSMVT) is difficult. We studied 72 patients with acute pancreatitis. Artificial neural network modeling was more accurate than logistic regression in predicting PSMVT. Additional predictive factors may be incorporated into artificial neural networks. Objective To construct and validate artificial neural networks (ANNs) for predicting the occurrence of portosplenomesenteric venous thrombosis (PSMVT) and to compare the predictive ability of the ANNs with that of logistic regression. Methods The ANN and logistic regression models were constructed using simple clinical and laboratory data from 72 acute pancreatitis (AP) patients. The ANN and logistic models were first trained on 48 randomly chosen patients and validated on the remaining 24 patients. Accuracy and performance characteristics were compared between the two approaches using SPSS 17.0. Results The training set and validation set did not differ on any of the 11 variables. After training, the back-propagation network training error converged to 1 × 10⁻²⁰, and the network retained excellent pattern recognition ability. When the ANN model was applied to the validation set, it showed a sensitivity of 80%, specificity of 85.7%, positive predictive value of 77.6%, and negative predictive value of 90.7%. The accuracy was 83.3%. Differences were found between ANN and logistic regression modeling in these parameters (10.0% [95% CI, -14.3 to 34.3%], 14.3% [95% CI, -8.6 to 37.2%], 15.7% [95% CI, -9.9 to 41.3%], 11.8% [95% CI, -8.2 to 31.8%], and 22.6% [95% CI, -1.9 to 47.1%], respectively). When ANN modeling was used to identify PSMVT, the area under the receiver operating characteristic curve was 0.849 (95% CI, 0.807-0.901), demonstrating better overall properties than logistic regression modeling (AUC = 0.716; 95% CI, 0.679-0.761). Conclusions ANN modeling was a more accurate tool than logistic regression for predicting the occurrence of PSMVT following AP. More clinical factors or biomarkers may be incorporated into ANN modeling to improve its predictive ability. © 2016 International Society on Thrombosis and Haemostasis.

  19. Modeling the pressure inactivation of Escherichia coli and Salmonella typhimurium in sapote mamey ( Pouteria sapota (Jacq.) H.E. Moore & Stearn) pulp.

    PubMed

    Saucedo-Reyes, Daniela; Carrillo-Salazar, José A; Román-Padilla, Lizbeth; Saucedo-Veloz, Crescenciano; Reyes-Santamaría, María I; Ramírez-Gilly, Mariana; Tecante, Alberto

    2018-03-01

    High hydrostatic pressure inactivation kinetics of Escherichia coli ATCC 25922 and Salmonella enterica subsp. enterica serovar Typhimurium ATCC 14028 (S. typhimurium) in a low-acid mamey pulp were obtained at four pressure levels (300, 350, 400, and 450 MPa), different exposure times (0-8 min), and a temperature of 25 ± 2 °C. Survival curves showed deviations from linearity in the form of a tail (upward concavity). The primary models tested were the Weibull model, the modified Gompertz equation, and the biphasic model. The Weibull model gave the best goodness of fit (R²adj > 0.956, root mean square error < 0.290) and the lowest Akaike information criterion value. Exponential-logistic and exponential decay models, and a Bigelow-type and an empirical model for the b'(P) and n(P) parameters, respectively, were tested as alternative secondary models. The process validation considered two- and one-step nonlinear regressions for predicting the survival fraction; both regression types provided adequate goodness of fit, and the one-step nonlinear regression clearly reduced fitting errors. The best candidate model according to Akaike information theory, with better accuracy and more reliable predictions, was the Weibull model combined with the exponential-logistic and exponential decay secondary models as functions of time and pressure (two-step procedure) or incorporated as one equation (one-step procedure). Both mathematical expressions were used to determine the t_d parameter, with d = 5 (t_5) as the criterion of a 5-log10 (5D) reduction; the desired reductions in both microorganisms are attainable at 400 MPa for 5.487 ± 0.488 or 5.950 ± 0.329 min for the one- or two-step nonlinear procedure, respectively.
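    The Weibull primary model with shape n < 1 produces exactly the tailing (upward concavity) described above; a short sketch with hypothetical parameters (not the paper's fitted values) shows how a t_d such as the 5D time follows from log10(N/N0) = -b * t^n:

```python
def log10_survivors(t, b, n):
    # Weibull primary model: log10(N/N0) = -b * t**n (n < 1 gives tailing)
    return -b * t ** n

def time_for_reduction(d, b, n):
    # time at which a d-log10 reduction is reached: t_d = (d/b)**(1/n)
    return (d / b) ** (1.0 / n)

b, n = 2.1, 0.55                   # hypothetical parameters for one pressure level
t5 = time_for_reduction(5.0, b, n)
print(round(t5, 3), log10_survivors(t5, b, n))   # second value is -5 by construction
```

In the paper's two-step procedure, b and n would themselves be functions of pressure via the secondary models before t_5 is computed.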

  20. Composing chaotic music from the letter m

    NASA Astrophysics Data System (ADS)

    Sotiropoulos, Anastasios D.

    Chaotic music is composed from a proposed iterative map depicting the letter m, relating the pitch, duration and loudness of successive steps. Each of the two curves of the letter m is based on the classical logistic map. Thus, the generating map is xn+1 = r xn(1/2 - xn) for xn between 0 and 1/2 defining the first curve, and xn+1 = r (xn - 1/2)(1 - xn) for xn between 1/2 and 1 representing the second curve. The parameter r which determines the height(s) of the letter m varies from 2 to 16, the latter value ensuring fully developed chaotic solutions for the whole letter m; r = 8 yielding full chaotic solutions only for its first curve. The m-model yields fixed points, bifurcation points and chaotic regions for each separate curve, as well as values of the parameter r greater than 8 which produce inter-fixed points, inter-bifurcation points and inter-chaotic regions from the interplay of the two curves. Based on this, music is composed from mapping the m-recurrence model solutions onto actual notes. The resulting musical score strongly depends on the sequence of notes chosen by the composer to define the musical range corresponding to the range of the chaotic mathematical solutions x from 0 to 1. Here, two musical ranges are used; one is the middle chromatic scale and the other is the seven-octave range. At the composer's will and, for aesthetics, within the same composition, notes can be the outcome of different values of r and/or shifted in any octave. Compositions with endings of non-repeating note patterns result from values of r in the m-model that do not produce bifurcations. Scores of chaotic music composed from the m-model and the classical logistic model are presented.
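    The two-branch map is stated explicitly above, so it can be iterated directly; the mapping of orbit values onto the middle chromatic scale below is one simple choice of note assignment for illustration, not the authors' exact scheme:

```python
def m_map(x, r):
    # the two-branch "letter m" map: a logistic-type hump on each half interval
    if x <= 0.5:
        return r * x * (0.5 - x)
    return r * (x - 0.5) * (1.0 - x)

def orbit(x0, r, n):
    xs = [x0]
    for _ in range(n):
        xs.append(m_map(xs[-1], r))
    return xs

notes = "C C# D D# E F F# G G# A A# B".split()   # middle chromatic scale
xs = orbit(0.3, 16.0, 32)                        # r = 16: fully developed chaos
melody = [notes[min(int(x * 12), 11)] for x in xs]
print(melody[:8])
```

At r = 16 each hump peaks at exactly 1 (r/16 at x = 1/4 and x = 3/4), so the orbit stays in [0, 1] and every value maps to a note.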

  1. Bayesian Estimation in the One-Parameter Latent Trait Model.

    DTIC Science & Technology

    1980-03-01

    Journal of Mathematical and Statistical Psychology, 1973, 26, 31-44. (a) Andersen, E. B. A goodness of fit test for the Rasch model. Psychometrika, 1973, 28...technique for estimating latent trait mental test parameters. Educational and Psychological Measurement, 1976, 36, 705-715. Lindley, D. V. The...Lord, F. M. An analysis of the verbal Scholastic Aptitude Test using Birnbaum's three-parameter logistic model. Educational and Psychological

  2. Comparing Three Estimation Methods for the Three-Parameter Logistic IRT Model

    ERIC Educational Resources Information Center

    Lamsal, Sunil

    2015-01-01

    Different estimation procedures have been developed for the unidimensional three-parameter item response theory (IRT) model. These techniques include the marginal maximum likelihood estimation, the fully Bayesian estimation using Markov chain Monte Carlo simulation techniques, and the Metropolis-Hastings Robbin-Monro estimation. With each…

  3. Addressing data privacy in matched studies via virtual pooling.

    PubMed

    Saha-Chaudhuri, P; Weinberg, C R

    2017-09-07

    Data confidentiality and shared use of research data are two desirable but sometimes conflicting goals in research with multi-center studies and distributed data. Although a single dataset including the covariate information of all participants would be ideal for straightforward analysis, confidentiality restrictions forbid its creation. Current approaches such as aggregate data sharing, distributed regression, meta-analysis and score-based methods can have important limitations. We propose a novel application of an existing epidemiologic tool, specimen pooling, to enable confidentiality-preserving analysis of data arising from a matched case-control, multi-center design. Instead of pooling specimens prior to assay, we apply the methodology to virtually pool (aggregate) covariates within nodes. Such virtual pooling retains most of the information used in an analysis with individual data, and since individual participant data are not shared externally, within-node virtual pooling preserves data confidentiality. We show that aggregated covariate levels can be used in a conditional logistic regression model to estimate individual-level odds ratios of interest. The parameter estimates from the standard conditional logistic regression are compared to estimates based on a conditional logistic regression model with aggregated data; they are shown to be similar to those without pooling and to have comparable standard errors and confidence interval coverage. Virtual data pooling can be used to maintain the confidentiality of data from multi-center studies and can be particularly useful in research with large-scale distributed data.

  4. A modelling approach to vaccination and contraception programmes for rabies control in fox populations.

    PubMed Central

    Suppo, C; Naulin, J M; Langlais, M; Artois, M

    2000-01-01

    In a previous study, three of the authors designed a one-dimensional model to simulate the propagation of rabies within a growing fox population; the influence of various parameters on the epidemic model was studied, including oral-vaccination programmes. In this work, a two-dimensional model of a fox population having either an exponential or a logistic growth pattern was considered. Using numerical simulations, the efficiencies of two prophylactic methods (fox contraception and vaccination against rabies) were assessed, used either separately or jointly. It was concluded that far lower rates of administration are necessary to eradicate rabies, and that the undesirable side-effects of each programme disappear, when both are used together. PMID:11007334

  5. Fisher Scoring Method for Parameter Estimation of Geographically Weighted Ordinal Logistic Regression (GWOLR) Model

    NASA Astrophysics Data System (ADS)

    Widyaningsih, Purnami; Retno Sari Saputro, Dewi; Nugrahani Putri, Aulia

    2017-06-01

    The GWOLR model combines the geographically weighted regression (GWR) and ordinal logistic regression (OLR) models. Its parameters are estimated by maximum likelihood, which yields a difficult-to-solve system of nonlinear equations, so a numerical approximation approach is required. The iterative approach generally used is the Newton-Raphson (NR) method. The NR method has a disadvantage: its Hessian matrix of second derivatives must be recomputed at every iteration, and the iteration does not always converge. The NR method is therefore modified by replacing its Hessian matrix with the Fisher information matrix, a scheme termed Fisher scoring (FS). The present research seeks to determine GWOLR model parameter estimates using the Fisher scoring method and to apply the estimation to data on the level of vulnerability to Dengue Hemorrhagic Fever (DHF) in Semarang. The research concludes that health facilities make the greatest contribution to the probability of the number of DHF sufferers in both villages. Based on the number of sufferers, the IR category of DHF in both villages can be determined.
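
    The Fisher scoring update can be illustrated on the simpler case of ordinary binary logistic regression, where the expected (Fisher) and observed information coincide; the GWOLR case adds geographic weights and ordinal thresholds, which this sketch omits.

```python
import numpy as np

def fisher_scoring_logistic(X, y, iters=30):
    """Fisher scoring for binary logistic regression:
    beta <- beta + I(beta)^{-1} U(beta),
    with score U = X'(y - p) and Fisher information I = X'WX,
    where W = diag(p * (1 - p))."""
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-X @ beta))
        W = p * (1.0 - p)
        score = X.T @ (y - p)                # U(beta)
        info = X.T @ (X * W[:, None])        # I(beta)
        beta = beta + np.linalg.solve(info, score)
    return beta

# Worked example with one binary covariate:
# group x=0 has 3 successes in 10 trials, group x=1 has 7 in 10.
X = np.column_stack([np.ones(20), np.repeat([0.0, 1.0], 10)])
y = np.concatenate([np.ones(3), np.zeros(7), np.ones(7), np.zeros(3)])
beta_hat = fisher_scoring_logistic(X, y)
# For saturated group data the MLE is beta0 = log(3/7), beta1 = 2*log(7/3).
```

    Because the group structure is saturated, the fitted probabilities must equal the group proportions, which pins down the maximum likelihood estimates in closed form and makes the convergence easy to verify.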

  6. Consistency of Rasch Model Parameter Estimation: A Simulation Study.

    ERIC Educational Resources Information Center

    van den Wollenberg, Arnold L.; And Others

    1988-01-01

    The unconditional--simultaneous--maximum likelihood (UML) estimation procedure for the one-parameter logistic model produces biased estimators. The UML method is inconsistent and is not a good alternative to the conditional maximum likelihood method, at least with small numbers of items. The minimum chi-square estimation procedure produces unbiased…

  7. Logistic regression for circular data

    NASA Astrophysics Data System (ADS)

    Al-Daffaie, Kadhem; Khan, Shahjahan

    2017-05-01

    This paper considers the relationship between a binary response and a circular predictor. It develops the logistic regression model by employing the linear-circular regression approach. The maximum likelihood method is used to estimate the parameters. The Newton-Raphson numerical method is used to find the estimated values of the parameters. A data set from weather records of Toowoomba city is analysed by the proposed methods. Moreover, a simulation study is considered. The R software is used for all computations and simulations.
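
    A common way to implement the linear-circular approach is to embed the circular predictor through its cosine and sine, then fit by Newton-Raphson as the abstract describes. A minimal sketch, using hypothetical simulated data rather than the Toowoomba weather records:

```python
import numpy as np

def fit_circular_logistic(theta, y, iters=30):
    """Binary logistic regression with a circular predictor theta (radians):
    logit P(y = 1) = b0 + b1*cos(theta) + b2*sin(theta),
    fitted by Newton-Raphson iterations."""
    X = np.column_stack([np.ones_like(theta), np.cos(theta), np.sin(theta)])
    beta = np.zeros(3)
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-X @ beta))
        W = p * (1.0 - p)
        beta = beta + np.linalg.solve(X.T @ (X * W[:, None]), X.T @ (y - p))
    return beta

# Hypothetical data: a direction-dependent binary outcome (e.g. rain yes/no).
rng = np.random.default_rng(0)
theta = rng.uniform(0.0, 2.0 * np.pi, 500)
p_true = 1.0 / (1.0 + np.exp(-(0.3 + 1.5 * np.cos(theta))))
y = (rng.uniform(size=500) < p_true).astype(float)
beta_hat = fit_circular_logistic(theta, y)
```

    The cos/sin embedding keeps the fitted probability periodic in the direction, which is the essential requirement for a circular covariate.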

  8. Sensitivity study of Space Station Freedom operations cost and selected user resources

    NASA Technical Reports Server (NTRS)

    Accola, Anne; Fincannon, H. J.; Williams, Gregory J.; Meier, R. Timothy

    1990-01-01

    The results of sensitivity studies performed to estimate probable ranges for four key Space Station parameters using the Space Station Freedom's Model for Estimating Space Station Operations Cost (MESSOC) are discussed. The variables examined are grouped into five main categories: logistics, crew, design, space transportation system, and training. The modification of these variables implies programmatic decisions in areas such as orbital replacement unit (ORU) design, investment in repair capabilities, and crew operations policies. The model utilizes a wide range of algorithms and an extensive trial logistics data base to represent Space Station operations. The trial logistics data base consists largely of a collection of the ORUs that comprise the mature station, and their characteristics based on current engineering understanding of the Space Station. A nondimensional approach is used to examine the relative importance of variables on parameters.

  9. The Mantel-Haenszel procedure revisited: models and generalizations.

    PubMed

    Fidler, Vaclav; Nagelkerke, Nico

    2013-01-01

    Several statistical methods have been developed for adjusting the Odds Ratio of the relation between two dichotomous variables X and Y for some confounders Z. With the exception of the Mantel-Haenszel method, commonly used methods, notably binary logistic regression, are not symmetrical in X and Y. The classical Mantel-Haenszel method however only works for confounders with a limited number of discrete strata, which limits its utility, and appears to have no basis in statistical models. Here we revisit the Mantel-Haenszel method and propose an extension to continuous and vector valued Z. The idea is to replace the observed cell entries in strata of the Mantel-Haenszel procedure by subject specific classification probabilities for the four possible values of (X,Y) predicted by a suitable statistical model. For situations where X and Y can be treated symmetrically we propose and explore the multinomial logistic model. Under the homogeneity hypothesis, which states that the odds ratio does not depend on Z, the logarithm of the odds ratio estimator can be expressed as a simple linear combination of three parameters of this model. Methods for testing the homogeneity hypothesis are proposed. The relationship between this method and binary logistic regression is explored. A numerical example using survey data is presented.
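
    The classical Mantel-Haenszel estimator that the paper generalizes can be sketched as:

```python
def mantel_haenszel_or(tables):
    """Classical Mantel-Haenszel common odds ratio across strata of Z.
    Each stratum is a 2x2 table of counts (a, b, c, d):
        a = #(X=1, Y=1), b = #(X=1, Y=0), c = #(X=0, Y=1), d = #(X=0, Y=0).
    OR_MH = sum_k(a_k*d_k/n_k) / sum_k(b_k*c_k/n_k), with n_k the stratum size."""
    num = sum(a * d / (a + b + c + d) for a, b, c, d in tables)
    den = sum(b * c / (a + b + c + d) for a, b, c, d in tables)
    return num / den
```

    With a single stratum this reduces to the ordinary odds ratio ad/bc, and it is symmetric in X and Y; the paper's extension replaces the observed cell counts by model-predicted classification probabilities for the four values of (X, Y).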

  10. The Mantel-Haenszel Procedure Revisited: Models and Generalizations

    PubMed Central

    Fidler, Vaclav; Nagelkerke, Nico

    2013-01-01

    Several statistical methods have been developed for adjusting the Odds Ratio of the relation between two dichotomous variables X and Y for some confounders Z. With the exception of the Mantel-Haenszel method, commonly used methods, notably binary logistic regression, are not symmetrical in X and Y. The classical Mantel-Haenszel method however only works for confounders with a limited number of discrete strata, which limits its utility, and appears to have no basis in statistical models. Here we revisit the Mantel-Haenszel method and propose an extension to continuous and vector valued Z. The idea is to replace the observed cell entries in strata of the Mantel-Haenszel procedure by subject specific classification probabilities for the four possible values of (X,Y) predicted by a suitable statistical model. For situations where X and Y can be treated symmetrically we propose and explore the multinomial logistic model. Under the homogeneity hypothesis, which states that the odds ratio does not depend on Z, the logarithm of the odds ratio estimator can be expressed as a simple linear combination of three parameters of this model. Methods for testing the homogeneity hypothesis are proposed. The relationship between this method and binary logistic regression is explored. A numerical example using survey data is presented. PMID:23516463

  11. Estimation of Two-Parameter Logistic Item Response Curves. Research Report 83-1. Mathematical Sciences Technical Report No. 130.

    ERIC Educational Resources Information Center

    Tsutakawa, Robert K.

    This paper presents a method for estimating certain characteristics of test items which are designed to measure ability, or knowledge, in a particular area. Under the assumption that ability parameters are sampled from a normal distribution, the EM algorithm is used to derive maximum likelihood estimates to item parameters of the two-parameter…

  12. An Optimal Hierarchical Decision Model for a Regional Logistics Network with Environmental Impact Consideration

    PubMed Central

    Zhang, Dezhi; Li, Shuangyan

    2014-01-01

    This paper proposes a new model of simultaneous optimization of three-level logistics decisions, for logistics authorities, logistics operators, and logistics users, for regional logistics network with environmental impact consideration. The proposed model addresses the interaction among the three logistics players in a complete competitive logistics service market with CO2 emission charges. We also explicitly incorporate the impacts of the scale economics of the logistics park and the logistics users' demand elasticity into the model. The logistics authorities aim to maximize the total social welfare of the system, considering the demand of green logistics development by two different methods: optimal location of logistics nodes and charging a CO2 emission tax. Logistics operators are assumed to compete with logistics service fare and frequency, while logistics users minimize their own perceived logistics disutility given logistics operators' service fare and frequency. A heuristic algorithm based on the multinomial logit model is presented for the three-level decision model, and a numerical example is given to illustrate the above optimal model and its algorithm. The proposed model provides a useful tool for modeling competitive logistics services and evaluating logistics policies at the strategic level. PMID:24977209

  13. An optimal hierarchical decision model for a regional logistics network with environmental impact consideration.

    PubMed

    Zhang, Dezhi; Li, Shuangyan; Qin, Jin

    2014-01-01

    This paper proposes a new model of simultaneous optimization of three-level logistics decisions, for logistics authorities, logistics operators, and logistics users, for regional logistics network with environmental impact consideration. The proposed model addresses the interaction among the three logistics players in a complete competitive logistics service market with CO2 emission charges. We also explicitly incorporate the impacts of the scale economics of the logistics park and the logistics users' demand elasticity into the model. The logistics authorities aim to maximize the total social welfare of the system, considering the demand of green logistics development by two different methods: optimal location of logistics nodes and charging a CO2 emission tax. Logistics operators are assumed to compete with logistics service fare and frequency, while logistics users minimize their own perceived logistics disutility given logistics operators' service fare and frequency. A heuristic algorithm based on the multinomial logit model is presented for the three-level decision model, and a numerical example is given to illustrate the above optimal model and its algorithm. The proposed model provides a useful tool for modeling competitive logistics services and evaluating logistics policies at the strategic level.
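
    The heuristic algorithm above is based on the multinomial logit model, whose core choice-probability formula can be sketched as follows; the utilities are hypothetical stand-ins for the (negative) perceived logistics disutilities.

```python
import math

def logit_shares(utilities, scale=1.0):
    """Multinomial logit choice probabilities:
    P_i = exp(scale * V_i) / sum_j exp(scale * V_j)."""
    exps = [math.exp(scale * v) for v in utilities]
    total = sum(exps)
    return [e / total for e in exps]

# Logistics users splitting among three operators: lower disutility
# (higher utility V) attracts a larger share of demand.
shares = logit_shares([-1.0, -1.5, -2.0])
```

    The shares always sum to one, and the `scale` parameter controls how sharply demand concentrates on the best alternative.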

  14. Sensitivity analysis of the electrostatic force distance curve using Sobol’s method and design of experiments

    NASA Astrophysics Data System (ADS)

    Alhossen, I.; Villeneuve-Faure, C.; Baudoin, F.; Bugarin, F.; Segonds, S.

    2017-01-01

    Previous studies have demonstrated that the electrostatic force distance curve (EFDC) is a relevant way of probing injected charge in 3D. However, the EFDC needs a thorough investigation to be accurately analyzed and to provide information about charge localization. Interpreting the EFDC in terms of charge distribution is not straightforward from an experimental point of view. In this paper, a sensitivity analysis of the EFDC is studied using buried electrodes as a first approximation. In particular, the influence of input factors such as the electrode width, depth and applied potential are investigated. To reach this goal, the EFDC is fitted to a law described by four parameters, called logistic law, and the influence of the electrode parameters on the law parameters has been investigated. Then, two methods are applied—Sobol’s method and the factorial design of experiment—to quantify the effect of each factor on each parameter of the logistic law. Complementary results are obtained from both methods, demonstrating that the EFDC is not the result of the superposition of the contribution of each electrode parameter, but that it exhibits a strong contribution from electrode parameter interaction. Furthermore, thanks to these results, a matricial model has been developed to predict EFDCs for any combination of electrode characteristics. A good correlation is observed with the experiments, and this is promising for charge investigation using an EFDC.

  15. Guessing and the Rasch Model

    ERIC Educational Resources Information Center

    Holster, Trevor A.; Lake, J.

    2016-01-01

    Stewart questioned Beglar's use of Rasch analysis of the Vocabulary Size Test (VST) and advocated the use of 3-parameter logistic item response theory (3PLIRT) on the basis that it models a non-zero lower asymptote for items, often called a "guessing" parameter. In support of this theory, Stewart presented fit statistics derived from…

  16. Hierarchical Bayesian Logistic Regression to forecast metabolic control in type 2 DM patients.

    PubMed

    Dagliati, Arianna; Malovini, Alberto; Decata, Pasquale; Cogni, Giulia; Teliti, Marsida; Sacchi, Lucia; Cerra, Carlo; Chiovato, Luca; Bellazzi, Riccardo

    2016-01-01

    In this work we present our efforts in building a model able to forecast patients' changes in clinical conditions when repeated measurements are available. In this case the available risk calculators are typically not applicable. We propose a Hierarchical Bayesian Logistic Regression model, which allows taking into account individual and population variability in model parameters estimate. The model is used to predict metabolic control and its variation in type 2 diabetes mellitus. In particular we have analyzed a population of more than 1000 Italian type 2 diabetic patients, collected within the European project Mosaic. The results obtained in terms of Matthews Correlation Coefficient are significantly better than the ones gathered with standard logistic regression model, based on data pooling.

  17. Planning the City Logistics Terminal Location by Applying the Green p-Median Model and Type-2 Neurofuzzy Network

    PubMed Central

    Pamučar, Dragan; Vasin, Ljubislav; Atanasković, Predrag; Miličić, Milica

    2016-01-01

    The paper herein presents green p-median problem (GMP) which uses the adaptive type-2 neural network for the processing of environmental and sociological parameters including costs of logistics operators and demonstrates the influence of these parameters on planning the location for the city logistics terminal (CLT) within the discrete network. CLT shows direct effects on increment of traffic volume especially in urban areas, which further results in negative environmental effects such as air pollution and noise as well as increased number of urban populations suffering from bronchitis, asthma, and similar respiratory infections. By applying the green p-median model (GMM), negative effects on environment and health in urban areas caused by delivery vehicles may be reduced to minimum. This model creates real possibilities for making the proper investment decisions so as profitable investments may be realized in the field of transport infrastructure. The paper herein also includes testing of GMM in real conditions on four CLT locations in Belgrade City zone. PMID:27195005

  18. Planning the City Logistics Terminal Location by Applying the Green p-Median Model and Type-2 Neurofuzzy Network.

    PubMed

    Pamučar, Dragan; Vasin, Ljubislav; Atanasković, Predrag; Miličić, Milica

    2016-01-01

    The paper herein presents green p-median problem (GMP) which uses the adaptive type-2 neural network for the processing of environmental and sociological parameters including costs of logistics operators and demonstrates the influence of these parameters on planning the location for the city logistics terminal (CLT) within the discrete network. CLT shows direct effects on increment of traffic volume especially in urban areas, which further results in negative environmental effects such as air pollution and noise as well as increased number of urban populations suffering from bronchitis, asthma, and similar respiratory infections. By applying the green p-median model (GMM), negative effects on environment and health in urban areas caused by delivery vehicles may be reduced to minimum. This model creates real possibilities for making the proper investment decisions so as profitable investments may be realized in the field of transport infrastructure. The paper herein also includes testing of GMM in real conditions on four CLT locations in Belgrade City zone.

  19. Deletion Diagnostics for Alternating Logistic Regressions

    PubMed Central

    Preisser, John S.; By, Kunthel; Perin, Jamie; Qaqish, Bahjat F.

    2013-01-01

    Deletion diagnostics are introduced for the regression analysis of clustered binary outcomes estimated with alternating logistic regressions, an implementation of generalized estimating equations (GEE) that estimates regression coefficients in a marginal mean model and in a model for the intracluster association given by the log odds ratio. The diagnostics are developed within an estimating equations framework that recasts the estimating functions for association parameters based upon conditional residuals into equivalent functions based upon marginal residuals. Extensions of earlier work on GEE diagnostics follow directly, including computational formulae for one-step deletion diagnostics that measure the influence of a cluster of observations on the estimated regression parameters and on the overall marginal mean or association model fit. The diagnostic formulae are evaluated with simulations studies and with an application concerning an assessment of factors associated with health maintenance visits in primary care medical practices. The application and the simulations demonstrate that the proposed cluster-deletion diagnostics for alternating logistic regressions are good approximations of their exact fully iterated counterparts. PMID:22777960

  20. On the logistic equation subject to uncertainties in the environmental carrying capacity and initial population density

    NASA Astrophysics Data System (ADS)

    Dorini, F. A.; Cecconello, M. S.; Dorini, L. B.

    2016-04-01

    It is recognized that handling uncertainty is essential to obtain more reliable results in modeling and computer simulation. This paper aims to discuss the logistic equation subject to uncertainties in two parameters: the environmental carrying capacity, K, and the initial population density, N0. We first provide the closed-form results for the first probability density function of time-population density, N(t), and its inflection point, t*. We then use the Maximum Entropy Principle to determine both K and N0 density functions, treating such parameters as independent random variables and considering fluctuations of their values for a situation that commonly occurs in practice. Finally, closed-form results for the density functions and statistical moments of N(t), for a fixed t > 0, and of t* are provided, considering the uniform distribution case. We carried out numerical experiments to validate the theoretical results and compared them against that obtained using Monte Carlo simulation.
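
    The closed-form logistic solution and its inflection point can be sketched as below, with a small Monte Carlo run standing in for the uniform-uncertainty case; the uniform ranges are illustrative assumptions, and the paper's closed-form density results are not reproduced here.

```python
import math
import random

def logistic_N(t, r, K, N0):
    """Closed-form solution of dN/dt = r*N*(1 - N/K):
    N(t) = K*N0*e^{rt} / (K + N0*(e^{rt} - 1))."""
    ert = math.exp(r * t)
    return K * N0 * ert / (K + N0 * (ert - 1.0))

def inflection_time(r, K, N0):
    """Inflection point t*, where N(t*) = K/2 (requires N0 < K/2)."""
    return math.log((K - N0) / N0) / r

# Monte Carlo stand-in for the uncertainty analysis:
# K ~ U(80, 120) and N0 ~ U(5, 15) are illustrative ranges, r fixed at 1.
random.seed(1)
samples = [logistic_N(2.0, 1.0, random.uniform(80, 120), random.uniform(5, 15))
           for _ in range(10000)]
mean_N = sum(samples) / len(samples)
```

    Evaluating the solution at t* returns exactly K/2, which is the defining property of the inflection point.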

  1. Survival Data and Regression Models

    NASA Astrophysics Data System (ADS)

    Grégoire, G.

    2014-12-01

    We start this chapter by introducing some basic elements for the analysis of censored survival data. Then we focus on right-censored data and develop two types of regression models. The first concerns the so-called accelerated failure time (AFT) models, which are parametric models where a function of a parameter depends linearly on the covariables. The second is a semiparametric model, where the covariables enter in multiplicative form in the expression of the hazard rate function. The main statistical tool for analysing these regression models is the maximum likelihood methodology and, although we recall some essential results of ML theory, we refer the reader to the chapter "Logistic Regression" for a more detailed presentation.

  2. Calibration and LOD/LOQ estimation of a chemiluminescent hybridization assay for residual DNA in recombinant protein drugs expressed in E. coli using a four-parameter logistic model.

    PubMed

    Lee, K R; Dipaolo, B; Ji, X

    2000-06-01

    Calibration is the process of fitting a model based on reference data points (x, y), then using the model to estimate an unknown x based on a new measured response, y. In a DNA assay, x is the concentration and y is the measured signal volume. A four-parameter logistic model has frequently been used for the calibration of immunoassays in which the response is optical density, for enzyme-linked immunosorbent assay (ELISA), or adjusted radioactivity count, for radioimmunoassay (RIA). Here, it is shown that the same model, or a linearized version of the curve, is equally useful for the calibration of a chemiluminescent hybridization assay for residual DNA in recombinant protein drugs and for calculation of performance measures of the assay.
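
    A standard four-parameter logistic calibration curve and its inverse (the calibration step that recovers concentration from a measured signal) can be sketched as follows; the parameterization is the common 4PL form and the parameter values in the test are hypothetical, not the assay's.

```python
def four_pl(x, a, b, c, d):
    """Four-parameter logistic curve: y = d + (a - d) / (1 + (x/c)**b),
    with a = response at x = 0, d = response at infinite x,
    c = mid-response concentration, b = slope factor."""
    return d + (a - d) / (1.0 + (x / c) ** b)

def invert_four_pl(y, a, b, c, d):
    """Calibration step: recover the unknown concentration x from response y."""
    return c * ((a - d) / (y - d) - 1.0) ** (1.0 / b)
```

    Round-tripping a concentration through the curve and its inverse recovers it exactly, which is the basis both for interpolating unknowns and for computing performance measures such as LOD/LOQ on the fitted curve.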

  3. Estimated harvesting on jellyfish in Sarawak

    NASA Astrophysics Data System (ADS)

    Bujang, Noriham; Hassan, Aimi Nuraida Ali

    2017-04-01

    There are three species of jellyfish recorded in Sarawak: Lobonema smithii (white jellyfish), Rhopilema esculenta (red jellyfish) and Mastigias papua. This study focused on two of them, L. smithii and R. esculenta, estimating the carrying capacity and population growth rate of both species using the logistic growth model. The maximum sustainable yield for harvesting these species was also determined. The unknown parameters in the logistic model were estimated using the central finite difference method. It was found that the carrying capacities for L. smithii and R. esculenta were 4594.9246456819 tons and 5855.9894242086 tons, respectively, while the population growth rates were estimated at 2.1800463754 and 1.144864086, respectively. Hence, the estimated maximum sustainable yields for harvesting L. smithii and R. esculenta were 2504.2872047638 tons and 1676.0779949431 tons per year, respectively.
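
    For the logistic model the maximum sustainable yield is H = rK/4, attained at N = K/2; applying this formula to the reported estimates reproduces the stated yields:

```python
def msy(r, K):
    """Maximum sustainable yield of the logistic model dN/dt = r*N*(1 - N/K):
    the largest constant harvest rate is H = r*K/4, attained at N = K/2."""
    return r * K / 4.0

# Reported Sarawak estimates (K in tons, r per year):
msy_smithii = msy(2.1800463754, 4594.9246456819)    # ~2504.29 tons/year
msy_esculenta = msy(1.144864086, 5855.9894242086)   # ~1676.08 tons/year
```

    This confirms that the study's yield figures are the logistic MSY computed from its fitted r and K.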

  4. Numerical solution of a logistic growth model for a population with Allee effect considering fuzzy initial values and fuzzy parameters

    NASA Astrophysics Data System (ADS)

    Amarti, Z.; Nurkholipah, N. S.; Anggriani, N.; Supriatna, A. K.

    2018-03-01

    Predicting future population numbers is among the important factors considered in preparing good management for a population. This has been done by various known methods, one of which is developing a mathematical model describing the growth of the population. The model usually takes the form of a differential equation or a system of differential equations, depending on the complexity of the underlying properties of the population. The most widely used growth models currently are those having sigmoid solutions in time, including the Verhulst logistic equation and the Gompertz equation. In this paper we consider the Allee effect in Verhulst's logistic population model. The Allee effect is a phenomenon in biology showing a high correlation between population size or density and the mean individual fitness of the population. The method used to derive the solution is the Runge-Kutta numerical scheme, since it is generally regarded as a good numerical scheme that is relatively easy to implement. Further exploration is done via a fuzzy theoretical approach to accommodate the impreciseness of the initial values and parameters in the model.
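
    A crisp (non-fuzzy) sketch of the Runge-Kutta scheme for a logistic model with a strong Allee effect is below. The right-hand side dN/dt = rN(1 - N/K)(N/A - 1) is one common Allee form (the paper does not spell out its exact equation), and the parameter values are illustrative assumptions.

```python
def allee_rhs(N, r, K, A):
    """Logistic growth with a strong Allee effect (one common form):
    dN/dt = r*N*(1 - N/K)*(N/A - 1); the population declines below threshold A."""
    return r * N * (1.0 - N / K) * (N / A - 1.0)

def rk4_step(f, N, dt, *args):
    """One classical fourth-order Runge-Kutta step."""
    k1 = f(N, *args)
    k2 = f(N + 0.5 * dt * k1, *args)
    k3 = f(N + 0.5 * dt * k2, *args)
    k4 = f(N + dt * k3, *args)
    return N + dt * (k1 + 2.0 * k2 + 2.0 * k3 + k4) / 6.0

def simulate(N0, r, K, A, dt=0.01, steps=2000):
    """Integrate to t = dt*steps with crisp initial value and parameters."""
    N = N0
    for _ in range(steps):
        N = rk4_step(allee_rhs, N, dt, r, K, A)
    return N

# Illustrative parameters: K = 100, Allee threshold A = 20.
# Starting above A the population tends to K; starting below A it goes extinct.
```

    The fuzzy extension of the paper would replace the crisp N0, r, K and A with fuzzy numbers and propagate them through the same scheme.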

  5. Methods for estimating drought streamflow probabilities for Virginia streams

    USGS Publications Warehouse

    Austin, Samuel H.

    2014-01-01

    Maximum likelihood logistic regression model equations used to estimate drought flow probabilities for Virginia streams are presented for 259 hydrologic basins in Virginia. Winter streamflows were used to estimate the likelihood of streamflows during the subsequent drought-prone summer months. The maximum likelihood logistic regression models identify probable streamflows from 5 to 8 months in advance. More than 5 million streamflow daily values collected over the period of record (January 1, 1900 through May 16, 2012) were compiled and analyzed over a minimum 10-year (maximum 112-year) period of record. The analysis yielded the 46,704 equations with statistically significant fit statistics and parameter ranges published in two tables in this report. These model equations produce summer month (July, August, and September) drought flow threshold probabilities as a function of streamflows during the previous winter months (November, December, January, and February). Example calculations are provided, demonstrating how to use the equations to estimate probable streamflows as much as 8 months in advance.
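
    Each of the report's equations has the standard logistic-regression form; a sketch with hypothetical coefficients follows (the 46,704 fitted coefficient sets themselves are published in the report's tables and are not reproduced here):

```python
import math

def drought_probability(x, b0, b1):
    """Maximum likelihood logistic regression form:
    P(summer flow below the drought threshold) = 1 / (1 + exp(-(b0 + b1*x))),
    where x is a winter streamflow statistic. b0 and b1 below are hypothetical."""
    return 1.0 / (1.0 + math.exp(-(b0 + b1 * x)))

# With a negative slope, wetter winters imply a lower summer drought probability.
p_dry_winter = drought_probability(2.0, 1.0, -1.5)
p_wet_winter = drought_probability(4.0, 1.0, -1.5)
```

    The sign of the slope coefficient encodes the report's core idea: winter streamflow observed 5 to 8 months ahead shifts the probability of summer drought flows.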

  6. Time series modeling by a regression approach based on a latent process.

    PubMed

    Chamroukhi, Faicel; Samé, Allou; Govaert, Gérard; Aknin, Patrice

    2009-01-01

    Time series are used in many domains, including finance, engineering, economics and bioinformatics, generally to represent the change of a measurement over time. Modeling techniques may then be used to give a synthetic representation of such data. A new approach for time series modeling is proposed in this paper. It consists of a regression model incorporating a discrete hidden logistic process that activates, smoothly or abruptly, different polynomial regression models. The model parameters are estimated by the maximum likelihood method, performed by a dedicated Expectation Maximization (EM) algorithm. The M step of the EM algorithm uses a multi-class Iterative Reweighted Least-Squares (IRLS) algorithm to estimate the hidden process parameters. To evaluate the proposed approach, an experimental study on simulated and real-world data was performed using two alternative approaches: a heteroskedastic piecewise regression model using a global optimization algorithm based on dynamic programming, and a hidden Markov regression model whose parameters are estimated by the Baum-Welch algorithm. Finally, in the context of the remote monitoring of components of the French railway infrastructure, and more particularly the switch mechanism, the proposed approach has been applied to modeling and classifying time series representing the condition measurements acquired during switch operations.

  7. A Two-Stage Method to Determine Optimal Product Sampling considering Dynamic Potential Market

    PubMed Central

    Hu, Zhineng; Lu, Wei; Han, Bing

    2015-01-01

    This paper develops an optimization model for the diffusion effects of free samples under dynamic changes in the potential market, based on the characteristics of an independent product, and presents a two-stage method to determine the sampling level. The impact analysis of the key factors on the sampling level shows that an increase in the external or internal coefficient has a negative influence on the sampling level. The rate of change of the potential market has no significant influence on the sampling level, whereas repeat purchase has a positive one. Using logistic analysis and regression analysis, the global sensitivity analysis examines the interaction of all parameters, providing a two-stage method to estimate the impact of the relevant parameters when their values are imprecise and to construct a 95% confidence interval for the predicted sampling level. Finally, the paper provides operational steps to improve the accuracy of the parameter estimation and an innovative way to estimate the sampling level. PMID:25821847

  8. On the effects of nonlinear boundary conditions in diffusive logistic equations on bounded domains

    NASA Astrophysics Data System (ADS)

    Cantrell, Robert Stephen; Cosner, Chris

    We study a diffusive logistic equation with nonlinear boundary conditions. The equation arises as a model for a population that grows logistically inside a patch and crosses the patch boundary at a rate that depends on the population density. Specifically, the rate at which the population crosses the boundary is assumed to decrease as the density of the population increases. The model is motivated by empirical work on the Glanville fritillary butterfly. We derive local and global bifurcation results which show that the model can have multiple equilibria and in some parameter ranges can support Allee effects. The analysis leads to eigenvalue problems with nonstandard boundary conditions.

  9. Spreading speeds for a two-species competition-diffusion system

    NASA Astrophysics Data System (ADS)

    Carrère, Cécile

    2018-02-01

    In this paper, spreading properties of a competition-diffusion system of two equations are studied. This system models the invasion of an empty favorable habitat, by two competing species, each obeying a logistic growth equation, such that any coexistence state is unstable. If the two species are initially absent from the right half-line x > 0, and the slowest one dominates the fastest one on x < 0, then the latter will invade the right space at its Fisher-KPP speed, and will be replaced by or will invade the former, depending on the parameters, at a slower speed. Thus, the system forms a propagating terrace, linking an unstable state to two consecutive stable states.

  10. Analysis test of understanding of vectors with the three-parameter logistic model of item response theory and item response curves technique

    NASA Astrophysics Data System (ADS)

    Rakkapao, Suttida; Prasitpong, Singha; Arayathanitkul, Kwan

    2016-12-01

    This study investigated the multiple-choice test of understanding of vectors (TUV) by applying item response theory (IRT). The difficulty, discrimination, and guessing parameters of the TUV items were fit with the three-parameter logistic model of IRT, using the PARSCALE program. The TUV ability is an ability parameter, estimated here assuming unidimensionality and local independence. Moreover, all distractors of the TUV were analyzed from item response curves (IRC), a simplified form of IRT. Data were gathered on 2392 science and engineering freshmen from three universities in Thailand. The results revealed IRT analysis to be useful in assessing the test, since its item parameters are independent of the ability parameters. The IRT framework reveals item-level information and indicates appropriate ability ranges for the test. Moreover, the IRC analysis can be used to assess the effectiveness of the test's distractors. Both IRT and IRC approaches reveal test characteristics beyond those revealed by classical analysis methods. Test developers can apply these methods to diagnose and evaluate the features of items at various ability levels of test takers.
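
    The three-parameter logistic model referenced in this record has a standard closed form, P(θ) = c + (1 − c)/(1 + exp(−a(θ − b))). A minimal sketch, with illustrative item parameter values (not values estimated from the TUV data):

```python
import math

def irf_3pl(theta, a, b, c):
    """Three-parameter logistic item response function:
    P(theta) = c + (1 - c) / (1 + exp(-a * (theta - b)))."""
    return c + (1.0 - c) / (1.0 + math.exp(-a * (theta - b)))

# Hypothetical item: discrimination a=1.2, difficulty b=0.5, guessing c=0.25.
a, b, c = 1.2, 0.5, 0.25
# At theta == b the curve sits halfway between the guessing floor c and 1.
print(round(irf_3pl(b, a, b, c), 3))  # midpoint (c + 1) / 2 = 0.625
```

    The guessing parameter c is the lower asymptote: as ability decreases, the probability of a correct response approaches c rather than zero, which is what distinguishes the 3PL curve from the two-parameter model.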

  11. Decision Tree Approach for Soil Liquefaction Assessment

    PubMed Central

    Gandomi, Amir H.; Fridline, Mark M.; Roke, David A.

    2013-01-01

    In the current study, the performances of some decision tree (DT) techniques are evaluated for postearthquake soil liquefaction assessment. A database containing 620 records of seismic parameters and soil properties is used in this study. Three decision tree techniques are used here in two different ways, considering statistical and engineering points of view, to develop decision rules. The DT results are compared to the logistic regression (LR) model. The results of this study indicate that the DTs not only successfully predict liquefaction but they can also outperform the LR model. The best DT models are interpreted and evaluated based on an engineering point of view. PMID:24489498

  12. Decision tree approach for soil liquefaction assessment.

    PubMed

    Gandomi, Amir H; Fridline, Mark M; Roke, David A

    2013-01-01

    In the current study, the performances of some decision tree (DT) techniques are evaluated for postearthquake soil liquefaction assessment. A database containing 620 records of seismic parameters and soil properties is used in this study. Three decision tree techniques are used here in two different ways, considering statistical and engineering points of view, to develop decision rules. The DT results are compared to the logistic regression (LR) model. The results of this study indicate that the DTs not only successfully predict liquefaction but they can also outperform the LR model. The best DT models are interpreted and evaluated based on an engineering point of view.

  13. Modeling the Severity of Drinking Consequences in First-Year College Women: An Item Response Theory Analysis of the Rutgers Alcohol Problem Index*

    PubMed Central

    Cohn, Amy M.; Hagman, Brett T.; Graff, Fiona S.; Noel, Nora E.

    2011-01-01

    Objective: The present study examined the latent continuum of alcohol-related negative consequences among first-year college women using methods from item response theory and classical test theory. Method: Participants (N = 315) were college women in their freshman year who reported consuming any alcohol in the past 90 days and who completed assessments of alcohol consumption and alcohol-related negative consequences using the Rutgers Alcohol Problem Index. Results: Item response theory analyses showed poor model fit for five items identified in the Rutgers Alcohol Problem Index. Two-parameter item response theory logistic models were applied to the remaining 18 items to examine estimates of item difficulty (i.e., severity) and discrimination parameters. The item difficulty parameters ranged from 0.591 to 2.031, and the discrimination parameters ranged from 0.321 to 2.371. Classical test theory analyses indicated that the omission of the five misfit items did not significantly alter the psychometric properties of the construct. Conclusions: Findings suggest that those consequences that had greater severity and discrimination parameters may be used as screening items to identify female problem drinkers at risk for an alcohol use disorder. PMID:22051212

  14. Two models for evaluating landslide hazards

    USGS Publications Warehouse

    Davis, J.C.; Chung, C.-J.; Ohlmacher, G.C.

    2006-01-01

    Two alternative procedures for estimating landslide hazards were evaluated using data on topographic digital elevation models (DEMs) and bedrock lithologies in an area adjacent to the Missouri River in Atchison County, Kansas, USA. The two procedures are based on the likelihood ratio model but utilize different assumptions. The empirical likelihood ratio model is based on non-parametric empirical univariate frequency distribution functions under an assumption of conditional independence while the multivariate logistic discriminant model assumes that likelihood ratios can be expressed in terms of logistic functions. The relative hazards of occurrence of landslides were estimated by an empirical likelihood ratio model and by multivariate logistic discriminant analysis. Predictor variables consisted of grids containing topographic elevations, slope angles, and slope aspects calculated from a 30-m DEM. An integer grid of coded bedrock lithologies taken from digitized geologic maps was also used as a predictor variable. Both statistical models yield relative estimates in the form of the proportion of total map area predicted to already contain or to be the site of future landslides. The stabilities of estimates were checked by cross-validation of results from random subsamples, using each of the two procedures. Cell-by-cell comparisons of hazard maps made by the two models show that the two sets of estimates are virtually identical. This suggests that the empirical likelihood ratio and the logistic discriminant analysis models are robust with respect to the conditional independence assumption and the logistic function assumption, respectively, and that either model can be used successfully to evaluate landslide hazards. © 2006.

  15. Sperm function and assisted reproduction technology

    PubMed Central

    Maaß, Gesa; Bödeker, Rolf-Hasso; Scheibelhut, Christine; Stalf, Thomas; Mehnert, Claas; Schuppe, Hans-Christian; Jung, Andreas; Schill, Wolf-Bernhard

    2005-01-01

    The evaluation of different functional sperm parameters has become a tool in andrological diagnosis. These assays determine the sperm's capability to fertilize an oocyte. It also appears that sperm functions and semen parameters are interrelated and interdependent. Therefore, the question arose whether a given laboratory test or a battery of tests can predict the outcome in in vitro fertilization (IVF). One‐hundred and sixty‐one patients who underwent an IVF treatment were selected from a database of 4178 patients who had been examined for male infertility 3 months before or after IVF. Sperm concentration, motility, acrosin activity, acrosome reaction, sperm morphology, maternal age, number of transferred embryos, embryo score, fertilization rate and pregnancy rate were determined. In addition, logistic regression models to describe fertilization rate and pregnancy were developed. All the parameters in the models were dichotomized and intra‐ and interindividual variability of the parameters were assessed. Although the sperm parameters showed good correlations with IVF when correlated separately, the only essential parameter in the multivariate model was morphology. The enormous intra‐ and interindividual variability of the values was striking. In conclusion, our data indicate that the andrological status at the end of the respective treatment does not necessarily represent the status at the time of IVF. Despite a relatively low correlation coefficient in the logistic regression model, it appears that among the parameters tested, the most reliable parameter to predict fertilization is normal sperm morphology. (Reprod Med Biol 2005; 4: 7–30) PMID:29699207

  16. A general diagnostic model applied to language testing data.

    PubMed

    von Davier, Matthias

    2008-11-01

    Probabilistic models with one or more latent variables are designed to report on a corresponding number of skills or cognitive attributes. Multidimensional skill profiles offer additional information beyond what a single test score can provide, if the reported skills can be identified and distinguished reliably. Many recent approaches to skill profile models are limited to dichotomous data and have made use of computationally intensive estimation methods such as Markov chain Monte Carlo, since standard maximum likelihood (ML) estimation techniques were deemed infeasible. This paper presents a general diagnostic model (GDM) that can be estimated with standard ML techniques and applies to polytomous response variables as well as to skills with two or more proficiency levels. The paper uses one member of a larger class of diagnostic models, a compensatory diagnostic model for dichotomous and partial credit data. Many well-known models, such as univariate and multivariate versions of the Rasch model and the two-parameter logistic item response theory model, the generalized partial credit model, as well as a variety of skill profile models, are special cases of this GDM. In addition to an introduction to this model, the paper presents a parameter recovery study using simulated data and an application to real data from the field test for TOEFL Internet-based testing.

  17. Accuracy and Variability of Item Parameter Estimates from Marginal Maximum a Posteriori Estimation and Bayesian Inference via Gibbs Samplers

    ERIC Educational Resources Information Center

    Wu, Yi-Fang

    2015-01-01

    Item response theory (IRT) uses a family of statistical models for estimating stable characteristics of items and examinees and defining how these characteristics interact in describing item and test performance. With a focus on the three-parameter logistic IRT (Birnbaum, 1968; Lord, 1980) model, the current study examines the accuracy and…

  18. Fractional Order Spatiotemporal Chaos with Delay in Spatial Nonlinear Coupling

    NASA Astrophysics Data System (ADS)

    Zhang, Yingqian; Wang, Xingyuan; Liu, Liyan; Liu, Jia

    We investigate the spatiotemporal dynamics of a fractional-order differential logistic map with delay, using nonlinear chaotic maps as the spatial coupling connections between lattices. The fractional-order differential logistic map with delay extends the chaotic states beyond the parameter range μ ∈ [3.75, 4] of the classical logistic map. The Kolmogorov-Sinai entropy density and universality, together with bifurcation diagrams, are employed to investigate the chaotic behaviors of the proposed model. The proposed model can also be applied to cryptography, which is verified in a color image encryption scheme in this paper.
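
    For contrast with the fractional-order delayed variant studied in this record, the classical logistic map it generalizes can be iterated in a few lines; the parameter values below are illustrative:

```python
def logistic_map(mu, x0, n):
    """Iterate the classical logistic map x_{n+1} = mu * x_n * (1 - x_n)."""
    xs = [x0]
    for _ in range(n):
        xs.append(mu * xs[-1] * (1.0 - xs[-1]))
    return xs

# mu = 4.0 lies in the chaotic regime of the classical map.
orbit = logistic_map(4.0, 0.2, 100)

# mu = 2.5, by contrast, converges to the fixed point 1 - 1/mu = 0.6.
fixed = logistic_map(2.5, 0.2, 200)[-1]
print(round(fixed, 6))  # 0.6
```

    The record's point is that the fractional-order delayed variant remains chaotic over a wider parameter range than the narrow window of the classical map, which matters when the map is used as a keystream generator in encryption.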

  19. Differentiation of orbital lymphoma and idiopathic orbital inflammatory pseudotumor: combined diagnostic value of conventional MRI and histogram analysis of ADC maps.

    PubMed

    Ren, Jiliang; Yuan, Ying; Wu, Yingwei; Tao, Xiaofeng

    2018-05-02

    The overlap in morphological features and mean ADC values has restricted the clinical application of MRI in the differential diagnosis of orbital lymphoma and idiopathic orbital inflammatory pseudotumor (IOIP). In this paper, we aimed to retrospectively evaluate the combined diagnostic value of conventional magnetic resonance imaging (MRI) and whole-tumor histogram analysis of apparent diffusion coefficient (ADC) maps in differentiating the two lesions. In total, 18 patients with orbital lymphoma and 22 patients with IOIP were included, all of whom underwent both conventional MRI and diffusion-weighted imaging before treatment. Conventional MRI features and histogram parameters derived from ADC maps, including mean ADC (ADCmean), median ADC (ADCmedian), skewness, kurtosis, and the 10th, 25th, 75th, and 90th percentiles of ADC (ADC10, ADC25, ADC75, ADC90), were evaluated and compared between orbital lymphoma and IOIP. Multivariate logistic regression analysis was used to identify the most valuable variables for discrimination. A differential model was built upon the selected variables, and receiver operating characteristic (ROC) analysis was performed to determine the differential ability of the model. Multivariate logistic regression showed that ADC10 (P = 0.023) and involvement of the orbital preseptal space (P = 0.029) were the most promising indexes for discriminating orbital lymphoma from IOIP. The logistic model defined by ADC10 and involvement of the orbital preseptal space achieved an AUC of 0.939, with a sensitivity of 77.30% and a specificity of 94.40%. The conventional MRI feature of orbital preseptal space involvement and the ADC histogram parameter ADC10 are valuable in the differential diagnosis of orbital lymphoma and IOIP.

  20. Robust mislabel logistic regression without modeling mislabel probabilities.

    PubMed

    Hung, Hung; Jou, Zhi-Yu; Huang, Su-Yun

    2018-03-01

    Logistic regression is among the most widely used statistical methods for linear discriminant analysis. In many applications, we only observe possibly mislabeled responses. Fitting a conventional logistic regression can then lead to biased estimation. One common resolution is to fit a mislabel logistic regression model, which takes mislabeled responses into consideration. Another common method is to adopt robust M-estimation that down-weights suspected instances. In this work, we propose a new robust mislabel logistic regression based on γ-divergence. Our proposal possesses two advantageous features: (1) it does not need to model the mislabel probabilities; (2) the minimum γ-divergence estimation leads to a weighted estimating equation without the need to include any bias correction term, that is, it is automatically bias-corrected. These features make the proposed γ-logistic regression more robust in model fitting and more intuitive for model interpretation through a simple weighting scheme. Our method is also easy to implement, and two types of algorithms are included. Simulation studies and the Pima data application are presented to demonstrate the performance of γ-logistic regression. © 2017, The International Biometric Society.
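
    The generic down-weighting idea behind such robust estimators can be sketched as a weighted logistic-regression fit. This is a simplified illustration with hand-picked weights, not the paper's γ-divergence estimator, which derives its weights from the model itself:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def weighted_logistic_fit(xs, ys, ws, lr=0.1, steps=5000):
    """Fit y ~ sigmoid(b0 + b1*x) by gradient ascent on a weighted
    log-likelihood; giving a small weight w to a suspected mislabel
    is the generic idea behind robust mislabel estimators."""
    b0 = b1 = 0.0
    for _ in range(steps):
        g0 = g1 = 0.0
        for x, y, w in zip(xs, ys, ws):
            r = w * (y - sigmoid(b0 + b1 * x))  # weighted residual
            g0 += r
            g1 += r * x
        b0 += lr * g0 / len(xs)
        b1 += lr * g1 / len(xs)
    return b0, b1

# Toy data with one flipped label at x = 3.0, down-weighted to 0.1.
xs = [-2.0, -1.0, -0.5, 0.5, 1.0, 2.0, 3.0]
ys = [0, 0, 0, 1, 1, 1, 0]          # last label is suspect
ws = [1.0] * 6 + [0.1]
b0, b1 = weighted_logistic_fit(xs, ys, ws)
print(b1 > 0)  # True: the slope stays positive despite the mislabel
```

    With all weights equal this reduces to ordinary maximum-likelihood logistic regression; the γ-divergence approach in the record replaces the fixed weights with model-driven ones, avoiding any explicit model of the mislabel probabilities.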

  1. Design logistics performance measurement model of automotive component industry for strengthening competitiveness of dealing AEC 2015

    NASA Astrophysics Data System (ADS)

    Amran, T. G.; Janitra Yose, Mindy

    2018-03-01

    As the ASEAN Economic Community (AEC) free trade area brings tougher competition, it is important that Indonesia’s automotive industry be highly competitive as well. A model of logistics performance measurement was designed as an evaluation tool for automotive component companies to improve their logistics performance in order to compete in the AEC. The design of the logistics performance measurement model was based on the Logistics Scorecard perspectives and divided into two stages: identifying the logistics business strategy to obtain the KPIs, and arranging the model. 23 KPIs were obtained. The measurement results can be taken into consideration when determining policies to improve logistics performance and competitiveness.

  2. Dynamics of a minimal consumer network with bi-directional influence

    NASA Astrophysics Data System (ADS)

    Ekaterinchuk, Ekaterina; Jungeilges, Jochen; Ryazanova, Tatyana; Sushko, Iryna

    2018-05-01

    We study the dynamics of a model of interdependent consumer behavior defined by a family of two-dimensional noninvertible maps. This family belongs to a class of coupled logistic maps with different nonlinearity parameters and coupling terms that depend on one variable only. In our companion paper we considered the case of independent consumers as well as the case of uni-directionally connected consumers. The present paper aims at describing the dynamics in the case of a bi-directional connection. In particular, we investigate the bifurcation structure of the parameter plane associated with the strength of coupling between the consumers, focusing on the mechanisms of qualitative transformations of coexisting attractors and their basins of attraction.

  3. Planning the location of facilities to implement a reverse logistic system of post-consumer packaging using a location mathematical model.

    PubMed

    Couto, Maria Claudia Lima; Lange, Liséte Celina; Rosa, Rodrigo de Alvarenga; Couto, Paula Rogeria Lima

    2017-12-01

    The implementation of reverse logistics systems (RLS) for post-consumer products provides environmental and economic benefits, since it increases recycling potential. However, RLS implementation and consolidation still face problems. The main shortcomings are the high costs and the low expectation of broad implementation worldwide. This paper presents two mathematical models to decide the number and the location of screening centers (SCs) and valorization centers (VCs) to implement reverse logistics of post-consumer packages, defining the optimum territorial arrangements (OTAs) and allowing the inclusion of small and medium-sized municipalities. The paper aims to fill a gap in the literature on RLS facility location that considers not only revenue optimization but also the participation of the population, the involvement of pickers, and service universalization. The results showed that implementation of VCs can lead to a revenue/cost ratio higher than 100%. The results of this study can supply companies and government agencies with a global view of the parameters that influence RLS sustainability and help them make decisions about the location of these facilities and the best reverse flows, with the social inclusion of pickers and service to the population of small and medium-sized municipalities.

  4. Dose-escalation designs in oncology: ADEPT and the CRM.

    PubMed

    Shu, Jianfen; O'Quigley, John

    2008-11-20

    The ADEPT software package is not a statistical method in its own right as implied by Gerke and Siedentop (Statist. Med. 2008; DOI: 10.1002/sim.3037). ADEPT implements two-parameter CRM models as described in O'Quigley et al. (Biometrics 1990; 46(1):33-48). All of the basic ideas (use of a two-parameter logistic model, use of a two-dimensional prior for the unknown slope and intercept parameters, sequential estimation and subsequent patient allocation based on minimization of some loss function, flexibility to use cohorts instead of one by one inclusion) are strictly identical. The only, and quite trivial, difference arises in the setting of the prior. O'Quigley et al. (Biometrics 1990; 46(1):33-48) used priors having an analytic expression whereas Whitehead and Brunier (Statist. Med. 1995; 14:33-48) use pseudo-data to play the role of the prior. The question of interest is whether two-parameter CRM works as well, or better, than the one-parameter CRM recommended in O'Quigley et al. (Biometrics 1990; 46(1):33-48). Gerke and Siedentop argue that it does. The published literature suggests otherwise. The conclusions of Gerke and Siedentop stem from three highly particular, and somewhat contrived, situations. Unlike one-parameter CRM (Biometrika 1996; 83:395-405; J. Statist. Plann. Inference 2006; 136:1765-1780; Biometrika 2005; 92:863-873), no statistical properties appear to have been studied for two-parameter CRM. In particular, for two-parameter CRM, the parameter estimates are inconsistent. This ought to be a source of major concern to those proposing its use. Worse still, for finite samples the behavior of estimates can be quite wild despite having incorporated the kind of dampening priors discussed by Gerke and Siedentop. An example in which we illustrate this behavior describes a single patient included at level 1 of 6 levels and experiencing a dose limiting toxicity. The subsequent recommendation is to experiment at level 6! 
Such problematic behavior is not common. Even so, we show that the allocation behavior of two-parameter CRM is very much less stable than that of one-parameter CRM.

  5. Stochastic growth logistic model with aftereffect for batch fermentation process

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rosli, Norhayati; Ayoubi, Tawfiqullah; Bahar, Arifah

    2014-06-19

    In this paper, the stochastic growth logistic model with aftereffect for the cell growth of C. acetobutylicum P262, together with Luedeking-Piret equations for solvent production in a batch fermentation system, is introduced. The parameter values of the mathematical models are estimated via the Levenberg-Marquardt optimization method of non-linear least squares. We apply the Milstein scheme to solve the stochastic models numerically. The efficiency of the mathematical models is measured by comparing the simulated results with the experimental data for microbial growth and solvent production in the batch system. Low values of the root mean-square error (RMSE) for the stochastic models with aftereffect indicate good fits.
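
    A minimal sketch of the Milstein scheme applied to a delay-free stochastic logistic equation dX = rX(1 − X/K)dt + σX dW. This is a simplification of the aftereffect model in the record, and all parameter values are illustrative:

```python
import math
import random

def milstein_logistic(x0, r, K, sigma, dt, n, seed=42):
    """Milstein scheme for the stochastic logistic SDE
    dX = r*X*(1 - X/K) dt + sigma*X dW (no delay term)."""
    rng = random.Random(seed)
    xs = [x0]
    for _ in range(n):
        x = xs[-1]
        dw = rng.gauss(0.0, math.sqrt(dt))     # Brownian increment
        drift = r * x * (1.0 - x / K)
        diff = sigma * x                        # diffusion b(x), with b'(x) = sigma
        # Milstein correction: 0.5 * b * b' * (dW^2 - dt)
        x_next = x + drift * dt + diff * dw + 0.5 * sigma * diff * (dw * dw - dt)
        xs.append(max(x_next, 0.0))
    return xs

path = milstein_logistic(x0=0.05, r=0.8, K=1.0, sigma=0.05, dt=0.01, n=2000)
print(0.5 < path[-1] < 1.5)  # growth settles near the carrying capacity K = 1
```

    The correction term involving (dW² − dt) is what distinguishes Milstein from Euler-Maruyama; for multiplicative noise such as σX dW it raises the strong order of convergence from 0.5 to 1.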

  6. Stochastic growth logistic model with aftereffect for batch fermentation process

    NASA Astrophysics Data System (ADS)

    Rosli, Norhayati; Ayoubi, Tawfiqullah; Bahar, Arifah; Rahman, Haliza Abdul; Salleh, Madihah Md

    2014-06-01

    In this paper, the stochastic growth logistic model with aftereffect for the cell growth of C. acetobutylicum P262, together with Luedeking-Piret equations for solvent production in a batch fermentation system, is introduced. The parameter values of the mathematical models are estimated via the Levenberg-Marquardt optimization method of non-linear least squares. We apply the Milstein scheme to solve the stochastic models numerically. The efficiency of the mathematical models is measured by comparing the simulated results with the experimental data for microbial growth and solvent production in the batch system. Low values of the root mean-square error (RMSE) for the stochastic models with aftereffect indicate good fits.

  7. Nowcasting sunshine number using logistic modeling

    NASA Astrophysics Data System (ADS)

    Brabec, Marek; Badescu, Viorel; Paulescu, Marius

    2013-04-01

    In this paper, we present a formalized approach to statistical modeling of the sunshine number, a binary indicator of whether the Sun is covered by clouds, introduced previously by Badescu (Theor Appl Climatol 72:127-136, 2002). Our statistical approach is based on Markov chains and logistic regression and yields fully specified probability models that are relatively easily identified (and their unknown parameters estimated) from a set of empirical data (observed sunshine number and sunshine stability number series). We discuss the general structure of the model and its advantages, demonstrate its performance on real data, and compare its results with the classical ARIMA approach as a competitor. Since the model parameters have clear interpretations, we also illustrate how, e.g., their inter-seasonal stability can be tested. We conclude with an outlook on future developments oriented toward models allowing a practically desirable smooth transition between data observed at different frequencies, and with a short discussion of the technical problems that such a goal brings.
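
    A minimal sketch of the Markov-chain/logistic-regression idea: a two-state chain whose transition probabilities come from a logistic function of the previous state. The coefficients below are illustrative assumptions, not fitted values from the paper:

```python
import math
import random

def nowcast_prob(prev_state, b0, b1):
    """Probability that the next sunshine number is 1 (Sun visible),
    modeled as logistic(b0 + b1 * prev_state)."""
    return 1.0 / (1.0 + math.exp(-(b0 + b1 * prev_state)))

def simulate(n, b0=-1.0, b1=2.5, seed=7):
    """Simulate the two-state chain; b1 > 0 encodes persistence:
    a sunny step makes the next sunny step more likely."""
    rng = random.Random(seed)
    state, series = 1, []
    for _ in range(n):
        p = nowcast_prob(state, b0, b1)
        state = 1 if rng.random() < p else 0
        series.append(state)
    return series

series = simulate(10000)
print(sum(series) / len(series))  # long-run fraction of sunny steps
```

    In the paper's fuller model the logistic link can carry additional covariates (season, time of day, the sunshine stability number), which is the advantage over a plain two-state Markov chain with constant transition probabilities.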

  8. Spatiotemporal chaos of fractional order logistic equation in nonlinear coupled lattices

    NASA Astrophysics Data System (ADS)

    Zhang, Ying-Qian; Wang, Xing-Yuan; Liu, Li-Yan; He, Yi; Liu, Jia

    2017-11-01

    We investigate a new spatiotemporal dynamics with a fractional-order differential logistic map and spatial nonlinear coupling. The spatial nonlinear coupling retains features such as a higher percentage of lattices in chaotic behavior for most parameter values and the absence of periodic windows in bifurcation diagrams, which makes it more suitable for encryption than the earlier adjacent coupled map lattices. Besides, the proposed model has new features such as a wider parameter range and a wider range of state amplitudes for ergodicity, which contributes a larger key space when applied to encryption. The simulations and theoretical analyses are developed in this paper.

  9. Anaerobic digestion of amine-oxide-based surfactants: biodegradation kinetics and inhibitory effects.

    PubMed

    Ríos, Francisco; Lechuga, Manuela; Fernández-Arteaga, Alejandro; Jurado, Encarnación; Fernández-Serrano, Mercedes

    2017-08-01

    Recently, anaerobic degradation has become a prevalent alternative for the treatment of wastewater and activated sludge. Consequently, the anaerobic biodegradability of recalcitrant compounds such as some surfactants requires thorough study to avoid their presence in the environment. In this work, the anaerobic biodegradation of amine-oxide-based surfactants, which are toxic to several organisms, was studied by measuring biogas production in digested sludge. Three amine-oxide-based surfactants with structural differences in their hydrophobic alkyl chain were tested: lauramine oxide (AO-R12), myristamine oxide (AO-R14), and cocamidopropylamine oxide (AO-cocoamido). Results show that AO-R12 and AO-R14 inhibit biogas production, with inhibition percentages around 90%. AO-cocoamido did not cause inhibition and was biodegraded to 60.8%. In addition, we fitted the biogas production to two kinetic models: a pseudo-first-order model and a logistic model. Biogas production during the anaerobic biodegradation of AO-cocoamido was well fitted by the logistic model. Kinetic parameters were also determined. This modelling is useful for predicting the behaviour of these surfactants in wastewater treatment plants and under anaerobic conditions in the environment.
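
    A logistic kinetic fit of the kind described can be sketched by linearizing the logistic curve when the production plateau is known. The model form and the synthetic data below are illustrative assumptions, not the authors' procedure or measurements:

```python
import math

def fit_logistic(ts, bs, b_max):
    """Fit cumulative production B(t) = b_max / (1 + exp(k*(tc - t)))
    by linearizing: ln(b_max/B - 1) = k*tc - k*t, then ordinary least
    squares of y on t. b_max is assumed known (e.g. the observed plateau)."""
    ys = [math.log(b_max / b - 1.0) for b in bs]
    n = len(ts)
    mt, my = sum(ts) / n, sum(ys) / n
    slope = (sum((t - mt) * (y - my) for t, y in zip(ts, ys))
             / sum((t - mt) ** 2 for t in ts))
    k = -slope                      # regression slope = -k
    tc = (my - slope * mt) / k      # regression intercept = k*tc
    return k, tc

# Synthetic noiseless data generated with k=0.5, tc=6, b_max=100.
ts = [2, 4, 6, 8, 10]
bs = [100 / (1 + math.exp(0.5 * (6 - t))) for t in ts]
k, tc = fit_logistic(ts, bs, 100.0)
print(round(k, 3), round(tc, 3))  # recovers 0.5 and 6.0
```

    With noisy measurements one would instead fit all three parameters (b_max, k, tc) by nonlinear least squares, but the linearized form shows why the logistic model describes S-shaped cumulative biogas curves: production is symmetric about the inflection time tc.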

  10. A multimodal logistics service network design with time windows and environmental concerns

    PubMed Central

    Zhang, Dezhi; He, Runzhong; Wang, Zhongwei

    2017-01-01

    The design of a multimodal logistics service network with customer service time windows and environmental costs is an important and challenging issue. Accordingly, this work established a model to minimize the total cost of multimodal logistics service network design with time windows and environmental concerns. The proposed model incorporates CO2 emission costs to determine the optimal transportation mode combinations and investment selections for transfer nodes, which consider transport cost, transport time, carbon emission, and logistics service time window constraints. Furthermore, genetic and heuristic algorithms are proposed to set up the abovementioned optimal model. A numerical example is provided to validate the model and the abovementioned two algorithms. Then, comparisons of the performance of the two algorithms are provided. Finally, this work investigates the effects of the logistics service time windows and CO2 emission taxes on the optimal solution. Several important management insights are obtained. PMID:28934272

  11. A multimodal logistics service network design with time windows and environmental concerns.

    PubMed

    Zhang, Dezhi; He, Runzhong; Li, Shuangyan; Wang, Zhongwei

    2017-01-01

    The design of a multimodal logistics service network with customer service time windows and environmental costs is an important and challenging issue. Accordingly, this work established a model to minimize the total cost of multimodal logistics service network design with time windows and environmental concerns. The proposed model incorporates CO2 emission costs to determine the optimal transportation mode combinations and investment selections for transfer nodes, which consider transport cost, transport time, carbon emission, and logistics service time window constraints. Furthermore, genetic and heuristic algorithms are proposed to set up the abovementioned optimal model. A numerical example is provided to validate the model and the abovementioned two algorithms. Then, comparisons of the performance of the two algorithms are provided. Finally, this work investigates the effects of the logistics service time windows and CO2 emission taxes on the optimal solution. Several important management insights are obtained.

  12. The Association between Bone Quality and Atherosclerosis: Results from Two Large Population-Based Studies.

    PubMed

    Lange, V; Dörr, M; Schminke, U; Völzke, H; Nauck, M; Wallaschofski, H; Hannemann, A

    2017-01-01

    It is highly debated whether associations between osteoporosis and atherosclerosis are independent of cardiovascular risk factors. We aimed to explore the associations of quantitative ultrasound (QUS) parameters at the heel with the carotid artery intima-media thickness (IMT), the presence of carotid artery plaques, and the ankle-brachial index (ABI). The study population comprised 5680 men and women aged 20-93 years from two population-based cohort studies: the Study of Health in Pomerania (SHIP) and SHIP-Trend. QUS measurements were performed at the heel. The extracranial carotid arteries were examined with B-mode ultrasonography. ABI was measured in a subgroup of 3853 participants. Analyses of variance and linear and logistic regression models were calculated and adjusted for major cardiovascular risk factors. Men, but not women, had significantly increased odds for carotid artery plaques with decreasing QUS parameters, independent of diabetes mellitus, dyslipidemia, and hypertension. Beyond this, the QUS parameters were not significantly associated with IMT or ABI in fully adjusted models. Our data argue against an independent role of bone metabolism in atherosclerotic changes in women. Yet in men, associations with advanced atherosclerosis exist. Thus, men presenting with clinical signs of osteoporosis may be at increased risk for atherosclerotic disease.

  13. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Butler, W.J.; Kalasinski, L.A.

    In this paper, a generalized logistic regression model for correlated observations is used to analyze epidemiologic data on the frequency of spontaneous abortion among a group of women office workers. The results are compared to those obtained from the use of the standard logistic regression model, which assumes statistical independence among all the pregnancies contributed by one woman. In this example, the correlation among pregnancies from the same woman is fairly small and did not have a substantial impact on the magnitude of the parameter estimates of the model. This is due at least partly to the small average number of pregnancies contributed by each woman.

  14. Risk adjustment in the American College of Surgeons National Surgical Quality Improvement Program: a comparison of logistic versus hierarchical modeling.

    PubMed

    Cohen, Mark E; Dimick, Justin B; Bilimoria, Karl Y; Ko, Clifford Y; Richards, Karen; Hall, Bruce Lee

    2009-12-01

    Although logistic regression has commonly been used to adjust for risk differences in patient and case mix to permit quality comparisons across hospitals, hierarchical modeling has been advocated as the preferred methodology, because it accounts for clustering of patients within hospitals. It is unclear whether hierarchical models would yield important differences in quality assessments compared with logistic models when applied to American College of Surgeons (ACS) National Surgical Quality Improvement Program (NSQIP) data. Our objective was to evaluate differences in logistic versus hierarchical modeling for identifying hospitals with outlying outcomes in the ACS-NSQIP. Data from ACS-NSQIP patients who underwent colorectal operations in 2008 at hospitals that reported at least 100 operations were used to generate logistic and hierarchical prediction models for 30-day morbidity and mortality. Differences in risk-adjusted performance (ratio of observed-to-expected events) and outlier detections from the two models were compared. Logistic and hierarchical models identified the same 25 hospitals as morbidity outliers (14 low and 11 high outliers), but the hierarchical model identified 2 additional high outliers. Both models identified the same eight hospitals as mortality outliers (five low and three high outliers). The values of observed-to-expected events ratios and p values from the two models were highly correlated. Results were similar when data were included from hospitals providing < 100 patients. When applied to ACS-NSQIP data, logistic and hierarchical models provided nearly identical results with respect to identification of hospitals' observed-to-expected events ratio outliers. As hierarchical models are prone to implementation problems, logistic regression will remain an accurate and efficient method for performing risk adjustment of hospital quality comparisons.
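    The observed-to-expected (O/E) ratio described above can be sketched in a few lines. This is an illustrative toy example, not ACS-NSQIP data: the hospital labels, the logistic coefficients, and the patient records below are all hypothetical, and real profiling additionally requires confidence intervals or p values to declare outliers.

```python
# Sketch: observed-to-expected (O/E) event ratios from a fitted logistic
# risk model, as used for hospital profiling. Coefficients and data are
# hypothetical illustrations, not ACS-NSQIP values.
import math

def predict_prob(x, beta0=-2.0, beta1=1.0):
    """Patient-level event probability from a (hypothetical) logistic model."""
    return 1.0 / (1.0 + math.exp(-(beta0 + beta1 * x)))

# (hospital_id, risk_covariate, observed_event) for a toy cohort
patients = [
    ("A", 0.2, 0), ("A", 1.5, 1), ("A", 0.9, 0),
    ("B", 2.0, 1), ("B", 1.8, 1), ("B", 0.1, 0),
]

observed, expected = {}, {}
for hosp, x, y in patients:
    observed[hosp] = observed.get(hosp, 0) + y
    expected[hosp] = expected.get(hosp, 0.0) + predict_prob(x)

# O/E > 1 suggests more events than the risk model predicts (high-outlier
# candidate); O/E < 1 suggests fewer. Formal outlier status needs a CI.
oe = {h: observed[h] / expected[h] for h in observed}
print(oe)
```

A hierarchical version would replace the per-hospital sums with shrinkage estimates of hospital effects, which is the methodological difference the paper evaluates.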

  15. Estimation of a Nonlinear Intervention Phase Trajectory for Multiple-Baseline Design Data

    ERIC Educational Resources Information Center

    Hembry, Ian; Bunuan, Rommel; Beretvas, S. Natasha; Ferron, John M.; Van den Noortgate, Wim

    2015-01-01

    A multilevel logistic model for estimating a nonlinear trajectory in a multiple-baseline design is introduced. The model is applied to data from a real multiple-baseline design study to demonstrate interpretation of relevant parameters. A simple change-in-levels ("Levels") model and a model involving a quadratic function…

  16. A model for incomplete longitudinal multivariate ordinal data.

    PubMed

    Liu, Li C

    2008-12-30

    In studies where multiple outcome items are repeatedly measured over time, missing data often occur. A longitudinal item response theory model is proposed for analysis of multivariate ordinal outcomes that are repeatedly measured. Under the MAR assumption, this model accommodates missing data at any level (missing item at any time point and/or missing time point). It allows for multiple random subject effects and the estimation of item discrimination parameters for the multiple outcome items. The covariates in the model can be at any level. Assuming either a probit or logistic response function, maximum marginal likelihood estimation is described utilizing multidimensional Gauss-Hermite quadrature for integration of the random effects. An iterative Fisher-scoring solution, which provides standard errors for all model parameters, is used. A data set from a longitudinal prevention study is used to motivate the application of the proposed model. In this study, multiple ordinal items of health behavior are repeatedly measured over time. Because of a planned missing design, subjects answered only two-thirds of all items at any given time point. Copyright 2008 John Wiley & Sons, Ltd.
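    The core numerical step in the abstract, integrating random subject effects out of the likelihood with Gauss-Hermite quadrature, can be sketched for the simplest case: one normally distributed random intercept and a binary (logistic) response. The paper uses multidimensional quadrature and ordinal items; the values below are illustrative.

```python
# Sketch of maximum marginal likelihood machinery: integrate a random
# subject intercept out of a logistic response model using Gauss-Hermite
# quadrature (one random effect here; the paper is multidimensional).
import numpy as np

def marginal_loglik(y, eta, sigma, n_nodes=20):
    """log of  integral over theta of
       prod_j Bernoulli(y_j | logistic(eta_j + theta)) * N(theta; 0, sigma^2)."""
    nodes, weights = np.polynomial.hermite.hermgauss(n_nodes)
    theta = np.sqrt(2.0) * sigma * nodes          # change of variables
    lik = 0.0
    for t, w in zip(theta, weights):
        p = 1.0 / (1.0 + np.exp(-(eta + t)))      # item response probabilities
        lik += w * np.prod(np.where(y == 1, p, 1.0 - p))
    return np.log(lik / np.sqrt(np.pi))           # normalizing constant of GH

y = np.array([1, 0, 1])           # one subject's binary item responses
eta = np.array([0.5, -0.3, 0.8])  # fixed-effect linear predictors per item
print(marginal_loglik(y, eta, sigma=1.0))
```

As sigma approaches 0 the marginal log-likelihood collapses to the ordinary sum of Bernoulli log-likelihoods, a useful sanity check on the quadrature.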

  17. Simple cosmological model with inflation and late times acceleration

    NASA Astrophysics Data System (ADS)

    Szydłowski, Marek; Stachowski, Aleksander

    2018-03-01

    In the framework of polynomial Palatini cosmology, we investigate a simple homogeneous and isotropic cosmological model with matter in the Einstein frame. We show that in this model, early inflation appears during cosmic evolution, followed by an accelerating phase of expansion at late times. In this frame we obtain the Friedmann equation with matter and dark energy in the form of a scalar field with a potential whose form is determined in a covariant way by the Ricci scalar of the FRW metric. The energy densities of matter and dark energy are also parameterized through the Ricci scalar. Early inflation is obtained only for an infinitesimally small fraction of the energy density of matter. An interaction exists between matter and dark energy because the dark energy is decaying. To characterize inflation, we calculate the slow-roll parameters and the constant-roll parameter in terms of the Ricci scalar. We find that the dark energy density follows a logistic-like curve in cosmic time, interpolating between two almost constant-value phases. From the required number of N-folds we obtain a bound on the model parameter.

  18. New models to predict depth of infiltration in endometrial carcinoma based on transvaginal sonography.

    PubMed

    De Smet, F; De Brabanter, J; Van den Bosch, T; Pochet, N; Amant, F; Van Holsbeke, C; Moerman, P; De Moor, B; Vergote, I; Timmerman, D

    2006-06-01

    Preoperative knowledge of the depth of myometrial infiltration is important in patients with endometrial carcinoma. This study aimed at assessing the value of histopathological parameters obtained from an endometrial biopsy (Pipelle de Cornier; results available preoperatively) and ultrasound measurements obtained after transvaginal sonography with color Doppler imaging in the preoperative prediction of the depth of myometrial invasion, as determined by the final histopathological examination of the hysterectomy specimen (the gold standard). We first collected ultrasound and histopathological data from 97 consecutive women with endometrial carcinoma and divided them into two groups according to surgical stage (Stages Ia and Ib vs. Stages Ic and higher). The areas (AUC) under the receiver-operating characteristics curves of the subjective assessment of depth of invasion by an experienced gynecologist and of the individual ultrasound parameters were calculated. Subsequently, we used these variables to train a logistic regression model and least squares support vector machines (LS-SVM) with linear and RBF (radial basis function) kernels. Finally, these models were validated prospectively on data from 76 new patients in order to make a preoperative prediction of the depth of invasion. Of all ultrasound parameters, the ratio of the endometrial and uterine volumes had the largest AUC (78%), while that of the subjective assessment was 79%. The AUCs of the blood flow indices were low (range, 51-64%). Stepwise logistic regression selected the degree of differentiation, the number of fibroids, the endometrial thickness and the volume of the tumor. Compared with the AUC of the subjective assessment (72%), prospective evaluation of the mathematical models resulted in a higher AUC for the LS-SVM model with an RBF kernel (77%), but this difference was not significant. 
Single morphological parameters do not improve the predictive power when compared with the subjective assessment of depth of myometrial invasion of endometrial cancer, and blood flow indices do not contribute to the prediction of stage. In this study an LS-SVM model with an RBF kernel gave the best prediction; while this might be more reliable than subjective assessment, confirmation by larger prospective studies is required. Copyright 2006 ISUOG. Published by John Wiley & Sons, Ltd.

  19. Non-ignorable missingness in logistic regression.

    PubMed

    Wang, Joanna J J; Bartlett, Mark; Ryan, Louise

    2017-08-30

    Nonresponses and missing data are common in observational studies. Ignoring or inadequately handling missing data may lead to biased parameter estimation, incorrect standard errors and, as a consequence, incorrect statistical inference and conclusions. We present a strategy for modelling non-ignorable missingness where the probability of nonresponse depends on the outcome. Using a simple case of logistic regression, we quantify the bias in regression estimates and show the observed likelihood is non-identifiable under non-ignorable missing data mechanism. We then adopt a selection model factorisation of the joint distribution as the basis for a sensitivity analysis to study changes in estimated parameters and the robustness of study conclusions against different assumptions. A Bayesian framework for model estimation is used as it provides a flexible approach for incorporating different missing data assumptions and conducting sensitivity analysis. Using simulated data, we explore the performance of the Bayesian selection model in correcting for bias in a logistic regression. We then implement our strategy using survey data from the 45 and Up Study to investigate factors associated with worsening health from the baseline to follow-up survey. Our findings have practical implications for the use of the 45 and Up Study data to answer important research questions relating to health and quality-of-life. Copyright © 2017 John Wiley & Sons, Ltd.

  20. Transformation Model Choice in Nonlinear Regression Analysis of Fluorescence-based Serial Dilution Assays

    PubMed Central

    Fong, Youyi; Yu, Xuesong

    2016-01-01

    Many modern serial dilution assays are based on fluorescence intensity (FI) readouts. We study optimal transformation model choice for fitting five parameter logistic curves (5PL) to FI-based serial dilution assay data. We first develop a generalized least squares-pseudolikelihood type algorithm for fitting heteroscedastic logistic models. Next we show that the 5PL and log 5PL functions can approximate each other well. We then compare four 5PL models with different choices of log transformation and variance modeling through a Monte Carlo study and real data. Our findings are that the optimal choice depends on the intended use of the fitted curves. PMID:27642502
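    The five-parameter logistic (5PL) function at the center of this abstract has a standard closed form, and one of the transformation choices the paper compares is fitting on the log(FI) scale. The sketch below uses a common 5PL parameterization with illustrative parameter values, not values from the paper.

```python
# Sketch of the five-parameter logistic (5PL) curve used for FI readouts.
# Parameter values are illustrative, not from the paper.
import math

def five_pl(x, a, b, c, d, g):
    """5PL: d + (a - d) / (1 + (x / c) ** b) ** g
    a: response at zero dose, d: response at infinite dose,
    c: mid-range dose, b: slope, g: asymmetry (g = 1 recovers the 4PL)."""
    return d + (a - d) / (1.0 + (x / c) ** b) ** g

def log_five_pl(x, a, b, c, d, g):
    """One transformation choice compared in the paper: model log(FI)."""
    return math.log(five_pl(x, a, b, c, d, g))

dilutions = [1, 10, 100, 1000, 10000]
curve = [five_pl(x, a=100.0, b=1.2, c=500.0, d=30000.0, g=0.8)
         for x in dilutions]
print(curve)  # rises from near a toward d across the dilution series
```

Fitting would minimize (possibly weighted) squared error between observed FI, or log FI, and this curve; the paper's point is that which scale is optimal depends on the curve's intended use.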

  1. C*-algebras associated with reversible extensions of logistic maps

    NASA Astrophysics Data System (ADS)

    Kwaśniewski, Bartosz K.

    2012-10-01

    The construction of reversible extensions of dynamical systems presented in a previous paper by the author and A.V. Lebedev is enhanced, so that it applies to arbitrary mappings (not necessarily with open range). It is based on calculating the maximal ideal space of C*-algebras that extends endomorphisms to partial automorphisms via partial isometric representations, and involves a new set of 'parameters' (the role of parameters is played by chosen sets or ideals). As model examples, we give a thorough description of reversible extensions of logistic maps and a classification of systems associated with compression of unitaries generating homeomorphisms of the circle. Bibliography: 34 titles.

  2. [Calculating Pearson residual in logistic regressions: a comparison between SPSS and SAS].

    PubMed

    Xu, Hao; Zhang, Tao; Li, Xiao-song; Liu, Yuan-yuan

    2015-01-01

    To compare the results of Pearson residual calculations in logistic regression models using SPSS and SAS. We reviewed Pearson residual calculation methods and used two data sets to test logistic models constructed in SPSS and SAS. One model contained a small number of covariates relative to the number of observations; the other contained a number of covariates similar to the number of observations. The two software packages produced similar Pearson residual estimates when the number of covariates was similar to the number of observations, but the results differed when the number of observations was much greater than the number of covariates. The two software packages produce different Pearson residuals, especially when the models contain a small number of covariates. Further studies are warranted.
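    The quantity being compared across packages has a simple definition for ungrouped (Bernoulli) data, shown below with made-up fitted probabilities. Packages can differ in whether they first group observations by covariate pattern, which changes the denominator and is one plausible source of the discrepancies the paper reports.

```python
# Sketch: Pearson residuals for a fitted logistic regression.
# Ungrouped (Bernoulli) form: r_i = (y_i - p_i) / sqrt(p_i * (1 - p_i)).
# Grouped data instead use (y_i - n_i p_i) / sqrt(n_i p_i (1 - p_i)).
import math

def pearson_residual(y, p):
    return (y - p) / math.sqrt(p * (1.0 - p))

fitted = [(1, 0.8), (0, 0.3), (1, 0.45)]  # (observed y, fitted probability)
res = [pearson_residual(y, p) for y, p in fitted]
print(res)
```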

  3. Gene selection in cancer classification using sparse logistic regression with Bayesian regularization.

    PubMed

    Cawley, Gavin C; Talbot, Nicola L C

    2006-10-01

    Gene selection algorithms for cancer classification, based on the expression of a small number of biomarker genes, have been the subject of considerable research in recent years. Shevade and Keerthi propose a gene selection algorithm based on sparse logistic regression (SLogReg) incorporating a Laplace prior to promote sparsity in the model parameters, and provide a simple but efficient training procedure. The degree of sparsity obtained is determined by the value of a regularization parameter, which must be carefully tuned in order to optimize performance. This normally involves a model selection stage, based on a computationally intensive search for the minimizer of the cross-validation error. In this paper, we demonstrate that a simple Bayesian approach can be taken to eliminate this regularization parameter entirely, by integrating it out analytically using an uninformative Jeffreys prior. The improved algorithm (BLogReg) is then typically two or three orders of magnitude faster than the original algorithm, as there is no longer a need for a model selection step. The BLogReg algorithm is also free from selection bias in performance estimation, a common pitfall in the application of machine learning algorithms in cancer classification. The SLogReg, BLogReg and Relevance Vector Machine (RVM) gene selection algorithms are evaluated over the well-studied colon cancer and leukaemia benchmark datasets. The leave-one-out estimates of the probability of test error and cross-entropy of the BLogReg and SLogReg algorithms are very similar; however, the BLogReg algorithm is found to be considerably faster than the original SLogReg algorithm. Using nested cross-validation to avoid selection bias, performance estimation for SLogReg on the leukaemia dataset takes almost 48 h, whereas the corresponding result for BLogReg is obtained in only 1 min 24 s, making BLogReg by far the more practical algorithm. 
BLogReg also demonstrates better estimates of conditional probability than the RVM, which are of great importance in medical applications, with similar computational expense. A MATLAB implementation of the sparse logistic regression algorithm with Bayesian regularization (BLogReg) is available from http://theoval.cmp.uea.ac.uk/~gcc/cbl/blogreg/

  4. Determination of riverbank erosion probability using Locally Weighted Logistic Regression

    NASA Astrophysics Data System (ADS)

    Ioannidou, Elena; Flori, Aikaterini; Varouchakis, Emmanouil A.; Giannakis, Georgios; Vozinaki, Anthi Eirini K.; Karatzas, George P.; Nikolaidis, Nikolaos

    2015-04-01

    Riverbank erosion is a natural geomorphologic process that affects the fluvial environment. The most important issue concerning riverbank erosion is the identification of the vulnerable locations. An alternative to the usual hydrodynamic models for predicting vulnerable locations is to quantify the probability of erosion occurrence. This can be achieved by identifying the underlying relations between riverbank erosion and the geomorphological or hydrological variables that prevent or stimulate erosion. Thus, riverbank erosion can be described by a regression model using independent variables that are considered to affect the erosion process. The impact of such variables may vary spatially; therefore, a non-stationary regression model is preferred to a stationary equivalent. Locally Weighted Regression (LWR) is proposed as a suitable choice. This method can be extended to predict the binary presence or absence of erosion based on a series of independent local variables by using the logistic regression model; the combination is referred to as Locally Weighted Logistic Regression (LWLR). Logistic regression is a type of regression analysis used for predicting the outcome of a categorical dependent variable (e.g. a binary response) based on one or more predictor variables, by converting the dependent variable to probability scores. The method can be combined with LWR, which assigns local weights to the independent variables and thereby allows model parameters to vary over space in order to reflect spatial heterogeneity. The probabilities of the possible outcomes are modelled as a function of the independent variables using a logistic function, and the resulting model predicts success or failure of a given binary variable (e.g. erosion presence or absence) for any value of the independent variables. 
The erosion occurrence probability can be calculated in conjunction with the model deviance regarding the independent variables tested. The most straightforward measure for goodness of fit is the G statistic. It is a simple and effective way to study and evaluate the Logistic Regression model efficiency and the reliability of each independent variable. The developed statistical model is applied to the Koiliaris River Basin on the island of Crete, Greece. Two datasets of river bank slope, river cross-section width and indications of erosion were available for the analysis (12 and 8 locations). Two different types of spatial dependence functions, exponential and tricubic, were examined to determine the local spatial dependence of the independent variables at the measurement locations. The results show a significant improvement when the tricubic function is applied as the erosion probability is accurately predicted at all eight validation locations. Results for the model deviance show that cross-section width is more important than bank slope in the estimation of erosion probability along the Koiliaris riverbanks. The proposed statistical model is a useful tool that quantifies the erosion probability along the riverbanks and can be used to assist managing erosion and flooding events. Acknowledgements This work is part of an on-going THALES project (CYBERSENSORS - High Frequency Monitoring System for Integrated Water Resources Management of Rivers). The project has been co-financed by the European Union (European Social Fund - ESF) and Greek national funds through the Operational Program "Education and Lifelong Learning" of the National Strategic Reference Framework (NSRF) - Research Funding Program: THALES. Investing in knowledge society through the European Social Fund.
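    The tricubic spatial-dependence function mentioned above, combined with a weighted logistic fit, can be sketched compactly. The toy slope/erosion data, the bandwidth, and the plain Newton (IRLS) fitting loop below are illustrative assumptions, not the authors' code or the Koiliaris data.

```python
# Sketch of locally weighted logistic regression (LWLR) with a tricubic
# spatial-dependence (weight) function. All data values are hypothetical.
import math

def tricube(d, bandwidth):
    """Tricubic weight: (1 - (d/h)^3)^3 for d < h, else 0."""
    u = d / bandwidth
    return (1.0 - u ** 3) ** 3 if u < 1.0 else 0.0

def weighted_logistic_fit(xs, ys, ws, iters=25):
    """Weighted logistic regression (intercept + slope) by Newton/IRLS."""
    b0 = b1 = 0.0
    for _ in range(iters):
        g0 = g1 = h00 = h01 = h11 = 0.0
        for x, y, w in zip(xs, ys, ws):
            p = 1.0 / (1.0 + math.exp(-(b0 + b1 * x)))
            g0 += w * (y - p)          # weighted score, intercept
            g1 += w * (y - p) * x      # weighted score, slope
            v = w * p * (1.0 - p)      # weighted information
            h00 += v; h01 += v * x; h11 += v * x * x
        det = h00 * h11 - h01 * h01
        b0 += (h11 * g0 - h01 * g1) / det
        b1 += (-h01 * g0 + h00 * g1) / det
    return b0, b1

# toy data: bank slope vs. erosion presence, weighted toward a target site
slopes  = [0.1, 0.3, 0.5, 0.7, 0.9, 1.1]
erosion = [0,   0,   1,   0,   1,   1]
dists   = [0.9, 0.6, 0.3, 0.1, 0.4, 0.8]  # distance to the target location
ws = [tricube(d, bandwidth=1.0) for d in dists]
b0, b1 = weighted_logistic_fit(slopes, erosion, ws)
prob = 1.0 / (1.0 + math.exp(-(b0 + b1 * 0.8)))  # local erosion probability
print(b0, b1, prob)
```

Refitting at each target location with its own distance-based weights is what lets the coefficients, and hence the erosion probability surface, vary over space.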

  5. Modeling Air Traffic Management Technologies with a Queuing Network Model of the National Airspace System

    NASA Technical Reports Server (NTRS)

    Long, Dou; Lee, David; Johnson, Jesse; Gaier, Eric; Kostiuk, Peter

    1999-01-01

    This report describes an integrated model of air traffic management (ATM) tools under development in two National Aeronautics and Space Administration (NASA) programs - Terminal Area Productivity (TAP) and Advanced Air Transport Technologies (AATT). The model is made by adjusting parameters of LMINET, a queuing network model of the National Airspace System (NAS), which the Logistics Management Institute (LMI) developed for NASA. Operating LMINET with models of various combinations of TAP and AATT tools will give quantitative information about the effects of the tools on operations of the NAS. The costs of delays under different scenarios are calculated. An extension of the Air Carrier Investment Model (ACIM) under ASAC, developed by the Institute for NASA, maps the technologies' impacts on operations into cross-comparable benefits estimates for technologies and sets of technologies.

  6. A conceptual socio-hydrological model of the co-evolution of humans and water: case study of the Tarim River basin, western China

    NASA Astrophysics Data System (ADS)

    Liu, D.; Tian, F.; Lin, M.; Sivapalan, M.

    2015-02-01

    The complex interactions and feedbacks between humans and water are critically important issues but remain poorly understood in the newly proposed discipline of socio-hydrology (Sivapalan et al., 2012). An exploratory model with the appropriate level of simplification can be valuable for improving our understanding of the co-evolution and self-organization of socio-hydrological systems driven by interactions and feedbacks operating at different scales. In this study, a simplified conceptual socio-hydrological model based on logistic growth curves is developed for the Tarim River basin in western China and is used to illustrate the explanatory power of such a co-evolutionary model. The study area is the main stream of the Tarim River, which is divided into two modeling units. The socio-hydrological system is composed of four sub-systems, i.e., the hydrological, ecological, economic, and social sub-systems. In each modeling unit, the hydrological equation focusing on water balance is coupled to the other three evolutionary equations to represent the dynamics of the social sub-system (denoted by population), the economic sub-system (denoted by irrigated crop area ratio), and the ecological sub-system (denoted by natural vegetation cover), each of which is expressed in terms of a logistic growth curve. Four feedback loops are identified to represent the complex interactions among different sub-systems and different spatial units, of which two are inner loops occurring within each separate unit and the other two are outer loops linking the two modeling units. The feedback mechanisms are incorporated into the constitutive relations for model parameters, i.e., the colonization and mortality rates in the logistic growth curves that are jointly determined by the state variables of all sub-systems. 
    The co-evolution of the Tarim socio-hydrological system is then analyzed with this conceptual model to gain insights into the overall system dynamics and its sensitivity to the external drivers and internal system variables. The results show a costly pendulum swing between a balanced distribution of socio-economic and natural ecologic resources among the upper and lower reaches and a highly skewed distribution towards the upper reach. This evolution is principally driven by attitudinal changes within water resources management policies that reflect society's evolving awareness of ecological and environmental concerns.
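    The building block of such a model, logistic growth equations whose rate parameters depend on the state of other sub-systems, can be sketched with two coupled equations. The coefficients and the specific feedback terms below are hypothetical illustrations of the idea, not the calibrated Tarim model.

```python
# Minimal sketch of coupled logistic growth with cross-subsystem feedback:
# a population (social sub-system) and natural vegetation (ecological
# sub-system). All coefficients are hypothetical, not calibrated values.
def step(pop, veg, dt=0.1):
    """One Euler step of two coupled logistic equations."""
    r_pop, k_pop = 0.05, 100.0
    # feedback: loss of natural vegetation lowers the population growth rate
    r_eff = r_pop * (0.5 + 0.5 * veg)
    r_veg, k_veg = 0.02, 1.0
    # feedback: a larger population (more irrigation) suppresses vegetation
    growth_veg = r_veg * veg * (1.0 - veg / k_veg) - 0.0002 * pop * veg
    pop += dt * r_eff * pop * (1.0 - pop / k_pop)
    veg += dt * growth_veg
    return pop, veg

pop, veg = 10.0, 0.8
for _ in range(2000):
    pop, veg = step(pop, veg)
print(pop, veg)  # population approaches its capacity while vegetation declines
```

In the paper's terms, the feedbacks enter through the "colonization and mortality rates" of each logistic curve, which are functions of the other sub-systems' state variables.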

  7. New robust statistical procedures for the polytomous logistic regression models.

    PubMed

    Castilla, Elena; Ghosh, Abhik; Martin, Nirian; Pardo, Leandro

    2018-05-17

    This article derives a new family of estimators, namely the minimum density power divergence estimators, as a robust generalization of the maximum likelihood estimator for the polytomous logistic regression model. Based on these estimators, a family of Wald-type test statistics for linear hypotheses is introduced. Robustness properties of both the proposed estimators and the test statistics are theoretically studied through the classical influence function analysis. Appropriate real life examples are presented to justify the requirement of suitable robust statistical procedures in place of the likelihood based inference for the polytomous logistic regression model. The validity of the theoretical results established in the article is further confirmed empirically through suitable simulation studies. Finally, an approach for the data-driven selection of the robustness tuning parameter is proposed with empirical justifications. © 2018, The International Biometric Society.

  8. Modeling Population Growth and Extinction

    ERIC Educational Resources Information Center

    Gordon, Sheldon P.

    2009-01-01

    The exponential growth model and the logistic model typically introduced in the mathematics curriculum presume that a population can only grow. In reality, species can also die out, and more sophisticated models that take the possibility of extinction into account are needed. In this article, two extensions of the logistic model are considered,…
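    One standard way to extend the logistic model so that extinction is possible is to add a critical population threshold (a strong Allee effect): populations starting below the threshold die out. The sketch below illustrates that idea with made-up parameters; it is not necessarily one of the two extensions the article develops.

```python
# Logistic model with an extinction threshold T (strong Allee effect):
#   dP/dt = r * P * (1 - P/K) * (P/T - 1)
# Below T the growth rate is negative and the population goes extinct;
# above T it grows toward the carrying capacity K. Parameters are illustrative.
def threshold_logistic_step(p, r=0.5, k=100.0, thr=20.0, dt=0.01):
    """One Euler step of the threshold logistic equation."""
    return p + dt * r * p * (1.0 - p / k) * (p / thr - 1.0)

def simulate(p0, steps=5000):
    p = p0
    for _ in range(steps):
        p = threshold_logistic_step(p)
    return p

print(simulate(10.0))  # starts below T = 20: decays toward extinction
print(simulate(30.0))  # starts above T = 20: grows toward K = 100
```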

  9. GIS-based spatial decision support system for grain logistics management

    NASA Astrophysics Data System (ADS)

    Zhen, Tong; Ge, Hongyi; Jiang, Yuying; Che, Yi

    2010-07-01

    Grain logistics is an important component of social logistics, owing to its frequent circulation and large volume. At present, there is no modern grain logistics distribution management system, and logistics costs are high. Geographic Information Systems (GIS) have been widely used for spatial data manipulation and model operations, and provide effective decision support through their spatial database management capabilities and cartographic visualization. In the present paper, a spatial decision support system (SDSS) is proposed to support policy makers and to reduce the cost of grain logistics. The system is composed of two major components, a grain logistics goods tracking model and a vehicle routing problem optimization model, and also allows incorporation of data coming from external sources. The proposed system is an effective tool for managing grain logistics, increasing the speed of grain logistics and reducing grain circulation costs.

  10. A Survival Model for Shortleaf Pine Trees Growing in Uneven-Aged Stands

    Treesearch

    Thomas B. Lynch; Lawrence R. Gering; Michael M. Huebschmann; Paul A. Murphy

    1999-01-01

    A survival model for shortleaf pine (Pinus echinata Mill.) trees growing in uneven-aged stands was developed using data from permanently established plots maintained by an industrial forestry company in western Arkansas. Parameters were fitted to a logistic regression model with a Bernoulli dependent variable in which "0" represented...

  11. Interactions Between Item Content And Group Membership on Achievement Test Items.

    ERIC Educational Resources Information Center

    Linn, Robert L.; Harnisch, Delwyn L.

    The purpose of this investigation was to examine the interaction of item content and group membership on achievement test items. Estimates of the parameters of the three parameter logistic model were obtained on the 46 item math test for the sample of eighth grade students (N = 2055) participating in the Illinois Inventory of Educational Progress,…

  12. Sequential Computerized Mastery Tests--Three Simulation Studies

    ERIC Educational Resources Information Center

    Wiberg, Marie

    2006-01-01

    A simulation study of a sequential computerized mastery test is carried out with items modeled with the 3 parameter logistic item response theory model. The examinees' responses are either identically distributed, not identically distributed, or not identically distributed together with estimation errors in the item characteristics. The…
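    The 3-parameter logistic (3PL) IRT model used to generate such simulated responses has the standard form P(correct | θ) = c + (1 - c) / (1 + exp(-a(θ - b))). The sketch below simulates item responses from it; the item parameters are illustrative, not those of the study.

```python
# Sketch of the 3PL item response model and response simulation.
# Item parameters (a, b, c) below are illustrative.
import math
import random

def p_correct(theta, a, b, c):
    """3PL: a = discrimination, b = difficulty, c = guessing floor."""
    return c + (1.0 - c) / (1.0 + math.exp(-a * (theta - b)))

def simulate_response(theta, a, b, c, rng):
    """Draw a 0/1 response for an examinee of ability theta."""
    return 1 if rng.random() < p_correct(theta, a, b, c) else 0

rng = random.Random(0)
items = [(1.2, -0.5, 0.2), (0.8, 0.0, 0.25), (1.5, 1.0, 0.2)]
responses = [simulate_response(0.3, a, b, c, rng) for a, b, c in items]
print(responses)
```

The simulation conditions in the abstract (identically distributed responses or not, with or without estimation error) amount to varying how θ and the item parameters are drawn before calling a generator like this.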

  13. Wildfire Risk Mapping over the State of Mississippi: Land Surface Modeling Approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cooke, William H.; Mostovoy, Georgy; Anantharaj, Valentine G

    2012-01-01

    Three fire risk indexes based on soil moisture estimates were applied to simulate wildfire probability over the southern part of Mississippi using the logistic regression approach. The fire indexes were retrieved from: (1) the accumulated difference between daily precipitation and potential evapotranspiration (P-E); (2) the top 10 cm soil moisture content simulated by the Mosaic land surface model; and (3) the Keetch-Byram drought index (KBDI). The P-E, KBDI, and soil-moisture-based indexes were estimated from gridded atmospheric and Mosaic-simulated soil moisture data available from the North American Land Data Assimilation System (NLDAS-2). Normalized deviations of these indexes from the 31-year mean (1980-2010) were fitted into the logistic regression model describing the probability of wildfire occurrence as a function of the fire index. It was assumed that such normalization provides a more robust and adequate description of the temporal dynamics of soil moisture anomalies than the original (not normalized) set of indexes. The logistic model parameters were evaluated for 0.25 x 0.25 latitude/longitude cells and for the probability that at least one fire event occurred during 5 consecutive days. A 23-year (1986-2008) forest fire record was used. Two periods were selected and examined (January to mid-June, and mid-September to December). The application of the logistic model provides an overall good agreement between empirical/observed and model-fitted fire probabilities over the study area during both seasons. The fire risk indexes based on the top 10 cm soil moisture and KBDI have the largest impact on the wildfire odds (increasing them by almost 2 times in response to each unit change of the corresponding fire risk index during the January to mid-June period, and by nearly 1.5 times during mid-September to December) observed over 0.25 x 0.25 cells located along the Mississippi coastline. This result suggests a rather strong control of fire risk indexes on fire occurrence probability over this region.

  14. A new model for simulating microbial cyanide production and optimizing the medium parameters for recovering precious metals from waste printed circuit boards.

    PubMed

    Yuan, Zhihui; Ruan, Jujun; Li, Yaying; Qiu, Rongliang

    2018-04-10

    Bioleaching is a green recycling technology for recovering precious metals from waste printed circuit boards (WPCBs). However, this technology requires increased cyanide production to obtain desirable recovery efficiency. Luria-Bertani medium (LB medium, containing tryptone 10 g/L, yeast extract 5 g/L, NaCl 10 g/L) is commonly used in bioleaching of precious metals. In this study, results showed that LB medium did not produce the highest yield of cyanide. Under optimal culture conditions (25 °C, pH 7.5), the maximum cyanide yield of the optimized medium (containing tryptone 6 g/L and yeast extract 5 g/L) was 1.5 times as high as that of LB medium. In addition, the kinetics of, and the relationship between, cell growth and cyanide production were studied. Cell growth data fitted the logistic model well. The allometric model was shown to be effective in describing the relationship between cell growth and cyanide production. By inserting the logistic equation into the allometric equation, we obtained a novel hybrid equation containing five parameters. Kinetic data for cyanide production were well fitted by the new model. The model parameters reflect both the cell growth and the cyanide production processes. Copyright © 2018 Elsevier B.V. All rights reserved.
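    The hybrid construction described above, a logistic growth curve substituted into an allometric link, can be sketched directly: with three logistic parameters and two allometric parameters, the composite has the five parameters the abstract mentions. The parameter values below are illustrative, not the paper's fitted estimates.

```python
# Sketch of the logistic-allometric hybrid: cell density X(t) follows a
# logistic curve, and cyanide production P is tied to it allometrically,
# P = alpha * X^beta. Parameter values are hypothetical.
import math

def logistic_growth(t, x0, xmax, mu):
    """Logistic cell density: X(t) = xmax / (1 + (xmax/x0 - 1) * e^(-mu t))."""
    return xmax / (1.0 + (xmax / x0 - 1.0) * math.exp(-mu * t))

def hybrid_cyanide(t, x0, xmax, mu, alpha, beta):
    """Allometric link P = alpha * X(t)^beta; five parameters in total."""
    return alpha * logistic_growth(t, x0, xmax, mu) ** beta

times = [0, 6, 12, 24, 48]
cn = [hybrid_cyanide(t, x0=0.05, xmax=2.0, mu=0.3, alpha=0.8, beta=1.1)
      for t in times]
print(cn)  # cyanide rises along the growth curve and plateaus
```

Fitting all five parameters jointly to cyanide kinetics, rather than fitting growth and production separately, is what makes the hybrid equation informative about both processes at once.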

  15. The Use of Logistics in the Quality Parameters Control System of Material Flow

    ERIC Educational Resources Information Center

    Karpova, Natalia P.; Toymentseva, Irina A.; Shvetsova, Elena V.; Chichkina, Vera D.; Chubarkova, Elena V.

    2016-01-01

    The relevance of the research problem stems from the need to justify the use of logistics methodologies in the process of controlling the quality parameters of material flows. The goal of the article is to develop theoretical principles and practical recommendations for logistical control of the quality parameters of material flows. A leading…

  16. Regularization Paths for Conditional Logistic Regression: The clogitL1 Package.

    PubMed

    Reid, Stephen; Tibshirani, Rob

    2014-07-01

    We apply the cyclic coordinate descent algorithm of Friedman, Hastie, and Tibshirani (2010) to the fitting of a conditional logistic regression model with lasso (ℓ1) and elastic net penalties. The sequential strong rules of Tibshirani, Bien, Hastie, Friedman, Taylor, Simon, and Tibshirani (2012) are also used in the algorithm and it is shown that these offer a considerable speed up over the standard coordinate descent algorithm with warm starts. Once implemented, the algorithm is used in simulation studies to compare the variable selection and prediction performance of the conditional logistic regression model against that of its unconditional (standard) counterpart. We find that the conditional model performs admirably on datasets drawn from a suitable conditional distribution, outperforming its unconditional counterpart at variable selection. The conditional model is also fit to a small real world dataset, demonstrating how we obtain regularization paths for the parameters of the model and how we apply cross validation for this method where natural unconditional prediction rules are hard to come by.

  17. Regularization Paths for Conditional Logistic Regression: The clogitL1 Package

    PubMed Central

    Reid, Stephen; Tibshirani, Rob

    2014-01-01

    We apply the cyclic coordinate descent algorithm of Friedman, Hastie, and Tibshirani (2010) to the fitting of a conditional logistic regression model with lasso (ℓ1) and elastic net penalties. The sequential strong rules of Tibshirani, Bien, Hastie, Friedman, Taylor, Simon, and Tibshirani (2012) are also used in the algorithm, and it is shown that these offer a considerable speed-up over the standard coordinate descent algorithm with warm starts. Once implemented, the algorithm is used in simulation studies to compare the variable selection and prediction performance of the conditional logistic regression model against that of its unconditional (standard) counterpart. We find that the conditional model performs admirably on datasets drawn from a suitable conditional distribution, outperforming its unconditional counterpart at variable selection. The conditional model is also fit to a small real-world dataset, demonstrating how we obtain regularization paths for the parameters of the model and how we apply cross-validation for this method, where natural unconditional prediction rules are hard to come by. PMID:26257587

  18. Evaluation of bacterial run and tumble motility parameters through trajectory analysis

    NASA Astrophysics Data System (ADS)

    Liang, Xiaomeng; Lu, Nanxi; Chang, Lin-Ching; Nguyen, Thanh H.; Massoudieh, Arash

    2018-04-01

    In this paper, a method for extracting the behavioral parameters of bacterial migration based on the run-and-tumble conceptual model is described. The methodology is applied to microscopic images of the motile movement of flagellated Azotobacter vinelandii. The bacterial cells are considered to change direction during both runs and tumbles, as is evident from the movement trajectories. An unsupervised cluster analysis was performed to fractionate each bacterial trajectory into run and tumble segments, and the distribution of parameters for each mode was then extracted by fitting the mathematical distributions that best represent the data. A Gaussian copula was used to model the autocorrelation in swimming velocity. For both run and tumble modes, the Gamma distribution was found to fit the marginal velocity best, and the Logistic distribution represented the deviation angle better than the other distributions considered. For the transition rate, the log-logistic and log-normal distributions, respectively, were found to perform better than the traditionally assumed exponential distribution. A model was then developed to mimic the motility behavior of bacteria in the presence of flow. The model was applied to evaluate its ability to describe observed patterns of bacterial deposition on surfaces in a micro-model experiment with an approach velocity of 200 μm/s. It was found that the model can qualitatively reproduce the attachment results of the micro-model setting.
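The distribution-fitting step described above (Gamma marginals for swimming speed) can be sketched with scipy; the data here are synthetic and the parameter values are illustrative assumptions, not the paper's estimates.

```python
# Hedged sketch: maximum-likelihood fit of a Gamma distribution to synthetic
# run-mode swimming speeds, analogous to the marginal-velocity fitting above.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
speeds = rng.gamma(shape=4.0, scale=5.0, size=5000)   # synthetic speeds (μm/s)

# Fit a Gamma distribution; loc is fixed at 0 since speeds are non-negative.
shape_hat, loc_hat, scale_hat = stats.gamma.fit(speeds, floc=0)
print(shape_hat, scale_hat)
```

The same pattern (`stats.logistic.fit`, `stats.fisk.fit`, `stats.lognorm.fit`) applies to the deviation-angle and transition-rate distributions mentioned in the abstract.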

  19. Principal component analysis-based pattern analysis of dose-volume histograms and influence on rectal toxicity.

    PubMed

    Söhn, Matthias; Alber, Markus; Yan, Di

    2007-09-01

    The variability of dose-volume histogram (DVH) shapes in a patient population can be quantified using principal component analysis (PCA). We applied this to rectal DVHs of prostate cancer patients and investigated the correlation of the PCA parameters with late bleeding. PCA was applied to the rectal wall DVHs of 262 patients, who had been treated with a four-field box conformal adaptive radiotherapy technique. The correlated changes in the DVH pattern were revealed as "eigenmodes," which were ordered by their importance in representing data set variability. Each DVH is uniquely characterized by its principal components (PCs). The correlation of the first three PCs with chronic rectal bleeding of Grade 2 or greater was investigated with uni- and multivariate logistic regression analyses. Rectal wall DVHs in four-field conformal RT can primarily be represented by the first two or three PCs, which describe approximately 94% or 96% of the DVH shape variability, respectively. The first eigenmode models the total irradiated rectal volume; thus, PC1 correlates with the mean dose. Mode 2 describes the interpatient differences of the relative rectal volume in the two- or four-field overlap region. Mode 3 reveals correlations of volumes with intermediate doses (approximately 40-45 Gy) and volumes with doses >70 Gy; thus, PC3 is associated with the maximal dose. According to univariate logistic regression analysis, only PC2 correlated significantly with toxicity. However, multivariate logistic regression analysis with the first two or three PCs revealed an increased probability of bleeding for DVHs with more than one large PC. PCA can reveal the correlation structure of DVHs for a patient population as imposed by the treatment technique and provide information about its relationship to toxicity. It proves useful for augmenting normal tissue complication probability modeling approaches.
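A minimal sketch of the PCA step on DVH-like curves, assuming simulated sigmoid "DVHs" rather than the actual patient data:

```python
# Hedged sketch: PCA on synthetic DVH-like curves, checking how much of the
# shape variability the first few components capture. Data are simulated.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(3)
dose_bins = np.linspace(0, 80, 81)                    # Gy (illustrative)
# Each "patient" DVH: a sigmoid falloff whose midpoint and steepness vary.
midpoints = rng.normal(45, 5, size=100)
steep = rng.normal(0.15, 0.03, size=100)
dvhs = np.array([1 / (1 + np.exp(s * (dose_bins - m)))
                 for m, s in zip(midpoints, steep)])

pca = PCA(n_components=3)
pca.fit(dvhs)
explained = pca.explained_variance_ratio_.sum()
print(explained)   # fraction of DVH shape variability in the first 3 PCs
```

Because the simulated curves vary in only two ways, the first three components capture nearly all of the variance, mirroring the ~94-96% figure reported for the clinical DVHs.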

  20. Vehicle Scheduling Schemes for Commercial and Emergency Logistics Integration

    PubMed Central

    Li, Xiaohui; Tan, Qingmei

    2013-01-01

    In modern logistics operations, large-scale logistics companies, besides actively participating in profit-seeking commercial business, also play an essential role during emergency relief by dispatching urgently required materials to disaster-affected areas. Consequently, an issue widely addressed by logistics practitioners, and one that has caught researchers' attention, is how logistics companies can achieve maximum commercial profit while ensuring that emergency tasks are performed effectively and satisfactorily. In this paper, two vehicle scheduling models are proposed to solve this problem. One is a prediction-related scheme, which predicts the amounts of disaster-relief materials and commercial business and then accepts the business that will generate maximum profit; the other is a priority-directed scheme, which first groups commercial and emergency business by priority grade and then schedules both types of business jointly and simultaneously to maximize total priority. Moreover, computer-based simulations are carried out to evaluate the performance of these two models by comparing them with two traditional disaster-relief tactics in China. The results testify to the feasibility and effectiveness of the proposed models. PMID:24391724

  1. Vehicle scheduling schemes for commercial and emergency logistics integration.

    PubMed

    Li, Xiaohui; Tan, Qingmei

    2013-01-01

    In modern logistics operations, large-scale logistics companies, besides actively participating in profit-seeking commercial business, also play an essential role during emergency relief by dispatching urgently required materials to disaster-affected areas. Consequently, an issue widely addressed by logistics practitioners, and one that has caught researchers' attention, is how logistics companies can achieve maximum commercial profit while ensuring that emergency tasks are performed effectively and satisfactorily. In this paper, two vehicle scheduling models are proposed to solve this problem. One is a prediction-related scheme, which predicts the amounts of disaster-relief materials and commercial business and then accepts the business that will generate maximum profit; the other is a priority-directed scheme, which first groups commercial and emergency business by priority grade and then schedules both types of business jointly and simultaneously to maximize total priority. Moreover, computer-based simulations are carried out to evaluate the performance of these two models by comparing them with two traditional disaster-relief tactics in China. The results testify to the feasibility and effectiveness of the proposed models.

  2. Stochastic foundations in nonlinear density-regulation growth

    NASA Astrophysics Data System (ADS)

    Méndez, Vicenç; Assaf, Michael; Horsthemke, Werner; Campos, Daniel

    2017-08-01

    In this work we construct individual-based models that give rise to the generalized logistic model at the mean-field deterministic level and that allow us to interpret the parameters of these models in terms of individual interactions. We also study the effect of internal fluctuations on the long-time dynamics for the different models that have been widely used in the literature, such as the theta-logistic and Savageau models. In particular, we determine the conditions for population extinction and calculate the mean time to extinction. If the population does not become extinct, we obtain analytical expressions for the population abundance distribution. Our theoretical results are based on WKB theory and the probability generating function formalism and are verified by numerical simulations.
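The deterministic mean-field limit mentioned above can be illustrated by integrating the theta-logistic equation dn/dt = r n (1 − (n/K)^θ); a simple Euler sketch with illustrative parameter values:

```python
# Hedged sketch: Euler integration of the mean-field theta-logistic equation,
# the deterministic limit of the individual-based models discussed above.
# All parameter values are illustrative assumptions.
import numpy as np

def theta_logistic_trajectory(n0, r, K, theta, dt=0.01, steps=5000):
    """Integrate dn/dt = r*n*(1 - (n/K)**theta) with forward Euler."""
    n = n0
    traj = [n]
    for _ in range(steps):
        n = n + dt * r * n * (1 - (n / K) ** theta)
        traj.append(n)
    return np.array(traj)

traj = theta_logistic_trajectory(n0=5.0, r=0.5, K=100.0, theta=2.0)
print(traj[-1])   # the trajectory approaches the carrying capacity K
```

In the stochastic individual-based versions studied in the paper, demographic noise around this deterministic curve is what drives the (exponentially rare) extinctions.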

  3. Warehouse stocking optimization based on dynamic ant colony genetic algorithm

    NASA Astrophysics Data System (ADS)

    Xiao, Xiaoxu

    2018-04-01

    In view of the varied orders of FAW (First Automotive Works) International Logistics Co., Ltd., the SLP method is used to optimize the layout of the warehousing units in the enterprise, thereby optimizing warehouse logistics and improving the external processing speed of orders. In addition, relevant intelligent algorithms for optimizing the stocking-route problem are analyzed. The ant colony algorithm and the genetic algorithm, which both have good applicability, are studied in depth. The parameters of the ant colony algorithm are optimized by the genetic algorithm, which improves the performance of the ant colony algorithm. A typical path optimization problem model is taken as an example to demonstrate the effectiveness of parameter optimization.

  4. Three methods to construct predictive models using logistic regression and likelihood ratios to facilitate adjustment for pretest probability give similar results.

    PubMed

    Chan, Siew Foong; Deeks, Jonathan J; Macaskill, Petra; Irwig, Les

    2008-01-01

    To compare three predictive models based on logistic regression for estimating adjusted likelihood ratios that allow for interdependence between diagnostic variables (tests). This study was a review of the theoretical basis, assumptions, and limitations of published models, and a statistical extension of the methods with application to a case study of the diagnosis of obstructive airways disease based on history and clinical examination. Albert's method includes an offset term to estimate an adjusted likelihood ratio for combinations of tests. The Spiegelhalter and Knill-Jones method uses the unadjusted likelihood ratio for each test as a predictor and computes shrinkage factors to allow for interdependence. Knottnerus' method differs from the other methods in that it requires sequencing of tests, which limits its application to situations where there are few tests and substantial data. Although parameter estimates differed between the models, predicted "posttest" probabilities were generally similar. Construction of predictive models using logistic regression is preferred to the independence Bayes approach when it is important to adjust for dependence of test errors. Methods that estimate adjusted likelihood ratios from predictive models should be considered in preference to a standard logistic regression model, to facilitate ease of interpretation and application. Albert's method provides the most straightforward approach.
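The pretest-to-posttest adjustment these models estimate rests on Bayes' theorem in odds form; a minimal sketch with illustrative numbers:

```python
# Hedged sketch: the core arithmetic of adjusting a pretest probability with a
# likelihood ratio, which the predictive models above are designed to estimate.
# The numbers are illustrative, not from the case study.
def posttest_probability(pretest_p, likelihood_ratio):
    """Bayes' theorem in odds form: posterior odds = prior odds * LR."""
    prior_odds = pretest_p / (1 - pretest_p)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1 + posterior_odds)

# A positive test with LR = 4 raises a 20% pretest probability to 50%:
# odds 0.25 * 4 = 1.0, and odds of 1.0 correspond to probability 0.5.
p = posttest_probability(0.20, 4.0)
print(round(p, 3))   # 0.5
```

The adjusted likelihood ratios of the three methods replace the naive per-test LR in this formula, so that sequentially applying correlated tests does not overstate the posttest probability.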

  5. One parameter family of master equations for logistic growth and BCM theory

    NASA Astrophysics Data System (ADS)

    De Oliveira, L. R.; Castellani, C.; Turchetti, G.

    2015-02-01

    We propose a one-parameter family of master equations, for the evolution of a population, having the logistic equation as its mean-field limit. The parameter α determines the relative weight of linear versus nonlinear terms in the population number n ⩽ N entering the loss term. By varying α from 0 to 1 the equilibrium distribution changes from maximum growth to almost extinction. The former is a Gaussian centered at n = N, the latter a power law peaked at n = 1. A bimodal distribution is observed in the transition region. When N grows and tends to ∞, keeping the value of α fixed, the distribution tends to a Gaussian centered at n = N whose limit is a delta function corresponding to the stable equilibrium of the mean-field equation. The choice of the master equation in this family depends on the equilibrium distribution for finite values of N. The presence of an absorbing state at n = 0 does not change this picture, since the mean extinction time grows exponentially fast with N. As a consequence, for α close to zero extinction is not observed, whereas when α approaches 1 relaxation to a power law is observed before extinction occurs. We extend this approach to a well-known model of synaptic plasticity, the so-called BCM theory, in the case of a single neuron with one or two synapses.

  6. Phase-synchronisation in continuous flow models of production networks

    NASA Astrophysics Data System (ADS)

    Scholz-Reiter, Bernd; Tervo, Jan Topi; Freitag, Michael

    2006-04-01

    To improve their position in the market, many companies concentrate on their core competences and hence cooperate with suppliers and distributors. Thus, strong linkages develop between many independent companies, and production and logistics networks emerge. These networks are characterised by permanently increasing complexity and are nowadays forced to adapt to dynamically changing markets. This complicates enterprise-spanning production planning and control enormously. Therefore, a continuous flow model for production networks is derived with regard to these special logistic problems. Furthermore, phase-synchronisation effects are presented and their dependence on the set of network parameters is investigated.

  7. Impact of brown adipose tissue on body fatness and glucose metabolism in healthy humans.

    PubMed

    Matsushita, M; Yoneshiro, T; Aita, S; Kameya, T; Sugie, H; Saito, M

    2014-06-01

    Brown adipose tissue (BAT) is involved in the regulation of whole-body energy expenditure and adiposity. Some clinical studies have reported an association between BAT and blood glucose in humans. To examine the impact of BAT on glucose metabolism, independent of that of body fatness, age and sex in healthy adult humans. Two hundred and sixty healthy volunteers (184 males and 76 females, 20-72 years old) underwent fluorodeoxyglucose-positron emission tomography and computed tomography after 2 h of cold exposure to assess maximal BAT activity. Blood parameters including glucose, HbA1c and low-density lipoprotein (LDL)/high-density lipoprotein-cholesterol were measured by conventional methods, and body fatness was estimated from body mass index (BMI), body fat mass and abdominal fat area. The impact of BAT on body fatness and blood parameters was determined by logistic regression with the use of univariate and multivariate models. Cold-activated BAT was detected in 125 (48%) out of 260 subjects. When compared with subjects without detectable BAT, those with detectable BAT were younger and showed lower adiposity-related parameters such as the BMI, body fat mass and abdominal fat area. Although blood parameters were within the normal range in the two subject groups, HbA1c, total cholesterol and LDL-cholesterol were significantly lower in the BAT-positive group. Blood glucose also tended to be lower in the BAT-positive group. Logistic regression demonstrated that BAT, in addition to age and sex, was independently associated with BMI, body fat mass, and abdominal visceral and subcutaneous fat areas. For blood parameters, multivariate analysis after adjustment for age, sex and body fatness revealed that BAT was a significantly independent determinant of glucose and HbA1c. BAT, independent of age, sex and body fatness, has a significant impact on glucose metabolism in adult healthy humans.

  8. Modeling, analysis, and simulation of the co-development of road networks and vehicle ownership

    NASA Astrophysics Data System (ADS)

    Xu, Mingtao; Ye, Zhirui; Shan, Xiaofeng

    2016-01-01

    A two-dimensional logistic model is proposed to describe the co-development of road networks and vehicle ownership. The endogenous interaction between road networks and vehicle ownership, and how natural market forces and policies translate into their co-development, are considered jointly in this model. If the parameters involved satisfy a certain condition, the proposed model arrives at a steady equilibrium level and the final development scale stays within the maximum capacity of the urban traffic system; otherwise, the co-development process is unstable and can even exhibit chaotic behavior. Sensitivity tests are then developed to determine proper values for a series of parameters in this model. Finally, a case study, using Beijing City as an example, is conducted to explore the applicability of the proposed model under real conditions. Results demonstrate that the proposed model can effectively simulate the co-development of road networks and vehicle ownership for Beijing City. Furthermore, we find that the development processes will arrive at a stable equilibrium level in the years 2040 and 2045, respectively, with equilibrium values within the maximum capacity.
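The paper's exact model form is not reproduced here; as a generic illustration of a coupled two-dimensional logistic system in which each variable raises the other's effective capacity (all coefficients and functional choices are assumptions):

```python
# Hedged sketch: a generic coupled two-dimensional logistic system, with road
# network size x and vehicle ownership y each enlarging the other's effective
# carrying capacity. Coefficients are illustrative only, not the paper's.
def simulate(x0, y0, rx, ry, Kx, Ky, a, b, dt=0.01, steps=20000):
    x, y = x0, y0
    for _ in range(steps):
        dx = rx * x * (1 - x / (Kx + a * y))   # roads grow with more vehicles
        dy = ry * y * (1 - y / (Ky + b * x))   # vehicles grow with more roads
        x, y = x + dt * dx, y + dt * dy
    return x, y

x_eq, y_eq = simulate(1.0, 1.0, rx=0.3, ry=0.4, Kx=50.0, Ky=40.0, a=0.2, b=0.3)
print(x_eq, y_eq)
```

With weak coupling (a*b < 1) the system settles at the stable equilibrium of x = Kx + a*y and y = Ky + b*x; stronger coupling or discrete-time dynamics can destabilize it, echoing the stability condition discussed in the abstract.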

  9. Fitting the Rasch Model to Account for Variation in Item Discrimination

    ERIC Educational Resources Information Center

    Weitzman, R. A.

    2009-01-01

    Building on the Kelley and Gulliksen versions of classical test theory, this article shows that a logistic model having only a single item parameter can account for varying item discrimination, as well as difficulty, by using item-test correlations to adjust incorrect-correct (0-1) item responses prior to an initial model fit. The fit occurs…

  10. A Comparison of Exposure Control Procedures in CAT Systems Based on Different Measurement Models for Testlets

    ERIC Educational Resources Information Center

    Boyd, Aimee M.; Dodd, Barbara; Fitzpatrick, Steven

    2013-01-01

    This study compared several exposure control procedures for CAT systems based on the three-parameter logistic testlet response theory model (Wang, Bradlow, & Wainer, 2002) and Masters' (1982) partial credit model when applied to a pool consisting entirely of testlets. The exposure control procedures studied were the modified within 0.10 logits…
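For reference, the three-parameter logistic item response function underlying the testlet model can be written out directly; the parameter values below are illustrative:

```python
# Hedged sketch: the standard three-parameter logistic (3PL) item response
# function, with discrimination a, difficulty b, and guessing parameter c.
# Parameter values are illustrative, not from the cited studies.
import math

def p_correct_3pl(theta, a, b, c):
    """Probability of a correct response under the 3PL model."""
    return c + (1 - c) / (1 + math.exp(-a * (theta - b)))

# At ability theta equal to the difficulty b, the probability is halfway
# between the guessing floor c and 1.
p = p_correct_3pl(theta=0.5, a=1.2, b=0.5, c=0.2)
print(p)   # 0.6 = 0.2 + 0.8 * 0.5
```

Setting c = 0 recovers the two-parameter logistic model that titles this collection, and additionally fixing a common a gives the Rasch model.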

  11. The Association between Bone Quality and Atherosclerosis: Results from Two Large Population-Based Studies

    PubMed Central

    Lange, V.; Dörr, M.; Schminke, U.; Völzke, H.; Nauck, M.; Wallaschofski, H.

    2017-01-01

    Objective It is highly debated whether associations between osteoporosis and atherosclerosis are independent of cardiovascular risk factors. We aimed to explore the associations of quantitative ultrasound (QUS) parameters at the heel with the carotid artery intima-media thickness (IMT), the presence of carotid artery plaques, and the ankle-brachial index (ABI). Methods The study population comprised 5680 men and women aged 20–93 years from two population-based cohort studies: Study of Health in Pomerania (SHIP) and SHIP-Trend. QUS measurements were performed at the heel. The extracranial carotid arteries were examined with B-mode ultrasonography. ABI was measured in a subgroup of 3853 participants. Analyses of variance and linear and logistic regression models were calculated and adjusted for major cardiovascular risk factors. Results Men but not women had significantly increased odds for carotid artery plaques with decreasing QUS parameters, independent of diabetes mellitus, dyslipidemia, and hypertension. Beyond this, the QUS parameters were not significantly associated with IMT or ABI in fully adjusted models. Conclusions Our data argue against an independent role of bone metabolism in atherosclerotic changes in women. Yet, in men, associations with advanced atherosclerosis exist. Thus, men presenting with clinical signs of osteoporosis may be at increased risk for atherosclerotic disease. PMID:28852407

  12. Modeling the rheological behavior of thermosonic extracted guava, pomelo, and soursop juice concentrates at different concentration and temperature using a new combination model

    PubMed Central

    Abdullah, Norazlin; Yusof, Yus A.; Talib, Rosnita A.

    2017-01-01

    Abstract This study modeled the rheological behavior of thermosonic extracted pink-fleshed guava, pink-fleshed pomelo, and soursop juice concentrates at different concentrations and temperatures. The effect of concentration on the consistency coefficient (K) and flow behavior index (n) of the fruit juice concentrates was modeled using a master curve, which utilized concentration-temperature shifting to allow a general prediction of rheological behavior over a wide concentration range. For modeling the effects of temperature on K and n, the integration of two functions, from the Arrhenius and logistic sigmoidal growth equations, provided a new model that gave a better description of the properties and alleviated the negative-region problem that arises when using the Arrhenius model alone. The fitted regression using this new model improved the coefficient of determination, with R² values above 0.9792, compared with the Arrhenius and logistic sigmoidal models alone, which gave minimum R² values of 0.6243 and 0.9440, respectively. Practical applications In general, juice concentrate is a better form of food for transportation, preservation, and use as an ingredient. Models are necessary to predict the effects of processing factors such as concentration and temperature on the rheological behavior of juice concentrates. The modeling approach allows prediction of behavior and determination of processing parameters. The master curve model introduced in this study simplifies and generalizes the rheological behavior of juice concentrates over a wide range of concentrations when the temperature factor is insignificant. The proposed mathematical model combining the Arrhenius and logistic sigmoidal growth models improves and extends the description of the rheological properties of fruit juice concentrates, and solves the problem of negative predicted values of the consistency coefficient and flow behavior index under the existing Arrhenius equation. These rheological models provide useful information for juice processing and equipment manufacturing needs. PMID:29479123

  13. Modeling Governance KB with CATPCA to Overcome Multicollinearity in the Logistic Regression

    NASA Astrophysics Data System (ADS)

    Khikmah, L.; Wijayanto, H.; Syafitri, U. D.

    2017-04-01

    A problem often encountered in logistic regression modeling is multicollinearity. Multicollinearity between explanatory variables results in biased parameter estimates and in classification errors. In general, stepwise regression is used to overcome multicollinearity in regression. There is also another method, which retains all variables for prediction: Principal Component Analysis (PCA). However, classical PCA is only for numeric data. When the data are categorical, one method to solve the problem is Categorical Principal Component Analysis (CATPCA). The data used in this research were a part of the Demographic and Population Survey Indonesia (IDHS) 2012. This research focuses on the characteristics of women using contraceptive methods. Classification results were evaluated using Area Under Curve (AUC) values; the higher the AUC value, the better. Based on AUC values, the classification of the contraceptive method using the stepwise method (58.66%) is better than the logistic regression model (57.39%) and CATPCA (57.39%). Evaluation of the logistic regression results using sensitivity shows the opposite: the CATPCA method (99.79%) is better than the logistic regression method (92.43%) and stepwise (92.05%). Because this study focuses on the major class (using a contraceptive method), the selected model is CATPCA, as it raises the model accuracy for the major class.

  14. Tennis Elbow Diagnosis Using Equivalent Uniform Voltage to Fit the Logistic and the Probit Diseased Probability Models

    PubMed Central

    Lin, Wei-Chun; Lin, Shu-Yuan; Wu, Li-Fu; Guo, Shih-Sian; Huang, Hsiang-Jui; Chao, Pei-Ju

    2015-01-01

    The aim was to develop logistic and probit models to analyse the electromyographic (EMG) equivalent uniform voltage (EUV) response for the tenderness of tennis elbow. In total, 78 hands from 39 subjects were enrolled. In this study, the surface EMG (sEMG) signal was obtained by an innovative device with electrodes over the forearm region. The analytical endpoint was defined as Visual Analog Score (VAS) 3+ tenderness of tennis elbow. The logistic and probit diseased probability (DP) models were established from the VAS score and the EMG absolute voltage-time histograms (AVTH). TV50 is the threshold equivalent uniform voltage predicting a 50% risk of disease. Twenty-one out of 78 samples (27%) developed VAS 3+ tenderness of tennis elbow, reported by the subject and confirmed by the physician. The fitted DP parameters were TV50 = 153.0 mV (CI: 136.3–169.7 mV), γ50 = 0.84 (CI: 0.78–0.90) for the logistic model and TV50 = 155.6 mV (CI: 138.9–172.4 mV), m = 0.54 (CI: 0.49–0.59) for the probit model. When the EUV ≥ 153 mV, the DP of the patient is greater than 50%, and vice versa. The logistic and probit models are valuable tools to predict the DP of VAS 3+ tenderness of tennis elbow. PMID:26380281

  15. Investigating the effect of invasion characteristics on onion thrips (Thysanoptera: Thripidae) populations in onions with a temperature-driven process model.

    PubMed

    Mo, Jianhua; Stevens, Mark; Liu, De Li; Herron, Grant

    2009-12-01

    A temperature-driven process model was developed to describe the seasonal patterns of populations of onion thrips, Thrips tabaci Lindeman, in onions. The model used daily cohorts (individuals of the same developmental stage and daily age) as the population unit. Stage transitions were modeled as a logistic function of accumulated degree-days to account for variability in development rate among individuals. Daily survival was modeled as a logistic function of daily mean temperature. Parameters for development, survival, and fecundity were estimated from published data. A single invasion event was used to initiate the population process, starting at 1-100 d after onion emergence (DAE) for 10-100 d at a daily rate of 0.001-0.9 adults/plant/d. The model was validated against five observed seasonal patterns of onion thrips populations from two unsprayed sites in the Riverina, New South Wales, Australia, during 2003-2006. Performance of the model was measured by a fit index based on the proportion of variation in the observed data explained by the model (R²) and by the differences in total thrips-days between observed and predicted populations. Satisfactory matching between simulated and observed seasonal patterns was obtained within the ranges of invasion parameters tested. The model best fit was obtained at invasion starting dates of 6-98 DAE with a daily invasion rate of 0.002-0.2 adults/plant/d and an invasion duration of 30-100 d. Under the best-fit invasion scenarios, the model closely reproduced the observed seasonal patterns, explaining 73-95% of the variability in adult and larval densities during population increase periods. The results showed that small invasions of adult thrips followed by a gradual population build-up within onion crops were sufficient to bring about the observed seasonal patterns of onion thrips populations in onion. Implications of the model for the timing of chemical controls are discussed.

  16. Cognitive Psychology Meets Psychometric Theory: On the Relation between Process Models for Decision Making and Latent Variable Models for Individual Differences

    ERIC Educational Resources Information Center

    van der Maas, Han L. J.; Molenaar, Dylan; Maris, Gunter; Kievit, Rogier A.; Borsboom, Denny

    2011-01-01

    This article analyzes latent variable models from a cognitive psychology perspective. We start by discussing work by Tuerlinckx and De Boeck (2005), who proved that a diffusion model for 2-choice response processes entails a 2-parameter logistic item response theory (IRT) model for individual differences in the response data. Following this line…

  17. Analysing biomass torrefaction supply chain costs.

    PubMed

    Svanberg, Martin; Olofsson, Ingemar; Flodén, Jonas; Nordin, Anders

    2013-08-01

    The objective of the present work was to develop a techno-economic system model to evaluate how logistics and production parameters affect the torrefaction supply chain costs under Swedish conditions. The model consists of four sub-models: (1) the supply system, (2) a complete energy and mass balance of drying, torrefaction, and densification, (3) investment and operating costs of a greenfield, stand-alone torrefaction pellet plant, and (4) the distribution system to the gate of an end user. The results show that the torrefaction supply chain reaps significant economies of scale up to a plant size of about 150-200 kilotons of dry substance per year (ktonDS/year), at which the total supply chain cost amounts to 31.8 euros per megawatt hour based on the lower heating value (€/MWhLHV). Important parameters affecting the total cost are the amount of available biomass, the biomass premium, logistics equipment, biomass moisture content, drying technology, torrefaction mass yield, and torrefaction plant capital expenditure (CAPEX). Copyright © 2013 Elsevier Ltd. All rights reserved.

  18. A comparison of item response models for accuracy and speed of item responses with applications to adaptive testing.

    PubMed

    van Rijn, Peter W; Ali, Usama S

    2017-05-01

    We compare three modelling frameworks for accuracy and speed of item responses in the context of adaptive testing. The first framework is based on modelling scores that result from a scoring rule that incorporates both accuracy and speed. The second framework is the hierarchical modelling approach developed by van der Linden (2007, Psychometrika, 72, 287) in which a regular item response model is specified for accuracy and a log-normal model for speed. The third framework is the diffusion framework in which the response is assumed to be the result of a Wiener process. Although the three frameworks differ in the relation between accuracy and speed, one commonality is that the marginal model for accuracy can be simplified to the two-parameter logistic model. We discuss both conditional and marginal estimation of model parameters. Models from all three frameworks were fitted to data from a mathematics and spelling test. Furthermore, we applied a linear and adaptive testing mode to the data off-line in order to determine differences between modelling frameworks. It was found that a model from the scoring rule framework outperformed a hierarchical model in terms of model-based reliability, but the results were mixed with respect to correlations with external measures. © 2017 The British Psychological Society.

  19. In vitro differential diagnosis of clavus and verruca by a predictive model generated from electrical impedance.

    PubMed

    Hung, Chien-Ya; Sun, Pei-Lun; Chiang, Shu-Jen; Jaw, Fu-Shan

    2014-01-01

    Similar clinical appearances prevent accurate diagnosis of two common skin diseases, clavus and verruca. In this study, electrical impedance is employed as a novel tool to generate a predictive model for differentiating these two diseases. We used 29 clavus and 28 verruca lesions. To obtain impedance parameters, an LCR-meter system was applied to measure capacitance (C), resistance (Re), impedance magnitude (Z), and phase angle (θ). These values were combined with lesion thickness (d) to characterize the tissue specimens. The results from clavus and verruca were then fitted to a univariate logistic regression model with the generalized estimating equations (GEE) method. In model generation, log ZSD and θSD were formulated as predictors by fitting a multiple logistic regression model with the same GEE method. Potential nonlinear effects of covariates were detected by fitting generalized additive models (GAM). Moreover, the model was validated by goodness-of-fit (GOF) assessments. Significant mean differences in the indices d, Re, Z, and θ are found between clavus and verruca (p<0.001). A final predictive model is established with the Z and θ indices. The model fits the observed data quite well. In the GOF evaluation, the area under the receiver operating characteristic (ROC) curve is 0.875 (>0.7), the adjusted generalized R² is 0.512 (>0.3), and the p value of the Hosmer-Lemeshow GOF test is 0.350 (>0.05). This technique promises to provide a validated model for differential diagnosis of clavus and verruca. It could provide a rapid, relatively low-cost, safe, and non-invasive screening tool for clinical use.

  20. Flexibility evaluation of multiechelon supply chains.

    PubMed

    Almeida, João Flávio de Freitas; Conceição, Samuel Vieira; Pinto, Luiz Ricardo; de Camargo, Ricardo Saraiva; Júnior, Gilberto de Miranda

    2018-01-01

    Multiechelon supply chains are complex logistics systems that require flexibility and coordination at a tactical level to cope with environmental uncertainties in an efficient and effective manner. To cope with these challenges, mathematical programming models are developed to evaluate supply chain flexibility. However, under uncertainty, supply chain models become complex and the scope of flexibility analysis is generally reduced. This paper presents a unified approach that can evaluate the flexibility of a four-echelon supply chain via a robust stochastic programming model. The model simultaneously considers the plans of multiple business divisions such as marketing, logistics, manufacturing, and procurement, whose goals are often conflicting. A numerical example with deterministic parameters is presented to introduce the analysis, and then, the model stochastic parameters are considered to evaluate flexibility. The results of the analysis on supply, manufacturing, and distribution flexibility are presented. Tradeoff analysis of demand variability and service levels is also carried out. The proposed approach facilitates the adoption of different management styles, thus improving supply chain resilience. The model can be extended to contexts pertaining to supply chain disruptions; for example, the model can be used to explore operation strategies when subtle events disrupt supply, manufacturing, or distribution.

  1. Growth models of Rhizophora mangle L. seedlings in tropical southwestern Atlantic

    NASA Astrophysics Data System (ADS)

    Lima, Karen Otoni de Oliveira; Tognella, Mônica Maria Pereira; Cunha, Simone Rabelo; Andrade, Humber Agrelli de

    2018-07-01

    The present study selected and compared regression models that best describe the growth curves of Rhizophora mangle seedlings based on the height (cm) and time (days) variables. The Linear, Exponential, Power Law, Monomolecular, Logistic, and Gompertz models were adjusted with non-linear formulations and minimization of the sum of squared residuals. The Akaike Information Criterion was used to select the best model for each seedling. After this selection, the coefficient of determination, which evaluates how well a model describes height variation as a function of time, was inspected. Differing from classic population ecology studies, the Monomolecular, Three-parameter Logistic, and Gompertz models presented the best performance in describing growth, suggesting they are the most adequate options for long-term studies. The different growth curves reflect the complexity of stem growth at the seedling stage for R. mangle. The analysis of the joint distribution of the parameters initial height, growth rate, and asymptotic size allowed us to study the species' ecological attributes and to observe its intraspecific variability in each model. Our results provide a basis for interpreting the dynamics of seedling growth during establishment in a mature forest, as well as its regeneration processes.
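    The model-selection workflow described above can be sketched by fitting hypothetical height-time data with two of the candidate curves and comparing Akaike Information Criterion values (synthetic data and assumed parameter values, not the study's measurements):

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic3(t, A, k, t0):
    """Three-parameter logistic growth: height approaches the asymptote A."""
    return A / (1 + np.exp(-k * (t - t0)))

def gompertz(t, A, k, t0):
    """Gompertz growth: an asymmetric sigmoid with the same asymptote A."""
    return A * np.exp(-np.exp(-k * (t - t0)))

def aic(rss, n, k_params):
    # Gaussian-likelihood AIC, up to an additive constant.
    return n * np.log(rss / n) + 2 * k_params

rng = np.random.default_rng(1)
t = np.linspace(0, 300, 40)                                       # days
h = logistic3(t, 60.0, 0.04, 150.0) + rng.normal(0, 1.0, t.size)  # cm

fits = {}
for name, f in [("logistic", logistic3), ("gompertz", gompertz)]:
    params, _ = curve_fit(f, t, h, p0=[50.0, 0.05, 100.0], maxfev=10000)
    rss = np.sum((h - f(t, *params)) ** 2)
    fits[name] = aic(rss, t.size, 3)

best = min(fits, key=fits.get)  # model with the lowest AIC
```

The same comparison extends naturally to the Monomolecular and other candidates by adding more entries to the loop.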

  2. Flexibility evaluation of multiechelon supply chains

    PubMed Central

    Conceição, Samuel Vieira; Pinto, Luiz Ricardo; de Camargo, Ricardo Saraiva; Júnior, Gilberto de Miranda

    2018-01-01

    Multiechelon supply chains are complex logistics systems that require flexibility and coordination at a tactical level to cope with environmental uncertainties in an efficient and effective manner. To cope with these challenges, mathematical programming models are developed to evaluate supply chain flexibility. However, under uncertainty, supply chain models become complex and the scope of flexibility analysis is generally reduced. This paper presents a unified approach that can evaluate the flexibility of a four-echelon supply chain via a robust stochastic programming model. The model simultaneously considers the plans of multiple business divisions such as marketing, logistics, manufacturing, and procurement, whose goals are often conflicting. A numerical example with deterministic parameters is presented to introduce the analysis, and then, the model stochastic parameters are considered to evaluate flexibility. The results of the analysis on supply, manufacturing, and distribution flexibility are presented. Tradeoff analysis of demand variability and service levels is also carried out. The proposed approach facilitates the adoption of different management styles, thus improving supply chain resilience. The model can be extended to contexts pertaining to supply chain disruptions; for example, the model can be used to explore operation strategies when subtle events disrupt supply, manufacturing, or distribution. PMID:29584755

  3. A hybrid solution approach for a multi-objective closed-loop logistics network under uncertainty

    NASA Astrophysics Data System (ADS)

    Mehrbod, Mehrdad; Tu, Nan; Miao, Lixin

    2015-06-01

    The design of closed-loop logistics (forward and reverse logistics) has attracted growing attention with the stringent pressures of customer expectations, environmental concerns and economic factors. This paper considers a multi-product, multi-period and multi-objective closed-loop logistics network model with regard to facility expansion as a facility location-allocation problem, which more closely approximates real-world conditions. A multi-objective mixed integer nonlinear programming formulation is linearized by defining new variables and adding new constraints to the model. By considering the aforementioned model under uncertainty, this paper develops a hybrid solution approach by combining an interactive fuzzy goal programming approach and robust counterpart optimization based on three well-known robust counterpart optimization formulations. Finally, this paper compares the results of the three formulations using different test scenarios and parameter-sensitive analysis in terms of the quality of the final solution, CPU time, the level of conservatism, the degree of closeness to the ideal solution, the degree of balance involved in developing a compromise solution, and satisfaction degree.

  4. MARSnet: Mission-aware Autonomous Radar Sensor Network for Future Combat Systems

    DTIC Science & Technology

    2007-05-03

    Parameter estimation for the 3-parameter log-logistic distribution (LLD3) is described. Application areas discussed include physical security, air traffic control, traffic monitoring, video surveillance, and industrial automation.

  5. Quantitative fibrosis parameters highly predict esophageal-gastro varices in primary biliary cirrhosis.

    PubMed

    Wu, Q-M; Zhao, X-Y; You, H

    2016-01-01

    Esophageal-gastro varices (EGV) may develop in any histological stage of primary biliary cirrhosis (PBC). We aim to establish and validate quantitative fibrosis (qFibrosis) parameters in portal, septal and fibrillar areas as ideal predictors of EGV in PBC patients. PBC patients with liver biopsy, esophagogastroscopy and Second Harmonic Generation (SHG)/Two-photon Excited Fluorescence (TPEF) microscopy images were retrospectively enrolled in this study. qFibrosis parameters in portal, septal and fibrillar areas were acquired by a computer-assisted SHG/TPEF imaging system. Independent predictors were identified using multivariate logistic regression analysis. Among the forty-nine PBC patients with qFibrosis images, twenty-nine with both esophagogastroscopy and qFibrosis data were selected for EGV prognosis analysis, and 44.8% (13/29) of them had EGV. The qFibrosis parameters of collagen percentage and number of crosslinks in the fibrillar area, the number of short/long/thin strings, and the length/width of strings in the septal area were associated with EGV (p < 0.05). Multivariate logistic analysis showed that a collagen percentage in the fibrillar area ≥ 3.6% was an independent predictor of EGV (odds ratio 6.9; 95% confidence interval 1.6-27.4). The area under the receiver operating characteristic (ROC) curve, diagnostic sensitivity and specificity were 0.9, 100% and 75%, respectively. Collagen percentage in the fibrillar area, as an independent predictor, can highly predict EGV in PBC patients.

  6. [Individual growth modeling of the penshell Atrina maura (Bivalvia: Pinnidae) using a multi model inference approach].

    PubMed

    Aragón-Noriega, Eugenio Alberto

    2013-09-01

    Growth models of marine animals, for fisheries and/or aquaculture purposes, are based on the popular von Bertalanffy model. This tool is mostly used because its parameters feed into other fisheries models, such as yield per recruit; nevertheless, there are other alternatives (such as Gompertz, Logistic, and Schnute) not yet widely used by fishery scientists that may prove useful depending on the studied species. The penshell Atrina maura has been studied for fisheries and aquaculture purposes, but its individual growth had not been studied before. The aim of this study was to model the absolute growth of the penshell A. maura using length-age data. For this, five models were assessed to obtain growth parameters: von Bertalanffy, Gompertz, Logistic, Schnute case 1, and Schnute-Richards. The criteria used to select the best model were the Akaike information criterion, the residual sum of squares, and the adjusted R2. To obtain the average asymptotic length, the multi-model inference approach was used. According to the Akaike information criterion, the Gompertz model best described the absolute growth of A. maura. Following the multi-model inference approach, the average asymptotic shell length was 218.9 mm (CI 212.3-225.5). I concluded that the multi-model approach with the Akaike information criterion represented the most robust method for growth parameter estimation of A. maura, and that the von Bertalanffy growth model should not be selected a priori as the true model for absolute growth in bivalve mollusks such as the species studied here.
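    The multi-model inference step can be sketched with Akaike weights; the AIC values and asymptotic-length estimates below are hypothetical illustrations, not the study's numbers:

```python
import numpy as np

# Hypothetical AIC values and asymptotic-length estimates for five
# candidate growth models (made-up numbers for illustration).
aic = np.array([512.3, 498.1, 505.7, 500.9, 499.4])
l_inf = np.array([231.0, 216.5, 224.8, 219.2, 217.6])  # mm

# Akaike weights: w_i proportional to exp(-delta_i / 2),
# where delta_i = AIC_i - min(AIC).
delta = aic - aic.min()
w = np.exp(-delta / 2)
w /= w.sum()

# Multi-model averaged asymptotic shell length.
l_avg = float(w @ l_inf)
```

The weight of each model reflects its relative support; the averaged estimate borrows from all candidates rather than committing to a single "true" model.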

  7. glmnetLRC f/k/a lrc package: Logistic Regression Classification

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2016-06-09

    Methods for fitting and predicting logistic regression classifiers (LRC) with an arbitrary loss function using elastic net or best subsets. This package adds additional model fitting features to the existing glmnet and bestglm R packages. This package was created to perform the analyses described in Amidan BG, Orton DJ, LaMarche BL, et al. 2014. Signatures for Mass Spectrometry Data Quality. Journal of Proteome Research. 13(4), 2215-2222. It makes the model fitting available in the glmnet and bestglm packages more general by identifying optimal model parameters via cross-validation with a customizable loss function. It also identifies the optimal threshold for binary classification.

  8. A Comparison of Exposure Control Procedures in CATs Using the 3PL Model

    ERIC Educational Resources Information Center

    Leroux, Audrey J.; Lopez, Myriam; Hembry, Ian; Dodd, Barbara G.

    2013-01-01

    This study compares the progressive-restricted standard error (PR-SE) exposure control procedure to three commonly used procedures in computerized adaptive testing, the randomesque, Sympson-Hetter (SH), and no exposure control methods. The performance of these four procedures is evaluated using the three-parameter logistic model under the…
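    The three-parameter logistic model used in these simulations adds a lower-asymptote guessing parameter c to the two-parameter form; a minimal sketch:

```python
import math

def three_pl(theta, a, b, c):
    """Three-parameter logistic IRF: the lower asymptote c models guessing,
    P = c + (1 - c) / (1 + exp(-a * (theta - b)))."""
    return c + (1 - c) / (1 + math.exp(-a * (theta - b)))

# Even a very low-ability examinee succeeds with probability at least c.
p_low = three_pl(theta=-4.0, a=1.0, b=0.0, c=0.2)
```

With c = 0 the function reduces to the two-parameter logistic model.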

  9. Conditions for the return and simulation of the recovery of burrowing mayflies in western Lake Erie

    USGS Publications Warehouse

    Kolar, Cynthia S.; Hudson, Patrick L.; Savino, Jacqueline F.

    1997-01-01

    In the 1950s, burrowing mayflies, Hexagenia spp. (H. limbata and H. rigida), were virtually eliminated from the western basin of Lake Erie (a 3300 km² area) because of eutrophication and pollution. We develop and present a deterministic model for the recolonization of the western basin by Hexagenia to pre-1953 densities. The model was based on the logistic equation describing the population growth of Hexagenia and a presumed competitor, Chironomus (dipteran larvae). Other parameters (immigration, low oxygen, toxic sediments, competition with Chironomus, and fish predation) were then individually added to the logistic model to determine their effect at different growth rates. The logistic model alone predicts 10-41 yr for Hexagenia to recolonize western Lake Erie. Immigration reduced the recolonization time by 2-17 yr. One low-oxygen event during the first 20 yr increased recovery time by 5-17 yr. Contaminated sediments added 5-11 yr to the recolonization time. Competition with Chironomus added 8-19 yr to recovery. Fish predators added 4-47 yr to the time required for recolonization. The full model predicted 48-81 yr for Hexagenia to reach a carrying capacity of approximately 350 nymphs/m², or not until around the year 2038 if the model is started in 1990. The model was verified by changing model parameters to those present in 1970, beginning the model in 1970 and running it through 1990. Predicted densities overlapped almost completely with actual estimated densities of Hexagenia nymphs present in the western basin of Lake Erie in 1990. The model suggests that recovery of large aquatic ecosystems may lag substantially behind remediation efforts.
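    A minimal sketch of the deterministic logistic recolonization idea, with made-up parameter values rather than the study's calibrated ones:

```python
def years_to_recover(n0, K, r, immigration=0.0, target=0.95, max_years=200):
    """Iterate the discrete logistic update N' = N + r*N*(1 - N/K) + I and
    return the first year in which N reaches target*K. This is a sketch of
    the paper's deterministic recolonization model; the parameter values
    used below are illustrative, not the study's."""
    n = n0
    for year in range(1, max_years + 1):
        n = n + r * n * (1 - n / K) + immigration
        if n >= target * K:
            return year
    return None

# Recovery toward a carrying capacity of 350 nymphs, with and without
# a constant immigration term.
base = years_to_recover(n0=1.0, K=350.0, r=0.5)
with_imm = years_to_recover(n0=1.0, K=350.0, r=0.5, immigration=5.0)
```

As in the paper, adding immigration shortens the predicted recolonization time; the other factors (low oxygen, toxic sediments, competition, predation) would enter as further terms modifying r or N.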

  10. A Solution to Separation and Multicollinearity in Multiple Logistic Regression

    PubMed Central

    Shen, Jianzhao; Gao, Sujuan

    2010-01-01

    In dementia screening tests, item selection for shortening an existing screening test can be achieved using multiple logistic regression. However, maximum likelihood estimates for such logistic regression models often experience serious bias or even non-existence because of separation and multicollinearity problems resulting from a large number of highly correlated items. Firth (1993, Biometrika, 80(1), 27–38) proposed a penalized likelihood estimator for generalized linear models and it was shown to reduce bias and the non-existence problems. The ridge regression has been used in logistic regression to stabilize the estimates in cases of multicollinearity. However, neither solves the problems for each other. In this paper, we propose a double penalized maximum likelihood estimator combining Firth’s penalized likelihood equation with a ridge parameter. We present a simulation study evaluating the empirical performance of the double penalized likelihood estimator in small to moderate sample sizes. We demonstrate the proposed approach using a current screening data from a community-based dementia study. PMID:20376286
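    A rough sketch of a double-penalized logistic fit, combining a Firth-style Jeffreys-prior term with a ridge penalty. This is an illustration of the idea under simple assumptions, not the authors' exact estimator:

```python
import numpy as np
from scipy.optimize import minimize

def double_penalized_fit(X, y, ridge=1.0):
    """Maximize log-likelihood + 0.5*log|X'WX| (Firth-style penalty)
    - 0.5*ridge*||beta||^2 (ridge penalty). A sketch, not the paper's
    exact double penalized maximum likelihood estimator."""
    def neg_pen_loglik(beta):
        eta = X @ beta
        p = 1 / (1 + np.exp(-eta))
        p = np.clip(p, 1e-10, 1 - 1e-10)
        ll = np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))
        W = p * (1 - p)                       # logistic weights
        sign, logdet = np.linalg.slogdet((X.T * W) @ X)
        return -(ll + 0.5 * logdet - 0.5 * ridge * beta @ beta)
    res = minimize(neg_pen_loglik, np.zeros(X.shape[1]), method="Nelder-Mead")
    return res.x

# Perfectly separated data: the ordinary MLE diverges to infinity,
# while the penalized fit stays finite.
X = np.column_stack([np.ones(6), [-3.0, -2.0, -1.0, 1.0, 2.0, 3.0]])
y = np.array([0.0, 0, 0, 1, 1, 1])
beta = double_penalized_fit(X, y)
```

The separated toy data set makes the point of the paper concrete: without penalization the slope estimate would not exist.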

  11. A Solution to Separation and Multicollinearity in Multiple Logistic Regression.

    PubMed

    Shen, Jianzhao; Gao, Sujuan

    2008-10-01

    In dementia screening tests, item selection for shortening an existing screening test can be achieved using multiple logistic regression. However, maximum likelihood estimates for such logistic regression models often experience serious bias or even non-existence because of separation and multicollinearity problems resulting from a large number of highly correlated items. Firth (1993, Biometrika, 80(1), 27-38) proposed a penalized likelihood estimator for generalized linear models and it was shown to reduce bias and the non-existence problems. The ridge regression has been used in logistic regression to stabilize the estimates in cases of multicollinearity. However, neither solves the problems for each other. In this paper, we propose a double penalized maximum likelihood estimator combining Firth's penalized likelihood equation with a ridge parameter. We present a simulation study evaluating the empirical performance of the double penalized likelihood estimator in small to moderate sample sizes. We demonstrate the proposed approach using a current screening data from a community-based dementia study.

  12. Bayesian logistic regression approaches to predict incorrect DRG assignment.

    PubMed

    Suleiman, Mani; Demirhan, Haydar; Boyd, Leanne; Girosi, Federico; Aksakalli, Vural

    2018-05-07

    Episodes of care involving similar diagnoses and treatments and requiring similar levels of resource utilisation are grouped to the same Diagnosis-Related Group (DRG). In jurisdictions which implement DRG based payment systems, DRGs are a major determinant of funding for inpatient care. Hence, service providers often dedicate auditing staff to the task of checking that episodes have been coded to the correct DRG. The use of statistical models to estimate an episode's probability of DRG error can significantly improve the efficiency of clinical coding audits. This study implements Bayesian logistic regression models with weakly informative prior distributions to estimate the likelihood that episodes require a DRG revision, comparing these models with each other and with classical maximum likelihood estimates. All Bayesian approaches had more stable model parameters than maximum likelihood. The best performing Bayesian model improved overall classification performance by 6% compared to maximum likelihood, and by 34% compared to random classification. We found that the original DRG, the coder and the day of coding all have a significant effect on the likelihood of DRG error. Use of Bayesian approaches has improved model parameter stability and classification accuracy. This method has already led to improved audit efficiency in an operational capacity.

  13. Risk of malnutrition (over and under-nutrition): validation of the JaNuS screening tool.

    PubMed

    Donini, Lorenzo M; Ricciardi, Laura Maria; Neri, Barbara; Lenzi, Andrea; Marchesini, Giulio

    2014-12-01

    Malnutrition (over- and under-nutrition) is highly prevalent in patients admitted to hospital and is a well-known risk factor for increased morbidity and mortality. Nutritional problems are often misdiagnosed, and in particular the coexistence of over- and undernutrition is not usually recognized. We aimed to develop and validate a screening tool for the easy detection and reporting of both undernutrition and overnutrition, specifically identifying the clinical conditions where the two types of malnutrition coexist. The study consisted of three phases: 1) selection of an appropriate study population (estimation sample) and of the hospital admission parameters to identify overnutrition and undernutrition; 2) combination of selected variables to create a screening tool to assess nutritional risk in cases of undernutrition, overnutrition, or the copresence of both conditions, to be used by non-specialist health care professionals; 3) validation of the screening tool in a different patient sample (validation sample). Two groups of variables (12 for undernutrition, 7 for overnutrition) were identified in separate logistic models for their correlation with the outcome variables. Both models showed high efficacy, sensitivity and specificity (overnutrition: 97.7%, 99.6%, 66.6%, respectively; undernutrition: 84.4%, 83.6%, 84.8%). The logistic models were used to construct a two-faced test (named JaNuS - Just A Nutritional Screening) fitting into a two-dimensional Cartesian coordinate graphic system. In the validation sample the JaNuS test confirmed its predictive value. Internal consistency and test-retest analysis provide evidence for the reliability of the test. The study provides a screening tool for the assessment of nutritional risk, based on parameters that are easy to use by health care personnel without specialist nutritional training, and characterized by excellent predictive validity.
The test might be confidently applied in the clinical setting to determine the importance of malnutrition (including the copresence of over and undernutrition) as a risk factor for morbidity and mortality. Copyright © 2013 Elsevier Ltd and European Society for Clinical Nutrition and Metabolism. All rights reserved.

  14. Use of Three-Parameter Item Response Theory in the Development of CTBS, Form U, and TCS.

    ERIC Educational Resources Information Center

    Yen, Wendy M.

    The three-parameter logistic model discussed was used by CTB/McGraw-Hill in the development of the Comprehensive Tests of Basic Skills, Form U (CTBS/U) and the Test of Cognitive Skills (TCS), published in the fall of 1981. The development, standardization, and scoring of the tests are described, particularly as these procedures were influenced by…

  15. Determining factors influencing survival of breast cancer by fuzzy logistic regression model.

    PubMed

    Nikbakht, Roya; Bahrampour, Abbas

    2017-01-01

    A fuzzy logistic regression model can be used to determine the influential factors of a disease. This study explores the important predictive factors of actual survival in breast cancer patients. We used breast cancer data collected by the cancer registry of Kerman University of Medical Sciences during the period 2000-2007. Variables such as morphology, grade, age, and treatments (surgery, radiotherapy, and chemotherapy) were applied in the fuzzy logistic regression model. Performance of the model was determined in terms of mean degree of membership (MDM). The study results showed that almost 41% of patients were in the neoplasm and malignant group, and more than two-thirds of them were still alive after 5-year follow-up. Based on the fuzzy logistic model, the most important factors influencing survival were chemotherapy, morphology, and radiotherapy, respectively. Furthermore, the MDM criterion shows that the fuzzy logistic regression has a good fit to the data (MDM = 0.86). The fuzzy logistic regression model showed that chemotherapy is more important than radiotherapy for the survival of patients with breast cancer. In addition, another ability of this model is calculating the possibilistic odds of survival in cancer patients. The results of this study can be applied in clinical research. Moreover, since few studies have applied fuzzy logistic models, we recommend using this model in various research areas.

  16. Logistics of a Lunar Based Solar Power Satellite Scenario

    NASA Technical Reports Server (NTRS)

    Melissopoulos, Stefanos

    1995-01-01

    A logistics system comprising two orbital stations for the support of a 500 GW space power satellite scenario in geostationary orbit was investigated in this study. A subsystem mass model, a mass flow model and a life cycle cost model were developed. The results regarding logistics cost and burden rates show that transportation contributed the most (96%) to the overall cost of the scenario. The orbital stations in geostationary and lunar orbit contributed 4% to that cost.

  17. Orthotopic bladder substitution in men revisited: identification of continence predictors.

    PubMed

    Koraitim, M M; Atta, M A; Foda, M K

    2006-11-01

    We determined the impact of the functional characteristics of the neobladder and urethral sphincter on continence results, and determined the most significant predictors of continence. A total of 88 male patients 29 to 70 years old underwent orthotopic bladder substitution with tubularized ileocecal segment (40) and detubularized sigmoid (25) or ileum (23). Uroflowmetry, cystometry and urethral pressure profilometry were performed at 13 to 36 months (mean 19) postoperatively. The correlation between urinary continence and 28 urodynamic variables was assessed. Parameters that correlated significantly with continence were entered into a multivariate analysis using a logistic regression model to determine the most significant predictors of continence. Maximum urethral closure pressure was the only parameter that showed a statistically significant correlation with diurnal continence. Nocturnal continence had not only a statistically significant positive correlation with maximum urethral closure pressure, but also statistically significant negative correlations with maximum contraction amplitude, and baseline pressure at mid and maximum capacity. Three of these 4 parameters, including maximum urethral closure pressure, maximum contraction amplitude and baseline pressure at mid capacity, proved to be significant predictors of continence on multivariate analysis. While daytime continence is determined by maximum urethral closure pressure, during the night it is the net result of 2 forces that have about equal influence but in opposite directions, that is maximum urethral closure pressure vs maximum contraction amplitude plus baseline pressure at mid capacity. 
Two equations were derived from the logistic regression model to predict the probability of continence after orthotopic bladder substitution, including Z1 (diurnal) = 0.605 + 0.0085 maximum urethral closure pressure and Z2 (nocturnal) = 0.841 + 0.01 [maximum urethral closure pressure - (maximum contraction amplitude + baseline pressure at mid capacity)].
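    Since the abstract supplies the two fitted linear predictors, they can be evaluated directly. The code below assumes the usual logistic link (an assumption, not stated in the abstract) and uses made-up urodynamic values:

```python
import math

def continence_probability(z):
    # Assuming the standard logistic link: p = exp(Z) / (1 + exp(Z)).
    return 1 / (1 + math.exp(-z))

def z_diurnal(mucp):
    # Z1 from the abstract: 0.605 + 0.0085 * maximum urethral closure pressure.
    return 0.605 + 0.0085 * mucp

def z_nocturnal(mucp, mca, bp_mid):
    # Z2 from the abstract:
    # 0.841 + 0.01 * [MUCP - (max contraction amplitude + baseline pressure)].
    return 0.841 + 0.01 * (mucp - (mca + bp_mid))

# Illustrative (made-up) urodynamic values.
p_day = continence_probability(z_diurnal(60.0))
p_night = continence_probability(z_nocturnal(60.0, 30.0, 15.0))
```

The nocturnal equation makes the abstract's point explicit: contraction amplitude and baseline pressure act against the closure pressure.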

  18. Logistic Stick-Breaking Process

    PubMed Central

    Ren, Lu; Du, Lan; Carin, Lawrence; Dunson, David B.

    2013-01-01

    A logistic stick-breaking process (LSBP) is proposed for non-parametric clustering of general spatially- or temporally-dependent data, imposing the belief that proximate data are more likely to be clustered together. The sticks in the LSBP are realized via multiple logistic regression functions, with shrinkage priors employed to favor contiguous and spatially localized segments. The LSBP is also extended for the simultaneous processing of multiple data sets, yielding a hierarchical logistic stick-breaking process (H-LSBP). The model parameters (atoms) within the H-LSBP are shared across the multiple learning tasks. Efficient variational Bayesian inference is derived, and comparisons are made to related techniques in the literature. Experimental analysis is performed for audio waveforms and images, and it is demonstrated that for segmentation applications the LSBP yields generally homogeneous segments with sharp boundaries. PMID:25258593
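    The stick-breaking construction can be sketched as follows: each logistic-regression score is squashed to a break probability, which is multiplied by the mass remaining from earlier sticks (a simplified, truncated sketch, not the full spatially dependent model):

```python
import numpy as np

def lsbp_weights(etas):
    """Convert logistic-regression scores eta_k into stick-breaking
    probabilities: pi_k = sigma(eta_k) * prod_{j<k} (1 - sigma(eta_j)),
    with a final catch-all segment taking the leftover mass."""
    sig = 1 / (1 + np.exp(-np.asarray(etas, dtype=float)))
    remaining = np.concatenate([[1.0], np.cumprod(1 - sig[:-1])])
    pi = sig * remaining
    # Assign the leftover mass to a last segment so the weights sum to 1.
    return np.append(pi, 1.0 - pi.sum())

# Three sticks driven by (made-up) logistic scores, plus a catch-all.
pi = lsbp_weights([0.5, -1.0, 2.0])
```

In the full LSBP the scores eta_k are regression functions of spatial or temporal covariates, so nearby data share similar weights.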

  19. A nonlinear isobologram model with Box-Cox transformation to both sides for chemical mixtures.

    PubMed

    Chen, D G; Pounds, J G

    1998-12-01

    The linear logistic isobologram is a commonly used and powerful graphical and statistical tool for analyzing the combined effects of simple chemical mixtures. In this paper a nonlinear isobologram model is proposed to analyze the joint action of chemical mixtures for quantitative dose-response relationships. This nonlinear isobologram model incorporates two additional new parameters, Ymin and Ymax, to facilitate analysis of response data that are not constrained between 0 and 1, where the parameters Ymin and Ymax represent the minimal and maximal observed toxic response. This nonlinear isobologram model for binary mixtures can be expressed as [formula: see text] In addition, a Box-Cox transformation to both sides is introduced to improve the goodness of fit and to provide a more robust model for achieving homogeneity and normality of the residuals. Finally, a confidence band is proposed for selected isobols, e.g., the median effective dose, to facilitate graphical and statistical analysis of the isobologram. The versatility of this approach is demonstrated using published data describing the toxicity of binary mixtures of citrinin and ochratoxin, as well as new experimental data from our laboratory for mixtures of mercury and cadmium.
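    The idea of freeing the response from the [0, 1] constraint via Ymin and Ymax, together with a Box-Cox transformation, can be sketched with a generic four-parameter dose-response form (an illustration, not the authors' exact isobologram model):

```python
import numpy as np

def scaled_logistic(dose, ymin, ymax, ed50, slope):
    """Logistic dose-response rescaled between ymin and ymax, mirroring the
    paper's use of Ymin/Ymax for responses not constrained to [0, 1]."""
    return ymin + (ymax - ymin) / (1 + (dose / ed50) ** (-slope))

def box_cox(y, lam):
    # Box-Cox transformation; the paper applies it to both sides of the model.
    y = np.asarray(y, dtype=float)
    return np.log(y) if lam == 0 else (y ** lam - 1) / lam

# Made-up doses and parameter values for illustration.
doses = np.array([0.1, 1.0, 10.0, 100.0])
resp = scaled_logistic(doses, ymin=5.0, ymax=95.0, ed50=10.0, slope=1.0)
z = box_cox(resp, lam=0.5)
```

At the ED50 the response sits exactly halfway between Ymin and Ymax; transforming both the observed and fitted responses with the same Box-Cox parameter stabilizes the residuals.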

  20. A nonlinear isobologram model with Box-Cox transformation to both sides for chemical mixtures.

    PubMed Central

    Chen, D G; Pounds, J G

    1998-01-01

    The linear logistic isobologram is a commonly used and powerful graphical and statistical tool for analyzing the combined effects of simple chemical mixtures. In this paper a nonlinear isobologram model is proposed to analyze the joint action of chemical mixtures for quantitative dose-response relationships. This nonlinear isobologram model incorporates two additional new parameters, Ymin and Ymax, to facilitate analysis of response data that are not constrained between 0 and 1, where the parameters Ymin and Ymax represent the minimal and maximal observed toxic response. This nonlinear isobologram model for binary mixtures can be expressed as [formula: see text] In addition, a Box-Cox transformation to both sides is introduced to improve the goodness of fit and to provide a more robust model for achieving homogeneity and normality of the residuals. Finally, a confidence band is proposed for selected isobols, e.g., the median effective dose, to facilitate graphical and statistical analysis of the isobologram. The versatility of this approach is demonstrated using published data describing the toxicity of binary mixtures of citrinin and ochratoxin, as well as new experimental data from our laboratory for mixtures of mercury and cadmium. PMID:9860894

  1. Speech prosody impairment predicts cognitive decline in Parkinson's disease.

    PubMed

    Rektorova, Irena; Mekyska, Jiri; Janousova, Eva; Kostalova, Milena; Eliasova, Ilona; Mrackova, Martina; Berankova, Dagmar; Necasova, Tereza; Smekal, Zdenek; Marecek, Radek

    2016-08-01

    Impairment of speech prosody is characteristic for Parkinson's disease (PD) and does not respond well to dopaminergic treatment. We assessed whether baseline acoustic parameters, alone or in combination with other predominantly non-dopaminergic symptoms may predict global cognitive decline as measured by the Addenbrooke's cognitive examination (ACE-R) and/or worsening of cognitive status as assessed by a detailed neuropsychological examination. Forty-four consecutive non-depressed PD patients underwent clinical and cognitive testing, and acoustic voice analysis at baseline and at the two-year follow-up. Influence of speech and other clinical parameters on worsening of the ACE-R and of the cognitive status was analyzed using linear and logistic regression. The cognitive status (classified as normal cognition, mild cognitive impairment and dementia) deteriorated in 25% of patients during the follow-up. The multivariate linear regression model consisted of the variation in range of the fundamental voice frequency (F0VR) and the REM Sleep Behavioral Disorder Screening Questionnaire (RBDSQ). These parameters explained 37.2% of the variability of the change in ACE-R. The most significant predictors in the univariate logistic regression were the speech index of rhythmicity (SPIR; p = 0.012), disease duration (p = 0.019), and the RBDSQ (p = 0.032). The multivariate regression analysis revealed that SPIR alone led to 73.2% accuracy in predicting a change in cognitive status. Combining SPIR with RBDSQ improved the prediction accuracy of SPIR alone by 7.3%. Impairment of speech prosody together with symptoms of RBD predicted rapid cognitive decline and worsening of PD cognitive status during a two-year period. Copyright © 2016 Elsevier Ltd. All rights reserved.

  2. Gaussian Process Regression Model in Spatial Logistic Regression

    NASA Astrophysics Data System (ADS)

    Sofro, A.; Oktaviarina, A.

    2018-01-01

    Spatial analysis has developed very quickly in the last decade. One of the favored approaches is based on the neighbourhood of the region. Unfortunately, there are some limitations, such as difficulty in prediction. Therefore, we offer Gaussian process regression (GPR) to address this issue. In this paper, we focus on spatial modelling with GPR for binomial data with a logit link function. The performance of the model is investigated. We discuss inference: how to estimate the parameters and hyper-parameters, and how to predict. Furthermore, simulation studies are explained in the last section.
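    The GPR building block can be sketched as the standard predictive mean; the paper embeds this in a logit link for binomial data, which is omitted here (a one-dimensional sketch with made-up training data):

```python
import numpy as np

def rbf_kernel(a, b, length=1.0, var=1.0):
    """Squared-exponential covariance k(x, x') = var * exp(-|x-x'|^2 / (2 l^2))."""
    d = a[:, None] - b[None, :]
    return var * np.exp(-0.5 * (d / length) ** 2)

def gp_posterior_mean(x_train, y_train, x_test, noise=1e-2):
    """Standard GP regression predictive mean, K_* (K + sigma^2 I)^{-1} y."""
    K = rbf_kernel(x_train, x_train) + noise * np.eye(x_train.size)
    K_star = rbf_kernel(x_test, x_train)
    return K_star @ np.linalg.solve(K, y_train)

# Toy data: recover a smooth function from noisy-free samples.
x = np.linspace(0, 4, 20)
y = np.sin(x)
mu = gp_posterior_mean(x, y, np.array([2.0]))
```

For the spatial logistic setting, the latent GP values would pass through a logit link to give binomial probabilities at each location.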

  3. BGFit: management and automated fitting of biological growth curves.

    PubMed

    Veríssimo, André; Paixão, Laura; Neves, Ana Rute; Vinga, Susana

    2013-09-25

    Existing tools to model cell growth curves do not offer a flexible, integrative approach to managing large datasets and automatically estimating parameters. Given the increase of experimental time-series from microbiology and oncology, software that allows researchers to easily organize experimental data and simultaneously extract relevant parameters in an efficient way is crucial. BGFit provides a web-based unified platform where a rich set of dynamic models can be fitted to experimental time-series data, further allowing the results to be managed efficiently in a structured and hierarchical way. The data management system allows users to organize projects, experiments and measurement data and also to define teams with different editing and viewing permissions. Several dynamic and algebraic models are already implemented, such as polynomial regression, Gompertz, Baranyi, Logistic and Live Cell Fraction models, and the user can easily add new models, thus expanding the current ones. BGFit allows users to easily manage their data and models in an integrated way, even if they are not familiar with databases or existing computational tools for parameter estimation. BGFit is designed with a flexible architecture that focuses on extensibility and leverages free software with existing tools and methods, allowing different data modeling techniques to be compared and evaluated. The application is described in the context of fitting bacterial and tumor cell growth data, but it is also applicable to any type of two-dimensional data, e.g. physical chemistry and macroeconomic time series, being fully scalable to a high number of projects, data and model complexity.

  4. Remote sensing and GIS-based landslide hazard analysis and cross-validation using multivariate logistic regression model on three test areas in Malaysia

    NASA Astrophysics Data System (ADS)

    Pradhan, Biswajeet

    2010-05-01

    This paper presents the results of the cross-validation of a multivariate logistic regression model using remote sensing data and GIS for landslide hazard analysis on the Penang, Cameron, and Selangor areas in Malaysia. Landslide locations in the study areas were identified by interpreting aerial photographs and satellite images, supported by field surveys. SPOT 5 and Landsat TM satellite imagery were used to map landcover and vegetation index, respectively. Maps of topography, soil type, lineaments and land cover were constructed from the spatial datasets. Ten factors which influence landslide occurrence, i.e., slope, aspect, curvature, distance from drainage, lithology, distance from lineaments, soil type, landcover, rainfall precipitation, and normalized difference vegetation index (NDVI), were extracted from the spatial database and the logistic regression coefficient of each factor was computed. Then the landslide hazard was analysed using the multivariate logistic regression coefficients derived not only from the data for the respective area but also using the logistic regression coefficients calculated from each of the other two areas (nine hazard maps in all) as a cross-validation of the model. For verification of the model, the results of the analyses were then compared with the field-verified landslide locations. Among the three cases of applying the logistic regression coefficients in the same study area, the case of Selangor based on the Selangor logistic regression coefficients showed the highest accuracy (94%), whereas Penang based on the Penang coefficients showed the lowest accuracy (86%). Similarly, among the six cases from the cross-application of logistic regression coefficients in the other two areas, the case of Selangor based on the logistic coefficients of Cameron showed the highest prediction accuracy (90%), whereas the case of Penang based on the Selangor logistic regression coefficients showed the lowest accuracy (79%). 
Qualitatively, the cross-application model yields reasonable results which can be used for preliminary landslide hazard mapping.

  5. A comparison of methods of fitting several models to nutritional response data.

    PubMed

    Vedenov, D; Pesti, G M

    2008-02-01

    A variety of models have been proposed to fit nutritional input-output response data. The models are typically nonlinear; therefore, fitting the models usually requires sophisticated statistical software and training to use it. An alternative tool for fitting nutritional response models was developed by using widely available and easier-to-use Microsoft Excel software. The tool, implemented as an Excel workbook (NRM.xls), allows simultaneous fitting and side-by-side comparisons of several popular models. This study compared the results produced by the tool we developed and PROC NLIN of SAS. The models compared were the broken line (ascending linear and quadratic segments), saturation kinetics, 4-parameter logistics, sigmoidal, and exponential models. The NRM.xls workbook provided results nearly identical to those of PROC NLIN. Furthermore, the workbook successfully fit several models that failed to converge in PROC NLIN. Two data sets were used as examples to compare fits by the different models. The results suggest that no particular nonlinear model is necessarily best for all nutritional response data.
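    As an illustration of the kind of fit the workbook and PROC NLIN perform, here is a minimal sketch of fitting a four-parameter logistic model with SciPy's `curve_fit`; the data values and starting guesses are hypothetical, not from the study.

```python
import numpy as np
from scipy.optimize import curve_fit

def four_pl(x, bottom, top, ec50, hill):
    # Four-parameter logistic: response rises from `bottom` to `top`,
    # with half-maximal input at `ec50` and steepness `hill`.
    return bottom + (top - bottom) / (1.0 + (ec50 / x) ** hill)

# Hypothetical nutritional response data (input level vs. weight gain).
x = np.array([0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.8, 1.0])
y = np.array([40.5, 45.0, 55.0, 64.0, 68.5, 70.0, 71.0, 71.5])

popt, _ = curve_fit(four_pl, x, y, p0=[40.0, 72.0, 0.3, 3.0])
bottom, top, ec50, hill = popt
print(f"bottom={bottom:.1f} top={top:.1f} ec50={ec50:.2f} hill={hill:.1f}")
```

    The same data could be fed to the other candidate models (broken line, saturation kinetics, exponential) and the fits compared side by side, which is what the workbook automates.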

  6. Quantifying the yellow signal driver behavior based on naturalistic data from digital enforcement cameras.

    PubMed

    Bar-Gera, H; Musicant, O; Schechtman, E; Ze'evi, T

    2016-11-01

    The yellow signal driver behavior, reflecting the dilemma zone behavior, is analyzed using naturalistic data from digital enforcement cameras. The key variable in the analysis is the entrance time after the yellow onset, and its distribution. This distribution can assist in determining two critical outcomes: the safety outcome related to red-light-running angle accidents, and the efficiency outcome. The connection to other approaches for evaluating the yellow signal driver behavior is also discussed. The dataset was obtained from 37 digital enforcement cameras at non-urban signalized intersections in Israel, over a period of nearly two years. The data contain more than 200 million vehicle entrances, of which 2.3% (∼5 million vehicles) entered the intersection during the yellow phase. In all non-urban signalized intersections in Israel the green phase ends with 3 s of flashing green, followed by 3 s of yellow. In most non-urban signalized roads in Israel the posted speed limit is 90 km/h. Our analysis focuses on crossings during the yellow phase and the first 1.5 s of the red phase. The analysis method consists of two stages. In the first stage we tested whether the frequency of crossings is constant at the beginning of the yellow phase. We found that the pattern was stable (i.e., the frequencies were constant) at 18 intersections, nearly stable at 13 intersections and unstable at 6 intersections. In addition to the 6 intersections with unstable patterns, two other outlying intersections were excluded from subsequent analysis. Logistic regression models were fitted for each of the remaining 29 intersections. We examined both standard (exponential) logistic regression and four-parameter logistic regression. The results show a clear advantage for the former. The estimated parameters show that the time when the frequency of crossing reduces to half ranges from 1.7 to 2.3 s after yellow onset. 
The duration of the reduction of the relative frequency from 0.9 to 0.1 ranged from 1.9 to 2.9 s. Copyright © 2015 Elsevier Ltd. All rights reserved.
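    The half-crossing time reported above falls out of a standard logistic regression directly: if the crossing probability is modeled as p(t) = 1/(1 + exp(-(b0 + b1·t))), it equals -b0/b1. A minimal sketch on simulated entrance times, assuming scikit-learn is available (the coefficients and sample sizes are invented, not the study's):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

# Hypothetical entrance times (s after yellow onset) and crossing outcomes;
# in this simulation the crossing probability halves around t = 2.0 s.
t = rng.uniform(0.0, 4.0, 2000)
p_true = 1.0 / (1.0 + np.exp(3.0 * (t - 2.0)))
crossed = rng.binomial(1, p_true)

model = LogisticRegression().fit(t[:, None], crossed)
b1 = model.coef_[0, 0]
b0 = model.intercept_[0]
t_half = -b0 / b1   # time at which the crossing probability drops to one half
print(f"estimated half-crossing time: {t_half:.2f} s")
```

    The same quantity computed per intersection gives the 1.7-2.3 s range quoted in the abstract.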

  7. Effect of Item Response Theory (IRT) Model Selection on Testlet-Based Test Equating. Research Report. ETS RR-14-19

    ERIC Educational Resources Information Center

    Cao, Yi; Lu, Ru; Tao, Wei

    2014-01-01

    The local item independence assumption underlying traditional item response theory (IRT) models is often not met for tests composed of testlets. There are 3 major approaches to addressing this issue: (a) ignore the violation and use a dichotomous IRT model (e.g., the 2-parameter logistic [2PL] model), (b) combine the interdependent items to form a…
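    For reference, the 2PL model mentioned in option (a) gives the probability of a correct response as P(θ) = 1 / (1 + exp(-a(θ - b))), with discrimination a and difficulty b. A minimal sketch:

```python
import numpy as np

def p_2pl(theta, a, b):
    # Two-parameter logistic IRF: a = discrimination, b = difficulty.
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

theta = np.array([-2.0, 0.0, 2.0])     # latent abilities
print(p_2pl(theta, a=1.5, b=0.0))      # probability of a correct response
```

    At θ = b the probability is exactly 0.5; larger a makes the curve steeper around that point.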

  8. Modelling a stochastic HIV model with logistic target cell growth and nonlinear immune response function

    NASA Astrophysics Data System (ADS)

    Wang, Yan; Jiang, Daqing; Alsaedi, Ahmed; Hayat, Tasawar

    2018-07-01

    A stochastic HIV viral model with both logistic target cell growth and nonlinear immune response function is formulated to investigate the effect of white noise on each population. The existence of the global solution is verified. By employing a novel combination of Lyapunov functions, we obtain the existence of the unique stationary distribution for small white noises. We also derive the extinction of the virus for large white noises. Numerical simulations are performed to highlight the effect of white noises on model dynamic behaviour under the realistic parameters. It is found that the small intensities of white noises can keep the irregular blips of HIV virus and CTL immune response, while the larger ones force the virus infection and immune response to lose efficacy.

  9. Multilevel nonlinear mixed-effects models for the modeling of earlywood and latewood microfibril angle

    Treesearch

    Lewis Jordon; Richard F. Daniels; Alexander Clark; Rechun He

    2005-01-01

    Earlywood and latewood microfibril angle (MFA) was determined at 1-millimeter intervals from disks at 1.4 meters, then at 3-meter intervals to a height of 13.7 meters, from 18 loblolly pine (Pinus taeda L.) trees grown in southeastern Texas. A modified three-parameter logistic function with mixed effects is used for modeling earlywood and latewood...

  10. An EM-based semi-parametric mixture model approach to the regression analysis of competing-risks data.

    PubMed

    Ng, S K; McLachlan, G J

    2003-04-15

    We consider a mixture model approach to the regression analysis of competing-risks data. Attention is focused on inference concerning the effects of factors on both the probability of occurrence and the hazard rate conditional on each of the failure types. These two quantities are specified in the mixture model using the logistic model and the proportional hazards model, respectively. We propose a semi-parametric mixture method to estimate the logistic and regression coefficients jointly, whereby the component-baseline hazard functions are completely unspecified. Estimation is based on maximum likelihood on the basis of the full likelihood, implemented via an expectation-conditional maximization (ECM) algorithm. Simulation studies are performed to compare the performance of the proposed semi-parametric method with a fully parametric mixture approach. The results show that when the component-baseline hazard is monotonic increasing, the semi-parametric and fully parametric mixture approaches are comparable for mildly and moderately censored samples. When the component-baseline hazard is not monotonic increasing, the semi-parametric method consistently provides less biased estimates than a fully parametric approach and is comparable in efficiency in the estimation of the parameters for all levels of censoring. The methods are illustrated using a real data set of prostate cancer patients treated with different dosages of the drug diethylstilbestrol. Copyright 2003 John Wiley & Sons, Ltd.

  11. A Novel Color Image Encryption Algorithm Based on Quantum Chaos Sequence

    NASA Astrophysics Data System (ADS)

    Liu, Hui; Jin, Cong

    2017-03-01

    In this paper, a novel algorithm for image encryption based on quantum chaos is proposed. The keystreams are generated by the two-dimensional logistic map using given initial conditions and parameters. A general Arnold scrambling algorithm with keys is then exploited to permute the pixels of the color components. In the diffusion process, a novel encryption algorithm, the folding algorithm, is proposed to modify the values of the diffused pixels. In order to achieve high randomness and complexity, the two-dimensional logistic map and the quantum chaotic map are coupled with nearest-neighboring coupled-map lattices. Theoretical analyses and computer simulations confirm that the proposed algorithm has a high level of security.
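    The paper's two-dimensional logistic map and quantum coupling are beyond a short sketch, but the underlying keystream idea can be illustrated with the ordinary one-dimensional logistic map: iterate in the chaotic regime, quantize states to bytes, and XOR with the plaintext. This simplified stand-in is not the authors' algorithm.

```python
def logistic_keystream(x0, r=3.99, n=16, skip=100):
    # Iterate the logistic map x <- r*x*(1-x) in the chaotic regime and
    # quantize each state to a byte, discarding an initial transient.
    x = x0
    for _ in range(skip):
        x = r * x * (1.0 - x)
    stream = []
    for _ in range(n):
        x = r * x * (1.0 - x)
        stream.append(int(x * 256) % 256)
    return stream

ks = logistic_keystream(0.3141592)           # the key is the initial condition
cipher = [p ^ k for p, k in zip(b"hello, world!!!!", ks)]
plain = bytes(c ^ k for c, k in zip(cipher, ks))
print(plain)   # XOR with the same keystream round-trips to the original message
```

    Sensitivity to the key comes from chaos: a tiny change in x0 yields a completely different keystream after the transient.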

  12. Estimating multilevel logistic regression models when the number of clusters is low: a comparison of different statistical software procedures.

    PubMed

    Austin, Peter C

    2010-04-22

    Multilevel logistic regression models are increasingly being used to analyze clustered data in medical, public health, epidemiological, and educational research. Procedures for estimating the parameters of such models are available in many statistical software packages. There is currently little evidence on the minimum number of clusters necessary to reliably fit multilevel regression models. We conducted a Monte Carlo study to compare the performance of different statistical software procedures for estimating multilevel logistic regression models when the number of clusters was low. We examined procedures available in BUGS, HLM, R, SAS, and Stata. We found that there were qualitative differences in the performance of different software procedures for estimating multilevel logistic models when the number of clusters was low. Among the likelihood-based procedures, estimation methods based on adaptive Gauss-Hermite approximations to the likelihood (glmer in R and xtlogit in Stata) or adaptive Gaussian quadrature (Proc NLMIXED in SAS) tended to have superior performance for estimating variance components when the number of clusters was small, compared to software procedures based on penalized quasi-likelihood. However, only Bayesian estimation with BUGS allowed for accurate estimation of variance components when there were fewer than 10 clusters. For all statistical software procedures, estimation of variance components tended to be poor when there were only five subjects per cluster, regardless of the number of clusters.

  13. R programming for parameters estimation of geographically weighted ordinal logistic regression (GWOLR) model based on Newton Raphson

    NASA Astrophysics Data System (ADS)

    Zuhdi, Shaifudin; Saputro, Dewi Retno Sari

    2017-03-01

    The GWOLR model represents the relationship between a dependent variable with ordinal categories and independent variables, influenced by the geographical location of the observation site. Maximum likelihood estimation of the GWOLR model parameters yields a system of nonlinear equations whose solution is hard to find analytically. Solving it means finding the maximum of the likelihood, which is an optimization problem. The nonlinear system of equations is optimized using a numerical approximation, in this case the Newton-Raphson method. The purpose of this research is to construct a Newton-Raphson iteration algorithm and a program in R software to estimate the GWOLR model. The research shows that the R program can be used to estimate the parameters of the GWOLR model by forming a syntax program with the command "while".
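    The Newton-Raphson update the paper implements for GWOLR has the same shape as for ordinary logistic regression, where each iteration solves (XᵀWX)δ = Xᵀ(y - p). A minimal sketch of that simpler, non-geographically-weighted case on simulated data, with a fixed iteration count standing in for the paper's "while" convergence loop:

```python
import numpy as np

def newton_logistic(X, y, iters=25):
    # Newton-Raphson for ordinary logistic regression: at each step solve
    # (X' W X) delta = X' (y - p), with W = diag(p * (1 - p)).
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-X @ beta))
        W = p * (1.0 - p)
        grad = X.T @ (y - p)
        hess = X.T @ (X * W[:, None])
        beta = beta + np.linalg.solve(hess, grad)
    return beta

rng = np.random.default_rng(2)
X = np.column_stack([np.ones(500), rng.normal(size=500)])
true_beta = np.array([0.5, -1.0])
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-X @ true_beta)))

beta_hat = newton_logistic(X, y)
print(beta_hat)   # should be near the true coefficients (0.5, -1.0)
```

    GWOLR extends this by weighting observations with a spatial kernel around each site and by using cumulative (ordinal) logits, but the iteration skeleton is the same.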

  14. Esophageal wall dose-surface maps do not improve the predictive performance of a multivariable NTCP model for acute esophageal toxicity in advanced stage NSCLC patients treated with intensity-modulated (chemo-)radiotherapy.

    PubMed

    Dankers, Frank; Wijsman, Robin; Troost, Esther G C; Monshouwer, René; Bussink, Johan; Hoffmann, Aswin L

    2017-05-07

    In our previous work, a multivariable normal-tissue complication probability (NTCP) model for acute esophageal toxicity (AET) Grade  ⩾2 after highly conformal (chemo-)radiotherapy for non-small cell lung cancer (NSCLC) was developed using multivariable logistic regression analysis incorporating clinical parameters and mean esophageal dose (MED). Since the esophagus is a tubular organ, spatial information of the esophageal wall dose distribution may be important in predicting AET. We investigated whether the incorporation of esophageal wall dose-surface data with spatial information improves the predictive power of our established NTCP model. For 149 NSCLC patients treated with highly conformal radiation therapy esophageal wall dose-surface histograms (DSHs) and polar dose-surface maps (DSMs) were generated. DSMs were used to generate new DSHs and dose-length-histograms that incorporate spatial information of the dose-surface distribution. From these histograms dose parameters were derived and univariate logistic regression analysis showed that they correlated significantly with AET. Following our previous work, new multivariable NTCP models were developed using the most significant dose histogram parameters based on univariate analysis (19 in total). However, the 19 new models incorporating esophageal wall dose-surface data with spatial information did not show improved predictive performance (area under the curve, AUC range 0.79-0.84) over the established multivariable NTCP model based on conventional dose-volume data (AUC  =  0.84). For prediction of AET, based on the proposed multivariable statistical approach, spatial information of the esophageal wall dose distribution is of no added value and it is sufficient to only consider MED as a predictive dosimetric parameter.

  15. Esophageal wall dose-surface maps do not improve the predictive performance of a multivariable NTCP model for acute esophageal toxicity in advanced stage NSCLC patients treated with intensity-modulated (chemo-)radiotherapy

    NASA Astrophysics Data System (ADS)

    Dankers, Frank; Wijsman, Robin; Troost, Esther G. C.; Monshouwer, René; Bussink, Johan; Hoffmann, Aswin L.

    2017-05-01

    In our previous work, a multivariable normal-tissue complication probability (NTCP) model for acute esophageal toxicity (AET) Grade  ⩾2 after highly conformal (chemo-)radiotherapy for non-small cell lung cancer (NSCLC) was developed using multivariable logistic regression analysis incorporating clinical parameters and mean esophageal dose (MED). Since the esophagus is a tubular organ, spatial information of the esophageal wall dose distribution may be important in predicting AET. We investigated whether the incorporation of esophageal wall dose-surface data with spatial information improves the predictive power of our established NTCP model. For 149 NSCLC patients treated with highly conformal radiation therapy esophageal wall dose-surface histograms (DSHs) and polar dose-surface maps (DSMs) were generated. DSMs were used to generate new DSHs and dose-length-histograms that incorporate spatial information of the dose-surface distribution. From these histograms dose parameters were derived and univariate logistic regression analysis showed that they correlated significantly with AET. Following our previous work, new multivariable NTCP models were developed using the most significant dose histogram parameters based on univariate analysis (19 in total). However, the 19 new models incorporating esophageal wall dose-surface data with spatial information did not show improved predictive performance (area under the curve, AUC range 0.79-0.84) over the established multivariable NTCP model based on conventional dose-volume data (AUC  =  0.84). For prediction of AET, based on the proposed multivariable statistical approach, spatial information of the esophageal wall dose distribution is of no added value and it is sufficient to only consider MED as a predictive dosimetric parameter.

  16. Generalized Smooth Transition Map Between Tent and Logistic Maps

    NASA Astrophysics Data System (ADS)

    Sayed, Wafaa S.; Fahmy, Hossam A. H.; Rezk, Ahmed A.; Radwan, Ahmed G.

    There is a continuous demand on novel chaotic generators to be employed in various modeling and pseudo-random number generation applications. This paper proposes a new chaotic map which is a general form for one-dimensional discrete-time maps employing the power function with the tent and logistic maps as special cases. The proposed map uses extra parameters to provide responses that fit multiple applications for which conventional maps were not enough. The proposed generalization covers also maps whose iterative relations are not based on polynomials, i.e. with fractional powers. We introduce a framework for analyzing the proposed map mathematically and predicting its behavior for various combinations of its parameters. In addition, we present and explain the transition map which results in intermediate responses as the parameters vary from their values corresponding to tent map to those corresponding to logistic map case. We study the properties of the proposed map including graph of the map equation, general bifurcation diagram and its key-points, output sequences, and maximum Lyapunov exponent. We present further explorations such as effects of scaling, system response with respect to the new parameters, and operating ranges other than transition region. Finally, a stream cipher system based on the generalized transition map validates its utility for image encryption applications. The system allows the construction of more efficient encryption keys which enhances its sensitivity and other cryptographic properties.
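    The maximum Lyapunov exponent studied in the paper can be estimated numerically by averaging log|f'(x)| along an orbit. A minimal sketch for the plain logistic map special case (the generalized map's own parameterization is not reproduced here):

```python
import math

def logistic(x, r):
    # The logistic map: x <- r * x * (1 - x).
    return r * x * (1.0 - x)

def lyapunov_logistic(r, x0=0.1, n=20000, skip=1000):
    # Average log|f'(x)| = log|r*(1 - 2x)| along the orbit after a transient;
    # positive values indicate chaos, negative values a stable cycle.
    x = x0
    for _ in range(skip):
        x = logistic(x, r)
    acc = 0.0
    for _ in range(n):
        x = logistic(x, r)
        acc += math.log(abs(r * (1.0 - 2.0 * x)) + 1e-300)
    return acc / n

ly_chaotic = lyapunov_logistic(4.0)    # fully chaotic regime, close to ln 2
ly_periodic = lyapunov_logistic(3.2)   # stable period-2 cycle, negative
print(ly_chaotic, ly_periodic)
```

    The same orbit-averaging procedure, applied with the generalized map's derivative, produces the exponent curves the paper reports as parameters vary between the tent and logistic cases.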

  17. Alternative approach to modeling bacterial lag time, using logistic regression as a function of time, temperature, pH, and sodium chloride concentration.

    PubMed

    Koseki, Shige; Nonaka, Junko

    2012-09-01

    The objective of this study was to develop a probabilistic model to predict the end of lag time (λ) during the growth of Bacillus cereus vegetative cells as a function of temperature, pH, and salt concentration using logistic regression. The developed λ model was subsequently combined with a logistic differential equation to simulate bacterial numbers over time. To develop a novel model for λ, we determined whether bacterial growth had begun, i.e., whether λ had ended, at each time point during the growth kinetics. The growth of B. cereus was evaluated by optical density (OD) measurements in culture media for various pHs (5.5 ∼ 7.0) and salt concentrations (0.5 ∼ 2.0%) at static temperatures (10 ∼ 20°C). The probability of the end of λ was modeled using dichotomous judgments obtained at each OD measurement point concerning whether a significant increase had been observed. The probability of the end of λ was described as a function of time, temperature, pH, and salt concentration and showed a high goodness of fit. The λ model was validated with independent data sets of B. cereus growth in culture media and foods, indicating acceptable performance. Furthermore, the λ model, in combination with a logistic differential equation, enabled a simulation of the population of B. cereus in various foods over time at static and/or fluctuating temperatures with high accuracy. Thus, this newly developed modeling procedure enables the description of λ using observable environmental parameters without any conceptual assumptions and the simulation of bacterial numbers over time with the use of a logistic differential equation.
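    The second stage of the procedure, simulating bacterial numbers once the lag has ended, can be sketched with a logistic differential equation dN/dt = μN(1 - N/Nmax) integrated by Euler steps. All parameter values below are hypothetical, not the paper's fitted values:

```python
def logistic_growth(n0, mu, nmax, lam, t, dt=0.01):
    # Euler integration of dN/dt = mu * N * (1 - N/Nmax), started only
    # after the (probabilistically modeled) end of the lag time lam.
    n = n0
    steps = int(max(0.0, t - lam) / dt)
    for _ in range(steps):
        n += dt * mu * n * (1.0 - n / nmax)
    return n

# Hypothetical parameters: 1e2 CFU/g inoculum, 1e8 CFU/g maximum,
# mu = 0.5 per hour, lag time lam = 5 h.
for t in (0, 5, 20, 60):
    print(t, f"{logistic_growth(1e2, 0.5, 1e8, 5.0, t):.3g}")
```

    In the paper's framework, lam itself is not fixed but drawn from the logistic-regression probability of the end of lag at the prevailing temperature, pH, and salt concentration.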

  18. Estimation and Identifiability of Model Parameters in Human Nociceptive Processing Using Yes-No Detection Responses to Electrocutaneous Stimulation.

    PubMed

    Yang, Huan; Meijer, Hil G E; Buitenweg, Jan R; van Gils, Stephan A

    2016-01-01

    Healthy or pathological states of nociceptive subsystems determine different stimulus-response relations measured from quantitative sensory testing. In turn, stimulus-response measurements may be used to assess these states. In a recently developed computational model, six model parameters characterize activation of nerve endings and spinal neurons. However, both model nonlinearity and the limited information in yes-no detection responses to electrocutaneous stimuli make it challenging to estimate the model parameters. Here, we address the question of whether and how one can overcome these difficulties for reliable parameter estimation. First, we fit the computational model to experimental stimulus-response pairs by maximizing the likelihood. To evaluate the balance between model fit and complexity, i.e., the number of model parameters, we evaluate the Bayesian Information Criterion. We find that the computational model is better than a conventional logistic model regarding this balance. Second, our theoretical analysis suggests varying the pulse width among applied stimuli as a necessary condition to prevent structural non-identifiability. In addition, the numerically implemented profile likelihood approach reveals structural and practical non-identifiability. Our model-based approach with integration of psychophysical measurements can be useful for a reliable assessment of states of the nociceptive system.

  19. Prediction models for clustered data: comparison of a random intercept and standard regression model

    PubMed Central

    2013-01-01

    Background When study data are clustered, standard regression analysis is considered inappropriate and analytical techniques for clustered data need to be used. For prediction research in which the interest of predictor effects is on the patient level, random effect regression models are probably preferred over standard regression analysis. It is well known that the random effect parameter estimates and the standard logistic regression parameter estimates are different. Here, we compared random effect and standard logistic regression models for their ability to provide accurate predictions. Methods Using an empirical study on 1642 surgical patients at risk of postoperative nausea and vomiting, who were treated by one of 19 anesthesiologists (clusters), we developed prognostic models either with standard or random intercept logistic regression. External validity of these models was assessed in new patients from other anesthesiologists. We supported our results with simulation studies using intra-class correlation coefficients (ICC) of 5%, 15%, or 30%. Standard performance measures and measures adapted for the clustered data structure were estimated. Results The model developed with random effect analysis showed better discrimination than the standard approach, if the cluster effects were used for risk prediction (standard c-index of 0.69 versus 0.66). In the external validation set, both models showed similar discrimination (standard c-index 0.68 versus 0.67). The simulation study confirmed these results. For datasets with a high ICC (≥15%), model calibration was only adequate in external subjects, if the used performance measure assumed the same data structure as the model development method: standard calibration measures showed good calibration for the standard developed model, calibration measures adapting the clustered data structure showed good calibration for the prediction model with random intercept. 
Conclusion The models with random intercept discriminate better than the standard model only if the cluster effect is used for predictions. The prediction model with random intercept had good calibration within clusters. PMID:23414436

  20. Prediction models for clustered data: comparison of a random intercept and standard regression model.

    PubMed

    Bouwmeester, Walter; Twisk, Jos W R; Kappen, Teus H; van Klei, Wilton A; Moons, Karel G M; Vergouwe, Yvonne

    2013-02-15

    When study data are clustered, standard regression analysis is considered inappropriate and analytical techniques for clustered data need to be used. For prediction research in which the interest of predictor effects is on the patient level, random effect regression models are probably preferred over standard regression analysis. It is well known that the random effect parameter estimates and the standard logistic regression parameter estimates are different. Here, we compared random effect and standard logistic regression models for their ability to provide accurate predictions. Using an empirical study on 1642 surgical patients at risk of postoperative nausea and vomiting, who were treated by one of 19 anesthesiologists (clusters), we developed prognostic models either with standard or random intercept logistic regression. External validity of these models was assessed in new patients from other anesthesiologists. We supported our results with simulation studies using intra-class correlation coefficients (ICC) of 5%, 15%, or 30%. Standard performance measures and measures adapted for the clustered data structure were estimated. The model developed with random effect analysis showed better discrimination than the standard approach, if the cluster effects were used for risk prediction (standard c-index of 0.69 versus 0.66). In the external validation set, both models showed similar discrimination (standard c-index 0.68 versus 0.67). The simulation study confirmed these results. For datasets with a high ICC (≥15%), model calibration was only adequate in external subjects, if the used performance measure assumed the same data structure as the model development method: standard calibration measures showed good calibration for the standard developed model, calibration measures adapting the clustered data structure showed good calibration for the prediction model with random intercept. 
The models with random intercept discriminate better than the standard model only if the cluster effect is used for predictions. The prediction model with random intercept had good calibration within clusters.

  1. Combined pressure-thermal inactivation effect on spores in lu-wei beef--a traditional Chinese meat product.

    PubMed

    Wang, B-S; Li, B-S; Du, J-Z; Zeng, Q-X

    2015-08-01

    This study investigated the inactivation effect and kinetics of Bacillus coagulans and Geobacillus stearothermophilus spores suspended in lu-wei beef by combining high pressure (500 and 600 MPa) and moderate heat (70 and 80 °C or 80 and 90 °C). During pressurization, the temperature of the pressure-transmitting fluid was tested with a K-type thermocouple, and the number of surviving cells was determined by a plate count method. The pressure come-up time and corresponding inactivation of B. coagulans and G. stearothermophilus spores were considered during the pressure-thermal treatment. For the two types of spores, the results showed a higher inactivation effect in phosphate buffer solution than in lu-wei beef. Of the bacteria evaluated, G. stearothermophilus spores had a higher resistance than B. coagulans spores during the pressure-thermal processing. One linear model and two nonlinear models (i.e. the Weibull and log-logistic models) were fitted to the survivor data to obtain relevant kinetic parameters, and the performance of these models was compared. The results suggested that the survival curve of the spores could be accurately described by the log-logistic model, which produced the best fit for all inactivation data. The compression heating characteristics of different pressure-transmitting fluids should be considered when using high pressure to sterilize spores, particularly while the pressure is increasing. Spores can be inactivated by combining high pressure and moderate heat. The study demonstrates the synergistic inactivation effect of moderate heat in combination with high pressure in real-life food. The use of mathematical models to predict the inactivation of spores could further help the food industry to develop optimum process conditions. © 2015 The Society for Applied Microbiology.

  2. Stability of Intercellular Exchange of Biochemical Substances Affected by Variability of Environmental Parameters

    NASA Astrophysics Data System (ADS)

    Mihailović, Dragutin T.; Budinčević, Mirko; Balaž, Igor; Mihailović, Anja

    Communication between cells is realized by the exchange of biochemical substances. Due to the internal organization of living systems and the variability of external parameters, the exchange is heavily influenced by perturbations of various parameters at almost all stages of the process. Since communication is one of the essential processes for the functioning of living systems, it is of interest to investigate the conditions for its stability. Using a previously developed simplified model of bacterial communication in the form of coupled logistic difference equations, we investigate the stability of the exchange of signaling molecules under variability of internal and external parameters.
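    A pair of coupled logistic difference maps of the kind the model uses can be iterated in a few lines; with symmetric coupling x' = (1-c)f(x) + c·f(y), a sufficiently large coupling fraction c synchronizes the two "cells" even in the chaotic regime. The parameter values below are illustrative, not taken from the paper.

```python
def coupled_logistic(r, c, x0, y0, n):
    # Two logistic maps exchanging a fraction c of their output each step,
    # a minimal sketch of the coupled-exchange idea.
    x, y = x0, y0
    for _ in range(n):
        fx = r * x * (1.0 - x)
        fy = r * y * (1.0 - y)
        x = (1.0 - c) * fx + c * fy
        y = (1.0 - c) * fy + c * fx
    return x, y

# Strong coupling (c = 0.4) in the chaotic regime (r = 3.9): the difference
# between the two states contracts by |1 - 2c|*|f'| per step and vanishes.
x, y = coupled_logistic(r=3.9, c=0.4, x0=0.2, y0=0.7, n=500)
print(abs(x - y))
```

    Perturbing r or c and watching whether |x - y| still contracts is exactly the kind of stability question the abstract poses for variable environmental parameters.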

  3. Order-of-magnitude estimates of latency (time to appearance) and refill time of a cancer from a single cancer 'stem' cell compared by an exponential and a logistic equation.

    PubMed

    Anderson, Ken M; Rubenstein, Marvin; Guinan, Patrick; Patel, Minu

    2012-01-01

    The time required before a mass of cancer cells considered to have originated from a single malignantly transformed cancer 'stem' cell reaches a certain number has not been studied. Applications might include determining when the cell mass reaches a size detectable by X-rays or physical examination, or modeling growth rates in vitro for comparison with other models or established data. We employed a simple logarithmic equation and a common logistic equation incorporating 'feedback' for unknown variables of cell birth, growth, division, and death that can be used to model cell proliferation. Either can be used in association with free or commercial statistical software. Results with these two equations, varying the proliferation rate, nominally reduced by generational cell loss, are presented in two tables. The resulting equation, instructions, examples, and necessary mathematical software are available in the online appendix (www.uic.edu/nursing/publicationsupplements/tobillion_Anderson_Rubenstein_Guinan_Patel1.pdf), where several parameters of interest can be modified by the reader. Reducing the proliferation rate, by whatever alterations are employed, markedly increases the time to reach 10^9 cells originating from an initial progenitor. In thinking about multistep oncogenesis, it is useful to consider the profound effect that variations in the effective proliferation rate may have during cancer development. This can be approached with the proposed equation, which is easy to use and open to further peer fine-tuning for future modeling of cell growth.
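    As a rough illustration of this kind of order-of-magnitude estimate (not the authors' exact equations), the time from a single cell to 10^9 cells can be compared under exponential and logistic growth. The doubling times and carrying capacity below are assumed values.

```python
# Order-of-magnitude sketch: days for a clone starting from one cell to
# reach 1e9 cells under exponential vs. logistic growth, for several
# effective doubling times. All parameter values are illustrative.
import math

N_target = 1e9
K = 1e12  # logistic carrying capacity (assumed)

for Td in (1.0, 2.0, 4.0):            # effective doubling time, days (assumed)
    r = math.log(2) / Td              # per-day growth rate
    t_exp = Td * math.log2(N_target)  # exponential: N(t) = 2**(t/Td)
    # logistic: N(t) = K / (1 + (K - 1)*exp(-r*t)), N(0) = 1; solved for N(t) = N_target
    t_log = -math.log((K / N_target - 1) / (K - 1)) / r
    print(f"Td={Td:.0f} d: exponential {t_exp:.1f} d, logistic {t_log:.1f} d")
```

    Halving the effective proliferation rate doubles the latency under either model, consistent with the abstract's point that modest rate reductions markedly delay the time to reach 10^9 cells.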

  4. Operations and Modeling Analysis

    NASA Technical Reports Server (NTRS)

    Ebeling, Charles

    2005-01-01

    The Reliability and Maintainability Analysis Tool (RMAT) provides NASA the capability to estimate reliability and maintainability (R&M) parameters and operational support requirements for proposed space vehicles based upon relationships established from both aircraft and Shuttle R&M data. RMAT has matured both in its underlying database and in its level of sophistication in extrapolating this historical data to satisfy proposed mission requirements, maintenance concepts and policies, and type of vehicle (i.e., ranging from aircraft-like to Shuttle-like). However, a companion analysis tool, the Logistics Cost Model (LCM), has not reached the same level of maturity as RMAT due, in large part, to nonexistent or outdated cost estimating relationships and underlying cost databases, and its almost exclusive dependence on Shuttle operations and logistics cost input parameters. As a result, the full capability of the RMAT/LCM suite of analysis tools to take a conceptual vehicle and derive its operations and support requirements, along with the resulting operating and support costs, has not been realized.

  5. Estimating the Prevalence of Atrial Fibrillation from A Three-Class Mixture Model for Repeated Diagnoses

    PubMed Central

    Li, Liang; Mao, Huzhang; Ishwaran, Hemant; Rajeswaran, Jeevanantham; Ehrlinger, John; Blackstone, Eugene H.

    2016-01-01

    Atrial fibrillation (AF) is an abnormal heart rhythm characterized by rapid and irregular heartbeat, with or without perceivable symptoms. In clinical practice, the electrocardiogram (ECG) is often used for diagnosis of AF. Since the AF often arrives as recurrent episodes of varying frequency and duration and only the episodes that occur at the time of ECG can be detected, the AF is often underdiagnosed when a limited number of repeated ECGs are used. In studies evaluating the efficacy of AF ablation surgery, each patient undergoes multiple ECGs and the AF status at the time of ECG is recorded. The objective of this paper is to estimate the marginal proportions of patients with or without AF in a population, which are important measures of the efficacy of the treatment. The underdiagnosis problem is addressed by a three-class mixture regression model in which a patient’s probability of having no AF, paroxysmal AF, and permanent AF is modeled by auxiliary baseline covariates in a nested logistic regression. A binomial regression model is specified conditional on a subject being in the paroxysmal AF group. The model parameters are estimated by the EM algorithm. These parameters are themselves nuisance parameters for the purpose of this research, but the estimators of the marginal proportions of interest can be expressed as functions of the data and these nuisance parameters, and their variances can be estimated by the sandwich method. We examine the performance of the proposed methodology in simulations and two real data applications. PMID:27983754

  6. Estimating the prevalence of atrial fibrillation from a three-class mixture model for repeated diagnoses.

    PubMed

    Li, Liang; Mao, Huzhang; Ishwaran, Hemant; Rajeswaran, Jeevanantham; Ehrlinger, John; Blackstone, Eugene H

    2017-03-01

    Atrial fibrillation (AF) is an abnormal heart rhythm characterized by rapid and irregular heartbeat, with or without perceivable symptoms. In clinical practice, the electrocardiogram (ECG) is often used for diagnosis of AF. Since the AF often arrives as recurrent episodes of varying frequency and duration and only the episodes that occur at the time of ECG can be detected, the AF is often underdiagnosed when a limited number of repeated ECGs are used. In studies evaluating the efficacy of AF ablation surgery, each patient undergoes multiple ECGs and the AF status at the time of ECG is recorded. The objective of this paper is to estimate the marginal proportions of patients with or without AF in a population, which are important measures of the efficacy of the treatment. The underdiagnosis problem is addressed by a three-class mixture regression model in which a patient's probability of having no AF, paroxysmal AF, and permanent AF is modeled by auxiliary baseline covariates in a nested logistic regression. A binomial regression model is specified conditional on a subject being in the paroxysmal AF group. The model parameters are estimated by the Expectation-Maximization (EM) algorithm. These parameters are themselves nuisance parameters for the purpose of this research, but the estimators of the marginal proportions of interest can be expressed as functions of the data and these nuisance parameters and their variances can be estimated by the sandwich method. We examine the performance of the proposed methodology in simulations and two real data applications. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  7. Analyzing the Administration Perception of the Teachers by Means of Logistic Regression According to Values

    ERIC Educational Resources Information Center

    Ugurlu, Celal Teyyar

    2017-01-01

    This study aims to analyze the administration perception of the teachers according to values in line with certain parameters. The research uses the relational screening model. The scales were administered to a population of 470 teachers working in 25 secondary schools in the center of Sivas. The 317 questionnaires that were returned have been…

  8. An Alternative to the 3PL: Using Asymmetric Item Characteristic Curves to Address Guessing Effects

    ERIC Educational Resources Information Center

    Lee, Sora; Bolt, Daniel M.

    2018-01-01

    Both the statistical and interpretational shortcomings of the three-parameter logistic (3PL) model in accommodating guessing effects on multiple-choice items are well documented. We consider the use of a residual heteroscedasticity (RH) model as an alternative, and compare its performance to the 3PL with real test data sets and through simulation…
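    For reference, the 3PL item characteristic curve that the RH model is compared against takes the standard form below, with the lower asymptote c modeling guessing. The item parameter values are illustrative.

```python
# Sketch: the three-parameter logistic (3PL) item characteristic curve,
# P(theta) = c + (1 - c) / (1 + exp(-a*(theta - b))).
# a: discrimination, b: difficulty, c: pseudo-guessing lower asymptote.
import math

def p_3pl(theta, a, b, c):
    return c + (1.0 - c) / (1.0 + math.exp(-a * (theta - b)))

# A 4-option multiple-choice item: guessing floor c = 0.25 (illustrative)
a, b, c = 1.2, 0.0, 0.25
for theta in (-3, 0, 3):
    print(f"theta={theta:+d}  P={p_3pl(theta, a, b, c):.3f}")
```

    Setting c = 0 recovers the 2PL model that gives this collection its topic; the statistical shortcomings discussed above stem largely from how weakly c is identified by typical response data.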

  9. Stretched exponential dynamics of coupled logistic maps on a small-world network

    NASA Astrophysics Data System (ADS)

    Mahajan, Ashwini V.; Gade, Prashant M.

    2018-02-01

    We investigate the dynamic phase transition from a partially or fully arrested state to spatiotemporal chaos in coupled logistic maps on a small-world network. Persistence of local variables in a coarse-grained sense acts as an excellent order parameter to study this transition. We investigate the phase diagram by varying the coupling strength and the small-world rewiring probability p of nonlocal connections. The persistent region is a compact region bounded by two critical lines where band-merging crisis occurs. On one critical line, the persistent sites show a nonexponential (stretched exponential) decay for all p, while on the other, they show a crossover from nonexponential to exponential behavior as p → 1. With an effectively antiferromagnetic coupling, coupling to two neighbors on either side leads to exchange frustration. Apart from exchange frustration, the non-bipartite topology and nonlocal couplings in a small-world network could be a reason for anomalous relaxation. The distribution of trap times in the asymptotic regime has a long tail as well. The dependence of the temporal evolution of persistence on initial conditions is studied, and a scaling form for persistence after a waiting time is proposed. We present a simple possible model for this behavior.
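    A minimal sketch of this kind of system is given below: logistic maps coupled on a rewired ring, with the fraction of sites whose coarse-grained sign never flips as a crude persistence measure. The network construction, update rule, and all parameter values are simplifying assumptions, not the authors' exact setup.

```python
# Sketch: coupled logistic maps on a small-world-style ring (each link
# rewired with probability p) and a coarse-grained persistence fraction.
# Parameters are illustrative only.
import random

random.seed(0)
N, eps, p, T, mu = 100, 0.3, 0.1, 200, 4.0
xstar = 1 - 1 / mu  # nontrivial fixed point of the map, used for coarse-graining

# two ring neighbours per site; rewire each link with probability p
nbrs = []
for i in range(N):
    links = [(i - 1) % N, (i + 1) % N]
    nbrs.append([random.randrange(N) if random.random() < p else j for j in links])

f = lambda x: mu * x * (1 - x)
x = [random.random() for _ in range(N)]
sign0 = [xi > xstar for xi in x]
persistent = [True] * N

for _ in range(T):
    # synchronous diffusive update: the comprehension reads the old state
    x = [(1 - eps) * f(x[i]) + eps * sum(f(x[j]) for j in nbrs[i]) / len(nbrs[i])
         for i in range(N)]
    for i in range(N):
        if (x[i] > xstar) != sign0[i]:
            persistent[i] = False

frac = sum(persistent) / N
print("persistence fraction:", frac)
```

    In the chaotic regime used here the persistence fraction decays quickly; in the arrested region of the phase diagram described above it would instead saturate at a nonzero value.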

  10. Decoding and modelling of time series count data using Poisson hidden Markov model and Markov ordinal logistic regression models.

    PubMed

    Sebastian, Tunny; Jeyaseelan, Visalakshi; Jeyaseelan, Lakshmanan; Anandan, Shalini; George, Sebastian; Bangdiwala, Shrikant I

    2018-01-01

    Hidden Markov models are stochastic models in which the observations are assumed to follow a mixture distribution, but the parameters of the components are governed by a Markov chain which is unobservable. The issues related to the estimation of Poisson hidden Markov models, in which the observations come from a mixture of Poisson distributions and the parameters of the component Poisson distributions are governed by an m-state Markov chain with an unknown transition probability matrix, are explained here. These methods were applied to data on Vibrio cholerae counts reported every month over an 11-year span at Christian Medical College, Vellore, India. Using the Viterbi algorithm, the best estimate of the state sequence was obtained, and hence the transition probability matrix. The mean passage times between the states were estimated. The 95% confidence interval for the mean passage time was estimated via Monte Carlo simulation. The three hidden states of the estimated Markov chain are labelled 'Low', 'Moderate' and 'High', with mean counts of 1.4, 6.6 and 20.2 and estimated average durations of stay of 3, 3 and 4 months, respectively. Environmental risk factors were studied using Markov ordinal logistic regression analysis. No significant association was found between disease severity levels and climate components.
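    The Viterbi decoding step can be sketched for a small Poisson HMM as follows. The transition matrix, initial distribution, and count series are illustrative; only the three state means (1.4, 6.6, 20.2) echo the abstract.

```python
# Sketch: Viterbi decoding of the most likely hidden state sequence for a
# 3-state Poisson HMM. All inputs are illustrative, not the cholera data.
import math

def viterbi_poisson(counts, A, lam, pi):
    def logpmf(k, l):  # log Poisson pmf, log(k!) via lgamma
        return k * math.log(l) - l - math.lgamma(k + 1)
    S = len(lam)
    delta = [[0.0] * S for _ in counts]  # best log-probability ending in state s
    psi = [[0] * S for _ in counts]      # argmax predecessor for backtracking
    for s in range(S):
        delta[0][s] = math.log(pi[s]) + logpmf(counts[0], lam[s])
    for t in range(1, len(counts)):
        for s in range(S):
            best = max(range(S), key=lambda r: delta[t-1][r] + math.log(A[r][s]))
            psi[t][s] = best
            delta[t][s] = delta[t-1][best] + math.log(A[best][s]) + logpmf(counts[t], lam[s])
    path = [max(range(S), key=lambda s: delta[-1][s])]
    for t in range(len(counts) - 1, 0, -1):
        path.append(psi[t][path[-1]])
    return path[::-1]

A = [[0.8, 0.15, 0.05], [0.2, 0.6, 0.2], [0.05, 0.25, 0.7]]  # assumed transitions
lam = [1.4, 6.6, 20.2]        # 'Low', 'Moderate', 'High' mean counts
pi = [1/3, 1/3, 1/3]
counts = [0, 2, 1, 7, 5, 8, 19, 25, 18, 6, 2, 1]             # illustrative series
path = viterbi_poisson(counts, A, lam, pi)
print(path)
```

    In the study, the decoded state sequence is what the estimated transition probability matrix and mean passage times are derived from.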

  11. Sensitivity to gaze-contingent contrast increments in naturalistic movies: An exploratory report and model comparison

    PubMed Central

    Wallis, Thomas S. A.; Dorr, Michael; Bex, Peter J.

    2015-01-01

    Sensitivity to luminance contrast is a prerequisite for all but the simplest visual systems. To examine contrast increment detection performance in a way that approximates the natural environmental input of the human visual system, we presented contrast increments gaze-contingently within naturalistic video freely viewed by observers. A band-limited contrast increment was applied to a local region of the video relative to the observer's current gaze point, and the observer made a forced-choice response to the location of the target (≈25,000 trials across five observers). We present exploratory analyses showing that performance improved as a function of the magnitude of the increment and depended on the direction of eye movements relative to the target location, the timing of eye movements relative to target presentation, and the spatiotemporal image structure at the target location. Contrast discrimination performance can be modeled by assuming that the underlying contrast response is an accelerating nonlinearity (arising from a nonlinear transducer or gain control). We implemented one such model and examined the posterior over model parameters, estimated using Markov-chain Monte Carlo methods. The parameters were poorly constrained by our data; parameters constrained using strong priors taken from previous research showed poor cross-validated prediction performance. Atheoretical logistic regression models were better constrained and provided similar prediction performance to the nonlinear transducer model. Finally, we explored the properties of an extended logistic regression that incorporates both eye movement and image content features. Models of contrast transduction may be better constrained by incorporating data from both artificial and natural contrast perception settings. PMID:26057546

  12. Matched samples logistic regression in case-control studies with missing values: when to break the matches.

    PubMed

    Hansson, Lisbeth; Khamis, Harry J

    2008-12-01

    Simulated data sets are used to evaluate conditional and unconditional maximum likelihood estimation in an individual case-control design with continuous covariates when there are different rates of excluded cases and different levels of other design parameters. The effectiveness of the estimation procedures is measured by method bias, variance of the estimators, root mean square error (RMSE) for logistic regression and the percentage of explained variation. Conditional estimation leads to higher RMSE than unconditional estimation in the presence of missing observations, especially for 1:1 matching. The RMSE is higher for the smaller stratum size, especially for the 1:1 matching. The percentage of explained variation appears to be insensitive to missing data, but is generally higher for the conditional estimation than for the unconditional estimation. It is particularly good for the 1:2 matching design. For minimizing RMSE, a high matching ratio is recommended; in this case, conditional and unconditional logistic regression models yield comparable levels of effectiveness. For maximizing the percentage of explained variation, the 1:2 matching design with the conditional logistic regression model is recommended.
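    The conditional approach compared above eliminates the pair-specific intercepts: for 1:1 matching with a single continuous covariate, each pair contributes exp(b·x_case) / (exp(b·x_case) + exp(b·x_control)) to the likelihood. The sketch below maximizes this conditional likelihood on simulated data (true coefficient 1; all settings are illustrative).

```python
# Sketch: conditional MLE for 1:1 matched case-control data with one
# continuous covariate, maximized by ternary search. Simulated data.
import math, random

random.seed(1)
pairs = []
for _ in range(500):
    x1, x2 = random.gauss(0, 1), random.gauss(0, 1)
    # given one case per pair, P(first member is the case) with true beta = 1
    p = math.exp(x1) / (math.exp(x1) + math.exp(x2))
    pairs.append((x1, x2) if random.random() < p else (x2, x1))  # (case, control)

def negll(b):
    # negative conditional log-likelihood; strata intercepts cancel out
    return -sum(b * xc - math.log(math.exp(b * xc) + math.exp(b * xn))
                for xc, xn in pairs)

# ternary search on the convex 1-D objective
lo, hi = -5.0, 5.0
for _ in range(40):
    third = (hi - lo) / 3
    if negll(lo + third) < negll(hi - third):
        hi = hi - third
    else:
        lo = lo + third
beta_hat = (lo + hi) / 2
print(f"conditional MLE beta ~ {beta_hat:.2f}")
```

    When cases are excluded (missing covariates), whole pairs drop out of this likelihood, which is one intuition for the higher RMSE of conditional estimation under missingness reported above.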

  13. Linear and non-linear kinetics in the synthesis and degradation of acrylamide in foods and model systems.

    PubMed

    Corradini, Maria G; Peleg, Micha

    2006-01-01

    Isothermal acrylamide formation in foods and asparagine-glucose model systems has ubiquitous features. On a time scale of about 60 min, at temperatures in the approximate range of 120-160 °C, the acrylamide concentration-time curve has a characteristic sigmoid shape whose asymptotic level and steepness increase with temperature while the time that corresponds to the inflection point decreases. In the approximate range of 160-200 °C, the curve has a clear peak, whose onset, height, width and degree of asymmetry depend on the system's composition and temperature. The synthesis-degradation of acrylamide in model systems has recently been described by traditional kinetic models. They account for the intermediate stages of the process and the fate of the reactants involved at different levels of scrutiny. The resulting models have 2-6 rate constants, accounting for both the generation and elimination of the acrylamide. Their temperature dependence has been assumed to obey the Arrhenius equation, i.e., each step in the reaction was considered as having a fixed energy of activation. A proposed alternative is constructing the concentration curve by superimposing a Fermian decay term on a logistic growth function. The resulting model, which is not unique, has five parameters: a hypothetical uninterrupted generation level; two steepness parameters, one each for the concentration's climb and fall; and two time characteristics, one each for the synthesis and the elimination of the acrylamide. According to this model, peak concentration is observed only when the two time constants are comparable. The peak's shape and height are determined by the gap between the two time constants and the relative magnitudes of the two "rate" parameters. The concept can be extended to create models of non-isothermal acrylamide formation. 
The basic assumption, which is yet to be verified experimentally, is that the momentary rate of the acrylamide synthesis or degradation is the isothermal rate at the momentary temperature, at a time that corresponds to its momentary concentration. The theoretical capabilities of a model of this kind are demonstrated with computer simulations. If the described model is correct, then by controlling temperature history, it is possible to reduce the acrylamide while still accomplishing much of the desirable effects of a heat process.
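    The superposition idea can be sketched directly. The exact functional forms below are one plausible reading of the description (logistic growth multiplied by a Fermi-type decay), and the five parameter values are illustrative.

```python
# Sketch: an acrylamide concentration curve built by superimposing a
# Fermian (logistic) decay term on a logistic growth term. Parameter
# values are illustrative, not fitted to any data.
import math

def acrylamide(t, C_gen, k_rise, t_rise, k_fall, t_fall):
    # C_gen: hypothetical uninterrupted generation level
    # k_rise, t_rise: steepness and characteristic time of the synthesis
    # k_fall, t_fall: steepness and characteristic time of the elimination
    growth = C_gen / (1 + math.exp(-k_rise * (t - t_rise)))
    decay = 1 / (1 + math.exp(k_fall * (t - t_fall)))
    return growth * decay

# comparable time constants -> a clear peak (the high-temperature regime)
for t in range(0, 61, 10):
    print(t, round(acrylamide(t, 100, 0.3, 15, 0.2, 35), 1))
```

    Pushing t_fall far beyond t_rise removes the peak and leaves a plain sigmoid, matching the model's claim that a peak appears only when the two time constants are comparable.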

  14. The Trend Odds Model for Ordinal Data‡

    PubMed Central

    Capuano, Ana W.; Dawson, Jeffrey D.

    2013-01-01

    Ordinal data appear in a wide variety of scientific fields. These data are often analyzed using ordinal logistic regression models that assume proportional odds. When this assumption is not met, it may be possible to capture the lack of proportionality using a constrained structural relationship between the odds and the cut-points of the ordinal values (Peterson and Harrell, 1990). We consider a trend odds version of this constrained model, where the odds parameter increases or decreases in a monotonic manner across the cut-points. We demonstrate algebraically and graphically how this model is related to latent logistic, normal, and exponential distributions. In particular, we find that scale changes in these potential latent distributions are consistent with the trend odds assumption, with the logistic and exponential distributions having odds that increase in a linear or nearly linear fashion. We show how to fit this model using SAS Proc Nlmixed, and perform simulations under proportional odds and trend odds processes. We find that the added complexity of the trend odds model gives improved power over the proportional odds model when there are moderate to severe departures from proportionality. A hypothetical dataset is used to illustrate the interpretation of the trend odds model, and we apply this model to a Swine Influenza example where the proportional odds assumption appears to be violated. PMID:23225520

  15. The trend odds model for ordinal data.

    PubMed

    Capuano, Ana W; Dawson, Jeffrey D

    2013-06-15

    Ordinal data appear in a wide variety of scientific fields. These data are often analyzed using ordinal logistic regression models that assume proportional odds. When this assumption is not met, it may be possible to capture the lack of proportionality using a constrained structural relationship between the odds and the cut-points of the ordinal values. We consider a trend odds version of this constrained model, wherein the odds parameter increases or decreases in a monotonic manner across the cut-points. We demonstrate algebraically and graphically how this model is related to latent logistic, normal, and exponential distributions. In particular, we find that scale changes in these potential latent distributions are consistent with the trend odds assumption, with the logistic and exponential distributions having odds that increase in a linear or nearly linear fashion. We show how to fit this model using SAS Proc NLMIXED and perform simulations under proportional odds and trend odds processes. We find that the added complexity of the trend odds model gives improved power over the proportional odds model when there are moderate to severe departures from proportionality. A hypothetical data set is used to illustrate the interpretation of the trend odds model, and we apply this model to a swine influenza example wherein the proportional odds assumption appears to be violated. Copyright © 2012 John Wiley & Sons, Ltd.
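    One simple way to encode a trend odds structure is to let the log odds coefficient change linearly across the cut-points; this is a generic parameterisation for illustration, not necessarily the authors' exact constraint, and all values below are illustrative.

```python
# Sketch of a trend odds structure for a 4-category ordinal outcome:
# logit P(Y <= j | x) = alpha_j - (beta + j*gamma) * x.
# gamma = 0 recovers the proportional odds model.
import math

def cum_probs(x, alphas, beta, gamma):
    # cumulative probabilities P(Y <= j | x) for j = 0..J-2
    return [1 / (1 + math.exp(-(a - (beta + j * gamma) * x)))
            for j, a in enumerate(alphas)]

alphas = [-1.0, 0.5, 2.0]   # cut-points (illustrative)
beta, gamma = 0.8, 0.3      # gamma != 0 -> monotonic trend in the odds

for x in (0.0, 1.0):
    c = cum_probs(x, alphas, beta, gamma)
    probs = [c[0]] + [c[j] - c[j - 1] for j in range(1, len(c))] + [1 - c[-1]]
    print(f"x={x}: {[round(p, 3) for p in probs]}")
```

    The added parameter gamma is what buys the improved power over proportional odds when proportionality is moderately to severely violated; valid probabilities require the adjusted cumulative logits to stay ordered over the covariate range of interest.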

  16. A comparative analysis of predictive models of morbidity in intensive care unit after cardiac surgery - part II: an illustrative example.

    PubMed

    Cevenini, Gabriele; Barbini, Emanuela; Scolletta, Sabino; Biagioli, Bonizella; Giomarelli, Pierpaolo; Barbini, Paolo

    2007-11-22

    Popular predictive models for estimating morbidity probability after heart surgery are compared critically in a unitary framework. The study is divided into two parts. In the first part, modelling techniques and the intrinsic strengths and weaknesses of different approaches were discussed from a theoretical point of view. In this second part, the performances of the same models are evaluated in an illustrative example. Eight models were developed: Bayes linear and quadratic models, k-nearest neighbour model, logistic regression model, Higgins and direct scoring systems, and two feed-forward artificial neural networks with one and two layers. Cardiovascular, respiratory, neurological, renal, infectious and hemorrhagic complications were defined as morbidity. Training and testing sets each of 545 cases were used. The optimal set of predictors was chosen among a collection of 78 preoperative, intraoperative and postoperative variables by a stepwise procedure. Discrimination and calibration were evaluated by the area under the receiver operating characteristic curve and the Hosmer-Lemeshow goodness-of-fit test, respectively. Scoring systems and the logistic regression model required the largest set of predictors, while Bayesian and k-nearest neighbour models were much more parsimonious. In testing data, all models showed acceptable discrimination capacities; however, the Bayes quadratic model, using only three predictors, provided the best performance. All models showed satisfactory generalization ability: again, the Bayes quadratic model exhibited the best generalization, while artificial neural networks and scoring systems gave the worst results. Finally, poor calibration was obtained when using scoring systems, the k-nearest neighbour model and artificial neural networks, while Bayes (after recalibration) and logistic regression models gave adequate results. 
Although all the predictive models showed acceptable discrimination performance in the example considered, the Bayes and logistic regression models seemed better than the others, because they also had good generalization and calibration. The Bayes quadratic model seemed to be a convincing alternative to the much more usual Bayes linear and logistic regression models. It showed its capacity to identify a minimum core of predictors generally recognized as essential to pragmatically evaluate the risk of developing morbidity after heart surgery.

  17. Describing complex cells in primary visual cortex: a comparison of context and multi-filter LN models.

    PubMed

    Westö, Johan; May, Patrick J C

    2018-05-02

    Receptive field (RF) models are an important tool for deciphering neural responses to sensory stimuli. The two currently popular RF models are multi-filter linear-nonlinear (LN) models and context models. Models are, however, never correct and they rely on assumptions to keep them simple enough to be interpretable. As a consequence, different models describe different stimulus-response mappings, which may or may not be good approximations of real neural behavior. In the current study, we take up two tasks: First, we introduce new ways to estimate context models with realistic nonlinearities, that is, with logistic and exponential functions. Second, we evaluate context models and multi-filter LN models in terms of how well they describe recorded data from complex cells in cat primary visual cortex. Our results, based on single-spike information and correlation coefficients, indicate that context models outperform corresponding multi-filter LN models of equal complexity (measured in terms of number of parameters), with the best increase in performance being achieved by the novel context models. Consequently, our results suggest that the multi-filter LN-model framework is suboptimal for describing the behavior of complex cells: the context-model framework is clearly superior while still providing interpretable quantizations of neural behavior.

  18. PREDICTION OF MALIGNANT BREAST LESIONS FROM MRI FEATURES: A COMPARISON OF ARTIFICIAL NEURAL NETWORK AND LOGISTIC REGRESSION TECHNIQUES

    PubMed Central

    McLaren, Christine E.; Chen, Wen-Pin; Nie, Ke; Su, Min-Ying

    2009-01-01

    Rationale and Objectives Dynamic contrast enhanced MRI (DCE-MRI) is a clinical imaging modality for detection and diagnosis of breast lesions. Analytical methods were compared for diagnostic feature selection and performance of lesion classification to differentiate between malignant and benign lesions in patients. Materials and Methods The study included 43 malignant and 28 benign histologically-proven lesions. Eight morphological parameters, ten gray level co-occurrence matrices (GLCM) texture features, and fourteen Laws’ texture features were obtained using automated lesion segmentation and quantitative feature extraction. Artificial neural network (ANN) and logistic regression analysis were compared for selection of the best predictors of malignant lesions among the normalized features. Results Using ANN, the final four selected features were compactness, energy, homogeneity, and Law_LS, with area under the receiver operating characteristic curve (AUC) = 0.82, and accuracy = 0.76. The diagnostic performance of these four features, computed on the basis of logistic regression, yielded AUC = 0.80 (95% CI, 0.688 to 0.905), similar to that of ANN. The analysis also shows that the odds of a malignant lesion decreased by 48% (95% CI, 25% to 92%) for every increase of 1 SD in the Law_LS feature, adjusted for differences in compactness, energy, and homogeneity. Using logistic regression with z-score transformation, a model comprising compactness, NRL entropy, and gray level sum average was selected; it had the highest overall accuracy, 0.75, among all models, with AUC = 0.77 (95% CI, 0.660 to 0.880). When logistic modeling of transformations using the Box-Cox method was performed, the most parsimonious model, with predictors compactness and Law_LS, had an AUC of 0.79 (95% CI, 0.672 to 0.898). Conclusion The diagnostic performance of models selected by ANN and logistic regression was similar. 
The analytic methods were found to be roughly equivalent in terms of predictive ability when a small number of variables were chosen. The robust ANN methodology utilizes a sophisticated non-linear model, while logistic regression analysis provides insightful information to enhance interpretation of the model features. PMID:19409817

  19. Blastocoele expansion degree predicts live birth after single blastocyst transfer for fresh and vitrified/warmed single blastocyst transfer cycles.

    PubMed

    Du, Qing-Yun; Wang, En-Yin; Huang, Yan; Guo, Xiao-Yi; Xiong, Yu-Jing; Yu, Yi-Ping; Yao, Gui-Dong; Shi, Sen-Lin; Sun, Ying-Pu

    2016-04-01

    To evaluate the independent effects of the degree of blastocoele expansion and re-expansion and the inner cell mass (ICM) and trophectoderm (TE) grades on predicting live birth after fresh and vitrified/warmed single blastocyst transfer. Retrospective study. Reproductive medical center. Women undergoing 844 fresh and 370 vitrified/warmed single blastocyst transfer cycles. None. Live-birth rate, correlated with blastocyst morphology parameters by logistic regression and Spearman correlation analysis. The degree of blastocoele expansion and re-expansion was the only blastocyst morphology parameter that exhibited a significant ability to predict live birth in both fresh and vitrified/warmed single blastocyst transfer cycles by multivariate logistic regression and Spearman correlation analysis, respectively. Although the ICM grade was significantly related to live birth in fresh cycles according to the univariate model, its effect was not maintained in the multivariate logistic analysis. In vitrified/warmed cycles, neither ICM nor TE grade was correlated with live birth by logistic regression analysis. This study is the first to confirm that the degree of blastocoele expansion and re-expansion is a better predictor of live birth after both fresh and vitrified/warmed single blastocyst transfer cycles than ICM or TE grade. Copyright © 2016. Published by Elsevier Inc.

  20. A nonparametric multiple imputation approach for missing categorical data.

    PubMed

    Zhou, Muhan; He, Yulei; Yu, Mandi; Hsu, Chiu-Hsieh

    2017-06-06

    Incomplete categorical variables with more than two categories are common in public health data. However, most of the existing missing-data methods do not use the information from nonresponse (missingness) probabilities. We propose a nearest-neighbour multiple imputation approach to impute a missing at random categorical outcome and to estimate the proportion of each category. The donor set for imputation is formed by measuring distances between each missing value and the non-missing values. The distance function is calculated based on a predictive score, which is derived from two working models: one fits a multinomial logistic regression for predicting the missing categorical outcome (the outcome model) and the other fits a logistic regression for predicting missingness probabilities (the missingness model). A weighting scheme is used to accommodate contributions from the two working models when generating the predictive score. A missing value is imputed by randomly selecting one of the non-missing values with the smallest distances. We conduct a simulation to evaluate the performance of the proposed method and compare it with several alternative methods. A real-data application is also presented. The simulation study suggests that the proposed method performs well when missingness probabilities are not extreme under some misspecifications of the working models. However, the calibration estimator, which is also based on two working models, can be highly unstable when missingness probabilities for some observations are extremely high. In this scenario, the proposed method produces more stable and better estimates. In addition, proper weights need to be chosen to balance the contributions from the two working models and achieve optimal results for the proposed method. 
We conclude that the proposed multiple imputation method is a reasonable approach to dealing with missing categorical outcome data with more than two levels for assessing the distribution of the outcome. In terms of the choices for the working models, we suggest a multinomial logistic regression for predicting the missing outcome and a binary logistic regression for predicting the missingness probability.
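    The proposed procedure can be sketched end-to-end on simulated data, using scikit-learn logistic models as the two working models. The score construction, weight w, and donor-set size K below are illustrative simplifications of the paper's method.

```python
# Sketch: nearest-neighbour multiple imputation for a 3-category outcome,
# with an outcome model and a missingness model feeding a predictive score.
# Data and tuning values are illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 600
x = rng.normal(size=(n, 2))
# 3-category outcome depending on the covariates
logits = np.stack([x[:, 0], x[:, 1], -(x[:, 0] + x[:, 1]) / 2], axis=1)
probs = np.exp(logits) / np.exp(logits).sum(1, keepdims=True)
y = np.array([rng.choice(3, p=p) for p in probs])
# missing-at-random indicator driven by the covariates
miss = rng.random(n) < 1 / (1 + np.exp(-(x[:, 0] - 0.5)))
obs = ~miss

out_model = LogisticRegression(max_iter=1000).fit(x[obs], y[obs])       # outcome model
mis_model = LogisticRegression(max_iter=1000).fit(x, miss.astype(int))  # missingness model

# predictive score: weighted outcome-class probabilities + missingness probability
w = 0.8
score = np.hstack([w * out_model.predict_proba(x),
                   (1 - w) * mis_model.predict_proba(x)[:, [1]]])

K = 10
y_imp = y.copy()
for i in np.where(miss)[0]:
    d = np.linalg.norm(score[obs] - score[i], axis=1)
    donors = np.where(obs)[0][np.argsort(d)[:K]]  # K nearest observed donors
    y_imp[i] = y[rng.choice(donors)]              # random draw from the donor set

print("true:   ", np.bincount(y, minlength=3) / n)
print("imputed:", np.bincount(y_imp, minlength=3) / n)
```

    Repeating the donor draws gives multiple completed data sets, whose category proportions can then be combined across imputations in the usual way.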

  1. Hyperspectral imaging technique for determination of pork freshness attributes

    NASA Astrophysics Data System (ADS)

    Li, Yongyu; Zhang, Leilei; Peng, Yankun; Tang, Xiuying; Chao, Kuanglin; Dhakal, Sagar

    2011-06-01

    Freshness of pork is an important quality attribute, which can vary greatly in storage and logistics. The specific objectives of this research were to develop a hyperspectral imaging system to predict pork freshness based on quality attributes such as total volatile basic nitrogen (TVB-N), pH value and color parameters (L*, a*, b*). Pork samples were packed in sealed plastic bags and then stored at 4°C. Every 12 hours, hyperspectral scattering images were collected from the pork surface over the range of 400 nm to 1100 nm. Two different methods were used to extract feature spectra from the hyperspectral scattering images. First, the spectral scattering profiles at individual wavelengths were fitted accurately by a three-parameter Lorentzian distribution (LD) function; second, reflectance spectra were extracted from the scattering images. The partial least squares regression (PLSR) method was used to establish models to predict pork freshness. The results showed that the PLSR models based on reflectance spectra were better than those based on combinations of LD "parameter spectra" in predicting TVB-N, with a correlation coefficient (r) = 0.90 and a standard error of prediction (SEP) = 7.80 mg/100 g. Moreover, a prediction model for pork freshness was established using a combination of TVB-N, pH and color parameters. It gave good prediction results, with r = 0.91 for pork freshness. The research demonstrated that the hyperspectral scattering technique is a valid tool for real-time and nondestructive detection of pork freshness.

  2. Corruption and economic growth with non constant labor force growth

    NASA Astrophysics Data System (ADS)

    Brianzoni, Serena; Campisi, Giovanni; Russo, Alberto

    2018-05-01

Based on Brianzoni et al. [1], in the present work we propose an economic model of the relationship between corruption in public procurement and economic growth. We extend the benchmark model by introducing endogenous labor force growth, described by the logistic equation. The results of previous studies, such as Del Monte and Papagni [2] and Mauro [3], show that countries are stuck in one of two equilibria (high corruption and low economic growth, or low corruption and high economic growth). Brianzoni et al. [1] prove the existence of a further steady state characterized by intermediate levels of capital per capita and corruption. Our aim is to investigate the effects of endogenous labor force growth around this equilibrium. Moreover, due to the high number of parameters in the model, specific attention is given to numerical simulations, which highlight new policy measures that the government can adopt to fight corruption.

  3. Application of Item Response Theory to Tests of Substance-related Associative Memory

    PubMed Central

    Shono, Yusuke; Grenard, Jerry L.; Ames, Susan L.; Stacy, Alan W.

    2015-01-01

A substance-related word association test (WAT) is one of the commonly used indirect tests of substance-related implicit associative memory and has been shown to predict substance use. This study applied an item response theory (IRT) modeling approach to evaluate psychometric properties of the alcohol- and marijuana-related WATs and their items among 775 ethnically diverse at-risk adolescents. After examining the IRT assumptions, item fit, and differential item functioning (DIF) across gender and age groups, the original 18 WAT items were reduced to 14 and 15 items in the alcohol- and marijuana-related WATs, respectively. Thereafter, unidimensional one- and two-parameter logistic models (1PL and 2PL models) were fitted to the revised WAT items. The results demonstrated that both alcohol- and marijuana-related WATs have good psychometric properties. These results were discussed in light of the framework of a unified concept of construct validity (Messick, 1975, 1989, 1995). PMID:25134051
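
The two-parameter logistic (2PL) item response function fitted to the revised WAT items has a standard closed form; a minimal sketch:

```python
import math

def p_2pl(theta, a, b):
    # 2PL item response function: probability of endorsing an item
    # for ability theta, item discrimination a, and item difficulty b.
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

# When ability equals item difficulty, the probability is exactly 0.5;
# the 1PL model is the special case where all items share one discrimination a.
p_mid = p_2pl(theta=0.0, a=1.5, b=0.0)
```

Higher discrimination a makes the response curve steeper around theta = b, which is what lets the 2PL model separate items of equal difficulty but different diagnostic value.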

  4. Combined Effects of Soil Biotic and Abiotic Factors, Influenced by Sewage Sludge Incorporation, on the Incidence of Corn Stalk Rot

    PubMed Central

    Fortes, Nara Lúcia Perondi; Navas-Cortés, Juan A; Silva, Carlos Alberto; Bettiol, Wagner

    2016-01-01

The objectives of this study were to evaluate the combined effects of soil biotic and abiotic factors on the incidence of Fusarium corn stalk rot during four annual incorporations of two types of sewage sludge into soil, in a 5-year field assay under tropical conditions, and to predict the effects of these variables on the disease. For each type of sewage sludge, the following treatments were included: control with mineral fertilization recommended for corn; control without fertilization; sewage sludge based on the nitrogen concentration that provided the same amount of nitrogen as in the mineral fertilizer treatment; and sewage sludge that provided two, four and eight times the nitrogen concentration recommended for corn. Increasing dosages of both types of sewage sludge incorporated into soil resulted in increased corn stalk rot incidence, which was negatively correlated with corn yield. A global analysis highlighted the effect of the year of the experiment, followed by the sewage sludge dosages. The type of sewage sludge did not affect disease incidence. A multiple logistic model was fitted using a stepwise procedure, based on the selection of a model that included three explanatory parameters for disease incidence: electrical conductivity, magnesium and Fusarium population. In the selected model, the probability of higher disease incidence increased with an increase in these three explanatory parameters. When the explanatory parameters were compared, electrical conductivity showed a dominant effect and was the main variable for predicting the probability distribution curves of Fusarium corn stalk rot after sewage sludge application into the soil. PMID:27176597

  5. Scheduling Algorithm for Mission Planning and Logistics Evaluation (SAMPLE). Volume 1: User's guide

    NASA Technical Reports Server (NTRS)

    Dupnick, E.; Wiggins, D.

    1980-01-01

    An interactive computer program for automatically generating traffic models for the Space Transportation System (STS) is presented. Information concerning run stream construction, input data, and output data is provided. The flow of the interactive data stream is described. Error messages are specified, along with suggestions for remedial action. In addition, formats and parameter definitions for the payload data set (payload model), feasible combination file, and traffic model are documented.

  6. Maximum likelihood estimation for predicting the probability of obtaining variable shortleaf pine regeneration densities

    Treesearch

    Thomas B. Lynch; Jean Nkouka; Michael M. Huebschmann; James M. Guldin

    2003-01-01

    A logistic equation is the basis for a model that predicts the probability of obtaining regeneration at specified densities. The density of regeneration (trees/ha) for which an estimate of probability is desired can be specified by means of independent variables in the model. When estimating parameters, the dependent variable is set to 1 if the regeneration density (...

  7. Allocating Fire Mitigation Funds on the Basis of the Predicted Probabilities of Forest Wildfire

    Treesearch

    Ronald E. McRoberts; Greg C. Liknes; Mark D. Nelson; Krista M. Gebert; R. James Barbour; Susan L. Odell; Steven C. Yaddof

    2005-01-01

    A logistic regression model was used with map-based information to predict the probability of forest fire for forested areas of the United States. Model parameters were estimated using a digital layer depicting the locations of wildfires and satellite imagery depicting thermal hotspots. The area of the United States in the upper 50th percentile with respect to...

  8. Detecting DIF in Polytomous Items Using MACS, IRT and Ordinal Logistic Regression

    ERIC Educational Resources Information Center

    Elosua, Paula; Wells, Craig

    2013-01-01

    The purpose of the present study was to compare the Type I error rate and power of two model-based procedures, the mean and covariance structure model (MACS) and the item response theory (IRT), and an observed-score based procedure, ordinal logistic regression, for detecting differential item functioning (DIF) in polytomous items. A simulation…

  9. Bayesian estimation and use of high-throughput remote sensing indices for quantitative genetic analyses of leaf growth.

    PubMed

    Baker, Robert L; Leong, Wen Fung; An, Nan; Brock, Marcus T; Rubin, Matthew J; Welch, Stephen; Weinig, Cynthia

    2018-02-01

We develop Bayesian function-valued trait models that mathematically isolate genetic mechanisms underlying leaf growth trajectories by factoring out genotype-specific differences in photosynthesis. Remote sensing data can be used instead of leaf-level physiological measurements. Characterizing the genetic basis of traits that vary during ontogeny and affect plant performance is a major goal in evolutionary biology and agronomy. Describing genetic programs that specifically regulate morphological traits can be complicated by genotypic differences in physiological traits. We describe the growth trajectories of leaves using novel Bayesian function-valued trait (FVT) modeling approaches in Brassica rapa recombinant inbred lines raised in heterogeneous field settings. While frequentist approaches estimate parameter values by treating each experimental replicate discretely, Bayesian models can utilize information in the global dataset, potentially leading to more robust trait estimation. We illustrate this principle by estimating growth asymptotes in the face of missing data and comparing heritabilities of growth trajectory parameters estimated by Bayesian and frequentist approaches. Using pseudo-Bayes factors, we compare the performance of an initial Bayesian logistic growth model and a model that incorporates carbon assimilation (Amax) as a cofactor, thus statistically accounting for genotypic differences in carbon resources. We further evaluate two remotely sensed spectroradiometric indices, photochemical reflectance (pri2) and the MERIS Terrestrial Chlorophyll Index (mtci), as covariates in lieu of Amax, because these two indices were genetically correlated with Amax across years and treatments yet allow much higher throughput compared to direct leaf-level gas-exchange measurements. For leaf lengths in uncrowded settings, including Amax improves model fit over the initial model. The mtci and pri2 indices also outperform direct Amax measurements.
Of particular importance for evolutionary biologists and plant breeders, hierarchical Bayesian models estimating FVT parameters improve heritabilities compared to frequentist approaches.

  10. [On the relation between encounter rate and population density: Are classical models of population dynamics justified?].

    PubMed

    Nedorezov, L V

    2015-01-01

A stochastic model of migrations on a lattice with discrete time is considered. It is assumed that space is homogeneous with respect to its properties and that during one time step every individual (independently of local population numbers) can migrate to the nearest nodes of the lattice with equal probabilities. It is also assumed that population size remains constant during a certain time interval of the computer experiments. The following variants of estimating the encounter rate between individuals are considered: at fixed time moments, every individual in every node of the lattice interacts with all other individuals in the node; or individuals can stay in nodes independently, or be involved in groups of two, three or four individuals. For each variant of interaction between individuals, the average value (with respect to space and time) is computed for various population sizes. The samples obtained were compared with the respective functions of classic models of isolated population dynamics: the Verhulst model, Gompertz model, Svirezhev model, and theta-logistic model. Parameters of the functions were estimated with the least squares method. Analyses of deviations were performed using the Kolmogorov-Smirnov test, Lilliefors test, Shapiro-Wilk test, and other statistical tests. It is shown that, from the traditional point of view, there is no correspondence between the encounter rate and the functions describing the effects of self-regulatory mechanisms on population dynamics. The best fit of the samples was obtained with the Verhulst and theta-logistic models when using the dataset from the situation in which every individual in the node interacts with all other individuals.
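
The Verhulst and theta-logistic forms compared above, together with a least-squares criterion, can be sketched as follows (the parameter values are illustrative, not those of the study):

```python
def verhulst(n, r, k):
    # Verhulst (logistic) growth increment: r * n * (1 - n / k).
    return r * n * (1.0 - n / k)

def theta_logistic(n, r, k, theta):
    # Theta-logistic generalization; theta = 1 recovers the Verhulst model.
    return r * n * (1.0 - (n / k) ** theta)

def sse(model, data, *params):
    # Least-squares criterion used to compare candidate models to a sample
    # of (population size, observed increment) pairs.
    return sum((y - model(n, *params)) ** 2 for n, y in data)

# A sample generated by the Verhulst model is fitted exactly by the
# theta-logistic model with theta = 1, as expected.
data = [(n, verhulst(n, 0.5, 100.0)) for n in range(10, 100, 10)]
err = sse(theta_logistic, data, 0.5, 100.0, 1.0)
```

In the study itself, the model inputs are the empirically measured encounter rates rather than simulated increments; the fitting procedure is the same.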

  11. Deciphering factors controlling groundwater arsenic spatial variability in Bangladesh

    NASA Astrophysics Data System (ADS)

    Tan, Z.; Yang, Q.; Zheng, C.; Zheng, Y.

    2017-12-01

Elevated concentrations of geogenic arsenic in groundwater have been found in many countries to exceed 10 μg/L, the WHO's guideline value for drinking water. A common yet unexplained characteristic of groundwater arsenic spatial distribution is its extensive variability at various spatial scales. This study investigates factors influencing the spatial variability of groundwater arsenic in Bangladesh to improve the accuracy of models predicting arsenic exceedance rates spatially. A novel boosted regression tree method is used to establish a weak-learner ensemble model, which is compared to a linear model built with a conventional stepwise logistic regression method. Boosted regression tree models offer the advantage of capturing parameter interactions when large datasets are analyzed, in comparison to logistic regression. The point data set (n=3,538) of groundwater hydrochemistry with 19 parameters was obtained by the British Geological Survey in 2001. The spatial data sets of geological parameters (n=13) were from the Consortium for Spatial Information, Technical University of Denmark, University of East Anglia and the FAO, while the soil parameters (n=42) were from the Harmonized World Soil Database. The aforementioned parameters were regressed against categorical groundwater arsenic concentrations below or above three thresholds (5 μg/L, 10 μg/L and 50 μg/L) to identify the respective controlling factors. The boosted regression tree method outperformed the logistic regression method at all three threshold levels in terms of accuracy, specificity and sensitivity, resulting in an improved spatial map of the probability of groundwater arsenic exceeding each threshold when compared to a disjunctive-kriging-interpolated spatial arsenic map based on the same groundwater arsenic dataset. 
The boosted regression tree models also show that the most important controlling factors of groundwater arsenic distribution include groundwater iron content and well depth for all three thresholds. The probability of a well with iron content higher than 5 mg/L containing more than 5 μg/L, 10 μg/L and 50 μg/L As is estimated to be more than 91%, 85% and 51%, respectively, while the probability of a well from a depth of more than 160 m containing more than 5 μg/L, 10 μg/L and 50 μg/L As is estimated to be less than 38%, 25% and 14%, respectively.

  12. Logistics Distribution Center Location Evaluation Based on Genetic Algorithm and Fuzzy Neural Network

    NASA Astrophysics Data System (ADS)

    Shao, Yuxiang; Chen, Qing; Wei, Zhenhua

    Logistics distribution center location evaluation is a dynamic, fuzzy, open and complicated nonlinear system, which makes it difficult to evaluate the distribution center location by the traditional analysis method. The paper proposes a distribution center location evaluation system which uses the fuzzy neural network combined with the genetic algorithm. In this model, the neural network is adopted to construct the fuzzy system. By using the genetic algorithm, the parameters of the neural network are optimized and trained so as to improve the fuzzy system’s abilities of self-study and self-adaptation. At last, the sampled data are trained and tested by Matlab software. The simulation results indicate that the proposed identification model has very small errors.

  13. Case Study on Optimal Routing in Logistics Network by Priority-based Genetic Algorithm

    NASA Astrophysics Data System (ADS)

    Wang, Xiaoguang; Lin, Lin; Gen, Mitsuo; Shiota, Mitsushige

Recently, research on logistics has attracted more and more attention. One of the important issues in logistics systems is finding optimal delivery routes with the least cost for product delivery. Numerous models have been developed for this purpose. However, due to the diversity and complexity of practical problems, the existing models are usually not satisfactory for finding solutions efficiently and conveniently. In this paper, we treat a real-world logistics case for a company, ABC Co., Ltd., in Kitakyusyu, Japan. Firstly, based on the nature of this conveyance routing problem, we formulate it as a minimum cost flow (MCF) model, an extension of the transportation problem (TP) and the fixed-charge transportation problem (fcTP). Due to the complexity of the fcTP, we propose a priority-based genetic algorithm (pGA) approach to find the most acceptable solution to this problem. In this pGA approach, a two-stage path decoding method is adopted to develop delivery paths from a chromosome. We apply the pGA approach to this problem, compare our results with the current logistics network situation, and calculate the improvement in logistics cost to help management make decisions. Finally, to check the effectiveness of the proposed method, the results acquired are compared with those obtained from two optimization software packages, LINDO and CPLEX.

  14. Large unbalanced credit scoring using Lasso-logistic regression ensemble.

    PubMed

    Wang, Hong; Xu, Qingsong; Zhou, Lifeng

    2015-01-01

    Recently, various ensemble learning methods with different base classifiers have been proposed for credit scoring problems. However, for various reasons, there has been little research using logistic regression as the base classifier. In this paper, given large unbalanced data, we consider the plausibility of ensemble learning using regularized logistic regression as the base classifier to deal with credit scoring problems. In this research, the data is first balanced and diversified by clustering and bagging algorithms. Then we apply a Lasso-logistic regression learning ensemble to evaluate the credit risks. We show that the proposed algorithm outperforms popular credit scoring models such as decision tree, Lasso-logistic regression and random forests in terms of AUC and F-measure. We also provide two importance measures for the proposed model to identify important variables in the data.

  15. Dispersal and spatial heterogeneity: Single species

    USGS Publications Warehouse

    DeAngelis, Donald L.; Ni, Wei-Ming; Zhang, Bo

    2016-01-01

A recent result for a reaction-diffusion equation is that a population diffusing at any rate in an environment in which resources vary spatially will reach a higher total equilibrium biomass than the population in an environment in which the same total resources are distributed homogeneously. This has so far been proven by Lou for the case in which the reaction term has only one parameter, m(x), varying with spatial location x, which serves as both the intrinsic growth rate coefficient and the carrying capacity of the population. However, this striking result seems rather limited when applied to real populations. In order to make the model more relevant for ecologists, we consider a logistic reaction term with two parameters: r(x) for the intrinsic growth rate and K(x) for the carrying capacity. When r(x) and K(x) are proportional, the logistic equation takes a particularly simple form, and the earlier result still holds. In this paper we establish the result for the more general case of a positive correlation between r(x) and K(x) when the dispersal rate is small. We review natural and laboratory systems to which these results are relevant and discuss the implications of the results for population theory and conservation ecology.
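
In standard notation, the single-parameter reaction term studied by Lou and the two-parameter logistic reaction term considered here can be written as follows (a generic diffusion coefficient d is assumed):

```latex
\frac{\partial u}{\partial t} = d\,\Delta u + u\bigl(m(x) - u\bigr),
\qquad
\frac{\partial u}{\partial t} = d\,\Delta u + r(x)\,u\left(1 - \frac{u}{K(x)}\right).
```

When r(x) and K(x) are proportional, the second form reduces to the first up to a rescaling of u, which is why the earlier result carries over directly in that case.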

  16. A Bibliography for the ABLUE.

    DTIC Science & Technology

    1982-06-01

    scale based on two symmetric quantiles. Sankhya A 30, 335-336. [S] Gupta, S. S. and Gnanadesikan , M. (1966). Estimation of the parameters of the logistic...and Cheng (1971, 1972, 1974) Chan, Cheng, Mead and Panjer (1973) Cheng (1975) Eubank (1979, 1981a,b) Gupta and Gnanadesikan (1966) Hassanein (1969b

  17. Computational fluid dynamics (CFD) using porous media modeling predicts recurrence after coiling of cerebral aneurysms.

    PubMed

    Umeda, Yasuyuki; Ishida, Fujimaro; Tsuji, Masanori; Furukawa, Kazuhiro; Shiba, Masato; Yasuda, Ryuta; Toma, Naoki; Sakaida, Hiroshi; Suzuki, Hidenori

    2017-01-01

This study aimed to predict recurrence after coil embolization of unruptured cerebral aneurysms with computational fluid dynamics (CFD) using porous media modeling (porous media CFD). A total of 37 unruptured cerebral aneurysms treated with coiling were analyzed using follow-up angiograms, CFD simulated prior to coiling (control CFD), and porous media CFD. Coiled aneurysms were classified into stable or recurrence groups according to follow-up angiogram findings. Morphological parameters, coil packing density, and hemodynamic variables were evaluated for their correlations with aneurysmal recurrence. We also calculated residual flow volume (RFV), a novel hemodynamic parameter quantifying the residual aneurysm volume after simulated coiling in which the fluid domain has a mean flow velocity > 1.0 cm/s. Follow-up angiograms showed 24 aneurysms in the stable group and 13 in the recurrence group. The Mann-Whitney U test demonstrated that maximum size, dome volume, neck width, neck area, and coil packing density were significantly different between the two groups (P < 0.05). Among the hemodynamic parameters, aneurysms in the recurrence group had significantly larger inflow and outflow areas in the control CFD and larger RFVs in the porous media CFD. Multivariate logistic regression analyses demonstrated that RFV was the only independently significant factor (odds ratio, 1.06; 95% confidence interval, 1.01-1.11; P = 0.016). The study findings suggest that RFV computed under porous media modeling predicts the recurrence of coiled aneurysms.

  18. Basic Diagnosis and Prediction of Persistent Contrail Occurrence using High-resolution Numerical Weather Analyses/Forecasts and Logistic Regression. Part I: Effects of Random Error

    NASA Technical Reports Server (NTRS)

    Duda, David P.; Minnis, Patrick

    2009-01-01

    Straightforward application of the Schmidt-Appleman contrail formation criteria to diagnose persistent contrail occurrence from numerical weather prediction data is hindered by significant bias errors in the upper tropospheric humidity. Logistic models of contrail occurrence have been proposed to overcome this problem, but basic questions remain about how random measurement error may affect their accuracy. A set of 5000 synthetic contrail observations is created to study the effects of random error in these probabilistic models. The simulated observations are based on distributions of temperature, humidity, and vertical velocity derived from Advanced Regional Prediction System (ARPS) weather analyses. The logistic models created from the simulated observations were evaluated using two common statistical measures of model accuracy, the percent correct (PC) and the Hanssen-Kuipers discriminant (HKD). To convert the probabilistic results of the logistic models into a dichotomous yes/no choice suitable for the statistical measures, two critical probability thresholds are considered. The HKD scores are higher when the climatological frequency of contrail occurrence is used as the critical threshold, while the PC scores are higher when the critical probability threshold is 0.5. For both thresholds, typical random errors in temperature, relative humidity, and vertical velocity are found to be small enough to allow for accurate logistic models of contrail occurrence. The accuracy of the models developed from synthetic data is over 85 percent for both the prediction of contrail occurrence and non-occurrence, although in practice, larger errors would be anticipated.
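
The two accuracy measures used to evaluate the logistic models can be computed from a standard 2x2 forecast contingency table; a minimal sketch (the counts are illustrative):

```python
def pc_and_hkd(hits, misses, false_alarms, correct_negatives):
    # Percent correct (PC) and Hanssen-Kuipers discriminant (HKD)
    # for a dichotomous yes/no contrail-occurrence forecast.
    total = hits + misses + false_alarms + correct_negatives
    pc = (hits + correct_negatives) / total
    pod = hits / (hits + misses)                              # probability of detection
    pofd = false_alarms / (false_alarms + correct_negatives)  # prob. of false detection
    return pc, pod - pofd

pc, hkd = pc_and_hkd(hits=40, misses=10, false_alarms=20, correct_negatives=30)
```

Because HKD rewards discrimination between occurrence and non-occurrence rather than raw agreement, it favors a critical probability threshold near the climatological frequency, while PC favors 0.5, consistent with the behavior reported above.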

  19. The comparison of landslide ratio-based and general logistic regression landslide susceptibility models in the Chishan watershed after 2009 Typhoon Morakot

    NASA Astrophysics Data System (ADS)

    WU, Chunhung

    2015-04-01

The research built an original logistic regression landslide susceptibility model (abbreviated as or-LRLSM) and a landslide ratio-based logistic regression landslide susceptibility model (abbreviated as lr-LRLSM), compared the performance of the two models, and explained their error sources. The research assumes that the performance of the logistic regression model is better if the distribution of the landslide ratio and the weighted value of each variable are similar. The landslide ratio is the ratio of landslide area to total area in a specific area and a useful index for evaluating the seriousness of landslide disasters in Taiwan. The research adopted the landslide inventory induced by 2009 Typhoon Morakot in the Chishan watershed, the most serious disaster event of the last decade in Taiwan. A 20 m grid was adopted as the basic unit in building the LRLSM, and six variables, including elevation, slope, aspect, geological formation, accumulated rainfall, and bank erosion, were included in the two models. In building the or-LRLSM, the six variables were divided into continuous variables (elevation, slope, and accumulated rainfall) and categorical variables (aspect, geological formation, and bank erosion), while in building the lr-LRLSM all variables, classified based on landslide ratio, were categorical. Because the number of basic units in the Chishan watershed was too large to process with commercial software, the research used random sampling instead of the whole set of basic units, adopting equal proportions of landslide and non-landslide units in the logistic regression analysis. Ten random samples were taken, and the group with the best Cox & Snell R2 and Nagelkerke R2 values was selected as the database for the following analysis. 
Based on the best result from the 10 random sampling groups, the or-LRLSM (lr-LRLSM) is significant at the 1% level with Cox & Snell R2 = 0.190 (0.196) and Nagelkerke R2 = 0.253 (0.260). A unit with a landslide susceptibility value > 0.5 (≦ 0.5) is classified as a predicted landslide unit (non-landslide unit). The AUC, i.e. the area under the relative operating characteristic curve, of the or-LRLSM in the Chishan watershed is 0.72, while that of the lr-LRLSM is 0.77. Furthermore, the average correct ratio of the lr-LRLSM (73.3%) is better than that of the or-LRLSM (68.3%). The research analyzed in detail the error sources of the two models. For continuous variables, using the landslide ratio-based classification in building the lr-LRLSM makes the distribution of the weighted values more similar to the distribution of the landslide ratio over the range of the continuous variable than in the or-LRLSM. For categorical variables, the point of using the landslide ratio-based classification in building the lr-LRLSM is to gather parameters with approximately equal landslide ratios together. The mean correct ratio for continuous variables (categorical variables) using the lr-LRLSM is better than that of the or-LRLSM by 0.6~2.6% (1.7%~6.0%). Building a landslide susceptibility model using landslide ratio-based classification is practical and performs better than using the original logistic regression.

  20. Noisy coupled logistic maps in the vicinity of chaos threshold.

    PubMed

    Tirnakli, Ugur; Tsallis, Constantino

    2016-04-01

We focus on a linear chain of N first-neighbor-coupled logistic maps in the vicinity of their edge of chaos in the presence of a common noise. This model, characterised by the coupling strength ϵ and the noise width σmax, was recently introduced by Pluchino et al. [Phys. Rev. E 87, 022910 (2013)]. They detected, for the time-averaged returns with characteristic return time τ, possible connections with q-Gaussians, the distributions which optimise, under appropriate constraints, the nonadditive entropy Sq, basis of nonextensive statistical mechanics. Here, we take a closer look at this model, and numerically obtain probability distributions which exhibit a slight asymmetry for some parameter values, at variance with simple q-Gaussians. Nevertheless, over many decades, the fitting with q-Gaussians turns out to be numerically very satisfactory for wide regions of the parameter values, and we illustrate how the index q evolves with (N,τ,ϵ,σmax). It is nevertheless instructive to see how careful one must be in such numerical analysis. Overall, this work shows that physical and/or biological systems that are correctly mimicked by this model are thermostatistically related to nonextensive statistical mechanics when time-averaged relevant quantities are studied.
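
A minimal sketch of a ring of first-neighbor-coupled logistic maps driven by a common noise; the diffusive coupling scheme and the clipping to [0, 1] are assumptions for illustration, not necessarily the exact update rule of Pluchino et al.:

```python
import random

def step(x, a, eps, sigma_max, rng):
    # One synchronous update of N coupled logistic maps f(x) = a*x*(1-x)
    # on a ring, with a single additive noise term shared by all sites.
    n = len(x)
    f = [a * xi * (1.0 - xi) for xi in x]
    noise = rng.uniform(-sigma_max, sigma_max)  # common noise
    return [
        min(1.0, max(0.0,
            (1.0 - eps) * f[i]
            + (eps / 2.0) * (f[(i - 1) % n] + f[(i + 1) % n])
            + noise))
        for i in range(n)
    ]

rng = random.Random(0)
x = [rng.random() for _ in range(8)]
for _ in range(1000):
    # a near the period-doubling accumulation point (edge of chaos)
    x = step(x, a=3.57, eps=0.1, sigma_max=0.01, rng=rng)
```

The quantities studied in the paper (time-averaged returns over windows of length τ) would be accumulated over many such iterations before histogramming.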

  2. Modelling long-term fire occurrence factors in Spain by accounting for local variations with geographically weighted regression

    NASA Astrophysics Data System (ADS)

    Martínez-Fernández, J.; Chuvieco, E.; Koutsias, N.

    2013-02-01

    Humans are responsible for most forest fires in Europe, but anthropogenic factors behind these events are still poorly understood. We tried to identify the driving factors of human-caused fire occurrence in Spain by applying two different statistical approaches. Firstly, assuming stationary processes for the whole country, we created models based on multiple linear regression and binary logistic regression to find factors associated with fire density and fire presence, respectively. Secondly, we used geographically weighted regression (GWR) to better understand and explore the local and regional variations of those factors behind human-caused fire occurrence. The number of human-caused fires occurring within a 25-yr period (1983-2007) was computed for each of the 7638 Spanish mainland municipalities, creating a binary variable (fire/no fire) to develop logistic models, and a continuous variable (fire density) to build standard linear regression models. A total of 383 657 fires were registered in the study dataset. The binary logistic model, which estimates the probability of having/not having a fire, successfully classified 76.4% of the total observations, while the ordinary least squares (OLS) regression model explained 53% of the variation of the fire density patterns (adjusted R2 = 0.53). Both approaches confirmed, in addition to forest and climatic variables, the importance of variables related with agrarian activities, land abandonment, rural population exodus and developmental processes as underlying factors of fire occurrence. For the GWR approach, the explanatory power of the GW linear model for fire density using an adaptive bandwidth increased from 53% to 67%, while for the GW logistic model the correctly classified observations improved only slightly, from 76.4% to 78.4%, but significantly according to the corrected Akaike Information Criterion (AICc), from 3451.19 to 3321.19. 
The results from GWR indicated a significant spatial variation in the local parameter estimates for all the variables and an important reduction of the autocorrelation in the residuals of the GW linear model. Despite the fitting improvement of local models, GW regression, more than an alternative to "global" or traditional regression modelling, seems to be a valuable complement to explore the non-stationary relationships between the response variable and the explanatory variables. The synergy of global and local modelling provides insights into fire management and policy and helps further our understanding of the fire problem over large areas while at the same time recognizing its local character.

  3. On the Relationships between Jeffreys Modal and Weighted Likelihood Estimation of Ability under Logistic IRT Models

    ERIC Educational Resources Information Center

    Magis, David; Raiche, Gilles

    2012-01-01

    This paper focuses on two estimators of ability with logistic item response theory models: the Bayesian modal (BM) estimator and the weighted likelihood (WL) estimator. For the BM estimator, Jeffreys' prior distribution is considered, and the corresponding estimator is referred to as the Jeffreys modal (JM) estimator. It is established that under…
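
    The Jeffreys modal estimator maximizes the log-likelihood plus the log of Jeffreys' prior, which for logistic IRT models is proportional to the square root of the test information. A grid-search sketch under the two-parameter logistic model, with hypothetical item parameters (not taken from the paper):

```python
import numpy as np

def p2pl(theta, a, b):
    """Two-parameter logistic IRF: P(correct | theta)."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def jeffreys_modal(responses, a, b):
    """Jeffreys modal (JM) ability estimate: maximize log-likelihood
    plus the log of Jeffreys' prior, prior ∝ sqrt(I(theta))."""
    grid = np.linspace(-4, 4, 8001)
    th = grid[:, None]                            # broadcast over items
    P = p2pl(th, a, b)                            # (grid, items)
    loglik = (responses * np.log(P) + (1 - responses) * np.log(1 - P)).sum(axis=1)
    info = (a ** 2 * P * (1 - P)).sum(axis=1)     # test information I(theta)
    logpost = loglik + 0.5 * np.log(info)         # Jeffreys prior ∝ sqrt(I)
    return grid[np.argmax(logpost)]

# Hypothetical 5-item test (discriminations a, difficulties b)
a = np.array([1.2, 0.8, 1.5, 1.0, 0.9])
b = np.array([-1.0, -0.5, 0.0, 0.5, 1.0])
theta_hat = jeffreys_modal(np.array([1, 1, 1, 0, 0]), a, b)
```

    Unlike maximum likelihood, the JM estimate stays finite even for an all-correct response pattern, because the prior vanishes where the test information does.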

  4. A decision support model for investment on P2P lending platform.

    PubMed

    Zeng, Xiangxiang; Liu, Li; Leung, Stephen; Du, Jiangze; Wang, Xun; Li, Tao

    2017-01-01

Peer-to-peer (P2P) lending, as a novel economic lending model, has triggered new challenges in making effective investment decisions. In a P2P lending platform, one lender can invest in N loans and one loan may be accepted by M investors, thus forming a bipartite graph. Based on this bipartite graph model, we built an iterative computation model to evaluate the unknown loans. To validate the proposed model, we performed extensive experiments on real-world data from the largest American P2P lending marketplace, Prosper. By comparing our experimental results with those obtained by Bayes and logistic regression classifiers, we show that our computation model can help borrowers select good loans and help lenders make good investment decisions. Experimental results also show that the logistic classification model is a good complement to our iterative computation model, which motivated us to integrate the two models. The experimental results of the hybrid classification model demonstrate that the logistic classification model and our iterative computation model are complementary to each other. We conclude that the hybrid model (i.e., the integration of the iterative computation model and the logistic classification model) is more efficient and stable than either individual model alone.
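
    The abstract does not spell out the iteration, so the following is only a plausible HITS-style sketch of mutual reinforcement on a lender-loan bipartite graph; the adjacency matrix and scoring rule are illustrative assumptions, not Prosper data or the authors' exact model:

```python
import numpy as np

# Hypothetical adjacency: A[i, j] = 1 if lender i funded loan j
A = np.array([
    [1, 1, 0, 0],
    [1, 0, 1, 0],
    [0, 1, 1, 1],
], dtype=float)

def bipartite_scores(A, iters=100):
    """HITS-style mutual reinforcement: a loan looks good if good
    lenders fund it; a lender looks good if they fund good loans."""
    lenders = np.ones(A.shape[0])
    for _ in range(iters):
        loans = A.T @ lenders              # loan quality from its backers
        loans /= np.linalg.norm(loans)
        lenders = A @ loans                # lender quality from their picks
        lenders /= np.linalg.norm(lenders)
    return lenders, loans

lenders, loans = bipartite_scores(A)
```

    In this toy graph, loans 1 and 2 are each backed by two lenders including the most active one, while loan 3 has a single backer, so the iteration ranks it lowest.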

  5. A decision support model for investment on P2P lending platform

    PubMed Central

    Liu, Li; Leung, Stephen; Du, Jiangze; Wang, Xun; Li, Tao

    2017-01-01

Peer-to-peer (P2P) lending, as a novel economic lending model, has triggered new challenges in making effective investment decisions. In a P2P lending platform, one lender can invest in N loans and one loan may be accepted by M investors, thus forming a bipartite graph. Based on this bipartite graph model, we built an iterative computation model to evaluate the unknown loans. To validate the proposed model, we performed extensive experiments on real-world data from the largest American P2P lending marketplace, Prosper. By comparing our experimental results with those obtained by Bayes and logistic regression classifiers, we show that our computation model can help borrowers select good loans and help lenders make good investment decisions. Experimental results also show that the logistic classification model is a good complement to our iterative computation model, which motivated us to integrate the two models. The experimental results of the hybrid classification model demonstrate that the logistic classification model and our iterative computation model are complementary to each other. We conclude that the hybrid model (i.e., the integration of the iterative computation model and the logistic classification model) is more efficient and stable than either individual model alone. PMID:28877234

  6. Refining Stimulus Parameters in Assessing Infant Speech Perception Using Visual Reinforcement Infant Speech Discrimination: Sensation Level.

    PubMed

    Uhler, Kristin M; Baca, Rosalinda; Dudas, Emily; Fredrickson, Tammy

    2015-01-01

Speech perception measures have long been considered an integral piece of the audiological assessment battery. Currently, a prelinguistic, standardized measure of speech perception is missing from the clinical assessment battery for infants and young toddlers. Such a measure would allow systematic assessment of the speech perception abilities of infants, as well as the potential to investigate the impact that early identification of hearing loss and early fitting of amplification have on the auditory pathways. The aims were to investigate the impact of sensation level (SL) on the ability of infants with normal hearing (NH) to discriminate /a-i/ and /ba-da/, and to determine whether performance on the two contrasts differs significantly in predicting the discrimination criterion. The design was based on a survival analysis model for event occurrence and a repeated measures logistic model for binary outcomes. The outcome for the survival analysis was the minimum SL for criterion, and the outcome for the logistic regression model was the presence/absence of achieving the criterion. Criterion achievement was designated when an infant's proportion correct score was >0.75 on the discrimination performance task. Twenty-two infants with NH sensitivity participated in this study. There were 9 males and 13 females, aged 6-14 mo. Testing took place over two to three sessions. The first session consisted of a hearing test, threshold assessment of the two speech sounds (/a/ and /i/), and, if time and attention allowed, visual reinforcement infant speech discrimination (VRISD). The second session consisted of VRISD assessment for the two test contrasts (/a-i/ and /ba-da/). The presentation level started at 50 dBA. If the infant was unable to achieve criterion (>0.75) at 50 dBA, the presentation level was increased to 70 dBA, followed by 60 dBA. Data examination included an event analysis, which provided the distribution of the probability of reaching criterion across SLs.
The second stage of the analysis was a repeated measures logistic regression in which SL and contrast were used to predict the likelihood of reaching the speech discrimination criterion. Infants were able to reach criterion for the /a-i/ contrast at statistically lower SLs than for /ba-da/. Six infants never reached criterion for /ba-da/ and one never reached criterion for /a-i/. The conditional probability of not reaching criterion by 70 dB SL was 0% for /a-i/ and 21% for /ba-da/. The predictive logistic regression model showed that children were more likely to discriminate the /a-i/ contrast even when controlling for SL. Nearly all normal-hearing infants can demonstrate discrimination criterion of a vowel contrast at 60 dB SL, while a level of ≥70 dB SL may be needed to allow all infants to demonstrate discrimination criterion of a difficult consonant contrast. American Academy of Audiology.

  7. Impact Assessment of Effective Parameters on Drivers' Attention Level to Urban Traffic Signs

    NASA Astrophysics Data System (ADS)

    Kazemi, Mojtaba; Rahimi, Amir Masoud; Roshankhah, Sheida

    2016-03-01

Traffic signs are among the oldest safety and traffic control equipment. Drivers' reaction to installed signs is an important issue that can be studied using statistical models developed for target groups. A total of 527 questionnaires were completed randomly over 45 days, some by drivers passing through two northern cities of Iran and some by e-mail, so the minimum sample size of 384 was fulfilled. In addition, a Cronbach's alpha of more than 90% verifies the questionnaire's validity. Ordinal logistic regression was used for the 5-level answer variables. This relatively novel method predicts the probability of different cases while considering the other effective independent variables. Eighteen parameters relating to human, vehicle, and environmental factors were assessed, and five parameters (number of accidents in the last 5 years, occupation, driving time, number of accidents per day, and driving speed) were eventually found to be the most important. Age and gender, which are considered key factors in other safety and accident studies, were not recognized as effective in this paper. The results could be useful for safety planning programs.

  8. A Situational-Awareness System For Networked Infantry Including An Accelerometer-Based Shot-Identification Algorithm For Direct-Fire Weapons

    DTIC Science & Technology

    2016-09-01

noise density and temperature sensitivity of these devices are all on the same order of magnitude. Even the worst-case noise density of the GCDC... accelerations from a handgun firing were distinct from other impulsive events on the wrist, such as using a hammer. Loeffler first identified potential shots by... spikes, taking various statistical parameters. He used a logistic regression model on these parameters and was able to classify 98.9% of shots

  9. Temporal association between the influenza virus and respiratory syncytial virus (RSV): RSV as a predictor of seasonal influenza.

    PubMed

    Míguez, A; Iftimi, A; Montes, F

    2016-09-01

Epidemiologists agree that there is a prevailing seasonality in the presentation of epidemic waves of respiratory syncytial virus (RSV) infections and influenza. The aim of this study is to quantify the relationship between RSV activity and influenza virus activity, in order to use the RSV seasonal curve as a predictor of the evolution of an influenza epidemic wave. Two statistical tools, logistic regression and time series, are used for predicting the evolution of influenza. Both the logistic models and the time series of influenza consider RSV information from previous weeks. Data consist of influenza and confirmed RSV cases reported in Comunitat Valenciana (Spain) during the period from week 40 (2010) to week 8 (2014). Binomial logistic regression models used to predict the two states of the influenza wave, basal or peak, result in a rate of correct classification higher than 92% on the validation set. When a finer three-state categorization is established (basal, increasing peak and decreasing peak), the multinomial logistic model performs well in 88% of cases of the validation set. The ARMAX model fits the influenza waves well and shows good performance for short-term forecasts up to 3 weeks. The seasonal evolution of the influenza virus can be predicted a minimum of 4 weeks in advance using logistic models based on RSV. It would be necessary to study more inter-pandemic seasons to establish a stronger relationship between the epidemic waves of both viruses.
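
    A minimal sketch of the idea of predicting a binary influenza state (basal/peak) from RSV activity in the preceding weeks, using lagged features and a plain gradient-ascent logistic fit; the stylized seasonal data below are synthetic, not the Comunitat Valenciana surveillance series:

```python
import numpy as np

def make_lagged(rsv, flu, lags=(1, 2, 3, 4)):
    """Features: RSV activity 1-4 weeks before the target week."""
    m = max(lags)
    X = np.column_stack([rsv[m - L: len(rsv) - L] for L in lags])
    return np.column_stack([np.ones(len(X)), X]), flu[m:]

def fit_logistic(X, y, steps=5000, lr=0.1):
    """Plain gradient ascent on the Bernoulli log-likelihood."""
    beta = np.zeros(X.shape[1])
    for _ in range(steps):
        p = 1 / (1 + np.exp(-X @ beta))
        beta += lr * X.T @ (y - p) / len(y)
    return beta

# Stylized seasons: the RSV wave precedes the influenza peak by 3 weeks
weeks = np.arange(260)
rsv = ((weeks % 52 >= 8) & (weeks % 52 <= 12)).astype(float)
flu = ((weeks % 52 >= 11) & (weeks % 52 <= 15)).astype(float)
X, y = make_lagged(rsv, flu)
beta = fit_logistic(X, y)
p_hat = 1 / (1 + np.exp(-X @ beta))
accuracy = ((p_hat > 0.5) == (y > 0.5)).mean()
```

    Because the influenza state is driven here by RSV activity a few weeks earlier, the lagged features carry the predictive signal, mirroring the record's 4-week lead time.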

  10. Limited memory Broyden-Fletcher-Goldfarb-Shanno (L-BFGS) method for the parameter estimation on geographically weighted ordinal logistic regression model (GWOLR)

    NASA Astrophysics Data System (ADS)

    Saputro, Dewi Retno Sari; Widyaningsih, Purnami

    2017-08-01

In general, parameter estimation for the GWOLR model uses the maximum likelihood method, but this leads to a system of nonlinear equations that is difficult to solve, so an approximate numerical solution is needed. There are two popular families of numerical methods: Newton's method and Quasi-Newton (QN) methods. Newton's method requires substantial computation time because it involves the Jacobian matrix (derivatives). QN methods overcome this drawback by replacing derivative computation with direct function evaluations. QN methods use a Hessian matrix approximation such as the Davidon-Fletcher-Powell (DFP) formula. The Broyden-Fletcher-Goldfarb-Shanno (BFGS) method is a QN method that shares the DFP formula's property of maintaining a positive definite Hessian approximation. Because the BFGS method requires a large amount of memory, an algorithm that decreases memory usage is needed, namely limited-memory BFGS (L-BFGS). The purpose of this research is to evaluate the efficiency of the L-BFGS method in the iterative and recursive computation of the Hessian matrix and its inverse for GWOLR parameter estimation. We found that the BFGS and L-BFGS methods have arithmetic operation counts of O(n²) and O(nm), respectively.
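
    The memory saving comes from the L-BFGS two-loop recursion, which applies an implicit inverse-Hessian built from only the last m update pairs instead of storing the full n x n matrix. A self-contained sketch (fitting an ordinary binary, not ordinal, logistic model for brevity):

```python
import numpy as np

def neg_loglik_grad(beta, X, y):
    """Negative Bernoulli log-likelihood and its gradient."""
    p = 1 / (1 + np.exp(-X @ beta))
    eps = 1e-12
    f = -np.sum(y * np.log(p + eps) + (1 - y) * np.log(1 - p + eps))
    return f, X.T @ (p - y)

def lbfgs(fun_grad, x0, m=5, iters=100):
    """L-BFGS: the two-loop recursion applies an implicit inverse-Hessian
    from the last m (s, y) pairs in O(m n) memory."""
    x = x0.copy()
    S, Y = [], []
    f, g = fun_grad(x)
    for _ in range(iters):
        q = g.copy()
        alphas = []
        for s, yv in zip(reversed(S), reversed(Y)):     # first loop
            a = (s @ q) / (yv @ s)
            alphas.append(a)
            q -= a * yv
        if S:
            q *= (S[-1] @ Y[-1]) / (Y[-1] @ Y[-1])      # initial H0 scaling
        for s, yv, a in zip(S, Y, reversed(alphas)):    # second loop
            b = (yv @ q) / (yv @ s)
            q += (a - b) * s
        d = -q                                          # search direction
        t = 1.0                                         # backtracking line search
        fn, gn = fun_grad(x + t * d)
        while fn > f + 1e-4 * t * (g @ d) and t > 1e-10:
            t *= 0.5
            fn, gn = fun_grad(x + t * d)
        s, yv = t * d, gn - g
        if yv @ s > 1e-10:                              # curvature condition
            S.append(s); Y.append(yv)
            if len(S) > m:
                S.pop(0); Y.pop(0)
        x, f, g = x + t * d, fn, gn
        if np.linalg.norm(g) < 1e-6:
            break
    return x

# Recover known coefficients from synthetic logistic data
rng = np.random.default_rng(1)
X = np.column_stack([np.ones(500), rng.normal(size=(500, 2))])
true_beta = np.array([0.5, 1.5, -2.0])
y = (rng.uniform(size=500) < 1 / (1 + np.exp(-X @ true_beta))).astype(float)
beta_hat = lbfgs(lambda b: neg_loglik_grad(b, X, y), np.zeros(3))
```

    Only m vector pairs are kept, which is the O(nm) cost the record contrasts with the O(n²) storage of full BFGS.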

  11. Predicting bacterial growth in raw, salted, and cooked chicken breast fillets during storage.

    PubMed

    Galarz, Liane Aldrighi; Fonseca, Gustavo Graciano; Prentice, Carlos

    2016-09-01

Growth curves were evaluated for aerobic mesophilic and psychrotrophic bacteria, Pseudomonas spp. and Staphylococcus spp., grown in raw, salted, and cooked chicken breast at 2, 4, 7, 10, 15, and 20 ℃, using the modified Gompertz and modified logistic models. Shelf life was determined based on microbiological counts and sensory analysis. Increasing temperature reduced the shelf life, which varied from 10 to 26 days at 2 ℃, 9 to 21 days at 4 ℃, 6 to 12 days at 7 ℃, 4 to 8 days at 10 ℃, 2 to 4 days at 15 ℃, and 1 to 2 days at 20 ℃. In most cases, cooked chicken breast showed the highest microbial count, followed by raw breast and lastly salted breast. The data obtained here were useful for the generation of mathematical models and parameters. The models presented high correlation and can be used for predictive purposes in the poultry meat supply chain. © The Author(s) 2015.
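
    The modified Gompertz and modified logistic growth curves can be sketched as follows (the Zwietering-style reparameterization used here is an assumption), together with a shelf-life readout as the time the predicted log-count crosses a spoilage limit; all parameter values are hypothetical, not the paper's fits:

```python
import numpy as np

def gompertz(t, n0, A, mu, lam):
    """Modified Gompertz growth: log-count over time with lag `lam`,
    maximum rate `mu`, and asymptote n0 + A."""
    return n0 + A * np.exp(-np.exp(mu * np.e / A * (lam - t) + 1))

def logistic_growth(t, n0, A, mu, lam):
    """Modified logistic growth with the same parameter meanings."""
    return n0 + A / (1 + np.exp(4 * mu / A * (lam - t) + 2))

def shelf_life(t, counts, limit=7.0):
    """First time the predicted log10 count crosses the spoilage limit."""
    above = np.nonzero(counts >= limit)[0]
    return t[above[0]] if len(above) else None

# Hypothetical psychrotroph parameters at 4 C: lag ~2 d, mu ~0.5 log/d
t = np.linspace(0, 30, 301)
counts = gompertz(t, n0=3.0, A=6.0, mu=0.5, lam=2.0)
life = shelf_life(t, counts)
```

    With these illustrative parameters both model forms put the shelf life near 10 days, in the ballpark the record reports for refrigerated storage.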

  12. Evaluating 1-, 2- and 3- Parameter Logistic Models Using Model-Based and Empirically-Based Simulations under Homogeneous and Heterogeneous Set Conditions

    ERIC Educational Resources Information Center

    Rizavi, Saba; Way, Walter D.; Lu, Ying; Pitoniak, Mary; Steffen, Manfred

    2004-01-01

    The purpose of this study was to use realistically simulated data to evaluate various CAT designs for use with the verbal reasoning measure of the Medical College Admissions Test (MCAT). Factors such as item pool depth, content constraints, and item formats often cause repeated adaptive administrations of an item at ability levels that are not…

  13. Differential Item Functioning Analysis Using a Mixture 3-Parameter Logistic Model with a Covariate on the TIMSS 2007 Mathematics Test

    ERIC Educational Resources Information Center

    Choi, Youn-Jeng; Alexeev, Natalia; Cohen, Allan S.

    2015-01-01

    The purpose of this study was to explore what may be contributing to differences in performance in mathematics on the Trends in International Mathematics and Science Study 2007. This was done by using a mixture item response theory modeling approach to first detect latent classes in the data and then to examine differences in performance on items…

  14. Large scale landslide susceptibility assessment using the statistical methods of logistic regression and BSA - study case: the sub-basin of the small Niraj (Transylvania Depression, Romania)

    NASA Astrophysics Data System (ADS)

    Roşca, S.; Bilaşco, Ş.; Petrea, D.; Fodorean, I.; Vescan, I.; Filip, S.; Măguţ, F.-L.

    2015-11-01

The existence of a large number of GIS models for estimating landslide occurrence probability makes the selection of a specific one difficult. The present study focuses on the application of two quantitative models, the logistic and the BSA models, and compares their results in order to identify the most suitable one. The territory of the Niraj Mic Basin (87 km²) is characterised by a wide variety of landforms, with diverse morphometric, morphographical and geological characteristics, as well as by a high complexity of land use types where active landslides exist. For this reason it serves as the test area for applying the two models and comparing their results. The complexity of the input variables is illustrated by 16 factors, represented as 72 dummy variables and analysed on the basis of their importance within the model structures. Testing the statistical significance of each variable reduced the number of dummy variables to 12 considered significant for the test area within the logistic model, whereas for the BSA model all the variables were employed. The predictive power of the models was tested through the area under the ROC curve, which indicated good accuracy (AUROC = 0.86 for the testing area) and predictability of the logistic model (AUROC = 0.63 for the validation area).

  15. Population Invariance of Vertical Scaling Results

    ERIC Educational Resources Information Center

    Powers, Sonya; Turhan, Ahmet; Binici, Salih

    2012-01-01

    The population sensitivity of vertical scaling results was evaluated for a state reading assessment spanning grades 3-10 and a state mathematics test spanning grades 3-8. Subpopulations considered included males and females. The 3-parameter logistic model was used to calibrate math and reading items and a common item design was used to construct…

  16. A Test-Length Correction to the Estimation of Extreme Proficiency Levels

    ERIC Educational Resources Information Center

    Magis, David; Beland, Sebastien; Raiche, Gilles

    2011-01-01

    In this study, the estimation of extremely large or extremely small proficiency levels, given the item parameters of a logistic item response model, is investigated. On one hand, the estimation of proficiency levels by maximum likelihood (ML), despite being asymptotically unbiased, may yield infinite estimates. On the other hand, with an…

  17. The Information Function for the One-Parameter Logistic Model: Is it Reliability?

    ERIC Educational Resources Information Center

    Doran, Harold C.

    2005-01-01

    The information function is an important statistic in item response theory (IRT) applications. Although the information function is often described as the IRT version of reliability, it differs from the classical notion of reliability from a critical perspective: replication. This article first explores the information function for the…

  18. Characterization of Musa sp. fruits and plantain banana ripening stages according to their physicochemical attributes.

    PubMed

    Valérie Passo Tsamo, Claudine; Andre, Christelle M; Ritter, Christian; Tomekpe, Kodjo; Ngoh Newilah, Gérard; Rogez, Hervé; Larondelle, Yvan

    2014-08-27

This study aimed at understanding the contribution of fruit physicochemical parameters to Musa sp. diversity and plantain ripening stages. A discriminant analysis was first performed on a collection of 35 Musa sp. cultivars, organized in six groups based on the consumption mode (dessert or cooking banana) and the genomic constitution. A principal component analysis reinforced by a logistic regression on plantain cultivars was proposed as an analytical approach to describe the plantain ripening stages. The results of the discriminant analysis showed that edible fraction, peel pH, pulp water content, and pulp total phenolics were among the most contributing attributes for the discrimination of the cultivar groups. With mean values ranging from 65.4 to 247.3 mg of gallic acid equivalents/100 g of fresh weight, the pulp total phenolics strongly differed between interspecific and monospecific cultivars within dessert and nonplantain cooking bananas. The results of the logistic regression revealed that the best models according to fitting parameters involved more than one physicochemical attribute. Interestingly, pulp and peel total phenolic contents contributed to the building of these models.

  19. Pseudo-random bit generator based on lag time series

    NASA Astrophysics Data System (ADS)

    García-Martínez, M.; Campos-Cantón, E.

    2014-12-01

In this paper, we present a pseudo-random bit generator (PRBG) based on two lagged time series of the logistic map, using positive and negative values of the bifurcation parameter. In order to hide the map used to build the pseudo-random series, we introduce a delay in the generation of the time series. When these new series are plotted as xn against xn+1, they present a cloud of points unrelated to the logistic map. Finally, the pseudo-random sequences have been tested with the NIST suite, giving satisfactory results for use in stream ciphers.
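
    A sketch of the lagged-logistic-map idea: iterate the map, then derive bits by comparing values a fixed lag apart, so consecutive outputs no longer trace the map's parabola. The comparison rule and parameter values are illustrative assumptions, not necessarily the authors' exact construction:

```python
import numpy as np

def logistic_series(x0, r, n, skip=500):
    """Iterate the logistic map x_{n+1} = r * x_n * (1 - x_n)."""
    x = x0
    for _ in range(skip):                 # discard the transient
        x = r * x * (1 - x)
    out = np.empty(n)
    for i in range(n):
        x = r * x * (1 - x)
        out[i] = x
    return out

def prbg_bits(x0, r, n, lag=7):
    """Pseudo-random bits from a lagged logistic time series: compare
    each value against the value `lag` steps earlier, which hides the
    map's x_n vs x_{n+1} parabola from an observer of the bitstream."""
    xs = logistic_series(x0, r, n + lag)
    return (xs[lag:] > xs[:-lag]).astype(int)

bits = prbg_bits(0.41, 3.99, 20000)
ones_fraction = bits.mean()               # should hover near 0.5
```

    Because the two compared values are drawn from the same invariant distribution, the bit stream is close to balanced; a serious evaluation would, as in the record, run the full NIST suite rather than this single frequency check.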

  20. Multinomial logistic regression modelling of obesity and overweight among primary school students in a rural area of Negeri Sembilan

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ghazali, Amirul Syafiq Mohd; Ali, Zalila; Noor, Norlida Mohd

Multinomial logistic regression is widely used to model the outcomes of a polytomous response variable, a categorical dependent variable with more than two categories. The model assumes that the conditional mean of the dependent categorical variable is the logistic function of an affine combination of predictor variables. Its procedure gives a number of logistic regression models that make specific comparisons of the response categories. When there are q categories of the response variable, the model consists of q-1 logit equations which are fitted simultaneously. The model is validated by variable selection procedures, tests of regression coefficients, a significance test of the overall model, goodness-of-fit measures, and validation of predicted probabilities using odds ratios. This study used the multinomial logistic regression model to investigate obesity and overweight among primary school students in a rural area on the basis of their demographic profiles, lifestyles, and diet and food intake. The results indicated that obesity and overweight of students are related to gender, religion, sleep duration, time spent on electronic games, breakfast intake in a week, with whom meals are taken, protein intake, and also the interaction between breakfast intake in a week and sleep duration, and the interaction between gender and protein intake.

  1. Multinomial logistic regression modelling of obesity and overweight among primary school students in a rural area of Negeri Sembilan

    NASA Astrophysics Data System (ADS)

    Ghazali, Amirul Syafiq Mohd; Ali, Zalila; Noor, Norlida Mohd; Baharum, Adam

    2015-10-01

Multinomial logistic regression is widely used to model the outcomes of a polytomous response variable, a categorical dependent variable with more than two categories. The model assumes that the conditional mean of the dependent categorical variable is the logistic function of an affine combination of predictor variables. Its procedure gives a number of logistic regression models that make specific comparisons of the response categories. When there are q categories of the response variable, the model consists of q-1 logit equations which are fitted simultaneously. The model is validated by variable selection procedures, tests of regression coefficients, a significance test of the overall model, goodness-of-fit measures, and validation of predicted probabilities using odds ratios. This study used the multinomial logistic regression model to investigate obesity and overweight among primary school students in a rural area on the basis of their demographic profiles, lifestyles, and diet and food intake. The results indicated that obesity and overweight of students are related to gender, religion, sleep duration, time spent on electronic games, breakfast intake in a week, with whom meals are taken, protein intake, and also the interaction between breakfast intake in a week and sleep duration, and the interaction between gender and protein intake.
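
    The q-1 simultaneous logit equations can be sketched directly: with q = 3 categories, two linear predictors are fitted against a reference category via the softmax. Synthetic data with known coefficients (not the Negeri Sembilan survey) illustrate the fit:

```python
import numpy as np

def softmax_probs(X, B):
    """q-1 logit equations against the reference (last) category:
    log(P_k / P_ref) = X @ B[:, k]."""
    logits = X @ B                                        # (n, q-1)
    logits = np.column_stack([logits, np.zeros(len(X))])  # reference category
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def fit_multinomial(X, y, q, steps=3000, lr=0.5):
    """Gradient ascent on the multinomial log-likelihood; the q-1
    logit equations are updated simultaneously."""
    n, p = X.shape
    B = np.zeros((p, q - 1))
    Y = np.eye(q)[y]                                      # one-hot (n, q)
    for _ in range(steps):
        P = softmax_probs(X, B)
        B += lr * X.T @ (Y[:, :-1] - P[:, :-1]) / n
    return B

# Toy data: 3 response categories, intercept plus one covariate
rng = np.random.default_rng(2)
n = 600
X = np.column_stack([np.ones(n), rng.normal(size=n)])
true_B = np.array([[1.0, 0.2], [-2.0, -1.0]])             # 2 logit equations
P = softmax_probs(X, true_B)
y = np.array([rng.choice(3, p=row) for row in P])
B_hat = fit_multinomial(X, y, q=3)
```

    Each column of B is one of the q-1 logit equations the record describes; both are estimated jointly, not by separate binary fits.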

  2. 4D-Fingerprint Categorical QSAR Models for Skin Sensitization Based on Classification Local Lymph Node Assay Measures

    PubMed Central

    Li, Yi; Tseng, Yufeng J.; Pan, Dahua; Liu, Jianzhong; Kern, Petra S.; Gerberick, G. Frank; Hopfinger, Anton J.

    2008-01-01

Currently, the only validated methods to identify skin sensitization effects are in vivo models, such as the Local Lymph Node Assay (LLNA) and guinea pig studies. There is a tremendous need, in particular due to novel legislation, to develop animal alternatives, e.g., Quantitative Structure-Activity Relationship (QSAR) models. Here, QSAR models for skin sensitization using LLNA data have been constructed. The descriptors used to generate these models are derived from the 4D-molecular similarity paradigm and are referred to as universal 4D-fingerprints. A training set of 132 structurally diverse compounds and a test set of 15 structurally diverse compounds were used in this study. The statistical methodologies used to build the models are logistic regression (LR) and partial least squares coupled logistic regression (PLS-LR), which prove to be effective tools for studying skin sensitization measures expressed in the two categorical terms of sensitizer and non-sensitizer. QSAR models with low values of the Hosmer-Lemeshow goodness-of-fit statistic, χ²_HL, are significant and predictive. For the training set, the cross-validated prediction accuracy of the logistic regression models ranges from 77.3% to 78.0%, while that of the PLS-logistic regression models ranges from 87.1% to 89.4%. For the test set, the prediction accuracy of the logistic regression models ranges from 80.0% to 86.7%, while that of the PLS-logistic regression models ranges from 73.3% to 80.0%. The QSAR models are made up of 4D-fingerprints related to aromatic atoms, hydrogen bond acceptors and negatively partially charged atoms. PMID:17226934

  3. Estimating age from recapture data: integrating incremental growth measures with ancillary data to infer age-at-length

    USGS Publications Warehouse

    Eaton, Mitchell J.; Link, William A.

    2011-01-01

    Estimating the age of individuals in wild populations can be of fundamental importance for answering ecological questions, modeling population demographics, and managing exploited or threatened species. Significant effort has been devoted to determining age through the use of growth annuli, secondary physical characteristics related to age, and growth models. Many species, however, either do not exhibit physical characteristics useful for independent age validation or are too rare to justify sacrificing a large number of individuals to establish the relationship between size and age. Length-at-age models are well represented in the fisheries and other wildlife management literature. Many of these models overlook variation in growth rates of individuals and consider growth parameters as population parameters. More recent models have taken advantage of hierarchical structuring of parameters and Bayesian inference methods to allow for variation among individuals as functions of environmental covariates or individual-specific random effects. Here, we describe hierarchical models in which growth curves vary as individual-specific stochastic processes, and we show how these models can be fit using capture–recapture data for animals of unknown age along with data for animals of known age. We combine these independent data sources in a Bayesian analysis, distinguishing natural variation (among and within individuals) from measurement error. We illustrate using data for African dwarf crocodiles, comparing von Bertalanffy and logistic growth models. The analysis provides the means of predicting crocodile age, given a single measurement of head length. The von Bertalanffy was much better supported than the logistic growth model and predicted that dwarf crocodiles grow from 19.4 cm total length at birth to 32.9 cm in the first year and 45.3 cm by the end of their second year. 
Based on the minimum size of females observed with hatchlings, reproductive maturity was estimated to be at nine years. These size benchmarks are believed to represent thresholds for important demographic parameters; improved estimates of age, therefore, will increase the precision of population projection models. The modeling approach that we present can be applied to other species and offers significant advantages when multiple sources of data are available and traditional aging techniques are not practical.
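
    The length-at-age relation and its inversion for age prediction can be sketched with the von Bertalanffy curve; the parameter values below are back-calculated from the abstract's size benchmarks and are illustrative, not the paper's posterior estimates:

```python
import numpy as np

def von_bertalanffy(t, L_inf, k, L0):
    """Von Bertalanffy growth: length approaches L_inf at rate k,
    starting from length L0 at t = 0 (birth)."""
    return L_inf - (L_inf - L0) * np.exp(-k * t)

def age_from_length(L, L_inf, k, L0):
    """Invert the growth curve to predict age from one length measurement."""
    return -np.log((L_inf - L) / (L_inf - L0)) / k

# Hypothetical parameters chosen to reproduce the record's benchmarks
# (19.4 cm at birth, ~32.9 cm at 1 yr, ~45.3 cm at 2 yr)
L_inf, k, L0 = 185.0, 0.085, 19.4
lengths = von_bertalanffy(np.array([0.0, 1.0, 2.0]), L_inf, k, L0)
age = age_from_length(45.3, L_inf, k, L0)
```

    The hierarchical model in the record additionally lets L_inf and k vary per individual and separates natural variation from measurement error; this deterministic inversion is only the point-estimate core of that idea.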

  4. Development of a real-time crash risk prediction model incorporating the various crash mechanisms across different traffic states.

    PubMed

    Xu, Chengcheng; Wang, Wei; Liu, Pan; Zhang, Fangwei

    2015-01-01

    This study aimed to identify the traffic flow variables contributing to crash risks under different traffic states and to develop a real-time crash risk model incorporating the varying crash mechanisms across different traffic states. The crash, traffic, and geometric data were collected on the I-880N freeway in California in 2008 and 2009. This study considered 4 different traffic states in Wu's 4-phase traffic theory. They are free fluid traffic, bunched fluid traffic, bunched congested traffic, and standing congested traffic. Several different statistical methods were used to accomplish the research objective. The preliminary analysis showed that traffic states significantly affected crash likelihood, collision type, and injury severity. Nonlinear canonical correlation analysis (NLCCA) was conducted to identify the underlying phenomena that made certain traffic states more hazardous than others. The results suggested that different traffic states were associated with various collision types and injury severities. The matching of traffic flow characteristics and crash characteristics in NLCCA revealed how traffic states affected traffic safety. The logistic regression analyses showed that the factors contributing to crash risks were quite different across various traffic states. To incorporate the varying crash mechanisms across different traffic states, random parameters logistic regression was used to develop a real-time crash risk model. Bayesian inference based on Markov chain Monte Carlo simulations was used for model estimation. The parameters of traffic flow variables in the model were allowed to vary across different traffic states. Compared with the standard logistic regression model, the proposed model significantly improved the goodness-of-fit and predictive performance. 
These results can promote a better understanding of the relationship between traffic flow characteristics and crash risks, which is valuable knowledge in the pursuit of improving traffic safety on freeways through the use of dynamic safety management systems.

  5. Familial aggregation and linkage analysis with covariates for metabolic syndrome risk factors.

    PubMed

    Naseri, Parisa; Khodakarim, Soheila; Guity, Kamran; Daneshpour, Maryam S

    2018-06-15

Mechanisms of metabolic syndrome (MetS) causation are complex; genetic and environmental factors are both important in its pathogenesis. In this study, we aimed to evaluate familial and genetic influences on metabolic syndrome risk factors and also to assess the association of single nucleotide polymorphisms (SNPs) in the FTO (rs1558902 and rs7202116) and CETP (rs1864163) genes with low HDL_C in the Tehran Lipid and Glucose Study (TLGS). The design was a cross-sectional study of 1776 members of 227 randomly ascertained families. Selected families contained at least one member affected by metabolic syndrome, and at least two members of the family had low HDL_C according to ATP III criteria. After confirming familial aggregation with intra-trait correlation coefficients (ICC) for MetS and the quantitative lipid traits, genetic linkage analysis of HDL_C was performed using the conditional logistic method with adjustment for sex and age. The results of the aggregation analysis revealed a higher correlation between siblings than between parent-offspring pairs, reflecting the role of genetic factors in MetS. In addition, the conditional logistic model with covariates showed that the linkage results between HDL_C and the three markers rs1558902, rs7202116 and rs1864163 were significant. In summary, a high risk of MetS was found in siblings, confirming the genetic influences on metabolic syndrome risk factors. Moreover, the power to detect linkage increases in the one-parameter conditional logistic model when age and sex are used as covariates. Copyright © 2018. Published by Elsevier B.V.

  6. Discrete post-processing of total cloud cover ensemble forecasts

    NASA Astrophysics Data System (ADS)

    Hemri, Stephan; Haiden, Thomas; Pappenberger, Florian

    2017-04-01

    This contribution presents an approach to post-process ensemble forecasts for the discrete and bounded weather variable of total cloud cover. Two methods for discrete statistical post-processing of ensemble predictions are tested. The first approach is based on multinomial logistic regression, the second involves a proportional odds logistic regression model. Applying them to total cloud cover raw ensemble forecasts from the European Centre for Medium-Range Weather Forecasts improves forecast skill significantly. Based on station-wise post-processing of raw ensemble total cloud cover forecasts for a global set of 3330 stations over the period from 2007 to early 2014, the more parsimonious proportional odds logistic regression model proved to slightly outperform the multinomial logistic regression model. Reference Hemri, S., Haiden, T., & Pappenberger, F. (2016). Discrete post-processing of total cloud cover ensemble forecasts. Monthly Weather Review 144, 2565-2577.
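
    The proportional odds approach can be illustrated with a minimal sketch. The model below maps a single predictor to probabilities over the ordered cloud-cover categories (okta 0-8) via logit P(Y <= k) = theta_k - beta*x; the threshold and slope values are hypothetical illustrative numbers, not the fitted ECMWF parameters.

```python
import math

def cumulative_probs(x, thresholds, beta):
    """P(Y <= k | x) under a proportional odds model: logit P(Y <= k) = theta_k - beta * x."""
    return [1.0 / (1.0 + math.exp(-(theta - beta * x))) for theta in thresholds]

def category_probs(x, thresholds, beta):
    """Per-category probabilities as successive differences of the cumulative curve."""
    cum = cumulative_probs(x, thresholds, beta) + [1.0]
    return [cum[0]] + [cum[k] - cum[k - 1] for k in range(1, len(cum))]

# Hypothetical values: 8 increasing thresholds separate the 9 okta categories;
# x stands in for a raw-ensemble summary such as the ensemble-mean cloud cover.
thresholds = [-2.0, -1.2, -0.6, -0.1, 0.4, 0.9, 1.5, 2.3]
beta = 1.1
p = category_probs(0.5, thresholds, beta)
```

    Because one slope is shared by all categories (the proportional odds restriction), this model needs far fewer parameters than a multinomial logistic regression with separate coefficients per category, which is why the abstract calls it the more parsimonious of the two.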

  7. Application of a time-dependent coalescence process for inferring the history of population size changes from DNA sequence data.

    PubMed

    Polanski, A; Kimmel, M; Chakraborty, R

    1998-05-12

    Distribution of pairwise differences of nucleotides from data on a sample of DNA sequences from a given segment of the genome has been used in the past to draw inferences about the past history of population size changes. However, all earlier methods assume a given model of population size changes (such as sudden expansion), parameters of which (e.g., time and amplitude of expansion) are fitted to the observed distributions of nucleotide differences among pairwise comparisons of all DNA sequences in the sample. Our theory indicates that for any time-dependent population size, N(tau) (in which time tau is counted backward from present), a time-dependent coalescence process yields the distribution, p(tau), of the time of coalescence between two DNA sequences randomly drawn from the population. Prediction of p(tau) and N(tau) requires the use of a reverse Laplace transform known to be unstable. Nevertheless, simulated data obtained from three models of monotone population change (stepwise, exponential, and logistic) indicate that the pattern of a past population size change leaves its signature on the pattern of DNA polymorphism. Application of the theory to the published mtDNA sequences indicates that the current mtDNA sequence variation is not inconsistent with a logistic growth of the human population.
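
    The three monotone patterns of population change named above are easy to sketch as functions of backward time tau, with tau = 0 the present. The parameter values below are arbitrary illustrations, not estimates fitted to the mtDNA data.

```python
import math

def stepwise(tau, n0=1e4, n1=1e2, t_change=500.0):
    """Sudden expansion: present size n0 back to t_change, ancestral size n1 before."""
    return n0 if tau < t_change else n1

def exponential(tau, n0=1e4, r=0.005):
    """Exponential growth forward in time, i.e. exponential decline backward in time."""
    return n0 * math.exp(-r * tau)

def logistic(tau, n0=1e4, n1=1e2, r=0.01, t_mid=500.0):
    """Logistic growth: smooth transition from ancestral size n1 to present size n0."""
    return n1 + (n0 - n1) / (1.0 + math.exp(r * (tau - t_mid)))
```

    Each N(tau) implies a different coalescence-time density p(tau), which is what leaves the "signature" on the distribution of pairwise nucleotide differences.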

  8. Handling nonresponse in surveys: analytic corrections compared with converting nonresponders.

    PubMed

    Jenkins, Paul; Earle-Richardson, Giulia; Burdick, Patrick; May, John

    2008-02-01

    A large health survey was combined with a simulation study to contrast the reduction in bias achieved by double sampling versus two weighting methods based on propensity scores. The survey used a census of one New York county and double sampling in six others. Propensity scores were modeled as a logistic function of demographic variables and were used in conjunction with a random uniform variate to simulate response in the census. These data were used to estimate the prevalence of chronic disease in a population whose parameters were defined as values from the census. Significant (p < 0.0001) predictors in the logistic function included multiple (vs. single) occupancy (odds ratio (OR) = 1.3), bank card ownership (OR = 2.1), gender (OR = 1.5), home ownership (OR = 1.3), head of household's age (OR = 1.4), and income >$18,000 (OR = 0.8). The model likelihood ratio chi-square was significant (p < 0.0001), with the area under the receiver operating characteristic curve = 0.59. Double-sampling estimates were marginally closer to population values than those from either weighting method. However, the variance was also greater (p < 0.01). The reduction in bias for point estimation from double sampling may be more than offset by the increased variance associated with this method.
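
    The propensity-score mechanism described above can be sketched directly from the reported odds ratios: the logistic coefficients are the natural logs of the ORs, and a uniform draw against the resulting propensity simulates response. The intercept is a hypothetical value (the abstract does not report one), so the absolute probabilities are illustrative only.

```python
import math
import random

# Coefficients are ln(OR) for the significant predictors reported in the abstract.
LOG_OR = {
    "multiple_occupancy": math.log(1.3),
    "bank_card": math.log(2.1),
    "male": math.log(1.5),
    "home_owner": math.log(1.3),
    "older_head": math.log(1.4),
    "income_gt_18k": math.log(0.8),
}
INTERCEPT = -1.0  # hypothetical; not reported in the abstract

def response_propensity(covariates):
    """Logistic response-propensity model over binary covariates."""
    z = INTERCEPT + sum(LOG_OR[k] * v for k, v in covariates.items())
    return 1.0 / (1.0 + math.exp(-z))

def simulate_response(covariates, rng):
    """A unit responds when a uniform draw falls below its propensity."""
    return rng.random() < response_propensity(covariates)
```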

  9. Large Unbalanced Credit Scoring Using Lasso-Logistic Regression Ensemble

    PubMed Central

    Wang, Hong; Xu, Qingsong; Zhou, Lifeng

    2015-01-01

    Recently, various ensemble learning methods with different base classifiers have been proposed for credit scoring problems. However, for various reasons, there has been little research using logistic regression as the base classifier. In this paper, given large unbalanced data, we consider the plausibility of ensemble learning using regularized logistic regression as the base classifier to deal with credit scoring problems. In this research, the data is first balanced and diversified by clustering and bagging algorithms. Then we apply a Lasso-logistic regression learning ensemble to evaluate the credit risks. We show that the proposed algorithm outperforms popular credit scoring models such as decision tree, Lasso-logistic regression and random forests in terms of AUC and F-measure. We also provide two importance measures for the proposed model to identify important variables in the data. PMID:25706988
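
    The balancing-and-diversifying step can be sketched simply: each bag pairs the full minority class with an equal-sized bootstrap sample of the majority class. This is a generic stand-in for the paper's clustering-and-bagging procedure, not its exact algorithm; a Lasso-logistic base learner would then be fit on each bag and the predictions averaged.

```python
import random

def balanced_bags(majority, minority, n_bags, rng):
    """Build n_bags balanced training sets from an unbalanced two-class sample.

    Each bag combines a bootstrap sample of the majority class, sized to match
    the minority class, with the full minority class.
    """
    bags = []
    for _ in range(n_bags):
        sample = [rng.choice(majority) for _ in range(len(minority))]
        bags.append(sample + list(minority))
    return bags
```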

  10. The combination of ovarian volume and outline has better diagnostic accuracy than prostate-specific antigen (PSA) concentrations in women with polycystic ovarian syndrome (PCOs).

    PubMed

    Bili, Eleni; Dampala, Kaliopi; Iakovou, Ioannis; Tsolakidis, Dimitrios; Giannakou, Anastasia; Tarlatzis, Basil C

    2014-08-01

    The aim of this study was to determine the performance of prostate-specific antigen (PSA) and ultrasound parameters, such as ovarian volume and outline, in the diagnosis of polycystic ovary syndrome (PCOS). This prospective, observational, case-control study included 43 women with PCOS and 40 controls. Between day 3 and 5 of the menstrual cycle, fasting serum samples were collected and transvaginal ultrasound was performed. The diagnostic performance of each parameter [total PSA (tPSA), total-to-free PSA ratio (tPSA:fPSA), ovarian volume, ovarian outline] was estimated by means of receiver operating characteristic (ROC) analysis, along with area under the curve (AUC), threshold, sensitivity, specificity as well as positive (+) and negative (-) likelihood ratios (LRs). Multivariate logistic regression models, using ovarian volume and ovarian outline, were constructed. The tPSA and tPSA:fPSA ratio resulted in AUCs of 0.74 and 0.70, respectively, with moderate specificity/sensitivity and insufficient LR+/- values. In the multivariate logistic regression model, the combination of ovarian volume and outline had a sensitivity of 97.7% and a specificity of 97.5% in the diagnosis of PCOS, with +LR and -LR values of 39.1 and 0.02, respectively. In women with PCOS, tPSA and the tPSA:fPSA ratio have similar diagnostic performance. The use of a multivariate logistic regression model, incorporating ovarian volume and outline, offers very good diagnostic accuracy in distinguishing women with PCOS from controls. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  11. Predictors of course in obsessive-compulsive disorder: logistic regression versus Cox regression for recurrent events.

    PubMed

    Kempe, P T; van Oppen, P; de Haan, E; Twisk, J W R; Sluis, A; Smit, J H; van Dyck, R; van Balkom, A J L M

    2007-09-01

    Two methods for predicting remissions in obsessive-compulsive disorder (OCD) treatment are evaluated. Y-BOCS measurements of 88 patients with a primary OCD (DSM-III-R) diagnosis were performed over a 16-week treatment period, and during three follow-ups. Remission at any measurement was defined as a Y-BOCS score lower than thirteen combined with a reduction of seven points when compared with baseline. Logistic regression models were compared with a Cox regression for recurrent events model. Logistic regression yielded different models at different evaluation times. The recurrent events model remained stable when fewer measurements were used. Higher baseline levels of neuroticism and more severe OCD symptoms were associated with a lower chance of remission, early age of onset and more depressive symptoms with a higher chance. Choice of outcome time affects logistic regression prediction models. Recurrent events analysis uses all information on remissions and relapses. Short- and long-term predictors for OCD remission show overlap.

  12. Research challenges in municipal solid waste logistics management.

    PubMed

    Bing, Xiaoyun; Bloemhof, Jacqueline M; Ramos, Tania Rodrigues Pereira; Barbosa-Povoa, Ana Paula; Wong, Chee Yew; van der Vorst, Jack G A J

    2016-02-01

    During the last two decades, EU legislation has put increasing pressure on member countries to achieve specified recycling targets for municipal household waste. These targets can be met in various ways, by choosing among collection methods, separation methods, decentralized or centralized logistics systems, etc. This paper compares municipal solid waste (MSW) management practices in various EU countries to identify the characteristics and key issues from a waste management and reverse logistics point of view. Further, we review the literature on modelling municipal solid waste logistics in general. Comparing the issues addressed in the literature with the issues identified in practice results in a research agenda for modelling municipal solid waste logistics in Europe. We conclude that waste recycling is a multi-disciplinary problem that needs to be considered at different decision levels simultaneously. A holistic view, taking into account the characteristics of different waste types, is necessary when modelling a reverse supply chain for MSW recycling. Copyright © 2015 Elsevier Ltd. All rights reserved.

  13. Design of transportation and distribution Oil Palm Trunk of (OPT) in Indonesia

    NASA Astrophysics Data System (ADS)

    Norita, Defi; Arkeman, Yandra

    2018-03-01

    This research was motivated by Indonesia's 13 million hectares of oil palm plantations, which yield an abundance of oil palm trunks when plantations are regenerated. If 4 percent of the area is replanted every year, almost 100 million cubic feet of oil palm trunk will become waste. Oil palm trunks can instead be processed into biomass in the form of pellets, which is then distributed back to the palm oil processing areas. The transportation costs of the ships and trucks used were defined as parameters, so the objective function determines the type and number of ship and truck trips that give the minimum transportation cost. To optimize the logistics transportation network in a regional port cluster, combining a hub-and-spoke transportation system among regional ports with consolidation and dispersal transportation systems between ports and their hinterlands, a nonlinear optimization model for a two-stage logistics system in a regional port cluster was introduced to determine simultaneously the following factors: the hinterlands serviced by individual ports and the transportation capacity operated between each port and its hinterland; the cargo transportation volume and corresponding transportation capacity allocated via a hub port from an origin port to a destination port; and the cargo transportation volume and corresponding transportation capacity allocated directly from an origin port to a destination port. Finally, a numerical example is given to demonstrate the application of the proposed model. It is shown that the solution to the proposed nonlinear model can be obtained by transforming it into linear programming models.

  14. Using ROC curves to compare neural networks and logistic regression for modeling individual noncatastrophic tree mortality

    Treesearch

    Susan L. King

    2003-01-01

    The performance of two classifiers, logistic regression and neural networks, is compared for modeling noncatastrophic individual tree mortality for 21 species of trees in West Virginia. The output of the classifier is usually a continuous number between 0 and 1. A threshold is selected between 0 and 1 and all of the trees below the threshold are classified as...
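
    The threshold-free comparison behind a ROC curve reduces to the rank (Mann-Whitney) formulation of AUC: the probability that a randomly chosen dead tree scores higher than a randomly chosen survivor. A minimal stdlib sketch (not the authors' code):

```python
def roc_auc(labels, scores):
    """AUC via the rank formulation, with average ranks over tied scores.

    labels: 0/1 class indicators; scores: classifier outputs in [0, 1].
    """
    pairs = sorted(zip(scores, labels))
    n_pos = sum(labels)
    n_neg = len(labels) - n_pos
    rank_sum = 0.0
    i, rank = 0, 1
    while i < len(pairs):
        j = i
        while j < len(pairs) and pairs[j][0] == pairs[i][0]:
            j += 1  # extend over a run of tied scores
        avg_rank = (2 * rank + (j - i) - 1) / 2.0
        rank_sum += avg_rank * sum(lab for _, lab in pairs[i:j])
        rank += j - i
        i = j
    return (rank_sum - n_pos * (n_pos + 1) / 2.0) / (n_pos * n_neg)
```

    Unlike accuracy at a single cutoff, this summary does not depend on where the threshold is placed, which is the point of comparing the two classifiers by ROC curves.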

  15. Sensor-based fall risk assessment--an expert 'to go'.

    PubMed

    Marschollek, M; Rehwald, A; Wolf, K H; Gietzelt, M; Nemitz, G; Meyer Zu Schwabedissen, H; Haux, R

    2011-01-01

    Falls are a predominant problem in our aging society, often leading to severe somatic and psychological consequences, and having an incidence of about 30% in the group of persons aged 65 years or above. In order to identify persons at risk, many assessment tools and tests have been developed, but most of these have to be conducted in a supervised setting and are dependent on an expert rater. The overall aim of our research work is to develop an objective and unobtrusive method to determine individual fall risk based on the use of motion sensor data. The aims of our work for this paper are to derive a fall risk model based on sensor data that may potentially be measured during typical activities of daily life (aim #1), and to evaluate the resulting model with data from a one-year follow-up study (aim #2). A sample of n = 119 geriatric inpatients wore an accelerometer on the waist during a Timed 'Up & Go' test and a 20 m walk. Fifty patients were included in a one-year follow-up study, assessing fall events and scoring average physical activity at home in telephone interviews. The sensor data were processed to extract gait and dynamic balance parameters, from which four fall risk models--two classification trees and two logistic regression models--were computed: models CT#1 and SL#1 using accelerometer data only, models CT#2 and SL#2 including the physical activity score. The risk models were evaluated in a ten-times tenfold cross-validation procedure, calculating sensitivity (SENS), specificity (SPEC), positive and negative predictive values (PPV, NPV), classification accuracy, area under the curve (AUC) and the Brier score. Both classification trees show a fair to good performance (models CT#1/CT#2): SENS 74%/58%, SPEC 96%/82%, PPV 92%/74%, NPV 77%/82%, accuracy 80%/78%, AUC 0.83/0.87 and Brier scores 0.14/0.14. The logistic regression models (SL#1/SL#2) perform worse: SENS 42%/58%, SPEC 82%/78%, PPV 62%/65%, NPV 67%/72%, accuracy 65%/70%, AUC 0.65/0.72 and Brier scores 0.23/0.21. Our results suggest that accelerometer data may be used to predict falls in an unsupervised setting. Furthermore, the parameters used for prediction are measurable with an unobtrusive sensor device during normal activities of daily living. These promising results have to be validated in a larger, long-term prospective trial.

  16. FITPOP, a heuristic simulation model of population dynamics and genetics with special reference to fisheries

    USGS Publications Warehouse

    McKenna, James E.

    2000-01-01

    Although perceiving genetic differences and their effects on fish population dynamics is difficult, simulation models offer a means to explore and illustrate these effects. I partitioned the intrinsic rate of increase parameter of a simple logistic-competition model into three components, allowing specification of the effects of relative differences in fitness and mortality, as well as finite rate of increase. This model was placed into an interactive, stochastic environment to allow easy manipulation of model parameters (FITPOP). Simulation results illustrated the effects of subtle differences in genetic and population parameters on total population size, overall fitness, and sensitivity of the system to variability. Several consequences of mixing genetically distinct populations were illustrated. For example, behaviors such as depression of population size after initial introgression and extirpation of native stocks due to continuous stocking of genetically inferior fish were reproduced. It also was shown that carrying capacity relative to the amount of stocking had an important influence on population dynamics. Uncertainty associated with parameter estimates reduced confidence in model projections. The FITPOP model provided a simple tool to explore population dynamics, which may assist in formulating management strategies and identifying research needs.
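
    The partitioning idea can be sketched as a discrete logistic update whose intrinsic rate is the product of a base finite rate of increase, a relative-fitness multiplier, and a survival (1 - mortality) multiplier. This is a schematic of the idea, not the published FITPOP parameterization.

```python
def step(n, r_base, fitness, survival, K):
    """One generation of a logistic model with a partitioned intrinsic rate."""
    r = r_base * fitness * survival
    return n + r * n * (1.0 - n / K)

def trajectory(n0, r_base, fitness, survival, K, generations):
    """Population size after iterating the partitioned logistic update."""
    n = n0
    for _ in range(generations):
        n = step(n, r_base, fitness, survival, K)
    return n
```

    Lowering the fitness or survival component slows growth toward the carrying capacity K, which is how relative genetic differences between mixed stocks translate into dynamics.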

  17. Space shuttle solid rocket booster cost-per-flight analysis technique

    NASA Technical Reports Server (NTRS)

    Forney, J. A.

    1979-01-01

    A cost per flight computer model is described which considers: traffic model, component attrition, hardware useful life, turnaround time for refurbishment, manufacturing rates, learning curves on the time to perform tasks, cost improvement curves on quantity hardware buys, inflation, spares philosophy, long lead, hardware funding requirements, and other logistics and scheduling constraints. Additional uses of the model include assessing the cost per flight impact of changing major space shuttle program parameters and searching for opportunities to make cost effective management decisions.

  18. Optical identification of subjects at high risk for developing breast cancer

    NASA Astrophysics Data System (ADS)

    Taroni, Paola; Quarto, Giovanna; Pifferi, Antonio; Ieva, Francesca; Paganoni, Anna Maria; Abbate, Francesca; Balestreri, Nicola; Menna, Simona; Cassano, Enrico; Cubeddu, Rinaldo

    2013-06-01

    Time-domain multiwavelength (635 to 1060 nm) optical mammography was performed on 147 subjects with recent x-ray mammograms available, and average breast tissue composition (water, lipid, collagen, oxy- and deoxyhemoglobin) and scattering parameters (amplitude a and slope b) were estimated. Correlation was observed between optically derived parameters and mammographic density [Breast Imaging and Reporting Data System (BI-RADS) categories], which is a strong risk factor for breast cancer. A logistic regression model was obtained to best identify high-risk (BI-RADS 4) subjects, based on collagen content and scattering parameters. The model presents a total misclassification error of 12.3%, sensitivity of 69%, specificity of 94%, and simple kappa of 0.84, which compares favorably even with intraradiologist assignments of BI-RADS categories.

  19. Logistics Enterprise Evaluation Model Based On Fuzzy Clustering Analysis

    NASA Astrophysics Data System (ADS)

    Fu, Pei-hua; Yin, Hong-bo

    In this thesis, we introduce an evaluation model for logistics enterprises based on a fuzzy clustering algorithm. First of all, we present the evaluation index system, which covers basic information, management level, technical strength, transport capacity, informatization level, market competition and customer service. We determined the index weights according to the grades, and evaluated the integrated capability of the logistics enterprises using the fuzzy cluster analysis method. We describe the system evaluation module and the cluster analysis module in detail, including how these two modules were implemented. Finally, we give the results of the system.

  20. A Predictive Model for Readmissions Among Medicare Patients in a California Hospital.

    PubMed

    Duncan, Ian; Huynh, Nhan

    2017-11-17

    Predictive models for hospital readmission rates are in high demand because of the Centers for Medicare & Medicaid Services (CMS) Hospital Readmission Reduction Program (HRRP). The LACE index is one of the most popular predictive tools among hospitals in the United States. The LACE index is a simple tool with 4 parameters: Length of stay, Acuity of admission, Comorbidity, and Emergency visits in the previous 6 months. The authors applied logistic regression to develop a predictive model for a medium-sized not-for-profit community hospital in California using patient-level data with more specific patient information (including 13 explanatory variables). Specifically, the logistic regression is applied to 2 populations: a general population including all patients and the specific group of patients targeted by the CMS penalty (characterized as ages 65 or older with select conditions). The 2 resulting logistic regression models have a higher sensitivity rate compared to the sensitivity of the LACE index. The C statistic values of the model applied to both populations demonstrate moderate levels of predictive power. The authors also build an economic model to demonstrate the potential financial impact of the use of the model for targeting high-risk patients in a sample hospital and demonstrate that, on balance, whether the hospital gains or loses from reducing readmissions depends on its margin and the extent of its readmission penalties.
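
    For reference, the LACE index the authors benchmark against can be computed in a few lines. The point values below follow the commonly published scoring (van Walraven et al.); a given hospital's variant may differ, so treat this as an illustrative sketch.

```python
def lace_score(los_days, emergent, charlson, ed_visits_6mo):
    """LACE readmission-risk index: Length of stay, Acuity, Comorbidity, ED visits."""
    if los_days < 1:
        L = 0
    elif los_days <= 3:
        L = los_days          # 1-3 days score 1-3 points
    elif los_days <= 6:
        L = 4
    elif los_days <= 13:
        L = 5
    else:
        L = 7
    A = 3 if emergent else 0  # acute/emergent admission
    C = charlson if charlson <= 3 else 5  # Charlson comorbidity index, capped
    E = min(ed_visits_6mo, 4)  # ED visits in the prior 6 months, capped at 4
    return L + A + C + E
```

    The simplicity of these 4 inputs is the LACE index's appeal; the article's point is that a logistic regression on 13 patient-level variables can beat it on sensitivity.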

  1. Bifurcation and Fractal of the Coupled Logistic Map

    NASA Astrophysics Data System (ADS)

    Wang, Xingyuan; Luo, Chao

    The nature of the fixed points of the coupled logistic map is investigated, and the boundary equation of the first bifurcation of the coupled logistic map in parameter space is derived. Using the quantitative criteria and rules of system chaos, i.e., phase graphs, bifurcation graphs, power spectra, computation of the fractal dimension, and the Lyapunov exponent, the paper reveals the general characteristics of the coupled logistic map as it transforms from regularity to chaos. The following conclusions are shown: (1) chaotic patterns of the coupled logistic map may emerge out of double-periodic bifurcation and Hopf bifurcation, respectively; (2) during the process of double-period bifurcation, the system exhibits self-similarity and scale-transform invariability in both parameter space and phase space. From the study of the attraction basin and Mandelbrot-Julia set of the coupled logistic map, the following conclusions are indicated: (1) the boundary between periodic and quasiperiodic regions is fractal, which indicates the impossibility of predicting the movement of points in the phase plane; (2) the structures of the Mandelbrot-Julia sets are determined by the control parameters, and their boundaries have the fractal characteristic.
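
    A coupled logistic map is easy to iterate numerically. The symmetric linear coupling below is one common form and may differ from the coupling studied in the paper; with the coupling strength set to zero it reduces to two independent logistic maps, whose period-doubling route to chaos can be observed directly.

```python
def coupled_logistic(x, y, mu, eps):
    """One iteration of two symmetrically coupled logistic maps f(z) = mu*z*(1-z)."""
    fx, fy = mu * x * (1.0 - x), mu * y * (1.0 - y)
    return (1.0 - eps) * fx + eps * fy, (1.0 - eps) * fy + eps * fx

def orbit(x0, y0, mu, eps, n_transient, n_keep):
    """Discard a transient, then return n_keep points of the attractor."""
    x, y = x0, y0
    for _ in range(n_transient):
        x, y = coupled_logistic(x, y, mu, eps)
    points = []
    for _ in range(n_keep):
        x, y = coupled_logistic(x, y, mu, eps)
        points.append((x, y))
    return points
```

    At mu = 3.2 (uncoupled) the orbit settles onto a stable 2-cycle, the first step of the double-period cascade described above; sweeping mu and eps over a grid reproduces the bifurcation structure in parameter space.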

  2. Using phenomenological models for forecasting the 2015 Ebola challenge.

    PubMed

    Pell, Bruce; Kuang, Yang; Viboud, Cecile; Chowell, Gerardo

    2018-03-01

    The rising number of novel pathogens threatening the human population has motivated the application of mathematical modeling for forecasting the trajectory and size of epidemics. We summarize the real-time forecasting results of the logistic equation during the 2015 Ebola challenge focused on predicting synthetic data derived from a detailed individual-based model of Ebola transmission dynamics and control. We also carry out a post-challenge comparison of two simple phenomenological models. In particular, we systematically compare the logistic growth model and a recently introduced generalized Richards model (GRM) that captures a range of early epidemic growth profiles ranging from sub-exponential to exponential growth. Specifically, we assess the performance of each model for estimating the reproduction number, generate short-term forecasts of the epidemic trajectory, and predict the final epidemic size. During the challenge the logistic equation consistently underestimated the final epidemic size, peak timing and the number of cases at peak timing with an average mean absolute percentage error (MAPE) of 0.49, 0.36 and 0.40, respectively. Post-challenge, the GRM which has the flexibility to reproduce a range of epidemic growth profiles ranging from early sub-exponential to exponential growth dynamics outperformed the logistic growth model in ascertaining the final epidemic size as more incidence data was made available, while the logistic model underestimated the final epidemic even with an increasing amount of data of the evolving epidemic. Incidence forecasts provided by the generalized Richards model performed better across all scenarios and time points than the logistic growth model with mean RMS decreasing from 78.00 (logistic) to 60.80 (GRM). 
Both models provided reasonable predictions of the effective reproduction number, but the GRM slightly outperformed the logistic growth model with a MAPE of 0.08 compared to 0.10, averaged across all scenarios and time points. Our findings further support the consideration of transmission models that incorporate flexible early epidemic growth profiles in the forecasting toolkit. Such models are particularly useful for quickly evaluating a developing infectious disease outbreak using only case incidence time series of the early phase of an infectious disease outbreak. Copyright © 2016 The Authors. Published by Elsevier B.V. All rights reserved.
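
    The generalized Richards model compared above can be sketched with a simple Euler integration of dC/dt = r * C^p * (1 - (C/K)^a); p = a = 1 recovers the ordinary logistic growth model, while p < 1 gives the sub-exponential early growth the GRM is designed to capture. Parameter values below are illustrative, not fitted challenge values.

```python
def simulate_grm(r, p, a, K, c0, dt=0.1, steps=2000):
    """Euler integration of the generalized Richards model dC/dt = r*C**p*(1-(C/K)**a)."""
    c = c0
    series = [c]
    for _ in range(steps):
        c += dt * r * c ** p * (1.0 - (c / K) ** a)
        series.append(c)
    return series
```

    Fitting both curves to early incidence data and comparing forecasts of the final size K is, in outline, the post-challenge comparison the abstract describes.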

  3. Nonconvex Sparse Logistic Regression With Weakly Convex Regularization

    NASA Astrophysics Data System (ADS)

    Shen, Xinyue; Gu, Yuantao

    2018-06-01

    In this work we propose to fit a sparse logistic regression model by a weakly convex regularized nonconvex optimization problem. The idea is based on the finding that a weakly convex function as an approximation of the $\ell_0$ pseudo norm is able to better induce sparsity than the commonly used $\ell_1$ norm. For a class of weakly convex sparsity inducing functions, we prove the nonconvexity of the corresponding sparse logistic regression problem, and study its local optimality conditions and the choice of the regularization parameter to exclude trivial solutions. Despite the nonconvexity, a method based on proximal gradient descent is used to solve the general weakly convex sparse logistic regression, and its convergence behavior is studied theoretically. Then the general framework is applied to a specific weakly convex function, and a necessary and sufficient local optimality condition is provided. The solution method is instantiated in this case as an iterative firm-shrinkage algorithm, and its effectiveness is demonstrated in numerical experiments by both randomly generated and real datasets.
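
    The firm-shrinkage step referenced above has a simple closed form (the firm-thresholding operator of Gao and Bruce): zero inside [-lam, lam], a rescaled soft threshold on (lam, mu], and the identity beyond mu, so large coefficients are not biased toward zero the way soft thresholding biases them. Parameter names here are generic, not the paper's notation.

```python
def firm_shrink(x, lam, mu):
    """Firm-thresholding operator; requires mu > lam > 0.

    Interpolates between soft thresholding (mu -> infinity) and
    hard thresholding (mu -> lam), matching the identity at |x| = mu.
    """
    ax = abs(x)
    if ax <= lam:
        return 0.0
    if ax <= mu:
        return (x / ax) * mu * (ax - lam) / (mu - lam)
    return x
```

    Applied componentwise after each gradient step on the logistic loss, this operator yields the iterative firm-shrinkage algorithm the abstract mentions.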

  4. An IPSO-SVM algorithm for security state prediction of mine production logistics system

    NASA Astrophysics Data System (ADS)

    Zhang, Yanliang; Lei, Junhui; Ma, Qiuli; Chen, Xin; Bi, Runfang

    2017-06-01

    This study provides a theoretical basis for regulating corporate security warnings and resources by revealing the laws governing the security state of mine production logistics. Because the mine production logistics system is complex and its variables are difficult to acquire, a security-state prediction model for the mine production logistics system based on improved particle swarm optimization and support vector machines (IPSO-SVM) is proposed in this paper. Firstly, through linear adjustment of the inertia weight and learning weights, the convergence speed and search accuracy are enhanced, with the aim of coping with the changeable complexity and the difficulty of data acquisition. The improved particle swarm optimization (IPSO) is then introduced to resolve the problem of parameter settings in traditional support vector machines (SVM). At the same time, a security status index system is built to determine the classification standards of safety status. The feasibility and effectiveness of this method are finally verified using the experimental results.
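
    The linear inertia-weight adjustment mentioned above is a standard PSO refinement and can be sketched directly. The 0.9 -> 0.4 range and the c1 = c2 = 2.0 acceleration constants are conventional defaults, not values taken from the paper.

```python
def inertia_weight(t, t_max, w_max=0.9, w_min=0.4):
    """Linearly decreasing inertia weight over t_max iterations."""
    return w_max - (w_max - w_min) * t / t_max

def velocity_update(v, x, pbest, gbest, t, t_max, c1=2.0, c2=2.0, r1=0.5, r2=0.5):
    """Standard PSO velocity update with time-varying inertia.

    r1 and r2 are normally fresh uniform(0, 1) draws each call; they are
    fixed here only for reproducibility of the sketch.
    """
    w = inertia_weight(t, t_max)
    return w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
```

    In the IPSO-SVM scheme, a swarm evolved this way searches over the SVM hyperparameters (e.g., the penalty and kernel parameters), replacing manual parameter settings.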

  5. A New Family of Models for the Multiple-Choice Item.

    DTIC Science & Technology

    1979-12-19

    ...analysis of the verbal scholastic aptitude test using Birnbaum's three-parameter logistic model. Educational and Psychological Measurement, 28, 989-1020. ... McBride, J. R. Some properties of a Bayesian adaptive ability testing strategy. Applied Psychological Measurement, 1, 121-140, 1977.

  6. Route optimization as an instrument to improve animal welfare and economics in pre-slaughter logistics.

    PubMed

    Frisk, Mikael; Jonsson, Annie; Sellman, Stefan; Flisberg, Patrik; Rönnqvist, Mikael; Wennergren, Uno

    2018-01-01

    Each year, more than three million animals are transported from farms to abattoirs in Sweden. Animal transport is related to economic and environmental costs and a negative impact on animal welfare. Time and the number of pick-up stops between farms and abattoirs are two key parameters for animal welfare. Both are highly dependent on efficient and qualitative transportation planning, which may be difficult if done manually. We have examined the benefits of using route optimization in cattle transportation planning. To simulate the effects of various planning time windows and transportation time regulations and number of pick-up stops along each route, we have used data that represent one year of cattle transport. Our optimization model is a development of a model used in forestry transport that solves a general pick-up and delivery vehicle routing problem. The objective is to minimize transportation costs. We have shown that the length of the planning time window has a significant impact on the animal transport time, the total driving time and the total distance driven; these parameters that will not only affect animal welfare but also affect the economy and environment in the pre-slaughter logistic chain. In addition, we have shown that changes in animal transportation regulations, such as minimizing the number of allowed pick-up stops on each route or minimizing animal transportation time, will have positive effects on animal welfare measured in transportation hours and number of pick-up stops. However, this leads to an increase in working time and driven distances, leading to higher transportation costs for the transport and negative environmental impact.

  7. Route optimization as an instrument to improve animal welfare and economics in pre-slaughter logistics

    PubMed Central

    2018-01-01

    Each year, more than three million animals are transported from farms to abattoirs in Sweden. Animal transport is related to economic and environmental costs and a negative impact on animal welfare. Time and the number of pick-up stops between farms and abattoirs are two key parameters for animal welfare. Both are highly dependent on efficient and qualitative transportation planning, which may be difficult if done manually. We have examined the benefits of using route optimization in cattle transportation planning. To simulate the effects of various planning time windows and transportation time regulations and number of pick-up stops along each route, we have used data that represent one year of cattle transport. Our optimization model is a development of a model used in forestry transport that solves a general pick-up and delivery vehicle routing problem. The objective is to minimize transportation costs. We have shown that the length of the planning time window has a significant impact on the animal transport time, the total driving time and the total distance driven; these parameters that will not only affect animal welfare but also affect the economy and environment in the pre-slaughter logistic chain. In addition, we have shown that changes in animal transportation regulations, such as minimizing the number of allowed pick-up stops on each route or minimizing animal transportation time, will have positive effects on animal welfare measured in transportation hours and number of pick-up stops. However, this leads to an increase in working time and driven distances, leading to higher transportation costs for the transport and negative environmental impact. PMID:29513704

  8. Stability and Hopf bifurcation for a regulated logistic growth model with discrete and distributed delays

    NASA Astrophysics Data System (ADS)

    Fang, Shengle; Jiang, Minghui

    2009-12-01

    In this paper, we investigate the stability and Hopf bifurcation of a new regulated logistic growth model with discrete and distributed delays. By choosing the discrete delay τ as the bifurcation parameter, we prove that the system is locally asymptotically stable over a range of the delay and that a Hopf bifurcation occurs as τ crosses a critical value. Furthermore, an explicit algorithm for determining the direction of the Hopf bifurcation and the stability of the bifurcating periodic solutions is derived by the normal form theorem and a center manifold argument. Finally, an illustrative example is given to support the theoretical results.

  9. Correlation between the Temperature Dependence of Intrinsic MR Parameters and Thermal Dose Measured by a Rapid Chemical Shift Imaging Technique

    PubMed Central

    Taylor, Brian A.; Elliott, Andrew M.; Hwang, Ken-Pin; Hazle, John D.; Stafford, R. Jason

    2011-01-01

    In order to investigate simultaneous MR temperature imaging and direct validation of tissue damage during thermal therapy, temperature-dependent signal changes in proton resonance frequency (PRF) shifts, R2* values, and T1-weighted amplitudes are measured with a single technique in ex vivo tissue heated with a 980-nm laser at 1.5T and 3.0T. Using a multi-gradient echo acquisition and signal modeling with the Steiglitz-McBride algorithm, the temperature sensitivity coefficient (TSC) values of these parameters are measured in each tissue at high spatiotemporal resolution (1.6×1.6×4 mm³, ≤5 s) over the range of 25-61 °C. Non-linear changes in MR parameters are examined and correlated with an Arrhenius rate dose model of thermal damage. Using logistic regression, the probability of changes in these parameters is calculated as a function of thermal dose to determine whether the changes correspond to thermal damage. Temperature calibrations demonstrate TSC values that are consistent with previous studies. The temperature sensitivity of R2* and, in some cases, of the T1-weighted amplitudes is statistically different before and after thermal damage occurs. Significant changes in the slope of R2* as a function of temperature are observed. Logistic regression analysis shows that these changes can be accurately predicted using the Arrhenius rate dose model (Ω=1.01±0.03), suggesting that changes in R2* could be direct markers of protein denaturation. Overall, by using a chemical shift imaging technique with simultaneous temperature estimation, R2* mapping and T1-W imaging, it is shown that changes in the sensitivity of R2* and, to a lesser degree, of the T1-W amplitudes are measured in ex vivo tissue when thermal damage is expected to occur according to Arrhenius rate dose models. These changes could possibly be used for direct validation of thermal damage, in contrast to model-based predictions. PMID:21721063
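
The Arrhenius rate dose referenced here is Ω(t) = ∫ A·exp(−Ea/(R·T(τ))) dτ, accumulated over the measured temperature history. A minimal sketch follows; the rate constants are the commonly cited Henriques protein-denaturation values (A ≈ 3.1×10⁹⁸ s⁻¹, Ea ≈ 6.28×10⁵ J/mol) used for illustration only, since the study's calibrated constants may differ:

```python
import math

R_GAS = 8.314  # universal gas constant, J/(mol*K)

def arrhenius_dose(temps_c, dt, A=3.1e98, Ea=6.28e5):
    """Accumulate the Arrhenius thermal dose Omega over a sampled
    temperature-time history (temps_c in Celsius, dt in seconds).
    Omega >= 1 is conventionally taken as the damage threshold."""
    return sum(A * math.exp(-Ea / (R_GAS * (t + 273.15))) * dt
               for t in temps_c)
```

Because the rate is exponential in temperature, a few degrees of extra heating dominate the integral.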

  10. A predictive model for early mortality after surgical treatment of heart valve or prosthesis infective endocarditis. The EndoSCORE.

    PubMed

    Di Mauro, Michele; Dato, Guglielmo Mario Actis; Barili, Fabio; Gelsomino, Sandro; Santè, Pasquale; Corte, Alessandro Della; Carrozza, Antonio; Ratta, Ester Della; Cugola, Diego; Galletti, Lorenzo; Devotini, Roger; Casabona, Riccardo; Santini, Francesco; Salsano, Antonio; Scrofani, Roberto; Antona, Carlo; Botta, Luca; Russo, Claudio; Mancuso, Samuel; Rinaldi, Mauro; De Vincentiis, Carlo; Biondi, Andrea; Beghi, Cesare; Cappabianca, Giangiuseppe; Tarzia, Vincenzo; Gerosa, Gino; De Bonis, Michele; Pozzoli, Alberto; Nicolini, Francesco; Benassi, Filippo; Rosato, Francesco; Grasso, Elena; Livi, Ugolino; Sponga, Sandro; Pacini, Davide; Di Bartolomeo, Roberto; De Martino, Andrea; Bortolotti, Uberto; Onorati, Francesco; Faggian, Giuseppe; Lorusso, Roberto; Vizzardi, Enrico; Di Giammarco, Gabriele; Marinelli, Daniele; Villa, Emmanuel; Troise, Giovanni; Picichè, Marco; Musumeci, Francesco; Paparella, Domenico; Margari, Vito; Tritto, Francesco; Damiani, Girolamo; Scrascia, Giuseppe; Zaccaria, Salvatore; Renzulli, Attilio; Serraino, Giuseppe; Mariscalco, Giovanni; Maselli, Daniele; Foschi, Massimiliano; Parolari, Alessandro; Nappi, Giannantonio

    2017-08-15

    The aim of this large retrospective study was to provide a logistic risk model, along with an additive score, to predict early mortality after surgical treatment of patients with heart valve or prosthesis infective endocarditis (IE). From 2000 to 2015, 2715 patients with native valve endocarditis (NVE) or prosthetic valve endocarditis (PVE) were operated on in 26 Italian cardiac surgery centers. The relationship between early mortality and covariates was evaluated with logistic mixed-effects models. Fixed effects are parameters associated with the entire population or with certain repeatable levels of experimental factors, while random effects are associated with individual experimental units (centers). Early mortality was 11.0% (298/2715). In the mixed-effects logistic regression, the following variables were associated with early mortality: age class, female gender, LVEF, preoperative shock, COPD, creatinine above 2 mg/dl, presence of abscess, number of treated valves/prostheses (with respect to one treated valve/prosthesis), and the isolation of Staphylococcus aureus, Fungus spp., Pseudomonas aeruginosa and other micro-organisms; Streptococcus spp., Enterococcus spp. and other staphylococci did not affect early mortality, nor did the absence of micro-organism isolation. LVEF was linearly associated with outcome, while a non-linear association between mortality and age was tested and the best model was obtained with a categorization into four classes (AUC=0.851). The present study provides a logistic risk model, called "The EndoSCORE", to predict early mortality in patients with heart valve or prosthesis infective endocarditis undergoing surgical treatment. Copyright © 2017. Published by Elsevier B.V.

  11. Science of Test Research Consortium: Year Two Final Report

    DTIC Science & Technology

    2012-10-02

    July 2012. Analysis of an Intervention for Small Unmanned Aerial System (SUAS) Accidents, submitted to Quality Engineering, LQEN-2012-0056. Stone... Systems Engineering. Wolf, S. E., R. R. Hill, and J. J. Pignatiello. June 2012. Using Neural Networks and Logistic Regression to Model Small Unmanned... Human Retina. 6. Wolf, S. E. March 2012. Modeling Small Unmanned Aerial System Mishaps using Logistic Regression and Artificial Neural Networks. 7

  12. Modeling of urban growth using cellular automata (CA) optimized by Particle Swarm Optimization (PSO)

    NASA Astrophysics Data System (ADS)

    Khalilnia, M. H.; Ghaemirad, T.; Abbaspour, R. A.

    2013-09-01

    In this paper, two satellite images of Tehran, the capital of Iran, taken by TM and ETM+ in 1988 and 2010, are used as the base information layers to study changes in the urban patterns of this metropolis. The urban growth patterns of Tehran are extracted over this period using cellular automata, with logistic regression functions serving as the transition functions. Furthermore, the weighting coefficients of the parameters affecting urban growth, i.e. distance from urban centers, distance from rural centers, distance from agricultural centers, and neighborhood effects, were selected using PSO. In order to evaluate the results of the prediction, the percent correct match index is calculated. According to the results, by combining optimization techniques with the cellular automata model, urban growth patterns can be predicted with an accuracy of up to 75%.

  13. Measuring organizational effectiveness in information and communication technology companies using item response theory.

    PubMed

    Trierweiller, Andréa Cristina; Peixe, Blênio César Severo; Tezza, Rafael; Pereira, Vera Lúcia Duarte do Valle; Pacheco, Waldemar; Bornia, Antonio Cezar; de Andrade, Dalton Francisco

    2012-01-01

    The aim of this paper is to measure the effectiveness of Information and Communication Technology (ICT) organizations from the point of view of the manager, using Item Response Theory (IRT). There is a need to verify the effectiveness of these organizations, which are normally associated with complex, dynamic, and competitive environments. In the academic literature, there is disagreement surrounding the concept of organizational effectiveness and its measurement. A construct was elaborated based on dimensions of effectiveness to guide the construction of the questionnaire items, which were submitted to specialists for evaluation. The approach proved viable for measuring the organizational effectiveness of ICT companies from a manager's point of view using the Two-Parameter Logistic Model (2PLM) of IRT. This modeling permits us to evaluate the quality and properties of each item placed on a single scale of items and respondents, which is not possible with other similar tools.
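
For reference, the two-parameter logistic model central to this record (and to several others in this listing) gives the probability of endorsing an item as a function of the latent trait θ, the item discrimination a, and the item difficulty b; some formulations also include a scaling constant D ≈ 1.7. A minimal sketch of the item response function:

```python
import math

def irf_2pl(theta, a, b):
    """Two-parameter logistic item response function:
    P(endorse | theta) = 1 / (1 + exp(-a * (theta - b)))."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))
```

At θ = b the probability is exactly 0.5, and a controls the steepness of the curve at that point.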

  14. A PLSPM-Based Test Statistic for Detecting Gene-Gene Co-Association in Genome-Wide Association Study with Case-Control Design

    PubMed Central

    Zhang, Xiaoshuai; Yang, Xiaowei; Yuan, Zhongshang; Liu, Yanxun; Li, Fangyu; Peng, Bin; Zhu, Dianwen; Zhao, Jinghua; Xue, Fuzhong

    2013-01-01

    In genome-wide association data analysis, two genes in a pathway, or two SNPs in two linked gene regions or in two linked exons within one gene, are often correlated with each other. We therefore proposed the concept of gene-gene co-association, which refers not only to the traditional interaction effect under a near-independence condition but also to the correlation between two genes. Furthermore, we constructed a novel statistic for detecting gene-gene co-association based on Partial Least Squares Path Modeling (PLSPM). Through simulation, the relationship between traditional interaction and co-association was highlighted under three different types of co-association. Both simulation and real data analysis demonstrated that the proposed PLSPM-based statistic performs better than the single-SNP-based logistic model, the PCA-based logistic model, and other gene-based methods. PMID:23620809

  16. In silico synergism and antagonism of an anti-tumour system intervened by coupling immunotherapy and chemotherapy: a mathematical modelling approach.

    PubMed

    Hu, Wen-Yong; Zhong, Wei-Rong; Wang, Feng-Hua; Li, Li; Shao, Yuan-Zhi

    2012-02-01

    Based on the logistic growth law for a tumour derived from enzymatic dynamics, we address from a physical point of view the phenomena of synergism, additivity and antagonism in an avascular anti-tumour system regulated externally by dual coupled periodic interventions, and propose a theoretical model to simulate the combined administration of chemotherapy and immunotherapy. The in silico results of our modelling approach reveal that the tumour population density of an anti-tumour system subject to the combined attack of chemotherapeutic and immune interventions depends on four parameters: the therapy intensities D, the coupling intensity I, the coupling coherence R and the phase-shifts Φ between the two combined interventions. Depending on the intensity and nature (synergism, additivity and antagonism) of the coupling, as well as the phase-shift between the two therapeutic interventions, the administration sequence of the two periodic interventions affects the curative efficacy of the anti-tumour system. The isobologram established from our model is considerably consistent with that of the well-established Loewe Additivity model (Tallarida, Pharmacology 319(1):1-7, 2006). Our study discloses the general dynamic features of an anti-tumour system regulated by two coupled periodic interventions, and the results may serve as a supplement to previous models of combined drug administration and provide a heuristic approach for preclinical pharmacokinetic investigation.
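
The qualitative setup can be sketched as logistic tumour growth with a periodically modulated kill term. The equation below, dx/dt = r·x·(1 − x/K) − D·(1 + sin(ωt + φ))/2 · x, is a generic stand-in for the paper's coupled-intervention model, with arbitrary parameter values, not the authors' equations:

```python
import math

def tumour_density(r=0.5, K=1.0, D=0.3, phi=0.0, omega=2 * math.pi / 10,
                   x0=0.2, dt=0.001, t_end=200.0):
    """Euler sketch of logistic tumour growth under a periodic
    intervention of intensity D and phase phi; returns the final
    tumour population density."""
    x, t = x0, 0.0
    for _ in range(int(t_end / dt)):
        kill = D * (1.0 + math.sin(omega * t + phi)) / 2.0
        x += dt * (r * x * (1.0 - x / K) - kill * x)
        t += dt
    return x
```

Without intervention (D = 0) the density saturates at the carrying capacity K; a periodic intervention holds it below K, and in the full model the reduction would depend on the coupling and phase-shift between the two therapies.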

  17. Morphology parameters for intracranial aneurysm rupture risk assessment.

    PubMed

    Dhar, Sujan; Tremmel, Markus; Mocco, J; Kim, Minsuok; Yamamoto, Junichi; Siddiqui, Adnan H; Hopkins, L Nelson; Meng, Hui

    2008-08-01

    The aim of this study is to identify image-based morphological parameters that correlate with human intracranial aneurysm (IA) rupture. For 45 patients with terminal or sidewall saccular IAs (25 unruptured, 20 ruptured), three-dimensional geometries were evaluated for a range of morphological parameters. In addition to five previously studied parameters (aspect ratio, aneurysm size, ellipticity index, nonsphericity index, and undulation index), we defined three novel parameters incorporating the parent vessel geometry (vessel angle, aneurysm [inclination] angle, and [aneurysm-to-vessel] size ratio) and explored their correlation with aneurysm rupture. Parameters were analyzed with a two-tailed independent Student's t test for significance; significant parameters (P < 0.05) were further examined by multivariate logistic regression analysis. Additionally, receiver operating characteristic analyses were performed on each parameter. Statistically significant differences were found between mean values in ruptured and unruptured groups for size ratio, undulation index, nonsphericity index, ellipticity index, aneurysm angle, and aspect ratio. Logistic regression analysis further revealed that size ratio (odds ratio, 1.41; 95% confidence interval, 1.03-1.92) and undulation index (odds ratio, 1.51; 95% confidence interval, 1.08-2.11) had the strongest independent correlation with ruptured IA. From the receiver operating characteristic analysis, size ratio and aneurysm angle had the highest area under the curve values of 0.83 and 0.85, respectively. Size ratio and aneurysm angle are promising new morphological metrics for IA rupture risk assessment. Because these parameters account for vessel geometry, they may bridge the gap between morphological studies and more qualitative location-based studies.
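
The odds ratios and 95% confidence intervals quoted above (e.g., 1.41 with 95% CI 1.03-1.92 for size ratio) are the standard exponential transforms of a fitted logistic-regression coefficient β and its standard error. A minimal sketch of that conversion (a generic helper, not the study's analysis code):

```python
import math

def odds_ratio_ci(beta, se, z=1.96):
    """Convert a logistic-regression coefficient and its standard error
    into an odds ratio with a Wald confidence interval (z=1.96 ~ 95%)."""
    return (math.exp(beta),
            math.exp(beta - z * se),
            math.exp(beta + z * se))
```

A coefficient of 0 maps to an odds ratio of 1 (no association), with the interval straddling 1.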

  18. Alcohol consumption and all-cause mortality.

    PubMed

    Duffy, J C

    1995-02-01

    Prospective studies of alcohol and mortality in middle-aged men almost universally find a U-shaped relationship between alcohol consumption and risk of mortality. This review demonstrates the extent to which different studies lead to different risk estimates, analyses the putative influence of abstention as a risk factor and uses available data to produce point and interval estimates of the consumption level apparently associated with minimum risk from two studies in the UK. Data from a number of studies are analysed by means of logistic-linear modelling, taking account of the possible influence of abstention as a special risk factor. Separate analysis of British data is performed. Logistic-linear modelling demonstrates large and highly significant differences between the studies considered in the relationship between alcohol consumption and all-cause mortality. The results support the identification of abstention as a special risk factor for mortality, but do not indicate that this alone explains the apparent U-shaped relationship. Separate analysis of two British studies indicates minimum risk of mortality in this population at a consumption level of about 26 units (8.5 g of alcohol each) per week. The analysis supports the view that abstention may be a specific risk factor for all-cause mortality, but is not an adequate explanation of the apparent protective effect of alcohol consumption against all-cause mortality. Future analyses might better be performed on a case-by-case basis, using a change-point model to estimate the parameters of the relationship. The current misinterpretation of the sensible drinking level of 21 units per week for men in the UK as a limit is not justified, and the data suggest that alcohol consumption is a net preventive factor against premature death in this population.

  19. From organized internal traffic to collective navigation of bacterial swarms

    NASA Astrophysics Data System (ADS)

    Ariel, Gil; Shklarsh, Adi; Kalisman, Oren; Ingham, Colin; Ben-Jacob, Eshel

    2013-12-01

    Bacterial swarming resulting in collective navigation over surfaces provides a valuable example of cooperative colonization of new territories. The social bacterium Paenibacillus vortex exhibits successful and diverse swarming strategies. When grown on hard agar surfaces with peptone, P. vortex develops complex colonies of vortices (rotating bacterial aggregates). In contrast, during growth on Mueller-Hinton broth gelled into a soft agar surface, a new strategy of multi-level organization is revealed: the colonies are organized into a special network of swarms (or ‘snakes’, a fraction of a millimetre in width) with intricate internal traffic. More specifically, cell movement is organized in two or three lanes of bacteria traveling between the back and the front of the swarm. This special form of cellular logistics suggests new methods by which bacteria can share resources and risk while searching for food or migrating into new territories. While the vortices-based organization on hard agar surfaces has been modeled before, here we introduce a new multi-agent bacterial swarming model devised to capture the swarms-based organization on soft surfaces. We test two putative generic mechanisms that may underlie the observed swarming logistics: (i) chemo-activated taxis in response to chemical cues and (ii) special align-and-push interactions between the bacteria and the boundary of the layer of lubricant collectively generated by the swarming bacteria. Using realistic parameters, the model captures the observed phenomena with semi-quantitative agreement in terms of the velocity as well as the dynamics of the swarm and its envelope. This agreement implies that the bacteria's interactions with the swarm boundary play a crucial role in mediating the interplay between the collective movement of the swarm and the internal traffic dynamics.

  20. Visualization of logistic algorithm in Wilson model

    NASA Astrophysics Data System (ADS)

    Glushchenko, A. S.; Rodin, V. A.; Sinegubov, S. V.

    2018-05-01

    The economic order quantity (EOQ), defined by Wilson's model, is widely used at different stages of the production and distribution of various products. It is useful for making inventory-management decisions, providing more efficient business operation and thus greater economic benefit. There is a large amount of reference material, and extensive software environments exist that help solve various logistics problems. However, the use of large computing environments is not always justified and requires special user training. A tense supply schedule in a logistics model is optimal if, and only if, the planning horizon coincides with the beginning of the next possible delivery. For all other planning horizons, this plan is not optimal. Significantly, when the planning horizon changes, the plan changes immediately throughout the entire supply chain. In this paper, an algorithm and a program for visualizing models of the optimal supply quantity and the number of deliveries, as functions of the planning horizon, have been obtained. The program allows one to trace, visually and quickly, all the main parameters of the optimal plan on charts. The results represent part of the authors' research in the field of optimization of protection and support services for ports in the Russian North.
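
Wilson's model itself is a two-line formula: the optimal order quantity is Q* = sqrt(2·D·S / H) for demand rate D, fixed ordering cost S and holding cost H, and the corresponding reorder interval is T* = Q*/D, which is what interacts with the planning horizon discussed above. A minimal sketch:

```python
import math

def eoq(demand_rate, order_cost, holding_cost):
    """Wilson economic order quantity: Q* = sqrt(2 * D * S / H)."""
    return math.sqrt(2.0 * demand_rate * order_cost / holding_cost)

def cycle_time(demand_rate, q):
    """Optimal reorder interval T* = Q* / D between deliveries."""
    return q / demand_rate
```

With an annual demand of 1000 units, an ordering cost of 10 and a holding cost of 0.5 per unit per year, this gives Q* = 200 and a reorder interval of 0.2 years.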

  1. Computational fluid dynamics (CFD) using porous media modeling predicts recurrence after coiling of cerebral aneurysms

    PubMed Central

    Ishida, Fujimaro; Tsuji, Masanori; Furukawa, Kazuhiro; Shiba, Masato; Yasuda, Ryuta; Toma, Naoki; Sakaida, Hiroshi; Suzuki, Hidenori

    2017-01-01

    Objective This study aimed to predict recurrence after coil embolization of unruptured cerebral aneurysms with computational fluid dynamics (CFD) using porous media modeling (porous media CFD). Method A total of 37 unruptured cerebral aneurysms treated with coiling were analyzed using follow-up angiograms, simulated CFD prior to coiling (control CFD), and porous media CFD. Coiled aneurysms were classified into stable or recurrence groups according to follow-up angiogram findings. Morphological parameters, coil packing density, and hemodynamic variables were evaluated for their correlations with aneurysmal recurrence. We also calculated residual flow volumes (RFVs), a novel hemodynamic parameter that quantifies the residual aneurysm volume after simulated coiling, defined as the fluid domain with mean velocity > 1.0 cm/s. Result Follow-up angiograms showed 24 aneurysms in the stable group and 13 in the recurrence group. The Mann-Whitney U test demonstrated that maximum size, dome volume, neck width, neck area, and coil packing density were significantly different between the two groups (P < 0.05). Among the hemodynamic parameters, aneurysms in the recurrence group had significantly larger inflow and outflow areas in the control CFD and larger RFVs in the porous media CFD. Multivariate logistic regression analyses demonstrated that RFV was the only independently significant factor (odds ratio, 1.06; 95% confidence interval, 1.01–1.11; P = 0.016). Conclusion The study findings suggest that RFV computed under porous media modeling predicts the recurrence of coiled aneurysms. PMID:29284057

  2. Calculating Lyapunov Exponents: Applying Products and Evaluating Integrals

    ERIC Educational Resources Information Center

    McCartney, Mark

    2010-01-01

    Two common examples of one-dimensional maps (the tent map and the logistic map) are generalized to cases where they have more than one control parameter. In the case of the tent map, this still allows the global Lyapunov exponent to be found analytically, and permits various properties of the resulting global Lyapunov exponents to be investigated…
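
For a one-dimensional map x → f(x), the Lyapunov exponent is the orbit average of ln|f′(x)|. A minimal numerical sketch for the logistic map f(x) = r·x·(1 − x), where f′(x) = r·(1 − 2x):

```python
import math

def lyapunov_logistic(r, x0=0.1, n=100_000, transient=1_000):
    """Estimate the Lyapunov exponent of the logistic map
    x -> r*x*(1-x) as the orbit average of ln|r*(1 - 2x)|."""
    x = x0
    for _ in range(transient):          # discard the transient
        x = r * x * (1.0 - x)
    total = 0.0
    for _ in range(n):
        # clip the derivative away from 0 to avoid log(0)
        total += math.log(max(abs(r * (1.0 - 2.0 * x)), 1e-300))
        x = r * x * (1.0 - x)
    return total / n
```

For r = 4 the exponent is known analytically to be ln 2 ≈ 0.693, which makes a convenient sanity check; for r = 2.5 the orbit converges to a fixed point and the exponent is negative.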

  3. A mixed-effects regression model for longitudinal multivariate ordinal data.

    PubMed

    Liu, Li C; Hedeker, Donald

    2006-03-01

    A mixed-effects item response theory model that allows for three-level multivariate ordinal outcomes and accommodates multiple random subject effects is proposed for analysis of multivariate ordinal outcomes in longitudinal studies. This model allows for the estimation of different item factor loadings (item discrimination parameters) for the multiple outcomes. The covariates in the model do not have to follow the proportional odds assumption and can be at any level. Assuming either a probit or logistic response function, maximum marginal likelihood estimation is proposed utilizing multidimensional Gauss-Hermite quadrature for integration of the random effects. An iterative Fisher scoring solution, which provides standard errors for all model parameters, is used. An analysis of a longitudinal substance use data set, where four items of substance use behavior (cigarette use, alcohol use, marijuana use, and getting drunk or high) are repeatedly measured over time, is used to illustrate application of the proposed model.

  4. Periodontal disease in Chinese patients with systemic lupus erythematosus.

    PubMed

    Zhang, Qiuxiang; Zhang, Xiaoli; Feng, Guijaun; Fu, Ting; Yin, Rulan; Zhang, Lijuan; Feng, Xingmei; Li, Liren; Gu, Zhifeng

    2017-08-01

    Systemic lupus erythematosus (SLE) and periodontal disease (PD) share multiple common characteristics. The aims of the present study were to evaluate the prevalence and severity of periodontal disease in Chinese SLE patients and to determine the association between SLE features and periodontal parameters. A cross-sectional study of 108 SLE patients together with 108 age- and sex-matched healthy controls was conducted. Periodontal status was assessed by two dentists independently. Sociodemographic characteristics, lifestyle factors, medication use, and clinical parameters were also recorded. Periodontal status was significantly worse in SLE patients than in controls. In univariate logistic regression, SLE was associated with a significant 2.78-fold [95% confidence interval (CI) 1.60-4.82] increase in the odds of periodontitis compared to healthy controls. Adjusted for potential risk factors, patients with SLE had 13.98-fold (95% CI 5.10-38.33) increased odds relative to controls. In a multiple linear regression model, education was negatively and significantly associated with gingival index (P = 0.005), while disease activity (P < 0.001) and plaque index (P = 0.002) were positively associated. Age was the only variable independently associated with periodontitis of SLE in multivariate logistic regression (OR 1.348; 95% CI 1.183-1.536, P < 0.001). Chinese SLE patients were likely to suffer higher odds of PD. These findings confirm the importance of early periodontal intervention in combination with medical therapy, and close collaboration between dentists and clinicians is necessary when treating these patients.

  5. Identifying the optimal spatially and temporally invariant root distribution for a semiarid environment

    NASA Astrophysics Data System (ADS)

    Sivandran, Gajan; Bras, Rafael L.

    2012-12-01

    In semiarid regions, the rooting strategies employed by vegetation can be critical to its survival. Arid regions are characterized by high variability in the arrival of rainfall, and species found in these areas have adapted mechanisms to ensure the capture of this scarce resource. Vegetation roots exert strong control over the partitioning of this moisture and, assuming a static root profile, predetermine the manner in which the partitioning is undertaken. A coupled dynamic vegetation and hydrologic model, tRIBS + VEGGIE, was used to explore the role of vertical root distribution on hydrologic fluxes. Point-scale simulations were carried out using two spatially and temporally invariant rooting schemes: uniform (a one-parameter model) and logistic (a two-parameter model). The simulations were forced with a stochastic climate generator calibrated to weather stations and rain gauges in the semiarid Walnut Gulch Experimental Watershed (WGEW) in Arizona. A series of simulations explored the parameter space of both rooting schemes, and the optimal root distribution, defined as the root distribution with the maximum mean transpiration over a 100-yr period, was identified. This optimal root profile was determined for five generic soil textures and two plant-functional types (PFTs) to illustrate the role of soil texture in the partitioning of moisture at the land surface. The simulation results illustrate the strong control soil texture has on the partitioning of rainfall and consequently on the depth of the optimal rooting profile. High-conductivity soils resulted in the deepest optimal rooting profiles, with land surface moisture fluxes dominated by transpiration. Toward the lower-conductivity end of the soil spectrum, a shallowing of the optimal rooting profile is observed and evaporation gradually becomes the dominant flux from the land surface. This study offers a methodology through which local plant, soil, and climate can be accounted for in the parameterization of rooting profiles in semiarid regions.

  6. An empirical study of statistical properties of variance partition coefficients for multi-level logistic regression models

    USGS Publications Warehouse

    Li, Ji; Gray, B.R.; Bates, D.M.

    2008-01-01

    Partitioning the variance of a response by design levels is challenging for binomial and other discrete outcomes. Goldstein (2003) proposed four definitions of variance partition coefficients (VPC) under a two-level logistic regression model. In this study, we explicitly derived formulae for the multi-level logistic regression model and subsequently studied the distributional properties of the calculated VPCs. Using simulations and a vegetation dataset, we demonstrated associations between the different VPC definitions, the importance of the method used to estimate VPCs (by comparing VPCs obtained using Laplace and penalized quasi-likelihood methods), and bivariate dependence between VPCs calculated at different levels. Such an empirical study lends immediate support to wider application of VPCs in scientific data analysis.
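
One of Goldstein's four definitions, the latent-variable method, treats the level-1 residual of the logistic model as a standard logistic variate with variance π²/3, so the VPC is the level-2 variance divided by the total latent variance. A minimal sketch of that definition (the other three definitions require simulation or linearization):

```python
import math

def vpc_latent(sigma2_u):
    """Latent-variable VPC for a two-level logistic model:
    level-2 variance over total latent variance, where the level-1
    residual variance is fixed at pi^2 / 3 (standard logistic)."""
    return sigma2_u / (sigma2_u + math.pi ** 2 / 3.0)
```

The VPC reaches 0.5 exactly when the level-2 variance equals π²/3 ≈ 3.29.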

  7. A coupled hidden Markov model for disease interactions

    PubMed Central

    Sherlock, Chris; Xifara, Tatiana; Telfer, Sandra; Begon, Mike

    2013-01-01

    To investigate interactions between parasite species in a host, a population of field voles was studied longitudinally, with presence or absence of six different parasites measured repeatedly. Although trapping sessions were regular, a different set of voles was caught at each session, leading to incomplete profiles for all subjects. We use a discrete time hidden Markov model for each disease with transition probabilities dependent on covariates via a set of logistic regressions. For each disease the hidden states for each of the other diseases at a given time point form part of the covariate set for the Markov transition probabilities from that time point. This allows us to gauge the influence of each parasite species on the transition probabilities for each of the other parasite species. Inference is performed via a Gibbs sampler, which cycles through each of the diseases, first using an adaptive Metropolis–Hastings step to sample from the conditional posterior of the covariate parameters for that particular disease given the hidden states for all other diseases and then sampling from the hidden states for that disease given the parameters. We find evidence for interactions between several pairs of parasites and of an acquired immune response for two of the parasites. PMID:24223436

  8. Optimizing landslide susceptibility zonation: Effects of DEM spatial resolution and slope unit delineation on logistic regression models

    NASA Astrophysics Data System (ADS)

    Schlögel, R.; Marchesini, I.; Alvioli, M.; Reichenbach, P.; Rossi, M.; Malet, J.-P.

    2018-01-01

    We perform landslide susceptibility zonation with slope units using three digital elevation models (DEMs) of varying spatial resolution for the Ubaye Valley (South French Alps). We applied a recently developed algorithm that automates slope unit delineation and, given a number of parameters, simultaneously optimizes the partitioning of the terrain and the performance of a logistic regression susceptibility model. The method allowed us to obtain optimal slope units for each available DEM spatial resolution. For each resolution, we studied the susceptibility model performance by analyzing in detail the relevance of the conditioning variables. The analysis is based on landslide morphology data, considering either the whole landslide or only the source area outline as input. The procedure allowed us to select the most useful information, in terms of DEM spatial resolution, thematic variables and landslide inventory, in order to obtain the most reliable slope unit-based landslide susceptibility assessment.

  9. Immediate list recall as a measure of short-term episodic memory: insights from the serial position effect and item response theory.

    PubMed

    Gavett, Brandon E; Horwitz, Julie E

    2012-03-01

    The serial position effect shows that two interrelated cognitive processes underlie immediate recall of a supraspan word list. The current study used item response theory (IRT) methods to determine whether the serial position effect poses a threat to the construct validity of immediate list recall as a measure of verbal episodic memory. Archival data were obtained from a national sample of 4,212 volunteers aged 28-84 in the Midlife Development in the United States study. Telephone assessment yielded item-level data for a single immediate recall trial of the Rey Auditory Verbal Learning Test (RAVLT). Two-parameter logistic IRT procedures were used to estimate item parameters, and the Q(1) statistic was used to evaluate item fit. A two-dimensional model better fit the data than a unidimensional model, supporting the notion that list recall is influenced by two underlying cognitive processes. IRT analyses revealed that 4 of the 15 RAVLT items (1, 12, 14, and 15) were misfit (p < .05). Item characteristic curves for items 14 and 15 decreased monotonically, implying an inverse relationship between the ability level and the probability of recall. Elimination of the four misfit items provided better fit to the data and met necessary IRT assumptions. Performance on a supraspan list learning test is influenced by multiple cognitive abilities; failure to account for the serial position of words decreases the construct validity of the test as a measure of episodic memory and may provide misleading results. IRT methods can ameliorate these problems and improve construct validity.
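    The two-parameter logistic (2PL) item response function used in this record is a one-line formula. A minimal sketch, with illustrative parameter values:

```python
import math

def irf_2pl(theta, a, b):
    """Two-parameter logistic item response function:
    P(recall | theta) = 1 / (1 + exp(-a * (theta - b))),
    with discrimination a and difficulty b."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

# With a > 0 the curve rises monotonically in theta and passes through
# 0.5 exactly at theta == b; a monotonically decreasing empirical ICC
# (as reported for items 14 and 15) cannot arise under this constraint,
# which is what flags those items as misfit.
p_at_b = irf_2pl(0.0, a=1.2, b=0.0)
```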

  10. The SF-8 Spanish Version for Health-Related Quality of Life Assessment: Psychometric Study with IRT and CFA Models.

    PubMed

    Tomás, José M; Galiana, Laura; Fernández, Irene

    2018-03-22

    The aim of the current research is to analyze the psychometric properties of the Spanish version of the SF-8, overcoming previous shortcomings. A double line of analyses was used: competitive structural equation models to establish factorial validity, and item response theory to analyze item psychometric characteristics and information. 593 people aged 60 years or older, attending lifelong learning programs at the University, were surveyed. Their age ranged from 60 to 92 years; 67.6% were women. The survey included scales on personality dimensions, attitudes, perceptions, and behaviors related to aging. Competitive confirmatory models pointed to two factors (physical and mental health) as the best representation of the data: χ2(13) = 72.37 (p < .01); CFI = .99; TLI = .98; RMSEA = .08 (.06, .10). Item 5 was removed because of unreliability and cross-loading. Two-parameter logistic graded response models showed appropriate fit for both the physical and the mental dimensions. Item Information Curves and Test Information Functions indicated that the SF-8 was more informative for low levels of health. The Spanish SF-8 has adequate psychometric properties and is better represented by two dimensions once Item 5 is removed. Gathering evidence on patient-reported outcome measures is of crucial importance, as this type of measurement instrument is increasingly used in the clinical arena.

  11. Association of STAT3 Common Variations with Obesity and Hypertriglyceridemia: Protective and Contributive Effects

    PubMed Central

    Ma, Zuliang; Wang, Guanghai; Chen, Xuejiao; Ou, Zejin; Zou, Fei

    2014-01-01

    Signal transducer and activator of transcription 3 (STAT3) plays an important role in energy metabolism. Here we explore whether STAT3 common variations influence risks of obesity and other metabolic disorders in a Chinese Han population. Two tagging single nucleotide polymorphisms (tagSNPs), rs1053005 and rs957970, were used to capture the common variations of STAT3. Relationships between genotypes and obesity, body mass index, plasma triglyceride and other metabolic disease-related parameters were analyzed for association in 1742 subjects. Generalized linear models and logistic regression models were used for quantitative data analysis and the case-control study, respectively. rs1053005 was significantly associated with body mass index and waist circumference (p = 0.013 and p = 0.02, respectively). rs957970 was significantly associated with plasma level of triglyceride (p = 0.007). The GG genotype at rs1053005 had lower risks of both general obesity and central obesity (OR = 0.40, p = 0.034; OR = 0.42, p = 0.007, respectively) compared with the AA genotype. The CT genotype at rs957970 had a higher risk of hypertriglyceridemia (OR = 1.43, p = 0.015) compared with the TT genotype. Neither of the two SNPs was associated with other metabolic disease-related parameters. Our observations indicated that common variations of STAT3 could significantly affect the risk of obesity and hypertriglyceridemia in the Chinese Han population. PMID:25014397

  12. Measuring Constructs in Family Science: How Can Item Response Theory Improve Precision and Validity?

    PubMed Central

    Gordon, Rachel A.

    2014-01-01

    This article provides family scientists with an understanding of contemporary measurement perspectives and the ways in which item response theory (IRT) can be used to develop measures with desired evidence of precision and validity for research uses. The article offers a nontechnical introduction to some key features of IRT, including its orientation toward locating items along an underlying dimension and toward estimating precision of measurement for persons with different levels of that same construct. It also offers a didactic example of how the approach can be used to refine conceptualization and operationalization of constructs in the family sciences, using data from the National Longitudinal Survey of Youth 1979 (n = 2,732). Three basic models are considered: (a) the Rasch and (b) two-parameter logistic models for dichotomous items and (c) the Rating Scale Model for multicategory items. Throughout, the author highlights the potential for researchers to elevate measurement to a level on par with theorizing and testing about relationships among constructs. PMID:25663714
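    The precision-of-measurement idea described above is usually made concrete through the item information function. A short sketch for the 2PL (with illustrative parameter values); under the Rasch model the discrimination a is simply fixed at 1 for every item:

```python
import math

def p_2pl(theta, a, b):
    """2PL response probability with discrimination a and difficulty b."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def item_information(theta, a, b):
    """Fisher information of a 2PL item, I(theta) = a**2 * P * (1 - P).
    Summing I over items gives the test information, whose reciprocal
    square root is the standard error of the ability estimate at that
    level of the construct."""
    p = p_2pl(theta, a, b)
    return a * a * p * (1.0 - p)

# information is maximal where theta equals the item difficulty
peak = item_information(0.0, a=1.5, b=0.0)
```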

  13. A Multi-Stage Reverse Logistics Network Problem by Using Hybrid Priority-Based Genetic Algorithm

    NASA Astrophysics Data System (ADS)

    Lee, Jeong-Eun; Gen, Mitsuo; Rhee, Kyong-Gu

    Today the remanufacturing problem is one of the most important problems regarding the environmental aspects of the recovery of used products and materials. Reverse logistics is therefore gaining momentum and shows great potential for winning consumers in an increasingly competitive context. This paper considers the multi-stage reverse Logistics Network Problem (m-rLNP), minimizing a total cost that comprises reverse logistics shipping costs and the fixed costs of opening the disassembly centers and processing centers. In this study, we first formulate the m-rLNP as a three-stage logistics network model. To solve this problem, we propose a Genetic Algorithm (GA) with a two-stage priority-based encoding method and introduce a new crossover operator called Weight Mapping Crossover (WMX). A heuristic approach is additionally applied in the third stage to ship materials from processing centers to the manufacturer. Finally, numerical experiments on m-rLNP models of various scales demonstrate the effectiveness and efficiency of our approach in comparison with recent studies.
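    Priority-based encoding represents a transportation plan as a vector of node priorities that a decoder turns into shipments. The sketch below is a generic reconstruction of Gen-style priority-based decoding for one network stage, not the authors' exact procedure; all numbers are toy values:

```python
def decode_priority(pri, supply, demand, cost):
    """Decode a priority chromosome into one transportation stage:
    repeatedly take the highest-priority node (sources 0..m-1, then
    sinks) that still has quantity left and pair it with its cheapest
    feasible partner, shipping as much as possible."""
    m, n = len(supply), len(demand)
    s, d = supply[:], demand[:]
    plan = [[0] * n for _ in range(m)]
    while sum(s) > 0 and sum(d) > 0:
        cand = [k for k in range(m + n)
                if (k < m and s[k] > 0) or (k >= m and d[k - m] > 0)]
        k = max(cand, key=lambda q: pri[q])
        if k < m:   # source node: ship to the cheapest sink with demand
            i = k
            j = min((j for j in range(n) if d[j] > 0), key=lambda j: cost[i][j])
        else:       # sink node: receive from the cheapest source with supply
            j = k - m
            i = min((i for i in range(m) if s[i] > 0), key=lambda i: cost[i][j])
        q = min(s[i], d[j])
        plan[i][j] += q
        s[i] -= q
        d[j] -= q
    return plan

# 2 disassembly centers shipping to 3 processing centers (toy data)
plan = decode_priority([3, 1, 4, 2, 5], [20, 10], [5, 15, 10],
                       [[4, 2, 6], [3, 5, 1]])
```

    A GA then searches over the priority vectors; crossover operators such as WMX recombine two parents' priorities while keeping each child a valid chromosome.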

  14. Development of a program to fit data to a new logistic model for microbial growth.

    PubMed

    Fujikawa, Hiroshi; Kano, Yoshihiro

    2009-06-01

    Recently we developed a mathematical model for microbial growth in food. The model successfully predicted microbial growth under various temperature patterns. In this study, we developed a program to fit data to the model with a spreadsheet program, Microsoft Excel. Users can instantly get curves fitted to the model by inputting growth data and choosing the slope portion of a curve. The program could also estimate growth parameters, including the rate constant of growth and the lag period. This program would be a useful tool for analyzing growth data and further predicting microbial growth.
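    The "slope portion" step corresponds to estimating the rate constant from the exponential part of a log-transformed growth curve. A minimal sketch using the classical logistic curve as a stand-in (the authors' extended model adds further terms not reproduced here):

```python
import math

def logistic_growth(t, n0, nmax, r):
    """Classical logistic growth curve:
    N(t) = Nmax / (1 + (Nmax/N0 - 1) * exp(-r*t))."""
    return nmax / (1.0 + (nmax / n0 - 1.0) * math.exp(-r * t))

def estimate_rate(times, log_counts):
    """Least-squares slope of log N(t) over the user-chosen exponential
    ('slope') portion of the curve -- the quantity the spreadsheet
    program asks the user to select."""
    n = len(times)
    mt = sum(times) / n
    my = sum(log_counts) / n
    num = sum((t - mt) * (y - my) for t, y in zip(times, log_counts))
    den = sum((t - mt) ** 2 for t in times)
    return num / den

ts = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [math.log(logistic_growth(t, 1e3, 1e9, 0.8)) for t in ts]
r_hat = estimate_rate(ts, ys)  # close to the true rate constant 0.8
```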

  15. Indole 3-acetic acid, indoxyl sulfate and paracresyl-sulfate do not influence anemia parameters in hemodialysis patients.

    PubMed

    Bataille, Stanislas; Pelletier, Marion; Sallée, Marion; Berland, Yvon; McKay, Nathalie; Duval, Ariane; Gentile, Stéphanie; Mouelhi, Yosra; Brunet, Philippe; Burtey, Stéphane

    2017-07-26

    The main reason for anemia in renal failure patients is the insufficient erythropoietin production by the kidneys. Besides erythropoietin deficiency, in vitro studies have incriminated uremic toxins in the pathophysiology of anemia, but clinical data are sparse. In order to assess whether indole 3-acetic acid (IAA), indoxyl sulfate (IS), and paracresyl sulfate (PCS), three protein-bound uremic toxins, are clinically implicated in end-stage renal disease anemia, we studied the correlation of IAA, IS and PCS plasma concentrations with hemoglobin and Erythropoietin Stimulating Agent (ESA) use in hemodialysis patients. Between June and July 2014, we conducted an observational cross-sectional study in two hemodialysis centers. Three statistical approaches were conducted. First, we compared patients treated with ESA and those not treated. Second, we fitted linear regression models between IAA, IS, and PCS plasma concentrations and hemoglobin, the ESA dose over hemoglobin ratio (ESA/Hemoglobin) or the ESA resistance index (ERI). Third, we used a polytomous logistic regression model to compare groups of patients with no/low/high ESA dose and low/high hemoglobin statuses. Overall, 240 patients were included in the study. Mean age ± SD was 67.6 ± 16.0 years, 55.4% were men and 42.5% had diabetes mellitus. When compared with ESA treated patients, patients with no ESA had higher hemoglobin (mean 11.4 ± 1.1 versus 10.6 ± 1.2 g/dL; p <0.001), higher transferrin saturation (TSAT, 31.1 ± 16.3% versus 23.1 ± 11.5%; p < 0.001), less frequently an IV iron prescription (52.1 versus 65.7%, p = 0.04) and were more frequently treated with hemodiafiltration (53.5 versus 36.7%). In univariate analysis, IAA, IS or PCS plasma concentrations did not differ between the two groups. In the linear model, IAA plasma concentration was not associated with hemoglobin, but was negatively associated with ESA/Hb (p = 0.02; R = 0.18) and with the ERI (p = 0.03; R = 0.17). 
IS was associated with none of the three anemia parameters. PCS was positively associated with hemoglobin (p = 0.03; R = 0.14), but negatively with ESA/Hb (p = 0.03; R = 0.17) and the ERI (p = 0.02; R = 0.19). In multivariate analysis, the association of IAA concentration with ESA/Hb or ERI was not statistically significant, nor was the association of PCS with ESA/Hb or ERI. Likewise, in the subgroup of 76 patients with no inflammation (CRP <5 mg/L) and no iron deficiency (TSAT >20%), linear regression between IAA, IS or PCS and any anemia parameter did not reach significance. In the third model, univariate analysis showed no significant intergroup differences for IAA and IS. Regarding PCS, the Low Hb/High ESA group had lower concentrations. However, when we entered PCS together with the other significant characteristics of the five groups and compared them to the Low Hb/High ESA group (our reference group), the polytomous logistic regression model did not show any significant difference for PCS. In our study, using three different statistical models, we were unable to show any correlation between IAA, IS and PCS plasma concentrations and any anemia parameter in hemodialysis patients. Indolic uremic toxins and PCS have no or a very low effect on anemia parameters.

  16. Modelling the growth kinetics of Kocuria marina DAGII as a function of single and binary substrate during batch production of β-Cryptoxanthin.

    PubMed

    Mitra, Ruchira; Chaudhuri, Surabhi; Dutta, Debjani

    2017-01-01

    In the present investigation, the growth kinetics of Kocuria marina DAGII during batch production of β-Cryptoxanthin (β-CRX) were studied by considering the effect of glucose and maltose as single and binary substrates. The importance of mixed substrate over single substrate has been emphasised in the present study. Different mathematical models, namely the Logistic model for cell growth, the Logistic mass balance equation for substrate consumption, and the Luedeking-Piret model for β-CRX production, were successfully implemented. Model-based analyses for the single substrate experiments suggested that glucose and maltose concentrations higher than 7.5 and 10.0 g/L, respectively, inhibited growth and β-CRX production by K. marina DAGII. The Han and Levenspiel model and the Luong product inhibition model accurately described the cell growth in glucose and maltose substrate systems with R² values of 0.9989 and 0.9998, respectively. The effect of glucose and maltose as a binary substrate was further investigated. The binary substrate kinetics was well described using the sum-kinetics with interaction parameters model. The results of production kinetics revealed that the presence of binary substrate in the cultivation medium increased the biomass and β-CRX yield significantly. This study is the first detailed investigation of the kinetic behaviour of K. marina DAGII during β-CRX production. The parameters obtained in the study might be helpful for developing strategies for commercial production of β-CRX by K. marina DAGII.
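    The inhibition kinetics named above have simple closed forms. A sketch of a Luong-type model in its common substrate-inhibition form (an assumption, since the abstract applies it to inhibitory substrate concentrations); all parameter values are illustrative, not the fitted ones:

```python
def luong_mu(s, mu_max, ks, s_m, n):
    """Luong-type inhibition kinetics:
    mu = mu_max * S/(Ks + S) * (1 - S/Sm)**n for 0 <= S <= Sm,
    so the specific growth rate falls to zero at the critical
    substrate concentration Sm."""
    if s >= s_m:
        return 0.0
    return mu_max * s / (ks + s) * (1.0 - s / s_m) ** n

# growth rate rises at low substrate, then falls as inhibition
# dominates, reaching zero at Sm (toy parameters)
rates = [luong_mu(s, mu_max=0.5, ks=2.0, s_m=30.0, n=1.0)
         for s in (1.0, 5.0, 10.0, 25.0, 30.0)]
```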

  17. Comparison of machine-learning algorithms to build a predictive model for detecting undiagnosed diabetes - ELSA-Brasil: accuracy study.

    PubMed

    Olivera, André Rodrigues; Roesler, Valter; Iochpe, Cirano; Schmidt, Maria Inês; Vigo, Álvaro; Barreto, Sandhi Maria; Duncan, Bruce Bartholow

    2017-01-01

    Type 2 diabetes is a chronic disease associated with a wide range of serious health complications that have a major impact on overall health. The aims here were to develop and validate predictive models for detecting undiagnosed diabetes using data from the Longitudinal Study of Adult Health (ELSA-Brasil) and to compare the performance of different machine-learning algorithms in this task. Comparison of machine-learning algorithms to develop predictive models using data from ELSA-Brasil. After selecting a subset of 27 candidate variables from the literature, models were built and validated in four sequential steps: (i) parameter tuning with tenfold cross-validation, repeated three times; (ii) automatic variable selection using forward selection, a wrapper strategy with four different machine-learning algorithms and tenfold cross-validation (repeated three times), to evaluate each subset of variables; (iii) error estimation of model parameters with tenfold cross-validation, repeated ten times; and (iv) generalization testing on an independent dataset. The models were created with the following machine-learning algorithms: logistic regression, artificial neural network, naïve Bayes, K-nearest neighbor and random forest. The best models were created using artificial neural networks and logistic regression. These achieved mean areas under the curve of, respectively, 75.24% and 74.98% in the error estimation step and 74.17% and 74.41% in the generalization testing step. Most of the predictive models produced similar results, and demonstrated the feasibility of identifying individuals with the highest probability of having undiagnosed diabetes through easily obtained clinical data.
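    The repeated k-fold scheme used in steps (i)-(iii) can be sketched as an index generator; here a reduced example with 5 folds and 2 repeats on 20 observations (the study itself used 10 folds with 3 or 10 repeats):

```python
import random

def repeated_kfold_indices(n, k=10, repeats=3, seed=0):
    """Yield (train_idx, test_idx) pairs for k-fold cross-validation
    repeated `repeats` times, reshuffling the observations before
    each repeat so the folds differ across repeats."""
    rng = random.Random(seed)
    for _ in range(repeats):
        idx = list(range(n))
        rng.shuffle(idx)
        folds = [idx[i::k] for i in range(k)]
        for i in range(k):
            test = folds[i]
            train = [j for f in folds[:i] + folds[i + 1:] for j in f]
            yield train, test

splits = list(repeated_kfold_indices(20, k=5, repeats=2))
```

    Each model (logistic regression, neural network, and so on) would be refit on every `train` partition and scored on the matching `test` partition, and the AUCs averaged.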

  18. [Dehydration and malnutrition as two independent risk factors of death in a Senegalese pediatric hospital].

    PubMed

    Sylla, A; Guéye, M; Keita, Y; Seck, N; Seck, A; Mbow, F; Ndiaye, O; Diouf, S; Sall, M G

    2015-03-01

    Inpatient mortality is an indicator of the quality of care. We analyzed the mortality of under 5-year-old hospitalized children in the pediatric ward of Aristide Le Dantec Hospital to update our data 10 years after our first study. We analyzed the data of the children hospitalized between 1 January and 31 December 2012. For each child, we collected anthropometric measurements converted to z-scores based on World Health Organization growth data. Logistic regression models built separately with different anthropometric parameters were used to assess the risk of mortality according to children's characteristics. Data from 393 children were included. The overall mortality rate was 10% (39/393). Using logistic regression, the risk factors associated with death were severe wasting (odds ratio [OR]=8.27; 95% confidence interval [95% CI] [3.79-18]), male gender (OR=2.98; 95% CI [1.25-7.1]), dehydration (OR=5.4; 95% CI [2.54-13.43]) in the model using the weight-for-height z-score; male gender (OR=2.5; 95% CI [1.11-5.63]), dehydration (OR=8.43; 95% CI [3.83-18.5]) in the model using the height-for-age z-score; male gender (OR=2.7; 95% CI [1.19-6.24]), dehydration (OR=7.5; 95% CI [3.39-16.76]), severe deficit in the weight-for-age z-score (OR=2.4; 95% CI [1.11-5.63]) in the model using the weight-for-age z-score; and male gender (OR=2.5; 95% CI [1.11-5.63]) and dehydration (OR=8.43; 95% CI [3.83-18.5]) in the last model with mid-upper arm circumference (MUAC). Dehydration and malnutrition were two independent risk factors of death. The protocols for dehydration and malnutrition management should be audited, and anthropometric measurements should be performed systematically for each child at admission. Copyright © 2014 Elsevier Masson SAS. All rights reserved.

  19. Crash protectiveness to occupant injury and vehicle damage: An investigation on major car brands.

    PubMed

    Huang, Helai; Li, Chunyang; Zeng, Qiang

    2016-01-01

    This study sets out to investigate vehicles' crash protectiveness with respect to occupant injury and vehicle damage, which can be seen as an extension of traditional crash worthiness. A Bayesian bivariate hierarchical ordered logistic (BVHOL) model is developed to estimate the occupant protectiveness (OP) and vehicle protectiveness (VP) of 23 major car brands in Florida, while considering vehicles' crash aggressivity and controlling for external factors. The proposed model not only retains the strengths of the existing hierarchical ordered logistic (HOL) model, i.e. specifying the ordered nature of crash outcomes and cross-crash heterogeneities, but also accounts for the correlation between the two crash responses, driver injury and vehicle damage. A total of 7335 two-vehicle-crash records involving 14,670 cars in Florida are used for the investigation. From the estimation results, it is found that most luxury cars such as Cadillac, Volvo and Lexus possess excellent OP and VP, while some brands such as KIA and Saturn perform very badly in both aspects. The ranks of the estimated safety performance indices are also compared with their counterparts in the study by Huang et al. [Huang, H., Hu, S., Abdel-Aty, M., 2014. Indexing crash worthiness and crash aggressivity by major car brands. Safety Science 62, 339-347]. The results show that the rank of the occupant protectiveness index (OPI) is relatively consistent with that of the crash worthiness index, but the ranks of the crash aggressivity index differ more substantially between the two studies. Meanwhile, a great discrepancy between the OPI rank and that of the vehicle protectiveness index is found. Moreover, the estimates of the control variables and hyper-parameters, as well as comparisons with HOL models with separate or identical threshold errors, demonstrate the validity and advancement of the proposed model and the robustness of the estimated OP and VP. Copyright © 2015 Elsevier Ltd. All rights reserved.

  20. Relationships between common forest metrics and realized impacts of Hurricane Katrina on forest resources in Mississippi

    Treesearch

    Sonja N. Oswalt; Christopher M. Oswalt

    2008-01-01

    This paper compares and contrasts hurricane-related damage recorded across the Mississippi landscape in the 2 years following Katrina with initial damage assessments based on modeled parameters by the USDA Forest Service. Logistic and multiple regressions are used to evaluate the influence of stand characteristics on tree damage probability. Specifically, this paper...

  1. Effects of Calibration Sample Size and Item Bank Size on Ability Estimation in Computerized Adaptive Testing

    ERIC Educational Resources Information Center

    Sahin, Alper; Weiss, David J.

    2015-01-01

    This study aimed to investigate the effects of calibration sample size and item bank size on examinee ability estimation in computerized adaptive testing (CAT). For this purpose, a 500-item bank pre-calibrated using the three-parameter logistic model with 10,000 examinees was simulated. Calibration samples of varying sizes (150, 250, 350, 500,…

  2. The Scenario Approach to the Development of Regional Waste Management Systems (Implementation Experience in the Regions of Russia)

    ERIC Educational Resources Information Center

    Fomin, Eugene P.; Alekseev, Audrey A.; Fomina, Natalia E.; Dorozhkin, Vladimir E.

    2016-01-01

    The article illustrates a theoretical approach to scenario modeling of economic indicators of regional waste management system. The method includes a three-iterative algorithm that allows the executive authorities and investors to take a decision on logistics, bulk, technological and economic parameters of the formation of the regional long-term…

  3. Item Response Theory with Covariates (IRT-C): Assessing Item Recovery and Differential Item Functioning for the Three-Parameter Logistic Model

    ERIC Educational Resources Information Center

    Tay, Louis; Huang, Qiming; Vermunt, Jeroen K.

    2016-01-01

    In large-scale testing, the use of multigroup approaches is limited for assessing differential item functioning (DIF) across multiple variables as DIF is examined for each variable separately. In contrast, the item response theory with covariate (IRT-C) procedure can be used to examine DIF across multiple variables (covariates) simultaneously. To…

  4. Parameter Recovery and Classification Accuracy under Conditions of Testlet Dependency: A Comparison of the Traditional 2PL, Testlet, and Bi-Factor Models

    ERIC Educational Resources Information Center

    Koziol, Natalie A.

    2016-01-01

    Testlets, or groups of related items, are commonly included in educational assessments due to their many logistical and conceptual advantages. Despite their advantages, testlets introduce complications into the theory and practice of educational measurement. Responses to items within a testlet tend to be correlated even after controlling for…

  5. Calibration of an Item Bank for the Assessment of Basque Language Knowledge

    ERIC Educational Resources Information Center

    Lopez-Cuadrado, Javier; Perez, Tomas A.; Vadillo, Jose A.; Gutierrez, Julian

    2010-01-01

    The main requisite for a functional computerized adaptive testing system is the need of a calibrated item bank. This text presents the tasks carried out during the calibration of an item bank for assessing knowledge of Basque language. It has been done in terms of the 3-parameter logistic model provided by the item response theory. Besides, this…

  6. Mathematical modelling of the antibiotic-induced morphological transition of Pseudomonas aeruginosa

    PubMed Central

    Keen, Emma; Smith, David J.

    2018-01-01

    Here we formulate a mechanistic mathematical model to describe the growth dynamics of P. aeruginosa in the presence of the β-lactam antibiotic meropenem. The model is mechanistic in the sense that carrying capacity is taken into account through the dynamics of nutrient availability rather than via logistic growth. In accordance with our experimental results, we incorporate a sub-population of cells, differing in morphology from the normal bacillary shape of P. aeruginosa bacteria, which we assume have immunity from direct antibiotic action. By fitting this model to experimental data we obtain parameter values that give insight into the growth of a bacterial population that includes different cell morphologies. The analysis of two parameter sets that produce different long-term behaviour allows us to manipulate the system theoretically in order to explore the advantages of a shape transition that may potentially be a mechanism that allows P. aeruginosa to withstand antibiotic effects. Our results suggest that inhibition of this shape transition may be detrimental to bacterial growth and thus suggest that the transition may be a defensive mechanism implemented by bacterial machinery. In addition to this we provide strong theoretical evidence for the potential therapeutic strategy of using antimicrobial peptides (AMPs) in combination with meropenem. This proposed combination therapy exploits the shape transition as AMPs induce cell lysis by forming pores in the cytoplasmic membrane, which becomes exposed in the spherical cells. PMID:29481562
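    The modelling idea — nutrient-limited growth of an antibiotic-susceptible population alongside an immune, morphologically altered subpopulation — can be illustrated with a toy Euler integration. This is a deliberately simplified sketch in the spirit of the abstract; the equations, parameter names and values are all hypothetical, not the authors' model:

```python
def simulate(s, b, r, mu=1.0, c=0.5, kill=1.5, dt=0.01, steps=1000):
    """Toy nutrient-limited dynamics: normal cells b grow on nutrient s
    (rate mu*s*b) and are killed by the antibiotic (rate kill*b); the
    shape-transitioned subpopulation r grows but is assumed immune to
    direct antibiotic action; nutrient s is consumed by all growth."""
    for _ in range(steps):
        growth_b = mu * s * b
        growth_r = mu * s * r
        s = max(s - dt * c * (growth_b + growth_r), 0.0)
        b = max(b + dt * (growth_b - kill * b), 0.0)
        r = r + dt * growth_r
    return s, b, r

# start with mostly normal cells and a small immune subpopulation
s_end, b_end, r_end = simulate(s=1.0, b=1.0, r=0.01)
```

    Even in this caricature, the susceptible population declines under treatment while the immune subpopulation grows, which is the qualitative advantage of the shape transition that the paper explores.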

  7. Functional Data Analysis in NTCP Modeling: A New Method to Explore the Radiation Dose-Volume Effects

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Benadjaoud, Mohamed Amine, E-mail: mohamedamine.benadjaoud@gustaveroussy.fr; Université Paris sud, Le Kremlin-Bicêtre; Institut Gustave Roussy, Villejuif

    2014-11-01

    Purpose/Objective(s): To describe a novel method to explore radiation dose-volume effects. Functional data analysis is used to investigate the information contained in differential dose-volume histograms. The method is applied to the normal tissue complication probability modeling of rectal bleeding (RB) for patients irradiated in the prostatic bed by 3-dimensional conformal radiation therapy. Methods and Materials: Kernel density estimation was used to estimate the individual probability density functions from each of the 141 rectum differential dose-volume histograms. Functional principal component analysis was performed on the estimated probability density functions to explore the variation modes in the dose distribution. The functional principal components were then tested for association with RB using logistic regression adapted to functional covariates (FLR). For comparison, 3 other normal tissue complication probability models were considered: the Lyman-Kutcher-Burman model, logistic model based on standard dosimetric parameters (LM), and logistic model based on multivariate principal component analysis (PCA). Results: The incidence rate of grade ≥2 RB was 14%. V{sub 65Gy} was the most predictive factor for the LM (P=.058). The best fit for the Lyman-Kutcher-Burman model was obtained with n=0.12, m = 0.17, and TD50 = 72.6 Gy. In PCA and FLR, the components that describe the interdependence between the relative volumes exposed at intermediate and high doses were the most correlated to the complication. The FLR parameter function leads to a better understanding of the volume effect by including the treatment specificity in the delivered mechanistic information. For RB grade ≥2, patients with advanced age are significantly at risk (odds ratio, 1.123; 95% confidence interval, 1.03-1.22), and the fits of the LM, PCA, and functional principal component analysis models are significantly improved by including this clinical factor. 
Conclusion: Functional data analysis provides an attractive method for flexibly estimating the dose-volume effect for normal tissues in external radiation therapy.
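    The Lyman-Kutcher-Burman comparator model has a standard closed form that can be sketched directly, using the fitted values reported above (n=0.12, m=0.17, TD50=72.6 Gy). The dose-volume histogram in the example is illustrative, not patient data:

```python
import math

def ntcp_lkb(doses, volumes, n, m, td50):
    """Lyman-Kutcher-Burman model: NTCP = Phi((gEUD - TD50)/(m*TD50)),
    with generalized equivalent uniform dose
    gEUD = (sum_i v_i * D_i**(1/n))**n, where the fractional volumes
    v_i sum to 1 and Phi is the standard normal CDF."""
    geud = sum(v * d ** (1.0 / n) for v, d in zip(volumes, doses)) ** n
    t = (geud - td50) / (m * td50)
    return 0.5 * (1.0 + math.erf(t / math.sqrt(2.0)))  # normal CDF via erf

# fitted values from the abstract; toy 3-bin dose-volume histogram
p = ntcp_lkb([40.0, 60.0, 70.0], [0.5, 0.3, 0.2], n=0.12, m=0.17, td50=72.6)
```

    A uniform dose equal to TD50 yields NTCP = 0.5 by construction, which is a convenient sanity check on any implementation.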

  8. Nowcasting of Low-Visibility Procedure States with Ordered Logistic Regression at Vienna International Airport

    NASA Astrophysics Data System (ADS)

    Kneringer, Philipp; Dietz, Sebastian; Mayr, Georg J.; Zeileis, Achim

    2017-04-01

    Low-visibility conditions have a large impact on aviation safety and the economic efficiency of airports and airlines. To support decision makers, we develop a statistical probabilistic nowcasting tool for the occurrence of capacity-reducing operations related to low visibility. The probabilities of four different low-visibility classes are predicted with an ordered logistic regression model based on time series of meteorological point measurements. Potential predictor variables for the statistical models are visibility, humidity, temperature and wind measurements at several measurement sites. A stepwise variable selection method indicates that visibility and humidity measurements are the most important model inputs. The forecasts are tested with a 30-minute forecast interval up to two hours, which is a sufficient time span for tactical planning at Vienna Airport. The ordered logistic regression models outperform persistence and are competitive with human forecasters.
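    Ordered (cumulative-logit) regression turns a single linear predictor plus a set of ordered cutpoints into class probabilities. A minimal sketch for four ordered classes; the predictor value and cutpoints are illustrative, not the fitted model:

```python
import math

def ordered_logit_probs(eta, cutpoints):
    """Proportional-odds (cumulative logit) class probabilities:
    P(Y <= j) = 1 / (1 + exp(-(c_j - eta))) for ordered cutpoints
    c_1 < ... < c_{J-1}; class probabilities are the successive
    differences of these cumulative probabilities."""
    cum = [1.0 / (1.0 + math.exp(-(c - eta))) for c in cutpoints] + [1.0]
    return [cum[0]] + [cum[j] - cum[j - 1] for j in range(1, len(cum))]

# four ordered low-visibility classes from three cutpoints
p = ordered_logit_probs(eta=0.3, cutpoints=[-1.0, 0.5, 2.0])
```

    With ordered cutpoints the cumulative probabilities are increasing, so every class probability is positive and the four probabilities sum to one.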

  9. Basic Diagnosis and Prediction of Persistent Contrail Occurrence using High-resolution Numerical Weather Analyses/Forecasts and Logistic Regression. Part II: Evaluation of Sample Models

    NASA Technical Reports Server (NTRS)

    Duda, David P.; Minnis, Patrick

    2009-01-01

    Previous studies have shown that probabilistic forecasting may be a useful method for predicting persistent contrail formation. A probabilistic forecast to accurately predict contrail formation over the contiguous United States (CONUS) is created using hourly meteorological analyses from the Advanced Regional Prediction System (ARPS) and the Rapid Update Cycle (RUC), as well as GOES water vapor channel measurements, combined with surface and satellite observations of contrails. Two groups of logistic models were created. The first group of models (SURFACE models) is based on surface-based contrail observations supplemented with satellite observations of contrail occurrence. The second group of models (OUTBREAK models) is derived from a selected subgroup of satellite-based observations of widespread persistent contrails. The mean accuracies for both the SURFACE and OUTBREAK models typically exceeded 75 percent when based on the RUC or ARPS analysis data, but decreased when the logistic models were derived from ARPS forecast data.

  10. Classical Mathematical Models for Description and Prediction of Experimental Tumor Growth

    PubMed Central

    Benzekry, Sébastien; Lamont, Clare; Beheshti, Afshin; Tracz, Amanda; Ebos, John M. L.; Hlatky, Lynn; Hahnfeldt, Philip

    2014-01-01

    Despite internal complexity, tumor growth kinetics follow relatively simple laws that can be expressed as mathematical models. To explore this further, quantitative analysis of the most classical of these were performed. The models were assessed against data from two in vivo experimental systems: an ectopic syngeneic tumor (Lewis lung carcinoma) and an orthotopically xenografted human breast carcinoma. The goals were threefold: 1) to determine a statistical model for description of the measurement error, 2) to establish the descriptive power of each model, using several goodness-of-fit metrics and a study of parametric identifiability, and 3) to assess the models' ability to forecast future tumor growth. The models included in the study comprised the exponential, exponential-linear, power law, Gompertz, logistic, generalized logistic, von Bertalanffy and a model with dynamic carrying capacity. For the breast data, the dynamics were best captured by the Gompertz and exponential-linear models. The latter also exhibited the highest predictive power, with excellent prediction scores (≥80%) extending out as far as 12 days in the future. For the lung data, the Gompertz and power law models provided the most parsimonious and parametrically identifiable description. However, not one of the models was able to achieve a substantial prediction rate (≥70%) beyond the next day data point. In this context, adjunction of a priori information on the parameter distribution led to considerable improvement. For instance, forecast success rates went from 14.9% to 62.7% when using the power law model to predict the full future tumor growth curves, using just three data points. These results not only have important implications for biological theories of tumor growth and the use of mathematical modeling in preclinical anti-cancer drug investigations, but also may assist in defining how mathematical models could serve as potential prognostic tools in the clinic. PMID:25167199
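    Of the candidate laws compared above, the Gompertz model that best captured the breast data has a convenient closed form. A sketch with illustrative (not fitted) parameters:

```python
import math

def gompertz(t, v0, alpha, beta):
    """Gompertz growth law: V(t) = V0 * exp((alpha/beta) * (1 - exp(-beta*t))).
    The specific growth rate decays exponentially at rate beta, so the
    volume saturates at a carrying capacity of V0 * exp(alpha/beta)."""
    return v0 * math.exp((alpha / beta) * (1.0 - math.exp(-beta * t)))

# toy trajectory: starts at V0, grows, and stays below the carrying capacity
v = [gompertz(t, v0=1.0, alpha=1.0, beta=0.1) for t in (0.0, 10.0, 20.0, 50.0)]
```

    Fitting alpha and beta to the first few volume measurements, optionally with priors on their distribution as the abstract describes, is what drives the forecasting comparisons in the study.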

  11. Classical mathematical models for description and prediction of experimental tumor growth.

    PubMed

    Benzekry, Sébastien; Lamont, Clare; Beheshti, Afshin; Tracz, Amanda; Ebos, John M L; Hlatky, Lynn; Hahnfeldt, Philip

    2014-08-01

    Despite internal complexity, tumor growth kinetics follow relatively simple laws that can be expressed as mathematical models. To explore this further, quantitative analysis of the most classical of these was performed. The models were assessed against data from two in vivo experimental systems: an ectopic syngeneic tumor (Lewis lung carcinoma) and an orthotopically xenografted human breast carcinoma. The goals were threefold: 1) to determine a statistical model for description of the measurement error, 2) to establish the descriptive power of each model, using several goodness-of-fit metrics and a study of parametric identifiability, and 3) to assess the models' ability to forecast future tumor growth. The models included in the study comprised the exponential, exponential-linear, power law, Gompertz, logistic, generalized logistic, von Bertalanffy and a model with dynamic carrying capacity. For the breast data, the dynamics were best captured by the Gompertz and exponential-linear models. The latter also exhibited the highest predictive power, with excellent prediction scores (≥80%) extending out as far as 12 days in the future. For the lung data, the Gompertz and power law models provided the most parsimonious and parametrically identifiable description. However, not one of the models was able to achieve a substantial prediction rate (≥70%) beyond the next-day data point. In this context, adjunction of a priori information on the parameter distribution led to considerable improvement. For instance, forecast success rates went from 14.9% to 62.7% when using the power law model to predict the full future tumor growth curves, using just three data points. These results not only have important implications for biological theories of tumor growth and the use of mathematical modeling in preclinical anti-cancer drug investigations, but also may assist in defining how mathematical models could serve as potential prognostic tools in the clinic.
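
Two of the best-performing models named above can be written down compactly. Below is a minimal Python sketch of the Gompertz and exponential-linear growth laws, with hypothetical parameter values; in the paper these parameters are estimated by fitting measured tumor volumes.

```python
import math

def gompertz(t, v0, alpha, beta):
    """Gompertz growth: V(t) = v0 * exp((alpha/beta) * (1 - exp(-beta*t))).

    Growth decelerates over time; the carrying capacity is v0 * exp(alpha/beta).
    """
    return v0 * math.exp((alpha / beta) * (1.0 - math.exp(-beta * t)))

def exponential_linear(t, v0, a0, a1):
    """Exponential phase v0*exp(a0*t), switching to linear growth with slope a1
    at the time tau where the two phases join with matching value and slope."""
    tau = math.log(a1 / (a0 * v0)) / a0  # switch time
    if t <= tau:
        return v0 * math.exp(a0 * t)
    return a1 * (t - tau) + v0 * math.exp(a0 * tau)
```

The carrying capacity term is what separates Gompertz from unbounded exponential growth, and the smooth exponential-to-linear switch is what gave the exponential-linear model its strong forecasting performance on the breast data.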

  12. Mapping Shallow Landslide Slope Instability at Large Scales Using Remote Sensing and GIS

    NASA Astrophysics Data System (ADS)

    Avalon Cullen, C.; Kashuk, S.; Temimi, M.; Suhili, R.; Khanbilvardi, R.

    2015-12-01

    Rainfall-induced landslides are one of the most frequent hazards on slanted terrains. They lead to great economic losses and fatalities worldwide. Most factors inducing shallow landslides are local and can only be mapped with high levels of uncertainty at larger scales. This work presents an attempt to determine slope instability at large scales. Buffer and threshold techniques are used to downscale areas and minimize uncertainties. Four static parameters (slope angle, soil type, land cover and elevation) for 261 shallow rainfall-induced landslides in the continental United States are examined. ASTER GDEM is used as the basis for topographical characterization of slope and buffer analysis. Slope angle threshold assessment at the 50, 75, 95, 98, and 99 percentiles is tested locally. Further analysis of each threshold in relation to other parameters is investigated in a logistic regression environment for the continental U.S. It is determined that thresholds lower than the 95th percentile under-estimate slope angles. The best regression fit is achieved when utilizing the 99-threshold slope angle. This model predicts the highest number of cases correctly at 87.0% accuracy. A one-unit rise in the 99-threshold range increases landslide likelihood by 11.8%. The logistic regression model is carried over to ArcGIS, where all variables are processed based on their corresponding coefficients. A regional slope instability map for the continental United States is created and analyzed against the available landslide records and their spatial distributions. It is expected that future inclusion of dynamic parameters like precipitation and other proxies like soil moisture into the model will further improve accuracy.
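
The reported 11.8% effect can be read through the standard logistic-regression odds-ratio identity: a coefficient β corresponds to a multiplicative change of exp(β) in the odds per unit increase of the predictor. A minimal Python sketch, with a hypothetical baseline probability (the abstract reports only the odds increase):

```python
import math

def odds(p):
    """Convert a probability to odds."""
    return p / (1.0 - p)

def prob(o):
    """Convert odds back to a probability."""
    return o / (1.0 + o)

# An 11.8% rise in landslide odds per one-unit rise in the predictor
# corresponds to a logistic coefficient beta = ln(1.118).
beta = math.log(1.118)

p0 = 0.30  # hypothetical baseline landslide probability, for illustration only
p1 = prob(odds(p0) * math.exp(beta))  # probability after a one-unit rise
```

Note that the odds ratio, not the probability, changes by a fixed factor; the implied probability change depends on the baseline.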

  13. Application of logistic regression for landslide susceptibility zoning of Cekmece Area, Istanbul, Turkey

    NASA Astrophysics Data System (ADS)

    Duman, T. Y.; Can, T.; Gokceoglu, C.; Nefeslioglu, H. A.; Sonmez, H.

    2006-11-01

    As a result of industrialization, throughout the world, cities have been growing rapidly for the last century. One typical example of these growing cities is Istanbul, the population of which is over 10 million. Due to rapid urbanization, new areas suitable for settlement and engineering structures are necessary. The Cekmece area located west of the Istanbul metropolitan area is studied, because the landslide activity is extensive in this area. The purpose of this study is to develop a model that can be used to characterize landslide susceptibility in map form using logistic regression analysis of an extensive landslide database. A database of landslide activity was constructed using both aerial photography and field studies. About 19.2% of the selected study area is covered by deep-seated landslides. The landslides that occur in the area are primarily located in sandstones with interbedded permeable and impermeable layers such as claystone, siltstone and mudstone. About 31.95% of the total landslide area is located in this unit. To apply logistic regression analyses, a data matrix including 37 variables was constructed. The variables used in the forward stepwise analyses are different measures of slope, aspect, elevation, stream power index (SPI), plan curvature, profile curvature, geology, geomorphology and relative permeability of lithological units. A total of 25 variables were identified as exerting strong influence on landslide occurrence and were included in the logistic regression equation. Wald statistics values indicate that lithology, SPI and slope are more important than the other parameters in the equation. Beta coefficients of the 25 variables included in the logistic regression equation provide a model for landslide susceptibility in the Cekmece area. This model is used to generate a landslide susceptibility map that correctly classified 83.8% of the landslide-prone areas.

  14. A Method for Calculating the Probability of Successfully Completing a Rocket Propulsion Ground Test

    NASA Technical Reports Server (NTRS)

    Messer, Bradley

    2007-01-01

    Propulsion ground test facilities face the daily challenge of scheduling multiple customers into limited facility space and successfully completing their propulsion test projects. Over the last decade NASA's propulsion test facilities have performed hundreds of tests, collected thousands of seconds of test data, and exceeded the capabilities of numerous test facility and test article components. A logistic regression mathematical modeling technique has been developed to predict the probability of successfully completing a rocket propulsion test. A logistic regression model is a mathematical modeling approach that can be used to describe the relationship of several independent predictor variables X1, X2, ..., Xk to a binary or dichotomous dependent variable Y, where Y can only be one of two possible outcomes, in this case Success or Failure of accomplishing a full-duration test. The use of logistic regression modeling is not new; however, modeling propulsion ground test facilities using logistic regression is both a new and unique application of the statistical technique. Results from this type of model provide project managers with insight and confidence into the effectiveness of rocket propulsion ground testing.
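
A binary-outcome model of this kind can be sketched in a few lines. The Python fragment below fits a one-predictor logistic model for test success by gradient ascent on the log-likelihood; the data and the predictor (a standardized measure of test complexity) are hypothetical, not NASA facility data.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fit_logistic(xs, ys, lr=0.5, epochs=5000):
    """Fit P(Y=1 | x) = sigmoid(b0 + b1*x) by gradient ascent on the log-likelihood."""
    b0 = b1 = 0.0
    n = len(xs)
    for _ in range(epochs):
        g0 = g1 = 0.0
        for x, y in zip(xs, ys):
            err = y - sigmoid(b0 + b1 * x)  # score contribution
            g0 += err
            g1 += err * x
        b0 += lr * g0 / n
        b1 += lr * g1 / n
    return b0, b1

# Hypothetical standardized predictor and outcome (1 = full-duration success).
xs = [-1.5, -1.0, -0.5, 0.0, 0.5, 1.0, 1.5]
ys = [0, 0, 1, 0, 1, 1, 1]
b0, b1 = fit_logistic(xs, ys)
```

At the maximum-likelihood solution with an intercept, the fitted probabilities sum to the observed number of successes, a useful sanity check on convergence.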

  15. The logistics of choice.

    PubMed

    Killeen, Peter R

    2015-07-01

    The generalized matching law (GML) is reconstructed as a logistic regression equation that privileges no particular value of the sensitivity parameter, a. That value will often approach 1 due to the feedback that drives switching that is intrinsic to most concurrent schedules. A model of that feedback reproduced some features of concurrent data. The GML is a law only in the strained sense that any equation that maps data is a law. The machine under the hood of matching is in all likelihood the very law that was displaced by the Matching Law. It is now time to return the Law of Effect to centrality in our science. © Society for the Experimental Analysis of Behavior.
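
The reconstruction described here can be written out directly: the generalized matching law B1/B2 = b(r1/r2)^a becomes, in choice-proportion form, a logistic function of the log reinforcement ratio, with the sensitivity a as slope and log b as bias. A minimal Python sketch with illustrative parameter values:

```python
import math

def choice_proportion(r1, r2, a=1.0, log_b=0.0):
    """P(choice of alternative 1) under the generalized matching law,
    expressed as a logistic function of the log reinforcement ratio.
    a is the sensitivity parameter, log_b the bias."""
    z = a * math.log(r1 / r2) + log_b
    return 1.0 / (1.0 + math.exp(-z))
```

With a = 1 and no bias this reduces to strict matching (a 3:1 reinforcement ratio yields a 0.75 choice proportion), while a < 1 (undermatching) pulls choice toward indifference.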

  16. Multivariate logistic regression for predicting total culturable virus presence at the intake of a potable-water treatment plant: novel application of the atypical coliform/total coliform ratio.

    PubMed

    Black, L E; Brion, G M; Freitas, S J

    2007-06-01

    Predicting the presence of enteric viruses in surface waters is a complex modeling problem. Multiple water quality parameters that indicate the presence of human fecal material, the load of fecal material, and the amount of time fecal material has been in the environment are needed. This paper presents the results of a multiyear study of raw-water quality at the inlet of a potable-water plant that related 17 physical, chemical, and biological indices to the presence of enteric viruses as indicated by cytopathic changes in cell cultures. It was found that several simple, multivariate logistic regression models that could reliably identify observations of the presence or absence of total culturable virus could be fitted. The best models developed combined a fecal age indicator (the atypical coliform [AC]/total coliform [TC] ratio), the detectable presence of a human-associated sterol (epicoprostanol) to indicate the fecal source, and one of several fecal load indicators (the levels of Giardia species cysts, coliform bacteria, and coprostanol). The best fit to the data was found when the AC/TC ratio, the presence of epicoprostanol, and the density of fecal coliform bacteria were input into a simple, multivariate logistic regression equation, resulting in 84.5% and 78.6% accuracies for the identification of the presence and absence of total culturable virus, respectively. The AC/TC ratio was the most influential input variable in all of the models generated, but producing the best prediction required additional input related to the fecal source and the fecal load. The potential for replacing microbial indicators of fecal load with levels of coprostanol was proposed and evaluated by multivariate logistic regression modeling for the presence and absence of virus.

  17. Analysis of the Effects of the Commander’s Battle Positioning on Unit Combat Performance

    DTIC Science & Technology

    1991-03-01

    [Table of contents excerpt] ... Analysis, p. 58; Logistic Regression Analysis, p. 61; Canonical Correlation Analysis, p. 62; Discriminant Analysis ... [Text excerpt] ... entails classifying objects into two or more distinct groups, or responses. Dillon defines discriminant analysis as "deriving linear combinations of the ... object given its predictor variables." The second objective is, through analysis of the parameters of the discriminant functions, to determine those ...

  18. Multivariate Normal Tissue Complication Probability Modeling of Heart Valve Dysfunction in Hodgkin Lymphoma Survivors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cella, Laura, E-mail: laura.cella@cnr.it; Department of Advanced Biomedical Sciences, Federico II University School of Medicine, Naples; Liuzzi, Raffaele

    Purpose: To establish a multivariate normal tissue complication probability (NTCP) model for radiation-induced asymptomatic heart valvular defects (RVD). Methods and Materials: Fifty-six patients treated with sequential chemoradiation therapy for Hodgkin lymphoma (HL) were retrospectively reviewed for RVD events. Clinical information along with whole heart, cardiac chambers, and lung dose distribution parameters was collected, and the correlations to RVD were analyzed by means of Spearman's rank correlation coefficient (Rs). For the selection of the model order and parameters for NTCP modeling, a multivariate logistic regression method using resampling techniques (bootstrapping) was applied. Model performance was evaluated using the area under the receiver operating characteristic curve (AUC). Results: When we analyzed the whole heart, a 3-variable NTCP model including the maximum dose, whole heart volume, and lung volume was shown to be the optimal predictive model for RVD (Rs = 0.573, P<.001, AUC = 0.83). When we analyzed the cardiac chambers individually, for the left atrium and for the left ventricle, an NTCP model based on 3 variables including the percentage volume exceeding 30 Gy (V30), cardiac chamber volume, and lung volume was selected as the most predictive model (Rs = 0.539, P<.001, AUC = 0.83; and Rs = 0.557, P<.001, AUC = 0.82, respectively). The NTCP values increase as heart maximum dose or cardiac chambers V30 increase. They also increase with larger volumes of the heart or cardiac chambers and decrease when lung volume is larger. Conclusions: We propose logistic NTCP models for RVD considering not only heart irradiation dose but also the combined effects of lung and heart volumes. Our study establishes the statistical evidence of the indirect effect of lung size on radio-induced heart toxicity.

  19. A fuzzy mathematical model of West Java population with logistic growth model

    NASA Astrophysics Data System (ADS)

    Nurkholipah, N. S.; Amarti, Z.; Anggriani, N.; Supriatna, A. K.

    2018-03-01

    In this paper we develop a mathematical model of population growth in the West Java Province, Indonesia. The model takes the form of a logistic differential equation. We parameterize the model using several triples of data and choose the best triple, which has the smallest Mean Absolute Percentage Error (MAPE). The resulting model is able to predict the historical data with high accuracy, and it is also able to predict the future population number. Predicting the future population is among the important factors that affect the considerations in preparing good management for the population. Several experiments are done to look at the effect of impreciseness in the data. This is done by considering a fuzzy initial value for the crisp model, assuming that the model propagates the fuzziness of the independent variable to the dependent variable. We assume here a triangular fuzzy number representing the impreciseness in the data. We found that the fuzziness may disappear in the long term. Other scenarios are also investigated, such as the effect of fuzzy parameters on the crisp initial value of the population. The solution of the model is obtained numerically using the fourth-order Runge-Kutta scheme.
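
The numerical scheme mentioned at the end can be sketched directly. The Python fragment below integrates the crisp logistic equation dP/dt = rP(1 - P/K) with classical fourth-order Runge-Kutta, and propagates an interval of initial values as a simple stand-in for the fuzzy initial condition; the parameter values are illustrative, not the fitted West Java values.

```python
def logistic_rhs(p, r, k):
    """dP/dt = r * P * (1 - P/K)."""
    return r * p * (1.0 - p / k)

def rk4_step(p, dt, r, k):
    """One classical fourth-order Runge-Kutta step."""
    k1 = logistic_rhs(p, r, k)
    k2 = logistic_rhs(p + 0.5 * dt * k1, r, k)
    k3 = logistic_rhs(p + 0.5 * dt * k2, r, k)
    k4 = logistic_rhs(p + dt * k3, r, k)
    return p + (dt / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

def simulate(p0, r, k, dt, steps):
    p = p0
    for _ in range(steps):
        p = rk4_step(p, dt, r, k)
    return p

# Illustrative parameters: growth rate r per year, carrying capacity K in millions.
R, K = 0.03, 60.0
# Interval of initial populations standing in for a fuzzy initial value.
low = simulate(28.0, R, K, 0.1, 1000)   # 100 years ahead, dt = 0.1
high = simulate(32.0, R, K, 0.1, 1000)
```

Because every trajectory converges to K, the band [low, high] spanned by the uncertain initial value narrows over time, consistent with the paper's observation that the fuzziness may disappear in the long term.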

  20. Vitamin D and Male Sexual Function: A Transversal and Longitudinal Study.

    PubMed

    Tirabassi, Giacomo; Sudano, Maurizio; Salvio, Gianmaria; Cutini, Melissa; Muscogiuri, Giovanna; Corona, Giovanni; Balercia, Giancarlo

    2018-01-01

    The effects of vitamin D on sexual function are very unclear. Therefore, we aimed at evaluating the possible association between vitamin D and sexual function and at assessing the influence of vitamin D administration on sexual function. We retrospectively studied 114 men by evaluating clinical, biochemical, and sexual parameters. A subsample (n = 41) was also studied longitudinally before and after vitamin D replacement therapy. In the whole sample, after fitting logistic regression models, higher levels of 25(OH) vitamin D were significantly associated with high values of total testosterone and of all the International Index of Erectile Function (IIEF) questionnaire parameters. On the other hand, higher levels of total testosterone were positively and significantly associated with high levels of erectile function and IIEF total score. After vitamin D replacement therapy, total and free testosterone increased and erectile function improved, whereas other sexual parameters did not change significantly. On logistic regression analysis, higher levels of vitamin D increase (Δ-) were significantly associated with high values of Δ-erectile function after adjustment for Δ-testosterone. Vitamin D is important for the wellness of male sexual function, and vitamin D administration improves sexual function.

  1. Evaluation of Cox's model and logistic regression for matched case-control data with time-dependent covariates: a simulation study.

    PubMed

    Leffondré, Karen; Abrahamowicz, Michal; Siemiatycki, Jack

    2003-12-30

    Case-control studies are typically analysed using the conventional logistic model, which does not directly account for changes in the covariate values over time. Yet, many exposures may vary over time. The most natural alternative to handle such exposures would be to use the Cox model with time-dependent covariates. However, its application to case-control data opens the question of how to manipulate the risk sets. Through a simulation study, we investigate how the accuracy of the estimates of Cox's model depends on the operational definition of risk sets and/or on some aspects of the time-varying exposure. We also assess the estimates obtained from conventional logistic regression. The lifetime experience of a hypothetical population is first generated, and a matched case-control study is then simulated from this population. We control the frequency, the age at initiation, and the total duration of exposure, as well as the strengths of their effects. All models considered include a fixed-in-time covariate and one or two time-dependent covariate(s): the indicator of current exposure and/or the exposure duration. Simulation results show that none of the models always performs well. The discrepancies between the odds ratios yielded by logistic regression and the 'true' hazard ratio depend on both the type of the covariate and the strength of its effect. In addition, it seems that logistic regression has difficulty separating the effects of inter-correlated time-dependent covariates. By contrast, each of the two versions of Cox's model systematically induces either a serious under-estimation or a moderate over-estimation bias. The magnitude of the latter bias is proportional to the true effect, suggesting that an improved manipulation of the risk sets may eliminate, or at least reduce, the bias. Copyright 2003 John Wiley & Sons, Ltd.

  2. Applications of Evolutionary Technology to Manufacturing and Logistics Systems : State-of-the Art Survey

    NASA Astrophysics Data System (ADS)

    Gen, Mitsuo; Lin, Lin

    Many combinatorial optimization problems from industrial engineering and operations research in the real world are very complex in nature and quite hard to solve by conventional techniques. Since the 1960s, there has been an increasing interest in imitating living beings to solve such kinds of hard combinatorial optimization problems. Simulating the natural evolutionary process of human beings results in stochastic optimization techniques called evolutionary algorithms (EAs), which can often outperform conventional optimization methods when applied to difficult real-world problems. In this survey paper, we provide a comprehensive survey of the current state of the art in the use of EAs in manufacturing and logistics systems. In order to demonstrate that EAs are powerful and broadly applicable stochastic search and optimization techniques, we deal with the following engineering design problems: transportation planning models, layout design models and two-stage logistics models in logistics systems; and job-shop scheduling and resource-constrained project scheduling in manufacturing systems.

  3. The evolution of Zipf's law indicative of city development

    NASA Astrophysics Data System (ADS)

    Chen, Yanguang

    2016-02-01

    Zipf's law of city-size distributions can be expressed by three types of mathematical models: one-parameter form, two-parameter form, and three-parameter form. The one-parameter and one of the two-parameter models are familiar to urban scientists. However, the three-parameter model and another type of two-parameter model have not attracted attention. This paper is devoted to exploring the conditions and scopes of application of these Zipf models. By mathematical reasoning and empirical analysis, new discoveries are made as follows. First, if the size distribution of cities in a geographical region cannot be described with the one- or two-parameter model, it may be characterized by the three-parameter model with a scaling factor and a scale-translational factor. Second, all these Zipf models can be unified by hierarchical scaling laws based on cascade structure. Third, the patterns of city-size distributions seem to evolve from the three-parameter mode to the two-parameter mode, and then to the one-parameter mode. Four-year census data of Chinese cities are employed to verify the three-parameter Zipf's law and the corresponding hierarchical structure of rank-size distributions. This study helps to reveal the scientific laws of social systems and the properties of urban development.
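
The three model families can be written as one formula. A minimal Python sketch of the rank-size form, in which the two- and one-parameter models are special cases of a three-parameter (Zipf-Mandelbrot-type) model with a scaling factor and a scale-translational factor; the symbols C, alpha, beta are generic here, not necessarily the paper's notation.

```python
def zipf_size(rank, c, alpha=0.0, beta=1.0):
    """Three-parameter rank-size model: S(r) = C / (r + alpha)**beta.

    alpha = 0           -> two-parameter form S(r) = C / r**beta
    alpha = 0, beta = 1 -> classical one-parameter Zipf law S(r) = C / r
    """
    return c / (rank + alpha) ** beta
```

Under the classical one-parameter law the largest city is twice the size of the second-ranked city and three times the third; the translational factor alpha flattens the top of the rank-size curve, which is one way the three-parameter form accommodates distributions the simpler forms cannot.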

  4. Multilevel joint competing risk models

    NASA Astrophysics Data System (ADS)

    Karunarathna, G. H. S.; Sooriyarachchi, M. R.

    2017-09-01

    Joint modeling approaches are often encountered in many biomedical and epidemiological studies for different outcomes, such as competing-risk time-to-event and count outcomes, in the presence of a cluster effect. Hospital length of stay (LOS) has been a widely used outcome measure of hospital utilization, as it provides a benchmark for measuring multiple terminations such as discharge, transfer, death and patients who have not completed the event of interest at the follow-up period (censored) during hospitalizations. Competing risk models provide a method of addressing such multiple destinations, since classical time-to-event models yield biased results when there are multiple events. In this study, the concept of joint modeling has been applied to the dengue epidemiology in Sri Lanka, 2006-2008, to assess the relationship between different outcomes of LOS and the platelet count of dengue patients with the district cluster effect. Two key approaches have been applied to build up the joint scenario. The first approach models each competing risk separately using the binary logistic model, treating all other events as censored under the multilevel discrete time-to-event model, while the platelet counts are assumed to follow a lognormal regression model. The second approach is based on the endogeneity effect in the multilevel competing risks and count model. Model parameters were estimated using maximum likelihood based on the Laplace approximation. Moreover, the study reveals that the joint modeling approach yields more precise results compared to fitting two separate univariate models, in terms of AIC (Akaike Information Criterion).

  5. Analysis of training sample selection strategies for regression-based quantitative landslide susceptibility mapping methods

    NASA Astrophysics Data System (ADS)

    Erener, Arzu; Sivas, A. Abdullah; Selcuk-Kestel, A. Sevtap; Düzgün, H. Sebnem

    2017-07-01

    All of the quantitative landslide susceptibility mapping (QLSM) methods require two basic data types, namely, a landslide inventory and factors that influence landslide occurrence (landslide influencing factors, LIF). Depending on the type of landslides, the nature of triggers and the LIF, the accuracy of the QLSM methods differs. Moreover, how to balance the number of 0's (non-occurrence) and 1's (occurrence) in the training set obtained from the landslide inventory, and how to select which of the 1's and 0's to include in QLSM models, play a critical role in the accuracy of the QLSM. Although the performance of various QLSM methods has been widely investigated in the literature, the challenge of training set construction has not been adequately investigated for the QLSM methods. In order to tackle this challenge, in this study three different training set selection strategies, along with the original data set, are used for testing the performance of three different regression methods, namely Logistic Regression (LR), Bayesian Logistic Regression (BLR) and Fuzzy Logistic Regression (FLR). The first sampling strategy is proportional random sampling (PRS), which takes into account a weighted selection of landslide occurrences in the sample set. The second method, namely non-selective nearby sampling (NNS), includes randomly selected sites and their surrounding neighboring points at certain preselected distances to include the impact of clustering. Selective nearby sampling (SNS) is the third method, which concentrates on the group of 1's and their surrounding neighborhood. A randomly selected group of landslide sites and their neighborhood are considered in the analyses, similar to the NNS parameters. It is found that the LR-PRS, FLR-PRS and BLR-Whole Data set-ups, in that order, yield the best fits among the alternatives. The results indicate that in QLSM based on regression models, avoidance of spatial correlation in the data set is critical for the model's performance.

  6. Can arsenic occurrence rate in bedrock aquifers be predicted?

    USGS Publications Warehouse

    Yang, Qiang; Jung, Hun Bok; Marvinney, Robert G.; Culbertson, Charles W.; Zheng, Yan

    2012-01-01

    A high percentage (31%) of groundwater samples from bedrock aquifers in the greater Augusta area, Maine was found to contain greater than 10 μg L–1 of arsenic. Elevated arsenic concentrations are associated with bedrock geology, and more frequently observed in samples with high pH, low dissolved oxygen, and low nitrate. These associations were quantitatively compared by statistical analysis. Stepwise logistic regression models using bedrock geology and/or water chemistry parameters are developed and tested with external data sets to explore the feasibility of predicting groundwater arsenic occurrence rates (the percentages of arsenic concentrations higher than 10 μg L–1) in bedrock aquifers. Despite the under-prediction of high arsenic occurrence rates, models including groundwater geochemistry parameters predict arsenic occurrence rates better than those with bedrock geology only. Such simple models with very few parameters can be applied to obtain a preliminary arsenic risk assessment in bedrock aquifers at local to intermediate scales at other localities with similar geology.

  7. Can arsenic occurrence rates in bedrock aquifers be predicted?

    PubMed Central

    Yang, Qiang; Jung, Hun Bok; Marvinney, Robert G.; Culbertson, Charles W.; Zheng, Yan

    2012-01-01

    A high percentage (31%) of groundwater samples from bedrock aquifers in the greater Augusta area, Maine was found to contain greater than 10 µg L−1 of arsenic. Elevated arsenic concentrations are associated with bedrock geology, and more frequently observed in samples with high pH, low dissolved oxygen, and low nitrate. These associations were quantitatively compared by statistical analysis. Stepwise logistic regression models using bedrock geology and/or water chemistry parameters are developed and tested with external data sets to explore the feasibility of predicting groundwater arsenic occurrence rates (the percentages of arsenic concentrations higher than 10 µg L−1) in bedrock aquifers. Despite the under-prediction of high arsenic occurrence rates, models including groundwater geochemistry parameters predict arsenic occurrence rates better than those with bedrock geology only. Such simple models with very few parameters can be applied to obtain a preliminary arsenic risk assessment in bedrock aquifers at local to intermediate scales at other localities with similar geology. PMID:22260208

  8. School Exits in the Milwaukee Parental Choice Program: Evidence of a Marketplace?

    ERIC Educational Resources Information Center

    Ford, Michael

    2011-01-01

    This article examines whether the large number of school exits from the Milwaukee school voucher program is evidence of a marketplace. Two logistic regression and multinomial logistic regression models tested the relation between the inability to draw large numbers of voucher students and the ability of a private school to remain viable. Data on…

  9. Evaluation of logistic regression models and effect of covariates for case-control study in RNA-Seq analysis.

    PubMed

    Choi, Seung Hoan; Labadorf, Adam T; Myers, Richard H; Lunetta, Kathryn L; Dupuis, Josée; DeStefano, Anita L

    2017-02-06

    Next generation sequencing provides a count of RNA molecules in the form of short reads, yielding discrete, often highly non-normally distributed gene expression measurements. Although Negative Binomial (NB) regression has been generally accepted in the analysis of RNA sequencing (RNA-Seq) data, its appropriateness has not been exhaustively evaluated. We explore logistic regression as an alternative method for RNA-Seq studies designed to compare cases and controls, where disease status is modeled as a function of RNA-Seq reads using simulated and Huntington disease data. We evaluate the effect of adjusting for covariates that have an unknown relationship with gene expression. Finally, we incorporate the data adaptive method in order to compare false positive rates. When the sample size is small or the expression levels of a gene are highly dispersed, the NB regression shows inflated Type-I error rates but the Classical logistic and Bayes logistic (BL) regressions are conservative. Firth's logistic (FL) regression performs well or is slightly conservative. Large sample size and low dispersion generally make Type-I error rates of all methods close to nominal alpha levels of 0.05 and 0.01. However, Type-I error rates are controlled after applying the data adaptive method. The NB, BL, and FL regressions gain increased power with large sample size, large log2 fold-change, and low dispersion. The FL regression has comparable power to NB regression. We conclude that implementing the data adaptive method appropriately controls Type-I error rates in RNA-Seq analysis. Firth's logistic regression provides a concise statistical inference process and reduces spurious associations from inaccurately estimated dispersion parameters in the negative binomial framework.
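
Overdispersion is the crux of the comparison above: under the NB model the variance is mu + d*mu^2 rather than the Poisson's mu. A small stdlib-only Python simulation (mu and d are arbitrary illustrative values, not estimates from the paper) that draws NB counts as a Poisson-Gamma mixture and exhibits the inflated variance:

```python
import math
import random

def nb_sample(mu, d, rng):
    """One Negative Binomial draw via the Poisson-Gamma mixture:
    lam ~ Gamma(shape=1/d, scale=d*mu), count ~ Poisson(lam).
    Marginal mean = mu, marginal variance = mu + d * mu**2."""
    lam = rng.gammavariate(1.0 / d, d * mu)
    # Knuth's Poisson sampler (adequate for moderate lam)
    threshold, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= threshold:
            return k
        k += 1

rng = random.Random(42)
counts = [nb_sample(10.0, 0.5, rng) for _ in range(2000)]
mean = sum(counts) / len(counts)
var = sum((c - mean) ** 2 for c in counts) / (len(counts) - 1)
```

Here the theoretical variance is 10 + 0.5 * 100 = 60, six times the mean, whereas a Poisson model would force variance equal to the mean; it is this dispersion parameter that, when poorly estimated in small samples, underlies the inflated NB Type-I error rates the abstract reports.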

  10. Can Predictive Modeling Identify Head and Neck Oncology Patients at Risk for Readmission?

    PubMed

    Manning, Amy M; Casper, Keith A; Peter, Kay St; Wilson, Keith M; Mark, Jonathan R; Collar, Ryan M

    2018-05-01

    Objective Unplanned readmission within 30 days is a contributor to health care costs in the United States. The use of predictive modeling during hospitalization to identify patients at risk for readmission offers a novel approach to quality improvement and cost reduction. Study Design Two-phase study including retrospective analysis of prospectively collected data followed by prospective longitudinal study. Setting Tertiary academic medical center. Subjects and Methods Prospectively collected data for patients undergoing surgical treatment for head and neck cancer from January 2013 to January 2015 were used to build predictive models for readmission within 30 days of discharge using logistic regression, classification and regression tree (CART) analysis, and random forests. One model (logistic regression) was then placed prospectively into the discharge workflow from March 2016 to May 2016 to determine the model's ability to predict which patients would be readmitted within 30 days. Results In total, 174 admissions had descriptive data. Thirty-two were excluded due to incomplete data. Logistic regression, CART, and random forest predictive models were constructed using the remaining 142 admissions. When applied to 106 consecutive prospective head and neck oncology patients at the time of discharge, the logistic regression model predicted readmissions with a specificity of 94%, a sensitivity of 47%, a negative predictive value of 90%, and a positive predictive value of 62% (odds ratio, 14.9; 95% confidence interval, 4.02-55.45). Conclusion Prospectively collected head and neck cancer databases can be used to develop predictive models that can accurately predict which patients will be readmitted. This offers valuable support for quality improvement initiatives and readmission-related cost reduction in head and neck cancer care.
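
The four reported figures are the standard confusion-matrix summaries. A minimal Python sketch of how they are computed; the counts in the example are hypothetical, not the study's.

```python
def classification_metrics(tp, fp, tn, fn):
    """Sensitivity, specificity, PPV, and NPV from a 2x2 confusion matrix
    (tp = readmitted and flagged, tn = not readmitted and not flagged, etc.)."""
    return {
        "sensitivity": tp / (tp + fn),  # flagged fraction of true readmissions
        "specificity": tn / (tn + fp),  # cleared fraction of non-readmissions
        "ppv": tp / (tp + fp),          # readmission rate among flagged patients
        "npv": tn / (tn + fn),          # non-readmission rate among cleared patients
    }

# Hypothetical counts for illustration only.
m = classification_metrics(tp=9, fp=5, tn=80, fn=10)
```

The pattern in the abstract (high specificity and NPV, modest sensitivity) is typical when the event is relatively rare and the model is tuned to avoid false alarms.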

  11. Pharmacokinetic-Pharmacodynamic Modeling of Unboosted Atazanavir in a Cohort of Stable HIV-Infected Patients

    PubMed Central

    Baudry, Thomas; Gagnieu, Marie-Claude; Boibieux, André; Livrozet, Jean-Michel; Peyramond, Dominique; Tod, Michel; Ferry, Tristan

    2013-01-01

Limited data on the pharmacokinetics and pharmacodynamics (PK/PD) of unboosted atazanavir (uATV) in treatment-experienced patients are available. The aim of this work was to study the PK/PD of unboosted atazanavir in a cohort of HIV-infected patients. Data were available for 58 HIV-infected patients (69 uATV-based regimens). Atazanavir concentrations were analyzed by using a population approach, and the relationship between atazanavir PK and clinical outcome was examined using logistic regression. The final PK model was a linear one-compartment model with a mixture absorption model to account for two subgroups of absorbers. The mean (interindividual variability) values of the population PK parameters were as follows: clearance, 13.4 liters/h (40.7%), volume of distribution, 71.1 liters (29.7%), and fraction of regular absorbers, 0.49. Seven subjects experienced virological failure after the switch to uATV. All of them were identified as low absorbers in the PK modeling. The absorption rate constant (0.38 ± 0.20 versus 0.75 ± 0.28 h−1; P = 0.002) and ATV exposure (area under the concentration-time curve from 0 to 24 h [AUC0–24], 10.3 ± 2.1 versus 22.4 ± 11.2 mg · h · liter−1; P = 0.001) were significantly lower in patients with virological failure than in patients without failure. In the logistic regression analysis, both the absorption rate constant and the ATV trough concentration significantly influenced the probability of virological failure. A significant relationship between ATV pharmacokinetics and virological response was observed in a cohort of HIV patients who were administered unboosted atazanavir. This study also suggests that twice-daily administration of uATV may optimize drug therapy. PMID:23147727
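A linear one-compartment model with first-order absorption, as fitted here, has a closed-form concentration profile. The sketch below uses the reported population means (CL = 13.4 L/h, V = 71.1 L) and the reported mean absorption rate constants (0.38 vs. 0.75 h-1); the 400 mg dose and complete bioavailability (F = 1) are assumptions for illustration, not values from the study:

```python
import math

def conc(t, dose_mg, ka, cl, v):
    """One-compartment model, first-order absorption (F = 1 assumed):
    C(t) = D*ka / (V*(ka - ke)) * (exp(-ke*t) - exp(-ka*t)), ke = CL/V."""
    ke = cl / v
    return dose_mg * ka / (v * (ka - ke)) * (math.exp(-ke * t) - math.exp(-ka * t))

def auc_trapezoid(dose_mg, ka, cl, v, t_end=24.0, n=2400):
    """Numerical AUC(0 - t_end) of the concentration curve by the
    trapezoidal rule, mimicking a non-compartmental AUC0-24."""
    dt = t_end / n
    cs = [conc(i * dt, dose_mg, ka, cl, v) for i in range(n + 1)]
    return sum((cs[i] + cs[i + 1]) * dt / 2 for i in range(n))
```

Under these assumptions the slower absorbers accumulate less exposure over the first 24 h, directionally consistent with the lower AUC0–24 reported in patients who failed; the much larger gap in the study also reflects bioavailability differences that this sketch ignores.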

  12. How Should We Assess the Fit of Rasch-Type Models? Approximating the Power of Goodness-of-Fit Statistics in Categorical Data Analysis

    ERIC Educational Resources Information Center

    Maydeu-Olivares, Alberto; Montano, Rosa

    2013-01-01

We investigate the performance of three statistics, R1, R2 (Glas in "Psychometrika" 53:525-546, 1988), and M2 (Maydeu-Olivares & Joe in "J. Am. Stat. Assoc." 100:1009-1020, 2005; "Psychometrika" 71:713-732, 2006), to assess the overall fit of a one-parameter logistic model…

  13. Parameters and kinetics of olive mill wastewater dephenolization by immobilized Rhodotorula glutinis cells.

    PubMed

    Bozkoyunlu, Gaye; Takaç, Serpil

    2014-01-01

Olive mill wastewater (OMW) with a total phenol (TP) concentration range of 300-1200 mg/L was treated with alginate-immobilized Rhodotorula glutinis cells in a batch system. The effects of pellet properties (diameter, alginate concentration and cell loading (CL)) and operational parameters (initial TP concentration, agitation rate and reusability of pellets) on dephenolization of OMW were studied. Up to 87% dephenolization was obtained after 120 h of biodegradation. The utilization number of pellets increased with the addition of calcium ions into the biodegradation medium. The overall effectiveness factors calculated for different conditions showed that diffusional limitations arising from pellet size and pellet composition could be neglected. Mass transfer limitations appeared to be more pronounced at high substrate concentrations and low agitation rates. The parameters of the logistic model for the growth kinetics of R. glutinis in OMW were estimated at different initial phenol concentrations of OMW by curve-fitting the experimental data with the model.
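The logistic growth model referred to can be fitted to biomass-time data even without a curve-fitting library. A minimal sketch, in which the function names and the grid-search fit are illustrative rather than the authors' actual estimation procedure:

```python
import math

def logistic_growth(t, x0, xmax, mu):
    """Logistic growth: X(t) = Xmax / (1 + ((Xmax - X0)/X0) * exp(-mu*t)),
    with initial biomass X0, carrying capacity Xmax, growth rate mu."""
    a = (xmax - x0) / x0
    return xmax / (1.0 + a * math.exp(-mu * t))

def fit_mu(times, biomass, x0, xmax, mu_grid):
    """Crude one-dimensional least-squares fit of the specific
    growth rate mu over a grid of candidate values."""
    def sse(mu):
        return sum((logistic_growth(t, x0, xmax, mu) - x) ** 2
                   for t, x in zip(times, biomass))
    return min(mu_grid, key=sse)
```

`fit_mu` recovers mu exactly on noise-free synthetic data; with real OMW measurements a proper nonlinear least-squares routine would be used instead of a grid search.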

  14. Temperature based Restricted Boltzmann Machines

    NASA Astrophysics Data System (ADS)

    Li, Guoqi; Deng, Lei; Xu, Yi; Wen, Changyun; Wang, Wei; Pei, Jing; Shi, Luping

    2016-01-01

Restricted Boltzmann machines (RBMs), which apply graphical models to learning a probability distribution over a set of inputs, have attracted much attention recently since being proposed as building blocks of multi-layer learning systems called deep belief networks (DBNs). Temperature is a key factor of the Boltzmann distribution that RBMs originate from; however, none of the existing schemes has considered the impact of temperature in the graphical model of DBNs. In this work, we propose temperature based restricted Boltzmann machines (TRBMs), which reveal that temperature is an essential parameter controlling the selectivity of the firing neurons in the hidden layers. We theoretically prove that the effect of temperature can be adjusted by setting the sharpness parameter of the logistic function in the proposed TRBMs. The performance of RBMs can be improved by adjusting the temperature parameter of TRBMs. This work provides comprehensive insight into deep belief networks and deep learning architectures from a physical point of view.
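The role of temperature described in this abstract amounts to rescaling the input of the logistic activation; a one-line sketch:

```python
import math

def logistic(x, temperature=1.0):
    """Temperature-scaled logistic activation sigma(x / T).
    Lower T sharpens the curve (more selective hidden-unit firing);
    higher T flattens it toward 0.5 for every input."""
    return 1.0 / (1.0 + math.exp(-x / temperature))
```

At T = 0.1 the unit behaves almost like a hard threshold, while at T = 10 the same input barely moves the activation off 0.5 -- the selectivity effect the paper attributes to temperature.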

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kwasniewski, Bartosz K

The construction of reversible extensions of dynamical systems presented in a previous paper by the author and A.V. Lebedev is enhanced, so that it applies to arbitrary mappings (not necessarily with open range). It is based on calculating the maximal ideal space of C*-algebras that extends endomorphisms to partial automorphisms via partial isometric representations, and involves a new set of 'parameters' (the role of parameters is played by chosen sets or ideals). As model examples, we give a thorough description of reversible extensions of logistic maps and a classification of systems associated with compression of unitaries generating homeomorphisms of the circle. Bibliography: 34 titles.

  16. Studies on thermokinetic of Chlorella pyrenoidosa devolatilization via different models.

    PubMed

    Chen, Zhihua; Lei, Jianshen; Li, Yunbei; Su, Xianfa; Hu, Zhiquan; Guo, Dabin

    2017-11-01

The thermokinetics of Chlorella pyrenoidosa (CP) devolatilization were investigated based on an iso-conversional model and different distributed activation energy models (DAEM). The iso-conversional analysis showed that CP devolatilization roughly followed a single step with mechanism function f(α) = (1 - α)^3 and kinetic parameters E0 = 180.5 kJ/mol and A0 = 1.5 × 10^13 s^-1. The Logistic distribution was the most suitable activation energy distribution function for CP devolatilization. Although its reaction order of n = 3.3 was in accordance with the iso-conversional result, the Logistic DAEM could not resolve the weight loss features, since it presented devolatilization as a single-step reaction. In contrast, the non-uniform activation energy distribution in the Miura-Maki DAEM and the non-uniform weight fraction distribution in the discrete DAEM did capture the weight loss features. Copyright © 2017 Elsevier Ltd. All rights reserved.

  17. Influences of Vehicle Size and Mass and Selected Driver Factors on Odds of Driver Fatality

    PubMed Central

    Padmanaban, Jeya

    2003-01-01

    Research was undertaken to determine vehicle size parameters influencing driver fatality odds, independent of mass, in two-vehicle collisions. Forty vehicle parameters were evaluated for 1,500 vehicle groupings. Logistic regression analyses show driver factors (belt use, age, drinking) collectively contribute more to fatality odds than vehicle factors, and that mass is the most important vehicular parameter influencing fatality odds for all crash configurations. In car crashes, other vehicle parameters with statistical significance had a second order effect compared to mass. In light truck-to-car crashes, “vehicle type-striking vehicle is light truck” was the most important parameter after mass, followed by vehicle height and bumper height, with second order effect. To understand the importance of “vehicle type” variable, further investigation of vehicle “stiffness” and other passenger car/light truck differentiating parameters is warranted. PMID:12941244

  18. Using occupancy modeling and logistic regression to assess the distribution of shrimp species in lowland streams, Costa Rica: Does regional groundwater create favorable habitat?

    USGS Publications Warehouse

    Snyder, Marcia; Freeman, Mary C.; Purucker, S. Thomas; Pringle, Catherine M.

    2016-01-01

    Freshwater shrimps are an important biotic component of tropical ecosystems. However, they can have a low probability of detection when abundances are low. We sampled 3 of the most common freshwater shrimp species, Macrobrachium olfersii, Macrobrachium carcinus, and Macrobrachium heterochirus, and used occupancy modeling and logistic regression models to improve our limited knowledge of distribution of these cryptic species by investigating both local- and landscape-scale effects at La Selva Biological Station in Costa Rica. Local-scale factors included substrate type and stream size, and landscape-scale factors included presence or absence of regional groundwater inputs. Capture rates for 2 of the sampled species (M. olfersii and M. carcinus) were sufficient to compare the fit of occupancy models. Occupancy models did not converge for M. heterochirus, but M. heterochirus had high enough occupancy rates that logistic regression could be used to model the relationship between occupancy rates and predictors. The best-supported models for M. olfersii and M. carcinus included conductivity, discharge, and substrate parameters. Stream size was positively correlated with occupancy rates of all 3 species. High stream conductivity, which reflects the quantity of regional groundwater input into the stream, was positively correlated with M. olfersii occupancy rates. Boulder substrates increased occupancy rate of M. carcinus and decreased the detection probability of M. olfersii. Our models suggest that shrimp distribution is driven by factors that function at local (substrate and discharge) and landscape (conductivity) scales.
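The key difference between the two approaches used in this study is that occupancy models separate the probability a site is occupied (ψ) from the probability the species is detected when present (p), whereas plain logistic regression treats every non-detection as a true absence. A sketch of the detection side (the ψ and p values in the usage note are hypothetical, not estimates from the paper):

```python
def prob_detected(psi, p, k):
    """Probability that a site is occupied AND the species is detected
    at least once in k independent surveys, given occupancy probability
    psi and per-survey detection probability p."""
    return psi * (1.0 - (1.0 - p) ** k)
```

With ψ = 0.6 and p = 0.5 per survey, three surveys still miss an occupied site 12.5% of the time, so a naive presence/absence regression would underestimate occupancy for cryptic, low-abundance species like these shrimps.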

  19. Two Approaches to Using Client Projects in the College Classroom

    ERIC Educational Resources Information Center

    Cooke, Lynne; Williams, Sean

    2004-01-01

    Client projects are an opportunity for universities to create long-lasting, mutually beneficial relationships with businesses through an academic consultancy service. This article discusses the rationale and logistics of two models for conducting such projects. One model, used at Clemson University, is a formal academic consultancy service in…

  20. Global Positioning System (GPS) Precipitable Water in Forecasting Lightning at Spaceport Canaveral

    NASA Technical Reports Server (NTRS)

    Kehrer, Kristen C.; Graf, Brian; Roeder, William

    2006-01-01

This paper evaluates the use of precipitable water (PW) from the Global Positioning System (GPS) in lightning prediction. Additional independent verification of an earlier model is performed. This earlier model used binary logistic regression with four predictor variables optimally selected from a list of 23 candidate predictors: the current precipitable water value for a given time of day, the change in GPS-PW over the past 9 hours, the K-Index, and the electric field mill value. This earlier model was not optimized for any specific forecast interval, but showed promise for 6 hour and 1.5 hour forecasts. Two new models were developed and verified, optimized for two operationally significant forecast intervals. The first model was optimized for the 0.5 hour lightning advisories issued by the 45th Weather Squadron. An additional 1.5 hours was allowed for sensor dwell, communication, calculation, analysis, and advisory decision by the forecaster; therefore, the 0.5 hour advisory model became a 2 hour forecast model for lightning within the 45th Weather Squadron advisory areas. The second model was optimized for major ground processing operations supported by the 45th Weather Squadron, which can require lightning forecasts with a lead-time of up to 7.5 hours. Using the same 1.5 hour lag as in the other new model, this became a 9 hour forecast model for lightning within 37 km (20 NM) of the 45th Weather Squadron advisory areas. The two new models were built using binary logistic regression from a list of 26 candidate predictor variables: the current GPS-PW value, the change of GPS-PW over 0.5 hour increments from 0.5 to 12 hours, and the K-Index. The new 2 hour model found the following four predictors to be statistically significant, listed in decreasing order of contribution to the forecast: the 0.5 hour change in GPS-PW, the 7.5 hour change in GPS-PW, the current GPS-PW value, and the K-Index.
The new 9 hour forecast model found the following five predictors to be statistically significant, listed in decreasing order of contribution to the forecast: the current GPS-PW value, the 8.5 hour change in GPS-PW, the 3.5 hour change in GPS-PW, the 12 hour change in GPS-PW, and the K-Index. In both models, the GPS-PW parameters had better correlation to the lightning forecast than the K-Index, a widely used thunderstorm index. Possible future improvements to this study are discussed.

  1. Blood oxygen level dependent magnetic resonance imaging for detecting pathological patterns in lupus nephritis patients: a preliminary study using a decision tree model.

    PubMed

    Shi, Huilan; Jia, Junya; Li, Dong; Wei, Li; Shang, Wenya; Zheng, Zhenfeng

    2018-02-09

Precise renal histopathological diagnosis guides therapy strategy in patients with lupus nephritis. Blood oxygen level dependent (BOLD) magnetic resonance imaging (MRI) has become an applicable noninvasive technique in renal disease. The current study was performed to explore whether BOLD MRI could contribute to diagnosing the renal pathological pattern. Adult patients with a renal pathological diagnosis of lupus nephritis were recruited for this study. Renal biopsy tissues were assessed based on the ISN/RPS 2003 lupus nephritis classification. BOLD-MRI was used to obtain the functional magnetic resonance parameter R2*. Several functions of the R2* values were calculated and used to construct algorithmic models for renal pathological patterns, and the algorithmic models were compared as to their diagnostic capability. Both histopathology and BOLD MRI were used to examine a total of twelve patients. Renal pathological patterns included five class III (including 3 class III + V) and seven class IV (including 4 class IV + V). Three algorithmic models, including decision tree, line discriminant, and logistic regression, were constructed to distinguish renal pathological patterns of class III and class IV. The sensitivity of the decision tree model was better than that of the line discriminant model (71.87% vs 59.48%, P < 0.001) and inferior to that of the logistic regression model (71.87% vs 78.71%, P < 0.001). The specificity of the decision tree model was equivalent to that of the line discriminant model (63.87% vs 63.73%, P = 0.939) and higher than that of the logistic regression model (63.87% vs 38.0%, P < 0.001). The area under the ROC curve (AUROCC) of the decision tree model was greater than that of the line discriminant model (0.765 vs 0.629, P < 0.001) and the logistic regression model (0.765 vs 0.662, P < 0.001).
BOLD MRI is a useful non-invasive imaging technique for the evaluation of lupus nephritis. Decision tree models constructed using functions of R2* values may facilitate the prediction of renal pathological patterns.

  2. Use of genetic programming, logistic regression, and artificial neural nets to predict readmission after coronary artery bypass surgery.

    PubMed

    Engoren, Milo; Habib, Robert H; Dooner, John J; Schwann, Thomas A

    2013-08-01

As many as 14% of patients undergoing coronary artery bypass surgery are readmitted within 30 days. Readmission is usually the result of morbidity and may lead to death. The purpose of this study is to develop and compare statistical and genetic programming models to predict readmission. Patients were divided into separate Construction and Validation populations. Using 88 variables, logistic regression, genetic programs, and artificial neural nets were used to develop predictive models. Models were first constructed and tested on the Construction population, then validated on the Validation population. Areas under the receiver operator characteristic curves (AU ROC) were used to compare the models. Two hundred and two patients (7.6%) in the 2,644-patient Construction group and 216 (8.0%) of the 2,711-patient Validation group were readmitted within 30 days of CABG surgery. Logistic regression predicted readmission with AU ROC = 0.675 ± 0.021 in the Construction group. Genetic programs significantly improved the accuracy (AU ROC = 0.767 ± 0.001, p < .001). Artificial neural nets were less accurate, with AU ROC = 0.597 ± 0.001 in the Construction group. The predictive accuracy of all three techniques fell in the Validation group. However, the accuracy of genetic programming (AU ROC = 0.654 ± 0.001) was still slightly, though not statistically significantly, better than that of logistic regression (AU ROC = 0.644 ± 0.020, p = .61). Genetic programming and logistic regression provide alternative methods to predict readmission that are similarly accurate.
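The AU ROC figures used to compare the three methods can be computed directly as a rank statistic: the probability that a randomly chosen readmitted patient is scored higher than a randomly chosen non-readmitted one. A minimal sketch with invented example scores:

```python
def auc(labels, scores):
    """Area under the ROC curve via the rank-sum (Mann-Whitney)
    statistic over all positive/negative pairs; ties count as 0.5."""
    pairs = 0
    wins = 0.0
    for li, si in zip(labels, scores):
        if li != 1:
            continue
        for lj, sj in zip(labels, scores):
            if lj != 0:
                continue
            pairs += 1
            if si > sj:
                wins += 1.0
            elif si == sj:
                wins += 0.5
    return wins / pairs
```

`auc` returns 1.0 for a perfect ranking, 0.5 for an uninformative one, and 0.0 for a perfectly inverted one, matching the Mann-Whitney convention.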

  3. Identification and validation of a logistic regression model for predicting serious injuries associated with motor vehicle crashes.

    PubMed

    Kononen, Douglas W; Flannagan, Carol A C; Wang, Stewart C

    2011-01-01

    A multivariate logistic regression model, based upon National Automotive Sampling System Crashworthiness Data System (NASS-CDS) data for calendar years 1999-2008, was developed to predict the probability that a crash-involved vehicle will contain one or more occupants with serious or incapacitating injuries. These vehicles were defined as containing at least one occupant coded with an Injury Severity Score (ISS) of greater than or equal to 15, in planar, non-rollover crash events involving Model Year 2000 and newer cars, light trucks, and vans. The target injury outcome measure was developed by the Centers for Disease Control and Prevention (CDC)-led National Expert Panel on Field Triage in their recent revision of the Field Triage Decision Scheme (American College of Surgeons, 2006). The parameters to be used for crash injury prediction were subsequently specified by the National Expert Panel. Model input parameters included: crash direction (front, left, right, and rear), change in velocity (delta-V), multiple vs. single impacts, belt use, presence of at least one older occupant (≥ 55 years old), presence of at least one female in the vehicle, and vehicle type (car, pickup truck, van, and sport utility). The model was developed using predictor variables that may be readily available, post-crash, from OnStar-like telematics systems. Model sensitivity and specificity were 40% and 98%, respectively, using a probability cutpoint of 0.20. The area under the receiver operator characteristic (ROC) curve for the final model was 0.84. Delta-V (mph), seat belt use and crash direction were the most important predictors of serious injury. Due to the complexity of factors associated with rollover-related injuries, a separate screening algorithm is needed to model injuries associated with this crash mode. Copyright © 2010 Elsevier Ltd. All rights reserved.

  4. Correlation of Body Mass Index and Serum Parameters With Ultrasonographic Grade of Fatty Change in Non-alcoholic Fatty Liver Disease

    PubMed Central

    Abangah, Ghobad; Yousefi, Atefeh; Asadollahi, Rouhangiz; Veisani, Yousef; Rahimifar, Paria; Alizadeh, Sajjad

    2014-01-01

Background: Non-alcoholic fatty liver disease (NAFLD) is a common liver disease in the western population and an expanding disease worldwide. Pathological changes in fatty liver resemble those of alcoholic liver damage and can lead to end-stage liver disease. The prevalence of NAFLD in obese or overweight people is higher than in the general population, and it seems that people with a high Body Mass Index (BMI) or abnormalities in some laboratory tests are more susceptible to severe fatty liver and a high grade of NAFLD on ultrasonography (US). Objectives: This study aimed to evaluate the correlation of BMI and laboratory tests with NAFLD on ultrasonography. Materials and Methods: During a multi-step process, we selected two hundred and thirteen cases from four hundred and eighteen patients with NAFLD. Laboratory tests performed included ALT, AST, FBS, triglyceride and cholesterol levels, hepatitis B surface antigen, hepatitis C antibody, ceruloplasmin, serum iron, TIBC, transferrin saturation, ferritin, AMA, ANA, anti-LKM1, serum protein electrophoresis, TSH, and anti-TTG (IgA). BMI measurement and ultrasonography were performed for all 213 patients, and the data were then analyzed. These parameters and ultrasonography grades were compared using one-way ANOVA. An ordinal logistic regression model was used to estimate the probability of ultrasonography grade. The Statistical Package for the Social Sciences (SPSS, version 16.0) was used for data analysis. Results: Two hundred and thirteen cases, including 140 males and 73 females, were studied. In general, 72.3% of patients were overweight or obese. Post-hoc tests showed that only BMI (P < 0.001) and TG (P < 0.011) among the variables had statistically significant associations with ultrasonography grade (USG), and the ordinal logistic regression model showed that BMI and AST were the best predictors.
Discussion: Our results suggest that in patients with NAFLD, BMI and TG are the most effective factors in the severity of fatty liver disease and ultrasonography grade (USG). BMI can therefore be helpful as a predictor, but AST is not a reliable finding because it changes in many conditions. PMID:24719704

  5. Correlation of Body Mass Index and Serum Parameters With Ultrasonographic Grade of Fatty Change in Non-alcoholic Fatty Liver Disease.

    PubMed

    Abangah, Ghobad; Yousefi, Atefeh; Asadollahi, Rouhangiz; Veisani, Yousef; Rahimifar, Paria; Alizadeh, Sajjad

    2014-01-01

Non-alcoholic fatty liver disease (NAFLD) is a common liver disease in the western population and an expanding disease worldwide. Pathological changes in fatty liver resemble those of alcoholic liver damage and can lead to end-stage liver disease. The prevalence of NAFLD in obese or overweight people is higher than in the general population, and it seems that people with a high Body Mass Index (BMI) or abnormalities in some laboratory tests are more susceptible to severe fatty liver and a high grade of NAFLD on ultrasonography (US). This study aimed to evaluate the correlation of BMI and laboratory tests with NAFLD on ultrasonography. During a multi-step process, we selected two hundred and thirteen cases from four hundred and eighteen patients with NAFLD. Laboratory tests performed included ALT, AST, FBS, triglyceride and cholesterol levels, hepatitis B surface antigen, hepatitis C antibody, ceruloplasmin, serum iron, TIBC, transferrin saturation, ferritin, AMA, ANA, anti-LKM1, serum protein electrophoresis, TSH, and anti-TTG (IgA). BMI measurement and ultrasonography were performed for all 213 patients, and the data were then analyzed. These parameters and ultrasonography grades were compared using one-way ANOVA. An ordinal logistic regression model was used to estimate the probability of ultrasonography grade. The Statistical Package for the Social Sciences (SPSS, version 16.0) was used for data analysis. Two hundred and thirteen cases, including 140 males and 73 females, were studied. In general, 72.3% of patients were overweight or obese. Post-hoc tests showed that only BMI (P < 0.001) and TG (P < 0.011) among the variables had statistically significant associations with ultrasonography grade (USG), and the ordinal logistic regression model showed that BMI and AST were the best predictors. Our results suggest that in patients with NAFLD, BMI and TG are the most effective factors in the severity of fatty liver disease and ultrasonography grade (USG).
BMI can therefore be helpful as a predictor, but AST is not a reliable finding because it changes in many conditions.

  6. Why preferring parametric forecasting to nonparametric methods?

    PubMed

    Jabot, Franck

    2015-05-07

A recent series of papers by Charles T. Perretti and collaborators have shown that nonparametric forecasting methods can outperform parametric methods in noisy nonlinear systems. Such a situation can arise for two main reasons: the instability of parametric inference procedures in chaotic systems, which can lead to biased parameter estimates, and the discrepancy between the real system dynamics and the modeled one, a problem that Perretti and collaborators call "the true model myth". Should ecologists go on using the demanding parametric machinery when trying to forecast the dynamics of complex ecosystems? Or should they rely on the elegant nonparametric approach that appears so promising? It is argued here that ecological forecasting based on parametric models presents two key comparative advantages over nonparametric approaches. First, the likelihood of parametric forecasting failure can be diagnosed thanks to simple Bayesian model checking procedures. Second, when parametric forecasting is diagnosed to be reliable, forecasting uncertainty can be estimated on virtual data generated with the parametric model fitted to the data. In contrast, nonparametric techniques provide forecasts with unknown reliability. This argumentation is illustrated with the simple theta-logistic model that was previously used by Perretti and collaborators to make their point. It should convince ecologists to stick to standard parametric approaches, until methods have been developed to assess the reliability of nonparametric forecasting. Copyright © 2015 Elsevier Ltd. All rights reserved.

  7. A general equation to obtain multiple cut-off scores on a test from multinomial logistic regression.

    PubMed

    Bersabé, Rosa; Rivas, Teresa

    2010-05-01

    The authors derive a general equation to compute multiple cut-offs on a total test score in order to classify individuals into more than two ordinal categories. The equation is derived from the multinomial logistic regression (MLR) model, which is an extension of the binary logistic regression (BLR) model to accommodate polytomous outcome variables. From this analytical procedure, cut-off scores are established at the test score (the predictor variable) at which an individual is as likely to be in category j as in category j+1 of an ordinal outcome variable. The application of the complete procedure is illustrated by an example with data from an actual study on eating disorders. In this example, two cut-off scores on the Eating Attitudes Test (EAT-26) scores are obtained in order to classify individuals into three ordinal categories: asymptomatic, symptomatic and eating disorder. Diagnoses were made from the responses to a self-report (Q-EDD) that operationalises DSM-IV criteria for eating disorders. Alternatives to the MLR model to set multiple cut-off scores are discussed.
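The cut-off described here, the score at which an individual is equally likely to fall in two adjacent categories, has a closed form once the MLR coefficients are estimated: setting the two logits equal and solving for the score. A sketch with hypothetical coefficients (not the EAT-26 values from the study):

```python
def mlr_cutoff(a_j, b_j, a_k, b_k):
    """Test score at which category j and adjacent category k are
    equally likely under a multinomial logistic regression with
    logits log(P_j/P_ref) = a_j + b_j*x and log(P_k/P_ref) = a_k + b_k*x.
    Equating the logits gives x = (a_j - a_k) / (b_k - b_j)."""
    return (a_j - a_k) / (b_k - b_j)
```

For instance, with hypothetical logits -2 + 0.1x for "symptomatic" and -6 + 0.3x for "eating disorder" (both against the asymptomatic reference), the cut-off falls at x = 20, the score where the two category probabilities coincide.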

  8. Predictors of Gleason Score (GS) upgrading on subsequent prostatectomy: a single Institution study in a cohort of patients with GS 6

    PubMed Central

    Mehta, Vikas; Rycyna, Kevin; Baesens, Bart MM; Barkan, Güliz A; Paner, Gladell P; Flanigan, Robert C; Wojcik, Eva M; Venkataraman, Girish

    2012-01-01

Background Biopsy Gleason score (bGS) remains an important prognostic indicator for adverse outcomes in Prostate Cancer (PCA). In the light of recent studies purporting a difference in prognostic outcomes for the subgroups of the GS7 group (primary Gleason pattern 4 vs. 3), upgrading of a bGS of 6 to a GS≥7 has serious implications. We sought to identify pre-operative factors associated with upgrading in a cohort of GS6 patients who underwent prostatectomy. Design We identified 281 cases of GS6 PCA on biopsy with subsequent prostatectomies. Using data on pre-operative variables (age, PSA, biopsy pathology parameters), logistic regression models (LRM) were developed to identify factors that could be used to predict upgrading to GS≥7 on subsequent prostatectomy. A decision tree (DT) was constructed. Results 92 of 281 cases (32.7%) were upgraded on subsequent prostatectomy. LRM identified a model with two variables with statistically significant ability to predict upgrading: pre-biopsy PSA (Odds Ratio 8.66; 2.03-37.49, 95% CI) and highest percentage of cancer at any single biopsy site (Odds Ratio 1.03, 1.01-1.05, 95% CI). This two-parameter model yielded an area under the curve of 0.67. The decision tree was constructed using only 3 leaf nodes, with a test set classification accuracy of 70%. Conclusions A simple model using clinical and biopsy data is able to predict the likelihood of upgrading of GS with an acceptable level of certainty. External validation of these findings, along with development of a nomogram, will aid in better stratifying the cohort of low risk patients based on the GS. PMID:22949931

  9. Preoperative predictive model of recovery of urinary continence after radical prostatectomy.

    PubMed

    Matsushita, Kazuhito; Kent, Matthew T; Vickers, Andrew J; von Bodman, Christian; Bernstein, Melanie; Touijer, Karim A; Coleman, Jonathan A; Laudone, Vincent T; Scardino, Peter T; Eastham, James A; Akin, Oguz; Sandhu, Jaspreet S

    2015-10-01

    To build a predictive model of urinary continence recovery after radical prostatectomy (RP) that incorporates magnetic resonance imaging (MRI) parameters and clinical data. We conducted a retrospective review of data from 2,849 patients who underwent pelvic staging MRI before RP from November 2001 to June 2010. We used logistic regression to evaluate the association between each MRI variable and continence at 6 or 12 months, adjusting for age, body mass index (BMI) and American Society of Anesthesiologists (ASA) score, and then used multivariable logistic regression to create our model. A nomogram was constructed using the multivariable logistic regression models. In all, 68% (1,742/2,559) and 82% (2,205/2,689) regained function at 6 and 12 months, respectively. In the base model, age, BMI and ASA score were significant predictors of continence at 6 or 12 months on univariate analysis (P < 0.005). Among the preoperative MRI measurements, membranous urethral length, which showed great significance, was incorporated into the base model to create the full model. For continence recovery at 6 months, the addition of membranous urethral length increased the area under the curve (AUC) to 0.664 for the validation set, an increase of 0.064 over the base model. For continence recovery at 12 months, the AUC was 0.674, an increase of 0.085 over the base model. Using our model, the likelihood of continence recovery increases with membranous urethral length and decreases with age, BMI and ASA score. This model could be used for patient counselling and for the identification of patients at high risk for urinary incontinence in whom to study changes in operative technique that improve urinary function after RP. © 2015 The Authors BJU International © 2015 BJU International Published by John Wiley & Sons Ltd.

  10. Habitat features and predictive habitat modeling for the Colorado chipmunk in southern New Mexico

    USGS Publications Warehouse

    Rivieccio, M.; Thompson, B.C.; Gould, W.R.; Boykin, K.G.

    2003-01-01

    Two subspecies of Colorado chipmunk (state threatened and federal species of concern) occur in southern New Mexico: Tamias quadrivittatus australis in the Organ Mountains and T. q. oscuraensis in the Oscura Mountains. We developed a GIS model of potentially suitable habitat based on vegetation and elevation features, evaluated site classifications of the GIS model, and determined vegetation and terrain features associated with chipmunk occurrence. We compared GIS model classifications with actual vegetation and elevation features measured at 37 sites. At 60 sites we measured 18 habitat variables regarding slope, aspect, tree species, shrub species, and ground cover. We used logistic regression to analyze habitat variables associated with chipmunk presence/absence. All 37 sample sites (100%; 28 predicted suitable, 9 predicted unsuitable) were classified correctly by the GIS model regarding elevation and vegetation. For the 28 sites predicted suitable by the GIS model, 18 sites (64%) appeared visually suitable based on habitat variables selected from logistic regression analyses, of which 10 sites (36%) were specifically predicted as suitable habitat via logistic regression. We detected chipmunks at 70% of sites deemed suitable via the logistic regression models. Shrub cover, tree density, plant proximity, presence of logs, and presence of rock outcrop were retained in the logistic model for the Oscura Mountains; litter, shrub cover, and grass cover were retained in the logistic model for the Organ Mountains. Evaluation of predictive models illustrates the need for multi-stage analyses to best judge performance. Microhabitat analyses indicate prospective needs for different management strategies between the subspecies. Sensitivities of each population of the Colorado chipmunk to natural and prescribed fire suggest that partial burnings of areas inhabited by Colorado chipmunks in southern New Mexico may be beneficial. These partial burnings may later help avoid a fire that could substantially reduce chipmunk habitat over a mountain range.

  11. A Comparison of Kernel Equating and Traditional Equipercentile Equating Methods and the Parametric Bootstrap Methods for Estimating Standard Errors in Equipercentile Equating

    ERIC Educational Resources Information Center

    Choi, Sae Il

    2009-01-01

    This study used simulation (a) to compare the kernel equating method to traditional equipercentile equating methods under the equivalent-groups (EG) design and the nonequivalent-groups with anchor test (NEAT) design and (b) to apply the parametric bootstrap method for estimating standard errors of equating. A two-parameter logistic item response…
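    The abstract breaks off at the two-parameter logistic (2PL) item response model it relies on. As a reference point, here is a minimal sketch of the standard 2PL item response function (function and parameter names are illustrative, not from the study):

```python
import math

def irf_2pl(theta, a, b):
    """Two-parameter logistic (2PL) item response function: probability
    of a correct response at ability theta, for an item with
    discrimination a and difficulty b."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))
```

    At theta = b the probability is exactly 0.5; the discrimination a sets the slope of the curve at that point.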

  12. Short National Early Warning Score - Developing a Modified Early Warning Score.

    PubMed

    Luís, Leandro; Nunes, Carla

    2017-12-11

    Early Warning Score (EWS) systems have been developed to detect clinical deterioration in hospital patients. Many studies show that the National Early Warning Score (NEWS) performs well in discriminating survival from death in acute medical and surgical hospital wards. NEWS is validated for Portugal and is available for use. A simpler EWS system may help to reduce the risk of error, as well as increase clinician compliance with the tool. The aim of the study was to evaluate whether a simplified NEWS model would improve use and data collection. We evaluated the ability of single and aggregated parameters from the NEWS model to detect patients' clinical deterioration in the 24 h prior to an outcome. There were two possible outcomes: survival vs unanticipated intensive care unit admission or death. We used binary logistic regression models and receiver operating characteristic (ROC) curves to evaluate the parameters' performance in discriminating between the outcomes for a sample of patients from 6 Portuguese hospital wards. NEWS presented an excellent discriminating capability (area under the ROC curve (AUCROC) = 0.944). The temperature and systolic blood pressure (SBP) parameters did not contribute significantly to the model. We developed two different models, one without temperature, and the other without both temperature and SBP (M2). Both models had an excellent discriminating capability (AUCROC: 0.965; 0.903, respectively) and a good predictive power at the optimum threshold of the ROC curve. The 3 models revealed similar discriminant capabilities. Although the use of SBP is not clearly evident in the identification of clinical deterioration, it is recognized as an important vital sign. We recommend the use of the first new model, as its simplicity may help to improve adherence and use by health care workers. Copyright © 2017 Australian College of Critical Care Nurses Ltd. Published by Elsevier Ltd. All rights reserved.

  13. The relationship between the C-statistic of a risk-adjustment model and the accuracy of hospital report cards: a Monte Carlo Study.

    PubMed

    Austin, Peter C; Reeves, Mathew J

    2013-03-01

    Hospital report cards, in which outcomes following the provision of medical or surgical care are compared across health care providers, are being published with increasing frequency. Essential to the production of these reports is risk-adjustment, which allows investigators to account for differences in the distribution of patient illness severity across different hospitals. Logistic regression models are frequently used for risk adjustment in hospital report cards. Many applied researchers use the c-statistic (equivalent to the area under the receiver operating characteristic curve) of the logistic regression model as a measure of the credibility and accuracy of hospital report cards. To determine the relationship between the c-statistic of a risk-adjustment model and the accuracy of hospital report cards. Monte Carlo simulations were used to examine this issue. We examined the influence of 3 factors on the accuracy of hospital report cards: the c-statistic of the logistic regression model used for risk adjustment, the number of hospitals, and the number of patients treated at each hospital. The parameters used to generate the simulated datasets came from analyses of patients hospitalized with a diagnosis of acute myocardial infarction in Ontario, Canada. The c-statistic of the risk-adjustment model had, at most, a very modest impact on the accuracy of hospital report cards, whereas the number of patients treated at each hospital had a much greater impact. The c-statistic of a risk-adjustment model should not be used to assess the accuracy of a hospital report card.
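    The c-statistic discussed above has a direct pairwise interpretation: the probability that a randomly chosen case receives a higher risk score than a randomly chosen non-case. A small sketch of that computation (a hypothetical helper, not code from the study):

```python
def c_statistic(scores, labels):
    """c-statistic (area under the ROC curve), computed directly as the
    proportion of concordant case/non-case score pairs, with ties
    counted as one half."""
    cases = [s for s, y in zip(scores, labels) if y == 1]
    controls = [s for s, y in zip(scores, labels) if y == 0]
    concordant = 0.0
    for c in cases:
        for d in controls:
            if c > d:
                concordant += 1.0
            elif c == d:
                concordant += 0.5
    return concordant / (len(cases) * len(controls))
```

    Counting ties as 0.5 makes this identical to the usual area under the ROC curve for a finite sample.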

  14. The relationship between the c-statistic of a risk-adjustment model and the accuracy of hospital report cards: A Monte Carlo study

    PubMed Central

    Austin, Peter C.; Reeves, Mathew J.

    2015-01-01

    Background Hospital report cards, in which outcomes following the provision of medical or surgical care are compared across health care providers, are being published with increasing frequency. Essential to the production of these reports is risk-adjustment, which allows investigators to account for differences in the distribution of patient illness severity across different hospitals. Logistic regression models are frequently used for risk-adjustment in hospital report cards. Many applied researchers use the c-statistic (equivalent to the area under the receiver operating characteristic curve) of the logistic regression model as a measure of the credibility and accuracy of hospital report cards. Objectives To determine the relationship between the c-statistic of a risk-adjustment model and the accuracy of hospital report cards. Research Design Monte Carlo simulations were used to examine this issue. We examined the influence of three factors on the accuracy of hospital report cards: the c-statistic of the logistic regression model used for risk-adjustment, the number of hospitals, and the number of patients treated at each hospital. The parameters used to generate the simulated datasets came from analyses of patients hospitalized with a diagnosis of acute myocardial infarction in Ontario, Canada. Results The c-statistic of the risk-adjustment model had, at most, a very modest impact on the accuracy of hospital report cards, whereas the number of patients treated at each hospital had a much greater impact. Conclusions The c-statistic of a risk-adjustment model should not be used to assess the accuracy of a hospital report card. PMID:23295579

  15. Effect of extreme data loss on heart rate signals quantified by entropy analysis

    NASA Astrophysics Data System (ADS)

    Li, Yu; Wang, Jun; Li, Jin; Liu, Dazhao

    2015-02-01

    The phenomenon of data loss always occurs in the analysis of large databases, and maintaining the stability of analysis results in the event of data loss is very important. In this paper, we used a segmentation approach to generate a synthetic signal in which segments are randomly wiped from the data according to the Gaussian distribution and the exponential distribution of the original signal. The logistic map is then used for verification. Finally, two methods of measuring entropy, base-scale entropy and approximate entropy, are comparatively analyzed. Our results show the following: (1) Two key parameters, the percentage and the average length of removed data segments, can change the sequence complexity according to logistic map testing. (2) The calculation results have preferable stability for base-scale entropy analysis, which is not sensitive to data loss. (3) The loss percentage of HRV signals should be kept below p = 30%, which can provide useful information in clinical applications.
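    The verification step above pairs a chaotic logistic map series with random deletion of data segments. A rough sketch of that setup (segment lengths drawn from an exponential distribution as an illustrative stand-in for the paper's scheme; names and defaults are hypothetical):

```python
import random

def logistic_map(n, r=4.0, x0=0.3):
    """Chaotic logistic map series x_{k+1} = r * x_k * (1 - x_k)."""
    xs = [x0]
    for _ in range(n - 1):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

def remove_segments(signal, loss_fraction, mean_len=10, seed=1):
    """Delete randomly placed segments (exponentially distributed
    lengths) until `loss_fraction` of the samples are gone, then
    concatenate the remainder."""
    rng = random.Random(seed)
    keep = list(signal)
    target = int(len(signal) * loss_fraction)
    removed = 0
    while removed < target and len(keep) > mean_len:
        seg = max(1, int(rng.expovariate(1.0 / mean_len)))
        seg = min(seg, len(keep) - 1, target - removed)
        start = rng.randrange(len(keep) - seg)
        del keep[start:start + seg]
        removed += seg
    return keep
```

    Base-scale or approximate entropy would then be computed on the full and the degraded series and compared across loss percentages.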

  16. Breast lesion characterization using whole-lesion histogram analysis with stretched-exponential diffusion model.

    PubMed

    Liu, Chunling; Wang, Kun; Li, Xiaodan; Zhang, Jine; Ding, Jie; Spuhler, Karl; Duong, Timothy; Liang, Changhong; Huang, Chuan

    2018-06-01

    Diffusion-weighted imaging (DWI) has been studied in breast imaging and can provide more information about diffusion, perfusion and other physiological interests than standard pulse sequences. The stretched-exponential model has previously been shown to be more reliable than conventional DWI techniques, but different diagnostic sensitivities were found from study to study. This work investigated the characteristics of whole-lesion histogram parameters derived from the stretched-exponential diffusion model for benign and malignant breast lesions, compared them with the conventional apparent diffusion coefficient (ADC), and further determined which histogram metrics can best be used to differentiate malignant from benign lesions. This was a prospective study. Seventy females were included in the study. Multi-b-value DWI was performed on a 1.5T scanner. Histogram parameters of whole lesions for the distributed diffusion coefficient (DDC), heterogeneity index (α), and ADC were calculated by two radiologists and compared among benign lesions, ductal carcinoma in situ (DCIS), and invasive carcinoma confirmed by pathology. Nonparametric tests were performed for comparisons among invasive carcinoma, DCIS, and benign lesions. Comparisons of receiver operating characteristic (ROC) curves were performed to show the ability to discriminate malignant from benign lesions. The majority of histogram parameters (mean/min/max, skewness/kurtosis, 10th-90th percentile values) from DDC, α, and ADC were significantly different among invasive carcinoma, DCIS, and benign lesions. DDC10% (area under the curve [AUC] = 0.931), ADC10% (AUC = 0.893), and αmean (AUC = 0.787) were found to be the best metrics in differentiating benign from malignant tumors among all histogram parameters derived from DDC, ADC, and α, respectively. The combination of DDC10% and αmean, using logistic regression, yielded the highest sensitivity (90.2%) and specificity (95.5%). DDC10% and αmean derived from the stretched-exponential model provide more information and better diagnostic performance in differentiating malignancy from benign lesions than ADC parameters derived from a monoexponential model. Level of Evidence: 2. Technical Efficacy: Stage 2. J. Magn. Reson. Imaging 2018;47:1701-1710. © 2017 International Society for Magnetic Resonance in Medicine.
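    The histogram metrics compared above (e.g. the 10th-percentile DDC) reduce a whole-lesion voxel distribution to a single number. A minimal percentile sketch with linear interpolation (illustrative only; the study's analysis software is not specified):

```python
def percentile(values, p):
    """Percentile (0 <= p <= 100) of a flat list of voxel values,
    with linear interpolation between adjacent order statistics --
    the kind of whole-lesion histogram metric the study compares."""
    xs = sorted(values)
    if len(xs) == 1:
        return xs[0]
    rank = (p / 100.0) * (len(xs) - 1)   # fractional position in the sorted list
    lo = int(rank)
    hi = min(lo + 1, len(xs) - 1)
    frac = rank - lo
    return xs[lo] * (1.0 - frac) + xs[hi] * frac
```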

  17. Testing item response theory invariance of the standardized Quality-of-life Disease Impact Scale (QDIS(®)) in acute coronary syndrome patients: differential functioning of items and test.

    PubMed

    Deng, Nina; Anatchkova, Milena D; Waring, Molly E; Han, Kyung T; Ware, John E

    2015-08-01

    The Quality-of-life (QOL) Disease Impact Scale (QDIS(®)) standardizes the content and scoring of QOL impact attributed to different diseases using item response theory (IRT). This study examined the IRT invariance of the QDIS-standardized IRT parameters in an independent sample. The differential functioning of items and test (DFIT) of a static short-form (QDIS-7) was examined across two independent sources: patients hospitalized for acute coronary syndrome (ACS) in the TRACE-CORE study (N = 1,544) and chronically ill US adults in the QDIS standardization sample. "ACS-specific" IRT item parameters were calibrated and linearly transformed for comparison with the "standardized" IRT item parameters. Differences in IRT model-expected item, scale and theta scores were examined. The DFIT results were also compared in a standard logistic regression differential item functioning analysis. Item parameters estimated in the ACS sample showed lower discrimination parameters than the standardized discrimination parameters, but only small differences were found for the threshold parameters. In DFIT, results on the non-compensatory differential item functioning index (range 0.005-0.074) were all below the threshold of 0.096. Item differences were further canceled out at the scale level. IRT-based theta scores for ACS patients using standardized and ACS-specific item parameters were highly correlated (r = 0.995, root-mean-square difference = 0.09). Using standardized item parameters, ACS patients scored one-half standard deviation higher (indicating greater QOL impact) compared to chronically ill adults in the standardization sample. The study showed sufficient IRT invariance to warrant the use of standardized IRT scoring of QDIS-7 for studies comparing the QOL impact attributed to acute coronary disease and other chronic conditions.

  18. Modelling aspects regarding the control in 13C isotope separation column

    NASA Astrophysics Data System (ADS)

    Boca, M. L.

    2016-08-01

    Carbon is the fourth most abundant chemical element in the world, with two stable isotopes and one radioactive isotope. The 13C isotope, with a natural abundance of 1.1%, plays an important role in numerous applications, such as the study of changes in human metabolism, molecular structure studies, non-invasive respiratory tests, Alzheimer's tests, and the effects of air pollution and global warming on plants [9]. A manufacturing control system manages the internal logistics in a production system: it determines the routings of product instances, the assignment of workers and components, and the starting of processes on not-yet-finished product instances. Manufacturing control does not control the manufacturing processes themselves, but has to cope with the consequences of the processing results (e.g. the routing of products to a repair station). In this research, several UML (Unified Modelling Language) diagrams were developed in the StarUML program to model the 13C isotope separation column. Because separation is a critical process requiring good control and supervision, the critical parameters in the column, temperature and pressure, were controlled using PLCs (programmable logic controllers), and graphical analyses were performed to detect critical situations that can affect the separation process. The main parameters that need to be controlled are: the liquid nitrogen (N2) level in the condenser; the electrical power supplied to the boiler; and the vacuum pressure.

  19. New geospatial approaches for efficiently mapping forest biomass logistics at high resolution over large areas

    Treesearch

    John Hogland; Nathaniel Anderson; Woodam Chung

    2018-01-01

    Adequate biomass feedstock supply is an important factor in evaluating the financial feasibility of alternative site locations for bioenergy facilities and for maintaining profitability once a facility is built. We used newly developed spatial analysis and logistics software to model the variables influencing feedstock supply and to estimate and map two components of...

  20. Two underestimated threats in food transportation: mould and acceleration

    PubMed Central

    Janssen, S.; Pankoke, I.; Klus, K.; Schmitt, K.; Stephan, U.; Wöllenstein, J.

    2014-01-01

    Two important parameters are often neglected in the monitoring of perishable goods during transport: mould contamination of fresh food and the influence of acceleration or vibration on the quality of a product. In this opinion paper, we assert that research needs to focus on these two topics in the context of intelligent logistics. Further, the technical possibilities for future measurement systems are discussed. By measuring taste deviations, we verified the effect of different vibration frequencies on the quality of beer. The practical importance is shown by examining transport routes and market shares. The general feasibility of a mobile mould detection system is established by examining the measurement resolution of semiconductor sensors for mould-related gases. Furthermore, as an alternative solution, we present a concept for a miniaturized and automated culture-medium-based system. Although there is a lack of related research to date, new efforts can make a vital contribution to the reduction of losses in the logistic chains for several products. PMID:24797139

  1. Two underestimated threats in food transportation: mould and acceleration.

    PubMed

    Janssen, S; Pankoke, I; Klus, K; Schmitt, K; Stephan, U; Wöllenstein, J

    2014-06-13

    Two important parameters are often neglected in the monitoring of perishable goods during transport: mould contamination of fresh food and the influence of acceleration or vibration on the quality of a product. In this opinion paper, we assert that research needs to focus on these two topics in the context of intelligent logistics. Further, the technical possibilities for future measurement systems are discussed. By measuring taste deviations, we verified the effect of different vibration frequencies on the quality of beer. The practical importance is shown by examining transport routes and market shares. The general feasibility of a mobile mould detection system is established by examining the measurement resolution of semiconductor sensors for mould-related gases. Furthermore, as an alternative solution, we present a concept for a miniaturized and automated culture-medium-based system. Although there is a lack of related research to date, new efforts can make a vital contribution to the reduction of losses in the logistic chains for several products.

  2. The weighted priors approach for combining expert opinions in logistic regression experiments

    DOE PAGES

    Quinlan, Kevin R.; Anderson-Cook, Christine M.; Myers, Kary L.

    2017-04-24

    When modeling the reliability of a system or component, it is not uncommon for more than one expert to provide very different prior estimates of the expected reliability as a function of an explanatory variable such as age or temperature. Our goal in this paper is to incorporate all information from the experts when choosing a design about which units to test. Bayesian design of experiments has been shown to be very successful for generalized linear models, including logistic regression models. We use this approach to develop methodology for the case where there are several potentially non-overlapping priors under consideration. While multiple priors have been used for analysis in the past, they have never been used in a design context. The Weighted Priors method performs well for a broad range of true underlying model parameter choices and is more robust when compared to other reasonable design choices. Finally, we illustrate the method through multiple scenarios and a motivating example. Additional figures for this article are available in the online supplementary information.

  3. The weighted priors approach for combining expert opinions in logistic regression experiments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Quinlan, Kevin R.; Anderson-Cook, Christine M.; Myers, Kary L.

    When modeling the reliability of a system or component, it is not uncommon for more than one expert to provide very different prior estimates of the expected reliability as a function of an explanatory variable such as age or temperature. Our goal in this paper is to incorporate all information from the experts when choosing a design about which units to test. Bayesian design of experiments has been shown to be very successful for generalized linear models, including logistic regression models. We use this approach to develop methodology for the case where there are several potentially non-overlapping priors under consideration. While multiple priors have been used for analysis in the past, they have never been used in a design context. The Weighted Priors method performs well for a broad range of true underlying model parameter choices and is more robust when compared to other reasonable design choices. Finally, we illustrate the method through multiple scenarios and a motivating example. Additional figures for this article are available in the online supplementary information.

  4. Computerized adaptive testing: the capitalization on chance problem.

    PubMed

    Olea, Julio; Barrada, Juan Ramón; Abad, Francisco J; Ponsoda, Vicente; Cuevas, Lara

    2012-03-01

    This paper describes several simulation studies that examine the effects of capitalization on chance on item selection and ability estimation in computerized adaptive testing (CAT), employing the 3-parameter logistic model. In order to generate different estimation errors for the item parameters, the calibration sample size was manipulated (N = 500, 1000 and 2000 subjects), as was the ratio of item bank size to test length (banks of 197 and 788 items, test lengths of 20 and 40 items), both in a CAT and in a random test. Results show that capitalization on chance is particularly serious in CAT, as revealed by the large positive bias found in the small-sample calibration conditions. For broad ranges of theta, the overestimation of the precision (asymptotic Se) reaches levels of 40%, something that does not occur with the RMSE (theta). The problem is greater as the item bank size to test length ratio increases. Potential solutions were tested in a second study, where two exposure control methods were incorporated into the item selection algorithm. Some alternative solutions are discussed.
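    The bias mechanism described above can be reproduced in a few lines: when item parameters are estimated with error and the test then picks the seemingly best items, the selected estimates are optimistic even though no item is truly better. A toy sketch (all numbers are hypothetical, not the study's design):

```python
import random

def selected_bias(n_items=200, true_a=1.0, noise_sd=0.3, pick=20, seed=7):
    """Capitalization on chance in miniature: every item truly has
    discrimination `true_a`, but estimates carry Gaussian error.
    Choosing the `pick` items with the highest *estimated* a yields an
    optimistic average -- the selection capitalizes on noise."""
    rng = random.Random(seed)
    est = [true_a + rng.gauss(0.0, noise_sd) for _ in range(n_items)]
    best = sorted(est, reverse=True)[:pick]
    return sum(best) / pick - true_a   # positive bias
```

    The bias vanishes when the estimation error is zero and grows with the noise level and with the ratio of bank size to items selected, consistent with the pattern the study reports for small calibration samples and large banks.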

  5. Filtering data from the collaborative initial glaucoma treatment study for improved identification of glaucoma progression.

    PubMed

    Schell, Greggory J; Lavieri, Mariel S; Stein, Joshua D; Musch, David C

    2013-12-21

    Open-angle glaucoma (OAG) is a prevalent, degenerative ocular disease which can lead to blindness without proper clinical management. The tests used to assess disease progression are susceptible to process and measurement noise. The aim of this study was to develop a methodology which accounts for the inherent noise in the data and improves the identification of significant disease progression. Longitudinal observations from the Collaborative Initial Glaucoma Treatment Study (CIGTS) were used to parameterize and validate a Kalman filter model and logistic regression function. The Kalman filter estimates the true value of biomarkers associated with OAG and forecasts future values of these variables. We develop two logistic regression models via generalized estimating equations (GEE) for calculating the probability of experiencing significant OAG progression: one model based on the raw measurements from CIGTS and another model based on the Kalman filter estimates of the CIGTS data. Receiver operating characteristic (ROC) curves and associated area under the ROC curve (AUC) estimates are calculated using cross-fold validation. The logistic regression model developed using Kalman filter estimates as data input achieves higher sensitivity and specificity than the model developed using raw measurements. The mean AUC for the Kalman filter-based model is 0.961, while the mean AUC for the raw measurements model is 0.889. Hence, using the probability function generated via Kalman filter estimates and GEE for logistic regression, we are able to more accurately classify patients and instances as experiencing significant OAG progression. A Kalman filter approach for estimating the true value of OAG biomarkers resulted in data input which improved the accuracy of a logistic regression classification model compared to a model using raw measurements as input. This methodology accounts for process and measurement noise to enable improved discrimination between progression and nonprogression in chronic diseases.
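    The core idea, smoothing noisy biomarker measurements before classification, can be illustrated with a scalar Kalman filter under a random-walk state model (a minimal sketch; the study's multivariate model and parameters are not reproduced here):

```python
def kalman_filter_1d(measurements, q=0.01, r=1.0, x0=0.0, p0=1.0):
    """Minimal scalar Kalman filter assuming a random-walk state model.
    Returns filtered estimates of the underlying level of a noisy series.
    q is the process-noise variance, r the measurement-noise variance."""
    x, p = x0, p0
    estimates = []
    for z in measurements:
        p = p + q                 # predict: the state is a random walk
        k = p / (p + r)           # Kalman gain
        x = x + k * (z - x)       # update with the new measurement
        p = (1.0 - k) * p
        estimates.append(x)
    return estimates
```

    The filtered series, rather than the raw one, would then feed the logistic regression classifier.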

  6. Using GA-Ridge regression to select hydro-geological parameters influencing groundwater pollution vulnerability.

    PubMed

    Ahn, Jae Joon; Kim, Young Min; Yoo, Keunje; Park, Joonhong; Oh, Kyong Joo

    2012-11-01

    For groundwater conservation and management, it is important to accurately assess groundwater pollution vulnerability. This study proposed an integrated model using ridge regression and a genetic algorithm (GA) to effectively select the major hydro-geological parameters influencing groundwater pollution vulnerability in an aquifer. The GA-Ridge regression method determined that depth to water, net recharge, topography, and the impact of vadose zone media were the hydro-geological parameters that influenced trichloroethene pollution vulnerability in a Korean aquifer. When using these selected hydro-geological parameters, the accuracy was improved for various statistical nonlinear and artificial intelligence (AI) techniques, such as multinomial logistic regression, decision trees, artificial neural networks, and case-based reasoning. These results provide a proof of concept that the GA-Ridge regression is effective at determining influential hydro-geological parameters for the pollution vulnerability of an aquifer, and in turn, improves the AI performance in assessing groundwater pollution vulnerability.

  7. Hyperbolastic growth models: theory and application

    PubMed Central

    Tabatabai, Mohammad; Williams, David Keith; Bursac, Zoran

    2005-01-01

    Background Mathematical models describing growth kinetics are very important for predicting many biological phenomena such as tumor volume, speed of disease progression, and determination of an optimal radiation and/or chemotherapy schedule. Growth models such as the logistic, Gompertz, Richards, and Weibull have been extensively studied and applied to a wide range of medical and biological studies. We introduce a class of three- and four-parameter models called "hyperbolastic models" for accurately predicting and analyzing self-limited growth behavior that occurs, for example, in tumors. To illustrate the application and utility of these models and to gain a more complete understanding of them, we apply them to two sets of data considered in previously published literature. Results The results indicate that volumetric tumor growth follows the principle of the hyperbolastic growth model type III, and in both applications at least one of the newly proposed models provides a better fit to the data than the classical models used for comparison. Conclusion We have developed a new family of growth models that predict the volumetric growth behavior of multicellular tumor spheroids with a high degree of accuracy. We strongly believe that the family of hyperbolastic models can be a valuable predictive tool in many areas of biomedical and epidemiological research such as cancer or stem cell growth and infectious disease outbreaks. PMID:15799781

  8. Evolution and revolution: gauging the impact of technological and technical innovation on Olympic performance.

    PubMed

    Balmer, Nigel; Pleasence, Pascoe; Nevill, Alan

    2012-01-01

    A number of studies have pointed to a plateauing of athletic performance, with the suggestion that further improvements will need to be driven by revolutions in technology or technique. In the present study, we examine post-war men's Olympic performance in jumping events (pole vault, long jump, high jump, triple jump) to determine whether performance has indeed plateaued and to present techniques, derived from models of human growth, for assessing the impact of technological and technical innovation over time (logistic and double logistic models of growth). Significantly, two of the events involve well-documented changes in technology (pole material in pole vault) or technique (the Fosbury Flop in high jump), while the other two do not. We find that in all four cases, performance appears to have plateaued and that no further "general" improvement should be expected. In the case of high jump, the double logistic model provides a convenient method for modelling and quantifying a performance intervention (in this case the Fosbury Flop). However, some shortcomings are revealed for pole vault, where evolutionary post-war improvements and innovation (fibre glass poles) were concurrent, preventing their separate identification in the model. In all four events, it is argued that further general growth in performance will indeed need to rely predominantly on technological or technical innovation.
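    The double logistic growth model used to quantify the Fosbury Flop intervention is the sum of two logistic components, one for gradual evolutionary improvement and one for the step-like effect of an innovation. A sketch of the curve (parameter names are illustrative, not the paper's notation):

```python
import math

def double_logistic(t, a1, k1, m1, a2, k2, m2):
    """Double logistic growth curve: the sum of two logistic components
    with asymptotes a1 and a2, growth rates k1 and k2, and midpoints
    m1 and m2 (e.g. gradual post-war improvement plus a later
    technique-driven step)."""
    first = a1 / (1.0 + math.exp(-k1 * (t - m1)))
    second = a2 / (1.0 + math.exp(-k2 * (t - m2)))
    return first + second
```

    Fitting both components separately is what lets the model attribute part of the total gain to the intervention; when the two components overlap in time, as for fibre-glass poles, they cannot be separately identified.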

  9. Comparison of naïve Bayes and logistic regression for computer-aided diagnosis of breast masses using ultrasound imaging

    NASA Astrophysics Data System (ADS)

    Cary, Theodore W.; Cwanger, Alyssa; Venkatesh, Santosh S.; Conant, Emily F.; Sehgal, Chandra M.

    2012-03-01

    This study compares the performance of two proven but very different machine learners, Naïve Bayes and logistic regression, for differentiating malignant and benign breast masses using ultrasound imaging. Ultrasound images of 266 masses were analyzed quantitatively for shape, echogenicity, margin characteristics, and texture features. These features, along with patient age, race, and mammographic BI-RADS category, were used to train Naïve Bayes and logistic regression classifiers to diagnose lesions as malignant or benign. ROC analysis was performed using all of the features and using only a subset that maximized information gain. Performance was determined by the area under the ROC curve, Az, obtained from leave-one-out cross-validation. Naïve Bayes showed significant variation (Az 0.733 +/- 0.035 to 0.840 +/- 0.029, P < 0.002) with the choice of features, but the performance of logistic regression was relatively unchanged under feature selection (Az 0.839 +/- 0.029 to 0.859 +/- 0.028, P = 0.605). Out of 34 features, a subset of 6 gave the highest information gain: brightness difference, margin sharpness, depth-to-width, mammographic BI-RADS, age, and race. The probabilities of malignancy determined by Naïve Bayes and logistic regression after feature selection showed significant correlation (R2 = 0.87, P < 0.0001). The diagnostic performance of Naïve Bayes and logistic regression can be comparable, but logistic regression is more robust. Since probability of malignancy cannot be measured directly, high correlation between the probabilities derived from two basic but dissimilar models increases confidence in the predictive power of machine learning models for characterizing solid breast masses on ultrasound.

  10. Comparison of Logistic Regression and Artificial Neural Network in Low Back Pain Prediction: Second National Health Survey

    PubMed Central

    Parsaeian, M; Mohammad, K; Mahmoudi, M; Zeraati, H

    2012-01-01

    Background: The purpose of this investigation was to empirically compare the predictive ability of an artificial neural network with that of logistic regression for the prediction of low back pain. Methods: Data from the second national health survey were considered in this investigation. These data include information on low back pain and its associated risk factors among Iranian people aged 15 years and older. Artificial neural network and logistic regression models were developed on a training set of 17294 records and validated on a test set of 17295 records. The Hosmer and Lemeshow recommendation for model selection was used in fitting the logistic regression. A three-layer perceptron with 9 input, 3 hidden, and 1 output neurons was employed. The efficiency of the two models was compared by receiver operating characteristic analysis, root mean square error, and -2 log-likelihood criteria. Results: The area under the ROC curve (SE), root mean square error, and -2 log-likelihood of the logistic regression were 0.752 (0.004), 0.3832, and 14769.2, respectively. The corresponding values for the artificial neural network were 0.754 (0.004), 0.3770, and 14757.6. Conclusions: On all three criteria, the artificial neural network performed better than logistic regression. Although the difference is statistically significant, it does not appear to be clinically significant. PMID:23113198
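
    A comparable pipeline, with the paper's three-layer 9-3-1 perceptron stood up against logistic regression, might look as follows. The data are synthetic, so the AUC values will not reproduce the survey's 0.752 vs. 0.754:

```python
# Sketch: logistic regression vs. a three-layer perceptron (9 inputs,
# 3 hidden neurons, 1 output) compared by ROC area on a held-out test set.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=2000, n_features=9, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

models = {
    "logistic regression": LogisticRegression(max_iter=1000),
    "perceptron 9-3-1": MLPClassifier(hidden_layer_sizes=(3,),
                                      max_iter=2000, random_state=0),
}
auc = {}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    auc[name] = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
    print(f"{name}: AUC = {auc[name]:.3f}")
```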

  11. Comparison of logistic regression and artificial neural network in low back pain prediction: second national health survey.

    PubMed

    Parsaeian, M; Mohammad, K; Mahmoudi, M; Zeraati, H

    2012-01-01

    The purpose of this investigation was to empirically compare the predictive ability of an artificial neural network with that of logistic regression for the prediction of low back pain. Data from the second national health survey were considered in this investigation. These data include information on low back pain and its associated risk factors among Iranian people aged 15 years and older. Artificial neural network and logistic regression models were developed on a training set of 17294 records and validated on a test set of 17295 records. The Hosmer and Lemeshow recommendation for model selection was used in fitting the logistic regression. A three-layer perceptron with 9 input, 3 hidden, and 1 output neurons was employed. The efficiency of the two models was compared by receiver operating characteristic analysis, root mean square error, and -2 log-likelihood criteria. The area under the ROC curve (SE), root mean square error, and -2 log-likelihood of the logistic regression were 0.752 (0.004), 0.3832, and 14769.2, respectively. The corresponding values for the artificial neural network were 0.754 (0.004), 0.3770, and 14757.6. On all three criteria, the artificial neural network performed better than logistic regression. Although the difference is statistically significant, it does not appear to be clinically significant.

  12. Comparison of multinomial logistic regression and logistic regression: which is more efficient in allocating land use?

    NASA Astrophysics Data System (ADS)

    Lin, Yingzhi; Deng, Xiangzheng; Li, Xing; Ma, Enjun

    2014-12-01

    Spatially explicit simulation of land use change is the basis for estimating the effects of land use and cover change on energy fluxes, ecology and the environment. At the pixel level, logistic regression is one of the most common approaches used in spatially explicit land use allocation models to determine the relationship between land use and its causal factors in driving land use change, and thereby to evaluate land use suitability. However, these models have a drawback in that they do not determine/allocate land use based on the direct relationship between land use change and its driving factors. Consequently, a multinomial logistic regression method was introduced to address this flaw, and thereby judge the suitability of a type of land use in any given pixel, in a case study area of Jiangxi Province, China. A comparison of the two regression methods indicated that the proportion of correctly allocated pixels using multinomial logistic regression was 92.98%, 8.47 percentage points higher than that obtained using binary logistic regression. Paired t-test results also showed that pixels were more clearly distinguished by multinomial logistic regression than by logistic regression. In conclusion, multinomial logistic regression is a more efficient and accurate method for the spatial allocation of land use changes. The application of this method in future land use change studies may improve the accuracy of predicting the effects of land use and cover change on energy fluxes, ecology, and the environment.
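
    In scikit-learn terms, the contrast drawn in this paper is roughly one multinomial (softmax) logistic regression versus a set of independent binary logistic regressions, one per land-use type. A sketch on synthetic multi-class data (the class counts and accuracies are illustrative only, not the Jiangxi results):

```python
# Sketch: multinomial logistic regression vs. one-vs-rest binary logistic
# regressions for allocating each "pixel" to one of several classes.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.multiclass import OneVsRestClassifier

X, y = make_classification(n_samples=1000, n_features=8, n_informative=5,
                           n_classes=4, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Multinomial: a single softmax model assigns each pixel to its best class
multi = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
# One-vs-rest: an independent binary suitability model per class
ovr = OneVsRestClassifier(LogisticRegression(max_iter=1000)).fit(X_tr, y_tr)

acc_multi = multi.score(X_te, y_te)
acc_ovr = ovr.score(X_te, y_te)
print(f"multinomial: {acc_multi:.3f}, one-vs-rest: {acc_ovr:.3f}")
```

    The multinomial model allocates each pixel directly by comparing all classes at once, which is the property the paper credits for its higher proportion of correctly allocated pixels.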

  13. A Comparison of Four Linear Equating Methods for the Common-Item Nonequivalent Groups Design Using Simulation Methods. ACT Research Report Series, 2013 (2)

    ERIC Educational Resources Information Center

    Topczewski, Anna; Cui, Zhongmin; Woodruff, David; Chen, Hanwei; Fang, Yu

    2013-01-01

    This paper investigates four methods of linear equating under the common item nonequivalent groups design. Three of the methods are well known: Tucker, Angoff-Levine, and Congeneric-Levine. A fourth method is presented as a variant of the Congeneric-Levine method. Using simulation data generated from the three-parameter logistic IRT model we…

  14. Evaluation of MELD score and Maddrey discriminant function for mortality prediction in patients with alcoholic hepatitis.

    PubMed

    Monsanto, Pedro; Almeida, Nuno; Lérias, Clotilde; Pina, José Eduardo; Sofia, Carlos

    2013-01-01

    The Maddrey discriminant function (DF) is the traditional model for evaluating severity and prognosis in alcoholic hepatitis (AH). However, MELD has also been used for this purpose. We aimed to determine the predictive parameters and compare the ability of the Maddrey DF and MELD to predict short-term mortality in patients with AH. This was a retrospective study of 45 patients admitted to our department with AH between 2000 and 2010. Demographic, clinical, and laboratory parameters were collected. MELD and Maddrey DF were calculated on admission. Short-term mortality was assessed at 30 and 90 days. Student's t-test, χ2 test, univariate analysis, logistic regression, and receiver operating characteristic curves were performed. Thirty-day and 90-day mortality were 27% and 42%, respectively. In multivariate analysis, Maddrey DF was the only independent predictor of mortality for these two periods. Receiver operating characteristic curves for Maddrey DF revealed an excellent discriminatory ability to predict 30-day and 90-day mortality for a Maddrey DF greater than 65 and 60, respectively. The discriminatory ability of MELD to predict 30-day and 90-day mortality was low. AH remains associated with a high short-term mortality. Maddrey DF is a more valuable model than MELD for predicting short-term mortality in patients with AH.

  15. Predicting stress urinary incontinence during pregnancy: combination of pelvic floor ultrasound parameters and clinical factors.

    PubMed

    Chen, Ling; Luo, Dan; Yu, Xiajuan; Jin, Mei; Cai, Wenzhi

    2018-05-12

    The aim of this study was to develop and validate a predictive tool combining pelvic floor ultrasound parameters and clinical factors for stress urinary incontinence during pregnancy. A total of 535 women in the first or second trimester were included for an interview and transperineal ultrasound assessment from two hospitals. Imaging data sets were analyzed offline to assess bladder neck vertical position, urethral angles (α, β, and γ), hiatal area, and bladder neck funneling. All continuous variables significant at univariable analysis were analyzed by receiver operating characteristic curves. Three multivariable logistic models were built on clinical factors, alone and combined with ultrasound parameters. The final predictive model, with the best performance and fewest variables, was selected to establish a nomogram. Internal and external validation of the nomogram were performed, with discrimination represented by the C-index and calibration measured by the Hosmer-Lemeshow test. A decision curve analysis was conducted to determine the clinical utility of the nomogram. After excluding 14 women with invalid data, 521 women were analyzed. The β angle, γ angle, and hiatal area had limited predictive value for stress urinary incontinence during pregnancy, with areas under the curve of 0.558-0.648. The final predictive model included body mass index gain since pregnancy, constipation, previous delivery mode, β angle at rest, and bladder neck funneling. The nomogram based on the final model showed good discrimination, with a C-index of 0.789, and satisfactory calibration (P = 0.828), both of which were supported by external validation. Decision curve analysis showed that the nomogram was clinically useful. The nomogram incorporating both pelvic floor ultrasound parameters and clinical factors has been validated, showing good discrimination and calibration, and could be an important tool for stress urinary incontinence risk prediction at an early stage of pregnancy.

  16. Non-linear Growth Models in Mplus and SAS

    PubMed Central

    Grimm, Kevin J.; Ram, Nilam

    2013-01-01

    Non-linear growth curves, or growth curves that follow a specified non-linear function in time, enable researchers to model complex developmental patterns with parameters that are easily interpretable. In this paper we describe how a variety of sigmoid curves can be fit using the Mplus structural modeling program and the non-linear mixed-effects modeling procedure NLMIXED in SAS. Using longitudinal achievement data, collected as part of a study examining the effects of preschool instruction on academic gain, we illustrate the procedures for fitting growth models of logistic, Gompertz, and Richards functions. Brief notes regarding the practical benefits, limitations, and choices faced in the fitting and estimation of such models are included. PMID:23882134
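
    Outside of Mplus and NLMIXED, the fixed-effects versions of two of these sigmoid functions can be fit with SciPy's curve_fit. The data below are simulated, and the asymptote/rate/midpoint parameterizations shown are common conventions, not necessarily the paper's exact forms:

```python
# Sketch: fitting logistic and Gompertz growth functions to one
# simulated growth trajectory by non-linear least squares.
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, asym, rate, mid):
    # Symmetric sigmoid rising to `asym`, steepest at t = mid
    return asym / (1.0 + np.exp(-rate * (t - mid)))

def gompertz(t, asym, rate, mid):
    # Asymmetric sigmoid approaching `asym`
    return asym * np.exp(-np.exp(-rate * (t - mid)))

rng = np.random.default_rng(0)
t = np.linspace(0, 10, 50)
y = logistic(t, 100.0, 1.2, 5.0) + rng.normal(0, 2.0, t.size)

fits = {}
for f in (logistic, gompertz):
    params, _ = curve_fit(f, t, y, p0=(90.0, 1.0, 4.0), maxfev=10000)
    fits[f.__name__] = params
    rss = float(np.sum((y - f(t, *params)) ** 2))
    print(f"{f.__name__}: params={np.round(params, 2)}, RSS={rss:.1f}")
```

    A mixed-effects version, as fit in the paper, would additionally let the asymptote (and possibly the other parameters) vary randomly across individuals.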

  17. Improving size estimates of open animal populations by incorporating information on age

    USGS Publications Warehouse

    Manly, Bryan F.J.; McDonald, Trent L.; Amstrup, Steven C.; Regehr, Eric V.

    2003-01-01

    Around the world, a great deal of effort is expended each year to estimate the sizes of wild animal populations. Unfortunately, population size has proven to be one of the most intractable parameters to estimate. The capture-recapture estimation models most commonly used (of the Jolly-Seber type) are complicated and require numerous, sometimes questionable, assumptions. The derived estimates usually have large variances and lack consistency over time. In capture–recapture studies of long-lived animals, the ages of captured animals can often be determined with great accuracy and relative ease. We show how to incorporate age information into size estimates for open populations, where the size changes through births, deaths, immigration, and emigration. The proposed method allows more precise estimates of population size than the usual models, and it can provide these estimates from two sample occasions rather than the three usually required. Moreover, this method does not require specialized programs for capture-recapture data; researchers can derive their estimates using the logistic regression module in any standard statistical package.

  18. Sequential Testing of Hypotheses Concerning the Reliability of a System Modeled by a Two-Parameter Weibull Distribution.

    DTIC Science & Technology

    1981-12-01

    Thesis (AFIT/GOR/MA/81D-8) by Philippe A. Lussier, 2nd Lt, USAF, presented to the Faculty of the School of Engineering of the Air Force Institute of Technology: sequential testing of hypotheses concerning the reliability of a system modeled by a two-parameter Weibull distribution.

  19. On the use of genetic algorithm to optimize industrial assets lifecycle management under safety and budget constraints

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lonchampt, J.; Fessart, K.

    2013-07-01

    The purpose of this paper is to describe a method and tool dedicated to optimizing investment planning for industrial assets. These investments may be preventive maintenance tasks, asset enhancements, or logistic investments such as spare parts purchases. The three methodological points to investigate in such an issue are: 1. the measure of the profitability of a portfolio of investments; 2. the selection and planning of an optimal set of investments; 3. the measure of the risk of a portfolio of investments. The measure of the profitability of a set of investments in the IPOP tool is synthesised in the Net Present Value (NPV) indicator. The NPV is the sum of the differences of discounted cash flows (direct costs, forced outages...) between the situations with and without a given investment. These cash flows are calculated through a pseudo-Markov reliability model representing independently the components of the industrial asset and the spare parts inventories. The component model has been widely discussed over the years, but the spare part model is a new one based on some approximations that will be discussed. This model, referred to as the NPV function, takes an investment portfolio as input and gives its NPV. The second issue is to optimize the NPV. If all investments were independent, this optimization would be an easy calculation; unfortunately, there are two sources of dependency. The first is introduced by the spare part model: although components are independent in their reliability models, the fact that several components use the same inventory induces a dependency. The second dependency comes from economic, technical, or logistic constraints, such as a global maintenance budget limit or a safety requirement limiting the residual risk of failure of a component or group of components, making the aggregation of individual optima not necessarily feasible. The algorithm used to solve such a difficult optimization problem is a genetic algorithm. After a description of the features of the software, a test case is presented showing the influence of the optimization algorithm parameters on its efficiency in finding an optimal investment planning. (authors)

  20. Empirical research on coordination evaluation and sustainable development mechanism of regional logistics and new-type urbanization: a panel data analysis from 2000 to 2015 for Liaoning Province in China.

    PubMed

    Sun, Qiang

    2017-06-01

    As the largest developing country in the world, China has witnessed fast-paced urbanization over the past three decades, accompanied by rapid economic growth. Urbanization has been shown not only to promote economic growth and improve people's livelihoods but also to increase demand for regional logistics. A better understanding of the relationship between urbanization and regional logistics is therefore important for China's future sustainable development. The development of new-type urbanization and that of regional logistics run abreast. Regional logistics can promote the development of new-type urbanization by fostering industrial concentration and logistics demand, enhancing residents' quality of life, and improving infrastructure and logistics technology. In this paper, an index system and evaluation model for assessing the development of regional logistics and new-type urbanization are constructed. Econometric analyses, including correlation analysis, a co-integration test, and an error correction model, are then used to explore the relationship between new-type urbanization development and regional logistics development in Liaoning Province. The results showed that there was a long-term stable equilibrium relationship between new-type urbanization and regional logistics. These findings have important implications for Chinese policymakers: this relationship must be taken into consideration on the path towards sustainable urbanization. The paper concludes by providing some strategies that might be helpful to policymakers in formulating development policies for sustainable urbanization.

  1. On estimating probability of presence from use-availability or presence-background data.

    PubMed

    Phillips, Steven J; Elith, Jane

    2013-06-01

    A fundamental ecological modeling task is to estimate the probability that a species is present in (or uses) a site, conditional on environmental variables. For many species, available data consist of "presence" data (locations where the species [or evidence of it] has been observed), together with "background" data, a random sample of available environmental conditions. Recently published papers disagree on whether probability of presence is identifiable from such presence-background data alone. This paper aims to resolve the disagreement, demonstrating that additional information is required. We defined seven simulated species representing various simple shapes of response to environmental variables (constant, linear, convex, unimodal, S-shaped) and ran five logistic model-fitting methods using 1000 presence samples and 10 000 background samples; the simulations were repeated 100 times. The experiment revealed a stark contrast between two groups of methods: those based on a strong assumption that species' true probability of presence exactly matches a given parametric form had highly variable predictions and much larger RMS error than methods that take population prevalence (the fraction of sites in which the species is present) as an additional parameter. For six species, the former group grossly under- or overestimated probability of presence. The cause was not model structure or choice of link function, because all methods were logistic with linear and, where necessary, quadratic terms. Rather, the experiment demonstrates that an estimate of prevalence is not just helpful, but is necessary (except in special cases) for identifying probability of presence. We therefore advise against use of methods that rely on the strong assumption, due to Lele and Keim (recently advocated by Royle et al.) and Lancaster and Imbens. The methods are fragile, and their strong assumption is unlikely to be true in practice. 
We emphasize, however, that we are not arguing against standard statistical methods such as logistic regression, generalized linear models, and so forth, none of which requires the strong assumption. If probability of presence is required for a given application, there is no panacea for lack of data. Presence-background data must be augmented with an additional datum, e.g., species' prevalence, to reliably estimate absolute (rather than relative) probability of presence.

  2. Estimation of Staphylococcus aureus growth parameters from turbidity data: characterization of strain variation and comparison of methods.

    PubMed

    Lindqvist, R

    2006-07-01

    Turbidity methods offer possibilities for generating the data required for addressing microorganism variability in risk modeling, provided that their results correspond to those of viable count methods. The objectives of this study were to identify the best approach for determining growth parameters based on turbidity data and use of a Bioscreen instrument, and to characterize variability in growth parameters of 34 Staphylococcus aureus strains of different biotypes isolated from broiler carcasses. Growth parameters were estimated by fitting primary growth models to turbidity growth curves or to detection times of serially diluted cultures, either directly or by using an analysis of variance (ANOVA) approach. The maximum specific growth rates in chicken broth at 17 degrees C estimated by time to detection methods were in good agreement with viable count estimates, whereas growth models (exponential and Richards) underestimated growth rates. Time to detection methods were therefore selected for strain characterization. The variation of growth parameters among strains was best described by either the logistic or lognormal distribution, but definitive conclusions require a larger data set. The distribution of the physiological state parameter ranged from 0.01 to 0.92 and was not significantly different from a normal distribution. Strain variability was important, and the coefficient of variation of growth parameters was up to six times larger among strains than within strains. It is suggested to apply a time to detection (ANOVA) approach using turbidity measurements for convenient and accurate estimation of growth parameters. The results emphasize the need to consider the implications of strain variability for predictive modeling and risk assessment.
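
    The time-to-detection idea the study favors can be illustrated with the standard relationship: under exponential growth at rate μmax, each extra dilution of the inoculum delays detection by ln(dilution factor)/μmax, so μmax is recovered from the slope of detection time against the log initial count. A simulated sketch (the threshold, rate, and dilution scheme below are made up, not the study's):

```python
# Sketch: estimating the maximum specific growth rate from detection
# times of serially diluted cultures, assuming exponential growth up to
# a fixed turbidity detection threshold. All numbers are simulated.
import numpy as np

mu_true = 0.25                       # true growth rate (1/h)
n0 = 1e6 * 10.0 ** -np.arange(6)     # 10-fold serial dilutions (CFU/ml)
n_detect = 1e7                       # detection threshold (CFU/ml)

rng = np.random.default_rng(1)
t_detect = np.log(n_detect / n0) / mu_true + rng.normal(0, 0.1, n0.size)

# Detection time is linear in ln(N0) with slope -1/mu_max
slope, intercept = np.polyfit(np.log(n0), t_detect, 1)
mu_hat = -1.0 / slope
print(f"estimated mu_max = {mu_hat:.3f} per hour")
```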

  3. Comparison of Logistic Regression and Random Forests techniques for shallow landslide susceptibility assessment in Giampilieri (NE Sicily, Italy)

    NASA Astrophysics Data System (ADS)

    Trigila, Alessandro; Iadanza, Carla; Esposito, Carlo; Scarascia-Mugnozza, Gabriele

    2015-11-01

    The aim of this work is to define reliable susceptibility models for shallow landslides using Logistic Regression and Random Forests multivariate statistical techniques. The study area, located in North-East Sicily, was hit on October 1st 2009 by a severe rainstorm (225 mm of cumulative rainfall in 7 h) which caused flash floods and more than 1000 landslides. Several small villages, such as Giampilieri, were hit, with 31 fatalities, 6 missing persons and damage to buildings and transportation infrastructures. Landslides, mainly earth and debris translational slides evolving into debris flows, were triggered on steep slopes and involved colluvium and regolith materials which cover the underlying metamorphic bedrock. The work has been carried out with the following steps: i) realization of a detailed event landslide inventory map through field surveys coupled with observation of high resolution aerial colour orthophotos; ii) identification of landslide source areas; iii) data preparation of landslide controlling factors and descriptive statistics based on a bivariate method (Frequency Ratio) to get an initial overview of the existing relationships between causative factors and shallow landslide source areas; iv) choice of criteria for the selection and sizing of the mapping unit; v) implementation of 5 multivariate statistical susceptibility models based on Logistic Regression and Random Forests techniques and focused on landslide source areas; vi) evaluation of the influence of sample size and type of sampling on results and performance of the models; vii) evaluation of the predictive capabilities of the models using ROC curve, AUC and contingency tables; viii) comparison of model results and obtained susceptibility maps; and ix) analysis of temporal variation of landslide susceptibility related to input parameter changes. Models based on Logistic Regression and Random Forests have demonstrated excellent predictive capabilities. 
Land use and wildfire variables were found to have a strong control on the occurrence of very rapid shallow landslides.

  4. Predictive variables for the occurrence of early clinical mastitis in primiparous Holstein cows under field conditions in France.

    PubMed Central

    Barnouin, J; Chassagne, M

    2001-01-01

    Holstein heifers from 47 dairy herds in France were enrolled in a field study to determine predictors for clinical mastitis within the first month of lactation. Precalving and calving variables (biochemical, hematological, hygienic, and disease indicators) were collected. Early clinical mastitis (ECM) predictive variables were analyzed by using a multiple logistic regression model (99 cows with ECM vs. 571 without clinical mastitis throughout the first lactation). Two variables were associated with a higher risk of ECM: a) difficult calving and b) medium and high white blood cell (WBC) counts in late gestation. Two prepartum indicators were associated with a lower ECM risk: a) medium and high serum concentrations of immunoglobulin G1 (IgG1) and b) high percentage of eosinophils among white blood cells. Calving difficulty and certain biological blood parameters (IgG1, eosinophils) could represent predictors that would merit further experimental studies, with the aim of designing programs for reducing the risk of clinical mastitis in the first lactation. PMID:11195522

  5. Computational tools for exact conditional logistic regression.

    PubMed

    Corcoran, C; Mehta, C; Patel, N; Senchaudhuri, P

    Logistic regression analyses are often challenged by the inability of unconditional likelihood-based approximations to yield consistent, valid estimates and p-values for model parameters. This can be due to sparseness or separability in the data. Conditional logistic regression, though useful in such situations, can also be computationally infeasible when the sample size or number of explanatory covariates is large. We review recent developments that allow efficient approximate conditional inference, including Monte Carlo sampling and saddlepoint approximations. We demonstrate through real examples that these methods enable the analysis of significantly larger and more complex data sets. We find in this investigation that for these moderately large data sets Monte Carlo seems a better alternative, as it provides unbiased estimates of the exact results and can be executed in less CPU time than the single saddlepoint approximation. Moreover, the double saddlepoint approximation, while computationally the easiest to obtain, offers little practical advantage. It produces unreliable results and cannot be computed when a maximum likelihood solution does not exist. Copyright 2001 John Wiley & Sons, Ltd.
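
    For the special case of 1:1 matched pairs, conditional logistic regression has a well-known reduction that sidesteps the computational burden discussed above: the conditional likelihood depends only on within-pair covariate differences, so it can be maximized as an intercept-free ordinary logistic regression. A sketch on simulated pairs, with a single covariate whose true coefficient is 1.5:

```python
# Sketch: conditional logistic regression for 1:1 matched pairs, fit as
# an intercept-free logistic regression on case-minus-control differences.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_pairs, beta = 2000, 1.5
z1 = rng.normal(size=n_pairs)        # covariate, first member of each pair
z2 = rng.normal(size=n_pairs)        # covariate, second member

# Conditional model: within a pair, the member with the larger linear
# predictor is more likely to be the case.
p_first_is_case = 1.0 / (1.0 + np.exp(-beta * (z1 - z2)))
first_is_case = rng.random(n_pairs) < p_first_is_case
d = np.where(first_is_case, z1 - z2, z2 - z1)   # case minus control

# Randomly re-orient pairs so both outcome labels occur, then fit with
# no intercept and essentially no penalty (large C).
flip = rng.random(n_pairs) < 0.5
X = np.where(flip, -d, d).reshape(-1, 1)
y = (~flip).astype(int)
fit = LogisticRegression(fit_intercept=False, C=1e6).fit(X, y)
beta_hat = float(fit.coef_[0, 0])
print(f"beta_hat = {beta_hat:.2f}")
```

    General matched sets (and the exact and Monte Carlo methods the paper reviews) require specialized software; this reduction covers only the paired case.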

  6. London Measure of Unplanned Pregnancy: guidance for its use as an outcome measure

    PubMed Central

    Hall, Jennifer A; Barrett, Geraldine; Copas, Andrew; Stephenson, Judith

    2017-01-01

    Background The London Measure of Unplanned Pregnancy (LMUP) is a psychometrically validated measure of the degree of intention of a current or recent pregnancy. The LMUP is increasingly being used worldwide, and can be used to evaluate family planning or preconception care programs. However, beyond recommending the use of the full LMUP scale, there is no published guidance on how to use the LMUP as an outcome measure. Ordinal logistic regression has been recommended informally, but studies published to date have all used binary logistic regression and dichotomized the scale at different cut points. There is thus a need for evidence-based guidance to provide a standardized methodology for multivariate analysis and to enable comparison of results. This paper makes recommendations for the regression method for analysis of the LMUP as an outcome measure. Materials and methods Data collected from 4,244 pregnant women in Malawi were used to compare five regression methods: linear, logistic with two cut points, and ordinal logistic with either the full or grouped LMUP score. The recommendations were then tested on the original UK LMUP data. Results There were small but not important differences in the findings across the regression models. Logistic regression resulted in the largest loss of information, and assumptions were violated for the linear and ordinal logistic regression. Consequently, robust standard errors were used for linear regression and a partial proportional odds ordinal logistic regression model was attempted. The latter could only be fitted for the grouped LMUP score. Conclusion We recommend the linear regression model with robust standard errors to make full use of the LMUP score when analyzed as an outcome measure. Ordinal logistic regression could be considered, but a partial proportional odds model with grouped LMUP score may be required. Logistic regression is the least-favored option, due to the loss of information. 
For logistic regression, the cut point for un/planned pregnancy should be between nine and ten. These recommendations will standardize the analysis of LMUP data and enhance comparability of results across studies. PMID:28435343
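
    The recommended analysis, ordinary least squares on the full LMUP score with robust standard errors, can be sketched directly in NumPy using the HC3 estimator (one common heteroskedasticity-robust variance choice; the 0-12 score and the single "exposure" covariate below are simulated stand-ins, not LMUP data):

```python
# Sketch: OLS on an LMUP-style 0-12 score with HC3 robust standard errors.
import numpy as np

rng = np.random.default_rng(0)
n = 500
exposure = rng.normal(size=n)
# Simulated score: integer 0-12, linearly related to the covariate
score = np.clip(np.round(6 + 2 * exposure + rng.normal(0, 2, n)), 0, 12)

X = np.column_stack([np.ones(n), exposure])      # intercept + covariate
beta = np.linalg.lstsq(X, score, rcond=None)[0]  # OLS coefficients
resid = score - X @ beta

# HC3 sandwich: (X'X)^-1 X' diag(e_i^2 / (1-h_i)^2) X (X'X)^-1
XtX_inv = np.linalg.inv(X.T @ X)
h = np.einsum("ij,jk,ik->i", X, XtX_inv, X)      # leverages
meat = X.T @ (X * (resid ** 2 / (1 - h) ** 2)[:, None])
robust_se = np.sqrt(np.diag(XtX_inv @ meat @ XtX_inv))
print("slope =", round(beta[1], 2), "robust SE =", round(robust_se[1], 3))
```

    In practice a statistics package (e.g. a regression routine with a robust covariance option) would be used; the point is that the full score is modeled, with the standard errors adjusted rather than the outcome dichotomized.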

  7. A hybrid inventory management system responding to regular demand and surge demand

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mohammad S. Roni; Mingzhou Jin; Sandra D. Eksioglu

    2014-06-01

    This paper proposes a hybrid policy for a stochastic inventory system facing regular demand and surge demand. The combination of two different demand patterns can be observed in many areas, such as healthcare inventory and humanitarian supply chain management. The surge demand has a lower arrival rate but higher demand volume per arrival. The solution approach proposed in this paper incorporates the level crossing method and mixed integer programming technique to optimize the hybrid inventory policy with both regular orders and emergency orders. The level crossing method is applied to obtain the equilibrium distributions of inventory levels under a given policy. The model is further transformed into a mixed integer program to identify an optimal hybrid policy. A sensitivity analysis is conducted to investigate the impact of parameters on the optimal inventory policy and minimum cost. Numerical results clearly show the benefit of using the proposed hybrid inventory model. The model and solution approach could help healthcare providers or humanitarian logistics providers in managing their emergency supplies in responding to surge demands.

  8. Predicting the Presence of Scyphozoan Jellyfish in the Gulf of Mexico Using a Biophysical Model

    NASA Astrophysics Data System (ADS)

    Aleksa, K. T.; Nero, R. W.; Wiggert, J. D.; Graham, W. M.

    2016-02-01

    The study and quantification of jellyfish (cnidarian medusae and ctenophores) are difficult due to their fragile body plan and a composition similar to that of their environment. The development of a predictive biophysical jellyfish model would be the first of its kind for the Gulf of Mexico and could assist ecological research and the management of human interactions. In this study, collection data for two scyphozoan medusae, Chrysaora quinquecirrha and Aurelia spp., were extracted from SEAMAP trawling surveys and used to determine biophysical predictors for the presence of large jellyfish medusae in the Gulf of Mexico. Both in situ and remote sensing measurements from 2003 to 2013 were obtained. Logistic regressions were then applied to 27 biophysical parameters derived from these data to explore and determine significant predictors for the presence of medusae. Significant predictors identified by this analysis included water temperature, chlorophyll a, turbidity, distance from shore, and salinity. Future applications of this model include foraging assessment of gelatinous predators as well as possible near-real-time monitoring of the distribution and movement of these medusae in the Gulf of Mexico.

  9. Edge Modeling by Two Blur Parameters in Varying Contrasts.

    PubMed

    Seo, Suyoung

    2018-06-01

    This paper presents a method of modeling edge profiles with two blur parameters, and estimating and predicting those edge parameters with varying brightness combinations and camera-to-object distances (COD). First, the validity of the edge model is proven mathematically. Then, it is proven experimentally with edges from a set of images captured for specifically designed target sheets and with edges from natural images. Estimation of the two blur parameters for each observed edge profile is performed with a brute-force method to find parameters that produce global minimum errors. Then, using the estimated blur parameters, actual blur parameters of edges with arbitrary brightness combinations are predicted using a surface interpolation method (i.e., kriging). The predicted surfaces show that the two blur parameters of the proposed edge model depend on both dark-side edge brightness and light-side edge brightness following a certain global trend. This is similar across varying CODs. The proposed edge model is compared with a one-blur parameter edge model using experiments of the root mean squared error for fitting the edge models to each observed edge profile. The comparison results suggest that the proposed edge model has superiority over the one-blur parameter edge model in most cases where edges have varying brightness combinations.

  10. Vaginal birth and de novo stress incontinence: Relative contributions of urethral dysfunction and mobility

    PubMed Central

    DeLancey, John O. L.; Miller, Janis M.; Kearney, Rohna; Howard, Denise; Reddy, Pranathi; Umek, Wolfgang; Guire, Kenneth E.; Margulies, Rebecca U.; Ashton-Miller, James A.

    2009-01-01

    Background Vaginal birth increases the chance a woman will develop stress incontinence. This study evaluates the relative contributions of urethral mobility and urethral function to stress incontinence. Methods This is a case-control study with group matching. Eighty primiparous women with self-reported new stress incontinence 9–12 months postpartum were compared to 80 primiparous continent controls to identify impairments specific to stress incontinence. Eighty nulliparous continent controls were evaluated as a comparison group to allow us to determine birth-related changes not associated with stress incontinence. Urethral function was measured with urethral profilometry, and vesical neck mobility was assessed with ultrasound and Q-tip test. Urethral sphincter anatomy and mobility were evaluated using MRI. The association between urethral closure pressure, vesical neck movement, and incontinence were explored using logistic regression. Results Urethral closure pressure in primiparous incontinent women (62.9 ± 25.2 s.d. cm H2O) was lower than in primiparous continent women (83.0 ± 21.0, p < 0.001; effect size d = 0.91), who were similar to nulliparous women (90.3 ± 25.0, p = 0.09). Vesical neck movement measured during cough with ultrasound was the mobility parameter most associated with stress incontinence; 15.6 ± 6.2 mm in incontinent women versus 10.9 ± 6.2 in primiparous continent women (p < 0.0001, d = 0.75) or nulliparas (9.9 ± 5.0, p = 0.33). Logistic regression disclosed that the two-variable model (max-rescaled R2 = 0.37, p < 0.0001) was more strongly associated with stress incontinence than either single-variable model: urethral closure pressure (R2 = 0.25, p < 0.0001) or vesical neck movement (R2 = 0.16, p < 0.0001). Conclusions Lower maximal urethral closure pressure is the parameter most associated with de novo stress incontinence after first vaginal birth, followed by vesical neck mobility. PMID:17666611

  11. Comparison of four methods for deriving hospital standardised mortality ratios from a single hierarchical logistic regression model.

    PubMed

    Mohammed, Mohammed A; Manktelow, Bradley N; Hofer, Timothy P

    2016-04-01

    There is interest in deriving case-mix adjusted standardised mortality ratios so that comparisons between healthcare providers, such as hospitals, can be undertaken in the controversial belief that variability in standardised mortality ratios reflects quality of care. Typically standardised mortality ratios are derived using a fixed effects logistic regression model, without a hospital term in the model. This fails to account for the hierarchical structure of the data - patients nested within hospitals - and so a hierarchical logistic regression model is more appropriate. However, four methods have been advocated for deriving standardised mortality ratios from a hierarchical logistic regression model, but their agreement is not known and neither do we know which is to be preferred. We found significant differences between the four types of standardised mortality ratios because they reflect a range of underlying conceptual issues. The most subtle issue is the distinction between asking how an average patient fares in different hospitals versus how patients at a given hospital fare at an average hospital. Since the answers to these questions are not the same and since the choice between these two approaches is not obvious, the extent to which profiling hospitals on mortality can be undertaken safely and reliably, without resolving these methodological issues, remains questionable. © The Author(s) 2012.
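    The fixed-effects baseline the abstract starts from — a standardised mortality ratio computed as observed deaths divided by the deaths expected under a case-mix model — can be sketched as follows. The risk coefficients and patient records are hypothetical; a real analysis would fit the model to data (and, per the abstract, add a hospital-level random effect) rather than pre-specify it:

```python
import math

def predicted_risk(age, comorbidity, coef=(-3.0, 0.04, 0.7)):
    """Hypothetical case-mix model: logit(p) = b0 + b1*age + b2*comorbidity."""
    b0, b1, b2 = coef
    z = b0 + b1 * age + b2 * comorbidity
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical patients at one hospital: (age, comorbidity flag, died)
patients = [(65, 1, 1), (70, 0, 0), (80, 1, 1),
            (55, 0, 0), (75, 1, 0), (60, 0, 1)]

observed = sum(died for _, _, died in patients)
expected = sum(predicted_risk(age, com) for age, com, _ in patients)
smr = observed / expected  # SMR > 1 suggests more deaths than case-mix predicts
print(round(smr, 2))
```

    The four methods the paper compares differ in how the hospital effect from the hierarchical model enters this observed/expected calculation — which is exactly where the "average patient at this hospital" versus "this hospital's patients at an average hospital" distinction arises.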

  12. Relationship between muscle mass and physical performance: is it the same in older adults with weak muscle strength?

    PubMed

    Kim, Kyoung-Eun; Jang, Soong-Nang; Lim, Soo; Park, Young Joo; Paik, Nam-Jong; Kim, Ki Woong; Jang, Hak Chul; Lim, Jae-Young

    2012-11-01

    The relationship between muscle mass and physical performance has not been consistent among studies. To clarify the relationship between muscle mass and physical performance in older adults with weak muscle strength. Cross-sectional analysis using the baseline data of 542 older men and women from the Korean Longitudinal Study on Health and Aging. Dual X-ray absorptiometry, isokinetic dynamometer and the Short Physical Performance Battery (SPPB) were performed. Two muscle mass parameters, appendicular skeletal mass divided by weight (ASM/Wt) and by height squared (ASM/Ht(2)), were measured. We divided the participants into a lower-quartile (L25) group and an upper-three-quartiles (H75) group based on the knee-extensor peak torque. Correlation analysis and logistic regression models were used to assess the association between muscle mass and low physical performance, defined as SPPB scores <9, after controlling for confounders. In the L25 group, no correlation between mass and SPPB was detected, whereas the correlation between peak torque and SPPB was significant and higher than that in the H75 group. Results from the logistic models also showed no association between muscle mass and SPPB in the L25 group, whereas muscle mass was associated with SPPB in the H75 group. Muscle mass was not associated with physical performance in weak older adults. Measures of muscle strength may be of greater clinical importance in weak older adults than is muscle mass per se.

  13. Regression analysis for solving diagnosis problem of children's health

    NASA Astrophysics Data System (ADS)

    Cherkashina, Yu A.; Gerget, O. M.

    2016-04-01

    This paper presents the results of research devoted to the application of statistical techniques, namely regression analysis, to assessing the health status of children in the neonatal period based on medical data (hemostatic parameters, blood test parameters, gestational age, vascular endothelial growth factor) measured at 3-5 days of life. A detailed description of the studied medical data is given, and a binary logistic regression procedure is discussed. The main results of the research are presented: a classification table of predicted versus observed values is shown and the overall percentage of correct recognition is determined; regression equation coefficients are calculated and the general regression equation is written from them. Based on the fitted logistic regression, an ROC analysis was performed: the sensitivity and specificity of the model are calculated and ROC curves are constructed. These mathematical techniques allow the diagnosis of children's health to be carried out with a high quality of recognition. The results make a significant contribution to the development of evidence-based medicine and have high practical importance in the author's professional work.
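    The ROC analysis described above — sweeping a threshold over model scores and measuring the area under the resulting curve — can be sketched with a rank sweep. The scores and labels below are hypothetical, not the study's data:

```python
def roc_auc(scores, labels):
    """Sweep thresholds from high to low, collect (FPR, TPR) points,
    and return the points plus the trapezoidal area under the curve."""
    pairs = sorted(zip(scores, labels), reverse=True)
    pos = sum(labels)
    neg = len(labels) - pos
    tp = fp = 0
    points = [(0.0, 0.0)]  # (false positive rate, true positive rate)
    for _, y in pairs:
        if y:
            tp += 1
        else:
            fp += 1
        points.append((fp / neg, tp / pos))
    auc = sum((x2 - x1) * (y1 + y2) / 2.0
              for (x1, y1), (x2, y2) in zip(points, points[1:]))
    return points, auc

# Hypothetical predicted probabilities and true classes (1 = at risk)
scores = [0.9, 0.8, 0.7, 0.6, 0.4, 0.3, 0.2, 0.1]
labels = [1,   1,   0,   1,   0,   1,   0,   0]
points, auc = roc_auc(scores, labels)
print(round(auc, 4))  # -> 0.8125 for this toy data
```

    Sensitivity is the TPR at a chosen threshold and specificity is 1 minus the FPR at that same threshold, so both can be read directly off the collected points.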

  14. Vitamin D levels and their associations with survival and major disease outcomes in a large cohort of patients with chronic graft-vs-host disease

    PubMed Central

    Katić, Mašenjka; Pirsl, Filip; Steinberg, Seth M.; Dobbin, Marnie; Curtis, Lauren M.; Pulanić, Dražen; Desnica, Lana; Titarenko, Irina; Pavletic, Steven Z.

    2016-01-01

    Aim To identify the factors associated with vitamin D status in patients with chronic graft-vs-host disease (cGVHD) and evaluate the association between serum vitamin D (25(OH)D) levels and cGVHD characteristics and clinical outcomes defined by the National Institutes of Health (NIH) criteria. Methods 310 cGVHD patients enrolled in the NIH cGVHD natural history study (clinicaltrials.gov: NCT00092235) were analyzed. Univariate analysis and multiple logistic regression were used to determine the associations between various parameters and 25(OH)D levels, dichotomized into categorical variables: ≤20 and >20 ng/mL, and as a continuous parameter. Multiple logistic regression was used to develop a predictive model for low vitamin D. Survival analyses, and analyses of the association between cGVHD outcomes and 25(OH)D as a continuous as well as categorical variable: ≤20 and >20 ng/mL; <50 and ≥50 ng/mL, and among three ordered categories: ≤20, 20-50, and ≥50 ng/mL, were performed. PMID:27374829

  15. Impact of correlation of predictors on discrimination of risk models in development and external populations.

    PubMed

    Kundu, Suman; Mazumdar, Madhu; Ferket, Bart

    2017-04-19

    The area under the ROC curve (AUC) of risk models is known to be influenced by differences in case-mix and effect size of predictors. The impact of heterogeneity in correlation among predictors has, however, been underinvestigated. We sought to evaluate how correlation among predictors affects the AUC in development and external populations. We simulated hypothetical populations using two different methods based on means, standard deviations, and correlation of two continuous predictors. In the first approach, the distribution and correlation of predictors were assumed for the total population. In the second approach, these parameters were modeled conditional on disease status. In both approaches, multivariable logistic regression models were fitted to predict disease risk in individuals. Each risk model developed in a population was validated in the remaining populations to investigate external validity. For both approaches, we observed that the magnitude of the AUC in the development and external populations depends on the correlation among predictors. Lower AUCs were estimated in scenarios of both strong positive and negative correlation, depending on the direction of predictor effects and the simulation method. However, when adjusted effect sizes of predictors were specified in the opposite directions, increasingly negative correlation consistently improved the AUC. AUCs in external validation populations were higher or lower than in the derivation cohort, even in the presence of similar predictor effects. Discrimination of risk prediction models should be assessed in various external populations with different correlation structures to make better inferences about model generalizability.
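    The first simulation approach can be sketched as follows: draw two predictors with a chosen correlation, generate outcomes from a logistic model, and summarize discrimination by the AUC (here computed as the probability that a random case outranks a random control). The effect sizes and correlation values are illustrative only:

```python
import math
import random

random.seed(0)

def simulate_auc(rho, b1=1.0, b2=1.0, n=2000):
    """Draw two standard-normal predictors with correlation rho, generate
    outcomes from logit(p) = b1*z1 + b2*z2, and return the AUC of the
    true linear predictor."""
    scores, labels = [], []
    for _ in range(n):
        z1 = random.gauss(0, 1)
        z2 = rho * z1 + math.sqrt(1 - rho ** 2) * random.gauss(0, 1)
        eta = b1 * z1 + b2 * z2
        scores.append(eta)
        labels.append(1 if random.random() < 1 / (1 + math.exp(-eta)) else 0)
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1 for a in pos for c in neg if a > c)
    return wins / (len(pos) * len(neg))

# With same-sign effects, positive correlation widens the spread of the
# linear predictor (better discrimination); negative correlation shrinks it.
auc_hi = simulate_auc(0.8)
auc_lo = simulate_auc(-0.8)
print(round(auc_hi, 3), round(auc_lo, 3))
```

    Flipping the sign of one effect size reverses this pattern, which is the abstract's point about opposite-direction effects benefiting from negative correlation.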

  16. Motorcycles entering from access points and merging with traffic on primary roads in Malaysia: behavioral and road environment influence on the occurrence of traffic conflicts.

    PubMed

    Abdul Manan, Muhammad Marizwan

    2014-09-01

    This paper uses data from an observational study, conducted at access points in straight sections of primary roads in Malaysia in 2012, to investigate the effects of motorcyclists' behavior and road environment attributes on the occurrence of serious traffic conflicts involving motorcyclists entering primary roads via access points. In order to handle the unobserved heterogeneity in the small sample data size, this study applies mixed effects logistic regression with multilevel bootstrapping. Two statistically significant models (Model 2 and Model 3) are produced, with 2 levels of random effect parameters, i.e. motorcyclists' attributes and behavior at Level 1, and road environment attributes at Level 2. Among all the road environment attributes tested, the traffic volume and the speed limit are found to be statistically significant, only contributing to 26-29% of the variations affecting the traffic conflict outcome. The implication is that 71-74% of the unmeasured or undescribed attributes and behavior of motorcyclists still have an importance in predicting the outcome: a serious traffic conflict. As for the fixed effect parameters, both models show that the risk of motorcyclists being involved in a serious traffic conflict is 2-4 times more likely if they accept a shorter gap to a single approaching vehicle (time lag <4s) and in between two vehicles (time gap <4s) when entering the primary road from the access point. A road environment factor, such as a narrow lane (seen in Model 2), and a behavioral factor, such as stopping at the stop line (seen in Model 3), also influence the occurrence of a serious traffic conflict compared to those entering into a wider lane road and without stopping at the stop line, respectively. A discussion of the possible reasons for this seemingly strange result, including a recommendation for further research, concludes the paper. Copyright © 2014 Elsevier Ltd. All rights reserved.

  17. On the intrinsic dynamics of bacteria in waterborne infections.

    PubMed

    Yang, Chayu; Wang, Jin

    2018-02-01

    The intrinsic dynamics of bacteria often play an important role in the transmission and spread of waterborne infectious diseases. In this paper, we construct mathematical models for waterborne infections and analyze two types of nontrivial bacterial dynamics: logistic growth, and growth with Allee effects. For the model with logistic growth, we find that regular threshold dynamics take place, and the basic reproduction number can be used to characterize disease extinction and persistence. In contrast, the model with Allee effects exhibits much more complex dynamics, including the existence of multiple endemic equilibria and the presence of backward bifurcation and forward hysteresis. Copyright © 2017 Elsevier Inc. All rights reserved.
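    The two growth laws analyzed above can be contrasted numerically. The sketch below uses forward-Euler integration with hypothetical parameter values (not the paper's): logistic growth recovers from any positive initial density, while the Allee model collapses when the population starts below its threshold:

```python
def simulate(growth, b0, dt=0.01, steps=5000):
    """Forward-Euler integration of a growth law dB/dt = growth(B)."""
    b = b0
    for _ in range(steps):
        b += dt * growth(b)
    return b

r = 1.0   # intrinsic growth rate (hypothetical)
K = 1.0   # carrying capacity (hypothetical)
A = 0.3   # Allee threshold (hypothetical)

def logistic(b):
    return r * b * (1 - b / K)

def allee(b):
    # Growth is negative below the threshold A and positive between A and K
    return r * b * (1 - b / K) * (b / A - 1)

low_logistic = simulate(logistic, 0.05)  # small start still reaches K
low_allee = simulate(allee, 0.05)        # below threshold: dies out
high_allee = simulate(allee, 0.5)        # above threshold: persists at K
print(round(low_logistic, 2), round(low_allee, 3), round(high_allee, 2))
```

    The bistability visible here (extinction or carrying capacity depending on the starting point) is the mechanism behind the multiple endemic equilibria and backward bifurcation the abstract reports for the Allee-effect model.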

  18. Assessing the Effect of an Old and New Methodology for Scale Conversion on Examinee Scores

    ERIC Educational Resources Information Center

    Rizavi, Saba; Smith, Robert; Carey, Jill

    2002-01-01

    Research has been done to look at the benefits of BILOG over LOGIST as well as the potential issues that can arise if transition from LOGIST to BILOG is desired. A serious concern arises when comparability is required between previously calibrated LOGIST parameter estimates and currently calibrated BILOG estimates. It is imperative to obtain an…

  19. Future trends in computer waste generation in India.

    PubMed

    Dwivedy, Maheshwar; Mittal, R K

    2010-11-01

    The objective of this paper is to estimate the future projection of computer waste in India and to subsequently analyze their flow at the end of their useful phase. For this purpose, the study utilizes the logistic model-based approach proposed by Yang and Williams to forecast future trends in computer waste. The model estimates future projection of computer penetration rate utilizing their first lifespan distribution and historical sales data. A bounding analysis on the future carrying capacity was simulated using the three parameter logistic curve. The observed obsolete generation quantities from the extrapolated penetration rates are then used to model the disposal phase. The results of the bounding analysis indicate that in the year 2020, around 41-152 million units of computers will become obsolete. The obsolete computer generation quantities are then used to estimate the End-of-Life outflows by utilizing a time-series multiple lifespan model. Even a conservative estimate of the future recycling capacity of PCs will reach upwards of 30 million units during 2025. Apparently, more than 150 million units could be potentially recycled in the upper bound case. However, considering significant future investment in the e-waste recycling sector from all stakeholders in India, we propose a logistic growth in the recycling rate and estimate the requirement of recycling capacity between 60 and 400 million units for the lower and upper bound case during 2025. Finally, we compare the future obsolete PC generation amount of the US and India. Copyright © 2010 Elsevier Ltd. All rights reserved.
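    The three-parameter logistic curve used for such bounding analyses has the familiar form K / (1 + exp(-r(t - t0))), where K is the carrying capacity, r the growth rate, and t0 the inflection year. The parameter values below are illustrative placeholders, not the paper's fitted estimates:

```python
import math

def logistic_curve(t, K, r, t0):
    """Three-parameter logistic: saturation K, growth rate r, inflection t0."""
    return K / (1.0 + math.exp(-r * (t - t0)))

# Hypothetical lower- and upper-bound scenarios for the carrying capacity
def lower(t):
    return logistic_curve(t, 150, 0.25, 2012)

def upper(t):
    return logistic_curve(t, 400, 0.25, 2015)

for year in (2010, 2020, 2030):
    print(year, round(lower(year), 1), round(upper(year), 1))
```

    Feeding such penetration curves through a lifespan distribution is what converts projected stock into the obsolete-unit and End-of-Life flows the abstract describes.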

  20. Improving power and robustness for detecting genetic association with extreme-value sampling design.

    PubMed

    Chen, Hua Yun; Li, Mingyao

    2011-12-01

    Extreme-value sampling design that samples subjects with extremely large or small quantitative trait values is commonly used in genetic association studies. Samples in such designs are often treated as "cases" and "controls" and analyzed using logistic regression. Such a case-control analysis ignores the potential dose-response relationship between the quantitative trait and the underlying trait locus and thus may lead to loss of power in detecting genetic association. An alternative approach to analyzing such data is to model the dose-response relationship by a linear regression model. However, parameter estimation from this model can be biased, which may lead to inflated type I errors. We propose a robust and efficient approach that takes into consideration both the biased sampling design and the potential dose-response relationship. Extensive simulations demonstrate that the proposed method is more powerful than the traditional logistic regression analysis and is more robust than the linear regression analysis. We applied our method to the analysis of a candidate gene association study on high-density lipoprotein cholesterol (HDL-C) which includes study subjects with extremely high or low HDL-C levels. Using our method, we identified several SNPs showing a stronger evidence of association with HDL-C than the traditional case-control logistic regression analysis. Our results suggest that it is important to appropriately model the quantitative traits and to adjust for the biased sampling when a dose-response relationship exists in extreme-value sampling designs. © 2011 Wiley Periodicals, Inc.
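    The sampling design itself can be illustrated with a small simulation: a trait with an additive per-allele shift, from which only the upper and lower deciles are retained. The allele frequency, effect size, and sampling fractions below are hypothetical; the point is that the dose-response signal survives in the sampled tails, which a simple case-control dichotomization then discards:

```python
import random

random.seed(1)

# Hypothetical additive model: each copy of the risk allele shifts the
# trait (e.g., HDL-C on a standardized scale) by delta.
delta, n = 0.4, 20000
population = []
for _ in range(n):
    g = sum(random.random() < 0.3 for _ in range(2))  # genotype 0/1/2, MAF 0.3
    trait = delta * g + random.gauss(0, 1)
    population.append((g, trait))

# Extreme-value sampling: keep only the bottom and top deciles of the trait
population.sort(key=lambda x: x[1])
k = n // 10
lows, highs = population[:k], population[-k:]

def allele_freq(group):
    return sum(g for g, _ in group) / (2 * len(group))

# The allele frequency differs sharply between tails: the dose-response
# information that a tails-only "case/control" coding throws away.
print(round(allele_freq(lows), 3), round(allele_freq(highs), 3))
```

    Methods like the one proposed in the abstract keep the quantitative trait values of the sampled subjects in the likelihood instead of collapsing them to tail membership.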
