Sample records for estimating equations logistic

  1. Logistic Achievement Test Scaling and Equating with Fixed versus Estimated Lower Asymptotes.

    ERIC Educational Resources Information Center

    Phillips, S. E.

    This study compared the lower asymptotes estimated by the maximum likelihood procedures of the LOGIST computer program with those obtained via application of the Norton methodology. The study also compared the equating results from the three-parameter logistic model with those obtained from the equipercentile, Rasch, and conditional…
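
    The "lower asymptote" here is the guessing parameter c of the three-parameter logistic (3PL) item response model. As a point of reference, below is a minimal sketch of the 3PL item characteristic curve; the parameter names a, b, c and the 1.7 scaling constant follow common IRT conventions, and the numeric values are illustrative only. Fixing c (for example at 1/number-of-options) versus estimating it is the contrast studied in this record.

```python
import numpy as np

def p_3pl(theta, a, b, c, D=1.7):
    """Three-parameter logistic (3PL) item characteristic curve.

    theta : examinee ability
    a     : item discrimination
    b     : item difficulty
    c     : lower asymptote ("guessing" parameter)
    D     : scaling constant (1.7 approximates the normal ogive)
    """
    return c + (1.0 - c) / (1.0 + np.exp(-D * a * (theta - b)))

theta = np.linspace(-3, 3, 7)
print(p_3pl(theta, a=1.2, b=0.0, c=0.18))   # estimated lower asymptote
print(p_3pl(theta, a=1.2, b=0.0, c=0.25))   # fixed at 1/4 for a 4-option item
```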

  2. Sourcing for Parameter Estimation and Study of Logistic Differential Equation

    ERIC Educational Resources Information Center

    Winkel, Brian J.

    2012-01-01

    This article offers modelling opportunities in which the phenomena of the spread of disease, perception of changing mass, growth of technology, and dissemination of information can be described by one differential equation--the logistic differential equation. It presents two simulation activities for students to generate real data, as well as…
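
    For context, the logistic differential equation dP/dt = rP(1 - P/K) has the closed-form solution P(t) = K / (1 + ((K - P0)/P0) exp(-rt)). A minimal sketch (parameter values are arbitrary, chosen only for illustration) checking the closed form against a numerical integration:

```python
import numpy as np
from scipy.integrate import solve_ivp

r, K, P0 = 0.5, 1000.0, 10.0   # illustrative growth rate, carrying capacity, initial size

def logistic_rhs(t, P):
    return r * P * (1.0 - P / K)

def logistic_exact(t):
    return K / (1.0 + ((K - P0) / P0) * np.exp(-r * t))

t_eval = np.linspace(0, 30, 61)
sol = solve_ivp(logistic_rhs, (0, 30), [P0], t_eval=t_eval, rtol=1e-8)

# The numerical and analytic solutions should agree closely
print(np.max(np.abs(sol.y[0] - logistic_exact(t_eval))))
```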

  3. A Comparison of Kernel Equating and Traditional Equipercentile Equating Methods and the Parametric Bootstrap Methods for Estimating Standard Errors in Equipercentile Equating

    ERIC Educational Resources Information Center

    Choi, Sae Il

    2009-01-01

    This study used simulation (a) to compare the kernel equating method to traditional equipercentile equating methods under the equivalent-groups (EG) design and the nonequivalent-groups with anchor test (NEAT) design and (b) to apply the parametric bootstrap method for estimating standard errors of equating. A two-parameter logistic item response…

  4. Deletion Diagnostics for Alternating Logistic Regressions

    PubMed Central

    Preisser, John S.; By, Kunthel; Perin, Jamie; Qaqish, Bahjat F.

    2013-01-01

Deletion diagnostics are introduced for the regression analysis of clustered binary outcomes estimated with alternating logistic regressions, an implementation of generalized estimating equations (GEE) that estimates regression coefficients in a marginal mean model and in a model for the intracluster association given by the log odds ratio. The diagnostics are developed within an estimating equations framework that recasts the estimating functions for association parameters based upon conditional residuals into equivalent functions based upon marginal residuals. Extensions of earlier work on GEE diagnostics follow directly, including computational formulae for one-step deletion diagnostics that measure the influence of a cluster of observations on the estimated regression parameters and on the overall marginal mean or association model fit. The diagnostic formulae are evaluated with simulation studies and with an application concerning an assessment of factors associated with health maintenance visits in primary care medical practices. The application and the simulations demonstrate that the proposed cluster-deletion diagnostics for alternating logistic regressions are good approximations of their exact fully iterated counterparts. PMID:22777960
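
    The quantity the paper's one-step formulas approximate cheaply is the change in the fitted coefficients when a cluster is deleted and the model is fully refit. A rough sketch of that exact (brute-force) diagnostic is shown below on synthetic clustered binary data; statsmodels does not implement alternating logistic regressions, so an ordinary GEE with an exchangeable working correlation stands in for the marginal mean model.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n_clusters, m = 60, 4                         # 60 clusters of size 4
cluster = np.repeat(np.arange(n_clusters), m)
x = rng.normal(size=n_clusters * m)
u = np.repeat(rng.normal(scale=0.8, size=n_clusters), m)   # shared cluster effect
y = (rng.uniform(size=x.size) < 1 / (1 + np.exp(-(-0.5 + x + u)))).astype(int)

X = sm.add_constant(x)
full = sm.GEE(y, X, groups=cluster, family=sm.families.Binomial(),
              cov_struct=sm.cov_struct.Exchangeable()).fit()

# Exact cluster-deletion influence: refit with each cluster removed and
# record the change in the estimated regression coefficients.
dbeta = []
for g in range(n_clusters):
    keep = cluster != g
    refit = sm.GEE(y[keep], X[keep], groups=cluster[keep],
                   family=sm.families.Binomial(),
                   cov_struct=sm.cov_struct.Exchangeable()).fit()
    dbeta.append(full.params - refit.params)

dbeta = pd.DataFrame(dbeta, columns=["const", "x"])
print(dbeta.abs().max())   # largest change in each coefficient from deleting one cluster
```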

  5. Methods for estimating drought streamflow probabilities for Virginia streams

    USGS Publications Warehouse

    Austin, Samuel H.

    2014-01-01

    Maximum likelihood logistic regression model equations used to estimate drought flow probabilities for Virginia streams are presented for 259 hydrologic basins in Virginia. Winter streamflows were used to estimate the likelihood of streamflows during the subsequent drought-prone summer months. The maximum likelihood logistic regression models identify probable streamflows from 5 to 8 months in advance. More than 5 million streamflow daily values collected over the period of record (January 1, 1900 through May 16, 2012) were compiled and analyzed over a minimum 10-year (maximum 112-year) period of record. The analysis yielded the 46,704 equations with statistically significant fit statistics and parameter ranges published in two tables in this report. These model equations produce summer month (July, August, and September) drought flow threshold probabilities as a function of streamflows during the previous winter months (November, December, January, and February). Example calculations are provided, demonstrating how to use the equations to estimate probable streamflows as much as 8 months in advance.

  6. Parameter Estimates in Differential Equation Models for Population Growth

    ERIC Educational Resources Information Center

    Winkel, Brian J.

    2011-01-01

    We estimate the parameters present in several differential equation models of population growth, specifically logistic growth models and two-species competition models. We discuss student-evolved strategies and offer "Mathematica" code for a gradient search approach. We use historical (1930s) data from microbial studies of the Russian biologist,…
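
    The article offers Mathematica code for a gradient search; as a rough analogue (not the authors' code), here is a sketch that fits the growth rate r and carrying capacity K of a logistic curve to noisy observations by nonlinear least squares, with synthetic data standing in for the historical microbial counts.

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(1)
r_true, K_true, P0 = 0.4, 500.0, 5.0

def logistic_curve(t, r, K):
    # closed-form logistic growth starting from the known initial size P0
    return K / (1.0 + (K - P0) / P0 * np.exp(-r * t))

t = np.linspace(0, 25, 26)
P_obs = logistic_curve(t, r_true, K_true) + rng.normal(scale=10.0, size=t.size)

(r_hat, K_hat), cov = curve_fit(logistic_curve, t, P_obs, p0=[0.1, 300.0])
print(r_hat, K_hat)               # point estimates, should land near (0.4, 500)
print(np.sqrt(np.diag(cov)))      # approximate standard errors
```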

  7. Robust mislabel logistic regression without modeling mislabel probabilities.

    PubMed

    Hung, Hung; Jou, Zhi-Yu; Huang, Su-Yun

    2018-03-01

Logistic regression is among the most widely used statistical methods for linear discriminant analysis. In many applications, we only observe possibly mislabeled responses. Fitting a conventional logistic regression can then lead to biased estimation. One common resolution is to fit a mislabel logistic regression model, which takes mislabeled responses into consideration. Another common method is to adopt a robust M-estimation by down-weighting suspected instances. In this work, we propose a new robust mislabel logistic regression based on γ-divergence. Our proposal possesses two advantageous features: (1) It does not need to model the mislabel probabilities. (2) The minimum γ-divergence estimation leads to a weighted estimating equation without the need to include any bias correction term, that is, it is automatically bias-corrected. These features make the proposed γ-logistic regression more robust in model fitting and more intuitive for model interpretation through a simple weighting scheme. Our method is also easy to implement, and two types of algorithms are included. Simulation studies and the Pima data application are presented to demonstrate the performance of γ-logistic regression. © 2017, The International Biometric Society.
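
    To illustrate the "simple weighting scheme" idea, the sketch below iterates a weighted logistic fit in which each observation is down-weighted by the fitted probability of its observed label raised to a power γ. This is a generic probability-power down-weighting scheme, not the paper's exact minimum γ-divergence estimating equation; γ is treated as a tuning constant and convergence of the fixed-point iteration is not guaranteed in general.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 500
x = rng.normal(size=n)
X = np.column_stack([np.ones(n), x])
p_true = 1 / (1 + np.exp(-(0.5 + 2.0 * x)))
y = (rng.uniform(size=n) < p_true).astype(float)
flip = rng.uniform(size=n) < 0.05            # mislabel 5% of the responses
y[flip] = 1 - y[flip]

gamma = 0.5                                  # down-weighting strength (tuning constant)
beta = np.zeros(2)
for _ in range(200):
    p_hat = 1 / (1 + np.exp(-(X @ beta)))
    # weight each case by the fitted probability of its observed label, to the power gamma
    w = np.where(y == 1, p_hat, 1 - p_hat) ** gamma
    score = X.T @ (w * (y - p_hat))          # weighted estimating function
    hess = X.T @ (X * (w * p_hat * (1 - p_hat))[:, None])
    beta = beta + np.linalg.solve(hess, score)

print(beta)   # typically less attenuated than the ordinary MLE under mislabeling
```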

  8. Methods for estimating selected low-flow frequency statistics for unregulated streams in Kentucky

    USGS Publications Warehouse

    Martin, Gary R.; Arihood, Leslie D.

    2010-01-01

    This report provides estimates of, and presents methods for estimating, selected low-flow frequency statistics for unregulated streams in Kentucky including the 30-day mean low flows for recurrence intervals of 2 and 5 years (30Q2 and 30Q5) and the 7-day mean low flows for recurrence intervals of 5, 10, and 20 years (7Q2, 7Q10, and 7Q20). Estimates of these statistics are provided for 121 U.S. Geological Survey streamflow-gaging stations with data through the 2006 climate year, which is the 12-month period ending March 31 of each year. Data were screened to identify the periods of homogeneous, unregulated flows for use in the analyses. Logistic-regression equations are presented for estimating the annual probability of the selected low-flow frequency statistics being equal to zero. Weighted-least-squares regression equations were developed for estimating the magnitude of the nonzero 30Q2, 30Q5, 7Q2, 7Q10, and 7Q20 low flows. Three low-flow regions were defined for estimating the 7-day low-flow frequency statistics. The explicit explanatory variables in the regression equations include total drainage area and the mapped streamflow-variability index measured from a revised statewide coverage of this characteristic. The percentage of the station low-flow statistics correctly classified as zero or nonzero by use of the logistic-regression equations ranged from 87.5 to 93.8 percent. The average standard errors of prediction of the weighted-least-squares regression equations ranged from 108 to 226 percent. The 30Q2 regression equations have the smallest standard errors of prediction, and the 7Q20 regression equations have the largest standard errors of prediction. The regression equations are applicable only to stream sites with low flows unaffected by regulation from reservoirs and local diversions of flow and to drainage basins in specified ranges of basin characteristics. Caution is advised when applying the equations for basins with characteristics near the applicable limits and for basins with karst drainage features.

  9. Bayesian Analysis of Nonlinear Structural Equation Models with Nonignorable Missing Data

    ERIC Educational Resources Information Center

    Lee, Sik-Yum

    2006-01-01

    A Bayesian approach is developed for analyzing nonlinear structural equation models with nonignorable missing data. The nonignorable missingness mechanism is specified by a logistic regression model. A hybrid algorithm that combines the Gibbs sampler and the Metropolis-Hastings algorithm is used to produce the joint Bayesian estimates of…

  10. Methods for estimating selected low-flow frequency statistics and harmonic mean flows for streams in Iowa

    USGS Publications Warehouse

    Eash, David A.; Barnes, Kimberlee K.

    2017-01-01

    A statewide study was conducted to develop regression equations for estimating six selected low-flow frequency statistics and harmonic mean flows for ungaged stream sites in Iowa. The estimation equations developed for the six low-flow frequency statistics include: the annual 1-, 7-, and 30-day mean low flows for a recurrence interval of 10 years, the annual 30-day mean low flow for a recurrence interval of 5 years, and the seasonal (October 1 through December 31) 1- and 7-day mean low flows for a recurrence interval of 10 years. Estimation equations also were developed for the harmonic-mean-flow statistic. Estimates of these seven selected statistics are provided for 208 U.S. Geological Survey continuous-record streamgages using data through September 30, 2006. The study area comprises streamgages located within Iowa and 50 miles beyond the State's borders. Because trend analyses indicated statistically significant positive trends when considering the entire period of record for the majority of the streamgages, the longest, most recent period of record without a significant trend was determined for each streamgage for use in the study. The median number of years of record used to compute each of these seven selected statistics was 35. Geographic information system software was used to measure 54 selected basin characteristics for each streamgage. Following the removal of two streamgages from the initial data set, data collected for 206 streamgages were compiled to investigate three approaches for regionalization of the seven selected statistics. Regionalization, a process using statistical regression analysis, provides a relation for efficiently transferring information from a group of streamgages in a region to ungaged sites in the region. The three regionalization approaches tested included statewide, regional, and region-of-influence regressions. For the regional regression, the study area was divided into three low-flow regions on the basis of hydrologic characteristics, landform regions, and soil regions. A comparison of root mean square errors and average standard errors of prediction for the statewide, regional, and region-of-influence regressions determined that the regional regression provided the best estimates of the seven selected statistics at ungaged sites in Iowa. Because a significant number of streams in Iowa reach zero flow as their minimum flow during low-flow years, four different types of regression analyses were used: left-censored, logistic, generalized-least-squares, and weighted-least-squares regression. A total of 192 streamgages were included in the development of 27 regression equations for the three low-flow regions. For the northeast and northwest regions, a censoring threshold was used to develop 12 left-censored regression equations to estimate the 6 low-flow frequency statistics for each region. For the southern region a total of 12 regression equations were developed; 6 logistic regression equations were developed to estimate the probability of zero flow for the 6 low-flow frequency statistics and 6 generalized least-squares regression equations were developed to estimate the 6 low-flow frequency statistics, if nonzero flow is estimated first by use of the logistic equations. A weighted-least-squares regression equation was developed for each region to estimate the harmonic-mean-flow statistic. 
Average standard errors of estimate for the left-censored equations for the northeast region range from 64.7 to 88.1 percent and for the northwest region range from 85.8 to 111.8 percent. Misclassification percentages for the logistic equations for the southern region range from 5.6 to 14.0 percent. Average standard errors of prediction for generalized least-squares equations for the southern region range from 71.7 to 98.9 percent and pseudo coefficients of determination for the generalized-least-squares equations range from 87.7 to 91.8 percent. Average standard errors of prediction for weighted-least-squares equations developed for estimating the harmonic-mean-flow statistic for each of the three regions range from 66.4 to 80.4 percent. The regression equations are applicable only to stream sites in Iowa with low flows not significantly affected by regulation, diversion, or urbanization and with basin characteristics within the range of those used to develop the equations. If the equations are used at ungaged sites on regulated streams, or on streams affected by water-supply and agricultural withdrawals, then the estimates will need to be adjusted by the amount of regulation or withdrawal to estimate the actual flow conditions if that is of interest. Caution is advised when applying the equations for basins with characteristics near the applicable limits of the equations and for basins located in karst topography. A test of two drainage-area ratio methods using 31 pairs of streamgages, for the annual 7-day mean low-flow statistic for a recurrence interval of 10 years, indicates a weighted drainage-area ratio method provides better estimates than regional regression equations for an ungaged site on a gaged stream in Iowa when the drainage-area ratio is between 0.5 and 1.4. These regression equations will be implemented within the U.S. Geological Survey StreamStats web-based geographic-information-system tool. StreamStats allows users to click on any ungaged site on a river and compute estimates of the seven selected statistics; in addition, 90-percent prediction intervals and the measured basin characteristics for the ungaged sites also are provided. StreamStats also allows users to click on any streamgage in Iowa and estimates computed for these seven selected statistics are provided for the streamgage.

  11. A Solution to Separation and Multicollinearity in Multiple Logistic Regression

    PubMed Central

    Shen, Jianzhao; Gao, Sujuan

    2010-01-01

In dementia screening tests, item selection for shortening an existing screening test can be achieved using multiple logistic regression. However, maximum likelihood estimates for such logistic regression models often experience serious bias or even non-existence because of separation and multicollinearity problems resulting from a large number of highly correlated items. Firth (1993, Biometrika, 80(1), 27–38) proposed a penalized likelihood estimator for generalized linear models and it was shown to reduce bias and the non-existence problems. Ridge regression has been used in logistic regression to stabilize the estimates in cases of multicollinearity. However, neither approach solves the problems addressed by the other. In this paper, we propose a double penalized maximum likelihood estimator combining Firth’s penalized likelihood equation with a ridge parameter. We present a simulation study evaluating the empirical performance of the double penalized likelihood estimator in small to moderate sample sizes. We demonstrate the proposed approach using current screening data from a community-based dementia study. PMID:20376286
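
    A minimal numerical sketch of the idea (not the authors' implementation): maximize the Firth-penalized log-likelihood with an additional ridge penalty, l(beta) + 0.5 log|X'WX| - 0.5 lambda ||beta||^2, by generic numerical optimization, where W = diag(p(1-p)) is the logistic weight matrix. Whether to exempt the intercept from the ridge penalty is a design choice; for brevity it is penalized here too, and the data and lambda value are made up.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(3)
n, k, lam = 80, 4, 1.0                          # small sample, ridge parameter lambda
Z = rng.normal(size=(n, k))
Z[:, 1] = Z[:, 0] + 0.05 * rng.normal(size=n)   # two highly correlated "items"
X = np.column_stack([np.ones(n), Z])
beta_true = np.array([-0.5, 1.0, 1.0, 0.5, 0.0])
y = (rng.uniform(size=n) < 1 / (1 + np.exp(-X @ beta_true))).astype(float)

def neg_penalized_loglik(beta):
    eta = X @ beta
    p = 1 / (1 + np.exp(-eta))
    loglik = np.sum(y * eta - np.logaddexp(0, eta))
    W = p * (1 - p)
    firth = 0.5 * np.linalg.slogdet(X.T @ (X * W[:, None]))[1]   # Jeffreys/Firth term
    ridge = 0.5 * lam * np.sum(beta ** 2)        # ridge shrinkage (intercept included for brevity)
    return -(loglik + firth) + ridge

res = minimize(neg_penalized_loglik, np.zeros(X.shape[1]), method="BFGS")
print(res.x)    # double-penalized estimates remain finite even under near-separation
```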

  12. A Solution to Separation and Multicollinearity in Multiple Logistic Regression.

    PubMed

    Shen, Jianzhao; Gao, Sujuan

    2008-10-01

In dementia screening tests, item selection for shortening an existing screening test can be achieved using multiple logistic regression. However, maximum likelihood estimates for such logistic regression models often experience serious bias or even non-existence because of separation and multicollinearity problems resulting from a large number of highly correlated items. Firth (1993, Biometrika, 80(1), 27-38) proposed a penalized likelihood estimator for generalized linear models and it was shown to reduce bias and the non-existence problems. Ridge regression has been used in logistic regression to stabilize the estimates in cases of multicollinearity. However, neither approach solves the problems addressed by the other. In this paper, we propose a double penalized maximum likelihood estimator combining Firth's penalized likelihood equation with a ridge parameter. We present a simulation study evaluating the empirical performance of the double penalized likelihood estimator in small to moderate sample sizes. We demonstrate the proposed approach using current screening data from a community-based dementia study.

  13. An Evaluation of Three Approximate Item Response Theory Models for Equating Test Scores.

    ERIC Educational Resources Information Center

    Marco, Gary L.; And Others

    Three item response models were evaluated for estimating item parameters and equating test scores. The models, which approximated the traditional three-parameter model, included: (1) the Rasch one-parameter model, operationalized in the BICAL computer program; (2) an approximate three-parameter logistic model based on coarse group data divided…

14. MESSOC capabilities and results. [Model for Estimating Space Station Operations Costs]

    NASA Technical Reports Server (NTRS)

    Shishko, Robert

    1990-01-01

    MESSOC (Model for Estimating Space Station Operations Costs) is the result of a multi-year effort by NASA to understand and model the mature operations cost of Space Station Freedom. This paper focuses on MESSOC's ability to contribute to life-cycle cost analyses through its logistics equations and databases. Together, these afford MESSOC the capability to project not only annual logistics costs for a variety of Space Station scenarios, but critical non-cost logistics results such as annual Station maintenance crewhours, upweight/downweight, and on-orbit sparing availability as well. MESSOC results using current logistics databases and baseline scenario have already shown important implications for on-orbit maintenance approaches, space transportation systems, and international operations cost sharing.

  15. The use of generalized estimating equations in the analysis of motor vehicle crash data.

    PubMed

    Hutchings, Caroline B; Knight, Stacey; Reading, James C

    2003-01-01

    The purpose of this study was to determine if it is necessary to use generalized estimating equations (GEEs) in the analysis of seat belt effectiveness in preventing injuries in motor vehicle crashes. The 1992 Utah crash dataset was used, excluding crash participants where seat belt use was not appropriate (n=93,633). The model used in the 1996 Report to Congress [Report to congress on benefits of safety belts and motorcycle helmets, based on data from the Crash Outcome Data Evaluation System (CODES). National Center for Statistics and Analysis, NHTSA, Washington, DC, February 1996] was analyzed for all occupants with logistic regression, one level of nesting (occupants within crashes), and two levels of nesting (occupants within vehicles within crashes) to compare the use of GEEs with logistic regression. When using one level of nesting compared to logistic regression, 13 of 16 variance estimates changed more than 10%, and eight of 16 parameter estimates changed more than 10%. In addition, three of the independent variables changed from significant to insignificant (alpha=0.05). With the use of two levels of nesting, two of 16 variance estimates and three of 16 parameter estimates changed more than 10% from the variance and parameter estimates in one level of nesting. One of the independent variables changed from insignificant to significant (alpha=0.05) in the two levels of nesting model; therefore, only two of the independent variables changed from significant to insignificant when the logistic regression model was compared to the two levels of nesting model. The odds ratio of seat belt effectiveness in preventing injuries was 12% lower when a one-level nested model was used. Based on these results, we stress the need to use a nested model and GEEs when analyzing motor vehicle crash data.
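
    The comparison reported above is between ordinary logistic regression, which ignores the nesting of occupants within crashes, and GEE, which accounts for it. A minimal sketch on synthetic clustered data (standing in for the crash dataset) using statsmodels; the coefficient values and cluster structure are illustrative assumptions.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)
n_crashes, occupants = 400, 3
crash = np.repeat(np.arange(n_crashes), occupants)
belt = rng.integers(0, 2, size=crash.size)                     # 1 = belted
severity = np.repeat(rng.normal(size=n_crashes), occupants)    # shared crash-level effect
eta = -1.0 - 0.9 * belt + 1.2 * severity
injury = (rng.uniform(size=crash.size) < 1 / (1 + np.exp(-eta))).astype(int)

X = sm.add_constant(belt)

# Ordinary logistic regression: treats occupants as independent
logit_res = sm.Logit(injury, X).fit(disp=0)

# GEE with occupants nested within crashes (one level of nesting)
gee_res = sm.GEE(injury, X, groups=crash, family=sm.families.Binomial(),
                 cov_struct=sm.cov_struct.Exchangeable()).fit()

print("logit:", logit_res.params, logit_res.bse)
print("GEE:  ", gee_res.params, gee_res.bse)
```

    The point estimates are often similar; it is mainly the variance estimates, and hence significance decisions, that shift once the clustering is modeled, which matches the pattern reported in the abstract above.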

  16. A logistic regression equation for estimating the probability of a stream in Vermont having intermittent flow

    USGS Publications Warehouse

    Olson, Scott A.; Brouillette, Michael C.

    2006-01-01

    A logistic regression equation was developed for estimating the probability of a stream flowing intermittently at unregulated, rural stream sites in Vermont. These determinations can be used for a wide variety of regulatory and planning efforts at the Federal, State, regional, county and town levels, including such applications as assessing fish and wildlife habitats, wetlands classifications, recreational opportunities, water-supply potential, waste-assimilation capacities, and sediment transport. The equation will be used to create a derived product for the Vermont Hydrography Dataset having the streamflow characteristic of 'intermittent' or 'perennial.' The Vermont Hydrography Dataset is Vermont's implementation of the National Hydrography Dataset and was created at a scale of 1:5,000 based on statewide digital orthophotos. The equation was developed by relating field-verified perennial or intermittent status of a stream site during normal summer low-streamflow conditions in the summer of 2005 to selected basin characteristics of naturally flowing streams in Vermont. The database used to develop the equation included 682 stream sites with drainage areas ranging from 0.05 to 5.0 square miles. When the 682 sites were observed, 126 were intermittent (had no flow at the time of the observation) and 556 were perennial (had flowing water at the time of the observation). The results of the logistic regression analysis indicate that the probability of a stream having intermittent flow in Vermont is a function of drainage area, elevation of the site, the ratio of basin relief to basin perimeter, and the areal percentage of well- and moderately well-drained soils in the basin. Using a probability cutpoint (a lower probability indicates the site has perennial flow and a higher probability indicates the site has intermittent flow) of 0.5, the logistic regression equation correctly predicted the perennial or intermittent status of 116 test sites 85 percent of the time.

  17. Binomial outcomes in dataset with some clusters of size two: can the dependence of twins be accounted for? A simulation study comparing the reliability of statistical methods based on a dataset of preterm infants.

    PubMed

    Sauzet, Odile; Peacock, Janet L

    2017-07-20

    The analysis of perinatal outcomes often involves datasets with some multiple births. These are datasets mostly formed of independent observations and a limited number of clusters of size two (twins) and maybe of size three or more. This non-independence needs to be accounted for in the statistical analysis. Using simulated data based on a dataset of preterm infants we have previously investigated the performance of several approaches to the analysis of continuous outcomes in the presence of some clusters of size two. Mixed models have been developed for binomial outcomes but very little is known about their reliability when only a limited number of small clusters are present. Using simulated data based on a dataset of preterm infants we investigated the performance of several approaches to the analysis of binomial outcomes in the presence of some clusters of size two. Logistic models, several methods of estimation for the logistic random intercept models and generalised estimating equations were compared. The presence of even a small percentage of twins means that a logistic regression model will underestimate all parameters but a logistic random intercept model fails to estimate the correlation between siblings if the percentage of twins is too small and will provide similar estimates to logistic regression. The method which seems to provide the best balance between estimation of the standard error and the parameter for any percentage of twins is the generalised estimating equations. This study has shown that the number of covariates or the level two variance do not necessarily affect the performance of the various methods used to analyse datasets containing twins but when the percentage of small clusters is too small, mixed models cannot capture the dependence between siblings.

  18. Maximum likelihood estimation for predicting the probability of obtaining variable shortleaf pine regeneration densities

    Treesearch

    Thomas B. Lynch; Jean Nkouka; Michael M. Huebschmann; James M. Guldin

    2003-01-01

    A logistic equation is the basis for a model that predicts the probability of obtaining regeneration at specified densities. The density of regeneration (trees/ha) for which an estimate of probability is desired can be specified by means of independent variables in the model. When estimating parameters, the dependent variable is set to 1 if the regeneration density (...

  19. Updated logistic regression equations for the calculation of post-fire debris-flow likelihood in the western United States

    USGS Publications Warehouse

    Staley, Dennis M.; Negri, Jacquelyn A.; Kean, Jason W.; Laber, Jayme L.; Tillery, Anne C.; Youberg, Ann M.

    2016-06-30

    Wildfire can significantly alter the hydrologic response of a watershed to the extent that even modest rainstorms can generate dangerous flash floods and debris flows. To reduce public exposure to hazard, the U.S. Geological Survey produces post-fire debris-flow hazard assessments for select fires in the western United States. We use publicly available geospatial data describing basin morphology, burn severity, soil properties, and rainfall characteristics to estimate the statistical likelihood that debris flows will occur in response to a storm of a given rainfall intensity. Using an empirical database and refined geospatial analysis methods, we defined new equations for the prediction of debris-flow likelihood using logistic regression methods. We showed that the new logistic regression model outperformed previous models used to predict debris-flow likelihood.

  20. Recalibration of the Klales et al. (2012) method of sexing the human innominate for Mexican populations.

    PubMed

    Gómez-Valdés, Jorge A; Menéndez Garmendia, Antinea; García-Barzola, Lizbeth; Sánchez-Mejorada, Gabriela; Karam, Carlos; Baraybar, José Pablo; Klales, Alexandra

    2017-03-01

The aim of this study was to test the accuracy of the Klales et al. (2012) equation for sex estimation in a contemporary Mexican population. Our investigation was carried out on a sample of 203 left innominates of identified adult skeletons from the UNAM-Collection and the Santa María Xigui Cemetery, in Central Mexico. The original Klales equation produces a bias against males in sex estimation (86-92% accuracy versus 100% accuracy in females). Based on these results, the Klales et al. (2012) method was recalibrated with a new cut-off point for sex estimation in contemporary Mexican populations. The results show cross-validated classification accuracy rates as high as 100% after recalibrating the original logistic regression equation. Recalibration improved classification accuracy and eliminated the sex bias. This new formula will improve sex estimation for contemporary Mexican populations. © 2017 Wiley Periodicals, Inc.

  1. Stochastic dynamics and logistic population growth

    NASA Astrophysics Data System (ADS)

    Méndez, Vicenç; Assaf, Michael; Campos, Daniel; Horsthemke, Werner

    2015-06-01

    The Verhulst model is probably the best known macroscopic rate equation in population ecology. It depends on two parameters, the intrinsic growth rate and the carrying capacity. These parameters can be estimated for different populations and are related to the reproductive fitness and the competition for limited resources, respectively. We investigate analytically and numerically the simplest possible microscopic scenarios that give rise to the logistic equation in the deterministic mean-field limit. We provide a definition of the two parameters of the Verhulst equation in terms of microscopic parameters. In addition, we derive the conditions for extinction or persistence of the population by employing either the momentum-space spectral theory or the real-space Wentzel-Kramers-Brillouin approximation to determine the probability distribution function and the mean time to extinction of the population. Our analytical results agree well with numerical simulations.
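
    One of the simplest microscopic scenarios with a logistic mean-field limit is a birth-death process with per-capita birth rate b and death rate d plus pairwise competition c: the deterministic limit is dn/dt = (b - d)n - c n^2, i.e. the Verhulst equation with r = b - d and K = (b - d)/c. A minimal Gillespie-type simulation sketch (rate values are illustrative, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(5)
b, d, c = 2.0, 1.0, 0.01        # birth, death, and competition rates
r, K = b - d, (b - d) / c       # Verhulst parameters implied by the microscopic rates

def gillespie(n0=5, t_max=20.0):
    t, n = 0.0, n0
    times, sizes = [t], [n]
    while t < t_max and n > 0:
        birth = b * n
        death = d * n + c * n * (n - 1)     # density-dependent mortality
        total = birth + death
        t += rng.exponential(1.0 / total)   # waiting time to the next event
        n += 1 if rng.uniform() < birth / total else -1
        times.append(t); sizes.append(n)
    return np.array(times), np.array(sizes)

t, n = gillespie()
print("final population:", n[-1], "(mean-field carrying capacity K =", K, ")")
```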

  2. Estimating selected low-flow frequency statistics and harmonic-mean flows for ungaged, unregulated streams in Indiana

    USGS Publications Warehouse

    Martin, Gary R.; Fowler, Kathleen K.; Arihood, Leslie D.

    2016-09-06

    Information on low-flow characteristics of streams is essential for the management of water resources. This report provides equations for estimating the 1-, 7-, and 30-day mean low flows for a recurrence interval of 10 years and the harmonic-mean flow at ungaged, unregulated stream sites in Indiana. These equations were developed using the low-flow statistics and basin characteristics for 108 continuous-record streamgages in Indiana with at least 10 years of daily mean streamflow data through the 2011 climate year (April 1 through March 31). The equations were developed in cooperation with the Indiana Department of Environmental Management.Regression techniques were used to develop the equations for estimating low-flow frequency statistics and the harmonic-mean flows on the basis of drainage-basin characteristics. A geographic information system was used to measure basin characteristics for selected streamgages. A final set of 25 basin characteristics measured at all the streamgages were evaluated to choose the best predictors of the low-flow statistics.Logistic-regression equations applicable statewide are presented for estimating the probability that selected low-flow frequency statistics equal zero. These equations use the explanatory variables total drainage area, average transmissivity of the full thickness of the unconsolidated deposits within 1,000 feet of the stream network, and latitude of the basin outlet. The percentage of the streamgage low-flow statistics correctly classified as zero or nonzero using the logistic-regression equations ranged from 86.1 to 88.9 percent.Generalized-least-squares regression equations applicable statewide for estimating nonzero low-flow frequency statistics use total drainage area, the average hydraulic conductivity of the top 70 feet of unconsolidated deposits, the slope of the basin, and the index of permeability and thickness of the Quaternary surficial sediments as explanatory variables. The average standard error of prediction of these regression equations ranges from 55.7 to 61.5 percent.Regional weighted-least-squares regression equations were developed for estimating the harmonic-mean flows by dividing the State into three low-flow regions. The Northern region uses total drainage area and the average transmissivity of the entire thickness of unconsolidated deposits as explanatory variables. The Central region uses total drainage area, the average hydraulic conductivity of the entire thickness of unconsolidated deposits, and the index of permeability and thickness of the Quaternary surficial sediments. The Southern region uses total drainage area and the percent of the basin covered by forest. The average standard error of prediction for these equations ranges from 39.3 to 66.7 percent.The regional regression equations are applicable only to stream sites with low flows unaffected by regulation and to stream sites with drainage basin characteristic values within specified limits. Caution is advised when applying the equations for basins with characteristics near the applicable limits and for basins with karst drainage features and for urbanized basins. Extrapolations near and beyond the applicable basin characteristic limits will have unknown errors that may be large. Equations are presented for use in estimating the 90-percent prediction interval of the low-flow statistics estimated by use of the regression equations at a given stream site.The regression equations are to be incorporated into the U.S. 
Geological Survey StreamStats Web-based application for Indiana. StreamStats allows users to select a stream site on a map and automatically measure the needed basin characteristics and compute the estimated low-flow statistics and associated prediction intervals.

  3. On the analysis of Canadian Holstein dairy cow lactation curves using standard growth functions.

    PubMed

    López, S; France, J; Odongo, N E; McBride, R A; Kebreab, E; AlZahal, O; McBride, B W; Dijkstra, J

    2015-04-01

    Six classical growth functions (monomolecular, Schumacher, Gompertz, logistic, Richards, and Morgan) were fitted to individual and average (by parity) cumulative milk production curves of Canadian Holstein dairy cows. The data analyzed consisted of approximately 91,000 daily milk yield records corresponding to 122 first, 99 second, and 92 third parity individual lactation curves. The functions were fitted using nonlinear regression procedures, and their performance was assessed using goodness-of-fit statistics (coefficient of determination, residual mean squares, Akaike information criterion, and the correlation and concordance coefficients between observed and adjusted milk yields at several days in milk). Overall, all the growth functions evaluated showed an acceptable fit to the cumulative milk production curves, with the Richards equation ranking first (smallest Akaike information criterion) followed by the Morgan equation. Differences among the functions in their goodness-of-fit were enlarged when fitted to average curves by parity, where the sigmoidal functions with a variable point of inflection (Richards and Morgan) outperformed the other 4 equations. All the functions provided satisfactory predictions of milk yield (calculated from the first derivative of the functions) at different lactation stages, from early to late lactation. The Richards and Morgan equations provided the most accurate estimates of peak yield and total milk production per 305-d lactation, whereas the least accurate estimates were obtained with the logistic equation. In conclusion, classical growth functions (especially sigmoidal functions with a variable point of inflection) proved to be feasible alternatives to fit cumulative milk production curves of dairy cows, resulting in suitable statistical performance and accurate estimates of lactation traits. Copyright © 2015 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.

  4. A logistic regression equation for estimating the probability of a stream flowing perennially in Massachusetts

    USGS Publications Warehouse

    Bent, Gardner C.; Archfield, Stacey A.

    2002-01-01

A logistic regression equation was developed for estimating the probability of a stream flowing perennially at a specific site in Massachusetts. The equation provides city and town conservation commissions and the Massachusetts Department of Environmental Protection with an additional method for assessing whether streams are perennial or intermittent at a specific site in Massachusetts. This information is needed to assist these environmental agencies, who administer the Commonwealth of Massachusetts Rivers Protection Act of 1996, which establishes a 200-foot-wide protected riverfront area extending along the length of each side of the stream from the mean annual high-water line along each side of perennial streams, with exceptions in some urban areas. The equation was developed by relating the verified perennial or intermittent status of a stream site to selected basin characteristics of naturally flowing streams (no regulation by dams, surface-water withdrawals, ground-water withdrawals, diversion, waste-water discharge, and so forth) in Massachusetts. Stream sites used in the analysis were identified as perennial or intermittent on the basis of review of measured streamflow at sites throughout Massachusetts and on visual observation at sites in the South Coastal Basin, southeastern Massachusetts. Measured or observed zero flow(s) during months of extended drought as defined by the 310 Code of Massachusetts Regulations (CMR) 10.58(2)(a) were not considered when designating the perennial or intermittent status of a stream site. The database used to develop the equation included a total of 305 stream sites (84 intermittent- and 89 perennial-stream sites in the State, and 50 intermittent- and 82 perennial-stream sites in the South Coastal Basin). Stream sites included in the database had drainage areas that ranged from 0.14 to 8.94 square miles in the State and from 0.02 to 7.00 square miles in the South Coastal Basin. Results of the logistic regression analysis indicate that the probability of a stream flowing perennially at a specific site in Massachusetts can be estimated as a function of (1) drainage area (cube root), (2) drainage density, (3) areal percentage of stratified-drift deposits (square root), (4) mean basin slope, and (5) location in the South Coastal Basin or the remainder of the State. Although the equation developed provides an objective means for estimating the probability of a stream flowing perennially at a specific site, the reliability of the equation is constrained by the data used to develop the equation. The equation may not be reliable for (1) drainage areas less than 0.14 square mile in the State or less than 0.02 square mile in the South Coastal Basin, (2) streams with losing reaches, or (3) streams draining the southern part of the South Coastal Basin and the eastern part of the Buzzards Bay Basin and the entire area of Cape Cod and the Islands Basins.

  5. R programming for parameters estimation of geographically weighted ordinal logistic regression (GWOLR) model based on Newton Raphson

    NASA Astrophysics Data System (ADS)

    Zuhdi, Shaifudin; Saputro, Dewi Retno Sari

    2017-03-01

    GWOLR model used for represent relationship between dependent variable has categories and scale of category is ordinal with independent variable influenced the geographical location of the observation site. Parameters estimation of GWOLR model use maximum likelihood provide system of nonlinear equations and hard to be found the result in analytic resolution. By finishing it, it means determine the maximum completion, this thing associated with optimizing problem. The completion nonlinear system of equations optimize use numerical approximation, which one is Newton Raphson method. The purpose of this research is to make iteration algorithm Newton Raphson and program using R software to estimate GWOLR model. Based on the research obtained that program in R can be used to estimate the parameters of GWOLR model by forming a syntax program with command "while".
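
    For a plain (non-geographically-weighted, binary) logistic regression, the Newton-Raphson iteration the authors program is beta <- beta + (X'WX)^(-1) X'(y - p) with W = diag(p(1 - p)). A stripped-down sketch of that iteration with a while loop, on synthetic data; this is ordinary logistic regression, not the full GWOLR model.

```python
import numpy as np

rng = np.random.default_rng(6)
n = 300
x = rng.normal(size=n)
X = np.column_stack([np.ones(n), x])
y = (rng.uniform(size=n) < 1 / (1 + np.exp(-(0.3 + 1.5 * x)))).astype(float)

beta = np.zeros(2)
tol, max_iter, it = 1e-8, 50, 0
while it < max_iter:
    p = 1 / (1 + np.exp(-(X @ beta)))
    score = X.T @ (y - p)                          # gradient of the log-likelihood
    hessian = -X.T @ (X * (p * (1 - p))[:, None])  # matrix of second derivatives
    step = np.linalg.solve(hessian, score)
    beta = beta - step                             # Newton-Raphson update
    it += 1
    if np.max(np.abs(step)) < tol:
        break

print(it, beta)   # converges in a handful of iterations for well-behaved data
```

    For logistic models the observed and expected information coincide, so this Newton-Raphson update is identical to the Fisher scoring update discussed in a later record of this listing.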

  6. An evaluation of three-dimensional photogrammetric and morphometric techniques for estimating volume and mass in Weddell seals Leptonychotes weddellii

    PubMed Central

    Ruscher-Hill, Brandi; Kirkham, Amy L.; Burns, Jennifer M.

    2018-01-01

    Body mass dynamics of animals can indicate critical associations between extrinsic factors and population vital rates. Photogrammetry can be used to estimate mass of individuals in species whose life histories make it logistically difficult to obtain direct body mass measurements. Such studies typically use equations to relate volume estimates from photogrammetry to mass; however, most fail to identify the sources of error between the estimated and actual mass. Our objective was to identify the sources of error that prevent photogrammetric mass estimation from directly predicting actual mass, and develop a methodology to correct this issue. To do this, we obtained mass, body measurements, and scaled photos for 56 sedated Weddell seals (Leptonychotes weddellii). After creating a three-dimensional silhouette in the image processing program PhotoModeler Pro, we used horizontal scale bars to define the ground plane, then removed the below-ground portion of the animal’s estimated silhouette. We then re-calculated body volume and applied an expected density to estimate animal mass. We compared the body mass estimates derived from this silhouette slice method with estimates derived from two other published methodologies: body mass calculated using photogrammetry coupled with a species-specific correction factor, and estimates using elliptical cones and measured tissue densities. The estimated mass values (mean ± standard deviation 345±71 kg for correction equation, 346±75 kg for silhouette slice, 343±76 kg for cones) were not statistically distinguishable from each other or from actual mass (346±73 kg) (ANOVA with Tukey HSD post-hoc, p>0.05 for all pairwise comparisons). We conclude that volume overestimates from photogrammetry are likely due to the inability of photo modeling software to properly render the ventral surface of the animal where it contacts the ground. Due to logistical differences between the “correction equation”, “silhouette slicing”, and “cones” approaches, researchers may find one technique more useful for certain study programs. In combination or exclusively, these three-dimensional mass estimation techniques have great utility in field studies with repeated measures sampling designs or where logistic constraints preclude weighing animals. PMID:29320573

  7. Filtering data from the collaborative initial glaucoma treatment study for improved identification of glaucoma progression.

    PubMed

    Schell, Greggory J; Lavieri, Mariel S; Stein, Joshua D; Musch, David C

    2013-12-21

Open-angle glaucoma (OAG) is a prevalent, degenerative ocular disease which can lead to blindness without proper clinical management. The tests used to assess disease progression are susceptible to process and measurement noise. The aim of this study was to develop a methodology which accounts for the inherent noise in the data and improves the identification of significant disease progression. Longitudinal observations from the Collaborative Initial Glaucoma Treatment Study (CIGTS) were used to parameterize and validate a Kalman filter model and logistic regression function. The Kalman filter estimates the true value of biomarkers associated with OAG and forecasts future values of these variables. We develop two logistic regression models via generalized estimating equations (GEE) for calculating the probability of experiencing significant OAG progression: one model based on the raw measurements from CIGTS and another model based on the Kalman filter estimates of the CIGTS data. Receiver operating characteristic (ROC) curves and associated area under the ROC curve (AUC) estimates are calculated using cross-fold validation. The logistic regression model developed using Kalman filter estimates as data input achieves higher sensitivity and specificity than the model developed using raw measurements. The mean AUC for the Kalman filter-based model is 0.961 while the mean AUC for the raw measurements model is 0.889. Hence, using the probability function generated via Kalman filter estimates and GEE for logistic regression, we are able to more accurately classify patients and instances as experiencing significant OAG progression. A Kalman filter approach for estimating the true value of OAG biomarkers resulted in data input which improved the accuracy of a logistic regression classification model compared to a model using raw measurements as input. This methodology accounts for process and measurement noise to enable improved discrimination between progression and nonprogression in chronic diseases.
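
    As background for the filtering step, here is a minimal scalar Kalman filter sketch for a slowly drifting biomarker observed with measurement noise. It assumes a random-walk state model with made-up noise variances; the CIGTS model is multivariate and parameterized from the study data.

```python
import numpy as np

rng = np.random.default_rng(7)
T = 40
true = np.cumsum(rng.normal(scale=0.3, size=T)) + 25.0   # slowly drifting biomarker signal
obs = true + rng.normal(scale=2.0, size=T)               # noisy measurements

q, r = 0.3 ** 2, 2.0 ** 2    # assumed process and measurement noise variances
x_hat, p_var = obs[0], r     # initialize with the first observation
estimates = [x_hat]
for z in obs[1:]:
    # predict step (random-walk state model)
    p_var = p_var + q
    # update step
    k_gain = p_var / (p_var + r)
    x_hat = x_hat + k_gain * (z - x_hat)
    p_var = (1 - k_gain) * p_var
    estimates.append(x_hat)

estimates = np.array(estimates)
print("raw RMSE:     ", np.sqrt(np.mean((obs - true) ** 2)))
print("filtered RMSE:", np.sqrt(np.mean((estimates - true) ** 2)))   # typically smaller
```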

  8. Finite-sample corrected generalized estimating equation of population average treatment effects in stepped wedge cluster randomized trials.

    PubMed

    Scott, JoAnna M; deCamp, Allan; Juraska, Michal; Fay, Michael P; Gilbert, Peter B

    2017-04-01

    Stepped wedge designs are increasingly commonplace and advantageous for cluster randomized trials when it is both unethical to assign placebo, and it is logistically difficult to allocate an intervention simultaneously to many clusters. We study marginal mean models fit with generalized estimating equations for assessing treatment effectiveness in stepped wedge cluster randomized trials. This approach has advantages over the more commonly used mixed models that (1) the population-average parameters have an important interpretation for public health applications and (2) they avoid untestable assumptions on latent variable distributions and avoid parametric assumptions about error distributions, therefore, providing more robust evidence on treatment effects. However, cluster randomized trials typically have a small number of clusters, rendering the standard generalized estimating equation sandwich variance estimator biased and highly variable and hence yielding incorrect inferences. We study the usual asymptotic generalized estimating equation inferences (i.e., using sandwich variance estimators and asymptotic normality) and four small-sample corrections to generalized estimating equation for stepped wedge cluster randomized trials and for parallel cluster randomized trials as a comparison. We show by simulation that the small-sample corrections provide improvement, with one correction appearing to provide at least nominal coverage even with only 10 clusters per group. These results demonstrate the viability of the marginal mean approach for both stepped wedge and parallel cluster randomized trials. We also study the comparative performance of the corrected methods for stepped wedge and parallel designs, and describe how the methods can accommodate interval censoring of individual failure times and incorporate semiparametric efficient estimators.

  9. Beyond logistic regression: structural equations modelling for binary variables and its application to investigating unobserved confounders.

    PubMed

    Kupek, Emil

    2006-03-15

    Structural equation modelling (SEM) has been increasingly used in medical statistics for solving a system of related regression equations. However, a great obstacle for its wider use has been its difficulty in handling categorical variables within the framework of generalised linear models. A large data set with a known structure among two related outcomes and three independent variables was generated to investigate the use of Yule's transformation of odds ratio (OR) into Q-metric by (OR-1)/(OR+1) to approximate Pearson's correlation coefficients between binary variables whose covariance structure can be further analysed by SEM. Percent of correctly classified events and non-events was compared with the classification obtained by logistic regression. The performance of SEM based on Q-metric was also checked on a small (N = 100) random sample of the data generated and on a real data set. SEM successfully recovered the generated model structure. SEM of real data suggested a significant influence of a latent confounding variable which would have not been detectable by standard logistic regression. SEM classification performance was broadly similar to that of the logistic regression. The analysis of binary data can be greatly enhanced by Yule's transformation of odds ratios into estimated correlation matrix that can be further analysed by SEM. The interpretation of results is aided by expressing them as odds ratios which are the most frequently used measure of effect in medical statistics.
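
    The transformation at the heart of the method is Yule's Q = (OR - 1)/(OR + 1), which maps an odds ratio onto a correlation-like scale in [-1, 1]. A short sketch computing it from a 2x2 table; the counts are made up for illustration.

```python
def yule_q(a, b, c, d):
    """Yule's Q for a 2x2 table [[a, b], [c, d]]: (OR - 1) / (OR + 1)."""
    odds_ratio = (a * d) / (b * c)
    return (odds_ratio - 1) / (odds_ratio + 1)

# hypothetical cross-tabulation of two binary variables
print(yule_q(a=40, b=10, c=15, d=35))   # about 0.81, a strong positive association
```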

  10. Accuracy and equivalence testing of crown ratio models and assessment of their impact on diameter growth and basal area increment predictions of two variants of the Forest Vegetation Simulator

    Treesearch

    Laura P. Leites; Andrew P. Robinson; Nicholas L. Crookston

    2009-01-01

    Diameter growth (DG) equations in many existing forest growth and yield models use tree crown ratio (CR) as a predictor variable. Where CR is not measured, it is estimated from other measured variables. We evaluated CR estimation accuracy for the models in two Forest Vegetation Simulator variants: the exponential and the logistic CR models used in the North...

  11. Prediction equation for estimating total daily energy requirements of special operations personnel.

    PubMed

    Barringer, N D; Pasiakos, S M; McClung, H L; Crombie, A P; Margolis, L M

    2018-01-01

Special Operations Forces (SOF) engage in a variety of military tasks, many of which produce high energy expenditures, leading to undesired energy deficits and loss of body mass. Therefore, the ability to accurately estimate daily energy requirements would be useful for accurate logistical planning. The aim was to generate a predictive equation estimating the energy requirements of SOF. A retrospective analysis was conducted of data collected from SOF personnel engaged in 12 different SOF training scenarios. Energy expenditure and total body water were determined using the doubly labeled water technique. Physical activity level was determined as daily energy expenditure divided by resting metabolic rate. Physical activity level was broken into quartiles (0 = mission prep, 1 = common warrior tasks, 2 = battle drills, 3 = specialized intense activity) to generate a physical activity factor (PAF). Regression analysis was used to construct two predictive equations (Model A: body mass and PAF; Model B: fat-free mass and PAF) estimating daily energy expenditures. Average measured energy expenditure during SOF training was 4468 (range: 3700 to 6300) kcal·d-1. Regression analysis revealed that physical activity level (r = 0.91; P < 0.05) and body mass (r = 0.28; P < 0.05; Model A), or fat-free mass (FFM; r = 0.32; P < 0.05; Model B), were the factors that most highly predicted energy expenditures. Predictive equations coupling PAF with body mass (Model A) and FFM (Model B) were correlated (r = 0.74 and r = 0.76, respectively) and did not differ (mean ± SEM: Model A, 4463 ± 65 kcal·d-1; Model B, 4462 ± 61 kcal·d-1) from DLW-measured energy expenditures. By quantifying and grouping SOF training exercises into activity factors, SOF energy requirements can be predicted with reasonable accuracy, and these equations can be used by dietetic/logistical personnel to plan appropriate feeding regimens to meet SOF nutritional requirements across their mission profile.

  12. Fisher Scoring Method for Parameter Estimation of Geographically Weighted Ordinal Logistic Regression (GWOLR) Model

    NASA Astrophysics Data System (ADS)

    Widyaningsih, Purnami; Retno Sari Saputro, Dewi; Nugrahani Putri, Aulia

    2017-06-01

The GWOLR model combines geographically weighted regression (GWR) and ordinal logistic regression (OLR). Its parameters are estimated by maximum likelihood, which yields a difficult-to-solve system of nonlinear equations, so a numerical approximation approach is required. The usual iterative approach is the Newton-Raphson (NR) method. A disadvantage of the NR method is that its Hessian matrix of second derivatives must be recomputed at every iteration and the iteration does not always produce converging results. To address this, the NR iteration is modified by replacing the Hessian matrix with the Fisher information matrix, a method termed Fisher scoring (FS). The present research seeks to determine the GWOLR model parameter estimates using the Fisher scoring method and to apply the estimation to data on the level of vulnerability to Dengue Hemorrhagic Fever (DHF) in Semarang. The research concludes that health facilities make the greatest contribution to the probability of the number of DHF sufferers in both villages. Based on the number of sufferers, the IR category of DHF in both villages can be determined.

  13. A revised logistic regression equation and an automated procedure for mapping the probability of a stream flowing perennially in Massachusetts

    USGS Publications Warehouse

    Bent, Gardner C.; Steeves, Peter A.

    2006-01-01

    A revised logistic regression equation and an automated procedure were developed for mapping the probability of a stream flowing perennially in Massachusetts. The equation provides city and town conservation commissions and the Massachusetts Department of Environmental Protection a method for assessing whether streams are intermittent or perennial at a specific site in Massachusetts by estimating the probability of a stream flowing perennially at that site. This information could assist the environmental agencies who administer the Commonwealth of Massachusetts Rivers Protection Act of 1996, which establishes a 200-foot-wide protected riverfront area extending from the mean annual high-water line along each side of a perennial stream, with exceptions for some urban areas. The equation was developed by relating the observed intermittent or perennial status of a stream site to selected basin characteristics of naturally flowing streams (defined as having no regulation by dams, surface-water withdrawals, ground-water withdrawals, diversion, wastewater discharge, and so forth) in Massachusetts. This revised equation differs from the equation developed in a previous U.S. Geological Survey study in that it is solely based on visual observations of the intermittent or perennial status of stream sites across Massachusetts and on the evaluation of several additional basin and land-use characteristics as potential explanatory variables in the logistic regression analysis. The revised equation estimated more accurately the intermittent or perennial status of the observed stream sites than the equation from the previous study. Stream sites used in the analysis were identified as intermittent or perennial based on visual observation during low-flow periods from late July through early September 2001. The database of intermittent and perennial streams included a total of 351 naturally flowing (no regulation) sites, of which 85 were observed to be intermittent and 266 perennial. Stream sites included in the database had drainage areas that ranged from 0.04 to 10.96 square miles. Of the 66 stream sites with drainage areas greater than 2.00 square miles, 2 sites were intermittent and 64 sites were perennial. Thus, stream sites with drainage areas greater than 2.00 square miles were assumed to flow perennially, and the database used to develop the logistic regression equation included only those stream sites with drainage areas less than 2.00 square miles. The database for the equation included 285 stream sites that had drainage areas less than 2.00 square miles, of which 83 sites were intermittent and 202 sites were perennial. Results of the logistic regression analysis indicate that the probability of a stream flowing perennially at a specific site in Massachusetts can be estimated as a function of four explanatory variables: (1) drainage area (natural logarithm), (2) areal percentage of sand and gravel deposits, (3) areal percentage of forest land, and (4) region of the state (eastern region or western region). Although the equation provides an objective means of determining the probability of a stream flowing perennially at a specific site, the reliability of the equation is constrained by the data used in its development. 
The equation is not recommended for (1) losing stream reaches or (2) streams whose ground-water contributing areas do not coincide with their surface-water drainage areas, such as many streams draining the Southeast Coastal Region-the southern part of the South Coastal Basin, the eastern part of the Buzzards Bay Basin, and the entire area of the Cape Cod and the Islands Basins. If the equation were used on a regulated stream site, the estimated intermittent or perennial status would reflect the natural flow conditions for that site. An automated mapping procedure was developed to determine the intermittent or perennial status of stream sites along reaches throughout a basin. The procedure delineates the drainage area boundaries, determines values for the four explanatory variables, and solves the equation for estimating the probability of a stream flowing perennially at two locations on a headwater (first-order) stream reach-one near its confluence or end point and one near its headwaters or start point. The automated procedure then determines the intermittent or perennial status of the reach on the basis of the calculated probability values and a probability cutpoint (a stream is considered to flow perennially at a cutpoint of 0.56 or greater for this study) for the two locations or continues to loop upstream or downstream between locations less than and greater than the cutpoint of 0.56 to determine the transition point from an intermittent to a perennial stream. If the first-order stream reach is determined to be intermittent, the procedure moves to the next downstream reach and repeats the same process. The automated procedure then moves to the next first-order stream and repeats the process until the entire basin is mapped. A map of the intermittent and perennial stream reaches in the Shawsheen River Basin is provided on a CD-ROM that accompanies this report. The CD-ROM also contains ArcReader 9.0, a freeware product, that allows a user to zoom in and out, set a scale, pan, turn on and off map layers (such as a USGS topographic map), and print a map of the stream site with a scale bar. Maps of the intermittent and perennial stream reaches in Massachusetts will provide city and town conservation commissions and the Massachusetts Department of Environmental Protection with an additional method for assessing the intermittent or perennial status of stream sites.

  14. Comparing the Discrete and Continuous Logistic Models

    ERIC Educational Resources Information Center

    Gordon, Sheldon P.

    2008-01-01

    The solutions of the discrete logistic growth model based on a difference equation and the continuous logistic growth model based on a differential equation are compared and contrasted. The investigation is conducted using a dynamic interactive spreadsheet. (Contains 5 figures.)
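
    A short script along the lines of the article's spreadsheet investigation is sketched below: it iterates the discrete logistic difference equation and evaluates the closed-form solution of the continuous logistic differential equation on the same time grid. The parameter values are illustrative, not taken from the article.

      import numpy as np

      r, K, P0, T = 0.8, 100.0, 5.0, 20   # growth rate, carrying capacity, initial size, steps

      # Discrete logistic model: P_{n+1} = P_n + r * P_n * (1 - P_n / K)
      discrete = [P0]
      for _ in range(T):
          P = discrete[-1]
          discrete.append(P + r * P * (1 - P / K))

      # Continuous logistic model: closed-form solution of dP/dt = r * P * (1 - P / K)
      t = np.arange(T + 1)
      continuous = K / (1 + (K / P0 - 1) * np.exp(-r * t))

      for n in range(0, T + 1, 5):
          print(f"t={t[n]:2d}  discrete={discrete[n]:7.2f}  continuous={continuous[n]:7.2f}")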

  15. Computational tools for fitting the Hill equation to dose-response curves.

    PubMed

    Gadagkar, Sudhindra R; Call, Gerald B

    2015-01-01

    Many biological response curves commonly assume a sigmoidal shape that can be approximated well by means of the 4-parameter nonlinear logistic equation, also called the Hill equation. However, estimation of the Hill equation parameters requires access to commercial software or the ability to write computer code. Here we present two user-friendly and freely available computer programs to fit the Hill equation: a Solver-based Microsoft Excel template and a stand-alone GUI-based "point and click" program called HEPB. Both computer programs use an iterative method to estimate two of the Hill equation parameters (EC50 and the Hill slope), while constraining the values of the other two parameters (the minimum and maximum asymptotes of the response variable) to fit the Hill equation to the data. In addition, HEPB draws the prediction band at a user-defined confidence level, and determines the EC50 value for each of the limits of this band to give boundary values that help objectively delineate sensitive, normal and resistant responses to the drug being tested. Both programs were tested by analyzing twelve datasets that varied widely in data values, sample size and slope, and were found to yield estimates of the Hill equation parameters that were essentially identical to those provided by commercial software such as GraphPad Prism and the nls function in the statistical programming language R. The Excel template provides a means to estimate the parameters of the Hill equation and plot the regression line in a familiar Microsoft Office environment. HEPB, in addition to providing the above results, also computes the prediction band for the data at a user-defined level of confidence, and determines objective cut-off values to distinguish among response types (sensitive, normal and resistant). Furthermore, HEPB has the option to simulate 500 response values based on the range of values of the dose variable in the original data and the fit of the Hill equation to that data. Copyright © 2014. Published by Elsevier Inc.
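
    For readers without Excel or HEPB at hand, the same constrained fit can be sketched in Python with scipy.optimize.curve_fit: the lower and upper asymptotes are held fixed while EC50 and the Hill slope are estimated iteratively. The dose-response data below are synthetic and purely illustrative.

      import numpy as np
      from scipy.optimize import curve_fit

      # 4-parameter logistic (Hill) equation with the lower and upper asymptotes held
      # fixed, so only EC50 and the Hill slope are estimated, mirroring the constrained
      # fit described above.  The data are synthetic.
      BOTTOM, TOP = 0.0, 100.0

      def hill(dose, ec50, hill_slope):
          return BOTTOM + (TOP - BOTTOM) / (1.0 + (ec50 / dose) ** hill_slope)

      dose = np.array([0.01, 0.03, 0.1, 0.3, 1.0, 3.0, 10.0])
      response = np.array([2.1, 6.5, 18.0, 42.3, 70.9, 90.2, 97.5])

      (ec50, slope), _ = curve_fit(hill, dose, response, p0=[1.0, 1.0])
      print(f"EC50 = {ec50:.3f}, Hill slope = {slope:.2f}")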

  16. Stochastic growth logistic model with aftereffect for batch fermentation process

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rosli, Norhayati; Ayoubi, Tawfiqullah; Bahar, Arifah

    2014-06-19

    In this paper, the stochastic growth logistic model with aftereffect for the cell growth of C. acetobutylicum P262 and the Luedeking-Piret equations for solvent production in a batch fermentation system are introduced. The parameter values of the mathematical models are estimated via the Levenberg-Marquardt optimization method of non-linear least squares. We apply the Milstein scheme for solving the stochastic models numerically. The efficiency of the mathematical models is measured by comparing the simulated results with the experimental data for microbial growth and solvent production in the batch system. Low values of the Root Mean-Square Error (RMSE) of the stochastic models with aftereffect indicate good fits.

  17. Stochastic growth logistic model with aftereffect for batch fermentation process

    NASA Astrophysics Data System (ADS)

    Rosli, Norhayati; Ayoubi, Tawfiqullah; Bahar, Arifah; Rahman, Haliza Abdul; Salleh, Madihah Md

    2014-06-01

    In this paper, the stochastic growth logistic model with aftereffect for the cell growth of C. acetobutylicum P262 and the Luedeking-Piret equations for solvent production in a batch fermentation system are introduced. The parameter values of the mathematical models are estimated via the Levenberg-Marquardt optimization method of non-linear least squares. We apply the Milstein scheme for solving the stochastic models numerically. The efficiency of the mathematical models is measured by comparing the simulated results with the experimental data for microbial growth and solvent production in the batch system. Low values of the Root Mean-Square Error (RMSE) of the stochastic models with aftereffect indicate good fits.
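
    The Milstein discretisation mentioned above can be sketched for a plain stochastic logistic equation with multiplicative noise, dX = rX(1 - X/K)dt + σX dW; the aftereffect (delay) term and the Luedeking-Piret product equation of the paper are omitted, and all parameter values are illustrative.

      import numpy as np

      rng = np.random.default_rng(1)
      r, K, sigma = 0.5, 10.0, 0.1        # illustrative growth rate, capacity, noise intensity
      dt, n_steps, X0 = 0.01, 2000, 0.5

      # Milstein scheme for dX = r*X*(1 - X/K) dt + sigma*X dW
      X = np.empty(n_steps + 1)
      X[0] = X0
      for n in range(n_steps):
          dW = rng.normal(0.0, np.sqrt(dt))
          drift = r * X[n] * (1 - X[n] / K)
          diffusion = sigma * X[n]
          # Euler term plus the Milstein correction 0.5*b*b'*(dW^2 - dt) with b' = sigma
          X[n + 1] = X[n] + drift * dt + diffusion * dW + 0.5 * sigma * diffusion * (dW**2 - dt)

      print(f"simulated biomass at t = {n_steps * dt:.0f}: {X[-1]:.3f}")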

  18. Order-of-magnitude estimates of latency (time to appearance) and refill time of a cancer from a single cancer 'stem' cell compared by an exponential and a logistic equation.

    PubMed

    Anderson, Ken M; Rubenstein, Marvin; Guinan, Patrick; Patel, Minu

    2012-01-01

    The time required before a mass of cancer cells considered to have originated from a single malignantly transformed cancer 'stem' cell reaches a certain number has not been studied. Applications might include determining the time at which the cell mass reaches a size detectable by X-rays or physical examination, or modeling growth rates in vitro for comparison with other models or established data. We employed a simple logarithmic equation and a common logistic equation incorporating 'feedback' for unknown variables of cell birth, growth, division, and death that can be used to model cell proliferation. It can be used in association with free or commercial statistical software. Results with these two equations, varying the proliferation rate, nominally reduced by generational cell loss, are presented in two tables. The resulting equation, instructions, examples, and necessary mathematical software are available in the online appendix, where several parameters of interest can be modified by the reader, at www.uic.edu/nursing/publicationsupplements/tobillion_Anderson_Rubenstein_Guinan_Patel1.pdf. Reducing the proliferation rate, by whatever alterations are employed, markedly increases the time to reach 10^9 cells originating from an initial progenitor. In thinking about multistep oncogenesis, it is useful to consider the profound effect that variations in the effective proliferation rate may have during cancer development. This can be approached with the proposed equation, which is easy to use and available for further peer fine-tuning to be used in future modeling of cell growth.
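
    An order-of-magnitude calculation of this kind can be reproduced with a few lines of code: the exponential and logistic growth laws are inverted for the time at which the cell count reaches 10^9. The effective proliferation rate and carrying capacity below are illustrative assumptions, not values from the article.

      import math

      N0, N_TARGET = 1.0, 1e9
      r = 0.02        # illustrative effective proliferation rate per day (growth minus loss)
      K = 1e12        # illustrative carrying capacity for the logistic model

      # Exponential growth: N(t) = N0 * exp(r*t)  =>  t = ln(N/N0) / r
      t_exp = math.log(N_TARGET / N0) / r

      # Logistic growth: N(t) = K / (1 + (K/N0 - 1) * exp(-r*t)), inverted at N = N_TARGET
      t_log = -math.log((K / N_TARGET - 1) / (K / N0 - 1)) / r

      # Far below the carrying capacity the two estimates nearly coincide.
      print(f"exponential model: about {t_exp:.0f} days to reach 1e9 cells")
      print(f"logistic model:    about {t_log:.0f} days to reach 1e9 cells")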

  19. Positive solutions to logistic type equations with harvesting

    NASA Astrophysics Data System (ADS)

    Girão, Pedro; Tehrani, Hossein

    We use comparison principles, variational arguments and a truncation method to obtain positive solutions to logistic type equations with harvesting both in R^N and in a bounded domain Ω ⊂ R^N, with N ⩾ 3, when the carrying capacity of the environment is not constant. By relaxing the growth assumption on the coefficients of the differential equation we derive a new equation which is easily solved. The solution of this new equation is then used to produce a positive solution of our original problem.

  20. Estimating irrigation water use in the humid eastern United States

    USGS Publications Warehouse

    Levin, Sara B.; Zarriello, Phillip J.

    2013-01-01

    Accurate accounting of irrigation water use is an important part of the U.S. Geological Survey National Water-Use Information Program and the WaterSMART initiative to help maintain sustainable water resources in the Nation. Irrigation water use in the humid eastern United States is not well characterized because of inadequate reporting and wide variability associated with climate, soils, crops, and farming practices. To better understand irrigation water use in the eastern United States, two types of predictive models were developed and compared by using metered irrigation water-use data for corn, cotton, peanut, and soybean crops in Georgia and turf farms in Rhode Island. Reliable metered irrigation data were limited to these areas. The first predictive model that was developed uses logistic regression to predict the occurrence of irrigation on the basis of antecedent climate conditions. Logistic regression equations were developed for corn, cotton, peanut, and soybean crops by using weekly irrigation water-use data from 36 metered sites in Georgia in 2009 and 2010 and turf farms in Rhode Island from 2000 to 2004. For the weeks when irrigation was predicted to take place, the irrigation water-use volume was estimated by multiplying the average metered irrigation application rate by the irrigated acreage for a given crop. The second predictive model that was developed is a crop-water-demand model that uses a daily soil water balance to estimate the water needs of a crop on a given day based on climate, soil, and plant properties. Crop-water-demand models were developed independently of reported irrigation water-use practices and relied on knowledge of plant properties that are available in the literature. Both modeling approaches require accurate accounting of irrigated area and crop type to estimate total irrigation water use. Water-use estimates from both modeling methods were compared to the metered irrigation data from Rhode Island and Georgia that were used to develop the models as well as two independent validation datasets from Georgia and Virginia that were not used in model development. Irrigation water-use estimates from the logistic regression method more closely matched mean reported irrigation rates than estimates from the crop-water-demand model when compared to the irrigation data used to develop the equations. The root mean squared errors (RMSEs) for the logistic regression estimates of mean annual irrigation ranged from 0.3 to 2.0 inches (in.) for the five crop types; RMSEs for the crop-water-demand models ranged from 1.4 to 3.9 in. However, when the models were applied and compared to the independent validation datasets from southwest Georgia from 2010, and from Virginia from 1999 to 2007, the crop-water-demand model estimates were as good as or better at predicting the mean irrigation volume than the logistic regression models for most crop types. RMSEs for logistic regression estimates of mean annual irrigation ranged from 1.0 to 7.0 in. for validation data from Georgia and from 1.8 to 4.9 in. for validation data from Virginia; RMSEs for crop-water-demand model estimates ranged from 2.1 to 5.8 in. for Georgia data and from 2.0 to 3.9 in. for Virginia data. In general, regression-based models performed better in areas that had quality daily or weekly irrigation data from which the regression equations were developed; however, the regression models were less reliable than the crop-water-demand models when applied outside the area for which they were developed. 
In most eastern coastal states that do not have quality irrigation data, the crop-water-demand model can be used more reliably. The development of predictive models of irrigation water use in this study was hindered by a lack of quality irrigation data. Many mid-Atlantic and New England states do not require irrigation water use to be reported. A survey of irrigation data from 14 eastern coastal states from Maine to Georgia indicated that, with the exception of the data in Georgia, irrigation data in the states that do require reporting commonly did not contain requisite ancillary information such as irrigated area or crop type, lacked precision, or were at an aggregated temporal scale making them unsuitable for use in the development of predictive models. Confidence in the reliability of either modeling method is affected by uncertainty in the reported data from which the models were developed or validated. Only through additional collection of quality data and further study can the accuracy and uncertainty of irrigation water-use estimates be improved in the humid eastern United States.
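
    The crop-water-demand idea described above can be illustrated with a minimal daily soil-water-balance sketch: precipitation refills the root zone, crop evapotranspiration depletes it, and irrigation is triggered when plant-available water falls below a threshold. The field capacity, crop coefficient, and trigger level used here are illustrative assumptions and are not taken from the USGS models.

      # Minimal daily soil-water-balance sketch of a crop-water-demand model.
      FIELD_CAPACITY_IN = 4.0    # plant-available water the root zone can hold, in inches (assumed)
      KC = 1.05                  # crop coefficient applied to reference ET (assumed)
      TRIGGER_FRACTION = 0.5     # irrigate when available water drops below 50 percent (assumed)

      def simulate_irrigation(daily_precip_in, daily_ref_et_in):
          """Return total irrigation (inches) applied over the simulation period."""
          soil_water = FIELD_CAPACITY_IN
          irrigation_total = 0.0
          for precip, ref_et in zip(daily_precip_in, daily_ref_et_in):
              soil_water = min(soil_water + precip - KC * ref_et, FIELD_CAPACITY_IN)
              soil_water = max(soil_water, 0.0)
              if soil_water < TRIGGER_FRACTION * FIELD_CAPACITY_IN:
                  irrigation_total += FIELD_CAPACITY_IN - soil_water   # refill the root zone
                  soil_water = FIELD_CAPACITY_IN
          return irrigation_total

      # A 30-day dry spell with no rain and 0.25 in/day of reference evapotranspiration.
      total = simulate_irrigation([0.0] * 30, [0.25] * 30)
      print(f"irrigation needed over the dry spell: {total:.1f} inches")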

  1. A Very Simple Method to Calculate the (Positive) Largest Lyapunov Exponent Using Interval Extensions

    NASA Astrophysics Data System (ADS)

    Mendes, Eduardo M. A. M.; Nepomuceno, Erivelton G.

    2016-12-01

    In this letter, a very simple method to calculate the positive Largest Lyapunov Exponent (LLE) based on the concept of interval extensions and using the original equations of motion is presented. The exponent is estimated from the slope of the line derived from the lower bound error when considering two interval extensions of the original system. It is shown that the algorithm is robust, fast and easy to implement and can be considered an alternative to other algorithms available in the literature. The method has been successfully tested on five well-known systems: the Logistic, Hénon, Lorenz and Rössler equations and the Mackey-Glass system.
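
    A rough numerical sketch of the idea, applied to the logistic map, is shown below: the same map is iterated through two mathematically equivalent expressions ("interval extensions"), and the slope of the logarithm of their lower-bound error approximates the LLE (ln 2 ≈ 0.693 for r = 4). The specific extensions and parameter values are illustrative choices, not necessarily those used by the authors.

      import numpy as np

      # Iterate the logistic map through two mathematically equivalent expressions;
      # finite-precision rounding makes the orbits diverge at a rate governed by the
      # largest Lyapunov exponent.
      r, x, y = 4.0, 0.1, 0.1
      log_err = []
      for i in range(200):
          x = r * x * (1.0 - x)          # extension 1
          y = r * y - r * y * y          # extension 2 (same map, rewritten)
          err = abs(x - y)
          if err >= 0.1:                 # leave the region of linear growth of log-error
              break
          if err > 0.0:
              log_err.append((i, np.log(err)))

      if len(log_err) >= 2:
          steps, logs = zip(*log_err)
          slope = np.polyfit(steps, logs, 1)[0]     # slope of the lower-bound error line
          print(f"estimated LLE ~ {slope:.3f} (theory for r = 4: ln 2 ~ 0.693)")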

  2. Measurements of the talus in the assessment of population affinity.

    PubMed

    Bidmos, Mubarak A; Dayal, Manisha R; Adegboye, Oyelola A

    2018-06-01

    As part of their routine work, forensic anthropologists are expected to report population affinity as part of the biological profile of an individual. The skull is the most widely used bone for the estimation of population affinity but it is not always present in a forensic case. Thus, other bones that preserve well have been shown to give a good indication of either the sex or population affinity of an individual. In this study, the potential of measurements of the talus was investigated for the purpose of estimating population affinity in South Africans. Nine measurements from two hundred and twenty tali of South African Africans (SAA) and South African Whites (SAW) from the Raymond A. Dart Collection of Human Skeletons were used. Direct and step-wise discriminant function and logistic regression analyses were carried out using SPSS and SAS. Talar length was the best single variable for discriminating between these two groups for males while in females the head height was the best single predictor. Average accuracies for correct population affinity classification using logistic regression analysis were higher than those obtained from discriminant function analysis. This study was the first of its type to employ discriminant function analyses and logistic regression analyses to estimate the population affinity of an individual from the talus. Thus these equations can now be used by South African anthropologists when estimating the population affinity of dismembered or damaged or incomplete skeletal remains of SAA and SAW. Copyright © 2018 Elsevier B.V. All rights reserved.

  3. Scale-invariance underlying the logistic equation and its social applications

    NASA Astrophysics Data System (ADS)

    Hernando, A.; Plastino, A.

    2013-01-01

    On the basis of dynamical principles we i) advance a derivation of the Logistic Equation (LE), widely employed (among multiple applications) in the simulation of population growth, and ii) demonstrate that scale-invariance and a mean-value constraint are sufficient and necessary conditions for obtaining it. We also generalize the LE to multi-component systems and show that the above dynamical mechanisms underlie a large number of scale-free processes. Examples are presented regarding city-populations, diffusion in complex networks, and popularity of technological products, all of them obeying the multi-component logistic equation in an either stochastic or deterministic way.

  4. A local equation for differential diagnosis of β-thalassemia trait and iron deficiency anemia by logistic regression analysis in Southeast Iran.

    PubMed

    Sargolzaie, Narjes; Miri-Moghaddam, Ebrahim

    2014-01-01

    The most common differential diagnosis of β-thalassemia (β-thal) trait is iron deficiency anemia. Several red blood cell equations have been introduced in different studies for differential diagnosis between β-thal trait and iron deficiency anemia. Due to genetic variations in different regions, these equations may not be useful in all populations. The aim of this study was to determine a native equation with high accuracy for differential diagnosis of β-thal trait and iron deficiency anemia for the Sistan and Baluchestan population by logistic regression analysis. We selected 77 iron deficiency anemia and 100 β-thal trait cases. We used binary logistic regression analysis to determine the best equation for predicting the probability of β-thal trait versus iron deficiency anemia in our population. We compared the diagnostic values and receiver operating characteristic (ROC) curves of this equation and 10 other published equations in discriminating β-thal trait and iron deficiency anemia. The binary logistic regression analysis yielded an equation with an area under the curve (AUC) of 0.998. Based on the ROC curves and AUC, the Green & King, England & Fraser, and Sirdah indices, respectively, were the most accurate after our equation. We suggest that to obtain the best equation and cut-off for each region, one needs to evaluate region-specific information, particularly in areas where populations are homogeneous, to provide a specific formula for differentiating between β-thal trait and iron deficiency anemia.
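
    The general workflow (fit a binary logistic regression to red-cell indices, then judge it by the ROC AUC) can be sketched as follows. The features, data values, and separability below are synthetic placeholders; they are not the study's data or its fitted equation.

      import numpy as np
      from sklearn.linear_model import LogisticRegression
      from sklearn.metrics import roc_auc_score

      rng = np.random.default_rng(0)
      n = 150

      # Synthetic red-cell indices (MCV and RBC count) standing in for the study's
      # predictors; the values and separability are invented for illustration only.
      mcv = np.r_[rng.normal(63, 4, n), rng.normal(72, 5, n)]
      rbc = np.r_[rng.normal(5.8, 0.5, n), rng.normal(4.3, 0.5, n)]
      X = np.column_stack([mcv, rbc])
      y = np.r_[np.ones(n), np.zeros(n)]      # 1 = beta-thal trait, 0 = iron deficiency anemia

      model = LogisticRegression().fit(X, y)
      auc = roc_auc_score(y, model.predict_proba(X)[:, 1])
      print(f"intercept = {model.intercept_[0]:.2f}, coefficients = {model.coef_[0]}, AUC = {auc:.3f}")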

  5. Least Squares Method for Equating Logistic Ability Scales: A General Approach and Evaluation. Iowa Testing Programs Occasional Papers, Number 30.

    ERIC Educational Resources Information Center

    Haebara, Tomokazu

    When several ability scales in item response models are separately derived from different test forms administered to different samples of examinees, these scales must be equated to a common scale because their units and origins are arbitrarily determined and generally different from scale to scale. A general method for equating logistic ability…

  6. Peak oxygen consumption measured during the stair-climbing test in lung resection candidates.

    PubMed

    Brunelli, Alessandro; Xiumé, Francesco; Refai, Majed; Salati, Michele; Di Nunzio, Luca; Pompili, Cecilia; Sabbatini, Armando

    2010-01-01

    The stair-climbing test is commonly used in the preoperative evaluation of lung resection candidates, but it is difficult to standardize and provides little physiologic information on the performance. The aim was to verify the association between the altitude reached and the VO2peak measured during the stair-climbing test. 109 consecutive candidates for lung resection performed a symptom-limited stair-climbing test with direct breath-by-breath measurement of VO2peak by a portable gas analyzer. Stepwise logistic regression and bootstrap analyses were used to verify the association of several perioperative variables with a VO2peak <15 ml/kg/min. Subsequently, multiple regression analysis was performed to develop an equation for estimating VO2peak from stair-climbing parameters and other patient-related variables. 56% of patients climbing <14 m had a VO2peak <15 ml/kg/min, whereas 98% of those climbing >22 m had a VO2peak >15 ml/kg/min. The altitude reached at the stair-climbing test was the only significant predictor of a VO2peak <15 ml/kg/min after logistic regression analysis. Multiple regression analysis yielded an equation for estimating VO2peak that factors in altitude (p < 0.0001), speed of ascent (p = 0.005) and body mass index (p = 0.0008). There was an association between altitude and VO2peak measured during the stair-climbing test. Most of the patients climbing more than 22 m are able to generate high values of VO2peak and can proceed to surgery without any additional tests. All others need to be referred for a formal cardiopulmonary exercise test. In addition, we were able to generate an equation to estimate VO2peak, which could assist in streamlining the preoperative workup and could be used across different settings to standardize this test. Copyright (c) 2010 S. Karger AG, Basel.

  7. A stochastic model for the normal tissue complication probability (NTCP) and applications.

    PubMed

    Stocks, Theresa; Hillen, Thomas; Gong, Jiafen; Burger, Martin

    2017-12-11

    The normal tissue complication probability (NTCP) is a measure for the estimated side effects of a given radiation treatment schedule. Here we use a stochastic logistic birth-death process to define an organ-specific and patient-specific NTCP. We emphasize an asymptotic simplification which relates the NTCP to the solution of a logistic differential equation. This approach is based on simple modelling assumptions and prepares a framework for the use of the NTCP model in clinical practice. As an example, we consider side effects of prostate cancer brachytherapy such as increased urinary frequency, urinary retention and acute rectal dysfunction. © The authors 2016. Published by Oxford University Press on behalf of the Institute of Mathematics and its Applications. All rights reserved.

  8. The use of the logistic model in space motion sickness prediction

    NASA Technical Reports Server (NTRS)

    Lin, Karl K.; Reschke, Millard F.

    1987-01-01

    The one-equation and the two-equation logistic models were used to predict subjects' susceptibility to motion sickness in KC-135 parabolic flights using data from other ground-based motion sickness tests. The results show that the logistic models correctly predicted substantially more cases (an average of 13 percent more) in the data subset used for model building. Overall, the logistic models produced 53 to 65 percent correct predictions for the three endpoint parameters, whereas the Bayes linear discriminant procedure ranged from 48 to 65 percent correct for the cross-validation sample.

  9. Development of a restricted state space stochastic differential equation model for bacterial growth in rich media.

    PubMed

    Møller, Jan Kloppenborg; Bergmann, Kirsten Riber; Christiansen, Lasse Engbo; Madsen, Henrik

    2012-07-21

    In the present study, bacterial growth in a rich medium is analysed in a Stochastic Differential Equation (SDE) framework. It is demonstrated that the SDE formulation and smoothed state estimates provide a systematic framework for data-driven model improvements, using random walk hidden states. Bacterial growth is limited by the available substrate and the inclusion of diffusion must obey this natural restriction. By inclusion of a modified logistic diffusion term it is possible to introduce a diffusion term flexible enough to capture both the growth phase and the stationary phase, while concentration is restricted to the natural state space (substrate and bacteria non-negative). The case considered is the growth of Salmonella and Enterococcus in a rich medium. It is found that a hidden state is necessary to capture the lag phase of growth, and that a flexible logistic diffusion term is needed to capture the random behaviour of the growth model. Further, it is concluded that the Monod effect is not needed to capture the dynamics of bacterial growth in the data presented. Copyright © 2012 Elsevier Ltd. All rights reserved.

  10. Epidemiological characteristics of reported sporadic and outbreak cases of E. coli O157 in people from Alberta, Canada (2000-2002): methodological challenges of comparing clustered to unclustered data.

    PubMed

    Pearl, D L; Louie, M; Chui, L; Doré, K; Grimsrud, K M; Martin, S W; Michel, P; Svenson, L W; McEwen, S A

    2008-04-01

    Using multivariable models, we compared whether there were significant differences between reported outbreak and sporadic cases in terms of their sex, age, and mode and site of disease transmission. We also determined the potential role of administrative, temporal, and spatial factors within these models. We compared a variety of approaches to account for clustering of cases in outbreaks, including weighted logistic regression, random effects models, generalized estimating equations, robust variance estimates, and the random selection of one case from each outbreak. Age and mode of transmission were the only epidemiologically and statistically significant covariates in our final models using the above approaches. Weighting observations in a logistic regression model by the inverse of their outbreak size appeared to be a relatively robust and valid means of modelling these data. Some analytical techniques designed to account for clustering had difficulty converging or producing realistic measures of association.

  11. Estimation of sex and stature using anthropometry of the upper extremity in an Australian population.

    PubMed

    Howley, Donna; Howley, Peter; Oxenham, Marc F

    2018-06-01

    Stature and a further 8 anthropometric dimensions were recorded from the arms and hands of a sample of 96 staff and students from the Australian National University and The University of Newcastle, Australia. These dimensions were used to create simple and multiple logistic regression models for sex estimation and simple and multiple linear regression equations for stature estimation of a contemporary Australian population. Overall sex classification accuracies using the models created were comparable to those of similar studies. The stature estimation models achieved standard errors of the estimate (SEE) that were comparable to, and in many cases lower than, those achieved in similar research. Generic, non-sex-specific models achieved SEEs and R^2 values similar to those of the sex-specific models, indicating that stature may be accurately estimated when sex is unknown. Copyright © 2018 Elsevier B.V. All rights reserved.

  12. Using phenomenological models for forecasting the 2015 Ebola challenge.

    PubMed

    Pell, Bruce; Kuang, Yang; Viboud, Cecile; Chowell, Gerardo

    2018-03-01

    The rising number of novel pathogens threatening the human population has motivated the application of mathematical modeling for forecasting the trajectory and size of epidemics. We summarize the real-time forecasting results of the logistic equation during the 2015 Ebola challenge, which focused on predicting synthetic data derived from a detailed individual-based model of Ebola transmission dynamics and control. We also carry out a post-challenge comparison of two simple phenomenological models. In particular, we systematically compare the logistic growth model and a recently introduced generalized Richards model (GRM) that captures a range of early epidemic growth profiles, from sub-exponential to exponential growth. Specifically, we assess the performance of each model for estimating the reproduction number, generate short-term forecasts of the epidemic trajectory, and predict the final epidemic size. During the challenge, the logistic equation consistently underestimated the final epidemic size, peak timing and the number of cases at peak timing, with average mean absolute percentage errors (MAPE) of 0.49, 0.36 and 0.40, respectively. Post-challenge, the GRM, which has the flexibility to reproduce a range of epidemic growth profiles from early sub-exponential to exponential growth dynamics, outperformed the logistic growth model in ascertaining the final epidemic size as more incidence data was made available, while the logistic model underestimated the final epidemic size even with an increasing amount of data on the evolving epidemic. Incidence forecasts provided by the generalized Richards model performed better across all scenarios and time points than those of the logistic growth model, with the mean RMS decreasing from 78.00 (logistic) to 60.80 (GRM). Both models provided reasonable predictions of the effective reproduction number, but the GRM slightly outperformed the logistic growth model with a MAPE of 0.08 compared to 0.10, averaged across all scenarios and time points. Our findings further support the consideration of transmission models that incorporate flexible early epidemic growth profiles in the forecasting toolkit. Such models are particularly useful for quickly evaluating a developing infectious disease outbreak using only case incidence time series from the early phase of the outbreak. Copyright © 2016 The Authors. Published by Elsevier B.V. All rights reserved.

  13. It’s Time to Take the Chill Out of Cost Containment and Re-Energize a Key Acquisition Practice

    DTIC Science & Technology

    2010-04-01

    Wright had already created cost estimating equations to predict the cost of airplanes over long production runs (Hamaker, 1994). Oddly enough, many are...acquisition outcomes (GAO 06-66). Washington DC: U.S. Government Printing Office. Hamaker, J. (1994). But what will it cost? The history of NASA cost...from http://cost.jsc.nasa.gov/hamaker.html Kobren, B. (2009). Shaping the life cycle logistics workforce to achieve desired sustainment outcomes

  14. The usefulness of "corrected" body mass index vs. self-reported body mass index: comparing the population distributions, sensitivity, specificity, and predictive utility of three correction equations using Canadian population-based data.

    PubMed

    Dutton, Daniel J; McLaren, Lindsay

    2014-05-06

    National data on body mass index (BMI), computed from self-reported height and weight, is readily available for many populations including the Canadian population. Because self-reported weight is found to be systematically under-reported, it has been proposed that the bias in self-reported BMI can be corrected using equations derived from data sets which include both self-reported and measured height and weight. Such correction equations have been developed and adopted. We aim to evaluate the usefulness (i.e., distributional similarity; sensitivity and specificity; and predictive utility vis-à-vis disease outcomes) of existing and new correction equations in population-based research. The Canadian Community Health Surveys from 2005 and 2008 include both measured and self-reported values of height and weight, which allows for construction and evaluation of correction equations. We focused on adults age 18-65, and compared three correction equations (two correcting weight only, and one correcting BMI) against self-reported and measured BMI. We first compared population distributions of BMI. Second, we compared the sensitivity and specificity of self-reported BMI and corrected BMI against measured BMI. Third, we compared the self-reported and corrected BMI in terms of association with health outcomes using logistic regression. All corrections outperformed self-report when estimating the full BMI distribution; the weight-only correction outperformed the BMI-only correction for females in the 23-28 kg/m2 BMI range. In terms of sensitivity/specificity, when estimating obesity prevalence, corrected values of BMI (from any equation) were superior to self-report. In terms of modelling BMI-disease outcome associations, findings were mixed, with no correction proving consistently superior to self-report. If researchers are interested in modelling the full population distribution of BMI, or estimating the prevalence of obesity in a population, then a correction of any kind included in this study is recommended. If the researcher is interested in using BMI as a predictor variable for modelling disease, then both self-reported and corrected BMI result in biased estimates of association.

  15. On the effects of nonlinear boundary conditions in diffusive logistic equations on bounded domains

    NASA Astrophysics Data System (ADS)

    Cantrell, Robert Stephen; Cosner, Chris

    We study a diffusive logistic equation with nonlinear boundary conditions. The equation arises as a model for a population that grows logistically inside a patch and crosses the patch boundary at a rate that depends on the population density. Specifically, the rate at which the population crosses the boundary is assumed to decrease as the density of the population increases. The model is motivated by empirical work on the Glanville fritillary butterfly. We derive local and global bifurcation results which show that the model can have multiple equilibria and in some parameter ranges can support Allee effects. The analysis leads to eigenvalue problems with nonstandard boundary conditions.

  16. Development of the Integrated Biomass Supply Analysis and Logistics Model (IBSAL)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sokhansanj, Shahabaddine; Webb, Erin; Turhollow Jr, Anthony F

    2008-06-01

    The Integrated Biomass Supply & Logistics (IBSAL) model is a dynamic (time dependent) model of operations that involve collection, harvest, storage, preprocessing, and transportation of feedstock for use at a biorefinery. The model uses mathematical equations to represent individual unit operations. These unit operations can be assembled by the user to represent the working rate of equipment and queues to represent storage at facilities. The model calculates itemized costs, energy input, and carbon emissions. It estimates resource requirements and operational characteristics of the entire supply infrastructure. Weather plays an important role in biomass management and thus in IBSAL, dictating the moisture content of biomass and whether or not it can be harvested on a given day. The model calculates net biomass yield based on a soil conservation allowance (for crop residue) and dry matter losses during harvest and storage. This publication outlines the development of the model and provides examples of corn stover harvest and logistics.

  17. Bernoulli-Langevin Wind Speed Model for Simulation of Storm Events

    NASA Astrophysics Data System (ADS)

    Fürstenau, Norbert; Mittendorf, Monika

    2016-12-01

    We present a simple nonlinear dynamics Langevin model for predicting the instationary wind speed profile during storm events typically accompanying extreme low-pressure situations. It is based on a second-degree Bernoulli equation with δ-correlated Gaussian noise and may complement stationary stochastic wind models. Transition between increasing and decreasing wind speed and (quasi) stationary normal wind and storm states are induced by the sign change of the controlling time-dependent rate parameter k(t). This approach corresponds to the simplified nonlinear laser dynamics for the incoherent to coherent transition of light emission that can be understood by a phase transition analogy within equilibrium thermodynamics [H. Haken, Synergetics, 3rd ed., Springer, Berlin, Heidelberg, New York 1983/2004.]. Evidence for the nonlinear dynamics two-state approach is generated by fitting of two historical wind speed profiles (low-pressure situations "Xaver" and "Christian", 2013) taken from Meteorological Terminal Air Report weather data, with a logistic approximation (i.e. constant rate coefficients k) to the solution of our dynamical model using a sum of sigmoid functions. The analytical solution of our dynamical two-state Bernoulli equation as obtained with a sinusoidal rate ansatz k(t) of period T (=storm duration) exhibits reasonable agreement with the logistic fit to the empirical data. Noise parameter estimates of speed fluctuations are derived from empirical fit residuals and by means of a stationary solution of the corresponding Fokker-Planck equation. Numerical simulations with the Bernoulli-Langevin equation demonstrate the potential for stochastic wind speed profile modeling and predictive filtering under extreme storm events that is suggested for applications in anticipative air traffic management.

  18. About Global Stable of Solutions of Logistic Equation with Delay

    NASA Astrophysics Data System (ADS)

    Kaschenko, S. A.; Loginov, D. O.

    2017-12-01

    The article is devoted to determining all the arguments for which all positive solutions of the logistic equation with delay tend to zero as t → ∞. The authors prove the well-known Wright's conjecture on the evaluation of the set of such arguments. An approach that enables subsequent refinement of this evaluation has been developed.

  19. Score Equating and Item Response Theory: Some Practical Considerations.

    ERIC Educational Resources Information Center

    Cook, Linda L.; Eignor, Daniel R.

    The purposes of this paper are five-fold to discuss: (1) when item response theory (IRT) equating methods should provide better results than traditional methods; (2) which IRT model, the three-parameter logistic or the one-parameter logistic (Rasch), is the most reasonable to use; (3) what unique contributions IRT methods can offer the equating…

  20. The effect of high leverage points on the logistic ridge regression estimator having multicollinearity

    NASA Astrophysics Data System (ADS)

    Ariffin, Syaiba Balqish; Midi, Habshah

    2014-06-01

    This article is concerned with the performance of the logistic ridge regression estimation technique in the presence of multicollinearity and high leverage points. In logistic regression, multicollinearity exists among predictors and in the information matrix. The maximum likelihood estimator suffers a huge setback in the presence of multicollinearity, which causes regression estimates to have unduly large standard errors. To remedy this problem, a logistic ridge regression estimator is put forward. It is evident that the logistic ridge regression estimator outperforms the maximum likelihood approach for handling multicollinearity. The effect of high leverage points on the performance of the logistic ridge regression estimator is then investigated through a real data set and a simulation study. The findings signify that the logistic ridge regression estimator fails to provide better parameter estimates in the presence of both high leverage points and multicollinearity.

  1. Effect of Initial Conditions on Reproducibility of Scientific Research

    PubMed Central

    Djulbegovic, Benjamin; Hozo, Iztok

    2014-01-01

    Background: It is estimated that about half of currently published research cannot be reproduced. Many reasons have been offered as explanations for failure to reproduce scientific research findings, ranging from fraud to issues related to the design, conduct, analysis, or publishing of scientific research. We also postulate a sensitive dependence on initial conditions, by which small changes can result in large differences in the research findings when reproduction is attempted at later times. Methods: We employed a simple logistic regression equation to model the effect of covariates on the initial study findings. We then fed the output of the logistic equation into a logistic map function to model the stability of the results in repeated experiments over time. We illustrate the approach by modeling the effects of different factors on the choice of correct treatment. Results: We found that reproducibility of the study findings depended both on the initial values of all independent variables and on the rate of change in the baseline conditions, the latter being more important. When the rate of change in the baseline conditions between experiments is about 3.5 to 4, no research findings could be reproduced. However, when the rate of change between experiments is ≤2.5, the results become highly predictable between experiments. Conclusions: Many results cannot be reproduced because of changes in the initial conditions between experiments. Better control of the baseline conditions between experiments may help improve the reproducibility of scientific findings. PMID:25132705
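
    The two-step construction described above can be sketched directly: a logistic regression produces the initial finding, and that value is fed into a logistic map whose rate parameter represents the change in baseline conditions between experiments. The covariates and coefficients are hypothetical; only the qualitative thresholds (stable for rates ≤ 2.5, irreproducible near 3.5-4) come from the abstract.

      import math

      def initial_probability(covariates, coefficients, intercept):
          """Logistic regression step for the initial study finding (hypothetical coefficients)."""
          z = intercept + sum(b * x for b, x in zip(coefficients, covariates))
          return 1.0 / (1.0 + math.exp(-z))

      def repeat_experiments(p0, rate, n_repeats=20):
          """Feed the initial result into a logistic map to model repeated experiments."""
          p, trajectory = p0, [p0]
          for _ in range(n_repeats):
              p = rate * p * (1.0 - p)
              trajectory.append(p)
          return trajectory

      p0 = initial_probability([1.0, 0.5], coefficients=[0.8, -0.4], intercept=0.1)
      for rate in (2.5, 3.8):    # stable regime vs. the 3.5-4 regime where results are irreproducible
          traj = repeat_experiments(p0, rate)
          print(f"rate = {rate}: last three repetitions -> {[round(v, 3) for v in traj[-3:]]}")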

  2. Effect of initial conditions on reproducibility of scientific research.

    PubMed

    Djulbegovic, Benjamin; Hozo, Iztok

    2014-06-01

    It is estimated that about half of currently published research cannot be reproduced. Many reasons have been offered as explanations for failure to reproduce scientific research findings, ranging from fraud to issues related to the design, conduct, analysis, or publishing of scientific research. We also postulate a sensitive dependence on initial conditions, by which small changes can result in large differences in the research findings when reproduction is attempted at later times. We employed a simple logistic regression equation to model the effect of covariates on the initial study findings. We then fed the output of the logistic equation into a logistic map function to model the stability of the results in repeated experiments over time. We illustrate the approach by modeling the effects of different factors on the choice of correct treatment. We found that reproducibility of the study findings depended both on the initial values of all independent variables and on the rate of change in the baseline conditions, the latter being more important. When the rate of change in the baseline conditions between experiments is about 3.5 to 4, no research findings could be reproduced. However, when the rate of change between experiments is ≤2.5, the results become highly predictable between experiments. Many results cannot be reproduced because of changes in the initial conditions between experiments. Better control of the baseline conditions between experiments may help improve the reproducibility of scientific findings.

  3. The usefulness of “corrected” body mass index vs. self-reported body mass index: comparing the population distributions, sensitivity, specificity, and predictive utility of three correction equations using Canadian population-based data

    PubMed Central

    2014-01-01

    Background National data on body mass index (BMI), computed from self-reported height and weight, is readily available for many populations including the Canadian population. Because self-reported weight is found to be systematically under-reported, it has been proposed that the bias in self-reported BMI can be corrected using equations derived from data sets which include both self-reported and measured height and weight. Such correction equations have been developed and adopted. We aim to evaluate the usefulness (i.e., distributional similarity; sensitivity and specificity; and predictive utility vis-à-vis disease outcomes) of existing and new correction equations in population-based research. Methods The Canadian Community Health Surveys from 2005 and 2008 include both measured and self-reported values of height and weight, which allows for construction and evaluation of correction equations. We focused on adults age 18–65, and compared three correction equations (two correcting weight only, and one correcting BMI) against self-reported and measured BMI. We first compared population distributions of BMI. Second, we compared the sensitivity and specificity of self-reported BMI and corrected BMI against measured BMI. Third, we compared the self-reported and corrected BMI in terms of association with health outcomes using logistic regression. Results All corrections outperformed self-report when estimating the full BMI distribution; the weight-only correction outperformed the BMI-only correction for females in the 23–28 kg/m2 BMI range. In terms of sensitivity/specificity, when estimating obesity prevalence, corrected values of BMI (from any equation) were superior to self-report. In terms of modelling BMI-disease outcome associations, findings were mixed, with no correction proving consistently superior to self-report. Conclusions If researchers are interested in modelling the full population distribution of BMI, or estimating the prevalence of obesity in a population, then a correction of any kind included in this study is recommended. If the researcher is interested in using BMI as a predictor variable for modelling disease, then both self-reported and corrected BMI result in biased estimates of association. PMID:24885210

  4. Multivariate logistic regression analysis of postoperative complications and risk model establishment of gastrectomy for gastric cancer: A single-center cohort report.

    PubMed

    Zhou, Jinzhe; Zhou, Yanbing; Cao, Shougen; Li, Shikuan; Wang, Hao; Niu, Zhaojian; Chen, Dong; Wang, Dongsheng; Lv, Liang; Zhang, Jian; Li, Yu; Jiao, Xuelong; Tan, Xiaojie; Zhang, Jianli; Wang, Haibo; Zhang, Bingyuan; Lu, Yun; Sun, Zhenqing

    2016-01-01

    Reporting of surgical complications is common, but few reports provide information about the severity of complications or estimate their risk factors, and those that do often lack specificity. We retrospectively analyzed data on 2795 gastric cancer patients who underwent surgery at the Affiliated Hospital of Qingdao University between June 2007 and June 2012 and established a multivariate logistic regression model to identify risk factors for postoperative complications graded according to the Clavien-Dindo classification system. Twenty-four of 86 variables were statistically significant in univariate logistic regression analysis; 11 significant variables that entered the multivariate analysis were used to produce the risk model. Liver cirrhosis, diabetes mellitus, Child classification, invasion of neighboring organs, combined resection, intraoperative transfusion, Billroth II anastomosis of reconstruction, malnutrition, surgical volume of surgeons, operating time and age were independent risk factors for postoperative complications after gastrectomy. Based on the logistic regression equation, p = exp(∑BiXi)/(1 + exp(∑BiXi)), a multivariate logistic regression predictive model that calculates the risk of postoperative morbidity was developed: p = 1/(1 + e^(4.810 - 1.287X1 - 0.504X2 - 0.500X3 - 0.474X4 - 0.405X5 - 0.318X6 - 0.316X7 - 0.305X8 - 0.278X9 - 0.255X10 - 0.138X11)). The accuracy, sensitivity and specificity of the model for predicting postoperative complications were 86.7%, 76.2% and 88.6%, respectively. This risk model, based on the Clavien-Dindo system for grading the severity of complications and on logistic regression analysis, can predict severe morbidity specific to an individual patient's risk factors, estimate a patient's risks and benefits of gastric surgery as an accurate decision-making tool, and may serve as a template for the development of risk models for other surgical groups.
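
    Because the abstract quotes the fitted equation explicitly, the predicted morbidity risk can be computed directly from it, as sketched below. The mapping of X1 through X11 to the individual risk factors, and how each factor is coded, are defined in the original paper; the values used in the example calls are illustrative only.

      import math

      # Coefficients taken directly from the risk equation quoted above:
      # p = 1 / (1 + e^(4.810 - 1.287*X1 - ... - 0.138*X11))
      INTERCEPT = 4.810
      COEFFS = [1.287, 0.504, 0.500, 0.474, 0.405, 0.318, 0.316, 0.305, 0.278, 0.255, 0.138]

      def complication_risk(x):
          """x: values of X1..X11; the coding of each factor is defined in the original paper."""
          exponent = INTERCEPT - sum(b * xi for b, xi in zip(COEFFS, x))
          return 1.0 / (1.0 + math.exp(exponent))

      print(f"{complication_risk([0] * 11):.3f}")   # illustrative: all predictors coded 0
      print(f"{complication_risk([1] * 11):.3f}")   # illustrative: all predictors coded 1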

  5. Asymptotic behavior of degenerate logistic equations

    NASA Astrophysics Data System (ADS)

    Arrieta, José M.; Pardo, Rosa; Rodríguez-Bernal, Aníbal

    2015-12-01

    We analyze the asymptotic behavior of positive solutions of parabolic equations with a class of degenerate logistic nonlinearities of the type λu - n(x)u^ρ. An important characteristic of this work is that the region where the logistic term n(·) vanishes, that is, K0 = {x : n(x) = 0}, may be non-smooth. We analyze conditions on λ, ρ, n(·) and K0 guaranteeing that the solution starting at a positive initial condition remains bounded or blows up as time goes to infinity. The asymptotic behavior may not be the same in different parts of K0.

  6. Methods for estimating selected low-flow statistics and development of annual flow-duration statistics for Ohio

    USGS Publications Warehouse

    Koltun, G.F.; Kula, Stephanie P.

    2013-01-01

    This report presents the results of a study to develop methods for estimating selected low-flow statistics and for determining annual flow-duration statistics for Ohio streams. Regression techniques were used to develop equations for estimating 10-year recurrence-interval (10-percent annual-nonexceedance probability) low-flow yields, in cubic feet per second per square mile, with averaging periods of 1, 7, 30, and 90-day(s), and for estimating the yield corresponding to the long-term 80-percent duration flow. These equations, which estimate low-flow yields as a function of a streamflow-variability index, are based on previously published low-flow statistics for 79 long-term continuous-record streamgages with at least 10 years of data collected through water year 1997. When applied to the calibration dataset, average absolute percent errors for the regression equations ranged from 15.8 to 42.0 percent. The regression results have been incorporated into the U.S. Geological Survey (USGS) StreamStats application for Ohio (http://water.usgs.gov/osw/streamstats/ohio.html) in the form of a yield grid to facilitate estimation of the corresponding streamflow statistics in cubic feet per second. Logistic-regression equations also were developed and incorporated into the USGS StreamStats application for Ohio for selected low-flow statistics to help identify occurrences of zero-valued statistics. Quantiles of daily and 7-day mean streamflows were determined for annual and annual-seasonal (September–November) periods for each complete climatic year of streamflow-gaging station record for 110 selected streamflow-gaging stations with 20 or more years of record. The quantiles determined for each climatic year were the 99-, 98-, 95-, 90-, 80-, 75-, 70-, 60-, 50-, 40-, 30-, 25-, 20-, 10-, 5-, 2-, and 1-percent exceedance streamflows. Selected exceedance percentiles of the annual-exceedance percentiles were subsequently computed and tabulated to help facilitate consideration of the annual risk of exceedance or nonexceedance of annual and annual-seasonal-period flow-duration values. The quantiles are based on streamflow data collected through climatic year 2008.

  7. Uniqueness of boundary blow-up solutions on exterior domain of RN

    NASA Astrophysics Data System (ADS)

    Dong, Wei; Pang, Changci

    2007-06-01

    In this paper, we consider the existence and uniqueness of positive solutions of the degenerate logistic type elliptic equation, where N ⩾ 2, D ⊂ R^N is a bounded domain with smooth boundary, and a(x), b(x) are continuous functions on R^N with b(x) ⩾ 0, b(x) ≢ 0. We show that under rather general conditions on a(x) and b(x) for large x, there exists a unique positive solution. Our results improve the corresponding ones in [W. Dong, Y. Du, Unbounded principal eigenfunctions and the logistic equation on R^N, Bull. Austral. Math. Soc. 67 (2003) 413-427] and [Y. Du, L. Ma, Logistic type equations on R^N by a squeezing method involving boundary blow-up solutions, J. London Math. Soc. (2) 64 (2001) 107-124].

  8. The Effect of Repeaters on Equating

    ERIC Educational Resources Information Center

    Kim, HeeKyoung; Kolen, Michael J.

    2010-01-01

    Test equating might be affected by including in the equating analyses examinees who have taken the test previously. This study evaluated the effect of including such repeaters on Medical College Admission Test (MCAT) equating using a population invariance approach. Three-parameter logistic (3-PL) item response theory (IRT) true score and…

  9. Comparing the IRT Pre-equating and Section Pre-equating: A Simulation Study.

    ERIC Educational Resources Information Center

    Hwang, Chi-en; Cleary, T. Anne

    The results obtained from two basic types of pre-equatings of tests were compared: the item response theory (IRT) pre-equating and section pre-equating (SPE). The simulated data were generated from a modified three-parameter logistic model with a constant guessing parameter. Responses of two replication samples of 3000 examinees on two 72-item…

  10. Inability to access addiction treatment among street-involved youth in a Canadian setting.

    PubMed

    Phillips, Mark; DeBeck, Kora; Desjarlais, Timothy; Morrison, Tracey; Feng, Cindy; Kerr, Thomas; Wood, Evan

    2014-08-01

    From September 2005 to May 2012, 1015 street-involved youth were enrolled in the At-Risk Youth Study, a prospective cohort of youth aged 14-26 who use illicit drugs in Vancouver, Canada. Data were collected through semiannual interviewer-administered questionnaires. Generalized estimating equation logistic regression was used to identify factors independently associated with being unable to access addiction treatment. The enclosed manuscript notes the implications and limitations of this study, as well as possible directions for future research. This study was funded by the US National Institutes of Health (NIH) and the Canadian Institutes of Health Research (CIHR).
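
    A GEE logistic regression of this general kind can be sketched with statsmodels, using an exchangeable working correlation to account for repeated interviews clustered within participants. The covariates and data below are synthetic placeholders, not variables from the At-Risk Youth Study.

      import numpy as np
      import statsmodels.api as sm

      rng = np.random.default_rng(42)
      n_youth, n_visits = 200, 4

      # Synthetic panel: repeated semiannual interviews clustered within participants.
      ids = np.repeat(np.arange(n_youth), n_visits)
      homeless = rng.integers(0, 2, n_youth * n_visits)       # hypothetical covariate
      daily_use = rng.integers(0, 2, n_youth * n_visits)      # hypothetical covariate
      logit = -1.0 + 0.8 * homeless + 0.6 * daily_use
      unable_to_access = rng.binomial(1, 1 / (1 + np.exp(-logit)))

      X = sm.add_constant(np.column_stack([homeless, daily_use]))
      model = sm.GEE(unable_to_access, X, groups=ids,
                     family=sm.families.Binomial(),
                     cov_struct=sm.cov_struct.Exchangeable())
      result = model.fit()
      print(np.exp(result.params[1:]))     # adjusted odds ratios for the two covariates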

  11. Biodegradation kinetics of thin-stillage treatment by Aspergillus awamori and characterization of recovered chitosan.

    PubMed

    Ray, S Ghosh; Ghangrekar, M M

    2016-02-01

    An attempt has been made to provide a solution for distillery wastewater using fungal pretreatment followed by an anaerobic process to achieve higher organic matter removal, which is a challenge with currently adopted technologies. The submerged growth kinetics of Aspergillus awamori on distillery wastewater supernatant were also evaluated. The proposed kinetic models, using a logistic equation for fungal growth and the Luedeking-Piret equation for product formation, were validated experimentally, and a substrate consumption equation was derived using the estimated kinetic coefficients. Up to 59.6 % chemical oxygen demand (COD) and 70 % total organic carbon (TOC) removals were observed in 96 h of fungal incubation. The maximum specific growth rate of the fungus, the coefficient of biomass yield on substrate, and the growth-associated product formation coefficient were estimated to be 0.07 ± 0.01 h^-1, 0.614 kg biomass/kg utilized COD and 0.215 kg CO2/kg utilized TOC, respectively. A chitosan recovery of 0.072-0.078 kg/kg of dry mycelium was obtained using dilute sulphuric acid extraction, showing high purity and characteristic chitosan properties according to FTIR and XRD analyses. After anaerobic treatment of the fungal pretreated effluent with a COD concentration of 7.920 ± 0.120 kg COD/m^3 (organic loading rate of 3.28 kg COD/m^3 day), an overall COD reduction of 91.07 % was achieved from the distillery wastewater.

  12. Comparing Alternative Kernels for the Kernel Method of Test Equating: Gaussian, Logistic, and Uniform Kernels. Research Report. ETS RR-08-12

    ERIC Educational Resources Information Center

    Lee, Yi-Hsuan; von Davier, Alina A.

    2008-01-01

    The kernel equating method (von Davier, Holland, & Thayer, 2004) is based on a flexible family of equipercentile-like equating functions that use a Gaussian kernel to continuize the discrete score distributions. While the classical equipercentile, or percentile-rank, equating method carries out the continuization step by linear interpolation,…

  13. Numerical solution of a logistic growth model for a population with Allee effect considering fuzzy initial values and fuzzy parameters

    NASA Astrophysics Data System (ADS)

    Amarti, Z.; Nurkholipah, N. S.; Anggriani, N.; Supriatna, A. K.

    2018-03-01

    Predicting the future population number is among the important factors in preparing good management for a population. This has been done by various known methods, one of which is developing a mathematical model describing the growth of the population. The model usually takes the form of a differential equation or a system of differential equations, depending on the complexity of the underlying properties of the population. The most widely used growth models are currently those having a sigmoid solution over time, including the Verhulst logistic equation and the Gompertz equation. In this paper we consider the Allee effect in the Verhulst logistic population model. The Allee effect is a phenomenon in biology showing a high correlation between population size or density and the mean individual fitness of the population. The method used to derive the solution is the Runge-Kutta numerical scheme, since it is generally regarded as a good numerical scheme that is relatively easy to implement. Further exploration is done via a fuzzy theoretical approach to accommodate the impreciseness of the initial values and parameters in the model.
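
    A crisp (non-fuzzy) version of the numerical scheme can be sketched with a classical fourth-order Runge-Kutta step applied to a logistic equation with a strong Allee effect, dN/dt = rN(1 - N/K)(N/A - 1). The parameter values are illustrative, and the fuzzy treatment of initial values and parameters is not reproduced here.

      # Classical fourth-order Runge-Kutta for dN/dt = r*N*(1 - N/K)*(N/A - 1),
      # a logistic equation with a strong Allee effect (all values illustrative).
      r, K, A = 0.3, 100.0, 10.0

      def f(N):
          return r * N * (1 - N / K) * (N / A - 1)

      def rk4_step(N, h):
          k1 = f(N)
          k2 = f(N + 0.5 * h * k1)
          k3 = f(N + 0.5 * h * k2)
          k4 = f(N + h * k3)
          return N + h * (k1 + 2 * k2 + 2 * k3 + k4) / 6

      for N0 in (8.0, 12.0):               # start below vs. above the Allee threshold A
          N, h = N0, 0.1
          for _ in range(1000):            # integrate to t = 100
              N = rk4_step(N, h)
          print(f"N0 = {N0}: population at t = 100 is {N:.2f}")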

  14. Behavioral problems and the occurrence of tobacco, cannabis, and coca paste smoking in Chile: evidence based on multivariate response models for school survey data.

    PubMed

    Caris, Luis; Anthony, Christopher B; Ríos-Bedoya, Carlos F; Anthony, James C

    2009-09-01

    In this study we estimate suspected links between youthful behavioral problems and smoking of tobacco, cannabis, and coca paste. In the Republic of Chile, school-attending youths were sampled from all 13 regions of the country, with sample size of 46,907 youths from 8th to 12th grades. A Generalized Estimating Equations (GEE) approach to multiple logistic regression was used to address three interdependent response variables, tobacco smoking, cannabis smoking, and coca paste smoking, and to estimate associations. Drug-specific adjusted slope estimates indicate that youths at the highest levels of behavioral problems are an estimated 1.1 times more likely to have started smoking tobacco, an estimated 1.6 times more likely to have started cannabis smoking, and an estimated 2.0 times more likely to have started coca paste smoking, as compared to youths at the lowest level of behavioral problems (p<0.001). In Chile, there is an association linking behavioral problems with onsets of smoking tobacco and cannabis, as well as coca paste; strength of association is modestly greater for coca paste smoking.

  15. Density-dependence as a size-independent regulatory mechanism.

    PubMed

    de Vladar, Harold P

    2006-01-21

    The growth function of populations is central in biomathematics. The main dogma is the existence of density-dependence mechanisms, which can be modelled with distinct functional forms that depend on the size of the population. One important class of regulatory functions is the theta-logistic, which generalizes the logistic equation. Using this model as a motivation, this paper introduces a simple dynamical reformulation that generalizes many growth functions. The reformulation consists of two equations, one for population size, and one for the growth rate. Furthermore, the model shows that although population is density-dependent, the dynamics of the growth rate does not depend either on population size, nor on the carrying capacity. Actually, the growth equation is uncoupled from the population size equation, and the model has only two parameters, a Malthusian parameter rho and a competition coefficient theta. Distinct sign combinations of these parameters reproduce not only the family of theta-logistics, but also the van Bertalanffy, Gompertz and Potential Growth equations, among other possibilities. It is also shown that, except for two critical points, there is a general size-scaling relation that includes those appearing in the most important allometric theories, including the recently proposed Metabolic Theory of Ecology. With this model, several issues of general interest are discussed such as the growth of animal population, extinctions, cell growth and allometry, and the effect of environment over a population.

  16. Association Between Depression and Elder Abuse and the Mediation of Social Support: A Cross-Sectional Study of Elder Females in Mexico City.

    PubMed

    Vilar-Compte, Mireya; Giraldo-Rodríguez, Liliana; Ochoa-Laginas, Adriana; Gaitan-Rossi, Pablo

    2018-04-01

    We assessed the association between depression and elder abuse, and the mediation effect of social support among elder women in Mexico City. A total of 526 noninstitutionalized elder women, residing in Mexico City and attending public community centers were selected. Logistic regressions and structural equation models (SEM) were estimated. One fifth of the elderly women were at risk of depression, one third suffered some type of abuse in the past 12 months, and 82% reported low social support. Logistic models confirmed that depression was statistically associated with elder abuse and vice versa (odds ratio [OR] = 1.97 and 1.96, respectively). In both models, social support significantly reduced the association between these variables leading to study these associations through SEM. This approach highlighted that social support buffers the association between depression and elder abuse. Findings underline the relevance of programs and strategies targeted at increasing social support among urban older adults.

  17. Logistic Risk Model for the Unique Effects of Inherent Aerobic Capacity on (+)G(sub z) Tolerance Before and After Simulated Weightlessness

    NASA Technical Reports Server (NTRS)

    Ludwig, David A.; Convertino, Victor A.; Goldwater, Danielle J.; Sandler, Harold

    1987-01-01

    Small sample size (n less than 10) and inappropriate analysis of multivariate data have hindered previous attempts to describe which physiologic and demographic variables are most important in determining how long humans can tolerate acceleration. Data from previous centrifuge studies conducted at NASA/Ames Research Center, utilizing a 7-14 d bed rest protocol to simulate weightlessness, were included in the current investigation. After review, data on 25 women and 22 men were available for analysis. Study variables included gender, age, weight, height, percent body fat, resting heart rate, mean arterial pressure, Vo(sub 2)max and plasma volume. Since the dependent variable was time to greyout (failure), two contemporary biostatistical modeling procedures (proportional hazard and logistic discriminant function) were used to estimate risk, given a particular subject's profile. After adjusting for pre-bed-rest tolerance time, none of the profile variables remained in the risk equation for post-bed-rest tolerance greyout. However, prior to bed rest, risk of greyout could be predicted with 91% accuracy. All of the profile variables except weight, MAP, and those related to inherent aerobic capacity (Vo(sub 2)max, percent body fat, resting heart rate) entered the risk equation for pre-bed-rest greyout. A cross-validation using 24 new subjects indicated a very stable model for risk prediction, accurate within 5% of the original equation. The result for the inherent fitness variables is significant in that a consensus as to whether an increased aerobic capacity is beneficial or detrimental has not been satisfactorily established. We conclude that tolerance to +Gz acceleration before and after simulated weightlessness is independent of inherent aerobic fitness.

  18. Modeling statistics and kinetics of the natural aggregation structures and processes with the solution of generalized logistic equation

    NASA Astrophysics Data System (ADS)

    Maslov, Lev A.; Chebotarev, Vladimir I.

    2017-02-01

    The generalized logistic equation is proposed to model kinetics and statistics of natural processes such as earthquakes, forest fires, floods, landslides, and many others. This equation has the form dN(A)/dA = s·(1 − N(A))·N(A)^q·A^(−α), where q > 0, A > 0 is the size of an element of a structure, and α ≥ 0. The equation contains two exponents, α and q, taking into account two important properties of elements of a system: their fractal geometry, and their ability to interact either to enhance or to damp the process of aggregation. The function N(A) can be understood as an approximation to the number of elements the size of which is less than A. The function dN(A)/dA, where N(A) is the general solution of this equation for q = 1, is a product of an increasing bounded function and a power-law function with stretched exponential cut-off. The relation with Tsallis non-extensive statistics is demonstrated by solving the generalized logistic equation for q > 0. In the case 0 < q < 1 it models super-additive structures, and in the case q > 1 it models sub-additive structures. The Gutenberg-Richter (G-R) formula results from interpretation of empirical data as a straight line in the area of stretched exponent with small α. The solution is applied for modeling the distribution of foreshocks and aftershocks in the regions of the Napa Valley 2014 and Sumatra 2004 earthquakes, fitting the observed data well, both qualitatively and quantitatively.
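
    The aggregation equation quoted in this record can be integrated numerically as sketched below. The values of s, q, alpha and the initial condition are arbitrary illustrative choices, and the integration starts at a small A0 > 0 to avoid the singular factor A**(-alpha) at A = 0.

      import numpy as np
      from scipy.integrate import solve_ivp

      # Generalized logistic equation: dN/dA = s * (1 - N) * N**q * A**(-alpha)
      s, q, alpha = 1.0, 1.5, 0.3          # assumed parameters

      def rhs(A, N):
          return s * (1.0 - N) * N**q * A**(-alpha)

      A0, A1, N0 = 0.01, 50.0, 0.01        # start at A0 > 0 to avoid the A = 0 singularity
      sol = solve_ivp(rhs, (A0, A1), [N0], dense_output=True, rtol=1e-8)
      for A in (0.1, 1.0, 5.0, 10.0, 50.0):
          print(f"A={A:6.2f}  N(A)={sol.sol(A)[0]:.4f}")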

  19. Semistable extremal ground states for nonlinear evolution equations in unbounded domains

    NASA Astrophysics Data System (ADS)

    Rodríguez-Bernal, Aníbal; Vidal-López, Alejandro

    2008-02-01

    In this paper we show that dissipative reaction-diffusion equations in unbounded domains posses extremal semistable ground states equilibria, which bound asymptotically the global dynamics. Uniqueness of such positive ground state and their approximation by extremal equilibria in bounded domains is also studied. The results are then applied to the important case of logistic equations.

  20. The Dropout Learning Algorithm

    PubMed Central

    Baldi, Pierre; Sadowski, Peter

    2014-01-01

    Dropout is a recently introduced algorithm for training neural networks by randomly dropping units during training to prevent their co-adaptation. A mathematical analysis of some of the static and dynamic properties of dropout is provided using Bernoulli gating variables, general enough to accommodate dropout on units or connections, and with variable rates. The framework allows a complete analysis of the ensemble averaging properties of dropout in linear networks, which is useful to understand the non-linear case. The ensemble averaging properties of dropout in non-linear logistic networks result from three fundamental equations: (1) the approximation of the expectations of logistic functions by normalized geometric means, for which bounds and estimates are derived; (2) the algebraic equality between normalized geometric means of logistic functions and the logistic of the means, which mathematically characterizes logistic functions; and (3) the linearity of the means with respect to sums, as well as products of independent variables. The results are also extended to other classes of transfer functions, including rectified linear functions. Approximation errors tend to cancel each other and do not accumulate. Dropout can also be connected to stochastic neurons and used to predict firing rates, and to backpropagation by viewing the backward propagation as ensemble averaging in a dropout linear network. Moreover, the convergence properties of dropout can be understood in terms of stochastic gradient descent. Finally, for the regularization properties of dropout, the expectation of the dropout gradient is the gradient of the corresponding approximation ensemble, regularized by an adaptive weight decay term with a propensity for self-consistent variance minimization and sparse representations. PMID:24771879
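
    The ensemble-averaging property behind equations (1)-(2) of this record can be illustrated numerically for a single logistic unit: over the dropout ensemble, the normalized geometric mean (NWGM) of the outputs equals the logistic of the expected input and approximates the true expectation of the output. The weights, activations and keep-probability below are arbitrary assumptions for the sketch.

      import numpy as np

      rng = np.random.default_rng(1)
      sigma = lambda x: 1.0 / (1.0 + np.exp(-x))

      w = np.array([1.2, -0.7, 0.4, 2.0, -1.5])   # incoming weights (illustrative)
      a = np.array([0.9,  0.3, 1.1, 0.5,  0.8])   # incoming activations (illustrative)
      p = 0.5                                      # probability of keeping a unit

      masks = rng.binomial(1, p, size=(200000, w.size))
      x = masks @ (w * a)                          # pre-activation of each sub-network
      y = sigma(x)

      expectation = y.mean()                                   # E[sigma(x)]
      nwgm = np.exp(np.log(y).mean())
      nwgm /= nwgm + np.exp(np.log(1.0 - y).mean())            # NWGM over the ensemble
      logistic_of_mean = sigma(p * (w * a).sum())              # sigma(E[x])

      print(f"E[sigma(x)] = {expectation:.4f}")
      print(f"NWGM        = {nwgm:.4f}")
      print(f"sigma(E[x]) = {logistic_of_mean:.4f}")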

  1. A general equation to obtain multiple cut-off scores on a test from multinomial logistic regression.

    PubMed

    Bersabé, Rosa; Rivas, Teresa

    2010-05-01

    The authors derive a general equation to compute multiple cut-offs on a total test score in order to classify individuals into more than two ordinal categories. The equation is derived from the multinomial logistic regression (MLR) model, which is an extension of the binary logistic regression (BLR) model to accommodate polytomous outcome variables. From this analytical procedure, cut-off scores are established at the test score (the predictor variable) at which an individual is as likely to be in category j as in category j+1 of an ordinal outcome variable. The application of the complete procedure is illustrated by an example with data from an actual study on eating disorders. In this example, two cut-off scores on the Eating Attitudes Test (EAT-26) scores are obtained in order to classify individuals into three ordinal categories: asymptomatic, symptomatic and eating disorder. Diagnoses were made from the responses to a self-report (Q-EDD) that operationalises DSM-IV criteria for eating disorders. Alternatives to the MLR model to set multiple cut-off scores are discussed.
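
    For adjacent categories with linear predictors a_j + b_j*score (relative to a common reference category) in a multinomial logistic model, the cut-off idea described above reduces to solving a_j + b_j*x = a_{j+1} + b_{j+1}*x. The sketch below uses invented coefficients, not the EAT-26 estimates from the study.

      # Cut-off score where adjacent categories are equally likely in a multinomial
      # logistic model; coefficients below are hypothetical examples.
      def cutoff(a_j, b_j, a_j1, b_j1):
          return (a_j - a_j1) / (b_j1 - b_j)

      # Categories: asymptomatic (reference), symptomatic, eating disorder
      a1, b1 = -4.0, 0.25    # symptomatic vs asymptomatic (assumed)
      a2, b2 = -9.0, 0.45    # eating disorder vs asymptomatic (assumed)

      cut_asym_symp = cutoff(0.0, 0.0, a1, b1)   # asymptomatic vs symptomatic
      cut_symp_ed = cutoff(a1, b1, a2, b2)       # symptomatic vs eating disorder
      print(f"cut-off 1 (asymptomatic/symptomatic):   {cut_asym_symp:.1f}")
      print(f"cut-off 2 (symptomatic/eating disorder): {cut_symp_ed:.1f}")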

  2. Classical conditioning through auditory stimuli in Drosophila: methods and models

    PubMed Central

    Menda, Gil; Bar, Haim Y.; Arthur, Ben J.; Rivlin, Patricia K.; Wyttenbach, Robert A.; Strawderman, Robert L.; Hoy, Ronald R.

    2011-01-01

    The role of sound in Drosophila melanogaster courtship, along with its perception via the antennae, is well established, as is the ability of this fly to learn in classical conditioning protocols. Here, we demonstrate that a neutral acoustic stimulus paired with a sucrose reward can be used to condition the proboscis-extension reflex, part of normal feeding behavior. This appetitive conditioning produces results comparable to those obtained with chemical stimuli in aversive conditioning protocols. We applied a logistic model with generalized estimating equations to predict the dynamics of learning, which successfully predicts the outcome of training and provides a quantitative estimate of the rate of learning. Use of acoustic stimuli with appetitive conditioning provides both an alternative to models most commonly used in studies of learning and memory in Drosophila and a means of testing hearing in both sexes, independently of courtship responsiveness. PMID:21832129

  3. Improving the Accuracy of Predicting Maximal Oxygen Consumption (VO2pk)

    NASA Technical Reports Server (NTRS)

    Downs, Meghan E.; Lee, Stuart M. C.; Ploutz-Snyder, Lori; Feiveson, Alan

    2016-01-01

    Maximal oxygen uptake (VO2pk) is the maximum amount of oxygen that the body can use during intense exercise and is used for benchmarking endurance exercise capacity. The most accurate method to determine VO2pk requires continuous measurements of ventilation and gas exchange during an exercise test to maximal effort, which necessitates expensive equipment, a trained staff, and time to set up the equipment. For astronauts, accurate VO2pk measures are important to assess mission critical task performance capabilities and to prescribe exercise intensities to optimize performance. Currently, astronauts perform submaximal exercise tests during flight to predict VO2pk; however, while submaximal VO2pk prediction equations provide reliable estimates of mean VO2pk for populations, they can be unacceptably inaccurate for a given individual. The error in current predictions and the logistical limitations of measuring VO2pk, particularly during spaceflight, highlight the need for improved estimation methods.

  4. Item response theory - A first approach

    NASA Astrophysics Data System (ADS)

    Nunes, Sandra; Oliveira, Teresa; Oliveira, Amílcar

    2017-07-01

    The Item Response Theory (IRT) has become one of the most popular scoring frameworks for measurement data, frequently used in computerized adaptive testing, cognitively diagnostic assessment and test equating. According to Andrade et al. (2000), IRT can be defined as a set of mathematical models (Item Response Models - IRM) constructed to represent the probability of an individual giving the right answer to an item of a particular test. The number of Item Response Models available for measurement analysis has increased considerably in the last fifteen years due to increasing computer power and due to a demand for accuracy and more meaningful inferences grounded in complex data. The developments in modeling with Item Response Theory were related to developments in estimation theory, most remarkably Bayesian estimation with Markov chain Monte Carlo algorithms (Patz & Junker, 1999). The popularity of Item Response Theory has also implied numerous overviews in books and journals, and many connections between IRT and other statistical estimation procedures, such as factor analysis and structural equation modeling, have been made repeatedly (van der Linden & Hambleton, 1997). As stated before, the Item Response Theory covers a variety of measurement models, ranging from basic one-dimensional models for dichotomously and polytomously scored items and their multidimensional analogues to models that incorporate information about cognitive sub-processes which influence the overall item response process. The aim of this work is to introduce the main concepts associated with one-dimensional models of Item Response Theory, to specify the logistic models with one, two and three parameters, to discuss some properties of these models and to present the main estimation procedures.
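
    The one-, two- and three-parameter logistic item response functions that the record sets out to specify have the standard closed forms sketched below. The item parameters chosen here (discrimination a, difficulty b, lower asymptote c) are illustrative, and D = 1.7 is the usual scaling constant that brings the logistic close to the normal ogive.

      import numpy as np

      D = 1.7  # scaling constant

      def p_3pl(theta, a, b, c):
          # probability of a correct response under the three-parameter logistic model
          return c + (1.0 - c) / (1.0 + np.exp(-D * a * (theta - b)))

      def p_2pl(theta, a, b):
          return p_3pl(theta, a, b, c=0.0)

      def p_1pl(theta, b):
          return p_3pl(theta, a=1.0, b=b, c=0.0)

      theta = np.linspace(-3, 3, 7)
      print("theta :", theta)
      print("1PL   :", np.round(p_1pl(theta, b=0.0), 3))
      print("2PL   :", np.round(p_2pl(theta, a=1.5, b=0.5), 3))
      print("3PL   :", np.round(p_3pl(theta, a=1.5, b=0.5, c=0.2), 3))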

  5. General equations for optimal selection of diagnostic image acquisition parameters in clinical X-ray imaging.

    PubMed

    Zheng, Xiaoming

    2017-12-01

    The purpose of this work was to examine the effects of relationship functions between diagnostic image quality and radiation dose on the governing equations for image acquisition parameter variations in X-ray imaging. Various equations were derived for the optimal selection of peak kilovoltage (kVp) and exposure parameter (milliAmpere second, mAs) in computed tomography (CT), computed radiography (CR), and direct digital radiography. Logistic, logarithmic, and linear functions were employed to establish the relationship between radiation dose and diagnostic image quality. The radiation dose to the patient, as a function of image acquisition parameters (kVp, mAs) and patient size (d), was used in radiation dose and image quality optimization. Both logistic and logarithmic functions resulted in the same governing equation for optimal selection of image acquisition parameters using a dose efficiency index. For image quality as a linear function of radiation dose, the same governing equation was derived from the linear relationship. The general equations should be used in guiding clinical X-ray imaging through optimal selection of image acquisition parameters. The radiation dose to the patient could be reduced from current levels in medical X-ray imaging.

  6. Revised techniques for estimating peak discharges from channel width in Montana

    USGS Publications Warehouse

    Parrett, Charles; Hull, J.A.; Omang, R.J.

    1987-01-01

    This study was conducted to develop new estimating equations based on channel width and the updated flood frequency curves of previous investigations. Simple regression equations for estimating peak discharges with recurrence intervals of 2, 5, 10, 25, 50, and 100 years were developed for seven regions in Montana. The standard errors of estimate for the equations that use active channel width as the independent variable ranged from 30% to 87%. The standard errors of estimate for the equations that use bankfull width as the independent variable ranged from 34% to 92%. The smallest standard errors generally occurred in the prediction equations for the 2-yr flood, 5-yr flood, and 10-yr flood, and the largest standard errors occurred in the prediction equations for the 100-yr flood. The equations that use active channel width and the equations that use bankfull width were determined to be about equally reliable in five regions. In the West Region, the equations that use bankfull width were slightly more reliable than those based on active channel width, whereas in the East-Central Region the equations that use active channel width were slightly more reliable than those based on bankfull width. Compared with similar equations previously developed, the standard errors of estimate for the new equations are substantially smaller in three regions and substantially larger in two regions. Limitations on the use of the estimating equations include: (1) The equations are based on stable conditions of channel geometry and prevailing water and sediment discharge; (2) The measurement of channel width requires a site visit, preferably by a person with experience in the method, and involves appreciable measurement errors; (3) Reliability of results from the equations for channel widths beyond the range of definition is unknown. In spite of the limitations, the estimating equations derived in this study are considered to be as reliable as estimating equations based on basin and climatic variables. Because the two types of estimating equations are independent, results from each can be weighted inversely proportional to their variances, and averaged. The weighted average estimate has a variance less than either individual estimate. (Author's abstract)

  7. Using Weighted Entropy to Rank Chemicals in Quantitative High Throughput Screening Experiments

    PubMed Central

    Shockley, Keith R.

    2014-01-01

    Quantitative high throughput screening (qHTS) experiments can simultaneously produce concentration-response profiles for thousands of chemicals. In a typical qHTS study, a large chemical library is subjected to a primary screen in order to identify candidate hits for secondary screening, validation studies or prediction modeling. Different algorithms, usually based on the Hill equation logistic model, have been used to classify compounds as active or inactive (or inconclusive). However, observed concentration-response activity relationships may not adequately fit a sigmoidal curve. Furthermore, it is unclear how to prioritize chemicals for follow-up studies given the large uncertainties that often accompany parameter estimates from nonlinear models. Weighted Shannon entropy can address these concerns by ranking compounds according to profile-specific statistics derived from estimates of the probability mass distribution of response at the tested concentration levels. This strategy can be used to rank all tested chemicals in the absence of a pre-specified model structure or the approach can complement existing activity call algorithms by ranking the returned candidate hits. The weighted entropy approach was evaluated here using data simulated from the Hill equation model. The procedure was then applied to a chemical genomics profiling data set interrogating compounds for androgen receptor agonist activity. PMID:24056003
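
    A simplified illustration of the entropy idea in this record: a concentration-response profile simulated from the Hill equation is converted into a probability mass distribution over the tested concentrations and scored with Shannon entropy, so flat (inactive) profiles approach the maximum entropy while localized response mass scores lower. The published qHTS statistic is a weighted entropy with profile-specific weights; uniform weights and all parameter values below are simplifying assumptions.

      import numpy as np

      def hill(conc, top, ec50, n):
          # Hill-equation concentration-response model
          return top * conc**n / (ec50**n + conc**n)

      conc = np.logspace(-3, 2, 8)          # tested concentrations (illustrative)

      def entropy_score(resp):
          mass = np.clip(resp, 0.0, None) + 1e-9
          p = mass / mass.sum()             # probability mass of response over concentrations
          return float(-(p * np.log(p)).sum())

      profiles = {
          "potent agonist": hill(conc, top=100.0, ec50=0.1, n=2.0),
          "weak agonist":   hill(conc, top=30.0, ec50=10.0, n=1.0),
          "inactive":       np.zeros_like(conc),
      }
      for name, resp in profiles.items():
          print(f"{name:14s} entropy = {entropy_score(resp):.3f}")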

  8. Accounting for Multiple Births in Neonatal and Perinatal Trials: Systematic Review and Case Study

    PubMed Central

    Hibbs, Anna Maria; Black, Dennis; Palermo, Lisa; Cnaan, Avital; Luan, Xianqun; Truog, William E; Walsh, Michele C; Ballard, Roberta A

    2010-01-01

    Objectives To determine the prevalence in the neonatal literature of statistical approaches accounting for the unique clustering patterns of multiple births. To explore the sensitivity of an actual trial to several analytic approaches to multiples. Methods A systematic review of recent perinatal trials assessed the prevalence of studies accounting for clustering of multiples. The NO CLD trial served as a case study of the sensitivity of the outcome to several statistical strategies. We calculated odds ratios using non-clustered (logistic regression) and clustered (generalized estimating equations, multiple outputation) analyses. Results In the systematic review, most studies did not describe the randomization of twins and did not account for clustering. Of those studies that did, exclusion of multiples and generalized estimating equations were the most common strategies. The NO CLD study included 84 infants with a sibling enrolled in the study. Multiples were more likely than singletons to be white and were born to older mothers (p<0.01). Analyses that accounted for clustering were statistically significant; analyses assuming independence were not. Conclusions The statistical approach to multiples can influence the odds ratio and width of confidence intervals, thereby affecting the interpretation of a study outcome. A minority of perinatal studies address this issue. PMID:19969305

  9. Accounting for multiple births in neonatal and perinatal trials: systematic review and case study.

    PubMed

    Hibbs, Anna Maria; Black, Dennis; Palermo, Lisa; Cnaan, Avital; Luan, Xianqun; Truog, William E; Walsh, Michele C; Ballard, Roberta A

    2010-02-01

    To determine the prevalence in the neonatal literature of statistical approaches accounting for the unique clustering patterns of multiple births and to explore the sensitivity of an actual trial to several analytic approaches to multiples. A systematic review of recent perinatal trials assessed the prevalence of studies accounting for clustering of multiples. The Nitric Oxide to Prevent Chronic Lung Disease (NO CLD) trial served as a case study of the sensitivity of the outcome to several statistical strategies. We calculated odds ratios using nonclustered (logistic regression) and clustered (generalized estimating equations, multiple outputation) analyses. In the systematic review, most studies did not describe the random assignment of twins and did not account for clustering. Of those studies that did, exclusion of multiples and generalized estimating equations were the most common strategies. The NO CLD study included 84 infants with a sibling enrolled in the study. Multiples were more likely than singletons to be white and were born to older mothers (P < .01). Analyses that accounted for clustering were statistically significant; analyses assuming independence were not. The statistical approach to multiples can influence the odds ratio and width of confidence intervals, thereby affecting the interpretation of a study outcome. A minority of perinatal studies address this issue. Copyright 2010 Mosby, Inc. All rights reserved.

  10. Modeling of pathogen survival during simulated gastric digestion.

    PubMed

    Koseki, Shige; Mizuno, Yasuko; Sotome, Itaru

    2011-02-01

    The objective of the present study was to develop a mathematical model of pathogenic bacterial inactivation kinetics in a gastric environment in order to further understand a part of the infectious dose-response mechanism. The major bacterial pathogens Listeria monocytogenes, Escherichia coli O157:H7, and Salmonella spp. were examined by using simulated gastric fluid adjusted to various pH values. To correspond to the various pHs in a stomach during digestion, a modified logistic differential equation model and the Weibull differential equation model were examined. The specific inactivation rate for each pathogen was successfully described by a square-root model as a function of pH. The square-root models were combined with the modified logistic differential equation to obtain a complete inactivation curve. Both the modified logistic and Weibull models provided a highly accurate fitting of the static pH conditions for every pathogen. However, while the residuals plots of the modified logistic model indicated no systematic bias and/or regional prediction problems, the residuals plots of the Weibull model showed a systematic bias. The modified logistic model appropriately predicted the pathogen behavior in the simulated gastric digestion process with actual food, including cut lettuce, minced tuna, hamburger, and scrambled egg. Although the developed model enabled us to predict pathogen inactivation during gastric digestion, its results also suggested that the ingested bacteria in the stomach would barely be inactivated in the real digestion process. The results of this study will provide important information on a part of the dose-response mechanism of bacterial pathogens.

  11. Modeling of Pathogen Survival during Simulated Gastric Digestion

    PubMed Central

    Koseki, Shige; Mizuno, Yasuko; Sotome, Itaru

    2011-01-01

    The objective of the present study was to develop a mathematical model of pathogenic bacterial inactivation kinetics in a gastric environment in order to further understand a part of the infectious dose-response mechanism. The major bacterial pathogens Listeria monocytogenes, Escherichia coli O157:H7, and Salmonella spp. were examined by using simulated gastric fluid adjusted to various pH values. To correspond to the various pHs in a stomach during digestion, a modified logistic differential equation model and the Weibull differential equation model were examined. The specific inactivation rate for each pathogen was successfully described by a square-root model as a function of pH. The square-root models were combined with the modified logistic differential equation to obtain a complete inactivation curve. Both the modified logistic and Weibull models provided a highly accurate fitting of the static pH conditions for every pathogen. However, while the residuals plots of the modified logistic model indicated no systematic bias and/or regional prediction problems, the residuals plots of the Weibull model showed a systematic bias. The modified logistic model appropriately predicted the pathogen behavior in the simulated gastric digestion process with actual food, including cut lettuce, minced tuna, hamburger, and scrambled egg. Although the developed model enabled us to predict pathogen inactivation during gastric digestion, its results also suggested that the ingested bacteria in the stomach would barely be inactivated in the real digestion process. The results of this study will provide important information on a part of the dose-response mechanism of bacterial pathogens. PMID:21131530

  12. GFR Estimation: From Physiology to Public Health

    PubMed Central

    Levey, Andrew S.; Inker, Lesley A.; Coresh, Josef

    2014-01-01

    Estimating glomerular filtration rate (GFR) is essential for clinical practice, research, and public health. Appropriate interpretation of estimated GFR (eGFR) requires understanding the principles of physiology, laboratory medicine, epidemiology and biostatistics used in the development and validation of GFR estimating equations. Equations developed in diverse populations are less biased at higher GFR than equations developed in CKD populations and are more appropriate for general use. Equations that include multiple endogenous filtration markers are more precise than equations including a single filtration marker. The Chronic Kidney Disease Epidemiology Collaboration (CKD-EPI) equations are the most accurate GFR estimating equations that have been evaluated in large, diverse populations and are applicable for general clinical use. The 2009 CKD-EPI creatinine equation is more accurate in estimating GFR and prognosis than the 2006 Modification of Diet in Renal Disease (MDRD) Study equation and provides lower estimates of prevalence of decreased eGFR. It is useful as a “first” test for decreased eGFR and should replace the MDRD Study equation for routine reporting of serum creatinine–based eGFR by clinical laboratories. The 2012 CKD-EPI cystatin C equation is as accurate as the 2009 CKD-EPI creatinine equation in estimating eGFR, does not require specification of race, and may be more accurate in patients with decreased muscle mass. The 2012 CKD-EPI creatinine–cystatin C equation is more accurate than the 2009 CKD-EPI creatinine and 2012 CKD-EPI cystatin C equations and is useful as a confirmatory test for decreased eGFR as determined by an equation based on serum creatinine. Further improvement in GFR estimating equations will require development in more broadly representative populations, including diverse racial and ethnic groups, use of multiple filtration markers, and evaluation using statistical techniques to compare eGFR to “true GFR”. PMID:24485147

  13. Propensity score estimation: machine learning and classification methods as alternatives to logistic regression

    PubMed Central

    Westreich, Daniel; Lessler, Justin; Funk, Michele Jonsson

    2010-01-01

    Summary Objective Propensity scores for the analysis of observational data are typically estimated using logistic regression. Our objective in this Review was to assess machine learning alternatives to logistic regression which may accomplish the same goals but with fewer assumptions or greater accuracy. Study Design and Setting We identified alternative methods for propensity score estimation and/or classification from the public health, biostatistics, discrete mathematics, and computer science literature, and evaluated these algorithms for applicability to the problem of propensity score estimation, potential advantages over logistic regression, and ease of use. Results We identified four techniques as alternatives to logistic regression: neural networks, support vector machines, decision trees (CART), and meta-classifiers (in particular, boosting). Conclusion While the assumptions of logistic regression are well understood, those assumptions are frequently ignored. All four alternatives have advantages and disadvantages compared with logistic regression. Boosting (meta-classifiers) and to a lesser extent decision trees (particularly CART) appear to be most promising for use in the context of propensity score analysis, but extensive simulation studies are needed to establish their utility in practice. PMID:20630332
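
    A minimal sketch of the comparison this review discusses: estimating propensity scores for a binary treatment with logistic regression versus a boosted-tree meta-classifier on the same covariates. The simulated data, covariate effects and model settings are assumptions for illustration; in practice the estimated scores would feed matching, weighting or stratification.

      import numpy as np
      from sklearn.linear_model import LogisticRegression
      from sklearn.ensemble import GradientBoostingClassifier

      # Simulated stand-in data with a non-additive treatment-assignment mechanism
      rng = np.random.default_rng(3)
      n = 5000
      X = rng.normal(size=(n, 4))
      true_logit = 0.8 * X[:, 0] - 0.5 * X[:, 1] + 0.6 * X[:, 2] * X[:, 3]
      treated = rng.binomial(1, 1.0 / (1.0 + np.exp(-true_logit)))

      ps_logistic = LogisticRegression(max_iter=1000).fit(X, treated).predict_proba(X)[:, 1]
      ps_boosted = GradientBoostingClassifier().fit(X, treated).predict_proba(X)[:, 1]

      for name, ps in [("logistic", ps_logistic), ("boosting", ps_boosted)]:
          print(f"{name:8s} mean PS (treated) = {ps[treated == 1].mean():.3f}, "
                f"(control) = {ps[treated == 0].mean():.3f}")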

  14. On the Relationships between Jeffreys Modal and Weighted Likelihood Estimation of Ability under Logistic IRT Models

    ERIC Educational Resources Information Center

    Magis, David; Raiche, Gilles

    2012-01-01

    This paper focuses on two estimators of ability with logistic item response theory models: the Bayesian modal (BM) estimator and the weighted likelihood (WL) estimator. For the BM estimator, Jeffreys' prior distribution is considered, and the corresponding estimator is referred to as the Jeffreys modal (JM) estimator. It is established that under…

  15. Parameters Estimation of Geographically Weighted Ordinal Logistic Regression (GWOLR) Model

    NASA Astrophysics Data System (ADS)

    Zuhdi, Shaifudin; Retno Sari Saputro, Dewi; Widyaningsih, Purnami

    2017-06-01

    A regression model represents the relationship between independent and dependent variables. In logistic regression the dependent variable is categorical and the model is used to calculate odds; when the dependent variable has ordered levels, the model is an ordinal logistic regression. The GWOLR model is an ordinal logistic regression model influenced by the geographical location of the observation site. Parameter estimation in the model is needed to determine population values from a sample. The purpose of this research is to estimate the parameters of the GWOLR model using R software. The estimation uses data on the number of dengue fever patients in Semarang City; the observation units are 144 villages in Semarang City. The results of the research give a local GWOLR model for each village and the probability of each category of the number of dengue fever patients.

  16. Equal Area Logistic Estimation for Item Response Theory

    NASA Astrophysics Data System (ADS)

    Lo, Shih-Ching; Wang, Kuo-Chang; Chang, Hsin-Li

    2009-08-01

    Item response theory (IRT) models use logistic functions exclusively as item response functions (IRFs). Applications of IRT models require obtaining the set of values for logistic function parameters that best fit an empirical data set. However, success in obtaining such set of values does not guarantee that the constructs they represent actually exist, for the adequacy of a model is not sustained by the possibility of estimating parameters. In this study, an equal area based two-parameter logistic model estimation algorithm is proposed. Two theorems are given to prove that the results of the algorithm are equivalent to the results of fitting data by logistic model. Numerical results are presented to show the stability and accuracy of the algorithm.

  17. The special case of the 2 × 2 table: asymptotic unconditional McNemar test can be used to estimate sample size even for analysis based on GEE.

    PubMed

    Borkhoff, Cornelia M; Johnston, Patrick R; Stephens, Derek; Atenafu, Eshetu

    2015-07-01

    Aligning the method used to estimate sample size with the planned analytic method ensures the sample size needed to achieve the planned power. When using generalized estimating equations (GEE) to analyze a paired binary primary outcome with no covariates, many use an exact McNemar test to calculate sample size. We reviewed the approaches to sample size estimation for paired binary data and compared the sample size estimates on the same numerical examples. We used the hypothesized sample proportions for the 2 × 2 table to calculate the correlation between the marginal proportions to estimate sample size based on GEE. We solved the inside proportions based on the correlation and the marginal proportions to estimate sample size based on the exact McNemar, asymptotic unconditional McNemar, and asymptotic conditional McNemar tests. The asymptotic unconditional McNemar test is a good approximation of the GEE method by Pan. The exact McNemar test is too conservative and yields unnecessarily large sample size estimates compared with all other methods. In the special case of a 2 × 2 table, even when a GEE approach to binary logistic regression is the planned analytic method, the asymptotic unconditional McNemar test can be used to estimate sample size. We do not recommend using an exact McNemar test. Copyright © 2015 Elsevier Inc. All rights reserved.
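
    One common asymptotic (unconditional) sample-size formula for McNemar's test on paired binary data, driven by the hypothesized discordant-cell proportions of the 2 × 2 table, is sketched below. It is given as an illustration of the general approach discussed in this record; the exact formulas the authors compared should be taken from the paper itself.

      from math import ceil, sqrt
      from scipy.stats import norm

      def mcnemar_pairs(p10, p01, alpha=0.05, power=0.80):
          # p10, p01: hypothesized discordant-cell proportions of the paired 2x2 table
          delta = p10 - p01              # difference in marginal proportions
          pdisc = p10 + p01              # total discordant proportion
          za, zb = norm.ppf(1 - alpha / 2), norm.ppf(power)
          n = (za * sqrt(pdisc) + zb * sqrt(pdisc - delta**2))**2 / delta**2
          return ceil(n)                 # round up to whole pairs

      # Example: 15% discordant in one direction, 5% in the other (assumed values)
      print("required pairs:", mcnemar_pairs(p10=0.15, p01=0.05))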

  18. Impact of national smoke-free legislation on home smoking bans: findings from the International Tobacco Control Policy Evaluation Project Europe Surveys.

    PubMed

    Mons, Ute; Nagelhout, Gera E; Allwright, Shane; Guignard, Romain; van den Putte, Bas; Willemsen, Marc C; Fong, Geoffrey T; Brenner, Hermann; Pötschke-Langer, Martina; Breitling, Lutz P

    2013-05-01

    To measure changes in prevalence and predictors of home smoking bans (HSBs) among smokers in four European countries after the implementation of national smoke-free legislation. Two waves of the International Tobacco Control Policy Evaluation Project Europe Surveys, which is a prospective panel study. Pre- and post-legislation data were used from Ireland, France, Germany and the Netherlands. Two pre-legislation waves from the UK were used as control. 4634 respondents from the intervention countries and 1080 from the control country completed both baseline and follow-up and were included in the present analyses. Multiple logistic regression models to identify predictors of having or of adopting a total HSB, and Generalised Estimating Equation models to compare patterns of change after implementation of smoke-free legislation to a control country without such legislation. Most smokers had at least partial smoking restrictions in their home, but the proportions varied significantly between countries. After implementation of national smoke-free legislation, the proportion of smokers with a total HSB increased significantly in all four countries. Among continuing smokers, the number of cigarettes smoked per day either remained stable or decreased significantly. Multiple logistic regression models indicated that having a young child in the household and supporting smoking bans in bars were important correlates of having a pre-legislation HSB. Prospective predictors of imposing a HSB between survey waves were planning to quit smoking, supporting a total smoking ban in bars and the birth of a child. Generalised Estimating Equation models indicated that the change in total HSB in the intervention countries was greater than that in the control country. The findings suggest that smoke-free legislation does not lead to more smoking in smokers' homes. On the contrary, our findings demonstrate that smoke-free legislation may stimulate smokers to establish total smoking bans in their homes.

  19. Clinically important respiratory effects of dust exposure and smoking in British coal miners

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Marine, W.M.; Gurr, D.; Jacobsen, M.

    A unique data set of 3380 British coal miners has been reanalyzed with major focus on nonpneumoconiotic respiratory conditions. The aim was to assess the independent contribution of smoking and exposure to respirable dust to clinically significant measures of respiratory dysfunction. Exposure to coal-mine dust was monitored over a 10-yr period. Medical surveys provided estimates of prior dust exposure and recorded respiratory symptoms. Each man's FEV1 was compared with the level predicted for his age and height by an internally derived prediction equation for FEV1. Four respiratory indices were considered at the end of the 10-yr period: FEV1 less than 80%, chronic bronchitis, chronic bronchitis with FEV1 less than 80%, and FEV1 less than 65%. Results were uniformly incorporated into logistic regression equations for each condition. The equations include coefficients for age, dust, and when indicated, an interaction term for age and dust. Dust-related increases in prevalence of each of the 4 conditions were statistically significant and were similar for smokers and nonsmokers at the mean age (47 yr). There was no evidence that smoking potentiates the effect of exposure to dust. Estimates of prevalences at the mean age of all 4 measures of respiratory dysfunction were greater in smokers. At intermediate and high dust exposure the prevalence of the 4 conditions in nonsmokers approached the prevalence in smokers at hypothetically zero dust exposure. Both smoking and dust exposure can cause clinically important respiratory dysfunction and their separate contributions to obstructive airway disease in coal miners appear to be additive.

  20. Biological Applications in the Mathematics Curriculum

    ERIC Educational Resources Information Center

    Marland, Eric; Palmer, Katrina M.; Salinas, Rene A.

    2008-01-01

    In this article we provide two detailed examples of how we incorporate biological examples into two mathematics courses: Linear Algebra and Ordinary Differential Equations. We use Leslie matrix models to demonstrate the biological properties of eigenvalues and eigenvectors. For Ordinary Differential Equations, we show how using a logistic growth…

  1. Upgrade Summer Severe Weather Tool

    NASA Technical Reports Server (NTRS)

    Watson, Leela

    2011-01-01

    The goal of this task was to upgrade the existing severe weather database by adding observations from the 2010 warm season, update the verification dataset with results from the 2010 warm season, use statistical logistic regression analysis on the database and develop a new forecast tool. The AMU analyzed 7 stability parameters that showed the possibility of providing guidance in forecasting severe weather, calculated verification statistics for the Total Threat Score (TTS), and calculated warm season verification statistics for the 2010 season. The AMU also performed statistical logistic regression analysis on the 22-year severe weather database. The results indicated that the logistic regression equation did not show an increase in skill over the previously developed TTS. The equation showed less accuracy than TTS at predicting severe weather, little ability to distinguish between severe and non-severe weather days, and worse standard categorical accuracy measures and skill scores than TTS.

  2. Advanced Targeting Cost Function Design for Evolutionary Optimization of Control of Logistic Equation

    NASA Astrophysics Data System (ADS)

    Senkerik, Roman; Zelinka, Ivan; Davendra, Donald; Oplatkova, Zuzana

    2010-06-01

    This research deals with the optimization of the control of chaos by means of evolutionary algorithms. The work aims to explain how to use evolutionary algorithms (EAs) and how to properly define an advanced targeting cost function (CF) that secures very fast and precise stabilization of the desired state for any initial conditions. As a model of a deterministic chaotic system, the one-dimensional logistic equation was used. The evolutionary algorithm Self-Organizing Migrating Algorithm (SOMA) was used in four versions. For each version, repeated simulations were conducted to outline the effectiveness and robustness of the used method and targeting CF.
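
    A minimal sketch of the setting described above: the chaotic logistic map x(n+1) = r*x(n)*(1 - x(n)) controlled by a small feedback perturbation applied near its unstable fixed point, with a simple targeting-style cost function (stabilization error accumulated after an initial transient). The feedback gain K is the kind of quantity an evolutionary algorithm such as SOMA would search over; the cost below is a toy stand-in for the advanced CF in the record, and all numerical values are assumptions.

      def cost(K, r=3.8, x0=0.3, n_steps=300, transient=100, window=0.1):
          x_star = 1.0 - 1.0 / r                 # unstable fixed point of the logistic map
          x, total = x0, 0.0
          for n in range(n_steps):
              # apply the feedback perturbation only when the orbit is near the fixed point
              u = K * (x - x_star) if abs(x - x_star) < window else 0.0
              x = r * x * (1.0 - x) + u          # controlled logistic map
              if n >= transient:
                  total += abs(x - x_star)       # accumulated stabilization error
          return total

      for K in (0.0, 0.5, 1.0, 1.5, 2.0):
          print(f"K={K:.1f}  cost={cost(K):8.3f}")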

  3. Socioeconomic Disparities in Telephone-Based Treatment of Tobacco Dependence

    PubMed Central

    Varghese, Merilyn; Stitzer, Maxine; Landes, Reid; Brackman, S. Laney; Munn, Tiffany

    2014-01-01

    Objectives. We examined socioeconomic disparities in tobacco dependence treatment outcomes from a free, proactive telephone counseling quitline. Methods. We delivered cognitive–behavioral treatment and nicotine patches to 6626 smokers and examined socioeconomic differences in demographic, clinical, environmental, and treatment use factors. We used logistic regressions and generalized estimating equations (GEE) to model abstinence and account for socioeconomic differences in the models. Results. The odds of achieving long-term abstinence differed by socioeconomic status (SES). In the GEE model, the odds of abstinence for the highest SES participants were 1.75 times those of the lowest SES participants. Logistic regression models revealed no treatment outcome disparity at the end of treatment, but significant disparities 3 and 6 months after treatment. Conclusions. Although quitlines often increase access to treatment for some lower SES smokers, significant socioeconomic disparities in treatment outcomes raise questions about whether current approaches are contributing to tobacco-related socioeconomic health disparities. Strategies to improve treatment outcomes for lower SES smokers might include novel methods to address multiple factors associated with socioeconomic disparities. PMID:24922165

  4. An Evaluation of Hierarchical Bayes Estimation for the Two- Parameter Logistic Model.

    ERIC Educational Resources Information Center

    Kim, Seock-Ho

    Hierarchical Bayes procedures for the two-parameter logistic item response model were compared for estimating item parameters. Simulated data sets were analyzed using two different Bayes estimation procedures, the two-stage hierarchical Bayes estimation (HB2) and the marginal Bayesian with known hyperparameters (MB), and marginal maximum…

  5. Fuzzy multinomial logistic regression analysis: A multi-objective programming approach

    NASA Astrophysics Data System (ADS)

    Abdalla, Hesham A.; El-Sayed, Amany A.; Hamed, Ramadan

    2017-05-01

    Parameter estimation for multinomial logistic regression is usually based on maximizing the likelihood function. For large well-balanced datasets, Maximum Likelihood (ML) estimation is a satisfactory approach. Unfortunately, ML can fail completely or at least produce poor results in terms of estimated probabilities and confidence intervals of parameters, especially for small datasets. In this study, a new approach based on fuzzy concepts is proposed to estimate parameters of the multinomial logistic regression. The study assumes that the parameters of multinomial logistic regression are fuzzy. Based on the extension principle stated by Zadeh and Bárdossy's proposition, a multi-objective programming approach is suggested to estimate these fuzzy parameters. A simulation study is used to evaluate the performance of the new approach versus the Maximum Likelihood (ML) approach. Results show that the new proposed model outperforms ML in cases of small datasets.

  6. Alternative approach to modeling bacterial lag time, using logistic regression as a function of time, temperature, pH, and sodium chloride concentration.

    PubMed

    Koseki, Shige; Nonaka, Junko

    2012-09-01

    The objective of this study was to develop a probabilistic model to predict the end of lag time (λ) during the growth of Bacillus cereus vegetative cells as a function of temperature, pH, and salt concentration using logistic regression. The developed λ model was subsequently combined with a logistic differential equation to simulate bacterial numbers over time. To develop a novel model for λ, we determined whether bacterial growth had begun, i.e., whether λ had ended, at each time point during the growth kinetics. The growth of B. cereus was evaluated by optical density (OD) measurements in culture media for various pHs (5.5 ∼ 7.0) and salt concentrations (0.5 ∼ 2.0%) at static temperatures (10 ∼ 20°C). The probability of the end of λ was modeled using dichotomous judgments obtained at each OD measurement point concerning whether a significant increase had been observed. The probability of the end of λ was described as a function of time, temperature, pH, and salt concentration and showed a high goodness of fit. The λ model was validated with independent data sets of B. cereus growth in culture media and foods, indicating acceptable performance. Furthermore, the λ model, in combination with a logistic differential equation, enabled a simulation of the population of B. cereus in various foods over time at static and/or fluctuating temperatures with high accuracy. Thus, this newly developed modeling procedure enables the description of λ using observable environmental parameters without any conceptual assumptions and the simulation of bacterial numbers over time with the use of a logistic differential equation.
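
    The two-part structure described in this record, a logistic regression for the probability that the lag phase has ended combined with a logistic differential equation for subsequent growth, can be sketched as below. All coefficients, the 0.5 probability threshold and the growth parameters are invented for illustration and are not the fitted values for B. cereus.

      import numpy as np

      def p_lag_ended(t_h, temp_c, ph, nacl_pct):
          # logistic-regression style probability that lag has ended (assumed coefficients)
          z = -12.0 + 0.15 * t_h + 0.25 * temp_c + 0.8 * ph - 1.2 * nacl_pct
          return 1.0 / (1.0 + np.exp(-z))

      def simulate_growth(temp_c, ph, nacl_pct, n0_log=2.0, nmax_log=8.0, mu=0.1,
                          t_end=120.0, dt=0.1):
          t, n = 0.0, n0_log
          traj = [(t, n)]
          while t < t_end:
              if p_lag_ended(t, temp_c, ph, nacl_pct) > 0.5:    # lag treated as a switch
                  n += dt * mu * n * (1.0 - n / nmax_log)       # logistic growth (log scale)
              t += dt
              traj.append((t, n))
          return traj

      traj = simulate_growth(temp_c=15, ph=6.5, nacl_pct=0.5)
      for t, n in traj[::240]:
          print(f"t={t:6.1f} h  log10 CFU/g = {n:.2f}")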

  7. Application of logistic regression for landslide susceptibility zoning of Cekmece Area, Istanbul, Turkey

    NASA Astrophysics Data System (ADS)

    Duman, T. Y.; Can, T.; Gokceoglu, C.; Nefeslioglu, H. A.; Sonmez, H.

    2006-11-01

    As a result of industrialization, throughout the world, cities have been growing rapidly for the last century. One typical example of these growing cities is Istanbul, the population of which is over 10 million. Due to rapid urbanization, new areas suitable for settlement and engineering structures are necessary. The Cekmece area located west of the Istanbul metropolitan area is studied, because the landslide activity is extensive in this area. The purpose of this study is to develop a model that can be used to characterize landslide susceptibility in map form using logistic regression analysis of an extensive landslide database. A database of landslide activity was constructed using both aerial-photography and field studies. About 19.2% of the selected study area is covered by deep-seated landslides. The landslides that occur in the area are primarily located in sandstones with interbedded permeable and impermeable layers such as claystone, siltstone and mudstone. About 31.95% of the total landslide area is located in this unit. To apply logistic regression analyses, a data matrix including 37 variables was constructed. The variables used in the forward stepwise analyses are different measures of slope, aspect, elevation, stream power index (SPI), plan curvature, profile curvature, geology, geomorphology and relative permeability of lithological units. A total of 25 variables were identified as exerting strong influence on landslide occurrence and were included in the logistic regression equation. Wald statistics values indicate that lithology, SPI and slope are more important than the other parameters in the equation. Beta coefficients of the 25 variables included in the logistic regression equation provide a model for landslide susceptibility in the Cekmece area. This model is used to generate a landslide susceptibility map that correctly classified 83.8% of the landslide-prone areas.

  8. Propensity score estimation: neural networks, support vector machines, decision trees (CART), and meta-classifiers as alternatives to logistic regression.

    PubMed

    Westreich, Daniel; Lessler, Justin; Funk, Michele Jonsson

    2010-08-01

    Propensity scores for the analysis of observational data are typically estimated using logistic regression. Our objective in this review was to assess machine learning alternatives to logistic regression, which may accomplish the same goals but with fewer assumptions or greater accuracy. We identified alternative methods for propensity score estimation and/or classification from the public health, biostatistics, discrete mathematics, and computer science literature, and evaluated these algorithms for applicability to the problem of propensity score estimation, potential advantages over logistic regression, and ease of use. We identified four techniques as alternatives to logistic regression: neural networks, support vector machines, decision trees (classification and regression trees [CART]), and meta-classifiers (in particular, boosting). Although the assumptions of logistic regression are well understood, those assumptions are frequently ignored. All four alternatives have advantages and disadvantages compared with logistic regression. Boosting (meta-classifiers) and, to a lesser extent, decision trees (particularly CART), appear to be most promising for use in the context of propensity score analysis, but extensive simulation studies are needed to establish their utility in practice. Copyright (c) 2010 Elsevier Inc. All rights reserved.

  9. Utility of Equations to Estimate Peak Oxygen Uptake and Work Rate From a 6-Minute Walk Test in Patients With COPD in a Clinical Setting.

    PubMed

    Kirkham, Amy A; Pauhl, Katherine E; Elliott, Robyn M; Scott, Jen A; Doria, Silvana C; Davidson, Hanan K; Neil-Sztramko, Sarah E; Campbell, Kristin L; Camp, Pat G

    2015-01-01

    To determine the utility of equations that use the 6-minute walk test (6MWT) results to estimate peak oxygen uptake (VO2) and peak work rate in chronic obstructive pulmonary disease (COPD) patients in a clinical setting. This study included a systematic review to identify published equations estimating peak VO2 and peak work rate in watts in COPD patients and a retrospective chart review of data from a hospital-based pulmonary rehabilitation program. The following variables were abstracted from the records of 42 consecutively enrolled COPD patients: measured peak VO2 and peak work rate achieved during a cycle ergometer cardiopulmonary exercise test, 6MWT distance, age, sex, weight, height, forced expiratory volume in 1 second, forced vital capacity, and lung diffusion capacity. Peak VO2 and peak work rate were estimated from 6MWT distance using the published equations. The error associated with using estimated peak VO2 or peak work rate to prescribe aerobic exercise intensities of 60% and 80% was calculated. Eleven equations from 6 studies were identified. Agreement between estimated and measured values was poor to moderate (intraclass correlation coefficients = 0.11-0.63). The error associated with using estimated peak VO2 or peak work rate to prescribe exercise intensities of 60% and 80% of measured values ranged from mean differences of 12 to 35 and 16 to 47 percentage points, respectively. There is poor to moderate agreement between measured peak VO2 and peak work rate and estimations from equations that use 6MWT distance, and the use of the estimated values for prescription of aerobic exercise intensity would result in large error. Equations estimating peak VO2 and peak work rate are of low utility for prescribing exercise intensity in pulmonary rehabilitation programs.

  10. [Effects of dissolved oxygen and pH on Candida utilis batch fermentation of glutathione].

    PubMed

    Wei, Gong-Yuan; Li, Yin; Du, Guo-Cheng; Chen, Jian

    2003-11-01

    The effects of dissolved oxygen (DO) and pH on glutathione batch fermentation by Candida utilis WSH-02-08 in a 7-liter stirred fermentor were investigated. It was shown that DO concentration is an important factor in glutathione production. With an initial glucose concentration of 30 g/L, a 5 L/min air flow rate, and an agitation rate of less than 250 r/min, the DO concentration was not sufficient to satisfy the oxygen requirement during the fermentation. With an agitation rate of more than 300 r/min, cell growth and glutathione production were enhanced significantly, with the dry cell mass and glutathione production 20% and 25% higher, respectively, than at 200 r/min. When C. utilis WSH-02-08 was cultivated in a batch process without pH control, cell growth and glutathione production were inhibited, likely due to a dramatic decrease in the pH. Intracellular glutathione leakage was observed when the pH was 1.5 or less. To assess the effect of pH on glutathione production, six batch processes controlled at pH 4.0, 4.5, 5.0, 5.5, 6.0 and 6.5 were conducted. The yield was highest at pH 5.5, where the dry cell mass and yield were 27% and 95% higher, respectively, than in fermentation without pH control. The maximal intracellular glutathione content (2.15%) was also achieved at this pH. To improve our understanding of the effect of pH on batch glutathione production, a modified Logistic equation and the Luedeking-Piret equation were used to simulate cell growth and glutathione production, respectively, under different pH conditions. Based on the parameters obtained by nonlinear estimation, a kinetic analysis was performed to elucidate the effect of pH on batch glutathione production. The process controlled at pH 5.5 was proven to be the best due to the higher value of K(I) (the substrate inhibitory constant in the Logistic equation), the lower value of alpha and the higher value of beta (the slope and intercept in the Luedeking-Piret equation, respectively).

  11. Skinfold Prediction Equations Fail to Provide an Accurate Estimate of Body Composition in Elite Rugby Union Athletes of Caucasian and Polynesian Ethnicity.

    PubMed

    Zemski, Adam J; Broad, Elizabeth M; Slater, Gary J

    2018-01-01

    Body composition in elite rugby union athletes is routinely assessed using surface anthropometry, which can be utilized to provide estimates of absolute body composition using regression equations. This study aims to assess the ability of available skinfold equations to estimate body composition in elite rugby union athletes who have unique physique traits and divergent ethnicity. The development of sport-specific and ethnicity-sensitive equations was also pursued. Forty-three male international Australian rugby union athletes of Caucasian and Polynesian descent underwent surface anthropometry and dual-energy X-ray absorptiometry (DXA) assessment. Body fat percent (BF%) was estimated using five previously developed equations and compared to DXA measures. Novel sport and ethnicity-sensitive prediction equations were developed using forward selection multiple regression analysis. Existing skinfold equations provided unsatisfactory estimates of BF% in elite rugby union athletes, with all equations demonstrating a 95% prediction interval in excess of 5%. The equations tended to underestimate BF% at low levels of adiposity, whilst overestimating BF% at higher levels of adiposity, regardless of ethnicity. The novel equations created explained a similar amount of variance to those previously developed (Caucasians 75%, Polynesians 90%). The use of skinfold equations, including the created equations, cannot be supported to estimate absolute body composition. Until a population-specific equation is established that can be validated to precisely estimate body composition, it is advocated to use a proven method, such as DXA, when absolute measures of lean and fat mass are desired, and raw anthropometry data routinely to derive an estimate of body composition change.

  12. Locally Weighted Score Estimation for Quantile Classification in Binary Regression Models

    PubMed Central

    Rice, John D.; Taylor, Jeremy M. G.

    2016-01-01

    One common use of binary response regression methods is classification based on an arbitrary probability threshold dictated by the particular application. Since this is given to us a priori, it is sensible to incorporate the threshold into our estimation procedure. Specifically, for the linear logistic model, we solve a set of locally weighted score equations, using a kernel-like weight function centered at the threshold. The bandwidth for the weight function is selected by cross-validation of a novel hybrid loss function that combines classification error and a continuous measure of divergence between observed and fitted values; other possible cross-validation functions based on more common binary classification metrics are also examined. This work has much in common with robust estimation, but differs from previous approaches in this area in its focus on prediction, specifically classification into high- and low-risk groups. Simulation results are given showing the reduction in error rates that can be obtained with this method when compared with maximum likelihood estimation, especially under certain forms of model misspecification. Analysis of a melanoma data set is presented to illustrate the use of the method in practice. PMID:28018492
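
    A loose sketch of the kernel-weighting idea behind the locally weighted score equations (not the authors' estimator, hybrid loss, or bandwidth selection): iteratively refit a logistic model with observation weights concentrated near the target probability threshold. The simulated data, threshold of 0.3, bandwidth of 0.1, and fixed iteration count are assumptions.

```python
import numpy as np
from scipy.stats import norm
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 2))
p_true = 1.0 / (1.0 + np.exp(-(0.5 + X[:, 0] - 0.8 * X[:, 1])))
y = rng.binomial(1, p_true)

threshold, bandwidth = 0.3, 0.1            # assumed values
model = LogisticRegression(C=1e6)          # near-unpenalized fit as the working model
weights = np.ones(len(y))
for _ in range(10):                        # alternate between fitting and reweighting
    model.fit(X, y, sample_weight=weights)
    p_hat = model.predict_proba(X)[:, 1]
    # kernel-like weight centered at the classification threshold
    weights = np.clip(norm.pdf((p_hat - threshold) / bandwidth), 1e-6, None)

print("intercept:", model.intercept_, "coefficients:", model.coef_)
```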

  13. The effective boundary conditions and their lifespan of the logistic diffusion equation on a coated body

    NASA Astrophysics Data System (ADS)

    Li, Huicong; Wang, Xuefeng; Wu, Yanxia

    2014-11-01

    We consider the logistic diffusion equation on a bounded domain, which has two components with a thin coating surrounding a body. The diffusion tensor is isotropic on the body and anisotropic on the coating. The size of the diffusion tensor on these components may be very different; within the coating, the diffusion rates in the normal and tangent directions may be on different scales. We find effective boundary conditions (EBCs) that are approximately satisfied by the solution of the diffusion equation on the boundary of the body. We also prove that the lifespan of each EBC, which measures how long the EBC remains effective, is infinite. The EBCs enable us to see clearly the effect of the coating and ease the difficult task of solving the PDE in a thin region with a small diffusion tensor. One motivating application is a nature reserve surrounded by a buffer zone.

  14. Recent im/migration to Canada linked to unmet health needs among sex workers in Vancouver, Canada: Findings of a longitudinal study

    PubMed Central

    Sou, Julie; Goldenberg, Shira M.; Duff, Putu; Nguyen, Paul; Shoveller, Jean; Shannon, Kate

    2017-01-01

    Despite universal health care in Canada, sex workers (SWs) and im/migrants experience suboptimal health care access. In this analysis, we examined the correlates of unmet health needs among SWs in Metro Vancouver over time. Data from a longitudinal cohort of women SWs (AESHA) were used. Of 742 SWs, 25.5% reported unmet health needs at least once over the 4-year study period. In multivariable logistic regression using generalized estimating equations, recent im/migration had the strongest impact on unmet health needs; long-term im/migration, policing, and trauma were also important determinants. Legal and social supports to promote im/migrant SWs' access to health care are recommended. PMID:28300492
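
    The analysis type named here (multivariable logistic regression fit by generalized estimating equations over repeated observations per participant) looks roughly like the statsmodels sketch below; the variable names and simulated data are placeholders, not the AESHA cohort variables.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Placeholder longitudinal data: repeated visits nested within participant id
rng = np.random.default_rng(1)
n, visits = 200, 4
df = pd.DataFrame({
    "id": np.repeat(np.arange(n), visits),
    "recent_migrant": np.repeat(rng.binomial(1, 0.2, n), visits),
    "policing": rng.binomial(1, 0.3, n * visits),
})
logit = -1.5 + 1.2 * df.recent_migrant + 0.6 * df.policing
df["unmet_need"] = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit)))

# Population-averaged (marginal) model via GEE with exchangeable within-person correlation
model = smf.gee("unmet_need ~ recent_migrant + policing", groups="id", data=df,
                family=sm.families.Binomial(),
                cov_struct=sm.cov_struct.Exchangeable())
result = model.fit()
print(np.exp(result.params))   # adjusted odds ratios
```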

  15. The dose-response relationship between the patch test and ROAT and the potential use for regulatory purposes.

    PubMed

    Fischer, Louise Arup; Voelund, Aage; Andersen, Klaus Ejner; Menné, Torkil; Johansen, Jeanne Duus

    2009-10-01

    Allergic contact dermatitis is common and can be prevented. The relationship between thresholds for patch tests and the repeated open application test (ROAT) is unclear. It would be desirable if patch test and ROAT data from already sensitized individuals could be used in prevention. The aim was to develop an equation that could predict the response to an allergen in a ROAT based on the dose-response curve derived by patch testing. Results from two human experimental elicitation studies with non-volatile allergens, nickel and the preservative methyldibromo glutaronitrile (MDBGN), were analysed by logistic dose-response statistics. The relation for volatile compounds was investigated using the results from experiments with the fragrance chemicals hydroxyisohexyl 3-cyclohexene carboxaldehyde and isoeugenol. For non-volatile compounds, the outcome of a ROAT can be estimated from the patch test by: ED_xx(ROAT) = 0.0296 × ED_xx(patch test). For volatile compounds, the equation predicts that the response in the ROAT is more severe than the patch test response, but it overestimates the response. This equation may be used for non-volatile compounds other than nickel and MDBGN, after further validation. The relationship between the patch test and the ROAT can be used for prevention, to set safe levels of allergen exposure based on patch test data.
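
    Written out, the reported conversion for non-volatile allergens is a simple proportional scaling of the eliciting dose; the worked number is purely illustrative:

```latex
\[
\mathrm{ED}_{xx}^{\mathrm{ROAT}} = 0.0296 \times \mathrm{ED}_{xx}^{\mathrm{patch}},
\qquad \text{e.g.}\;
\mathrm{ED}_{10}^{\mathrm{patch}} = 1\ \mu\mathrm{g/cm^2}
\;\Rightarrow\;
\mathrm{ED}_{10}^{\mathrm{ROAT}} \approx 0.03\ \mu\mathrm{g/cm^2}.
\]
```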

  16. Accounting for center in the Early External Cephalic Version trials: an empirical comparison of statistical methods to adjust for center in a multicenter trial with binary outcomes.

    PubMed

    Reitsma, Angela; Chu, Rong; Thorpe, Julia; McDonald, Sarah; Thabane, Lehana; Hutton, Eileen

    2014-09-26

    Clustering of outcomes at centers involved in multicenter trials is a type of center effect. The Consolidated Standards of Reporting Trials Statement recommends that multicenter randomized controlled trials (RCTs) should account for center effects in their analysis; however, most do not. The Early External Cephalic Version (EECV) trials published in 2003 and 2011 stratified by center at randomization but did not account for center in the analyses, and, due to the nature of the intervention and the number of centers, may have been prone to center effects. Using data from the EECV trials, we undertook an empirical study to compare various statistical approaches to accounting for center effect while estimating the impact of external cephalic version timing (early or delayed) on the outcomes of cesarean section, preterm birth, and non-cephalic presentation at the time of birth. The data from the EECV pilot trial and the EECV2 trial were merged into one dataset. Fisher's exact method was used to test the overall effect of external cephalic version timing unadjusted for center effects. Seven statistical models that accounted for center effects were applied to the data: i) the Mantel-Haenszel test, ii) logistic regression with fixed center effect and fixed treatment effect, iii) center-size-weighted and iv) unweighted logistic regression with fixed center effect and fixed treatment-by-center interaction, v) logistic regression with random center effect and fixed treatment effect, vi) logistic regression with random center effect and random treatment-by-center interaction, and vii) generalized estimating equations. For each of the three outcomes of interest, the approaches used to account for center effect did not alter the overall findings of the trial. The results were similar for the majority of the methods used to adjust for center, illustrating the robustness of the findings. Despite literature that suggests center effect can change the estimate of effect in multicenter trials, this empirical study does not show a difference in the outcomes of the EECV trials when accounting for center effect. The EECV2 trial was registered on 30 July 2005 with Current Controlled Trials: ISRCTN 56498577.
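
    Two of the seven adjustment strategies listed above (fixed center effect, and GEE with exchangeable within-center correlation) can be sketched as follows; the synthetic data, 12-center layout, and effect sizes are invented, not the EECV data.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Synthetic stand-in for the trial data: one row per participant
rng = np.random.default_rng(2)
n = 600
df = pd.DataFrame({
    "center": rng.integers(0, 12, n),          # 12 hypothetical centers
    "early_ecv": rng.binomial(1, 0.5, n),      # timing group (early vs delayed)
})
center_effect = rng.normal(0, 0.5, 12)[df.center]
p = 1.0 / (1.0 + np.exp(-(-0.3 - 0.4 * df.early_ecv + center_effect)))
df["cesarean"] = rng.binomial(1, p)

# (ii) Logistic regression with fixed center effect and fixed treatment effect
fixed = smf.logit("cesarean ~ early_ecv + C(center)", data=df).fit(disp=0)

# (vii) Generalized estimating equations with exchangeable within-center correlation
gee = smf.gee("cesarean ~ early_ecv", groups="center", data=df,
              family=sm.families.Binomial(),
              cov_struct=sm.cov_struct.Exchangeable()).fit()

print(fixed.params["early_ecv"], gee.params["early_ecv"])
```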

  17. Quantitative Analysis of Land Loss in Coastal Louisiana Using Remote Sensing

    NASA Astrophysics Data System (ADS)

    Wales, P. M.; Kuszmaul, J.; Roberts, C.

    2005-12-01

    For the past thirty-five years, land loss along the Louisiana coast has been recognized as a growing problem. One of the clearest indicators of this land loss was that in 2000 smooth cordgrass (Spartina alterniflora) was turning brown well before its normal dormancy period. Over 100,000 acres of marsh were affected by the 2000 browning. In 2001, data were collected using low-altitude, helicopter-based transects of the coast, with 7,400 data points collected by researchers at the USGS National Wetlands Research Center and the Louisiana Department of Natural Resources. The surveys contained data describing the characteristics of the marsh, including latitude, longitude, marsh condition, marsh color, percent vegetated, and marsh die-back. The ultimate goal of the study is to create a model that combines remote sensing images, field data, and statistical analysis to develop a methodology for estimating the margin of error in measurements of coastal land loss (erosion). A model was successfully created using a series of band combinations as predictive variables. The most successful predictive variables were the braud value [(Sum Visible TM Bands - Sum Infrared TM Bands)/(Sum Visible TM Bands + Sum Infrared TM Bands)], TM band 7/TM band 2, brightness, NDVI, wetness, vegetation index, and a 7x7 autocovariate nearest-neighbor floating window. These variables were used to generate the logistic regression model. A new image was created based on the logistic regression probability equation, where each pixel represents the probability of finding water or non-water at that location. Pixels with a high probability of representing water have a value close to 1, and pixels with a low probability of representing water have a value close to 0. The proposed logistic regression model uses seven independent variables and yields an accurate classification at 86.5% of the locations considered in the 1997 and 2001 surveys. When the logistic regression model was applied to the satellite imagery of the entire Louisiana coast study area, the statewide loss from 1997 to 2001 was estimated at 358 mi2 to 368 mi2, using two different methods for estimating land loss.
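
    Schematically, the per-pixel water-probability image described here comes from feeding band-derived indices into a fitted logistic model; the band arrays, simulated training labels, and subset of predictive variables below are placeholders rather than the study's calibration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Placeholder Landsat TM band arrays (reflectance), 100 x 100 pixel scene
rng = np.random.default_rng(3)
shape = (100, 100)
b1, b2, b3, b4, b5, b7 = (rng.uniform(0.01, 0.5, shape) for _ in range(6))

# Predictive variables analogous to those in the study
vis, ir = b1 + b2 + b3, b4 + b5 + b7
braud = (vis - ir) / (vis + ir)                 # normalized visible-infrared difference
ndvi = (b4 - b3) / (b4 + b3)
ratio_72 = b7 / b2
features = np.column_stack([braud.ravel(), ndvi.ravel(), ratio_72.ravel()])

# Training labels would come from the field-survey points; here they are simulated
water = (braud.ravel() + 0.05 * rng.normal(size=braud.size) > 0.0).astype(int)

clf = LogisticRegression(max_iter=1000).fit(features, water)
prob_water = clf.predict_proba(features)[:, 1].reshape(shape)   # probability image in [0, 1]
print(prob_water.mean())
```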

  18. Estimating the exceedance probability of rain rate by logistic regression

    NASA Technical Reports Server (NTRS)

    Chiu, Long S.; Kedem, Benjamin

    1990-01-01

    Recent studies have shown that the fraction of an area with rain intensity above a fixed threshold is highly correlated with the area-averaged rain rate. To estimate the fractional rainy area, a logistic regression model, which estimates the conditional probability that rain rate over an area exceeds a fixed threshold given the values of related covariates, is developed. The problem of dependency in the data in the estimation procedure is bypassed by the method of partial likelihood. Analyses of simulated scanning multichannel microwave radiometer and observed electrically scanning microwave radiometer data during the Global Atlantic Tropical Experiment period show that the use of logistic regression in pixel classification is superior to multiple regression in predicting whether rain rate at each pixel exceeds a given threshold, even in the presence of noisy data. The potential of the logistic regression technique in satellite rain rate estimation is discussed.
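
    The conditional exceedance model described here has the standard logistic form (threshold r0 and covariate vector x_t written generically):

```latex
\[
\Pr\{R_t > r_0 \mid \mathbf{x}_t\}
  = \frac{1}{1 + \exp\!\left[-\left(\beta_0 + \boldsymbol{\beta}^{\top}\mathbf{x}_t\right)\right]},
\]
```

    with the coefficients estimated by partial likelihood to sidestep the temporal dependence in the pixel series.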

  19. Weather adjustment using seemingly unrelated regression

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Noll, T.A.

    1995-05-01

    Seemingly unrelated regression (SUR) is a system estimation technique that accounts for time-contemporaneous correlation between individual equations within a system of equations. SUR is suited to weather adjustment estimations when the estimation is: (1) composed of a system of equations and (2) the system of equations represents either different weather stations, different sales sectors or a combination of different weather stations and different sales sectors. SUR utilizes the cross-equation error values to develop more accurate estimates of the system coefficients than are obtained using ordinary least-squares (OLS) estimation. SUR estimates can be generated using a variety of statistical software packages, including MicroTSP and SAS.
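
    A bare-bones two-equation, feasible-GLS version of SUR (equation-by-equation OLS residuals used to estimate the contemporaneous error covariance, then stacked GLS) is sketched below; the sales-sector equations, weather drivers, and numbers are invented.

```python
import numpy as np
from scipy.linalg import block_diag

rng = np.random.default_rng(4)
n = 120                                    # monthly observations
hdd = rng.normal(900, 150, n)              # heating degree days (placeholder weather driver)
cdd = rng.normal(300, 80, n)               # cooling degree days (placeholder weather driver)
e = rng.multivariate_normal([0, 0], [[1.0, 0.7], [0.7, 1.5]], n)
y1 = 50 + 0.04 * hdd + e[:, 0]             # residential sales equation
y2 = 30 + 0.03 * cdd + e[:, 1]             # commercial sales equation

X1 = np.column_stack([np.ones(n), hdd])
X2 = np.column_stack([np.ones(n), cdd])

# Stage 1: equation-by-equation OLS residuals -> contemporaneous error covariance
b1 = np.linalg.lstsq(X1, y1, rcond=None)[0]
b2 = np.linalg.lstsq(X2, y2, rcond=None)[0]
resid = np.column_stack([y1 - X1 @ b1, y2 - X2 @ b2])
sigma = resid.T @ resid / n

# Stage 2: feasible GLS on the stacked system, Omega = sigma (Kronecker) I_n
Xs = block_diag(X1, X2)
ys = np.concatenate([y1, y2])
omega_inv = np.kron(np.linalg.inv(sigma), np.eye(n))
beta_sur = np.linalg.solve(Xs.T @ omega_inv @ Xs, Xs.T @ omega_inv @ ys)
print(beta_sur)                            # [const1, hdd slope, const2, cdd slope]
```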

  20. Estimating Soil Hydraulic Parameters using Gradient Based Approach

    NASA Astrophysics Data System (ADS)

    Rai, P. K.; Tripathi, S.

    2017-12-01

    The conventional way of estimating parameters of a differential equation is to minimize the error between the observations and their estimates. The estimates are produced from the forward solution (numerical or analytical) of the differential equation assuming a set of parameters. Parameter estimation using the conventional approach requires high computational cost, setting up of initial and boundary conditions, and formation of difference equations in case the forward solution is obtained numerically. Gaussian process based approaches like Gaussian Process Ordinary Differential Equation (GPODE) and Adaptive Gradient Matching (AGM) have been developed to estimate the parameters of ordinary differential equations without explicitly solving them. Claims have been made that these approaches can straightforwardly be extended to partial differential equations; however, this has never been demonstrated. This study extends the AGM approach to PDEs and applies it to estimating parameters of the Richards equation. Unlike the conventional approach, the AGM approach does not require setting up initial and boundary conditions explicitly, which is often difficult in real-world applications of the Richards equation. The developed methodology was applied to synthetic soil moisture data. It was seen that the proposed methodology can estimate the soil hydraulic parameters correctly and can be a potential alternative to the conventional method.

  1. Estimating Contraceptive Prevalence Using Logistics Data for Short-Acting Methods: Analysis Across 30 Countries.

    PubMed

    Cunningham, Marc; Bock, Ariella; Brown, Niquelle; Sacher, Suzy; Hatch, Benjamin; Inglis, Andrew; Aronovich, Dana

    2015-09-01

    Contraceptive prevalence rate (CPR) is a vital indicator used by country governments, international donors, and other stakeholders for measuring progress in family planning programs against country targets and global initiatives as well as for estimating health outcomes. Because of the need for more frequent CPR estimates than population-based surveys currently provide, alternative approaches for estimating CPRs are being explored, including using contraceptive logistics data. Using data from the Demographic and Health Surveys (DHS) in 30 countries, population data from the United States Census Bureau International Database, and logistics data from the Procurement Planning and Monitoring Report (PPMR) and the Pipeline Monitoring and Procurement Planning System (PipeLine), we developed and evaluated 3 models to generate country-level, public-sector contraceptive prevalence estimates for injectable contraceptives, oral contraceptives, and male condoms. Models included: direct estimation through existing couple-years of protection (CYP) conversion factors, bivariate linear regression, and multivariate linear regression. Model evaluation consisted of comparing the referent DHS prevalence rates for each short-acting method with the model-generated prevalence rate using multiple metrics, including mean absolute error and proportion of countries where the modeled prevalence rate for each method was within 1, 2, or 5 percentage points of the DHS referent value. For the methods studied, family planning use estimates from public-sector logistics data were correlated with those from the DHS, validating the quality and accuracy of current public-sector logistics data. Logistics data for oral and injectable contraceptives were significantly associated (P<.05) with the referent DHS values for both bivariate and multivariate models. For condoms, however, that association was only significant for the bivariate model. With the exception of the CYP-based model for condoms, models were able to estimate public-sector prevalence rates for each short-acting method to within 2 percentage points in at least 85% of countries. Public-sector contraceptive logistics data are strongly correlated with public-sector prevalence rates for short-acting methods, demonstrating the quality of current logistics data and their ability to provide relatively accurate prevalence estimates. The models provide a starting point for generating interim estimates of contraceptive use when timely survey data are unavailable. All models except the condoms CYP model performed well; the regression models were most accurate but the CYP model offers the simplest calculation method. Future work extending the research to other modern methods, relating subnational logistics data with prevalence rates, and tracking that relationship over time is needed. © Cunningham et al.
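
    The direct CYP-based model amounts to converting dispensed quantities into couple-years of protection and dividing by the number of women of reproductive age. The quantities and population below are made up, and the conversion factors are the commonly cited standard values (roughly 15 pill cycles, 4 injectable doses, and 120 condoms per CYP), which may differ from those used in the study.

```python
# Illustrative direct CYP-based public-sector prevalence estimate.
cyp_factors = {"pills": 15, "injectables": 4, "condoms": 120}    # units per CYP (assumed)
distributed = {"pills": 1_200_000, "injectables": 900_000, "condoms": 6_000_000}
women_reproductive_age = 2_500_000                               # placeholder population

for method, qty in distributed.items():
    cyp = qty / cyp_factors[method]                              # couple-years of protection
    prevalence = 100 * cyp / women_reproductive_age
    print(f"{method}: {prevalence:.1f}% estimated public-sector prevalence")
```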

  2. Estimating Contraceptive Prevalence Using Logistics Data for Short-Acting Methods: Analysis Across 30 Countries

    PubMed Central

    Cunningham, Marc; Brown, Niquelle; Sacher, Suzy; Hatch, Benjamin; Inglis, Andrew; Aronovich, Dana

    2015-01-01

    Background: Contraceptive prevalence rate (CPR) is a vital indicator used by country governments, international donors, and other stakeholders for measuring progress in family planning programs against country targets and global initiatives as well as for estimating health outcomes. Because of the need for more frequent CPR estimates than population-based surveys currently provide, alternative approaches for estimating CPRs are being explored, including using contraceptive logistics data. Methods: Using data from the Demographic and Health Surveys (DHS) in 30 countries, population data from the United States Census Bureau International Database, and logistics data from the Procurement Planning and Monitoring Report (PPMR) and the Pipeline Monitoring and Procurement Planning System (PipeLine), we developed and evaluated 3 models to generate country-level, public-sector contraceptive prevalence estimates for injectable contraceptives, oral contraceptives, and male condoms. Models included: direct estimation through existing couple-years of protection (CYP) conversion factors, bivariate linear regression, and multivariate linear regression. Model evaluation consisted of comparing the referent DHS prevalence rates for each short-acting method with the model-generated prevalence rate using multiple metrics, including mean absolute error and proportion of countries where the modeled prevalence rate for each method was within 1, 2, or 5 percentage points of the DHS referent value. Results: For the methods studied, family planning use estimates from public-sector logistics data were correlated with those from the DHS, validating the quality and accuracy of current public-sector logistics data. Logistics data for oral and injectable contraceptives were significantly associated (P<.05) with the referent DHS values for both bivariate and multivariate models. For condoms, however, that association was only significant for the bivariate model. With the exception of the CYP-based model for condoms, models were able to estimate public-sector prevalence rates for each short-acting method to within 2 percentage points in at least 85% of countries. Conclusions: Public-sector contraceptive logistics data are strongly correlated with public-sector prevalence rates for short-acting methods, demonstrating the quality of current logistics data and their ability to provide relatively accurate prevalence estimates. The models provide a starting point for generating interim estimates of contraceptive use when timely survey data are unavailable. All models except the condoms CYP model performed well; the regression models were most accurate but the CYP model offers the simplest calculation method. Future work extending the research to other modern methods, relating subnational logistics data with prevalence rates, and tracking that relationship over time is needed. PMID:26374805

  3. Direct Logistic Fuel JP-8 Conversion in a Liquid Tin Anode Solid Oxide Fuel Cell (LTA-SOFC)

    DTIC Science & Technology

    2008-04-09

    As governed by the Nernst equation, open circuit voltage (OCV) is inversely proportional to temperature. The OCV of ... inherently stable at 1,000°C. The LTA-SOFC electrochemical reaction is based on the following thermodynamic equation: Sn(l) + O2(g) → SnO2(s), ΔG(1,000°C) ≈ -311.42 kJ ... equation 1 is 0.8 V at 1,000°C, using an oxygen partial pressure of one. This equation gives the OCV for an LTA-SOFC functioning as a battery. The tin oxide

  4. Using Multiple and Logistic Regression to Estimate the Median WillCost and Probability of Cost and Schedule Overrun for Program Managers

    DTIC Science & Technology

    2017-03-23


  5. Expression of Proteins Involved in Epithelial-Mesenchymal Transition as Predictors of Metastasis and Survival in Breast Cancer Patients

    DTIC Science & Technology

    2013-11-01

    Unconditional logistic regression was used to estimate odds ratios (OR) and 95% confidence intervals (CI) for risk of node... Unconditional logistic regression was used to estimate odds ratios (OR) and 95% confidence intervals (CI) for risk of high-grade tumors... logistic regression was used to estimate odds ratios (OR) and 95% confidence intervals (CI) for the associations between each of the seven SNPs and

  6. Estimation of Ordinary Differential Equation Parameters Using Constrained Local Polynomial Regression.

    PubMed

    Ding, A Adam; Wu, Hulin

    2014-10-01

    We propose a new method to use a constrained local polynomial regression to estimate the unknown parameters in ordinary differential equation models with a goal of improving the smoothing-based two-stage pseudo-least squares estimate. The equation constraints are derived from the differential equation model and are incorporated into the local polynomial regression in order to estimate the unknown parameters in the differential equation model. We also derive the asymptotic bias and variance of the proposed estimator. Our simulation studies show that our new estimator is clearly better than the pseudo-least squares estimator in estimation accuracy with a small price of computational cost. An application example on immune cell kinetics and trafficking for influenza infection further illustrates the benefits of the proposed new method.
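
    For orientation, the plain smoothing-based two-stage pseudo-least-squares baseline that this method improves on can be sketched for a logistic ODE; this is the comparison estimator, not the proposed constrained local-polynomial one, and the data and smoothing level are made up.

```python
import numpy as np
from scipy.integrate import odeint
from scipy.interpolate import UnivariateSpline

# Simulate noisy observations from dX/dt = r*X*(1 - X/K)
r_true, K_true = 0.4, 10.0
t = np.linspace(0, 20, 60)
X = odeint(lambda x, t: r_true * x * (1 - x / K_true), 0.5, t).ravel()
rng = np.random.default_rng(5)
X_obs = X + rng.normal(0, 0.2, X.size)

# Stage 1: smooth the trajectory and differentiate the smoother
spline = UnivariateSpline(t, X_obs, s=len(t) * 0.2 ** 2)
X_hat = spline(t)
dX_hat = spline.derivative()(t)

# Stage 2: pseudo-least squares -- regress dX/dt on the ODE right-hand side,
# which is linear in (r, r/K) since dX/dt = r*X - (r/K)*X^2
A = np.column_stack([X_hat, -X_hat ** 2])
coef, *_ = np.linalg.lstsq(A, dX_hat, rcond=None)
r_est, K_est = coef[0], coef[0] / coef[1]
print(f"r ~ {r_est:.3f} (true {r_true}), K ~ {K_est:.2f} (true {K_true})")
```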

  7. Estimation of Ordinary Differential Equation Parameters Using Constrained Local Polynomial Regression

    PubMed Central

    Ding, A. Adam; Wu, Hulin

    2015-01-01

    We propose a new method to use a constrained local polynomial regression to estimate the unknown parameters in ordinary differential equation models with a goal of improving the smoothing-based two-stage pseudo-least squares estimate. The equation constraints are derived from the differential equation model and are incorporated into the local polynomial regression in order to estimate the unknown parameters in the differential equation model. We also derive the asymptotic bias and variance of the proposed estimator. Our simulation studies show that our new estimator is clearly better than the pseudo-least squares estimator in estimation accuracy with a small price of computational cost. An application example on immune cell kinetics and trafficking for influenza infection further illustrates the benefits of the proposed new method. PMID:26401093

  8. Bifurcation Analysis and Application for Impulsive Systems with Delayed Impulses

    NASA Astrophysics Data System (ADS)

    Church, Kevin E. M.; Liu, Xinzhi

    In this article, we present a systematic approach to bifurcation analysis of impulsive systems with autonomous or periodic right-hand sides that may exhibit delayed impulse terms. Methods include Lyapunov-Schmidt reduction and center manifold reduction. Both methods are presented abstractly in the context of the stroboscopic map associated to a given impulsive system, and are illustrated by way of two in-depth examples: the analysis of a SIR model of disease transmission with seasonality and unevenly distributed moments of treatment, and a scalar logistic differential equation with a delayed census impulsive harvesting effort. It is proven that in some special cases, the logistic equation can exhibit a codimension two bifurcation at a 1:1 resonance point.

  9. The logistics of choice.

    PubMed

    Killeen, Peter R

    2015-07-01

    The generalized matching law (GML) is reconstructed as a logistic regression equation that privileges no particular value of the sensitivity parameter, a. That value will often approach 1 due to the feedback that drives switching that is intrinsic to most concurrent schedules. A model of that feedback reproduced some features of concurrent data. The GML is a law only in the strained sense that any equation that maps data is a law. The machine under the hood of matching is in all likelihood the very law that was displaced by the Matching Law. It is now time to return the Law of Effect to centrality in our science. © Society for the Experimental Analysis of Behavior.
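
    In the usual notation (with natural logarithms), reading the generalized matching law as a logistic regression amounts to the identity below, where B are response allocations, R are obtained reinforcer rates, a is sensitivity, and b is bias:

```latex
\[
\ln\frac{B_1}{B_2} = a\,\ln\frac{R_1}{R_2} + \ln b
\quad\Longleftrightarrow\quad
\frac{B_1}{B_1 + B_2}
  = \frac{1}{1 + \exp\!\left[-\left(a\,\ln\frac{R_1}{R_2} + \ln b\right)\right]}.
\]
```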

  10. A method for estimating mean and low flows of streams in national forests of Montana

    USGS Publications Warehouse

    Parrett, Charles; Hull, J.A.

    1985-01-01

    Equations were developed for estimating mean annual discharge, 80-percent exceedance discharge, and 95-percent exceedance discharge for streams on national forest lands in Montana. The equations for mean annual discharge used active-channel width, drainage area, and mean annual precipitation as independent variables, with active-channel width being most significant. The equations for 80-percent exceedance discharge and 95-percent exceedance discharge used only active-channel width as an independent variable. The standard error of estimate for the best equation for estimating mean annual discharge was 27 percent. The standard errors of estimate for the equations were 67 percent for estimating 80-percent exceedance discharge and 75 percent for estimating 95-percent exceedance discharge. (USGS)

  11. Relationship between long working hours and depression in two working populations: a structural equation model approach.

    PubMed

    Amagasa, Takashi; Nakayama, Takeo

    2012-07-01

    To test the hypothesis that the relationship reported between long working hours and depression was inconsistent in previous studies because job demand was treated as a confounder. Structural equation modeling was used to construct five models, using work-related factors and a depressive mood scale obtained from 218 clerical workers, to test for goodness of fit; the models were externally validated with data obtained from 1160 sales workers. Multiple logistic regression analysis was also performed. The model in which long working hours increased depression risk when job demand was regarded as an intermediate variable was the best-fitted model (goodness-of-fit index/root-mean-square error of approximation: 0.981 to 0.996/0.042 to 0.044). The odds ratio for depression risk was estimated at 2 to 4 for high-demand work of 60 or more hours per week versus low-demand work of less than 60 hours per week. Long working hours increased depression risk, with job demand being an intermediate variable.

  12. Matched samples logistic regression in case-control studies with missing values: when to break the matches.

    PubMed

    Hansson, Lisbeth; Khamis, Harry J

    2008-12-01

    Simulated data sets are used to evaluate conditional and unconditional maximum likelihood estimation in an individual case-control design with continuous covariates when there are different rates of excluded cases and different levels of other design parameters. The effectiveness of the estimation procedures is measured by method bias, variance of the estimators, root mean square error (RMSE) for logistic regression and the percentage of explained variation. Conditional estimation leads to higher RMSE than unconditional estimation in the presence of missing observations, especially for 1:1 matching. The RMSE is higher for the smaller stratum size, especially for the 1:1 matching. The percentage of explained variation appears to be insensitive to missing data, but is generally higher for the conditional estimation than for the unconditional estimation. It is particularly good for the 1:2 matching design. For minimizing RMSE, a high matching ratio is recommended; in this case, conditional and unconditional logistic regression models yield comparable levels of effectiveness. For maximizing the percentage of explained variation, the 1:2 matching design with the conditional logistic regression model is recommended.

  13. Improving North American forest biomass estimates from literature synthesis and meta-analysis of existing biomass equations

    Treesearch

    David C. Chojnacky; Jennifer C. Jenkins; Amanda K. Holland

    2009-01-01

    Thousands of published equations purport to estimate biomass of individual trees. These equations are often based on very small samples, however, and can provide widely different estimates for trees of the same species. We addressed this issue in a previous study by devising 10 new equations that estimated total aboveground biomass for all species in North America (...

  14. Survival modeling for the estimation of transition probabilities in model-based economic evaluations in the absence of individual patient data: a tutorial.

    PubMed

    Diaby, Vakaramoko; Adunlin, Georges; Montero, Alberto J

    2014-02-01

    Survival modeling techniques are increasingly being used as part of decision modeling for health economic evaluations. As many models are available, it is imperative for interested readers to know about the steps in selecting and using the most suitable ones. The objective of this paper is to propose a tutorial for the application of appropriate survival modeling techniques to estimate transition probabilities, for use in model-based economic evaluations, in the absence of individual patient data (IPD). An illustration of the use of the tutorial is provided based on the final progression-free survival (PFS) analysis of the BOLERO-2 trial in metastatic breast cancer (mBC). An algorithm was adopted from Guyot and colleagues, and was then run in the statistical package R to reconstruct IPD, based on the final PFS analysis of the BOLERO-2 trial. It should be emphasized that the reconstructed IPD represent an approximation of the original data. Afterwards, we fitted parametric models to the reconstructed IPD in the statistical package Stata. Both statistical and graphical tests were conducted to verify the relative and absolute validity of the findings. Finally, the equations for transition probabilities were derived using the general equation for transition probabilities used in model-based economic evaluations, and the parameters were estimated from fitted distributions. The results of the application of the tutorial suggest that the log-logistic model best fits the reconstructed data from the latest published Kaplan-Meier (KM) curves of the BOLERO-2 trial. Results from the regression analyses were confirmed graphically. An equation for transition probabilities was obtained for each arm of the BOLERO-2 trial. In this paper, a tutorial was proposed and used to estimate the transition probabilities for model-based economic evaluation, based on the results of the final PFS analysis of the BOLERO-2 trial in mBC. The results of our study can serve as a basis for any model (Markov) that needs the parameterization of transition probabilities, and only has summary KM plots available.
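
    The "general equation for transition probabilities" referred to here is usually written as tp(t) = 1 - S(t)/S(t - u) for cycle length u; the sketch below applies it to a log-logistic survival function with invented shape and scale parameters, not the BOLERO-2 estimates.

```python
import numpy as np

def loglogistic_survival(t, scale, shape):
    """S(t) for a log-logistic distribution with the given scale and shape."""
    return 1.0 / (1.0 + (t / scale) ** shape)

def transition_prob(t, cycle, scale, shape):
    """Per-cycle probability of the event during (t - cycle, t]: 1 - S(t)/S(t - cycle)."""
    return 1.0 - loglogistic_survival(t, scale, shape) / loglogistic_survival(t - cycle, scale, shape)

scale, shape = 11.0, 1.6       # invented parameters (months), NOT trial estimates
cycle = 1.0                    # model cycle length in months
for t in np.arange(cycle, 13.0, cycle):
    print(f"month {t:4.1f}: tp = {transition_prob(t, cycle, scale, shape):.4f}")
```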

  15. Analysis of Binary Adherence Data in the Setting of Polypharmacy: A Comparison of Different Approaches

    PubMed Central

    Esserman, Denise A.; Moore, Charity G.; Roth, Mary T.

    2009-01-01

    Older community dwelling adults often take multiple medications for numerous chronic diseases. Non-adherence to these medications can have a large public health impact. Therefore, the measurement and modeling of medication adherence in the setting of polypharmacy is an important area of research. We apply a variety of different modeling techniques (standard linear regression; weighted linear regression; adjusted linear regression; naïve logistic regression; beta-binomial (BB) regression; generalized estimating equations (GEE)) to binary medication adherence data from a study in a North Carolina based population of older adults, where each medication an individual was taking was classified as adherent or non-adherent. In addition, through simulation we compare these different methods based on Type I error rates, bias, power, empirical 95% coverage, and goodness of fit. We find that estimation and inference using GEE is robust to a wide variety of scenarios and we recommend using this in the setting of polypharmacy when adherence is dichotomously measured for multiple medications per person. PMID:20414358

  16. Using exploratory data analysis to identify and predict patterns of human Lyme disease case clustering within a multistate region, 2010-2014.

    PubMed

    Hendricks, Brian; Mark-Carew, Miguella

    2017-02-01

    Lyme disease is the most commonly reported vectorborne disease in the United States. The objective of our study was to identify patterns of Lyme disease reporting after multistate inclusion to mitigate potential border effects. County-level human Lyme disease surveillance data were obtained from the Kentucky, Maryland, Ohio, Pennsylvania, Virginia, and West Virginia state health departments. Rate smoothing and Local Moran's I were performed to identify clusters of reporting activity and identify spatial outliers. A logistic generalized estimating equation was used to identify significant associations in disease clustering over time. The resulting analyses identified statistically significant (P=0.05) clusters of high reporting activity and trends over time. High reporting activity aggregated near border counties in high-incidence states, while low reporting aggregated near shared county borders in non-high-incidence states. The findings highlight the need for exploratory surveillance approaches to describe the extent to which state-level reporting affects accurate estimation of Lyme disease progression. Copyright © 2017 Elsevier Ltd. All rights reserved.

  17. 48 CFR 715.370-1 - Title XII selection procedure-general.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... essential, a statement of work, estimate of personnel requirements, special requirements (logistic support... statement of work, an estimate of the personnel required, and special provisions (such as logistic support...

  18. 48 CFR 715.370-1 - Title XII selection procedure-general.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... essential, a statement of work, estimate of personnel requirements, special requirements (logistic support... statement of work, an estimate of the personnel required, and special provisions (such as logistic support...

  19. 48 CFR 715.370-1 - Title XII selection procedure-general.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... essential, a statement of work, estimate of personnel requirements, special requirements (logistic support... statement of work, an estimate of the personnel required, and special provisions (such as logistic support...

  20. Quasi-Newton methods for parameter estimation in functional differential equations

    NASA Technical Reports Server (NTRS)

    Brewer, Dennis W.

    1988-01-01

    A state-space approach to parameter estimation in linear functional differential equations is developed using the theory of linear evolution equations. A locally convergent quasi-Newton type algorithm is applied to distributed systems with particular emphasis on parameters that induce unbounded perturbations of the state. The algorithm is computationally implemented on several functional differential equations, including coefficient and delay estimation in linear delay-differential equations.

  1. Fungible weights in logistic regression.

    PubMed

    Jones, Jeff A; Waller, Niels G

    2016-06-01

    In this article we develop methods for assessing parameter sensitivity in logistic regression models. To set the stage for this work, we first review Waller's (2008) equations for computing fungible weights in linear regression. Next, we describe 2 methods for computing fungible weights in logistic regression. To demonstrate the utility of these methods, we compute fungible logistic regression weights using data from the Centers for Disease Control and Prevention's (2010) Youth Risk Behavior Surveillance Survey, and we illustrate how these alternate weights can be used to evaluate parameter sensitivity. To make our work accessible to the research community, we provide R code (R Core Team, 2015) that will generate both kinds of fungible logistic regression weights. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  2. Assessing the Effect of an Old and New Methodology for Scale Conversion on Examinee Scores

    ERIC Educational Resources Information Center

    Rizavi, Saba; Smith, Robert; Carey, Jill

    2002-01-01

    Research has been done to look at the benefits of BILOG over LOGIST as well as the potential issues that can arise if transition from LOGIST to BILOG is desired. A serious concern arises when comparability is required between previously calibrated LOGIST parameter estimates and currently calibrated BILOG estimates. It is imperative to obtain an…

  3. Odds Ratio, Delta, ETS Classification, and Standardization Measures of DIF Magnitude for Binary Logistic Regression

    ERIC Educational Resources Information Center

    Monahan, Patrick O.; McHorney, Colleen A.; Stump, Timothy E.; Perkins, Anthony J.

    2007-01-01

    Previous methodological and applied studies that used binary logistic regression (LR) for detection of differential item functioning (DIF) in dichotomously scored items either did not report an effect size or did not employ several useful measures of DIF magnitude derived from the LR model. Equations are provided for these effect size indices.…

  4. Stable Algorithm For Estimating Airdata From Flush Surface Pressure Measurements

    NASA Technical Reports Server (NTRS)

    Whitmore, Stephen, A. (Inventor); Cobleigh, Brent R. (Inventor); Haering, Edward A., Jr. (Inventor)

    2001-01-01

    An airdata estimation and evaluation system and method, including a stable algorithm for estimating airdata from nonintrusive surface pressure measurements. The airdata estimation and evaluation system is preferably implemented in a flush airdata sensing (FADS) system. The system and method of the present invention take a flow model equation and transform it into a triples formulation equation. The triples formulation equation eliminates the pressure related states from the flow model equation by strategically taking the differences of three surface pressures, known as triples. This triples formulation equation is then used to accurately estimate and compute vital airdata from nonintrusive surface pressure measurements.
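
    One commonly cited form of the FADS flow model and of the resulting "triples" relation is shown below; the symbols (impact pressure q_c, static pressure p_inf, calibration parameter epsilon, flow incidence angle theta_i at port i) follow the FADS literature and are offered only as an assumed paraphrase of the patented formulation, not a quotation of it.

```latex
\[
p_i = q_c\left(\cos^2\theta_i + \varepsilon\,\sin^2\theta_i\right) + p_\infty
\;\;\Longrightarrow\;\;
(p_i - p_j)\left(\cos^2\theta_j - \cos^2\theta_k\right)
  = (p_j - p_k)\left(\cos^2\theta_i - \cos^2\theta_j\right),
\]
```

    so each triple of ports yields an equation in the flow incidence angles alone, with the pressure states q_c and p_inf (and epsilon) eliminated.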

  5. Predicting of biomass in Brazilian tropical dry forest: a statistical evaluation of generic equations.

    PubMed

    Lima, Robson B DE; Alves, Francisco T; Oliveira, Cinthia P DE; Silva, José A A DA; Ferreira, Rinaldo L C

    2017-01-01

    Dry tropical forests are a key component of the global carbon cycle, and their biomass estimates depend almost exclusively on equations fitted to multi-species or individual-species data. Therefore, a systematic evaluation of statistical models through validation of estimates of aboveground biomass stocks is justified. In this study, we analyzed the capacity of generic and specific equations obtained from different locations in Mexico and Brazil to estimate aboveground biomass at the multi-species level and for four different species. Generic equations developed in Mexico and Brazil performed better in estimating tree biomass for multi-species data. For Poincianella bracteosa and Mimosa ophthalmocentra, the Sampaio and Silva (2005) generic equation was the only one recommended. These equations indicate lower trend and lower bias, and biomass estimates from these equations are similar. For the species Mimosa tenuiflora and Aspidosperma pyrifolium and for the genus Croton, the specific regional equations are more strongly recommended, although the generic equation of Sampaio and Silva (2005) is not discarded for biomass estimates. Models considering genus, family, successional group, climatic variables and wood specific gravity should be adjusted, tested, and the resulting equations validated at both local and regional levels as well as at the scale of the tropics with dry forest dominance.
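
    The generic equations compared in studies of this kind are typically power-law allometric models of the form below, with D a stem diameter measure and a, b fitted constants; the specific coefficients of the cited equations are not reproduced here.

```latex
\[
B = a\,D^{b}
\quad\Longleftrightarrow\quad
\ln B = \ln a + b\,\ln D .
\]
```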

  6. Ten-year risk-rating systems for California red fir and white fir: development and use

    Treesearch

    George T. Ferrell

    1989-01-01

    Logistic regression equations predicting the probability that a tree will die from natural causes--insects, diseases, intertree competition--within 10 years have been developed for California red fir (Abies magnifica) and white fir (A. concolor). The equations, like those with a 5-year prediction period already developed for these...

  7. Comparison of methods for estimating carbon dioxide storage by Sacramento's urban forest

    Treesearch

    Elena Aguaron; E. Gregory McPherson

    2012-01-01

    The limited availability of biomass equations for open-grown urban tree species has necessitated the use of forest-derived equations, with diverse conclusions on the accuracy of these equations for estimating urban biomass and carbon storage. Our goal was to determine and explain variability among estimates of CO2 storage from four sets of allometric equations for the same...

  8. Satellite rainfall retrieval by logistic regression

    NASA Technical Reports Server (NTRS)

    Chiu, Long S.

    1986-01-01

    The potential use of logistic regression in rainfall estimation from satellite measurements is investigated. Satellite measurements provide covariate information in terms of radiances from different remote sensors. The logistic regression technique can effectively accommodate many covariates and test their significance in the estimation. The outcome from the logistic model is the probability that the rain rate of a satellite pixel is above a certain threshold. By varying the thresholds, a rain-rate histogram can be obtained, from which the mean and the variance can be estimated. A logistic model is developed and applied to rainfall data collected during GATE, using as covariates the fractional rain area and a radiance measurement deduced from a microwave temperature-rain-rate relation. It is demonstrated that the fractional rain area is an important covariate in the model, consistent with the use of the so-called Area Time Integral in estimating total rain volume in other studies. To calibrate the logistic model, simulated rain fields generated by rain-field models with prescribed parameters are needed. A stringent test of the logistic model is its ability to recover the prescribed parameters of simulated rain fields. A rain-field simulation model that preserves the fractional rain area and lognormality of rain rates as found in GATE is developed. A stochastic regression model of branching and immigration, whose solutions are lognormally distributed in some asymptotic limits, has also been developed.

  9. Performance of Chronic Kidney Disease Epidemiology Collaboration Creatinine-Cystatin C Equation for Estimating Kidney Function in Cirrhosis

    PubMed Central

    Mindikoglu, Ayse L.; Dowling, Thomas C.; Weir, Matthew R.; Seliger, Stephen L.; Christenson, Robert H.; Magder, Laurence S.

    2013-01-01

    Conventional creatinine-based glomerular filtration rate (GFR) equations are insufficiently accurate for estimating GFR in cirrhosis. The Chronic Kidney Disease Epidemiology Collaboration (CKD-EPI) recently proposed an equation to estimate GFR in subjects without cirrhosis using both serum creatinine and cystatin C levels. Performance of the new CKD-EPI creatinine-cystatin C equation (2012) was superior to previous creatinine- or cystatin C-based GFR equations. To evaluate the performance of the CKD-EPI creatinine-cystatin C equation in subjects with cirrhosis, we compared it to GFR measured by non-radiolabeled iothalamate plasma clearance (mGFR) in 72 subjects with cirrhosis. We compared the “bias”, “precision” and “accuracy” of the new CKD-EPI creatinine-cystatin C equation to that of 24-hour urinary creatinine clearance (CrCl), Cockcroft-Gault (CG) and previously reported creatinine- and/or cystatin C-based GFR-estimating equations. Accuracy of CKD-EPI creatinine-cystatin C equation as quantified by root mean squared error of difference scores [differences between mGFR and estimated GFR (eGFR) or between mGFR and CrCl, or between mGFR and CG equation for each subject] (RMSE=23.56) was significantly better than that of CrCl (37.69, P=0.001), CG (RMSE=36.12, P=0.002) and GFR-estimating equations based on cystatin C only. Its accuracy as quantified by percentage of eGFRs that differed by greater than 30% with respect to mGFR was significantly better compared to CrCl (P=0.024), CG (P=0.0001), 4-variable MDRD (P=0.027) and CKD-EPI creatinine 2009 (P=0.012) equations. However, for 23.61% of the subjects, GFR estimated by CKD-EPI creatinine-cystatin C equation differed from the mGFR by more than 30%. CONCLUSIONS The diagnostic performance of CKD-EPI creatinine-cystatin C equation (2012) in patients with cirrhosis was superior to conventional equations in clinical practice for estimating GFR. However, its diagnostic performance was substantially worse than reported in subjects without cirrhosis. PMID:23744636
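
    The accuracy metrics quoted above (root mean squared error of the difference scores and the percentage of estimates deviating by more than 30% from measured GFR) are straightforward to compute from paired values; the numbers below are placeholders, not study data.

```python
import numpy as np

# Placeholder paired values (mL/min/1.73 m^2): measured GFR and an equation's estimate
mGFR = np.array([95, 72, 60, 48, 110, 35, 80, 66], dtype=float)
eGFR = np.array([88, 80, 55, 60, 95, 42, 77, 71], dtype=float)

diff = mGFR - eGFR
rmse = np.sqrt(np.mean(diff ** 2))                    # RMSE of difference scores
p30 = 100 * np.mean(np.abs(diff) / mGFR <= 0.30)      # % of estimates within 30% of measured
over_30pct = 100 - p30                                # % differing by more than 30%

print(f"RMSE = {rmse:.1f}, P30 = {p30:.0f}%, >30% error = {over_30pct:.0f}%")
```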

  10. Application of conditional moment tests to model checking for generalized linear models.

    PubMed

    Pan, Wei

    2002-06-01

    Generalized linear models (GLMs) are increasingly being used in daily data analysis. However, model checking for GLMs with correlated discrete response data remains difficult. In this paper, through a case study on marginal logistic regression using a real data set, we illustrate the flexibility and effectiveness of using conditional moment tests (CMTs), along with other graphical methods, to do model checking for generalized estimating equation (GEE) analyses. Although CMTs provide an array of powerful diagnostic tests for model checking, they were originally proposed in the econometrics literature and, to our knowledge, have never been applied to GEE analyses. CMTs cover many existing tests, including the (generalized) score test for an omitted covariate, as special cases. In summary, we believe that CMTs provide a class of useful model checking tools.

  11. Recent im/migration to Canada linked to unmet health needs among sex workers in Vancouver, Canada: Findings of a longitudinal study.

    PubMed

    Sou, Julie; Goldenberg, Shira M; Duff, Putu; Nguyen, Paul; Shoveller, Jean; Shannon, Kate

    2017-05-01

    Despite universal health care in Canada, sex workers (SWs) and im/migrants experience suboptimal health care access. In this analysis, we examined the correlates of unmet health needs among SWs in Metro Vancouver over time. Data from a longitudinal cohort of women SWs (An Evaluation of Sex Workers Health Access [AESHA]) were used. Of 742 SWs, 25.5% reported unmet health needs at least once over the 4-year study period. In multivariable logistic regression using generalized estimating equations, recent im/migration had the strongest impact on unmet health needs; long-term im/migration, policing, and trauma were also important determinants. Legal and social supports to promote im/migrant SWs' access to health care are recommended.

  12. Identification of the population density of a species model with nonlocal diffusion and nonlinear reaction

    NASA Astrophysics Data System (ADS)

    Tuan, Nguyen Huy; Van Au, Vo; Khoa, Vo Anh; Lesnic, Daniel

    2017-05-01

    The identification of the population density of a logistic equation backwards in time associated with nonlocal diffusion and nonlinear reaction, motivated by biology and ecology fields, is investigated. The diffusion depends on an integral average of the population density whilst the reaction term is a global or local Lipschitz function of the population density. After discussing the ill-posedness of the problem, we apply the quasi-reversibility method to construct stable approximation problems. It is shown that the regularized solutions stemming from such method not only depend continuously on the final data, but also strongly converge to the exact solution in the L2-norm. New error estimates together with stability results are obtained. Furthermore, numerical examples are provided to illustrate the theoretical results.

  13. A social ecological assessment of physical activity among urban adolescents.

    PubMed

    Yan, Alice Fang; Voorhees, Carolyn C; Beck, Kenneth H; Wang, Min Qi

    2014-05-01

    To examine the physical, social and temporal contexts of physical activity, as well as sex variations of the associations among 314 urban adolescents. Three-day physical activity recall measured contextual information of physical activities. Logistic regressions and generalized estimating equation models examined associations among physical activity types and contexts, and sex differences. Active transportation was the most common physical activity. Home/neighborhood and school were the most common physical activity locations. School was the main location for organized physical activity. Boys spent more time on recreational physical activity, regardless of the social context, compared to girls. The average physical activity level was significantly lower for girls than for boys after school. Physical activity promotion interventions need to target physical activity environments and social contexts in a sex-specific manner.

  14. Equations for estimating selected streamflow statistics in Rhode Island

    USGS Publications Warehouse

    Bent, Gardner C.; Steeves, Peter A.; Waite, Andrew M.

    2014-01-01

    The equations, which are based on data from streams with little to no flow alterations, will provide an estimate of the natural flows for a selected site. They will not estimate flows for altered sites with dams, surface-water withdrawals, groundwater withdrawals (pumping wells), diversions, and wastewater discharges. If the equations are used to estimate streamflow statistics for altered sites, the user should adjust the flow estimates for the alterations. The regression equations should be used only for ungaged sites with drainage areas between 0.52 and 294 square miles and stream densities between 0.94 and 3.49 miles per square mile; these are the ranges of the explanatory variables in the equations.

  15. eGFRs from Asian-modified CKD-EPI and Chinese-modified CKD-EPI equations were associated better with hypertensive target organ damage in the community-dwelling elderly Chinese: the Northern Shanghai Study.

    PubMed

    Ji, Hongwei; Zhang, Han; Xiong, Jing; Yu, Shikai; Chi, Chen; Bai, Bin; Li, Jue; Blacher, Jacques; Zhang, Yi; Xu, Yawei

    2017-01-01

    With increasing age, decline in estimated glomerular filtration rate (eGFR) is a frequent manifestation and is strongly associated with other preclinical target organ damage (TOD). Many equations exist in the literature for assessing patients' eGFR. However, these equations were mainly derived and validated in populations from Western countries, and it remains unclear which equation should be used for risk stratification in the Chinese population and how the equations compare. Considering that TOD is a good marker for risk stratification in the elderly, in this analysis we aimed to investigate whether the recent eGFR equations derived from Asian and Chinese populations are better associated with preclinical TOD than the other equations in elderly Chinese. A total of 1,599 community-dwelling elderly participants (age >65 years) in northern Shanghai were prospectively recruited from June 2014 to August 2015. Conventional cardiovascular risk factors were assessed, and hypertensive TOD, including left ventricular mass index (LVMI), carotid-femoral pulse wave velocity (cf-PWV), carotid intima-media thickness (IMT), ankle-brachial index (ABI) and urine albumin to creatinine ratio (UACR), was evaluated for each participant. Each participant's eGFR was calculated from the Modification of Diet in Renal Disease (MDRD), Chronic Kidney Disease Epidemiology Collaboration (CKD-EPI), Chinese-abbreviated MDRD (c-aMDRD), Asian-modified CKD-EPI (aCKD-EPI) and Chinese-modified CKD-EPI (cCKD-EPI) equations. In multivariate regression analysis, only eGFRs from aCKD-EPI were significantly and inversely associated with carotid IMT (P=0.005). In multivariate logistic models, decreased eGFR from all the equations was significantly associated with lower ABI (P<0.001), microalbuminuria (P=0.02 to P<0.001) and increased cf-PWV (P<0.001). Only decreased eGFRs from the aCKD-EPI and cCKD-EPI equations were significantly associated with increased IMT (both crude P<0.05). In receiver operating characteristic (ROC) analysis, only the aCKD-EPI and cCKD-EPI equations presented significant associations with all the listed preclinical TODs (P-values from <0.05 to <0.001). In community-dwelling elderly Chinese, eGFRs from the aCKD-EPI and cCKD-EPI equations are better associated with preclinical TOD. The aCKD-EPI and cCKD-EPI equations should be preferred when making risk assessments.

  16. Error Estimates for Approximate Solutions of the Riccati Equation with Real or Complex Potentials

    NASA Astrophysics Data System (ADS)

    Finster, Felix; Smoller, Joel

    2010-09-01

    A method is presented for obtaining rigorous error estimates for approximate solutions of the Riccati equation, with real or complex potentials. Our main tool is to derive invariant region estimates for complex solutions of the Riccati equation. We explain the general strategy for applying these estimates and illustrate the method in typical examples, where the approximate solutions are obtained by gluing together WKB and Airy solutions of corresponding one-dimensional Schrödinger equations. Our method is motivated by, and has applications to, the analysis of linear wave equations in the geometry of a rotating black hole.

  17. Enhancement of the Logistics Battle Command Model: Architecture Upgrades and Attrition Module Development

    DTIC Science & Technology

    2017-01-05

    Subject terms: logistics, attrition, discrete event simulation, Simkit, LBC. ...stochastics, and discrete event model programmed in Java, building largely on the Simkit library. The primary purpose of the LBC model is to support...equations makes them incompatible with the discrete event construct of LBC. Bullard further advances this methodology by developing a stochastic

  18. Estimating a Logistic Discrimination Functions When One of the Training Samples Is Subject to Misclassification: A Maximum Likelihood Approach.

    PubMed

    Nagelkerke, Nico; Fidler, Vaclav

    2015-01-01

    The problem of discrimination and classification is central to much of epidemiology. Here we consider the estimation of a logistic regression/discrimination function from training samples, when one of the training samples is subject to misclassification or mislabeling, e.g. diseased individuals are incorrectly classified/labeled as healthy controls. We show that this leads to a zero-inflated binomial model with a defective logistic regression or discrimination function, whose parameters can be estimated using standard statistical methods such as maximum likelihood. These parameters can be used to estimate the probability of true group membership among those, possibly erroneously, classified as controls. Two examples are analyzed and discussed. A simulation study explores the properties of the maximum likelihood parameter estimates and the estimates of the number of mislabeled observations.

  19. Evaluation of equations that estimate glomerular filtration rate in renal transplant recipients.

    PubMed

    De Alencastro, M G; Veronese, F V; Vicari, A R; Gonçalves, L F; Manfro, R C

    2014-03-01

    The accuracy of equations that estimate the glomerular filtration rate (GFR) in renal transplant patients has not been established; thus, their performance was assessed in stable renal transplant patients. Renal transplant patients (N=213) with stable graft function were enrolled. The Chronic Kidney Disease Epidemiology Collaboration (CKD-EPI) equation was used as the reference method and compared with the Cockcroft-Gault (CG), Modification of Diet in Renal Disease (MDRD), Mayo Clinic (MC) and Nankivell equations. Bias, accuracy and concordance rates were determined for all equations relative to CKD-EPI. Mean estimated GFR values of the equations differed significantly from the CKD-EPI values, though the correlations with the reference method were significant. Values from the MDRD equation differed from the CG, MC and Nankivell estimations. The best agreement in classifying the chronic kidney disease (CKD) stages was for the MDRD equation (kappa=0.649, P<0.001); for the other equations the agreement was moderate. The MDRD equation had less bias and narrower agreement limits but underestimated the GFR at levels above 60 mL/min/1.73 m2. Conversely, the CG, MC and Nankivell equations overestimated the GFR, and the Nankivell equation had the worst performance. The MDRD equation P15 and P30 values were higher than those of the other equations (P<0.001). Despite their correlations, the equations estimated the GFR and CKD stage differently. The MDRD equation was the most accurate, but the sub-optimal performance of all the equations precludes their accurate use in clinical practice.

  20. A regularization corrected score method for nonlinear regression models with covariate error.

    PubMed

    Zucker, David M; Gorfine, Malka; Li, Yi; Tadesse, Mahlet G; Spiegelman, Donna

    2013-03-01

    Many regression analyses involve explanatory variables that are measured with error, and failing to account for this error is well known to lead to biased point and interval estimates of the regression coefficients. We present here a new general method for adjusting for covariate error. Our method consists of an approximate version of the Stefanski-Nakamura corrected score approach, using the method of regularization to obtain an approximate solution of the relevant integral equation. We develop the theory in the setting of classical likelihood models; this setting covers, for example, linear regression, nonlinear regression, logistic regression, and Poisson regression. The method is extremely general in terms of the types of measurement error models covered, and is a functional method in the sense of not involving assumptions on the distribution of the true covariate. We discuss the theoretical properties of the method and present simulation results in the logistic regression setting (univariate and multivariate). For illustration, we apply the method to data from the Harvard Nurses' Health Study concerning the relationship between physical activity and breast cancer mortality in the period following a diagnosis of breast cancer. Copyright © 2013, The International Biometric Society.

  1. The joint effects of risk status, gender, early literacy and cognitive skills on the presence of dyslexia among a group of high-risk Chinese children.

    PubMed

    Wong, Simpson W L; McBride-Chang, Catherine; Lam, Catherine; Chan, Becky; Lam, Fanny W F; Doo, Sylvia

    2012-02-01

    This study sought to examine factors that are predictive of future developmental dyslexia among a group of 5-year-old Chinese children at risk for dyslexia, including 62 children with a sibling who had been previously diagnosed with dyslexia and 52 children who manifested clinical at-risk factors in aspects of language according to testing by paediatricians. The age-5 performances on various literacy and cognitive tasks, gender and group status (familial risk or language delayed) were used to predict developmental dyslexia 2 years later using logistic regression analysis. Results showed that greater risk of dyslexia was related to slower rapid automatized naming, lower scores on morphological awareness, Chinese character recognition and English letter naming, and gender (boys had more risk). Three logistic equations were generated for estimating individual risk of dyslexia. The strongest models were those that included all print-related variables (including speeded number naming, character recognition and letter identification) and gender, with about 70% accuracy or above. Early identification of those Chinese children at risk for dyslexia can facilitate better dyslexia risk management. Copyright © 2012 John Wiley & Sons, Ltd.

  2. Stabilization in a two-species chemotaxis system with a logistic source

    NASA Astrophysics Data System (ADS)

    Tello, J. I.; Winkler, M.

    2012-05-01

    We study a system of three partial differential equations modelling the spatio-temporal behaviour of two competitive populations of biological species, both of which are attracted chemotactically by the same signal substance. More precisely, we consider the corresponding initial-boundary value problem for the two population densities, governed by chemotaxis equations with logistic-type source terms, coupled to an equation for the common chemical signal.

  3. Equations for predicting uncompacted crown ratio based on compacted crown ratio and tree attributes.

    Treesearch

    Vicente J. Monleon; David Azuma; Donald Gedney

    2004-01-01

    Equations to predict uncompacted crown ratio as a function of compacted crown ratio, tree diameter, and tree height are developed for the main tree species in Oregon, Washington, and California using data from the Forest Health Monitoring Program, USDA Forest Service. The uncompacted crown ratio was modeled with a logistic function and fitted using weighted, nonlinear...
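
    A minimal sketch of fitting a logistic-form crown-ratio model by weighted nonlinear least squares is shown below; the predictor set, weights, and data are hypothetical stand-ins rather than the published species-specific equations.

      import numpy as np
      from scipy.optimize import curve_fit

      def uncr_logistic(predictors, b0, b1, b2, b3):
          # Logistic form bounded in (0, 1); the predictors and coefficients are
          # hypothetical stand-ins, not a published species-specific model.
          ccr, dbh, height = predictors
          return 1.0 / (1.0 + np.exp(-(b0 + b1 * ccr + b2 * dbh + b3 * height)))

      # Hypothetical tree records: compacted crown ratio, diameter, height, UNCR.
      rng = np.random.default_rng(1)
      ccr = rng.uniform(0.2, 0.8, 200)
      dbh = rng.uniform(10.0, 80.0, 200)
      height = rng.uniform(5.0, 50.0, 200)
      uncr = np.clip(uncr_logistic((ccr, dbh, height), -1.0, 3.0, 0.01, 0.02)
                     + rng.normal(0.0, 0.05, 200), 0.01, 0.99)

      weights = np.sqrt(dbh)  # illustrative weights only
      params, _ = curve_fit(uncr_logistic, (ccr, dbh, height), uncr,
                            p0=[0.0, 1.0, 0.0, 0.0], sigma=1.0 / weights)
      print(params)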

  4. Stable boundary conditions and difference schemes for Navier-Stokes equations

    NASA Technical Reports Server (NTRS)

    Dutt, P.

    1985-01-01

    The Navier-Stokes equations can be viewed as an incompletely elliptic perturbation of the Euler equations. By using the entropy function for the Euler equations as a measure of energy for the Navier-Stokes equations, it was possible to obtain nonlinear energy estimates for the mixed initial-boundary value problem. These estimates are used to derive boundary conditions which guarantee L2 boundedness even when the Reynolds number tends to infinity. Finally, a new difference scheme for modelling the Navier-Stokes equations in multiple dimensions was proposed, for which it is possible to obtain discrete energy estimates exactly analogous to those obtained for the differential equations.

  5. Novel Equations for Estimating Lean Body Mass in Patients With Chronic Kidney Disease.

    PubMed

    Tian, Xue; Chen, Yuan; Yang, Zhi-Kai; Qu, Zhen; Dong, Jie

    2018-05-01

    Simplified methods to estimate lean body mass (LBM), an important nutritional measure representing muscle mass and somatic protein, are lacking in nondialyzed patients with chronic kidney disease (CKD). We developed and tested 2 reliable equations for estimation of LBM in daily clinical practice. The development and validation groups both included 150 nondialyzed patients with CKD Stages 3 to 5. Two equations for estimating LBM, one based on mid-arm muscle circumference (MAMC) and the other on handgrip strength (HGS), were developed and validated in CKD patients with dual-energy x-ray absorptiometry as the gold-standard reference method. These equations also incorporated sex, height, and weight. The new equations were found to exhibit only small biases when compared with dual-energy x-ray absorptiometry, with median differences of 0.94 and 0.46 kg observed for the HGS and MAMC equations, respectively. Good precision and accuracy were achieved for both equations, as reflected by small interquartile ranges in the differences and by the percentages of estimates falling within 20% of measured LBM. The bias, precision, and accuracy of each equation were found to be similar when it was applied to groups of patients divided by the median measured LBM, the median ratio of extracellular to total body water, and the stages of CKD. LBM estimated from MAMC or HGS was found to provide accurate estimates of LBM in nondialyzed patients with CKD. Copyright © 2017 National Kidney Foundation, Inc. Published by Elsevier Inc. All rights reserved.

  6. Regularity estimates up to the boundary for elliptic systems of difference equations

    NASA Technical Reports Server (NTRS)

    Strikwerda, J. C.; Wade, B. A.; Bube, K. P.

    1986-01-01

    Regularity estimates up to the boundary for solutions of elliptic systems of finite difference equations were proved. The regularity estimates, obtained for boundary fitted coordinate systems on domains with smooth boundary, involve discrete Sobolev norms and are proved using pseudo-difference operators to treat systems with variable coefficients. The elliptic systems of difference equations and the boundary conditions which are considered are very general in form. The regularity of a regular elliptic system of difference equations was proved equivalent to the nonexistence of eigensolutions. The regularity estimates obtained are analogous to those in the theory of elliptic systems of partial differential equations, and to the results of Gustafsson, Kreiss, and Sundstrom (1972) and others for hyperbolic difference equations.

  7. Structural Definition and Mass Estimation of Lunar Surface Habitats for the Lunar Architecture Team Phase 2 (LAT-2) Study

    NASA Technical Reports Server (NTRS)

    Dorsey, John T.; Wu, K. Chauncey; Smith, Russell W.

    2008-01-01

    The Lunar Architecture Team Phase 2 study defined and assessed architecture options for a Lunar Outpost at the Moon's South Pole. The Habitation Focus Element Team was responsible for developing concepts for all of the Habitats and pressurized logistics modules particular to each of the architectures, and defined the shapes, volumes and internal layouts considering human factors, surface operations and safety requirements, as well as Lander mass and volume constraints. The Structures Subsystem Team developed structural concepts, sizing estimates and mass estimates for the primary Habitat structure. In these studies, the primary structure was decomposed into a more detailed list of components to be sized to gain greater insight into concept mass contributors. Structural mass estimates were developed that captured the effect of major design parameters such as internal pressure load. Analytical and empirical equations were developed for each structural component identified. Over 20 different hard-shell, hybrid expandable and inflatable soft-shell Habitat and pressurized logistics module concepts were sized and compared to assess structural performance and efficiency during the study. Habitats were developed in three categories; Mini Habs that are removed from the Lander and placed on the Lunar surface, Monolithic habitats that remain on the Lander, and Habitats that are part of the Mobile Lander system. Each category of Habitat resulted in structural concepts with advantages and disadvantages. The same modular shell components could be used for the Mini Hab concept, maximizing commonality and minimizing development costs. Larger Habitats had higher volumetric mass efficiency and floor area than smaller Habitats (whose mass was dominated by fixed items such as domes and frames). Hybrid and pure expandable Habitat structures were very mass-efficient, but the structures technology is less mature, and the ability to efficiently package and deploy internal subsystems remains an open issue.

  8. The applicability of eGFR equations to different populations.

    PubMed

    Delanaye, Pierre; Mariat, Christophe

    2013-09-01

    The Cockcroft-Gault equation for estimating glomerular filtration rate has been learnt by every generation of medical students over the decades. Since the publication of the Modification of Diet in Renal Disease (MDRD) study equation in 1999, however, the supremacy of the Cockcroft-Gault equation has been relentlessly disputed. More recently, the Chronic Kidney Disease Epidemiology (CKD-EPI) consortium has proposed a group of novel equations for estimating glomerular filtration rate (GFR). The MDRD and CKD-EPI equations were developed following a rigorous process, are expressed in a way in which they can be used with standardized biomarkers of GFR (serum creatinine and/or serum cystatin C) and have been evaluated in different populations of patients. Today, the MDRD Study equation and the CKD-EPI equation based on serum creatinine level have supplanted the Cockcroft-Gault equation. In many regards, these equations are superior to the Cockcroft-Gault equation and are now specifically recommended by international guidelines. With their generalized use, however, it has become apparent that those equations are not infallible and that they fail to provide an accurate estimate of GFR in certain situations frequently encountered in clinical practice. After describing the processes that led to the development of the new GFR-estimating equations, this Review discusses the clinical situations in which the applicability of these equations is questioned.

  9. Techniques for estimating flood-peak discharges of rural, unregulated streams in Ohio

    USGS Publications Warehouse

    Koltun, G.F.

    2003-01-01

    Regional equations for estimating 2-, 5-, 10-, 25-, 50-, 100-, and 500-year flood-peak discharges at ungaged sites on rural, unregulated streams in Ohio were developed by means of ordinary and generalized least-squares (GLS) regression techniques. One-variable, simple equations and three-variable, full-model equations were developed on the basis of selected basin characteristics and flood-frequency estimates determined for 305 streamflow-gaging stations in Ohio and adjacent states. The average standard errors of prediction ranged from about 39 to 49 percent for the simple equations, and from about 34 to 41 percent for the full-model equations. Flood-frequency estimates determined by means of log-Pearson Type III analyses are reported along with weighted flood-frequency estimates, computed as a function of the log-Pearson Type III estimates and the regression estimates. Values of explanatory variables used in the regression models were determined from digital spatial data sets by means of a geographic information system (GIS), with the exception of drainage area, which was determined by digitizing the area within basin boundaries manually delineated on topographic maps. Use of GIS-based explanatory variables represents a major departure in methodology from that described in previous reports on estimating flood-frequency characteristics of Ohio streams. Examples are presented illustrating application of the regression equations to ungaged sites on ungaged and gaged streams. A method is provided to adjust regression estimates for ungaged sites by use of weighted and regression estimates for a gaged site on the same stream. A region-of-influence method, which employs a computer program to estimate flood-frequency characteristics for ungaged sites based on data from gaged sites with similar characteristics, was also tested and compared to the GLS full-model equations. For all recurrence intervals, the GLS full-model equations had superior prediction accuracy relative to the simple equations and therefore are recommended for use.

  10. A Bayesian goodness of fit test and semiparametric generalization of logistic regression with measurement data.

    PubMed

    Schörgendorfer, Angela; Branscum, Adam J; Hanson, Timothy E

    2013-06-01

    Logistic regression is a popular tool for risk analysis in medical and population health science. With continuous response data, it is common to create a dichotomous outcome for logistic regression analysis by specifying a threshold for positivity. Fitting a linear regression to the nondichotomized response variable assuming a logistic sampling model for the data has been empirically shown to yield more efficient estimates of odds ratios than ordinary logistic regression of the dichotomized endpoint. We illustrate that risk inference is not robust to departures from the parametric logistic distribution. Moreover, the model assumption of proportional odds is generally not satisfied when the condition of a logistic distribution for the data is violated, leading to biased inference from a parametric logistic analysis. We develop novel Bayesian semiparametric methodology for testing goodness of fit of parametric logistic regression with continuous measurement data. The testing procedures hold for any cutoff threshold and our approach simultaneously provides the ability to perform semiparametric risk estimation. Bayes factors are calculated using the Savage-Dickey ratio for testing the null hypothesis of logistic regression versus a semiparametric generalization. We propose a fully Bayesian and a computationally efficient empirical Bayesian approach to testing, and we present methods for semiparametric estimation of risks, relative risks, and odds ratios when parametric logistic regression fails. Theoretical results establish the consistency of the empirical Bayes test. Results from simulated data show that the proposed approach provides accurate inference irrespective of whether parametric assumptions hold or not. Evaluation of risk factors for obesity shows that different inferences are derived from an analysis of a real data set when deviations from a logistic distribution are permissible in a flexible semiparametric framework. © 2013, The International Biometric Society.

  11. Control of nonlinear systems using periodic parametric perturbations with application to a reversed field pinch

    NASA Astrophysics Data System (ADS)

    Mirus, Kevin Andrew

    In this thesis, the possibility of controlling low- and high-dimensional chaotic systems by periodically driving an accessible system parameter is examined. This method has been carried out on several numerical systems and the MST Reversed Field Pinch. The numerical systems investigated include the logistic equation, the Lorenz equations, the Rossler equations, a coupled lattice of logistic equations, a coupled lattice of Lorenz equations, the Yoshida equations, which model tearing mode fluctuations in a plasma, and a neural net model for magnetic fluctuations on MST. This method was tested on the MST by sinusoidally driving a magnetic flux through the toroidal gap of the device. Numerically, periodic drives were found to be most effective at producing limit cycle behavior or significantly reducing the dimension of the system when the perturbation frequency was near natural frequencies of unstable periodic orbits embedded in the attractor of the unperturbed system. Several different unstable periodic orbits have been stabilized in this way for the low-dimensional numerical systems, sometimes with perturbation amplitudes that were less than 5% of the nominal value of the parameter being perturbed. In high- dimensional systems, limit cycle behavior and significant decreases in the system dimension were also achieved using perturbations with frequencies near the natural unstable periodic orbit frequencies. Results for the MST were not this encouraging, most likely because of an insufficient drive amplitude, the extremely high dimension of the plasma behavior, large amounts of noise, and a lack of stationarity in the transient plasma pulses.
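
    A minimal sketch of the basic idea, applied to the simplest system listed above (the logistic equation): the map parameter is modulated periodically and the long-run orbit is compared with the unperturbed case. The drive amplitude and period below are illustrative choices, and stabilization onto a low-period orbit is not guaranteed for these particular values.

      import numpy as np

      def logistic_orbit(r0, eps=0.0, period=2, n_steps=5000, x0=0.3):
          # Iterate x_{n+1} = r_n x_n (1 - x_n) with a periodically modulated
          # parameter r_n = r0 * (1 + eps * cos(2 * pi * n / period)).
          x, orbit = x0, []
          for n in range(n_steps):
              r = r0 * (1.0 + eps * np.cos(2.0 * np.pi * n / period))
              x = r * x * (1.0 - x)
              orbit.append(x)
          return np.array(orbit)

      # r0 = 3.8 is chaotic when unperturbed; a small periodic drive (~3% of r0)
      # can collapse the attractor onto a low-period orbit for suitable periods.
      free = logistic_orbit(3.8)
      driven = logistic_orbit(3.8, eps=0.03, period=2)
      print(len(np.unique(np.round(free[-200:], 4))),
            len(np.unique(np.round(driven[-200:], 4))))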

  12. Estimating multilevel logistic regression models when the number of clusters is low: a comparison of different statistical software procedures.

    PubMed

    Austin, Peter C

    2010-04-22

    Multilevel logistic regression models are increasingly being used to analyze clustered data in medical, public health, epidemiological, and educational research. Procedures for estimating the parameters of such models are available in many statistical software packages. There is currently little evidence on the minimum number of clusters necessary to reliably fit multilevel regression models. We conducted a Monte Carlo study to compare the performance of different statistical software procedures for estimating multilevel logistic regression models when the number of clusters was low. We examined procedures available in BUGS, HLM, R, SAS, and Stata. We found that there were qualitative differences in the performance of different software procedures for estimating multilevel logistic models when the number of clusters was low. Among the likelihood-based procedures, estimation methods based on adaptive Gauss-Hermite approximations to the likelihood (glmer in R and xtlogit in Stata) or adaptive Gaussian quadrature (Proc NLMIXED in SAS) tended to have superior performance for estimating variance components when the number of clusters was small, compared to software procedures based on penalized quasi-likelihood. However, only Bayesian estimation with BUGS allowed for accurate estimation of variance components when there were fewer than 10 clusters. For all statistical software procedures, estimation of variance components tended to be poor when there were only five subjects per cluster, regardless of the number of clusters.

  13. Aneuploidy theory explains tumor formation, the absence of immune surveillance, and the failure of chemotherapy.

    PubMed

    Rasnick, David

    2002-07-01

    The autocatalyzed progression of aneuploidy accounts for all cancer-specific phenotypes, the Hayflick limit of cultured cells, carcinogen-induced tumors in mice, the age distribution of human cancer, and multidrug-resistance. Here aneuploidy theory addresses tumor formation. The logistic equation, φ_(n+1) = r·φ_n·(1 − φ_n), models the autocatalyzed progression of aneuploidy in vivo and in vitro. The variable φ_(n+1) is the average aneuploid fraction of a population of cells at the (n+1)-th cell division and is determined by the value at the n-th cell division. The value r is the growth control parameter. The logistic equation was used to compute the probability distribution of values of φ after numerous divisions of aneuploid cells. The autocatalyzed progression of aneuploidy follows the laws of deterministic chaos, which means that certain values of φ are more probable than others. The probability map of the logistic equation shows that: 1) an aneuploid fraction of at least 0.30 is necessary to sustain a population of cancer cells; and 2) the most likely aneuploid fraction after many population doublings is 0.70, which is equivalent to a DNA index of 1.7, the point of maximum disorder of the genome that still sustains life. Aneuploidy theory also explains the lack of immune surveillance and the failure of chemotherapy.
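
    A minimal sketch of how such a probability map can be computed by direct iteration is given below; the growth parameter r is chosen only for illustration, since the abstract does not state the value(s) used by the author.

      import numpy as np

      def aneuploid_fraction_histogram(r, n_steps=100000, burn_in=1000, phi0=0.51):
          # Iterate phi_{n+1} = r * phi_n * (1 - phi_n) and histogram the visited
          # aneuploid fractions; the shape of this "probability map" depends on r.
          phi, samples = phi0, []
          for n in range(n_steps):
              phi = r * phi * (1.0 - phi)
              if n >= burn_in:
                  samples.append(phi)
          hist, edges = np.histogram(samples, bins=20, range=(0.0, 1.0), density=True)
          return hist, edges

      hist, edges = aneuploid_fraction_histogram(r=3.9)  # r chosen only for illustration
      for h, lo in zip(hist, edges[:-1]):
          print(f"{lo:.2f}-{lo + 0.05:.2f}: {h:.3f}")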

  14. Derivation of a Provisional, Age-dependent, AIS2+ Thoracic Risk Curve for the THOR50 Test Dummy via Integration of NASS Cases, PMHS Tests, and Simulation Data.

    PubMed

    Laituri, Tony R; Henry, Scott; El-Jawahri, Raed; Muralidharan, Nirmal; Li, Guosong; Nutt, Marvin

    2015-11-01

    A provisional, age-dependent thoracic risk equation (or, "risk curve") was derived to estimate moderate-to-fatal injury potential (AIS2+), pertaining to men with responses gaged by the advanced mid-sized male test dummy (THOR50). The derivation involved two distinct data sources: cases from real-world crashes (e.g., the National Automotive Sampling System, NASS) and cases involving post-mortem human subjects (PMHS). The derivation was therefore more comprehensive, as NASS datasets generally skew towards younger occupants, and PMHS datasets generally skew towards older occupants. However, known deficiencies had to be addressed (e.g., the NASS cases had unknown stimuli, and the PMHS tests required transformation of known stimuli into THOR50 stimuli). For the NASS portion of the analysis, chest-injury outcomes for adult male drivers about the size of the THOR50 were collected from real-world, 11-1 o'clock, full-engagement frontal crashes (NASS, 1995-2012 calendar years, 1985-2012 model-year light passenger vehicles). The screening for THOR50-sized men involved application of a set of newly-derived "correction" equations for self-reported height and weight data in NASS. Finally, THOR50 stimuli were estimated via field simulations involving attendant representative restraint systems, and those stimuli were then assigned to corresponding NASS cases (n=508). For the PMHS portion of the analysis, simulation-based closure equations were developed to convert PMHS stimuli into THOR50 stimuli. Specifically, closure equations were derived for the four measurement locations on the THOR50 chest by cross-correlating the results of matched-loading simulations between the test dummy and the age-dependent, Ford Human Body Model. The resulting closure equations demonstrated acceptable fidelity (n=75 matched simulations, R2≥0.99). These equations were applied to the THOR50-sized men in the PMHS dataset (n=20). The NASS and PMHS datasets were combined and subjected to survival analysis with event-frequency weighting and arbitrary censoring. The resulting risk curve--a function of peak THOR50 chest compression and age--demonstrated acceptable fidelity for recovering the AIS2+ chest injury rate of the combined dataset (i.e., IR_dataset=1.97% vs. curve-based IR_dataset=1.98%). Additional sensitivity analyses showed that (a) binary logistic regression yielded a risk curve with nearly-identical fidelity, (b) there was only a slight advantage of combining the small-sample PMHS dataset with the large-sample NASS dataset, (c) use of the PMHS-based risk curve for risk estimation of the combined dataset yielded relatively poor performance (194% difference), and (d) when controlling for the type of contact (lab-consistent or not), the resulting risk curves were similar.

  15. Logistic regression analysis to predict Medical Licensing Examination of Thailand (MLET) Step1 success or failure.

    PubMed

    Wanvarie, Samkaew; Sathapatayavongs, Boonmee

    2007-09-01

    The aim of this paper was to assess factors that predict students' performance in the Medical Licensing Examination of Thailand (MLET) Step1 examination. The hypothesis was that demographic factors and academic records would predict the students' performance in the Step1 Licensing Examination. A logistic regression analysis of demographic factors (age, sex and residence) and academic records [high school grade point average (GPA), National University Entrance Examination score and GPAs of the pre-clinical years] against the MLET Step1 outcome was performed using data from 117 third-year Ramathibodi medical students. Twenty-three (19.7%) students failed the MLET Step1 examination. Stepwise logistic regression analysis showed that the significant predictors of MLET Step1 success/failure were residence background and the GPAs of the second and third preclinical years. For each 1-point increase in second-year and third-year GPA, the odds of passing the MLET Step1 examination increased by factors of 16.3 and 12.8, respectively. The minimum GPAs required for students from urban and rural backgrounds to pass the examination were estimated from the equation (2.35 vs 2.65 on a 4.00 scale). Students from rural backgrounds and/or with low grade point averages in their second and third preclinical years of medical school are at risk of failing the MLET Step1 examination. They should be given intensive tutorials during the second and third pre-clinical years.
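
    On the logit scale, the reported odds factors correspond to regression coefficients of roughly β = ln(16.3) ≈ 2.79 for the second-year GPA and β = ln(12.8) ≈ 2.55 for the third-year GPA; this is a back-calculation from the figures quoted in the abstract, not a coefficient reported by the authors.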

  16. Modeling time-location patterns of inner-city high school students in New York and Los Angeles using a longitudinal approach with generalized estimating equations.

    PubMed

    Decastro, B Rey; Sax, Sonja N; Chillrud, Steven N; Kinney, Patrick L; Spengler, John D

    2007-05-01

    The TEACH Project obtained subjects' time-location information as part of its assessment of personal exposures to air toxics for high school students in two major urban areas. This report uses a longitudinal modeling approach to characterize the association between demographic and temporal predictors and the subjects' time-location behavior for three microenvironments: indoor-home, indoor-school, and outdoors. Such a longitudinal approach has not, to the knowledge of the authors, been previously applied to time-location data. Subjects were 14- to 19-year-old, self-reported non-smokers, and were recruited from high schools in New York, NY (31 subjects: nine male, 22 female) and Los Angeles, CA (31 subjects: eight male, 23 female). Subjects reported their time-location in structured 24-h diaries with 15-min intervals for three consecutive weekdays in each of the winter and summer-fall seasons in New York and Los Angeles during 1999-2000. The data set contained 15,009 observations. A longitudinal logistic regression model was run for each microenvironment, where the binary outcome indicated the subject's presence in a microenvironment during a 15-min period. The generalized estimating equation (GEE) technique with alternating logistic regressions was used to account for the correlation of observations within each subject. The multivariate models revealed complex time-location patterns, with subjects predominantly in the indoor-home microenvironment, but also with a clear influence of the school schedule. The models also found that a subject's presence in a particular microenvironment may be significantly positively correlated for as long as 45 min before the current observation. Demographic variables were also predictive of time-location behavior: for the indoor-home microenvironment, having an after-school job (OR=0.67 [95% confidence interval: 0.54-0.85]); for indoor-school, living in New York (0.42 [0.29-0.59]); and for outdoors, being 16 years old (0.80 [0.67-0.96]), being 17 years old (0.71 [0.54-0.92]), and having an after-school job (1.29 [1.07-1.56]).
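
    As an illustration of the general approach (not the authors' exact model), the sketch below fits a population-averaged logistic model to hypothetical diary data using the GEE implementation in statsmodels; an exchangeable working correlation stands in for the alternating-logistic-regressions association model used in the study, which this sketch does not reproduce.

      import numpy as np
      import pandas as pd
      import statsmodels.api as sm
      import statsmodels.formula.api as smf

      # Hypothetical long-format diary data: one row per 15-minute interval.
      rng = np.random.default_rng(2)
      n_subjects, n_intervals = 40, 96
      df = pd.DataFrame({
          "subject": np.repeat(np.arange(n_subjects), n_intervals),
          "hour": np.tile(np.arange(n_intervals) // 4, n_subjects),
          "after_school_job": np.repeat(rng.binomial(1, 0.4, n_subjects), n_intervals),
      })
      eta = -0.5 + 0.8 * (df["hour"] >= 18) - 0.4 * df["after_school_job"]
      df["at_home"] = rng.binomial(1, 1.0 / (1.0 + np.exp(-eta)))

      # Population-averaged logistic model; the exchangeable working correlation
      # is a stand-in for the alternating-logistic-regressions association model.
      model = smf.gee("at_home ~ I(hour >= 18) + after_school_job",
                      groups="subject", data=df,
                      family=sm.families.Binomial(),
                      cov_struct=sm.cov_struct.Exchangeable())
      print(model.fit().summary())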

  17. Estimating the Probability of Rare Events Occurring Using a Local Model Averaging.

    PubMed

    Chen, Jin-Hua; Chen, Chun-Shu; Huang, Meng-Fan; Lin, Hung-Chih

    2016-10-01

    In statistical applications, logistic regression is a popular method for analyzing binary data accompanied by explanatory variables. But when one of the two outcomes is rare, the estimation of model parameters has been shown to be severely biased and hence estimating the probability of rare events occurring based on a logistic regression model would be inaccurate. In this article, we focus on estimating the probability of rare events occurring based on logistic regression models. Instead of selecting a best model, we propose a local model averaging procedure based on a data perturbation technique applied to different information criteria to obtain different probability estimates of rare events occurring. Then an approximately unbiased estimator of Kullback-Leibler loss is used to choose the best one among them. We design complete simulations to show the effectiveness of our approach. For illustration, a necrotizing enterocolitis (NEC) data set is analyzed. © 2016 Society for Risk Analysis.

  18. Impact of the biorefinery size on the logistics of corn stover supply – A scenario analysis

    DOE PAGES

    Wang, Yu; Ebadian, Mahmood; Sokhansanj, Shahab; ...

    2017-03-23

    In this study, three scenarios are considered to quantify the impact of the biorefinery size on the required biomass logistical resources. The biorefinery scenarios include small scale (175 dt/day)-SS, medium scale (520 dt/day)-MS and large scale (860 dt/day)-LS. These scenarios are compared against the following logistical resources: (1) harvest area and contracted fields, (2) logistics equipment fleet and the workforce to run this fleet and (3) intermediate storage sites and their biomass inventory levels. To this end, the IBSAL-MC simulation model is applied to a corn stover logistics system in Southwestern Ontario. The obtained results show that (1) the harvest area and the number of contracted fields increase by 65% and 78% from the SS scenario to the MS and LS scenarios, respectively; (2) the average biomass delivered costs are estimated to be $82.09, $87.49 and $93.75/dry tonne in the SS, MS and LS scenarios, while the increase in the capital costs to develop a dedicated logistics equipment fleet is estimated to be far greater than the increase in the delivered costs as the size of the biorefinery increases, with upfront capital costs estimated at $6.72 million, $21.83 million and $35.51 million in these scenarios; to run the logistics equipment fleet efficiently, 37, 136 and 235 well-trained operators are required in the SS, MS and LS scenarios, respectively; and (3) the inventory level and the land requirement for storage in the MS and LS scenarios are estimated to be 225% and 425% greater than those of the SS scenario. The sensitivity analysis indicates that the logistical resources are highly sensitive to corn yield and farm participation rate. Overall, this study shows the importance of considering the size of the required logistical resources and the associated level of logistical complexity in evaluating the economic viability of a biorefinery project.

  19. Impact of the biorefinery size on the logistics of corn stover supply – A scenario analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Yu; Ebadian, Mahmood; Sokhansanj, Shahab

    In this study, three scenarios are considered to quantify the impact of the biorefinery size on the required biomass logistical resources. The biorefinery scenarios include small scale (175 dt/day)-SS, medium scale (520 dt/day)-MS and large scale (860 dt/day)-LS. These scenarios are compared against the following logistical resources: (1) harvest area and contracted fields, (2) logistics equipment fleet and the workforce to run this fleet and (3) intermediate storage sites and their biomass inventory levels. To this end, the IBSAL-MC simulation model is applied to a corn stover logistics system in Southwestern Ontario. The obtained results show that (1) the harvest area and the number of contracted fields increase by 65% and 78% from the SS scenario to the MS and LS scenarios, respectively; (2) the average biomass delivered costs are estimated to be $82.09, $87.49 and $93.75/dry tonne in the SS, MS and LS scenarios, while the increase in the capital costs to develop a dedicated logistics equipment fleet is estimated to be far greater than the increase in the delivered costs as the size of the biorefinery increases, with upfront capital costs estimated at $6.72 million, $21.83 million and $35.51 million in these scenarios; to run the logistics equipment fleet efficiently, 37, 136 and 235 well-trained operators are required in the SS, MS and LS scenarios, respectively; and (3) the inventory level and the land requirement for storage in the MS and LS scenarios are estimated to be 225% and 425% greater than those of the SS scenario. The sensitivity analysis indicates that the logistical resources are highly sensitive to corn yield and farm participation rate. Overall, this study shows the importance of considering the size of the required logistical resources and the associated level of logistical complexity in evaluating the economic viability of a biorefinery project.

  20. Do group-specific equations provide the best estimates of stature?

    PubMed

    Albanese, John; Osley, Stephanie E; Tuck, Andrew

    2016-04-01

    An estimate of stature can be used by a forensic anthropologist with the preliminary identification of an unknown individual when human skeletal remains are recovered. Fordisc is a computer application that can be used to estimate stature; like many other methods it requires the user to assign an unknown individual to a specific group defined by sex, race/ancestry, and century of birth before an equation is applied. The assumption is that a group-specific equation controls for group differences and should provide the best results most often. In this paper we assess the utility and benefits of using group-specific equations to estimate stature using Fordisc. Using the maximum length of the humerus and the maximum length of the femur from individuals with documented stature, we address the question: do sex-, race/ancestry- and century-specific stature equations provide the best results when estimating stature? The data for our sample of 19th Century White males (n=28) were entered into Fordisc and stature was estimated using 22 different equation options for a total of 616 trials: 19th and 20th Century Black males, 19th and 20th Century Black females, 19th and 20th Century White females, 19th and 20th Century White males, 19th and 20th Century any, and 20th Century Hispanic males. The equations were assessed for utility in any one case (how many times the estimated range bracketed the documented stature) and in aggregate using 1-way ANOVA and other approaches. The group-specific equation that should have provided the best results was outperformed by several other equations for both the femur and humerus. These results suggest that group-specific equations do not provide better results for estimating stature, while at the same time being more difficult to apply because an unknown must be allocated to a given group before stature can be estimated. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  1. Stature estimation equations for South Asian skeletons based on DXA scans of contemporary adults.

    PubMed

    Pomeroy, Emma; Mushrif-Tripathy, Veena; Wells, Jonathan C K; Kulkarni, Bharati; Kinra, Sanjay; Stock, Jay T

    2018-05-03

    Stature estimation from the skeleton is a classic anthropological problem, and recent years have seen the proliferation of population-specific regression equations. Many rely on the anatomical reconstruction of stature from archaeological skeletons to derive regression equations based on long bone lengths, but this requires a collection with very good preservation. In some regions, for example, South Asia, typical environmental conditions preclude the sufficient preservation of skeletal remains. Large-scale epidemiological studies that include medical imaging of the skeleton by techniques such as dual-energy X-ray absorptiometry (DXA) offer new potential datasets for developing such equations. We derived estimation equations based on known height and bone lengths measured from DXA scans from the Andhra Pradesh Children and Parents Study (Hyderabad, India). Given debates on the most appropriate regression model to use, multiple methods were compared, and the performance of the equations was tested on a published skeletal dataset of individuals with known stature. The equations have standard errors of estimates and prediction errors similar to those derived using anatomical reconstruction or from cadaveric datasets. As measured by the number of significant differences between true and estimated stature, and the prediction errors, the new equations perform as well as, and generally better than, published equations commonly used on South Asian skeletons or based on Indian cadaveric datasets. This study demonstrates the utility of DXA scans as a data source for developing stature estimation equations and offer a new set of equations for use with South Asian datasets. © 2018 Wiley Periodicals, Inc.

  2. Estimating time-varying exposure-outcome associations using case-control data: logistic and case-cohort analyses.

    PubMed

    Keogh, Ruth H; Mangtani, Punam; Rodrigues, Laura; Nguipdop Djomo, Patrick

    2016-01-05

    Traditional analyses of standard case-control studies using logistic regression do not allow estimation of time-varying associations between exposures and the outcome. We present two approaches which allow this. The motivation is a study of vaccine efficacy as a function of time since vaccination. Our first approach is to estimate time-varying exposure-outcome associations by fitting a series of logistic regressions within successive time periods, reusing controls across periods. Our second approach treats the case-control sample as a case-cohort study, with the controls forming the subcohort. In the case-cohort analysis, controls contribute information at all times they are at risk. Extensions allow left truncation, frequency matching and, using the case-cohort analysis, time-varying exposures. Simulations are used to investigate the methods. The simulation results show that both methods give correct estimates of time-varying effects of exposures using standard case-control data. Using the logistic approach there are efficiency gains by reusing controls over time and care should be taken over the definition of controls within time periods. However, using the case-cohort analysis there is no ambiguity over the definition of controls. The performance of the two analyses is very similar when controls are used most efficiently under the logistic approach. Using our methods, case-control studies can be used to estimate time-varying exposure-outcome associations where they may not previously have been considered. The case-cohort analysis has several advantages, including that it allows estimation of time-varying associations as a continuous function of time, while the logistic regression approach is restricted to assuming a step function form for the time-varying association.

  3. Robust logistic regression to narrow down the winner's curse for rare and recessive susceptibility variants.

    PubMed

    Kesselmeier, Miriam; Lorenzo Bermejo, Justo

    2017-11-01

    Logistic regression is the most common technique used for genetic case-control association studies. A disadvantage of standard maximum likelihood estimators of the genotype relative risk (GRR) is their strong dependence on outlier subjects, for example, patients diagnosed at unusually young age. Robust methods are available to constrain outlier influence, but they are scarcely used in genetic studies. This article provides a non-intimidating introduction to robust logistic regression, and investigates its benefits and limitations in genetic association studies. We applied the bounded Huber and extended the R package 'robustbase' with the re-descending Hampel functions to down-weight outlier influence. Computer simulations were carried out to assess the type I error rate, mean squared error (MSE) and statistical power according to major characteristics of the genetic study and investigated markers. Simulations were complemented with the analysis of real data. Both standard and robust estimation controlled type I error rates. Standard logistic regression showed the highest power but standard GRR estimates also showed the largest bias and MSE, in particular for associated rare and recessive variants. For illustration, a recessive variant with a true GRR=6.32 and a minor allele frequency=0.05 investigated in a 1000 case/1000 control study by standard logistic regression resulted in power=0.60 and MSE=16.5. The corresponding figures for Huber-based estimation were power=0.51 and MSE=0.53. Overall, Hampel- and Huber-based GRR estimates did not differ much. Robust logistic regression may represent a valuable alternative to standard maximum likelihood estimation when the focus lies on risk prediction rather than identification of susceptibility variants. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  4. Novel Equations for Estimating Lean Body Mass in Peritoneal Dialysis Patients

    PubMed Central

    Dong, Jie; Li, Yan-Jun; Xu, Rong; Yang, Zhi-Kai; Zheng, Ying-Dong

    2015-01-01

    ♦ Objectives: To develop and validate equations for estimating lean body mass (LBM) in peritoneal dialysis (PD) patients. ♦ Methods: Two equations for estimating LBM, one based on mid-arm muscle circumference (MAMC) and hand grip strength (HGS), i.e., LBM-M-H, and the other based on HGS, i.e., LBM-H, were developed and validated with LBM obtained by dual-energy X-ray absorptiometry (DEXA). The developed equations were compared to LBM estimated from creatinine kinetics (LBM-CK) and anthropometry (LBM-A) in terms of bias, precision, and accuracy. The prognostic values of LBM estimated from the equations in all-cause mortality risk were assessed. ♦ Results: The developed equations incorporated gender, height, weight, and dialysis duration. Compared to LBM-DEXA, the bias of the developed equations was lower than that of LBM-CK and LBM-A. Additionally, LBM-M-H and LBM-H had better accuracy and precision. The prognostic values of LBM in all-cause mortality risk based on LBM-M-H, LBM-H, LBM-CK, and LBM-A were similar. ♦ Conclusions: Lean body mass estimated by the new equations based on MAMC and HGS was correlated with LBM obtained by DEXA and may serve as practical surrogate markers of LBM in PD patients. PMID:26293839

  5. Novel Equations for Estimating Lean Body Mass in Peritoneal Dialysis Patients.

    PubMed

    Dong, Jie; Li, Yan-Jun; Xu, Rong; Yang, Zhi-Kai; Zheng, Ying-Dong

    2015-12-01

    ♦ To develop and validate equations for estimating lean body mass (LBM) in peritoneal dialysis (PD) patients. ♦ Two equations for estimating LBM, one based on mid-arm muscle circumference (MAMC) and hand grip strength (HGS), i.e., LBM-M-H, and the other based on HGS, i.e., LBM-H, were developed and validated with LBM obtained by dual-energy X-ray absorptiometry (DEXA). The developed equations were compared to LBM estimated from creatinine kinetics (LBM-CK) and anthropometry (LBM-A) in terms of bias, precision, and accuracy. The prognostic values of LBM estimated from the equations in all-cause mortality risk were assessed. ♦ The developed equations incorporated gender, height, weight, and dialysis duration. Compared to LBM-DEXA, the bias of the developed equations was lower than that of LBM-CK and LBM-A. Additionally, LBM-M-H and LBM-H had better accuracy and precision. The prognostic values of LBM in all-cause mortality risk based on LBM-M-H, LBM-H, LBM-CK, and LBM-A were similar. ♦ Lean body mass estimated by the new equations based on MAMC and HGS was correlated with LBM obtained by DEXA and may serve as practical surrogate markers of LBM in PD patients. Copyright © 2015 International Society for Peritoneal Dialysis.

  6. One parameter family of master equations for logistic growth and BCM theory

    NASA Astrophysics Data System (ADS)

    De Oliveira, L. R.; Castellani, C.; Turchetti, G.

    2015-02-01

    We propose a one-parameter family of master equations, for the evolution of a population, having the logistic equation as mean field limit. The parameter α determines the relative weight of linear versus nonlinear terms in the population number n ⩽ N entering the loss term. By varying α from 0 to 1 the equilibrium distribution changes from maximum growth to almost extinction. The former is a Gaussian centered at n = N, the latter is a power law peaked at n = 1. A bimodal distribution is observed in the transition region. When N grows and tends to ∞, keeping the value of α fixed, the distribution tends to a Gaussian centered at n = N whose limit is a delta function corresponding to the stable equilibrium of the mean field equation. The choice of the master equation in this family depends on the equilibrium distribution for finite values of N. The presence of an absorbing state at n = 0 does not change this picture, since the extinction mean time grows exponentially fast with N. As a consequence, for α close to zero extinction is not observed, whereas when α approaches 1 relaxation to a power law is observed before extinction occurs. We extend this approach to a well-known model of synaptic plasticity, the so-called BCM theory, in the case of a single neuron with one or two synapses.
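
    For orientation, writing the mean-field limit in the conventional form dn/dt = r·n·(1 − n/N) (the abstract does not give the authors' exact normalization), its solution n(t) = N·n0·e^(rt) / (N + n0·(e^(rt) − 1)) relaxes monotonically to the stable equilibrium n = N, the deterministic counterpart of the Gaussian concentration at n = N described above.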

  7. A critical review and database of biomass and volume allometric equation for trees and shrubs of Bangladesh

    NASA Astrophysics Data System (ADS)

    Mahmood, H.; Siddique, M. R. H.; Akhter, M.

    2016-08-01

    Estimations of biomass, volume and carbon stock are important in the decision-making process for the sustainable management of a forest. These estimations can be conducted using available allometric equations for biomass and volume. The present study aims to: (i) develop a compilation of verified allometric equations of biomass, volume, and carbon for trees and shrubs of Bangladesh, and (ii) identify the gaps and scope for further development of allometric equations for different trees and shrubs of Bangladesh. Key stakeholders (government departments, research organizations, academic institutions, and potential individual researchers) were identified considering their involvement in the use and development of allometric equations. A list of documents containing allometric equations was prepared from secondary sources. The documents were collected, examined, and sorted to avoid repetition, yielding 50 documents. These equations were tested through a quality control scheme involving operational verification, conceptual verification, applicability, and statistical credibility. A total of 517 allometric equations for 80 species of trees, shrubs, palm, and bamboo were recorded. In addition, 222 allometric equations for 39 species were validated through the quality control scheme. Among the verified equations, 20%, 12% and 62% were for green biomass, oven-dried biomass, and volume, respectively, and 4 tree species contributed 37% of the total verified equations. Five gaps have been pinpointed in the existing allometric equations of Bangladesh: (a) little work on allometric equations for common tree and shrub species, (b) concentration of most of the work on certain species, (c) a very small proportion of allometric equations for biomass estimation, (d) no allometric equations for belowground biomass and carbon estimation, and (e) a low proportion of valid allometric equations. It is recommended that site- and species-specific allometric equations be developed, and that consistency in field sampling, sample processing, data recording and selection of allometric equations be maintained, to ensure accuracy in estimating biomass, volume, and carbon stock in different forest types of Bangladesh.

  8. Equations relating compacted and uncompacted live crown ratio for common tree species in the South

    Treesearch

    KaDonna C. Randolph

    2010-01-01

    Species-specific equations to predict uncompacted crown ratio (UNCR) from compacted live crown ratio (CCR), tree length, and stem diameter were developed for 24 species and 12 genera in the southern United States. Using data from the US Forest Service Forest Inventory and Analysis program, nonlinear regression was used to model UNCR with a logistic function. Model...

  9. New robust statistical procedures for the polytomous logistic regression models.

    PubMed

    Castilla, Elena; Ghosh, Abhik; Martin, Nirian; Pardo, Leandro

    2018-05-17

    This article derives a new family of estimators, namely the minimum density power divergence estimators, as a robust generalization of the maximum likelihood estimator for the polytomous logistic regression model. Based on these estimators, a family of Wald-type test statistics for linear hypotheses is introduced. Robustness properties of both the proposed estimators and the test statistics are theoretically studied through the classical influence function analysis. Appropriate real life examples are presented to justify the requirement of suitable robust statistical procedures in place of the likelihood based inference for the polytomous logistic regression model. The validity of the theoretical results established in the article are further confirmed empirically through suitable simulation studies. Finally, an approach for the data-driven selection of the robustness tuning parameter is proposed with empirical justifications. © 2018, The International Biometric Society.

  10. Technique for estimating depth of floods in Tennessee

    USGS Publications Warehouse

    Gamble, C.R.

    1983-01-01

    Estimates of flood depths are needed for design of roadways across flood plains and for other types of construction along streams. Equations for estimating flood depths in Tennessee were derived using data for 150 gaging stations. The equations are based on drainage basin size and can be used to estimate depths of the 10-year and 100-year floods for four hydrologic areas. A method also was developed for estimating depth of floods having recurrence intervals between 10 and 100 years. Standard errors range from 22 to 30 percent for the 10-year depth equations and from 23 to 30 percent for the 100-year depth equations. (USGS)

  11. MARSnet: Mission-aware Autonomous Radar Sensor Network for Future Combat Systems

    DTIC Science & Technology

    2007-05-03

    34Parameter estimation for 3-parameter log-logistic distribution (LLD3) by Porne ", Parameter estimation for 3-parameter log-logistic distribu- tion...section V we physical security, air traffic control, traffic monitoring, andvidefaconu s cribedy. video surveillance, industrial automation etc. Each

  12. On the use and misuse of scalar scores of confounders in design and analysis of observational studies.

    PubMed

    Pfeiffer, R M; Riedl, R

    2015-08-15

    We assess the asymptotic bias of estimates of exposure effects conditional on covariates when summary scores of confounders, instead of the confounders themselves, are used to analyze observational data. First, we study regression models for cohort data that are adjusted for summary scores. Second, we derive the asymptotic bias for case-control studies when cases and controls are matched on a summary score, and then analyzed either using conditional logistic regression or by unconditional logistic regression adjusted for the summary score. Two scores, the propensity score (PS) and the disease risk score (DRS) are studied in detail. For cohort analysis, when regression models are adjusted for the PS, the estimated conditional treatment effect is unbiased only for linear models, or at the null for non-linear models. Adjustment of cohort data for DRS yields unbiased estimates only for linear regression; all other estimates of exposure effects are biased. Matching cases and controls on DRS and analyzing them using conditional logistic regression yields unbiased estimates of exposure effect, whereas adjusting for the DRS in unconditional logistic regression yields biased estimates, even under the null hypothesis of no association. Matching cases and controls on the PS yield unbiased estimates only under the null for both conditional and unconditional logistic regression, adjusted for the PS. We study the bias for various confounding scenarios and compare our asymptotic results with those from simulations with limited sample sizes. To create realistic correlations among multiple confounders, we also based simulations on a real dataset. Copyright © 2015 John Wiley & Sons, Ltd.
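
    A minimal sketch of the propensity-score-adjusted logistic analysis whose bias is studied here, using hypothetical simulated data: the score is estimated by logistic regression of exposure on the confounders and then entered as a covariate in the outcome model. As the paper shows, for a non-linear outcome model such as this one the resulting conditional exposure-effect estimate is in general biased except at the null, so the code only illustrates the mechanics.

      import numpy as np
      import statsmodels.api as sm

      rng = np.random.default_rng(3)
      n = 2000
      z = rng.normal(size=(n, 3))  # confounders
      expo = rng.binomial(1, 1.0 / (1.0 + np.exp(-(z @ [0.5, -0.4, 0.3]))))
      y = rng.binomial(1, 1.0 / (1.0 + np.exp(-(-1.0 + 0.7 * expo + z @ [0.6, 0.2, -0.3]))))

      # Step 1: estimate the propensity score P(exposure | confounders).
      ps_model = sm.Logit(expo, sm.add_constant(z)).fit(disp=0)
      ps = ps_model.predict(sm.add_constant(z))

      # Step 2: outcome model adjusted for the scalar score instead of the confounders.
      out_model = sm.Logit(y, sm.add_constant(np.column_stack([expo, ps]))).fit(disp=0)
      print(out_model.params)  # the exposure coefficient is the estimate whose bias is studied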

  13. p-Euler equations and p-Navier-Stokes equations

    NASA Astrophysics Data System (ADS)

    Li, Lei; Liu, Jian-Guo

    2018-04-01

    We propose in this work new systems of equations which we call p-Euler equations and p-Navier-Stokes equations. p-Euler equations are derived as the Euler-Lagrange equations for the action represented by the Benamou-Brenier characterization of Wasserstein-p distances, with incompressibility constraint. p-Euler equations have similar structures with the usual Euler equations but the 'momentum' is the signed (p - 1)-th power of the velocity. In the 2D case, the p-Euler equations have streamfunction-vorticity formulation, where the vorticity is given by the p-Laplacian of the streamfunction. By adding diffusion presented by γ-Laplacian of the velocity, we obtain what we call p-Navier-Stokes equations. If γ = p, the a priori energy estimates for the velocity and momentum have dual symmetries. Using these energy estimates and a time-shift estimate, we show the global existence of weak solutions for the p-Navier-Stokes equations in Rd for γ = p and p ≥ d ≥ 2 through a compactness criterion.

  14. An economic evaluation of the controlled temperature chain approach for vaccine logistics: evidence from a study conducted during a meningitis A vaccine campaign in Togo.

    PubMed

    Mvundura, Mercy; Lydon, Patrick; Gueye, Abdoulaye; Diaw, Ibnou Khadim; Landoh, Dadja Essoya; Toi, Bafei; Kahn, Anna-Lea; Kristensen, Debra

    2017-01-01

    A recent innovation in support of the final segment of the immunization supply chain is licensing certain vaccines for use in a controlled temperature chain (CTC), which allows excursions into ambient temperatures up to 40°C for a specific number of days immediately prior to administration. However, limited evidence exists on CTC economics to inform investments for labeling other eligible vaccines for CTC use. Using data collected during a MenAfriVac™ campaign in Togo, we estimated economic costs for vaccine logistics when using the CTC approach compared to the full cold chain logistics (CCL) approach. We conducted the study in Togo's Central Region, where two districts were using the CTC approach and two relied on a full CCL approach during the MenAfriVac™ campaign. Data to estimate vaccine logistics costs were obtained from primary data collected using costing questionnaires and from financial cost data from campaign microplans. Costs are presented in 2014 US dollars. Average logistics costs per dose were estimated at $0.026±0.032 for facilities using a CTC and $0.029±0.054 for facilities using the full CCL approach, but the two estimates were not statistically different. However, if the facilities without refrigerators had not used a CTC but had received daily deliveries of vaccines, the average cost per dose would have increased to $0.063 (range $0.007 to $0.33), with larger logistics cost increases occurring for facilities that were far from the district. Using the CTC approach can reduce logistics costs for remote facilities without cold chain infrastructure, which is where CTC is designed to reduce logistical challenges of vaccine distribution.

  15. An economic evaluation of the controlled temperature chain approach for vaccine logistics: evidence from a study conducted during a meningitis A vaccine campaign in Togo

    PubMed Central

    Mvundura, Mercy; Lydon, Patrick; Gueye, Abdoulaye; Diaw, Ibnou Khadim; Landoh, Dadja Essoya; Toi, Bafei; Kahn, Anna-Lea; Kristensen, Debra

    2017-01-01

    Introduction A recent innovation in support of the final segment of the immunization supply chain is licensing certain vaccines for use in a controlled temperature chain (CTC), which allows excursions into ambient temperatures up to 40°C for a specific number of days immediately prior to administration. However, limited evidence exists on CTC economics to inform investments for labeling other eligible vaccines for CTC use. Using data collected during a MenAfriVac™ campaign in Togo, we estimated economic costs for vaccine logistics when using the CTC approach compared to the full cold chain logistics (CCL) approach. Methods We conducted the study in Togo’s Central Region, where two districts were using the CTC approach and two relied on a full CCL approach during the MenAfriVac™ campaign. Data to estimate vaccine logistics costs were obtained from primary data collected using costing questionnaires and from financial cost data from campaign microplans. Costs are presented in 2014 US dollars. Results Average logistics costs per dose were estimated at $0.026±0.032 for facilities using a CTC and $0.029±0.054 for facilities using the full CCL approach, but the two estimates were not statistically different. However, if the facilities without refrigerators had not used a CTC but had received daily deliveries of vaccines, the average cost per dose would have increased to $0.063 (range $0.007 to $0.33), with larger logistics cost increases occurring for facilities that were far from the district. Conclusion Using the CTC approach can reduce logistics costs for remote facilities without cold chain infrastructure, which is where CTC is designed to reduce logistical challenges of vaccine distribution. PMID:29296162

  16. Methods for estimating selected spring and fall low-flow frequency statistics for ungaged stream sites in Iowa, based on data through June 2014

    USGS Publications Warehouse

    Eash, David A.; Barnes, Kimberlee K.; O'Shea, Padraic S.

    2016-09-19

    A statewide study was conducted to develop regression equations for estimating three selected spring and three selected fall low-flow frequency statistics for ungaged stream sites in Iowa. The estimation equations developed for the six low-flow frequency statistics include spring (April through June) 1-, 7-, and 30-day mean low flows for a recurrence interval of 10 years and fall (October through December) 1-, 7-, and 30-day mean low flows for a recurrence interval of 10 years. Estimates of the three selected spring statistics are provided for 241 U.S. Geological Survey continuous-record streamgages, and estimates of the three selected fall statistics are provided for 238 of these streamgages, using data through June 2014. Because only 9 years of fall streamflow record were available, three streamgages included in the development of the spring regression equations were not included in the development of the fall regression equations. Because of regulation, diversion, or urbanization, 30 of the 241 streamgages were not included in the development of the regression equations. The study area includes Iowa and adjacent areas within 50 miles of the Iowa border. Because trend analyses indicated statistically significant positive trends when considering the period of record for most of the streamgages, the longest, most recent period of record without a significant trend was determined for each streamgage for use in the study. Geographic information system software was used to measure 63 selected basin characteristics for each of the 211 streamgages used to develop the regional regression equations. The study area was divided into three low-flow regions that were defined in a previous study for the development of regional regression equations. Because several streamgages included in the development of regional regression equations have estimates of zero flow calculated from observed streamflow for selected spring and fall low-flow frequency statistics, the final equations for the three low-flow regions were developed using two types of regression analyses—left-censored and generalized-least-squares regression analyses. A total of 211 streamgages were included in the development of nine spring regression equations—three equations for each of the three low-flow regions. A total of 208 streamgages were included in the development of nine fall regression equations—three equations for each of the three low-flow regions. A censoring threshold was used to develop 15 left-censored regression equations to estimate the three fall low-flow frequency statistics for each of the three low-flow regions and to estimate the three spring low-flow frequency statistics for the southern and northwest regions. For the northeast region, generalized-least-squares regression was used to develop three equations to estimate the three spring low-flow frequency statistics. For the northeast region, average standard errors of prediction range from 32.4 to 48.4 percent for the spring equations and average standard errors of estimate range from 56.4 to 73.8 percent for the fall equations. For the northwest region, average standard errors of estimate range from 58.9 to 62.1 percent for the spring equations and from 83.2 to 109.4 percent for the fall equations. For the southern region, average standard errors of estimate range from 43.2 to 64.0 percent for the spring equations and from 78.1 to 78.7 percent for the fall equations. The regression equations are applicable only to stream sites in Iowa with low flows not substantially affected by regulation, diversion, or urbanization and with basin characteristics within the range of those used to develop the equations. The regression equations will be implemented within the U.S. Geological Survey StreamStats Web-based geographic information system application. StreamStats allows users to click on any ungaged stream site and compute estimates of the six selected spring and fall low-flow statistics; in addition, 90-percent prediction intervals and the measured basin characteristics for the ungaged site are provided. StreamStats also allows users to click on any Iowa streamgage to obtain computed estimates for the six selected spring and fall low-flow statistics.

  17. Risk estimation using probability machines

    PubMed Central

    2014-01-01

    Background Logistic regression has been the de facto, and often the only, model used in the description and analysis of relationships between a binary outcome and observed features. It is widely used to obtain the conditional probabilities of the outcome given predictors, as well as predictor effect size estimates using conditional odds ratios. Results We show how statistical learning machines for binary outcomes, provably consistent for the nonparametric regression problem, can be used to provide both consistent conditional probability estimation and conditional effect size estimates. Effect size estimates from learning machines leverage our understanding of counterfactual arguments central to the interpretation of such estimates. We show that, if the data generating model is logistic, we can recover accurate probability predictions and effect size estimates with nearly the same efficiency as a correct logistic model, both for main effects and interactions. We also propose a method using learning machines to scan for possible interaction effects quickly and efficiently. Simulations using random forest probability machines are presented. Conclusions The models we propose make no assumptions about the data structure, and capture the patterns in the data by just specifying the predictors involved and not any particular model structure. So they do not run the same risks of model mis-specification and the resultant estimation biases as a logistic model. This methodology, which we call a “risk machine”, will share properties with the statistical machine from which it is derived. PMID:24581306

  18. Risk estimation using probability machines.

    PubMed

    Dasgupta, Abhijit; Szymczak, Silke; Moore, Jason H; Bailey-Wilson, Joan E; Malley, James D

    2014-03-01

    Logistic regression has been the de facto, and often the only, model used in the description and analysis of relationships between a binary outcome and observed features. It is widely used to obtain the conditional probabilities of the outcome given predictors, as well as predictor effect size estimates using conditional odds ratios. We show how statistical learning machines for binary outcomes, provably consistent for the nonparametric regression problem, can be used to provide both consistent conditional probability estimation and conditional effect size estimates. Effect size estimates from learning machines leverage our understanding of counterfactual arguments central to the interpretation of such estimates. We show that, if the data generating model is logistic, we can recover accurate probability predictions and effect size estimates with nearly the same efficiency as a correct logistic model, both for main effects and interactions. We also propose a method using learning machines to scan for possible interaction effects quickly and efficiently. Simulations using random forest probability machines are presented. The models we propose make no assumptions about the data structure, and capture the patterns in the data by just specifying the predictors involved and not any particular model structure. So they do not run the same risks of model mis-specification and the resultant estimation biases as a logistic model. This methodology, which we call a "risk machine", will share properties with the statistical machine from which it is derived.
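
    As a concrete illustration of the idea (not the authors' code), the sketch below uses a random forest as a probability machine to estimate conditional risks and then forms counterfactual effect-size estimates for a binary predictor; the data-generating model and all settings are invented.

    ```python
    # Illustrative "probability machine" sketch: conditional probabilities and a
    # counterfactual effect size from a random forest on simulated logistic data.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    rng = np.random.default_rng(1)
    n = 5000
    x = rng.binomial(1, 0.5, size=n)                  # binary predictor of interest
    z = rng.normal(size=n)                            # continuous covariate
    y = rng.binomial(1, 1 / (1 + np.exp(-(-0.5 + 1.0 * x + 0.8 * z))))

    X = np.column_stack([x, z])
    rf = RandomForestClassifier(n_estimators=500, min_samples_leaf=25, random_state=0)
    rf.fit(X, y)

    # Counterfactual predictions: everyone with x = 1, then everyone with x = 0
    X1, X0 = X.copy(), X.copy()
    X1[:, 0], X0[:, 0] = 1, 0
    p1 = np.clip(rf.predict_proba(X1)[:, 1], 0.01, 0.99)
    p0 = np.clip(rf.predict_proba(X0)[:, 1], 0.01, 0.99)

    print("average risk difference:", round(float(np.mean(p1 - p0)), 3))
    print("average odds ratio:", round(float(np.mean((p1 / (1 - p1)) / (p0 / (1 - p0)))), 3))
    ```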

  19. Regional Regression Equations to Estimate Flow-Duration Statistics at Ungaged Stream Sites in Connecticut

    USGS Publications Warehouse

    Ahearn, Elizabeth A.

    2010-01-01

    Multiple linear regression equations for determining flow-duration statistics were developed to estimate select flow exceedances ranging from 25- to 99-percent for six 'bioperiods'-Salmonid Spawning (November), Overwinter (December-February), Habitat Forming (March-April), Clupeid Spawning (May), Resident Spawning (June), and Rearing and Growth (July-October)-in Connecticut. Regression equations also were developed to estimate the 25- and 99-percent flow exceedances without reference to a bioperiod. In total, 32 equations were developed. The predictive equations were based on regression analyses relating flow statistics from streamgages to GIS-determined basin and climatic characteristics for the drainage areas of those streamgages. Thirty-nine streamgages (and an additional 6 short-term streamgages and 28 partial-record sites for the non-bioperiod 99-percent exceedance) in Connecticut and adjacent areas of neighboring States were used in the regression analysis. Weighted least squares regression analysis was used to determine the predictive equations; weights were assigned based on record length. The basin characteristics-drainage area, percentage of area with coarse-grained stratified deposits, percentage of area with wetlands, mean monthly precipitation (November), mean seasonal precipitation (December, January, and February), and mean basin elevation-are used as explanatory variables in the equations. Standard errors of estimate of the 32 equations ranged from 10.7 to 156 percent with medians of 19.2 and 55.4 percent to predict the 25- and 99-percent exceedances, respectively. Regression equations to estimate high and median flows (25- to 75-percent exceedances) are better predictors (smaller variability of the residual values around the regression line) than the equations to estimate low flows (greater than 75-percent exceedance). The Habitat Forming (March-April) bioperiod had the smallest standard errors of estimate, ranging from 10.7 to 20.9 percent. In contrast, the Rearing and Growth (July-October) bioperiod had the largest standard errors, ranging from 30.9 to 156 percent. The adjusted coefficient of determination of the equations ranged from 77.5 to 99.4 percent with medians of 98.5 and 90.6 percent to predict the 25- and 99-percent exceedances, respectively. Descriptive information on the streamgages used in the regression, measured basin and climatic characteristics, and estimated flow-duration statistics are provided in this report. Flow-duration statistics and the 32 regression equations for estimating flow-duration statistics in Connecticut are stored on the U.S. Geological Survey World Wide Web application "StreamStats" (http://water.usgs.gov/osw/streamstats/index.html). The regression equations developed in this report can be used to produce unbiased estimates of select flow exceedances statewide.

  20. Estimating population salt intake in India using spot urine samples.

    PubMed

    Petersen, Kristina S; Johnson, Claire; Mohan, Sailesh; Rogers, Kris; Shivashankar, Roopa; Thout, Sudhir Raj; Gupta, Priti; He, Feng J; MacGregor, Graham A; Webster, Jacqui; Santos, Joseph Alvin; Krishnan, Anand; Maulik, Pallab K; Reddy, K Srinath; Gupta, Ruby; Prabhakaran, Dorairaj; Neal, Bruce

    2017-11-01

    To compare estimates of mean population salt intake in North and South India derived from spot urine samples versus 24-h urine collections. In a cross-sectional survey, participants were sampled from slum, urban and rural communities in North and in South India. Participants provided 24-h urine collections and random morning spot urine samples. Salt intake was estimated from the spot urine samples using a series of established estimating equations. Salt intake data from the 24-h urine collections and spot urine equations were weighted to provide estimates of salt intake for Delhi and Haryana, and Andhra Pradesh. A total of 957 individuals provided a complete 24-h urine collection and a spot urine sample. Weighted mean salt intake based on the 24-h urine collection was 8.59 (95% confidence interval 7.73-9.45) and 9.46 g/day (8.95-9.96) in Delhi and Haryana, and Andhra Pradesh, respectively. Corresponding estimates based on the Tanaka equation [9.04 (8.63-9.45) and 9.79 g/day (9.62-9.96) for Delhi and Haryana, and Andhra Pradesh, respectively], the Mage equation [8.80 (7.67-9.94) and 10.19 g/day (95% CI 9.59-10.79)], the INTERSALT equation [7.99 (7.61-8.37) and 8.64 g/day (8.04-9.23)] and the INTERSALT equation with potassium [8.13 (7.74-8.52) and 8.81 g/day (8.16-9.46)] were all within 1 g/day of the estimate based upon 24-h collections. For the Toft equation, estimates were 1-2 g/day higher [9.94 (9.24-10.64) and 10.69 g/day (9.44-11.93)] and for the Kawasaki equation they were 3-4 g/day higher [12.14 (11.30-12.97) and 13.64 g/day (13.15-14.12)]. In urban and rural areas in North and South India, most spot urine-based equations provided reasonable estimates of mean population salt intake. Equations that did not provide good estimates may have failed because specimen collection was not aligned with the original method.

  1. The use of bioelectrical impedance analysis to estimate total body water in young children with cerebral palsy.

    PubMed

    Bell, Kristie L; Boyd, Roslyn N; Walker, Jacqueline L; Stevenson, Richard D; Davies, Peter S W

    2013-08-01

    Body composition assessment is an essential component of nutritional evaluation in children with cerebral palsy. This study aimed to validate bioelectrical impedance to estimate total body water in young children with cerebral palsy and to determine the best electrode placement in unilateral impairment. 55 young children with cerebral palsy across all functional ability levels were included. Height/length was measured or estimated from knee height. Total body water was estimated using a Bodystat 1500MDD and three equations, and measured using the gold standard, deuterium dilution technique. Comparisons were made using Bland-Altman analysis. For children with bilateral impairment, the Fjeld equation estimated total body water with the least bias (limits of agreement): 0.0 L (-1.4 L to 1.5 L); the Pencharz equation produced the greatest: 2.7 L (0.6 L-4.8 L). For children with unilateral impairment, differences between measured and estimated total body water were lowest on the unimpaired side using the Fjeld equation (0.1 L (-1.5 L to 1.6 L)) and greatest for the Pencharz equation. The ability of bioelectrical impedance to estimate total body water depends on the equation chosen. The Fjeld equation was the most accurate for the group; however, individual results varied by up to 18%. A population-specific equation was developed and may enhance the accuracy of estimates. Australian New Zealand Clinical Trials Registry (ANZCTR) number: ACTRN12611000616976. Copyright © 2012 Elsevier Ltd and European Society for Clinical Nutrition and Metabolism. All rights reserved.

  2. On the Usefulness of a Multilevel Logistic Regression Approach to Person-Fit Analysis

    ERIC Educational Resources Information Center

    Conijn, Judith M.; Emons, Wilco H. M.; van Assen, Marcel A. L. M.; Sijtsma, Klaas

    2011-01-01

    The logistic person response function (PRF) models the probability of a correct response as a function of the item locations. Reise (2000) proposed to use the slope parameter of the logistic PRF as a person-fit measure. He reformulated the logistic PRF model as a multilevel logistic regression model and estimated the PRF parameters from this…

  3. How the 2SLS/IV estimator can handle equality constraints in structural equation models: a system-of-equations approach.

    PubMed

    Nestler, Steffen

    2014-05-01

    Parameters in structural equation models are typically estimated using the maximum likelihood (ML) approach. Bollen (1996) proposed an alternative non-iterative, equation-by-equation estimator that uses instrumental variables. Although this two-stage least squares/instrumental variables (2SLS/IV) estimator has good statistical properties, one problem with its application is that parameter equality constraints cannot be imposed. This paper presents a mathematical solution to this problem that is based on an extension of the 2SLS/IV approach to a system of equations. We present an example in which our approach was used to examine strong longitudinal measurement invariance. We also investigated the new approach in a simulation study that compared it with ML in the examination of the equality of two latent regression coefficients and strong measurement invariance. Overall, the results show that the suggested approach is a useful extension of the original 2SLS/IV estimator and allows for the effective handling of equality constraints in structural equation models. © 2013 The British Psychological Society.

  4. 48 CFR 715.370-1 - Title XII selection procedure-general.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... essential, a statement of work, estimate of personnel requirements, special requirements (logistic support... statement of work, an estimate of the personnel required, and special provisions (such as logistic support, government furnished equipment, and so forth), a proposed contract format, and evaluation criteria. No cost...

  5. Examination of environmentally friendly "green" logistics behavior of managers in the pharmaceutical sector using the Theory of Planned Behavior.

    PubMed

    Arslan, Miray; Şar, Sevgi

    2017-12-11

    Logistics activities play a prominent role in enabling manufacturers, distribution channels, and pharmacies to work in harmony. These activities have become increasingly prominent in the pharmaceutical industry and are seen as a development area for this sector. Additionally, green practices are becoming more attractive, particularly for decreasing costs and improving the image of pharmaceutical companies. The main objective of this study was to model the green logistics (GL) behavior of managers in the pharmaceutical sector within the theory of planned behavior (TPB) framework via structural equation modeling (SEM). A measurement tool was developed according to the TPB. Exploratory factor analysis was conducted to determine subfactors of GL behavior. In the second step, confirmatory factor analysis (CFA) was conducted to confirm whether there is a relationship between the observed variables and their underlying latent constructs. Finally, a structural equation model was fitted to specify the relationships between the latent variables. In the proposed green logistics behavior (GLB) model, the positive effects of environmental attitude towards GL, perceived behavioral control related to GL, and subjective norm about GL on intention towards GL were found to be statistically significant. Nevertheless, the effect of attitude towards the costs of GL on intention towards GL was not statistically significant. Intention towards GL was found to have a positive, statistically significant effect on GL behavior. Based on the results of this study, TPB is an appropriate theory for modeling the green logistics behavior of managers. The model can serve as a guide for companies in the pharmaceutical sector seeking to participate in green logistics. Copyright © 2017 Elsevier Inc. All rights reserved.

  6. Item Response Theory Equating Using Bayesian Informative Priors.

    ERIC Educational Resources Information Center

    de la Torre, Jimmy; Patz, Richard J.

    This paper seeks to extend the application of Markov chain Monte Carlo (MCMC) methods in item response theory (IRT) to include the estimation of equating relationships along with the estimation of test item parameters. A method is proposed that incorporates estimation of the equating relationship in the item calibration phase. Item parameters from…

  7. Mean annual runoff and peak flow estimates based on channel geometry of streams in northeastern and western Montana

    USGS Publications Warehouse

    Parrett, Charles; Omang, R.J.; Hull, J.A.

    1983-01-01

    Equations for estimating mean annual runoff and peak discharge from measurements of channel geometry were developed for western and northeastern Montana. The study area was divided into two regions for the mean annual runoff analysis, and separate multiple-regression equations were developed for each region. The active-channel width was determined to be the most important independent variable in each region. The standard error of estimate for the estimating equation using active-channel width was 61 percent in the Northeast Region and 38 percent in the West region. The study area was divided into six regions for the peak discharge analysis, and multiple regression equations relating channel geometry and basin characteristics to peak discharges having recurrence intervals of 2, 5, 10, 25, 50 and 100 years were developed for each region. The standard errors of estimate for the regression equations using only channel width as an independent variable ranged from 35 to 105 percent. The standard errors improved in four regions as basin characteristics were added to the estimating equations. (USGS)

  8. Methods for estimating the magnitude and frequency of peak streamflows for unregulated streams in Oklahoma

    USGS Publications Warehouse

    Lewis, Jason M.

    2010-01-01

    Peak-streamflow regression equations were determined for estimating flows with exceedance probabilities from 50 to 0.2 percent for the state of Oklahoma. These regression equations incorporate basin characteristics to estimate peak-streamflow magnitude and frequency throughout the state by use of a generalized least squares regression analysis. The most statistically significant independent variables required to estimate peak-streamflow magnitude and frequency for unregulated streams in Oklahoma are contributing drainage area, mean-annual precipitation, and main-channel slope. The regression equations are applicable to watershed basins with drainage areas less than 2,510 square miles that are not affected by regulation. The resulting regression equations had a standard model error ranging from 31 to 46 percent. Annual-maximum peak flows observed at 231 streamflow-gaging stations through water year 2008 were used for the regression analysis. Gage peak-streamflow estimates were used from previous work unless 2008 gaging-station data were available, in which case new peak-streamflow estimates were calculated. The U.S. Geological Survey StreamStats web application was used to obtain the independent variables required for the peak-streamflow regression equations. Limitations on the use of the regression equations and the reliability of regression estimates for natural unregulated streams are described. Log-Pearson Type III analysis information, basin and climate characteristics, and the peak-streamflow frequency estimates for the 231 gaging stations in and near Oklahoma are listed. Methodologies are presented to estimate peak streamflows at ungaged sites by using estimates from gaging stations on unregulated streams. For ungaged sites on urban streams and streams regulated by small floodwater retarding structures, an adjustment of the statewide regression equations for natural unregulated streams can be used to estimate peak-streamflow magnitude and frequency.

  9. The National Streamflow Statistics Program: A Computer Program for Estimating Streamflow Statistics for Ungaged Sites

    USGS Publications Warehouse

    Ries(compiler), Kernell G.; With sections by Atkins, J. B.; Hummel, P.R.; Gray, Matthew J.; Dusenbury, R.; Jennings, M.E.; Kirby, W.H.; Riggs, H.C.; Sauer, V.B.; Thomas, W.O.

    2007-01-01

    The National Streamflow Statistics (NSS) Program is a computer program that should be useful to engineers, hydrologists, and others for planning, management, and design applications. NSS compiles all current U.S. Geological Survey (USGS) regional regression equations for estimating streamflow statistics at ungaged sites in an easy-to-use interface that operates on computers with Microsoft Windows operating systems. NSS expands on the functionality of the USGS National Flood Frequency Program, and replaces it. The regression equations included in NSS are used to transfer streamflow statistics from gaged to ungaged sites through the use of watershed and climatic characteristics as explanatory or predictor variables. Generally, the equations were developed on a statewide or metropolitan-area basis as part of cooperative study programs. Equations are available for estimating rural and urban flood-frequency statistics, such as the 100-year flood, for every state, for Puerto Rico, and for the island of Tutuila, American Samoa. Equations are available for estimating other statistics, such as the mean annual flow, monthly mean flows, flow-duration percentiles, and low-flow frequencies (such as the 7-day, 10-year low flow) for less than half of the states. All equations available for estimating streamflow statistics other than flood-frequency statistics assume rural (non-regulated, non-urbanized) conditions. The NSS output provides indicators of the accuracy of the estimated streamflow statistics. The indicators may include any combination of the standard error of estimate, the standard error of prediction, the equivalent years of record, or 90 percent prediction intervals, depending on what was provided by the authors of the equations. The program includes several other features that can be used only for flood-frequency estimation. These include the ability to generate flood-frequency plots, and plots of typical flood hydrographs for selected recurrence intervals, estimates of the probable maximum flood, extrapolation of the 500-year flood when an equation for estimating it is not available, and weighting techniques to improve flood-frequency estimates for gaging stations and ungaged sites on gaged streams. This report describes the regionalization techniques used to develop the equations in NSS and provides guidance on the applicability and limitations of the techniques. The report also includes a users manual and a summary of equations available for estimating basin lagtime, which is needed by the program to generate flood hydrographs. The NSS software and accompanying database, and the documentation for the regression equations included in NSS, are available on the Web at http://water.usgs.gov/software/.

  10. Bayesian parameter estimation for nonlinear modelling of biological pathways.

    PubMed

    Ghasemi, Omid; Lindsey, Merry L; Yang, Tianyi; Nguyen, Nguyen; Huang, Yufei; Jin, Yu-Fang

    2011-01-01

    The availability of temporal measurements on biological experiments has significantly promoted research areas in systems biology. To gain insight into the interaction and regulation of biological systems, mathematical frameworks such as ordinary differential equations have been widely applied to model biological pathways and interpret the temporal data. Hill equations are the preferred formats to represent the reaction rate in differential equation frameworks, due to their simple structures and their capabilities for easy fitting to saturated experimental measurements. However, Hill equations are highly nonlinearly parameterized functions, and parameters in these functions cannot be measured easily. Additionally, because of its high nonlinearity, adaptive parameter estimation algorithms developed for linear parameterized differential equations cannot be applied. Therefore, parameter estimation in nonlinearly parameterized differential equation models for biological pathways is both challenging and rewarding. In this study, we propose a Bayesian parameter estimation algorithm to estimate parameters in nonlinear mathematical models for biological pathways using time series data. We used the Runge-Kutta method to transform differential equations to difference equations assuming a known structure of the differential equations. This transformation allowed us to generate predictions dependent on previous states and to apply a Bayesian approach, namely, the Markov chain Monte Carlo (MCMC) method. We applied this approach to the biological pathways involved in the left ventricle (LV) response to myocardial infarction (MI) and verified our algorithm by estimating two parameters in a Hill equation embedded in the nonlinear model. We further evaluated our estimation performance with different parameter settings and signal to noise ratios. Our results demonstrated the effectiveness of the algorithm for both linearly and nonlinearly parameterized dynamic systems. Our proposed Bayesian algorithm successfully estimated parameters in nonlinear mathematical models for biological pathways. This method can be further extended to high order systems and thus provides a useful tool to analyze biological dynamics and extract information using temporal data.
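
    The workflow described above can be sketched in a few dozen lines. The toy model, priors, and proposal settings below are ours (a single Hill term driven by a ramping input, with first-order decay), not the authors' left-ventricle model; the sketch only illustrates RK4 discretization of the differential equation followed by random-walk Metropolis sampling of two Hill parameters.

    ```python
    # Illustrative sketch: estimate Hill parameters (K, n) in a toy ODE by
    # discretizing with fixed-step RK4 and sampling with random-walk Metropolis.
    import numpy as np

    def hill_rate(x, t, K, n, vmax=1.0, deg=0.2):
        """dx/dt: Hill-type activation by a ramping input u(t) = t, first-order decay."""
        u = t
        return vmax * u**n / (K**n + u**n) - deg * x

    def simulate(K, n, x0=0.0, dt=0.1, steps=100):
        xs, x = [x0], x0
        for i in range(steps):                          # classical RK4 step
            t = i * dt
            k1 = hill_rate(x, t, K, n)
            k2 = hill_rate(x + 0.5 * dt * k1, t + 0.5 * dt, K, n)
            k3 = hill_rate(x + 0.5 * dt * k2, t + 0.5 * dt, K, n)
            k4 = hill_rate(x + dt * k3, t + dt, K, n)
            x = x + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6
            xs.append(x)
        return np.array(xs)

    rng = np.random.default_rng(2)
    data = simulate(K=0.5, n=2.0) + rng.normal(scale=0.05, size=101)   # noisy series

    def log_post(theta, sigma=0.05):
        K, n = theta
        if K <= 0 or n <= 0:                            # flat priors on (0, inf)
            return -np.inf
        resid = data - simulate(K, n)
        return -0.5 * np.sum(resid**2) / sigma**2

    theta = np.array([1.0, 1.0])
    lp = log_post(theta)
    chain = []
    for _ in range(5000):                               # random-walk Metropolis
        prop = theta + rng.normal(scale=0.05, size=2)
        lp_prop = log_post(prop)
        if np.log(rng.uniform()) < lp_prop - lp:
            theta, lp = prop, lp_prop
        chain.append(theta.copy())

    print("posterior means (K, n):", np.round(np.mean(chain[2500:], axis=0), 2))
    ```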

  11. Validity of Bioelectrical Impedance Analysis to Estimation Fat-Free Mass in the Army Cadets.

    PubMed

    Langer, Raquel D; Borges, Juliano H; Pascoa, Mauro A; Cirolini, Vagner X; Guerra-Júnior, Gil; Gonçalves, Ezequiel M

    2016-03-11

    Bioelectrical Impedance Analysis (BIA) is a fast, practical, non-invasive, and frequently used method for fat-free mass (FFM) estimation. The aims of this study were to validate predictive BIA equations for FFM estimation in Army cadets and to develop and validate a specific BIA equation for this population. A total of 396 males, Brazilian Army cadets, aged 17-24 years were included. The study used eight published predictive BIA equations and a specific equation for FFM estimation, with dual-energy X-ray absorptiometry (DXA) as a reference method. Student's t-test (for paired samples), linear regression analysis, and the Bland-Altman method were used to test the validity of the BIA equations. Predictive BIA equations showed significant differences in FFM compared to DXA (p < 0.05) and large limits of agreement by Bland-Altman. Predictive BIA equations explained 68% to 88% of FFM variance. The specific BIA equations showed no significant differences in FFM compared to DXA values. Published BIA predictive equations showed poor accuracy in this sample. The specific BIA equations, developed in this study, demonstrated validity for this sample, although they should be used with caution in samples with a large range of FFM.
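
    The agreement analysis used above is straightforward to reproduce. The sketch below (with made-up numbers, not the study's data) computes a Bland-Altman bias and limits of agreement for FFM predicted by a BIA equation against a DXA reference.

    ```python
    # Illustrative Bland-Altman agreement check for BIA-predicted vs DXA fat-free mass.
    import numpy as np

    rng = np.random.default_rng(3)
    ffm_dxa = rng.normal(55.0, 6.0, size=100)              # reference method, kg
    ffm_bia = ffm_dxa + rng.normal(1.0, 2.0, size=100)     # hypothetical BIA prediction, kg

    diff = ffm_bia - ffm_dxa
    bias = diff.mean()
    half_width = 1.96 * diff.std(ddof=1)                   # 95% limits of agreement

    print(f"bias = {bias:.2f} kg, limits of agreement = "
          f"[{bias - half_width:.2f}, {bias + half_width:.2f}] kg")
    ```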

  12. Covariate Imbalance and Adjustment for Logistic Regression Analysis of Clinical Trial Data

    PubMed Central

    Ciolino, Jody D.; Martin, Reneé H.; Zhao, Wenle; Jauch, Edward C.; Hill, Michael D.; Palesch, Yuko Y.

    2014-01-01

    In logistic regression analysis for binary clinical trial data, adjusted treatment effect estimates are often not equivalent to unadjusted estimates in the presence of influential covariates. This paper uses simulation to quantify the benefit of covariate adjustment in logistic regression. However, International Conference on Harmonization guidelines suggest that covariate adjustment be pre-specified. Unplanned adjusted analyses should be considered secondary. Results suggest that if adjustment is not possible or unplanned in a logistic setting, balance in continuous covariates can alleviate some (but never all) of the shortcomings of unadjusted analyses. The case of log binomial regression is also explored. PMID:24138438
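
    A minimal version of the simulation contrast described above (our own toy setup, not the paper's design) looks like this:

    ```python
    # Illustrative sketch: unadjusted vs covariate-adjusted logistic estimates of a
    # randomized treatment effect; the unadjusted log-OR is attenuated toward zero.
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(4)
    reps, n, true_logor = 200, 400, 0.7
    unadj, adj = [], []
    for _ in range(reps):
        trt = rng.binomial(1, 0.5, size=n)                 # randomized treatment
        z = rng.normal(size=n)                             # influential covariate
        y = rng.binomial(1, 1 / (1 + np.exp(-(-0.5 + true_logor * trt + 1.0 * z))))
        unadj.append(sm.Logit(y, sm.add_constant(trt)).fit(disp=0).params[1])
        adj.append(sm.Logit(y, sm.add_constant(np.column_stack([trt, z]))).fit(disp=0).params[1])

    print("mean unadjusted log-OR:", round(float(np.mean(unadj)), 3))   # < 0.7
    print("mean adjusted log-OR:  ", round(float(np.mean(adj)), 3))     # ~ 0.7
    ```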

  13. Age Estimation of Infants Through Metric Analysis of Developing Anterior Deciduous Teeth.

    PubMed

    Viciano, Joan; De Luca, Stefano; Irurita, Javier; Alemán, Inmaculada

    2018-01-01

    This study provides regression equations for estimation of age of infants from the dimensions of their developing deciduous teeth. The sample comprises 97 individuals of known sex and age (62 boys, 35 girls), aged between 2 days and 1,081 days. The age-estimation equations were obtained for the sexes combined, as well as for each sex separately, thus including "sex" as an independent variable. The values of the correlations and determination coefficients obtained for each regression equation indicate good fits for most of the equations obtained. The "sex" factor was statistically significant when included as an independent variable in seven of the regression equations. However, the "sex" factor provided an advantage for age estimation in only three of the equations, compared to those that did not include "sex" as a factor. These data suggest that the ages of infants can be accurately estimated from measurements of their developing deciduous teeth. © 2017 American Academy of Forensic Sciences.

  14. Kinetic compensation effect in logistic distributed activation energy model for lignocellulosic biomass pyrolysis.

    PubMed

    Xu, Di; Chai, Meiyun; Dong, Zhujun; Rahman, Md Maksudur; Yu, Xi; Cai, Junmeng

    2018-06-04

    The kinetic compensation effect in the logistic distributed activation energy model (DAEM) for lignocellulosic biomass pyrolysis was investigated. The sum of square error (SSE) surface tool was used to analyze two theoretically simulated logistic DAEM processes for cellulose and xylan pyrolysis. The logistic DAEM coupled with the pattern search method for parameter estimation was used to analyze the experimental data of cellulose pyrolysis. The results showed that many parameter sets of the logistic DAEM could fit the data at different heating rates very well for both simulated and experimental processes, and a perfect linear relationship between the logarithm of the frequency factor and the mean value of the activation energy distribution was found. The parameters of the logistic DAEM can be estimated by coupling the optimization method and isoconversional kinetic methods. The results would be helpful for chemical kinetic analysis using DAEM. Copyright © 2018 Elsevier Ltd. All rights reserved.
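
    For reference, a hedged sketch of the model form involved (the standard constant-heating-rate DAEM with a logistic activation-energy density; notation is ours) is:

    ```latex
    % DAEM conversion integral at heating rate \beta with a logistic f(E);
    % the compensation effect is a near-linear relation between \ln A and \eta.
    \[
      1-\alpha(T)=\int_{0}^{\infty}
        \exp\!\left(-\frac{A}{\beta}\int_{T_0}^{T} e^{-E/(R T')}\,dT'\right) f(E)\,dE,
      \qquad
      f(E)=\frac{e^{-(E-\eta)/s}}{s\left(1+e^{-(E-\eta)/s}\right)^{2}} .
    \]
    ```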

  15. Perceived everyday racism, residential segregation, and HIV testing among patients at a sexually transmitted disease clinic.

    PubMed

    Ford, Chandra L; Daniel, Mark; Earp, Jo Anne L; Kaufman, Jay S; Golin, Carol E; Miller, William C

    2009-04-01

    More than one quarter of HIV-infected people are undiagnosed and therefore unaware of their HIV-positive status. Blacks are disproportionately infected. Although perceived racism influences their attitudes toward HIV prevention, how racism influences their behaviors is unknown. We sought to determine whether perceiving everyday racism and racial segregation influence Black HIV testing behavior. This was a clinic-based, multilevel study in a North Carolina city. Eligibility was limited to Blacks (N = 373) seeking sexually transmitted disease diagnosis or screening. We collected survey data, block group characteristics, and lab-confirmed HIV testing behavior. We estimated associations using logistic regression with generalized estimating equations. More than 90% of the sample perceived racism, which was associated with higher odds of HIV testing (odds ratio = 1.64; 95% confidence interval = 1.07, 2.52), after control for residential segregation, and other covariates. Neither patient satisfaction nor mechanisms for coping with stress explained the association. Perceiving everyday racism is not inherently detrimental. Perceived racism may improve odds of early detection of HIV infection in this high-risk population. How segregation influences HIV testing behavior warrants further research.
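
    The estimation strategy named above (a logistic marginal model fitted with generalized estimating equations to account for clustering by neighborhood) can be sketched with statsmodels; the data, variable names, and coefficients below are invented.

    ```python
    # Illustrative GEE logistic regression with an exchangeable working correlation
    # for clustered binary outcomes (simulated data; variable names are made up).
    import numpy as np
    import pandas as pd
    import statsmodels.api as sm
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(5)
    n_clusters, m = 60, 8
    cluster = np.repeat(np.arange(n_clusters), m)
    u = np.repeat(rng.normal(scale=0.5, size=n_clusters), m)    # shared cluster effect
    racism = rng.binomial(1, 0.9, size=n_clusters * m)          # perceived-racism indicator
    tested = rng.binomial(1, 1 / (1 + np.exp(-(-0.5 + 0.5 * racism + u))))

    df = pd.DataFrame({"tested": tested, "racism": racism, "cluster": cluster})
    fit = smf.gee("tested ~ racism", groups="cluster", data=df,
                  family=sm.families.Binomial(),
                  cov_struct=sm.cov_struct.Exchangeable()).fit()
    print("marginal odds ratio:", round(float(np.exp(fit.params["racism"])), 2))
    ```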

  16. Alternative Regression Equations for Estimation of Annual Peak-Streamflow Frequency for Undeveloped Watersheds in Texas using PRESS Minimization

    USGS Publications Warehouse

    Asquith, William H.; Thompson, David B.

    2008-01-01

    The U.S. Geological Survey, in cooperation with the Texas Department of Transportation and in partnership with Texas Tech University, investigated a refinement of the regional regression method and developed alternative equations for estimation of peak-streamflow frequency for undeveloped watersheds in Texas. A common model for estimation of peak-streamflow frequency is based on the regional regression method. The current (2008) regional regression equations for 11 regions of Texas are based on log10 transformations of all regression variables (drainage area, main-channel slope, and watershed shape). Exclusive use of log10 transformation does not fully linearize the relations between the variables. As a result, some systematic bias remains in the current equations. The bias results in overestimation of peak streamflow for both the smallest and largest watersheds. The bias increases with increasing recurrence interval. The primary source of the bias is the discernible curvilinear relation in log10 space between peak streamflow and drainage area. Bias is demonstrated by selected residual plots with superimposed LOWESS trend lines. To address the bias, a statistical framework based on minimization of the PRESS statistic through power transformation of drainage area is described and implemented, and the resulting regression equations are reported. The equations derived from PRESS minimization have smaller PRESS statistics and residual standard errors than the log10-exclusive equations. Selected residual plots for the PRESS-minimized equations are presented to demonstrate that systematic bias in regional regression equations for peak-streamflow frequency estimation in Texas can be reduced. Because the overall error is similar to the error associated with previous equations and because the bias is reduced, the PRESS-minimized equations reported here provide alternative equations for peak-streamflow frequency estimation.
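
    The core computation (a leave-one-out PRESS statistic from the hat matrix, scanned over candidate power transformations of drainage area) is compact; the sketch below is our own illustration on simulated data, not the report's procedure or data.

    ```python
    # Illustrative PRESS minimization: choose the power transform of drainage area
    # that minimizes the leave-one-out prediction error sum of squares.
    import numpy as np

    def press(X, y):
        """PRESS via the hat matrix: sum of (e_i / (1 - h_ii))^2."""
        H = X @ np.linalg.inv(X.T @ X) @ X.T
        resid = y - H @ y
        return float(np.sum((resid / (1.0 - np.diag(H))) ** 2))

    rng = np.random.default_rng(6)
    area = rng.lognormal(mean=4.0, sigma=1.5, size=150)                # drainage area
    log_q = 1.0 + 0.8 * area**0.3 + rng.normal(scale=0.3, size=150)    # log10 peak flow

    lams = np.arange(0.05, 1.001, 0.05)
    scores = [press(np.column_stack([np.ones_like(area), area**lam]), log_q) for lam in lams]
    best = lams[int(np.argmin(scores))]
    print("power transform minimizing PRESS:", round(float(best), 2))  # near 0.3 here
    ```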

  17. Estimating equations for glomerular filtration rate in the era of creatinine standardization: a systematic review.

    PubMed

    Earley, Amy; Miskulin, Dana; Lamb, Edmund J; Levey, Andrew S; Uhlig, Katrin

    2012-06-05

    Clinical laboratories are increasingly reporting estimated glomerular filtration rate (GFR) by using serum creatinine assays traceable to a standard reference material. To review the performance of GFR estimating equations to inform the selection of a single equation by laboratories and the interpretation of estimated GFR by clinicians. A systematic search of MEDLINE, without language restriction, between 1999 and 21 October 2011. Cross-sectional studies in adults that compared the performance of 2 or more creatinine-based GFR estimating equations with a reference GFR measurement. Eligible equations were derived or reexpressed and validated by using creatinine measurements traceable to the standard reference material. Reviewers extracted data on study population characteristics, measured GFR, creatinine assay, and equation performance. Eligible studies compared the MDRD (Modification of Diet in Renal Disease) Study and CKD-EPI (Chronic Kidney Disease Epidemiology Collaboration) equations or modifications thereof. In 12 studies in North America, Europe, and Australia, the CKD-EPI equation performed better at higher GFRs (approximately >60 mL/min per 1.73 m(2)) and the MDRD Study equation performed better at lower GFRs. In 5 of 8 studies in Asia and Africa, the equations were modified to improve their performance by adding a coefficient derived in the local population or removing a coefficient. Methods of GFR measurement and study populations were heterogeneous. Neither the CKD-EPI nor the MDRD Study equation is optimal for all populations and GFR ranges. Using a single equation for reporting requires a tradeoff to optimize performance at either higher or lower GFR ranges. A general practice and public health perspective favors the CKD-EPI equation. Kidney Disease: Improving Global Outcomes.
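
    For orientation, the two equations compared in the review can be coded as below. The coefficients shown are those commonly published for IDMS-traceable serum creatinine in mg/dL; treat them as indicative only and verify against the primary references before any use.

    ```python
    # Sketch of the MDRD Study and CKD-EPI (2009) creatinine equations, with
    # commonly published coefficients (verify against the primary sources).
    def egfr_mdrd(scr_mg_dl, age, female, black):
        """4-variable MDRD Study equation, mL/min/1.73 m^2."""
        gfr = 175.0 * scr_mg_dl**-1.154 * age**-0.203
        if female:
            gfr *= 0.742
        if black:
            gfr *= 1.212
        return gfr

    def egfr_ckd_epi_2009(scr_mg_dl, age, female, black):
        """CKD-EPI 2009 creatinine equation, mL/min/1.73 m^2."""
        kappa = 0.7 if female else 0.9
        alpha = -0.329 if female else -0.411
        ratio = scr_mg_dl / kappa
        gfr = 141.0 * min(ratio, 1.0)**alpha * max(ratio, 1.0)**-1.209 * 0.993**age
        if female:
            gfr *= 1.018
        if black:
            gfr *= 1.159
        return gfr

    print(round(egfr_mdrd(1.1, 60, female=True, black=False), 1))
    print(round(egfr_ckd_epi_2009(1.1, 60, female=True, black=False), 1))
    ```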

  18. A Comparison of Four Linear Equating Methods for the Common-Item Nonequivalent Groups Design Using Simulation Methods. ACT Research Report Series, 2013 (2)

    ERIC Educational Resources Information Center

    Topczewski, Anna; Cui, Zhongmin; Woodruff, David; Chen, Hanwei; Fang, Yu

    2013-01-01

    This paper investigates four methods of linear equating under the common item nonequivalent groups design. Three of the methods are well known: Tucker, Angoff-Levine, and Congeneric-Levine. A fourth method is presented as a variant of the Congeneric-Levine method. Using simulation data generated from the three-parameter logistic IRT model we…
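
    All four methods are specializations of the same linear observed-score equating form, differing mainly in how the synthetic-population means and standard deviations are obtained from the anchor test; a generic sketch (ours, not the report's code) is:

    ```python
    # Generic linear observed-score equating: l_Y(x) = mu_Y + (sd_Y / sd_X) * (x - mu_X).
    import numpy as np

    def linear_equate(x, mean_x, sd_x, mean_y, sd_y):
        """Map Form X scores onto the Form Y scale."""
        return mean_y + (sd_y / sd_x) * (np.asarray(x, dtype=float) - mean_x)

    # Hypothetical synthetic-population moments; Tucker- and Levine-type methods
    # differ in how these four quantities are estimated.
    print(round(float(linear_equate(25, mean_x=22.0, sd_x=5.0, mean_y=24.0, sd_y=5.5)), 2))  # 27.3
    ```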

  19. Preliminary analysis of an integrated logistics system for OSSA payloads. Volume 1: Executive summary

    NASA Technical Reports Server (NTRS)

    Palguta, T.; Bradley, W.; Stockton, T.

    1988-01-01

    The purpose is to describe the logistics study background and approach to providing estimates of logistics support requirements for Office of Space Science and Applications' payloads in the Space Station era. A concise summary is given of the study results. Future logistics support analysis tasks are identified.

  20. The fifth-order partial differential equation for the description of the α + β Fermi-Pasta-Ulam model

    NASA Astrophysics Data System (ADS)

    Kudryashov, Nikolay A.; Volkov, Alexandr K.

    2017-01-01

    We study a new nonlinear partial differential equation of the fifth order for the description of perturbations in the Fermi-Pasta-Ulam mass chain. This fifth-order equation is an expansion of the Gardner equation for the description of the Fermi-Pasta-Ulam model. We use the potential of interaction between neighbouring masses with both quadratic and cubic terms. The equation is derived using the continuous limit. Unlike previous works, we take into account higher order terms in the Taylor series expansions. We investigate the equation using the Painlevé approach. We show that the equation does not pass the Painlevé test and cannot be integrated by the inverse scattering transform. We use the logistic function method and the Laurent expansion method to find travelling wave solutions of the fifth-order equation. We use the pseudospectral method for the numerical simulation of wave processes described by the equation.

  1. Estimating Slash Quantity from Standing Loblolly Pine

    Treesearch

    Dale D. Wade

    1969-01-01

    No significant differences were found between variances of two prediction equations for estimating loblolly pine crown weight from diameter at breast height (d.b.h.). One equation was developed from trees on the Georgia Piedmont and the other from trees on the South Carolina Coastal Plain. An equation and table are presented for estimating loblolly pine slash weights from...

  2. Parameter Estimates in Differential Equation Models for Chemical Kinetics

    ERIC Educational Resources Information Center

    Winkel, Brian

    2011-01-01

    We discuss the need for devoting time in differential equations courses to modelling and the completion of the modelling process with efforts to estimate the parameters in the models using data. We estimate the parameters present in several differential equation models of chemical reactions of order n, where n = 0, 1, 2, and apply more general…
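
    A small worked example in the spirit of the article (the reaction order, rate constant, and data are all invented here) fits k and n in dC/dt = -kC^n to noisy concentration measurements:

    ```python
    # Illustrative sketch: estimate the rate constant k and order n in the model
    # dC/dt = -k * C**n from noisy concentration data.
    import numpy as np
    from scipy.integrate import solve_ivp
    from scipy.optimize import least_squares

    def concentration(t, k, n, c0=1.0):
        sol = solve_ivp(lambda _t, c: -k * c**n, (t[0], t[-1]), [c0],
                        t_eval=t, rtol=1e-8)
        return sol.y[0]

    rng = np.random.default_rng(7)
    t = np.linspace(0.0, 10.0, 25)
    data = concentration(t, k=0.3, n=2.0) + rng.normal(scale=0.01, size=t.size)

    fit = least_squares(lambda p: concentration(t, *p) - data, x0=[0.1, 1.0],
                        bounds=([1e-6, 0.0], [10.0, 3.0]))
    print("estimated k, n:", np.round(fit.x, 3))
    ```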

  3. Estimating glomerular filtration rate in black South Africans by use of the modification of diet in renal disease and Cockcroft-Gault equations.

    PubMed

    van Deventer, Hendrick E; George, Jaya A; Paiker, Janice E; Becker, Piet J; Katz, Ivor J

    2008-07-01

    The 4-variable Modification of Diet in Renal Disease (4-v MDRD) and Cockcroft-Gault (CG) equations are commonly used for estimating glomerular filtration rate (GFR); however, neither of these equations has been validated in an indigenous African population. The aim of this study was to evaluate the performance of the 4-v MDRD and CG equations for estimating GFR in black South Africans against measured GFR and to assess the appropriateness for the local population of the ethnicity factor established for African Americans in the 4-v MDRD equation. We enrolled 100 patients in the study. The plasma clearance of chromium-51-EDTA ((51)Cr-EDTA) was used to measure GFR, and serum creatinine was measured using an isotope dilution mass spectrometry (IDMS) traceable assay. We estimated GFR using both the reexpressed 4-v MDRD and CG equations and compared it to measured GFR using 4 modalities: correlation coefficient, weighted Deming regression analysis, percentage bias, and proportion of estimated GFR within 30% of measured GFR (P(30)). The Spearman correlation coefficient between measured and estimated GFR for both equations was similar (4-v MDRD R(2) = 0.80 and CG R(2) = 0.79). Using the 4-v MDRD equation with the ethnicity factor of 1.212 as established for African Americans resulted in a median positive bias of 13.1 (95% CI 5.5 to 18.3) mL/min/1.73 m(2). Without the ethnicity factor, median bias was 1.9 (95% CI -0.8 to 4.5) mL/min/1.73 m(2). The 4-v MDRD equation, without the ethnicity factor of 1.212, can be used for estimating GFR in black South Africans.

  4. Comparison of anthropometric-based equations for estimation of body fat percentage in a normal-weight and overweight female cohort: validation via air-displacement plethysmography.

    PubMed

    Temple, Derry; Denis, Romain; Walsh, Marianne C; Dicker, Patrick; Byrne, Annette T

    2015-02-01

    To evaluate the accuracy of the most commonly used anthropometric-based equations in the estimation of percentage body fat (%BF) in both normal-weight and overweight women using air-displacement plethysmography (ADP) as the criterion measure. A comparative study in which the equations of Durnin and Womersley (1974; DW) and Jackson, Pollock and Ward (1980) at three, four and seven sites (JPW₃, JPW₄ and JPW₇) were validated against ADP in three groups. Group 1 included all participants, group 2 included participants with a BMI <25·0 kg/m² and group 3 included participants with a BMI ≥25·0 kg/m². Human Performance Laboratory, Institute for Sport and Health, University College Dublin, Republic of Ireland. Forty-three female participants aged between 18 and 55 years. In all three groups, the %BF values estimated from the DW equation were closer to the criterion measure (i.e. ADP) than those estimated from the other equations. Of the three JPW equations, JPW₃ provided the most accurate estimation of %BF when compared with ADP in all three groups. In comparison to ADP, these findings suggest that the DW equation is the most accurate anthropometric method for the estimation of %BF in both normal-weight and overweight females.

  5. A General Linear Method for Equating with Small Samples

    ERIC Educational Resources Information Center

    Albano, Anthony D.

    2015-01-01

    Research on equating with small samples has shown that methods with stronger assumptions and fewer statistical estimates can lead to decreased error in the estimated equating function. This article introduces a new approach to linear observed-score equating, one which provides flexible control over how form difficulty is assumed versus estimated…

  6. Assessing the Reliability of Regional Depth-Duration-Frequency Equations for Gauged and Ungauged Sites

    NASA Astrophysics Data System (ADS)

    Castellarin, A.; Montanari, A.; Brath, A.

    2002-12-01

    The study derives Regional Depth-Duration-Frequency (RDDF) equations for a wide region of northern-central Italy (37,200 km 2) by following an adaptation of the approach originally proposed by Alila [WRR, 36(7), 2000]. The proposed RDDF equations have a rather simple structure and allow an estimation of the design storm, defined as the rainfall depth expected for a given storm duration and recurrence interval, in any location of the study area for storm durations from 1 to 24 hours and for recurrence intervals up to 100 years. The reliability of the proposed RDDF equations represents the main concern of the study and it is assessed at two different levels. The first level considers the gauged sites and compares estimates of the design storm obtained with the RDDF equations with at-site estimates based upon the observed annual maximum series of rainfall depth and with design storm estimates resulting from a regional estimator recently developed for the study area through a Hierarchical Regional Approach (HRA) [Gabriele and Arnell, WRR, 27(6), 1991]. The second level performs a reliability assessment of the RDDF equations for ungauged sites by means of a jack-knife procedure. Using the HRA estimator as a reference term, the jack-knife procedure assesses the reliability of design storm estimates provided by the RDDF equations for a given location when dealing with the complete absence of pluviometric information. The results of the analysis show that the proposed RDDF equations represent practical and effective computational means for producing a first guess of the design storm at the available raingauges and reliable design storm estimates for ungauged locations. The first author gratefully acknowledges D.H. Burn for sponsoring the submission of the present abstract.

  7. Estimating glomerular filtration rate (GFR) in children. The average between a cystatin C- and a creatinine-based equation improves estimation of GFR in both children and adults and enables diagnosing Shrunken Pore Syndrome.

    PubMed

    Leion, Felicia; Hegbrant, Josefine; den Bakker, Emil; Jonsson, Magnus; Abrahamson, Magnus; Nyman, Ulf; Björk, Jonas; Lindström, Veronica; Larsson, Anders; Bökenkamp, Arend; Grubb, Anders

    2017-09-01

    Estimating glomerular filtration rate (GFR) in adults by using the average of values obtained by a cystatin C- (eGFR cystatin C ) and a creatinine-based (eGFR creatinine ) equation shows at least the same diagnostic performance as GFR estimates obtained by equations using only one of these analytes or by complex equations using both analytes. Comparison of eGFR cystatin C and eGFR creatinine plays a pivotal role in the diagnosis of Shrunken Pore Syndrome, where low eGFR cystatin C compared to eGFR creatinine has been associated with higher mortality in adults. The present study was undertaken to elucidate if this concept can also be applied in children. Using iohexol and inulin clearance as gold standard in 702 children, we studied the diagnostic performance of 10 creatinine-based, 5 cystatin C-based and 3 combined cystatin C-creatinine eGFR equations and compared them to the result of the average of 9 pairs of a eGFR cystatin C and a eGFR creatinine estimate. While creatinine-based GFR estimations are unsuitable in children unless calibrated in a pediatric or mixed pediatric-adult population, cystatin C-based estimations in general performed well in children. The average of a suitable creatinine-based and a cystatin C-based equation generally displayed a better diagnostic performance than estimates obtained by equations using only one of these analytes or by complex equations using both analytes. Comparing eGFR cystatin and eGFR creatinine may help identify pediatric patients with Shrunken Pore Syndrome.

  8. Bifurcation approach to a logistic elliptic equation with a homogeneous incoming flux boundary condition

    NASA Astrophysics Data System (ADS)

    Umezu, Kenichiro

    In this paper, we consider a semilinear elliptic boundary value problem in a smooth bounded domain, having the so-called logistic nonlinearity that originates from population dynamics, with a nonlinear boundary condition. Although the logistic nonlinearity has an absorption effect in the problem, the nonlinear boundary condition is induced by the homogeneous incoming flux on the boundary. The objective of our study is to analyze the existence of a bifurcation component of positive solutions from trivial solutions and its asymptotic behavior and stability. We perform this analysis using the method developed by Lyapunov and Schmidt, based on a scaling argument.

  9. Prostate Cancer Predictive Simulation Modelling, Assessing the Risk Technique (PCP-SMART): Introduction and Initial Clinical Efficacy Evaluation Data Presentation of a Simple Novel Mathematical Simulation Modelling Method, Devised to Predict the Outcome of Prostate Biopsy on an Individual Basis.

    PubMed

    Spyropoulos, Evangelos; Kotsiris, Dimitrios; Spyropoulos, Katherine; Panagopoulos, Aggelos; Galanakis, Ioannis; Mavrikos, Stamatios

    2017-02-01

    We developed a mathematical "prostate cancer (PCa) conditions simulating" predictive model (PCP-SMART), from which we derived a novel PCa predictor (prostate cancer risk determinator [PCRD] index) and a PCa risk equation. We used these to estimate the probability of finding PCa on prostate biopsy, on an individual basis. A total of 371 men who had undergone transrectal ultrasound-guided prostate biopsy were enrolled in the present study. Given that PCa risk relates to the total prostate-specific antigen (tPSA) level, age, prostate volume, free PSA (fPSA), fPSA/tPSA ratio, and PSA density and that tPSA ≥ 50 ng/mL has a 98.5% positive predictive value for a PCa diagnosis, we hypothesized that correlating 2 variables composed of 3 ratios (1, tPSA/age; 2, tPSA/prostate volume; and 3, fPSA/tPSA; 1 variable including the patient's tPSA and the other, a tPSA value of 50 ng/mL) could operate as a PCa conditions imitating/simulating model. Linear regression analysis was used to derive the coefficient of determination (R²), termed the PCRD index. To estimate the PCRD index's predictive validity, we used the χ² test, multiple logistic regression analysis with PCa risk equation formation, calculation of test performance characteristics, and area under the receiver operating characteristic curve analysis using SPSS, version 22 (P < .05). The biopsy findings were positive for PCa in 167 patients (45.1%) and negative in 164 (44.2%). The PCRD index was positively signed in 89.82% of positive PCa cases and negative in 91.46% of negative PCa cases (χ² test; P < .001; relative risk, 8.98). The sensitivity was 89.8%, specificity was 91.5%, positive predictive value was 91.5%, negative predictive value was 89.8%, positive likelihood ratio was 10.5, negative likelihood ratio was 0.11, and accuracy was 90.6%. Multiple logistic regression revealed the PCRD index as an independent PCa predictor, and the formulated risk equation was 91% accurate in predicting the probability of finding PCa. On the receiver operating characteristic analysis, the PCRD index (area under the curve, 0.926) significantly (P < .001) outperformed other, established PCa predictors. The PCRD index effectively predicted the prostate biopsy outcome, correctly identifying 9 of 10 men who were eventually diagnosed with PCa and correctly ruling out PCa for 9 of 10 men who did not have PCa. Its predictive power significantly outperformed established PCa predictors, and the formulated risk equation accurately calculated the probability of finding cancer on biopsy, on an individual patient basis. Copyright © 2016 Elsevier Inc. All rights reserved.

  10. Estimating value and volume of ponderosa pine trees by equations.

    Treesearch

    Martin E. Plank

    1981-01-01

    Equations for estimating the selling value and tally volume for ponderosa pine lumber from the standing trees are described. Only five characteristics are required for the equations. Development and application of the system are described.

  11. First-Order System Least Squares for the Stokes Equations, with Application to Linear Elasticity

    NASA Technical Reports Server (NTRS)

    Cai, Z.; Manteuffel, T. A.; McCormick, S. F.

    1996-01-01

    Following our earlier work on general second-order scalar equations, here we develop a least-squares functional for the two- and three-dimensional Stokes equations, generalized slightly by allowing a pressure term in the continuity equation. By introducing a velocity flux variable and associated curl and trace equations, we are able to establish ellipticity in an H¹ product norm appropriately weighted by the Reynolds number. This immediately yields optimal discretization error estimates for finite element spaces in this norm and optimal algebraic convergence estimates for multiplicative and additive multigrid methods applied to the resulting discrete systems. Both estimates are uniform in the Reynolds number. Moreover, our pressure-perturbed form of the generalized Stokes equations allows us to develop an analogous result for the Dirichlet problem for linear elasticity with estimates that are uniform in the Lamé constants.
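    For orientation, a pressure-perturbed generalized Stokes system with a velocity-flux variable U = ∇u can be written schematically as below; the precise Reynolds-number weighting and the auxiliary curl and trace equations follow the paper, so this is only a sketch of the structure, not the exact formulation.

        \[
        \mathbf{U} - \nabla\mathbf{u} = \mathbf{0}, \qquad
        -\nu\,\nabla\cdot\mathbf{U} + \nabla p = \mathbf{f}, \qquad
        \nabla\cdot\mathbf{u} + \varepsilon p = g,
        \]

    with the least-squares functional taken as the sum of squared L² norms of the residuals of these equations together with the auxiliary curl and trace equations for U.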

  12. Forecasting Lightning at Kennedy Space Center/Cape Canaveral Air Force Station, Florida

    NASA Technical Reports Server (NTRS)

    Lambert, Winfred; Wheeler, Mark; Roeder, William

    2005-01-01

    The Applied Meteorology Unit (AMU) developed a set of statistical forecast equations that provide a probability of lightning occurrence on Kennedy Space Center (KSC)/Cape Canaveral Air Force Station (CCAFS) for the day during the warm season (May-September). The 45th Weather Squadron (45 WS) forecasters at CCAFS in Florida include a probability of lightning occurrence in their daily 24-hour and weekly planning forecasts, which are briefed at 1100 UTC (0700 EDT). This information is used for general scheduling of operations at CCAFS and KSC. Forecasters at the Spaceflight Meteorology Group also make thunderstorm forecasts for the KSC/CCAFS area during Shuttle flight operations. Much of the current lightning probability forecast at both groups is based on a subjective analysis of model and observational data. The objective tool currently available is the Neumann-Pfeffer Thunderstorm Index (NPTI, Neumann 1971), developed specifically for the KSC/CCAFS area over 30 years ago. However, recent studies have shown that 1-day persistence provides a better forecast than the NPTI, indicating that the NPTI needed to be upgraded or replaced. Because they require a tool that provides a reliable estimate of the daily thunderstorm probability forecast, the 45 WS forecasters requested that the AMU develop a new lightning probability forecast tool using recent data and more sophisticated techniques now possible through more computing power than that available over 30 years ago. The equation development incorporated results from two research projects that investigated causes of lightning occurrence near KSC/CCAFS and over the Florida peninsula. One proved that logistic regression outperformed the linear regression method used in NPTI, even when the same predictors were used. The other study found relationships between large scale flow regimes and spatial lightning distributions over Florida. Lightning probabilities based on these flow regimes were used as candidate predictors in the equation development. Fifteen years (1989-2003) of warm season data were used to develop the forecast equations. The data sources included a local network of cloud-to-ground lightning sensors called the Cloud-to-Ground Lightning Surveillance System (CGLSS), 1200 UTC Florida synoptic soundings, and the 1000 UTC CCAFS sounding. Data from CGLSS were used to determine lightning occurrence for each day. The 1200 UTC soundings were used to calculate the synoptic-scale flow regimes and the 1000 UTC soundings were used to calculate local stability parameters, which were used as candidate predictors of lightning occurrence. Five logistic regression forecast equations were created through careful selection and elimination of the candidate predictors. The resulting equations contain five to six predictors each. Results from four performance tests indicated that the equations showed an increase in skill over several standard forecasting methods, good reliability, an ability to distinguish between non-lightning and lightning days, and good accuracy measures and skill scores. Given the overall good performance, the 45 WS requested that the equations be transitioned to operations and added to the current set of tools used to determine the daily lightning probability of occurrence.
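    A forecast equation of the kind described returns a daily lightning probability of the form p = 1 / (1 + exp(-(b0 + b1 x1 + ... + bk xk))). The sketch below (Python, statsmodels) fits such an equation to simulated data; the predictor names are placeholders, not the AMU's actual candidate predictors.

        # Hedged sketch: fit a daily lightning-probability equation by logistic
        # regression. Predictor names are placeholders, and the data are simulated.
        import numpy as np
        import pandas as pd
        import statsmodels.api as sm

        rng = np.random.default_rng(0)
        days = pd.DataFrame({
            "flow_regime_prob": rng.uniform(0.2, 0.7, 500),  # climatological lightning probability of the day's flow regime
            "stability_index": rng.normal(0.0, 1.0, 500),    # sounding-derived stability parameter
        })
        logit = -1.0 + 3.0 * days["flow_regime_prob"] + 0.8 * days["stability_index"]
        days["lightning"] = rng.binomial(1, 1 / (1 + np.exp(-logit)))

        X = sm.add_constant(days[["flow_regime_prob", "stability_index"]])
        model = sm.Logit(days["lightning"], X).fit(disp=False)
        print(model.params)               # fitted coefficients b0..bk
        print(model.predict(X.iloc[:5]))  # probability of lightning for the first five days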

  13. Effect of urine urea nitrogen and protein intake adjusted by using the estimated urine creatinine excretion rate on the antiproteinuric effect of angiotensin II type I receptor blockers.

    PubMed

    Chin, Ho Jun; Kim, Dong Ki; Park, Jung Hwan; Shin, Sung Joon; Lee, Sang Ho; Choi, Bum Soon; Kim, Suhnggwon; Lim, Chun Soo

    2015-01-01

    The aim of this study was to determine the role of protein intake in proteinuria in chronic kidney disease (CKD), which presently remains inconclusive. This is a subanalysis of data from an open-label, case-controlled, randomized clinical trial on education about low-salt diets (NCT01552954). We estimated the daily urine excretion rates of several parameters, adjusted by using the equation for estimating urine creatinine excretion, and analyzed the effect of urine urea nitrogen (UUN), as well as of estimated protein intake, on the level of albuminuria in hypertensive patients with chronic kidney disease. Among 174 participants from whom complete 24-h urine specimens were collected, the estimates from the Tanaka equation resulted in the highest accuracy for the urinary excretion rate of creatinine, sodium, albumin, and UUN. Among 227 participants, the baseline value of estimated urine albumin excretion (eUalb) was positively correlated with the estimated UUN (eUUN) or protein intake according to eUUN (P = 0.012 and P = 0.038, respectively). We were able to calculate the ratios of eUalb and eUUN in 221 participants and grouped them according to the ratio of eUUN during the 16-wk trial period. The proportion of patients that achieved a decrement of eUalb ≥25% during 16 wk with an angiotensin II type I receptor blocker (ARB) medication was 80% (24 of 30) in group 1, with eUUN ratio ≤-25%; 82.2% (111 of 135) in group 2, with eUUN ratio between -25% and 25%; and 66.1% (37 of 56) in group 3, with eUUN ratio ≥25% (P = 0.048). The probability of a decrease in albuminuria with ARB treatment was lower in patients with an increase of eUUN or protein intake during the 16 wk of ARB treatment, as observed in multiple logistic regression analysis as well. The estimated urine urea excretion rate showed a positive association with the level of albuminuria in hypertensive patients with chronic kidney disease. An increase in eUUN excretion attenuated the antiproteinuric effect of the ARB. Copyright © 2015 Elsevier Inc. All rights reserved.

  14. Racial/ethnic and educational differences in the estimated odds of recent nitrite use among adult household residents in the United States: an illustration of matching and conditional logistic regression.

    PubMed

    Delva, J; Spencer, M S; Lin, J K

    2000-01-01

    This article compares estimates of the relative odds of nitrite use obtained from weighted unconditional logistic regression with estimates obtained from conditional logistic regression after post-stratification and matching of cases with controls by neighborhood of residence. We illustrate these methods by comparing the odds associated with nitrite use among adults of four racial/ethnic groups, with and without a high school education. We used aggregated data from the 1994-B through 1996 National Household Survey on Drug Abuse (NHSDA). Differences between the methods and their implications for analysis and inference are discussed.
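    For readers unfamiliar with the estimator being contrasted with weighted unconditional logistic regression, the conditional logistic likelihood for a matched set (here, a case and its neighborhood-matched controls) has the standard textbook form below; this notation is illustrative and is not taken from the article.

        \[
        L(\beta) = \prod_{i=1}^{N} \frac{\exp(\mathbf{x}_{i1}^{\top}\beta)}
                                        {\sum_{j=1}^{m_i} \exp(\mathbf{x}_{ij}^{\top}\beta)},
        \]

    where matched set i contains one case (subscript 1) and m_i - 1 controls; the set-specific intercepts cancel out of this likelihood, which is what distinguishes the conditional approach from the weighted unconditional one.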

  15. Estimating the required logistical resources to support the development of a sustainable corn stover bioeconomy in the USA

    DOE PAGES

    Ebadian, Mahmood; Sokhansanj, Shahabaddine; Webb, Erin

    2016-11-23

    In this paper, the logistical resources required to develop a bioeconomy based on corn stover in the USA are quantified, including field equipment, storage sites, transportation and handling equipment, workforce, corn growers, and corn lands. These resources are essential to mobilize large quantities of corn stover from corn fields to biorefineries. The logistical resources are estimated over the lifetime of the biorefineries. Seventeen corn-growing states are considered for the logistical resource assessment. Over 6.8 billion gallons of cellulosic ethanol can be produced annually from 108 million dry tons of corn stover in these states. The maximum number of required field equipment (i.e., choppers, balers, collectors, loaders, and tractors) is estimated to be 194 110 units with a total economic value of about 26 billion dollars. In addition, 40 780 trucks and flatbed trailers would be required to transport bales from corn fields and storage sites to biorefineries with a total economic value of 4.0 billion dollars. About 88 899 corn growers need to be contracted with an annual net income of over 2.1 billion dollars. About 1903 storage sites would be required to hold 53.1 million dry tons of inventory after the harvest season. These storage sites would take up about 35 320.2 acres and 4077 loaders with an economic value of 0.4 billion dollars would handle this inventory. The total required workforce to run the logistics operations is estimated to be 50 567. Furthermore, the magnitude of the estimated logistical resources demonstrates the economic and social significance of the corn stover bioeconomy in rural areas in the USA.

  16. Estimating the required logistical resources to support the development of a sustainable corn stover bioeconomy in the USA

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ebadian, Mahmood; Sokhansanj, Shahabaddine; Webb, Erin

    In this paper, the logistical resources required to develop a bioeconomy based on corn stover in the USA are quantified, including field equipment, storage sites, transportation and handling equipment, workforce, corn growers, and corn lands. These resources are essential to mobilize large quantities of corn stover from corn fields to biorefineries. The logistical resources are estimated over the lifetime of the biorefineries. Seventeen corn-growing states are considered for the logistical resource assessment. Over 6.8 billion gallons of cellulosic ethanol can be produced annually from 108 million dry tons of corn stover in these states. The maximum number of required field equipment (i.e., choppers, balers, collectors, loaders, and tractors) is estimated to be 194 110 units with a total economic value of about 26 billion dollars. In addition, 40 780 trucks and flatbed trailers would be required to transport bales from corn fields and storage sites to biorefineries with a total economic value of 4.0 billion dollars. About 88 899 corn growers need to be contracted with an annual net income of over 2.1 billion dollars. About 1903 storage sites would be required to hold 53.1 million dry tons of inventory after the harvest season. These storage sites would take up about 35 320.2 acres and 4077 loaders with an economic value of 0.4 billion dollars would handle this inventory. The total required workforce to run the logistics operations is estimated to be 50 567. Furthermore, the magnitude of the estimated logistical resources demonstrates the economic and social significance of the corn stover bioeconomy in rural areas in the USA.

  17. Assessment and correction of skinfold thickness equations in estimating body fat in children with cerebral palsy.

    PubMed

    Gurka, Matthew J; Kuperminc, Michelle N; Busby, Marjorie G; Bennis, Jacey A; Grossberg, Richard I; Houlihan, Christine M; Stevenson, Richard D; Henderson, Richard C

    2010-02-01

    To assess the accuracy of skinfold equations in estimating percentage body fat in children with cerebral palsy (CP), compared with assessment of body fat from dual energy X-ray absorptiometry (DXA). Data were collected from 71 participants (30 females, 41 males) with CP (Gross Motor Function Classification System [GMFCS] levels I-V) between the ages of 8 and 18 years. Estimated percentage body fat was computed using established (Slaughter) equations based on the triceps and subscapular skinfolds. A linear model was fitted to assess the use of a simple correction to these equations for children with CP. Slaughter's equations consistently underestimated percentage body fat (mean difference compared with DXA percentage body fat -9.6/100 [SD 6.2]; 95% confidence interval [CI] -11.0 to -8.1). New equations were developed in which a correction factor was added to the existing equations based on sex, race, GMFCS level, size, and pubertal status. These corrected equations for children with CP agree better with DXA (mean difference 0.2/100 [SD=4.8]; 95% CI -1.0 to 1.3) than existing equations. A simple correction factor to commonly used equations substantially improves the ability to estimate percentage body fat from two skinfold measures in children with CP.

  18. Validation of Core Temperature Estimation Algorithm

    DTIC Science & Technology

    2016-01-29

    Figure captions only: (a) plots of observed versus estimated core temperature and of observed versus estimated PSI, each with the line of identity (dashed), the least squares regression line (solid), and the line equation; (b) Bland-Altman plots for comparison. The root mean squared error (RMSE) was also computed, as given by Equation 2.
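    The agreement statistics named in these captions are straightforward to compute. A minimal Python sketch with hypothetical arrays (not data from the report):

        # Minimal sketch: least-squares fit, RMSE, and Bland-Altman summary of the
        # kind referenced in the captions. The arrays are hypothetical.
        import numpy as np

        observed = np.array([37.1, 37.4, 37.9, 38.3, 38.6])   # observed core temperature (deg C)
        estimated = np.array([37.0, 37.5, 37.8, 38.5, 38.4])  # algorithm estimates

        slope, intercept = np.polyfit(estimated, observed, 1)  # least-squares regression line
        rmse = np.sqrt(np.mean((observed - estimated) ** 2))   # root mean squared error

        diff = estimated - observed
        bias = diff.mean()
        loa = (bias - 1.96 * diff.std(ddof=1), bias + 1.96 * diff.std(ddof=1))  # Bland-Altman limits of agreement
        print(slope, intercept, rmse, bias, loa)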

  19. Gradient estimates on the weighted p-Laplace heat equation

    NASA Astrophysics Data System (ADS)

    Wang, Lin Feng

    2018-01-01

    In this paper, by a regularization process we derive new gradient estimates for positive solutions to the weighted p-Laplace heat equation when the m-Bakry-Émery curvature is bounded from below by -K for some constant K ≥ 0. When the potential function is constant, these estimates reduce to the gradient estimate established by Ni and Kotschwar for positive solutions to the p-Laplace heat equation on closed manifolds with nonnegative Ricci curvature as K ↘ 0, and to the gradient estimates of Davies, Hamilton, and Li-Xu for positive solutions to the heat equation on closed manifolds with Ricci curvature bounded from below when p = 2.

  20. Improvement of Method for Estimation of Site Amplification Factor Based on Average Shear-wave Velocity of Ground

    NASA Astrophysics Data System (ADS)

    Yamaguchi, Makoto; Midorikawa, Saburoh

    The empirical equation for estimating the site amplification factor of ground motion from the average shear-wave velocity of the ground (AVS) is examined. In the existing equations, the coefficient describing the dependence of the amplification factor on the AVS was treated as constant. The analysis showed that, at short periods, this coefficient varies with the AVS. A new estimation equation that accounts for this dependence was proposed. The new equation can represent the soil characteristic that softer soil has a longer predominant period, and it yields better estimates at short periods than the existing method.

  1. Linear Logistic Test Modeling with R

    ERIC Educational Resources Information Center

    Baghaei, Purya; Kubinger, Klaus D.

    2015-01-01

    The present paper gives a general introduction to the linear logistic test model (Fischer, 1973), an extension of the Rasch model with linear constraints on item parameters, along with eRm (an R package to estimate different types of Rasch models; Mair, Hatzinger, & Mair, 2014) functions to estimate the model and interpret its parameters. The…

  2. ASCAL: A Microcomputer Program for Estimating Logistic IRT Item Parameters.

    ERIC Educational Resources Information Center

    Vale, C. David; Gialluca, Kathleen A.

    ASCAL is a microcomputer-based program for calibrating items according to the three-parameter logistic model of item response theory. It uses a modified multivariate Newton-Raphson procedure for estimating item parameters. This study evaluated this procedure using Monte Carlo Simulation Techniques. The current version of ASCAL was then compared to…
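    For context, the three-parameter logistic item response function that such a program calibrates is conventionally written as follows (standard IRT notation, not taken from the ASCAL documentation):

        \[
        P_i(\theta) = c_i + \frac{1 - c_i}{1 + \exp\bigl(-D\,a_i(\theta - b_i)\bigr)},
        \]

    where a_i, b_i, and c_i are the discrimination, difficulty, and lower-asymptote (guessing) parameters of item i, θ is examinee ability, and D ≈ 1.7 is an optional scaling constant.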

  3. Modeling of chemical inhibition from amyloid protein aggregation kinetics.

    PubMed

    Vázquez, José Antonio

    2014-02-27

    The aggregation of amyloid proteins causes several human neuropathologies. In some cases, e.g. fibrillar deposits of insulin, problems arise during protein production and purification and in the pump devices or injectable preparations used by diabetics. Experimental kinetics and adequate modelling of chemical inhibition of amyloid aggregation are of practical importance for studying viable processing, formulation and storage, as well as for predicting and optimizing the best conditions to reduce the effect of protein nucleation. In this manuscript, experimental data on insulin, Aβ42 amyloid protein and apomyoglobin fibrillation from the recent literature were selected to evaluate the capability of a bivariate sigmoid equation to model them. The mathematical functions (logistic combined with the Weibull equation) were used in reparameterized form, and the effects of inhibitor concentrations on the kinetic parameters of the logistic equation were well defined and explained. The data surfaces were accurately described by the proposed model, and the analysis characterized the inhibitory influence of several chemicals on protein aggregation. Discrimination between true and apparent inhibitors was also confirmed by the bivariate equation. EGCG for insulin (working at pH = 7.4/T = 37°C) and taiwaniaflavone for Aβ42 were the compounds studied that showed the greatest inhibition capacity. An accurate, simple and effective model to investigate the inhibition of amyloid protein aggregation by chemicals has been developed. The equation could be useful for clear quantification of the inhibitory potential of chemicals and rigorous comparison among them.
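    As a rough illustration of fitting a reparameterized logistic curve to sigmoidal aggregation kinetics, the Python sketch below uses SciPy to recover a plateau, a maximum rate, and a lag time from simulated data; this generic sigmoid is not necessarily the exact bivariate logistic-Weibull form used in the paper.

        # Hedged sketch: fit a reparameterized logistic (plateau ym, maximum rate vm,
        # lag time lag) to simulated sigmoidal aggregation data. Generic form only;
        # not the paper's exact bivariate model.
        import numpy as np
        from scipy.optimize import curve_fit

        def logistic(t, ym, vm, lag):
            return ym / (1.0 + np.exp(4.0 * vm * (lag - t) / ym + 2.0))

        t = np.linspace(0, 48, 60)  # time in hours
        rng = np.random.default_rng(1)
        y = logistic(t, 1.0, 0.15, 12.0) + rng.normal(0, 0.02, t.size)  # simulated fibrillation signal

        (ym, vm, lag), _ = curve_fit(logistic, t, y, p0=[1.0, 0.1, 10.0])
        print(f"plateau={ym:.2f}, max rate={vm:.3f} per h, lag time={lag:.1f} h")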

  4. Extremal equilibria for reaction-diffusion equations in bounded domains and applications

    NASA Astrophysics Data System (ADS)

    Rodríguez-Bernal, Aníbal; Vidal-López, Alejandro

    We show the existence of two special equilibria, the extremal ones, for a wide class of reaction-diffusion equations in bounded domains with several boundary conditions, including non-linear ones. They give bounds for the asymptotic dynamics and so for the attractor. Some results on the existence and/or uniqueness of positive solutions are also obtained. As a consequence, several well-known results on the existence and/or uniqueness of solutions for elliptic equations are revisited in a unified way, obtaining, in addition, information on the dynamics of the associated parabolic problem. Finally, we illustrate the use of the general results by applying them to the case of logistic equations. In fact, we obtain a detailed picture of the positive dynamics depending on the parameters appearing in the equation.

  5. The alarming problems of confounding equivalence using logistic regression models in the perspective of causal diagrams.

    PubMed

    Yu, Yuanyuan; Li, Hongkai; Sun, Xiaoru; Su, Ping; Wang, Tingting; Liu, Yi; Yuan, Zhongshang; Liu, Yanxun; Xue, Fuzhong

    2017-12-28

    Confounders can produce spurious associations between exposure and outcome in observational studies. For the majority of epidemiologists, adjusting for confounders with a logistic regression model is the habitual method, though it has some problems with accuracy and precision. It is, therefore, important to highlight the problems of logistic regression and to search for an alternative method. Four causal diagram models were defined to summarize confounding equivalence. Both theoretical proofs and simulation studies were performed to verify whether conditioning on different confounding equivalence sets had the same bias-reducing potential and then to select the optimum adjustment strategy, comparing the logistic regression model with the inverse probability weighting based marginal structural model (IPW-based-MSM). The "do-calculus" was used to calculate the true causal effect of exposure on outcome, and the bias and standard error were then used to evaluate the performances of the different strategies. Adjusting for different sets of confounding equivalence, as judged by identical Markov boundaries, produced different bias-reducing potential in the logistic regression model. For the sets satisfying G-admissibility, adjusting for the set including all the confounders reduced the bias to the same level as adjusting for the set containing the parent nodes of the outcome, while the bias after adjusting for the parent nodes of exposure was not equivalent to them. In addition, all causal effect estimations through logistic regression were biased, although the estimation after adjusting for the parent nodes of exposure was nearest to the true causal effect. However, conditioning on different confounding equivalence sets had the same bias-reducing potential under IPW-based-MSM. Compared with logistic regression, the IPW-based-MSM could obtain unbiased causal effect estimation when the adjusted confounders satisfied G-admissibility, and the optimal strategy was to adjust for the parent nodes of the outcome, which obtained the highest precision. All adjustment strategies through logistic regression were biased for causal effect estimation, while IPW-based-MSM could always obtain unbiased estimation when the adjusted set satisfied G-admissibility. Thus, IPW-based-MSM is recommended for adjusting for confounder sets.
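    A minimal sketch of the recommended IPW-based-MSM workflow: fit a propensity (treatment) model by logistic regression, form stabilized inverse probability weights, and fit a weighted marginal model for the outcome. The data are simulated and the variable names are placeholders; this is not the authors' code.

        # Hedged sketch of an IPW-based marginal structural model:
        # (1) logistic propensity model, (2) stabilized weights, (3) weighted outcome model.
        import numpy as np
        import pandas as pd
        import statsmodels.api as sm

        rng = np.random.default_rng(2)
        n = 2000
        c = rng.normal(size=n)                           # confounder
        a = rng.binomial(1, 1 / (1 + np.exp(-0.5 * c)))  # exposure depends on the confounder
        y = rng.binomial(1, 1 / (1 + np.exp(-(-1 + 0.7 * a + 0.8 * c))))
        df = pd.DataFrame({"c": c, "a": a, "y": y})

        ps = sm.Logit(df["a"], sm.add_constant(df[["c"]])).fit(disp=False).predict()  # P(A=1 | C)
        p_a = df["a"].mean()
        w = np.where(df["a"] == 1, p_a / ps, (1 - p_a) / (1 - ps))  # stabilized inverse probability weights

        msm = sm.GLM(df["y"], sm.add_constant(df[["a"]]),
                     family=sm.families.Binomial(), freq_weights=w).fit()
        print(np.exp(msm.params["a"]))  # marginal odds ratio for the exposure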

  6. Effect of individual parameter changes on the outcome of the estimated short-term dietary exposure to pesticides.

    PubMed

    van der Velde-Koerts, Trijntje; Breysse, Nicolas; Pattingre, Lauriane; Hamey, Paul Y; Lutze, Jason; Mahieu, Karin; Margerison, Sam; Ossendorp, Bernadette C; Reich, Hermine; Rietveld, Anton; Sarda, Xavier; Vial, Gaelle; Sieke, Christian

    2018-06-03

    In 2015 a scientific workshop was held in Geneva, where updating the International Estimate of Short-Term Intake (IESTI) equations was suggested. This paper studies the effects of the proposed changes in residue inputs, large portions, variability factors and unit weights on the overall short-term dietary exposure estimate. Depending on the IESTI case equation, a median increase in estimated overall exposure by a factor of 1.0-6.8 was observed when the current IESTI equations are replaced by the proposed IESTI equations. The highest increase in the estimated exposure arises from the replacement of the median residue (STMR) by the maximum residue limit (MRL) for bulked and blended commodities (case 3 equations). The change in large portion parameter does not have a significant impact on the estimated exposure. The use of large portions derived from the general population covering all age groups and bodyweights should be avoided when large portions are not expressed on an individual bodyweight basis. Replacement of the highest residue (HR) by the MRL and removal of the unit weight each increase the estimated exposure for small-, medium- and large-sized commodities (case 1, case 2a or case 2b equations). However, within the EU framework lowering of the variability factor from 7 or 5 to 3 counterbalances the effect of changes in other parameters, resulting in an estimated overall exposure change for the EU situation of a factor of 0.87-1.7 and 0.6-1.4 for IESTI case 2a and case 2b equations, respectively.
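    For orientation, the IESTI case equations referred to above are commonly written in roughly the following form (LP = large portion, HR = highest residue, STMR = supervised trials median residue, U = unit weight, ν = variability factor, bw = bodyweight); the abstract's proposed changes concern which inputs enter these formulas, so the layout below is only a schematic of the current scheme, not a normative statement.

        \[
        \text{Case 1: } \mathrm{IESTI} = \frac{LP \times HR}{bw}, \qquad
        \text{Case 2a: } \mathrm{IESTI} = \frac{U \times HR \times \nu + (LP - U) \times HR}{bw},
        \]
        \[
        \text{Case 2b: } \mathrm{IESTI} = \frac{LP \times HR \times \nu}{bw}, \qquad
        \text{Case 3: } \mathrm{IESTI} = \frac{LP \times STMR}{bw}.
        \]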

  7. Two models for evaluating landslide hazards

    USGS Publications Warehouse

    Davis, J.C.; Chung, C.-J.; Ohlmacher, G.C.

    2006-01-01

    Two alternative procedures for estimating landslide hazards were evaluated using data on topographic digital elevation models (DEMs) and bedrock lithologies in an area adjacent to the Missouri River in Atchison County, Kansas, USA. The two procedures are based on the likelihood ratio model but utilize different assumptions. The empirical likelihood ratio model is based on non-parametric empirical univariate frequency distribution functions under an assumption of conditional independence, while the multivariate logistic discriminant model assumes that likelihood ratios can be expressed in terms of logistic functions. The relative hazards of occurrence of landslides were estimated by an empirical likelihood ratio model and by multivariate logistic discriminant analysis. Predictor variables consisted of grids containing topographic elevations, slope angles, and slope aspects calculated from a 30-m DEM. An integer grid of coded bedrock lithologies taken from digitized geologic maps was also used as a predictor variable. Both statistical models yield relative estimates in the form of the proportion of total map area predicted to already contain or to be the site of future landslides. The stabilities of estimates were checked by cross-validation of results from random subsamples, using each of the two procedures. Cell-by-cell comparisons of hazard maps made by the two models show that the two sets of estimates are virtually identical. This suggests that the empirical likelihood ratio and the logistic discriminant analysis models are robust with respect to the conditional independence assumption and the logistic function assumption, respectively, and that either model can be used successfully to evaluate landslide hazards. © 2006.

  8. Peak flow regression equations For small, ungaged streams in Maine: Comparing map-based to field-based variables

    USGS Publications Warehouse

    Lombard, Pamela J.; Hodgkins, Glenn A.

    2015-01-01

    Regression equations to estimate peak streamflows with 1- to 500-year recurrence intervals (annual exceedance probabilities from 99 to 0.2 percent, respectively) were developed for small, ungaged streams in Maine. Equations presented here are the best available equations for estimating peak flows at ungaged basins in Maine with drainage areas from 0.3 to 12 square miles (mi2). Previously developed equations continue to be the best available equations for estimating peak flows for basin areas greater than 12 mi2. New equations presented here are based on streamflow records at 40 U.S. Geological Survey streamgages with a minimum of 10 years of recorded peak flows between 1963 and 2012. Ordinary least-squares regression techniques were used to determine the best explanatory variables for the regression equations. Traditional map-based explanatory variables were compared to variables requiring field measurements. Two field-based variables—culvert rust lines and bankfull channel widths—either were not commonly found or did not explain enough of the variability in the peak flows to warrant inclusion in the equations. The best explanatory variables were drainage area and percent basin wetlands; values for these variables were determined with a geographic information system. Generalized least-squares regression was used with these two variables to determine the equation coefficients and estimates of accuracy for the final equations.

  9. Probability of reduced renal function after contrast-enhanced CT: a model based on serum creatinine level, patient age, and estimated glomerular filtration rate.

    PubMed

    Herts, Brian R; Schneider, Erika; Obuchowski, Nancy; Poggio, Emilio; Jain, Anil; Baker, Mark E

    2009-08-01

    The objectives of our study were to develop a model to predict the probability of reduced renal function after outpatient contrast-enhanced CT (CECT)--based on patient age, sex, and race and on serum creatinine level before CT or directly based on estimated glomerular filtration rate (GFR) before CT--and to determine the relationship between patients with changes in creatinine level that characterize contrast-induced nephropathy and patients with reduced GFR after CECT. Of 5,187 outpatients who underwent CECT, 963 (18.6%) had serum creatinine levels obtained within 6 months before and 4 days after CECT. The estimated GFR was calculated before and after CT using the four-variable Modification of Diet in Renal Disease (MDRD) Study equation. Pre-CT serum creatinine level, age, race, sex, and pre-CT estimated GFR were tested using multiple-variable logistic regression models to determine the probability of having an estimated GFR of < 60 and < 45 mL/min/1.73 m(2) after CECT. Two thirds of the patients were used to create and one third to test the models. We also determined discordance between patients who met standard definitions of contrast-induced nephropathy and those with a reduced estimated GFR after CECT. Significant (p < 0.002) predictors for a post-CT estimated GFR of < 60 mL/min/1.73 m(2) were age, race, sex, pre-CT serum creatinine level, and pre-CT estimated GFR. Sex, serum creatinine level, and pre-CT estimated GFR were significant factors (p < 0.001) for predicting a post-CT estimated GFR of < 45 mL/min/1.73 m(2). The probability is [exp(y) / (1 + exp(y))], where y = 6.21 - (0.10 x pre-CT estimated GFR) for an estimated GFR of < 60 mL/min/1.73 m(2), and y = 3.66 - (0.087 x pre-CT estimated GFR) for an estimated GFR of < 45 mL/min/1.73 m(2). A discrepancy between those who met contrast-induced nephropathy criteria by creatinine changes and those with a post-CT estimated GFR of < 60 mL/min/1.73 m(2) was detected in 208 of the 963 patients (21.6%). The probability of a reduced estimated GFR after CECT can be predicted by the pre-CT estimated GFR using the four-variable MDRD equation. Furthermore, standard criteria for contrast-induced nephropathy are poor predictors of poor renal function after CECT. Criteria need to be established for what is an acceptable risk to manage patients undergoing CECT.
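    The two published logistic models quoted in this abstract can be evaluated directly. A minimal Python sketch applying them to a pre-CT estimated GFR:

        # Sketch of the probability models quoted above:
        #   p = exp(y) / (1 + exp(y)), with
        #   y = 6.21 - 0.10  * eGFR_pre for post-CECT eGFR < 60 mL/min/1.73 m(2)
        #   y = 3.66 - 0.087 * eGFR_pre for post-CECT eGFR < 45 mL/min/1.73 m(2)
        import math

        def prob_reduced_gfr(egfr_pre_ct, threshold=60):
            """Probability of a post-CECT estimated GFR below the given threshold."""
            if threshold == 60:
                y = 6.21 - 0.10 * egfr_pre_ct
            elif threshold == 45:
                y = 3.66 - 0.087 * egfr_pre_ct
            else:
                raise ValueError("models were published only for thresholds of 60 and 45")
            return math.exp(y) / (1.0 + math.exp(y))

        print(prob_reduced_gfr(75, 60), prob_reduced_gfr(75, 45))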

  10. Exploring the Parents' Attitudes and Perceptions About School Breakfast to Understand Why Participation Is Low in a Rural Midwest State.

    PubMed

    Askelson, Natoshia M; Golembiewski, Elizabeth H; Ghattas, Andrew; Williams, Steven; Delger, Patti J; Scheidel, Carrie A

    2017-02-01

    To explore parental attitudes and perceptions about the school breakfast program in a state with low school breakfast participation. A cross-sectional study design that used an online survey completed by parents supplemented with district data from a state department of education. The survey included quantitative and qualitative components. A rural Midwestern state with low school breakfast participation. Parents and caregivers of children in grades 1-12 were recruited through schools to complete a survey (n = 7,209). Participation in a school breakfast program. A generalized estimating equation model was used to analyze the data and account for the possible correlation among students from the same school district. Open-ended survey items were coded. Parents identified several structural and logistic barriers in response to open-ended survey items. Factors associated with breakfast participation include perceived benefits, stigma related to those for whom breakfast is intended, and the importance of breakfast. Interventions should be designed to test whether changing parent perceptions and decreasing stigma will lead to increased breakfast participation. Policy, systems, and environment changes addressing the structural and logistic barriers also may have the potential to increase participation. Copyright © 2016 Society for Nutrition Education and Behavior. Published by Elsevier Inc. All rights reserved.

  11. Homelessness among a cohort of women in street-based sex work: the need for safer environment interventions.

    PubMed

    Duff, Putu; Deering, Kathleen; Gibson, Kate; Tyndall, Mark; Shannon, Kate

    2011-08-12

    Drawing on data from a community-based prospective cohort study in Vancouver, Canada, we examined the prevalence and individual, interpersonal and work environment correlates of homelessness among 252 women in street-based sex work. Bivariate and multivariate logistic regression using generalized estimating equations (GEE) was used to examine the individual, interpersonal and work environment factors that were associated with homelessness among street-based sex workers. Among 252 women, 43.3% reported homelessness over an 18-month follow-up period. In the multivariable GEE logistic regression analysis, younger age (adjusted odds ratio [aOR] = 0.93; 95% confidence interval [95% CI] 0.93-0.98), sexual violence by non-commercial partners (aOR = 2.14; 95% CI 1.06-4.34), servicing a higher number of clients (10+ per week vs < 10) (aOR = 1.68; 95% CI 1.05-2.69), intensive, daily crack use (aOR = 1.65; 95% CI 1.11-2.45), and servicing clients in public spaces (aOR = 1.52; 95% CI 1.00-2.31) were independently associated with sleeping on the street. These findings indicate a critical need for safer environment interventions that mitigate the social and physical risks faced by homeless FSWs and increase access to safe, secure housing for women.

  12. Association between Nurse Staffing and In-Hospital Bone Fractures: A Retrospective Cohort Study.

    PubMed

    Morita, Kojiro; Matsui, Hiroki; Fushimi, Kiyohide; Yasunaga, Hideo

    2017-06-01

    To determine if sufficient nurse staffing reduced in-hospital fractures in acute care hospitals. The Japanese Diagnosis Procedure Combination inpatient (DPC) database from July 2010 to March 2014 linked with the Surveys for Medical Institutions. We conducted a retrospective cohort study to examine the association of inpatient nurse-to-occupied bed ratio (NBR) with in-hospital fractures. Multivariable logistic regression with generalized estimating equations was performed, adjusting for patient characteristics and hospital characteristics. We identified 770,373 patients aged 50 years or older who underwent planned major surgery for some forms of cancer or cardiovascular diseases. We used ICD-10 codes and postoperative procedure codes to identify patients with in-hospital fractures. Hospital characteristics were obtained from the "Survey of Medical Institutions and Hospital Report" and "Annual Report for Functions of Medical Institutions." Overall, 662 (0.09 percent) in-hospital fractures were identified. Logistic regression analysis showed that the proportion of in-hospital fractures in the group with the highest NBR was significantly lower than that in the group with the lowest NBR (adjusted odds ratio, 0.67; 95 percent confidence interval, 0.44-0.99; p = .048). Sufficient nurse staffing may be important to reduce postsurgical in-hospital fractures in acute care hospitals. © Health Research and Educational Trust.

  13. Estimation of design floods in ungauged catchments using a regional index flood method. A case study of Lake Victoria Basin in Kenya

    NASA Astrophysics Data System (ADS)

    Nobert, Joel; Mugo, Margaret; Gadain, Hussein

    Reliable estimation of flood magnitudes corresponding to required return periods, vital for structural design purposes, is hampered by the lack of hydrological data in the study area of Lake Victoria Basin in Kenya. Use of regional information, derived from data at gauged sites and regionalized for use at any location within a homogeneous region, would improve the reliability of the design flood estimation. Therefore, the regional index flood method has been applied. Based on data from 14 gauged sites, a delineation of the basin into two homogeneous regions was achieved using elevation variation (90-m DEM), spatial annual rainfall pattern and Principal Component Analysis of seasonal rainfall patterns (from 94 rainfall stations). At-site annual maximum series were modelled using the Log normal (LN) (3P), Log Logistic Distribution (LLG), Generalized Extreme Value (GEV) and Log Pearson Type 3 (LP3) distributions. The parameters of the distributions were estimated using the method of probability weighted moments. Goodness of fit tests were applied and the GEV was identified as the most appropriate model for each site. Based on the GEV model, flood quantiles were estimated and regional frequency curves derived from the averaged at-site growth curves. Using the least squares regression method, relationships were developed between the index flood, which is defined as the Mean Annual Flood (MAF), and catchment characteristics. The relationships indicated that area, mean annual rainfall and altitude were the three significant variables that greatly influence the index flood. Flood magnitudes in ungauged catchments within a homogeneous region were then estimated from the derived equations for the index flood and the quantiles from the regional curves. These estimates will improve flood risk estimation and support water management and engineering decisions and actions.
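    In the regional index flood framework described above, the design flood at an ungauged site is the product of the locally estimated index flood and a regional growth factor, with the index flood predicted from catchment characteristics. A schematic form is shown below; the log-log specification and symbols are illustrative, and the fitted coefficients are those reported in the study rather than reproduced here.

        \[
        Q_T = \mathrm{MAF} \times x_T, \qquad
        \ln(\mathrm{MAF}) = b_0 + b_1 \ln(A) + b_2 \ln(\mathrm{MAR}) + b_3 \ln(\mathrm{ALT}),
        \]

    where x_T is the dimensionless regional growth factor for return period T, A is catchment area, MAR is mean annual rainfall, and ALT is altitude.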

  14. Methods for estimating flood frequency in Montana based on data through water year 1998

    USGS Publications Warehouse

    Parrett, Charles; Johnson, Dave R.

    2004-01-01

    Annual peak discharges having recurrence intervals of 2, 5, 10, 25, 50, 100, 200, and 500 years (T-year floods) were determined for 660 gaged sites in Montana and in adjacent areas of Idaho, Wyoming, and Canada, based on data through water year 1998. The updated flood-frequency information was subsequently used in regression analyses, either ordinary or generalized least squares, to develop equations relating T-year floods to various basin and climatic characteristics, equations relating T-year floods to active-channel width, and equations relating T-year floods to bankfull width. The equations can be used to estimate flood frequency at ungaged sites. Montana was divided into eight regions, within which flood characteristics were considered to be reasonably homogeneous, and the three sets of regression equations were developed for each region. A measure of the overall reliability of the regression equations is the average standard error of prediction. The average standard errors of prediction for the equations based on basin and climatic characteristics ranged from 37.4 percent to 134.1 percent. Average standard errors of prediction for the equations based on active-channel width ranged from 57.2 percent to 141.3 percent. Average standard errors of prediction for the equations based on bankfull width ranged from 63.1 percent to 155.5 percent. In most regions, the equations based on basin and climatic characteristics generally had smaller average standard errors of prediction than equations based on active-channel or bankfull width. An exception was the Southeast Plains Region, where all equations based on active-channel width had smaller average standard errors of prediction than equations based on basin and climatic characteristics or bankfull width. Methods for weighting estimates derived from the basin- and climatic-characteristic equations and the channel-width equations also were developed. The weights were based on the cross correlation of residuals from the different methods and the average standard errors of prediction. When all three methods were combined, the average standard errors of prediction ranged from 37.4 percent to 120.2 percent. Weighting of estimates reduced the standard errors of prediction for all T-year flood estimates in four regions, reduced the standard errors of prediction for some T-year flood estimates in two regions, and provided no reduction in average standard error of prediction in two regions. A computer program for solving the regression equations, weighting estimates, and determining reliability of individual estimates was developed and placed on the USGS Montana District World Wide Web page. A new regression method, termed Region of Influence regression, also was tested. Test results indicated that the Region of Influence method was not as reliable as the regional equations based on generalized least squares regression. Two additional methods for estimating flood frequency at ungaged sites located on the same streams as gaged sites also are described. The first method, based on a drainage-area-ratio adjustment, is intended for use on streams where the ungaged site of interest is located near a gaged site. The second method, based on interpolation between gaged sites, is intended for use on streams that have two or more streamflow-gaging stations.

  15. Estimating Dynamical Systems: Derivative Estimation Hints From Sir Ronald A. Fisher.

    PubMed

    Deboeck, Pascal R

    2010-08-06

    The fitting of dynamical systems to psychological data offers the promise of addressing new and innovative questions about how people change over time. One method of fitting dynamical systems is to estimate the derivatives of a time series and then examine the relationships between derivatives using a differential equation model. One common approach for estimating derivatives, Local Linear Approximation (LLA), produces estimates with correlated errors. Depending on the specific differential equation model used, such correlated errors can lead to severely biased estimates of differential equation model parameters. This article shows that the fitting of dynamical systems can be improved by estimating derivatives in a manner similar to that used to fit orthogonal polynomials. Two applications using simulated data compare the proposed method and a generalized form of LLA when used to estimate derivatives and when used to estimate differential equation model parameters. A third application estimates the frequency of oscillation in observations of the monthly deaths from bronchitis, emphysema, and asthma in the United Kingdom. These data are publicly available in the statistical program R, and functions in R for the method presented are provided.
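    A minimal sketch of the general idea, estimating first derivatives by fitting a low-order polynomial within a sliding window of the series (in the spirit of orthogonal-polynomial derivative estimation); this illustrates the approach rather than reproducing the article's exact estimator.

        # Hedged sketch: local-polynomial derivative estimation for a time series.
        import numpy as np

        def local_poly_derivative(t, x, half_window=3, degree=2):
            dx = np.full(len(x), np.nan)
            for i in range(half_window, len(x) - half_window):
                sl = slice(i - half_window, i + half_window + 1)
                # centre the time axis so the linear coefficient is the derivative at t[i]
                coefs = np.polyfit(t[sl] - t[i], x[sl], degree)
                dx[i] = coefs[-2]  # coefficient of the linear term
            return dx

        t = np.linspace(0, 10, 200)
        x = np.sin(2 * np.pi * 0.3 * t)
        dxdt = local_poly_derivative(t, x)
        true_dxdt = 2 * np.pi * 0.3 * np.cos(2 * np.pi * 0.3 * t)
        print(np.nanmax(np.abs(dxdt - true_dxdt)))  # maximum approximation error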

  16. Reaeration equations derived from U.S. geological survey database

    USGS Publications Warehouse

    Melching, C.S.; Flores, H.E.

    1999-01-01

    Accurate estimation of the reaeration-rate coefficient (K2) is extremely important for waste-load allocation. Currently, available K2 estimation equations generally yield poor estimates when applied to stream conditions different from those for which the equations were derived because they were derived from small databases composed of potentially highly inaccurate measurements. A large data set of K2 measurements made with tracer-gas methods was compiled from U.S. Geological Survey studies. This compilation included 493 reaches on 166 streams in 23 states. Careful screening to detect and eliminate erroneous measurements reduced the data set to 371 measurements. These measurements were divided into four subgroups on the basis of flow regime (channel control or pool and riffle) and stream scale (discharge greater than or less than 0.556 m3/s). Multiple linear regression in logarithms was applied to relate K2 to 12 stream hydraulic and water-quality characteristics. The resulting best-estimation equations had the form of semiempirical equations that included the rate of energy dissipation and discharge or depth and width as variables. For equation verification, a data set of K2 measurements made with tracer-gas procedures by other agencies was compiled from the literature. This compilation included 127 reaches on at least 24 streams in at least seven states. The standard error of estimate obtained when applying the developed equations to the U.S. Geological Survey data set ranged from 44 to 61%, whereas the standard error of estimate was 78% when applied to the verification data set.

  17. Efficacy of generic allometric equations for estimating biomass: a test in Japanese natural forests.

    PubMed

    Ishihara, Masae I; Utsugi, Hajime; Tanouchi, Hiroyuki; Aiba, Masahiro; Kurokawa, Hiroko; Onoda, Yusuke; Nagano, Masahiro; Umehara, Toru; Ando, Makoto; Miyata, Rie; Hiura, Tsutom

    2015-07-01

    Accurate estimation of tree and forest biomass is key to evaluating forest ecosystem functions and the global carbon cycle. Allometric equations that estimate tree biomass from a set of predictors, such as stem diameter and tree height, are commonly used. Most allometric equations are site specific, usually developed from a small number of trees harvested in a small area, and are either species specific or ignore interspecific differences in allometry. Due to lack of site-specific allometries, local equations are often applied to sites for which they were not originally developed (foreign sites), sometimes leading to large errors in biomass estimates. In this study, we developed generic allometric equations for aboveground biomass and component (stem, branch, leaf, and root) biomass using large, compiled data sets of 1203 harvested trees belonging to 102 species (60 deciduous angiosperm, 32 evergreen angiosperm, and 10 evergreen gymnosperm species) from 70 boreal, temperate, and subtropical natural forests in Japan. The best generic equations provided better biomass estimates than did local equations that were applied to foreign sites. The best generic equations included explanatory variables that represent interspecific differences in allometry in addition to stem diameter, reducing error by 4-12% compared to the generic equations that did not include the interspecific difference. Different explanatory variables were selected for different components. For aboveground and stem biomass, the best generic equations had species-specific wood specific gravity as an explanatory variable. For branch, leaf, and root biomass, the best equations had functional types (deciduous angiosperm, evergreen angiosperm, and evergreen gymnosperm) instead of functional traits (wood specific gravity or leaf mass per area), suggesting importance of other traits in addition to these traits, such as canopy and root architecture. Inclusion of tree height in addition to stem diameter improved the performance of the generic equation only for stem biomass and had no apparent effect on aboveground, branch, leaf, and root biomass at the site level. The development of a generic allometric equation taking account of interspecific differences is an effective approach for accurately estimating aboveground and component biomass in boreal, temperate, and subtropical natural forests.
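    The generic equations described, with stem diameter and a species trait such as wood specific gravity as predictors, are typically log-log regressions of roughly the following form; this is a schematic only, and the fitted coefficients and exact predictor sets are those reported in the paper.

        \[
        \ln(M) = \beta_0 + \beta_1 \ln(D) + \beta_2 \ln(\rho) + \varepsilon,
        \]

    where M is aboveground (or component) biomass, D is stem diameter, and ρ is wood specific gravity; for the components where functional type replaced a trait, the ρ term is replaced by functional-type indicator terms.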

  18. Limited memory Broyden-Fletcher-Goldfarb-Shanno (L-BFGS) method for the parameter estimation on geographically weighted ordinal logistic regression model (GWOLR)

    NASA Astrophysics Data System (ADS)

    Saputro, Dewi Retno Sari; Widyaningsih, Purnami

    2017-08-01

    In general, parameter estimation for the GWOLR model uses the maximum likelihood method, but this leads to a system of nonlinear equations that is difficult to solve analytically, so an approximate solution is needed. There are two popular numerical approaches: the methods of Newton and Quasi-Newton (QN). Newton's method requires considerable computation time because it involves the Jacobian matrix (derivatives). The QN method overcomes this drawback by approximating the required derivative information from direct function computations, using a Hessian matrix approximation such as the Davidon-Fletcher-Powell (DFP) formula. The Broyden-Fletcher-Goldfarb-Shanno (BFGS) method is a QN method that, like the DFP formula, maintains a positive definite Hessian approximation. The BFGS method requires a large amount of memory, so an algorithm with lower memory usage is needed, namely the limited memory BFGS (L-BFGS) method. The purpose of this research is to assess the efficiency of the L-BFGS method in the iterative and recursive computation of the Hessian matrix and its inverse for GWOLR parameter estimation. We found that the BFGS and L-BFGS methods require on the order of O(n²) and O(nm) arithmetic operations, respectively.
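    As a small, simplified illustration of quasi-Newton likelihood maximization, the Python sketch below estimates an ordinary binary logistic model with SciPy's limited memory BFGS implementation; the GWOLR model adds ordinal categories and geographic weights, which are omitted here for brevity.

        # Simplified sketch: maximum likelihood for a binary logistic model via
        # L-BFGS-B (the GWOLR adds ordinal thresholds and geographic weights).
        import numpy as np
        from scipy.optimize import minimize

        rng = np.random.default_rng(3)
        n = 1000
        X = np.column_stack([np.ones(n), rng.normal(size=n)])
        true_beta = np.array([-0.5, 1.2])
        y = rng.binomial(1, 1 / (1 + np.exp(-X @ true_beta)))

        def neg_log_lik(beta):
            eta = X @ beta
            return -(y * eta - np.logaddexp(0.0, eta)).sum()  # negative logistic log-likelihood

        def gradient(beta):
            p = 1 / (1 + np.exp(-(X @ beta)))
            return -X.T @ (y - p)

        fit = minimize(neg_log_lik, x0=np.zeros(2), jac=gradient, method="L-BFGS-B")
        print(fit.x)  # should be close to true_beta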

  19. Estimating residual kidney function in dialysis patients without urine collection

    PubMed Central

    Shafi, Tariq; Michels, Wieneke M.; Levey, Andrew S.; Inker, Lesley A.; Dekker, Friedo W.; Krediet, Raymond T.; Hoekstra, Tiny; Schwartz, George J.; Eckfeldt, John H.; Coresh, Josef

    2016-01-01

    Residual kidney function contributes substantially to solute clearance in dialysis patients but cannot be assessed without urine collection. We used serum filtration markers to develop dialysis-specific equations to estimate urinary urea clearance without the need for urine collection. In our development cohort, we measured 24-hour urine clearances under close supervision in 44 patients and validated these equations in 826 patients from the Netherlands Cooperative Study on the Adequacy of Dialysis. For the development and validation cohorts, median urinary urea clearance was 2.6 and 2.4 mL/min, respectively. During the 24-hour visit in the development cohort, serum β-trace protein concentrations remained in steady state but concentrations of all other markers increased. In the validation cohort, bias (median measured minus estimated clearance) was low for all equations. Precision was significantly better for β-trace protein and β2-microglobulin equations and the accuracy was significantly greater for β-trace protein, β2-microglobulin and cystatin C equations, compared with the urea plus creatinine equation. Area under the receiver operator characteristic curve for detecting measured urinary urea clearance by equation-estimated urinary urea clearance (both 2 mL/min or more) were 0.821, 0.850 and 0.796 for β-trace protein, β2-microglobulin and cystatin C equations, respectively; significantly greater than the 0.663 for the urea plus creatinine equation. Thus, residual renal function can be estimated in dialysis patients without urine collections. PMID:26924062

  20. Estimating residual kidney function in dialysis patients without urine collection.

    PubMed

    Shafi, Tariq; Michels, Wieneke M; Levey, Andrew S; Inker, Lesley A; Dekker, Friedo W; Krediet, Raymond T; Hoekstra, Tiny; Schwartz, George J; Eckfeldt, John H; Coresh, Josef

    2016-05-01

    Residual kidney function contributes substantially to solute clearance in dialysis patients but cannot be assessed without urine collection. We used serum filtration markers to develop dialysis-specific equations to estimate urinary urea clearance without the need for urine collection. In our development cohort, we measured 24-hour urine clearances under close supervision in 44 patients and validated these equations in 826 patients from the Netherlands Cooperative Study on the Adequacy of Dialysis. For the development and validation cohorts, median urinary urea clearance was 2.6 and 2.4 ml/min, respectively. During the 24-hour visit in the development cohort, serum β-trace protein concentrations remained in steady state but concentrations of all other markers increased. In the validation cohort, bias (median measured minus estimated clearance) was low for all equations. Precision was significantly better for β-trace protein and β2-microglobulin equations and the accuracy was significantly greater for β-trace protein, β2-microglobulin, and cystatin C equations, compared with the urea plus creatinine equation. Area under the receiver operator characteristic curve for detecting measured urinary urea clearance by equation-estimated urinary urea clearance (both 2 ml/min or more) were 0.821, 0.850, and 0.796 for β-trace protein, β2-microglobulin, and cystatin C equations, respectively; significantly greater than the 0.663 for the urea plus creatinine equation. Thus, residual renal function can be estimated in dialysis patients without urine collections. Copyright © 2016 International Society of Nephrology. Published by Elsevier Inc. All rights reserved.

  1. An empirical comparison of methods for analyzing correlated data from a discrete choice survey to elicit patient preference for colorectal cancer screening

    PubMed Central

    2012-01-01

    Background A discrete choice experiment (DCE) is a preference survey which asks participants to make a choice among product portfolios comparing the key product characteristics by performing several choice tasks. Analysis of DCE data needs to account for within-participant correlation because choices from the same participant are likely to be similar. In this study, we empirically compared some commonly used statistical methods for analyzing DCE data while accounting for within-participant correlation based on a survey of patient preference for colorectal cancer (CRC) screening tests conducted in Hamilton, Ontario, Canada in 2002. Methods A two-stage DCE design was used to investigate the impact of six attributes on participants' preferences for a CRC screening test and willingness to undertake the test. We compared six models for clustered binary outcomes (logistic and probit regressions using cluster-robust standard error (SE), random-effects and generalized estimating equation approaches) and three models for clustered nominal outcomes (multinomial logistic and probit regressions with cluster-robust SE and random-effects multinomial logistic model). We also fitted a bivariate probit model with cluster-robust SE treating the choices from two stages as two correlated binary outcomes. The rank of relative importance between attributes and the estimates of the β coefficients within attributes were used to assess model robustness. Results In total, 468 participants, each completing 10 choices, were analyzed. Similar results were reported for the rank of relative importance and β coefficients across models for stage-one data on evaluating participants' preferences for the test. The six attributes ranked from high to low as follows: cost, specificity, process, sensitivity, preparation and pain. However, the results differed across models for stage-two data on evaluating participants' willingness to undertake the tests. Little within-patient correlation (ICC ≈ 0) was found in stage-one data, but substantial within-patient correlation existed (ICC = 0.659) in stage-two data. Conclusions When a small clustering effect was present in the DCE data, results remained robust across statistical models. However, results varied when a larger clustering effect was present. Therefore, it is important to assess the robustness of the estimates via sensitivity analysis using different models for analyzing clustered data from DCE studies. PMID:22348526
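    A minimal sketch of one of the compared approaches, a GEE logistic model for clustered binary choices with an exchangeable working correlation (Python, statsmodels); the attribute names and simulated data are placeholders rather than the survey's actual design matrix.

        # Hedged sketch: GEE logistic regression for clustered binary DCE choices.
        import numpy as np
        import pandas as pd
        import statsmodels.api as sm

        rng = np.random.default_rng(4)
        n_subj, n_tasks = 200, 10
        df = pd.DataFrame({
            "subject": np.repeat(np.arange(n_subj), n_tasks),
            "cost": rng.choice([10.0, 50.0, 100.0], n_subj * n_tasks),
            "sensitivity": rng.choice([0.6, 0.8, 0.9], n_subj * n_tasks),
        })
        u = rng.normal(0, 0.5, n_subj)  # subject-level heterogeneity
        eta = 1.0 - 0.02 * df["cost"] + 2.0 * df["sensitivity"] + u[df["subject"]]
        df["choice"] = rng.binomial(1, 1 / (1 + np.exp(-eta)))

        X = sm.add_constant(df[["cost", "sensitivity"]])
        gee = sm.GEE(df["choice"], X, groups=df["subject"],
                     family=sm.families.Binomial(),
                     cov_struct=sm.cov_struct.Exchangeable()).fit()
        print(gee.summary())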

  2. [Comparison of three stand-level biomass estimation methods].

    PubMed

    Dong, Li Hu; Li, Feng Ri

    2016-12-01

    At present, regional-scale forest biomass estimation methods attract considerable research attention, and developing stand-level biomass models is popular. Based on the forestry inventory data of a larch plantation (Larix olgensis) in Jilin Province, we used non-linear seemingly unrelated regression (NSUR) to estimate the parameters in two additive systems of stand-level biomass equations, i.e., stand-level biomass equations including stand variables and stand biomass equations including the biomass expansion factor (Model system 1 and Model system 2, respectively), listed the constant biomass expansion factor for the larch plantation, and compared the prediction accuracy of three stand-level biomass estimation methods. The results indicated that for the two additive systems of biomass equations, the adjusted coefficient of determination (Ra²) of the total and stem equations was more than 0.95, and the root mean squared error (RMSE), the mean prediction error (MPE) and the mean absolute error (MAE) were smaller. The branch and foliage biomass equations were worse than the total and stem biomass equations, with an adjusted coefficient of determination (Ra²) of less than 0.95. The prediction accuracy of a constant biomass expansion factor was lower than that of Model system 1 and Model system 2. Overall, although the stand-level biomass equations including the biomass expansion factor belong to the volume-derived biomass estimation method and differ in essence from the stand biomass equations including stand variables, the prediction accuracy obtained by the two methods was similar. The constant biomass expansion factor had lower prediction accuracy and was inappropriate. In addition, to make the model parameter estimation more effective, the established stand-level biomass equations should account for additivity in a system of all tree component biomass and total biomass equations.

  3. A new modified CKD-EPI equation for Chinese patients with type 2 diabetes.

    PubMed

    Liu, Xun; Gan, Xiaoliang; Chen, Jinxia; Lv, Linsheng; Li, Ming; Lou, Tanqi

    2014-01-01

    To improve the performance of glomerular filtration rate (GFR) estimating equations in Chinese type 2 diabetic patients by modification of the CKD-EPI equation, a total of 1196 subjects were enrolled. Measured GFR was calibrated to the dual plasma sample 99mTc-DTPA GFR. GFRs estimated by the re-expressed 4-variable MDRD equation, the CKD-EPI equation and the Asian modified CKD-EPI equation were compared in 351 diabetic/non-diabetic pairs, and a new modified CKD-EPI equation was developed in a total of 589 type 2 diabetic patients. In terms of both precision and accuracy, the GFR estimating equations all achieved better results in the non-diabetic cohort compared with the type 2 diabetic cohort (30% accuracy, P≤0.01 for all comparisons). In the validation data set, the new modified equation showed less bias (median difference, 2.3 ml/min/1.73 m2 for the new modified equation vs. a range of -3.8 to -7.9 ml/min/1.73 m2 for the other 3 equations [P<0.001 for all comparisons]) and better precision (IQR of the difference, 24.5 ml/min/1.73 m2 vs. a range of 27.3 to 30.7 ml/min/1.73 m2), leading to greater accuracy (30% accuracy, 71.4% vs. 55.2% for the re-expressed 4-variable MDRD equation and 61.0% for the Asian modified CKD-EPI equation [P = 0.001 and P = 0.02]). A new modified CKD-EPI equation for type 2 diabetic patients was thus developed and validated; it improves the performance of GFR estimation in this population.

  4. Performance of Creatinine and Cystatin C GFR Estimating Equations in an HIV-positive population on Antiretrovirals

    PubMed Central

    INKER, Lesley A; WYATT, Christina; CREAMER, Rebecca; HELLINGER, James; HOTTA, Matthew; LEPPO, Maia; LEVEY, Andrew S; OKPARAVERO, Aghogho; GRAHAM, Hiba; SAVAGE, Karen; SCHMID, Christopher H; TIGHIOUART, Hocine; WALLACH, Fran; KRISHNASAMI, Zipporah

    2013-01-01

    Objective To evaluate the performance of CKD-EPI creatinine, cystatin C and creatinine-cystatin C estimating equations in HIV-positive patients. Methods We evaluated the performance of the MDRD Study and CKD-EPI creatinine 2009, CKD-EPI cystatin C 2012 and CKD-EPI creatinine-cystatin C 2012 glomerular filtration rate (GFR) estimating equations compared to GFR measured using plasma clearance of iohexol in 200 HIV-positive patients on stable antiretroviral therapy. Creatinine and cystatin C assays were standardized to certified reference materials. Results Of the 200 participants, median (IQR) CD4 count was 536 (421) and 61% had an undetectable HIV-viral load. Mean (SD) measured GFR (mGFR) was 87 (26) ml/min/1.73m2. All CKD-EPI equations performed better than the MDRD Study equation. All three CKD-EPI equations had similar bias and precision. The cystatin C equation was not more accurate than the creatinine equation. The creatinine-cystatin C equation was significantly more accurate than the cystatin C equation and there was a trend toward greater accuracy than the creatinine equation. Accuracy was equal or better in most subgroups with the combined equation compared to either alone. Conclusions The CKD-EPI cystatin C equation does not appear to be more accurate than the CKD-EPI creatinine equation in patients who are HIV-positive, supporting the use of the CKD-EPI creatinine equation for routine clinical care for use in North American populations with HIV. The use of both filtration markers together as a confirmatory test for decreased estimated GFR based on creatinine in individuals who are HIV-positive requires further study. PMID:22842844

  5. Biomass equations for major tree species of the Northeast

    Treesearch

    Louise M. Tritton; James W. Hornbeck

    1982-01-01

    Regression equations are used in both forestry and ecosystem studies to estimate tree biomass from field measurements of dbh (diameter at breast height) or a combination of dbh and height. Literature on biomass is reviewed, and 178 sets of published equations for 25 species common to the northeastern United States are listed. On the basis of these equations, estimates of...

  6. New body fat prediction equations for severely obese patients.

    PubMed

    Horie, Lilian Mika; Barbosa-Silva, Maria Cristina Gonzalez; Torrinhas, Raquel Susana; de Mello, Marco Túlio; Cecconello, Ivan; Waitzberg, Dan Linetzky

    2008-06-01

    Severe obesity imposes physical limitations on body composition assessment. Our aim was to compare body fat (BF) estimations of severely obese patients obtained by bioelectrical impedance (BIA) and air displacement plethysmography (ADP) for development of new equations for BF prediction. Severely obese subjects (83 female/36 male, mean age=41.6+/-11.6 years) had BF estimated by BIA and ADP. The agreement of the data was evaluated using the Bland-Altman plot and the concordance correlation coefficient (CCC). A multivariate regression analysis was performed to develop and validate new predictive equations. BF estimations from BIA (64.8+/-15 kg) and ADP (65.6+/-16.4 kg) did not differ (p>0.05, with good accuracy, precision, and CCC), but the Bland-Altman plot showed wide limits of agreement (-10.4; 8.8). The standard BIA equation overestimated BF in women (-1.3 kg) and underestimated BF in men (5.6 kg; p<0.05). Two new BF predictive equations were generated after BIA measurement, which predicted BF with higher accuracy, precision, CCC, and better limits of agreement than the standard BIA equation. Standard BIA equations were inadequate for estimating BF in severely obese patients. Equations developed especially for this population provide more accurate BF assessment.

  7. Accuracy of Anthropometric Equations for Estimating Body Fat in Professional Male Soccer Players Compared with DXA

    PubMed Central

    López-Taylor, Juan R.; Jiménez-Alvarado, Juan Antonio; Villegas-Balcázar, Marisol; Jáuregui-Ulloa, Edtna E.; Torres-Naranjo, Francisco

    2018-01-01

    Background There are several published anthropometric equations to estimate body fat percentage (BF%), and this may prompt uncertainty about their application. Purpose To analyze the accuracy of several anthropometric equations (developed in athletic [AT] and nonathletic [NAT] populations) that estimate BF% comparing them with DXA. Methods We evaluated 131 professional male soccer players (body mass: 73.2 ± 8.0 kg; height: 177.5 ± 5.8 cm; DXA BF% [median, 25th–75th percentile]: 14.0, 11.9–16.4%) aged 18 to 37 years. All subjects were evaluated with anthropometric measurements and a whole body DXA scan. BF% was estimated through 14 AT and 17 NAT anthropometric equations and compared with the measured DXA BF%. Mean differences and 95% limits of agreement were calculated for those anthropometric equations without significant differences with DXA. Results Five AT and seven NAT anthropometric equations did not differ significantly with DXA. From these, Oliver's and Civar's (AT) and Ball's and Wilmore's (NAT) equations showed the highest agreement with DXA. Their 95% limits of agreement ranged from −3.9 to 2.3%, −4.8 to 1.8%, −3.4 to 3.1%, and −3.9 to 3.0%, respectively. Conclusion Oliver's, Ball's, Civar's, and Wilmore's equations were the best to estimate BF% accurately compared with DXA in professional male soccer players. PMID:29736402

  8. Assessment of the agreement between the Framingham and DAD risk equations for estimating cardiovascular risk in adult Africans living with HIV infection: a cross-sectional study.

    PubMed

    Noumegni, Steve Raoul; Ama, Vicky Jocelyne Moor; Assah, Felix K; Bigna, Jean Joel; Nansseu, Jobert Richie; Kameni, Jenny Arielle M; Katte, Jean-Claude; Dehayem, Mesmin Y; Kengne, Andre Pascal; Sobngwi, Eugene

    2017-01-01

    Absolute cardiovascular disease (CVD) risk evaluation using multivariable CVD risk models is increasingly advocated in people with HIV, in whom existing models remain largely untested. We assessed the agreement between the general-population-derived Framingham CVD risk equation and the HIV-specific Data collection on Adverse effects of anti-HIV Drugs (DAD) CVD risk equation in HIV-infected adult Cameroonians. This cross-sectional study involved 452 HIV-infected adults recruited at the HIV day-care unit of the Yaoundé Central Hospital, Cameroon. The 5-year projected CVD risk was estimated for each participant using the DAD and Framingham CVD risk equations. Agreement between estimates from these equations was assessed using the Spearman correlation and Cohen's kappa coefficient. The mean age of participants (80% females) was 44.4 ± 9.8 years. Most participants (88.5%) were on antiretroviral treatment, with 93.3% of them receiving a first-line regimen. The most frequent cardiovascular risk factors were abdominal obesity (43.1%) and dyslipidemia (33.8%). The median estimated 5-year CVD risk was 0.6% (25th-75th percentiles: 0.3-1.3) using the DAD equation and 0.7% (0.2-2.0) with the Framingham equation. The Spearman correlation between the two estimates was 0.93 (p < 0.001). The kappa statistic was 0.61 (95% confidence interval: 0.54-0.67) for the agreement between the two equations in classifying participants across risk categories defined as low, moderate, high and very high. Most participants had a low-to-moderate estimated CVD risk, with an acceptable level of agreement between the general and HIV-specific equations in ranking CVD risk.

  9. Ramsay-Curve Item Response Theory for the Three-Parameter Logistic Item Response Model

    ERIC Educational Resources Information Center

    Woods, Carol M.

    2008-01-01

    In Ramsay-curve item response theory (RC-IRT), the latent variable distribution is estimated simultaneously with the item parameters of a unidimensional item response model using marginal maximum likelihood estimation. This study evaluates RC-IRT for the three-parameter logistic (3PL) model with comparisons to the normal model and to the empirical…

  10. The Impact of Three Factors on the Recovery of Item Parameters for the Three-Parameter Logistic Model

    ERIC Educational Resources Information Center

    Kim, Kyung Yong; Lee, Won-Chan

    2017-01-01

    This article provides a detailed description of three factors (specification of the ability distribution, numerical integration, and frame of reference for the item parameter estimates) that might affect the item parameter estimation of the three-parameter logistic model, and compares five item calibration methods, which are combinations of the…

  11. 78 FR 59342 - 36(b)(1) Arms Sales Notification

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-09-26

    ... related elements of logistical and program support. (iv) Military Department: Air Force (QAI) (v) Prior... contractor engineering, technical and logistics support services, and other related elements of logistical and program support. The estimated cost is $60 million. This proposed sale will contribute to the...

  12. Alternatives for the Bedside Schwartz Equation to Estimate Glomerular Filtration Rate in Children.

    PubMed

    Pottel, Hans; Dubourg, Laurence; Goffin, Karolien; Delanaye, Pierre

    2018-01-01

    The bedside Schwartz equation has long been and still is the recommended equation to estimate glomerular filtration rate (GFR) in children. However, this equation is probably best suited to estimate GFR in children with chronic kidney disease (reduced GFR) but is not optimal for children with GFR >75 mL/min/1.73 m 2 . Moreover, the Schwartz equation requires the height of the child, information that is usually not available in the clinical laboratory. This makes automatic reporting of estimated glomerular filtration rate (eGFR) along with serum creatinine impossible. As the majority of children (even children referred to nephrology clinics) have GFR >75 mL/min/1.73 m 2 , it might be interesting to evaluate possible alternatives to the bedside Schwartz equation. The pediatric form of the Full Age Spectrum (FAS) equation offers an alternative to Schwartz, allowing automatic reporting of eGFR since height is not necessary. However, when height is involved in the FAS equation, the equation is essentially equal to the Schwartz equation for children, but there are large differences for adolescents. Combining standardized biomarkers increases the prediction performance of eGFR equations for children, reaching P10 ≈ 45% and P30 ≈ 90%. There are currently good and simple alternatives to the bedside Schwartz equation, but the more complex equations combining serum creatinine, serum cystatin C, and height show the highest accuracy and precision. Copyright © 2017 National Kidney Foundation, Inc. Published by Elsevier Inc. All rights reserved.
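
    For illustration of the two approaches contrasted above, here is a small Python sketch using the commonly published forms of the bedside Schwartz and pediatric FAS equations (coefficients 0.413 and 107.3, with serum creatinine in mg/dL); it is a simplified aid to the discussion, not code from the article, and Q denotes the age- and sex-specific median creatinine of healthy children.

        def egfr_bedside_schwartz(height_cm: float, scr_mg_dl: float) -> float:
            """Bedside Schwartz estimate (mL/min/1.73 m2); requires the child's height."""
            return 0.413 * height_cm / scr_mg_dl

        def egfr_fas_pediatric(scr_mg_dl: float, q_mg_dl: float) -> float:
            """Pediatric Full Age Spectrum (FAS) estimate; uses the normalized
            creatinine Scr/Q instead of height, so automatic reporting is possible."""
            return 107.3 / (scr_mg_dl / q_mg_dl)

        # example: height 140 cm, Scr 0.60 mg/dL, Q = 0.50 mg/dL
        print(round(egfr_bedside_schwartz(140, 0.60), 1))  # ~96.4
        print(round(egfr_fas_pediatric(0.60, 0.50), 1))    # ~89.4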

  13. Discrete Kalman filtering equations of second-order form for control-structure interaction simulations

    NASA Technical Reports Server (NTRS)

    Park, K. C.; Alvin, K. F.; Belvin, W. Keith

    1991-01-01

    A second-order form of discrete Kalman filtering equations is proposed as a candidate state estimator for efficient simulations of control-structure interactions in coupled physical coordinate configurations as opposed to decoupled modal coordinates. The resulting matrix equation of the present state estimator consists of the same symmetric, sparse N x N coupled matrices of the governing structural dynamics equations as opposed to unsymmetric 2N x 2N state space-based estimators. Thus, in addition to substantial computational efficiency improvement, the present estimator can be applied to control-structure design optimization for which the physical coordinates associated with the mass, damping and stiffness matrices of the structure are needed instead of modal coordinates.
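
    For context, the standard first-order (state-space) discrete Kalman filter that the second-order formulation is contrasted with can be written as follows (textbook equations, not the paper's second-order form; for a structure with N physical coordinates the stacked displacement-velocity state gives the unsymmetric 2N x 2N matrices mentioned above):

        \hat{x}_{k|k-1} = F\,\hat{x}_{k-1|k-1}, \qquad
        P_{k|k-1} = F P_{k-1|k-1} F^{\mathsf T} + Q,

        K_k = P_{k|k-1} H^{\mathsf T}\left(H P_{k|k-1} H^{\mathsf T} + R\right)^{-1}, \qquad
        \hat{x}_{k|k} = \hat{x}_{k|k-1} + K_k\left(z_k - H\,\hat{x}_{k|k-1}\right), \qquad
        P_{k|k} = (I - K_k H)\,P_{k|k-1}.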

  14. Estimating design-flood discharges for streams in Iowa using drainage-basin and channel-geometry characteristics

    USGS Publications Warehouse

    Eash, D.A.

    1993-01-01

    Procedures provided for applying the drainage-basin and channel-geometry regression equations depend on whether the design-flood discharge estimate is for a site on an ungaged stream, an ungaged site on a gaged stream, or a gaged site. When both a drainage-basin and a channel-geometry regression-equation estimate are available for a stream site, a procedure is presented for determining a weighted average of the two flood estimates. The drainage-basin regression equations are applicable to unregulated rural drainage areas less than 1,060 square miles, and the channel-geometry regression equations are applicable to unregulated rural streams in Iowa with stabilized channels.

  15. An allometric scaling relation based on logistic growth of cities

    NASA Astrophysics Data System (ADS)

    Chen, Yanguang

    2014-08-01

    The relationships between urban area and population size have been empirically demonstrated to follow the scaling law of allometric growth. This allometric scaling is based on exponential growth of city size and can be termed "exponential allometry", which is associated with the concepts of fractals. However, both city population and urban area comply with the course of logistic growth rather than exponential growth. In this paper, I will present a new allometric scaling based on logistic growth to solve the abovementioned problem. The logistic growth is a process of replacement dynamics. Defining a pair of replacement quotients as new measurements, which are functions of urban area and population, we can derive an allometric scaling relation from the logistic processes of urban growth, which can be termed "logistic allometry". The exponential allometric relation between urban area and population is the approximate expression of the logistic allometric equation when the city size is not large enough. The proper range of the allometric scaling exponent value is reconsidered through the logistic process. Then, a medium-sized city of Henan Province, China, is employed as an example to validate the new allometric relation. The logistic allometry is helpful for further understanding the fractal property and self-organized process of urban evolution in the right perspective.
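
    As background for the distinction drawn above (standard textbook forms, not the paper's derivation), logistic growth of population P and urban area A can be written, together with the classical allometric scaling relation, as:

        \frac{dP}{dt} = r\,P\left(1 - \frac{P}{P_{\max}}\right), \qquad
        \frac{dA}{dt} = s\,A\left(1 - \frac{A}{A_{\max}}\right), \qquad
        A = a\,P^{\,b},

    where, as the abstract notes, the exponential allometric relation A = aP^b approximates the logistic allometric relation only when city size is not yet large.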

  16. Estimating GFR using Serum Cystatin C Alone and in Combination with Serum Creatinine: A Pooled Analysis of 3418 Individuals with CKD

    PubMed Central

    Stevens, Lesley A; Coresh, Josef; Schmid, Christopher H; Feldman, Harold I.; Froissart, Marc; Kusek, John; Rossert, Jerome; Van Lente, Frederick; Bruce, Robert D.; Zhang, Yaping (Lucy); Greene, Tom; Levey, Andrew S

    2008-01-01

    Background Serum cystatin C (Scys) has been proposed as a potential replacement for serum creatinine (Scr) in glomerular filtration rate (GFR) estimation. We report the development and evaluation of GFR estimating equations using Scys alone and Scys, Scr or both with demographic variables. Study Design Test of diagnostic accuracy. Setting and Participants Participants screened for three chronic kidney disease (CKD) studies in the US (n=2980) and a clinical population in Paris, France (n=438). Reference Test Measured GFR (mGFR). Index Test Estimated GFR using the four new equations based on Scys alone, or on Scys, Scr or both with age, sex and race. New equations were developed using regression with log GFR as the outcome in 2/3 of the data from the US studies. Internal validation was performed in the remaining 1/3 of the data from the US CKD studies; external validation was performed in the Paris study. Measurements GFR was measured using urinary clearance of 125I-iothalamate in the US studies and chromium-ethylenediaminetetraacetate (51Cr-EDTA) in the Paris study. Scys was measured by the Dade Behring assay; Scr was standardized. Results Mean mGFR, Scr and Scys were 48 (5th–95th percentile, 15–95) ml/min/1.73 m2, 2.1 mg/dL and 1.8 mg/L, respectively. For the new equations, the coefficients for age, sex and race were significant in the equation with Scys but 2- to 4-fold smaller than in the equation with Scr. Measures of performance among the new equations were consistent across the development, internal and external validation datasets. The percentage of eGFR values within 30% of mGFR for the equations based on Scys alone, or on Scys, Scr or both with age, sex and race, was 81, 83, 85, and 89%, respectively. The equation using Scys alone yields estimates with small biases in age, sex and race subgroups, which are improved in equations including these variables. Limitations The study population was composed mainly of patients with CKD. Conclusions Scys alone provides GFR estimates that are nearly as accurate as Scr adjusted for age, sex and race, thus providing an alternative GFR estimate that is not linked to muscle mass. An equation including Scys in combination with Scr, age, sex and race provides the most accurate estimates. PMID:18295055

  17. Estimate of body composition by Hume's equation: validation with DXA.

    PubMed

    Carnevale, Vincenzo; Piscitelli, Pamela Angela; Minonne, Rita; Castriotta, Valeria; Cipriani, Cristiana; Guglielmi, Giuseppe; Scillitani, Alfredo; Romagnoli, Elisabetta

    2015-05-01

    We investigated how Hume's equation, using the antipyrine space, could perform in estimating fat mass (FM) and lean body mass (LBM). In 100 (40 male and 60 female) subjects, we estimated FM and LBM by the equation and compared these values with those measured by a latest-generation DXA device. The correlation coefficient between measured and estimated FM was r = 0.940 (p < 0.0001), and that between measured and estimated LBM was r = 0.913 (p < 0.0001). The Bland-Altman plots demonstrated a fair agreement between estimated and measured FM and LBM, though the equation underestimated FM and overestimated LBM with respect to DXA. The mean difference for FM was 1.40 kg (limits of agreement of -6.54 and 8.37 kg). For LBM, the mean difference with respect to DXA was 1.36 kg (limits of agreement -8.26 and 6.52 kg). The root mean square error was 3.61 kg for FM and 3.56 kg for LBM. Our results show that in clinically stable subjects Hume's equation can reliably assess body composition, and the estimated FM and LBM approached those measured by a modern DXA device.

  18. Assessment and correction of skinfold thickness equations in estimating body fat in children with cerebral palsy

    PubMed Central

    GURKA, MATTHEW J; KUPERMINC, MICHELLE N; BUSBY, MARJORIE G; BENNIS, JACEY A; GROSSBERG, RICHARD I; HOULIHAN, CHRISTINE M; STEVENSON, RICHARD D; HENDERSON, RICHARD C

    2010-01-01

    AIM To assess the accuracy of skinfold equations in estimating percentage body fat in children with cerebral palsy (CP), compared with assessment of body fat from dual energy X-ray absorptiometry (DXA). METHOD Data were collected from 71 participants (30 females, 41 males) with CP (Gross Motor Function Classification System [GMFCS] levels I–V) between the ages of 8 and 18 years. Estimated percentage body fat was computed using established (Slaughter) equations based on the triceps and subscapular skinfolds. A linear model was fitted to assess the use of a simple correction to these equations for children with CP. RESULTS Slaughter’s equations consistently underestimated percentage body fat (mean difference compared with DXA percentage body fat −9.6/100 [SD 6.2]; 95% confidence interval [CI] −11.0 to −8.1). New equations were developed in which a correction factor was added to the existing equations based on sex, race, GMFCS level, size, and pubertal status. These corrected equations for children with CP agree better with DXA (mean difference 0.2/100 [SD=4.8]; 95% CI −1.0 to 1.3) than existing equations. INTERPRETATION A simple correction factor to commonly used equations substantially improves the ability to estimate percentage body fat from two skinfold measures in children with CP. PMID:19811518

  19. Best Fitting Prediction Equations for Basal Metabolic Rate: Informing Obesity Interventions in Diverse Populations

    PubMed Central

    Sabounchi, Nasim S.; Rahmandad, Hazhir; Ammerman, Alice

    2014-01-01

    Basal Metabolic Rate (BMR) represents the largest component of total energy expenditure and is a major contributor to energy balance. Therefore, accurately estimating BMR is critical for developing rigorous obesity prevention and control strategies. Over the past several decades, numerous BMR formulas have been developed targeted to different population groups. A comprehensive literature search revealed 248 BMR estimation equations developed using diverse ranges of age, gender, race, fat free mass, fat mass, height, waist-to-hip ratio, body mass index, and weight. A subset of 47 studies included enough detail to allow for development of meta-regression equations. Utilizing these studies, meta-equations were developed targeted to twenty specific population groups. This review provides a comprehensive summary of available BMR equations and an estimate of their accuracy. An accompanying online BMR prediction tool (available at http://www.sdl.ise.vt.edu/tutorials.html) was developed to automatically estimate BMR based on the most appropriate equation after user-entry of individual age, race, gender, and weight. PMID:23318720
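
    As a concrete illustration of what such prediction equations look like, the following sketch implements one well-known example, the Mifflin-St Jeor equation (shown here for orientation only; it is not necessarily one of the meta-equations derived in the review, and the coefficients are the commonly published ones):

        def bmr_mifflin_st_jeor(weight_kg: float, height_cm: float, age_yr: float, male: bool) -> float:
            """Mifflin-St Jeor basal/resting metabolic rate estimate in kcal/day."""
            base = 10.0 * weight_kg + 6.25 * height_cm - 5.0 * age_yr
            return base + (5.0 if male else -161.0)

        print(round(bmr_mifflin_st_jeor(70, 175, 30, male=True)))  # ~1649 kcal/day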

  20. Validation of equations for pleural effusion volume estimation by ultrasonography.

    PubMed

    Hassan, Maged; Rizk, Rana; Essam, Hatem; Abouelnour, Ahmed

    2017-12-01

    To validate the accuracy of previously published equations that estimate pleural effusion volume using ultrasonography. Only equations using simple measurements were tested. Three measurements were taken at the posterior axillary line for each case with effusion: lateral height of effusion ( H ), distance between collapsed lung and chest wall ( C ) and distance between lung and diaphragm ( D ). Cases whose effusion was aspirated to dryness were included and drained volume was recorded. Intra-class correlation coefficient (ICC) was used to determine the predictive accuracy of five equations against the actual volume of aspirated effusion. 46 cases with effusion were included. The most accurate equation in predicting effusion volume was ( H  +  D ) × 70 (ICC 0.83). The simplest and yet accurate equation was H  × 100 (ICC 0.79). Pleural effusion height measured by ultrasonography gives a reasonable estimate of effusion volume. Incorporating distance between lung base and diaphragm into estimation improves accuracy from 79% with the first method to 83% with the latter.
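
    The two best-performing equations reported above translate directly into code. A minimal sketch (assuming the ultrasonographic distances H and D are in centimetres and the result is in millilitres, as is usual for such formulas):

        def effusion_volume_hd(h_cm: float, d_cm: float) -> float:
            """Most accurate equation in the study: volume (mL) = (H + D) x 70."""
            return (h_cm + d_cm) * 70.0

        def effusion_volume_h(h_cm: float) -> float:
            """Simplest accurate equation in the study: volume (mL) = H x 100."""
            return h_cm * 100.0

        # e.g. lateral effusion height H = 6 cm, lung-diaphragm distance D = 3 cm
        print(effusion_volume_hd(6, 3))  # 630.0 mL
        print(effusion_volume_h(6))      # 600.0 mL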

  1. Creatinine Clearance Is Not Equal to Glomerular Filtration Rate and Cockcroft-Gault Equation Is Not Equal to CKD-EPI Collaboration Equation.

    PubMed

    Fernandez-Prado, Raul; Castillo-Rodriguez, Esmeralda; Velez-Arribas, Fernando Javier; Gracia-Iguacel, Carolina; Ortiz, Alberto

    2016-12-01

    Direct oral anticoagulants (DOACs) may require dose reduction or avoidance when glomerular filtration rate is low. However, glomerular filtration rate is not usually measured in routine clinical practice. Rather, equations that incorporate different variables use serum creatinine to estimate either creatinine clearance in mL/min or glomerular filtration rate in mL/min/1.73 m2. The Cockcroft-Gault equation estimates creatinine clearance and incorporates weight into the equation. By contrast, the Modification of Diet in Renal Disease and Chronic Kidney Disease Epidemiology Collaboration (CKD-EPI) equations estimate glomerular filtration rate and incorporate ethnicity but not weight. As a result, an individual patient may have very different renal function estimates, depending on the equation used. We now highlight these differences and discuss the impact on routine clinical care for anticoagulation to prevent embolization in atrial fibrillation. Pivotal DOAC clinical trials used creatinine clearance as a criterion for patient enrollment, and dose adjustment and Food and Drug Administration recommendations are based on creatinine clearance. However, clinical biochemistry laboratories provide CKD-EPI glomerular filtration rate estimations, resulting in discrepancies between clinical trial and routine use of the drugs. Copyright © 2016 Elsevier Inc. All rights reserved.
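
    A small Python sketch makes the discrepancy concrete, using the commonly published forms of the Cockcroft-Gault and 2009 CKD-EPI creatinine equations (coefficients reproduced from the literature, not from this article; treat them as illustrative):

        def crcl_cockcroft_gault(age: float, weight_kg: float, scr_mg_dl: float, female: bool) -> float:
            """Cockcroft-Gault creatinine clearance in mL/min (weight-dependent)."""
            crcl = (140.0 - age) * weight_kg / (72.0 * scr_mg_dl)
            return crcl * 0.85 if female else crcl

        def egfr_ckd_epi_2009(age: float, scr_mg_dl: float, female: bool, black: bool = False) -> float:
            """CKD-EPI 2009 creatinine eGFR in mL/min/1.73 m2 (weight-independent)."""
            kappa, alpha = (0.7, -0.329) if female else (0.9, -0.411)
            egfr = (141.0
                    * min(scr_mg_dl / kappa, 1.0) ** alpha
                    * max(scr_mg_dl / kappa, 1.0) ** -1.209
                    * 0.993 ** age)
            if female:
                egfr *= 1.018
            if black:
                egfr *= 1.159
            return egfr

        # an 80-year-old, 50 kg woman with Scr 1.0 mg/dL: the two estimates diverge markedly
        print(round(crcl_cockcroft_gault(80, 50, 1.0, female=True)))   # ~35 mL/min
        print(round(egfr_ckd_epi_2009(80, 1.0, female=True)))          # ~53 mL/min/1.73 m2

    For the same elderly, low-weight patient the two estimates differ substantially, which is exactly the situation in which DOAC dosing decisions can diverge.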

  2. Comparison of total body water estimates from O-18 and bioelectrical response prediction equations

    NASA Technical Reports Server (NTRS)

    Barrows, Linda H.; Inners, L. Daniel; Stricklin, Marcella D.; Klein, Peter D.; Wong, William W.; Siconolfi, Steven F.

    1993-01-01

    Identification of an indirect, rapid means to measure total body water (TBW) during space flight may aid in quantifying hydration status and assist in countermeasure development. Bioelectrical response testing and hydrostatic weighing were performed on 27 subjects who ingested O-18, a naturally occurring isotope of oxygen, to measure true TBW. TBW estimates from three bioelectrical response prediction equations and fat-free mass (FFM) were compared to TBW measured from O-18. A repeated measures MANOVA with post-hoc Dunnett's Test indicated a significant (p less than 0.05) difference between TBW estimates from two of the three bioelectrical response prediction equations and O-18. TBW estimates from FFM and the Kushner & Schoeller (1986) equation yielded results that were similar to those given by O-18. Strong correlations existed between each prediction method and O-18; however, standard errors, identified through regression analyses, were higher for the bioelectrical response prediction equations compared to those derived from FFM. These findings suggest (1) the Kushner & Schoeller (1986) equation may provide a valid measure of TBW, (2) other TBW prediction equations need to be identified that have variability similar to that of FFM, and (3) bioelectrical estimates of TBW may prove valuable in quantifying hydration status during space flight.

  3. The building blocks of a 'Liveable Neighbourhood': Identifying the key performance indicators for walking of an operational planning policy in Perth, Western Australia.

    PubMed

    Hooper, Paula; Knuiman, Matthew; Foster, Sarah; Giles-Corti, Billie

    2015-11-01

    Planning policy makers are requesting clearer guidance on the key design features required to build neighbourhoods that promote active living. Using a backwards stepwise elimination procedure (logistic regression with generalised estimating equations adjusting for demographic characteristics, self-selection factors, stage of construction and scale of development) this study identified specific design features (n=16) from an operational planning policy ("Liveable Neighbourhoods") that showed the strongest associations with walking behaviours (measured using the Neighbourhood Physical Activity Questionnaire). The interacting effects of design features on walking behaviours were also investigated. The urban design features identified were grouped into the "building blocks of a Liveable Neighbourhood", reflecting the scale, importance and sequencing of the design and implementation phases required to create walkable, pedestrian friendly developments. Copyright © 2015 Elsevier Ltd. All rights reserved.

  4. Analytical solution of Luedeking-Piret equation for a batch fermentation obeying Monod growth kinetics.

    PubMed

    Garnier, Alain; Gaillet, Bruno

    2015-12-01

    Few mathematical fermentation models allow analytical solutions of batch process dynamics. The most widely used is the combination of logistic microbial growth kinetics with the Luedeking-Piret bioproduct synthesis relation. However, the logistic equation is principally based on formalistic similarities and only fits a limited range of fermentation types. In this article, we have developed an analytical solution for the combination of Monod growth kinetics with the Luedeking-Piret relation, which can be identified by linear regression and used to simulate batch fermentation evolution. Two classical examples are used to show the quality of fit and the simplicity of the proposed method. A solution for the Haldane substrate-limited growth model combined with the Luedeking-Piret relation is also provided. These models could prove useful for the analysis of fermentation data in industry as well as academia. © 2015 Wiley Periodicals, Inc.
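
    For reference, the building blocks combined in the article are, in their standard textbook forms (biomass X, substrate S, product P):

        \frac{dX}{dt} = \mu X, \qquad
        \mu_{\text{logistic}} = \mu_{\max}\left(1 - \frac{X}{X_{\max}}\right), \qquad
        \mu_{\text{Monod}} = \frac{\mu_{\max} S}{K_S + S},

        \frac{dP}{dt} = \alpha \frac{dX}{dt} + \beta X \quad \text{(Luedeking-Piret)}.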

  5. Accuracy of an equation for estimating age from mandibular third molar development in a Thai population

    PubMed Central

    Verochana, Karune; Prapayasatok, Sangsom; Mahasantipiya, Phattaranant May; Korwanich, Narumanas

    2016-01-01

    Purpose This study assessed the accuracy of age estimates produced by a regression equation derived from lower third molar development in a Thai population. Materials and Methods The first part of this study relied on measurements taken from panoramic radiographs of 614 Thai patients aged from 9 to 20. The stage of lower left and right third molar development was observed in each radiograph and a modified Gat score was assigned. Linear regression on this data produced the following equation: Y=9.309+1.673 mG+0.303S (Y=age; mG=modified Gat score; S=sex). In the second part of this study, the predictive accuracy of this equation was evaluated using data from a second set of panoramic radiographs (539 Thai subjects, 9 to 24 years old). Each subject's age was estimated using the above equation and compared against age calculated from a provided date of birth. Estimated and known age data were analyzed using the Pearson correlation coefficient and descriptive statistics. Results Ages estimated from lower left and lower right third molar development stage were significantly correlated with the known ages (r=0.818, 0.808, respectively, P≤0.01). 50% of age estimates in the second part of the study fell within a range of error of ±1 year, while 75% fell within a range of error of ±2 years. The study found that the equation tends to estimate age accurately when individuals are 9 to 20 years of age. Conclusion The equation can be used for age estimation for Thai populations when the individuals are 9 to 20 years of age. PMID:27051633
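
    Applying the study's regression is straightforward; the sketch below assumes sex is coded 1 for male and 0 for female, which the abstract does not state explicitly:

        def estimated_age(modified_gat_score: float, male: bool) -> float:
            """Age estimate (years) from the study's equation Y = 9.309 + 1.673*mG + 0.303*S."""
            s = 1 if male else 0  # assumed coding of sex (not given in the abstract)
            return 9.309 + 1.673 * modified_gat_score + 0.303 * s

        print(round(estimated_age(5, male=True), 1))  # 9.309 + 8.365 + 0.303 = ~18.0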

  6. Accuracy of an equation for estimating age from mandibular third molar development in a Thai population.

    PubMed

    Verochana, Karune; Prapayasatok, Sangsom; Janhom, Apirum; Mahasantipiya, Phattaranant May; Korwanich, Narumanas

    2016-03-01

    This study assessed the accuracy of age estimates produced by a regression equation derived from lower third molar development in a Thai population. The first part of this study relied on measurements taken from panoramic radiographs of 614 Thai patients aged from 9 to 20. The stage of lower left and right third molar development was observed in each radiograph and a modified Gat score was assigned. Linear regression on this data produced the following equation: Y=9.309+1.673 mG+0.303S (Y=age; mG=modified Gat score; S=sex). In the second part of this study, the predictive accuracy of this equation was evaluated using data from a second set of panoramic radiographs (539 Thai subjects, 9 to 24 years old). Each subject's age was estimated using the above equation and compared against age calculated from a provided date of birth. Estimated and known age data were analyzed using the Pearson correlation coefficient and descriptive statistics. Ages estimated from lower left and lower right third molar development stage were significantly correlated with the known ages (r=0.818, 0.808, respectively, P≤0.01). 50% of age estimates in the second part of the study fell within a range of error of ±1 year, while 75% fell within a range of error of ±2 years. The study found that the equation tends to estimate age accurately when individuals are 9 to 20 years of age. The equation can be used for age estimation for Thai populations when the individuals are 9 to 20 years of age.

  7. Chronic Kidney Disease Epidemiology Collaboration versus Modification of Diet in Renal Disease equations for renal function evaluation in patients undergoing partial nephrectomy.

    PubMed

    Shikanov, Sergey; Clark, Melanie A; Raman, Jay D; Smith, Benjamin; Kaag, Matthew; Russo, Paul; Wheat, Jeffrey C; Wolf, J Stuart; Huang, William C; Shalhav, Arieh L; Eggener, Scott E

    2010-11-01

    A novel equation, the Chronic Kidney Disease Epidemiology Collaboration equation, has been proposed to replace the Modification of Diet in Renal Disease equation for estimating glomerular filtration rate due to its higher accuracy, particularly in the setting of normal renal function. We compared these equations in patients with 2 functioning kidneys undergoing partial nephrectomy. We assembled a cohort of 1,158 patients from 5 institutions who underwent partial nephrectomy between 1991 and 2009. Only subjects with 2 functioning kidneys were included in the study. The end points were baseline estimated glomerular filtration rate, last followup estimated glomerular filtration rate (3 to 18 months), absolute and percent change in estimated glomerular filtration rate ([absolute change/baseline] × 100%), and the proportion of newly developed chronic kidney disease stage III. The agreement between the equations was evaluated using Bland-Altman plots and the McNemar test for paired observations. Mean baseline estimated glomerular filtration rates derived from the Modification of Diet in Renal Disease and Chronic Kidney Disease Epidemiology Collaboration equations were 73 and 77 ml/minute/1.73 m2, respectively, and following surgery were 63 and 67 ml/minute/1.73 m2, respectively. Mean percent change in estimated glomerular filtration rate was -12% for both equations (p = 0.2). The proportion of patients with newly developed chronic kidney disease stage III following surgery was 32% and 25% according to the Modification of Diet in Renal Disease and Chronic Kidney Disease Epidemiology Collaboration equations, respectively (p = 0.001). For patients with 2 functioning kidneys undergoing partial nephrectomy, the Chronic Kidney Disease Epidemiology Collaboration equation provides slightly higher glomerular filtration rate estimates compared to the Modification of Diet in Renal Disease equation, with 7% fewer patients categorized as having chronic kidney disease stage III or worse. Copyright © 2010 American Urological Association Education and Research, Inc. Published by Elsevier Inc. All rights reserved.

  8. National scale biomass estimators for United States tree species

    Treesearch

    Jennifer C. Jenkins; David C. Chojnacky; Linda S. Heath; Richard A. Birdsey

    2003-01-01

    Estimates of national-scale forest carbon (C) stocks and fluxes are typically based on allometric regression equations developed using dimensional analysis techniques. However, the literature is inconsistent and incomplete with respect to large-scale forest C estimation. We compiled all available diameter-based allometric regression equations for estimating total...

  9. Twig and foliar biomass estimation equations for major plant species in the Tanana River Basin of interior Alaska.

    Treesearch

    John Yarie; Bert R. Mead

    1988-01-01

    Equations are presented for estimating the twig, foliage, and combined biomass for 58 plant species in interior Alaska. The equations can be used for estimating biomass from percentage of foliar cover of 10-centimeter layers in a vertical profile from 0 to 6 meters. Few differences were found in regressions of the same species between layers except when the ratio of...

  10. Weight estimation techniques for composite airplanes in general aviation industry

    NASA Technical Reports Server (NTRS)

    Paramasivam, T.; Horn, W. J.; Ritter, J.

    1986-01-01

    Currently available weight estimation methods for general aviation airplanes were investigated. New equations with explicit material properties were developed for the weight estimation of aircraft components such as wing, fuselage and empennage. Regression analysis was applied to the basic equations for a data base of twelve airplanes to determine the coefficients. The resulting equations can be used to predict the component weights of either metallic or composite airplanes.

  11. Journal: A Review of Some Tracer-Test Design Equations for ...

    EPA Pesticide Factsheets

    Necessary tracer mass, initial sample-collection time, and subsequent sample-collection frequency are the three most difficult aspects to estimate for a proposed tracer test prior to conducting it. To facilitate tracer-mass estimation, 33 mass-estimation equations are reviewed here, 32 of which were evaluated using previously published tracer-test design examination parameters. Comparison of the results produced a wide range of estimated tracer mass, but no means is available by which one equation may be reasonably selected over the others. Each equation produces a simple approximation for tracer mass. Most of the equations are based primarily on estimates or measurements of discharge, transport distance, and suspected transport times. Although the basic field parameters commonly employed are appropriate for estimating tracer mass, the 33 equations are problematic in that they were all probably based on the original developers' experience in a particular field area and not necessarily on measured hydraulic parameters or solute-transport theory. Suggested sampling frequencies are typically based primarily on probable transport distance, but with little regard to expected travel times. This too is problematic in that it tends to result in false negatives or data aliasing. Simulations from the recently developed efficient hydrologic tracer-test design methodology (EHTD) were compared with those obtained from 32 of the 33 published tracer-

  12. A one-step method for modelling longitudinal data with differential equations.

    PubMed

    Hu, Yueqin; Treinen, Raymond

    2018-04-06

    Differential equation models are frequently used to describe non-linear trajectories of longitudinal data. This study proposes a new approach to estimate the parameters in differential equation models. Instead of estimating derivatives from the observed data first and then fitting a differential equation to the derivatives, our new approach directly fits the analytic solution of a differential equation to the observed data, and therefore simplifies the procedure and avoids bias from derivative estimations. A simulation study indicates that the analytic solutions of differential equations (ASDE) approach obtains unbiased estimates of parameters and their standard errors. Compared with other approaches that estimate derivatives first, ASDE has smaller standard error, larger statistical power and accurate Type I error. Although ASDE obtains biased estimation when the system has sudden phase change, the bias is not serious and a solution is also provided to solve the phase problem. The ASDE method is illustrated and applied to a two-week study on consumers' shopping behaviour after a sale promotion, and to a set of public data tracking participants' grammatical facial expression in sign language. R codes for ASDE, recommendations for sample size and starting values are provided. Limitations and several possible expansions of ASDE are also discussed. © 2018 The British Psychological Society.
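
    A minimal sketch of the core idea, fitting a differential equation's analytic solution directly to noisy observations rather than estimating derivatives first, using the logistic equation as a stand-in model (generic illustration, not the authors' ASDE code):

        import numpy as np
        from scipy.optimize import curve_fit

        def logistic_solution(t, k, r, x0):
            """Analytic solution of dx/dt = r*x*(1 - x/k) with x(0) = x0."""
            return k / (1.0 + (k / x0 - 1.0) * np.exp(-r * t))

        # simulated noisy observations of a logistic trajectory
        rng = np.random.default_rng(1)
        t = np.linspace(0, 10, 40)
        y = logistic_solution(t, k=100.0, r=0.8, x0=5.0) + rng.normal(0, 2.0, t.size)

        # fit the analytic solution directly to the data (no derivative-estimation step)
        params, cov = curve_fit(logistic_solution, t, y, p0=[80.0, 0.5, 2.0])
        print(params)                # estimates of k, r, x0
        print(np.sqrt(np.diag(cov))) # their standard errors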

  13. Use and interpretation of logistic regression in habitat-selection studies

    USGS Publications Warehouse

    Keating, Kim A.; Cherry, Steve

    2004-01-01

     Logistic regression is an important tool for wildlife habitat-selection studies, but the method frequently has been misapplied due to an inadequate understanding of the logistic model, its interpretation, and the influence of sampling design. To promote better use of this method, we review its application and interpretation under 3 sampling designs: random, case-control, and use-availability. Logistic regression is appropriate for habitat use-nonuse studies employing random sampling and can be used to directly model the conditional probability of use in such cases. Logistic regression also is appropriate for studies employing case-control sampling designs, but careful attention is required to interpret results correctly. Unless bias can be estimated or probability of use is small for all habitats, results of case-control studies should be interpreted as odds ratios, rather than probability of use or relative probability of use. When data are gathered under a use-availability design, logistic regression can be used to estimate approximate odds ratios if probability of use is small, at least on average. More generally, however, logistic regression is inappropriate for modeling habitat selection in use-availability studies. In particular, using logistic regression to fit the exponential model of Manly et al. (2002:100) does not guarantee maximum-likelihood estimates, valid probabilities, or valid likelihoods. We show that the resource selection function (RSF) commonly used for the exponential model is proportional to a logistic discriminant function. Thus, it may be used to rank habitats with respect to probability of use and to identify important habitat characteristics or their surrogates, but it is not guaranteed to be proportional to probability of use. Other problems associated with the exponential model also are discussed. We describe an alternative model based on Lancaster and Imbens (1996) that offers a method for estimating conditional probability of use in use-availability studies. Although promising, this model fails to converge to a unique solution in some important situations. Further work is needed to obtain a robust method that is broadly applicable to use-availability studies.
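
    For reference, the two models contrasted above can be written (in standard form, not quoted from the paper) as the logistic model for probability of use and the exponential resource selection function:

        \Pr(y = 1 \mid x) = \frac{\exp(\beta_0 + \beta^{\mathsf T} x)}{1 + \exp(\beta_0 + \beta^{\mathsf T} x)},
        \qquad w(x) = \exp(\beta^{\mathsf T} x),

    where, as the authors note, w(x) fitted under a use-availability design is proportional to a logistic discriminant function and so can rank habitats, but it is not guaranteed to be proportional to the probability of use.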

  14. Estimating Glomerular Filtration Rate in Kidney Transplant Recipients: Comparing a Novel Equation With Commonly Used Equations in this Population

    PubMed Central

    Salvador, Cathrin L.; Hartmann, Anders; Åsberg, Anders; Bergan, Stein; Rowe, Alexander D.; Mørkrid, Lars

    2017-01-01

    Background Assessment of glomerular filtration rate (GFR) is important in kidney transplantation. The aim was to develop a kidney-transplant-specific equation for estimating GFR and to evaluate it against published equations commonly used for GFR estimation in these patients. Methods Adult kidney recipients (n = 594) were included, and blood samples were collected 10 weeks posttransplant. GFR was measured by 51Cr-ethylenediaminetetraacetic acid clearance. Patients were randomized into a reference group (n = 297) to generate a new equation and a test group (n = 297) for comparing it with 7 alternative equations. Results Two thirds of the test group were males. The median (2.5-97.5 percentile) age was 52 (23-75) years; cystatin C, 1.63 (1.00-3.04) mg/L; creatinine, 117 (63-220) μmol/L; and measured GFR, 51 (29-78) mL/min per 1.73 m2. We also performed an external evaluation in 133 recipients without the use of trimethoprim, using iohexol clearance for measured GFR. The Modification of Diet in Renal Disease equation was the most accurate of the creatinine equations. The new equation, eGFR = 991.15 × 1.120^sex / (age^0.097 × cystatin C^0.306 × creatinine^0.527), where sex is coded 0 for female and 1 for male, demonstrated better accuracy with low bias as well as good precision compared with the reference equations. Trimethoprim did not influence the performance of the new equation. Conclusions The new equation demonstrated superior accuracy, precision, and low bias. The Modification of Diet in Renal Disease equation was the most accurate of the creatinine-based equations. PMID:29536033
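
    The new equation can be applied directly; the sketch below assumes the units used in the study's reporting (age in years, cystatin C in mg/L, creatinine in μmol/L), which is an inference from the abstract rather than a stated specification:

        def egfr_transplant(age_yr: float, cys_c_mg_l: float, creat_umol_l: float, male: bool) -> float:
            """Kidney-transplant-specific eGFR from the abstract:
            eGFR = 991.15 * 1.120^sex / (age^0.097 * cysC^0.306 * creat^0.527), sex: 0 female, 1 male."""
            sex = 1 if male else 0
            return 991.15 * (1.120 ** sex) / (age_yr ** 0.097 * cys_c_mg_l ** 0.306 * creat_umol_l ** 0.527)

        # median test-group values: age 52 y, cystatin C 1.63 mg/L, creatinine 117 umol/L, male
        print(round(egfr_transplant(52, 1.63, 117, male=True), 1))  # ~53, close to the median measured GFR of 51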

  15. 78 FR 54242 - 36(b)(1) Arms Sales Notification

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-09-03

    ... elements of logistical and program support. The estimated cost is $1.2 billion. This proposed sale will... support services, and other related elements of logistical and program support. (iv) Military Department... logistical support to sustain the combat and operational readiness of its existing aircraft fleet. The...

  16. Estimating mean change in population salt intake using spot urine samples.

    PubMed

    Petersen, Kristina S; Wu, Jason H Y; Webster, Jacqui; Grimes, Carley; Woodward, Mark; Nowson, Caryl A; Neal, Bruce

    2017-10-01

    Spot urine samples are easier to collect than 24-h urine samples and have been used with estimating equations to derive the mean daily salt intake of a population. Whether equations using data from spot urine samples can also be used to estimate change in mean daily population salt intake over time is unknown. We compared estimates of change in mean daily population salt intake based upon 24-h urine collections with estimates derived using equations based on spot urine samples. Paired and unpaired 24-h urine samples and spot urine samples were collected from individuals in two Australian populations, in 2011 and 2014. Estimates of change in daily mean population salt intake between 2011 and 2014 were obtained directly from the 24-h urine samples and by applying established estimating equations (Kawasaki, Tanaka, Mage, Toft, INTERSALT) to the data from spot urine samples. Differences between 2011 and 2014 were calculated using mixed models. A total of 1000 participants provided a 24-h urine sample and a spot urine sample in 2011, and 1012 did so in 2014 (paired samples n = 870; unpaired samples n = 1142). The participants were community-dwelling individuals living in the State of Victoria or the town of Lithgow in the State of New South Wales, Australia, with a mean age of 55 years in 2011. The mean (95% confidence interval) difference in population salt intake between 2011 and 2014 determined from the 24-h urine samples was -0.48g/day (-0.74 to -0.21; P < 0.001). The corresponding result estimated from the spot urine samples was -0.24 g/day (-0.42 to -0.06; P = 0.01) using the Tanaka equation, -0.42 g/day (-0.70 to -0.13; p = 0.004) using the Kawasaki equation, -0.51 g/day (-1.00 to -0.01; P = 0.046) using the Mage equation, -0.26 g/day (-0.42 to -0.10; P = 0.001) using the Toft equation, -0.20 g/day (-0.32 to -0.09; P = 0.001) using the INTERSALT equation and -0.27 g/day (-0.39 to -0.15; P < 0.001) using the INTERSALT equation with potassium. There was no evidence that the changes detected by the 24-h collections and estimating equations were different (all P > 0.058). Separate analysis of the unpaired and paired data showed that detection of change by the estimating equations was observed only in the paired data. All the estimating equations based upon spot urine samples identified a similar change in daily salt intake to that detected by the 24-h urine samples. Methods based upon spot urine samples may provide an approach to measuring change in mean population salt intake, although further investigation in larger and more diverse population groups is required. © The Author 2016; all rights reserved. Published by Oxford University Press on behalf of the International Epidemiological Association

  17. Comparative evaluation of urban storm water quality models

    NASA Astrophysics Data System (ADS)

    Vaze, J.; Chiew, Francis H. S.

    2003-10-01

    The estimation of urban storm water pollutant loads is required for the development of mitigation and management strategies to minimize impacts to receiving environments. Event pollutant loads are typically estimated using either regression equations or "process-based" water quality models. The relative merit of using regression models compared to process-based models is not clear. A modeling study is carried out here to evaluate the comparative ability of the regression equations and process-based water quality models to estimate event diffuse pollutant loads from impervious surfaces. The results indicate that, once calibrated, both the regression equations and the process-based model can estimate event pollutant loads satisfactorily. In fact, the loads estimated using the regression equation as a function of rainfall intensity and runoff rate are better than the loads estimated using the process-based model. Therefore, if only estimates of event loads are required, regression models should be used because they are simpler and require less data compared to process-based models.

  18. Selection of Common Items as an Unrecognized Source of Variability in Test Equating: A Bootstrap Approximation Assuming Random Sampling of Common Items

    ERIC Educational Resources Information Center

    Michaelides, Michalis P.; Haertel, Edward H.

    2014-01-01

    The standard error of equating quantifies the variability in the estimation of an equating function. Because common items for deriving equated scores are treated as fixed, the only source of variability typically considered arises from the estimation of common-item parameters from responses of samples of examinees. Use of alternative, equally…

  19. Comparison of risk prediction using the CKD-EPI equation and the MDRD study equation for estimated glomerular filtration rate.

    PubMed

    Matsushita, Kunihiro; Mahmoodi, Bakhtawar K; Woodward, Mark; Emberson, Jonathan R; Jafar, Tazeen H; Jee, Sun Ha; Polkinghorne, Kevan R; Shankar, Anoop; Smith, David H; Tonelli, Marcello; Warnock, David G; Wen, Chi-Pang; Coresh, Josef; Gansevoort, Ron T; Hemmelgarn, Brenda R; Levey, Andrew S

    2012-05-09

    The Chronic Kidney Disease Epidemiology Collaboration (CKD-EPI) equation more accurately estimates glomerular filtration rate (GFR) than the Modification of Diet in Renal Disease (MDRD) Study equation using the same variables, especially at higher GFR, but definitive evidence of its risk implications in diverse settings is lacking. To evaluate risk implications of estimated GFR using the CKD-EPI equation compared with the MDRD Study equation in populations with a broad range of demographic and clinical characteristics. A meta-analysis of data from 1.1 million adults (aged ≥ 18 years) from 25 general population cohorts, 7 high-risk cohorts (of vascular disease), and 13 CKD cohorts. Data transfer and analyses were conducted between March 2011 and March 2012. All-cause mortality (84,482 deaths from 40 cohorts), cardiovascular mortality (22,176 events from 28 cohorts), and end-stage renal disease (ESRD) (7644 events from 21 cohorts) during 9.4 million person-years of follow-up; the median of mean follow-up time across cohorts was 7.4 years (interquartile range, 4.2-10.5 years). Estimated GFR was classified into 6 categories (≥90, 60-89, 45-59, 30-44, 15-29, and <15 mL/min/1.73 m(2)) by both equations. Compared with the MDRD Study equation, 24.4% and 0.6% of participants from general population cohorts were reclassified to a higher and lower estimated GFR category, respectively, by the CKD-EPI equation, and the prevalence of CKD stages 3 to 5 (estimated GFR <60 mL/min/1.73 m(2)) was reduced from 8.7% to 6.3%. In estimated GFR of 45 to 59 mL/min/1.73 m(2) by the MDRD Study equation, 34.7% of participants were reclassified to estimated GFR of 60 to 89 mL/min/1.73 m(2) by the CKD-EPI equation and had lower incidence rates (per 1000 person-years) for the outcomes of interest (9.9 vs 34.5 for all-cause mortality, 2.7 vs 13.0 for cardiovascular mortality, and 0.5 vs 0.8 for ESRD) compared with those not reclassified. The corresponding adjusted hazard ratios were 0.80 (95% CI, 0.74-0.86) for all-cause mortality, 0.73 (95% CI, 0.65-0.82) for cardiovascular mortality, and 0.49 (95% CI, 0.27-0.88) for ESRD. Similar findings were observed in other estimated GFR categories by the MDRD Study equation. Net reclassification improvement based on estimated GFR categories was significantly positive for all outcomes (range, 0.06-0.13; all P < .001). Net reclassification improvement was similarly positive in most subgroups defined by age (<65 years and ≥65 years), sex, race/ethnicity (white, Asian, and black), and presence or absence of diabetes and hypertension. The results in the high-risk and CKD cohorts were largely consistent with the general population cohorts. The CKD-EPI equation classified fewer individuals as having CKD and more accurately categorized the risk for mortality and ESRD than did the MDRD Study equation across a broad range of populations.

  20. Operationally Responsive Space (ORS): An Architecture and Enterprise Model for Adaptive Integration, Test and Logistics

    DTIC Science & Technology

    2008-06-01

    p, is the value that satisfies the following equation: (1) \sum_{x=0}^{c} \binom{n}{x} p^{x} q^{n-x} = 1 - \gamma, where γ = confidence level...zero failures is therefore, using the above equation, where γ = 0.5, c = 0: (2) \sum_{x=0}^{0} \binom{n}{x} p^{x} q^{n-x} = 1 - 0.5, which reduces

  1. Modeling animal movements using stochastic differential equations

    Treesearch

    Haiganoush K. Preisler; Alan A. Ager; Bruce K. Johnson; John G. Kie

    2004-01-01

    We describe the use of bivariate stochastic differential equations (SDE) for modeling movements of 216 radiocollared female Rocky Mountain elk at the Starkey Experimental Forest and Range in northeastern Oregon. Spatially and temporally explicit vector fields were estimated using approximating difference equations and nonparametric regression techniques. Estimated...
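
    A generic sketch of the kind of model described above, simulating a two-dimensional location process from a stochastic differential equation with the Euler-Maruyama scheme (a toy drift field pulling toward a point of attraction; not the authors' estimated elk model):

        import numpy as np

        def simulate_sde(drift, sigma, x0, dt=0.1, n_steps=500, seed=0):
            """Euler-Maruyama simulation of dX_t = drift(X_t) dt + sigma dW_t for a 2-D location."""
            rng = np.random.default_rng(seed)
            x = np.empty((n_steps + 1, 2))
            x[0] = x0
            for k in range(n_steps):
                dw = rng.normal(0.0, np.sqrt(dt), size=2)
                x[k + 1] = x[k] + drift(x[k]) * dt + sigma * dw
            return x

        # toy drift field pulling the animal toward the origin
        path = simulate_sde(drift=lambda p: -0.2 * p, sigma=1.0, x0=np.array([10.0, -5.0]))
        print(path[-1])  # final simulated location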

  2. Maneuver Estimation Model for Geostationary Orbit Determination

    DTIC Science & Technology

    2006-06-01

    create a more robust model which would reduce the amount of data needed to make accurate maneuver estimations. The Clohessy-Wiltshire equations were... Report sections cover applications to geostationary satellites and the application of the Clohessy-Wiltshire equations.
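
    For background, the Clohessy-Wiltshire (Hill's) equations referred to above describe linearized relative motion about a circular reference orbit with mean motion n (standard form, reproduced here as context rather than from the report), with x radial, y along-track and z cross-track:

        \ddot{x} - 2n\dot{y} - 3n^{2}x = 0, \qquad
        \ddot{y} + 2n\dot{x} = 0, \qquad
        \ddot{z} + n^{2}z = 0.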

  3. Estimating equations estimates of trends

    USGS Publications Warehouse

    Link, W.A.; Sauer, J.R.

    1994-01-01

    The North American Breeding Bird Survey monitors changes in bird populations through time using annual counts at fixed survey sites. The usual method of estimating trends has been to use the logarithm of the counts in a regression analysis. It is contended that this procedure is reasonably satisfactory for more abundant species, but produces biased estimates for less abundant species. An alternative estimation procedure based on estimating equations is presented.

  4. Body mass and stature estimation based on the first metatarsal in humans.

    PubMed

    De Groote, Isabelle; Humphrey, Louise T

    2011-04-01

    Archaeological assemblages often lack the complete long bones needed to estimate stature and body mass. The most accurate estimates of body mass and stature are produced using femoral head diameter and femur length. Foot bones including the first metatarsal preserve relatively well in a range of archaeological contexts. In this article we present regression equations using the first metatarsal to estimate femoral head diameter, femoral length, and body mass in a diverse human sample. The skeletal sample comprised 87 individuals (Andamanese, Australasians, Africans, Native Americans, and British). Results show that all first metatarsal measurements correlate moderately to highly (r = 0.62-0.91) with femoral head diameter and length. The proximal articular dorsoplantar diameter is the best single measurement to predict both femoral dimensions. Percent standard errors of the estimate are below 5%. Equations using two metatarsal measurements show a small increase in accuracy. Direct estimations of body mass (calculated from measured femoral head diameter using previously published equations) have an error of just over 7%. No direct stature estimation equations were derived due to the varied linear body proportions represented in the sample. The equations were tested on a sample of 35 individuals from Christ Church Spitalfields. Percentage differences in estimated and measured femoral head diameter and length were less than 1%. This study demonstrates that it is feasible to use the first metatarsal in the estimation of body mass and stature. The equations presented here are particularly useful for assemblages where the long bones are either missing or fragmented, and enable estimation of these fundamental population parameters in poorly preserved assemblages. Copyright © 2011 Wiley-Liss, Inc.

  5. Kato Smoothing and Strichartz Estimates for Wave Equations with Magnetic Potentials

    NASA Astrophysics Data System (ADS)

    D'Ancona, Piero

    2015-04-01

    Let H be a selfadjoint operator and A a closed operator on a Hilbert space. If A is H-(super)smooth in the sense of Kato-Yajima, we prove that is -(super)smooth. This allows us to include wave and Klein-Gordon equations in the abstract theory at the same level of generality as Schrödinger equations. We give a few applications and in particular, based on the resolvent estimates of Erdogan, Goldberg and Schlag (Forum Mathematicum 21:687-722, 2009), we prove Strichartz estimates for wave equations perturbed with large magnetic potentials on R^n, n ≥ 3.

  6. Developing a generalized allometric equation for aboveground biomass estimation

    NASA Astrophysics Data System (ADS)

    Xu, Q.; Balamuta, J. J.; Greenberg, J. A.; Li, B.; Man, A.; Xu, Z.

    2015-12-01

    A key potential uncertainty in estimating carbon stocks across multiple scales stems from the use of empirically calibrated allometric equations, which estimate aboveground biomass (AGB) from plant characteristics such as diameter at breast height (DBH) and/or height (H). The equations themselves contain significant and, at times, poorly characterized errors. Species-specific equations may be missing. Plant responses to their local biophysical environment may lead to spatially varying allometric relationships. The structural predictor may be difficult or impossible to measure accurately, particularly when derived from remote sensing data. All of these issues may lead to significant and spatially varying uncertainties in the estimation of AGB that are unexplored in the literature. We sought to quantify the errors in predicting AGB at the tree and plot level for vegetation plots in California. To accomplish this, we derived a generalized allometric equation (GAE) which we used to model AGB as a function of a full set of tree information such as DBH, H, taxonomy, and biophysical environment. The GAE was derived using published allometric equations in the GlobAllomeTree database. These equations are sparse in detail about their errors, since authors generally provide only the coefficient of determination (R2) and the sample size. A more realistic simulation of tree AGB should also contain the noise that was not captured by the allometric equation, so we derived an empirically corrected variance estimate for the amount of noise to represent the errors in the real biomass. We also accounted for the hierarchical relationship between different species by treating each taxonomic level as a covariate nested within a higher taxonomic level (e.g. species < genus). This approach provides estimation under incomplete tree information (e.g. missing species) or blurred information (e.g. conjecture of species), while also incorporating the biophysical environment. The GAE allowed us to quantify the contribution of each covariate to the estimated AGB of trees. Lastly, we applied the GAE to an existing vegetation plot database - the Forest Inventory and Analysis database - to derive per-tree and per-plot AGB estimates, their errors, and how much of the error can be attributed to the original equations, the plant's taxonomy, and the biophysical environment.
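
    The GAE itself is not reproduced in this record, but the core idea of a pooled log-log allometric fit with a taxonomic covariate and an added residual-noise term can be sketched as follows; the data, column names, and coefficients are hypothetical, not values from the study or from GlobAllomeTree.

    ```python
    # Minimal sketch of a pooled log-log allometric fit with a taxonomic covariate.
    # All data, names, and coefficients here are hypothetical illustrations, not the
    # GAE published by the authors or values from GlobAllomeTree.
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(0)
    n = 200
    df = pd.DataFrame({
        "dbh_cm": rng.uniform(5, 80, n),                       # diameter at breast height
        "height_m": rng.uniform(3, 40, n),
        "genus": rng.choice(["Pinus", "Quercus", "Abies"], n)  # coarse taxonomic covariate
    })
    # Simulate AGB with a genus-specific multiplier and lognormal noise.
    genus_effect = df["genus"].map({"Pinus": 0.9, "Quercus": 1.1, "Abies": 1.0})
    df["agb_kg"] = (0.05 * df["dbh_cm"] ** 2.2 * df["height_m"] ** 0.8
                    * genus_effect * rng.lognormal(0, 0.3, n))

    # Pooled log-log model: ln(AGB) ~ ln(DBH) + ln(H) + genus.
    fit = smf.ols("np.log(agb_kg) ~ np.log(dbh_cm) + np.log(height_m) + C(genus)",
                  data=df).fit()
    print(fit.params)

    # Residual variance on the log scale stands in for the "noise not captured by
    # the allometric equation"; add it back when simulating realistic per-tree AGB.
    sigma2 = fit.mse_resid
    pred_log = fit.predict(df)
    simulated_agb = np.exp(pred_log + rng.normal(0, np.sqrt(sigma2), n))
    ```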

  7. Reverse bifurcation and fractal of the compound logistic map

    NASA Astrophysics Data System (ADS)

    Wang, Xingyuan; Liang, Qingyong

    2008-07-01

    The nature of the fixed points of the compound logistic map is investigated, and the boundary equation of the first bifurcation of the map in parameter space is given. Using quantitative criteria and rules for chaotic systems, the paper reveals the general features of the compound logistic map as it passes from regularity to chaos and reaches the following conclusions: (1) chaotic patterns of the map may emerge out of period-doubling bifurcation, and (2) chaotic crisis phenomena and reverse bifurcation are found. We also analyze the orbit of the critical point of the compound logistic map and put forward a definition of the Mandelbrot-Julia set of the compound logistic map. We generalize Welstead and Cromer's periodic scanning technique and use it to construct a series of Mandelbrot-Julia sets of the compound logistic map. We investigate the symmetry of the Mandelbrot-Julia set and study the topological inflexibility of the distribution of period regions in the Mandelbrot set, and find that the Mandelbrot set contains abundant information about the structure of the Julia sets, obtained by building a qualitative portrait of the Julia sets from the Mandelbrot set.

  8. Careful with Those Priors: A Note on Bayesian Estimation in Two-Parameter Logistic Item Response Theory Models

    ERIC Educational Resources Information Center

    Marcoulides, Katerina M.

    2018-01-01

    This study examined the use of Bayesian analysis methods for the estimation of item parameters in a two-parameter logistic item response theory model. Using simulated data under various design conditions with both informative and non-informative priors, the parameter recovery of Bayesian analysis methods was examined. Overall results showed that…

  9. Asymptotic Properties of Induced Maximum Likelihood Estimates of Nonlinear Models for Item Response Variables: The Finite-Generic-Item-Pool Case.

    ERIC Educational Resources Information Center

    Jones, Douglas H.

    The progress of modern mental test theory depends very much on the techniques of maximum likelihood estimation, and many popular applications make use of likelihoods induced by logistic item response models. While, in reality, item responses are nonreplicate within a single examinee and the logistic models are only ideal, practitioners make…

  10. Ability Estimation and Item Calibration Using the One and Three Parameter Logistic Models: A Comparative Study. Research Report 77-1.

    ERIC Educational Resources Information Center

    Reckase, Mark D.

    Latent trait model calibration procedures were used on data obtained from a group testing program. The one-parameter model of Wright and Panchapakesan and the three-parameter logistic model of Wingersky, Wood, and Lord were selected for comparison. These models and their corresponding estimation procedures were compared, using actual and simulated…

  11. Estimation of Logistic Regression Models in Small Samples. A Simulation Study Using a Weakly Informative Default Prior Distribution

    ERIC Educational Resources Information Center

    Gordovil-Merino, Amalia; Guardia-Olmos, Joan; Pero-Cebollero, Maribel

    2012-01-01

    In this paper, we used simulations to compare the performance of classical and Bayesian estimations in logistic regression models using small samples. In the performed simulations, conditions were varied, including the type of relationship between independent and dependent variable values (i.e., unrelated and related values), the type of variable…

  12. 77 FR 53180 - 36(b)(1) Arms Sales Notification

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-08-31

    ... logistical and program support. (iv) Military Department: Air Force (CCZ, Amd 7). (v) Prior Related Cases, if.... Government and contractor technical and logistics support services; and other related elements of logistical and program support. The estimated cost is $850 million. This proposed sale will contribute to the...

  13. Logistics Company Carrier Partner 2.0.15 Tool: Technical Documentation 2015 Data Year - United States Version

    EPA Pesticide Factsheets

    This SmartWay Logistics 2.0.15 Tool is intended to help logistics companies estimate and assess their emission performance levels as well as their total emissions associated with goods movement in the U.S. freight rail, barge, air and t

  14. Complicated asymptotic behavior of solutions for porous medium equation in unbounded space

    NASA Astrophysics Data System (ADS)

    Wang, Liangwei; Yin, Jingxue; Zhou, Yong

    2018-05-01

    In this paper, we find that the unbounded spaces Yσ(RN) (0 < σ < 2/(m-1)) can provide the work spaces where complicated asymptotic behavior appears in the solutions of the Cauchy problem of the porous medium equation. To overcome the difficulties caused by the nonlinearity of the equation and the unbounded solutions, we establish the propagation estimates, the growth estimates and the weighted L1-L∞ estimates for the solutions.

  15. A Note on Structural Equation Modeling Estimates of Reliability

    ERIC Educational Resources Information Center

    Yang, Yanyun; Green, Samuel B.

    2010-01-01

    Reliability can be estimated using structural equation modeling (SEM). Two potential problems with this approach are that estimates may be unstable with small sample sizes and biased with misspecified models. A Monte Carlo study was conducted to investigate the quality of SEM estimates of reliability by themselves and relative to coefficient…

  16. A Polychoric Instrumental Variable (PIV) Estimator for Structural Equation Models with Categorical Variables

    ERIC Educational Resources Information Center

    Bollen, Kenneth A.; Maydeu-Olivares, Albert

    2007-01-01

    This paper presents a new polychoric instrumental variable (PIV) estimator to use in structural equation models (SEMs) with categorical observed variables. The PIV estimator is a generalization of Bollen's (Psychometrika 61:109-121, 1996) 2SLS/IV estimator for continuous variables to categorical endogenous variables. We derive the PIV estimator…

  17. Numerical discretization-based estimation methods for ordinary differential equation models via penalized spline smoothing with applications in biomedical research.

    PubMed

    Wu, Hulin; Xue, Hongqi; Kumar, Arun

    2012-06-01

    Differential equations are extensively used for modeling dynamics of physical processes in many scientific fields such as engineering, physics, and biomedical sciences. Parameter estimation of differential equation models is a challenging problem because of high computational cost and high-dimensional parameter space. In this article, we propose a novel class of methods for estimating parameters in ordinary differential equation (ODE) models, which is motivated by HIV dynamics modeling. The new methods exploit the form of numerical discretization algorithms for an ODE solver to formulate estimating equations. First, a penalized-spline approach is employed to estimate the state variables and the estimated state variables are then plugged into a discretization formula of an ODE solver to obtain the ODE parameter estimates via a regression approach. We consider three discretization methods of different order: Euler's method, the trapezoidal rule, and a Runge-Kutta method. A higher-order numerical algorithm reduces numerical error in the approximation of the derivative, which produces a more accurate estimate, but its computational cost is higher. To balance the computational cost and estimation accuracy, we demonstrate, via simulation studies, that the trapezoidal discretization-based estimate is the best and is recommended for practical use. The asymptotic properties for the proposed numerical discretization-based estimators are established. Comparisons between the proposed methods and existing methods show a clear benefit of the proposed methods with regard to the trade-off between computational cost and estimation accuracy. We apply the proposed methods to an HIV study to further illustrate the usefulness of the proposed approaches. © 2012, The International Biometric Society.
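
    As a concrete illustration of the discretization-based idea (not the authors' implementation), the sketch below estimates the parameters of a logistic ODE, dx/dt = r*x*(1 - x/K), by smoothing noisy observations and plugging them into trapezoidal-rule estimating equations, which are linear in the transformed parameters. The data and the simple spline smoother are stand-ins for the paper's penalized-spline step.

    ```python
    # Sketch of a trapezoidal discretization-based estimator for the logistic ODE
    # dx/dt = r*x*(1 - x/K), which is linear in (theta1, theta2) = (r, -r/K).
    import numpy as np
    from scipy.interpolate import UnivariateSpline

    rng = np.random.default_rng(1)
    r_true, K_true = 0.8, 10.0
    t = np.linspace(0, 12, 120)
    x_true = K_true / (1 + (K_true - 1) * np.exp(-r_true * t))  # logistic solution, x(0) = 1
    x_obs = x_true + rng.normal(0, 0.15, t.size)                # noisy observations

    # Step 1: smooth the noisy state (stand-in for penalized splines).
    x_hat = UnivariateSpline(t, x_obs, s=t.size * 0.15 ** 2)(t)

    # Step 2: trapezoidal estimating equations,
    #   x_{i+1} - x_i = (dt/2) * [f(x_i) + f(x_{i+1})],  f(x) = theta1*x + theta2*x^2.
    dt = np.diff(t)
    y = np.diff(x_hat)
    X = np.column_stack([
        dt / 2 * (x_hat[:-1] + x_hat[1:]),
        dt / 2 * (x_hat[:-1] ** 2 + x_hat[1:] ** 2),
    ])
    theta1, theta2 = np.linalg.lstsq(X, y, rcond=None)[0]
    r_est, K_est = theta1, -theta1 / theta2
    print(f"r ~ {r_est:.3f}, K ~ {K_est:.2f}")   # should be close to 0.8 and 10
    ```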

  18. The Prevalence and Incidence of Epiretinal Membranes in Eyes With Inactive Extramacular CMV Retinitis

    PubMed Central

    Kozak, Igor; Vaidya, Vijay; Van Natta, Mark L.; Pak, Jeong W.; May, K. Patrick; Thorne, Jennifer E.

    2014-01-01

    Purpose. To determine the prevalence and incidence of epiretinal membranes (ERM) in eyes with inactive extramacular cytomegalovirus (CMV) retinitis in patients with acquired immune deficiency syndrome (AIDS). Methods. A case–control report from a longitudinal multicenter observational study by the Studies of the Ocular Complications of AIDS (SOCA) Research Group. A total of 357 eyes of 270 patients with inactive CMV retinitis and 1084 eyes of 552 patients with no ocular opportunistic infection (OOI) were studied. Stereoscopic views of the posterior pole from fundus photographs were assessed at baseline and year 5 visits for the presence of macular ERM. Generalized estimating equations (GEE) logistic regression was used to compare the prevalence and 5-year incidence of ERM in eyes with and without CMV retinitis at enrollment. Crude and adjusted logistic regression was performed adjusting for possible confounders. Main outcome measures included the prevalence, incidence, estimated prevalence, and incidence odds ratios. Results. The prevalence of ERM at enrollment was 14.8% (53/357) in eyes with CMV retinitis versus 1.8% (19/1084) in eyes with no OOI. The incidence of ERM at 5 years was 18.6% (16/86) in eyes with CMV retinitis versus 2.4% (6/253) in eyes with no OOI. The crude odds ratio (OR) (95% confidence interval, CI) for prevalence was 9.8 (5.5–17.5) (P < 0.01). The crude OR (95% CI) for incidence was 9.4 (3.2–27.9) (P < 0.01). Conclusions. A history of extramacular CMV retinitis is associated with increased prevalence and incidence of ERM formation compared to what is seen in eyes without ocular opportunistic infections in AIDS patients. PMID:24925880

  19. The prevalence and incidence of epiretinal membranes in eyes with inactive extramacular CMV retinitis.

    PubMed

    Kozak, Igor; Vaidya, Vijay; Van Natta, Mark L; Pak, Jeong W; May, K Patrick; Thorne, Jennifer E

    2014-06-12

    To determine the prevalence and incidence of epiretinal membranes (ERM) in eyes with inactive extramacular cytomegalovirus (CMV) retinitis in patients with acquired immune deficiency syndrome (AIDS). A case-control report from a longitudinal multicenter observational study by the Studies of the Ocular Complications of AIDS (SOCA) Research Group. A total of 357 eyes of 270 patients with inactive CMV retinitis and 1084 eyes of 552 patients with no ocular opportunistic infection (OOI) were studied. Stereoscopic views of the posterior pole from fundus photographs were assessed at baseline and year 5 visits for the presence of macular ERM. Generalized estimating equations (GEE) logistic regression was used to compare the prevalence and 5-year incidence of ERM in eyes with and without CMV retinitis at enrollment. Crude and adjusted logistic regression was performed adjusting for possible confounders. Main outcome measures included the prevalence, incidence, estimated prevalence, and incidence odds ratios. The prevalence of ERM at enrollment was 14.8% (53/357) in eyes with CMV retinitis versus 1.8% (19/1084) in eyes with no OOI. The incidence of ERM at 5 years was 18.6% (16/86) in eyes with CMV retinitis versus 2.4% (6/253) in eyes with no OOI. The crude odds ratio (OR) (95% confidence interval, CI) for prevalence was 9.8 (5.5-17.5) (P < 0.01). The crude OR (95% CI) for incidence was 9.4 (3.2-27.9) (P < 0.01). A history of extramacular CMV retinitis is associated with increased prevalence and incidence of ERM formation compared to what is seen in eyes without ocular opportunistic infections in AIDS patients. Copyright 2014 The Association for Research in Vision and Ophthalmology, Inc.

  20. Comparing methods of analysing datasets with small clusters: case studies using four paediatric datasets.

    PubMed

    Marston, Louise; Peacock, Janet L; Yu, Keming; Brocklehurst, Peter; Calvert, Sandra A; Greenough, Anne; Marlow, Neil

    2009-07-01

    Studies of prematurely born infants contain a relatively large percentage of multiple births, so the resulting data have a hierarchical structure with small clusters of size 1, 2 or 3. Ignoring the clustering may lead to incorrect inferences. The aim of this study was to compare statistical methods which can be used to analyse such data: generalised estimating equations, multilevel models, multiple linear regression and logistic regression. Four datasets which differed in total size and in percentage of multiple births (n = 254, multiple 18%; n = 176, multiple 9%; n = 10 098, multiple 3%; n = 1585, multiple 8%) were analysed. With the continuous outcome, two-level models produced similar results in the larger dataset, while generalised least squares multilevel modelling (ML GLS 'xtreg' in Stata) and maximum likelihood multilevel modelling (ML MLE 'xtmixed' in Stata) produced divergent estimates using the smaller dataset. For the dichotomous outcome, most methods, except generalised least squares multilevel modelling (ML GH 'xtlogit' in Stata) gave similar odds ratios and 95% confidence intervals within datasets. For the continuous outcome, our results suggest using multilevel modelling. We conclude that generalised least squares multilevel modelling (ML GLS 'xtreg' in Stata) and maximum likelihood multilevel modelling (ML MLE 'xtmixed' in Stata) should be used with caution when the dataset is small. Where the outcome is dichotomous and there is a relatively large percentage of non-independent data, it is recommended that these are accounted for in analyses using logistic regression with adjusted standard errors or multilevel modelling. If, however, the dataset has a small percentage of clusters greater than size 1 (e.g. a population dataset of children where there are few multiples) there appears to be less need to adjust for clustering.
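
    A minimal sketch of two of the approaches compared above for a dichotomous outcome, using statsmodels: GEE logistic regression with an exchangeable working correlation, and ordinary logistic regression with cluster-adjusted standard errors. The data frame, variable names, and simulated cluster structure are hypothetical, not the study datasets.

    ```python
    # Sketch of GEE logistic regression and logistic regression with cluster-robust
    # standard errors for small clusters (e.g. multiple births). Simulated data only.
    import numpy as np
    import pandas as pd
    import statsmodels.api as sm
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(2)
    n_clusters = 300                                        # mothers; cluster sizes 1-3
    sizes = rng.choice([1, 1, 1, 2, 3], n_clusters)
    cluster = np.repeat(np.arange(n_clusters), sizes)
    u = np.repeat(rng.normal(0, 0.7, n_clusters), sizes)    # shared within-cluster effect
    x = rng.normal(size=cluster.size)                       # e.g. standardized gestational age
    p = 1 / (1 + np.exp(-(-0.5 + 0.8 * x + u)))
    df = pd.DataFrame({"y": rng.binomial(1, p), "x": x, "cluster": cluster})

    # GEE with an exchangeable working correlation within clusters.
    gee = smf.gee("y ~ x", groups="cluster", data=df,
                  family=sm.families.Binomial(),
                  cov_struct=sm.cov_struct.Exchangeable()).fit()

    # Ordinary logistic regression with standard errors adjusted for clustering.
    logit = smf.logit("y ~ x", data=df).fit(
        cov_type="cluster", cov_kwds={"groups": df["cluster"]}, disp=False)

    print(gee.params["x"], gee.bse["x"])
    print(logit.params["x"], logit.bse["x"])
    ```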

  1. A secure distributed logistic regression protocol for the detection of rare adverse drug events

    PubMed Central

    El Emam, Khaled; Samet, Saeed; Arbuckle, Luk; Tamblyn, Robyn; Earle, Craig; Kantarcioglu, Murat

    2013-01-01

    Background There is limited capacity to assess the comparative risks of medications after they enter the market. For rare adverse events, the pooling of data from multiple sources is necessary to have the power and sufficient population heterogeneity to detect differences in safety and effectiveness in genetic, ethnic and clinically defined subpopulations. However, combining datasets from different data custodians or jurisdictions to perform an analysis on the pooled data creates significant privacy concerns that would need to be addressed. Existing protocols for addressing these concerns can result in reduced analysis accuracy and can allow sensitive information to leak. Objective To develop a secure distributed multi-party computation protocol for logistic regression that provides strong privacy guarantees. Methods We developed a secure distributed logistic regression protocol using a single analysis center with multiple sites providing data. A theoretical security analysis demonstrates that the protocol is robust to plausible collusion attacks and does not allow the parties to gain new information from the data that are exchanged among them. The computational performance and accuracy of the protocol were evaluated on simulated datasets. Results The computational performance scales linearly as the dataset sizes increase. The addition of sites results in an exponential growth in computation time. However, for up to five sites, the time is still short and would not affect practical applications. The model parameters are the same as the results on pooled raw data analyzed in SAS, demonstrating high model accuracy. Conclusion The proposed protocol and prototype system would allow the development of logistic regression models in a secure manner without requiring the sharing of personal health information. This can alleviate one of the key barriers to the establishment of large-scale post-marketing surveillance programs. We extended the secure protocol to account for correlations among patients within sites through generalized estimating equations, and to accommodate other link functions by extending it to generalized linear models. PMID:22871397

  2. A secure distributed logistic regression protocol for the detection of rare adverse drug events.

    PubMed

    El Emam, Khaled; Samet, Saeed; Arbuckle, Luk; Tamblyn, Robyn; Earle, Craig; Kantarcioglu, Murat

    2013-05-01

    There is limited capacity to assess the comparative risks of medications after they enter the market. For rare adverse events, the pooling of data from multiple sources is necessary to have the power and sufficient population heterogeneity to detect differences in safety and effectiveness in genetic, ethnic and clinically defined subpopulations. However, combining datasets from different data custodians or jurisdictions to perform an analysis on the pooled data creates significant privacy concerns that would need to be addressed. Existing protocols for addressing these concerns can result in reduced analysis accuracy and can allow sensitive information to leak. To develop a secure distributed multi-party computation protocol for logistic regression that provides strong privacy guarantees. We developed a secure distributed logistic regression protocol using a single analysis center with multiple sites providing data. A theoretical security analysis demonstrates that the protocol is robust to plausible collusion attacks and does not allow the parties to gain new information from the data that are exchanged among them. The computational performance and accuracy of the protocol were evaluated on simulated datasets. The computational performance scales linearly as the dataset sizes increase. The addition of sites results in an exponential growth in computation time. However, for up to five sites, the time is still short and would not affect practical applications. The model parameters are the same as the results on pooled raw data analyzed in SAS, demonstrating high model accuracy. The proposed protocol and prototype system would allow the development of logistic regression models in a secure manner without requiring the sharing of personal health information. This can alleviate one of the key barriers to the establishment of large-scale post-marketing surveillance programs. We extended the secure protocol to account for correlations among patients within sites through generalized estimating equations, and to accommodate other link functions by extending it to generalized linear models.

  3. Maine StreamStats: a water-resources web application

    USGS Publications Warehouse

    Lombard, Pamela J.

    2015-01-01

    Reports referenced in this fact sheet present the regression equations used to estimate the flow statistics, describe the errors associated with the estimates, and describe the methods used to develop the equations and to measure the basin characteristics used in the equations. Limitations of the methods are also described in the reports; for example, all of the equations are appropriate only for ungaged, unregulated, rural streams in Maine.

  4. Estimation of traveltime and longitudinal dispersion in streams in West Virginia

    USGS Publications Warehouse

    Wiley, Jeffrey B.; Messinger, Terence

    2013-01-01

    Traveltime and dispersion data are important for understanding and responding to spills of contaminants in waterways. The U.S. Geological Survey (USGS), in cooperation with West Virginia Bureau for Public Health, Office of Environmental Health Services, compiled and evaluated traveltime and longitudinal dispersion data representative of many West Virginia waterways. Traveltime and dispersion data were not available for streams in the northwestern part of the State. Compiled data were compared with estimates determined from national equations previously published by the USGS. The evaluation summarized procedures and examples for estimating traveltime and dispersion on streams in West Virginia. National equations developed by the USGS can be used to predict traveltime and dispersion for streams located in West Virginia, but the predictions will be less accurate than those made with graphical interpolation between measurements. National equations for peak concentration, velocity of the peak concentration, and traveltime of the leading edge had root mean square errors (RMSE) of 0.426 log units (127 percent), 0.505 feet per second (ft/s), and 3.78 hours (h). West Virginia data fit the national equations for peak concentration, velocity of the peak concentration, and traveltime of the leading edge with RMSE of 0.139 log units (38 percent), 0.630 ft/s, and 3.38 h, respectively. The national equation for maximum possible velocity of the peak concentration exceeded 99 percent and 100 percent of observed values from the national data set and West Virginia-only data set, respectively. No RMSE was reported for time of passage of a dye cloud, as estimated using the national equation; however, the estimates made using the national equations had a root mean square error of 3.82 h when compared to data gathered for this study. Traveltime and dispersion estimates can be made from the plots of traveltime as a function of streamflow and location for streams with plots available, but estimates can be made using the national equations for streams without plots. The estimating procedures are not valid for regulated stream reaches that were not individually studied or streamflows outside the limits studied. Rapidly changing streamflow and inadequate mixing across the stream channel affect traveltime and dispersion, and reduce the accuracy of estimates. Increases in streamflow typically result in decreases in the peak concentration and traveltime of the peak concentration. Decreases in streamflow typically result in increases in the peak concentration and traveltime of the peak concentration. Traveltimes will likely be less than those determined using the estimating equations and procedures if the spill is in the center of the stream, and traveltimes will likely be greater than those determined using the estimating equations and procedures if the spill is near the streambank.

  5. Estimation of GFR in South Asians: A Study From the General Population in Pakistan

    PubMed Central

    Jessani, Saleem; Levey, Andrew S.; Bux, Rasool; Inker, Lesley A.; Islam, Muhammad; Chaturvedi, Nish; Mariat, Christophe; Schmid, Christopher H.; Jafar, Tazeen H.

    2015-01-01

    Background South Asians are at high risk for chronic kidney disease. However, unlike those in the United States and United Kingdom, laboratories in South Asian countries do not routinely report estimated glomerular filtration rate (eGFR) when serum creatinine is measured. The objectives of the study were to: (1) evaluate the performance of existing GFR estimating equations in South Asians, and (2) modify the existing equations or develop a new equation for use in this population. Study Design Cross-sectional population-based study. Setting & Participants 581 participants 40 years or older were enrolled from 10 randomly selected communities and renal clinics in Karachi. Predictors eGFR, age, sex, serum creatinine level. Outcomes Bias (the median difference between measured GFR [mGFR] and eGFR), precision (the IQR of the difference), accuracy (P30; percentage of participants with eGFR within 30% of mGFR), and the root mean squared error reported as cross-validated estimates along with bootstrapped 95% CIs based on 1,000 replications. Results The CKD-EPI (Chronic Kidney Disease Epidemiology Collaboration) creatinine equation performed better than the MDRD (Modification of Diet in Renal Disease) Study equation in terms of greater accuracy at P30 (76.1% [95% CI, 72.7%–79.5%] vs 68.0% [95% CI, 64.3%–71.7%]; P <0.001) and improved precision (IQR, 22.6 [95% CI, 19.9–25.3] vs 28.6 [95% CI, 25.8–31.5] mL/min/1.73 m2; P < 0.001). However, both equations overestimated mGFR. Applying modification factors for slope and intercept to the CKD-EPI equation to create a CKD-EPI Pakistan equation (such that eGFR_CKD-EPI(PK) = 0.686 × (eGFR_CKD-EPI)^1.059) in order to eliminate bias improved accuracy (P30, 81.6% [95% CI, 78.4%–84.8%]; P < 0.001) comparably to new estimating equations developed using creatinine level and additional variables. Limitations Lack of external validation data set and few participants with low GFR. Conclusions The CKD-EPI creatinine equation is more accurate and precise than the MDRD Study equation in estimating GFR in a South Asian population in Karachi. The CKD-EPI Pakistan equation further improves the performance of the CKD-EPI equation in South Asians and could be used for eGFR reporting. PMID:24074822
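
    The recalibration quoted above can be applied directly on top of the CKD-EPI creatinine estimate. The sketch below uses the standard published 2009 CKD-EPI coefficients, which come from outside this record and should be verified against the original papers (the optional race coefficient is omitted); it is an illustration, not clinical software.

    ```python
    # Sketch of the recalibration reported above:
    #   eGFR_CKD-EPI(PK) = 0.686 * eGFR_CKD-EPI ** 1.059
    # The 2009 CKD-EPI coefficients below are the commonly published values; verify
    # them against the original publications before any real use.
    def egfr_ckd_epi_2009(scr_mg_dl: float, age: float, female: bool) -> float:
        """CKD-EPI 2009 creatinine equation, mL/min/1.73 m^2 (race term omitted)."""
        kappa = 0.7 if female else 0.9
        alpha = -0.329 if female else -0.411
        egfr = (141
                * min(scr_mg_dl / kappa, 1.0) ** alpha
                * max(scr_mg_dl / kappa, 1.0) ** -1.209
                * 0.993 ** age)
        return egfr * 1.018 if female else egfr

    def egfr_ckd_epi_pk(scr_mg_dl: float, age: float, female: bool) -> float:
        """CKD-EPI Pakistan recalibration from the study above."""
        return 0.686 * egfr_ckd_epi_2009(scr_mg_dl, age, female) ** 1.059

    print(round(egfr_ckd_epi_pk(1.0, 55, female=False), 1))
    ```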

  6. eHealth Literacy: Predictors in a Population With Moderate-to-High Cardiovascular Risk

    PubMed Central

    Richtering, Sarah S; Hyun, Karice; Neubeck, Lis; Coorey, Genevieve; Chalmers, John; Usherwood, Tim; Peiris, David; Chow, Clara K

    2017-01-01

    Background Electronic health (eHealth) literacy is a growing area of research parallel to the ongoing development of eHealth interventions. There is, however, little and conflicting information regarding the factors that influence eHealth literacy, notably in chronic disease. We are similarly ill-informed about the relationship between eHealth and health literacy, 2 related yet distinct health-related literacies. Objective The aim of our study was to investigate the demographic, socioeconomic, technology use, and health literacy predictors of eHealth literacy in a population with moderate-to-high cardiovascular risk. Methods Demographic and socioeconomic data were collected from 453 participants of the CONNECT (Consumer Navigation of Electronic Cardiovascular Tools) study, which included age, gender, education, income, cardiovascular-related polypharmacy, private health care, main electronic device use, and time spent on the Internet. Participants also completed an eHealth Literacy Scale (eHEALS) and a Health Literacy Questionnaire (HLQ). Univariate analyses were performed to compare patient demographic and socioeconomic characteristics between the low (eHEALS<26) and high (eHEALS≥26) eHealth literacy groups. To then determine the predictors of low eHealth literacy, multiple-adjusted generalized estimating equation logistic regression model was used. This technique was also used to examine the correlation between eHealth literacy and health literacy for 4 predefined literacy themes: navigating resources, skills to use resources, usefulness for oneself, and critical evaluation. Results The univariate analysis showed that patients with lower eHealth literacy were older (68 years vs 66 years, P=.01), had lower level of education (P=.007), and spent less time on the Internet (P<.001). However, multiple-adjusted generalized estimating equation logistic regression model demonstrated that only the time spent on the Internet (P=.01) was associated with the level of eHealth literacy. Regarding the comparison between the eHEALS items and HLQ scales, a positive linear relationship was found for the themes “usefulness for oneself” (P=.049) and “critical evaluation” (P=.01). Conclusions This study shows the importance of evaluating patients’ familiarity with the Internet as reflected, in part, by the time spent on the Internet. It also shows the importance of specifically assessing eHealth literacy in conjunction with a health literacy assessment in order to assess patients’ navigational knowledge and skills using the Internet, specific to the use of eHealth applications. PMID:28130203

  7. Hospital-acquired pneumonia is an independent predictor of poor global outcome in severe traumatic brain injury up to 5 years after discharge.

    PubMed

    Kesinger, Matthew Ryan; Kumar, Raj G; Wagner, Amy K; Puyana, Juan Carlos; Peitzman, Andrew P; Billiar, Timothy R; Sperry, Jason L

    2015-02-01

    Long-term outcomes following traumatic brain injury (TBI) correlate with initial head injury severity and other acute factors. Hospital-acquired pneumonia (HAP) is a common complication in TBI. Limited information exists regarding the significance of infectious complications on long-term outcomes after TBI. We sought to characterize risks associated with HAP on outcomes 5 years after TBI. This study involved data from the merger of an institutional trauma registry and the Traumatic Brain Injury Model Systems outcome data. Individuals with severe head injuries (Abbreviated Injury Scale [AIS] score ≥ 4) who survived to rehabilitation were analyzed. Primary outcome was Glasgow Outcome Scale-Extended (GOSE) at 1, 2, and 5 years. GOSE was dichotomized into low (GOSE score < 6) and high (GOSE score ≥ 6). Logistic regression was used to determine adjusted odds of low GOSE score associated with HAP after controlling for age, sex, head and overall injury severity, cranial surgery, Glasgow Coma Scale (GCS) score, ventilation days, and other important confounders. A general estimating equation model was used to analyze all outcome observations simultaneously while controlling for within-patient correlation. A total of 141 individuals met inclusion criteria, with a 30% incidence of HAP. Individuals with and without HAP had similar demographic profiles, presenting vitals, head injury severity, and prevalence of cranial surgery. Individuals with HAP had lower presenting GCS score. Logistic regression demonstrated that HAP was independently associated with low GOSE scores at follow-up (1 year: odds ratio [OR], 6.39; 95% confidence interval [CI], 1.76-23.14; p = 0.005) (2 years: OR, 7.30; 95% CI, 1.87-27.89; p = 0.004) (5-years: OR, 6.89; 95% CI, 1.42-33.39; p = 0.017). Stratifying by GCS score of 8 or lower and early intubation, HAP remained a significant independent predictor of low GOSE score in all strata. In the general estimating equation model, HAP continued to be an independent predictor of low GOSE score (OR, 4.59; 95% CI, 1.82-11.60; p = 0.001). HAP is independently associated with poor outcomes in severe TBI extending 5 years after injury. This suggests that precautions should be taken to reduce the risk of HAP in individuals with severe TBI. Prognostic study, level III.

  8. Evaluation and interpretation of Thematic Mapper ratios in equations for estimating corn growth parameters

    NASA Technical Reports Server (NTRS)

    Dardner, B. R.; Blad, B. L.; Thompson, D. R.; Henderson, K. E.

    1985-01-01

    Reflectance and agronomic Thematic Mapper (TM) data were analyzed to determine possible data transformations for evaluating several plant parameters of corn. Three transformation forms were used: the ratio of two TM bands, logarithms of two-band ratios, and normalized differences of two bands. Normalized differences and logarithms of two-band ratios responded similarly in the equations for estimating the plant growth parameters evaluated in this study. Two-term equations were required to obtain the maximum predictability of percent ground cover, canopy moisture content, and total wet phytomass. Standard error of estimate values were 15-26 percent lower for two-term estimates of these parameters than for one-term estimates. The terms log(TM4/TM2) and (TM4/TM5) produced the maximum predictability for leaf area and dry green leaf weight, respectively. The middle infrared bands TM5 and TM7 are essential for maximizing predictability for all measured plant parameters except leaf area index. The estimating models were evaluated over bare soil to discriminate between equations which are statistically similar. Qualitative interpretations of the resulting prediction equations are consistent with general agronomic and remote sensing theory.
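
    The three transformation forms described above are straightforward to compute; the sketch below applies them to made-up band reflectances (not study data).

    ```python
    # The three transformation forms described above, applied to hypothetical
    # Thematic Mapper band reflectances (the arrays here are made up, not study data).
    import numpy as np

    tm2 = np.array([0.08, 0.10, 0.12])   # green band reflectance
    tm4 = np.array([0.35, 0.42, 0.50])   # near-infrared band reflectance
    tm5 = np.array([0.20, 0.18, 0.15])   # middle-infrared band reflectance

    ratio_45 = tm4 / tm5                        # simple two-band ratio, e.g. TM4/TM5
    log_ratio_42 = np.log(tm4 / tm2)            # logarithm of a two-band ratio, log(TM4/TM2)
    norm_diff_42 = (tm4 - tm2) / (tm4 + tm2)    # normalized difference of two bands

    # A two-term estimating equation would then combine two such transforms, e.g.
    # percent ground cover ~ b0 + b1*log(TM4/TM2) + b2*(TM4/TM5), with coefficients
    # fitted by regression against field measurements.
    ```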

  9. Estimation of height and body mass index from demi-span in elderly individuals.

    PubMed

    Weinbrenner, Tanja; Vioque, Jesús; Barber, Xavier; Asensio, Laura

    2006-01-01

    Obtaining accurate height and, consequently, body mass index (BMI) measurements in elderly subjects can be difficult due to changes in posture and loss of height during ageing. Measurements of other body segments can be used as an alternative to estimate standing height, but population- and age-specific equations are necessary. Our objectives were to validate existing equations, to develop new simple equations to predict height in an elderly Spanish population and to assess the accuracy of the BMI calculated by estimated height from the new equations. We measured height and demi-span in a representative sample of 592 individuals, 271 men and 321 women, 65 years and older (mean +/- SD, 73.8 +/- 6.3 years). We suggested equations to predict height from demi-span by multiple regression analyses and performed an agreement analysis between measured and estimated indices. Height estimated from demi-span correlated significantly (p < 0.001) with measured height (men: r = 0.708, women: r = 0.625). The best prediction equations were as follows: men, height (in cm) = 77.821 + (1.132 x demi-span in cm) + (-0.215 x 5-year age category); women: height (in cm) = 88.854 + (0.899 x demi-span in cm) + (-0.692 x 5-year age category). No significant differences between the mean values of estimated and measured heights were found for men (-0.03 +/- 4.6 cm) or women (-0.02 +/- 4.1 cm). The BMI derived from measured height did not differ significantly from the BMI derived from estimated height either. Predicted height values from equations based on demi-span and age may be acceptable surrogates to derive accurate nutritional indices such as the BMI, particularly in elderly populations, where height may be difficult to measure accurately.
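
    The two prediction equations quoted above can be implemented directly; the coding of the 5-year age category is not given in this record, so the mapping used below is an assumption for illustration only.

    ```python
    # The two prediction equations quoted in the record, implemented directly.
    # How the "5-year age category" is coded is not stated in the abstract; the
    # mapping below (65-69 -> 1, 70-74 -> 2, ...) is an assumption for illustration,
    # so check the original paper before applying these equations.
    def estimated_height_cm(demi_span_cm: float, age_years: float, male: bool) -> float:
        age_cat = int((age_years - 65) // 5) + 1   # assumed coding of the 5-year age category
        if male:
            return 77.821 + 1.132 * demi_span_cm - 0.215 * age_cat
        return 88.854 + 0.899 * demi_span_cm - 0.692 * age_cat

    def bmi_from_demi_span(weight_kg: float, demi_span_cm: float,
                           age_years: float, male: bool) -> float:
        height_m = estimated_height_cm(demi_span_cm, age_years, male) / 100.0
        return weight_kg / height_m ** 2

    print(round(bmi_from_demi_span(70.0, 80.0, 74, male=True), 1))
    ```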

  10. Family-oriented cardiac risk estimator: a Java web-based applet.

    PubMed

    Crouch, Michael A; Jadhav, Ashwin

    2003-01-01

    We developed a Java applet that calculates four different estimates of a person's 10-year risk for heart attack: (1) Estimate based on Framingham equation (2) Framingham equation estimate modified by C-reactive protein (CRP) level (3) Framingham estimate modified by family history of heart disease in parents or siblings (4) Framingham estimate modified by both CRP and family heart disease history. This web-based, family-oriented cardiac risk estimator uniquely considers family history and CRP while estimating risk.

  11. Challenges in risk estimation using routinely collected clinical data: The example of estimating cervical cancer risks from electronic health-records.

    PubMed

    Landy, Rebecca; Cheung, Li C; Schiffman, Mark; Gage, Julia C; Hyun, Noorie; Wentzensen, Nicolas; Kinney, Walter K; Castle, Philip E; Fetterman, Barbara; Poitras, Nancy E; Lorey, Thomas; Sasieni, Peter D; Katki, Hormuzd A

    2018-06-01

    Electronic health-records (EHR) are increasingly used by epidemiologists studying disease following surveillance testing to provide evidence for screening intervals and referral guidelines. Although cost-effective, undiagnosed prevalent disease and interval censoring (in which asymptomatic disease is only observed at the time of testing) raise substantial analytic issues when estimating risk that cannot be addressed using Kaplan-Meier methods. Based on our experience analysing EHR from cervical cancer screening, we previously proposed the logistic-Weibull model to address these issues. Here we demonstrate how the choice of statistical method can impact risk estimates. We use observed data on 41,067 women in the cervical cancer screening program at Kaiser Permanente Northern California, 2003-2013, as well as simulations to evaluate the ability of different methods (Kaplan-Meier, Turnbull, Weibull and logistic-Weibull) to accurately estimate risk within a screening program. Cumulative risk estimates from the statistical methods varied considerably, with the largest differences occurring for prevalent disease risk when baseline disease ascertainment was random but incomplete. Kaplan-Meier underestimated risk at earlier times and overestimated risk at later times in the presence of interval censoring or undiagnosed prevalent disease. Turnbull performed well, though was inefficient and not smooth. The logistic-Weibull model performed well, except when event times didn't follow a Weibull distribution. We have demonstrated that methods for right-censored data, such as Kaplan-Meier, result in biased estimates of disease risks when applied to interval-censored data, such as screening programs using EHR data. The logistic-Weibull model is attractive, but the model fit must be checked against Turnbull non-parametric risk estimates. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.
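
    A minimal sketch in the spirit of the logistic-Weibull idea described above, assuming a simple prevalence-incidence mixture: a logistic term (here a single intercept) for undiagnosed prevalent disease plus a Weibull distribution for incident, interval-censored disease, fitted by maximum likelihood with scipy. The parameterization, simulated data, and screening schedule are illustrative only, not the authors' model.

    ```python
    # Prevalence-incidence mixture fitted to interval-censored screening data.
    # Illustrative sketch only; not the published logistic-Weibull implementation.
    import numpy as np
    from scipy.optimize import minimize
    from scipy.special import expit

    rng = np.random.default_rng(3)
    n = 2000
    pi_true, shape_true, scale_true = 0.05, 1.4, 8.0
    prevalent = rng.random(n) < pi_true
    onset = np.where(prevalent, 0.0, scale_true * rng.weibull(shape_true, n))
    screens = np.arange(1, 11, dtype=float)          # annual screens at t = 1..10

    # Convert exact onsets into interval-censored data (L, R]; R = inf if never detected.
    right = np.array([screens[screens >= t][0] if (screens >= t).any() else np.inf
                      for t in onset])
    left = np.where(np.isfinite(right), np.maximum(right - 1.0, 0.0), screens[-1])

    def weibull_cdf(t, shape, scale):
        return 1.0 - np.exp(-(t / scale) ** shape)

    def negloglik(params):
        logit_pi, log_shape, log_scale = params
        pi, shape, scale = expit(logit_pi), np.exp(log_shape), np.exp(log_scale)
        lik = np.where(
            np.isinf(right),                                   # never detected: right-censored
            (1 - pi) * (1 - weibull_cdf(left, shape, scale)),
            np.where(left == 0,                                # found at first screen: prevalent or early incident
                     pi + (1 - pi) * weibull_cdf(right, shape, scale),
                     (1 - pi) * (weibull_cdf(right, shape, scale)
                                 - weibull_cdf(left, shape, scale))))
        return -np.sum(np.log(lik + 1e-300))

    fit = minimize(negloglik, x0=[-2.0, 0.0, 1.0], method="Nelder-Mead")
    print(f"estimated prevalent fraction ~ {expit(fit.x[0]):.3f}")
    ```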

  12. Dynamical analysis of cigarette smoking model with a saturated incidence rate

    NASA Astrophysics Data System (ADS)

    Zeb, Anwar; Bano, Ayesha; Alzahrani, Ebraheem; Zaman, Gul

    2018-04-01

    In this paper, we consider a delayed smoking model in which the potential smokers are assumed to satisfy the logistic equation. We discuss the dynamical behavior of our proposed model in the form of delay differential equations (DDEs) and show conditions for asymptotic stability of the model in steady state. We also discuss the Hopf bifurcation analysis of the considered model. Finally, we use the nonstandard finite difference (NSFD) scheme to show the results graphically with the help of MATLAB.
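
    As a generic illustration of the delayed-logistic ingredient (not the paper's smoking model or its NSFD scheme), the sketch below integrates a Hutchinson-type delayed logistic equation with a simple fixed-step Euler scheme and a delay buffer.

    ```python
    # Generic illustration of a delayed logistic (Hutchinson-type) equation,
    #   dP/dt = r * P(t) * (1 - P(t - tau) / K),
    # integrated with a fixed-step Euler scheme and a delay buffer. This is not
    # the paper's smoking model or its nonstandard finite difference scheme.
    import numpy as np

    r, K, tau = 0.9, 1.0, 2.0
    dt, t_end = 0.01, 60.0
    n_steps = int(t_end / dt)
    lag = int(tau / dt)

    P = np.empty(n_steps + 1)
    P[0] = 0.1
    history = 0.1                      # constant history P(t) = 0.1 for t <= 0

    for i in range(n_steps):
        P_delayed = P[i - lag] if i >= lag else history
        P[i + 1] = P[i] + dt * r * P[i] * (1 - P_delayed / K)

    # For r*tau > pi/2 the positive equilibrium of this equation loses stability and
    # sustained oscillations appear (a Hopf bifurcation); here r*tau = 1.8 > pi/2.
    print(P[-5:])
    ```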

  13. Non-smooth saddle-node bifurcations III: Strange attractors in continuous time

    NASA Astrophysics Data System (ADS)

    Fuhrmann, G.

    2016-08-01

    Non-smooth saddle-node bifurcations give rise to minimal sets of interesting geometry built of so-called strange non-chaotic attractors. We show that certain families of quasiperiodically driven logistic differential equations undergo a non-smooth bifurcation. By a previous result on the occurrence of non-smooth bifurcations in forced discrete time dynamical systems, this yields that within the class of families of quasiperiodically driven differential equations, non-smooth saddle-node bifurcations occur in a set with non-empty C2-interior.

  14. Methods for estimating streamflow at mountain fronts in southern New Mexico

    USGS Publications Warehouse

    Waltemeyer, S.D.

    1994-01-01

    The infiltration of streamflow is potential recharge to alluvial-basin aquifers at or near mountain fronts in southern New Mexico. Data for 13 streamflow-gaging stations were used to determine a relation between mean annual streamflow and basin and climatic conditions. Regression analysis was used to develop an equation that can be used to estimate mean annual streamflow on the basis of drainage areas and mean annual precipitation. The average standard error of estimate for this equation is 46 percent. Regression analysis also was used to develop an equation to estimate mean annual streamflow on the basis of active-channel width. Measurements of the width of active channels were determined for 6 of the 13 gaging stations. The average standard error of estimate for this relation is 29 percent. Streamflow estimates made using a regression equation based on channel geometry are considered more reliable than estimates made from an equation based on regional relations of basin and climatic conditions. The sample size used to develop these relations was small, however, and the reported standard error of estimate may not represent that of the entire population. Active-channel-width measurements were made at 23 ungaged sites along the Rio Grande upstream from Elephant Butte Reservoir. Data for additional sites would be needed for a more comprehensive assessment of mean annual streamflow in southern New Mexico.

  15. Conditional Poisson models: a flexible alternative to conditional logistic case cross-over analysis.

    PubMed

    Armstrong, Ben G; Gasparrini, Antonio; Tobias, Aurelio

    2014-11-24

    The time stratified case cross-over approach is a popular alternative to conventional time series regression for analysing associations between time series of environmental exposures (air pollution, weather) and counts of health outcomes. These are almost always analyzed using conditional logistic regression on data expanded to case-control (case crossover) format, but this has some limitations. In particular adjusting for overdispersion and auto-correlation in the counts is not possible. It has been established that a Poisson model for counts with stratum indicators gives identical estimates to those from conditional logistic regression and does not have these limitations, but it is little used, probably because of the overheads in estimating many stratum parameters. The conditional Poisson model avoids estimating stratum parameters by conditioning on the total event count in each stratum, thus simplifying the computing and increasing the number of strata for which fitting is feasible compared with the standard unconditional Poisson model. Unlike the conditional logistic model, the conditional Poisson model does not require expanding the data, and can adjust for overdispersion and auto-correlation. It is available in Stata, R, and other packages. By applying to some real data and using simulations, we demonstrate that conditional Poisson models were simpler to code and shorter to run than are conditional logistic analyses and can be fitted to larger data sets than possible with standard Poisson models. Allowing for overdispersion or autocorrelation was possible with the conditional Poisson model but when not required this model gave identical estimates to those from conditional logistic regression. Conditional Poisson regression models provide an alternative to case crossover analysis of stratified time series data with some advantages. The conditional Poisson model can also be used in other contexts in which primary control for confounding is by fine stratification.
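
    A brief sketch of the stratified Poisson idea discussed above: a Poisson regression for daily counts with time-stratum indicators and an exposure term, fitted with statsmodels on simulated data (all variable names and values are hypothetical). A conditional Poisson model conditions on the stratum totals and so avoids estimating the stratum parameters while giving the same exposure estimate.

    ```python
    # Poisson model for daily event counts with time-stratum indicators (year x month)
    # and an exposure term, as in time-stratified case-crossover analysis. Simulated data.
    import numpy as np
    import pandas as pd
    import statsmodels.api as sm
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(4)
    dates = pd.date_range("2018-01-01", "2019-12-31", freq="D")
    stratum = dates.strftime("%Y-%m")                     # time strata: year x month
    pollution = 10 + 3 * rng.normal(size=dates.size)      # hypothetical daily exposure
    season = 0.2 * np.sin(2 * np.pi * dates.dayofyear / 365)
    mu = np.exp(1.5 + season + 0.02 * pollution)          # true exposure log-rate = 0.02
    df = pd.DataFrame({"count": rng.poisson(mu),
                       "pollution": pollution,
                       "stratum": stratum})

    poisson_fe = smf.glm("count ~ pollution + C(stratum)", data=df,
                         family=sm.families.Poisson()).fit()
    print(poisson_fe.params["pollution"], poisson_fe.bse["pollution"])
    ```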

  16. Estimating and Interpreting Latent Variable Interactions: A Tutorial for Applying the Latent Moderated Structural Equations Method

    ERIC Educational Resources Information Center

    Maslowsky, Julie; Jager, Justin; Hemken, Douglas

    2015-01-01

    Latent variables are common in psychological research. Research questions involving the interaction of two variables are likewise quite common. Methods for estimating and interpreting interactions between latent variables within a structural equation modeling framework have recently become available. The latent moderated structural equations (LMS)…

  17. Comparison of constitutive flow resistance equations based on the Manning and Chezy equations applied to natural rivers

    USGS Publications Warehouse

    Bjerklie, David M.; Dingman, S. Lawrence; Bolster, Carl H.

    2005-01-01

    A set of conceptually derived in‐bank river discharge–estimating equations (models), based on the Manning and Chezy equations, are calibrated and validated using a database of 1037 discharge measurements in 103 rivers in the United States and New Zealand. The models are compared to a multiple regression model derived from the same data. The comparison demonstrates that in natural rivers, using an exponent on the slope variable of 0.33 rather than the traditional value of 0.5 reduces the variance associated with estimating flow resistance. Mean model uncertainty, assuming a constant value for the conductance coefficient, is less than 5% for a large number of estimates, and 67% of the estimates would be accurate within 50%. The models have potential application where site‐specific flow resistance information is not available and can be the basis for (1) a general approach to estimating discharge from remotely sensed hydraulic data, (2) comparison to slope‐area discharge estimates, and (3) large‐scale river modeling.
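
    A small sketch of the kind of Manning-type estimator the study compares, with the slope exponent switched from the traditional 0.5 to 0.33; the conductance coefficient and hydraulic values below are placeholders, not calibrated values from the paper.

    ```python
    # Manning-type discharge estimators of the general form Q = C * A * R^(2/3) * S^x,
    # comparing the traditional slope exponent x = 0.5 with x = 0.33. The conductance
    # coefficient C is a placeholder, not the calibrated value from the study.
    def discharge_m3s(area_m2: float, hyd_radius_m: float, slope: float,
                      slope_exp: float, conductance: float = 1.0) -> float:
        return conductance * area_m2 * hyd_radius_m ** (2.0 / 3.0) * slope ** slope_exp

    A, R, S = 120.0, 2.5, 0.0008     # cross-section area, hydraulic radius, water-surface slope
    q_traditional = discharge_m3s(A, R, S, slope_exp=0.5)
    q_modified = discharge_m3s(A, R, S, slope_exp=0.33)
    print(round(q_traditional, 1), round(q_modified, 1))
    ```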

  18. Body composition in elderly people: effect of criterion estimates on predictive equations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Baumgartner, R.N.; Heymsfield, S.B.; Lichtman, S.

    1991-06-01

    The purposes of this study were to determine whether there are significant differences between two- and four-compartment model estimates of body composition, whether these differences are associated with aqueous and mineral fractions of the fat-free mass (FFM); and whether the differences are retained in equations for predicting body composition from anthropometry and bioelectric resistance. Body composition was estimated in 98 men and women aged 65-94 y by using a four-compartment model based on hydrodensitometry, ³H₂O dilution, and dual-photon absorptiometry. These estimates were significantly different from those obtained by using Siri's two-compartment model. The differences were associated significantly (P less than 0.0001) with variation in the aqueous fraction of FFM. Equations for predicting body composition from anthropometry and resistance, when calibrated against two-compartment model estimates, retained these systematic errors. Equations predicting body composition in elderly people should be calibrated against estimates from multicompartment models that consider variability in FFM composition.

  19. Estimating Selected Streamflow Statistics Representative of 1930-2002 in West Virginia

    USGS Publications Warehouse

    Wiley, Jeffrey B.

    2008-01-01

    Regional equations and procedures were developed for estimating 1-, 3-, 7-, 14-, and 30-day 2-year; 1-, 3-, 7-, 14-, and 30-day 5-year; and 1-, 3-, 7-, 14-, and 30-day 10-year hydrologically based low-flow frequency values for unregulated streams in West Virginia. Regional equations and procedures also were developed for estimating the 1-day, 3-year and 4-day, 3-year biologically based low-flow frequency values; the U.S. Environmental Protection Agency harmonic-mean flows; and the 10-, 25-, 50-, 75-, and 90-percent flow-duration values. Regional equations were developed using ordinary least-squares regression using statistics from 117 U.S. Geological Survey continuous streamflow-gaging stations as dependent variables and basin characteristics as independent variables. Equations for three regions in West Virginia - North, South-Central, and Eastern Panhandle - were determined. Drainage area, precipitation, and longitude of the basin centroid are significant independent variables in one or more of the equations. Estimating procedures are presented for determining statistics at a gaging station, a partial-record station, and an ungaged location. Examples of some estimating procedures are presented.

  20. Procedure for estimating stability and control parameters from flight test data by using maximum likelihood methods employing a real-time digital system

    NASA Technical Reports Server (NTRS)

    Grove, R. D.; Bowles, R. L.; Mayhew, S. C.

    1972-01-01

    A maximum likelihood parameter estimation procedure and program were developed for the extraction of the stability and control derivatives of aircraft from flight test data. Nonlinear six-degree-of-freedom equations describing aircraft dynamics were used to derive sensitivity equations for quasilinearization. The maximum likelihood function with quasilinearization was used to derive the parameter change equations, the covariance matrices for the parameters and measurement noise, and the performance index function. The maximum likelihood estimator was mechanized into an iterative estimation procedure utilizing a real time digital computer and graphic display system. This program was developed for 8 measured state variables and 40 parameters. Test cases were conducted with simulated data for validation of the estimation procedure and program. The program was applied to a V/STOL tilt wing aircraft, a military fighter airplane, and a light single engine airplane. The particular nonlinear equations of motion, derivation of the sensitivity equations, addition of accelerations into the algorithm, operational features of the real time digital system, and test cases are described.

  1. Male-Female Wage Differentials in the United States.

    ERIC Educational Resources Information Center

    Kiker, B. F.; Crouch, Henry L.

    The primary objective of this paper is to describe a method of estimating female-male wage ratios. The estimating technique presented is two stage least squares (2SLS), in which equations are estimated for both men and women. After specifying and estimating the wage equations, the male-female wage differential is calculated that would remain if…

  2. Standard Error of Linear Observed-Score Equating for the NEAT Design with Nonnormally Distributed Data

    ERIC Educational Resources Information Center

    Zu, Jiyun; Yuan, Ke-Hai

    2012-01-01

    In the nonequivalent groups with anchor test (NEAT) design, the standard error of linear observed-score equating is commonly estimated by an estimator derived assuming multivariate normality. However, real data are seldom normally distributed, causing this normal estimator to be inconsistent. A general estimator, which does not rely on the…

  3. Power and Sample Size Calculations for Logistic Regression Tests for Differential Item Functioning

    ERIC Educational Resources Information Center

    Li, Zhushan

    2014-01-01

    Logistic regression is a popular method for detecting uniform and nonuniform differential item functioning (DIF) effects. Theoretical formulas for the power and sample size calculations are derived for likelihood ratio tests and Wald tests based on the asymptotic distribution of the maximum likelihood estimators for the logistic regression model.…

  4. Bayesian Estimation of the Logistic Positive Exponent IRT Model

    ERIC Educational Resources Information Center

    Bolfarine, Heleno; Bazan, Jorge Luis

    2010-01-01

    A Bayesian inference approach using Markov Chain Monte Carlo (MCMC) is developed for the logistic positive exponent (LPE) model proposed by Samejima and for a new skewed Logistic Item Response Theory (IRT) model, named Reflection LPE model. Both models lead to asymmetric item characteristic curves (ICC) and can be appropriate because a symmetric…
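
    For context, the LPE item characteristic curve is a two-parameter logistic curve raised to a positive exponent, which is what makes the ICC asymmetric; the sketch below uses illustrative parameter values, not estimates from the study.

    ```python
    # Item characteristic curve of the logistic positive exponent (LPE) model: a
    # two-parameter logistic curve raised to a positive exponent xi, giving an
    # asymmetric ICC for xi != 1. Parameter values below are illustrative only.
    import numpy as np

    def lpe_icc(theta, a, b, xi):
        return (1.0 / (1.0 + np.exp(-a * (theta - b)))) ** xi

    theta = np.linspace(-4, 4, 9)
    print(np.round(lpe_icc(theta, a=1.2, b=0.0, xi=1.0), 3))   # symmetric 2PL curve
    print(np.round(lpe_icc(theta, a=1.2, b=0.0, xi=3.0), 3))   # skewed (asymmetric) ICC
    ```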

  5. Reliability of serum creatinine-based formulae estimating renal function in non-critically ill surgery patients: Focus on augmented renal clearance.

    PubMed

    Declercq, Peter; Gijsen, Matthias; Meijers, Björn; Schetz, Marie; Nijs, Stefaan; D'Hoore, André; Wauters, Joost; Spriet, Isabel

    2018-05-07

    Formulae estimating glomerular filtration rate (GFR) are frequently used to guide drug dosing. The objectives of this prospective single-center study were to evaluate agreement between these equations and measured creatinine clearance (CrCl) in non-critically ill surgery patients with normal kidney function and augmented renal clearance (ARC, CrCl ≥ 130 mL/min/1.73 m²), to determine predictors for disagreement, define a GFR estimator cut-off value identifying ARC and determine the ARC prevalence and duration in non-critically ill surgical patients. Hospitalized adult non-critically ill abdominal and trauma surgery patients were eligible for inclusion. Measured CrCl based on an 8-hour urinary collection (CrCl 8h ) was used as the primary method for determining kidney function. Agreement between equations and measured CrCl 8h was assessed in terms of precision, defined as a bias within ±10 mL/min/1.73 m². Predictors for disagreement were identified for the most precise estimator using an ordinal logistic regression model with negative bias, agreement and positive bias as outcome variables. A receiver operating characteristic (ROC) analysis was performed to identify an estimator cut-off predicting ARC, which was subsequently applied for the daily proportion of patients displaying ARC and ARC duration. During the study period (14/11/2013 - 13/05/2014), in 232 adult non-critically ill abdominal and trauma surgery patients, all estimators tend to underestimate CrCl 8h (mean bias ranging from 17 to 22 mL/min/1.73 m²), especially in patients displaying ARC (mean bias ranging from 44 to 56 mL/min/1.73 m²). eGFR CKD-EPI performed the best. Younger age and low ASA score independently predicted underestimation of CrCl 8h . Three different eGFR CKD-EPI cut-offs with decreasing sensitivity and increasing specificity (84, 95 and 112 mL/min/1.73 m²) identified, respectively, 65%, 44% and 14% patients displaying ARC. The median ARC duration was 4, 4 and 3 days, respectively. In surgical patients, eGFR frequently underestimates measured CrCl, especially in young patients with low ASA score. eGFR cut-offs predicting ARC were identified. © 2018 John Wiley & Sons Ltd.

  6. Comparative evaluation of technetium-99m-diethylenetriaminepentaacetic acid renal dynamic imaging versus the Modification of Diet in Renal Disease equation and the Chronic Kidney Disease Epidemiology Collaboration equation for the estimation of GFR.

    PubMed

    Huang, Qi; Chen, Yunshuang; Zhang, Min; Wang, Sihe; Zhang, Weiguang; Cai, Guangyan; Chen, Xiangmei; Sun, Xuefeng

    2018-04-01

    We compared the performance of technetium-99m-diethylenetriaminepentaacetic acid (99mTc-DTPA) renal dynamic imaging (RDI), the MDRD equation, and the CKD-EPI equation to estimate glomerular filtration rate (GFR). A total of 551 subjects, including CKD patients and healthy individuals, were enrolled in this study. Dual plasma sample clearance method of 99mTc-DTPA was used as the true value for GFR (tGFR). RDI and the MDRD and CKD-EPI equations for estimating GFR were compared and evaluated. Data indicate that RDI and the MDRD equation underestimated GFR and CKD-EPI overestimated GFR. RDI was associated with significantly higher bias than the MDRD and CKD-EPI equations. The regression coefficient, diagnostic precision, and consistency of RDI were significantly lower than either equation. RDI and the MDRD equation underestimated GFR to a greater degree in subjects with tGFR ≥ 90 mL/min/1.73 m² compared with the results obtained from all subjects. In the tGFR 60-89 mL/min/1.73 m² group, the precision of RDI was significantly lower than that of both equations. In the tGFR 30-59 mL/min/1.73 m² group, RDI had the least bias, the most precision, and significantly higher accuracy compared with either equation. In tGFR < 30 mL/min/1.73 m², the three methods had similar performance and were not significantly different. RDI significantly underestimates GFR and performs no better than MDRD and CKD-EPI equations for GFR estimation; thus, it should not be recommended as a reference standard against which other GFR measurement methods are assessed. However, RDI better estimates GFR than either equation for individuals in the tGFR 30-59 mL/min/1.73 m² group and thus may be helpful to distinguish stage 3a and 3b CKD.

  7. Improving estimates of streamflow characteristics by using Landsat-1 imagery

    USGS Publications Warehouse

    Hollyday, Este F.

    1976-01-01

    Imagery from the first Earth Resources Technology Satellite (renamed Landsat-1) was used to discriminate physical features of drainage basins in an effort to improve equations used to estimate streamflow characteristics at gaged and ungaged sites. Records of 20 gaged basins in the Delmarva Peninsula of Maryland, Delaware, and Virginia were analyzed for 40 statistical streamflow characteristics. Equations relating these characteristics to basin characteristics were obtained by a technique of multiple linear regression. A control group of equations contains basin characteristics derived from maps. An experimental group of equations contains basin characteristics derived from maps and imagery. Characteristics from imagery were forest, riparian (streambank) vegetation, water, and combined agricultural and urban land use. These basin characteristics were isolated photographically by techniques of film-density discrimination. The area of each characteristic in each basin was measured photometrically. Comparison of equations in the control group with corresponding equations in the experimental group reveals that for 12 out of 40 equations the standard error of estimate was reduced by more than 10 percent. As an example, the standard error of estimate of the equation for the 5-year recurrence-interval flood peak was reduced from 46 to 32 percent. Similarly, the standard error of the equation for the mean monthly flow for September was reduced from 32 to 24 percent, the standard error for the 7-day, 2-year recurrence low flow was reduced from 136 to 102 percent, and the standard error for the 3-day, 2-year flood volume was reduced from 30 to 12 percent. It is concluded that data from Landsat imagery can substantially improve the accuracy of estimates of some streamflow characteristics at sites in the Delmarva Peninsula.

  8. Physical Activity, Physical Exertion, and Miscarriage Risk in Women Textile Workers in Shanghai, China

    PubMed Central

    Wong, EY; Ray, R; Gao, DL; Wernli, KJ; Li, W; Fitzgibbons, ED; Camp, JE; Heagerty, PJ; De Roos, AJ; Holt, VL; Thomas, DB; Checkoway, H

    2010-01-01

    Background: Strenuous occupational physical activity and physical demands may be risk factors for adverse reproductive outcomes. Methods: A retrospective study in the Shanghai, China, textile industry collected women’s self-reported reproductive history. Occupational physical activity assessment linked complete work history data to an industry-specific job-exposure matrix. Odds ratios (OR) and 95% confidence intervals (CI) were estimated by multivariate logistic regression for the first pregnancy outcome, and generalized estimating equations were used to consider all pregnancies per woman. Results: Compared with women employed in sedentary jobs, a reduced risk of miscarriage was found for women working in jobs with either light (OR 0.18, 95%CI: 0.07, 0.50) or medium (OR 0.24, 95%CI: 0.08, 0.66) physical activity during the first pregnancy and over all pregnancies (light OR 0.32, 95%CI: 0.17, 0.61; medium OR 0.43, 95%CI: 0.23, 0.80). Frequent crouching was associated with elevated risk (OR 1.82, 95%CI: 1.14, 2.93; all pregnancies per woman). Conclusions: Light/medium occupational physical activity may have reduced miscarriage risk, while specific occupational characteristics such as crouching may have increased risk in this cohort. PMID:20340112
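
    The analytic approach described above, a marginal logistic model fit by generalized estimating equations so that repeated pregnancies within the same woman are not treated as independent, can be sketched with statsmodels as follows. The data are simulated and the column names are hypothetical; this is an illustration of the modeling strategy, not the study's code.

    ```python
    import numpy as np
    import pandas as pd
    import statsmodels.api as sm

    # Simulate a small clustered data set: one row per pregnancy, several per woman.
    rng = np.random.default_rng(0)
    rows = []
    for woman_id in range(200):
        activity = rng.choice(["sedentary", "light", "medium"])
        risk = {"sedentary": 0.20, "light": 0.08, "medium": 0.10}[activity]
        for _ in range(rng.integers(1, 4)):
            rows.append({"woman_id": woman_id, "activity": activity,
                         "miscarriage": rng.binomial(1, risk)})
    df = pd.DataFrame(rows)

    # Marginal logistic model with an exchangeable working correlation (GEE),
    # accounting for multiple pregnancies per woman.
    model = sm.GEE.from_formula(
        "miscarriage ~ C(activity, Treatment('sedentary'))",
        groups="woman_id", data=df,
        family=sm.families.Binomial(),
        cov_struct=sm.cov_struct.Exchangeable())
    result = model.fit()
    print(np.exp(result.params))      # odds ratios vs. the sedentary reference
    print(np.exp(result.conf_int()))  # 95% confidence intervals
    ```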

  9. The relationship of DSM-IV pathological gambling to compulsive buying and other possible spectrum disorders: results from the Iowa PG family study.

    PubMed

    Black, Donald W; Coryell, William; Crowe, Raymond; Shaw, Martha; McCormick, Brett; Allen, Jeff

    2015-03-30

    This study investigates the possible relationship between pathological gambling (PG) and potential spectrum disorders including the DSM-IV impulse control disorders (intermittent explosive disorder, kleptomania, pyromania, trichotillomania) and several non-DSM disorders (compulsive buying disorder, compulsive sexual behavior, Internet addiction). PG probands, controls, and their first-degree relatives were assessed with instruments of known reliability. Detailed family history information was collected on relatives who were deceased or unavailable. Best estimate diagnoses were assigned blind to family status. The results were analyzed using logistic regression by the method of generalized estimating equations. The sample included 95 probands with PG, 91 controls, and 1075 first-degree relatives (537 PG, 538 controls). Compulsive buying disorder and having "any spectrum disorder" were more frequent in the PG probands and their first-degree relatives vs. controls and their relatives. Spectrum disorders were significantly more prevalent among PG relatives compared to control relatives (adjusted OR=8.37), though much of this difference was attributable to the contribution from compulsive buying disorder. We conclude that compulsive buying disorder is likely part of familial PG spectrum. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  10. A mathematical function for the description of nutrient-response curve

    PubMed Central

    Ahmadi, Hamed

    2017-01-01

    Several mathematical equations have been proposed for modeling nutrient-response curves in animals and humans, justified by goodness of fit and/or by the underlying biological mechanism. In this paper, a functional form of a generalized quantitative model based on the Rayleigh distribution principle is derived for describing nutrient-response phenomena. The three parameters governing the curve a) have biological interpretations, b) may be used to calculate reliable estimates of nutrient-response relationships, and c) provide the basis for deriving relationships between nutrient and physiological responses. The new function was successfully applied to fit nutritional data obtained from 6 experiments covering a wide range of nutrients and responses. An evaluation and comparison based on simulated data sets were also done to check the suitability of the new model and the four-parameter logistic model for describing nutrient responses. This study indicates the usefulness and wide applicability of the newly introduced, simple, and flexible model as a quantitative approach to characterizing nutrient-response curves. This new mathematical way to describe nutrient-response data, with some useful biological interpretations, has potential as an alternative approach for modeling nutrient-response curves to estimate nutrient efficiency and requirements. PMID:29161271
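
    The four-parameter logistic comparator mentioned above is straightforward to fit directly; the sketch below does so with scipy on made-up dose-response values. It illustrates only the comparator curve, not the paper's Rayleigh-based model.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def four_pl(x, bottom, top, ec50, hill):
        """Four-parameter logistic: response rises from `bottom` to `top` around `ec50`."""
        return bottom + (top - bottom) / (1.0 + (x / ec50) ** (-hill))

    dose = np.array([0.1, 0.3, 1.0, 3.0, 10.0, 30.0])      # hypothetical nutrient levels
    resp = np.array([0.12, 0.20, 0.45, 0.80, 0.95, 0.99])  # hypothetical responses

    params, _ = curve_fit(four_pl, dose, resp, p0=[0.1, 1.0, 1.0, 1.0])
    print(dict(zip(["bottom", "top", "ec50", "hill"], np.round(params, 3))))
    ```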

  11. Estimation of selected flow and water-quality characteristics of Alaskan streams

    USGS Publications Warehouse

    Parks, Bruce; Madison, R.J.

    1985-01-01

    Although hydrologic data are either sparse or nonexistent for large areas of Alaska, the drainage area, area of lakes, glacier and forest cover, and average precipitation in a hydrologic basin of interest can be measured or estimated from existing maps. Application of multiple linear regression techniques indicates that statistically significant correlations exist between properties of basins determined from maps and measured streamflow characteristics. This suggests that corresponding characteristics of ungaged basins can be estimated. Streamflow frequency characteristics can be estimated from regional equations developed for southeast, south-central and Yukon regions. Statewide or modified regional equations must be used, however, for the southwest, northwest, and Arctic Slope regions where there is a paucity of data. Equations developed from basin characteristics are given to estimate suspended-sediment values for glacial streams and, with less reliability, for nonglacial streams. Equations developed from available specific conductance data are given to estimate concentrations of major dissolved inorganic constituents. Suggestions are made for expanding the existing data base and thus improving the ability to estimate hydrologic characteristics for Alaskan streams. (USGS)

  12. Mixture models for undiagnosed prevalent disease and interval-censored incident disease: applications to a cohort assembled from electronic health records.

    PubMed

    Cheung, Li C; Pan, Qing; Hyun, Noorie; Schiffman, Mark; Fetterman, Barbara; Castle, Philip E; Lorey, Thomas; Katki, Hormuzd A

    2017-09-30

    For cost-effectiveness and efficiency, many large-scale general-purpose cohort studies are being assembled within large health-care providers who use electronic health records. Two key features of such data are that incident disease is interval-censored between irregular visits and there can be pre-existing (prevalent) disease. Because prevalent disease is not always immediately diagnosed, some disease diagnosed at later visits is actually undiagnosed prevalent disease. We consider prevalent disease as a point mass at time zero for clinical applications where there is no interest in the time of prevalent disease onset. We demonstrate that the naive Kaplan-Meier cumulative risk estimator underestimates risks at early time points and overestimates later risks. We propose a general family of mixture models for undiagnosed prevalent disease and interval-censored incident disease that we call prevalence-incidence models. Parameters for parametric prevalence-incidence models, such as the logistic regression and Weibull survival (logistic-Weibull) model, are estimated by direct likelihood maximization or by an EM algorithm. Non-parametric methods are proposed to calculate cumulative risks for cases without covariates. We compare naive Kaplan-Meier, logistic-Weibull, and non-parametric estimates of cumulative risk in the cervical cancer screening program at Kaiser Permanente Northern California. Kaplan-Meier provided poor estimates while the logistic-Weibull model was a close fit to the non-parametric estimates. Our findings support our use of logistic-Weibull models to develop the risk estimates that underlie current US risk-based cervical cancer screening guidelines. Published 2017. This article has been contributed to by US Government employees and their work is in the public domain in the USA.
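
    As a rough illustration of the logistic-Weibull prevalence-incidence idea, the sketch below writes down one plausible likelihood: prevalence is a point mass at time zero with probability given by a logistic regression, and incident disease times are Weibull and interval-censored between visits. This is a simplified reading of the model class described above, not the authors' implementation, and the simulated data and variable names are hypothetical.

    ```python
    import numpy as np
    from scipy.optimize import minimize
    from scipy.special import expit

    def weibull_surv(t, shape, scale):
        return np.exp(-(t / scale) ** shape)

    def neg_log_lik(theta, X, left, right, detected):
        """theta = [beta..., log_shape, log_scale]. Disease detected in (left, right];
        left == 0 means detection at the first visit. For subjects still disease-free,
        detected == 0, `left` is the last visit time, and `right` is np.inf."""
        p = X.shape[1]
        beta, shape, scale = theta[:p], np.exp(theta[p]), np.exp(theta[p + 1])
        p_prev = expit(X @ beta)                     # P(undiagnosed prevalent disease)
        s_left = weibull_surv(left, shape, scale)
        s_right = weibull_surv(right, shape, scale)
        # The prevalent point mass contributes only to intervals starting at time zero.
        lik_detected = np.where(left == 0, p_prev, 0.0) + (1 - p_prev) * (s_left - s_right)
        lik_censored = (1 - p_prev) * s_left
        lik = np.where(detected == 1, lik_detected, lik_censored)
        return -np.sum(np.log(np.clip(lik, 1e-300, None)))

    # Tiny synthetic example: intercept-only design, three common screening visits.
    rng = np.random.default_rng(2)
    n = 400
    X = np.ones((n, 1))
    prevalent = rng.random(n) < 0.15
    onset = np.where(prevalent, 0.0, rng.weibull(1.5, n) * 8.0)
    visits = np.array([2.0, 4.0, 6.0])
    detected = (onset <= visits[-1]).astype(int)
    idx = np.searchsorted(visits, onset)             # first visit at or after onset
    left = np.where(detected == 1,
                    np.where(idx == 0, 0.0, visits[np.clip(idx, 1, 2) - 1]), visits[-1])
    right = np.where(detected == 1, visits[np.clip(idx, 0, 2)], np.inf)

    fit = minimize(neg_log_lik, x0=np.array([0.0, 0.0, 1.0]),
                   args=(X, left, right, detected), method="Nelder-Mead")
    print("prevalence:", np.round(expit(fit.x[0]), 2),
          "Weibull shape, scale:", np.round(np.exp(fit.x[1:]), 2))
    ```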

  13. Spot urine sodium measurements do not accurately estimate dietary sodium intake in chronic kidney disease12

    PubMed Central

    Dougher, Carly E; Rifkin, Dena E; Anderson, Cheryl AM; Smits, Gerard; Persky, Martha S; Block, Geoffrey A; Ix, Joachim H

    2016-01-01

    Background: Sodium intake influences blood pressure and proteinuria, yet the impact on long-term outcomes is uncertain in chronic kidney disease (CKD). Accurate assessment is essential for clinical and public policy recommendations, but few large-scale studies use 24-h urine collections. Recent studies that used spot urine sodium and associated estimating equations suggest that they may provide a suitable alternative, but their accuracy in patients with CKD is unknown. Objective: We compared the accuracy of 4 equations [the Nerbass, INTERSALT (International Cooperative Study on Salt, Other Factors, and Blood Pressure), Tanaka, and Kawasaki equations] that use spot urine sodium to estimate 24-h sodium excretion in patients with moderate to advanced CKD. Design: We evaluated the accuracy of spot urine sodium to predict mean 24-h urine sodium excretion over 9 mo in 129 participants with stage 3–4 CKD. Spot morning urine sodium was used in 4 estimating equations. Bias, precision, and accuracy were assessed and compared across each equation. Results: The mean age of the participants was 67 y, 52% were female, and the mean estimated glomerular filtration rate was 31 ± 9 mL · min⁻¹ · 1.73 m⁻². The mean ± SD number of 24-h urine collections was 3.5 ± 0.8/participant, and the mean 24-h sodium excretion was 168.2 ± 67.5 mmol/d. Although the Tanaka equation demonstrated the least bias (mean: −8.2 mmol/d), all 4 equations had poor precision and accuracy. The INTERSALT equation demonstrated the highest accuracy but derived an estimate only within 30% of mean measured sodium excretion in only 57% of observations. Bland-Altman plots revealed systematic bias with the Nerbass, INTERSALT, and Tanaka equations, underestimating sodium excretion when intake was high. Conclusion: These findings do not support the use of spot urine specimens to estimate dietary sodium intake in patients with CKD and research studies enriched with patients with CKD. The parent data for this study come from a clinical trial that was registered at clinicaltrials.gov as NCT00785629. PMID:27357090

  14. Spot urine sodium measurements do not accurately estimate dietary sodium intake in chronic kidney disease.

    PubMed

    Dougher, Carly E; Rifkin, Dena E; Anderson, Cheryl Am; Smits, Gerard; Persky, Martha S; Block, Geoffrey A; Ix, Joachim H

    2016-08-01

    Sodium intake influences blood pressure and proteinuria, yet the impact on long-term outcomes is uncertain in chronic kidney disease (CKD). Accurate assessment is essential for clinical and public policy recommendations, but few large-scale studies use 24-h urine collections. Recent studies that used spot urine sodium and associated estimating equations suggest that they may provide a suitable alternative, but their accuracy in patients with CKD is unknown. We compared the accuracy of 4 equations [the Nerbass, INTERSALT (International Cooperative Study on Salt, Other Factors, and Blood Pressure), Tanaka, and Kawasaki equations] that use spot urine sodium to estimate 24-h sodium excretion in patients with moderate to advanced CKD. We evaluated the accuracy of spot urine sodium to predict mean 24-h urine sodium excretion over 9 mo in 129 participants with stage 3-4 CKD. Spot morning urine sodium was used in 4 estimating equations. Bias, precision, and accuracy were assessed and compared across each equation. The mean age of the participants was 67 y, 52% were female, and the mean estimated glomerular filtration rate was 31 ± 9 mL · min⁻¹ · 1.73 m⁻². The mean ± SD number of 24-h urine collections was 3.5 ± 0.8/participant, and the mean 24-h sodium excretion was 168.2 ± 67.5 mmol/d. Although the Tanaka equation demonstrated the least bias (mean: -8.2 mmol/d), all 4 equations had poor precision and accuracy. The INTERSALT equation demonstrated the highest accuracy but derived an estimate only within 30% of mean measured sodium excretion in only 57% of observations. Bland-Altman plots revealed systematic bias with the Nerbass, INTERSALT, and Tanaka equations, underestimating sodium excretion when intake was high. These findings do not support the use of spot urine specimens to estimate dietary sodium intake in patients with CKD and research studies enriched with patients with CKD. The parent data for this study come from a clinical trial that was registered at clinicaltrials.gov as NCT00785629. © 2016 American Society for Nutrition.
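
    The bias, precision, and accuracy comparison described above can be reproduced in outline as follows, assuming the common conventions that bias is the mean of (estimated - measured), precision is the standard deviation of those differences, and accuracy is the share of estimates falling within 30% of the measured value. The numbers below are placeholders, not the study's data.

    ```python
    import numpy as np

    def agreement_metrics(estimated, measured):
        """Bias, precision (SD of differences), and within-30% accuracy for paired values."""
        estimated, measured = np.asarray(estimated, float), np.asarray(measured, float)
        diff = estimated - measured
        return {
            "bias_mmol_per_day": diff.mean(),
            "precision_sd": diff.std(ddof=1),
            "accuracy_within_30pct": np.mean(np.abs(diff) <= 0.30 * measured),
        }

    measured_24h_na = np.array([150.0, 180.0, 120.0, 200.0])     # hypothetical mmol/day
    equation_estimates = np.array([140.0, 160.0, 150.0, 170.0])  # hypothetical estimates
    print(agreement_metrics(equation_estimates, measured_24h_na))
    ```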

  15. Probabilistic estimates of number of undiscovered deposits and their total tonnages in permissive tracts using deposit densities

    USGS Publications Warehouse

    Singer, Donald A.; Kouda, Ryoichi

    2011-01-01

    Empirical evidence indicates that processes affecting number and quantity of resources in geologic settings are very general across deposit types. Sizes of permissive tracts that geologically could contain the deposits are excellent predictors of numbers of deposits. In addition, total ore tonnage of mineral deposits of a particular type in a tract is proportional to the type’s median tonnage in a tract. Regressions using size of permissive tracts and median tonnage allow estimation of number of deposits and of total tonnage of mineralization. These powerful estimators, based on 10 different deposit types from 109 permissive worldwide control tracts, generalize across deposit types. Estimates of number of deposits and of total tonnage of mineral deposits are made by regressing permissive area, and mean (in logs) tons in deposits of the type, against number of deposits and total tonnage of deposits in the tract for the 50th percentile estimates. The regression equations (R² = 0.91 and 0.95) can be used for all deposit types just by inserting logarithmic values of permissive area in square kilometers, and mean tons in deposits in millions of metric tons. The regression equations provide estimates at the 50th percentile, and other equations are provided for 90% confidence limits for lower estimates and 10% confidence limits for upper estimates of number of deposits and total tonnage. Equations for these percentile estimates along with expected value estimates are presented here along with comparisons with independent expert estimates. Also provided are the equations for correcting for the known well-explored deposits in a tract. These deposit-density models require internally consistent grade and tonnage models and delineations for arriving at unbiased estimates.

  16. Parameter estimation problems for distributed systems using a multigrid method

    NASA Technical Reports Server (NTRS)

    Ta'asan, Shlomo; Dutt, Pravir

    1990-01-01

    The problem of estimating spatially varying coefficients of partial differential equations is considered from observation of the solution and of the right hand side of the equation. It is assumed that the observations are distributed in the domain and that enough observations are given. A method of discretization and an efficient multigrid method for solving the resulting discrete systems are described. Numerical results are presented for estimation of coefficients in an elliptic and a parabolic partial differential equation.

  17. Iohexol clearance is superior to creatinine-based renal function estimating equations in detecting short-term renal function decline in chronic heart failure.

    PubMed

    Cvan Trobec, Katja; Kerec Kos, Mojca; von Haehling, Stephan; Anker, Stefan D; Macdougall, Iain C; Ponikowski, Piotr; Lainscak, Mitja

    2015-12-01

    To compare the performance of iohexol plasma clearance and creatinine-based renal function estimating equations in monitoring longitudinal renal function changes in chronic heart failure (CHF) patients, and to assess the effects of body composition on the equation performance. Iohexol plasma clearance was measured in 43 CHF patients at baseline and after at least 6 months. Simultaneously, renal function was estimated with five creatinine-based equations (four- and six-variable Modification of Diet in Renal Disease, Cockcroft-Gault, Cockcroft-Gault adjusted for lean body mass, Chronic Kidney Disease Epidemiology Collaboration equation) and body composition was assessed using bioimpedance and dual-energy x-ray absorptiometry. Over a median follow-up of 7.5 months (range 6-17 months), iohexol clearance significantly declined (52.8 vs 44.4 mL/(min × 1.73 m²), P=0.001). This decline was significantly higher in patients receiving mineralocorticoid receptor antagonists at baseline (mean decline -22% of baseline value vs -3%, P=0.037). Mean serum creatinine concentration did not change significantly during follow-up and no creatinine-based renal function estimating equation was able to detect the significant longitudinal decline of renal function determined by iohexol clearance. After accounting for body composition, the accuracy of the equations improved, but not their ability to detect renal function decline. Renal function measured with iohexol plasma clearance showed relevant decline in CHF patients, particularly in those treated with mineralocorticoid receptor antagonists. None of the equations for renal function estimation was able to detect these changes. ClinicalTrials.gov registration number: NCT01829880.

  18. June and August median streamflows estimated for ungaged streams in southern Maine

    USGS Publications Warehouse

    Lombard, Pamela J.

    2010-01-01

    Methods for estimating June and August median streamflows were developed for ungaged, unregulated streams in southern Maine. The methods apply to streams with drainage areas ranging in size from 0.4 to 74 square miles, with percentage of basin underlain by a sand and gravel aquifer ranging from 0 to 84 percent, and with distance from the centroid of the basin to a Gulf of Maine line paralleling the coast ranging from 14 to 94 miles. Equations were developed with data from 4 long-term continuous-record streamgage stations and 27 partial-record streamgage stations. Estimates of median streamflows at the continuous-record and partial-record stations are presented. A mathematical technique for estimating standard low-flow statistics, such as June and August median streamflows, at partial-record streamgage stations was applied by relating base-flow measurements at these stations to concurrent daily streamflows at nearby long-term (at least 10 years of record) continuous-record streamgage stations (index stations). Weighted least-squares regression analysis (WLS) was used to relate estimates of June and August median streamflows at streamgage stations to basin characteristics at these same stations to develop equations that can be used to estimate June and August median streamflows on ungaged streams. WLS accounts for different periods of record at the gaging stations. Three basin characteristics-drainage area, percentage of basin underlain by a sand and gravel aquifer, and distance from the centroid of the basin to a Gulf of Maine line paralleling the coast-are used in the final regression equation to estimate June and August median streamflows for ungaged streams. The three-variable equation to estimate June median streamflow has an average standard error of prediction from -35 to 54 percent. The three-variable equation to estimate August median streamflow has an average standard error of prediction from -45 to 83 percent. Simpler one-variable equations that use only drainage area to estimate June and August median streamflows were developed for use when less accuracy is acceptable. These equations have average standard errors of prediction from -46 to 87 percent and from -57 to 133 percent, respectively.

  19. Relationships between indicators of acid-base chemistry and fish assemblages in streams of the Great Smoky Mountains National Park

    USGS Publications Warehouse

    Baldigo, Barry P.; Kulp, Matt A.; Schwartz, John S.

    2018-01-01

    The acidity of many streams in the Great Smoky Mountains National Park (GRSM) has increased significantly since pre-industrial (∼1850) times due to the effects of highly acidic atmospheric deposition in poorly buffered watersheds. Extensive stream-monitoring programs since 1993 have shown that fish and macroinvertebrate assemblages have been adversely affected in many streams across the GRSM. Matching chemistry and fishery information collected from 389 surveys performed at 52 stream sites over a 22-year period were assessed using logistic regression analysis to help inform the U.S. Environmental Protection Agency’s assessment of the environmental impacts of emissions of oxides of nitrogen (NOx) and sulfur (SOx). Numerous logistic equations and associated curves were derived that defined the relations between acid neutralizing capacity (ANC) or pH and different levels of community richness, density, and biomass; and density and biomass of brook trout, rainbow trout, and small prey (minnow) populations in streams of the GRSM. The equations and curves describe the status of fish assemblages in the GRSM under contemporary emission levels and deposition loads of nitrogen (N) and sulfur (S) and provide a means to estimate how newly proposed (and various alternative) target deposition loads, which strongly influence stream ANC, might affect key ecological indicators. Several examples using ANC, community richness, and brook trout density are presented to illustrate the steps needed to predict how future changes in stream chemistry (resulting from different target deposition loads of N and S) will affect the probabilities of observing specific levels of selected biological indicators in GRSM streams. The implications of this study to the regulation of NOx and SOx emissions, water quality, and fisheries management in streams of the GRSM are discussed, but also qualified by the fact that specific examples provided need to be further explored before recommendations concerning their use as ecological indicators could be proposed.
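
    The fitted logistic curves described above are applied by evaluating the predicted probability of a biological indicator at a candidate ANC value; the snippet below shows that step with made-up coefficients (the study's fitted coefficients are not reproduced here).

    ```python
    import numpy as np

    def indicator_probability(anc, intercept, slope):
        """Logistic curve: P(indicator observed) = 1 / (1 + exp(-(intercept + slope * ANC)))."""
        return 1.0 / (1.0 + np.exp(-(intercept + slope * anc)))

    b0, b1 = -1.5, 0.04  # hypothetical coefficients, for illustration only
    for anc in (0, 25, 50, 100):  # candidate ANC values (microequivalents per liter)
        print(anc, round(indicator_probability(anc, b0, b1), 2))
    ```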

  20. Conditions for the return and simulation of the recovery of burrowing mayflies in western Lake Erie

    USGS Publications Warehouse

    Kolar, Cynthia S.; Hudson, Patrick L.; Savino, Jacqueline F.

    1997-01-01

    In the 1950s, burrowing mayflies, Hexagenia spp. (H. limbata and H. rigida), were virtually eliminated from the western basin of Lake Erie (a 3,300-km² area) because of eutrophication and pollution. We develop and present a deterministic model for the recolonization of the western basin by Hexagenia to pre-1953 densities. The model was based on the logistic equation describing the population growth of Hexagenia and a presumed competitor, Chironomus (dipteran larvae). Other parameters (immigration, low oxygen, toxic sediments, competition with Chironomus, and fish predation) were then individually added to the logistic model to determine their effect at different growth rates. The logistic model alone predicts 10-41 yr for Hexagenia to recolonize western Lake Erie. Immigration reduced the recolonization time by 2-17 yr. One low-oxygen event during the first 20 yr increased recovery time by 5-17 yr. Contaminated sediments added 5-11 yr to the recolonization time. Competition with Chironomus added 8-19 yr to recovery. Fish predators added 4-47 yr to the time required for recolonization. The full model predicted 48-81 yr for Hexagenia to reach a carrying capacity of approximately 350 nymphs/m², or not until around the year 2038 if the model is started in 1990. The model was verified by changing model parameters to those present in 1970, beginning the model in 1970 and running it through 1990. Predicted densities overlapped almost completely with actual estimated densities of Hexagenia nymphs present in the western basin of Lake Erie in 1990. The model suggests that recovery of large aquatic ecosystems may lag substantially behind remediation efforts.
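
    The core of the recolonization model is the logistic equation dN/dt = rN(1 - N/K). The sketch below integrates just that core, without the immigration, low-oxygen, sediment, competition, and predation terms, using an illustrative growth rate and the carrying capacity cited in the abstract.

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp

    r = 0.35   # intrinsic growth rate per year (illustrative, not from the paper)
    K = 350.0  # carrying capacity, nymphs per square meter (value cited in the abstract)
    N0 = 1.0   # small founding density (illustrative)

    sol = solve_ivp(lambda t, n: r * n * (1.0 - n / K), t_span=(0, 80), y0=[N0],
                    t_eval=np.arange(0, 81, 5))
    for year, density in zip(sol.t, sol.y[0]):
        print(f"year {year:2.0f}: {density:6.1f} nymphs/m^2")
    ```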

  1. Techniques for estimating peak-streamflow frequency for unregulated streams and streams regulated by small floodwater retarding structures in Oklahoma

    USGS Publications Warehouse

    Tortorelli, Robert L.

    1997-01-01

    Statewide regression equations for Oklahoma were determined for estimating peak discharge and flood frequency for selected recurrence intervals from 2 to 500 years for ungaged sites on natural unregulated streams. The most significant independent variables required to estimate peak-streamflow frequency for natural unregulated streams in Oklahoma are contributing drainage area, main-channel slope, and mean-annual precipitation. The regression equations are applicable for watersheds with drainage areas less than 2,510 square miles that are not affected by regulation from manmade works. Limitations on the use of the regression relations and the reliability of regression estimates for natural unregulated streams are discussed. Log-Pearson Type III analysis information, basin and climatic characteristics, and the peak-stream-flow frequency estimates for 251 gaging stations in Oklahoma and adjacent states are listed. Techniques are presented to make a peak-streamflow frequency estimate for gaged sites on natural unregulated streams and to use this result to estimate a nearby ungaged site on the same stream. For ungaged sites on urban streams, an adjustment of the statewide regression equations for natural unregulated streams can be used to estimate peak-streamflow frequency. For ungaged sites on streams regulated by small floodwater retarding structures, an adjustment of the statewide regression equations for natural unregulated streams can be used to estimate peak-streamflow frequency. The statewide regression equations are adjusted by substituting the drainage area below the floodwater retarding structures, or drainage area that represents the percentage of the unregulated basin, in the contributing drainage area parameter to obtain peak-streamflow frequency estimates.

  2. Objective Lightning Probability Forecasting for Kennedy Space Center and Cape Canaveral Air Force Station

    NASA Technical Reports Server (NTRS)

    Lambert, Winifred; Wheeler, Mark

    2005-01-01

    Five logistic regression equations were created that predict the probability of cloud-to-ground lightning occurrence for the day in the KSC/CCAFS area for each month in the warm season. These equations integrated the results from several studies over recent years to improve thunderstorm forecasting at KSC/CCAFS. All of the equations outperform persistence, which is known to outperform NPTI, the current objective tool used in 45 WS lightning forecasting operations. The equations also performed well in other tests. As a result, the new equations will be added to the current set of tools used by the 45 WS to determine the probability of lightning for their daily planning forecast. The results from these equations are meant to be used as first-guess guidance when developing the lightning probability forecast for the day. They provide an objective base from which forecasters can use other observations, model data, consultation with other forecasters, and their own experience to create the final lightning probability for the 1100 UTC briefing.

  3. A stockability equation for forest land in Siskiyou County, California.

    Treesearch

    Neil. McKay

    1985-01-01

    An equation is presented that estimates the relative stocking capacity of forest land in Siskiyou County, California, from the amount of precipitation and the presence of significant indicator plants. The equation is a tool for identifying sites incapable of supporting normal stocking. Estimated relative stocking capacity may be used to discount normal yields to levels...

  4. Comparison of estimated and experimental subaqueous seed transport.

    Treesearch

    Scott Markwith; David Leigh

    2011-01-01

    We compare the estimates from the relative bed stability (RBS) equation that indicates incipient bed movement, and the inertial settling (‘Impact’) law and Wu and Wang (2006) settling velocity equations that indicate suspended particle movement, to flume and settling velocity observations to confirm the utility of the equations for subaqueous hydrochory analyses, and...

  5. An Improved Estimation Using Polya-Gamma Augmentation for Bayesian Structural Equation Models with Dichotomous Variables

    ERIC Educational Resources Information Center

    Kim, Seohyun; Lu, Zhenqiu; Cohen, Allan S.

    2018-01-01

    Bayesian algorithms have been used successfully in the social and behavioral sciences to analyze dichotomous data particularly with complex structural equation models. In this study, we investigate the use of the Polya-Gamma data augmentation method with Gibbs sampling to improve estimation of structural equation models with dichotomous variables.…

  6. Estimating leaf area and leaf biomass of open-grown deciduous urban trees

    Treesearch

    David J. Nowak

    1996-01-01

    Logarithmic regression equations were developed to predict leaf area and leaf biomass for open-grown deciduous urban trees based on stem diameter and crown parameters. Equations based on crown parameters produced more reliable estimates. The equations can be used to help quantify forest structure and functions, particularly in urbanizing and urban/suburban areas.

  7. Biomass estimators for thinned second-growth ponderosa pine trees.

    Treesearch

    P.H. Cochran; J.W. Jennings; C.T. Youngberg

    1984-01-01

    Usable estimates of the mass of live foliage and limbs of sapling and pole-sized ponderosa pine in managed stands in central Oregon can be obtained with equations using the logarithm of diameter as the only independent variable. These equations produce only slightly higher root mean square deviations than equations that include additional independent variables. A...

  8. Estimated Satellite Cluster Elements in Near Circular Orbit

    DTIC Science & Technology

    1988-12-01

    cluster is investigated. The on-board estimator is the U-D covariance factorization filter with dynamics based on the Clohessy-Wiltshire equations... Appropriate values for the velocity vector vi can be found from the Clohessy-Wiltshire equations [9] (these equations will be explained in detail in the... explained in this text is the f matrix. The state transition matrix was developed from the Clohessy-Wiltshire equations of motion [9: page 3] as...
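
    For reference, the Clohessy-Wiltshire (Hill) equations referred to in this snippet are commonly written, in the rotating local-vertical/local-horizontal frame with x radial, y along-track, z cross-track, and n the mean motion of the reference orbit, as:

    ```latex
    \begin{aligned}
    \ddot{x} - 2n\,\dot{y} - 3n^{2}x &= 0,\\
    \ddot{y} + 2n\,\dot{x} &= 0,\\
    \ddot{z} + n^{2}z &= 0.
    \end{aligned}
    ```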

  9. Regression Equations for Estimating Flood Flows at Selected Recurrence Intervals for Ungaged Streams in Pennsylvania

    USGS Publications Warehouse

    Roland, Mark A.; Stuckey, Marla H.

    2008-01-01

    Regression equations were developed for estimating flood flows at selected recurrence intervals for ungaged streams in Pennsylvania with drainage areas less than 2,000 square miles. These equations were developed utilizing peak-flow data from 322 streamflow-gaging stations within Pennsylvania and surrounding states. All stations used in the development of the equations had 10 or more years of record and included active and discontinued continuous-record as well as crest-stage partial-record stations. The state was divided into four regions, and regional regression equations were developed to estimate the 2-, 5-, 10-, 50-, 100-, and 500-year recurrence-interval flood flows. The equations were developed by means of a regression analysis that utilized basin characteristics and flow data associated with the stations. Significant explanatory variables at the 95-percent confidence level for one or more regression equations included the following basin characteristics: drainage area; mean basin elevation; and the percentages of carbonate bedrock, urban area, and storage within a basin. The regression equations can be used to predict the magnitude of flood flows for specified recurrence intervals for most streams in the state; however, they are not valid for streams with drainage areas generally greater than 2,000 square miles or with substantial regulation, diversion, or mining activity within the basin. Estimates of flood-flow magnitude and frequency for streamflow-gaging stations substantially affected by upstream regulation are also presented.

  10. Estimating Renal Function in the Elderly Malaysian Patients Attending Medical Outpatient Clinic: A Comparison between Creatinine Based and Cystatin-C Based Equations.

    PubMed

    Jalalonmuhali, Maisarah; Elagel, Salma Mohamed Abouzriba; Tan, Maw Pin; Lim, Soo Kun; Ng, Kok Peng

    2018-01-01

    To assess the performance of different GFR estimating equations, test the diagnostic value of serum cystatin-C, and compare the applicability of cystatin-C based equations with serum creatinine based equations for estimating GFR (eGFR) against measured GFR in elderly Malaysian patients. A cross-sectional study recruited volunteer patients 65 years and older attending a medical outpatient clinic. Chromium-51 EDTA (51Cr-EDTA) was used as the measured GFR. The predictive capabilities of the Cockcroft-Gault equation corrected for body surface area (CGBSA), the four-variable Modification of Diet in Renal Disease (4-MDRD) equation, and the Chronic Kidney Disease Epidemiology Collaboration (CKD-EPI) equations using serum creatinine (CKD-EPIcr) as well as serum cystatin-C (CKD-EPIcys) were calculated. A total of 40 patients, 77.5% male, with mean measured GFR 41.2 ± 18.9 mL/min/1.73 m² were enrolled. Mean bias was smallest for 4-MDRD; meanwhile, CKD-EPIcr had the highest precision and accuracy with lower limits of agreement than the other equations. The CKD-EPIcys equation did not show any improvement in GFR estimation in comparison to CKD-EPIcr and MDRD. The CKD-EPIcr formula appears to be more accurate and correlates better with measured GFR in this cohort of elderly patients.

  11. Nationwide summary of US Geological Survey regional regression equations for estimating magnitude and frequency of floods for ungaged sites, 1993

    USGS Publications Warehouse

    Jennings, M.E.; Thomas, W.O.; Riggs, H.C.

    1994-01-01

    For many years, the U.S. Geological Survey (USGS) has been involved in the development of regional regression equations for estimating flood magnitude and frequency at ungaged sites. These regression equations are used to transfer flood characteristics from gaged to ungaged sites through the use of watershed and climatic characteristics as explanatory or predictor variables. Generally these equations have been developed on a statewide or metropolitan area basis as part of cooperative study programs with specific State Departments of Transportation or specific cities. The USGS, in cooperation with the Federal Highway Administration and the Federal Emergency Management Agency, has compiled all the current (as of September 1993) statewide and metropolitan area regression equations into a micro-computer program titled the National Flood Frequency Program. This program includes regression equations for estimating flood-peak discharges and techniques for estimating a typical flood hydrograph for a given recurrence interval peak discharge for unregulated rural and urban watersheds. These techniques should be useful to engineers and hydrologists for planning and design applications. This report summarizes the statewide regression equations for rural watersheds in each State, summarizes the applicable metropolitan area or statewide regression equations for urban watersheds, describes the National Flood Frequency Program for making these computations, and provides much of the reference information on the extrapolation variables needed to run the program.

  12. Equations for estimating synthetic unit-hydrograph parameter values for small watersheds in Lake County, Illinois

    USGS Publications Warehouse

    Melching, C.S.; Marquardt, J.S.

    1997-01-01

    Design hydrographs computed from design storms, simple models of abstractions (interception, depression storage, and infiltration), and synthetic unit hydrographs provide vital information for stormwater, flood-plain, and water-resources management throughout the United States. Rainfall and runoff data for small watersheds in Lake County collected between 1990 and 1995 were studied to develop equations for estimation of synthetic unit-hydrograph parameters on the basis of watershed and storm characteristics. The synthetic unit-hydrograph parameters of interest were the time of concentration (TC) and watershed-storage coefficient (R) for the Clark unit-hydrograph method, the unit-graph lag (UL) for the Soil Conservation Service (now known as the Natural Resources Conservation Service) dimensionless unit hydrograph, and the hydrograph-time lag (TL) for the linear-reservoir method for unit-hydrograph estimation. Data from 66 storms with effective-precipitation depths greater than 0.4 inches on 9 small watersheds (areas between 0.06 and 37 square miles (mi²)) were utilized to develop the estimation equations, and data from 11 storms on 8 of these watersheds were utilized to verify (test) the estimation equations. The synthetic unit-hydrograph parameters were determined by calibration using the U.S. Army Corps of Engineers Flood Hydrograph Package HEC-1 (TC, R, and UL) or by manual analysis of the rainfall and run-off data (TL). The relation between synthetic unit-hydrograph parameters and watershed and storm characteristics was determined by multiple linear regression of the logarithms of the parameters and characteristics. Separate sets of equations were developed with watershed area and main channel length as the starting parameters. Percentage of impervious cover, main channel slope, and depth of effective precipitation also were identified as important characteristics for estimation of synthetic unit-hydrograph parameters. The estimation equations utilizing area had multiple correlation coefficients of 0.873, 0.961, 0.968, and 0.963 for TC, R, UL, and TL, respectively, and the estimation equations utilizing main channel length had multiple correlation coefficients of 0.845, 0.957, 0.961, and 0.963 for TC, R, UL, and TL, respectively. Simulation of the measured hydrographs for the verification storms utilizing TC and R obtained from the estimation equations yielded good results without calibration. The peak discharge for 8 of the 11 storms was estimated within 25 percent and the time-to-peak discharge for 10 of the 11 storms was estimated within 20 percent. Thus, application of the estimation equations to determine synthetic unit-hydrograph parameters for design-storm simulation may result in reliable design hydrographs, as long as the physical characteristics of the watersheds under consideration are within the range of those for the watersheds in this study (area: 0.06-37 mi², main channel length: 0.33-16.6 miles, main channel slope: 3.13-55.3 feet per mile, and percentage of impervious cover: 7.32-40.6 percent). The estimation equations are most reliable when applied to watersheds with areas less than 25 mi².

  13. Generalized equations for estimating DXA percent fat of diverse young women and men: The Tiger Study

    USDA-ARS?s Scientific Manuscript database

    Popular generalized equations for estimating percent body fat (BF%) developed with cross-sectional data are biased when applied to racially/ethnically diverse populations. We developed accurate anthropometric models to estimate dual-energy x-ray absorptiometry BF% (DXA-BF%) that can be generalized t...

  14. Estimating air drying times of lumber with multiple regression

    Treesearch

    William T. Simpson

    2004-01-01

    In this study, the applicability of a multiple regression equation for estimating air drying times of red oak, sugar maple, and ponderosa pine lumber was evaluated. The equation allows prediction of estimated air drying times from historic weather records of temperature and relative humidity at any desired location.

  15. Estimating total forest biomass in Maine, 1995

    Treesearch

    Eric H. Wharton; Douglas M. Griffith; Douglas M. Griffith

    1998-01-01

    Presents methods for synthesizing information from existing biomass literature for estimating biomass over extensive forest areas with specific applications to Maine. Tables of appropriate regression equations and the tree and shrub species to which these equations can be applied are presented as well as biomass estimates at the county and state level.

  16. Estimating northern red oak crown component weights in the Northeastern United States.

    Treesearch

    Robert M. Loomis; Richard W. Blank

    1981-01-01

    Equations are described for estimating crown weights for northern red oak trees. These estimates are for foliage and branchwood weights. Branchwood (wood plus bark) amounts are subdivided by living and dead material into four size groups. Applicability of the equations for other species is examined.

  17. Development of regression equations to revise estimates of historical streamflows for the St. Croix River at Stillwater, Minnesota (water years 1910-2011), and Prescott, Wisconsin (water years 1910-2007)

    USGS Publications Warehouse

    Ziegeweid, Jeffrey R.; Magdalene, Suzanne

    2015-01-01

    The new regression equations were used to calculate revised estimates of historical streamflows for Stillwater and Prescott starting in 1910 and ending when index-velocity streamgages were installed. Monthly, annual, 30-year, and period of record statistics were examined between previous and revised estimates of historical streamflows. The abilities of the new regression equations to estimate historical streamflows were evaluated by using percent differences to compare new estimates of historical daily streamflows to discrete streamflow measurements made at Stillwater and Prescott before the installation of index-velocity streamgages. Although less variability was observed between estimated and measured streamflows at Stillwater compared to Prescott, the percent difference data indicated that the new estimates closely approximated measured streamflows at both locations.

  18. Use of digital land-cover data from the Landsat satellite in estimating streamflow characteristics in the Cumberland Plateau of Tennessee

    USGS Publications Warehouse

    Hollyday, E.F.; Hansen, G.R.

    1983-01-01

    Streamflow may be estimated with regression equations that relate streamflow characteristics to characteristics of the drainage basin. A statistical experiment was performed to compare the accuracy of equations using basin characteristics derived from maps and climatological records (control group equations) with the accuracy of equations using basin characteristics derived from Landsat data as well as maps and climatological records (experimental group equations). Results show that when the equations in both groups are arranged into six flow categories, there is no substantial difference in accuracy between control group equations and experimental group equations for this particular site where drainage area accounts for more than 90 percent of the variance in all streamflow characteristics (except low flows and most annual peak logarithms). (USGS)

  19. Assessing Glomerular Filtration Rate in Hospitalized Patients: A Comparison Between CKD-EPI and Four Cystatin C-Based Equations

    PubMed Central

    de la Torre, Judith; Ramos, Natalia; Quiroz, Augusto; Garjau, Maria; Torres, Irina; Azancot, M. Antonia; López, Montserrat; Sobrado, Ana

    2011-01-01

    Summary. Background and objectives: A specific method is required for estimating glomerular filtration rate (GFR) in hospitalized patients. Our objective was to validate the Chronic Kidney Disease Epidemiology Collaboration (CKD-EPI) equation and four cystatin C (CysC)-based equations in this setting. Design, setting, participants, & measurements: This was an epidemiologic, cross-sectional study in a random sample of hospitalized patients (n = 3114). We studied the accuracy of the CKD-EPI equation and four CysC-based equations (based on (1) CysC alone; (2) CysC adjusted by gender; (3) CysC with age, gender, and race; and (4) CysC with age, gender, race, and creatinine, respectively) compared with GFR measured by iohexol clearance (mGFR). Clinical, biochemical, and nutritional data were also collected. Results: The CysC equation 3 significantly overestimated the GFR (bias of 7.4 mL/min per 1.73 m²). Most of the error in creatinine-based equations was attributable to calculated muscle mass, which depended on the patient's nutritional status. In patients without malnutrition or reduced body surface area, the CKD-EPI equation adequately estimated GFR. Equations based on CysC gave more precise mGFR estimates when malnutrition, extensive reduction of body surface area, or loss of muscle mass were present (biases of 1 and 1.3 mL/min per 1.73 m² for equations 2 and 4, respectively, versus 5.9 mL/min per 1.73 m² for CKD-EPI). Conclusions: These results suggest that the use of equations based on CysC and gender, or CysC, age, gender, and race, is more appropriate in hospitalized patients to estimate GFR, since these equations are much less dependent on the patient's nutritional status or muscle mass than the CKD-EPI equation. PMID:21852668

  20. Sequential reconstruction of driving-forces from nonlinear nonstationary dynamics

    NASA Astrophysics Data System (ADS)

    Güntürkün, Ulaş

    2010-07-01

    This paper describes a functional analysis-based method for the estimation of driving-forces from nonlinear dynamic systems. The driving-forces account for the perturbation inputs induced by the external environment or the secular variations in the internal variables of the system. The proposed algorithm is applicable to problems for which there is too little or no prior knowledge to build a rigorous mathematical model of the unknown dynamics. We derive the estimator conditioned on the differentiability of the unknown system’s mapping and the smoothness of the driving-force. The proposed algorithm is an adaptive sequential realization of the blind prediction error method, where the basic idea is to predict the observables and retrieve the driving-force from the prediction error. Our realization of this idea is embodied by predicting the observables one step into the future using a bank of echo state networks (ESN) in an online fashion, and then extracting the raw estimates from the prediction error and smoothing these estimates in two adaptive filtering stages. The adaptive nature of the algorithm enables accurate retrieval of both slowly and rapidly varying driving-forces, as illustrated by simulations. Logistic and Moran-Ricker maps are studied in controlled experiments, exemplifying chaotic state and stochastic measurement models. The algorithm is also applied to the estimation of a driving-force from another nonlinear dynamic system that is stochastic in both state and measurement equations. The results are judged by the posterior Cramer-Rao lower bounds. The method is finally put to the test on a real-world application: extracting the Sun’s magnetic flux from the sunspot time series.
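
    A toy version of the controlled experiment described above can be generated by driving a logistic map with a slowly varying parameter; the snippet below produces only the data-generating side (a hidden driving force plus noisy observations), not the echo-state-network estimator itself.

    ```python
    import numpy as np

    n_steps = 2000
    # Slowly varying hidden driving force: the logistic-map parameter drifts sinusoidally.
    r = 3.6 + 0.3 * np.sin(2.0 * np.pi * np.arange(n_steps) / 500.0)

    x = np.empty(n_steps)
    x[0] = 0.5
    for k in range(n_steps - 1):
        x[k + 1] = r[k] * x[k] * (1.0 - x[k])  # logistic map iteration

    # Noisy observations of the chaotic state; an estimator would try to recover r from these.
    observations = x + 0.01 * np.random.default_rng(0).normal(size=n_steps)
    ```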

  1. Logistic quantile regression provides improved estimates for bounded avian counts: a case study of California Spotted Owl fledgling production

    Treesearch

    Brian S. Cade; Barry R. Noon; Rick D. Scherer; John J. Keane

    2017-01-01

    Counts of avian fledglings, nestlings, or clutch size that are bounded below by zero and above by some small integer form a discrete random variable distribution that is not approximated well by conventional parametric count distributions such as the Poisson or negative binomial. We developed a logistic quantile regression model to provide estimates of the empirical...
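
    Logistic quantile regression for a bounded count is commonly implemented by jittering the discrete counts, mapping them to the real line with a logit-type transform, fitting a linear quantile regression, and back-transforming the predictions. The sketch below illustrates that general recipe on simulated data; it is not necessarily the authors' exact estimator, and all values are hypothetical.

    ```python
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(1)
    n = 300
    habitat = rng.uniform(0, 1, n)        # hypothetical covariate
    fledglings = rng.integers(0, 4, n)    # counts bounded between 0 and 3 (illustrative)

    lower, upper = -0.5, 3.5                          # bounds just outside the support
    y_jit = fledglings + rng.uniform(-0.49, 0.49, n)  # jitter the discrete counts
    z = np.log((y_jit - lower) / (upper - y_jit))     # logit-type transform to the real line

    fit = sm.QuantReg(z, sm.add_constant(habitat)).fit(q=0.9)  # model the 90th percentile
    grid = sm.add_constant(np.linspace(0, 1, 5))
    z_hat = fit.predict(grid)
    y_hat = (lower + upper * np.exp(z_hat)) / (1.0 + np.exp(z_hat))  # back to the count scale
    print(np.round(y_hat, 2))
    ```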

  2. Three methods to construct predictive models using logistic regression and likelihood ratios to facilitate adjustment for pretest probability give similar results.

    PubMed

    Chan, Siew Foong; Deeks, Jonathan J; Macaskill, Petra; Irwig, Les

    2008-01-01

    To compare three predictive models based on logistic regression to estimate adjusted likelihood ratios allowing for interdependency between diagnostic variables (tests). This study was a review of the theoretical basis, assumptions, and limitations of published models, and a statistical extension of the methods with application to a case study of the diagnosis of obstructive airways disease based on history and clinical examination. Albert's method includes an offset term to estimate an adjusted likelihood ratio for combinations of tests. The Spiegelhalter and Knill-Jones method uses the unadjusted likelihood ratio for each test as a predictor and computes shrinkage factors to allow for interdependence. Knottnerus' method differs from the other methods because it requires sequencing of tests, which limits its application to situations where there are few tests and substantial data. Although parameter estimates differed between the models, predicted "posttest" probabilities were generally similar. Construction of predictive models using logistic regression is preferred to the independence Bayes' approach when it is important to adjust for dependency of test errors. Methods to estimate adjusted likelihood ratios from predictive models should be considered in preference to a standard logistic regression model to facilitate ease of interpretation and application. Albert's method provides the most straightforward approach.

  3. Testing concordance of instrumental variable effects in generalized linear models with application to Mendelian randomization

    PubMed Central

    Dai, James Y.; Chan, Kwun Chuen Gary; Hsu, Li

    2014-01-01

    Instrumental variable regression is one way to overcome unmeasured confounding and estimate causal effects in observational studies. Building on structural mean models, considerable work has recently been developed for consistent estimation of the causal relative risk and the causal odds ratio. Such models can sometimes suffer from identification issues for weak instruments. This has hampered the applicability of Mendelian randomization analysis in genetic epidemiology. When there are multiple genetic variants available as instrumental variables, and the causal effect is defined in a generalized linear model in the presence of unmeasured confounders, we propose to test concordance between instrumental variable effects on the intermediate exposure and instrumental variable effects on the disease outcome, as a means to test the causal effect. We show that a class of generalized least squares estimators provide valid and consistent tests of causality. For the causal effect of a continuous exposure on a dichotomous outcome in logistic models, the proposed estimators are shown to be asymptotically conservative. When the disease outcome is rare, such estimators are consistent due to the log-linear approximation of the logistic function. Optimality of such estimators relative to the well-known two-stage least squares estimator and the double-logistic structural mean model is further discussed. PMID:24863158

  4. Comparison of predictive equations for resting metabolic rate in healthy nonobese and obese adults: a systematic review.

    PubMed

    Frankenfield, David; Roth-Yousey, Lori; Compher, Charlene

    2005-05-01

    An assessment of energy needs is a necessary component in the development and evaluation of a nutrition care plan. The metabolic rate can be measured or estimated by equations, but estimation is by far the more common method. However, predictive equations might generate errors large enough to impact outcome. Therefore, a systematic review of the literature was undertaken to document the accuracy of predictive equations preliminary to deciding on the imperative to measure metabolic rate. As part of a larger project to determine the role of indirect calorimetry in clinical practice, an evidence team identified published articles that examined the validity of various predictive equations for resting metabolic rate (RMR) in nonobese and obese people and also in individuals of various ethnic and age groups. Articles were accepted based on defined criteria and abstracted using evidence analysis tools developed by the American Dietetic Association. Because these equations are applied by dietetics practitioners to individuals, a key inclusion criterion was research reports of individual data. The evidence was systematically evaluated, and a conclusion statement and grade were developed. Four prediction equations were identified as the most commonly used in clinical practice (Harris-Benedict, Mifflin-St Jeor, Owen, and World Health Organization/Food and Agriculture Organization/United Nations University [WHO/FAO/UNU]). Of these equations, the Mifflin-St Jeor equation was the most reliable, predicting RMR within 10% of measured in more nonobese and obese individuals than any other equation, and it also had the narrowest error range. No validation work concentrating on individual errors was found for the WHO/FAO/UNU equation. Older adults and US-residing ethnic minorities were underrepresented both in the development of predictive equations and in validation studies. The Mifflin-St Jeor equation is more likely than the other equations tested to estimate RMR to within 10% of that measured, but noteworthy errors and limitations exist when it is applied to individuals and possibly when it is generalized to certain age and ethnic groups. RMR estimation errors would be eliminated by valid measurement of RMR with indirect calorimetry, using an evidence-based protocol to minimize measurement error. The Expert Panel advises clinical judgment regarding when to accept estimated RMR using predictive equations in any given individual. Indirect calorimetry may be an important tool when, in the judgment of the clinician, the predictive methods fail an individual in a clinically relevant way. For members of groups that are greatly underrepresented by existing validation studies of predictive equations, a high level of suspicion regarding the accuracy of the equations is warranted.
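
    For reference, the Mifflin-St Jeor equation singled out above is commonly published as RMR (kcal/day) = 10·weight(kg) + 6.25·height(cm) − 5·age(years) + 5 for men and − 161 for women; a direct implementation with a hypothetical example patient follows.

    ```python
    def mifflin_st_jeor(weight_kg, height_cm, age_years, male):
        """Resting metabolic rate in kcal/day from the Mifflin-St Jeor equation."""
        rmr = 10.0 * weight_kg + 6.25 * height_cm - 5.0 * age_years
        return rmr + (5.0 if male else -161.0)

    # Hypothetical example: a 70 kg, 165 cm, 40-year-old woman (about 1370 kcal/day).
    print(round(mifflin_st_jeor(70, 165, 40, male=False)))
    ```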

  5. Selected Logistics Models and Techniques.

    DTIC Science & Technology

    1984-09-01

    Table-of-contents excerpt: TI-59 Programmable Calculator LCC Model (model/technique type: cost estimating); Unmanned Spacecraft Cost Model.

  6. Regression equations for estimating flood flows for the 2-, 10-, 25-, 50-, 100-, and 500-Year recurrence intervals in Connecticut

    USGS Publications Warehouse

    Ahearn, Elizabeth A.

    2004-01-01

    Multiple linear-regression equations were developed to estimate the magnitudes of floods in Connecticut for recurrence intervals ranging from 2 to 500 years. The equations can be used for nonurban, unregulated stream sites in Connecticut with drainage areas ranging from about 2 to 715 square miles. Flood-frequency data and hydrologic characteristics from 70 streamflow-gaging stations and the upstream drainage basins were used to develop the equations. The hydrologic characteristics (drainage area, mean basin elevation, and 24-hour rainfall) are used in the equations to estimate the magnitude of floods. Average standard errors of prediction for the equations are 31.8, 32.7, 34.4, 35.9, 37.6, and 45.0 percent for the 2-, 10-, 25-, 50-, 100-, and 500-year recurrence intervals, respectively. Simplified equations using only one hydrologic characteristic, drainage area, also were developed. The regression analysis is based on generalized least-squares regression techniques. Observed flows (log-Pearson Type III analysis of the annual maximum flows) from five streamflow-gaging stations in urban basins in Connecticut were compared to flows estimated from national three-parameter and seven-parameter urban regression equations. The comparison shows that the three- and seven-parameter equations used in conjunction with the new statewide equations generally provide reasonable estimates of flood flows for urban sites in Connecticut, although a national urban flood-frequency study indicated that the three-parameter equations significantly underestimated flood flows in many regions of the country. Verification of the accuracy of the three-parameter or seven-parameter national regression equations using new data from Connecticut stations was beyond the scope of this study. A technique for calculating flood flows at streamflow-gaging stations using a weighted average also is described. Two estimates of flood flows (one based on the log-Pearson Type III analyses of the annual maximum flows at the gaging station, and the other from the regression equation) are weighted together based on the years of record at the gaging station and the equivalent years of record determined from the regression. Weighted averages of flood flows for the 2-, 10-, 25-, 50-, 100-, and 500-year recurrence intervals are tabulated for the 70 streamflow-gaging stations used in the regression analysis. Generally, weighted averages give the most accurate estimate of flood flows at gaging stations. An evaluation of Connecticut's streamflow-gaging network was performed to determine whether the spatial coverage and range of geographic and hydrologic conditions are adequately represented for transferring flood characteristics from gaged to ungaged sites. Fifty-one of 54 stations in the current (2004) network support one or more flood needs of federal, state, and local agencies. Twenty-five of 54 stations in the current network are considered high-priority stations by the U.S. Geological Survey because of their contribution to the long-term understanding of floods and their application for regional flood analysis. Enhancements to the network to improve overall effectiveness for regionalization can be made by increasing the spatial coverage of gaging stations, establishing stations in regions of the state that are not well represented, and adding stations in basins with drainage area sizes not represented. Additionally, the usefulness of the network for characterizing floods can be maintained and improved by continuing operation at the current stations, because flood flows can be more accurately estimated at stations with continuous, long-term record.
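
    The record-length weighting described above is often implemented as a weighted average of the two estimates on a logarithmic scale, with weights proportional to the station's years of record and the regression's equivalent years of record. The sketch below shows that generic form; the report's exact weighting procedure may differ in detail.

```python
import math

def weighted_flood_estimate(q_station, n_station, q_regression, n_equivalent):
    """Record-length-weighted average of two flood-quantile estimates,
    computed on the logarithms of discharge (a common USGS convention;
    the report's exact weighting may differ)."""
    w = n_station / (n_station + n_equivalent)
    log_q = w * math.log10(q_station) + (1.0 - w) * math.log10(q_regression)
    return 10.0 ** log_q

# e.g., 35 years of record at the gage, 12 equivalent years for the regression
print(weighted_flood_estimate(12500.0, 35, 9800.0, 12))
```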

  7. Process model comparison and transferability across bioreactor scales and modes of operation for a mammalian cell bioprocess.

    PubMed

    Craven, Stephen; Shirsat, Nishikant; Whelan, Jessica; Glennon, Brian

    2013-01-01

    A Monod kinetic model, logistic equation model, and statistical regression model were developed for a Chinese hamster ovary cell bioprocess operated under three different modes of operation (batch, bolus fed-batch, and continuous fed-batch) and grown at two different bioreactor scales (3 L bench-top and 15 L pilot-scale). The Monod kinetic model was developed for all modes of operation under study and predicted cell density, glucose, glutamine, lactate, and ammonia concentrations well for the bioprocess. However, it was computationally demanding due to the large number of parameters necessary to produce a good model fit. The transferability of the Monod kinetic model structure and parameter set across bioreactor scales and modes of operation was investigated and a parameter sensitivity analysis performed. The experimentally determined parameters had the greatest influence on model performance. They changed with scale and mode of operation, but were easily calculated. The remaining parameters, which were fitted using a differential evolutionary algorithm, were not as crucial. Logistic equation and statistical regression models were investigated as alternatives to the Monod kinetic model. They were less computationally intensive to develop due to the absence of a large parameter set. However, modeling of the nutrient and metabolite concentrations proved to be troublesome due to the logistic equation model structure and the inability of both models to incorporate a feed. The complexity, computational load, and effort required for model development has to be balanced with the necessary level of model sophistication when choosing which model type to develop for a particular application. Copyright © 2012 American Institute of Chemical Engineers (AIChE).
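
    As a minimal illustration of the "logistic equation model" class mentioned above, the sketch below fits a closed-form logistic growth curve to synthetic viable-cell-density data with scipy; the parameter names, values, and noise level are assumptions, not the study's fitted model.

```python
# Minimal sketch: fit a closed-form logistic growth curve to synthetic
# viable-cell-density data. Values are illustrative only.
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, x0, xmax, mu):
    """x0: initial density, xmax: maximum density, mu: max specific growth rate (1/h)."""
    return xmax * x0 * np.exp(mu * t) / (xmax + x0 * (np.exp(mu * t) - 1.0))

t = np.arange(0.0, 168.0, 12.0)                               # culture time, hours
x_obs = logistic(t, 0.3e6, 8.0e6, 0.045)
x_obs = x_obs * (1 + 0.05 * np.random.default_rng(1).normal(size=t.size))

popt, _ = curve_fit(logistic, t, x_obs, p0=[0.2e6, 5e6, 0.03])
print("fitted x0, xmax, mu:", popt)
```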

  8. [Proposal for a new funding system for mental health departments. Results from an evaluative multicentre Italian study (I-psycost)].

    PubMed

    Grigoletti, Laura; Amaddeo, Francesco; Grassi, Aldrigo; Boldrini, Massimo; Chiappelli, Marco; Percudani, Mauro; Catapano, Francesco; Fiorillo, Andrea; Bartoli, Luca; Bacigalupi, Maurizio; Albanese, Paolo; Simonetti, Simona; Perali, Federico; De Agostini, Paola; Tansella, Michele

    2006-01-01

    To obtain a new, well-balanced mental health funding system through the creation of (i) a list of psychiatric interventions provided by Italian Community-based Psychiatric Services (CPS), and their associated costs; and (ii) a new prospective funding system for patients with a high use of resources, based on packages of care. Five Italian Community-based Psychiatric Services collected data from 1250 patients during October 2002. Socio-demographic and clinical characteristics and GAF scores were collected at baseline. All psychiatric contacts during the following six months were registered and categorised into 24 service contact types. Using an elasticity equation and contact characteristics, we estimated the costs of care. Cluster analysis techniques identified packages of care. Logistic regression identified variables predictive of high resource use. A multinomial logistic model assigned each patient to a package of care. The sample's socio-demographic characteristics are broadly similar, but variations exist between the different CPS. Patients were then divided into two groups, and the group with the highest use of resources was divided into three smaller groups, based on the number and type of services provided. Our findings show how it is possible to develop a cost-predictive model that assigns patients with a high use of resources to a group providing the right level of care. For these patients it might be possible to apply a prospective per-capita funding system based on packages of care.

  9. Development of an Algorithm for Stroke Prediction: A National Health Insurance Database Study in Korea.

    PubMed

    Min, Seung Nam; Park, Se Jin; Kim, Dong Joon; Subramaniyam, Murali; Lee, Kyung-Sun

    2018-01-01

    Stroke is the second leading cause of death worldwide and remains an important health burden both for individuals and for national healthcare systems. Potentially modifiable risk factors for stroke include hypertension, cardiac disease, diabetes, dysregulation of glucose metabolism, atrial fibrillation, and lifestyle factors. We aimed to derive a model equation for developing a stroke pre-diagnosis algorithm using the potentially modifiable risk factors. We used logistic regression for model derivation, together with data from the database of the Korea National Health Insurance Service (NHIS). We reviewed the NHIS records of 500,000 enrollees. For the regression analysis, data regarding 367 stroke patients were selected. The control group consisted of 500 patients followed up for 2 consecutive years and with no history of stroke. We developed a logistic regression model based on information regarding several well-known modifiable risk factors. The developed model could correctly discriminate between normal subjects and stroke patients in 65% of cases. The model developed in the present study can be applied in the clinical setting to estimate the probability of stroke within a year and thus improve stroke prevention strategies in high-risk patients. The approach used to develop the stroke prevention algorithm can be applied for developing similar models for the pre-diagnosis of other diseases. © 2018 S. Karger AG, Basel.
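
    The sketch below illustrates the general shape of such a pre-diagnosis model: a logistic regression on a few modifiable risk factors, followed by a crude accuracy check. The covariates, coefficients, and data are synthetic and are not the NHIS-derived model.

```python
# Illustrative sketch only: logistic pre-diagnosis model on synthetic data.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
n = 2000
X = np.column_stack([
    rng.normal(130, 15, n),     # systolic blood pressure
    rng.normal(100, 25, n),     # fasting glucose
    rng.binomial(1, 0.2, n),    # atrial fibrillation (0/1)
    rng.binomial(1, 0.3, n),    # current smoker (0/1)
])
lin = -12 + 0.05 * X[:, 0] + 0.02 * X[:, 1] + 1.0 * X[:, 2] + 0.7 * X[:, 3]
y = rng.binomial(1, 1 / (1 + np.exp(-lin)))

model = sm.Logit(y, sm.add_constant(X)).fit(disp=False)
pred = (model.predict(sm.add_constant(X)) > 0.5).astype(int)
print(model.params)
print("in-sample accuracy:", (pred == y).mean())
```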

  10. Maximal regularity in lp spaces for discrete time fractional shifted equations

    NASA Astrophysics Data System (ADS)

    Lizama, Carlos; Murillo-Arcila, Marina

    2017-09-01

    In this paper, we present a new method based on operator-valued Fourier multipliers to characterize the existence and uniqueness of ℓp-solutions for discrete time fractional models in which A is a closed linear operator defined on a Banach space X and Δ^α denotes the Grünwald-Letnikov fractional derivative of order α > 0. If X is a UMD space, we provide this characterization only in terms of the R-boundedness of the operator-valued symbol associated with the abstract model. To illustrate our results, we derive new qualitative properties of nonlinear difference equations with shiftings, including fractional versions of the logistic and Nagumo equations.

  11. The Rangeland Hydrology and Erosion Model: A Dynamic Approach for Predicting Soil Loss on Rangelands

    NASA Astrophysics Data System (ADS)

    Hernandez, Mariano; Nearing, Mark A.; Al-Hamdan, Osama Z.; Pierson, Frederick B.; Armendariz, Gerardo; Weltz, Mark A.; Spaeth, Kenneth E.; Williams, C. Jason; Nouwakpo, Sayjro K.; Goodrich, David C.; Unkrich, Carl L.; Nichols, Mary H.; Holifield Collins, Chandra D.

    2017-11-01

    In this study, we present the improved Rangeland Hydrology and Erosion Model (RHEM V2.3), a process-based erosion prediction tool specific for rangeland application. The article provides the mathematical formulation of the model and parameter estimation equations. Model performance is assessed against data collected from 23 runoff and sediment events in a shrub-dominated semiarid watershed in Arizona, USA. To evaluate the model, two sets of primary model parameters were determined using the RHEM V2.3 and RHEM V1.0 parameter estimation equations. Testing of the parameters indicated that RHEM V2.3 parameter estimation equations provided a 76% improvement over RHEM V1.0 parameter estimation equations. Second, the RHEM V2.3 model was calibrated to measurements from the watershed. The parameters estimated by the new equations were within the lowest and highest values of the calibrated parameter set. These results suggest that the new parameter estimation equations can be applied for this environment to predict sediment yield at the hillslope scale. Furthermore, we also applied the RHEM V2.3 to demonstrate the response of the model as a function of foliar cover and ground cover for 124 data points across Arizona and New Mexico. The dependence of average sediment yield on surface ground cover was moderately stronger than that on foliar cover. These results demonstrate that RHEM V2.3 predicts runoff volume, peak runoff, and sediment yield with sufficient accuracy for broad application to assess and manage rangeland systems.

  12. Peak-flow characteristics of Virginia streams

    USGS Publications Warehouse

    Austin, Samuel H.; Krstolic, Jennifer L.; Wiegand, Ute

    2011-01-01

    Peak-flow annual exceedance probabilities, also called probability-percent chance flow estimates, and regional regression equations are provided describing the peak-flow characteristics of Virginia streams. Statistical methods are used to evaluate peak-flow data. Analysis of Virginia peak-flow data collected from 1895 through 2007 is summarized. Methods are provided for estimating unregulated peak flow of gaged and ungaged streams. Station peak-flow characteristics identified by fitting the logarithms of annual peak flows to a Log Pearson Type III frequency distribution yield annual exceedance probabilities of 0.5, 0.4292, 0.2, 0.1, 0.04, 0.02, 0.01, 0.005, and 0.002 for 476 streamgaging stations. Stream basin characteristics computed using spatial data and a geographic information system are used as explanatory variables in regional regression model equations for six physiographic regions to estimate regional annual exceedance probabilities at gaged and ungaged sites. Weighted peak-flow values that combine annual exceedance probabilities computed from gaging station data and from regional regression equations provide improved peak-flow estimates. Text, figures, and lists are provided summarizing selected peak-flow sites, delineated physiographic regions, peak-flow estimates, basin characteristics, regional regression model equations, error estimates, definitions, data sources, and candidate regression model equations. This study supersedes previous studies of peak flows in Virginia.

  13. August Median Streamflow on Ungaged Streams in Eastern Aroostook County, Maine

    USGS Publications Warehouse

    Lombard, Pamela J.; Tasker, Gary D.; Nielsen, Martha G.

    2003-01-01

    Methods for estimating August median streamflow were developed for ungaged, unregulated streams in the eastern part of Aroostook County, Maine, with drainage areas from 0.38 to 43 square miles and mean basin elevations from 437 to 1,024 feet. Few long-term, continuous-record streamflow-gaging stations with small drainage areas were available from which to develop the equations; therefore, 24 partial-record gaging stations were established in this investigation. A mathematical technique for estimating a standard low-flow statistic, August median streamflow, at partial-record stations was applied by relating base-flow measurements at these stations to concurrent daily flows at nearby long-term, continuous-record streamflow-gaging stations (index stations). Generalized least-squares regression analysis (GLS) was used to relate estimates of August median streamflow at gaging stations to basin characteristics at these same stations to develop equations that can be applied to estimate August median streamflow on ungaged streams. GLS accounts for varying periods of record at the gaging stations and the cross correlation of concurrent streamflows among gaging stations. Twenty-three partial-record stations and one continuous-record station were used for the final regression equations. The basin characteristics of drainage area and mean basin elevation are used in the calculated regression equation for ungaged streams to estimate August median flow. The equation has an average standard error of prediction from -38 to 62 percent. A one-variable equation uses only drainage area to estimate August median streamflow when less accuracy is acceptable. This equation has an average standard error of prediction from -40 to 67 percent. Model error is larger than sampling error for both equations, indicating that additional basin characteristics could be important to improved estimates of low-flow statistics. Weighted estimates of August median streamflow, which can be used when making estimates at partial-record or continuous-record gaging stations, range from 0.03 to 11.7 cubic feet per second or from 0.1 to 0.4 cubic feet per second per square mile. Estimates of August median streamflow on ungaged streams in the eastern part of Aroostook County, within the range of acceptable explanatory variables, range from 0.03 to 30 cubic feet per second or 0.1 to 0.7 cubic feet per second per square mile. Estimates of August median streamflow per square mile of drainage area generally increase as mean elevation and drainage area increase.

  14. Structural determinants of inconsistent condom use with clients among migrant sex workers: findings of longitudinal research in an urban canadian setting.

    PubMed

    Sou, Julie; Shannon, Kate; Li, Jane; Nguyen, Paul; Strathdee, Steffanie A; Shoveller, Jean; Goldenberg, Shira M

    2015-06-01

    Migrant women in sex work experience unique risks and protective factors related to their sexual health. Given the dearth of knowledge in high-income countries, we explored factors associated with inconsistent condom use by clients among migrant female sex workers over time in Vancouver, BC. Questionnaire and HIV/sexually transmitted infection testing data from a longitudinal cohort, An Evaluation of Sex Workers Health Access, were collected from 2010 to 2013. Logistic regression using generalized estimating equations was used to model correlates of inconsistent condom use by clients among international migrant sex workers over a 3-year study period. Of 685 participants, analyses were restricted to 182 (27%) international migrants who primarily originated from China. In multivariate generalized estimating equations analyses, difficulty accessing condoms (adjusted odds ratio [AOR], 3.76; 95% confidence interval [CI], 1.13-12.47) independently correlated with increased odds of inconsistent condom use by clients. Servicing clients in indoor sex work establishments (e.g., massage parlors) (AOR, 0.34; 95% CI, 0.15-0.77), and high school attainment (AOR, 0.22; 95% CI, 0.09-0.50) had independent protective effects on the odds of inconsistent condom use by clients. Findings of this longitudinal study highlight the persistent challenges faced by migrant sex workers in terms of accessing and using condoms. Migrant sex workers who experienced difficulty in accessing condoms were more than 3 times as likely to report inconsistent condom use by clients. Laws, policies, and programs promoting access to safer, decriminalized indoor work environments remain urgently needed to promote health, safety, and human rights for migrant workers in the sex industry.
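
    A minimal sketch of the analysis style described above, i.e., logistic regression fitted by generalized estimating equations with repeated observations per participant and an exchangeable working correlation; the variable names and data below are hypothetical.

```python
# Sketch: GEE logistic regression on synthetic repeated-measures data.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)
n_id, n_visits = 150, 4
df = pd.DataFrame({
    "pid": np.repeat(np.arange(n_id), n_visits),
    "condom_access_difficulty": rng.binomial(1, 0.25, n_id * n_visits),
    "indoor_venue": rng.binomial(1, 0.5, n_id * n_visits),
})
logit = -1.0 + 1.2 * df["condom_access_difficulty"] - 0.9 * df["indoor_venue"]
df["inconsistent_use"] = rng.binomial(1, 1 / (1 + np.exp(-logit)))

gee = smf.gee("inconsistent_use ~ condom_access_difficulty + indoor_venue",
              groups="pid", data=df,
              family=sm.families.Binomial(),
              cov_struct=sm.cov_struct.Exchangeable()).fit()
print(np.exp(gee.params))   # odds ratios
```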

  15. Extracurricular activity participation moderates impact of family and school factors on adolescents' disruptive behavioural problems.

    PubMed

    Driessens, Corine M E F

    2015-11-11

    The prevalence of problem behaviours among British adolescents has increased in the past decades. Following Erikson's psychosocial developmental theory and Bronfenbrenner's developmental ecological model, it was hypothesized that youth problem behaviour is shaped in part by the social environment. The aim of this project was to explore potential protective factors within the social environment of British youths for the presentation of disruptive behavioural problems. This study used secondary data from the Longitudinal Study of Young People in England, a cohort study of secondary school students. These data were analysed with generalized estimating equations to take the correlation between the longitudinal observations into account. Three models were built. The first model determined the effect of family, school, and extracurricular setting on the presentation of disruptive behavioural problems. The second model expanded the first by treating extracurricular activities as protective factors that moderated the association of family and school factors with disruptive behavioural problems. The third model described the effect of prior disruptive behaviour on current disruptive behaviour. Associations were found between school factors, family factors, involvement in extracurricular activities, and the presence of disruptive behavioural problems. Results from the second generalized estimating equation (GEE) logistic regression model indicated that extracurricular activities buffered the impact of school and family factors on the presence of disruptive behavioural problems. For instance, participation in sports activities decreased the effect of bullying on psychological distress. Results from the third model indicated that prior acts of disruptive behaviour reinforced current disruptive behaviour. This study supports Erikson's psychosocial developmental theory and Bronfenbrenner's developmental ecological model; social environment did influence the presence of disruptive behavioural problems for British adolescents. The potential contribution of extracurricular activities to intervention strategies addressing adolescents' disruptive behavioural problems is discussed.

  16. First-Order System Least-Squares for the Navier-Stokes Equations

    NASA Technical Reports Server (NTRS)

    Bochev, P.; Cai, Z.; Manteuffel, T. A.; McCormick, S. F.

    1996-01-01

    This paper develops a least-squares approach to the solution of the incompressible Navier-Stokes equations in primitive variables. As with our earlier work on Stokes equations, we recast the Navier-Stokes equations as a first-order system by introducing a velocity flux variable and associated curl and trace equations. We show that the resulting system is well-posed, and that an associated least-squares principle yields optimal discretization error estimates in the H(sup 1) norm in each variable (including the velocity flux) and optimal multigrid convergence estimates for the resulting algebraic system.

  17. Estimation of shortwave hemispherical reflectance (albedo) from bidirectionally reflected radiance data

    NASA Technical Reports Server (NTRS)

    Starks, Patrick J.; Norman, John M.; Blad, Blaine L.; Walter-Shea, Elizabeth A.; Walthall, Charles L.

    1991-01-01

    An equation for estimating albedo from bidirectional reflectance data is proposed. The estimates of albedo are found to be greater than values obtained with simultaneous pyranometer measurements. Particular attention is given to potential sources of systematic errors including extrapolation of bidirectional reflectance data out to a view zenith angle of 90 deg, the use of inappropriate weighting coefficients in the numerator of the albedo equation, surface shadowing caused by the A-frame instrumentation used to measure the incoming and outgoing radiation fluxes, errors in estimates of the denominator of the proposed albedo equation, and a 'hot spot' contribution in bidirectional data measured by a modular multiband radiometer.

  18. Orbital stability and energy estimate of ground states of saturable nonlinear Schrödinger equations with intensity functions in R2

    NASA Astrophysics Data System (ADS)

    Lin, Tai-Chia; Wang, Xiaoming; Wang, Zhi-Qiang

    2017-10-01

    Conventionally, the existence and orbital stability of ground states of nonlinear Schrödinger (NLS) equations with power-law nonlinearity (subcritical case) can be proved by an argument using strict subadditivity of the ground state energy and the concentration compactness method of Cazenave and Lions [4]. However, for saturable nonlinearity, such an argument is not applicable because strict subadditivity of the ground state energy fails in this case. Here we use a convexity argument to prove the existence and orbital stability of ground states of NLS equations with saturable nonlinearity and intensity functions in R2. Besides, we derive the energy estimate of ground states of saturable NLS equations with intensity functions using the eigenvalue estimate of saturable NLS equations without intensity function.

  19. Variational estimate method for solving autonomous ordinary differential equations

    NASA Astrophysics Data System (ADS)

    Mungkasi, Sudi

    2018-04-01

    In this paper, we propose a method for solving first-order autonomous ordinary differential equation problems using a variational estimate formulation. The variational estimate is constructed with a Lagrange multiplier that is chosen optimally, so that the formulation leads to an accurate solution to the problem. The variational estimate takes an integral form, which can be computed using computer software. As the variational estimate is an explicit formula, the solution is easy to compute. This is a great advantage of the variational estimate formulation.

  20. A comparison of Cox and logistic regression for use in genome-wide association studies of cohort and case-cohort design.

    PubMed

    Staley, James R; Jones, Edmund; Kaptoge, Stephen; Butterworth, Adam S; Sweeting, Michael J; Wood, Angela M; Howson, Joanna M M

    2017-06-01

    Logistic regression is often used instead of Cox regression to analyse genome-wide association studies (GWAS) of single-nucleotide polymorphisms (SNPs) and disease outcomes with cohort and case-cohort designs, as it is less computationally expensive. Although Cox and logistic regression models have been compared previously in cohort studies, this work does not completely cover the GWAS setting nor extend to the case-cohort study design. Here, we evaluated Cox and logistic regression applied to cohort and case-cohort genetic association studies using simulated data and genetic data from the EPIC-CVD study. In the cohort setting, there was a modest improvement in power to detect SNP-disease associations using Cox regression compared with logistic regression, which increased as the disease incidence increased. In contrast, logistic regression had more power than (Prentice weighted) Cox regression in the case-cohort setting. Logistic regression yielded inflated effect estimates (assuming the hazard ratio is the underlying measure of association) for both study designs, especially for SNPs with greater effect on disease. Given that logistic regression is substantially more computationally efficient than Cox regression in both settings, we propose a two-step approach to GWAS in cohort and case-cohort studies: first, analyse all SNPs with logistic regression to identify associated variants below a pre-defined P-value threshold; second, fit Cox regression (appropriately weighted in case-cohort studies) to the identified SNPs to ensure accurate estimation of the association with disease.
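
    A toy sketch of the proposed two-step idea, screening SNPs with logistic regression and refitting the survivors with Cox regression; the data, screening threshold, and effect sizes are synthetic, and a real GWAS pipeline would use dedicated tooling rather than a per-SNP Python loop.

```python
# Toy two-step sketch: logistic screen, then Cox refit of the survivors.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
n, n_snps = 3000, 50
G = rng.binomial(2, 0.3, size=(n, n_snps)).astype(float)
risk = 0.4 * G[:, 0]                                  # only SNP 0 is causal
time = rng.exponential(scale=np.exp(-risk))           # event times
cutoff = np.quantile(time, 0.3)
event = (time < cutoff).astype(int)                   # ~30% observed events
time = np.minimum(time, cutoff)                       # administrative censoring

# Step 1: fast logistic screen on the binary event indicator
keep = []
for j in range(n_snps):
    fit = sm.Logit(event, sm.add_constant(G[:, [j]])).fit(disp=False)
    if fit.pvalues[1] < 1e-3:                         # pre-defined threshold
        keep.append(j)

# Step 2: Cox refit for the SNPs that pass the screen
for j in keep:
    cox = sm.PHReg(time, G[:, [j]], status=event).fit()
    print(f"SNP {j}: log hazard ratio = {cox.params[0]:.3f}")
```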

  1. Bifurcation and Fractal of the Coupled Logistic Map

    NASA Astrophysics Data System (ADS)

    Wang, Xingyuan; Luo, Chao

    The nature of the fixed points of the coupled Logistic map is investigated, and the boundary equation of the first bifurcation of the map in parameter space is given. Using quantitative criteria and indicators of chaos, i.e., phase portraits, bifurcation diagrams, power spectra, fractal-dimension computation, and Lyapunov exponents, the paper characterizes the transition of the coupled Logistic map from regularity to chaos and draws the following conclusions: (1) chaotic patterns of the coupled Logistic map may emerge from period-doubling bifurcation and from Hopf bifurcation, respectively; (2) during period-doubling bifurcation, the system exhibits self-similarity and scale invariance in both parameter space and phase space. From the study of the attraction basin and the Mandelbrot-Julia sets of the coupled Logistic map, the following conclusions are drawn: (1) the boundary between the periodic and quasiperiodic regions is fractal, which indicates that the eventual motion of points in the phase plane cannot be predicted; (2) the structures of the Mandelbrot-Julia sets are determined by the control parameters, and their boundaries are fractal.
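
    The sketch below iterates one common convention for a symmetrically coupled pair of logistic maps, x' = (1-eps)*f(x) + eps*f(y) and symmetrically for y, with f(u) = mu*u*(1-u); the paper's exact coupling form may differ. Counting the distinct states visited after transients gives a crude indication of periodic versus chaotic behaviour.

```python
# Crude exploration of a symmetrically coupled logistic map (assumed coupling form).
import numpy as np

def coupled_logistic_orbit(mu, eps, x0=0.3, y0=0.6, n_transient=2000, n_keep=64):
    f = lambda u: mu * u * (1.0 - u)
    x, y = x0, y0
    for _ in range(n_transient):                      # discard transients
        x, y = (1 - eps) * f(x) + eps * f(y), (1 - eps) * f(y) + eps * f(x)
    orbit = []
    for _ in range(n_keep):                           # record the attractor
        x, y = (1 - eps) * f(x) + eps * f(y), (1 - eps) * f(y) + eps * f(x)
        orbit.append((round(x, 6), round(y, 6)))
    return sorted(set(orbit))

for mu in (2.8, 3.2, 3.9):
    states = coupled_logistic_orbit(mu, eps=0.1)
    label = "chaotic/quasi-periodic" if len(states) > 16 else f"period-{len(states)}"
    print(f"mu={mu}: {label}")
```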

  2. Alternative supply specifications and estimates of regional supply and demand for stumpage.

    Treesearch

    Kent P. Connaughton; David H. Jackson; Gerard A. Majerus

    1988-01-01

    Four plausible sets of stumpage supply and demand equations were developed and estimated; the demand equation was the same for each set, although the supply equation differed. The supply specifications varied from the model of regional excess demand in which National Forest harvest levels were assumed fixed to a more realistic model in which the harvest on the National...

  3. Cycle-time equations for five small tractors operating in low-volume small-diameter hardwood stands

    Treesearch

    Chris B. LeDoux; Neil K. Huyler; Neil K. Huyler

    1992-01-01

    Prediction equations for estimating cycle time were developed for five small tractors studied under various silvicultural treatments and operating conditions. The tractors studied included the Pasquali 933, a Holder A60F, a Forest Ant Forwarder (Skogsman), a Massey-Ferguson, and a Sam4 Minitarus. Skidding costs were estimated based on the cycle-time equations. Using...

  4. Estimation of delays and other parameters in nonlinear functional differential equations

    NASA Technical Reports Server (NTRS)

    Banks, H. T.; Lamm, P. K. D.

    1983-01-01

    A spline-based approximation scheme for nonlinear nonautonomous delay differential equations is discussed. Convergence results (using dissipative type estimates on the underlying nonlinear operators) are given in the context of parameter estimation problems which include estimation of multiple delays and initial data as well as the usual coefficient-type parameters. A brief summary of some of the related numerical findings is also given.

  5. Predictive equations for the estimation of body size in seals and sea lions (Carnivora: Pinnipedia)

    PubMed Central

    Churchill, Morgan; Clementz, Mark T; Kohno, Naoki

    2014-01-01

    Body size plays an important role in pinniped ecology and life history. However, body size data is often absent for historical, archaeological, and fossil specimens. To estimate the body size of pinnipeds (seals, sea lions, and walruses) for today and the past, we used 14 commonly preserved cranial measurements to develop sets of single variable and multivariate predictive equations for pinniped body mass and total length. Principal components analysis (PCA) was used to test whether separate family specific regressions were more appropriate than single predictive equations for Pinnipedia. The influence of phylogeny was tested with phylogenetic independent contrasts (PIC). The accuracy of these regressions was then assessed using a combination of coefficient of determination, percent prediction error, and standard error of estimation. Three different methods of multivariate analysis were examined: bidirectional stepwise model selection using Akaike information criteria; all-subsets model selection using Bayesian information criteria (BIC); and partial least squares regression. The PCA showed clear discrimination between Otariidae (fur seals and sea lions) and Phocidae (earless seals) for the 14 measurements, indicating the need for family-specific regression equations. The PIC analysis found that phylogeny had a minor influence on relationship between morphological variables and body size. The regressions for total length were more accurate than those for body mass, and equations specific to Otariidae were more accurate than those for Phocidae. Of the three multivariate methods, the all-subsets approach required the fewest number of variables to estimate body size accurately. We then used the single variable predictive equations and the all-subsets approach to estimate the body size of two recently extinct pinniped taxa, the Caribbean monk seal (Monachus tropicalis) and the Japanese sea lion (Zalophus japonicus). Body size estimates using single variable regressions generally under or over-estimated body size; however, the all-subset regression produced body size estimates that were close to historically recorded body length for these two species. This indicates that the all-subset regression equations developed in this study can estimate body size accurately. PMID:24916814

  6. Methods for estimating magnitude and frequency of peak flows for natural streams in Utah

    USGS Publications Warehouse

    Kenney, Terry A.; Wilkowske, Chris D.; Wright, Shane J.

    2007-01-01

    Estimates of the magnitude and frequency of peak streamflows are critical for the safe and cost-effective design of hydraulic structures and stream crossings, and for accurate delineation of flood plains. Engineers, planners, resource managers, and scientists need accurate estimates of peak-flow return frequencies for locations on streams with and without streamflow-gaging stations. The 2-, 5-, 10-, 25-, 50-, 100-, 200-, and 500-year recurrence-interval flows were estimated for 344 unregulated U.S. Geological Survey streamflow-gaging stations in Utah and in nearby bordering states. These data, along with 23 basin and climatic characteristics computed for each station, were used to develop regional peak-flow frequency and magnitude regression equations for 7 geohydrologic regions of Utah. These regression equations can be used to estimate the magnitude and frequency of peak flows for natural streams in Utah within the presented range of predictor variables. Uncertainty, presented as the average standard error of prediction, was computed for each developed equation. Equations developed using data from more than 35 gaging stations had standard errors of prediction that ranged from 35 to 108 percent, and errors for equations developed using data from fewer than 35 gaging stations ranged from 50 to 357 percent.

  7. 78 FR 48422 - 36(b)(1) Arms Sales Notification

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-08-08

    ... other related elements of program and logistics support. The estimated cost is $300 million. This... and logistical support. There will be no adverse impact on U.S. defense readiness as a result of this...

  8. The Cost of Ménière's Disease: A Novel Multisource Approach.

    PubMed

    Tyrrell, Jessica; Whinney, David J; Taylor, Timothy

    2016-01-01

    To estimate the annual cost of Ménière's disease and the cost per person in the UK population and to investigate the direct and indirect costs of the condition. The authors utilized a multidata approach to provide the first estimate of the cost of Ménière's. Data from the UK Biobank (a study of 500,000 individuals collected between 2007 and 2012), the Hospital Episode Statistics (data on all hospital admissions in England from 2008 to 2012), and the UK Ménière's Society (2014) were used to estimate the cost of Ménière's. Cases were self-reported in the UK Biobank and the UK Ménière's Society; within the Hospital Episode Statistics, cases were clinician diagnosed. The authors estimated the direct and indirect costs of the condition, using count data to represent numbers of individuals reporting specific treatments, operations, etc., and basic statistical analyses (χ² tests, linear and logistic regression) to compare cases and controls in the UK Biobank. Ménière's was estimated to cost between £541.30 million and £608.70 million annually (equivalent to US $829.9 to $934.2 million), equating to £3,341 to £3,757 ($5,112 to $5,748) per person per annum. The indirect costs were substantial, with loss of earnings contributing over £400 million per annum. For the first time, the authors were able to estimate the economic burden of Ménière's disease. In the UK, the annual cost of this condition is substantial. Further research is required to develop cost-effective treatments and management strategies for Ménière's to reduce the economic burden of the disease. These findings should be interpreted with caution due to the uncertainties inherent in the analysis.

  9. Microcomputer Simulation of Nonlinear Systems: From Oscillations to Chaos.

    ERIC Educational Resources Information Center

    Raw, Cecil J. G.; Stacey, Larry M.

    1989-01-01

    Presents two short microcomputer programs which illustrate features of nonlinear dynamics, including steady states, periodic oscillations, period doubling, and chaos. Logistic maps are explained, inclusion in undergraduate chemistry and physics courses to teach nonlinear equations is discussed, and applications in social and biological sciences…

  10. Re-evaluating neonatal-age models for ungulates: Does model choice affect survival estimates?

    USGS Publications Warehouse

    Grovenburg, Troy W.; Monteith, Kevin L.; Jacques, Christopher N.; Klaver, Robert W.; DePerno, Christopher S.; Brinkman, Todd J.; Monteith, Kyle B.; Gilbert, Sophie L.; Smith, Joshua B.; Bleich, Vernon C.; Swanson, Christopher C.; Jenks, Jonathan A.

    2014-01-01

    New-hoof growth is regarded as the most reliable metric for predicting age of newborn ungulates, but variation in estimated age among hoof-growth equations that have been developed may affect estimates of survival in staggered-entry models. We used known-age newborns to evaluate variation in age estimates among existing hoof-growth equations and to determine the consequences of that variation on survival estimates. During 2001–2009, we captured and radiocollared 174 newborn (≤24-hrs old) ungulates: 76 white-tailed deer (Odocoileus virginianus) in Minnesota and South Dakota, 61 mule deer (O. hemionus) in California, and 37 pronghorn (Antilocapra americana) in South Dakota. Estimated age of known-age newborns differed among hoof-growth models and varied by >15 days for white-tailed deer, >20 days for mule deer, and >10 days for pronghorn. Accuracy (i.e., the proportion of neonates assigned to the correct age) in aging newborns using published equations ranged from 0.0% to 39.4% in white-tailed deer, 0.0% to 3.3% in mule deer, and was 0.0% for pronghorns. Results of survival modeling indicated that variability in estimates of age-at-capture affected short-term estimates of survival (i.e., 30 days) for white-tailed deer and mule deer, and survival estimates over a longer time frame (i.e., 120 days) for mule deer. Conversely, survival estimates for pronghorn were not affected by estimates of age. Our analyses indicate that modeling survival in daily intervals is too fine a temporal scale when age-at-capture is unknown given the potential inaccuracies among equations used to estimate age of neonates. Instead, weekly survival intervals are more appropriate because most models accurately predicted ages within 1 week of the known age. Variation among results of neonatal-age models on short- and long-term estimates of survival for known-age young emphasizes the importance of selecting an appropriate hoof-growth equation and appropriately defining intervals (i.e., weekly versus daily) for estimating survival.

  11. On the logistic equation subject to uncertainties in the environmental carrying capacity and initial population density

    NASA Astrophysics Data System (ADS)

    Dorini, F. A.; Cecconello, M. S.; Dorini, L. B.

    2016-04-01

    It is recognized that handling uncertainty is essential to obtain more reliable results in modeling and computer simulation. This paper aims to discuss the logistic equation subject to uncertainties in two parameters: the environmental carrying capacity, K, and the initial population density, N0. We first provide the closed-form results for the first probability density function of time-population density, N(t), and its inflection point, t*. We then use the Maximum Entropy Principle to determine both K and N0 density functions, treating such parameters as independent random variables and considering fluctuations of their values for a situation that commonly occurs in practice. Finally, closed-form results for the density functions and statistical moments of N(t), for a fixed t > 0, and of t* are provided, considering the uniform distribution case. We carried out numerical experiments to validate the theoretical results and compared them against that obtained using Monte Carlo simulation.
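
    A Monte Carlo illustration of the setting above: uniform uncertainty in K and N0 propagated through the closed-form logistic solution N(t) = K*N0*exp(r*t) / (K + N0*(exp(r*t) - 1)) and through the inflection time t* = ln((K - N0)/N0)/r. The growth rate and parameter ranges are assumptions for the sketch; the paper derives the densities in closed form rather than by simulation.

```python
# Monte Carlo propagation of uncertainty in K and N0 through the logistic solution.
import numpy as np

rng = np.random.default_rng(0)
n_samples = 100_000
r = 0.8
K = rng.uniform(80.0, 120.0, n_samples)    # uncertain carrying capacity
N0 = rng.uniform(5.0, 15.0, n_samples)     # uncertain initial density

def logistic_solution(t):
    return K * N0 * np.exp(r * t) / (K + N0 * (np.exp(r * t) - 1.0))

for t in (1.0, 3.0, 6.0):
    N = logistic_solution(t)
    print(f"t={t}: mean={N.mean():.2f}, std={N.std():.2f}")

# Inflection time t*, where N(t*) = K/2, from the closed-form solution
t_star = np.log((K - N0) / N0) / r
print(f"inflection time t*: mean={t_star.mean():.2f}, std={t_star.std():.2f}")
```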

  12. Bootstrap Estimation of Sample Statistic Bias in Structural Equation Modeling.

    ERIC Educational Resources Information Center

    Thompson, Bruce; Fan, Xitao

    This study empirically investigated bootstrap bias estimation in the area of structural equation modeling (SEM). Three correctly specified SEM models were used under four different sample size conditions. Monte Carlo experiments were carried out to generate the criteria against which bootstrap bias estimation should be judged. For SEM fit indices,…

  13. Estimating total forest biomass in New York, 1993

    Treesearch

    Eric Wharton; Carol Alerich; David A. Drake; David A. Drake

    1997-01-01

    Presents methods for synthesizing information from existing biomass literature for estimating biomass over extensive forest areas with specific applications to New York. Tables of appropriate regression equations and the tree and shrub species to which these equations can be applied are presented well as biomass estimates at the county, geographic unit, and state level...

  14. Urban stormwater quality, event-mean concentrations, and estimates of stormwater pollutant loads, Dallas-Fort Worth area, Texas, 1992-93

    USGS Publications Warehouse

    Baldys, Stanley; Raines, T.H.; Mansfield, B.L.; Sandlin, J.T.

    1998-01-01

    Local regression equations were developed to estimate loads produced by individual storms. Mean annual loads were estimated by applying the storm-load equations for all runoff-producing storms in an average climatic year and summing individual storm loads to determine the annual load.

  15. Estimation of Missing Water-Level Data for the Everglades Depth Estimation Network (EDEN)

    USGS Publications Warehouse

    Conrads, Paul; Petkewich, Matthew D.

    2009-01-01

    The Everglades Depth Estimation Network (EDEN) is an integrated network of real-time water-level gaging stations, ground-elevation models, and water-surface elevation models designed to provide scientists, engineers, and water-resource managers with current (2000-2009) water-depth information for the entire freshwater portion of the greater Everglades. The U.S. Geological Survey Greater Everglades Priority Ecosystems Science program supports EDEN and its goal of providing quality-assured monitoring data for the U.S. Army Corps of Engineers Comprehensive Everglades Restoration Plan. To increase the accuracy of the daily water-surface elevation model, water-level estimation equations were developed to fill missing data. To minimize cases in which no estimate can be made because data are missing for an input station, a minimum of three linear regression equations were developed for each station using different input stations. Of the 726 water-level estimation equations developed to fill missing data at 239 stations, more than 60 percent have coefficients of determination greater than 0.90, and 92 percent have a coefficient of determination greater than 0.70.
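
    A sketch of the gap-filling idea described above: fit several linear regressions for a target gage from different index gages, then fill each missing day with the first equation whose input is available. The station names, water levels, and gap pattern below are hypothetical.

```python
# Sketch: fill a target gage's missing days from alternative index-gage regressions.
import numpy as np
import pandas as pd

rng = np.random.default_rng(5)
days = pd.date_range("2008-01-01", periods=365, freq="D")
idx_a = 2.0 + 0.3 * np.sin(np.arange(365) / 58.0) + rng.normal(0, 0.02, 365)
idx_b = idx_a + 0.15 + rng.normal(0, 0.03, 365)
target = 0.5 + 0.9 * idx_a + rng.normal(0, 0.02, 365)

df = pd.DataFrame({"target": target, "index_a": idx_a, "index_b": idx_b}, index=days)
df.loc["2008-06-01":"2008-06-10", "target"] = np.nan     # a data gap at the target
df.loc["2008-06-05":"2008-06-10", "index_a"] = np.nan    # index A also down

def fit(x, y):
    """Simple linear regression y ~ x on the days where both are observed."""
    ok = x.notna() & y.notna()
    slope, intercept = np.polyfit(x[ok], y[ok], 1)
    return lambda v: intercept + slope * v

eq_a, eq_b = fit(df["index_a"], df["target"]), fit(df["index_b"], df["target"])

filled = df["target"].copy()
use_a = filled.isna() & df["index_a"].notna()
filled[use_a] = eq_a(df.loc[use_a, "index_a"])           # first-choice equation
use_b = filled.isna()
filled[use_b] = eq_b(df.loc[use_b, "index_b"])           # fallback equation
print(filled["2008-06-01":"2008-06-10"])
```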

  16. Validity of bioelectrical impedance analysis in estimation of fat-free mass in colorectal cancer patients.

    PubMed

    Ræder, Hanna; Kværner, Ane Sørlie; Henriksen, Christine; Florholmen, Geir; Henriksen, Hege Berg; Bøhn, Siv Kjølsrud; Paur, Ingvild; Smeland, Sigbjørn; Blomhoff, Rune

    2018-02-01

    Bioelectrical impedance analysis (BIA) is an accessible and cheap method to measure fat-free mass (FFM). However, BIA estimates are subject to uncertainty in patient populations with altered body composition and hydration. The aim of the current study was to validate a whole-body and a segmental BIA device against dual-energy X-ray absorptiometry (DXA) in colorectal cancer (CRC) patients, and to investigate the ability of different empiric equations for BIA to predict DXA FFM (FFM_DXA). Forty-three non-metastatic CRC patients (aged 50-80 years) were enrolled in this study. Whole-body and segmental BIA FFM estimates (FFM_whole-bodyBIA, FFM_segmentalBIA) were calculated using 14 empiric equations, including the equations from the manufacturers, before comparison to FFM_DXA estimates. Strong linear relationships were observed between FFM_BIA and FFM_DXA estimates for all equations (R² = 0.94-0.98 for both devices). However, there were large discrepancies in FFM estimates depending on the equations used, with mean differences in the ranges -6.5-6.8 kg and -11.0-3.4 kg for whole-body and segmental BIA, respectively. For whole-body BIA, 77% of BIA-derived FFM estimates were significantly different from FFM_DXA, whereas for segmental BIA, 85% were significantly different. For whole-body BIA, the Schols* equation gave the highest agreement with FFM_DXA, with a mean difference ± SD of -0.16 ± 1.94 kg (p = 0.582). The manufacturer's equation gave a small overestimation of FFM of 1.46 ± 2.16 kg (p < 0.001), with a tendency towards proportional bias (r = 0.28, p = 0.066). For segmental BIA, the Heitmann* equation gave the highest agreement with FFM_DXA (0.17 ± 1.83 kg; p = 0.546). Using the manufacturer's equation, no difference in FFM estimates was observed (-0.34 ± 2.06 kg; p = 0.292); however, a clear proportional bias was detected (r = 0.69, p < 0.001). Both devices demonstrated acceptable ability to detect low FFM compared to DXA using the optimal equation. In a population of non-metastatic CRC patients, mostly consisting of Caucasian adults and with a wide range of body composition measures, both the whole-body and segmental BIA devices provide FFM estimates that are comparable to FFM_DXA at the group level when the appropriate equations are applied. At the individual level (i.e., in clinical practice) BIA may be a valuable tool to identify patients with low FFM as part of a malnutrition diagnosis. Copyright © 2017 The Author(s). Published by Elsevier Ltd. All rights reserved.
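
    The agreement statistics quoted above (mean difference ± SD, a paired test, and a check for proportional bias) can be computed as in the sketch below, here on synthetic FFM values rather than the study data.

```python
# Sketch of device-agreement statistics on synthetic FFM values.
import numpy as np
from scipy import stats

rng = np.random.default_rng(11)
ffm_dxa = rng.normal(50, 8, 43)                         # "reference" FFM, kg
ffm_bia = 1.03 * ffm_dxa - 1.0 + rng.normal(0, 2, 43)   # device estimate

diff = ffm_bia - ffm_dxa
mean_pair = (ffm_bia + ffm_dxa) / 2.0

t_stat, p_paired = stats.ttest_rel(ffm_bia, ffm_dxa)
r_prop, p_prop = stats.pearsonr(diff, mean_pair)        # proportional-bias check

print(f"mean difference = {diff.mean():.2f} +/- {diff.std(ddof=1):.2f} kg "
      f"(paired t-test p = {p_paired:.3f})")
print(f"proportional bias: r = {r_prop:.2f}, p = {p_prop:.3f}")
```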

  17. Iohexol clearance is superior to creatinine-based renal function estimating equations in detecting short-term renal function decline in chronic heart failure

    PubMed Central

    Cvan Trobec, Katja; Kerec Kos, Mojca; von Haehling, Stephan; Anker, Stefan D.; Macdougall, Iain C.; Ponikowski, Piotr; Lainscak, Mitja

    2015-01-01

    Aim To compare the performance of iohexol plasma clearance and creatinine-based renal function estimating equations in monitoring longitudinal renal function changes in chronic heart failure (CHF) patients, and to assess the effects of body composition on the equation performance. Methods Iohexol plasma clearance was measured in 43 CHF patients at baseline and after at least 6 months. Simultaneously, renal function was estimated with five creatinine-based equations (four- and six-variable Modification of Diet in Renal Disease, Cockcroft-Gault, Cockcroft-Gault adjusted for lean body mass, Chronic Kidney Disease Epidemiology Collaboration equation) and body composition was assessed using bioimpedance and dual-energy x-ray absorptiometry. Results Over a median follow-up of 7.5 months (range 6-17 months), iohexol clearance significantly declined (52.8 vs 44.4 mL/[min ×1.73 m2], P = 0.001). This decline was significantly higher in patients receiving mineralocorticoid receptor antagonists at baseline (mean decline -22% of baseline value vs -3%, P = 0.037). Mean serum creatinine concentration did not change significantly during follow-up and no creatinine-based renal function estimating equation was able to detect the significant longitudinal decline of renal function determined by iohexol clearance. After accounting for body composition, the accuracy of the equations improved, but not their ability to detect renal function decline. Conclusions Renal function measured with iohexol plasma clearance showed relevant decline in CHF patients, particularly in those treated with mineralocorticoid receptor antagonists. None of the equations for renal function estimation was able to detect these changes. ClinicalTrials.gov registration number NCT01829880 PMID:26718759

  18. The National Flood Frequency Program, version 3 : a computer program for estimating magnitude and frequency of floods for ungaged sites

    USGS Publications Warehouse

    Ries, Kernell G.; Crouse, Michele Y.

    2002-01-01

    For many years, the U.S. Geological Survey (USGS) has been developing regional regression equations for estimating flood magnitude and frequency at ungaged sites. These regression equations are used to transfer flood characteristics from gaged to ungaged sites through the use of watershed and climatic characteristics as explanatory or predictor variables. Generally, these equations have been developed on a Statewide or metropolitan-area basis as part of cooperative study programs with specific State Departments of Transportation. In 1994, the USGS released a computer program titled the National Flood Frequency Program (NFF), which compiled all the USGS available regression equations for estimating the magnitude and frequency of floods in the United States and Puerto Rico. NFF was developed in cooperation with the Federal Highway Administration and the Federal Emergency Management Agency. Since the initial release of NFF, the USGS has produced new equations for many areas of the Nation. A new version of NFF has been developed that incorporates these new equations and provides additional functionality and ease of use. NFF version 3 provides regression-equation estimates of flood-peak discharges for unregulated rural and urban watersheds, flood-frequency plots, and plots of typical flood hydrographs for selected recurrence intervals. The Program also provides weighting techniques to improve estimates of flood-peak discharges for gaging stations and ungaged sites. The information provided by NFF should be useful to engineers and hydrologists for planning and design applications. This report describes the flood-regionalization techniques used in NFF and provides guidance on the applicability and limitations of the techniques. The NFF software and the documentation for the regression equations included in NFF are available at http://water.usgs.gov/software/nff.html.

  19. Validation of prediction equations for estimating resting energy expenditure in obese Chinese children.

    PubMed

    Chan, Dorothy F Y; Li, Albert M; Chan, Michael H M; So, Hung Kwan; Chan, Iris H S; Yin, Jane A T; Lam, Christopher W K; Fok, Tai Fai; Nelson, Edmund A S

    2009-01-01

    (1) To examine the validity of existing prediction equations (PREE) for estimating resting energy expenditure (REE) in obese Chinese children, (2) to correlate the measured REE (MREE) with anthropometric and biochemical parameters, and (3) to derive a new PREE for local use. Cross-sectional study. 100 obese children (71 boys) were studied. All subjects underwent physical examination and anthropometric measurement. Upper-body and central body fat distribution were characterized by the centrality and conicity indexes, respectively, and REE was measured by indirect calorimetry. Fat-free mass (FFM) was measured by DEXA scan. Thirteen existing prediction equations for estimating REE were compared with MREE among these obese children. Fasting blood samples for glucose, lipid profile, and insulin were obtained. The overall, male, and female median MREEs were 7.1 MJ/d (IR 6.2-8.4), 7.3 MJ/d (IR 6.3-9.7), and 6.9 MJ/d (IR 5.6-8.1), respectively. No sex difference was noted in MREE (p=0.203). Most of the equations except the Schofield equation underestimated the REE of our children. By multiple linear regression, MREE was positively correlated with FFM (p<0.0001), conicity index (p<0.001), and centrality index (p=0.001). A new equation for estimating REE for local use was derived as: REE = (17.4*logFFM) + (11.4*conicity index) - (2.4*centrality index) - 31.3. The mean difference of new PREE-MREE was -0.011 MJ/d (SD 1.51), with an interclass correlation coefficient of 0.91. None of the existing prediction equations were accurate in their estimation of REE when applied to obese Chinese children. A new prediction equation has been derived for local use.
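
    The derived equation is quoted in the abstract; the sketch below simply encodes it as printed. The abstract does not state the logarithm base or the output units, so base-10 and MJ/day are assumed here for illustration.

```python
import math

def predicted_ree_mj_per_day(ffm_kg, conicity_index, centrality_index):
    """Derived equation as quoted in the abstract:
    REE = 17.4*log(FFM) + 11.4*conicity - 2.4*centrality - 31.3.
    Logarithm base (10) and output units (MJ/day) are assumptions."""
    return (17.4 * math.log10(ffm_kg)
            + 11.4 * conicity_index
            - 2.4 * centrality_index
            - 31.3)

print(predicted_ree_mj_per_day(ffm_kg=40.0, conicity_index=1.30,
                               centrality_index=1.05))   # about 8.9 (assumed MJ/day)
```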

  20. Accounting for Epistemic Uncertainty in Mission Supportability Assessment: A Necessary Step in Understanding Risk and Logistics Requirements

    NASA Technical Reports Server (NTRS)

    Owens, Andrew; De Weck, Olivier L.; Stromgren, Chel; Goodliff, Kandyce; Cirillo, William

    2017-01-01

    Future crewed missions to Mars present a maintenance logistics challenge that is unprecedented in human spaceflight. Mission endurance – defined as the time between resupply opportunities – will be significantly longer than previous missions, and therefore logistics planning horizons are longer and the impact of uncertainty is magnified. Maintenance logistics forecasting typically assumes that component failure rates are deterministically known and uses them to represent aleatory uncertainty, or uncertainty that is inherent to the process being examined. However, failure rates cannot be directly measured; rather, they are estimated based on similarity to other components or statistical analysis of observed failures. As a result, epistemic uncertainty – that is, uncertainty in knowledge of the process – exists in failure rate estimates that must be accounted for. Analyses that neglect epistemic uncertainty tend to significantly underestimate risk. Epistemic uncertainty can be reduced via operational experience; for example, the International Space Station (ISS) failure rate estimates are refined using a Bayesian update process. However, design changes may re-introduce epistemic uncertainty. Thus, there is a tradeoff between changing a design to reduce failure rates and operating a fixed design to reduce uncertainty. This paper examines the impact of epistemic uncertainty on maintenance logistics requirements for future Mars missions, using data from the ISS Environmental Control and Life Support System (ECLS) as a baseline for a case study. Sensitivity analyses are performed to investigate the impact of variations in failure rate estimates and epistemic uncertainty on spares mass. The results of these analyses and their implications for future system design and mission planning are discussed.
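
    The Bayesian refinement of failure-rate estimates mentioned above is often illustrated with a gamma-Poisson conjugate update: a gamma prior on the failure rate (capturing epistemic uncertainty) is updated with observed failures over accumulated operating time. The prior parameters and observations below are hypothetical, not ISS ECLS values.

```python
# Gamma-Poisson conjugate update of a component failure rate (hypothetical numbers).
from scipy import stats

# Prior: mean of 1 failure per 10,000 hours, with wide (epistemic) uncertainty
alpha0, beta0 = 2.0, 20_000.0            # gamma shape / rate (per-hour units)

observed_failures = 3
operating_hours = 50_000.0

alpha_post = alpha0 + observed_failures  # conjugate update
beta_post = beta0 + operating_hours

for label, a, b in [("prior", alpha0, beta0), ("posterior", alpha_post, beta_post)]:
    dist = stats.gamma(a, scale=1.0 / b)
    lo, hi = dist.ppf([0.05, 0.95])
    print(f"{label}: mean rate = {a / b:.2e} /h, "
          f"90% interval = [{lo:.2e}, {hi:.2e}] /h")
```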

  1. A new model for simulating microbial cyanide production and optimizing the medium parameters for recovering precious metals from waste printed circuit boards.

    PubMed

    Yuan, Zhihui; Ruan, Jujun; Li, Yaying; Qiu, Rongliang

    2018-04-10

    Bioleaching is a green recycling technology for recovering precious metals from waste printed circuit boards (WPCBs). However, this technology requires increased cyanide production to obtain desirable recovery efficiency. Luria-Bertani medium (LB medium, containing tryptone 10 g/L, yeast extract 5 g/L, NaCl 10 g/L) has commonly been used in bioleaching of precious metals. In this study, results showed that LB medium did not produce the highest yield of cyanide. Under optimal culture conditions (25 °C, pH 7.5), the maximum cyanide yield of the optimized medium (containing tryptone 6 g/L and yeast extract 5 g/L) was 1.5 times as high as that of LB medium. In addition, the kinetics of cell growth and cyanide production, and the relationship between them, were studied. Cell-growth data fitted the logistic model well. An allometric model was shown to be effective in describing the relationship between cell growth and cyanide production. By inserting the logistic equation into the allometric equation, we obtained a novel hybrid equation containing five parameters. Kinetic data for cyanide production were well fitted by the new model. The model parameters reflected both the cell growth and the cyanide production processes. Copyright © 2018 Elsevier B.V. All rights reserved.
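
    A sketch of the model-building sequence described above: fit a logistic curve to cell-growth data, fit an allometric relation P = a*X**b between cyanide and cell density, and form the five-parameter hybrid by substituting the logistic solution for X. All data and parameter values below are synthetic.

```python
# Sketch: logistic growth fit, allometric fit, and the resulting hybrid model.
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, x0, xmax, mu):
    return xmax * x0 * np.exp(mu * t) / (xmax + x0 * (np.exp(mu * t) - 1.0))

rng = np.random.default_rng(2)
t = np.linspace(0, 72, 25)                              # hours
cells = logistic(t, 0.05, 1.8, 0.15) * (1 + 0.03 * rng.normal(size=t.size))
cyanide = 0.9 * cells ** 1.2 * (1 + 0.05 * rng.normal(size=t.size))

# Step 1: logistic fit to cell growth
(x0, xmax, mu), _ = curve_fit(logistic, t, cells, p0=[0.1, 1.5, 0.1])

# Step 2: allometric fit P = a * X**b between cyanide and cell density
(a, b), _ = curve_fit(lambda x, a, b: a * x ** b, cells, cyanide, p0=[1.0, 1.0])

# Hybrid model: substitute the logistic solution into the allometric relation
hybrid = lambda tt: a * logistic(tt, x0, xmax, mu) ** b
print(f"x0={x0:.3f}, xmax={xmax:.3f}, mu={mu:.3f}, a={a:.3f}, b={b:.3f}")
print("predicted cyanide at 48 h:", round(float(hybrid(48.0)), 3))
```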

  2. Online Updating of Statistical Inference in the Big Data Setting.

    PubMed

    Schifano, Elizabeth D; Wu, Jing; Wang, Chun; Yan, Jun; Chen, Ming-Hui

    2016-01-01

    We present statistical methods for big data arising from online analytical processing, where large amounts of data arrive in streams and require fast analysis without storage/access to the historical data. In particular, we develop iterative estimating algorithms and statistical inferences for linear models and estimating equations that update as new data arrive. These algorithms are computationally efficient, minimally storage-intensive, and allow for possible rank deficiencies in the subset design matrices due to rare-event covariates. Within the linear model setting, the proposed online-updating framework leads to predictive residual tests that can be used to assess the goodness-of-fit of the hypothesized model. We also propose a new online-updating estimator under the estimating equation setting. Theoretical properties of the goodness-of-fit tests and proposed estimators are examined in detail. In simulation studies and real data applications, our estimator compares favorably with competing approaches under the estimating equation setting.
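
    A minimal sketch of the online-updating idea for a linear model: accumulate the sufficient statistics X'X and X'y chunk by chunk so the least-squares estimate can be refreshed as data arrive, without storing past chunks. This shows only the basic recursion, not the full framework (estimating-equation updates, rank-deficiency handling, predictive residual tests) described in the abstract.

```python
# Online updating of an OLS estimate via cumulative sufficient statistics.
import numpy as np

rng = np.random.default_rng(4)
p = 3
beta_true = np.array([1.0, -2.0, 0.5])

xtx = np.zeros((p, p))
xty = np.zeros(p)

for _ in range(20):                            # 20 arriving data blocks
    X = rng.normal(size=(500, p))
    y = X @ beta_true + rng.normal(scale=0.5, size=500)
    xtx += X.T @ X                             # update sufficient statistics
    xty += X.T @ y
    beta_hat = np.linalg.solve(xtx, xty)       # current cumulative estimate

print("final estimate:", np.round(beta_hat, 3))
```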

  3. Body composition estimation from selected slices: equations computed from a new semi-automatic thresholding method developed on whole-body CT scans

    PubMed Central

    Villa, Chiara; Brůžek, Jaroslav

    2017-01-01

    Background: Estimating volumes and masses of total body components is important for the study and treatment monitoring of nutrition and nutrition-related disorders, cancer, joint replacement, energy-expenditure and exercise physiology. While several equations have been offered for estimating total body components from MRI slices, no reliable and tested method exists for CT scans. For the first time, body composition data was derived from 41 high-resolution whole-body CT scans. From these data, we defined equations for estimating volumes and masses of total body AT and LT from corresponding tissue areas measured in selected CT scan slices. Methods: We present a new semi-automatic approach to defining the density cutoff between adipose tissue (AT) and lean tissue (LT) in such material. An intra-class correlation coefficient (ICC) was used to validate the method. The equations for estimating the whole-body composition volume and mass from areas measured in selected slices were modeled with ordinary least squares (OLS) linear regressions and support vector machine regression (SVMR). Results and Discussion: The best predictive equation for total body AT volume was based on the AT area of a single slice located between the 4th and 5th lumbar vertebrae (L4-L5) and produced lower prediction errors (|PE| = 1.86 liters, %PE = 8.77) than previous equations also based on CT scans. The LT area of the mid-thigh provided the lowest prediction errors (|PE| = 2.52 liters, %PE = 7.08) for estimating whole-body LT volume. We also present equations to predict total body AT and LT masses from a slice located at L4-L5 that resulted in reduced error compared with the previously published equations based on CT scans. The multislice SVMR predictor gave the theoretical upper limit for prediction precision of volumes and cross-validated the results. PMID:28533960

  4. Body composition estimation from selected slices: equations computed from a new semi-automatic thresholding method developed on whole-body CT scans.

    PubMed

    Lacoste Jeanson, Alizé; Dupej, Ján; Villa, Chiara; Brůžek, Jaroslav

    2017-01-01

    Estimating volumes and masses of total body components is important for the study and treatment monitoring of nutrition and nutrition-related disorders, cancer, joint replacement, energy-expenditure and exercise physiology. While several equations have been offered for estimating total body components from MRI slices, no reliable and tested method exists for CT scans. For the first time, body composition data was derived from 41 high-resolution whole-body CT scans. From these data, we defined equations for estimating volumes and masses of total body AT and LT from corresponding tissue areas measured in selected CT scan slices. We present a new semi-automatic approach to defining the density cutoff between adipose tissue (AT) and lean tissue (LT) in such material. An intra-class correlation coefficient (ICC) was used to validate the method. The equations for estimating the whole-body composition volume and mass from areas measured in selected slices were modeled with ordinary least squares (OLS) linear regressions and support vector machine regression (SVMR). The best predictive equation for total body AT volume was based on the AT area of a single slice located between the 4th and 5th lumbar vertebrae (L4-L5) and produced lower prediction errors (|PE| = 1.86 liters, %PE = 8.77) than previous equations also based on CT scans. The LT area of the mid-thigh provided the lowest prediction errors (|PE| = 2.52 liters, %PE = 7.08) for estimating whole-body LT volume. We also present equations to predict total body AT and LT masses from a slice located at L4-L5 that resulted in reduced error compared with the previously published equations based on CT scans. The multislice SVMR predictor gave the theoretical upper limit for prediction precision of volumes and cross-validated the results.
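
    To illustrate the two model families used above (OLS and support vector machine regression), the sketch below fits both to synthetic slice-area/volume pairs. The variable names and data are hypothetical, not the study's measurements.

```python
# Hedged sketch: OLS vs. support vector regression for predicting total AT
# volume from a single-slice AT area. Synthetic data, illustrative only.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.svm import SVR

rng = np.random.default_rng(2)
slice_area = rng.uniform(100, 600, size=41).reshape(-1, 1)    # cm^2 at L4-L5
total_vol = 0.05 * slice_area.ravel() + rng.normal(0, 2, 41)  # litres

ols = LinearRegression().fit(slice_area, total_vol)
svr = SVR(kernel="linear", C=10.0).fit(slice_area, total_vol)

new_area = np.array([[350.0]])
print("OLS prediction (L):", ols.predict(new_area)[0])
print("SVR prediction (L):", svr.predict(new_area)[0])
```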

  5. Estimation of flood discharges at selected annual exceedance probabilities for unregulated, rural streams in Vermont, with a section on Vermont regional skew regression

    USGS Publications Warehouse

    Olson, Scott A.; with a section by Veilleux, Andrea G.

    2014-01-01

    This report provides estimates of flood discharges at selected annual exceedance probabilities (AEPs) for streamgages in and adjacent to Vermont and equations for estimating flood discharges at AEPs of 50-, 20-, 10-, 4-, 2-, 1-, 0.5-, and 0.2-percent (recurrence intervals of 2-, 5-, 10-, 25-, 50-, 100-, 200-, and 500-years, respectively) for ungaged, unregulated, rural streams in Vermont. The equations were developed using generalized least-squares regression. Flood-frequency and drainage-basin characteristics from 145 streamgages were used in developing the equations. The drainage-basin characteristics used as explanatory variables in the regression equations include drainage area, percentage of wetland area, and the basin-wide mean of the average annual precipitation. The average standard errors of prediction for estimating the flood discharges at the 50-, 20-, 10-, 4-, 2-, 1-, 0.5-, and 0.2-percent AEP with these equations are 34.9, 36.0, 38.7, 42.4, 44.9, 47.3, 50.7, and 55.1 percent, respectively. Flood discharges at selected AEPs for streamgages were computed by using the Expected Moments Algorithm. To improve estimates of the flood discharges for given exceedance probabilities at streamgages in Vermont, a new generalized skew coefficient was developed. The new generalized skew for the region is a constant, 0.44. The mean square error of the generalized skew coefficient is 0.078. This report describes a technique for using results from the regression equations to adjust an AEP discharge computed from a streamgage record. This report also describes a technique for using a drainage-area adjustment to estimate flood discharge at a selected AEP for an ungaged site upstream or downstream from a streamgage. The final regression equations and the flood-discharge frequency data used in this study will be available in StreamStats. StreamStats is a World Wide Web application providing automated regression-equation solutions for user-selected sites on streams.

  6. Stability of equations with a distributed delay, monotone production and nonlinear mortality

    NASA Astrophysics Data System (ADS)

    Berezansky, Leonid; Braverman, Elena

    2013-10-01

    We consider population dynamics models dN/dt = f(N(t-τ)) - d(N(t)) with an increasing fecundity function f and any mortality function d which can be quadratic, as in the logistic equation, or have a different form provided that the equation has at most one positive equilibrium. Here the delay in the production term can be distributed and unbounded. It is demonstrated that the positive equilibrium is globally attractive if it exists, otherwise all positive solutions tend to zero. Moreover, we demonstrate that solutions of the equation are intrinsically non-oscillatory: once the initial function is less/greater than the equilibrium K > 0, so is the solution for any positive time value. The assumptions on f, d and the delay are rather nonrestrictive, and several examples demonstrate that none of them can be omitted.
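
    A fixed-step Euler sketch of an equation in this class is shown below, with an assumed increasing production term f and quadratic (logistic-type) mortality d. The specific functions, parameter values, and use of a discrete rather than distributed delay are illustrative choices, not the authors'.

```python
# Hedged sketch: Euler integration of dN/dt = f(N(t - tau)) - d(N(t))
# with an assumed monotone production f and quadratic (logistic-type)
# mortality d. Discrete delay used for simplicity; parameters illustrative.
import numpy as np

r, h = 2.0, 1.0          # production parameters: f(N) = r*N/(h + N)
m, q = 0.2, 0.1          # mortality parameters:  d(N) = m*N + q*N**2
tau, dt, T = 5.0, 0.01, 200.0

f = lambda N: r * N / (h + N)
d = lambda N: m * N + q * N**2

lag = int(round(tau / dt))
n_steps = int(round(T / dt))
N = np.empty(n_steps + 1)
N[:lag + 1] = 0.3        # constant initial history below the equilibrium

for i in range(lag, n_steps):
    N[i + 1] = N[i] + dt * (f(N[i - lag]) - d(N[i]))

print("final population:", round(N[-1], 4))  # approaches the positive equilibrium (here K = 3)
```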

  7. A diagnostic model to estimate winds and small-scale drag from Mars Observer PMIRR data

    NASA Technical Reports Server (NTRS)

    Barnes, J. R.

    1993-01-01

    Theoretical and modeling studies indicate that small-scale drag due to breaking gravity waves is likely to be of considerable importance for the circulation in the middle atmospheric region (approximately 40-100 km altitude) on Mars. Recent earth-based spectroscopic observations have provided evidence for the existence of circulation features, in particular, a warm winter polar region, associated with gravity wave drag. Since the Mars Observer PMIRR experiment will obtain temperature profiles extending from the surface up to about 80 km altitude, it will be extensively sampling middle atmospheric regions in which gravity wave drag may play a dominant role. Estimating the drag then becomes crucial to the estimation of the atmospheric winds from the PMIRR-observed temperatures. An iterative diagnostic model based upon one previously developed and tested with earth satellite temperature data will be applied to the PMIRR measurements to produce estimates of the small-scale zonal drag and three-dimensional wind fields in the Mars middle atmosphere. This model is based on the primitive equations, and can allow for time dependence (the time tendencies used may be based upon those computed in a Fast Fourier Mapping procedure). The small-scale zonal drag is estimated as the residual in the zonal momentum equation; the horizontal winds having first been estimated from the meridional momentum equation and the continuity equation. The scheme estimates the vertical motions from the thermodynamic equation, and thus needs estimates of the diabatic heating based upon the observed temperatures. The latter will be generated using a radiative model. It is hoped that the diagnostic scheme will be able to produce good estimates of the zonal gravity wave drag in the Mars middle atmosphere, estimates that can then be used in other diagnostic or assimilation efforts, as well as more theoretical studies.

  8. A Longitudinal Study on Human Outdoor Decomposition in Central Texas.

    PubMed

    Suckling, Joanna K; Spradley, M Katherine; Godde, Kanya

    2016-01-01

    The development of a methodology that estimates the postmortem interval (PMI) from stages of decomposition is a goal for which forensic practitioners strive. A proposed equation (Megyesi et al. 2005) that utilizes total body score (TBS) and accumulated degree days (ADD) was tested using longitudinal data collected from human remains donated to the Forensic Anthropology Research Facility (FARF) at Texas State University-San Marcos. Exact binomial tests examined the rate of the equation to successfully predict ADD. Statistically significant differences were found between ADD estimated by the equation and the observed value for decomposition stage. Differences remained significant after carnivore scavenged donations were removed from analysis. Low success rates for the equation to predict ADD from TBS and the wide standard errors demonstrate the need to re-evaluate the use of this equation and methodology for PMI estimation in different environments; rather, multivariate methods and equations should be derived that are environmentally specific. © 2015 American Academy of Forensic Sciences.

  9. Solutions for the diurnally forced advection-diffusion equation to estimate bulk fluid velocity and diffusivity in streambeds from temperature time series

    NASA Astrophysics Data System (ADS)

    Luce, C.; Tonina, D.; Gariglio, F. P.; Applebee, R.

    2012-12-01

    Differences in the diurnal variations of temperature at different depths in streambed sediments are commonly used for estimating vertical fluxes of water in the streambed. We applied spatial and temporal rescaling of the advection-diffusion equation to derive two new relationships that greatly extend the kinds of information that can be derived from streambed temperature measurements. The first equation provides a direct estimate of the Peclet number from the amplitude decay and phase delay information. The analytical equation is explicit (e.g. no numerical root-finding is necessary), and invertible. The thermal front velocity can be estimated from the Peclet number when the thermal diffusivity is known. The second equation allows for an independent estimate of the thermal diffusivity directly from the amplitude decay and phase delay information. Several improvements are available with the new information. The first equation uses a ratio of the amplitude decay and phase delay information; thus Peclet number calculations are independent of depth. The explicit form also makes it somewhat faster and easier to calculate estimates from a large number of sensors or multiple positions along one sensor. Where current practice requires a priori estimation of streambed thermal diffusivity, the new approach allows an independent calculation, improving precision of estimates. Furthermore, when many measurements are made over space and time, expectations of the spatial correlation and temporal invariance of thermal diffusivity are valuable for validation of measurements. Finally, the closed-form explicit solution allows for direct calculation of propagation of uncertainties in error measurements and parameter estimates, providing insight about error expectations for sensors placed at different depths in different environments as a function of surface temperature variation amplitudes. The improvements are expected to increase the utility of temperature measurement methods for studying groundwater-surface water interactions across space and time scales. We discuss the theoretical implications of the new solutions supported by examples with data for illustration and validation.
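
    The amplitude decay and phase delay that feed the two new relationships can be extracted from paired temperature series at two depths. The sketch below isolates the diurnal harmonic of synthetic records with a discrete Fourier transform; it stops short of the Peclet-number and diffusivity formulas themselves, which are given in the paper.

```python
# Hedged sketch: extracting the diurnal amplitude ratio and phase delay from
# temperature records at two streambed depths (synthetic hourly data).
import numpy as np

dt_hours = 1.0
t = np.arange(0, 10 * 24, dt_hours)                 # 10 days of hourly samples
omega = 2 * np.pi / 24.0                            # diurnal frequency (rad/h)

shallow = 15 + 4.0 * np.sin(omega * t)              # synthetic upper sensor
deep = 15 + 1.5 * np.sin(omega * (t - 6.0))         # damped, delayed lower sensor

def diurnal_component(series):
    """Return (amplitude, phase) of the 1 cycle/day harmonic via the DFT."""
    n = series.size
    spec = np.fft.rfft(series - series.mean())
    freqs = np.fft.rfftfreq(n, d=dt_hours)           # cycles per hour
    k = np.argmin(np.abs(freqs - 1.0 / 24.0))        # bin nearest 1/24 h^-1
    return 2 * np.abs(spec[k]) / n, np.angle(spec[k])

A1, phi1 = diurnal_component(shallow)
A2, phi2 = diurnal_component(deep)
print("amplitude ratio A2/A1:", round(A2 / A1, 3))
print("phase delay (h):", round((phi1 - phi2) / omega % 24, 2))
```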

  10. Statistical models for estimating daily streamflow in Michigan

    USGS Publications Warehouse

    Holtschlag, D.J.; Salehi, Habib

    1992-01-01

    Statistical models for estimating daily streamflow were analyzed for 25 pairs of streamflow-gaging stations in Michigan. Stations were paired by randomly choosing a station operated in 1989 at which 10 or more years of continuous flow data had been collected and at which flow is virtually unregulated; a nearby station was chosen where flow characteristics are similar. Streamflow data from the 25 randomly selected stations were used as the response variables; streamflow data at the nearby stations were used to generate a set of explanatory variables. Ordinary least-squares regression (OLSR) equations, autoregressive integrated moving-average (ARIMA) equations, and transfer function-noise (TFN) equations were developed to estimate the log transform of flow for the 25 randomly selected stations. The precision of each type of equation was evaluated on the basis of the standard deviation of the estimation errors. OLSR equations produce one set of estimation errors; ARIMA and TFN models each produce l sets of estimation errors corresponding to the forecast lead. The lead-l forecast is the estimate of flow l days ahead of the most recent streamflow used as a response variable in the estimation. In this analysis, the standard deviations of lead-l ARIMA and TFN forecast errors were generally lower than the standard deviation of OLSR errors for l < 2 days and l < 9 days, respectively. Composite estimates were computed as a weighted average of forecasts based on TFN equations and backcasts (forecasts of the reverse-ordered series) based on ARIMA equations. The standard deviation of composite errors varied throughout the length of the estimation interval and generally was at maximum near the center of the interval. For comparison with OLSR errors, the mean standard deviation of composite errors was computed for intervals of length 1 to 40 days. The mean standard deviation of length-l composite errors was generally less than the standard deviation of the OLSR errors for l < 32 days. In addition, the composite estimates ensure a gradual transition between periods of estimated and measured flows. Model performance among stations of differing model error magnitudes was compared by computing ratios of the mean standard deviation of the length-l composite errors to the standard deviation of OLSR errors. The mean error ratio for the set of 25 selected stations was less than 1 for intervals l < 32 days. Considering the frequency characteristics of the length of intervals of estimated record in Michigan, the effective mean error ratio for intervals < 30 days was 0.52. Thus, for intervals of estimation of 1 month or less, the error of the composite estimate is substantially lower than the error of the OLSR estimate.

  11. Mining pharmacovigilance data using Bayesian logistic regression with James-Stein type shrinkage estimation.

    PubMed

    An, Lihua; Fung, Karen Y; Krewski, Daniel

    2010-09-01

    Spontaneous adverse event reporting systems are widely used to identify adverse reactions to drugs following their introduction into the marketplace. In this article, a James-Stein type shrinkage estimation strategy was developed in a Bayesian logistic regression model to analyze pharmacovigilance data. This method is effective in detecting signals as it combines information and borrows strength across medically related adverse events. Computer simulation demonstrated that the shrinkage estimator is uniformly better than the maximum likelihood estimator in terms of mean squared error. This method was used to investigate the possible association of a series of diabetic drugs and the risk of cardiovascular events using data from the Canada Vigilance Online Database.
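
    As a hedged illustration of the general shrinkage idea only (not the authors' exact Bayesian logistic-regression estimator), the classic positive-part James-Stein rule shrinks a set of independent estimates, imagined here as log-odds signals for medically related adverse events, toward their common mean.

```python
# Hedged sketch: positive-part James-Stein shrinkage of k independent
# estimates toward their common mean (illustrative, not the paper's exact
# Bayesian logistic-regression estimator). Assumes a known common variance.
import numpy as np

def james_stein_shrink(y, sigma2):
    """Shrink estimates y (length k >= 4, common variance sigma2) toward
    their grand mean using the positive-part James-Stein rule."""
    k = y.size
    ybar = y.mean()
    ss = np.sum((y - ybar) ** 2)
    shrink = max(0.0, 1.0 - (k - 3) * sigma2 / ss)
    return ybar + shrink * (y - ybar)

# Hypothetical log-odds-ratio estimates for medically related adverse events.
log_or = np.array([0.9, 1.4, 0.2, 0.7, 1.1, 0.5])
print(np.round(james_stein_shrink(log_or, sigma2=0.25), 3))
```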

  12. Evaluation of various equations for estimating renal function in elderly Chinese patients with type 2 diabetes mellitus

    PubMed Central

    Guo, Mei; Niu, Jian-Ying; Ye, Xian-Wu; Han, Xiao-Jie; Zha, Ying; Hong, Yang; Fang, Hong; Gu, Yong

    2017-01-01

    Background: The clinical assessment of kidney function based on the estimated glomerular filtration rate (GFR) in older patients remains controversial. This study evaluated the concordance and feasibility of using various creatinine-based equations for estimating GFR in elderly Chinese patients with type 2 diabetes mellitus (T2DM). Methods: A cross-sectional analytical study was conducted in 21,723 older diabetic patients (≥60 years) based on electronic health records (EHR) for Minhang District, Shanghai, China. The concordance of chronic kidney disease (CKD) classification among different creatinine-based equations was assessed based on Kappa values and intraclass correlation coefficient (ICC) statistics, and the eGFR agreement between the equations was tested using Bland–Altman plots. The GFR was estimated using the Cockcroft–Gault (CG), Berlin Initiative Study 1 (BIS1), simplified Modification of Diet in Renal Disease (MDRD), MDRD modified for Chinese populations (mMDRD), chronic kidney disease epidemiology collaboration (CKD-EPI), CKD-EPI in Asians (CKD-EPI-Asia), and Ruijin equations. Results: Overall, the proportion of CKD stages 3–5 (eGFR <60 mL/min/1.73 m2) was calculated as 28.9%, 39.1%, 11.8%, 8.4%, 14.3%, 11.5%, and 12.7% by the eGFRCG, eGFRBIS1, eGFRMDRD, eGFRmMDRD, eGFRCKD-EPI, eGFRCKD-EPI-Asia, and eGFRRuijin equations, respectively. The concordance of albuminuria and decreased eGFR based on the different equations was poor by both the Kappa (<0.2) and ICC (<0.4) statistics. The CKD-EPI-Asia equation resulted in excellent concordance with the CKD-EPI (ICC = 0.931), MDRD (ICC = 0.963), mMDRD (ICC = 0.892), and Ruijin (ICC = 0.956) equations for the classification of CKD stages, whereas the BIS1 equation exhibited good concordance with the CG equation (ICC = 0.809). In addition, significant differences were observed for CKD stage 1 among all these equations. Conclusion: Accurate GFR values are difficult to estimate using creatinine-based equations in older diabetic patients. Kidney function is complex, and clinical staff need to give individualized consideration to other risk factors or markers of reduced renal function in practice. PMID:29070944

  13. Performance of three glomerular filtration rate estimation equations in a population of sub-Saharan Africans with Type 2 diabetes.

    PubMed

    Agoons, D D; Balti, E V; Kaze, F F; Azabji-Kenfack, M; Ashuntantang, G; Kengne, A P; Sobngwi, E; Mbanya, J C

    2016-09-01

    We evaluated the performance of the Modification of Diet in Renal Disease (MDRD), Chronic Kidney Disease Epidemiology Collaboration (CKD-EPI) and Cockcroft-Gault (CG) equations against creatinine clearance (CrCl) to estimate glomerular filtration rate (GFR) in 51 patients with Type 2 diabetes. The CrCl value was obtained from the average of two consecutive 24-h urine samples. Results were adjusted for body surface area using the Dubois formula. Serum creatinine was measured using the kinetic Jaffe method and was calibrated to standardized levels. Bland-Altman analysis and kappa statistic were used to examine agreement between measured and estimated GFR. Estimates of GFR from the CrCl, MDRD, CKD-EPI and CG equations were similar (overall P = 0.298), and MDRD (r = 0.58; 95% CI: 0.36-0.74), CKD-EPI (r = 0.55; 95% CI: 0.33-0.72) and CG (r = 0.61; 95% CI: 0.39-0.75) showed modest correlation with CrCl (all P < 0.001). Bias was -0.3 for MDRD, 1.7 for CKD-EPI and -5.4 for CG. All three equations showed fair-to-moderate agreement with CrCl (kappa: 0.38-0.51). The c-statistic for all three equations ranged between 0.75 and 0.77 with no significant difference (P = 0.639 for c-statistic comparison). The MDRD equation seems to have a modest advantage over CKD-EPI and CG in estimating GFR and detecting impaired renal function in sub-Saharan African patients with Type 2 diabetes. The overall relatively modest correlation with CrCl, however, suggests the need for context-specific estimators of GFR or context adaptation of existing estimators. © 2015 Diabetes UK.
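
    For orientation, commonly cited forms of the Cockcroft-Gault and four-variable MDRD equations are sketched below. The MDRD constant of 175 is the widely published IDMS-traceable re-expression (older reports use 186); the coefficients should be checked against local assay calibration before any clinical use, and the functions are illustrative only.

```python
# Hedged sketch: commonly published forms of two creatinine-based estimators.
# Coefficients are the widely cited ones; verify against local assay
# calibration before any clinical application.

def cockcroft_gault(age, weight_kg, scr_mg_dl, female):
    """Creatinine clearance in mL/min (not indexed to 1.73 m^2)."""
    crcl = (140 - age) * weight_kg / (72.0 * scr_mg_dl)
    return crcl * 0.85 if female else crcl

def mdrd4(age, scr_mg_dl, female, black):
    """IDMS-traceable 4-variable MDRD eGFR in mL/min/1.73 m^2."""
    egfr = 175.0 * scr_mg_dl**-1.154 * age**-0.203
    if female:
        egfr *= 0.742
    if black:
        egfr *= 1.212
    return egfr

print(round(cockcroft_gault(age=65, weight_kg=70, scr_mg_dl=1.2, female=True), 1))
print(round(mdrd4(age=65, scr_mg_dl=1.2, female=True, black=False), 1))
```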

  14. Novel Filtration Markers for GFR Estimation

    PubMed Central

    Inker, Lesley A.; Coresh, Josef; Levey, Andrew S.; Eckfeldt, John H.

    2017-01-01

    Creatinine-based glomerular filtration rate estimation (eGFRcr) has been improved and refined since the 1970s through both the Modification of Diet in Renal Disease (MDRD) Study equation in 1999 and the CKD Epidemiology Collaboration (CKD-EPI) equation in 2009, with current clinical practice dependent primarily on eGFRcr for accurate assessment of GFR. However, researchers and clinicians have recognized limitations of relying on creatinine as the only filtration marker, which can lead to inaccurate GFR estimates in certain populations due to the influence of non-GFR determinants of serum or plasma creatinine. Therefore, recent literature has proposed incorporation of multiple serum or plasma filtration markers into GFR estimation to improve precision and accuracy and decrease the impact of non-GFR determinants for any individual biomarker. To this end, the CKD-EPI combined creatinine-cystatin C equation (eGFRcr-cys) was developed in 2012 and demonstrated superior accuracy to equations relying on creatinine or cystatin C alone (eGFRcr or eGFRcys). Now, the focus has broadened to include additional novel filtration markers to further refine and improve GFR estimation. Beta-2-microglobulin (B2M) and beta-trace-protein (BTP) are two filtration markers with established assays that have been proposed as candidates for improving both GFR estimation and risk prediction. GFR estimating equations based on B2M and BTP have been developed and validated, with the CKD-EPI combined BTP-B2M equation (eGFRBTP-B2M) demonstrating similar performance to eGFRcr and eGFRcys. Additionally, several studies have demonstrated that both B2M and BTP are associated with outcomes in CKD patients, including cardiovascular events, ESRD and mortality. This review will primarily focus on these two biomarkers, and will highlight efforts to identify additional candidate biomarkers through metabolomics-based approaches. PMID:29333147

  15. Estimation of peak-discharge frequency of urban streams in Jefferson County, Kentucky

    USGS Publications Warehouse

    Martin, Gary R.; Ruhl, Kevin J.; Moore, Brian L.; Rose, Martin F.

    1997-01-01

    An investigation of flood-hydrograph characteristics for streams in urban Jefferson County, Kentucky, was made to obtain hydrologic information needed for water-resources management. Equations for estimating peak-discharge frequencies for ungaged streams in the county were developed by combining (1) long-term annual peak-discharge data and rainfall-runoff data collected from 1991 to 1995 in 13 urban basins and (2) long-term annual peak-discharge data in four rural basins located in hydrologically similar areas of neighboring counties. The basins ranged in size from 1.36 to 64.0 square miles. The U.S. Geological Survey Rainfall-Runoff Model (RRM) was calibrated for each of the urban basins. The calibrated models were used with long-term, historical rainfall and pan-evaporation data to simulate 79 years of annual peak-discharge data. Peak-discharge frequencies were estimated by fitting the logarithms of the annual peak discharges to a Pearson-Type III frequency distribution. The simulated peak-discharge frequencies were adjusted for improved reliability by application of bias-correction factors derived from peak-discharge frequencies based on local, observed annual peak discharges. The three-parameter and the preferred seven-parameter nationwide urban-peak-discharge regression equations previously developed by USGS investigators provided biased (high) estimates for the urban basins studied. Generalized-least-squares regression procedures were used to relate peak-discharge frequency to selected basin characteristics. Regression equations were developed to estimate peak-discharge frequency by adjusting peak-discharge-frequency estimates made by use of the three-parameter nationwide urban regression equations. The regression equations are presented in equivalent forms as functions of contributing drainage area, main-channel slope, and basin development factor, which is an index for measuring the efficiency of the basin drainage system. Estimates of peak discharges for streams in the county can be made for the 2-, 5-, 10-, 25-, 50-, and 100-year recurrence intervals by use of the regression equations. The average standard errors of prediction of the regression equations range from ±34 to ±45 percent. The regression equations are applicable to ungaged streams in the county having a specific range of basin characteristics.

  16. Estimation of time- and state-dependent delays and other parameters in functional differential equations

    NASA Technical Reports Server (NTRS)

    Murphy, K. A.

    1988-01-01

    A parameter estimation algorithm is developed which can be used to estimate unknown time- or state-dependent delays and other parameters (e.g., initial condition) appearing within a nonlinear nonautonomous functional differential equation. The original infinite dimensional differential equation is approximated using linear splines, which are allowed to move with the variable delay. The variable delays are approximated using linear splines as well. The approximation scheme produces a system of ordinary differential equations with nice computational properties. The unknown parameters are estimated within the approximating systems by minimizing a least-squares fit-to-data criterion. Convergence theorems are proved for time-dependent delays and state-dependent delays within two classes, which say essentially that fitting the data by using approximations will, in the limit, provide a fit to the data using the original system. Numerical test examples are presented which illustrate the method for all types of delay.

  17. Estimation of time- and state-dependent delays and other parameters in functional differential equations

    NASA Technical Reports Server (NTRS)

    Murphy, K. A.

    1990-01-01

    A parameter estimation algorithm is developed which can be used to estimate unknown time- or state-dependent delays and other parameters (e.g., initial condition) appearing within a nonlinear nonautonomous functional differential equation. The original infinite dimensional differential equation is approximated using linear splines, which are allowed to move with the variable delay. The variable delays are approximated using linear splines as well. The approximation scheme produces a system of ordinary differential equations with nice computational properties. The unknown parameters are estimated within the approximating systems by minimizing a least-squares fit-to-data criterion. Convergence theorems are proved for time-dependent delays and state-dependent delays within two classes, which say essentially that fitting the data by using approximations will, in the limit, provide a fit to the data using the original system. Numerical test examples are presented which illustrate the method for all types of delay.

  18. Discrete dynamical laser equation for the critical onset of bistability, entanglement and disappearance

    NASA Astrophysics Data System (ADS)

    Abdul, M.; Farooq, U.; Akbar, Jehan; Saif, F.

    2018-06-01

    We transform the semi-classical laser equation for single mode homogeneously broadened lasers to a one-dimensional nonlinear map by using the discrete dynamical approach. The obtained mapping, referred to as laser logistic mapping (LLM), characteristically exhibits convergent, cyclic and chaotic behavior depending on the control parameter. Thus, the so obtained LLM explains stable, bistable, multi-stable, and chaotic solutions for output field intensity. The onset of bistability takes place at a critical value of the effective gain coefficient. The obtained analytical results are confirmed through numerical calculations.
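
    The convergent, cyclic, and chaotic regimes described are the hallmark behaviours of one-dimensional logistic-type maps. The sketch below iterates the textbook logistic map x(n+1) = r x(n) (1 - x(n)) as a stand-in; the exact functional form of the LLM and its control parameter are given in the paper, not here.

```python
# Hedged sketch: regime changes of the textbook logistic map as the control
# parameter r varies (a stand-in for the paper's laser logistic mapping).
def iterate_logistic(r, x0=0.2, n_transient=500, n_keep=8):
    x = x0
    for _ in range(n_transient):          # discard transient iterations
        x = r * x * (1.0 - x)
    orbit = []
    for _ in range(n_keep):               # record the settled orbit
        x = r * x * (1.0 - x)
        orbit.append(round(x, 4))
    return orbit

for r in (2.8, 3.2, 3.5, 3.9):            # fixed point, 2-cycle, 4-cycle, chaos
    print(f"r = {r}: {iterate_logistic(r)}")
```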

  19. A Study of Carrier Based Aircraft Readiness Sustainability in the Event of External Air Logistic Support Deprivation

    DTIC Science & Technology

    1987-06-01

    equation for 7R investment has been used in combination with the linear equation for COD/VOD delivery to produce six "iso-readiness" lines. Each iso-readiness line represents the value of inventory investment and days since last COD/VOD delivery required to maintain a specific level of FMC. [Figure: Iso-Readiness Plot]

  20. Marketing percolation

    NASA Astrophysics Data System (ADS)

    Goldenberg, J.; Libai, B.; Solomon, S.; Jan, N.; Stauffer, D.

    2000-09-01

    A percolation model is presented, with computer simulations for illustrations, to show how the sales of a new product may penetrate the consumer market. We review the traditional approach in the marketing literature, which is based on differential or difference equations similar to the logistic equation (Bass, Manage. Sci. 15 (1969) 215). This mean-field approach is contrasted with the discrete percolation on a lattice, with simulations of "social percolation" (Solomon et al., Physica A 277 (2000) 239) in two to five dimensions giving power laws instead of exponential growth, and strong fluctuations right at the percolation threshold.
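
    The mean-field diffusion models contrasted with percolation here reduce to a logistic-type ordinary differential equation. A minimal Euler sketch of the Bass form dF/dt = (p + qF)(1 - F) follows, with illustrative innovation and imitation coefficients rather than values from the cited literature.

```python
# Hedged sketch: Euler integration of the Bass/logistic-type adoption model
#   dF/dt = (p + q*F) * (1 - F),
# the mean-field description contrasted with percolation in the entry above.
p, q = 0.03, 0.38        # illustrative innovation and imitation coefficients
dt, T = 0.01, 15.0

F, t, trajectory = 0.0, 0.0, []
while t <= T:
    trajectory.append((round(t, 2), round(F, 4)))
    F += dt * (p + q * F) * (1.0 - F)
    t += dt

for ti, Fi in trajectory[::300]:          # print every 3 time units
    print(f"t = {ti:5.1f}  adopted fraction = {Fi:.3f}")
```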

  1. Investigation into calculating tree biomass and carbon in the FIADB using a biomass expansion factor approach

    Treesearch

    Linda S. Heath; Mark Hansen; James E. Smith; Patrick D. Miles

    2009-01-01

    The official U.S. forest carbon inventories (U.S. EPA 2008) have relied on tree biomass estimates that utilize diameter based prediction equations from Jenkins and others (2003), coupled with U.S. Forest Service, Forest Inventory and Analysis (FIA) sample tree measurements and forest area estimates. However, these biomass prediction equations are not the equations used...

  2. Reverberation Modelling Using a Parabolic Equation Method

    DTIC Science & Technology

    2012-10-01

    the limits of their applicability. Results: Transmission loss estimates produced by the PECan parabolic equation acoustic model were used in...environments is possible when used in concert with a parabolic equation passive acoustic model. Future plans: The authors of this report recommend further...technique using other types of acoustic models should be undertaken. Furthermore, as the current method when applied as-is results in estimates that reflect

  3. Verification of the Jenkins and FIA sapling biomass equations for hardwood species in Maine

    Treesearch

    Andrew S. Nelson; Aaron R. Weiskittel; Robert G. Wagner; Michael R. Saunders

    2012-01-01

    In 2009, the Forest Inventory and Analysis Program (FIA) updated its biomass estimation protocols by switching to the component ratio method to estimate biomass of medium and large trees. Additionally, FIA switched from using regional equations to the current FIA aboveground sapling biomass equations that predict woody sapling (2.5 to 12.4 cm d.b.h.) biomass using the...

  4. Calibration of d.b.h.-height equations for southern hardwoods

    Treesearch

    Thomas B. Lynch; A. Gordon Holley; Douglas J. Stevenson

    2006-01-01

    Data from southern hardwood stands in East Texas were used to estimate parameters for d.b.h.-height equations. Mixed model estimation methods were used, so that the stand from which a tree was sampled was considered a random effect. This makes it possible to calibrate these equations using data collected in a local stand of interest, by using d.b.h. and total height...

  5. Relations between nonlinear Riccati equations and other equations in fundamental physics

    NASA Astrophysics Data System (ADS)

    Schuch, Dieter

    2014-10-01

    Many phenomena in the observable macroscopic world obey nonlinear evolution equations while the microscopic world is governed by quantum mechanics, a fundamental theory that is supposedly linear. In order to combine these two worlds in a common formalism, at least one of them must sacrifice one of its dogmas. Linearizing nonlinear dynamics would destroy the fundamental property of this theory; however, it can be shown that quantum mechanics can be reformulated in terms of nonlinear Riccati equations. In a first step, it will be shown that the information about the dynamics of quantum systems with analytical solutions can be obtained not only from the time-dependent Schrödinger equation but equally well from a complex Riccati equation. Comparison with supersymmetric quantum mechanics shows that even additional information can be obtained from the nonlinear formulation. Furthermore, the time-independent Schrödinger equation can also be rewritten as a complex Riccati equation for any potential. Extension of the Riccati formulation to include irreversible dissipative effects is straightforward. Via (real and complex) Riccati equations, other fields of physics can also be treated within the same formalism, e.g., statistical thermodynamics, nonlinear dynamical systems like those obeying a logistic equation as well as wave equations in classical optics, Bose-Einstein condensates and cosmological models. Finally, the link to abstract "quantizations" such as the Pythagorean triples and Riccati equations connected with trigonometric and hyperbolic functions will be shown.
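
    As a concrete instance of the logistic-Riccati link mentioned above, the logistic equation is itself a Riccati equation with constant coefficients, and the standard Riccati substitution linearises it. The short worked statement below is standard material given for orientation, not reproduced from the paper.

```latex
% The logistic equation as a Riccati equation (standard result, stated for
% orientation; not taken from the paper).
\[
  \dot N = rN\Bigl(1-\frac{N}{K}\Bigr) = q_0 + q_1 N + q_2 N^2,
  \qquad q_0 = 0,\; q_1 = r,\; q_2 = -\frac{r}{K}.
\]
% The substitution N = -\dot u/(q_2 u) = (K/r)\,\dot u/u converts it into the
% linear second-order equation
\[
  \ddot u - q_1 \dot u = 0 \quad\Longrightarrow\quad \ddot u - r\,\dot u = 0,
\]
% whose solutions u(t) = A + B e^{rt} reproduce the familiar logistic curve
\[
  N(t) = \frac{K}{1 + C e^{-rt}}.
\]
```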

  6. Iothalamate versus estimated GFR in a Hispanic-dominant pediatric renal transplant population.

    PubMed

    Papez, Karen E; Barletta, Gina-Marie; Hsieh, Stephanie; Joseph, Mark; Morgenstern, Bruce Z

    2013-12-01

    Accurate knowledge of glomerular filtration rate (GFR) is essential to the practice of nephrology. Routine surveillance of GFR is most commonly executed using estimated GFR (eGFR) calculations, most often from serum creatinine measurements. However, cystatin C-based equations have demonstrated earlier sensitivity to decline in renal function. The literature regarding eGFR from cystatin C has few references that include transplant recipients. Additionally, for most of the published eGFR equations, patients of Hispanic ethnicity have not been enrolled in sufficient numbers. The applicability of several eGFR equations to the pediatric kidney transplant population at our center was compared in the context of determining whether Hispanic ethnicity was associated with equation performance. The updated Schwartz, CKiD, and Zappitelli eGFR estimation equations demonstrated the highest correlations. The authors recommend further prospective investigations to validate and identify factors contributing to these findings.

  7. Predicting protein concentrations with ELISA microarray assays, monotonic splines and Monte Carlo simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Daly, Don S.; Anderson, Kevin K.; White, Amanda M.

    Background: A microarray of enzyme-linked immunosorbent assays, or ELISA microarray, predicts simultaneously the concentrations of numerous proteins in a small sample. These predictions, however, are uncertain due to processing error and biological variability. Making sound biological inferences as well as improving the ELISA microarray process require both concentration predictions and credible estimates of their errors. Methods: We present a statistical method based on monotonic spline statistical models, penalized constrained least squares fitting (PCLS) and Monte Carlo simulation (MC) to predict concentrations and estimate prediction errors in ELISA microarray. PCLS restrains the flexible spline to a fit of assay intensity that is a monotone function of protein concentration. With MC, both modeling and measurement errors are combined to estimate prediction error. The spline/PCLS/MC method is compared to a common method using simulated and real ELISA microarray data sets. Results: In contrast to the rigid logistic model, the flexible spline model gave credible fits in almost all test cases including troublesome cases with left and/or right censoring, or other asymmetries. For the real data sets, 61% of the spline predictions were more accurate than their comparable logistic predictions, especially at the extremes of the prediction curve. The relative errors of 50% of comparable spline and logistic predictions differed by less than 20%. Monte Carlo simulation rendered acceptable asymmetric prediction intervals for both spline and logistic models while propagation of error produced symmetric intervals that diverged unrealistically as the standard curves approached horizontal asymptotes. Conclusions: The spline/PCLS/MC method is a flexible, robust alternative to a logistic/NLS/propagation-of-error method to reliably predict protein concentrations and estimate their errors. The spline method simplifies model selection and fitting, and reliably estimates believable prediction errors. For the 50% of the real data sets fit well by both methods, spline and logistic predictions are practically indistinguishable, varying in accuracy by less than 15%. The spline method may be useful when automated prediction across simultaneous assays of numerous proteins must be applied routinely with minimal user intervention.

  8. Decreased GFR estimated by MDRD or Cockcroft-Gault equation predicts incident CVD: the strong heart study.

    PubMed

    Shara, Nawar M; Resnick, Helaine E; Lu, Li; Xu, Jiaqiong; Vupputuri, Suma; Howard, Barbara V; Umans, Jason G

    2009-01-01

    Kidney function, expressed as glomerular filtration rate (GFR), is commonly estimated from serum creatinine (Scr) and, when decreased, may serve as a nonclassical risk factor for incident cardiovascular disease (CVD). The ability of estimated GFR (eGFR) to predict CVD events during 5-10 years of follow-up is assessed using data from the Strong Heart Study (SHS), a large cohort with a high prevalence of diabetes. eGFRs were calculated with the abbreviated Modification of Diet in Renal Disease study (MDRD) and the Cockcroft-Gault (CG) equations. These estimates were compared in participants with normal and abnormal Scr. The association between eGFR and incident CVD was assessed. More subjects were labeled as having low eGFR (<60 ml/min per 1.73 m2) by the MDRD or CG equation, than by Scr alone. When Scr was in the normal range, both equations labeled similar numbers of participants as having low eGFRs, although concordance between the equations was poor. However, when Scr was elevated, the MDRD equation labeled more subjects as having low eGFR. Persons with low eGFR had increased risk of CVD. The MDRD and CG equations labeled more participants as having decreased GFR than did Scr alone. Decreased eGFR was predictive of CVD in this American Indian population with a high prevalence of obesity and type 2 diabetes mellitus.

  9. Evaluating the generalizability of GEP models for estimating reference evapotranspiration in distant humid and arid locations

    NASA Astrophysics Data System (ADS)

    Kiafar, Hamed; Babazadeh, Hosssien; Marti, Pau; Kisi, Ozgur; Landeras, Gorka; Karimi, Sepideh; Shiri, Jalal

    2017-10-01

    Evapotranspiration estimation is of crucial importance in arid and hyper-arid regions, which suffer from water shortage, increasing dryness and heat. A modeling study on cross-station assessment between hyper-arid and humid conditions is reported here. The derived equations estimate ET0 values using temperature-, radiation-, and mass transfer-based input configurations. Using data from two meteorological stations in a hyper-arid region of Iran and two meteorological stations in a humid region of Spain, different local and cross-station approaches are applied to develop and validate the derived equations. Comparison of the gene expression programming (GEP)-derived equations with the corresponding empirical and semi-empirical ET0 estimation equations reveals the superiority of the new formulas. Therefore, the derived models can be successfully applied in these hyper-arid and humid regions, as well as in similar climatic contexts, especially where data are scarce. The results also show that, given proper input configurations, cross-station application can be a promising alternative to locally trained models at stations with data scarcity.

  10. Estimation of premorbid general fluid intelligence using traditional Chinese reading performance in Taiwanese samples.

    PubMed

    Chen, Ying-Jen; Ho, Meng-Yang; Chen, Kwan-Ju; Hsu, Chia-Fen; Ryu, Shan-Jin

    2009-08-01

    The aims of the present study were to (i) investigate if traditional Chinese word reading ability can be used for estimating premorbid general intelligence; and (ii) to provide multiple regression equations for estimating premorbid performance on Raven's Standard Progressive Matrices (RSPM), using age, years of education and Chinese Graded Word Reading Test (CGWRT) scores as predictor variables. Four hundred and twenty-six healthy volunteers (201 male, 225 female), aged 16-93 years (mean +/- SD, 41.92 +/- 18.19 years) undertook the tests individually under supervised conditions. Seventy percent of subjects were randomly allocated to the derivation group (n = 296), and the rest to the validation group (n = 130). RSPM score was positively correlated with CGWRT score and years of education. RSPM and CGWRT scores and years of education were also inversely correlated with age, but the declining trend for RSPM performance against age was steeper than that for CGWRT performance. Separate multiple regression equations were derived for estimating RSPM scores using different combinations of age, years of education, and CGWRT score for both groups. The multiple regression coefficient of each equation ranged from 0.71 to 0.80 with the standard error of estimate between 7 and 8 RSPM points. When fitting the data of one group to the equations derived from its counterpart group, the cross-validation multiple regression coefficients ranged from 0.71 to 0.79. There were no significant differences in the 'predicted-obtained' RSPM discrepancies between any equations. The regression equations derived in the present study may provide a basis for estimating premorbid RSPM performance.

  11. Estimating volume, biomass, and potential emissions of hand-piled fuels

    Treesearch

    Clinton S. Wright; Cameron S. Balog; Jeffrey W. Kelly

    2009-01-01

    Dimensions, volume, and biomass were measured for 121 hand-constructed piles composed primarily of coniferous (n = 63) and shrub/hardwood (n = 58) material at sites in Washington and California. Equations using pile dimensions, shape, and type allow users to accurately estimate the biomass of hand piles. Equations for estimating true pile volume from simple geometric...

  12. Reliability of Summed Item Scores Using Structural Equation Modeling: An Alternative to Coefficient Alpha

    ERIC Educational Resources Information Center

    Green, Samuel B.; Yang, Yanyun

    2009-01-01

    A method is presented for estimating reliability using structural equation modeling (SEM) that allows for nonlinearity between factors and item scores. Assuming the focus is on consistency of summed item scores, this method for estimating reliability is preferred to those based on linear SEM models and to the most commonly reported estimate of…

  13. A Comparison of Height-Accumulation and Volume-Equation Methods for Estimating Tree and Stand Volumes

    Treesearch

    R.B. Ferguson; V. Clark Baldwin

    1995-01-01

    Estimating tree and stand volume in mature plantations is time consuming, involving much manpower and equipment; however, several sampling and volume-prediction techniques are available. This study showed that a well-constructed, volume-equation method yields estimates comparable to those of the often more time-consuming, height-accumulation method, even though the...

  14. An estimator for the relative entropy rate of path measures for stochastic differential equations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Opper, Manfred, E-mail: manfred.opper@tu-berlin.de

    2017-02-01

    We address the problem of estimating the relative entropy rate (RER) for two stochastic processes described by stochastic differential equations. For the case where the drift of one process is known analytically, but one has only observations from the second process, we use a variational bound on the RER to construct an estimator.

  15. Discrete-time state estimation for stochastic polynomial systems over polynomial observations

    NASA Astrophysics Data System (ADS)

    Hernandez-Gonzalez, M.; Basin, M.; Stepanov, O.

    2018-07-01

    This paper presents a solution to the mean-square state estimation problem for stochastic nonlinear polynomial systems over polynomial observations corrupted by additive white Gaussian noise. The solution is given in two steps: (a) computing the time-update equations and (b) computing the measurement-update equations for the state estimate and error covariance matrix. A closed form of this filter is obtained by expressing conditional expectations of polynomial terms as functions of the state estimate and error covariance. As a particular case, the mean-square filtering equations are derived for a third-degree polynomial system with second-degree polynomial measurements. Numerical simulations show the effectiveness of the proposed filter compared to the extended Kalman filter.
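
    For comparison purposes only, the extended Kalman filter used as the baseline above also splits into the familiar time-update and measurement-update steps. The scalar sketch below assumes an illustrative third-degree polynomial state equation and second-degree polynomial measurement; it is not the paper's mean-square optimal filter.

```python
# Hedged sketch: scalar extended Kalman filter for an assumed polynomial model
#   x_{k+1} = a*x_k + b*x_k**3 + w_k,   y_k = x_k + c*x_k**2 + v_k
# (illustrative system; the paper derives a different, mean-square optimal filter).
import numpy as np

a, b, c = 0.9, -0.05, 0.3
Q, R = 0.01, 0.04                        # process and measurement noise variances

rng = np.random.default_rng(3)
x, xhat, P = 1.0, 0.5, 1.0
for k in range(50):
    # simulate the true system and a noisy polynomial measurement
    x = a * x + b * x**3 + rng.normal(0, np.sqrt(Q))
    y = x + c * x**2 + rng.normal(0, np.sqrt(R))

    # time update: propagate estimate and covariance through the Jacobian F
    F = a + 3 * b * xhat**2
    xhat = a * xhat + b * xhat**3
    P = F * P * F + Q

    # measurement update: linearise h about the predicted state
    H = 1 + 2 * c * xhat
    K = P * H / (H * P * H + R)
    xhat = xhat + K * (y - (xhat + c * xhat**2))
    P = (1 - K * H) * P

print(f"true state {x:.3f}, EKF estimate {xhat:.3f}")
```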

  16. Network Reconstruction From High-Dimensional Ordinary Differential Equations.

    PubMed

    Chen, Shizhe; Shojaie, Ali; Witten, Daniela M

    2017-01-01

    We consider the task of learning a dynamical system from high-dimensional time-course data. For instance, we might wish to estimate a gene regulatory network from gene expression data measured at discrete time points. We model the dynamical system nonparametrically as a system of additive ordinary differential equations. Most existing methods for parameter estimation in ordinary differential equations estimate the derivatives from noisy observations. This is known to be challenging and inefficient. We propose a novel approach that does not involve derivative estimation. We show that the proposed method can consistently recover the true network structure even in high dimensions, and we demonstrate empirical improvement over competing approaches. Supplementary materials for this article are available online.

  17. Comparing Three Estimation Methods for the Three-Parameter Logistic IRT Model

    ERIC Educational Resources Information Center

    Lamsal, Sunil

    2015-01-01

    Different estimation procedures have been developed for the unidimensional three-parameter item response theory (IRT) model. These techniques include the marginal maximum likelihood estimation, the fully Bayesian estimation using Markov chain Monte Carlo simulation techniques, and the Metropolis-Hastings Robbin-Monro estimation. With each…
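
    The three-parameter logistic model being estimated has the well-known item response function P(θ) = c + (1 − c) / (1 + exp(−Da(θ − b))). A minimal sketch of that function and the resulting item log-likelihood, the quantity the estimation methods above maximise or sample from, is shown below; the scaling constant D = 1.7 is one common convention, and the parameter values are hypothetical.

```python
# Sketch of the 3PL item response function and an item log-likelihood,
# the quantity the estimation methods above (MML, MCMC, MH-RM) work with.
# D = 1.7 is one common scaling convention; item parameters are hypothetical.
import numpy as np

def p_3pl(theta, a, b, c, D=1.7):
    """Probability of a correct response under the 3PL model."""
    return c + (1.0 - c) / (1.0 + np.exp(-D * a * (theta - b)))

def item_loglik(responses, theta, a, b, c):
    """Bernoulli log-likelihood of 0/1 responses for one item."""
    p = p_3pl(theta, a, b, c)
    return np.sum(responses * np.log(p) + (1 - responses) * np.log(1 - p))

theta = np.array([-1.0, 0.0, 0.5, 1.5])        # examinee abilities
resp = np.array([0, 1, 1, 1])                  # observed item scores
print(round(item_loglik(resp, theta, a=1.2, b=0.1, c=0.2), 3))
```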

  18. Methods for estimating low-flow statistics for Massachusetts streams

    USGS Publications Warehouse

    Ries, Kernell G.; Friesz, Paul J.

    2000-01-01

    Methods and computer software are described in this report for determining flow duration, low-flow frequency statistics, and August median flows. These low-flow statistics can be estimated for unregulated streams in Massachusetts using different methods depending on whether the location of interest is at a streamgaging station, a low-flow partial-record station, or an ungaged site where no data are available. Low-flow statistics for streamgaging stations can be estimated using standard U.S. Geological Survey methods described in the report. The MOVE.1 mathematical method and a graphical correlation method can be used to estimate low-flow statistics for low-flow partial-record stations. The MOVE.1 method is recommended when the relation between measured flows at a partial-record station and daily mean flows at a nearby, hydrologically similar streamgaging station is linear, and the graphical method is recommended when the relation is curved. Equations are presented for computing the variance and equivalent years of record for estimates of low-flow statistics for low-flow partial-record stations when either a single or multiple index stations are used to determine the estimates. The drainage-area ratio method or regression equations can be used to estimate low-flow statistics for ungaged sites where no data are available. The drainage-area ratio method is generally as accurate as or more accurate than regression estimates when the drainage-area ratio for an ungaged site is between 0.3 and 1.5 times the drainage area of the index data-collection site. Regression equations were developed to estimate the natural, long-term 99-, 98-, 95-, 90-, 85-, 80-, 75-, 70-, 60-, and 50-percent duration flows; the 7-day, 2-year and the 7-day, 10-year low flows; and the August median flow for ungaged sites in Massachusetts. Streamflow statistics and basin characteristics for 87 to 133 streamgaging stations and low-flow partial-record stations were used to develop the equations. The streamgaging stations had from 2 to 81 years of record, with a mean record length of 37 years. The low-flow partial-record stations had from 8 to 36 streamflow measurements, with a median of 14 measurements. All basin characteristics were determined from digital map data. The basin characteristics that were statistically significant in most of the final regression equations were drainage area, the area of stratified-drift deposits per unit of stream length plus 0.1, mean basin slope, and an indicator variable that was 0 in the eastern region and 1 in the western region of Massachusetts. The equations were developed by use of weighted-least-squares regression analyses, with weights assigned proportional to the years of record and inversely proportional to the variances of the streamflow statistics for the stations. Standard errors of prediction ranged from 70.7 to 17.5 percent for the equations to predict the 7-day, 10-year low flow and 50-percent duration flow, respectively. The equations are not applicable for use in the Southeast Coastal region of the State, or where basin characteristics for the selected ungaged site are outside the ranges of those for the stations used in the regression analyses. A World Wide Web application was developed that provides streamflow statistics for data collection stations from a data base and for ungaged sites by measuring the necessary basin characteristics for the site and solving the regression equations. 
    Output provided by the Web application for ungaged sites includes a map of the drainage-basin boundary determined for the site, the measured basin characteristics, the estimated streamflow statistics, and 90-percent prediction intervals for the estimates. An equation is provided for combining regression and correlation estimates to obtain improved estimates of the streamflow statistics for low-flow partial-record stations. An equation is also provided for combining regression and drainage-area ratio estimates to obtain improved estimates.
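
    Two of the transfer techniques described above, the drainage-area ratio method and MOVE.1 record extension, are simple enough to sketch directly. The MOVE.1 form below is the standard maintenance-of-variance (line of organic correlation) relation for positively correlated records, applied here to hypothetical untransformed flows; the report's own implementation should be consulted for the exact procedure.

```python
# Hedged sketch of two transfer methods named above (hypothetical numbers).
import numpy as np

def drainage_area_ratio(q_index, area_ungaged, area_index):
    """Scale an index-station statistic by the drainage-area ratio."""
    return q_index * (area_ungaged / area_index)

def move1(x_new, x_partial, y_partial):
    """MOVE.1 record extension: maintenance-of-variance line fitted to
    concurrent partial-record (y) and index-station (x) flows, assuming
    positive correlation."""
    x_partial, y_partial = np.asarray(x_partial), np.asarray(y_partial)
    slope = np.std(y_partial, ddof=1) / np.std(x_partial, ddof=1)
    return y_partial.mean() + slope * (x_new - x_partial.mean())

# Drainage-area ratio example: a statistic of 2.4 ft3/s at a 58 mi2 index
# station, transferred to a 41 mi2 ungaged site (ratio 0.71, inside 0.3-1.5).
print(round(drainage_area_ratio(2.4, 41.0, 58.0), 2))

# MOVE.1 example with hypothetical concurrent low-flow measurements.
x_idx = [1.8, 2.6, 3.1, 4.0, 5.2]     # index-station daily flows
y_prs = [0.9, 1.5, 1.7, 2.4, 3.3]     # partial-record measurements
print(round(move1(3.5, x_idx, y_prs), 2))
```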

  19. Spatiotemporal chaos of fractional order logistic equation in nonlinear coupled lattices

    NASA Astrophysics Data System (ADS)

    Zhang, Ying-Qian; Wang, Xing-Yuan; Liu, Li-Yan; He, Yi; Liu, Jia

    2017-11-01

    We investigate a new spatiotemporal dynamics based on a fractional-order logistic map with spatial nonlinear coupling. The features of spatial nonlinear coupling, such as a higher percentage of lattice sites exhibiting chaotic behavior over most of the parameter range and the absence of periodic windows in the bifurcation diagrams, are retained, making the system more suitable for encryption than the earlier adjacent coupled map lattices. In addition, the proposed model has new features, such as a wider parameter range and a wider range of state amplitudes for ergodicity, which contribute a larger key space when the model is applied to encryption. Simulations and theoretical analyses are developed in this paper.
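
    For contrast with the adjacent coupled map lattice mentioned as the predecessor model, the sketch below iterates a standard integer-order logistic map on an adjacently (diffusively) coupled lattice. The paper's model replaces both the map (fractional-order) and the coupling (nonlinear), so this is background only, with illustrative parameter values.

```python
# Hedged sketch: adjacent (diffusively) coupled logistic map lattice -- the
# predecessor model referred to above, NOT the paper's fractional-order,
# nonlinearly coupled system. Parameters are illustrative.
import numpy as np

L, eps, r, n_iter = 64, 0.3, 3.99, 1000
rng = np.random.default_rng(4)
x = rng.uniform(0.1, 0.9, L)

f = lambda v: r * v * (1.0 - v)
for _ in range(n_iter):
    fx = f(x)
    # each site mixes its own update with its two nearest neighbours
    x = (1 - eps) * fx + 0.5 * eps * (np.roll(fx, 1) + np.roll(fx, -1))

print("lattice state summary: min %.3f, max %.3f, mean %.3f"
      % (x.min(), x.max(), x.mean()))
```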

  20. Are traditional body fat equations and anthropometry valid to estimate body fat in children and adolescents living with HIV?

    PubMed

    Lima, Luiz Rodrigo Augustemak de; Martins, Priscila Custódio; Junior, Carlos Alencar Souza Alves; Castro, João Antônio Chula de; Silva, Diego Augusto Santos; Petroski, Edio Luiz

    The aim of this study was to assess the validity of traditional anthropometric equations and to develop predictive equations of total body and trunk fat for children and adolescents living with HIV based on anthropometric measurements. Forty-eight children and adolescents of both sexes (24 boys) aged 7-17 years, living in Santa Catarina, Brazil, participated in the study. Dual-energy X-ray absorptiometry was used as the reference method to evaluate total body and trunk fat. Height, body weight, circumferences and triceps, subscapular, abdominal and calf skinfolds were measured. The traditional equations of Lohman and Slaughter were used to estimate body fat. Multiple regression models were fitted to predict total body fat (Model 1) and trunk fat (Model 2) using a backward selection procedure. Model 1 had an R² = 0.85 and a standard error of the estimate of 1.43. Model 2 had an R² = 0.80 and a standard error of the estimate of 0.49. The traditional equations of Lohman and Slaughter showed poor performance in estimating body fat in children and adolescents living with HIV. The prediction models using anthropometry provided reliable estimates and can be used by clinicians and healthcare professionals to monitor total body and trunk fat in children and adolescents living with HIV. Copyright © 2017 Sociedade Brasileira de Infectologia. Published by Elsevier Editora Ltda. All rights reserved.

  1. The Estimation of Gestational Age at Birth in Database Studies.

    PubMed

    Eberg, Maria; Platt, Robert W; Filion, Kristian B

    2017-11-01

    Studies on the safety of prenatal medication use require valid estimation of the pregnancy duration. However, gestational age is often incompletely recorded in administrative and clinical databases. Our objective was to compare different approaches to estimating the pregnancy duration. Using data from the Clinical Practice Research Datalink and Hospital Episode Statistics, we examined the following four approaches to estimating missing gestational age: (1) generalized estimating equations for longitudinal data; (2) multiple imputation; (3) estimation based on fetal birth weight and sex; and (4) conventional approaches that assigned a fixed value (39 weeks for all or 39 weeks for full term and 35 weeks for preterm). The gestational age recorded in Hospital Episode Statistics was considered the gold standard. We conducted a simulation study comparing the described approaches in terms of estimated bias and mean square error. A total of 25,929 infants from 22,774 mothers were included in our "gold standard" cohort. The smallest average absolute bias was observed for the generalized estimating equation that included birth weight, while the largest absolute bias occurred when assigning 39-week gestation to all those with missing values. The smallest mean square errors were detected with generalized estimating equations while multiple imputation had the highest mean square errors. The use of generalized estimating equations resulted in the most accurate estimation of missing gestational age when birth weight information was available. In the absence of birth weight, assignment of fixed gestational age based on term/preterm status may be the optimal approach.
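
    A minimal sketch of approach (1), a generalized estimating equation for infants clustered within mothers, is shown below using statsmodels. The column names, covariates, and exchangeable working correlation are assumptions for illustration on synthetic data, not the study's actual specification.

```python
# Hedged sketch: GEE for gestational age with infants clustered by mother.
# Column names, covariates and the exchangeable working correlation are
# illustrative assumptions, not the study's actual specification.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(5)
n_mothers, n_babies = 300, 2
mother_id = np.repeat(np.arange(n_mothers), n_babies)
birth_weight = rng.normal(3.3, 0.5, n_mothers * n_babies)        # kg
male = rng.integers(0, 2, n_mothers * n_babies)
mother_effect = np.repeat(rng.normal(0, 0.6, n_mothers), n_babies)
gest_age = (36 + 1.2 * birth_weight - 0.2 * male + mother_effect
            + rng.normal(0, 0.8, n_mothers * n_babies))

df = pd.DataFrame({"gest_age": gest_age, "birth_weight": birth_weight,
                   "male": male, "mother_id": mother_id})

model = smf.gee("gest_age ~ birth_weight + male", groups="mother_id",
                data=df, family=sm.families.Gaussian(),
                cov_struct=sm.cov_struct.Exchangeable())
print(model.fit().summary())
```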

  2. Logistic regression of family data from retrospective study designs.

    PubMed

    Whittemore, Alice S; Halpern, Jerry

    2003-11-01

    We wish to study the effects of genetic and environmental factors on disease risk, using data from families ascertained because they contain multiple cases of the disease. To do so, we must account for the way participants were ascertained, and for within-family correlations in both disease occurrences and covariates. We model the joint probability distribution of the covariates of ascertained family members, given family disease occurrence and pedigree structure. We describe two such covariate models: the random effects model and the marginal model. Both models assume a logistic form for the distribution of one person's covariates that involves a vector beta of regression parameters. The components of beta in the two models have different interpretations, and they differ in magnitude when the covariates are correlated within families. We describe ascertainment assumptions needed to estimate consistently the parameters beta(RE) in the random effects model and the parameters beta(M) in the marginal model. Under the ascertainment assumptions for the random effects model, we show that conditional logistic regression (CLR) of matched family data gives a consistent estimate of beta(RE) and a consistent estimate of its covariance matrix. Under the ascertainment assumptions for the marginal model, we show that unconditional logistic regression (ULR) gives a consistent estimate for beta(M), and we give a consistent estimator for its covariance matrix. The random effects/CLR approach is simple to use and to interpret, but it can use data only from families containing both affected and unaffected members. The marginal/ULR approach uses data from all individuals, but its variance estimates require special computations. A C program to compute these variance estimates is available at http://www.stanford.edu/dept/HRP/epidemiology. We illustrate these pros and cons by application to data on the effects of parity on ovarian cancer risk in mother/daughter pairs, and use simulations to study the performance of the estimates. Copyright 2003 Wiley-Liss, Inc.
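
    One common way to obtain marginal (ULR-type) point estimates together with a variance that respects within-family correlation is to fit a GEE with an independence working correlation: the coefficients coincide with unconditional logistic regression while the sandwich covariance accounts for clustering. The sketch below shows that route in statsmodels with hypothetical variable names; it is a generic substitute, not the authors' C program or their exact variance formula.

      # Marginal logistic model with cluster-robust (sandwich) standard errors.
      import pandas as pd
      import statsmodels.api as sm
      import statsmodels.formula.api as smf

      fam = pd.read_csv("family_data.csv")   # one row per ascertained family member

      gee = smf.gee("affected ~ parity + age",
                    groups="family_id",
                    data=fam,
                    family=sm.families.Binomial(),
                    cov_struct=sm.cov_struct.Independence())
      res = gee.fit()
      print(res.summary())   # coefficients are marginal log odds ratios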

  3. Estimating peak discharges, flood volumes, and hydrograph shapes of small ungaged urban streams in Ohio

    USGS Publications Warehouse

    Sherwood, J.M.

    1986-01-01

    Methods are presented for estimating peak discharges, flood volumes and hydrograph shapes of small (less than 5 sq mi) urban streams in Ohio. Examples of how to use the various regression equations and estimating techniques also are presented. Multiple-regression equations were developed for estimating peak discharges having recurrence intervals of 2, 5, 10, 25, 50, and 100 years. The significant independent variables affecting peak discharge are drainage area, main-channel slope, average basin-elevation index, and basin-development factor. Standard errors of regression and prediction for the peak discharge equations range from +/-37% to +/-41%. An equation also was developed to estimate the flood volume of a given peak discharge. Peak discharge, drainage area, main-channel slope, and basin-development factor were found to be the significant independent variables affecting flood volumes for given peak discharges. The standard error of regression for the volume equation is +/-52%. A technique is described for estimating the shape of a runoff hydrograph by applying a specific peak discharge and the estimated lagtime to a dimensionless hydrograph. An equation for estimating the lagtime of a basin was developed. Two variables--main-channel length divided by the square root of the main-channel slope and basin-development factor--have a significant effect on basin lagtime. The standard error of regression for the lagtime equation is +/-48%. The data base for the study was established by collecting rainfall-runoff data at 30 basins distributed throughout several metropolitan areas of Ohio. Five to eight years of data were collected at a 5-min record interval. The USGS rainfall-runoff model A634 was calibrated for each site. The calibrated models were used in conjunction with long-term rainfall records to generate a long-term streamflow record for each site. Each annual peak-discharge record was fitted to a Log-Pearson Type III frequency curve. Multiple-regression techniques were then used to analyze the peak discharge data as a function of the basin characteristics of the 30 sites. (Author's abstract)

  4. Exploration of Logistics Information Technology (IT) Solutions for the Royal Saudi Naval Force Within the Saudi Naval Expansion Program II (SNEP II)

    DTIC Science & Technology

    2017-12-01

  5. Logistic regression for circular data

    NASA Astrophysics Data System (ADS)

    Al-Daffaie, Kadhem; Khan, Shahjahan

    2017-05-01

    This paper considers the relationship between a binary response and a circular predictor. It develops the logistic regression model by employing the linear-circular regression approach. The maximum likelihood method is used to estimate the parameters. The Newton-Raphson numerical method is used to find the estimated values of the parameters. A data set from weather records of Toowoomba city is analysed by the proposed methods. Moreover, a simulation study is considered. The R software is used for all computations and simulations.
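
    As a rough illustration of the linear-circular approach this record describes, the sketch below encodes the circular predictor through its cosine and sine and fits a logistic regression by maximum likelihood with a Newton-type optimiser. The data are simulated for the example; the Toowoomba weather records are not reproduced here.

      # Logistic regression with a circular predictor via cos/sin terms.
      import numpy as np
      import statsmodels.api as sm

      rng = np.random.default_rng(1)
      theta = rng.uniform(0, 2 * np.pi, 500)            # circular predictor (radians)
      lin_pred = -0.5 + 1.2 * np.cos(theta) + 0.8 * np.sin(theta)
      y = rng.binomial(1, 1 / (1 + np.exp(-lin_pred)))

      X = sm.add_constant(np.column_stack([np.cos(theta), np.sin(theta)]))
      fit = sm.Logit(y, X).fit()                        # Newton-Raphson by default
      print(fit.params)                                  # intercept, cos and sin coefficients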

  6. Estimating interaction on an additive scale between continuous determinants in a logistic regression model.

    PubMed

    Knol, Mirjam J; van der Tweel, Ingeborg; Grobbee, Diederick E; Numans, Mattijs E; Geerlings, Mirjam I

    2007-10-01

    To determine the presence of interaction in epidemiologic research, typically a product term is added to the regression model. In linear regression, the regression coefficient of the product term reflects interaction as departure from additivity. However, in logistic regression it refers to interaction as departure from multiplicativity. Rothman has argued that interaction estimated as departure from additivity better reflects biologic interaction. So far, literature on estimating interaction on an additive scale using logistic regression only focused on dichotomous determinants. The objective of the present study was to provide the methods to estimate interaction between continuous determinants and to illustrate these methods with a clinical example. From the existing literature, we derived the formulas to quantify interaction as departure from additivity between one continuous and one dichotomous determinant and between two continuous determinants using logistic regression. Bootstrapping was used to calculate the corresponding confidence intervals. To illustrate the theory with an empirical example, data from the Utrecht Health Project were used, with age and body mass index as risk factors for elevated diastolic blood pressure. The methods and formulas presented in this article are intended to assist epidemiologists to calculate interaction on an additive scale between two variables on a certain outcome. The proposed methods are included in a spreadsheet which is freely available at: http://www.juliuscenter.nl/additive-interaction.xls.
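
    The sketch below follows the general RERI logic for two continuous determinants: pick an increment for each determinant, compute three odds ratios from a logistic model with a product term, and bootstrap the confidence interval. The variable names (elev_dbp, age, bmi), file name and increments are hypothetical, the reference values are taken as zero (so the determinants should be centred at the chosen reference first), and this is a generic restatement rather than the authors' spreadsheet formulas.

      # RERI (interaction as departure from additivity) for two continuous determinants.
      import numpy as np
      import pandas as pd
      import statsmodels.formula.api as smf

      def reri(data, d_age=10.0, d_bmi=5.0):
          fit = smf.logit("elev_dbp ~ age * bmi", data=data).fit(disp=0)
          b = fit.params
          # ORs for increments d_age and d_bmi relative to reference values of 0.
          or11 = np.exp(b["age"] * d_age + b["bmi"] * d_bmi + b["age:bmi"] * d_age * d_bmi)
          or10 = np.exp(b["age"] * d_age)
          or01 = np.exp(b["bmi"] * d_bmi)
          return or11 - or10 - or01 + 1.0

      df = pd.read_csv("utrecht_health_project.csv")   # hypothetical extract
      point = reri(df)
      boot = [reri(df.sample(len(df), replace=True)) for _ in range(1000)]
      lo, hi = np.percentile(boot, [2.5, 97.5])
      print(f"RERI = {point:.2f} (95% CI {lo:.2f} to {hi:.2f})")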

  7. Integrating biology, field logistics, and simulations to optimize parameter estimation for imperiled species

    USGS Publications Warehouse

    Lanier, Wendy E.; Bailey, Larissa L.; Muths, Erin L.

    2016-01-01

    Conservation of imperiled species often requires knowledge of vital rates and population dynamics. However, these can be difficult to estimate for rare species and small populations. This problem is further exacerbated when individuals are not available for detection during some surveys due to limited access, delaying surveys and creating mismatches between the breeding behavior and survey timing. Here we use simulations to explore the impacts of this issue using four hypothetical boreal toad (Anaxyrus boreas boreas) populations, representing combinations of logistical access (accessible, inaccessible) and breeding behavior (synchronous, asynchronous). We examine the bias and precision of survival and breeding probability estimates generated by survey designs that differ in effort and timing for these populations. Our findings indicate that the logistical access of a site and mismatch between the breeding behavior and survey design can greatly limit the ability to yield accurate and precise estimates of survival and breeding probabilities. Simulations similar to what we have performed can help researchers determine an optimal survey design(s) for their system before initiating sampling efforts.

  8. Logistic regression for dichotomized counts.

    PubMed

    Preisser, John S; Das, Kalyan; Benecha, Habtamu; Stamm, John W

    2016-12-01

    Sometimes there is interest in a dichotomized outcome indicating whether a count variable is positive or zero. Under this scenario, the application of ordinary logistic regression may result in efficiency loss, which is quantifiable under an assumed model for the counts. In such situations, a shared-parameter hurdle model is investigated for more efficient estimation of regression parameters relating to overall effects of covariates on the dichotomous outcome, while handling count data with many zeroes. One model part provides a logistic regression containing marginal log odds ratio effects of primary interest, while an ancillary model part describes the mean count of a Poisson or negative binomial process in terms of nuisance regression parameters. Asymptotic efficiency of the logistic model parameter estimators of the two-part models is evaluated with respect to ordinary logistic regression. Simulations are used to assess the properties of the models with respect to power and Type I error, the latter investigated under both misspecified and correctly specified models. The methods are applied to data from a randomized clinical trial of three toothpaste formulations to prevent incident dental caries in a large population of Scottish schoolchildren. © The Author(s) 2014.

  9. August median streamflow on ungaged streams in Eastern Coastal Maine

    USGS Publications Warehouse

    Lombard, Pamela J.

    2004-01-01

    Methods for estimating August median streamflow were developed for ungaged, unregulated streams in eastern coastal Maine. The methods apply to streams with drainage areas ranging in size from 0.04 to 73.2 square miles and fraction of basin underlain by a sand and gravel aquifer ranging from 0 to 71 percent. The equations were developed with data from three long-term (greater than or equal to 10 years of record) continuous-record streamflow-gaging stations, 23 partial-record streamflow-gaging stations, and 5 short-term (less than 10 years of record) continuous-record streamflow-gaging stations. A mathematical technique for estimating a standard low-flow statistic, August median streamflow, at partial-record streamflow-gaging stations and short-term continuous-record streamflow-gaging stations was applied by relating base-flow measurements at these stations to concurrent daily streamflows at nearby long-term continuous-record streamflow-gaging stations (index stations). Generalized least-squares regression analysis (GLS) was used to relate estimates of August median streamflow at streamflow-gaging stations to basin characteristics at these same stations to develop equations that can be applied to estimate August median streamflow on ungaged streams. GLS accounts for different periods of record at the gaging stations and the cross correlation of concurrent streamflows among gaging stations. Thirty-one stations were used for the final regression equations. Two basin characteristics, drainage area and fraction of basin underlain by a sand and gravel aquifer, are used in the calculated regression equation to estimate August median streamflow for ungaged streams. The equation has an average standard error of prediction from -27 to 38 percent. A one-variable equation uses only drainage area to estimate August median streamflow when less accuracy is acceptable. This equation has an average standard error of prediction from -30 to 43 percent. Model error is larger than sampling error for both equations, indicating that additional or improved estimates of basin characteristics could be important to improved estimates of low-flow statistics. Weighted estimates of August median streamflow at partial-record or continuous-record gaging stations range from 0.003 to 31.0 cubic feet per second or from 0.1 to 0.6 cubic feet per second per square mile. Estimates of August median streamflow on ungaged streams in eastern coastal Maine, within the range of acceptable explanatory variables, range from 0.003 to 45 cubic feet per second or 0.1 to 0.6 cubic feet per second per square mile. Estimates of August median streamflow per square mile of drainage area generally increase as drainage area and fraction of basin underlain by a sand and gravel aquifer increase.

  10. Going Mobile: An Empirical Model for Explaining Successful Information Logistics in Ward Rounds.

    PubMed

    Esdar, Moritz; Liebe, Jan-David; Babitsch, Birgit; Hübner, Ursula

    2018-01-01

    Medical ward rounds are critical focal points of inpatient care that call for uniquely flexible solutions to provide clinical information at the bedside. While this fact is undoubted, adoption rates of mobile IT solutions remain rather low. Our goal was to investigate if and how mobile IT solutions influence successful information provision at the bedside, i.e. clinical information logistics, as well as to shed light on socio-organizational factors that facilitate adoption rates from a user-centered perspective. Survey data were collected from 373 medical and nursing directors of German, Austrian and Swiss hospitals and analyzed using variance-based Structural Equation Modelling (SEM). The adoption of mobile IT solutions explains large portions of clinical information logistics and is in itself associated with an organizational culture of innovation and end user participation. Results should encourage decision makers to understand mobility as a core constituent of information logistics and thus to promote close end-user participation as well as to work towards building a culture of innovation.

  11. An Evaluation of One- and Three-Parameter Logistic Tailored Testing Procedures for Use with Small Item Pools.

    ERIC Educational Resources Information Center

    McKinley, Robert L.; Reckase, Mark D.

    A two-stage study was conducted to compare the ability estimates yielded by tailored testing procedures based on the one-parameter logistic (1PL) and three-parameter logistic (3PL) models. The first stage of the study employed real data, while the second stage employed simulated data. In the first stage, response data for 3,000 examinees were…

  12. Global attractivity of positive periodic solution to periodic Lotka-Volterra competition systems with pure delay

    NASA Astrophysics Data System (ADS)

    Tang, Xianhua; Cao, Daomin; Zou, Xingfu

    We consider a periodic Lotka-Volterra competition system without instantaneous negative feedbacks (i.e., a pure-delay system), x_i'(t) = x_i(t)[r_i(t) - ∑_{j=1}^{n} a_{ij}(t) x_j(t - τ_{ij}(t))], i = 1, 2, …, n. We establish some 3/2-type criteria for global attractivity of a positive periodic solution of the system, which generalize the well-known Wright's 3/2 criteria for the autonomous delay logistic equation, and thereby address the open problem proposed by both Kuang [Y. Kuang, Global stability in delayed nonautonomous Lotka-Volterra type systems without saturated equilibria, Differential Integral Equations 9 (1996) 557-567] and Teng [Z. Teng, Nonautonomous Lotka-Volterra systems with delays, J. Differential Equations 179 (2002) 538-561].
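
    For context, the autonomous delay logistic equation that the 3/2-type criteria generalize, x'(t) = r x(t)[1 - x(t - τ)/K], can be explored numerically with a simple Euler scheme and a history buffer, as in the sketch below. The parameter values are illustrative only; the classical sufficient condition r·τ ≤ 3/2 is what motivates the choice r·τ = 1.2 here.

      # Euler simulation of the autonomous delay logistic (Hutchinson) equation.
      import numpy as np

      r, K, tau = 1.0, 1.0, 1.2          # r*tau = 1.2 < 3/2, so x -> K is expected
      dt, T = 0.001, 60.0
      n_hist = int(tau / dt)

      x = np.empty(int(T / dt) + n_hist)
      x[:n_hist] = 0.3                    # constant positive history on [-tau, 0]
      for i in range(n_hist, len(x)):
          x[i] = x[i - 1] + dt * r * x[i - 1] * (1.0 - x[i - 1 - n_hist] / K)
      print(x[-1])                        # approaches K = 1 for r*tau below 3/2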

  13. An Estimation Theory for Differential Equations and other Problems, with Applications.

    DTIC Science & Technology

    1981-11-01

    Final Technical Report by Johann Schröder, November 1981. The record text is fragmentary; the recoverable portions indicate that the report develops an estimation theory for differential equations and other problems, with applications, covering differential operators and M-operators (in particular, the Perron-Frobenius theory and its generalizations) and convergence theory for iterative methods.

  14. Use of a Spreadsheet to Help Students Understand the Origin of the Empirical Equation that Allows Estimation of the Extinction Coefficients of Proteins

    ERIC Educational Resources Information Center

    Sims, Paul A.

    2012-01-01

    A brief history of the development of the empirical equation that is used by prominent, Internet-based programs to estimate (or calculate) the extinction coefficients of proteins is presented. In addition, an overview of a series of related assignments designed to help students understand the origin of the empirical equation is provided. The…

  15. Regional equations for estimation of peak-streamflow frequency for natural basins in Texas

    USGS Publications Warehouse

    Asquith, William H.; Slade, Raymond M.

    1997-01-01

    Peak-streamflow frequency for 559 Texas stations with natural (unregulated and rural or nonurbanized) basins was estimated with annual peak-streamflow data through 1993. The peak-streamflow frequency and drainage-basin characteristics for the Texas stations were used to develop 16 sets of equations to estimate peak-streamflow frequency for ungaged natural stream sites in each of 11 regions in Texas. The relation between peak-streamflow frequency and contributing drainage area for 5 of the 11 regions is curvilinear, requiring that one set of equations be developed for drainage areas less than 32 square miles and another set be developed for drainage areas greater than 32 square miles. These equations, developed through multiple-regression analysis using weighted least squares, are based on the relation between peak-streamflow frequency and basin characteristics for streamflow-gaging stations. The regions represent areas with similar flood characteristics. The use and limitations of the regression equations also are discussed. Additionally, procedures are presented to compute the 50-, 67-, and 90-percent confidence limits for any estimation from the equations. Also, supplemental peak-streamflow frequency and basin characteristics for 105 selected stations bordering Texas are included in the report. This supplemental information will aid in interpretation of flood characteristics for sites near the state borders of Texas.

  16. Estimating fire-caused mortality and injury in oak-hickory forests.

    Treesearch

    Robert M. Loomis

    1973-01-01

    Presents equations and graphs for predicting fire-caused tree mortality and equations for estimating basal wound dimensions for surviving trees. The methods apply to black oak, white oak, and some other species of the oak-hickory forest type.

  17. Flood characteristics of urban watersheds in the United States

    USGS Publications Warehouse

    Sauer, Vernon B.; Thomas, W.O.; Stricker, V.A.; Wilson, K.V.

    1983-01-01

    A nationwide study of flood magnitude and frequency in urban areas was made for the purpose of reviewing available literature, compiling an urban flood data base, and developing methods of estimating urban floodflow characteristics in ungaged areas. The literature review contains synopses of 128 recent publications related to urban floodflow. A data base of 269 gaged basins in 56 cities and 31 States, including Hawaii, contains a wide variety of topographic and climatic characteristics, land-use variables, indices of urbanization, and flood-frequency estimates. Three sets of regression equations were developed to estimate flood discharges for ungaged sites for recurrence intervals of 2, 5, 10, 25, 50, 100, and 500 years. Two sets of regression equations are based on seven independent parameters and the third is based on three independent parameters. The only difference in the two sets of seven-parameter equations is the use of basin lag time in one and lake and reservoir storage in the other. Of primary importance in these equations is an independent estimate of the equivalent rural discharge for the ungaged basin. The equations adjust the equivalent rural discharge to an urban condition. The primary adjustment factor, or index of urbanization, is the basin development factor, a measure of the extent of development of the drainage system in the basin. This measure includes evaluations of storm drains (sewers), channel improvements, and curb-and-gutter streets. The basin development factor is statistically very significant and offers a simple and effective way of accounting for drainage development and runoff response in urban areas. Percentage of impervious area is also included in the seven-parameter equations as an additional measure of urbanization and apparently accounts for increased runoff volumes. This factor is not highly significant for large floods, which supports the generally held concept that imperviousness is not a dominant factor when soils become more saturated during large storms. Other parameters in the seven-parameter equations include drainage area size, channel slope, rainfall intensity, lake and reservoir storage, and basin lag time. These factors are all statistically significant and provide logical indices of basin conditions. The three-parameter equations include only the three most significant parameters: rural discharge, basin-development factor, and drainage area size. All three sets of regression equations provide unbiased estimates of urban flood frequency. The seven-parameter regression equations without basin lag time have average standard errors of regression varying from ±37 percent for the 5-year flood to ±44 percent for the 100-year flood and ±49 percent for the 500-year flood. The other two sets of regression equations have similar accuracy. Several tests for bias, sensitivity, and hydrologic consistency are included which support the conclusion that the equations are useful throughout the United States. All estimating equations were developed from data collected on drainage basins where temporary in-channel storage, due to highway embankments, was not significant. Consequently, estimates made with these equations do not account for the reducing effect of this temporary detention storage.

  18. "You Should Have Seen the Look on Your Face…": Self-awareness of Facial Expressions.

    PubMed

    Qu, Fangbing; Yan, Wen-Jing; Chen, Yu-Hsin; Li, Kaiyun; Zhang, Hui; Fu, Xiaolan

    2017-01-01

    The awareness of facial expressions allows one to better understand, predict, and regulate his/her states to adapt to different social situations. The present research investigated individuals' awareness of their own facial expressions and the influence of the duration and intensity of expressions in two self-reference modalities, a real-time condition and a video-review condition. The participants were instructed to respond as soon as they became aware of any facial movements. The results revealed that awareness rates were 57.79% in the real-time condition and 75.92% in the video-review condition. The awareness rate was influenced by the intensity and (or) the duration. The intensity thresholds for individuals to become aware of their own facial expressions were calculated using logistic regression models. The results of Generalized Estimating Equations (GEE) revealed that video-review awareness was a significant predictor of real-time awareness. These findings extend understandings of human facial expression self-awareness in two modalities.
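
    A logistic GEE of the general kind used in this study, with repeated expression events clustered within participants, can be sketched in statsmodels as below. The column names are hypothetical placeholders rather than the study's variables, and the exchangeable working correlation is an assumption for the example.

      # Logistic GEE: real-time awareness modelled from video-review awareness,
      # intensity and duration, with events clustered within participants.
      import pandas as pd
      import statsmodels.api as sm
      import statsmodels.formula.api as smf

      events = pd.read_csv("expression_events.csv")   # one row per facial event

      model = smf.gee("aware_realtime ~ aware_video + intensity + duration",
                      groups="participant_id",
                      data=events,
                      family=sm.families.Binomial(),
                      cov_struct=sm.cov_struct.Exchangeable())
      print(model.fit().summary())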

  19. “You Should Have Seen the Look on Your Face…”: Self-awareness of Facial Expressions

    PubMed Central

    Qu, Fangbing; Yan, Wen-Jing; Chen, Yu-Hsin; Li, Kaiyun; Zhang, Hui; Fu, Xiaolan

    2017-01-01

    The awareness of facial expressions allows one to better understand, predict, and regulate his/her states to adapt to different social situations. The present research investigated individuals’ awareness of their own facial expressions and the influence of the duration and intensity of expressions in two self-reference modalities, a real-time condition and a video-review condition. The participants were instructed to respond as soon as they became aware of any facial movements. The results revealed that awareness rates were 57.79% in the real-time condition and 75.92% in the video-review condition. The awareness rate was influenced by the intensity and (or) the duration. The intensity thresholds for individuals to become aware of their own facial expressions were calculated using logistic regression models. The results of Generalized Estimating Equations (GEE) revealed that video-review awareness was a significant predictor of real-time awareness. These findings extend understandings of human facial expression self-awareness in two modalities. PMID:28611703

  20. Correlates of Cervical Cancer Screening Among Adult Latino Women: A 5-Year Follow-Up.

    PubMed

    Rojas, Patria; Li, Tan; Ravelo, Gira J; Dawson, Christyl; Sanchez, Mariana; Sneij, Alicia; Wang, Weize; Kanamori, Mariano; Cyrus, Elena; De La Rosa, Mario R

    2017-06-01

    Latinas have the highest incidence rates of cervical cancer in the United States, and Latinas in the United States are less likely to utilize cervical cancer screening. We used secondary data analysis of a non-clinical convenience sample (n=316 women at baseline; n=285 at five-year follow-up) to examine correlates of cervical cancer screening among adult Latina women. Univariate and multiple logistic regression models using the Generalized Estimating Equations (GEE) algorithm were used to assess the influence of the independent variables. Women who reported their main healthcare source as community health clinics, women who were sexually active, and women who reported that a healthcare provider discussed HIV prevention with them were more likely to report having a cervical cancer screening (aOR=2.06; CI=1.20, 3.52). The results suggest a need for continued efforts to ensure that medically underserved women (e.g., Latina women) receive counseling and education about the importance of preventive cancer screening.

  1. Prenatal Exposure to Maternal and Paternal Smoking on Attention Deficit Hyperactivity Disorders Symptoms and Diagnosis in Offspring

    PubMed Central

    Nomura, Yoko; Marks, David J.; Halperin, Jeffrey M.

    2011-01-01

    The study examined the effect of maternal and paternal smoking during pregnancy on the child’s inattention and hyperactivity/impulsivity symptoms, and the risk for attention deficit hyperactivity disorder (ADHD) and oppositional defiant disorder (ODD). Generalized estimating equations, incorporating data from multiple informants (parents and teachers), were used to evaluate levels of ADHD as a function of parental smoking. The risk for ADHD, ODD, and comorbid ADHD and ODD was evaluated using polytomous logistic regression. We found that maternal, but not paternal, smoking was significantly associated with elevated inattention, hyperactivity/impulsivity, and total ADHD symptoms in children. Children of smoking, relative to nonsmoking, mothers had a significantly increased risk for comorbid ADHD and ODD and for ADHD, but not for ODD. Although fathers’ smoking was not associated with an increased risk, it strongly influenced mothers’ smoking; thus, intervention for both parents may be most effective in preventing the pathway to ADHD-related problems in the children. PMID:20823730

  2. Determinants of performance failure in the nursing home industry☆

    PubMed Central

    Zinn, Jacqueline; Mor, Vincent; Feng, Zhanlian; Intrator, Orna

    2013-01-01

    This study investigates the determinants of performance failure in U.S. nursing homes. The sample consisted of 91,168 surveys from 10,901 facilities included in the Online Survey Certification and Reporting system from 1996 to 2005. Failed performance was defined as termination from the Medicare and Medicaid programs. Determinants of performance failure were identified as core structural change (ownership change), peripheral change (related diversification), prior financial and quality of care performance, size and environmental shock (Medicaid case mix reimbursement and prospective payment system introduction). Additional control variables that could contribute to the likelihood of performance failure were included in a cross-sectional time series generalized estimating equation logistic regression model. Our results support the contention, derived from structural inertia theory, that where in an organization’s structure change occurs determines whether it is adaptive or disruptive. In addition, while poor prior financial and quality performance and the introduction of case mix reimbursement increases the risk of failure, larger size is protective, decreasing the likelihood of performance failure. PMID:19128865

  3. Alcohol expectancies and inhibition conflict as moderators of the alcohol-unprotected sex relationship: Event-level findings from a daily diary study among individuals living with HIV in Cape Town, South Africa

    PubMed Central

    Kiene, Susan M.; Simbayi, Leickness C.; Abrams, Amber; Cloete, Allanise

    2015-01-01

    Literature from sub-Saharan Africa and elsewhere supports a global association between alcohol and HIV risk. However, more rigorous studies using multiple event-level methods find mixed support for this association, suggesting the importance of examining potential moderators of this relationship. The present study explores the assumptions of alcohol expectancy theory and alcohol myopia theory as possible moderators that help elucidate the circumstances under which alcohol may affect individuals’ ability to use a condom. Participants were 82 individuals (58 women, 24 men) living with HIV who completed daily phone interviews for 42 days which assessed daily sexual behavior and alcohol consumption. Logistic generalized estimating equation models were used to examine the potential moderating effects of inhibition conflict and sex-related alcohol outcome expectancies. The data provided some support for both theories and in some cases the moderation effects were stronger when both partners consumed alcohol. PMID:26280530

  4. Individual- and Structural-Level Risk Factors for Suicide Attempts Among Transgender Adults.

    PubMed

    Perez-Brumer, Amaya; Hatzenbuehler, Mark L; Oldenburg, Catherine E; Bockting, Walter

    2015-01-01

    This study assessed individual (ie, internalized transphobia) and structural forms of stigma as risk factors for suicide attempts among transgender adults. Internalized transphobia was assessed through a 26-item scale including four dimensions: pride, passing, alienation, and shame. State-level structural stigma was operationalized as a composite index, including density of same-sex couples; proportion of Gay-Straight Alliances per public high school; 5 policies related to sexual orientation discrimination; and aggregated public opinion toward homosexuality. Multivariable logistic generalized estimating equation models assessed associations of interest among an online sample of transgender adults (N = 1,229) representing 48 states and the District of Columbia. Lower levels of structural stigma were associated with fewer lifetime suicide attempts (AOR 0.96, 95% CI 0.92-0.997), and a higher score on the internalized transphobia scale was associated with greater lifetime suicide attempts (AOR 1.18, 95% CI 1.04-1.33). Addressing stigma at multiple levels is necessary to reduce the vulnerability of suicide attempts among transgender adults.

  5. Determinants of performance failure in the nursing home industry.

    PubMed

    Zinn, Jacqueline; Mor, Vincent; Feng, Zhanlian; Intrator, Orna

    2009-03-01

    This study investigates the determinants of performance failure in U.S. nursing homes. The sample consisted of 91,168 surveys from 10,901 facilities included in the Online Survey Certification and Reporting system from 1996 to 2005. Failed performance was defined as termination from the Medicare and Medicaid programs. Determinants of performance failure were identified as core structural change (ownership change), peripheral change (related diversification), prior financial and quality of care performance, size and environmental shock (Medicaid case mix reimbursement and prospective payment system introduction). Additional control variables that could contribute to the likelihood of performance failure were included in a cross-sectional time series generalized estimating equation logistic regression model. Our results support the contention, derived from structural inertia theory, that where in an organization's structure change occurs determines whether it is adaptive or disruptive. In addition, while poor prior financial and quality performance and the introduction of case mix reimbursement increases the risk of failure, larger size is protective, decreasing the likelihood of performance failure.

  6. Project FIT: A School, Community and Social Marketing Intervention Improves Healthy Eating Among Low-Income Elementary School Children.

    PubMed

    Alaimo, Katherine; Carlson, Joseph J; Pfeiffer, Karin A; Eisenmann, Joey C; Paek, Hye-Jin; Betz, Heather H; Thompson, Tracy; Wen, Yalu; Norman, Gregory J

    2015-08-01

    Project FIT was a two-year multi-component nutrition and physical activity intervention delivered in ethnically-diverse low-income elementary schools in Grand Rapids, MI. This paper reports effects on children's nutrition outcomes and process evaluation of the school component. A quasi-experimental design was utilized. 3rd, 4th and 5th-grade students (Yr 1 baseline: N = 410; Yr 2 baseline: N = 405; age range: 7.5-12.6 years) were measured in the fall and spring over the two-year intervention. Ordinal logistic, mixed effect models and generalized estimating equations were fitted, and the robust standard errors were utilized. Primary outcomes favoring the intervention students were found regarding consumption of fruits, vegetables and whole grain bread during year 2. Process evaluation revealed that implementation of most intervention components increased during year 2. Project FIT resulted in small but beneficial effects on consumption of fruits, vegetables, and whole grain bread in ethnically diverse low-income elementary school children.

  7. Alcohol Expectancies and Inhibition Conflict as Moderators of the Alcohol-Unprotected Sex Relationship: Event-Level Findings from a Daily Diary Study Among Individuals Living with HIV in Cape Town, South Africa.

    PubMed

    Kiene, Susan M; Simbayi, Leickness C; Abrams, Amber; Cloete, Allanise

    2016-01-01

    Literature from sub-Saharan Africa and elsewhere supports a global association between alcohol and HIV risk. However, more rigorous studies using multiple event-level methods find mixed support for this association, suggesting the importance of examining potential moderators of this relationship. The present study explores the assumptions of alcohol expectancy theory and alcohol myopia theory as possible moderators that help elucidate the circumstances under which alcohol may affect individuals' ability to use a condom. Participants were 82 individuals (58 women, 24 men) living with HIV who completed daily phone interviews for 42 days which assessed daily sexual behavior and alcohol consumption. Logistic generalized estimating equation models were used to examine the potential moderating effects of inhibition conflict and sex-related alcohol outcome expectancies. The data provided some support for both theories and in some cases the moderation effects were stronger when both partners consumed alcohol.

  8. Applied Statistics: From Bivariate through Multivariate Techniques [with CD-ROM]

    ERIC Educational Resources Information Center

    Warner, Rebecca M.

    2007-01-01

    This book provides a clear introduction to widely used topics in bivariate and multivariate statistics, including multiple regression, discriminant analysis, MANOVA, factor analysis, and binary logistic regression. The approach is applied and does not require formal mathematics; equations are accompanied by verbal explanations. Students are asked…

  9. Peak groundwater depletion in the High Plains Aquifer, projections from 1930 to 2110

    USDA-ARS?s Scientific Manuscript database

    Peak groundwater depletion occurs when an aquifer is overtapped beyond its recharge rate: the depletion rate increases until a peak is reached and then declines as pumping equilibrates toward available recharge. The logistic equation of Hubbert’s study of peak oil is used to project measurement...
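
    The Hubbert-style logistic projection described here can be illustrated with a short sketch: cumulative depletion follows a logistic curve, and its derivative (the depletion rate) peaks when cumulative depletion reaches half of the ultimately depleted volume. The parameter values below are illustrative, not the study's estimates.

      # Logistic (Hubbert-type) depletion curve and its peak rate.
      import numpy as np

      def cumulative_depletion(t, q_max, k, t_peak):
          return q_max / (1.0 + np.exp(-k * (t - t_peak)))

      def depletion_rate(t, q_max, k, t_peak):
          q = cumulative_depletion(t, q_max, k, t_peak)
          return k * q * (1.0 - q / q_max)          # the logistic differential equation

      years = np.arange(1930, 2111)
      rate = depletion_rate(years, q_max=100.0, k=0.05, t_peak=2010.0)
      print(years[np.argmax(rate)])                  # the rate peaks at t_peak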

  10. Correlation of cystatin C and creatinine based estimates of renal function in children with hydronephrosis.

    PubMed

    Momtaz, Hossein-Emad; Dehghan, Arash; Karimian, Mohammad

    2016-01-01

    A simple and accurate method for estimating glomerular filtration rate (GFR), allowing close assessment of renal function, can be of great clinical importance. This study aimed to determine the association between GFR estimated by an equation that includes only cystatin C (Gentian equation) and GFR estimated by an equation that includes only creatinine (Schwartz equation) among children. A total of 31 children aged from 1 day to 5 years with the final diagnosis of unilateral or bilateral hydronephrosis referred to Besat hospital in Hamadan, between March 2010 and February 2011, were consecutively enrolled. The Schwartz and Gentian equations were employed to determine GFR based on plasma creatinine and cystatin C levels, respectively. The mean GFR based on the Schwartz equation was 70.19 ± 24.86 mL/min/1.73 m², while the mean based on the Gentian method using cystatin C was 86.97 ± 21.57 mL/min/1.73 m². The Pearson correlation coefficient analysis showed a strong direct association between the two GFR values measured by the Schwartz equation based on serum creatinine level and by the Gentian method using cystatin C (r = 0.594, P < 0.001). The linear association between GFR values measured with the two methods was: cystatin C based GFR = 50.8 + 0.515 × Schwartz GFR. The correlation between GFR values measured by using serum creatinine and serum cystatin C measurements remained meaningful even after adjustment for patients' gender and age (r = 0.724, P < 0.001). The equation based on cystatin C level is comparable with the serum creatinine-based equation (Schwartz formula) for estimating GFR in children.
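
    As a worked check of the linear relation reported in this record (cystatin C based GFR = 50.8 + 0.515 × Schwartz GFR), the small function below simply evaluates that fitted line; it is an illustration of the reported regression, not a validated clinical calculator.

      # Evaluate the reported regression line relating the two GFR estimates.
      def gentian_from_schwartz(schwartz_gfr):
          # Both values in mL/min/1.73 m^2, per the abstract's fitted relation.
          return 50.8 + 0.515 * schwartz_gfr

      # At the cohort's mean Schwartz estimate of 70.19, the line returns about 86.9,
      # close to the reported Gentian mean of 86.97 mL/min/1.73 m^2.
      print(gentian_from_schwartz(70.19))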

  11. Generation of a new cystatin C-based estimating equation for glomerular filtration rate by use of 7 assays standardized to the international calibrator.

    PubMed

    Grubb, Anders; Horio, Masaru; Hansson, Lars-Olof; Björk, Jonas; Nyman, Ulf; Flodin, Mats; Larsson, Anders; Bökenkamp, Arend; Yasuda, Yoshinari; Blufpand, Hester; Lindström, Veronica; Zegers, Ingrid; Althaus, Harald; Blirup-Jensen, Søren; Itoh, Yoshi; Sjöström, Per; Nordin, Gunnar; Christensson, Anders; Klima, Horst; Sunde, Kathrin; Hjort-Christensen, Per; Armbruster, David; Ferrero, Carlo

    2014-07-01

    Many different cystatin C-based equations exist for estimating glomerular filtration rate. Major reasons for this are the previous lack of an international cystatin C calibrator and the nonequivalence of results from different cystatin C assays. Use of the recently introduced certified reference material, ERM-DA471/IFCC, and further work to achieve high agreement and equivalence of 7 commercially available cystatin C assays allowed a substantial decrease of the CV of the assays, as defined by their performance in an external quality assessment for clinical laboratory investigations. By use of 2 of these assays and a population of 4690 subjects, with large subpopulations of children and Asian and Caucasian adults, with their GFR determined by either renal or plasma inulin clearance or plasma iohexol clearance, we attempted to produce a virtually assay-independent simple cystatin C-based equation for estimation of GFR. We developed a simple cystatin C-based equation for estimation of GFR comprising only 2 variables, cystatin C concentration and age. No terms for race and sex are required for optimal diagnostic performance. The equation, [Formula: see text] is also biologically oriented, with 1 term for the theoretical renal clearance of small molecules and 1 constant for extrarenal clearance of cystatin C. A virtually assay-independent simple cystatin C-based and biologically oriented equation for estimation of GFR, without terms for sex and race, was produced. © 2014 The American Association for Clinical Chemistry.

  12. Comparison of the prevalence and mortality risk of CKD in Australia using the CKD Epidemiology Collaboration (CKD-EPI) and Modification of Diet in Renal Disease (MDRD) Study GFR estimating equations: the AusDiab (Australian Diabetes, Obesity and Lifestyle) Study.

    PubMed

    White, Sarah L; Polkinghorne, Kevan R; Atkins, Robert C; Chadban, Steven J

    2010-04-01

    The Chronic Kidney Disease Epidemiology Collaboration (CKD-EPI) is more accurate than the Modification of Diet in Renal Disease (MDRD) Study equation. We applied both equations in a cohort representative of the Australian adult population. Population-based cohort study. 11,247 randomly selected noninstitutionalized Australians aged ≥25 years who attended a physical examination during the baseline AusDiab (Australian Diabetes, Obesity and Lifestyle) Study survey. Glomerular filtration rate (GFR) was estimated using the MDRD Study and CKD-EPI equations. Kidney damage was defined as urine albumin-creatinine ratio ≥2.5 mg/mmol in men and ≥3.5 mg/mmol in women or urine protein-creatinine ratio ≥0.20 mg/mg. Chronic kidney disease (CKD) was defined as estimated GFR (eGFR) <60 mL/min/1.73 m² or kidney damage. Participants were classified into 3 mutually exclusive subgroups: CKD according to both equations; CKD according to the MDRD Study equation, but no CKD according to the CKD-EPI equation; and no CKD according to both equations. All-cause mortality was examined in subgroups with and without CKD. Serum creatinine and urinary albumin, protein, and creatinine measured on a random spot morning urine sample. 266 participants identified as having CKD according to the MDRD Study equation were reclassified to no CKD according to the CKD-EPI equation (estimated prevalence, 1.9%; 95% CI, 1.4-2.6). All had an eGFR ≥45 mL/min/1.73 m² using the MDRD Study equation. Reclassified individuals were predominantly women with a favorable cardiovascular risk profile. The proportion of reclassified individuals with a Framingham-predicted 10-year cardiovascular risk ≥30% was 7.2% compared with 7.9% of the group with no CKD according to both equations and 45.3% of individuals retained in stage 3a using both equations. There was no evidence of increased all-cause mortality in the reclassified group (age- and sex-adjusted hazard ratio vs no CKD, 1.01; 95% CI, 0.62-1.97). Using the MDRD Study equation, the prevalence of CKD in the Australian population aged ≥25 years was 13.4% (95% CI, 11.1-16.1). Using the CKD-EPI equation, the prevalence was 11.5% (95% CI, 9.42-14.1). Single measurements of serum creatinine and urinary markers. The lower estimated prevalence of CKD using the CKD-EPI equation is caused by reclassification of low-risk individuals. Copyright 2010 National Kidney Foundation, Inc. Published by Elsevier Inc. All rights reserved.
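
    The reclassification idea can be illustrated by computing both eGFR values for the same person and checking which side of the 60 mL/min/1.73 m² threshold each falls on. The sketch below restates the widely published IDMS-traceable MDRD Study and CKD-EPI 2009 creatinine equations from general knowledge rather than from this article, and the example inputs are illustrative.

      # Compare MDRD Study and CKD-EPI (2009) creatinine-based eGFR for one person.
      def egfr_mdrd(scr_mg_dl, age, female, black):
          egfr = 175.0 * scr_mg_dl ** -1.154 * age ** -0.203
          return egfr * (0.742 if female else 1.0) * (1.212 if black else 1.0)

      def egfr_ckd_epi_2009(scr_mg_dl, age, female, black):
          kappa = 0.7 if female else 0.9
          alpha = -0.329 if female else -0.411
          egfr = (141.0
                  * min(scr_mg_dl / kappa, 1.0) ** alpha
                  * max(scr_mg_dl / kappa, 1.0) ** -1.209
                  * 0.993 ** age)
          return egfr * (1.018 if female else 1.0) * (1.159 if black else 1.0)

      # Illustrative case: a 70-year-old woman with serum creatinine 0.95 mg/dL gives
      # roughly 58 by MDRD (below 60) but roughly 61 by CKD-EPI (at or above 60),
      # i.e. the kind of reclassification the study describes.
      print(egfr_mdrd(0.95, 70, female=True, black=False),
            egfr_ckd_epi_2009(0.95, 70, female=True, black=False))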

  13. The numerical evaluation of maximum-likelihood estimates of the parameters for a mixture of normal distributions from partially identified samples

    NASA Technical Reports Server (NTRS)

    Walker, H. F.

    1976-01-01

    Likelihood equations determined by the two types of samples which are necessary conditions for a maximum-likelihood estimate were considered. These equations suggest certain successive approximations iterative procedures for obtaining maximum likelihood estimates. The procedures, which are generalized steepest ascent (deflected gradient) procedures, contain those of Hosmer as a special case.

  14. Flexible Approaches to Computing Mediated Effects in Generalized Linear Models: Generalized Estimating Equations and Bootstrapping

    ERIC Educational Resources Information Center

    Schluchter, Mark D.

    2008-01-01

    In behavioral research, interest is often in examining the degree to which the effect of an independent variable X on an outcome Y is mediated by an intermediary or mediator variable M. This article illustrates how generalized estimating equations (GEE) modeling can be used to estimate the indirect or mediated effect, defined as the amount by…

  15. A Calibration to Predict the Concentrations of Impurities in Plutonium Oxide by Prompt Gamma Analysis Revision 2

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Narlesky, Joshua Edward; Kelly, Elizabeth J.

    2015-09-10

    This report documents the new PG calibration regression equations. These calibration equations incorporate new data that have become available since revision 1 of “A Calibration to Predict the Concentrations of Impurities in Plutonium Oxide by Prompt Gamma Analysis” was issued [3]. The calibration equations are based on a weighted least squares (WLS) approach for the regression. The WLS method gives each data point its proper amount of influence over the parameter estimates. This gives two big advantages: more precise parameter estimates and better and more defensible estimates of uncertainties. The WLS approach makes sense both statistically and experimentally because the variances increase with concentration, and there are physical reasons that the higher measurements are less reliable and should be less influential. The new magnesium calibration includes a correction for sodium and separate calibration equations for items with and without chlorine. These additional calibration equations allow for better predictions and smaller uncertainties for sodium in materials with and without chlorine. Chlorine and sodium have separate equations for RICH materials. Again, these equations give better predictions and smaller uncertainties for chlorine and sodium in RICH materials.
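
    The weighted least squares idea described above can be shown in a few lines: observations with larger variance (here, higher concentrations) receive proportionally less weight, which changes both the fitted coefficients and their standard errors relative to ordinary least squares. The data and variance model below are illustrative, not the report's calibration data.

      # WLS vs OLS when the measurement variance grows with concentration.
      import numpy as np
      import statsmodels.api as sm

      rng = np.random.default_rng(0)
      conc = np.linspace(0.1, 10.0, 40)                      # "true" impurity level
      signal = 2.0 + 1.5 * conc + rng.normal(0, 0.1 * conc)  # noise scales with level

      X = sm.add_constant(conc)
      ols = sm.OLS(signal, X).fit()
      wls = sm.WLS(signal, X, weights=1.0 / conc**2).fit()   # weight = 1 / variance
      print(ols.params, wls.params)
      print(ols.bse, wls.bse)    # WLS standard errors reflect the variance model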

  16. Asymptotic proportionality (weak ergodicity) and conditional asymptotic equality of solutions to time-heterogeneous sublinear difference and differential equations

    NASA Astrophysics Data System (ADS)

    Thieme, Horst R.

    The concept of asymptotic proportionality and conditional asymptotic equality which is presented here aims at making global asymptotic stability statements for time-heterogeneous difference and differential equations. For such non-autonomous problems (apart from special cases) no prominent special solutions (equilibria, periodic solutions) exist which are natural candidates for the asymptotic behaviour of arbitrary solutions. One way out of this dilemma consists in looking for conditions under which any two solutions to the problem (with different initial conditions) behave in a similar or even the same way as time tends to infinity. We study a general sublinear difference equation in an ordered Banach space and, for illustration, time-heterogeneous versions of several well-known differential equations modelling the spread of gonorrhea in a heterogeneous population, the spread of a vector-borne infectious disease, and the dynamics of a logistically growing spatially diffusing population.

  17. The allometric relationship between resting metabolic rate and body mass in wild waterfowl (Anatidae) and an application to estimation of winter habitat requirements

    USGS Publications Warehouse

    Miller, M.R.; Eadie, J. McA

    2006-01-01

    We examined the allometric relationship between resting metabolic rate (RMR; kJ day^-1) and body mass (kg) in wild waterfowl (Anatidae) by regressing RMR on body mass using species means from data obtained from published literature (18 sources, 54 measurements, 24 species; all data from captive birds). There was no significant difference among measurements from the rest (night; n = 37), active (day; n = 14), and unspecified (n = 3) phases of the daily cycle (P > 0.10), and we pooled these measurements for analysis. The resulting power function (a × Mass^b) for all waterfowl (swans, geese, and ducks) had an exponent (b; slope of the regression) of 0.74, indistinguishable from that determined with commonly used general equations for nonpasserine birds (0.72-0.73). In contrast, the mass proportionality coefficient (a; y-intercept at mass = 1 kg) of 422 exceeded that obtained from the nonpasserine equations by 29%-37%. Analyses using independent contrasts correcting for phylogeny did not substantially alter the equation. Our results suggest the waterfowl equation provides a more appropriate estimate of RMR for bioenergetics analyses of waterfowl than do the general nonpasserine equations. When adjusted with a multiple to account for energy costs of free living, the waterfowl equation better estimates daily energy expenditure. Using this equation, we estimated that the extent of wetland habitat required to support wintering waterfowl populations could be 37%-50% higher than previously predicted using general nonpasserine equations. © The Cooper Ornithological Society 2006.
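
    A short worked example of the reported waterfowl equation, RMR (kJ/day) = 422 × Mass(kg)^0.74, follows. The comparison values are back-derived only from the abstract's statement that the coefficient of 422 exceeds the general nonpasserine coefficients by 29%-37%; no external equation is quoted, and the 1.2 kg body mass is an illustrative input.

      # Waterfowl allometric RMR and the implied range of nonpasserine estimates.
      mass_kg = 1.2                                   # e.g. a mid-sized duck
      rmr_waterfowl = 422 * mass_kg ** 0.74           # about 483 kJ/day
      rmr_nonpasserine_low = rmr_waterfowl / 1.37     # if the waterfowl value is 37% higher
      rmr_nonpasserine_high = rmr_waterfowl / 1.29    # if it is 29% higher
      print(round(rmr_waterfowl), round(rmr_nonpasserine_low), round(rmr_nonpasserine_high))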

  18. Validity of bioelectrical impedance measurement in predicting fat-free mass of Chinese children and adolescents.

    PubMed

    Wang, Lin; Hui, Stanley Sai-chuen; Wong, Stephen Heung-sang

    2014-11-15

    The current study aimed to examine the validity of various published bioelectrical impedance analysis (BIA) equations in estimating fat-free mass (FFM) among Chinese children and adolescents and to develop BIA equations for the estimation of FFM appropriate for Chinese children and adolescents. A total of 255 healthy Chinese children and adolescents aged 9 to 19 years old (127 males and 128 females) from Tianjin, China, participated in the BIA measurement at 50 kHz between the hand and the foot. The criterion measure of FFM was also employed using dual-energy X-ray absorptiometry (DEXA). FFM estimated from 24 published BIA equations was cross-validated against the criterion measure from DEXA. Multiple linear regression was conducted to examine an alternative BIA equation for the studied population. FFM estimated from the 24 published BIA equations yielded high correlations with the directly measured FFM from DEXA. However, none of the 24 equations was statistically equivalent with the DEXA-measured FFM. Using multiple linear regression and cross-validation against DEXA measurement, an alternative prediction equation was determined as follows: FFM (kg) = 1.613 + 0.742 × height (cm)²/impedance (Ω) + 0.151 × body weight (kg); R² = 0.95; SEE = 2.45 kg; CV = 6.5; 93.7% of the residuals of all the participants fell within the 95% limits of agreement. BIA was highly correlated with FFM in Chinese children and adolescents. When the newly developed BIA equation is applied, BIA can provide a practical and valid measurement of body composition in Chinese children and adolescents.
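
    A worked example of the prediction equation reported in this record, FFM (kg) = 1.613 + 0.742 × height(cm)²/impedance(Ω) + 0.151 × weight(kg), is given below. The input values are illustrative, not taken from the study.

      # Evaluate the reported BIA fat-free mass prediction equation.
      def ffm_bia(height_cm, impedance_ohm, weight_kg):
          return 1.613 + 0.742 * height_cm ** 2 / impedance_ohm + 0.151 * weight_kg

      # Illustrative inputs: a 150 cm, 40 kg adolescent with whole-body impedance 600 ohm
      # gives roughly 35.5 kg of fat-free mass.
      print(round(ffm_bia(150, 600, 40), 1))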

  19. Validity of Bioelectrical Impedance Measurement in Predicting Fat-Free Mass of Chinese Children and Adolescents

    PubMed Central

    Wang, Lin; Hui, Stanley Sai-chuen; Wong, Stephen Heung-sang

    2014-01-01

    Background The current study aimed to examine the validity of various published bioelectrical impedance analysis (BIA) equations in estimating fat-free mass (FFM) among Chinese children and adolescents and to develop BIA equations for the estimation of FFM appropriate for Chinese children and adolescents. Material/Methods A total of 255 healthy Chinese children and adolescents aged 9 to 19 years old (127 males and 128 females) from Tianjin, China, participated in the BIA measurement at 50 kHz between the hand and the foot. The criterion measure of FFM was also employed using dual-energy X-ray absorptiometry (DEXA). FFM estimated from 24 published BIA equations was cross-validated against the criterion measure from DEXA. Multiple linear regression was conducted to examine an alternative BIA equation for the studied population. Results FFM estimated from the 24 published BIA equations yielded high correlations with the directly measured FFM from DEXA. However, none of the 24 equations was statistically equivalent with the DEXA-measured FFM. Using multiple linear regression and cross-validation against DEXA measurement, an alternative prediction equation was determined as follows: FFM (kg) = 1.613 + 0.742 × height (cm)²/impedance (Ω) + 0.151 × body weight (kg); R² = 0.95; SEE = 2.45 kg; CV = 6.5; 93.7% of the residuals of all the participants fell within the 95% limits of agreement. Conclusions BIA was highly correlated with FFM in Chinese children and adolescents. When the newly developed BIA equation is applied, BIA can provide a practical and valid measurement of body composition in Chinese children and adolescents. PMID:25398209

  20. Semi-empirical estimation of organic compound fugacity ratios at environmentally relevant system temperatures.

    PubMed

    van Noort, Paul C M

    2009-06-01

    Fugacity ratios of organic compounds are used to calculate (subcooled) liquid properties, such as solubility or vapour pressure, from solid properties and vice versa. They can be calculated from the entropy of fusion, the melting temperature, and heat capacity data for the solid and the liquid. For many organic compounds, values for the fusion entropy are lacking. Heat capacity data are even scarcer. In the present study, semi-empirical compound class specific equations were derived to estimate fugacity ratios from molecular weight and melting temperature for polycyclic aromatic hydrocarbons and polychlorinated benzenes, biphenyls, dibenzo[p]dioxins and dibenzofurans. These equations estimate fugacity ratios with an average standard error of about 0.05 log units. In addition, for compounds with known fusion entropy values, a general semi-empirical correction equation based on molecular weight and melting temperature was derived for estimation of the contribution of heat capacity differences to the fugacity ratio. This equation estimates the heat capacity contribution correction factor with an average standard error of 0.02 log units for polycyclic aromatic hydrocarbons, polychlorinated benzenes, biphenyls, dibenzo[p]dioxins and dibenzofurans.
