Generalized equations for estimating DXA percent fat of diverse young women and men: The Tiger Study
USDA-ARS's Scientific Manuscript database
Popular generalized equations for estimating percent body fat (BF%) developed with cross-sectional data are biased when applied to racially/ethnically diverse populations. We developed accurate anthropometric models to estimate dual-energy x-ray absorptiometry BF% (DXA-BF%) that can be generalized t...
First-Order System Least Squares for the Stokes Equations, with Application to Linear Elasticity
NASA Technical Reports Server (NTRS)
Cai, Z.; Manteuffel, T. A.; McCormick, S. F.
1996-01-01
Following our earlier work on general second-order scalar equations, here we develop a least-squares functional for the two- and three-dimensional Stokes equations, generalized slightly by allowing a pressure term in the continuity equation. By introducing a velocity flux variable and associated curl and trace equations, we are able to establish ellipticity in an H^1 product norm appropriately weighted by the Reynolds number. This immediately yields optimal discretization error estimates for finite element spaces in this norm and optimal algebraic convergence estimates for multiplicative and additive multigrid methods applied to the resulting discrete systems. Both estimates are uniform in the Reynolds number. Moreover, our pressure-perturbed form of the generalized Stokes equations allows us to develop an analogous result for the Dirichlet problem for linear elasticity with estimates that are uniform in the Lamé constants.
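As a rough LaTeX sketch of the setup described above, the pressure-perturbed generalized Stokes system with the velocity flux variable can be written as follows; the perturbation parameter \varepsilon and the exact form of the least-squares functional are illustrative assumptions rather than the paper's precise formulation:

    \begin{aligned}
    \mathbf{U} - \nabla\mathbf{u} &= 0, \\
    -\nu\,\nabla\cdot\mathbf{U} + \nabla p &= \mathbf{f}, \\
    \nabla\cdot\mathbf{u} + \varepsilon p &= g,
    \end{aligned}
    \qquad
    G(\mathbf{U},\mathbf{u},p) = \|\mathbf{U}-\nabla\mathbf{u}\|_0^2
      + \|{-\nu\,\nabla\cdot\mathbf{U}} + \nabla p - \mathbf{f}\|_0^2
      + \|\nabla\cdot\mathbf{u} + \varepsilon p - g\|_0^2
      + \|\nabla\times\mathbf{U}\|_0^2,

with the auxiliary curl (and trace) equations on \mathbf{U} supplying the extra residual terms that make the functional elliptic in the weighted H^1 product norm.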
Weight estimation techniques for composite airplanes in general aviation industry
NASA Technical Reports Server (NTRS)
Paramasivam, T.; Horn, W. J.; Ritter, J.
1986-01-01
Currently available weight estimation methods for general aviation airplanes were investigated. New equations with explicit material properties were developed for the weight estimation of aircraft components such as wing, fuselage and empennage. Regression analysis was applied to the basic equations for a data base of twelve airplanes to determine the coefficients. The resulting equations can be used to predict the component weights of either metallic or composite airplanes.
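To make the regression step concrete, here is a minimal Python sketch of fitting the coefficients of a power-law component-weight equation to a twelve-airplane database by least squares in log space; the functional form W = c0 * S^c1 * n_ult^c2, the variable names, and all numbers are hypothetical illustrations, not the report's equations or data:

    import numpy as np

    # Hypothetical database of 12 airplanes: wing area S (ft^2), ultimate load factor,
    # and measured wing weight (lb)
    S = np.array([130.0, 150.0, 160.0, 170.0, 180.0, 200.0, 210.0, 230.0, 250.0, 260.0, 280.0, 300.0])
    n_ult = np.array([5.7, 5.7, 6.0, 4.4, 5.7, 4.4, 6.0, 4.4, 5.7, 4.4, 6.0, 4.4])
    W_wing = np.array([180.0, 205.0, 230.0, 215.0, 250.0, 260.0, 300.0, 290.0, 330.0, 320.0, 370.0, 380.0])

    # Assume W = c0 * S**c1 * n_ult**c2; taking logs makes the coefficients linear.
    X = np.column_stack([np.ones_like(S), np.log(S), np.log(n_ult)])
    coef, *_ = np.linalg.lstsq(X, np.log(W_wing), rcond=None)
    c0, c1, c2 = np.exp(coef[0]), coef[1], coef[2]
    print(f"W_wing ~= {c0:.3f} * S^{c1:.3f} * n_ult^{c2:.3f}")

Explicit material properties (for example, an allowable-stress or density ratio) would enter the assumed equation as additional multiplicative terms before the coefficients are refit.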
ERIC Educational Resources Information Center
Schluchter, Mark D.
2008-01-01
In behavioral research, interest is often in examining the degree to which the effect of an independent variable X on an outcome Y is mediated by an intermediary or mediator variable M. This article illustrates how generalized estimating equations (GEE) modeling can be used to estimate the indirect or mediated effect, defined as the amount by…
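A minimal Python sketch of the product-of-coefficients version of this idea using GEE in statsmodels: fit M on X and Y on X and M with a working correlation for the repeated measures, then multiply the two path coefficients. The column names, exchangeable working correlation, and Gaussian family are illustrative assumptions, not the article's specification:

    import statsmodels.api as sm
    import statsmodels.formula.api as smf

    def indirect_effect(df):
        """Product-of-coefficients estimate of the X -> M -> Y mediated effect via GEE."""
        # Path a: effect of X on the mediator M
        fit_a = smf.gee("M ~ X", groups="id", data=df,
                        cov_struct=sm.cov_struct.Exchangeable(),
                        family=sm.families.Gaussian()).fit()
        # Path b: effect of M on Y, adjusting for X
        fit_b = smf.gee("Y ~ X + M", groups="id", data=df,
                        cov_struct=sm.cov_struct.Exchangeable(),
                        family=sm.families.Gaussian()).fit()
        return fit_a.params["X"] * fit_b.params["M"]  # indirect (mediated) effect

A bootstrap over clusters is one common way to attach a confidence interval to the resulting product.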
David C. Chojnacky
2012-01-01
An update of the Jenkins et al. (2003) biomass estimation equations for North American tree species resulted in 35 generalized equations developed from published equations. These 35 equations, which predict aboveground biomass of individual species grouped according to a taxa classification (based on genus or family and sometimes specific gravity), generally predicted...
Dynamical behavior for the three-dimensional generalized Hasegawa-Mima equations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang Ruifeng; Guo Boling; Institute of Applied Physics and Computational Mathematics, P.O. Box 8009, Beijing 100088
2007-01-15
The long-time behavior of solutions of the three-dimensional generalized Hasegawa-Mima [Phys. Fluids 21, 87 (1978)] equations with a dissipation term is considered. The global attractor problem of the three-dimensional generalized Hasegawa-Mima equations with periodic boundary condition was studied. Applying the method of uniform a priori estimates, the existence of a global attractor for this problem was proven, and the dimensions of the global attractor were estimated.
GFR Estimation: From Physiology to Public Health
Levey, Andrew S.; Inker, Lesley A.; Coresh, Josef
2014-01-01
Estimating glomerular filtration rate (GFR) is essential for clinical practice, research, and public health. Appropriate interpretation of estimated GFR (eGFR) requires understanding the principles of physiology, laboratory medicine, epidemiology and biostatistics used in the development and validation of GFR estimating equations. Equations developed in diverse populations are less biased at higher GFR than equations developed in CKD populations and are more appropriate for general use. Equations that include multiple endogenous filtration markers are more precise than equations including a single filtration marker. The Chronic Kidney Disease Epidemiology Collaboration (CKD-EPI) equations are the most accurate GFR estimating equations that have been evaluated in large, diverse populations and are applicable for general clinical use. The 2009 CKD-EPI creatinine equation is more accurate in estimating GFR and prognosis than the 2006 Modification of Diet in Renal Disease (MDRD) Study equation and provides lower estimates of prevalence of decreased eGFR. It is useful as a “first” test for decreased eGFR and should replace the MDRD Study equation for routine reporting of serum creatinine–based eGFR by clinical laboratories. The 2012 CKD-EPI cystatin C equation is as accurate as the 2009 CKD-EPI creatinine equation in estimating eGFR, does not require specification of race, and may be more accurate in patients with decreased muscle mass. The 2012 CKD-EPI creatinine–cystatin C equation is more accurate than the 2009 CKD-EPI creatinine and 2012 CKD-EPI cystatin C equations and is useful as a confirmatory test for decreased eGFR as determined by an equation based on serum creatinine. Further improvement in GFR estimating equations will require development in more broadly representative populations, including diverse racial and ethnic groups, use of multiple filtration markers, and evaluation using statistical techniques to compare eGFR to “true GFR”. PMID:24485147
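For reference, a small Python sketch of the 2009 CKD-EPI creatinine equation in its commonly published form (eGFR in mL/min/1.73 m2, serum creatinine in mg/dL); the constants below should be checked against the original publication before any real use:

    def ckd_epi_2009_creatinine(scr_mg_dl, age_years, female, black):
        """2009 CKD-EPI creatinine equation (commonly published form); returns eGFR in mL/min/1.73 m2."""
        kappa = 0.7 if female else 0.9
        alpha = -0.329 if female else -0.411
        ratio = scr_mg_dl / kappa
        egfr = 141.0 * min(ratio, 1.0) ** alpha * max(ratio, 1.0) ** -1.209 * 0.993 ** age_years
        if female:
            egfr *= 1.018
        if black:
            egfr *= 1.159
        return egfr

    # Example: 60-year-old non-black woman with serum creatinine 1.1 mg/dL
    print(round(ckd_epi_2009_creatinine(1.1, 60, female=True, black=False), 1))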
Scott, JoAnna M; deCamp, Allan; Juraska, Michal; Fay, Michael P; Gilbert, Peter B
2017-04-01
Stepped wedge designs are increasingly commonplace and advantageous for cluster randomized trials when it is both unethical to assign placebo and logistically difficult to allocate an intervention simultaneously to many clusters. We study marginal mean models fit with generalized estimating equations for assessing treatment effectiveness in stepped wedge cluster randomized trials. This approach has advantages over the more commonly used mixed models in that (1) the population-average parameters have an important interpretation for public health applications and (2) they avoid untestable assumptions on latent variable distributions and avoid parametric assumptions about error distributions, thereby providing more robust evidence on treatment effects. However, cluster randomized trials typically have a small number of clusters, rendering the standard generalized estimating equation sandwich variance estimator biased and highly variable and hence yielding incorrect inferences. We study the usual asymptotic generalized estimating equation inferences (i.e., using sandwich variance estimators and asymptotic normality) and four small-sample corrections to generalized estimating equations for stepped wedge cluster randomized trials and for parallel cluster randomized trials as a comparison. We show by simulation that the small-sample corrections provide improvement, with one correction appearing to provide at least nominal coverage even with only 10 clusters per group. These results demonstrate the viability of the marginal mean approach for both stepped wedge and parallel cluster randomized trials. We also study the comparative performance of the corrected methods for stepped wedge and parallel designs, and describe how the methods can accommodate interval censoring of individual failure times and incorporate semiparametric efficient estimators.
ERIC Educational Resources Information Center
Tsai, Tien-Lung; Shau, Wen-Yi; Hu, Fu-Chang
2006-01-01
This article generalizes linear path analysis (PA) and simultaneous equations models (SiEM) to deal with mixed responses of different types in a recursive or triangular system. An efficient instrumental variable (IV) method for estimating the structural coefficients of a 2-equation partially recursive generalized path analysis (GPA) model and…
A General Linear Method for Equating with Small Samples
ERIC Educational Resources Information Center
Albano, Anthony D.
2015-01-01
Research on equating with small samples has shown that methods with stronger assumptions and fewer statistical estimates can lead to decreased error in the estimated equating function. This article introduces a new approach to linear observed-score equating, one which provides flexible control over how form difficulty is assumed versus estimated…
The Estimation of Gestational Age at Birth in Database Studies.
Eberg, Maria; Platt, Robert W; Filion, Kristian B
2017-11-01
Studies on the safety of prenatal medication use require valid estimation of the pregnancy duration. However, gestational age is often incompletely recorded in administrative and clinical databases. Our objective was to compare different approaches to estimating the pregnancy duration. Using data from the Clinical Practice Research Datalink and Hospital Episode Statistics, we examined the following four approaches to estimating missing gestational age: (1) generalized estimating equations for longitudinal data; (2) multiple imputation; (3) estimation based on fetal birth weight and sex; and (4) conventional approaches that assigned a fixed value (39 weeks for all or 39 weeks for full term and 35 weeks for preterm). The gestational age recorded in Hospital Episode Statistics was considered the gold standard. We conducted a simulation study comparing the described approaches in terms of estimated bias and mean square error. A total of 25,929 infants from 22,774 mothers were included in our "gold standard" cohort. The smallest average absolute bias was observed for the generalized estimating equation that included birth weight, while the largest absolute bias occurred when assigning 39-week gestation to all those with missing values. The smallest mean square errors were detected with generalized estimating equations while multiple imputation had the highest mean square errors. The use of generalized estimating equations resulted in the most accurate estimation of missing gestational age when birth weight information was available. In the absence of birth weight, assignment of fixed gestational age based on term/preterm status may be the optimal approach.
UNIFORM ESTIMATES FOR SOLUTIONS OF THE \overline{\partial}-EQUATION IN PSEUDOCONVEX POLYHEDRA
NASA Astrophysics Data System (ADS)
Sergeev, A. G.; Henkin, G. M.
1981-04-01
It is proved that the nonhomogeneous Cauchy-Riemann equation on an analytic submanifold "in general position" in a Cartesian product of strictly convex domains admits a solution with a uniform estimate. The possibility of weakening the requirement of general position in this result is investigated. Bibliography: 46 titles.
Campbell, D A; Chkrebtii, O
2013-12-01
Statistical inference for biochemical models often faces a variety of characteristic challenges. In this paper we examine state and parameter estimation for the JAK-STAT intracellular signalling mechanism, which exemplifies the implementation intricacies common in many biochemical inference problems. We introduce an extension to the Generalized Smoothing approach for estimating delay differential equation models, addressing selection of complexity parameters, choice of the basis system, and appropriate optimization strategies. Motivated by the JAK-STAT system, we further extend the generalized smoothing approach to consider a nonlinear observation process with additional unknown parameters, and highlight how the approach handles unobserved states and unevenly spaced observations. The methodology developed is generally applicable to problems of estimation for differential equation models with delays, unobserved states, nonlinear observation processes, and partially observed histories.
Error Estimates for Approximate Solutions of the Riccati Equation with Real or Complex Potentials
NASA Astrophysics Data System (ADS)
Finster, Felix; Smoller, Joel
2010-09-01
A method is presented for obtaining rigorous error estimates for approximate solutions of the Riccati equation, with real or complex potentials. Our main tool is to derive invariant region estimates for complex solutions of the Riccati equation. We explain the general strategy for applying these estimates and illustrate the method in typical examples, where the approximate solutions are obtained by gluing together WKB and Airy solutions of corresponding one-dimensional Schrödinger equations. Our method is motivated by, and has applications to, the analysis of linear wave equations in the geometry of a rotating black hole.
Olson, Scott A.; with a section by Veilleux, Andrea G.
2014-01-01
This report provides estimates of flood discharges at selected annual exceedance probabilities (AEPs) for streamgages in and adjacent to Vermont and equations for estimating flood discharges at AEPs of 50-, 20-, 10-, 4-, 2-, 1-, 0.5-, and 0.2-percent (recurrence intervals of 2-, 5-, 10-, 25-, 50-, 100-, 200-, and 500-years, respectively) for ungaged, unregulated, rural streams in Vermont. The equations were developed using generalized least-squares regression. Flood-frequency and drainage-basin characteristics from 145 streamgages were used in developing the equations. The drainage-basin characteristics used as explanatory variables in the regression equations include drainage area, percentage of wetland area, and the basin-wide mean of the average annual precipitation. The average standard errors of prediction for estimating the flood discharges at the 50-, 20-, 10-, 4-, 2-, 1-, 0.5-, and 0.2-percent AEP with these equations are 34.9, 36.0, 38.7, 42.4, 44.9, 47.3, 50.7, and 55.1 percent, respectively. Flood discharges at selected AEPs for streamgages were computed by using the Expected Moments Algorithm. To improve estimates of the flood discharges for given exceedance probabilities at streamgages in Vermont, a new generalized skew coefficient was developed. The new generalized skew for the region is a constant, 0.44. The mean square error of the generalized skew coefficient is 0.078. This report describes a technique for using results from the regression equations to adjust an AEP discharge computed from a streamgage record. This report also describes a technique for using a drainage-area adjustment to estimate flood discharge at a selected AEP for an ungaged site upstream or downstream from a streamgage. The final regression equations and the flood-discharge frequency data used in this study will be available in StreamStats. StreamStats is a World Wide Web application providing automated regression-equation solutions for user-selected sites on streams.
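The regional equations described above are typically of log-linear form; a generic LaTeX sketch (coefficients b_0 through b_3 are placeholders, not the report's fitted values) is

    \log_{10} Q_p = b_0 + b_1 \log_{10}(A) + b_2\, W + b_3\, P,

where Q_p is the peak discharge at annual exceedance probability p, A is drainage area, W is percentage of wetland area, and P is the basin-wide mean annual precipitation, with a separate set of coefficients fitted by generalized least-squares regression for each of the eight AEPs.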
ERIC Educational Resources Information Center
Bollen, Kenneth A.; Maydeu-Olivares, Albert
2007-01-01
This paper presents a new polychoric instrumental variable (PIV) estimator to use in structural equation models (SEMs) with categorical observed variables. The PIV estimator is a generalization of Bollen's (Psychometrika 61:109-121, 1996) 2SLS/IV estimator for continuous variables to categorical endogenous variables. We derive the PIV estimator…
ERIC Educational Resources Information Center
Zu, Jiyun; Yuan, Ke-Hai
2012-01-01
In the nonequivalent groups with anchor test (NEAT) design, the standard error of linear observed-score equating is commonly estimated by an estimator derived assuming multivariate normality. However, real data are seldom normally distributed, causing this normal estimator to be inconsistent. A general estimator, which does not rely on the…
Regularity estimates up to the boundary for elliptic systems of difference equations
NASA Technical Reports Server (NTRS)
Strikwerda, J. C.; Wade, B. A.; Bube, K. P.
1986-01-01
Regularity estimates up to the boundary for solutions of elliptic systems of finite difference equations were proved. The regularity estimates, obtained for boundary fitted coordinate systems on domains with smooth boundary, involve discrete Sobolev norms and are proved using pseudo-difference operators to treat systems with variable coefficients. The elliptic systems of difference equations and the boundary conditions which are considered are very general in form. The regularity of a regular elliptic system of difference equations was proved equivalent to the nonexistence of eigensolutions. The regularity estimates obtained are analogous to those in the theory of elliptic systems of partial differential equations, and to the results of Gustafsson, Kreiss, and Sundstrom (1972) and others for hyperbolic difference equations.
Leion, Felicia; Hegbrant, Josefine; den Bakker, Emil; Jonsson, Magnus; Abrahamson, Magnus; Nyman, Ulf; Björk, Jonas; Lindström, Veronica; Larsson, Anders; Bökenkamp, Arend; Grubb, Anders
2017-09-01
Estimating glomerular filtration rate (GFR) in adults by using the average of values obtained by a cystatin C-based (eGFR(cystatin C)) and a creatinine-based (eGFR(creatinine)) equation shows at least the same diagnostic performance as GFR estimates obtained by equations using only one of these analytes or by complex equations using both analytes. Comparison of eGFR(cystatin C) and eGFR(creatinine) plays a pivotal role in the diagnosis of Shrunken Pore Syndrome, where low eGFR(cystatin C) compared to eGFR(creatinine) has been associated with higher mortality in adults. The present study was undertaken to elucidate if this concept can also be applied in children. Using iohexol and inulin clearance as the gold standard in 702 children, we studied the diagnostic performance of 10 creatinine-based, 5 cystatin C-based and 3 combined cystatin C-creatinine eGFR equations and compared them to the result of the average of 9 pairs of an eGFR(cystatin C) and an eGFR(creatinine) estimate. While creatinine-based GFR estimations are unsuitable in children unless calibrated in a pediatric or mixed pediatric-adult population, cystatin C-based estimations in general performed well in children. The average of a suitable creatinine-based and a cystatin C-based equation generally displayed a better diagnostic performance than estimates obtained by equations using only one of these analytes or by complex equations using both analytes. Comparing eGFR(cystatin C) and eGFR(creatinine) may help identify pediatric patients with Shrunken Pore Syndrome.
Miller, M.R.; Eadie, J. McA
2006-01-01
We examined the allometric relationship between resting metabolic rate (RMR; kJ day^-1) and body mass (kg) in wild waterfowl (Anatidae) by regressing RMR on body mass using species means from data obtained from published literature (18 sources, 54 measurements, 24 species; all data from captive birds). There was no significant difference among measurements from the rest (night; n = 37), active (day; n = 14), and unspecified (n = 3) phases of the daily cycle (P > 0.10), and we pooled these measurements for analysis. The resulting power function (a·Mass^b) for all waterfowl (swans, geese, and ducks) had an exponent (b; slope of the regression) of 0.74, indistinguishable from that determined with commonly used general equations for nonpasserine birds (0.72-0.73). In contrast, the mass proportionality coefficient (a; y-intercept at mass = 1 kg) of 422 exceeded that obtained from the nonpasserine equations by 29%-37%. Analyses using independent contrasts correcting for phylogeny did not substantially alter the equation. Our results suggest the waterfowl equation provides a more appropriate estimate of RMR for bioenergetics analyses of waterfowl than do the general nonpasserine equations. When adjusted with a multiple to account for energy costs of free living, the waterfowl equation better estimates daily energy expenditure. Using this equation, we estimated that the extent of wetland habitat required to support wintering waterfowl populations could be 37%-50% higher than previously predicted using general nonpasserine equations.
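Written out with the fitted values reported above (a = 422, b = 0.74), the waterfowl allometric relationship is, in LaTeX form,

    \mathrm{RMR} \approx 422\, M^{0.74}\ \text{kJ day}^{-1}, \qquad M = \text{body mass in kg},

and daily energy expenditure is then approximated by multiplying RMR by a factor for the costs of free living.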
Search algorithm complexity modeling with application to image alignment and matching
NASA Astrophysics Data System (ADS)
DelMarco, Stephen
2014-05-01
Search algorithm complexity modeling, in the form of penetration rate estimation, provides a useful way to estimate search efficiency in application domains which involve searching over a hypothesis space of reference templates or models, as in model-based object recognition, automatic target recognition, and biometric recognition. The penetration rate quantifies the expected portion of the database that must be searched, and is useful for estimating search algorithm computational requirements. In this paper we perform mathematical modeling to derive general equations for penetration rate estimates that are applicable to a wide range of recognition problems. We extend previous penetration rate analyses to use more general probabilistic modeling assumptions. In particular we provide penetration rate equations within the framework of a model-based image alignment application domain in which a prioritized hierarchical grid search is used to rank subspace bins based on matching probability. We derive general equations, and provide special cases based on simplifying assumptions. We show how previously-derived penetration rate equations are special cases of the general formulation. We apply the analysis to model-based logo image alignment in which a hierarchical grid search is used over a geometric misalignment transform hypothesis space. We present numerical results validating the modeling assumptions and derived formulation.
NASA Astrophysics Data System (ADS)
Yu, Jie; Liu, Yikan; Yamamoto, Masahiro
2018-04-01
In this article, we investigate the determination of the spatial component in the time-dependent second order coefficient of a hyperbolic equation from both theoretical and numerical aspects. By the Carleman estimates for general hyperbolic operators and an auxiliary Carleman estimate, we establish local Hölder stability with either partial boundary or interior measurements under certain geometrical conditions. For numerical reconstruction, we minimize a Tikhonov functional which penalizes the gradient of the unknown function. Based on the resulting variational equation, we design an iteration method which is updated by solving a Poisson equation at each step. One-dimensional prototype examples illustrate the numerical performance of the proposed iteration.
Noumegni, Steve Raoul; Ama, Vicky Jocelyne Moor; Assah, Felix K; Bigna, Jean Joel; Nansseu, Jobert Richie; Kameni, Jenny Arielle M; Katte, Jean-Claude; Dehayem, Mesmin Y; Kengne, Andre Pascal; Sobngwi, Eugene
2017-01-01
Absolute cardiovascular disease (CVD) risk evaluation using multivariable CVD risk models is increasingly advocated in people with HIV, in whom existing models remain largely untested. We assessed the agreement between the general population derived Framingham CVD risk equation and the HIV-specific Data collection on Adverse effects of anti-HIV Drugs (DAD) CVD risk equation in HIV-infected adult Cameroonians. This cross-sectional study involved 452 HIV infected adults recruited at the HIV day-care unit of the Yaoundé Central Hospital, Cameroon. The 5-year projected CVD risk was estimated for each participant using the DAD and Framingham CVD risk equations. Agreement between estimates from these equations was assessed using the Spearman correlation and Cohen's kappa coefficient. The mean age of participants (80% females) was 44.4 ± 9.8 years. Most participants (88.5%) were on antiretroviral treatment with 93.3% of them receiving first-line regimen. The most frequent cardiovascular risk factors were abdominal obesity (43.1%) and dyslipidemia (33.8%). The median estimated 5-year CVD risk was 0.6% (25th-75th percentiles: 0.3-1.3) using the DAD equation and 0.7% (0.2-2.0) with the Framingham equation. The Spearman correlation between the two estimates was 0.93 (p < 0.001). The kappa statistic was 0.61 (95% confidence interval: 0.54-0.67) for the agreement between the two equations in classifying participants across risk categories defined as low, moderate, high and very high. Most participants had a low-to-moderate estimated CVD risk, with acceptable level of agreement between the general and HIV-specific equations in ranking CVD risk.
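A small Python sketch of the two agreement statistics used above, assuming two arrays of estimated 5-year risks and a common set of category cut-points; the numbers and cut-points are illustrative only, not the study's data or thresholds:

    import numpy as np
    from scipy.stats import spearmanr
    from sklearn.metrics import cohen_kappa_score

    dad = np.array([0.4, 0.6, 1.1, 2.5, 0.3, 5.2, 0.8, 1.9])   # 5-year risk (%) from the DAD equation
    fram = np.array([0.5, 0.7, 1.4, 3.1, 0.2, 6.0, 0.9, 2.4])  # 5-year risk (%) from Framingham

    rho, p_value = spearmanr(dad, fram)

    # Classify into low/moderate/high/very high risk; cut-points here are purely illustrative.
    cuts = [1.0, 5.0, 10.0]
    kappa = cohen_kappa_score(np.digitize(dad, cuts), np.digitize(fram, cuts))

    print(f"Spearman rho = {rho:.2f} (p = {p_value:.3f}), kappa = {kappa:.2f}")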
Generalized approach to cooling charge-coupled devices using thermoelectric coolers
NASA Technical Reports Server (NTRS)
Petrick, S. Walter
1987-01-01
This paper is concerned with the use of thermoelectric coolers (TECs) to cool charge-coupled devices (CCDs). Heat inputs to the CCD from the warmer environment are identified, and generalized graphs are used to approximate the major heat inputs. A method of choosing and estimating the power consumption of the TEC is discussed. This method includes the use of TEC performance information supplied by the manufacturer and equations derived from this information. Parameters of the equations are tabulated to enable the reader to use the TEC performance equations for choosing and estimating the power needed for specific TEC applications.
Generalized Appended Product Indicator Procedure for Nonlinear Structural Equation Analysis.
ERIC Educational Resources Information Center
Wall, Melanie M.; Amemiya, Yasuo
2001-01-01
Considers the estimation of polynomial structural models and shows a limitation of an existing method. Introduces a new procedure, the generalized appended product indicator procedure, for nonlinear structural equation analysis. Addresses statistical issues associated with the procedure through simulation. (SLD)
Kato Smoothing and Strichartz Estimates for Wave Equations with Magnetic Potentials
NASA Astrophysics Data System (ADS)
D'Ancona, Piero
2015-04-01
Let H be a selfadjoint operator and A a closed operator on a Hilbert space. If A is H-(super)smooth in the sense of Kato-Yajima, we prove that is -(super)smooth. This allows us to include wave and Klein-Gordon equations in the abstract theory at the same level of generality as Schrödinger equations. We give a few applications and in particular, based on the resolvent estimates of Erdogan, Goldberg and Schlag (Forum Mathematicum 21:687-722, 2009), we prove Strichartz estimates for wave equations perturbed with large magnetic potentials on ℝ^n, n ≥ 3.
Maximum Likelihood Estimation of Nonlinear Structural Equation Models.
ERIC Educational Resources Information Center
Lee, Sik-Yum; Zhu, Hong-Tu
2002-01-01
Developed an EM type algorithm for maximum likelihood estimation of a general nonlinear structural equation model in which the E-step is completed by a Metropolis-Hastings algorithm. Illustrated the methodology with results from a simulation study and two real examples using data from previous studies. (SLD)
Methods for estimating flood frequency in Montana based on data through water year 1998
Parrett, Charles; Johnson, Dave R.
2004-01-01
Annual peak discharges having recurrence intervals of 2, 5, 10, 25, 50, 100, 200, and 500 years (T-year floods) were determined for 660 gaged sites in Montana and in adjacent areas of Idaho, Wyoming, and Canada, based on data through water year 1998. The updated flood-frequency information was subsequently used in regression analyses, either ordinary or generalized least squares, to develop equations relating T-year floods to various basin and climatic characteristics, equations relating T-year floods to active-channel width, and equations relating T-year floods to bankfull width. The equations can be used to estimate flood frequency at ungaged sites. Montana was divided into eight regions, within which flood characteristics were considered to be reasonably homogeneous, and the three sets of regression equations were developed for each region. A measure of the overall reliability of the regression equations is the average standard error of prediction. The average standard errors of prediction for the equations based on basin and climatic characteristics ranged from 37.4 percent to 134.1 percent. Average standard errors of prediction for the equations based on active-channel width ranged from 57.2 percent to 141.3 percent. Average standard errors of prediction for the equations based on bankfull width ranged from 63.1 percent to 155.5 percent. In most regions, the equations based on basin and climatic characteristics generally had smaller average standard errors of prediction than equations based on active-channel or bankfull width. An exception was the Southeast Plains Region, where all equations based on active-channel width had smaller average standard errors of prediction than equations based on basin and climatic characteristics or bankfull width. Methods for weighting estimates derived from the basin- and climatic-characteristic equations and the channel-width equations also were developed. The weights were based on the cross correlation of residuals from the different methods and the average standard errors of prediction. When all three methods were combined, the average standard errors of prediction ranged from 37.4 percent to 120.2 percent. Weighting of estimates reduced the standard errors of prediction for all T-year flood estimates in four regions, reduced the standard errors of prediction for some T-year flood estimates in two regions, and provided no reduction in average standard error of prediction in two regions. A computer program for solving the regression equations, weighting estimates, and determining reliability of individual estimates was developed and placed on the USGS Montana District World Wide Web page. A new regression method, termed Region of Influence regression, also was tested. Test results indicated that the Region of Influence method was not as reliable as the regional equations based on generalized least squares regression. Two additional methods for estimating flood frequency at ungaged sites located on the same streams as gaged sites also are described. The first method, based on a drainage-area-ratio adjustment, is intended for use on streams where the ungaged site of interest is located near a gaged site. The second method, based on interpolation between gaged sites, is intended for use on streams that have two or more streamflow-gaging stations.
Generalized Ordinary Differential Equation Models
Miao, Hongyu; Wu, Hulin; Xue, Hongqi
2014-01-01
Existing estimation methods for ordinary differential equation (ODE) models are not applicable to discrete data. The generalized ODE (GODE) model is therefore proposed and investigated for the first time. We develop the likelihood-based parameter estimation and inference methods for GODE models. We propose robust computing algorithms and rigorously investigate the asymptotic properties of the proposed estimator by considering both measurement errors and numerical errors in solving ODEs. The simulation study and application of our methods to an influenza viral dynamics study suggest that the proposed methods have a superior performance in terms of accuracy over the existing ODE model estimation approach and the extended smoothing-based (ESB) method. PMID:25544787
NASA Technical Reports Server (NTRS)
Walker, H. F.
1976-01-01
Likelihood equations determined by the two types of samples, which are necessary conditions for a maximum-likelihood estimate, were considered. These equations suggest certain successive-approximation iterative procedures for obtaining maximum likelihood estimates. The procedures, which are generalized steepest ascent (deflected gradient) procedures, contain those of Hosmer as a special case.
Alexander, Terry W.; Wilson, Gary L.
1995-01-01
A generalized least-squares regression technique was used to relate the 2- to 500-year flood discharges from 278 selected streamflow-gaging stations to statistically significant basin characteristics. The regression relations (estimating equations) were defined for three hydrologic regions (I, II, and III) in rural Missouri. Ordinary least-squares regression analyses indicate that drainage area (Regions I, II, and III) and main-channel slope (Regions I and II) are the only basin characteristics needed for computing the 2- to 500-year design-flood discharges at gaged or ungaged stream locations. The resulting generalized least-squares regression equations provide a technique for estimating the 2-, 5-, 10-, 25-, 50-, 100-, and 500-year flood discharges on unregulated streams in rural Missouri. The regression equations for Regions I and II were developed from streamflow-gaging stations with drainage areas ranging from 0.13 to 11,500 square miles and 0.13 to 14,000 square miles, and main-channel slopes ranging from 1.35 to 150 feet per mile and 1.20 to 279 feet per mile. The regression equations for Region III were developed from streamflow-gaging stations with drainage areas ranging from 0.48 to 1,040 square miles. Standard errors of estimate for the generalized least-squares regression equations in Regions I, II, and III ranged from 30 to 49 percent.
An Estimating Equations Approach for the LISCOMP Model.
ERIC Educational Resources Information Center
Reboussin, Beth A.; Liang, Kung-Lee
1998-01-01
A quadratic estimating equations approach for the LISCOMP model is proposed that only requires specification of the first two moments. This method is compared with a three-stage generalized least squares approach through a numerical study and application to a study of life events and neurotic illness. (SLD)
Commentary: Are Three Waves of Data Sufficient for Assessing Mediation?
ERIC Educational Resources Information Center
Reichardt, Charles S.
2011-01-01
Maxwell, Cole, and Mitchell (2011) demonstrated that simple structural equation models, when used with cross-sectional data, generally produce biased estimates of mediated effects. I extend those results by showing how simple structural equation models can produce biased estimates of mediated effects when used even with longitudinal data. Even…
Effects of Employing Ridge Regression in Structural Equation Models.
ERIC Educational Resources Information Center
McQuitty, Shaun
1997-01-01
LISREL 8 invokes a ridge option when maximum likelihood or generalized least squares are used to estimate a structural equation model with a nonpositive definite covariance or correlation matrix. Implications of the ridge option for model fit, parameter estimates, and standard errors are explored through two examples. (SLD)
An Estimation Theory for Differential Equations and other Problems, with Applications.
1981-11-01
Final technical report by Johann Schröder, November 1981. Topics include differential operators and M-operators, in particular the Perron-Frobenius theory and generalizations, and convergence theory for iterative methods.
Matsushita, Kunihiro; Mahmoodi, Bakhtawar K; Woodward, Mark; Emberson, Jonathan R; Jafar, Tazeen H; Jee, Sun Ha; Polkinghorne, Kevan R; Shankar, Anoop; Smith, David H; Tonelli, Marcello; Warnock, David G; Wen, Chi-Pang; Coresh, Josef; Gansevoort, Ron T; Hemmelgarn, Brenda R; Levey, Andrew S
2012-05-09
The Chronic Kidney Disease Epidemiology Collaboration (CKD-EPI) equation more accurately estimates glomerular filtration rate (GFR) than the Modification of Diet in Renal Disease (MDRD) Study equation using the same variables, especially at higher GFR, but definitive evidence of its risk implications in diverse settings is lacking. To evaluate risk implications of estimated GFR using the CKD-EPI equation compared with the MDRD Study equation in populations with a broad range of demographic and clinical characteristics. A meta-analysis of data from 1.1 million adults (aged ≥ 18 years) from 25 general population cohorts, 7 high-risk cohorts (of vascular disease), and 13 CKD cohorts. Data transfer and analyses were conducted between March 2011 and March 2012. All-cause mortality (84,482 deaths from 40 cohorts), cardiovascular mortality (22,176 events from 28 cohorts), and end-stage renal disease (ESRD) (7644 events from 21 cohorts) during 9.4 million person-years of follow-up; the median of mean follow-up time across cohorts was 7.4 years (interquartile range, 4.2-10.5 years). Estimated GFR was classified into 6 categories (≥90, 60-89, 45-59, 30-44, 15-29, and <15 mL/min/1.73 m(2)) by both equations. Compared with the MDRD Study equation, 24.4% and 0.6% of participants from general population cohorts were reclassified to a higher and lower estimated GFR category, respectively, by the CKD-EPI equation, and the prevalence of CKD stages 3 to 5 (estimated GFR <60 mL/min/1.73 m(2)) was reduced from 8.7% to 6.3%. In estimated GFR of 45 to 59 mL/min/1.73 m(2) by the MDRD Study equation, 34.7% of participants were reclassified to estimated GFR of 60 to 89 mL/min/1.73 m(2) by the CKD-EPI equation and had lower incidence rates (per 1000 person-years) for the outcomes of interest (9.9 vs 34.5 for all-cause mortality, 2.7 vs 13.0 for cardiovascular mortality, and 0.5 vs 0.8 for ESRD) compared with those not reclassified. The corresponding adjusted hazard ratios were 0.80 (95% CI, 0.74-0.86) for all-cause mortality, 0.73 (95% CI, 0.65-0.82) for cardiovascular mortality, and 0.49 (95% CI, 0.27-0.88) for ESRD. Similar findings were observed in other estimated GFR categories by the MDRD Study equation. Net reclassification improvement based on estimated GFR categories was significantly positive for all outcomes (range, 0.06-0.13; all P < .001). Net reclassification improvement was similarly positive in most subgroups defined by age (<65 years and ≥65 years), sex, race/ethnicity (white, Asian, and black), and presence or absence of diabetes and hypertension. The results in the high-risk and CKD cohorts were largely consistent with the general population cohorts. The CKD-EPI equation classified fewer individuals as having CKD and more accurately categorized the risk for mortality and ESRD than did the MDRD Study equation across a broad range of populations.
Williams-Sether, Tara
2015-08-06
Annual peak-flow frequency data from 231 U.S. Geological Survey streamflow-gaging stations in North Dakota and parts of Montana, South Dakota, and Minnesota, with 10 or more years of unregulated peak-flow record, were used to develop regional regression equations for exceedance probabilities of 0.5, 0.20, 0.10, 0.04, 0.02, 0.01, and 0.002 using generalized least-squares techniques. Updated peak-flow frequency estimates for 262 streamflow-gaging stations were developed using data through 2009 and log-Pearson Type III procedures outlined by the Hydrology Subcommittee of the Interagency Advisory Committee on Water Data. An average generalized skew coefficient was determined for three hydrologic zones in North Dakota. A StreamStats web application was developed to estimate basin characteristics for the regional regression equation analysis. Methods for estimating a weighted peak-flow frequency for gaged sites and ungaged sites are presented.
Oki, Delwyn S.; Rosa, Sarah N.; Yeung, Chiu W.
2010-01-01
This study provides an updated analysis of the magnitude and frequency of peak stream discharges in Hawai`i. Annual peak-discharge data collected by the U.S. Geological Survey during and before water year 2008 (ending September 30, 2008) at stream-gaging stations were analyzed. The existing generalized-skew value for the State of Hawai`i was retained, although three methods were used to evaluate whether an update was needed. Regional regression equations were developed for peak discharges with 2-, 5-, 10-, 25-, 50-, 100-, and 500-year recurrence intervals for unregulated streams (those for which peak discharges are not affected to a large extent by upstream reservoirs, dams, diversions, or other structures) in areas with less than 20 percent combined medium- and high-intensity development on Kaua`i, O`ahu, Moloka`i, Maui, and Hawai`i. The generalized-least-squares (GLS) regression equations relate peak stream discharge to quantified basin characteristics (for example, drainage-basin area and mean annual rainfall) that were determined using geographic information system (GIS) methods. Each of the islands of Kaua`i,O`ahu, Moloka`i, Maui, and Hawai`i was divided into two regions, generally corresponding to a wet region and a dry region. Unique peak-discharge regression equations were developed for each region. The regression equations developed for this study have standard errors of prediction ranging from 16 to 620 percent. Standard errors of prediction are greatest for regression equations developed for leeward Moloka`i and southern Hawai`i. In general, estimated 100-year peak discharges from this study are lower than those from previous studies, which may reflect the longer periods of record used in this study. Each regression equation is valid within the range of values of the explanatory variables used to develop the equation. The regression equations were developed using peak-discharge data from streams that are mainly unregulated, and they should not be used to estimate peak discharges in regulated streams. Use of a regression equation beyond its limits will produce peak-discharge estimates with unknown error and should therefore be avoided. Improved estimates of the magnitude and frequency of peak discharges in Hawai`i will require continued operation of existing stream-gaging stations and operation of additional gaging stations for areas such as Moloka`i and Hawai`i, where limited stream-gaging data are available.
Singer, Donald A.; Kouda, Ryoichi
2011-01-01
Empirical evidence indicates that processes affecting number and quantity of resources in geologic settings are very general across deposit types. Sizes of permissive tracts that geologically could contain the deposits are excellent predictors of numbers of deposits. In addition, total ore tonnage of mineral deposits of a particular type in a tract is proportional to the type’s median tonnage in a tract. Regressions using size of permissive tracts and median tonnage allow estimation of number of deposits and of total tonnage of mineralization. These powerful estimators, based on 10 different deposit types from 109 permissive worldwide control tracts, generalize across deposit types. Estimates of number of deposits and of total tonnage of mineral deposits are made by regressing permissive area, and mean (in logs) tons in deposits of the type, against number of deposits and total tonnage of deposits in the tract for the 50th percentile estimates. The regression equations (R2 = 0.91 and 0.95) can be used for all deposit types just by inserting logarithmic values of permissive area in square kilometers, and mean tons in deposits in millions of metric tons. The regression equations provide estimates at the 50th percentile, and other equations are provided for 90% confidence limits for lower estimates and 10% confidence limits for upper estimates of number of deposits and total tonnage. Equations for these percentile estimates along with expected value estimates are presented here along with comparisons with independent expert estimates. Also provided are the equations for correcting for the known well-explored deposits in a tract. These deposit-density models require internally consistent grade and tonnage models and delineations for arriving at unbiased estimates.
Revised techniques for estimating peak discharges from channel width in Montana
Parrett, Charles; Hull, J.A.; Omang, R.J.
1987-01-01
This study was conducted to develop new estimating equations based on channel width and the updated flood frequency curves of previous investigations. Simple regression equations for estimating peak discharges with recurrence intervals of 2, 5, 10, 25, 50, and 100 years were developed for seven regions in Montana. The standard errors of estimate for the equations that use active channel width as the independent variable ranged from 30% to 87%. The standard errors of estimate for the equations that use bankfull width as the independent variable ranged from 34% to 92%. The smallest standard errors generally occurred in the prediction equations for the 2-yr flood, 5-yr flood, and 10-yr flood, and the largest standard errors occurred in the prediction equations for the 100-yr flood. The equations that use active channel width and the equations that use bankfull width were determined to be about equally reliable in five regions. In the West Region, the equations that use bankfull width were slightly more reliable than those based on active channel width, whereas in the East-Central Region the equations that use active channel width were slightly more reliable than those based on bankfull width. Compared with similar equations previously developed, the standard errors of estimate for the new equations are substantially smaller in three regions and substantially larger in two regions. Limitations on the use of the estimating equations include: (1) The equations are based on stable conditions of channel geometry and prevailing water and sediment discharge; (2) The measurement of channel width requires a site visit, preferably by a person with experience in the method, and involves appreciable measurement errors; (3) Reliability of results from the equations for channel widths beyond the range of definition is unknown. In spite of the limitations, the estimating equations derived in this study are considered to be as reliable as estimating equations based on basin and climatic variables. Because the two types of estimating equations are independent, results from each can be weighted inversely proportional to their variances, and averaged. The weighted average estimate has a variance less than either individual estimate. (Author's abstract)
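The weighting described in the last two sentences is ordinary inverse-variance averaging of two independent estimates; in LaTeX form (a generic restatement, not notation from the report),

    Q_w = \frac{Q_1/\sigma_1^2 + Q_2/\sigma_2^2}{1/\sigma_1^2 + 1/\sigma_2^2},
    \qquad
    \operatorname{Var}(Q_w) = \frac{1}{1/\sigma_1^2 + 1/\sigma_2^2} \le \min(\sigma_1^2, \sigma_2^2),

so the combined estimate is never less precise than the better of the two inputs.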
ERIC Educational Resources Information Center
Penfield, Randall D.; Bergeron, Jennifer M.
2005-01-01
This article applies a weighted maximum likelihood (WML) latent trait estimator to the generalized partial credit model (GPCM). The relevant equations required to obtain the WML estimator using the Newton-Raphson algorithm are presented, and a simulation study is described that compared the properties of the WML estimator to those of the maximum…
Asquith, William H.; Roussel, Meghan C.
2009-01-01
Annual peak-streamflow frequency estimates are needed for flood-plain management; for objective assessment of flood risk; for cost-effective design of dams, levees, and other flood-control structures; and for design of roads, bridges, and culverts. Annual peak-streamflow frequency represents the peak streamflow for nine recurrence intervals of 2, 5, 10, 25, 50, 100, 200, 250, and 500 years. Common methods for estimation of peak-streamflow frequency for ungaged or unmonitored watersheds are regression equations for each recurrence interval developed for one or more regions; such regional equations are the subject of this report. The method is based on analysis of annual peak-streamflow data from U.S. Geological Survey streamflow-gaging stations (stations). Beginning in 2007, the U.S. Geological Survey, in cooperation with the Texas Department of Transportation and in partnership with Texas Tech University, began a 3-year investigation concerning the development of regional equations to estimate annual peak-streamflow frequency for undeveloped watersheds in Texas. The investigation focuses primarily on 638 stations with 8 or more years of data from undeveloped watersheds and other criteria. The general approach is explicitly limited to the use of L-moment statistics, which are used in conjunction with a technique of multi-linear regression referred to as PRESS minimization. The approach used to develop the regional equations, which was refined during the investigation, is referred to as the 'L-moment-based, PRESS-minimized, residual-adjusted approach'. For the approach, seven unique distributions are fit to the sample L-moments of the data for each of 638 stations, and trimmed means of the seven results of the distributions for each recurrence interval are used to define the station-specific peak-streamflow frequency. As a first iteration of regression, nine weighted-least-squares, PRESS-minimized, multi-linear regression equations are computed using the watershed characteristics of drainage area, dimensionless main-channel slope, and mean annual precipitation. The residuals of the nine equations are spatially mapped, and residuals for the 10-year recurrence interval are selected for generalization to 1-degree latitude and longitude quadrangles. The generalized residual is referred to as the OmegaEM parameter and represents a generalized terrain and climate index that expresses peak-streamflow potential not otherwise represented in the three watershed characteristics. The OmegaEM parameter was assigned to each station, and using OmegaEM, nine additional regression equations are computed. Because of favorable diagnostics, the OmegaEM equations are expected to be generally reliable estimators of peak-streamflow frequency for undeveloped and ungaged stream locations in Texas. The mean residual standard error, adjusted R-squared, and percentage reduction of PRESS by use of OmegaEM are 0.30 (log10 units), 0.86, and -21 percent, respectively. Inclusion of the OmegaEM parameter provides a substantial reduction in the PRESS statistic of the regression equations and removes considerable spatial dependency in regression residuals. Although the OmegaEM parameter requires interpretation on the part of analysts and the potential exists that different analysts could estimate different values for a given watershed, the authors suggest that typical uncertainty in the OmegaEM estimate might be about ±0.10.
Finally, given the two ensembles of equations reported herein and those in previous reports, hydrologic design engineers and other analysts have several different methods, which represent different analytical tracks, to make comparisons of peak-streamflow frequency estimates for ungaged watersheds in the study area.
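The PRESS statistic minimized above is the sum of squared leave-one-out prediction errors; for a linear regression it can be computed without refitting by using the hat-matrix leverages, as in this generic Python sketch (not the report's implementation, and with made-up data):

    import numpy as np

    def press_statistic(X, y):
        """PRESS = sum of squared leave-one-out residuals for ordinary least squares."""
        X = np.column_stack([np.ones(len(y)), X])                 # add intercept column
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        residuals = y - X @ beta
        leverage = np.diag(X @ np.linalg.pinv(X.T @ X) @ X.T)     # hat-matrix diagonal
        return np.sum((residuals / (1.0 - leverage)) ** 2)

    # Example with three watershed characteristics (hypothetical values)
    rng = np.random.default_rng(0)
    X = rng.normal(size=(30, 3))
    y = X @ np.array([0.5, -0.2, 0.1]) + rng.normal(scale=0.3, size=30)
    print(press_statistic(X, y))

Choosing the regression (and, here, the OmegaEM generalization) to minimize PRESS penalizes equations that fit the calibration stations well but predict poorly at withheld stations.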
The applicability of eGFR equations to different populations.
Delanaye, Pierre; Mariat, Christophe
2013-09-01
The Cockcroft-Gault equation for estimating glomerular filtration rate has been learnt by every generation of medical students over the decades. Since the publication of the Modification of Diet in Renal Disease (MDRD) study equation in 1999, however, the supremacy of the Cockcroft-Gault equation has been relentlessly disputed. More recently, the Chronic Kidney Disease Epidemiology (CKD-EPI) consortium has proposed a group of novel equations for estimating glomerular filtration rate (GFR). The MDRD and CKD-EPI equations were developed following a rigorous process, are expressed in a way in which they can be used with standardized biomarkers of GFR (serum creatinine and/or serum cystatin C) and have been evaluated in different populations of patients. Today, the MDRD Study equation and the CKD-EPI equation based on serum creatinine level have supplanted the Cockcroft-Gault equation. In many regards, these equations are superior to the Cockcroft-Gault equation and are now specifically recommended by international guidelines. With their generalized use, however, it has become apparent that those equations are not infallible and that they fail to provide an accurate estimate of GFR in certain situations frequently encountered in clinical practice. After describing the processes that led to the development of the new GFR-estimating equations, this Review discusses the clinical situations in which the applicability of these equations is questioned.
Developing a generalized allometric equation for aboveground biomass estimation
NASA Astrophysics Data System (ADS)
Xu, Q.; Balamuta, J. J.; Greenberg, J. A.; Li, B.; Man, A.; Xu, Z.
2015-12-01
A key potential uncertainty in estimating carbon stocks across multiple scales stems from the use of empirically calibrated allometric equations, which estimate aboveground biomass (AGB) from plant characteristics such as diameter at breast height (DBH) and/or height (H). The equations themselves contain significant and, at times, poorly characterized errors. Species-specific equations may be missing. Plant responses to their local biophysical environment may lead to spatially varying allometric relationships. The structural predictor may be difficult or impossible to measure accurately, particularly when derived from remote sensing data. All of these issues may lead to significant and spatially varying uncertainties in the estimation of AGB that are unexplored in the literature. We sought to quantify the errors in predicting AGB at the tree and plot level for vegetation plots in California. To accomplish this, we derived a generalized allometric equation (GAE) which we used to model AGB from a full set of tree information such as DBH, H, taxonomy, and biophysical environment. The GAE was derived using published allometric equations in the GlobAllomeTree database. The equations were sparse in details about the error, since authors typically provide only the coefficient of determination (R2) and the sample size. A more realistic simulation of tree AGB should also contain the noise that was not captured by the allometric equation. We derived an empirically corrected variance estimate for the amount of noise to represent the errors in the real biomass. Also, we accounted for the hierarchical relationship between different species by treating each taxonomic level as a covariate nested within a higher taxonomic level (e.g. species < genus). This approach provides estimation under incomplete tree information (e.g. missing species) or blurred information (e.g. conjecture of species), plus the biophysical environment. The GAE allowed us to quantify the contribution of each covariate in estimating the AGB of trees. Lastly, we applied the GAE to an existing vegetation plot database - the Forest Inventory and Analysis database - to derive per-tree and per-plot AGB estimations, their errors, and how much of the error could be attributed to the original equations, the plant's taxonomy, and their biophysical environment.
Statistical models for estimating daily streamflow in Michigan
Holtschlag, D.J.; Salehi, Habib
1992-01-01
Statistical models for estimating daily streamflow were analyzed for 25 pairs of streamflow-gaging stations in Michigan. Stations were paired by randomly choosing a station operated in 1989 at which 10 or more years of continuous flow data had been collected and at which flow is virtually unregulated; a nearby station was chosen where flow characteristics are similar. Streamflow data from the 25 randomly selected stations were used as the response variables; streamflow data at the nearby stations were used to generate a set of explanatory variables. Ordinary least-squares regression (OLSR) equations, autoregressive integrated moving-average (ARIMA) equations, and transfer function-noise (TFN) equations were developed to estimate the log transform of flow for the 25 randomly selected stations. The precision of each type of equation was evaluated on the basis of the standard deviation of the estimation errors. OLSR equations produce one set of estimation errors; ARIMA and TFN models each produce l sets of estimation errors corresponding to the forecast lead. The lead-l forecast is the estimate of flow l days ahead of the most recent streamflow used as a response variable in the estimation. In this analysis, the standard deviations of lead-l ARIMA and TFN forecast errors were generally lower than the standard deviation of OLSR errors for l < 2 days and l < 9 days, respectively. Composite estimates were computed as a weighted average of forecasts based on TFN equations and backcasts (forecasts of the reverse-ordered series) based on ARIMA equations. The standard deviation of composite errors varied throughout the length of the estimation interval and generally was at maximum near the center of the interval. For comparison with OLSR errors, the mean standard deviation of composite errors was computed for intervals of length 1 to 40 days. The mean standard deviation of length-l composite errors was generally less than the standard deviation of the OLSR errors for l < 32 days. In addition, the composite estimates ensure a gradual transition between periods of estimated and measured flows. Model performance among stations of differing model error magnitudes was compared by computing ratios of the mean standard deviation of the length-l composite errors to the standard deviation of OLSR errors. The mean error ratio for the set of 25 selected stations was less than 1 for intervals l < 32 days. Considering the frequency characteristics of the length of intervals of estimated record in Michigan, the effective mean error ratio for intervals < 30 days was 0.52. Thus, for intervals of estimation of 1 month or less, the error of the composite estimate is substantially lower than the error of the OLSR estimate.
Chen, Ying-Jen; Ho, Meng-Yang; Chen, Kwan-Ju; Hsu, Chia-Fen; Ryu, Shan-Jin
2009-08-01
The aims of the present study were (i) to investigate whether traditional Chinese word reading ability can be used to estimate premorbid general intelligence, and (ii) to provide multiple regression equations for estimating premorbid performance on Raven's Standard Progressive Matrices (RSPM), using age, years of education, and Chinese Graded Word Reading Test (CGWRT) scores as predictor variables. Four hundred and twenty-six healthy volunteers (201 male, 225 female), aged 16-93 years (mean +/- SD, 41.92 +/- 18.19 years) undertook the tests individually under supervised conditions. Seventy percent of subjects were randomly allocated to the derivation group (n = 296), and the rest to the validation group (n = 130). RSPM score was positively correlated with CGWRT score and years of education. RSPM and CGWRT scores and years of education were also inversely correlated with age, but the declining trend for RSPM performance against age was steeper than that for CGWRT performance. Separate multiple regression equations were derived for estimating RSPM scores using different combinations of age, years of education, and CGWRT score for both groups. The multiple regression coefficient of each equation ranged from 0.71 to 0.80, with the standard error of estimate between 7 and 8 RSPM points. When fitting the data of one group to the equations derived from its counterpart group, the cross-validation multiple regression coefficients ranged from 0.71 to 0.79. There were no significant differences in the 'predicted-obtained' RSPM discrepancies between any equations. The regression equations derived in the present study may provide a basis for estimating premorbid RSPM performance.
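A derivation step of this kind can be sketched as an ordinary least-squares fit of RSPM score on age, education, and CGWRT score; the data file and column names below are hypothetical placeholders.

    import pandas as pd
    import statsmodels.formula.api as smf

    # Fit one estimation equation of the form RSPM ~ age + years of education + CGWRT score.
    df = pd.read_csv("derivation_group.csv")  # hypothetical columns: rspm, age, educ_years, cgwrt
    fit = smf.ols("rspm ~ age + educ_years + cgwrt", data=df).fit()
    print(fit.params)            # regression coefficients
    print(fit.mse_resid ** 0.5)  # standard error of estimate, in RSPM points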
Program for computer aided reliability estimation
NASA Technical Reports Server (NTRS)
Mathur, F. P. (Inventor)
1972-01-01
A computer program for estimating the reliability of self-repair and fault-tolerant systems with respect to selected system and mission parameters is presented. The computer program is capable of operation in an interactive conversational mode as well as in a batch mode and is characterized by maintenance of several general equations representative of basic redundancy schemes in an equation repository. Selected reliability functions applicable to any mathematical model formulated with the general equations, used singly or in combination with each other, are separately stored. One or more system and/or mission parameters may be designated as a variable. Data in the form of values for selected reliability functions is generated in a tabular or graphic format for each formulated model.
Biological electric fields and rate equations for biophotons.
Alvermann, M; Srivastava, Y N; Swain, J; Widom, A
2015-04-01
Biophoton intensities depend upon the squared modulus of the electric field. Hence, we first make some general estimates about the inherent electric fields within various biosystems. Generally, these intensities do not follow a simple exponential decay law. After a brief discussion on the inapplicability of a linear rate equation that leads to strict exponential decay, we study other, nonlinear rate equations that have been successfully used for biosystems along with their physical origins when available.
Working covariance model selection for generalized estimating equations.
Carey, Vincent J; Wang, You-Gan
2011-11-20
We investigate methods for data-based selection of working covariance models in the analysis of correlated data with generalized estimating equations. We study two selection criteria: Gaussian pseudolikelihood and a geodesic distance based on discrepancy between model-sensitive and model-robust regression parameter covariance estimators. The Gaussian pseudolikelihood is found in simulation to be reasonably sensitive for several response distributions and noncanonical mean-variance relations for longitudinal data. Application is also made to a clinical dataset. Assessment of adequacy of both correlation and variance models for longitudinal data should be routine in applications, and we describe open-source software supporting this practice. Copyright © 2011 John Wiley & Sons, Ltd.
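A minimal sketch of this kind of check with statsmodels is shown below: the same marginal model is fit under two working covariance structures, and the discrepancy between the model-based (naive) and robust (sandwich) covariance matrices is compared. The data file and column names are hypothetical, and the cov_naive/cov_robust result attributes are used as I understand statsmodels' GEE results API.

    import numpy as np
    import pandas as pd
    import statsmodels.api as sm
    import statsmodels.formula.api as smf

    # Compare working covariance models by the gap between naive and robust
    # covariance estimates of the regression coefficients (a small gap suggests
    # an adequate working structure). Data file and columns are hypothetical.
    df = pd.read_csv("longitudinal.csv")  # columns: y, x1, x2, subject
    for cov_struct in (sm.cov_struct.Independence(), sm.cov_struct.Exchangeable()):
        res = smf.gee("y ~ x1 + x2", groups="subject", data=df,
                      family=sm.families.Gaussian(), cov_struct=cov_struct).fit()
        gap = np.linalg.norm(res.cov_robust - res.cov_naive)
        print(type(cov_struct).__name__, round(gap, 4))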
Eash, David A.; Barnes, Kimberlee K.
2017-01-01
A statewide study was conducted to develop regression equations for estimating six selected low-flow frequency statistics and harmonic mean flows for ungaged stream sites in Iowa. The estimation equations developed for the six low-flow frequency statistics include: the annual 1-, 7-, and 30-day mean low flows for a recurrence interval of 10 years, the annual 30-day mean low flow for a recurrence interval of 5 years, and the seasonal (October 1 through December 31) 1- and 7-day mean low flows for a recurrence interval of 10 years. Estimation equations also were developed for the harmonic-mean-flow statistic. Estimates of these seven selected statistics are provided for 208 U.S. Geological Survey continuous-record streamgages using data through September 30, 2006. The study area comprises streamgages located within Iowa and 50 miles beyond the State's borders. Because trend analyses indicated statistically significant positive trends when considering the entire period of record for the majority of the streamgages, the longest, most recent period of record without a significant trend was determined for each streamgage for use in the study. The median number of years of record used to compute each of these seven selected statistics was 35. Geographic information system software was used to measure 54 selected basin characteristics for each streamgage. Following the removal of two streamgages from the initial data set, data collected for 206 streamgages were compiled to investigate three approaches for regionalization of the seven selected statistics. Regionalization, a process using statistical regression analysis, provides a relation for efficiently transferring information from a group of streamgages in a region to ungaged sites in the region. The three regionalization approaches tested included statewide, regional, and region-of-influence regressions. For the regional regression, the study area was divided into three low-flow regions on the basis of hydrologic characteristics, landform regions, and soil regions. A comparison of root mean square errors and average standard errors of prediction for the statewide, regional, and region-of-influence regressions determined that the regional regression provided the best estimates of the seven selected statistics at ungaged sites in Iowa. Because a significant number of streams in Iowa reach zero flow as their minimum flow during low-flow years, four different types of regression analyses were used: left-censored, logistic, generalized-least-squares, and weighted-least-squares regression. A total of 192 streamgages were included in the development of 27 regression equations for the three low-flow regions. For the northeast and northwest regions, a censoring threshold was used to develop 12 left-censored regression equations to estimate the 6 low-flow frequency statistics for each region. For the southern region a total of 12 regression equations were developed; 6 logistic regression equations were developed to estimate the probability of zero flow for the 6 low-flow frequency statistics and 6 generalized least-squares regression equations were developed to estimate the 6 low-flow frequency statistics, if nonzero flow is estimated first by use of the logistic equations. A weighted-least-squares regression equation was developed for each region to estimate the harmonic-mean-flow statistic. 
Average standard errors of estimate for the left-censored equations for the northeast region range from 64.7 to 88.1 percent and for the northwest region range from 85.8 to 111.8 percent. Misclassification percentages for the logistic equations for the southern region range from 5.6 to 14.0 percent. Average standard errors of prediction for generalized least-squares equations for the southern region range from 71.7 to 98.9 percent and pseudo coefficients of determination for the generalized-least-squares equations range from 87.7 to 91.8 percent. Average standard errors of prediction for weighted-least-squares equations developed for estimating the harmonic-mean-flow statistic for each of the three regions range from 66.4 to 80.4 percent. The regression equations are applicable only to stream sites in Iowa with low flows not significantly affected by regulation, diversion, or urbanization and with basin characteristics within the range of those used to develop the equations. If the equations are used at ungaged sites on regulated streams, or on streams affected by water-supply and agricultural withdrawals, then the estimates will need to be adjusted by the amount of regulation or withdrawal to estimate the actual flow conditions if that is of interest. Caution is advised when applying the equations for basins with characteristics near the applicable limits of the equations and for basins located in karst topography. A test of two drainage-area ratio methods using 31 pairs of streamgages, for the annual 7-day mean low-flow statistic for a recurrence interval of 10 years, indicates a weighted drainage-area ratio method provides better estimates than regional regression equations for an ungaged site on a gaged stream in Iowa when the drainage-area ratio is between 0.5 and 1.4. These regression equations will be implemented within the U.S. Geological Survey StreamStats web-based geographic-information-system tool. StreamStats allows users to click on any ungaged site on a river and compute estimates of the seven selected statistics; in addition, 90-percent prediction intervals and the measured basin characteristics for the ungaged sites also are provided. StreamStats also allows users to click on any streamgage in Iowa and estimates computed for these seven selected statistics are provided for the streamgage.
Method of estimating flood-frequency parameters for streams in Idaho
Kjelstrom, L.C.; Moffatt, R.L.
1981-01-01
Skew coefficients for the log-Pearson type III distribution are generalized on the basis of some similarity of floods in the Snake River basin and other parts of Idaho. Generalized skew coefficients aid in shaping flood-frequency curves because skew coefficients computed from gaging stations having relatively short periods of peak flow records can be unreliable. Generalized skew coefficients can be obtained for a gaging station from one of three maps in this report. The map to be used depends on whether (1) snowmelt floods are dominant (generally when more than 20 percent of the drainage area is above 6,000 feet altitude), (2) rainstorm floods are dominant (generally when the mean altitude is less than 3,000 feet), or (3) either snowmelt or rainstorm floods can be the annual maximum discharge. For the latter case, frequency curves constructed using separate arrays of each type of runoff can be combined into one curve, which, for some stations, is significantly different from the frequency curve constructed using only annual maximum discharges. For 269 gaging stations, flood-frequency curves that include the generalized skew coefficients in the computation of the log-Pearson type III equation tend to fit the data better than previous analyses. Frequency curves for ungaged sites can be derived by estimating three statistics of the log-Pearson type III distribution. The mean and standard deviation of logarithms of annual maximum discharges are estimated by regression equations that use basin characteristics as independent variables. Skew coefficient estimates are the generalized skews. The log-Pearson type III equation is then applied with the three estimated statistics to compute the discharge at selected exceedance probabilities. Standard errors at the 2-percent exceedance probability range from 41 to 90 percent. (USGS)
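For an ungaged site, the final step described above amounts to evaluating a log-Pearson type III quantile from the three estimated statistics; the sketch below does this with scipy, using illustrative values rather than numbers from the report.

    from scipy import stats

    # Flood quantile from log-Pearson type III statistics of the log10 annual
    # maximum discharges: mean, standard deviation, and generalized skew.
    mean_log10, sd_log10, gen_skew = 3.2, 0.25, 0.1   # illustrative values
    exceed_prob = 0.02                                # 2-percent exceedance (50-year flood)
    q_log10 = stats.pearson3(gen_skew, loc=mean_log10, scale=sd_log10).ppf(1.0 - exceed_prob)
    print(10 ** q_log10)                              # discharge in the original units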
Estimating Dynamical Systems: Derivative Estimation Hints From Sir Ronald A. Fisher.
Deboeck, Pascal R
2010-08-06
The fitting of dynamical systems to psychological data offers the promise of addressing new and innovative questions about how people change over time. One method of fitting dynamical systems is to estimate the derivatives of a time series and then examine the relationships between derivatives using a differential equation model. One common approach for estimating derivatives, Local Linear Approximation (LLA), produces estimates with correlated errors. Depending on the specific differential equation model used, such correlated errors can lead to severely biased estimates of differential equation model parameters. This article shows that the fitting of dynamical systems can be improved by estimating derivatives in a manner similar to that used to fit orthogonal polynomials. Two applications using simulated data compare the proposed method and a generalized form of LLA when used to estimate derivatives and when used to estimate differential equation model parameters. A third application estimates the frequency of oscillation in observations of the monthly deaths from bronchitis, emphysema, and asthma in the United Kingdom. These data are publicly available in the statistical program R, and functions in R for the method presented are provided.
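In the spirit of the polynomial-based alternative to LLA, the generic sketch below estimates first derivatives of an evenly sampled series by fitting a local quadratic in a sliding window; it is not the article's exact procedure, only an illustration of derivative estimation by local polynomial fitting.

    import numpy as np

    def local_derivatives(x, dt=1.0, half_window=2):
        # Fit a quadratic to each sliding window and read off the first-derivative
        # coefficient at the window center (t = 0).
        t = np.arange(-half_window, half_window + 1) * dt
        derivs = []
        for i in range(half_window, len(x) - half_window):
            coeffs = np.polyfit(t, x[i - half_window:i + half_window + 1], deg=2)
            derivs.append(coeffs[1])  # coefficient of t, i.e., dx/dt at the center
        return np.array(derivs)

    series = np.sin(np.linspace(0.0, 6.0 * np.pi, 200))
    print(local_derivatives(series, dt=6.0 * np.pi / 199)[:5])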
Ahearn, Elizabeth A.
2004-01-01
Multiple linear-regression equations were developed to estimate the magnitudes of floods in Connecticut for recurrence intervals ranging from 2 to 500 years. The equations can be used for nonurban, unregulated stream sites in Connecticut with drainage areas ranging from about 2 to 715 square miles. Flood-frequency data and hydrologic characteristics from 70 streamflow-gaging stations and the upstream drainage basins were used to develop the equations. The hydrologic characteristics (drainage area, mean basin elevation, and 24-hour rainfall) are used in the equations to estimate the magnitude of floods. Average standard errors of prediction for the equations are 31.8, 32.7, 34.4, 35.9, 37.6 and 45.0 percent for the 2-, 10-, 25-, 50-, 100-, and 500-year recurrence intervals, respectively. Simplified equations using only one hydrologic characteristic, drainage area, also were developed. The regression analysis is based on generalized least-squares regression techniques. Observed flows (log-Pearson Type III analysis of the annual maximum flows) from five streamflow-gaging stations in urban basins in Connecticut were compared to flows estimated from national three-parameter and seven-parameter urban regression equations. The comparison shows that the three- and seven-parameter equations used in conjunction with the new statewide equations generally provide reasonable estimates of flood flows for urban sites in Connecticut, although a national urban flood-frequency study indicated that the three-parameter equations significantly underestimated flood flows in many regions of the country. Verification of the accuracy of the three-parameter or seven-parameter national regression equations using new data from Connecticut stations was beyond the scope of this study. A technique for calculating flood flows at streamflow-gaging stations using a weighted average also is described. Two estimates of flood flows (one based on the log-Pearson Type III analyses of the annual maximum flows at the gaging station, and the other from the regression equation) are weighted together based on the years of record at the gaging station and the equivalent years of record value determined from the regression. Weighted averages of flood flows for the 2-, 10-, 25-, 50-, 100-, and 500-year recurrence intervals are tabulated for the 70 streamflow-gaging stations used in the regression analysis. Generally, weighted averages give the most accurate estimate of flood flows at gaging stations. An evaluation of Connecticut's streamflow-gaging network was performed to determine whether the spatial coverage and range of geographic and hydrologic conditions are adequately represented for transferring flood characteristics from gaged to ungaged sites. Fifty-one of 54 stations in the current (2004) network support one or more flood needs of federal, state, and local agencies. Twenty-five of 54 stations in the current network are considered high-priority stations by the U.S. Geological Survey because of their contribution to the long-term understanding of floods and their application for regional flood analysis. Enhancements to the network to improve overall effectiveness for regionalization can be made by increasing the spatial coverage of gaging stations, establishing stations in regions of the state that are not well-represented, and adding stations in basins with drainage area sizes not represented.
Additionally, the usefulness of the network for characterizing floods can be maintained and improved by continuing operation at the current stations because flood flows can be more accurately estimated at stations with continuous, long-term record.
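The weighting step can be sketched as below, combining the at-site (log-Pearson Type III) and regression estimates with the years of record and equivalent years of record as weights; averaging in log space is an assumption consistent with common practice rather than a quotation of the report's exact formula.

    import math

    def weighted_flood_estimate(q_station, q_regression, years_of_record, equivalent_years):
        # Record-length-weighted average of the two flood-flow estimates (in log10 space).
        w = (years_of_record * math.log10(q_station) +
             equivalent_years * math.log10(q_regression)) / (years_of_record + equivalent_years)
        return 10.0 ** w

    print(weighted_flood_estimate(q_station=5200.0, q_regression=4400.0,
                                  years_of_record=35, equivalent_years=8))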
Kennedy, Jeffrey R.; Paretti, Nicholas V.; Veilleux, Andrea G.
2014-01-01
Regression equations, which allow predictions of n-day flood-duration flows for selected annual exceedance probabilities at ungaged sites, were developed using generalized least-squares regression and flood-duration flow frequency estimates at 56 streamgaging stations within a single, relatively uniform physiographic region in the central part of Arizona, between the Colorado Plateau and Basin and Range Province, called the Transition Zone. Drainage area explained most of the variation in the n-day flood-duration annual exceedance probabilities, but mean annual precipitation and mean elevation were also significant variables in the regression models. Standard error of prediction for the regression equations varies from 28 to 53 percent and generally decreases with increasing n-day duration. Outside the Transition Zone there are insufficient streamgaging stations to develop regression equations, but flood-duration flow frequency estimates are presented at select streamgaging stations.
NASA Astrophysics Data System (ADS)
Chen, Gui-Qiang G.; Schrecker, Matthew R. I.
2018-04-01
We are concerned with globally defined entropy solutions to the Euler equations for compressible fluid flows in transonic nozzles with general cross-sectional areas. Such nozzles include the de Laval nozzles and other more general nozzles whose cross-sectional area functions are allowed at the nozzle ends to be either zero (closed ends) or infinity (unbounded ends). To achieve this, in this paper, we develop a vanishing viscosity method to construct globally defined approximate solutions and then establish essential uniform estimates in weighted L^p norms for the whole range of physical adiabatic exponents γ ∈ (1, ∞), so that the viscosity approximate solutions satisfy the general L^p compensated compactness framework. The viscosity method is designed to incorporate artificial viscosity terms with the natural Dirichlet boundary conditions to ensure the uniform estimates. Then such estimates lead to both the convergence of the approximate solutions and the existence theory of globally defined finite-energy entropy solutions to the Euler equations for transonic flows that may have different end-states in the class of nozzles with general cross-sectional areas for all γ ∈ (1, ∞). The approach and techniques developed here apply to other problems with similar difficulties. In particular, we successfully apply them to construct globally defined spherically symmetric entropy solutions to the Euler equations for all γ ∈ (1, ∞).
Joao P. Carvalho; Bernard R. Parresol
2005-01-01
This paper presents a growth model for dominant-height and site-quality estimations for Pyrenean oak (Quercus pyrenaica Willd.) stands. The Bertalanffy-Richards function is used with the generalized algebraic difference approach to derive a dynamic site equation. This allows dominant-height and site-index estimations in a compatible way, using any...
Computer considerations for real time simulation of a generalized rotor model
NASA Technical Reports Server (NTRS)
Howe, R. M.; Fogarty, L. E.
1977-01-01
Scaled equations were developed to meet requirements for real time computer simulation of the rotor system research aircraft. These equations form the basis for consideration of both digital and hybrid mechanization for real time simulation. For all-digital simulation, estimates of the required speed in terms of equivalent operations per second are developed based on the complexity of the equations and the required integration frame rates. For both conventional hybrid simulation and hybrid simulation using time-shared analog elements the amount of required equipment is estimated along with a consideration of the dynamic errors. Conventional hybrid mechanization using analog simulation of those rotor equations which involve rotor-spin frequencies (this constitutes the bulk of the equations) requires too much analog equipment. Hybrid simulation using time-sharing techniques for the analog elements appears possible with a reasonable amount of analog equipment. All-digital simulation with affordable general-purpose computers is not possible because of speed limitations, but specially configured digital computers do have the required speed and constitute the recommended approach.
Stature estimation equations for South Asian skeletons based on DXA scans of contemporary adults.
Pomeroy, Emma; Mushrif-Tripathy, Veena; Wells, Jonathan C K; Kulkarni, Bharati; Kinra, Sanjay; Stock, Jay T
2018-05-03
Stature estimation from the skeleton is a classic anthropological problem, and recent years have seen the proliferation of population-specific regression equations. Many rely on the anatomical reconstruction of stature from archaeological skeletons to derive regression equations based on long bone lengths, but this requires a collection with very good preservation. In some regions, for example, South Asia, typical environmental conditions preclude the sufficient preservation of skeletal remains. Large-scale epidemiological studies that include medical imaging of the skeleton by techniques such as dual-energy X-ray absorptiometry (DXA) offer new potential datasets for developing such equations. We derived estimation equations based on known height and bone lengths measured from DXA scans from the Andhra Pradesh Children and Parents Study (Hyderabad, India). Given debates on the most appropriate regression model to use, multiple methods were compared, and the performance of the equations was tested on a published skeletal dataset of individuals with known stature. The equations have standard errors of estimate and prediction errors similar to those derived using anatomical reconstruction or from cadaveric datasets. As measured by the number of significant differences between true and estimated stature, and the prediction errors, the new equations perform as well as, and generally better than, published equations commonly used on South Asian skeletons or based on Indian cadaveric datasets. This study demonstrates the utility of DXA scans as a data source for developing stature estimation equations and offers a new set of equations for use with South Asian datasets. © 2018 Wiley Periodicals, Inc.
Bjerklie, David M.; Dingman, S. Lawrence; Bolster, Carl H.
2005-01-01
A set of conceptually derived in‐bank river discharge–estimating equations (models), based on the Manning and Chezy equations, are calibrated and validated using a database of 1037 discharge measurements in 103 rivers in the United States and New Zealand. The models are compared to a multiple regression model derived from the same data. The comparison demonstrates that in natural rivers, using an exponent on the slope variable of 0.33 rather than the traditional value of 0.5 reduces the variance associated with estimating flow resistance. Mean model uncertainty, assuming a constant value for the conductance coefficient, is less than 5% for a large number of estimates, and 67% of the estimates would be accurate within 50%. The models have potential application where site‐specific flow resistance information is not available and can be the basis for (1) a general approach to estimating discharge from remotely sensed hydraulic data, (2) comparison to slope‐area discharge estimates, and (3) large‐scale river modeling.
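The modified Manning-type form can be sketched as follows; the conductance coefficient, the depth exponent, and the variable definitions (width and mean depth in meters, dimensionless slope) are illustrative assumptions rather than the calibrated model from the paper, with the slope exponent set to 0.33 as discussed above.

    def estimate_discharge(width_m, mean_depth_m, slope, conductance=7.0):
        # Manning-type discharge estimate with a slope exponent of 0.33 instead of 0.5.
        return conductance * width_m * mean_depth_m ** (5.0 / 3.0) * slope ** 0.33

    print(estimate_discharge(width_m=50.0, mean_depth_m=2.0, slope=0.0005))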
Updated generalized biomass equations for North American tree species
David C. Chojnacky; Linda S. Heath; Jennifer C. Jenkins
2014-01-01
Historically, tree biomass at large scales has been estimated by applying dimensional analysis techniques and field measurements such as diameter at breast height (dbh) in allometric regression equations. Equations often have been developed using differing methods and applied only to certain species or isolated areas. We previously had compiled and combined (in meta-...
Westgate, Philip M
2016-01-01
When generalized estimating equations (GEE) incorporate an unstructured working correlation matrix, the variances of regression parameter estimates can inflate due to the estimation of the correlation parameters. In previous work, an approximation for this inflation that results in a corrected version of the sandwich formula for the covariance matrix of regression parameter estimates was derived. Use of this correction for correlation structure selection also reduces the over-selection of the unstructured working correlation matrix. In this manuscript, we conduct a simulation study to demonstrate that an increase in variances of regression parameter estimates can occur when GEE incorporates structured working correlation matrices as well. Correspondingly, we show the ability of the corrected version of the sandwich formula to improve the validity of inference and correlation structure selection. We also study the relative influences of two popular corrections to a different source of bias in the empirical sandwich covariance estimator.
Reaeration equations derived from U.S. geological survey database
Melching, C.S.; Flores, H.E.
1999-01-01
Accurate estimation of the reaeration-rate coefficient (K2) is extremely important for waste-load allocation. Currently, available K2 estimation equations generally yield poor estimates when applied to stream conditions different from those for which the equations were derived because they were derived from small databases composed of potentially highly inaccurate measurements. A large data set of K2 measurements made with tracer-gas methods was compiled from U.S. Geological Survey studies. This compilation included 493 reaches on 166 streams in 23 states. Careful screening to detect and eliminate erroneous measurements reduced the data set to 371 measurements. These measurements were divided into four subgroups on the basis of flow regime (channel control or pool and riffle) and stream scale (discharge greater than or less than 0.556 m3/s). Multiple linear regression in logarithms was applied to relate K2 to 12 stream hydraulic and water-quality characteristics. The resulting best-estimation equations had the form of semiempirical equations that included the rate of energy dissipation and discharge or depth and width as variables. For equation verification, a data set of K2 measurements made with tracer-gas procedures by other agencies was compiled from the literature. This compilation included 127 reaches on at least 24 streams in at least seven states. The standard error of estimate obtained when applying the developed equations to the U.S. Geological Survey data set ranged from 44 to 61%, whereas the standard error of estimate was 78% when applied to the verification data set.
Lombard, Pamela J.; Hodgkins, Glenn A.
2015-01-01
Regression equations to estimate peak streamflows with 1- to 500-year recurrence intervals (annual exceedance probabilities from 99 to 0.2 percent, respectively) were developed for small, ungaged streams in Maine. Equations presented here are the best available equations for estimating peak flows at ungaged basins in Maine with drainage areas from 0.3 to 12 square miles (mi2). Previously developed equations continue to be the best available equations for estimating peak flows for basin areas greater than 12 mi2. New equations presented here are based on streamflow records at 40 U.S. Geological Survey streamgages with a minimum of 10 years of recorded peak flows between 1963 and 2012. Ordinary least-squares regression techniques were used to determine the best explanatory variables for the regression equations. Traditional map-based explanatory variables were compared to variables requiring field measurements. Two field-based variables—culvert rust lines and bankfull channel widths—either were not commonly found or did not explain enough of the variability in the peak flows to warrant inclusion in the equations. The best explanatory variables were drainage area and percent basin wetlands; values for these variables were determined with a geographic information system. Generalized least-squares regression was used with these two variables to determine the equation coefficients and estimates of accuracy for the final equations.
Development of weight and cost estimates for lifting surfaces with active controls
NASA Technical Reports Server (NTRS)
Anderson, R. D.; Flora, C. C.; Nelson, R. M.; Raymond, E. T.; Vincent, J. H.
1976-01-01
Equations and methodology were developed for estimating the weight and cost incrementals due to active controls added to the wing and horizontal tail of a subsonic transport airplane. The methods are sufficiently generalized to be suitable for preliminary design. Supporting methodology and input specifications for the weight and cost equations are provided. The weight and cost equations are structured to be flexible in terms of the active control technology (ACT) flight control system specification. In order to present a self-contained package, methodology is also presented for generating ACT flight control system characteristics for the weight and cost equations. Use of the methodology is illustrated.
Techniques for estimating flood-peak discharges of rural, unregulated streams in Ohio
Koltun, G.F.
2003-01-01
Regional equations for estimating 2-, 5-, 10-, 25-, 50-, 100-, and 500-year flood-peak discharges at ungaged sites on rural, unregulated streams in Ohio were developed by means of ordinary and generalized least-squares (GLS) regression techniques. One-variable, simple equations and three-variable, full-model equations were developed on the basis of selected basin characteristics and flood-frequency estimates determined for 305 streamflow-gaging stations in Ohio and adjacent states. The average standard errors of prediction ranged from about 39 to 49 percent for the simple equations, and from about 34 to 41 percent for the full-model equations. Flood-frequency estimates determined by means of log-Pearson Type III analyses are reported along with weighted flood-frequency estimates, computed as a function of the log-Pearson Type III estimates and the regression estimates. Values of explanatory variables used in the regression models were determined from digital spatial data sets by means of a geographic information system (GIS), with the exception of drainage area, which was determined by digitizing the area within basin boundaries manually delineated on topographic maps. Use of GIS-based explanatory variables represents a major departure in methodology from that described in previous reports on estimating flood-frequency characteristics of Ohio streams. Examples are presented illustrating application of the regression equations to ungaged sites on ungaged and gaged streams. A method is provided to adjust regression estimates for ungaged sites by use of weighted and regression estimates for a gaged site on the same stream. A region-of-influence method, which employs a computer program to estimate flood-frequency characteristics for ungaged sites based on data from gaged sites with similar characteristics, was also tested and compared to the GLS full-model equations. For all recurrence intervals, the GLS full-model equations had superior prediction accuracy relative to the simple equations and therefore are recommended for use.
August Median Streamflow on Ungaged Streams in Eastern Aroostook County, Maine
Lombard, Pamela J.; Tasker, Gary D.; Nielsen, Martha G.
2003-01-01
Methods for estimating August median streamflow were developed for ungaged, unregulated streams in the eastern part of Aroostook County, Maine, with drainage areas from 0.38 to 43 square miles and mean basin elevations from 437 to 1,024 feet. Few long-term, continuous-record streamflow-gaging stations with small drainage areas were available from which to develop the equations; therefore, 24 partial-record gaging stations were established in this investigation. A mathematical technique for estimating a standard low-flow statistic, August median streamflow, at partial-record stations was applied by relating base-flow measurements at these stations to concurrent daily flows at nearby long-term, continuous-record streamflow- gaging stations (index stations). Generalized least-squares regression analysis (GLS) was used to relate estimates of August median streamflow at gaging stations to basin characteristics at these same stations to develop equations that can be applied to estimate August median streamflow on ungaged streams. GLS accounts for varying periods of record at the gaging stations and the cross correlation of concurrent streamflows among gaging stations. Twenty-three partial-record stations and one continuous-record station were used for the final regression equations. The basin characteristics of drainage area and mean basin elevation are used in the calculated regression equation for ungaged streams to estimate August median flow. The equation has an average standard error of prediction from -38 to 62 percent. A one-variable equation uses only drainage area to estimate August median streamflow when less accuracy is acceptable. This equation has an average standard error of prediction from -40 to 67 percent. Model error is larger than sampling error for both equations, indicating that additional basin characteristics could be important to improved estimates of low-flow statistics. Weighted estimates of August median streamflow, which can be used when making estimates at partial-record or continuous-record gaging stations, range from 0.03 to 11.7 cubic feet per second or from 0.1 to 0.4 cubic feet per second per square mile. Estimates of August median streamflow on ungaged streams in the eastern part of Aroostook County, within the range of acceptable explanatory variables, range from 0.03 to 30 cubic feet per second or 0.1 to 0.7 cubic feet per second per square mile. Estimates of August median streamflow per square mile of drainage area generally increase as mean elevation and drainage area increase.
Zhu, Hong; Xu, Xiaohan; Ahn, Chul
2017-01-01
Paired experimental design is widely used in clinical and health behavioral studies, where each study unit contributes a pair of observations. Investigators often encounter incomplete observations of paired outcomes in the data collected. Some study units contribute complete pairs of observations, while the others contribute either pre- or post-intervention observations. Statistical inference for paired experimental design with incomplete observations of continuous outcomes has been extensively studied in the literature; however, sample size methods for such study designs are sparsely available. We derive a closed-form sample size formula based on the generalized estimating equation approach by treating the incomplete observations as missing data in a linear model. The proposed method properly accounts for the impact of the mixed structure of the observed data: a combination of paired and unpaired outcomes. The sample size formula is flexible enough to accommodate different missing patterns, magnitudes of missingness, and correlation parameter values. We demonstrate that under complete observations, the proposed generalized estimating equation sample size estimate is the same as that based on the paired t-test. In the presence of missing data, the proposed method would lead to a more accurate sample size estimate compared with the crude adjustment. Simulation studies are conducted to evaluate the finite-sample performance of the generalized estimating equation sample size formula. A real application example is presented for illustration.
Lewis, Jason M.
2010-01-01
Peak-streamflow regression equations were determined for estimating flows with exceedance probabilities from 50 to 0.2 percent for the state of Oklahoma. These regression equations incorporate basin characteristics to estimate peak-streamflow magnitude and frequency throughout the state by use of a generalized least squares regression analysis. The most statistically significant independent variables required to estimate peak-streamflow magnitude and frequency for unregulated streams in Oklahoma are contributing drainage area, mean-annual precipitation, and main-channel slope. The regression equations are applicable for watershed basins with drainage areas less than 2,510 square miles that are not affected by regulation. The resulting regression equations had a standard model error ranging from 31 to 46 percent. Annual-maximum peak flows observed at 231 streamflow-gaging stations through water year 2008 were used for the regression analysis. Gage peak-streamflow estimates were used from previous work unless 2008 gaging-station data were available, in which case new peak-streamflow estimates were calculated. The U.S. Geological Survey StreamStats web application was used to obtain the independent variables required for the peak-streamflow regression equations. Limitations on the use of the regression equations and the reliability of regression estimates for natural unregulated streams are described. Log-Pearson Type III analysis information, basin and climate characteristics, and the peak-streamflow frequency estimates for the 231 gaging stations in and near Oklahoma are listed. Methodologies are presented to estimate peak streamflows at ungaged sites by using estimates from gaging stations on unregulated streams. For ungaged sites on urban streams and streams regulated by small floodwater retarding structures, an adjustment of the statewide regression equations for natural unregulated streams can be used to estimate peak-streamflow magnitude and frequency.
NASA Technical Reports Server (NTRS)
Dardner, B. R.; Blad, B. L.; Thompson, D. R.; Henderson, K. E.
1985-01-01
Reflectance and agronomic Thematic Mapper (TM) data were analyzed to determine possible data transformations for evaluating several plant parameters of corn. Three transformation forms were used: the ratio of two TM bands, logarithms of two-band ratios, and normalized differences of two bands. Normalized differences and logarithms of two-band ratios responded similarly in the equations for estimating the plant growth parameters evaluated in this study. Two-term equations were required to obtain the maximum predictability of percent ground cover, canopy moisture content, and total wet phytomass. Standard error of estimate values were 15-26 percent lower for two-term estimates of these parameters than for one-term estimates. The terms log(TM4/TM2) and (TM4/TM5) produced the maximum predictability for leaf area and dry green leaf weight, respectively. The middle infrared bands TM5 and TM7 are essential for maximizing predictability for all measured plant parameters except leaf area index. The estimating models were evaluated over bare soil to discriminate between equations which are statistically similar. Qualitative interpretations of the resulting prediction equations are consistent with general agronomic and remote sensing theory.
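The three transformation forms are simple to reproduce; the sketch below applies them to illustrative band reflectances.

    import numpy as np

    # Illustrative reflectances for three Thematic Mapper bands.
    tm2, tm4, tm5 = np.array([0.11]), np.array([0.42]), np.array([0.23])

    ratio = tm4 / tm5                      # two-band ratio, e.g. TM4/TM5
    log_ratio = np.log(tm4 / tm2)          # logarithm of a two-band ratio, e.g. log(TM4/TM2)
    norm_diff = (tm4 - tm2) / (tm4 + tm2)  # normalized difference of two bands

    print(ratio, log_ratio, norm_diff)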
Structural Equation Modeling: A Framework for Ocular and Other Medical Sciences Research
Christ, Sharon L.; Lee, David J.; Lam, Byron L.; Diane, Zheng D.
2017-01-01
Structural equation modeling (SEM) is a modeling framework that encompasses many types of statistical models and can accommodate a variety of estimation and testing methods. SEM has been used primarily in social sciences but is increasingly used in epidemiology, public health, and the medical sciences. SEM provides many advantages for the analysis of survey and clinical data, including the ability to model latent constructs that may not be directly observable. Another major feature is simultaneous estimation of parameters in systems of equations that may include mediated relationships, correlated dependent variables, and in some instances feedback relationships. SEM allows for the specification of theoretically holistic models because multiple and varied relationships may be estimated together in the same model. SEM has recently expanded by adding generalized linear modeling capabilities that include the simultaneous estimation of parameters of different functional form for outcomes with different distributions in the same model. Therefore, mortality modeling and other relevant health outcomes may be evaluated. Random effects estimation using latent variables has been advanced in the SEM literature and software. In addition, SEM software has increased estimation options. Therefore, modern SEM is quite general and includes model types frequently used by health researchers, including generalized linear modeling, mixed effects linear modeling, and population average modeling. This article does not present any new information. It is meant as an introduction to SEM and its uses in ocular and other health research.
Eash, David A.; Barnes, Kimberlee K.; O'Shea, Padraic S.
2016-09-19
A statewide study was conducted to develop regression equations for estimating three selected spring and three selected fall low-flow frequency statistics for ungaged stream sites in Iowa. The estimation equations developed for the six low-flow frequency statistics include spring (April through June) 1-, 7-, and 30-day mean low flows for a recurrence interval of 10 years and fall (October through December) 1-, 7-, and 30-day mean low flows for a recurrence interval of 10 years. Estimates of the three selected spring statistics are provided for 241 U.S. Geological Survey continuous-record streamgages, and estimates of the three selected fall statistics are provided for 238 of these streamgages, using data through June 2014. Because only 9 years of fall streamflow record were available, three streamgages included in the development of the spring regression equations were not included in the development of the fall regression equations. Because of regulation, diversion, or urbanization, 30 of the 241 streamgages were not included in the development of the regression equations. The study area includes Iowa and adjacent areas within 50 miles of the Iowa border. Because trend analyses indicated statistically significant positive trends when considering the period of record for most of the streamgages, the longest, most recent period of record without a significant trend was determined for each streamgage for use in the study. Geographic information system software was used to measure 63 selected basin characteristics for each of the 211 streamgages used to develop the regional regression equations. The study area was divided into three low-flow regions that were defined in a previous study for the development of regional regression equations. Because several streamgages included in the development of regional regression equations have estimates of zero flow calculated from observed streamflow for selected spring and fall low-flow frequency statistics, the final equations for the three low-flow regions were developed using two types of regression analyses—left-censored and generalized-least-squares regression analyses. A total of 211 streamgages were included in the development of nine spring regression equations—three equations for each of the three low-flow regions. A total of 208 streamgages were included in the development of nine fall regression equations—three equations for each of the three low-flow regions. A censoring threshold was used to develop 15 left-censored regression equations to estimate the three fall low-flow frequency statistics for each of the three low-flow regions and to estimate the three spring low-flow frequency statistics for the southern and northwest regions. For the northeast region, generalized-least-squares regression was used to develop three equations to estimate the three spring low-flow frequency statistics. For the northeast region, average standard errors of prediction range from 32.4 to 48.4 percent for the spring equations and average standard errors of estimate range from 56.4 to 73.8 percent for the fall equations. For the northwest region, average standard errors of estimate range from 58.9 to 62.1 percent for the spring equations and from 83.2 to 109.4 percent for the fall equations. 
For the southern region, average standard errors of estimate range from 43.2 to 64.0 percent for the spring equations and from 78.1 to 78.7 percent for the fall equations.The regression equations are applicable only to stream sites in Iowa with low flows not substantially affected by regulation, diversion, or urbanization and with basin characteristics within the range of those used to develop the equations. The regression equations will be implemented within the U.S. Geological Survey StreamStats Web-based geographic information system application. StreamStats allows users to click on any ungaged stream site and compute estimates of the six selected spring and fall low-flow statistics; in addition, 90-percent prediction intervals and the measured basin characteristics for the ungaged site are provided. StreamStats also allows users to click on any Iowa streamgage to obtain computed estimates for the six selected spring and fall low-flow statistics.
Estimation of Flood Discharges at Selected Recurrence Intervals for Streams in New Hampshire
Olson, Scott A.
2009-01-01
This report provides estimates of flood discharges at selected recurrence intervals for streamgages in and adjacent to New Hampshire and equations for estimating flood discharges at 2-, 5-, 10-, 25-, 50-, 100-, and 500-year recurrence intervals for ungaged, unregulated, rural streams in New Hampshire. The equations were developed using generalized least-squares regression. Flood-frequency and drainage-basin characteristics from 117 streamgages were used in developing the equations. The drainage-basin characteristics used as explanatory variables in the regression equations include drainage area, mean April precipitation, percentage of wetland area, and main channel slope. The average standard errors of prediction for estimating the 2-, 5-, 10-, 25-, 50-, 100-, and 500-year recurrence interval flood discharges with these equations are 30.0, 30.8, 32.0, 34.2, 36.0, 38.1, and 43.4 percent, respectively. Flood discharges at selected recurrence intervals for selected streamgages were computed following the guidelines in Bulletin 17B of the U.S. Interagency Advisory Committee on Water Data. To determine the flood-discharge exceedance probabilities at streamgages in New Hampshire, a new generalized skew coefficient map covering the State was developed. The standard error of the data on the new map is 0.298. To improve estimates of flood discharges at selected recurrence intervals for 20 streamgages with short-term records (10 to 15 years), record extension using the two-station comparison technique was applied. The two-station comparison method uses data from a streamgage with long-term record to adjust the frequency characteristics at a streamgage with a short-term record. A technique for adjusting a flood-discharge frequency curve computed from a streamgage record with results from the regression equations is described in this report. Also, a technique is described for estimating flood discharge at a selected recurrence interval for an ungaged site upstream or downstream from a streamgage using a drainage-area adjustment. The final regression equations and the flood-discharge frequency data used in this study will be available in StreamStats. StreamStats is a World Wide Web application providing automated regression-equation solutions for user-selected sites on streams.
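The drainage-area adjustment mentioned above can be sketched as a simple area-ratio scaling; the exponent used here is an illustrative assumption, since the report specifies how the adjustment is actually made.

    def drainage_area_adjust(q_gaged, area_gaged_mi2, area_ungaged_mi2, exponent=0.8):
        # Transfer a flood quantile from a streamgage to an ungaged site on the same
        # stream by scaling with the drainage-area ratio raised to an exponent.
        return q_gaged * (area_ungaged_mi2 / area_gaged_mi2) ** exponent

    print(drainage_area_adjust(q_gaged=3100.0, area_gaged_mi2=42.0, area_ungaged_mi2=30.0))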
Prediction of distribution coefficient from structure. 1. Estimation method.
Csizmadia, F; Tsantili-Kakoulidou, A; Panderi, I; Darvas, F
1997-07-01
A method has been developed for the estimation of the distribution coefficient (D), which considers the microspecies of a compound. D is calculated from the microscopic dissociation constants (microconstants), the partition coefficients of the microspecies, and the counterion concentration. A general equation for the calculation of D at a given pH is presented. The microconstants are calculated from the structure using Hammett and Taft equations. The partition coefficients of the ionic microspecies are predicted by empirical equations using the dissociation constants and the partition coefficient of the uncharged species, which are estimated from the structure by a Linear Free Energy Relationship method. The algorithm is implemented in a program module called PrologD.
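For the simplest case of a monoprotic acid with one neutral and one anionic microspecies, the general equation reduces to a mole-fraction-weighted sum of the microspecies partition coefficients; the sketch below illustrates that special case (ignoring counterion effects), not the full multi-microspecies method implemented in PrologD.

    import math

    def log_d_monoprotic_acid(ph, pka, log_p_neutral, log_p_anion):
        # Fraction ionized from the dissociation constant, then D as the
        # fraction-weighted sum of the neutral and anionic partition coefficients.
        frac_anion = 1.0 / (1.0 + 10.0 ** (pka - ph))
        d = (1.0 - frac_anion) * 10.0 ** log_p_neutral + frac_anion * 10.0 ** log_p_anion
        return math.log10(d)

    print(log_d_monoprotic_acid(ph=7.4, pka=4.2, log_p_neutral=2.5, log_p_anion=-1.0))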
Earley, Amy; Miskulin, Dana; Lamb, Edmund J; Levey, Andrew S; Uhlig, Katrin
2012-06-05
Clinical laboratories are increasingly reporting estimated glomerular filtration rate (GFR) by using serum creatinine assays traceable to a standard reference material. To review the performance of GFR estimating equations to inform the selection of a single equation by laboratories and the interpretation of estimated GFR by clinicians. A systematic search of MEDLINE, without language restriction, between 1999 and 21 October 2011. Cross-sectional studies in adults that compared the performance of 2 or more creatinine-based GFR estimating equations with a reference GFR measurement. Eligible equations were derived or reexpressed and validated by using creatinine measurements traceable to the standard reference material. Reviewers extracted data on study population characteristics, measured GFR, creatinine assay, and equation performance. Eligible studies compared the MDRD (Modification of Diet in Renal Disease) Study and CKD-EPI (Chronic Kidney Disease Epidemiology Collaboration) equations or modifications thereof. In 12 studies in North America, Europe, and Australia, the CKD-EPI equation performed better at higher GFRs (approximately >60 mL/min per 1.73 m(2)) and the MDRD Study equation performed better at lower GFRs. In 5 of 8 studies in Asia and Africa, the equations were modified to improve their performance by adding a coefficient derived in the local population or removing a coefficient. Methods of GFR measurement and study populations were heterogeneous. Neither the CKD-EPI nor the MDRD Study equation is optimal for all populations and GFR ranges. Using a single equation for reporting requires a tradeoff to optimize performance at either higher or lower GFR ranges. A general practice and public health perspective favors the CKD-EPI equation. Kidney Disease: Improving Global Outcomes.
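For reference, the two equations compared in the review have the following commonly published forms for IDMS-traceable serum creatinine in mg/dL; the coefficients below are quoted from widely circulated versions of the MDRD Study and 2009 CKD-EPI equations and should be verified against the original publications before any clinical or research use.

    def egfr_mdrd(scr_mg_dl, age_years, female, black):
        # MDRD Study equation (IDMS-traceable), mL/min per 1.73 m^2.
        gfr = 175.0 * scr_mg_dl ** -1.154 * age_years ** -0.203
        return gfr * (0.742 if female else 1.0) * (1.212 if black else 1.0)

    def egfr_ckd_epi_2009(scr_mg_dl, age_years, female, black):
        # CKD-EPI 2009 creatinine equation, mL/min per 1.73 m^2.
        kappa, alpha = (0.7, -0.329) if female else (0.9, -0.411)
        gfr = 141.0 * min(scr_mg_dl / kappa, 1.0) ** alpha * max(scr_mg_dl / kappa, 1.0) ** -1.209
        gfr *= 0.993 ** age_years
        return gfr * (1.018 if female else 1.0) * (1.159 if black else 1.0)

    print(egfr_mdrd(1.1, 60, female=True, black=False),
          egfr_ckd_epi_2009(1.1, 60, female=True, black=False))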
NASA Technical Reports Server (NTRS)
Klein, V.
1979-01-01
Two identification methods, the equation error method and the output error method, are used to estimate stability and control parameter values from flight data for a low-wing, single-engine, general aviation airplane. The estimated parameters from both methods are in very good agreement, primarily because of the sufficient accuracy of the measured data. The estimated static parameters also agree with the results from steady flights. The effects of power and of different input forms are demonstrated. Examination of all results available gives the best values of the estimated parameters and specifies their accuracies.
van der Velde-Koerts, Trijntje; Breysse, Nicolas; Pattingre, Lauriane; Hamey, Paul Y; Lutze, Jason; Mahieu, Karin; Margerison, Sam; Ossendorp, Bernadette C; Reich, Hermine; Rietveld, Anton; Sarda, Xavier; Vial, Gaelle; Sieke, Christian
2018-06-03
In 2015 a scientific workshop was held in Geneva, where updating the International Estimate of Short-Term Intake (IESTI) equations was suggested. This paper studies the effects of the proposed changes in residue inputs, large portions, variability factors and unit weights on the overall short-term dietary exposure estimate. Depending on the IESTI case equation, a median increase in estimated overall exposure by a factor of 1.0-6.8 was observed when the current IESTI equations are replaced by the proposed IESTI equations. The highest increase in the estimated exposure arises from the replacement of the median residue (STMR) by the maximum residue limit (MRL) for bulked and blended commodities (case 3 equations). The change in large portion parameter does not have a significant impact on the estimated exposure. The use of large portions derived from the general population covering all age groups and bodyweights should be avoided when large portions are not expressed on an individual bodyweight basis. Replacement of the highest residue (HR) by the MRL and removal of the unit weight each increase the estimated exposure for small-, medium- and large-sized commodities (case 1, case 2a or case 2b equations). However, within the EU framework lowering of the variability factor from 7 or 5 to 3 counterbalances the effect of changes in other parameters, resulting in an estimated overall exposure change for the EU situation of a factor of 0.87-1.7 and 0.6-1.4 for IESTI case 2a and case 2b equations, respectively.
Westgate, Philip M
2013-07-20
Generalized estimating equations (GEEs) are routinely used for the marginal analysis of correlated data. The efficiency of GEE depends on how closely the working covariance structure resembles the true structure, and therefore accurate modeling of the working correlation of the data is important. A popular approach is the use of an unstructured working correlation matrix, as it is not as restrictive as simpler structures such as exchangeable and AR-1 and thus can theoretically improve efficiency. However, because of the potential for having to estimate a large number of correlation parameters, variances of regression parameter estimates can be larger than theoretically expected when utilizing the unstructured working correlation matrix. Therefore, standard error estimates can be negatively biased. To account for this additional finite-sample variability, we derive a bias correction that can be applied to typical estimators of the covariance matrix of parameter estimates. Via simulation and in application to a longitudinal study, we show that our proposed correction improves standard error estimation and statistical inference. Copyright © 2012 John Wiley & Sons, Ltd.
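In practice, a related finite-sample correction is available in statsmodels' GEE implementation as a bias-reduced sandwich covariance; the sketch below compares it with the usual robust standard errors. The cov_type option names reflect my understanding of the statsmodels API, the correction is not necessarily the specific one derived in this article, and the data file and column names are hypothetical.

    import pandas as pd
    import statsmodels.api as sm
    import statsmodels.formula.api as smf

    # Compare ordinary robust (sandwich) standard errors with a bias-reduced
    # sandwich covariance for a GEE fit. Data file and columns are hypothetical.
    df = pd.read_csv("longitudinal.csv")  # columns: y, x, visit, subject
    model = smf.gee("y ~ x + visit", groups="subject", data=df,
                    family=sm.families.Gaussian(),
                    cov_struct=sm.cov_struct.Exchangeable())
    print(model.fit(cov_type="robust").bse)
    print(model.fit(cov_type="bias_reduced").bse)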
Box compression analysis of world-wide data spanning 46 years
Thomas J. Urbanik; Benjamin Frank
2006-01-01
The state of the art among most industry citations of box compression estimation is the equation by McKee developed in 1963. Because of limitations in computing tools at the time the McKee equation was developed, the equation is a simplification, with many constraints, of a more general relationship. By applying the results of sophisticated finite element modeling, in...
Yelland, Lisa N; Salter, Amy B; Ryan, Philip
2011-10-15
Modified Poisson regression, which combines a log Poisson regression model with robust variance estimation, is a useful alternative to log binomial regression for estimating relative risks. Previous studies have shown both analytically and by simulation that modified Poisson regression is appropriate for independent prospective data. This method is often applied to clustered prospective data, despite a lack of evidence to support its use in this setting. The purpose of this article is to evaluate the performance of the modified Poisson regression approach for estimating relative risks from clustered prospective data, by using generalized estimating equations to account for clustering. A simulation study is conducted to compare log binomial regression and modified Poisson regression for analyzing clustered data from intervention and observational studies. Both methods generally perform well in terms of bias, type I error, and coverage. Unlike log binomial regression, modified Poisson regression is not prone to convergence problems. The methods are contrasted by using example data sets from 2 large studies. The results presented in this article support the use of modified Poisson regression as an alternative to log binomial regression for analyzing clustered prospective data when clustering is taken into account by using generalized estimating equations.
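As a minimal illustration of the approach evaluated above (not the authors' code), modified Poisson regression with GEE clustering can be sketched in Python with statsmodels: a Poisson family with a log link fit to a binary outcome, an exchangeable working correlation for the clusters, and the robust sandwich covariance that statsmodels reports by default. The simulated data and variable names are assumptions made for the example.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.genmod.cov_struct import Exchangeable

rng = np.random.default_rng(0)
n_clusters, m = 100, 6
cluster = np.repeat(np.arange(n_clusters), m)
treat = np.repeat(rng.integers(0, 2, n_clusters), m)      # cluster-level intervention
u = np.repeat(rng.normal(0.0, 0.3, n_clusters), m)        # cluster random effect
risk = 1.0 / (1.0 + np.exp(-(-1.0 + 0.5 * treat + u)))
y = rng.binomial(1, risk)
df = pd.DataFrame({"y": y, "treat": treat, "cluster": cluster})

# Modified Poisson regression: log-Poisson model for a binary outcome,
# with clustering handled by GEE and an exchangeable working correlation.
model = sm.GEE.from_formula("y ~ treat", groups="cluster", data=df,
                            family=sm.families.Poisson(),
                            cov_struct=Exchangeable())
res = model.fit()                          # robust (sandwich) covariance by default
print(np.exp(res.params["treat"]))         # estimated relative risk
print(res.bse["treat"])                    # robust standard error (log scale)
```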
Jaman, Ajmery; Latif, Mahbub A H M; Bari, Wasimul; Wahed, Abdus S
2016-05-20
In generalized estimating equations (GEE), the correlation between the repeated observations on a subject is specified with a working correlation matrix. Correct specification of the working correlation structure ensures efficient estimators of the regression coefficients. Among the criteria used, in practice, for selecting working correlation structure, Rotnitzky-Jewell, Quasi Information Criterion (QIC) and Correlation Information Criterion (CIC) are based on the fact that if the assumed working correlation structure is correct then the model-based (naive) and the sandwich (robust) covariance estimators of the regression coefficient estimators should be close to each other. The sandwich covariance estimator, used in defining the Rotnitzky-Jewell, QIC and CIC criteria, is biased downward and has a larger variability than the corresponding model-based covariance estimator. Motivated by this fact, a new criterion is proposed in this paper based on the bias-corrected sandwich covariance estimator for selecting an appropriate working correlation structure in GEE. A comparison of the proposed and the competing criteria is shown using simulation studies with correlated binary responses. The results revealed that the proposed criterion generally performs better than the competing criteria. An example of selecting the appropriate working correlation structure has also been shown using the data from Madras Schizophrenia Study. Copyright © 2015 John Wiley & Sons, Ltd.
Rank-preserving regression: a more robust rank regression model against outliers.
Chen, Tian; Kowalski, Jeanne; Chen, Rui; Wu, Pan; Zhang, Hui; Feng, Changyong; Tu, Xin M
2016-08-30
Mean-based semi-parametric regression models such as the popular generalized estimating equations are widely used to improve robustness of inference over parametric models. Unfortunately, such models are quite sensitive to outlying observations. The Wilcoxon-score-based rank regression (RR) provides estimates that are more robust to outliers than generalized estimating equations. However, the RR and its extensions do not sufficiently address missing data arising in longitudinal studies. In this paper, we propose a new approach to address outliers under a different framework based on the functional response models. This functional-response-model-based alternative not only addresses limitations of the RR and its extensions for longitudinal data, but, with its rank-preserving property, even provides more robust estimates than these alternatives. The proposed approach is illustrated with both real and simulated data. Copyright © 2016 John Wiley & Sons, Ltd.
Modeling individualized coefficient alpha to measure quality of test score data.
Liu, Molei; Hu, Ming; Zhou, Xiao-Hua
2018-05-23
Individualized coefficient alpha is defined. It is item and subject specific and is used to measure the quality of test score data with heterogeneity among the subjects and items. A regression model is developed based on 3 sets of generalized estimating equations. The first set of generalized estimating equations models the expectation of the responses, the second set models the response variance, and the third set is proposed to estimate the individualized coefficient alpha, which is defined and used to measure the individualized internal consistency of the responses. We also use different techniques to extend our method to handle missing data. Asymptotic properties of the estimators are discussed, based on which inference on the coefficient alpha is derived. Performance of our method is evaluated through a simulation study and a real data analysis. The real data application is from a health literacy study in Hunan province of China. Copyright © 2018 John Wiley & Sons, Ltd.
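For orientation, the quantity being individualized here generalizes the classical coefficient alpha. A minimal computation of the standard (non-individualized) Cronbach's alpha from a subjects-by-items response matrix is sketched below; it is not the estimating-equation estimator proposed in the paper.

```python
import numpy as np

def coefficient_alpha(responses):
    """Classical Cronbach's alpha for an (n_subjects, n_items) response matrix."""
    responses = np.asarray(responses, dtype=float)
    k = responses.shape[1]
    item_vars = responses.var(axis=0, ddof=1)
    total_var = responses.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1.0 - item_vars.sum() / total_var)

# Example with simulated 5-item test data for 200 subjects.
rng = np.random.default_rng(1)
ability = rng.normal(size=(200, 1))
items = ability + rng.normal(scale=1.0, size=(200, 5))
print(coefficient_alpha(items))
```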
Eash, David A.; Barnes, Kimberlee K.; Veilleux, Andrea G.
2013-01-01
A statewide study was performed to develop regional regression equations for estimating selected annual exceedance-probability statistics for ungaged stream sites in Iowa. The study area comprises streamgages located within Iowa and 50 miles beyond the State’s borders. Annual exceedance-probability estimates were computed for 518 streamgages by using the expected moments algorithm to fit a Pearson Type III distribution to the logarithms of annual peak discharges for each streamgage using annual peak-discharge data through 2010. The estimation of the selected statistics included a Bayesian weighted least-squares/generalized least-squares regression analysis to update regional skew coefficients for the 518 streamgages. Low-outlier and historic information were incorporated into the annual exceedance-probability analyses, and a generalized Grubbs-Beck test was used to detect multiple potentially influential low flows. Also, geographic information system software was used to measure 59 selected basin characteristics for each streamgage. Regional regression analysis, using generalized least-squares regression, was used to develop a set of equations for each flood region in Iowa for estimating discharges for ungaged stream sites with 50-, 20-, 10-, 4-, 2-, 1-, 0.5-, and 0.2-percent annual exceedance probabilities, which are equivalent to annual flood-frequency recurrence intervals of 2, 5, 10, 25, 50, 100, 200, and 500 years, respectively. A total of 394 streamgages were included in the development of regional regression equations for three flood regions (regions 1, 2, and 3) that were defined for Iowa based on landform regions and soil regions. Average standard errors of prediction range from 31.8 to 45.2 percent for flood region 1, 19.4 to 46.8 percent for flood region 2, and 26.5 to 43.1 percent for flood region 3. The pseudo coefficients of determination for the generalized least-squares equations range from 90.8 to 96.2 percent for flood region 1, 91.5 to 97.9 percent for flood region 2, and 92.4 to 96.0 percent for flood region 3. The regression equations are applicable only to stream sites in Iowa with flows not significantly affected by regulation, diversion, channelization, backwater, or urbanization and with basin characteristics within the range of those used to develop the equations. These regression equations will be implemented within the U.S. Geological Survey StreamStats Web-based geographic information system tool. StreamStats allows users to click on any ungaged site on a river and compute estimates of the eight selected statistics; in addition, 90-percent prediction intervals and the measured basin characteristics for the ungaged sites also are provided by the Web-based tool. StreamStats also allows users to click on any streamgage in Iowa and estimates computed for these eight selected statistics are provided for the streamgage.
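For readers unfamiliar with the form these regional equations typically take, a generic log-linear regression of a flood quantile on basin characteristics can be written as

\[ \log_{10} Q_P = b_0 + b_1 \log_{10}(\mathrm{DA}) + b_2 \log_{10}(X_2) + \cdots, \qquad\text{equivalently}\qquad Q_P = 10^{b_0}\,\mathrm{DA}^{b_1} X_2^{b_2} \cdots, \]

where Q_P is the discharge with annual exceedance probability P, DA is drainage area, and X_2, ... are additional basin characteristics. This is a schematic form for orientation only; the fitted coefficients and the predictors retained for each Iowa flood region are given in the report itself.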
August median streamflow on ungaged streams in Eastern Coastal Maine
Lombard, Pamela J.
2004-01-01
Methods for estimating August median streamflow were developed for ungaged, unregulated streams in eastern coastal Maine. The methods apply to streams with drainage areas ranging in size from 0.04 to 73.2 square miles and fraction of basin underlain by a sand and gravel aquifer ranging from 0 to 71 percent. The equations were developed with data from three long-term (greater than or equal to 10 years of record) continuous-record streamflow-gaging stations, 23 partial-record streamflow-gaging stations, and 5 short-term (less than 10 years of record) continuous-record streamflow-gaging stations. A mathematical technique for estimating a standard low-flow statistic, August median streamflow, at partial-record streamflow-gaging stations and short-term continuous-record streamflow-gaging stations was applied by relating base-flow measurements at these stations to concurrent daily streamflows at nearby long-term continuous-record streamflow-gaging stations (index stations). Generalized least-squares regression analysis (GLS) was used to relate estimates of August median streamflow at streamflow-gaging stations to basin characteristics at these same stations to develop equations that can be applied to estimate August median streamflow on ungaged streams. GLS accounts for different periods of record at the gaging stations and the cross correlation of concurrent streamflows among gaging stations. Thirty-one stations were used for the final regression equations. Two basin characteristics (drainage area and fraction of basin underlain by a sand and gravel aquifer) are used in the calculated regression equation to estimate August median streamflow for ungaged streams. The equation has an average standard error of prediction from -27 to 38 percent. A one-variable equation uses only drainage area to estimate August median streamflow when less accuracy is acceptable. This equation has an average standard error of prediction from -30 to 43 percent. Model error is larger than sampling error for both equations, indicating that additional or improved estimates of basin characteristics could be important to improved estimates of low-flow statistics. Weighted estimates of August median streamflow at partial-record or continuous-record gaging stations range from 0.003 to 31.0 cubic feet per second or from 0.1 to 0.6 cubic feet per second per square mile. Estimates of August median streamflow on ungaged streams in eastern coastal Maine, within the range of acceptable explanatory variables, range from 0.003 to 45 cubic feet per second or 0.1 to 0.6 cubic feet per second per square mile. Estimates of August median streamflow per square mile of drainage area generally increase as drainage area and fraction of basin underlain by a sand and gravel aquifer increase.
Diffusion phenomenon for linear dissipative wave equations in an exterior domain
NASA Astrophysics Data System (ADS)
Ikehata, Ryo
Under general conditions on the initial data, we derive the crucial estimates that imply the diffusion phenomenon for dissipative linear wave equations in an exterior domain. In deriving the diffusion phenomenon for dissipative wave equations, the time integral method developed by Ikehata and Matsuyama (Sci. Math. Japon. 55 (2002) 33) plays an effective role.
Why Might Relative Fit Indices Differ between Estimators?
ERIC Educational Resources Information Center
Weng, Li-Jen; Cheng, Chung-Ping
1997-01-01
Relative fit indices using the null model as the reference point in computation may differ across estimation methods, as this article illustrates by comparing maximum likelihood, ordinary least squares, and generalized least squares estimation in structural equation modeling. The illustration uses a covariance matrix for six observed variables…
Semi-analytical approach to estimate railroad tank car shell puncture
DOT National Transportation Integrated Search
2011-03-16
This paper describes the development of engineering-based equations to estimate the puncture resistance of railroad tank cars under a generalized shell or side impact scenario. Resistance to puncture is considered in terms of puncture velocity, which...
Roland, Mark A.; Stuckey, Marla H.
2008-01-01
Regression equations were developed for estimating flood flows at selected recurrence intervals for ungaged streams in Pennsylvania with drainage areas less than 2,000 square miles. These equations were developed utilizing peak-flow data from 322 streamflow-gaging stations within Pennsylvania and surrounding states. All stations used in the development of the equations had 10 or more years of record and included active and discontinued continuous-record as well as crest-stage partial-record stations. The state was divided into four regions, and regional regression equations were developed to estimate the 2-, 5-, 10-, 50-, 100-, and 500-year recurrence-interval flood flows. The equations were developed by means of a regression analysis that utilized basin characteristics and flow data associated with the stations. Significant explanatory variables at the 95-percent confidence level for one or more regression equations included the following basin characteristics: drainage area; mean basin elevation; and the percentages of carbonate bedrock, urban area, and storage within a basin. The regression equations can be used to predict the magnitude of flood flows for specified recurrence intervals for most streams in the state; however, they are not valid for streams with drainage areas generally greater than 2,000 square miles or with substantial regulation, diversion, or mining activity within the basin. Estimates of flood-flow magnitude and frequency for streamflow-gaging stations substantially affected by upstream regulation are also presented.
Jennings, M.E.; Thomas, W.O.; Riggs, H.C.
1994-01-01
For many years, the U.S. Geological Survey (USGS) has been involved in the development of regional regression equations for estimating flood magnitude and frequency at ungaged sites. These regression equations are used to transfer flood characteristics from gaged to ungaged sites through the use of watershed and climatic characteristics as explanatory or predictor variables. Generally, these equations have been developed on a statewide or metropolitan area basis as part of cooperative study programs with specific State Departments of Transportation or specific cities. The USGS, in cooperation with the Federal Highway Administration and the Federal Emergency Management Agency, has compiled all the current (as of September 1993) statewide and metropolitan area regression equations into a micro-computer program titled the National Flood Frequency Program. This program includes regression equations for estimating flood-peak discharges and techniques for estimating a typical flood hydrograph for a given recurrence interval peak discharge for unregulated rural and urban watersheds. These techniques should be useful to engineers and hydrologists for planning and design applications. This report summarizes the statewide regression equations for rural watersheds in each State, summarizes the applicable metropolitan area or statewide regression equations for urban watersheds, describes the National Flood Frequency Program for making these computations, and provides much of the reference information on the extrapolation variables needed to run the program.
van Noort, Paul C M
2009-06-01
Fugacity ratios of organic compounds are used to calculate (subcooled) liquid properties, such as solubility or vapour pressure, from solid properties and vice versa. They can be calculated from the entropy of fusion, the melting temperature, and heat capacity data for the solid and the liquid. For many organic compounds, values for the fusion entropy are lacking. Heat capacity data are even scarcer. In the present study, semi-empirical compound class specific equations were derived to estimate fugacity ratios from molecular weight and melting temperature for polycyclic aromatic hydrocarbons and polychlorinated benzenes, biphenyls, dibenzo[p]dioxins and dibenzofurans. These equations estimate fugacity ratios with an average standard error of about 0.05 log units. In addition, for compounds with known fusion entropy values, a general semi-empirical correction equation based on molecular weight and melting temperature was derived for estimation of the contribution of heat capacity differences to the fugacity ratio. This equation estimates the heat capacity contribution correction factor with an average standard error of 0.02 log units for polycyclic aromatic hydrocarbons, polychlorinated benzenes, biphenyls, dibenzo[p]dioxins and dibenzofurans.
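As background, when heat-capacity differences are neglected the fugacity ratio is commonly approximated from the entropy of fusion and the melting temperature by the standard relation

\[ \ln F \;=\; \ln\frac{f_S}{f_L} \;\approx\; -\,\frac{\Delta S_{\mathrm{fus}}\,(T_m - T)}{R\,T}, \]

where ΔS_fus is the entropy of fusion, T_m the melting temperature, T the system temperature, and R the gas constant. This textbook form is quoted for orientation only; the compound-class equations derived in the paper replace ΔS_fus, and the heat-capacity correction, with estimates based on molecular weight and melting temperature.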
Streamflow characteristics related to channel geometry of streams in western United States
Hedman, E.R.; Osterkamp, W.R.
1982-01-01
Assessment of surface-mining and reclamation activities generally requires extensive hydrologic data. Adequate streamflow data from instrumented gaging stations rarely are available, and estimates of surface-water discharge based on rainfall-runoff models, drainage area, and basin characteristics sometimes have proven unreliable. Channel-geometry measurements offer an alternative method of quickly and inexpensively estimating streamflow characteristics for ungaged streams. The method uses the empirical development of equations to yield a discharge value from channel-geometry and channel-material data. The equations are developed by collecting data at numerous streamflow-gaging sites and statistically relating those data to selected discharge characteristics. Mean annual runoff and flood discharges with selected recurrence intervals can be estimated for perennial, intermittent, and ephemeral streams. The equations were developed from data collected in the western one-half of the conterminous United States. The effects of channel-material and runoff characteristics are accounted for in the equations.
Langer, Raquel D; Matias, Catarina N; Borges, Juliano H; Cirolini, Vagner X; Páscoa, Mauro A; Guerra-Júnior, Gil; Gonçalves, Ezequiel M
2018-03-26
Bioelectrical impedance analysis (BIA) is a practical and rapid method for making a longitudinal analysis of changes in body composition. However, most BIA validation studies have been performed in a clinical population and only at one moment, or point in time (cross-sectional study). The aim of this study is to investigate the accuracy of predictive equations based on BIA with regard to the changes in fat-free mass (FFM) in Brazilian male army cadets after 7 mo of military training. The values used were determined using dual-energy X-ray absorptiometry (DXA) as a reference method. The study included 310 male Brazilian Army cadets (aged 17-24 yr). FFM was measured using eight general predictive BIA equations, with one equation specifically applied to this population sample, and the values were compared with results obtained using DXA. Student's t-test, the adjusted coefficient of determination (R²), the standard error of estimation (SEE), Lin's approach, and the Bland-Altman test were used to determine the accuracy of the predictive BIA equations used to estimate FFM in this population and between the two moments (pre- and post-moment). The FFM measured using the nine predictive BIA equations, and determined using DXA at the post-moment, showed a significant increase when compared with the pre-moment (p < 0.05). All nine predictive BIA equations were able to detect FFM changes in the army cadets between the two moments in a very similar way to the reference method (DXA). However, only the BIA equation specific to this population showed no significant differences from DXA in the FFM estimates at the pre- and post-moments of the military routine. All predictive BIA equations showed large limits of agreement using the Bland-Altman approach. The eight general predictive BIA equations used in this study were not found to be valid for analyzing the FFM changes in the Brazilian male army cadets after a period of approximately 7 mo of military training. Although the BIA equation specific to this population is dependent on the amount of FFM, it appears to be a good alternative to DXA for assessing FFM in Brazilian male army cadets.
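The Bland-Altman comparison used above is simple to reproduce; the sketch below implements the generic limits-of-agreement calculation (bias plus or minus 1.96 standard deviations of the paired differences), with made-up arrays standing in for the DXA and BIA fat-free mass values.

```python
import numpy as np

def bland_altman(ref, alt):
    """Bias and 95% limits of agreement between two measurement methods."""
    ref, alt = np.asarray(ref, float), np.asarray(alt, float)
    diff = alt - ref
    bias = diff.mean()
    sd = diff.std(ddof=1)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Hypothetical FFM values (kg): DXA as the reference, one BIA equation as the alternative.
dxa = np.array([55.2, 60.1, 58.4, 62.3, 57.0, 65.8])
bia = np.array([54.0, 61.5, 57.2, 64.0, 58.1, 66.9])
bias, loa = bland_altman(dxa, bia)
print(f"bias = {bias:.2f} kg, limits of agreement = {loa[0]:.2f} to {loa[1]:.2f} kg")
```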
Adaptive Elastic Net for Generalized Methods of Moments.
Caner, Mehmet; Zhang, Hao Helen
2014-01-30
Model selection and estimation are crucial parts of econometrics. This paper introduces a new technique that can simultaneously estimate and select the model in the generalized method of moments (GMM) context. The GMM is particularly powerful for analyzing complex data sets such as longitudinal and panel data, and it has wide applications in econometrics. This paper extends the least-squares-based adaptive elastic net estimator of Zou and Zhang (2009) to nonlinear equation systems with endogenous variables. The extension is not trivial and involves a new proof technique because the estimators lack closed-form solutions. Compared to the Bridge-GMM of Caner (2009), we allow the number of parameters to diverge to infinity as well as collinearity among a large number of variables; the redundant parameters are also set to zero via a data-dependent technique. This method has the oracle property, meaning that we can estimate the nonzero parameters with their standard limit while the redundant parameters are dropped from the equations simultaneously. Numerical examples are used to illustrate the performance of the new method.
Unstable solitary-wave solutions of the generalized Benjamin-Bona-Mahony equation
DOE Office of Scientific and Technical Information (OSTI.GOV)
McKinney, W.R.; Restrepo, J.M.; Bona, J.L.
1994-06-01
The evolution of solitary waves of the gBBM equation is investigated computationally. The experiments confirm previously derived theoretical stability estimates and, more importantly, yield insights into their behavior. For example, highly energetic unstable solitary waves, when perturbed, are shown to evolve into several stable solitary waves.
The pEst version 2.1 user's manual
NASA Technical Reports Server (NTRS)
Murray, James E.; Maine, Richard E.
1987-01-01
This report is a user's manual for version 2.1 of pEst, a FORTRAN 77 computer program for interactive parameter estimation in nonlinear dynamic systems. The pEst program allows the user complete generality in defining the nonlinear equations of motion used in the analysis. The equations of motion are specified by a set of FORTRAN subroutines; a set of routines for a general aircraft model is supplied with the program and is described in the report. The report also briefly discusses the scope of the parameter estimation problem the program addresses. The report gives detailed explanations of the purpose and usage of all available program commands and a description of the computational algorithms used in the program.
Peak-flow characteristics of Wyoming streams
Miller, Kirk A.
2003-01-01
Peak-flow characteristics for unregulated streams in Wyoming are described in this report. Frequency relations for annual peak flows through water year 2000 at 364 streamflow-gaging stations in and near Wyoming were evaluated and revised or updated as needed. Analyses of historical floods, temporal trends, and generalized skew were included in the evaluation. Physical and climatic basin characteristics were determined for each gaging station using a geographic information system. Gaging stations with similar peak-flow and basin characteristics were grouped into six hydrologic regions. Regional statistical relations between peak-flow and basin characteristics were explored using multiple-regression techniques. Generalized least squares regression equations for estimating magnitudes of annual peak flows with selected recurrence intervals from 1.5 to 500 years were developed for each region. Average standard errors of estimate range from 34 to 131 percent. Average standard errors of prediction range from 35 to 135 percent. Several statistics for evaluating and comparing the errors in these estimates are described. Limitations of the equations are described. Methods for applying the regional equations for various circumstances are listed and examples are given.
NASA Astrophysics Data System (ADS)
Wang, Yu-Zhu; Wei, Changhua
2018-04-01
In this paper, we investigate the initial value problem for the generalized double dispersion equation in R^n. Weighted decay estimates and asymptotic profiles of global solutions are established for n ≥ 3. The global existence result was already proved by Kawashima and the first author in Kawashima and Wang (Anal Appl 13:233-254, 2015). Here, we show that the nonlinear term plays an important role in this asymptotic profile.
A concept for a fuel efficient flight planning aid for general aviation
NASA Technical Reports Server (NTRS)
Collins, B. P.; Haines, A. L.; Wales, C. J.
1982-01-01
A core equation for estimating fuel burn from path profile data was developed. This equation was used as a necessary ingredient in a dynamic program to define a fuel-efficient flight path. The resultant algorithm is oriented toward use by general aviation. The pilot provides a description of the desired ground track, standard aircraft parameters, and weather at selected waypoints. The algorithm then derives the fuel-efficient altitudes and velocities at the waypoints.
2006-06-01
The Universal Soil Loss Equation (USLE) and the Revised Universal Soil Loss Equation (RUSLE) continue to be widely accepted methods for estimating sediment loss...range areas. Therefore, a generalized design methodology using the Universal Soil Loss Equation (USLE) is presented to accommodate the variations...constructed use the slope most suitable to the area topography (3:1 or 4:1). Step 4: Using the Universal Soil Loss Equation (USLE), find the values of A
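For reference, the USLE referred to in this fragment is the standard multiplicative soil-loss model, usually written as

\[ A = R \cdot K \cdot L \cdot S \cdot C \cdot P, \]

where A is the estimated average annual soil loss, R the rainfall-runoff erosivity factor, K the soil erodibility factor, L and S the slope-length and slope-steepness factors, C the cover-management factor, and P the support-practice factor. This is the textbook form quoted for orientation; the design methodology in the cited document supplies the factor values for the specific slopes and covers considered.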
Rogers, Paul; Stoner, Julie
2016-01-01
Regression models for correlated binary outcomes are commonly fit using a Generalized Estimating Equations (GEE) methodology. GEE uses the Liang and Zeger sandwich estimator to produce unbiased standard error estimators for regression coefficients in large sample settings even when the covariance structure is misspecified. The sandwich estimator performs optimally in balanced designs when the number of participants is large and there are few repeated measurements. The sandwich estimator is not without drawbacks; its asymptotic properties do not hold in small sample settings. In these situations, the sandwich estimator is biased downwards, underestimating the variances. In this project, a modified form for the sandwich estimator is proposed to correct this deficiency. The performance of this new sandwich estimator is compared to the traditional Liang and Zeger estimator as well as to the alternative forms proposed by Morel, by Pan, and by Mancl and DeRouen. The performance of each estimator was assessed with 95% coverage probabilities for the regression coefficient estimators using simulated data under various combinations of sample sizes and outcome prevalence values with an Independence (IND), Autoregressive (AR), and Compound Symmetry (CS) correlation structure. This research is motivated by investigations involving rare-event outcomes in aviation data. PMID:26998504
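As a rough illustration of the kind of comparison described above, statsmodels' GEE implementation can report either the usual Liang-Zeger robust covariance or, in recent versions, a Mancl-DeRouen style bias-reduced covariance through the cov_type argument of fit. The flag name and availability should be verified against your statsmodels version (treat it as an assumption here), and this is not the modified estimator proposed in the paper.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.genmod.cov_struct import Exchangeable

rng = np.random.default_rng(2)
n_clusters, m = 20, 4                        # deliberately small sample
cluster = np.repeat(np.arange(n_clusters), m)
x = rng.normal(size=n_clusters * m)
u = np.repeat(rng.normal(0.0, 0.5, n_clusters), m)
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-(0.3 * x + u))))
df = pd.DataFrame({"y": y, "x": x, "cluster": cluster})

model = sm.GEE.from_formula("y ~ x", groups="cluster", data=df,
                            family=sm.families.Binomial(),
                            cov_struct=Exchangeable())
res_robust = model.fit()                               # Liang-Zeger sandwich covariance
res_bc = model.fit(cov_type="bias_reduced")            # bias-reduced covariance (assumed flag)
print(res_robust.bse)                                  # robust standard errors
print(res_bc.bse)                                      # bias-reduced standard errors
```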
1979-02-01
[Fragmentary abstract; only partially recoverable from a report on the fate and transport of organic chemicals.] Recoverable items include: the adsorption coefficient (at equilibrium) when hysteresis is apparent; the coefficient n in the Freundlich equation for soil or sediment adsorption isotherms, X = K C^(1/n); correlations between biodegradation and chemical structure for chemical classes (e.g., phenols); general diffusion coefficients and equations for organic compounds; and adsorption coefficients K and n* from the Freundlich equation together with desorption coefficients K'* and n'* from ...
Prediction of the Maximum Number of Repetitions and Repetitions in Reserve From Barbell Velocity.
García-Ramos, Amador; Torrejón, Alejandro; Feriche, Belén; Morales-Artacho, Antonio J; Pérez-Castilla, Alejandro; Padial, Paulino; Haff, Guy Gregory
2018-03-01
To provide 2 general equations to estimate the maximum possible number of repetitions (XRM) from the mean velocity (MV) of the barbell and the MV associated with a given number of repetitions in reserve, as well as to determine the between-sessions reliability of the MV associated with each XRM. After determination of the bench-press 1-repetition maximum (1RM; 1.15 ± 0.21 kg/kg body mass), 21 men (age 23.0 ± 2.7 y, body mass 72.7 ± 8.3 kg, body height 1.77 ± 0.07 m) completed 4 sets of as many repetitions as possible against relative loads of 60%1RM, 70%1RM, 80%1RM, and 90%1RM over 2 separate sessions. The different loads were tested in a randomized order with 10 min of rest between them. All repetitions were performed at the maximum intended velocity. Both the general equation to predict the XRM from the fastest MV of the set (CV = 15.8-18.5%) and the general equation to predict MV associated with a given number of repetitions in reserve (CV = 14.6-28.8%) failed to provide data with acceptable between-subjects variability. However, a strong relationship (median r² = .984) and acceptable reliability (CV < 10% and ICC > .85) were observed between the fastest MV of the set and the XRM when considering individual data. These results indicate that generalized group equations are not acceptable methods for estimating the XRM-MV relationship or the number of repetitions in reserve. When attempting to estimate the XRM-MV relationship, one must use individualized relationships to objectively estimate the exact number of repetitions that can be performed in a training set.
Ries(compiler), Kernell G.; With sections by Atkins, J. B.; Hummel, P.R.; Gray, Matthew J.; Dusenbury, R.; Jennings, M.E.; Kirby, W.H.; Riggs, H.C.; Sauer, V.B.; Thomas, W.O.
2007-01-01
The National Streamflow Statistics (NSS) Program is a computer program that should be useful to engineers, hydrologists, and others for planning, management, and design applications. NSS compiles all current U.S. Geological Survey (USGS) regional regression equations for estimating streamflow statistics at ungaged sites in an easy-to-use interface that operates on computers with Microsoft Windows operating systems. NSS expands on the functionality of the USGS National Flood Frequency Program, and replaces it. The regression equations included in NSS are used to transfer streamflow statistics from gaged to ungaged sites through the use of watershed and climatic characteristics as explanatory or predictor variables. Generally, the equations were developed on a statewide or metropolitan-area basis as part of cooperative study programs. Equations are available for estimating rural and urban flood-frequency statistics, such as the 100-year flood, for every state, for Puerto Rico, and for the island of Tutuila, American Samoa. Equations are available for estimating other statistics, such as the mean annual flow, monthly mean flows, flow-duration percentiles, and low-flow frequencies (such as the 7-day, 10-year low flow) for less than half of the states. All equations available for estimating streamflow statistics other than flood-frequency statistics assume rural (non-regulated, non-urbanized) conditions. The NSS output provides indicators of the accuracy of the estimated streamflow statistics. The indicators may include any combination of the standard error of estimate, the standard error of prediction, the equivalent years of record, or 90 percent prediction intervals, depending on what was provided by the authors of the equations. The program includes several other features that can be used only for flood-frequency estimation. These include the ability to generate flood-frequency plots, and plots of typical flood hydrographs for selected recurrence intervals, estimates of the probable maximum flood, extrapolation of the 500-year flood when an equation for estimating it is not available, and weighting techniques to improve flood-frequency estimates for gaging stations and ungaged sites on gaged streams. This report describes the regionalization techniques used to develop the equations in NSS and provides guidance on the applicability and limitations of the techniques. The report also includes a user's manual and a summary of equations available for estimating basin lagtime, which is needed by the program to generate flood hydrographs. The NSS software and accompanying database, and the documentation for the regression equations included in NSS, are available on the Web at http://water.usgs.gov/software/.
Malik, Suheel Abdullah; Qureshi, Ijaz Mansoor; Amir, Muhammad; Malik, Aqdas Naveed; Haq, Ihsanul
2015-01-01
In this paper, a new heuristic scheme for the approximate solution of the generalized Burgers'-Fisher equation is proposed. The scheme is based on the hybridization of the Exp-function method with a nature-inspired algorithm. The given nonlinear partial differential equation (NPDE) is converted through substitution into a nonlinear ordinary differential equation (NODE). The travelling wave solution is approximated by the Exp-function method with unknown parameters. The unknown parameters are estimated by transforming the NODE into an equivalent global error minimization problem by using a fitness function. The popular genetic algorithm (GA) is used to solve the minimization problem and to obtain the unknown parameters. The proposed scheme is successfully implemented to solve the generalized Burgers'-Fisher equation. The comparison of numerical results with the exact solutions, and with the solutions obtained using some traditional methods, including the Adomian decomposition method (ADM), the homotopy perturbation method (HPM), and the optimal homotopy asymptotic method (OHAM), shows that the suggested scheme is fairly accurate and viable for solving such problems.
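A compact way to see the idea of the scheme (travelling-wave substitution, a parametric Exp-function trial solution, and global error minimization by an evolutionary optimizer) is sketched below. scipy's differential_evolution stands in for the genetic algorithm, and the equation parameters, trial form, and collocation grid are assumptions chosen for illustration; no claim is made that this reproduces the paper's results.

```python
import numpy as np
from scipy.optimize import differential_evolution

# Generalized Burgers'-Fisher equation (assumed form):
#   u_t + alpha * u**delta * u_x = u_xx + beta * u * (1 - u**delta)
alpha, beta, delta = 1.0, 1.0, 1.0

xi = np.linspace(-20.0, 20.0, 401)           # travelling-wave coordinate xi = x - c*t
h = xi[1] - xi[0]

def trial(p):
    # Exp-function-style trial solution U(xi) = (a0 + a1*e) / (1 + b1*e), e = exp(k*xi)
    a0, a1, b1, k, c = p
    e = np.exp(np.clip(k * xi, -50.0, 50.0))
    return (a0 + a1 * e) / (1.0 + b1 * e), c

def fitness(p):
    U, c = trial(p)
    Up = np.gradient(U, h)
    Upp = np.gradient(Up, h)
    # Travelling-wave reduction: -c*U' + alpha*U**delta*U' - U'' - beta*U*(1 - U**delta) = 0
    res = -c * Up + alpha * U**delta * Up - Upp - beta * U * (1.0 - U**delta)
    bc = (U[0] - 1.0) ** 2 + U[-1] ** 2       # kink boundary conditions U(-inf)=1, U(+inf)=0
    return np.mean(res**2) + bc

bounds = [(0.0, 2.0), (-1.0, 1.0), (0.1, 5.0), (0.05, 2.0), (0.0, 4.0)]
result = differential_evolution(fitness, bounds, seed=0, tol=1e-10)
print(result.x, result.fun)
```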
Aloisio, Kathryn M.; Swanson, Sonja A.; Micali, Nadia; Field, Alison; Horton, Nicholas J.
2015-01-01
Clustered data arise in many settings, particularly within the social and biomedical sciences. As an example, multiple-source reports are commonly collected in child and adolescent psychiatric epidemiologic studies, where researchers use various informants (e.g., parent and adolescent) to provide a holistic view of a subject's symptomatology. Fitzmaurice et al. (1995) have described estimation of multiple-source models using a standard generalized estimating equation (GEE) framework. However, these studies often have missing data due to the additional stages of consent and assent required. The usual GEE is unbiased when missingness is Missing Completely at Random (MCAR) in the sense of Little and Rubin (2002). This is a strong assumption that may not be tenable. Other options, such as weighted generalized estimating equations (WGEE), are computationally challenging when missingness is non-monotone. Multiple imputation is an attractive method to fit incomplete data models while only requiring the less restrictive Missing at Random (MAR) assumption. Previously, estimation of partially observed clustered data was computationally challenging; however, recent developments in Stata have facilitated its use in practice. We demonstrate how to utilize multiple imputation in conjunction with a GEE to investigate the prevalence of disordered eating symptoms in adolescents, as reported by parents and adolescents, as well as factors associated with concordance and prevalence. The methods are motivated by the Avon Longitudinal Study of Parents and their Children (ALSPAC), a cohort study that enrolled more than 14,000 pregnant mothers in 1991-92 and has followed the health and development of their children at regular intervals. While point estimates were fairly similar to the GEE under MCAR, the MAR model had smaller standard errors, while requiring less stringent assumptions regarding missingness. PMID:25642154
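The pooling step that makes this a multiple-imputation analysis (Rubin's rules) is simple enough to sketch directly. The function below combines coefficient estimates and variances from m completed-data GEE fits; it is a generic illustration, not the ALSPAC analysis code.

```python
import numpy as np

def pool_rubin(estimates, variances):
    """Combine m completed-data estimates and variances using Rubin's rules.

    estimates, variances : arrays of shape (m, p) for p coefficients.
    Returns the pooled estimates and pooled standard errors.
    """
    estimates = np.asarray(estimates, float)
    variances = np.asarray(variances, float)
    m = estimates.shape[0]
    qbar = estimates.mean(axis=0)               # pooled point estimates
    w = variances.mean(axis=0)                  # within-imputation variance
    b = estimates.var(axis=0, ddof=1)           # between-imputation variance
    t = w + (1.0 + 1.0 / m) * b                 # total variance
    return qbar, np.sqrt(t)

# Example with 5 imputations and 2 coefficients (intercept, slope).
est = np.array([[0.12, 0.50], [0.10, 0.48], [0.15, 0.52], [0.11, 0.49], [0.13, 0.51]])
var = np.full_like(est, 0.02 ** 2)
print(pool_rubin(est, var))
```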
Healy, Richard W.; Scanlon, Bridget R.
2010-01-01
A water budget is an accounting of water movement into and out of, and storage change within, some control volume. Universal and adaptable are adjectives that reflect key features of water-budget methods for estimating recharge. The universal concept of mass conservation of water implies that water-budget methods are applicable over any space and time scales (Healy et al., 2007). The water budget of a soil column in a laboratory can be studied at scales of millimeters and seconds. A water-budget equation is also an integral component of atmospheric general circulation models used to predict global climates over periods of decades or more. Water-budget equations can be easily customized by adding or removing terms to accurately portray the peculiarities of any hydrologic system. The equations are generally not bound by assumptions on mechanisms by which water moves into, through, and out of the control volume of interest. So water-budget methods can be used to estimate both diffuse and focused recharge, and recharge estimates are unaffected by phenomena such as preferential flow paths within the unsaturated zone.Water-budget methods represent the largest class of techniques for estimating recharge. Most hydrologic models are derived from a water-budget equation and can therefore be classified as water-budget models. It is not feasible to address all water-budget methods in a single chapter. This chapter is limited to discussion of the “residual” water-budget approach, whereby all variables in a water-budget equation, except for recharge, are independently measured or estimated and recharge is set equal to the residual. This chapter is closely linked with Chapter 3, on modeling methods, because the equations presented here form the basis of many models and because models are often used to estimate individual components in water-budget studies. Water budgets for streams and other surface-water bodies are addressed in Chapter 4. The use of soil-water budgets and lysimeters for determining potential recharge and evapotranspiration from changes in water storage is discussed in Chapter 5. Aquifer water-budget methods based on the measurement of groundwater levels are described in Chapter 6.
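As an orientation to the residual approach discussed here, a generic water-budget equation solved for recharge can be written schematically as

\[ R \;=\; P \;+\; Q_{\mathrm{in}} \;-\; Q_{\mathrm{out}} \;-\; \mathrm{ET} \;-\; \Delta S, \]

where R is recharge, P precipitation, Q_in and Q_out the surface-water and groundwater flows into and out of the control volume, ET evapotranspiration, and ΔS the change in storage. In the residual method, every term except R is measured or estimated independently and R is set equal to what remains; the terms actually retained depend on the hydrologic system being studied.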
Polynomial mixture method of solving ordinary differential equations
NASA Astrophysics Data System (ADS)
Shahrir, Mohammad Shazri; Nallasamy, Kumaresan; Ratnavelu, Kuru; Kamali, M. Z. M.
2017-11-01
In this paper, a numerical solution of a fuzzy quadratic Riccati differential equation is estimated using a proposed new approach that provides a mixture of polynomials in which the right mixture is generated iteratively. This mixture provides a generalized formalism of traditional Neural Networks (NN). Previous works have shown reliable results using the Runge-Kutta 4th order method (RK4). This can be achieved by solving the first-order nonlinear ordinary differential equation (ODE) commonly found in Riccati differential equations. Research has shown improved results relative to the RK4 method. It can be said that the Polynomial Mixture Method (PMM) shows promising results, with the advantages of continuous estimation and improved accuracy over Mabood et al., RK4, Multi-Agent NN, and the Neuro Method (NM).
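Since the abstract benchmarks against RK4, a minimal RK4 integration of a crisp (non-fuzzy) quadratic Riccati test problem, y' = 1 + 2y - y^2 with y(0) = 0, is sketched below together with its known closed-form solution. This is an illustrative baseline only, not the proposed polynomial mixture method.

```python
import numpy as np

def f(t, y):
    # Quadratic Riccati test problem: y' = 1 + 2*y - y**2, y(0) = 0
    return 1.0 + 2.0 * y - y**2

def rk4(f, y0, t0, t1, n):
    t = np.linspace(t0, t1, n + 1)
    h = (t1 - t0) / n
    y = np.empty(n + 1)
    y[0] = y0
    for i in range(n):
        k1 = f(t[i], y[i])
        k2 = f(t[i] + h / 2, y[i] + h / 2 * k1)
        k3 = f(t[i] + h / 2, y[i] + h / 2 * k2)
        k4 = f(t[i] + h, y[i] + h * k3)
        y[i + 1] = y[i] + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
    return t, y

t, y = rk4(f, 0.0, 0.0, 1.0, 100)
exact = 1.0 + np.sqrt(2) * np.tanh(np.sqrt(2) * t
                                   + 0.5 * np.log((np.sqrt(2) - 1) / (np.sqrt(2) + 1)))
print(np.max(np.abs(y - exact)))    # maximum absolute RK4 error on [0, 1]
```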
General very special relativity in Finsler cosmology
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kouretsis, A. P.; Stathakopoulos, M.; Stavrinos, P. C.
2009-05-15
General very special relativity (GVSR) is the curved space-time of very special relativity (VSR) proposed by Cohen and Glashow. The geometry of general very special relativity possesses a line element of Finsler geometry introduced by Bogoslovsky. We calculate the Einstein field equations and derive a modified Friedmann-Robertson-Walker cosmology for an osculating Riemannian space. The Friedmann equation of motion leads to an explanation of the cosmological acceleration in terms of an alternative non-Lorentz invariant theory. A first order approach for a primordial-spurionic vector field introduced into the metric gives back an estimation of the energy evolution and inflation.
Analysis of cohort studies with multivariate and partially observed disease classification data.
Chatterjee, Nilanjan; Sinha, Samiran; Diver, W Ryan; Feigelson, Heather Spencer
2010-09-01
Complex diseases like cancers can often be classified into subtypes using various pathological and molecular traits of the disease. In this article, we develop methods for analysis of disease incidence in cohort studies incorporating data on multiple disease traits using a two-stage semiparametric Cox proportional hazards regression model that allows one to examine the heterogeneity in the effect of the covariates by the levels of the different disease traits. For inference in the presence of missing disease traits, we propose a generalization of an estimating equation approach for handling missing cause of failure in competing-risk data. We prove asymptotic unbiasedness of the estimating equation method under a general missing-at-random assumption and propose a novel influence-function-based sandwich variance estimator. The methods are illustrated using simulation studies and a real data application involving the Cancer Prevention Study II nutrition cohort.
NASA Astrophysics Data System (ADS)
Kwon, Young-Sam; Li, Fucai
2018-03-01
In this paper we study the incompressible limit of the degenerate quantum compressible Navier-Stokes equations in a periodic domain T^3 and in the whole space R^3 with general initial data. In the periodic case, by applying the refined relative entropy method and carrying out a detailed analysis of the oscillations of the velocity, we prove rigorously that the gradient part of the weak solutions (velocity) of the degenerate quantum compressible Navier-Stokes equations converges to the strong solution of the incompressible Navier-Stokes equations. Our results improve considerably the ones obtained by Yang, Ju and Yang [25], where only the well-prepared initial data case is considered. For the whole-space case, thanks to the Strichartz estimates for linear wave equations, we can obtain the convergence of the weak solutions of the degenerate quantum compressible Navier-Stokes equations to the strong solution of the incompressible Navier-Stokes/Euler equations with a linear damping term. Moreover, the convergence rates are also given.
Chow, Sy-Miin; Bendezú, Jason J.; Cole, Pamela M.; Ram, Nilam
2016-01-01
Several approaches currently exist for estimating the derivatives of observed data for model exploration purposes, including functional data analysis (FDA), generalized local linear approximation (GLLA), and generalized orthogonal local derivative approximation (GOLD). These derivative estimation procedures can be used in a two-stage process to fit mixed effects ordinary differential equation (ODE) models. While the performance and utility of these routines for estimating linear ODEs have been established, they have not yet been evaluated in the context of nonlinear ODEs with mixed effects. We compared properties of the GLLA and GOLD to an FDA-based two-stage approach denoted herein as functional ordinary differential equation with mixed effects (FODEmixed) in a Monte Carlo study using a nonlinear coupled oscillators model with mixed effects. Simulation results showed that overall, the FODEmixed outperformed both the GLLA and GOLD across all the embedding dimensions considered, but a novel use of a fourth-order GLLA approach combined with very high embedding dimensions yielded estimation results that almost paralleled those from the FODEmixed. We discuss the strengths and limitations of each approach and demonstrate how output from each stage of FODEmixed may be used to inform empirical modeling of young children’s self-regulation. PMID:27391255
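One common matrix formulation of GLLA (following the cited Boker et al. approach; the weighting details may differ from the versions evaluated in the paper, so treat this as an illustrative sketch) estimates derivatives by a local polynomial least-squares fit to a time-delay embedded copy of the series.

```python
import math
import numpy as np

def glla(x, embed_dim=5, order=2, dt=1.0):
    """Estimate derivatives 0..order of a time series by generalized local linear approximation.

    Returns an array of shape (n_windows, order + 1): the estimated value,
    first derivative, second derivative, ... for each embedding window.
    """
    x = np.asarray(x, float)
    n = x.size - embed_dim + 1
    # Time-delay embedding: each row is one window of length embed_dim.
    X = np.column_stack([x[i:i + n] for i in range(embed_dim)])
    # Design matrix of the local Taylor polynomial: W[j, k] = offset_j**k / k!
    offsets = (np.arange(embed_dim) - (embed_dim - 1) / 2.0) * dt
    W = np.column_stack([offsets**k / math.factorial(k) for k in range(order + 1)])
    # Least-squares derivative estimates for every window.
    return X @ W @ np.linalg.inv(W.T @ W)

# Example: derivatives of a damped oscillator trajectory sampled at dt = 0.1.
t = np.arange(0.0, 20.0, 0.1)
x = np.exp(-0.1 * t) * np.cos(t)
d = glla(x, embed_dim=7, order=2, dt=0.1)
print(d[:3])    # [x, dx/dt, d2x/dt2] estimates for the first windows
```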
Ries, Kernell G.; Crouse, Michele Y.
2002-01-01
For many years, the U.S. Geological Survey (USGS) has been developing regional regression equations for estimating flood magnitude and frequency at ungaged sites. These regression equations are used to transfer flood characteristics from gaged to ungaged sites through the use of watershed and climatic characteristics as explanatory or predictor variables. Generally, these equations have been developed on a Statewide or metropolitan-area basis as part of cooperative study programs with specific State Departments of Transportation. In 1994, the USGS released a computer program titled the National Flood Frequency Program (NFF), which compiled all the USGS available regression equations for estimating the magnitude and frequency of floods in the United States and Puerto Rico. NFF was developed in cooperation with the Federal Highway Administration and the Federal Emergency Management Agency. Since the initial release of NFF, the USGS has produced new equations for many areas of the Nation. A new version of NFF has been developed that incorporates these new equations and provides additional functionality and ease of use. NFF version 3 provides regression-equation estimates of flood-peak discharges for unregulated rural and urban watersheds, flood-frequency plots, and plots of typical flood hydrographs for selected recurrence intervals. The Program also provides weighting techniques to improve estimates of flood-peak discharges for gaging stations and ungaged sites. The information provided by NFF should be useful to engineers and hydrologists for planning and design applications. This report describes the flood-regionalization techniques used in NFF and provides guidance on the applicability and limitations of the techniques. The NFF software and the documentation for the regression equations included in NFF are available at http://water.usgs.gov/software/nff.html.
ERIC Educational Resources Information Center
Furlow, Carolyn F.; Beretvas, S. Natasha
2005-01-01
Three methods of synthesizing correlations for meta-analytic structural equation modeling (SEM) under different degrees and mechanisms of missingness were compared for the estimation of correlation and SEM parameters and goodness-of-fit indices by using Monte Carlo simulation techniques. A revised generalized least squares (GLS) method for…
On the validity of the Arrhenius equation for electron attachment rate coefficients.
Fabrikant, Ilya I; Hotop, Hartmut
2008-03-28
The validity of the Arrhenius equation for dissociative electron attachment rate coefficients is investigated. A general analysis allows us to obtain estimates of the upper temperature bound for the range of validity of the Arrhenius equation in the endothermic case and both lower and upper bounds in the exothermic case with a reaction barrier. The results of the general discussion are illustrated by numerical examples whereby the rate coefficient, as a function of temperature for dissociative electron attachment, is calculated using the resonance R-matrix theory. In the endothermic case, the activation energy in the Arrhenius equation is close to the threshold energy, whereas in the case of exothermic reactions with an intermediate barrier, the activation energy is found to be substantially lower than the barrier height.
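For reference, the Arrhenius form whose range of validity is examined here is the standard one,

\[ k(T) \;=\; A \exp\!\left(-\frac{E_a}{k_B T}\right), \]

where k(T) is the attachment rate coefficient, A the pre-exponential factor, E_a the activation energy, and k_B the Boltzmann constant (equivalently, the gas constant R with a molar activation energy). The paper's analysis bounds the temperature range over which this form tracks the R-matrix rate coefficients.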
Effective quadrature formula in solving linear integro-differential equations of order two
NASA Astrophysics Data System (ADS)
Eshkuvatov, Z. K.; Kammuji, M.; Long, N. M. A. Nik; Yunus, Arif A. M.
2017-08-01
In this note, we approximately solve a general form of Fredholm-Volterra integro-differential equations (IDEs) of order two with boundary conditions and show that the proposed method is effective and reliable. Initially, the IDE is reduced to an integral equation of the third kind by using standard integration techniques and an identity between multiple and single integrals; truncated Legendre series are then used to estimate the unknown function. For the kernel integrals, we apply the Gauss-Legendre quadrature formula, and the collocation points are chosen as the roots of the Legendre polynomials. Finally, the integral equation of the third kind is reduced to a system of algebraic equations, and the Gaussian elimination method is applied to obtain approximate solutions. Numerical examples and comparisons with other methods reveal that the proposed method is very effective and outperforms the others in many cases. The general theory of existence of the solution is also discussed.
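To show the quadrature ingredient in isolation, a minimal Gauss-Legendre approximation of a smooth integral over [a, b] is sketched below using numpy's leggauss nodes and weights. This illustrates only the quadrature rule, not the full third-kind integral-equation solver described in the note; the integrand is a made-up example.

```python
import numpy as np

def gauss_legendre(f, a, b, n):
    """Approximate the integral of f over [a, b] with an n-point Gauss-Legendre rule."""
    nodes, weights = np.polynomial.legendre.leggauss(n)
    x = 0.5 * (b - a) * nodes + 0.5 * (a + b)     # map nodes from [-1, 1] to [a, b]
    return 0.5 * (b - a) * np.sum(weights * f(x))

# Example: a smooth kernel-like integrand; an 8-point rule is already very accurate.
f = lambda x: np.exp(-x) * np.cos(3.0 * x)
print(gauss_legendre(f, 0.0, 2.0, 8))
```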
Murray, Aja Louise; Booth, Tom; Eisner, Manuel; Obsuth, Ingrid; Ribeaud, Denis
2018-05-22
Whether or not importance should be placed on an all-encompassing general factor of psychopathology (or p factor) in classifying, researching, diagnosing, and treating psychiatric disorders depends (among other issues) on the extent to which comorbidity is symptom-general rather than staying largely within the confines of narrower transdiagnostic factors such as internalizing and externalizing. In this study, we compared three methods of estimating p factor strength. We compared omega hierarchical and explained common variance calculated from confirmatory factor analysis (CFA) bifactor models with maximum likelihood (ML) estimation, from exploratory structural equation modeling/exploratory factor analysis models with a bifactor rotation, and from Bayesian structural equation modeling (BSEM) bifactor models. Our simulation results suggested that BSEM with small variance priors on secondary loadings might be the preferred option. However, CFA with ML also performed well provided secondary loadings were modeled. We provide two empirical examples of applying the three methodologies using a normative sample of youth (z-proso, n = 1,286) and a university counseling sample (n = 359).
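The two strength indices compared in the study can be computed directly from a bifactor loading solution. The short function below uses the standard formulas for omega hierarchical and explained common variance (ECV); the loadings are made up, and this generic sketch is not the z-proso or counseling-sample analysis.

```python
import numpy as np

def bifactor_strength(general, group, uniqueness):
    """Omega hierarchical and ECV from bifactor loadings.

    general    : (n_items,) general-factor loadings
    group      : list of (n_items,) arrays of group-factor loadings
                 (zeros for items that do not load on that factor)
    uniqueness : (n_items,) unique variances
    """
    general = np.asarray(general, float)
    uniqueness = np.asarray(uniqueness, float)
    sum_g = general.sum()
    sum_g_sq = (general**2).sum()
    group_sums_sq = sum(np.asarray(g, float).sum() ** 2 for g in group)
    group_sq = sum((np.asarray(g, float) ** 2).sum() for g in group)
    total_var = sum_g**2 + group_sums_sq + uniqueness.sum()
    omega_h = sum_g**2 / total_var              # general-factor share of total variance
    ecv = sum_g_sq / (sum_g_sq + group_sq)      # general-factor share of common variance
    return omega_h, ecv

# Hypothetical 6-item example: two group factors of 3 items each.
gen = np.array([0.60, 0.50, 0.55, 0.60, 0.50, 0.45])
grp = [np.array([0.40, 0.35, 0.30, 0.0, 0.0, 0.0]),
       np.array([0.0, 0.0, 0.0, 0.45, 0.40, 0.35])]
uniq = 1.0 - gen**2 - (grp[0] + grp[1]) ** 2
print(bifactor_strength(gen, grp, uniq))
```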
GEE-Smoothing Spline in Semiparametric Model with Correlated Nominal Data
NASA Astrophysics Data System (ADS)
Ibrahim, Noor Akma; Suliadi
2010-11-01
In this paper we propose GEE-smoothing splines for the estimation of semiparametric models with correlated nominal data. The method can be seen as an extension of parametric generalized estimating equations to semiparametric models. The nonparametric component is estimated using a smoothing spline, specifically the natural cubic spline. We use a profile algorithm in the estimation of both the parametric and nonparametric components. The properties of the estimators are evaluated using simulation studies.
General Economic and Demographic Background and Projections for Indiana Library Services.
ERIC Educational Resources Information Center
Foust, James D.; Tower, Carl B.
Before future library needs can be estimated, economic and demographic variables that influence the demand for library services must be projected and estimating equations relating library needs to economic and demographic parameters developed. This study considers the size, location and age-sex characteristics of Indiana's current population and…
Estimation of flood-frequency characteristics of small urban streams in North Carolina
Robbins, J.C.; Pope, B.F.
1996-01-01
A statewide study was conducted to develop methods for estimating the magnitude and frequency of floods of small urban streams in North Carolina. This type of information is critical in the design of bridges, culverts and water-control structures, establishment of flood-insurance rates and flood-plain regulation, and for other uses by urban planners and engineers. Concurrent records of rainfall and runoff data collected in small urban basins were used to calibrate rainfall-runoff models. Historic rainfall records were used with the calibrated models to synthesize a long-term record of annual peak discharges. The synthesized record of annual peak discharges was used in a statistical analysis to determine flood-frequency distributions. These frequency distributions were used with distributions from previous investigations to develop a database for 32 small urban basins in the Blue Ridge-Piedmont, Sand Hills, and Coastal Plain hydrologic areas. The study basins ranged in size from 0.04 to 41.0 square miles. Data describing the size and shape of the basin, level of urban development, and climate and rural flood characteristics also were included in the database. Estimation equations were developed by relating flood-frequency characteristics to basin characteristics in a generalized least-squares regression analysis. The most significant basin characteristics are drainage area, impervious area, and rural flood discharge. The model error and prediction errors for the estimating equations were less than those for the national flood-frequency equations previously reported. Resulting equations, which have prediction errors generally less than 40 percent, can be used to estimate flood-peak discharges for 2-, 5-, 10-, 25-, 50-, and 100-year recurrence intervals for small urban basins across the State assuming negligible, sustainable, in-channel detention or basin storage.
Gotvald, Anthony J.; Barth, Nancy A.; Veilleux, Andrea G.; Parrett, Charles
2012-01-01
Methods for estimating the magnitude and frequency of floods in California that are not substantially affected by regulation or diversions have been updated. Annual peak-flow data through water year 2006 were analyzed for 771 streamflow-gaging stations (streamgages) in California having 10 or more years of data. Flood-frequency estimates were computed for the streamgages by using the expected moments algorithm to fit a Pearson Type III distribution to logarithms of annual peak flows for each streamgage. Low-outlier and historic information were incorporated into the flood-frequency analysis, and a generalized Grubbs-Beck test was used to detect multiple potentially influential low outliers. Special methods for fitting the distribution were developed for streamgages in the desert region in southeastern California. Additionally, basin characteristics for the streamgages were computed by using a geographical information system. Regional regression analysis, using generalized least squares regression, was used to develop a set of equations for estimating flows with 50-, 20-, 10-, 4-, 2-, 1-, 0.5-, and 0.2-percent annual exceedance probabilities for ungaged basins in California that are outside of the southeastern desert region. Flood-frequency estimates and basin characteristics for 630 streamgages were combined to form the final database used in the regional regression analysis. Five hydrologic regions were developed for the area of California outside of the desert region. The final regional regression equations are functions of drainage area and mean annual precipitation for four of the five regions. In one region, the Sierra Nevada region, the final equations are functions of drainage area, mean basin elevation, and mean annual precipitation. Average standard errors of prediction for the regression equations in all five regions range from 42.7 to 161.9 percent. For the desert region of California, an analysis of 33 streamgages was used to develop regional estimates of all three parameters (mean, standard deviation, and skew) of the log-Pearson Type III distribution. The regional estimates were then used to develop a set of equations for estimating flows with 50-, 20-, 10-, 4-, 2-, 1-, 0.5-, and 0.2-percent annual exceedance probabilities for ungaged basins. The final regional regression equations are functions of drainage area. Average standard errors of prediction for these regression equations range from 214.2 to 856.2 percent. Annual peak-flow data through water year 2006 were analyzed for eight streamgages in California having 10 or more years of data considered to be affected by urbanization. Flood-frequency estimates were computed for the urban streamgages by fitting a Pearson Type III distribution to logarithms of annual peak flows for each streamgage. Regression analysis could not be used to develop flood-frequency estimation equations for urban streams because of the limited number of sites. Flood-frequency estimates for the eight urban sites were graphically compared to flood-frequency estimates for 630 non-urban sites. The regression equations developed from this study will be incorporated into the U.S. Geological Survey (USGS) StreamStats program. The StreamStats program is a Web-based application that provides streamflow statistics and basin characteristics for USGS streamgages and ungaged sites of interest. 
StreamStats can also compute basin characteristics and provide estimates of streamflow statistics for ungaged sites when users select the location of a site along any stream in California.
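The at-station flood-frequency step of the California study can be illustrated with a deliberately simplified fit: method-of-moments estimation of a log-Pearson Type III distribution from annual peaks. The study itself used the Expected Moments Algorithm with regional skew and generalized Grubbs-Beck low-outlier screening, none of which appears in this sketch, and the peak-flow data below are invented.

```python
import numpy as np
from scipy import stats

# Fit a Pearson Type III distribution to base-10 logs of annual peaks (method of moments)
# and read off discharges at selected annual exceedance probabilities (AEPs).
peaks = np.array([1200., 850., 2100., 640., 3300., 980., 1750., 560., 2600., 1450.,
                  720., 1900., 1100., 4200., 880.])   # hypothetical annual peaks, ft^3/s

logq = np.log10(peaks)
mean, std = logq.mean(), logq.std(ddof=1)
skew = stats.skew(logq, bias=False)                   # station skew (no regional weighting here)

aep = np.array([0.50, 0.20, 0.10, 0.04, 0.02, 0.01, 0.005, 0.002])
quantiles = 10 ** stats.pearson3.ppf(1.0 - aep, skew, loc=mean, scale=std)
for p, q in zip(aep, quantiles):
    print(f"{p*100:5.1f}-percent AEP flood: {q:8.0f} ft^3/s")
```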
Zuo, Shu-di; Ren, Yin; Weng, Xian; Ding, Hong-feng; Luo, Yun-jian
2015-02-01
The biomass allometric equation (BAE) is widely used as a simple and reliable method for estimating forest biomass and carbon. In China, numerous studies have focused on BAEs for coniferous forests and pure broadleaved forests, and generalized BAEs have frequently been used to estimate the biomass and carbon of mixed broadleaved forests, although they can introduce large uncertainty into the estimates. In this study, we developed species-specific and generalized BAEs using biomass measurements for 9 common broadleaved trees (Castanopsis fargesii, C. lamontii, C. tibetana, Lithocarpus glaber, Sloanea sinensis, Daphniphyllum oldhami, Alniphyllum fortunei, Manglietia yuyuanensis, and Engelhardtia fenzlii) of subtropical evergreen broadleaved forest, and compared the differences between the species-specific and generalized BAEs. The results showed that D (diameter at breast height) was a better independent variable than the combined variable D2H of D and H (tree height) for estimating the biomass of branches, leaves, roots, the aboveground section, and the total tree, whereas D2H was better than D for estimating stem biomass. R2 (coefficient of determination) values of the BAEs for 6 species decreased when H was added as a second independent variable to the D-only BAEs, with the R2 value for S. sinensis decreasing by 5.6%. Compared with the generalized D- and D2H-based BAEs, the standard errors of estimate (SEE) of the species-specific BAEs decreased for 8 tree species, and a similar decreasing trend was observed for the individual components, with branch SEEs decreasing by 13.0% and 20.3%. Therefore, estimates of biomass carbon storage and its dynamics are strongly influenced by tree species and model type. To improve the accuracy of biomass and carbon estimates, differences in tree species and model type should be taken into account.
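A D-based BAE of the kind compared above is typically fitted as a log-log regression. The sketch below uses invented diameters and biomasses purely to show the mechanics of fitting ln(B) = a + b·ln(D) and computing R2 and SEE; it is not one of the study's equations.

```python
import numpy as np

# Fit a D-only biomass allometric equation, ln(B) = a + b*ln(D), by ordinary least squares
# and report R^2 and the standard error of estimate (SEE) on the log scale.
D = np.array([6.2, 8.5, 11.0, 14.3, 18.9, 22.4, 27.8, 33.1])       # DBH, cm (hypothetical)
B = np.array([9.1, 21.5, 44.0, 88.2, 190.0, 300.5, 540.0, 820.0])  # total biomass, kg (hypothetical)

x, y = np.log(D), np.log(B)
b, a = np.polyfit(x, y, 1)              # slope b and intercept a of the log-log regression
resid = y - (a + b * x)
r2 = 1.0 - resid.var() / y.var()
see = np.sqrt(np.sum(resid**2) / (len(y) - 2))

print(f"ln(B) = {a:.3f} + {b:.3f} ln(D),  R^2 = {r2:.3f},  SEE = {see:.3f}")
# Adding H as a second predictor (ln B = a + b1*ln D + b2*ln H, or using D^2*H) follows the
# same pattern with an extra column in the design matrix.
```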
Banks, H Thomas; Robbins, Danielle; Sutton, Karyn L
2013-01-01
In this paper we present new results for differentiability of delay systems with respect to initial conditions and delays. After motivating our results with a wide range of delay examples arising in biology applications, we further note the need for sensitivity functions (both traditional and generalized sensitivity functions), especially in control and estimation problems. We summarize general existence and uniqueness results before turning to our main results on differentiation with respect to delays, etc. Finally we discuss use of our results in the context of estimation problems.
NASA Astrophysics Data System (ADS)
O, Hyong-Chol; Jo, Jong-Jun; Kim, Ji-Sok
2016-02-01
We provide representations of solutions to terminal value problems of inhomogeneous Black-Scholes equations and study such general properties as min-max estimates, gradient estimates, monotonicity and convexity of the solutions with respect to the stock price variable, which are important for financial security pricing. In particular, we focus on finding representation of the gradient (with respect to the stock price variable) of solutions to the terminal value problems with discontinuous terminal payoffs or inhomogeneous terms. Such terminal value problems are often encountered in pricing problems of compound-like options such as Bermudan options or defaultable bonds with discrete default barrier, default intensity and endogenous default recovery. Our results can be used in pricing real defaultable bonds under consideration of existence of discrete coupons or taxes on coupons.
Estimation of GFR in South Asians: A Study From the General Population in Pakistan
Jessani, Saleem; Levey, Andrew S.; Bux, Rasool; Inker, Lesley A.; Islam, Muhammad; Chaturvedi, Nish; Mariat, Christophe; Schmid, Christopher H.; Jafar, Tazeen H.
2015-01-01
Background South Asians are at high risk for chronic kidney disease. However, unlike those in the United States and United Kingdom, laboratories in South Asian countries do not routinely report estimated glomerular filtration rate (eGFR) when serum creatinine is measured. The objectives of the study were to: (1) evaluate the performance of existing GFR estimating equations in South Asians, and (2) modify the existing equations or develop a new equation for use in this population. Study Design Cross-sectional population-based study. Setting & Participants 581 participants 40 years or older were enrolled from 10 randomly selected communities and renal clinics in Karachi. Predictors eGFR, age, sex, serum creatinine level. Outcomes Bias (the median difference between measured GFR [mGFR] and eGFR), precision (the IQR of the difference), accuracy (P30; percentage of participants with eGFR within 30% of mGFR), and the root mean squared error reported as cross-validated estimates along with bootstrapped 95% CIs based on 1,000 replications. Results The CKD-EPI (Chronic Kidney Disease Epidemiology Collaboration) creatinine equation performed better than the MDRD (Modification of Diet in Renal Disease) Study equation in terms of greater accuracy at P30 (76.1% [95% CI, 72.7%–79.5%] vs 68.0% [95% CI, 64.3%–71.7%]; P <0.001) and improved precision (IQR, 22.6 [95% CI, 19.9–25.3] vs 28.6 [95% CI, 25.8–31.5] mL/min/1.73 m2; P < 0.001). However, both equations overestimated mGFR. Applying modification factors for slope and intercept to the CKD-EPI equation to create a CKD-EPI Pakistan equation (such that eGFR_CKD-EPI(PK) = 0.686 × eGFR_CKD-EPI^1.059) in order to eliminate bias improved accuracy (P30, 81.6% [95% CI, 78.4%–84.8%]; P < 0.001) comparably to new estimating equations developed using creatinine level and additional variables. Limitations Lack of external validation data set and few participants with low GFR. Conclusions The CKD-EPI creatinine equation is more accurate and precise than the MDRD Study equation in estimating GFR in a South Asian population in Karachi. The CKD-EPI Pakistan equation further improves the performance of the CKD-EPI equation in South Asians and could be used for eGFR reporting. PMID:24074822
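The Pakistan modification quoted in the abstract (eGFR_PK = 0.686 × eGFR_CKD-EPI^1.059) is easy to apply once a base CKD-EPI value is available. In the sketch below, the base 2009 CKD-EPI creatinine coefficients are the commonly published ones, supplied here from general knowledge rather than from this study, and should be checked against the original publication before any clinical use.

```python
def egfr_ckd_epi_2009(scr_mg_dl: float, age_years: float, female: bool, black: bool = False) -> float:
    """CKD-EPI 2009 creatinine equation (mL/min/1.73 m2), using the commonly published
    coefficients; verify against the original source before clinical use."""
    kappa = 0.7 if female else 0.9
    alpha = -0.329 if female else -0.411
    egfr = (141.0
            * min(scr_mg_dl / kappa, 1.0) ** alpha
            * max(scr_mg_dl / kappa, 1.0) ** -1.209
            * 0.993 ** age_years)
    if female:
        egfr *= 1.018
    if black:
        egfr *= 1.159
    return egfr

def egfr_ckd_epi_pakistan(scr_mg_dl: float, age_years: float, female: bool) -> float:
    """Pakistan modification from the abstract: eGFR_PK = 0.686 * (eGFR_CKD-EPI ** 1.059)."""
    return 0.686 * egfr_ckd_epi_2009(scr_mg_dl, age_years, female) ** 1.059

print(round(egfr_ckd_epi_pakistan(scr_mg_dl=0.9, age_years=50, female=True), 1))
```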
Shoemaker, W. Barclay; Sumner, D.M.
2006-01-01
Corrections can be used to estimate actual wetland evapotranspiration (AET) from potential evapotranspiration (PET) as a means to define the hydrology of wetland areas. Many alternate parameterizations for correction coefficients for three PET equations are presented, covering a wide range of possible data-availability scenarios. At nine sites in the wetland Everglades of south Florida, USA, the relatively complex PET Penman equation was corrected to daily total AET with smaller standard errors than the PET simple and Priestley-Taylor equations. The simpler equations, however, required less data (and thus less funding for instrumentation), with the possibility of being corrected to AET with slightly larger, comparable, or even smaller standard errors. Air temperature generally corrected PET simple most effectively to wetland AET, while wetland stage and humidity generally corrected PET Priestley-Taylor and Penman most effectively to wetland AET. Stage was identified for PET Priestley-Taylor and Penman as the data type with the most correction ability at sites that are dry part of each year or dry part of some years. Finally, although surface water generally was readily available at each monitoring site, AET was not occurring at potential rates, as conceptually expected under well-watered conditions. Apparently, factors other than water availability, such as atmospheric and stomatal resistances to vapor transport, also were limiting the PET rate. © 2006, The Society of Wetland Scientists.
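A minimal sketch of the general idea, not the paper's exact parameterization: compute a PET value (here Priestley-Taylor, using common FAO-56 constants) and scale it to AET with a correction coefficient that depends linearly on a site variable such as stage. The coefficients c0 and c1 below are hypothetical placeholders for the fitted corrections.

```python
import math

def priestley_taylor_pet(net_radiation_mj_m2_day: float, air_temp_c: float,
                         alpha: float = 1.26, gamma: float = 0.066,
                         latent_heat: float = 2.45) -> float:
    """Potential ET (mm/day) from the Priestley-Taylor equation, ignoring ground heat flux."""
    es = 0.6108 * math.exp(17.27 * air_temp_c / (air_temp_c + 237.3))   # saturation vapor pressure, kPa
    delta = 4098.0 * es / (air_temp_c + 237.3) ** 2                     # slope of the curve, kPa/degC
    return alpha * (delta / (delta + gamma)) * net_radiation_mj_m2_day / latent_heat

def corrected_aet(pet_mm_day: float, stage_m: float,
                  c0: float = 0.55, c1: float = 0.25) -> float:
    """AET as PET times a stage-dependent correction coefficient (hypothetical c0, c1)."""
    return (c0 + c1 * stage_m) * pet_mm_day

pet = priestley_taylor_pet(net_radiation_mj_m2_day=15.0, air_temp_c=27.0)
print(f"PET = {pet:.2f} mm/day, corrected AET = {corrected_aet(pet, stage_m=0.3):.2f} mm/day")
```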
NASA Astrophysics Data System (ADS)
Chu, Weiqi; Li, Xiantao
2018-01-01
We present some estimates for the memory kernel function in the generalized Langevin equation, derived using the Mori-Zwanzig formalism from a one-dimensional lattice model in which the particles interact through nearest and second-nearest neighbors. The kernel function can be explicitly expressed in a matrix form. The analysis focuses on the decay properties, both spatially and temporally, revealing a power-law behavior in both cases. The dependence on the level of coarse-graining is also studied.
Estimating Flow-Duration and Low-Flow Frequency Statistics for Unregulated Streams in Oregon
Risley, John; Stonewall, Adam J.; Haluska, Tana
2008-01-01
Flow statistical datasets, basin-characteristic datasets, and regression equations were developed to provide decision makers with surface-water information needed for activities such as water-quality regulation, water-rights adjudication, biological habitat assessment, infrastructure design, and water-supply planning and management. The flow statistics, which included annual and monthly period of record flow durations (5th, 10th, 25th, 50th, and 95th percent exceedances) and annual and monthly 7-day, 10-year (7Q10) and 7-day, 2-year (7Q2) low flows, were computed at 466 streamflow-gaging stations at sites with unregulated flow conditions throughout Oregon and adjacent areas of neighboring States. Regression equations, created from the flow statistics and basin characteristics of the stations, can be used to estimate flow statistics at ungaged stream sites in Oregon. The study area was divided into 10 regression modeling regions based on ecological, topographic, geologic, hydrologic, and climatic criteria. In total, 910 annual and monthly regression equations were created to predict the 7 flow statistics in the 10 regions. Equations to predict the five flow-duration exceedance percentages and the two low-flow frequency statistics were created with Ordinary Least Squares and Generalized Least Squares regression, respectively. The standard errors of estimate of the equations created to predict the 5th and 95th percent exceedances had medians of 42.4 and 64.4 percent, respectively. The standard errors of prediction of the equations created to predict the 7Q2 and 7Q10 low-flow statistics had medians of 51.7 and 61.2 percent, respectively. Standard errors for regression equations for sites in western Oregon were smaller than those in eastern Oregon partly because of a greater density of available streamflow-gaging stations in western Oregon than eastern Oregon. High-flow regression equations (such as the 5th and 10th percent exceedances) also generally were more accurate than the low-flow regression equations (such as the 95th percent exceedance and 7Q10 low-flow statistic). The regression equations predict unregulated flow conditions in Oregon. Flow estimates need to be adjusted if they are used at ungaged sites that are regulated by reservoirs or affected by water-supply and agricultural withdrawals if actual flow conditions are of interest. The regression equations are installed in the USGS StreamStats Web-based tool (http://water.usgs.gov/osw/streamstats/index.html, accessed July 16, 2008). StreamStats provides users with a set of annual and monthly flow-duration and low-flow frequency estimates for ungaged sites in Oregon in addition to the basin characteristics for the sites. Prediction intervals at the 90-percent confidence level also are automatically computed.
A General Model for Estimating Macroevolutionary Landscapes.
Boucher, Florian C; Démery, Vincent; Conti, Elena; Harmon, Luke J; Uyeda, Josef
2018-03-01
The evolution of quantitative characters over long timescales is often studied using stochastic diffusion models. The current toolbox available to students of macroevolution is however limited to two main models: Brownian motion and the Ornstein-Uhlenbeck process, plus some of their extensions. Here, we present a very general model for inferring the dynamics of quantitative characters evolving under both random diffusion and deterministic forces of any possible shape and strength, which can accommodate interesting evolutionary scenarios like directional trends, disruptive selection, or macroevolutionary landscapes with multiple peaks. This model is based on a general partial differential equation widely used in statistical mechanics: the Fokker-Planck equation, also known in population genetics as the Kolmogorov forward equation. We thus call the model FPK, for Fokker-Planck-Kolmogorov. We first explain how this model can be used to describe macroevolutionary landscapes over which quantitative traits evolve and, more importantly, we detail how it can be fitted to empirical data. Using simulations, we show that the model has good behavior both in terms of discrimination from alternative models and in terms of parameter inference. We provide R code to fit the model to empirical data using either maximum-likelihood or Bayesian estimation, and illustrate the use of this code with two empirical examples of body mass evolution in mammals. FPK should greatly expand the set of macroevolutionary scenarios that can be studied since it opens the way to estimating macroevolutionary landscapes of any conceivable shape. [Adaptation; bounds; diffusion; FPK model; macroevolution; maximum-likelihood estimation; MCMC methods; phylogenetic comparative data; selection.].
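One convenient property of the diffusion described above is that, for a trait obeying dX = V'(X) dt + σ dW on a bounded interval with reflecting boundaries, the stationary density of the Fokker-Planck equation is proportional to exp(2V(x)/σ²). The sketch below evaluates this for a hypothetical two-peak quartic landscape; it illustrates the FPK idea only and does not reproduce the paper's maximum-likelihood or MCMC fitting.

```python
import numpy as np

# Stationary density of a diffusion on a macroevolutionary landscape V(x) (hypothetical quartic
# with peaks near x = +/-1), assuming constant diffusion rate sigma^2 and reflecting bounds.
x = np.linspace(-2.0, 2.0, 401)           # trait grid
V = -(x**4) / 4 + x**2 / 2                # double-peaked landscape
sigma2 = 0.5                              # diffusion variance rate

density = np.exp(2.0 * V / sigma2)
density /= np.trapz(density, x)           # normalize so the density integrates to 1

interior = (density[1:-1] > density[:-2]) & (density[1:-1] > density[2:])
peaks = x[1:-1][interior]
print("stationary-density peaks near trait values:", np.round(peaks, 2))
```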
Generalized equation of state for refrigerants
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, Y.; Sonntag, R.E.; Borgnakke, C.
1995-08-01
A new four-parameter generalized equation of state with three reference fluids has been developed for predicting thermodynamic properties of the methane- and ethane-series refrigerants. The four chosen characteristic parameters are critical temperature, critical pressure, acentric factor, and the polarity factor proposed in this work. The three selected reference fluids are argon, n-butane and 1,1-difluoroethane (R-152a). When the results of this work are compared with the refrigerant experimental data, they show significant improvement over Lee and Kesler (1975) and Wu and Stiel (1985). If the characteristic parameters of the refrigerants of interest are not available, an estimation method based on the group contribution method is given. The ideal vapor-compression refrigeration cycle was studied using the newly developed generalized equation of state to verify the accuracy of this work.
Curran, Janet H.; Meyer, David F.; Tasker, Gary D.
2003-01-01
Estimates of the magnitude and frequency of peak streamflow are needed across Alaska for floodplain management, cost-effective design of floodway structures such as bridges and culverts, and other water-resource management issues. Peak-streamflow magnitudes for the 2-, 5-, 10-, 25-, 50-, 100-, 200-, and 500-year recurrence-interval flows were computed for 301 streamflow-gaging and partial-record stations in Alaska and 60 stations in conterminous basins of Canada. Flows were analyzed from data through the 1999 water year using a log-Pearson Type III analysis. The State was divided into seven hydrologically distinct streamflow analysis regions for this analysis, in conjunction with a concurrent study of low and high flows. New generalized skew coefficients were developed for each region using station skew coefficients for stations with at least 25 years of systematic peak-streamflow data. Equations for estimating peak streamflows at ungaged locations were developed for Alaska and conterminous basins in Canada using a generalized least-squares regression model. A set of predictive equations for estimating the 2-, 5-, 10-, 25-, 50-, 100-, 200-, and 500-year peak streamflows was developed for each streamflow analysis region from peak-streamflow magnitudes and physical and climatic basin characteristics. These equations may be used for unregulated streams without flow diversions, dams, periodically releasing glacial impoundments, or other streamflow conditions not correlated to basin characteristics. Basin characteristics should be obtained using methods similar to those used in this report to preserve the statistical integrity of the equations.
Grima, Ramon
2011-11-01
The mesoscopic description of chemical kinetics, the chemical master equation, can be exactly solved in only a few simple cases. The analytical intractability stems from the discrete character of the equation, and hence considerable effort has been invested in the development of Fokker-Planck equations, second-order partial differential equation approximations to the master equation. We here consider two different types of higher-order partial differential approximations, one derived from the system-size expansion and the other from the Kramers-Moyal expansion, and derive the accuracy of their predictions for chemical reactive networks composed of arbitrary numbers of unimolecular and bimolecular reactions. In particular, we show that the partial differential equation approximation of order Q from the Kramers-Moyal expansion leads to estimates of the mean number of molecules accurate to order Ω^(-(2Q-3)/2), of the variance of the fluctuations in the number of molecules accurate to order Ω^(-(2Q-5)/2), and of skewness accurate to order Ω^(-(Q-2)). We also show that for large Q, the accuracy in the estimates can be matched only by a partial differential equation approximation from the system-size expansion of approximate order 2Q. Hence, we conclude that partial differential approximations based on the Kramers-Moyal expansion generally lead to considerably more accurate estimates in the mean, variance, and skewness than approximations of the same order derived from the system-size expansion.
Allometry of visceral organs in living amniotes and its implications for sauropod dinosaurs
Franz, Ragna; Hummel, Jürgen; Kienzle, Ellen; Kölle, Petra; Gunga, Hanns-Christian; Clauss, Marcus
2009-01-01
Allometric equations are often used to extrapolate traits in animals for which only body mass estimates are known, such as dinosaurs. One important decision can be whether these equations should be based on mammal, bird or reptile data. To address whether this choice will have a relevant influence on reconstructions, we compared allometric equations for birds and mammals from the literature to those for reptiles derived from both published and hitherto unpublished data. Organs studied included the heart, kidneys, liver and gut, as well as gut contents. While the available data indicate that gut content mass does not differ between the clades, the organ masses for reptiles are generally lower than those for mammals and birds. In particular, gut tissue mass is significantly lower in reptiles. When applying the results in the reconstruction of a sauropod dinosaur, the estimated volume of the coelomic cavity greatly exceeds the estimated volume of the combined organ masses, irrespective of the allometric equation used. Therefore, substantial deviation of sauropod organ allometry from that of the extant vertebrates can be allowed conceptually. Extrapolations of retention times from estimated gut contents mass and food intake do not suggest digestive constraints on sauropod dinosaur body size. PMID:19324837
Wood, Molly S.; Fosness, Ryan L.; Skinner, Kenneth D.; Veilleux, Andrea G.
2016-06-27
The U.S. Geological Survey, in cooperation with the Idaho Transportation Department, updated regional regression equations to estimate peak-flow statistics at ungaged sites on Idaho streams using recent streamflow (flow) data and new statistical techniques. Peak-flow statistics with 80-, 67-, 50-, 43-, 20-, 10-, 4-, 2-, 1-, 0.5-, and 0.2-percent annual exceedance probabilities (1.25-, 1.50-, 2.00-, 2.33-, 5.00-, 10.0-, 25.0-, 50.0-, 100-, 200-, and 500-year recurrence intervals, respectively) were estimated for 192 streamgages in Idaho and bordering States with at least 10 years of annual peak-flow record through water year 2013. The streamgages were selected from drainage basins with little or no flow diversion or regulation. The peak-flow statistics were estimated by fitting a log-Pearson type III distribution to records of annual peak flows and applying two additional statistical methods: (1) the Expected Moments Algorithm to help describe uncertainty in annual peak flows and to better represent missing and historical record; and (2) the generalized Multiple Grubbs Beck Test to screen out potentially influential low outliers and to better fit the upper end of the peak-flow distribution. Additionally, a new regional skew was estimated for the Pacific Northwest and used to weight at-station skew at most streamgages. The streamgages were grouped into six regions (numbered 1_2, 3, 4, 5, 6_8, and 7, to maintain consistency in region numbering with a previous study), and the estimated peak-flow statistics were related to basin and climatic characteristics to develop regional regression equations using a generalized least squares procedure. Four out of 24 evaluated basin and climatic characteristics were selected for use in the final regional peak-flow regression equations. Overall, the standard error of prediction for the regional peak-flow regression equations ranged from 22 to 132 percent. Among all regions, regression model fit was best for region 4 in west-central Idaho (average standard error of prediction=46.4 percent; pseudo-R2>92 percent) and region 5 in central Idaho (average standard error of prediction=30.3 percent; pseudo-R2>95 percent). Regression model fit was poor for region 7 in southern Idaho (average standard error of prediction=103 percent; pseudo-R2<78 percent) compared to other regions because few streamgages in region 7 met the criteria for inclusion in the study, and the region's semi-arid climate and associated variability in precipitation patterns cause substantial variability in peak flows. A drainage area ratio-adjustment method, using ratio exponents estimated by generalized least-squares regression, was presented as an alternative to the regional regression equations if peak-flow estimates are desired at an ungaged site that is close to a streamgage selected for inclusion in this study. The alternative drainage area ratio-adjustment method is appropriate for use when the drainage area ratio between the ungaged and gaged sites is between 0.5 and 1.5. The updated regional peak-flow regression equations had lower total error (standard error of prediction) than all regression equations presented in a 1982 study and in four of six regions presented in 2002 and 2003 studies in Idaho. A more extensive streamgage screening process used in the current study resulted in fewer streamgages used in the current study than in the 1982, 2002, and 2003 studies.
Fewer streamgages used and the selection of different explanatory variables were likely causes of increased error in some regions compared to previous studies, but overall, regional peak-flow regression model fit was generally improved for Idaho. The revised statistical procedures and increased streamgage screening applied in the current study most likely resulted in a more accurate representation of natural peak-flow conditions. The updated regional peak-flow regression equations will be integrated in the U.S. Geological Survey StreamStats program to allow users to estimate basin and climatic characteristics and peak-flow statistics at ungaged locations of interest. StreamStats estimates peak-flow statistics with quantifiable certainty only when used at sites with basin and climatic characteristics within the range of input variables used to develop the regional regression equations. Both the regional regression equations and StreamStats should be used to estimate peak-flow statistics only in naturally flowing, relatively unregulated streams without substantial local influences to flow, such as large seeps, springs, or other groundwater-surface water interactions that are not widespread or characteristic of the respective region.
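The drainage-area ratio adjustment mentioned in the Idaho abstract is straightforward to apply. In this sketch the exponent b is a hypothetical placeholder; the study estimated region-specific exponents with generalized least squares.

```python
def da_ratio_adjust(q_gaged: float, area_gaged: float, area_ungaged: float, b: float = 0.8) -> float:
    """Transfer a peak-flow statistic from a gaged site to a nearby ungaged site on the same
    stream using Q_ungaged = Q_gaged * (A_ungaged / A_gaged)**b. Exponent b is hypothetical here;
    the study restricts the method to drainage-area ratios between 0.5 and 1.5."""
    ratio = area_ungaged / area_gaged
    if not 0.5 <= ratio <= 1.5:
        raise ValueError("method intended only for drainage-area ratios between 0.5 and 1.5")
    return q_gaged * ratio ** b

# Example: 100-year peak of 5,000 ft^3/s at a gage draining 120 mi^2; ungaged site drains 150 mi^2
print(round(da_ratio_adjust(5000.0, 120.0, 150.0), 0))
```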
Predictive equations for the estimation of body size in seals and sea lions (Carnivora: Pinnipedia)
Churchill, Morgan; Clementz, Mark T; Kohno, Naoki
2014-01-01
Body size plays an important role in pinniped ecology and life history. However, body size data are often absent for historical, archaeological, and fossil specimens. To estimate the body size of pinnipeds (seals, sea lions, and walruses) for today and the past, we used 14 commonly preserved cranial measurements to develop sets of single-variable and multivariate predictive equations for pinniped body mass and total length. Principal components analysis (PCA) was used to test whether separate family-specific regressions were more appropriate than single predictive equations for Pinnipedia. The influence of phylogeny was tested with phylogenetic independent contrasts (PIC). The accuracy of these regressions was then assessed using a combination of coefficient of determination, percent prediction error, and standard error of estimation. Three different methods of multivariate analysis were examined: bidirectional stepwise model selection using Akaike information criteria; all-subsets model selection using Bayesian information criteria (BIC); and partial least squares regression. The PCA showed clear discrimination between Otariidae (fur seals and sea lions) and Phocidae (earless seals) for the 14 measurements, indicating the need for family-specific regression equations. The PIC analysis found that phylogeny had a minor influence on the relationship between morphological variables and body size. The regressions for total length were more accurate than those for body mass, and equations specific to Otariidae were more accurate than those for Phocidae. Of the three multivariate methods, the all-subsets approach required the fewest number of variables to estimate body size accurately. We then used the single-variable predictive equations and the all-subsets approach to estimate the body size of two recently extinct pinniped taxa, the Caribbean monk seal (Monachus tropicalis) and the Japanese sea lion (Zalophus japonicus). Body size estimates using single-variable regressions generally under- or overestimated body size; however, the all-subsets regression produced body size estimates that were close to historically recorded body length for these two species. This indicates that the all-subsets regression equations developed in this study can estimate body size accurately. PMID:24916814
NASA Technical Reports Server (NTRS)
Young, D. P.; Woo, A. C.; Bussoletti, J. E.; Johnson, F. T.
1986-01-01
A general method is developed combining fast direct methods and boundary integral equation methods to solve Poisson's equation on irregular exterior regions. The method requires O(N log N) operations where N is the number of grid points. Error estimates are given that hold for regions with corners and other boundary irregularities. Computational results are given in the context of computational aerodynamics for a two-dimensional lifting airfoil. Solutions of boundary integral equations for lifting and nonlifting aerodynamic configurations using preconditioned conjugate gradient are examined for varying degrees of thinness.
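The method above couples fast direct solvers with boundary integral equations on irregular exterior regions. As a minimal illustration of the fast-solver ingredient alone (not the report's exterior-domain algorithm), the sketch below solves Poisson's equation on a doubly periodic square with an FFT, which costs O(N log N) for N grid points.

```python
import numpy as np

# FFT-based Poisson solve of laplacian(u) = f on a periodic square, checked against a
# manufactured solution u = sin(x)*cos(2y), for which laplacian(u) = -5*u.
n = 128
L = 2.0 * np.pi
x = np.linspace(0.0, L, n, endpoint=False)
X, Y = np.meshgrid(x, x, indexing="ij")

u_exact = np.sin(X) * np.cos(2.0 * Y)
f = -5.0 * u_exact

k = np.fft.fftfreq(n, d=L / n) * 2.0 * np.pi   # integer angular wavenumbers on this grid
KX, KY = np.meshgrid(k, k, indexing="ij")
denom = -(KX**2 + KY**2)
denom[0, 0] = 1.0                              # avoid division by zero for the mean mode

u_hat = np.fft.fft2(f) / denom
u_hat[0, 0] = 0.0                              # fix the arbitrary constant (zero-mean solution)
u = np.real(np.fft.ifft2(u_hat))

print("max error:", np.max(np.abs(u - u_exact)))
```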
ERIC Educational Resources Information Center
Wright, Bradford L.
1975-01-01
Advocates the creation of swimming pool oscillations as part of a general investigation of mechanical oscillations. Presents the equations, procedure for deriving the slosh modes, and methods of period estimation for exciting swimming pool oscillations. (GS)
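For the period-estimation part of such an activity, a standard shallow-water result (Merian's formula) gives a quick estimate for longitudinal sloshing in a rectangular pool; this sketch is offered as a plausible companion calculation, not necessarily the article's own derivation.

```python
import math

def seiche_period(length_m: float, depth_m: float, mode: int = 1, g: float = 9.81) -> float:
    """Merian's formula for the n-th longitudinal seiche mode in a rectangular basin of
    uniform depth: T_n = 2L / (n * sqrt(g*h))."""
    return 2.0 * length_m / (mode * math.sqrt(g * depth_m))

# Example: a 25 m pool with 1.5 m average depth
print(f"fundamental slosh period ~ {seiche_period(25.0, 1.5):.1f} s")
```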
The Inverse Problem for Confined Aquifer Flow: Identification and Estimation With Extensions
NASA Astrophysics Data System (ADS)
Loaiciga, Hugo A.; Mariño, Miguel A.
1987-01-01
The contributions of this work are twofold. First, a methodology for estimating the elements of parameter matrices in the governing equation of flow in a confined aquifer is developed. The estimation techniques for the distributed-parameter inverse problem pertain to linear least squares and generalized least squares methods. The linear relationship among the known heads and unknown parameters of the flow equation provides the background for developing criteria for determining the identifiability status of unknown parameters. Under conditions of exact or overidentification it is possible to develop statistically consistent parameter estimators and their asymptotic distributions. The estimation techniques, namely, two-stage least squares and three-stage least squares, are applied to a specific groundwater inverse problem and compared between themselves and with an ordinary least squares estimator. The three-stage estimator provides the closest approximation to the actual parameter values, but it also shows relatively large standard errors as compared to the ordinary and two-stage estimators. The estimation techniques provide the parameter matrices required to simulate the unsteady groundwater flow equation. Second, a nonlinear maximum likelihood estimation approach to the inverse problem is presented. The statistical properties of maximum likelihood estimators are derived, and a procedure to construct confidence intervals and do hypothesis testing is given. The relative merits of the linear and maximum likelihood estimators are analyzed. Other topics relevant to the identification and estimation methodologies, i.e., a continuous-time solution to the flow equation, coping with noise-corrupted head measurements, and extension of the developed theory to nonlinear cases, are also discussed. A simulation study is used to evaluate the methods developed in this study.
Doubly robust estimation of generalized partial linear models for longitudinal data with dropouts.
Lin, Huiming; Fu, Bo; Qin, Guoyou; Zhu, Zhongyi
2017-12-01
We develop a doubly robust estimation of generalized partial linear models for longitudinal data with dropouts. Our method extends the highly efficient aggregate unbiased estimating function approach proposed in Qu et al. (2010) to a doubly robust one in the sense that under missing at random (MAR), our estimator is consistent when either the linear conditional mean condition is satisfied or a model for the dropout process is correctly specified. We begin with a generalized linear model for the marginal mean, and then move forward to a generalized partial linear model, allowing for nonparametric covariate effect by using the regression spline smoothing approximation. We establish the asymptotic theory for the proposed method and use simulation studies to compare its finite sample performance with that of Qu's method, the complete-case generalized estimating equation (GEE) and the inverse-probability weighted GEE. The proposed method is finally illustrated using data from a longitudinal cohort study. © 2017, The International Biometric Society.
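The dropout-weighting ingredient of such methods can be sketched with plain numpy: fit a model for the probability that a row is observed, then solve an inverse-probability-weighted estimating equation for the marginal mean. This is a deliberately simplified sketch with a linear mean and an independence working correlation on simulated data; it is not the paper's aggregated estimating-function or spline-based estimator.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
X = np.column_stack([np.ones(n), rng.normal(size=n)])      # design for the marginal mean
beta_true = np.array([1.0, 0.5])
y = X @ beta_true + rng.normal(size=n)

# Dropout depends on the covariate (missing at random given X).
prob_obs = 1.0 / (1.0 + np.exp(-(1.0 + 0.8 * X[:, 1])))
observed = rng.random(n) < prob_obs

def fit_logistic(Z, r, iters=25):
    """Logistic regression for the observation indicator r by Newton's method."""
    gamma = np.zeros(Z.shape[1])
    for _ in range(iters):
        mu = 1.0 / (1.0 + np.exp(-(Z @ gamma)))
        W = mu * (1.0 - mu)
        gamma += np.linalg.solve(Z.T @ (W[:, None] * Z), Z.T @ (r - mu))
    return gamma

gamma_hat = fit_logistic(X, observed.astype(float))
pi_hat = 1.0 / (1.0 + np.exp(-(X @ gamma_hat)))

w = observed / pi_hat                        # zero weight for dropouts, 1/pi for observed rows
beta_ipw = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * y))
print("IPW estimate of beta:", np.round(beta_ipw, 3))
```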
Techniques for estimating selected streamflow characteristics of rural unregulated streams in Ohio
Koltun, G.F.; Whitehead, Matthew T.
2002-01-01
This report provides equations for estimating mean annual streamflow, mean monthly streamflows, harmonic mean streamflow, and streamflow quartiles (the 25th-, 50th-, and 75th-percentile streamflows) as a function of selected basin characteristics for rural, unregulated streams in Ohio. The equations were developed from streamflow statistics and basin-characteristics data for as many as 219 active or discontinued streamflow-gaging stations on rural, unregulated streams in Ohio with 10 or more years of homogenous daily streamflow record. Streamflow statistics and basin-characteristics data for the 219 stations are presented in this report. Simple equations (based on drainage area only) and best-fit equations (based on drainage area and at least two other basin characteristics) were developed by means of ordinary least-squares regression techniques. Application of the best-fit equations generally involves quantification of basin characteristics that require or are facilitated by use of a geographic information system. In contrast, the simple equations can be used with information that can be obtained without use of a geographic information system; however, the simple equations have larger prediction errors than the best-fit equations and exhibit geographic biases for most streamflow statistics. The best-fit equations should be used instead of the simple equations whenever possible.
Valuing Informal Care Experience: Does Choice of Measure Matter?
ERIC Educational Resources Information Center
Mentzakis, Emmanouil; McNamee, Paul; Ryan, Mandy; Sutton, Matthew
2012-01-01
Well-being equations are often estimated to generate monetary values for non-marketed activities. In such studies, utility is often approximated by either life satisfaction or General Health Questionnaire scores. We estimate and compare monetary valuations of informal care for the first time in the UK employing both measures, using longitudinal…
NASA Technical Reports Server (NTRS)
Walker, H. F.
1976-01-01
Likelihood equations determined by the two types of samples which are necessary conditions for a maximum-likelihood estimate are considered. These equations suggest certain successive-approximations iterative procedures for obtaining maximum-likelihood estimates. These are generalized steepest ascent (deflected gradient) procedures. It is shown that, with probability 1 as N_0 approaches infinity (regardless of the relative sizes of N_0 and N_i, i = 1, ..., m), these procedures converge locally to the strongly consistent maximum-likelihood estimates whenever the step size is between 0 and 2. Furthermore, the value of the step size which yields optimal local convergence rates is bounded from below by a number which always lies between 1 and 2.
User's manual for MMLE3, a general FORTRAN program for maximum likelihood parameter estimation
NASA Technical Reports Server (NTRS)
Maine, R. E.; Iliff, K. W.
1980-01-01
A user's manual for the FORTRAN IV computer program MMLE3 is presented. MMLE3 is a maximum likelihood parameter estimation program capable of handling general bilinear dynamic equations of arbitrary order with measurement noise and/or state noise (process noise). The theory and use of the program are described. The basic MMLE3 program is quite general and, therefore, applicable to a wide variety of problems. The basic program can interact with a set of user-written, problem-specific routines to simplify the use of the program on specific systems. A set of user routines for the aircraft stability and control derivative estimation problem is provided with the program.
Estimation of peak-discharge frequency of urban streams in Jefferson County, Kentucky
Martin, Gary R.; Ruhl, Kevin J.; Moore, Brian L.; Rose, Martin F.
1997-01-01
An investigation of flood-hydrograph characteristics for streams in urban Jefferson County, Kentucky, was made to obtain hydrologic information needed for water-resources management. Equations for estimating peak-discharge frequencies for ungaged streams in the county were developed by combining (1) long-term annual peak-discharge data and rainfall-runoff data collected from 1991 to 1995 in 13 urban basins and (2) long-term annual peak-discharge data in four rural basins located in hydrologically similar areas of neighboring counties. The basins ranged in size from 1.36 to 64.0 square miles. The U.S. Geological Survey Rainfall-Runoff Model (RRM) was calibrated for each of the urban basins. The calibrated models were used with long-term, historical rainfall and pan-evaporation data to simulate 79 years of annual peak-discharge data. Peak-discharge frequencies were estimated by fitting the logarithms of the annual peak discharges to a Pearson-Type III frequency distribution. The simulated peak-discharge frequencies were adjusted for improved reliability by application of bias-correction factors derived from peak-discharge frequencies based on local, observed annual peak discharges. The three-parameter and the preferred seven-parameter nationwide urban-peak-discharge regression equations previously developed by USGS investigators provided biased (high) estimates for the urban basins studied. Generalized least-squares regression procedures were used to relate peak-discharge frequency to selected basin characteristics. Regression equations were developed to estimate peak-discharge frequency by adjusting peak-discharge-frequency estimates made by use of the three-parameter nationwide urban regression equations. The regression equations are presented in equivalent forms as functions of contributing drainage area, main-channel slope, and basin development factor, which is an index for measuring the efficiency of the basin drainage system. Estimates of peak discharges for streams in the county can be made for the 2-, 5-, 10-, 25-, 50-, and 100-year recurrence intervals by use of the regression equations. The average standard errors of prediction of the regression equations range from ±34 to ±45 percent. The regression equations are applicable to ungaged streams in the county having a specific range of basin characteristics.
Jeong, Tae-Dong; Lee, Woochang; Chun, Sail; Lee, Sang Koo; Ryu, Jin-Sook; Min, Won-Ki; Park, Jung Sik
2013-01-01
We compared the accuracy of the Modification of Diet in Renal Disease (MDRD) study and Chronic Kidney Disease Epidemiology Collaboration (CKD-EPI) equations in Korean patients and evaluated the difference in CKD prevalence determined using the two equations in the Korean general population. The accuracy of the two equations was evaluated in 607 patients who underwent a chromium-51-ethylenediaminetetraacetic acid GFR measurement. Additionally, we compared the difference in CKD prevalence determined by the two equations among 5,822 participants in the fifth Korea National Health and Nutrition Examination Survey, 2010. Among the 607 subjects, the median bias of the CKD-EPI equation was significantly lower than that of the MDRD study equation (0.9 vs 2.2, p=0.020). The accuracy of the two equations was not significantly different in patients with mGFR <60 mL/min/1.73 m2; however, the accuracy of the CKD-EPI equation was significantly higher than that of the MDRD study equation in patients with GFR ≥60 mL/min/1.73 m2. The prevalences of the CKD stages 1, 2 and 3 in the Korean general population were 47.56, 49.23, and 3.07%, respectively, for the MDRD study equation; and were 68.48, 28.89, and 2.49%, respectively, for the CKD-EPI equation. These data suggest that the CKD-EPI equation might be more useful in clinical practice than the MDRD study equation in Koreans. © 2013 S. Karger AG, Basel.
Estimating the magnitude of peak flows for streams in Kentucky for selected recurrence intervals
Hodgkins, Glenn A.; Martin, Gary R.
2003-01-01
This report gives estimates of, and presents techniques for estimating, the magnitude of peak flows for streams in Kentucky for recurrence intervals of 2, 5, 10, 25, 50, 100, 200, and 500 years. A flowchart in this report guides the user to the appropriate estimates and (or) estimating techniques for a site on a specific stream. Estimates of peak flows are given for 222 U.S. Geological Survey streamflow-gaging stations in Kentucky. In the development of the peak-flow estimates at gaging stations, a new generalized skew coefficient was calculated for the State. This single statewide value of 0.011 (with a standard error of prediction of 0.520) is more appropriate for Kentucky than the national skew isoline map in Bulletin 17B of the Interagency Advisory Committee on Water Data. Regression equations are presented for estimating the peak flows on ungaged, unregulated streams in rural drainage basins. The equations were developed by use of generalized-least-squares regression procedures at 187 U.S. Geological Survey gaging stations in Kentucky and 51 stations in surrounding States. Kentucky was divided into seven flood regions. Total drainage area is used in the final regression equations as the sole explanatory variable, except in Regions 1 and 4 where main-channel slope also was used. The smallest average standard errors of prediction were in Region 3 (from -13.1 to +15.0 percent) and the largest average standard errors of prediction were in Region 5 (from -37.6 to +60.3 percent). One section of this report describes techniques for estimating peak flows for ungaged sites on gaged, unregulated streams in rural drainage basins. Another section references two previous U.S. Geological Survey reports for peak-flow estimates on ungaged, unregulated, urban streams. Estimating peak flows at ungaged sites on regulated streams is beyond the scope of this report, because peak flows on regulated streams are dependent upon variable human activities.
Analyzing average and conditional effects with multigroup multilevel structural equation models
Mayer, Axel; Nagengast, Benjamin; Fletcher, John; Steyer, Rolf
2014-01-01
Conventionally, multilevel analysis of covariance (ML-ANCOVA) has been the recommended approach for analyzing treatment effects in quasi-experimental multilevel designs with treatment application at the cluster-level. In this paper, we introduce the generalized ML-ANCOVA with linear effect functions that identifies average and conditional treatment effects in the presence of treatment-covariate interactions. We show how the generalized ML-ANCOVA model can be estimated with multigroup multilevel structural equation models that offer considerable advantages compared to traditional ML-ANCOVA. The proposed model takes into account measurement error in the covariates, sampling error in contextual covariates, treatment-covariate interactions, and stochastic predictors. We illustrate the implementation of ML-ANCOVA with an example from educational effectiveness research where we estimate average and conditional effects of early transition to secondary schooling on reading comprehension. PMID:24795668
NASA Astrophysics Data System (ADS)
Akbar, M. S.; Setiawan; Suhartono; Ruchjana, B. N.; Riyadi, M. A. A.
2018-03-01
Ordinary Least Squares (OLS) is the usual method for estimating Generalized Space Time Autoregressive (GSTAR) parameters. In some cases, however, the GSTAR residuals are correlated between locations; if OLS is applied in such cases, the estimators are inefficient. Generalized Least Squares (GLS) is the method used in the Seemingly Unrelated Regression (SUR) model, which estimates the parameters of several models whose residuals are correlated across equations. A simulation study shows that GSTAR with GLS parameter estimation (GSTAR-SUR) is more efficient than GSTAR-OLS. The purpose of this research is to apply GSTAR-SUR with calendar variation and intervention as exogenous variables (GSTARX-SUR) to forecast currency outflow in Java, Indonesia. As a result, GSTARX-SUR provides better performance than GSTARX-OLS.
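The GLS step behind SUR-type estimation can be sketched for a simple two-equation system: fit each equation by OLS, estimate the cross-equation residual covariance, then re-estimate all coefficients jointly by feasible GLS. The data below are simulated, and the sketch omits the spatial weights, autoregressive lags, and calendar-variation/intervention regressors of the actual GSTARX-SUR model.

```python
import numpy as np

rng = np.random.default_rng(1)
T = 200
X1 = np.column_stack([np.ones(T), rng.normal(size=T)])
X2 = np.column_stack([np.ones(T), rng.normal(size=T)])
Sigma = np.array([[1.0, 0.7], [0.7, 1.0]])             # correlated errors across the two equations
E = rng.multivariate_normal(np.zeros(2), Sigma, size=T)
y1 = X1 @ np.array([0.5, 1.2]) + E[:, 0]
y2 = X2 @ np.array([-0.3, 0.8]) + E[:, 1]

# Step 1: equation-by-equation OLS and residual covariance estimate
b1, *_ = np.linalg.lstsq(X1, y1, rcond=None)
b2, *_ = np.linalg.lstsq(X2, y2, rcond=None)
R = np.column_stack([y1 - X1 @ b1, y2 - X2 @ b2])
S = (R.T @ R) / T

# Step 2: stacked system and feasible GLS, beta = (X' (S^-1 kron I) X)^-1 X' (S^-1 kron I) y
Xs = np.block([[X1, np.zeros_like(X2)], [np.zeros_like(X1), X2]])
ys = np.concatenate([y1, y2])
Omega_inv = np.kron(np.linalg.inv(S), np.eye(T))
beta_gls = np.linalg.solve(Xs.T @ Omega_inv @ Xs, Xs.T @ Omega_inv @ ys)
print("SUR-GLS estimates:", np.round(beta_gls, 3))
```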
The nonlinear modified equation approach to analyzing finite difference schemes
NASA Technical Reports Server (NTRS)
Klopfer, G. H.; Mcrae, D. S.
1981-01-01
The nonlinear modified equation approach is taken in this paper to analyze the generalized Lax-Wendroff explicit scheme approximation to the unsteady one- and two-dimensional equations of gas dynamics. Three important applications of the method are demonstrated. The nonlinear modified equation analysis is used to (1) generate higher order accurate schemes, (2) obtain more accurate estimates of the discretization error for nonlinear systems of partial differential equations, and (3) generate an adaptive mesh procedure for the unsteady gas dynamic equations. Results are obtained for all three areas. For the adaptive mesh procedure, mesh point requirements for equal resolution of discontinuities were reduced by a factor of five for a 1-D shock tube problem solved by the explicit MacCormack scheme.
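For readers unfamiliar with the scheme being analyzed, the sketch below implements the explicit Lax-Wendroff update for the 1-D linear advection equation u_t + a u_x = 0 on a periodic domain, as a stand-in for the gas-dynamics systems treated in the paper; the modified-equation (truncation-error) analysis itself is a pencil-and-paper expansion and is not reproduced here.

```python
import numpy as np

# Lax-Wendroff: u_j^{n+1} = u_j - (c/2)(u_{j+1} - u_{j-1}) + (c^2/2)(u_{j+1} - 2 u_j + u_{j-1})
a, nx, nt = 1.0, 200, 300
x = np.linspace(0.0, 1.0, nx, endpoint=False)
dx = x[1] - x[0]
dt = 0.8 * dx / a                       # Courant number c = 0.8 < 1 for stability
c = a * dt / dx

u = np.exp(-200.0 * (x - 0.3) ** 2)     # smooth initial pulse
for _ in range(nt):
    up = np.roll(u, -1)                 # u_{j+1} (periodic)
    um = np.roll(u, 1)                  # u_{j-1}
    u = u - 0.5 * c * (up - um) + 0.5 * c**2 * (up - 2.0 * u + um)

print("pulse peak after advection:", round(float(u.max()), 3))
```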
On parametrized cold dense matter equation-of-state inference
NASA Astrophysics Data System (ADS)
Riley, Thomas E.; Raaijmakers, Geert; Watts, Anna L.
2018-07-01
Constraining the equation of state of cold dense matter in compact stars is a major science goal for observing programmes being conducted using X-ray, radio, and gravitational wave telescopes. We discuss Bayesian hierarchical inference of parametrized dense matter equations of state. In particular, we generalize and examine two inference paradigms from the literature: (i) direct posterior equation-of-state parameter estimation, conditioned on observations of a set of rotating compact stars; and (ii) indirect parameter estimation, via transformation of an intermediary joint posterior distribution of exterior spacetime parameters (such as gravitational masses and coordinate equatorial radii). We conclude that the former paradigm is not only tractable for large-scale analyses, but is principled and flexible from a Bayesian perspective while the latter paradigm is not. The thematic problem of Bayesian prior definition emerges as the crux of the difference between these paradigms. The second paradigm should in general only be considered as an ill-defined approach to the problem of utilizing archival posterior constraints on exterior spacetime parameters; we advocate for an alternative approach whereby such information is repurposed as an approximative likelihood function. We also discuss why conditioning on a piecewise-polytropic equation-of-state model - currently standard in the field of dense matter study - can easily violate conditions required for transformation of a probability density distribution between spaces of exterior (spacetime) and interior (source matter) parameters.
Substrate inhibition kinetics of phenol biodegradation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Goudar, C.T.; Ganji, S.H.; Pujar, B.G.
Phenol biodegradation was studied in batch experiments using an acclimated inoculum and initial phenol concentrations ranging from 0.1 to 1.3 g/L. Phenol depletion and associated microbial growth were monitored over time to provide information that was used to estimate the kinetics of phenol biodegradation. Phenol inhibited biodegradation at high concentrations, and a generalized substrate inhibition model based on statistical thermodynamics was used to describe the dynamics of microbial growth on phenol. For experimental data obtained in this study, the generalized substrate inhibition model reduced to a form that is analogous to the Andrews equation, and the biokinetic parameters μ_max (maximum specific growth rate), K_s (saturation constant), and K_i (inhibition constant) were estimated as 0.251 h^-1, 0.011 g/L, and 0.348 g/L, respectively, using a nonlinear least squares technique. Given the wide variability in substrate inhibition models used to describe phenol biodegradation, an attempt was made to justify selection of a particular model based on theoretical considerations. Phenol biodegradation data from nine previously published studies were used in the generalized substrate inhibition model to determine the appropriate form of the substrate inhibition model. In all nine cases, the generalized substrate inhibition model reduced to a form analogous to the Andrews equation, suggesting the suitability of the Andrews equation for describing phenol biodegradation data.
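The Andrews (Haldane-type) form to which the generalized model reduces is easy to evaluate with the parameter estimates quoted in the abstract; the concentrations in the example below are arbitrary illustration points.

```python
import numpy as np

# Andrews substrate-inhibition model with the abstract's fitted parameters.
mu_max, K_s, K_i = 0.251, 0.011, 0.348   # h^-1, g/L, g/L

def andrews_growth_rate(S):
    """Specific growth rate mu(S) = mu_max * S / (K_s + S + S^2 / K_i)."""
    S = np.asarray(S, dtype=float)
    return mu_max * S / (K_s + S + S**2 / K_i)

S = np.array([0.05, 0.1, 0.3, 0.6, 1.0, 1.3])   # phenol concentrations, g/L
for s, mu in zip(S, andrews_growth_rate(S)):
    print(f"S = {s:4.2f} g/L  ->  mu = {mu:.3f} 1/h")
# Note the inhibition: mu peaks near S = sqrt(K_s * K_i) ~ 0.06 g/L and declines at higher phenol.
```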
Generalized Forchheimer Flows of Isentropic Gases
NASA Astrophysics Data System (ADS)
Celik, Emine; Hoang, Luan; Kieu, Thinh
2018-03-01
We consider generalized Forchheimer flows of either isentropic gases or slightly compressible fluids in porous media. By using Muskat's and Ward's general form of the Forchheimer equations, we describe the fluid dynamics by a doubly nonlinear parabolic equation for the appropriately defined pseudo-pressure. The volumetric flux boundary condition is converted to a time-dependent Robin-type boundary condition for this pseudo-pressure. We study the corresponding initial boundary value problem, and estimate the L^∞ and W^{1,2-a} (with 0
Body temperatures in dinosaurs: what can growth curves tell us?
Griebeler, Eva Maria
2013-01-01
To estimate the body temperature (BT) of seven dinosaurs, Gillooly, Allen, and Charnov (2006) used an equation that predicts BT from the body mass and maximum growth rate (MGR), with the latter preserved in ontogenetic growth trajectories (BT-equation). The results of these authors evidence inertial homeothermy in Dinosauria and suggest that, due to overheating, the maximum body size in Dinosauria was ultimately limited by BT. In this paper, I revisit this hypothesis of Gillooly, Allen, and Charnov (2006). I first studied whether BTs derived from the BT-equation of today's crocodiles, birds and mammals are consistent with core temperatures of animals. Second, I applied the BT-equation to a larger number of dinosaurs than Gillooly, Allen, and Charnov (2006) did. In particular, I estimated BT of Archaeopteryx (from two MGRs), ornithischians (two), theropods (three), prosauropods (three), and sauropods (nine). For extant species, the BT value estimated from the BT-equation was a poor estimate of an animal's core temperature. For birds, BT was always strongly overestimated and for crocodiles underestimated; for mammals the accuracy of BT was moderate. I argue that taxon-specific differences in the scaling of MGR (intercept and exponent of the regression line, log-log-transformed) and in the parameterization of the Arrhenius model, both used in the BT-equation, as well as ecological and evolutionary adaptations of species, cause these inaccuracies. Irrespective of the found inaccuracy of BTs estimated from the BT-equation, and contrary to the results of Gillooly, Allen, and Charnov (2006), I found no increase in BT with increasing body mass across all dinosaurs (Sauropodomorpha, Sauropoda) studied. This observation questions that, due to overheating, the maximum size in Dinosauria was ultimately limited by BT. However, the general high inaccuracy of dinosaurian BTs derived from the BT-equation makes a reliable test of whether body size in dinosaurs was ultimately limited by overheating impossible.
Frankenfield, David; Roth-Yousey, Lori; Compher, Charlene
2005-05-01
An assessment of energy needs is a necessary component in the development and evaluation of a nutrition care plan. The metabolic rate can be measured or estimated by equations, but estimation is by far the more common method. However, predictive equations might generate errors large enough to impact outcome. Therefore, a systematic review of the literature was undertaken to document the accuracy of predictive equations preliminary to deciding on the imperative to measure metabolic rate. As part of a larger project to determine the role of indirect calorimetry in clinical practice, an evidence team identified published articles that examined the validity of various predictive equations for resting metabolic rate (RMR) in nonobese and obese people and also in individuals of various ethnic and age groups. Articles were accepted based on defined criteria and abstracted using evidence analysis tools developed by the American Dietetic Association. Because these equations are applied by dietetics practitioners to individuals, a key inclusion criterion was research reports of individual data. The evidence was systematically evaluated, and a conclusion statement and grade were developed. Four prediction equations were identified as the most commonly used in clinical practice (Harris-Benedict, Mifflin-St Jeor, Owen, and World Health Organization/Food and Agriculture Organization/United Nations University [WHO/FAO/UNU]). Of these equations, the Mifflin-St Jeor equation was the most reliable, predicting RMR within 10% of measured in more nonobese and obese individuals than any other equation, and it also had the narrowest error range. No validation work concentrating on individual errors was found for the WHO/FAO/UNU equation. Older adults and US-residing ethnic minorities were underrepresented both in the development of predictive equations and in validation studies. The Mifflin-St Jeor equation is more likely than the other equations tested to estimate RMR to within 10% of that measured, but noteworthy errors and limitations exist when it is applied to individuals and possibly when it is generalized to certain age and ethnic groups. RMR estimation errors would be eliminated by valid measurement of RMR with indirect calorimetry, using an evidence-based protocol to minimize measurement error. The Expert Panel advises clinical judgment regarding when to accept estimated RMR using predictive equations in any given individual. Indirect calorimetry may be an important tool when, in the judgment of the clinician, the predictive methods fail an individual in a clinically relevant way. For members of groups that are greatly underrepresented by existing validation studies of predictive equations, a high level of suspicion regarding the accuracy of the equations is warranted.
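For reference, the standard published form of the Mifflin-St Jeor equation highlighted in the review can be computed directly; the weight, height, and age in the example are arbitrary, and the ±10% band shown is the accuracy criterion the review applied to individual predictions.

```python
def mifflin_st_jeor_rmr(weight_kg: float, height_cm: float, age_years: float, male: bool) -> float:
    """Resting metabolic rate (kcal/day) from the Mifflin-St Jeor equation:
    RMR = 10*weight + 6.25*height - 5*age + 5 for men, or - 161 for women."""
    return 10.0 * weight_kg + 6.25 * height_cm - 5.0 * age_years + (5.0 if male else -161.0)

rmr = mifflin_st_jeor_rmr(weight_kg=70.0, height_cm=175.0, age_years=40.0, male=True)
print(f"predicted RMR = {rmr:.0f} kcal/day (within-10% band: {0.9*rmr:.0f}-{1.1*rmr:.0f})")
```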
Asymptotics for Large Time of Global Solutions to the Generalized Kadomtsev-Petviashvili Equation
NASA Astrophysics Data System (ADS)
Hayashi, Nakao; Naumkin, Pavel I.; Saut, Jean-Claude
We study the large time asymptotic behavior of solutions to the generalized Kadomtsev-Petviashvili (KP) equations
Estimates of the seasonal mean vertical velocity fields of the extratropical Northern Hemisphere
NASA Technical Reports Server (NTRS)
White, G. H.
1983-01-01
Indirect methods are employed to estimate the wintertime and summertime mean vertical velocity fields of the extratropical Northern Hemisphere and intercomparisons are made, together with comparisons with mean seasonal patterns of cloudiness and precipitation. Twice-daily NMC operational analyses produced general circulation statistics for 11 winters and 12 summers, permitting calculation of the seasonal NMC averages for 6 hr forecasts, solution of the omega equation, integration of the continuity equation downward from 100 mb, and solution of the thermodynamic energy equation in the absence of diabatic heating. The methods all yielded similar vertical velocity patterns; however, the magnitude of the vertical velocities could not be calculated with great accuracy. Orography was concluded to have less of an effect in summer than in winter, when winds are stronger.
Estimation and Analysis of Nonlinear Stochastic Systems. Ph.D. Thesis
NASA Technical Reports Server (NTRS)
Marcus, S. I.
1975-01-01
The algebraic and geometric structures of certain classes of nonlinear stochastic systems were exploited in order to obtain useful stability and estimation results. The class of bilinear stochastic systems (or linear systems with multiplicative noise) was discussed. The stochastic stability of bilinear systems driven by colored noise was considered. Approximate methods for obtaining sufficient conditions for the stochastic stability of bilinear systems evolving on general Lie groups were discussed. Two classes of estimation problems involving bilinear systems were considered. It was proved that, for systems described by certain types of Volterra series expansions or by certain bilinear equations evolving on nilpotent or solvable Lie groups, the optimal conditional mean estimator consists of a finite dimensional nonlinear set of equations. The theory of harmonic analysis was used to derive suboptimal estimators for bilinear systems driven by white noise which evolve on compact Lie groups or homogeneous spaces.
The pointwise estimates of diffusion wave of the compressible micropolar fluids
NASA Astrophysics Data System (ADS)
Wu, Zhigang; Wang, Weike
2018-09-01
Pointwise estimates for the compressible micropolar fluids in dimension three are given, which exhibit the generalized Huygens' principle for the fluid density and fluid momentum, as for the compressible Navier-Stokes equations, while the micro-rotational momentum behaves like the fluid momentum of the Euler equations with damping. To circumvent the complexity of the 7 × 7 Green's matrix, we use a decomposition of the momenta into fluid and electromagnetic parts to study three smaller Green's matrices. A consequence of this decomposition is a new difficulty: the nonlinear terms contain nonlocal operators. We resolve it by using the natural match between these new Green's functions and the nonlinear terms. Moreover, to derive different pointwise estimates for the different unknown variables, such that the estimate of each unknown variable agrees with its Green's function, we develop some new estimates on the nonlinear interplay between different waves.
Basin Scale Estimates of Evapotranspiration Using GRACE and other Observations
NASA Technical Reports Server (NTRS)
Rodell, M.; Famiglietti, J. S.; Chen, J.; Seneviratne, S. I.; Viterbo, P.; Holl, S.; Wilson, C. R.
2004-01-01
Evapotranspiration is integral to studies of the Earth system, yet it is difficult to measure on regional scales. One estimation technique is a terrestrial water budget, i.e., total precipitation minus the sum of evapotranspiration and net runoff equals the change in water storage. Gravity Recovery and Climate Experiment (GRACE) satellite gravity observations are now enabling closure of this equation by providing the terrestrial water storage change. Equations are presented here for estimating evapotranspiration using observation based information, taking into account the unique nature of GRACE observations. GRACE water storage changes are first substantiated by comparing with results from a land surface model and a combined atmospheric-terrestrial water budget approach. Evapotranspiration is then estimated for 14 time periods over the Mississippi River basin and compared with output from three modeling systems. The GRACE estimates generally lay in the middle of the models and may provide skill in evaluating modeled evapotranspiration.
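The terrestrial water budget described above can be written as ET = P − R − dS, where dS is the GRACE-derived change in terrestrial water storage. The sketch below shows that bookkeeping for basin-average values; the numbers are illustrative, not values from the study.

```python
def evapotranspiration(precip_mm: float, runoff_mm: float, storage_change_mm: float) -> float:
    """Basin-average evapotranspiration (mm) from a terrestrial water budget:
    ET = P - R - dS, where dS is the change in terrestrial water storage."""
    return precip_mm - runoff_mm - storage_change_mm

# Illustrative monthly values (mm of water equivalent averaged over the basin)
print(evapotranspiration(precip_mm=80.0, runoff_mm=25.0, storage_change_mm=-10.0))  # 65.0
```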
Pediatric GFR Estimating Equations Applied to Adolescents in the General Population
Neu, Alicia M.; Schwartz, George J.; Furth, Susan L.
2011-01-01
Background and objectives: We examined the distribution of estimated GFR (eGFR) in a healthy cohort of adolescents to inform clinical and research use. Design, setting, participants, & measurements: Various creatinine-based (n = 3256) and/or cystatin C–based (n = 811) equations, including the recently developed complete and bedside equations from the Chronic Kidney Disease in Children (CKiD) study, were applied to U.S. adolescents 12 to 17 years of age participating in the 1999–2002 National Health and Nutrition Examination Survey (NHANES). Results: The median serum creatinine and cystatin C were 0.7 mg/dl and 0.83 mg/L, respectively. The distribution of eGFR varied widely, with the median GFR ranging from a low of 96.6 ml/min per 1.73 m2 (CKiD) to a high of 140.0 ml/min per 1.73 m2 (original Schwartz). The proportions of participants with eGFRs <75 ml/min per 1.73 m2 are as follows: bedside CKiD 8.9%, Counahan 6.3%, Leger 0.4%, original Schwartz 0%, Filler 1.3%, Grubb 3.1%, Bouvet 2.5%, CKiD 1.8%, and Zappitelli 5.6%. By any equation examined, no group of participants with eGFR ≤10th percentile had an increased prevalence of comorbid conditions consistent with a low measured GFR. Conclusions: Most pediatric-specific GFR estimating equations resulted in 25% to 50% of the participants having an eGFR <100 ml/min per 1.73 m2. However, participants with eGFR in the lower ranges did not have an increased prevalence of morbidities associated with chronic kidney disease. Clinical validation of creatinine- or cystatin C–based estimated GFRs in healthy children is needed before it is possible to screen the general population for chronic kidney disease. PMID:21566103
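As a point of reference for the equations compared above, the "bedside" CKiD (bedside Schwartz) equation is commonly written as eGFR = 0.413 × height (cm) / serum creatinine (mg/dL). The sketch below applies that published form to the median creatinine reported in the abstract and a hypothetical height; the constant 0.413 is quoted from the general pediatric nephrology literature, not from this paper.

```python
def bedside_ckid_egfr(height_cm: float, serum_creatinine_mg_dl: float) -> float:
    """Estimated GFR (ml/min per 1.73 m2) from the bedside CKiD (Schwartz) equation,
    as commonly published: eGFR = 0.413 * height / Scr."""
    return 0.413 * height_cm / serum_creatinine_mg_dl

# Median serum creatinine from the abstract (0.7 mg/dl) and a hypothetical 165-cm adolescent
print(round(bedside_ckid_egfr(165.0, 0.7), 1))  # ~97.4
```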
NASA Astrophysics Data System (ADS)
Pietri, A.; Capet, X.; d'Ovidio, F.; Le Sommer, J.; Molines, J. M.; Doglioli, A. M.
2016-02-01
Vertical velocities (w) associated with meso- and submesoscale processes play an essential role in ocean dynamics and physical-biological coupling due to their impact on upper-ocean vertical exchanges. However, their small intensity (of order 1 cm/s) compared with horizontal motions and their strong variability in space and time make them very difficult to measure. Estimates of these velocities are thus usually inferred using a generalized approach based on frontogenesis theories. These estimates are often obtained by solving the diagnostic omega equation. This equation can be expressed in different forms, from a simple quasi-geostrophic formulation to more complex ones that take into account the ageostrophic advection and the turbulent fluxes. The choice of method generally depends on the data available and on the dominant processes in the region of study. Here we aim to provide a statistically robust evaluation of the scales at which the vertical velocity can be resolved with confidence, depending on the formulation of the equation and the dynamics of the flow. A high-resolution simulation (dx = 1-1.5 km) of the North Atlantic was used to compare the calculations of w based on the omega equation to the modelled vertical velocity. The simulation encompasses regions with different atmospheric forcings, mesoscale activity, seasonality, and energetic flows, allowing us to explore several different dynamical contexts. In a few years the SWOT mission will provide two-dimensional images of sea level elevation at a significantly higher resolution than available today. This work helps assess the possible contribution of the SWOT data to the understanding of the submesoscale circulation and the associated vertical fluxes in the upper ocean.
An estimating equation approach to dimension reduction for longitudinal data
Xu, Kelin; Guo, Wensheng; Xiong, Momiao; Zhu, Liping; Jin, Li
2016-01-01
Sufficient dimension reduction has been extensively explored in the context of independent and identically distributed data. In this article we generalize sufficient dimension reduction to longitudinal data and propose an estimating equation approach to estimating the central mean subspace. The proposed method accounts for the covariance structure within each subject and improves estimation efficiency when the covariance structure is correctly specified. Even if the covariance structure is misspecified, our estimator remains consistent. In addition, our method relaxes distributional assumptions on the covariates and is doubly robust. To determine the structural dimension of the central mean subspace, we propose a Bayesian-type information criterion. We show that the estimated structural dimension is consistent and that the estimated basis directions are root-$n$ consistent, asymptotically normal and locally efficient. Simulations and an analysis of the Framingham Heart Study data confirm the effectiveness of our approach. PMID:27017956
Modeling and Optimization for Morphing Wing Concept Generation
NASA Technical Reports Server (NTRS)
Skillen, Michael D.; Crossley, William A.
2007-01-01
This report consists of two major parts: 1) the approach to develop morphing wing weight equations, and 2) the approach to size morphing aircraft. Combined, these techniques allow the morphing aircraft to be sized with estimates of the morphing wing weight that are more credible than estimates currently available; aircraft sizing results prior to this study incorporated morphing wing weight estimates based on general heuristics for fixed-wing flaps (a comparable "morphing" component) but, in general, these results were unsubstantiated. This report will show that the method of morphing wing weight prediction does, in fact, drive the aircraft sizing code to different results and that accurate morphing wing weight estimates are essential to credible aircraft sizing results.
NASA Technical Reports Server (NTRS)
Banks, H. T.; Kunisch, K.
1982-01-01
Approximation results from linear semigroup theory are used to develop a general framework for convergence of approximation schemes in parameter estimation and optimal control problems for nonlinear partial differential equations. These ideas are used to establish theoretical convergence results for parameter identification using modal (eigenfunction) approximation techniques. Results from numerical investigations of these schemes for both hyperbolic and parabolic systems are given.
Rackauckas, Christopher; Nie, Qing
2017-01-01
Adaptive time-stepping with high-order embedded Runge-Kutta pairs and rejection sampling provides efficient approaches for solving differential equations. While many such methods exist for solving deterministic systems, little progress has been made for stochastic variants. One challenge in developing adaptive methods for stochastic differential equations (SDEs) is the construction of embedded schemes with direct error estimates. We present a new class of embedded stochastic Runge-Kutta (SRK) methods with strong order 1.5 which have a natural embedding of strong order 1.0 methods. This allows for the derivation of an error estimate which requires no additional function evaluations. Next we derive a general method to reject the time steps without losing information about the future Brownian path termed Rejection Sampling with Memory (RSwM). This method utilizes a stack data structure to do rejection sampling, costing only a few floating point calculations. We show numerically that the methods generate statistically-correct and tolerance-controlled solutions. Lastly, we show that this form of adaptivity can be applied to systems of equations, and demonstrate that it solves a stiff biological model 12.28x faster than common fixed timestep algorithms. Our approach only requires the solution to a bridging problem and thus lends itself to natural generalizations beyond SDEs.
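The core ideas above, an embedded pair whose error estimate costs no extra function evaluations and a step that is rejected when that estimate exceeds the tolerance, can be illustrated in the simpler deterministic setting. The sketch below uses an Euler/Heun embedded pair for an ODE; it is only an analogy for the strong order 1.0/1.5 SRK pairs and the RSwM rejection machinery described above, not an implementation of them.

```python
def adaptive_heun(f, t, y, t_end, dt=0.1, tol=1e-4):
    """Adaptive stepping with an embedded Euler (order 1) / Heun (order 2) pair.
    The difference between the two solutions is a free local error estimate;
    steps whose estimated error exceeds tol are rejected and retried with smaller dt."""
    while t < t_end - 1e-12:
        dt = min(dt, t_end - t)
        k1 = f(t, y)
        y_low = y + dt * k1                   # Euler step (embedded, lower order)
        k2 = f(t + dt, y_low)
        y_high = y + 0.5 * dt * (k1 + k2)     # Heun step (higher order)
        err = abs(y_high - y_low)
        if err <= tol:                        # accept the step
            t, y = t + dt, y_high
        # shrink or grow the step; 0.9 is a conventional safety factor
        dt *= 0.9 * (tol / max(err, 1e-14)) ** 0.5
    return y

print(adaptive_heun(lambda t, y: -y, 0.0, 1.0, 2.0))  # ~exp(-2) ≈ 0.135
```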
Smith, S. Jerrod; Lewis, Jason M.; Graves, Grant M.
2015-09-28
Generalized-least-squares multiple-linear regression analysis was used to formulate regression relations between peak-streamflow frequency statistics and basin characteristics. Contributing drainage area was the only basin characteristic determined to be statistically significant for all annual exceedance probabilities and was the only basin characteristic used in regional regression equations for estimating peak-streamflow frequency statistics on unregulated streams in and near the Oklahoma Panhandle. The regression model pseudo-coefficient of determination, converted to percent, for the Oklahoma Panhandle regional regression equations ranged from about 38 to 63 percent. The standard errors of prediction and the standard model errors for the Oklahoma Panhandle regional regression equations ranged from about 84 to 148 percent and from about 76 to 138 percent, respectively. These errors were comparable to those reported for regional peak-streamflow frequency regression equations for the High Plains areas of Texas and Colorado. The root mean square errors for the Oklahoma Panhandle regional regression equations (ranging from 3,170 to 92,000 cubic feet per second) were less than the root mean square errors for the Oklahoma statewide regression equations (ranging from 18,900 to 412,000 cubic feet per second); therefore, the Oklahoma Panhandle regional regression equations produce more accurate peak-streamflow statistic estimates for the irrigated period of record in the Oklahoma Panhandle than do the Oklahoma statewide regression equations. The regression equations developed in this report are applicable to streams that are not substantially affected by regulation, impoundment, or surface-water withdrawals. These regression equations are intended for use at stream sites with contributing drainage areas less than or equal to about 2,060 square miles, the maximum value for the independent variable used in the regression analysis.
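Regional regression equations with a single basin characteristic, as described above, typically take a log-linear (power-law) form in contributing drainage area, Q = 10^a · A^b. The sketch below shows that general form; the coefficients are placeholders for illustration, since the fitted values for the Oklahoma Panhandle equations are in the report tables, not in this abstract.

```python
def peak_flow_estimate(drainage_area_sq_mi: float, a: float, b: float) -> float:
    """Peak streamflow (ft3/s) from a single-variable regional regression of the form
    log10(Q) = a + b * log10(A), i.e. Q = 10**a * A**b."""
    return 10.0 ** a * drainage_area_sq_mi ** b

# Hypothetical coefficients for illustration only (not the report's fitted values)
print(round(peak_flow_estimate(500.0, a=2.1, b=0.55)))
```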
NASA Technical Reports Server (NTRS)
Choudhury, B. J.; Idso, S. B.; Reginato, R. J.
1986-01-01
Accurate estimates of evaporation over field-scale or larger areas are needed in hydrologic studies, irrigation scheduling, and meteorology. Remotely sensed surface temperature might be used in a model to calculate evaporation. A resistance-energy balance model, which combines an energy balance equation, the Penman-Monteith (1981) evaporation equation, and van den Honert's (1948) equation for water extraction by plant roots, is analyzed for estimating daily evaporation from wheat using postnoon canopy temperature measurements. Additional data requirements are half-hourly averages of solar radiation, air and dew point temperatures, and wind speed, along with reasonable estimates of canopy emissivity, albedo, height, and leaf area index. Evaporation fluxes were measured in the field by precision weighing lysimeters for well-watered and water-stressed wheat. Errors in computed daily evaporation were generally less than 10 percent, while errors in cumulative evaporation for 10 clear sky days were less than 5 percent for both well-watered and water-stressed wheat. Some results from sensitivity analysis of the model are also given.
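The Penman-Monteith combination equation at the heart of the model above is commonly written as LE = [Δ(Rn − G) + ρa·cp·VPD/ra] / [Δ + γ(1 + rs/ra)]. A minimal sketch of that published form follows; the input values are illustrative mid-day numbers, not data from the study.

```python
def penman_monteith_le(rn, g, delta, gamma, rho_a, cp, vpd, ra, rs):
    """Latent heat flux LE (W m-2) from the Penman-Monteith combination equation:
    LE = (delta*(Rn - G) + rho_a*cp*VPD/ra) / (delta + gamma*(1 + rs/ra)),
    where delta and gamma are in Pa/K, VPD in Pa, and ra, rs in s/m."""
    return (delta * (rn - g) + rho_a * cp * vpd / ra) / (delta + gamma * (1.0 + rs / ra))

# Illustrative values for a well-watered canopy around midday
le = penman_monteith_le(rn=500.0, g=50.0, delta=145.0, gamma=66.0,
                        rho_a=1.2, cp=1013.0, vpd=1500.0, ra=40.0, rs=70.0)
print(round(le))  # W m-2
```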
NASA Astrophysics Data System (ADS)
Kazeykina, Anna; Muñoz, Claudio
2018-04-01
We continue our study on the Cauchy problem for the two-dimensional Novikov-Veselov (NV) equation, integrable via the inverse scattering transform for the two dimensional Schrödinger operator at a fixed energy parameter. This work is concerned with the more involved case of a positive energy parameter. For the solution of the linearized equation we derive smoothing and Strichartz estimates by combining new estimates for two different frequency regimes, extending our previous results for the negative energy case [18]. The low frequency regime, which our previous result was not able to treat, is studied in detail. At non-low frequencies we also derive improved smoothing estimates with gain of almost one derivative. Then we combine the linear estimates with a Fourier decomposition method and Xs,b spaces to obtain local well-posedness of NV at positive energy in Hs, s > 1/2. Our result implies, in particular, that at least for s > 1/2, NV does not change its behavior from semilinear to quasilinear as energy changes sign, in contrast to the closely related Kadomtsev-Petviashvili equations. As a complement to our LWP results, we also provide some new explicit solutions of NV at zero energy, generalizations of the lumps solutions, which exhibit new and nonstandard long time behavior. In particular, these solutions blow up in infinite time in L2.
Mathematical and computational studies of equilibrium capillary free surfaces
NASA Technical Reports Server (NTRS)
Albright, N.; Chen, N. F.; Concus, P.; Finn, R.
1977-01-01
The results of several independent studies are presented. The general question of whether a wetting liquid always rises higher in a small capillary tube than in a larger one, when both are dipped vertically into an infinite reservoir, is considered. An analytical investigation is initiated to determine the qualitative behavior of the family of solutions of the equilibrium capillary free-surface equation that correspond to rotationally symmetric pendent liquid drops and the relationship of these solutions to the singular solution, which corresponds to an infinite spike of liquid extending downward to infinity. The block successive overrelaxation-Newton method and the generalized conjugate gradient method are investigated for solving the capillary equation on a uniform square mesh in a square domain, including the case for which the solution is unbounded at the corners. Capillary surfaces are calculated on the ellipse, on a circle with reentrant notches, and on other irregularly shaped domains using JASON, a general purpose program for solving nonlinear elliptic equations on a nonuniform quadrilateral mesh. Analytical estimates for the nonexistence of solutions of the equilibrium capillary free-surface equation on the ellipse in zero gravity are evaluated.
Estimated Perennial Streams of Idaho and Related Geospatial Datasets
Rea, Alan; Skinner, Kenneth D.
2009-01-01
The perennial or intermittent status of a stream has bearing on many regulatory requirements. Because of changing technologies over time, cartographic representation of perennial/intermittent status of streams on U.S. Geological Survey (USGS) topographic maps is not always accurate and (or) consistent from one map sheet to another. Idaho Administrative Code defines an intermittent stream as one having a 7-day, 2-year low flow (7Q2) less than 0.1 cubic feet per second. To establish consistency with the Idaho Administrative Code, the USGS developed regional regression equations for Idaho streams for several low-flow statistics, including 7Q2. Using these regression equations, the 7Q2 streamflow may be estimated for naturally flowing streams anywhere in Idaho to help determine perennial/intermittent status of streams. Using these equations in conjunction with a Geographic Information System (GIS) technique known as weighted flow accumulation allows for an automated and continuous estimation of 7Q2 streamflow at all points along a stream, which in turn can be used to determine if a stream is intermittent or perennial according to the Idaho Administrative Code operational definition. The selected regression equations were applied to create continuous grids of 7Q2 estimates for the eight low-flow regression regions of Idaho. By applying the 0.1 ft3/s criterion, the perennial streams have been estimated in each low-flow region. Uncertainty in the estimates is shown by identifying a 'transitional' zone, corresponding to flow estimates of 0.1 ft3/s plus and minus one standard error. Considerable additional uncertainty exists in the model of perennial streams presented in this report. The regression models provide overall estimates based on general trends within each regression region. These models do not include local factors such as a large spring or a losing reach that may greatly affect flows at any given point. Site-specific flow data, assuming a sufficient period of record, generally would be considered to represent flow conditions better at a given site than flow estimates based on regionalized regression models. The geospatial datasets of modeled perennial streams are considered a first-cut estimate, and should not be construed to override site-specific flow data.
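The classification logic described above reduces to comparing the estimated 7Q2 at a point to the 0.1 ft3/s criterion, with a transitional zone of 0.1 ft3/s plus or minus one standard error. A minimal sketch, assuming the 7Q2 estimate is already available from the regional regression and expressing the standard error as a hypothetical fraction of the threshold:

```python
def classify_stream(q7q2_cfs: float, se_fraction: float = 0.5) -> str:
    """Classify a stream point by the Idaho Administrative Code criterion:
    perennial if estimated 7Q2 exceeds 0.1 ft3/s, intermittent if below, with a
    'transitional' band of 0.1 ft3/s plus or minus one standard error.
    se_fraction is a hypothetical standard error as a fraction of the threshold."""
    lower, upper = 0.1 * (1.0 - se_fraction), 0.1 * (1.0 + se_fraction)
    if q7q2_cfs < lower:
        return "intermittent"
    if q7q2_cfs > upper:
        return "perennial"
    return "transitional"

print(classify_stream(0.02), classify_stream(0.12), classify_stream(0.4))
```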
Structural Equation Models in a Redundancy Analysis Framework With Covariates.
Lovaglio, Pietro Giorgio; Vittadini, Giorgio
2014-01-01
A recent method to specify and fit structural equation modeling in the Redundancy Analysis framework based on so-called Extended Redundancy Analysis (ERA) has been proposed in the literature. In this approach, the relationships between the observed exogenous variables and the observed endogenous variables are moderated by the presence of unobservable composites, estimated as linear combinations of exogenous variables. However, in the presence of direct effects linking exogenous and endogenous variables, or concomitant indicators, the composite scores are estimated by ignoring the presence of the specified direct effects. To fit structural equation models, we propose a new specification and estimation method, called Generalized Redundancy Analysis (GRA), allowing us to specify and fit a variety of relationships among composites, endogenous variables, and external covariates. The proposed methodology extends the ERA method, using a more suitable specification and estimation algorithm, by allowing for covariates that affect endogenous indicators indirectly through the composites and/or directly. To illustrate the advantages of GRA over ERA we propose a simulation study of small samples. Moreover, we propose an application aimed at estimating the impact of formal human capital on the initial earnings of graduates of an Italian university, utilizing a structural model consistent with well-established economic theory.
A generalized estimating equations approach for resting-state functional MRI group analysis.
D'Angelo, Gina M; Lazar, Nicole A; Eddy, William F; Morris, John C; Sheline, Yvette I
2011-01-01
An Alzheimer's fMRI study has motivated us to evaluate inter-regional correlations between groups. The overall objective is to assess inter-regional correlations at a resting-state with no stimulus or task. We propose using a generalized estimating equation (GEE) transition model and a GEE marginal model to model the within-subject correlation for each region. Residuals calculated from the GEE models are used to correlate brain regions and assess between group differences. The standard pooling approach of group averages of the Fisher-z transformation assuming temporal independence is a typical approach used to compare group correlations. The GEE approaches and standard Fisher-z pooling approach are demonstrated with an Alzheimer's disease (AD) connectivity study in a population of AD subjects and healthy control subjects. We also compare these methods using simulation studies and show that the transition model may have better statistical properties.
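The "standard pooling approach" the authors compare against, group-averaging the Fisher-z transform of subject-level correlations and testing the group difference, can be sketched as below. The z-based test assumes temporal independence within subjects, which is exactly the assumption the GEE models are meant to relax; the correlation values shown are illustrative, not study data.

```python
import numpy as np
from scipy import stats

def group_fisher_z(correlations):
    """Average of Fisher-z transformed subject-level inter-regional correlations."""
    return np.mean(np.arctanh(np.asarray(correlations)))

def compare_groups(r_group_a, r_group_b):
    """Two-sample t-test on Fisher-z transformed correlations (assumes independence)."""
    za, zb = np.arctanh(np.asarray(r_group_a)), np.arctanh(np.asarray(r_group_b))
    return stats.ttest_ind(za, zb)

# Illustrative subject-level correlations between two regions
ad = [0.35, 0.40, 0.28, 0.31]   # hypothetical AD group
hc = [0.55, 0.61, 0.48, 0.52]   # hypothetical control group
print(group_fisher_z(ad), group_fisher_z(hc), compare_groups(ad, hc))
```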
Conservation laws with coinciding smooth solutions but different conserved variables
NASA Astrophysics Data System (ADS)
Colombo, Rinaldo M.; Guerra, Graziano
2018-04-01
Consider two hyperbolic systems of conservation laws in one space dimension with the same eigenvalues and (right) eigenvectors. We prove that solutions to Cauchy problems with the same initial data differ at third order in the total variation of the initial datum. As a first application, relying on the classical Glimm-Lax result (Glimm and Lax in Decay of solutions of systems of nonlinear hyperbolic conservation laws. Memoirs of the American Mathematical Society, No. 101. American Mathematical Society, Providence, 1970), we obtain estimates improving those in Saint-Raymond (Arch Ration Mech Anal 155(3):171-199, 2000) on the distance between solutions to the isentropic and non-isentropic inviscid compressible Euler equations, under general equations of state. Further applications are to the general scalar case, where rather precise estimates are obtained, to an approximation by Di Perna of the p-system and to a traffic model.
Peak-flow frequency relations and evaluation of the peak-flow gaging network in Nebraska
Soenksen, Philip J.; Miller, Lisa D.; Sharpe, Jennifer B.; Watton, Jason R.
1999-01-01
Estimates of peak-flow magnitude and frequency are required for the efficient design of structures that convey flood flows or occupy floodways, such as bridges, culverts, and roads. The U.S. Geological Survey, in cooperation with the Nebraska Department of Roads, conducted a study to update peak-flow frequency analyses for selected streamflow-gaging stations, develop a new set of peak-flow frequency relations for ungaged streams, and evaluate the peak-flow gaging-station network for Nebraska. Data from stations located in or within about 50 miles of Nebraska were analyzed using guidelines of the Interagency Advisory Committee on Water Data in Bulletin 17B. New generalized skew relations were developed for use in frequency analyses of unregulated streams. Thirty-three drainage-basin characteristics related to morphology, soils, and precipitation were quantified using a geographic information system, related computer programs, and digital spatial data. For unregulated streams, eight sets of regional regression equations relating drainage-basin to peak-flow characteristics were developed for seven regions of the state using a generalized least squares procedure. Two sets of regional peak-flow frequency equations were developed for basins with average soil permeability greater than 4 inches per hour, and six sets of equations were developed for specific geographic areas, usually based on drainage-basin boundaries. Standard errors of estimate for the 100-year frequency equations (1-percent probability) ranged from 12.1 to 63.8 percent. For regulated reaches of nine streams, graphs of peak flow for standard frequencies and distance upstream of the mouth were estimated. The regional networks of streamflow-gaging stations on unregulated streams were analyzed to evaluate how additional data might affect the average sampling errors of the newly developed peak-flow equations for the 100-year frequency occurrence. Results indicated that data from new stations, rather than more data from existing stations, probably would produce the greatest reduction in average sampling errors of the equations.
Martin, Gary R.; Fowler, Kathleen K.; Arihood, Leslie D.
2016-09-06
Information on low-flow characteristics of streams is essential for the management of water resources. This report provides equations for estimating the 1-, 7-, and 30-day mean low flows for a recurrence interval of 10 years and the harmonic-mean flow at ungaged, unregulated stream sites in Indiana. These equations were developed using the low-flow statistics and basin characteristics for 108 continuous-record streamgages in Indiana with at least 10 years of daily mean streamflow data through the 2011 climate year (April 1 through March 31). The equations were developed in cooperation with the Indiana Department of Environmental Management. Regression techniques were used to develop the equations for estimating low-flow frequency statistics and the harmonic-mean flows on the basis of drainage-basin characteristics. A geographic information system was used to measure basin characteristics for selected streamgages. A final set of 25 basin characteristics measured at all the streamgages were evaluated to choose the best predictors of the low-flow statistics. Logistic-regression equations applicable statewide are presented for estimating the probability that selected low-flow frequency statistics equal zero. These equations use the explanatory variables total drainage area, average transmissivity of the full thickness of the unconsolidated deposits within 1,000 feet of the stream network, and latitude of the basin outlet. The percentage of the streamgage low-flow statistics correctly classified as zero or nonzero using the logistic-regression equations ranged from 86.1 to 88.9 percent. Generalized-least-squares regression equations applicable statewide for estimating nonzero low-flow frequency statistics use total drainage area, the average hydraulic conductivity of the top 70 feet of unconsolidated deposits, the slope of the basin, and the index of permeability and thickness of the Quaternary surficial sediments as explanatory variables. The average standard error of prediction of these regression equations ranges from 55.7 to 61.5 percent. Regional weighted-least-squares regression equations were developed for estimating the harmonic-mean flows by dividing the State into three low-flow regions. The Northern region uses total drainage area and the average transmissivity of the entire thickness of unconsolidated deposits as explanatory variables. The Central region uses total drainage area, the average hydraulic conductivity of the entire thickness of unconsolidated deposits, and the index of permeability and thickness of the Quaternary surficial sediments. The Southern region uses total drainage area and the percent of the basin covered by forest. The average standard error of prediction for these equations ranges from 39.3 to 66.7 percent. The regional regression equations are applicable only to stream sites with low flows unaffected by regulation and to stream sites with drainage basin characteristic values within specified limits. Caution is advised when applying the equations for basins with characteristics near the applicable limits and for basins with karst drainage features and for urbanized basins. Extrapolations near and beyond the applicable basin characteristic limits will have unknown errors that may be large. Equations are presented for use in estimating the 90-percent prediction interval of the low-flow statistics estimated by use of the regression equations at a given stream site. The regression equations are to be incorporated into the U.S. Geological Survey StreamStats Web-based application for Indiana. StreamStats allows users to select a stream site on a map and automatically measure the needed basin characteristics and compute the estimated low-flow statistics and associated prediction intervals.
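The two-stage structure described above, a statewide logistic regression for the probability that a low-flow statistic equals zero followed by a generalized-least-squares equation for the nonzero magnitude, can be sketched as follows. All coefficients and variable values below are placeholders for illustration only; the fitted values are tabulated in the report itself.

```python
import math

def prob_zero_flow(log_area, transmissivity, latitude, b0=-2.0, b1=-1.5, b2=-0.8, b3=0.1):
    """Logistic regression for the probability that a low-flow statistic equals zero.
    Coefficients are hypothetical placeholders, not the report's fitted values."""
    eta = b0 + b1 * log_area + b2 * transmissivity + b3 * latitude
    return 1.0 / (1.0 + math.exp(-eta))

def nonzero_low_flow(area_sq_mi, k_hydraulic, slope, perm_index,
                     a=-1.0, b=1.1, c=0.4, d=0.3, e=0.2):
    """Log-linear (GLS-style) equation for a nonzero low-flow statistic, in ft3/s.
    Coefficients are again placeholders for illustration only."""
    return 10.0 ** (a + b * math.log10(area_sq_mi) + c * math.log10(k_hydraulic)
                    + d * math.log10(slope) + e * math.log10(perm_index))

p0 = prob_zero_flow(log_area=2.0, transmissivity=1.2, latitude=40.0)
q = nonzero_low_flow(area_sq_mi=100.0, k_hydraulic=50.0, slope=2.5, perm_index=3.0)
print(round(p0, 3), round(q, 2))
```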
Synthesizing Risk from Summary Evidence Across Multiple Risk Factors.
Shrier, Ian; Colditz, Graham A; Steele, Russell J
2018-07-01
Although meta-analyses provide summary effect estimates that help advise patient care, patients often want to compare their overall health to the general population. The Harvard Cancer Risk Index was published in 2004 and uses risk ratio estimates and prevalence estimates from original studies across many risk factors to provide an answer to this question. However, the published version of the formula only uses dichotomous risk factors and its derivation was not provided. The objective of this brief report was to provide the derivation of a more general form of the equation that allows the incorporation of risk factors with three or more levels.
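One plausible general form consistent with the description above compares an individual's risk to the population average: for a risk factor with levels i having relative risk RR_i and prevalence p_i, the individual's relative-to-average risk for that factor is RR_j / Σ_i p_i·RR_i, and factor-specific ratios are multiplied under an independence assumption. The sketch below is our reading of that construction, with made-up prevalences and relative risks; it is not the published derivation.

```python
def relative_to_average_risk(factors):
    """Each factor is (individual_level_index, [(prevalence, relative_risk), ...]).
    Returns the individual's risk relative to the population average, assuming
    independent risk factors: product over factors of RR(individual) / sum_i p_i*RR_i."""
    ratio = 1.0
    for level, levels in factors:
        avg_rr = sum(p * rr for p, rr in levels)
        ratio *= levels[level][1] / avg_rr
    return ratio

# Illustrative three-level exposure factor and two-level family-history factor
factors = [
    (2, [(0.60, 1.0), (0.25, 1.5), (0.15, 3.0)]),  # individual is in the highest level
    (0, [(0.85, 1.0), (0.15, 2.0)]),               # individual is in the baseline level
]
print(round(relative_to_average_risk(factors), 2))  # ~1.83
```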
ERIC Educational Resources Information Center
Olsson, Ulf Henning; Foss, Tron; Troye, Sigurd V.; Howell, Roy D.
2000-01-01
Used simulation to demonstrate how the choice of estimation method affects indexes of fit and parameter bias for different sample sizes when nested models vary in terms of specification error and the data demonstrate different levels of kurtosis. Discusses results for maximum likelihood (ML), generalized least squares (GLS), and weighted least…
An Inverse Problem for a Class of Conditional Probability Measure-Dependent Evolution Equations
Mirzaev, Inom; Byrne, Erin C.; Bortz, David M.
2016-01-01
We investigate the inverse problem of identifying a conditional probability measure in measure-dependent evolution equations arising in size-structured population modeling. We formulate the inverse problem as a least squares problem for the probability measure estimation. Using the Prohorov metric framework, we prove existence and consistency of the least squares estimates and outline a discretization scheme for approximating a conditional probability measure. For this scheme, we prove general method stability. The work is motivated by Partial Differential Equation (PDE) models of flocculation for which the shape of the post-fragmentation conditional probability measure greatly impacts the solution dynamics. To illustrate our methodology, we apply the theory to a particular PDE model that arises in the study of population dynamics for flocculating bacterial aggregates in suspension, and provide numerical evidence for the utility of the approach. PMID:28316360
Deletion Diagnostics for Alternating Logistic Regressions
Preisser, John S.; By, Kunthel; Perin, Jamie; Qaqish, Bahjat F.
2013-01-01
Deletion diagnostics are introduced for the regression analysis of clustered binary outcomes estimated with alternating logistic regressions, an implementation of generalized estimating equations (GEE) that estimates regression coefficients in a marginal mean model and in a model for the intracluster association given by the log odds ratio. The diagnostics are developed within an estimating equations framework that recasts the estimating functions for association parameters based upon conditional residuals into equivalent functions based upon marginal residuals. Extensions of earlier work on GEE diagnostics follow directly, including computational formulae for one-step deletion diagnostics that measure the influence of a cluster of observations on the estimated regression parameters and on the overall marginal mean or association model fit. The diagnostic formulae are evaluated with simulation studies and with an application concerning an assessment of factors associated with health maintenance visits in primary care medical practices. The application and the simulations demonstrate that the proposed cluster-deletion diagnostics for alternating logistic regressions are good approximations of their exact fully iterated counterparts. PMID:22777960
NASA Astrophysics Data System (ADS)
Sun, Dihua; Chen, Dong; Zhao, Min; Liu, Weining; Zheng, Linjiang
2018-07-01
In this paper, a general nonlinear car-following model with multiple time delays is investigated in order to describe the reaction of a vehicle to driving behavior. Platoon stability and string stability criteria are obtained for the general nonlinear car-following model. The Burgers equation and the Korteweg-de Vries (KdV) equation and their solitary wave solutions are derived using the reductive perturbation method. We investigate the properties of a typical optimal velocity model using both analytic and numerical methods, estimating the impact of delays on the evolution of traffic congestion. The numerical results show that the stability of traffic flow is more sensitive to time delays in sensing relative motion than to time delays in sensing host motion.
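To make the delayed car-following setup concrete, the sketch below integrates a standard optimal velocity model on a ring road with a delay in the headway sensed by each driver. The OV function and parameter values are common textbook choices, not the ones analyzed in the paper.

```python
import numpy as np

def ov(h):
    """A commonly used optimal velocity function: V(h) = tanh(h - 2) + tanh(2)."""
    return np.tanh(h - 2.0) + np.tanh(2.0)

def headway(x, road):
    """Headway to the car ahead on a ring road of length `road`."""
    h = np.roll(x, -1) - x
    h[h <= 0] += road
    return h

def simulate_ring(n=20, road=40.0, a=1.0, tau=0.5, dt=0.05, steps=4000, seed=0):
    """Euler integration of dv_i/dt = a * (V(h_i(t - tau)) - v_i(t)),
    where each driver responds to a delayed headway; tau is the sensing delay."""
    rng = np.random.default_rng(seed)
    lag = max(1, int(round(tau / dt)))
    x = np.linspace(0.0, road, n, endpoint=False) + 0.01 * rng.standard_normal(n)
    v = np.full(n, ov(road / n))
    history = [headway(x, road)] * (lag + 1)   # constant pre-history for the delay
    for _ in range(steps):
        history.append(headway(x, road))
        v = v + dt * a * (ov(history[-(lag + 1)]) - v)
        x = (x + dt * v) % road
    return v

v = simulate_ring()
print(v.min(), v.max())  # a wide spread of speeds signals emerging stop-and-go waves
```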
Simmons, Rebecca K.; Coleman, Ruth L.; Price, Hermione C.; Holman, Rury R.; Khaw, Kay-Tee; Wareham, Nicholas J.; Griffin, Simon J.
2009-01-01
OBJECTIVE: The purpose of this study was to examine the performance of the UK Prospective Diabetes Study (UKPDS) Risk Engine (version 3) and the Framingham risk equations (2008) in estimating cardiovascular disease (CVD) incidence in three populations: 1) individuals with known diabetes; 2) individuals with nondiabetic hyperglycemia, defined as A1C ≥6.0%; and 3) individuals with normoglycemia defined as A1C <6.0%. RESEARCH DESIGN AND METHODS: This was a population-based prospective cohort (European Prospective Investigation of Cancer-Norfolk). Participants aged 40–79 years recruited from U.K. general practices attended a health examination (1993–1998) and were followed for CVD events/death until April 2007. CVD risk estimates were calculated for 10,137 individuals. RESULTS: Over 10.1 years, there were 69 CVD events in the diabetes group (25.4%), 160 in the hyperglycemia group (17.7%), and 732 in the normoglycemia group (8.2%). Estimated CVD 10-year risk in the diabetes group was 33 and 37% using the UKPDS and Framingham equations, respectively. In the hyperglycemia group, estimated CVD risks were 31 and 22%, respectively, and for the normoglycemia group risks were 20 and 14%, respectively. There were no significant differences in the ability of the risk equations to discriminate between individuals at different risk of CVD events in each subgroup; both equations overestimated CVD risk. The Framingham equations performed better in the hyperglycemia and normoglycemia groups as they did not overestimate risk as much as the UKPDS Risk Engine, and they classified more participants correctly. CONCLUSIONS: Both the UKPDS Risk Engine and Framingham risk equations were moderately effective at ranking individuals and are therefore suitable for resource prioritization. However, both overestimated true risk, which is important when one is using scores to communicate prognostic information to individuals. PMID:19114615
NASA Technical Reports Server (NTRS)
Eckert, W. T.; Mort, K. W.; Jope, J.
1976-01-01
General guidelines are given for the design of diffusers, contractions, corners, and the inlets and exits of non-return tunnels. A system of equations, reflecting the current technology, has been compiled and assembled into a computer program (a user's manual for this program is included) for determining the total pressure losses. The formulation presented is applicable to compressible flow through most closed- or open-throat, single-, double-, or non-return wind tunnels. A comparison of estimated performance with that actually achieved by several existing facilities produced generally good agreement.
On the applicability of integrated circuit technology to general aviation orientation estimation
NASA Technical Reports Server (NTRS)
Debra, D. B.; Tashker, M. G.
1976-01-01
Criteria for the significant values of the panel instruments used in general aviation were examined, and kinematic equations were added for comparison. An instrument survey was performed to establish the present state of the art in linear and angular accelerometers, pressure transducers, and magnetometers. A preliminary evaluation was made of the computers available for data evaluation and estimator mechanization. The mathematical model of a light twin aircraft employed in the evaluation was documented, and the results of the sensor survey and of the design studies were presented.
Estimation of Magnitude and Frequency of Floods for Streams on the Island of Oahu, Hawaii
Wong, Michael F.
1994-01-01
This report describes techniques for estimating the magnitude and frequency of floods for the island of Oahu. The log-Pearson Type III distribution and methodology recommended by the Interagency Committee on Water Data was used to determine the magnitude and frequency of floods at 79 gaging stations that had 11 to 72 years of record. Multiple regression analysis was used to construct regression equations to transfer the magnitude and frequency information from gaged sites to ungaged sites. Oahu was divided into three hydrologic regions to define relations between peak discharge and drainage-basin and climatic characteristics. Regression equations are provided to estimate the 2-, 5-, 10-, 25-, 50-, and 100-year peak discharges at ungaged sites. Significant basin and climatic characteristics included in the regression equations are drainage area, median annual rainfall, and the 2-year, 24-hour rainfall intensity. Drainage areas for sites used in this study ranged from 0.03 to 45.7 square miles. Standard error of prediction for the regression equations ranged from 34 to 62 percent. Peak-discharge data collected through water year 1988, geographic information system (GIS) technology, and generalized least-squares regression were used in the analyses. The use of GIS seems to be a more flexible and consistent means of defining and calculating basin and climatic characteristics than using manual methods. Standard errors of estimate for the regression equations in this report are an average of 8 percent less than those published in previous studies.
Optimal estimation for discrete time jump processes
NASA Technical Reports Server (NTRS)
Vaca, M. V.; Tretter, S. A.
1978-01-01
Optimum estimates of nonobservable random variables or random processes which influence the rate functions of a discrete time jump process (DTJP) are derived. The approach used is based on the a posteriori probability of a nonobservable event expressed in terms of the a priori probability of that event and of the sample function probability of the DTJP. Thus a general representation is obtained for optimum estimates, and recursive equations are derived for minimum mean-squared error (MMSE) estimates. In general, MMSE estimates are nonlinear functions of the observations. The problem is considered of estimating the rate of a DTJP when the rate is a random variable with a beta probability density function and the jump amplitudes are binomially distributed. It is shown that the MMSE estimates are linear. The class of beta density functions is rather rich and explains why there are insignificant differences between optimum unconstrained and linear MMSE estimates in a variety of problems.
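The linearity result quoted above has a familiar conjugate-prior flavor: with a Beta(α, β) prior on the jump probability and binomially distributed jump counts, the posterior mean, which is the MMSE estimate, is an affine function of the observed count. A minimal sketch of that update, offered as our illustration of why linearity is plausible rather than the paper's derivation:

```python
def beta_binomial_mmse(alpha: float, beta: float, successes: int, trials: int) -> float:
    """Posterior mean (MMSE estimate) of a rate p with a Beta(alpha, beta) prior after
    observing `successes` jumps in `trials` opportunities: (alpha + s) / (alpha + beta + n).
    The estimate is affine in the observation s, i.e. a linear estimator."""
    return (alpha + successes) / (alpha + beta + trials)

print(beta_binomial_mmse(2.0, 8.0, successes=3, trials=10))  # 0.25
```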
Sando, Roy; Sando, Steven K.; McCarthy, Peter M.; Dutton, DeAnn M.
2016-04-05
The U.S. Geological Survey (USGS), in cooperation with the Montana Department of Natural Resources and Conservation, completed a study to update methods for estimating peak-flow frequencies at ungaged sites in Montana based on peak-flow data at streamflow-gaging stations through water year 2011. The methods allow estimation of peak-flow frequencies (that is, peak-flow magnitudes, in cubic feet per second, associated with annual exceedance probabilities of 66.7, 50, 42.9, 20, 10, 4, 2, 1, 0.5, and 0.2 percent) at ungaged sites. The annual exceedance probabilities correspond to 1.5-, 2-, 2.33-, 5-, 10-, 25-, 50-, 100-, 200-, and 500-year recurrence intervals, respectively.Regional regression analysis is a primary focus of Chapter F of this Scientific Investigations Report, and regression equations for estimating peak-flow frequencies at ungaged sites in eight hydrologic regions in Montana are presented. The regression equations are based on analysis of peak-flow frequencies and basin characteristics at 537 streamflow-gaging stations in or near Montana and were developed using generalized least squares regression or weighted least squares regression.All of the data used in calculating basin characteristics that were included as explanatory variables in the regression equations were developed for and are available through the USGS StreamStats application (http://water.usgs.gov/osw/streamstats/) for Montana. StreamStats is a Web-based geographic information system application that was created by the USGS to provide users with access to an assortment of analytical tools that are useful for water-resource planning and management. The primary purpose of the Montana StreamStats application is to provide estimates of basin characteristics and streamflow characteristics for user-selected ungaged sites on Montana streams. The regional regression equations presented in this report chapter can be conveniently solved using the Montana StreamStats application.Selected results from this study were compared with results of previous studies. For most hydrologic regions, the regression equations reported for this study had lower mean standard errors of prediction (in percent) than the previously reported regression equations for Montana. The equations presented for this study are considered to be an improvement on the previously reported equations primarily because this study (1) included 13 more years of peak-flow data; (2) included 35 more streamflow-gaging stations than previous studies; (3) used a detailed geographic information system (GIS)-based definition of the regulation status of streamflow-gaging stations, which allowed better determination of the unregulated peak-flow records that are appropriate for use in the regional regression analysis; (4) included advancements in GIS and remote-sensing technologies, which allowed more convenient calculation of basin characteristics and investigation of many more candidate basin characteristics; and (5) included advancements in computational and analytical methods, which allowed more thorough and consistent data analysis.This report chapter also presents other methods for estimating peak-flow frequencies at ungaged sites. Two methods for estimating peak-flow frequencies at ungaged sites located on the same streams as streamflow-gaging stations are described. 
Additionally, envelope curves relating maximum recorded annual peak flows to contributing drainage area for each of the eight hydrologic regions in Montana are presented and compared to a national envelope curve. In addition to providing general information on characteristics of large peak flows, the regional envelope curves can be used to assess the reasonableness of peak-flow frequency estimates determined using the regression equations.
Regional regression equations for estimation of natural streamflow statistics in Colorado
Capesius, Joseph P.; Stephens, Verlin C.
2009-01-01
The U.S. Geological Survey (USGS), in cooperation with the Colorado Water Conservation Board and the Colorado Department of Transportation, developed regional regression equations for estimation of various streamflow statistics that are representative of natural streamflow conditions at ungaged sites in Colorado. The equations define the statistical relations between streamflow statistics (response variables) and basin and climatic characteristics (predictor variables). The equations were developed using generalized least-squares and weighted least-squares multilinear regression reliant on logarithmic variable transformation. Streamflow statistics were derived from at least 10 years of streamflow data through about 2007 from selected USGS streamflow-gaging stations in the study area that are representative of natural-flow conditions. Basin and climatic characteristics used for equation development are drainage area, mean watershed elevation, mean watershed slope, percentage of drainage area above 7,500 feet of elevation, mean annual precipitation, and 6-hour, 100-year precipitation. For each of five hydrologic regions in Colorado, peak-streamflow equations that are based on peak-streamflow data from selected stations are presented for the 2-, 5-, 10-, 25-, 50-, 100-, 200-, and 500-year instantaneous-peak streamflows. For four of the five hydrologic regions, equations based on daily-mean streamflow data from selected stations are presented for 7-day minimum 2-, 10-, and 50-year streamflows and for 7-day maximum 2-, 10-, and 50-year streamflows. Other equations presented for the same four hydrologic regions include those for estimation of annual- and monthly-mean streamflow and streamflow-duration statistics for exceedances of 10, 25, 50, 75, and 90 percent. All equations are reported along with salient diagnostic statistics, ranges of basin and climatic characteristics on which each equation is based, and commentary on potential bias, identified from interpretation of residual plots, that is not otherwise removed by log-transformation of the variables of the equations. The predictor-variable ranges can be used to assess equation applicability for ungaged sites in Colorado.
A note on implementation of decaying product correlation structures for quasi-least squares.
Shults, Justine; Guerra, Matthew W
2014-08-30
This note implements an unstructured decaying product matrix via the quasi-least squares approach for estimation of the correlation parameters in the framework of generalized estimating equations. The structure we consider is fairly general without requiring the large number of parameters that are involved in a fully unstructured matrix. It is straightforward to show that the quasi-least squares estimators of the correlation parameters yield feasible values for the unstructured decaying product structure. Furthermore, subject to conditions that are easily checked, the quasi-least squares estimators are valid for longitudinal Bernoulli data. We demonstrate implementation of the structure in a longitudinal clinical trial with both a continuous and binary outcome variable. Copyright © 2014 John Wiley & Sons, Ltd.
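The unstructured decaying product structure can be pictured as a correlation matrix whose (j, k) entry is the product of the adjacent-lag parameters between occasions j and k, an AR(1)-like pattern with a separate parameter for each gap. A small sketch of building such a matrix follows; it illustrates the structure only, not the quasi-least squares estimation itself.

```python
import numpy as np

def decaying_product_corr(alphas):
    """Correlation matrix for an unstructured decaying product structure:
    corr(y_j, y_k) = prod_{m=j}^{k-1} alpha_m for j < k (AR(1) is the special case
    in which all alpha_m are equal)."""
    t = len(alphas) + 1
    r = np.eye(t)
    for j in range(t):
        for k in range(j + 1, t):
            r[j, k] = r[k, j] = np.prod(alphas[j:k])
    return r

print(decaying_product_corr([0.8, 0.6, 0.9]))
```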
Adaptive mesh strategies for the spectral element method
NASA Technical Reports Server (NTRS)
Mavriplis, Catherine
1992-01-01
An adaptive spectral method was developed for the efficient solution of time dependent partial differential equations. Adaptive mesh strategies that include resolution refinement and coarsening by three different methods are illustrated on solutions to the 1-D viscous Burger equation and the 2-D Navier-Stokes equations for driven flow in a cavity. Sharp gradients, singularities, and regions of poor resolution are resolved optimally as they develop in time using error estimators which indicate the choice of refinement to be used. The adaptive formulation presents significant increases in efficiency, flexibility, and general capabilities for high order spectral methods.
An overview of longitudinal data analysis methods for neurological research.
Locascio, Joseph J; Atri, Alireza
2011-01-01
The purpose of this article is to provide a concise, broad and readily accessible overview of longitudinal data analysis methods, aimed to be a practical guide for clinical investigators in neurology. In general, we advise that older, traditional methods, including (1) simple regression of the dependent variable on a time measure, (2) analyzing a single summary subject level number that indexes changes for each subject and (3) a general linear model approach with a fixed-subject effect, should be reserved for quick, simple or preliminary analyses. We advocate the general use of mixed-random and fixed-effect regression models for analyses of most longitudinal clinical studies. Under restrictive situations or to provide validation, we recommend: (1) repeated-measure analysis of covariance (ANCOVA), (2) ANCOVA for two time points, (3) generalized estimating equations and (4) latent growth curve/structural equation models.
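As a concrete illustration of the recommended mixed-effects approach, the sketch below fits a random-intercept model of a longitudinal outcome on time with statsmodels; the column names (`score`, `years`, `subject`) and the data are hypothetical.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical long-format data: one row per subject per visit
df = pd.DataFrame({
    "subject": [1, 1, 1, 2, 2, 2, 3, 3, 3, 4, 4, 4, 5, 5, 5],
    "years":   [0, 1, 2] * 5,
    "score":   [28, 27, 25, 29, 29, 28, 26, 24, 21, 30, 29, 29, 27, 25, 24],
})

# Mixed-effects regression: fixed effect of time, random intercept per subject;
# a random slope could be added with re_formula="~years"
model = smf.mixedlm("score ~ years", df, groups=df["subject"])
print(model.fit().summary())
```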
Waltemeyer, Scott D.
2006-01-01
Estimates of the magnitude and frequency of peak discharges are necessary for reliable flood-hazard mapping in the Navajo Nation in Arizona, Utah, Colorado, and New Mexico. The Bureau of Indian Affairs, U.S. Army Corps of Engineers, and Navajo Nation requested that the U.S. Geological Survey update estimates of peak-discharge magnitude for gaging stations in the region and update regional equations for estimation of peak discharge and frequency at ungaged sites. Equations were developed for estimating the magnitude of peak discharges for recurrence intervals of 2, 5, 10, 25, 50, 100, and 500 years at ungaged sites using data collected through 1999 at 146 gaging stations, providing an additional 13 years of peak-discharge data since a 1997 investigation that used gaging-station data through 1986. The equations for estimation of peak discharges at ungaged sites were developed for flood regions 8, 11, high elevation, and 6, which are delineated on the basis of the hydrologic codes from the 1997 investigation. Peak discharges for selected recurrence intervals were determined at gaging stations by fitting observed data to a log-Pearson Type III distribution with adjustments for a low-discharge threshold and a zero skew coefficient. A low-discharge threshold was applied to frequency analysis of 82 of the 146 gaging stations. This application provides an improved fit of the log-Pearson Type III frequency distribution. Use of the low-discharge threshold generally eliminated the peak discharge having a recurrence interval of less than 1.4 years in the probability-density function. Within each region, logarithms of the peak discharges for selected recurrence intervals were related to logarithms of basin and climatic characteristics using stepwise ordinary least-squares regression techniques for exploratory data analysis. Generalized least-squares regression techniques, an improved regression procedure that accounts for time and spatial sampling errors, then were applied to the same data used in the ordinary least-squares regression analyses. The average standard error of prediction for the peak discharge having a recurrence interval of 100 years for region 8 was 53 percent. The average standard error of prediction, which includes average sampling error and average standard error of regression, ranged from 45 to 83 percent for the 100-year flood. The estimated standard error of prediction for a hybrid method for region 11 was large in the 1997 investigation. No distinction of floods produced from a high-elevation region was presented in the 1997 investigation. Overall, the equations based on generalized least-squares regression techniques are considered to be more reliable than those in the 1997 report because of the increased length of record and improved GIS method. Flood-frequency relations can be transferred to ungaged sites on the same stream: peak discharges can be estimated at an ungaged site by direct application of the regional regression equation, or, for an ungaged site on a stream that has a gaging station upstream or downstream, by using the drainage-area ratio and the drainage-area exponent from the regional regression equation of the respective region.
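The transfer technique mentioned in the last sentence scales a gaged site's peak-flow estimate by the drainage-area ratio raised to the regression equation's drainage-area exponent. A minimal sketch, with a hypothetical exponent value:

```python
def transfer_peak_flow(q_gaged: float, area_gaged: float, area_ungaged: float, b: float) -> float:
    """Transfer a peak-flow estimate from a gaged site to an ungaged site on the same
    stream: Q_ungaged = Q_gaged * (A_ungaged / A_gaged) ** b, where b is the
    drainage-area exponent from the regional regression equation."""
    return q_gaged * (area_ungaged / area_gaged) ** b

# 100-year peak of 5,000 ft3/s at a gage draining 120 mi2; ungaged site drains 90 mi2;
# b = 0.55 is a hypothetical exponent, not a value from the report
print(round(transfer_peak_flow(5000.0, 120.0, 90.0, 0.55)))
```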
An, Shengli; Zhang, Yanhong; Chen, Zheng
2012-12-01
To analyze binary repeated-measurement data with generalized estimating equations (GEE) and generalized linear mixed models (GLMMs) using SPSS 19.0, GEE and GLMM models were tested on a sample of binary repeated-measurement data in SPSS 19.0. Compared with SAS, SPSS 19.0 allowed convenient analysis of categorical repeated-measurement data using GEE and GLMMs.
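An analogous GEE analysis can be sketched outside SPSS; below is a statsmodels version for a binary repeated-measures outcome with a logit link and an exchangeable working correlation. The data and column names (`y`, `time`, `group`, `id`) are hypothetical, and this is offered only as an illustration of the same modeling idea, not the study's analysis.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hypothetical binary repeated-measures data: 3 visits for each of 40 subjects
rng = np.random.default_rng(0)
n, visits = 40, 3
df = pd.DataFrame({
    "id": np.repeat(np.arange(n), visits),
    "time": np.tile(np.arange(visits), n),
    "group": np.repeat(rng.integers(0, 2, n), visits),
})
df["y"] = rng.binomial(1, 0.3 + 0.2 * df["group"])

# GEE with a logit link and exchangeable within-subject working correlation
model = sm.GEE.from_formula("y ~ time + group", groups="id", data=df,
                            family=sm.families.Binomial(),
                            cov_struct=sm.cov_struct.Exchangeable())
print(model.fit().summary())
```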
NASA Astrophysics Data System (ADS)
Tlidi, M.; Averlant, E.; Vladimirov, A.; Panajotov, K.
2012-09-01
We consider a broad area vertical-cavity surface-emitting laser (VCSEL) operating below the lasing threshold and subject to optical injection and time-delayed feedback. We derive a generalized delayed Swift-Hohenberg equation for the VCSEL system, which is valid close to the nascent optical bistability. We first characterize the stationary-cavity solitons by constructing their snaking bifurcation diagram and by showing clustering behavior within the pinning region of parameters. Then, we show that the delayed feedback induces a spontaneous motion of two-dimensional (2D) cavity solitons in an arbitrary direction in the transverse plane. We characterize moving cavity solitons by estimating their threshold and calculating their velocity. Numerical 2D solutions of the governing semiconductor laser equations are in close agreement with those obtained from the delayed generalized Swift-Hohenberg equation.
NASA Astrophysics Data System (ADS)
Qin, Shanlin; Liu, Fawang; Turner, Ian W.
2018-03-01
The consideration of diffusion processes in magnetic resonance imaging (MRI) signal attenuation is classically described by the Bloch-Torrey equation. However, many recent works highlight the distinct deviation in MRI signal decay due to anomalous diffusion, which motivates the fractional order generalization of the Bloch-Torrey equation. In this work, we study the two-dimensional multi-term time and space fractional diffusion equation generalized from the time and space fractional Bloch-Torrey equation. By using the Galerkin finite element method with a structured mesh consisting of rectangular elements to discretize in space and the L1 approximation of the Caputo fractional derivative in time, a fully discrete numerical scheme is derived. A rigorous analysis of stability and error estimation is provided. Numerical experiments in the square and L-shaped domains are performed to give an insight into the efficiency and reliability of our method. Then the scheme is applied to solve the multi-term time and space fractional Bloch-Torrey equation, which shows that the extra time derivative terms impact the relaxation process.
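The L1 approximation of the Caputo derivative mentioned above replaces D_t^alpha u(t_n) with a weighted sum of backward differences, with weights b_j = (j+1)^(1-alpha) - j^(1-alpha). A small sketch of those weights and of the discrete operator applied to sampled values follows; it illustrates the standard L1 formula only, not the paper's full finite element scheme.

```python
import numpy as np
from math import gamma

def l1_caputo(u: np.ndarray, dt: float, alpha: float) -> float:
    """L1 approximation of the Caputo derivative of order alpha in (0, 1) at the last
    time level: D^alpha u(t_n) ~ dt**(-alpha) / Gamma(2 - alpha) *
    sum_j b_j * (u[n - j] - u[n - j - 1]), with b_j = (j + 1)**(1 - alpha) - j**(1 - alpha)."""
    n = len(u) - 1
    j = np.arange(n)
    b = (j + 1.0) ** (1.0 - alpha) - j ** (1.0 - alpha)
    diffs = u[n - j] - u[n - j - 1]
    return dt ** (-alpha) / gamma(2.0 - alpha) * np.sum(b * diffs)

# Check against the exact Caputo derivative of u(t) = t:  D^alpha t = t**(1-alpha)/Gamma(2-alpha)
alpha, dt = 0.6, 0.01
t = np.arange(0.0, 1.0 + dt, dt)
print(l1_caputo(t, dt, alpha), 1.0 ** (1 - alpha) / gamma(2 - alpha))
```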
Demura, S; Sato, S; Kitabayashi, T
2006-06-01
This study examined a method of predicting body density based on hydrostatic weighing without head submersion (HWwithoutHS). Donnelly and Sintek (1984) developed a method to predict body density based on hydrostatic weight without head submersion. This method predicts the difference (D) between HWwithoutHS and hydrostatic weight with head submersion (HWwithHS) from anthropometric variables (head length and head width), and then calculates body density using D as a correction factor. We developed several prediction equations to estimate D based on head anthropometry and differences between the sexes, and compared their prediction accuracy with Donnelly and Sintek's equation. Thirty-two males and 32 females aged 17-26 years participated in the study. Multiple linear regression analysis was performed to obtain the prediction equations, and the systematic errors of their predictions were assessed by Bland-Altman plots. The best prediction equations obtained were: Males: D(g) = -164.12X1 - 125.81X2 - 111.03X3 + 100.66X4 + 6488.63, where X1 = head length (cm), X2 = head circumference (cm), X3 = head breadth (cm), X4 = head thickness (cm) (R = 0.858, R2 = 0.737, adjusted R2 = 0.687, standard error of the estimate = 224.1); Females: D(g) = -156.03X1 - 14.03X2 - 38.45X3 - 8.87X4 + 7852.45, where X1 = head circumference (cm), X2 = body mass (g), X3 = head length (cm), X4 = height (cm) (R = 0.913, R2 = 0.833, adjusted R2 = 0.808, standard error of the estimate = 137.7). The effective predictors in these prediction equations differed from those of Donnelly and Sintek's equation, and head circumference and head length were included in both equations. The prediction accuracy was improved by statistically selecting effective predictors. Since we did not assess cross-validity, the equations cannot be used to generalize to other populations, and further investigation is required.
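The two prediction equations quoted in the abstract can be transcribed directly, as below; the sketch inherits the stated caveat that the equations have not been cross-validated to other populations, and the units are reproduced exactly as printed in the abstract (note that X2 in the female equation is stated there as body mass in grams, which should be checked against the original paper before use).

```python
def head_correction_d_male(head_length_cm, head_circumference_cm, head_breadth_cm, head_thickness_cm):
    """Predicted difference D (g) between hydrostatic weight without and with head
    submersion for males, transcribed from the abstract:
    D = -164.12*X1 - 125.81*X2 - 111.03*X3 + 100.66*X4 + 6488.63."""
    return (-164.12 * head_length_cm - 125.81 * head_circumference_cm
            - 111.03 * head_breadth_cm + 100.66 * head_thickness_cm + 6488.63)

def head_correction_d_female(head_circumference_cm, body_mass_g, head_length_cm, height_cm):
    """Predicted D (g) for females, transcribed from the abstract (units as stated there):
    D = -156.03*X1 - 14.03*X2 - 38.45*X3 - 8.87*X4 + 7852.45."""
    return (-156.03 * head_circumference_cm - 14.03 * body_mass_g
            - 38.45 * head_length_cm - 8.87 * height_cm + 7852.45)
```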
Generalized recursive solutions to Ornstein-Zernike integral equations
NASA Astrophysics Data System (ADS)
Rossky, Peter J.; Dale, William D. T.
1980-09-01
Recursive procedures for the solution of a class of integral equations based on the Ornstein-Zernike equation are developed; the hypernetted chain and Percus-Yevick equations are two special cases of the class considered. It is shown that certain variants of the new procedures developed here are formally equivalent to those recently developed by Dale and Friedman, if the new recursive expressions are initialized in the same way as theirs. However, the computational solution of the new equations is significantly more efficient. Further, the present analysis leads to the identification of various graphical quantities arising in the earlier study with more familiar quantities related to pair correlation functions. The analysis is greatly facilitated by the use of several identities relating simple chain sums whose graphical elements can be written as a sum of two or more parts. In particular, the use of these identities permits renormalization of the equivalent series solution to the integral equation to be directly incorporated into the recursive solution in a straightforward manner. Formulas appropriate to renormalization with respect to long and short range parts of the pair potential, as well as more general components of the direct correlation function, are obtained. To further illustrate the utility of this approach, we show that a simple generalization of the hypernetted chain closure relation for the direct correlation function leads directly to the reference hypernetted chain (RHNC) equation due to Lado. The form of the correlation function used in the exponential approximation of Andersen and Chandler is then seen to be equivalent to the first estimate obtained from a renormalized RHNC equation.
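As background to the recursive schemes discussed above, the textbook baseline is a plain Picard (successive-substitution) iteration of the Ornstein-Zernike equation with the Percus-Yevick closure. The sketch below illustrates that baseline only, not the renormalized recursive procedure of the paper; the radial grid, damping factor and hard-sphere example are illustrative assumptions.

```python
import numpy as np

def solve_oz_py(beta_u, rho, r, n_iter=300, mix=0.15):
    """Picard iteration of the OZ equation with the Percus-Yevick closure
    c(r) = f(r) * (1 + gamma(r)), where gamma = h - c and f is the Mayer function."""
    n = len(r)
    dr = r[1] - r[0]
    dk = np.pi / (n * dr)
    k = dk * np.arange(1, n + 1)
    S = np.sin(np.outer(k, r))                         # sine kernel sin(k_m * r_j)
    f = np.exp(-beta_u) - 1.0                          # Mayer f-function
    gamma = np.zeros(n)
    for _ in range(n_iter):
        c = f * (1.0 + gamma)                          # PY closure
        c_hat = 4.0 * np.pi * dr / k * (S @ (r * c))   # radial 3D Fourier transform of c
        g_hat = rho * c_hat**2 / (1.0 - rho * c_hat)   # OZ relation for gamma in k-space
        gamma_new = dk / (2.0 * np.pi**2 * r) * (S.T @ (k * g_hat))  # inverse transform
        gamma = (1.0 - mix) * gamma + mix * gamma_new  # damped update
    c = f * (1.0 + gamma)
    return 1.0 + c + gamma                             # pair correlation g(r) = 1 + h(r)

# Example: hard spheres of unit diameter at reduced density rho = 0.5
r = 0.01 * np.arange(1, 2049)
beta_u = np.where(r < 1.0, np.inf, 0.0)
g = solve_oz_py(beta_u, rho=0.5, r=r)
```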
NASA Astrophysics Data System (ADS)
Zhao, L. W.; Du, J. G.; Yin, J. L.
2018-05-01
This paper proposes a novel secure communication scheme in a chaotic system by applying generalized function projective synchronization of the nonlinear Schrödinger equation. This approach guarantees secure and convenient communication. Our study applied the Melnikov theorem with an active control strategy to suppress chaos in the system. The transmitted information signal is modulated into a parameter of the nonlinear Schrödinger equation in the transmitter, and the corresponding parameter of the receiver system is assumed unknown. Based on Lyapunov stability theory and the adaptive control technique, controllers are designed to make two identical nonlinear Schrödinger equations with the unknown parameter asymptotically synchronized. The numerical simulation results confirmed the validity, effectiveness and feasibility of the proposed synchronization method and its error estimate for secure communication. The chaos-masked signals of the communication scheme further guarantee that information transmitted via this approach remains secure.
Predicting bunching costs for the Radio Horse 9 winch
Chris B. LeDoux; Bruce W. Kling; Patrice A. Harou
1987-01-01
Data from field studies and a prebunching cost simulator have been assembled and converted into a general equation that can be used to estimate the prebunching cost of the Radio Horse 9 winch. The methods can be used to estimate prebunching cost for bunching under the skyline corridor for swinging with cable systems, for bunching to skid trail edge to be picked up by a...
Nikita, Efthymia
2014-03-01
The current article explores whether the application of generalized linear models (GLM) and generalized estimating equations (GEE) can be used in place of conventional statistical analyses in the study of ordinal data that code an underlying continuous variable, like entheseal changes. The analysis of artificial data and of ordinal data expressing entheseal changes in archaeological North African populations gave the following results. Parametric and nonparametric tests give convergent results, particularly for P values <0.1, irrespective of whether the underlying variable is normally distributed, on the condition that the samples involved in the tests are of approximately equal size. If this prerequisite holds and provided that the samples have equal variances, analysis of covariance may be adopted. GLM are not subject to these constraints and give results that converge to those obtained from all nonparametric tests. Therefore, they can be used instead of traditional tests, as they provide the same information but with the advantage of allowing the study of the simultaneous impact of multiple predictors and their interactions and the modeling of the experimental data. However, GLM should be replaced by GEE for the study of bilateral asymmetry and, in general, when paired samples are tested, because GEE are appropriate for correlated data. Copyright © 2013 Wiley Periodicals, Inc.
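For readers unfamiliar with GEE in practice, a minimal sketch using the statsmodels package follows; the variable names, the dichotomized scoring and the simulated data are illustrative assumptions, not the article's data.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
n_skeletons = 100

# Hypothetical paired data: one left and one right entheseal score per skeleton
df = pd.DataFrame({
    "skeleton": np.repeat(np.arange(n_skeletons), 2),
    "side": np.tile([0, 1], n_skeletons),                  # 0 = left, 1 = right
    "age": np.repeat(rng.uniform(20, 70, n_skeletons), 2),
})
df["score"] = rng.binomial(1, 0.3 + 0.004 * df["age"])     # presence/absence of change

# An exchangeable working correlation accounts for the within-skeleton pairing
model = sm.GEE.from_formula("score ~ age + side", groups="skeleton", data=df,
                            family=sm.families.Binomial(),
                            cov_struct=sm.cov_struct.Exchangeable())
print(model.fit().summary())
```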
Aircraft Airframe Cost Estimation Using a Random Coefficients Model
1979-12-01
approach will also be used here. 2 Model Formulation. Several different types of equations could be used for the basic form of the CER, such as linear ...5) Marcotte developed several CER's for fighter aircraft airframes using the log-linear model. A plot of the residuals from the CER for recurring ... of the natural logarithm. Ordinary Least Squares. The ordinary least squares procedure starts with the equation for the general linear model. The
Five-equation and robust three-equation methods for solution verification of large eddy simulation
NASA Astrophysics Data System (ADS)
Dutta, Rabijit; Xing, Tao
2018-02-01
This study evaluates the recently developed general framework for solution verification methods for large eddy simulation (LES) using implicitly filtered LES of periodic channel flows at a friction Reynolds number of 395 on eight systematically refined grids. The seven-equation method shows that the coupling error based on Hypothesis I is much smaller than the numerical and modeling errors and therefore can be neglected. The authors recommend the five-equation method based on Hypothesis II, which shows monotonic convergence of the predicted numerical benchmark (S_C) and provides realistic error estimates without the need to fix the orders of accuracy for either numerical or modeling errors. Based on the results from the seven-equation and five-equation methods, less expensive three- and four-equation methods for practical LES applications were derived. The new three-equation method is robust, as it can be applied to any convergence type and reasonably predicts the error trends. It was also observed that the numerical and modeling errors usually have opposite signs, which suggests that error cancellation plays an essential role in LES. When a Reynolds-averaged Navier-Stokes (RANS) based error estimation method is applied, it shows significant error in the prediction of S_C on coarse meshes. However, it predicts reasonable S_C when the grids resolve at least 80% of the total turbulent kinetic energy.
Liu, A; Byrne, N M; Ma, G; Nasreddine, L; Trinidad, T P; Kijboonchoo, K; Ismail, M N; Kagawa, M; Poh, B K; Hills, A P
2011-12-01
To develop and cross-validate bioelectrical impedance analysis (BIA) prediction equations of total body water (TBW) and fat-free mass (FFM) for Asian pre-pubertal children from China, Lebanon, Malaysia, Philippines and Thailand. Height, weight, age, gender, resistance and reactance measured by BIA were collected from 948 Asian children (492 boys and 456 girls) aged 8-10 years from the five countries. The deuterium dilution technique was used as the criterion method for the estimation of TBW and FFM. The BIA equations were developed using stepwise multiple regression analysis and cross-validated using the Bland-Altman approach. The BIA prediction equation for the estimation of TBW was: TBW = 0.231 × height^2/resistance + 0.066 × height + 0.188 × weight + 0.128 × age + 0.500 × sex - 0.316 × ethnicity (Thai = 1, others = 0) - 4.574 (R^2 = 88.0%, root mean square error (RMSE) = 1.3 kg), and for the estimation of FFM: FFM = 0.299 × height^2/resistance + 0.086 × height + 0.245 × weight + 0.260 × age + 0.901 × sex - 0.415 × ethnicity (Thai = 1, others = 0) - 6.952 (R^2 = 88.3%, RMSE = 1.7 kg). No significant difference between measured and predicted values was found for the whole cross-validation sample. However, the prediction equations tended to overestimate TBW/FFM at lower levels and to underestimate them at higher levels. The general equations for TBW and FFM were also accurate within each body mass index category. Ethnicity influences the relationship between BIA and body composition in Asian pre-pubertal children. The newly developed BIA prediction equations are valid for use in Asian pre-pubertal children.
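The two prediction equations quoted above can be written directly in code. The ethnicity coding (Thai = 1, others = 0) is stated in the abstract; the sex coding is not spelled out and is assumed here to be 1 for boys and 0 for girls.

```python
def predict_tbw(height_cm, resistance_ohm, weight_kg, age_yr, sex, thai):
    """TBW (kg) from the BIA equation quoted above; sex coding assumed 1 = boy, 0 = girl."""
    return (0.231 * height_cm**2 / resistance_ohm + 0.066 * height_cm
            + 0.188 * weight_kg + 0.128 * age_yr + 0.500 * sex
            - 0.316 * thai - 4.574)

def predict_ffm(height_cm, resistance_ohm, weight_kg, age_yr, sex, thai):
    """FFM (kg) from the BIA equation quoted above (Thai ethnicity = 1, others = 0)."""
    return (0.299 * height_cm**2 / resistance_ohm + 0.086 * height_cm
            + 0.245 * weight_kg + 0.260 * age_yr + 0.901 * sex
            - 0.415 * thai - 6.952)
```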
Qin, Guoyou; Zhang, Jiajia; Zhu, Zhongyi; Fung, Wing
2016-12-20
Outliers, measurement error, and missing data are commonly seen in longitudinal data because of its data collection process. However, no method can address all three of these issues simultaneously. This paper focuses on the robust estimation of partially linear models for longitudinal data with dropouts and measurement error. A new robust estimating equation, simultaneously tackling outliers, measurement error, and missingness, is proposed. The asymptotic properties of the proposed estimator are established under some regularity conditions. The proposed method is easy to implement in practice by utilizing the existing standard generalized estimating equations algorithms. The comprehensive simulation studies show the strength of the proposed method in dealing with longitudinal data with all three features. Finally, the proposed method is applied to data from the Lifestyle Education for Activity and Nutrition study and confirms the effectiveness of the intervention in producing weight loss at month 9. Copyright © 2016 John Wiley & Sons, Ltd.
Reynolds, Timothy M; Twomey, Patrick J
2007-01-01
Aims: To evaluate the impact of different equations for calculation of estimated glomerular filtration rate (eGFR) on general practitioner (GP) workload. Methods: Retrospective evaluation of routine workload data from a district general hospital chemical pathology laboratory serving a GP patient population of approximately 250 000. The most recent serum creatinine result from 80 583 patients was identified and used for the evaluation. eGFR was calculated using one of three different variants of the four-parameter Modification of Diet in Renal Disease (MDRD) equation. Results: The original MDRD equation (eGFR186) and the modified equation with assay-specific data (eGFR175corrected) both identified similar numbers of patients with stage 4 and stage 5 chronic kidney disease (ChKD), but the modified equation without assay-specific data (eGFR175) resulted in a significant increase in stage 4 ChKD. For stage 3 ChKD the eGFR175 identified 28.69% of the population, the eGFR186 identified 21.35% of the population and the eGFR175corrected identified 13.6% of the population. Conclusions: Depending on the choice of equation there can be very large changes in the proportions of patients identified with the different stages of ChKD. Given that, according to the General Medical Services Quality Framework, all patients with ChKD stages 3–5 should be included on a practice renal registry and receive relevant drug therapy, this could have significant impacts on practice workload and drug budgets. It is essential that practices work with their local laboratories. PMID:17761741
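For reference, the eGFR variants compared above differ only in the leading constant and in whether an assay-specific creatinine correction is applied first. Below is a sketch of the two four-variable MDRD forms with the commonly published coefficients; the laboratory-specific assay correction behind eGFR175corrected is not reproduced here.

```python
def egfr_mdrd(creatinine_mg_dl, age_yr, female, black, constant=186.0):
    """Four-variable MDRD eGFR (mL/min/1.73 m^2).
    constant = 186 gives the original equation (eGFR186);
    constant = 175 gives the IDMS-traceable re-expression (eGFR175)."""
    egfr = constant * creatinine_mg_dl ** -1.154 * age_yr ** -0.203
    if female:
        egfr *= 0.742
    if black:
        egfr *= 1.212
    return egfr

# Example: 60-year-old non-black woman with serum creatinine 1.1 mg/dL
print(egfr_mdrd(1.1, 60, female=True, black=False))                  # original MDRD
print(egfr_mdrd(1.1, 60, female=True, black=False, constant=175.0))  # IDMS-traceable form
```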
Nonlinear differential equations for the wavefront surface at arbitrary Hartmann-plane distances.
Téllez-Quiñones, Alejandro; Malacara-Doblado, Daniel; Flores-Hernández, Ricardo; Gutiérrez-Hernández, David A; León-Rodríguez, Miguel
2016-03-20
In the Hartmann test, a wave aberration function W is estimated from the information of the spot diagram drawn in an observation plane. The distance from a reference plane to the observation plane, the Hartmann-plane distance, is typically chosen as z=f, where f is the radius of a reference sphere. The function W and the transversal aberrations {X,Y} calculated at the plane z=f are related by two well-known linear differential equations. Here, we propose two nonlinear differential equations to denote a more general relation between W and the transversal aberrations {U,V} calculated at any arbitrary Hartmann-plane distance z=r. We also show how to directly estimate the wavefront surface w from the information of {U,V}. The use of arbitrary r values could improve the reliability of the measurements of W, or w, when finding difficulties in adequate ray identification at z=f.
Fulton, Kara A.; Liu, Danping; Haynie, Denise L.; Albert, Paul S.
2016-01-01
The NEXT Generation Health study investigates the dating violence of adolescents using a survey questionnaire. Each student is asked to affirm or deny multiple instances of violence in his/her dating relationship. There is, however, evidence suggesting that students not in a relationship responded to the survey, resulting in excessive zeros in the responses. This paper proposes likelihood-based and estimating equation approaches to analyze the zero-inflated clustered binary response data. We adopt a mixed model method to account for the cluster effect, and the model parameters are estimated using a maximum-likelihood (ML) approach that requires a Gauss–Hermite quadrature (GHQ) approximation for implementation. Since an incorrect assumption on the random effects distribution may bias the results, we construct generalized estimating equations (GEE) that do not require the correct specification of within-cluster correlation. In a series of simulation studies, we examine the performance of ML and GEE methods in terms of their bias, efficiency and robustness. We illustrate the importance of properly accounting for this zero inflation by reanalyzing the NEXT data where this issue has previously been ignored. PMID:26937263
Estimation of air-water gas exchange coefficient in a shallow lagoon based on 222Rn mass balance.
Cockenpot, S; Claude, C; Radakovitch, O
2015-05-01
The radon-222 mass balance is now commonly used to quantify water fluxes due to Submarine Groundwater Discharge (SGD) in coastal areas. One of the main loss terms of this mass balance, the radon evasion to the atmosphere, is based on empirical equations. This term is generally estimated using one among the many empirical equations describing the gas transfer velocity as a function of wind speed that have been proposed in the literature. These equations were, however, mainly obtained from areas of deep water and may be less appropriate for shallow areas. Here, we calculate the radon mass balance for a windy shallow coastal lagoon (mean depth of 6 m and surface area of 1.55×10^8 m^2) and use these data to estimate the radon loss to the atmosphere and the corresponding gas transfer velocity. We present new equations, adapted to our shallow water body, to express the gas transfer velocity as a function of wind speed at 10 m height (wind range from 2 to 12.5 m/s). When compared with those from the literature, these equations fit particularly well with that of Kremer et al. (2003). Finally, we emphasize that some gas transfer exchange may always occur, even for conditions without wind. Copyright © 2015 Elsevier Ltd. All rights reserved.
On the maximum principle for complete second-order elliptic operators in general domains
NASA Astrophysics Data System (ADS)
Vitolo, Antonio
This paper is concerned with the maximum principle for second-order linear elliptic equations in wide generality. By means of a geometric condition previously stressed by Berestycki-Nirenberg-Varadhan, Cabré was able to improve the classical ABP estimate, obtaining the maximum principle also in unbounded domains, such as infinite strips and open connected cones with closure different from the whole space. Here we introduce a new geometric condition that extends the result to a more general class of domains including the complements of hypersurfaces, such as the cut plane. The methods developed here allow us to deal with complete second-order equations, in which the admissible first-order term, forced to be zero in a preceding result with Cafagna, depends on the geometry of the domain.
Estimation of Bid Curves in Power Exchanges using Time-varying Simultaneous-Equations Models
NASA Astrophysics Data System (ADS)
Ofuji, Kenta; Yamaguchi, Nobuyuki
Simultaneous-equations models (SEM) are generally used in economics to estimate interdependent endogenous variables such as price and quantity in a competitive, equilibrium market. In this paper, we have attempted to apply SEM to the JEPX (Japan Electric Power eXchange) spot market, a single-price auction market, using the publicly available data on selling and buying bid volumes, system price and traded quantity. The aim of this analysis is to understand the magnitude of the influences of the selling and buying bids on the auctioned prices and quantity, rather than to forecast prices and quantity for risk-management purposes. In contrast with Ordinary Least Squares (OLS) estimation, where the estimation results represent average values that are independent of time, we employ a time-varying simultaneous-equations model (TV-SEM) to capture structural changes inherent in those influences, using state-space models with stepwise Kalman-filter estimation. The results showed that the buying bid volume has the highest magnitude of influence among the factors considered, exhibiting time-dependent changes ranging as widely as about 240% of its average. The slope of the supply curve also varies across time, implying an elastic supply, while the demand curve remains comparatively inelastic and stable over time.
Healy, Richard W.; Scanlon, Bridget R.
2010-01-01
Simulation models are widely used in all types of hydrologic studies, and many of these models can be used to estimate recharge. Models can provide important insight into the functioning of hydrologic systems by identifying factors that influence recharge. The predictive capability of models can be used to evaluate how changes in climate, water use, land use, and other factors may affect recharge rates. Most hydrological simulation models, including watershed models and groundwater-flow models, are based on some form of water-budget equation, so the material in this chapter is closely linked to that in Chapter 2. Empirical models that are not based on a water-budget equation have also been used for estimating recharge; these models generally take the form of simple estimation equations that define annual recharge as a function of precipitation and possibly other climatic data or watershed characteristics. Model complexity varies greatly. Some models are simple accounting models; others attempt to accurately represent the physics of water movement through each compartment of the hydrologic system. Some models provide estimates of recharge explicitly; for example, a model based on the Richards equation can simulate water movement from the soil surface through the unsaturated zone to the water table. Recharge estimates can be obtained indirectly from other models. For example, recharge is a parameter in groundwater-flow models that solve for hydraulic head (i.e. groundwater level). Recharge estimates can be obtained through a model calibration process in which recharge and other model parameter values are adjusted so that simulated water levels agree with measured water levels. The simulation that provides the closest agreement is called the best fit, and the recharge value used in that simulation is the model-generated estimate of recharge.
Over, Thomas M.; Saito, Riki J.; Veilleux, Andrea G.; Sharpe, Jennifer B.; Soong, David T.; Ishii, Audrey L.
2016-06-28
This report provides two sets of equations for estimating peak discharge quantiles at annual exceedance probabilities (AEPs) of 0.50, 0.20, 0.10, 0.04, 0.02, 0.01, 0.005, and 0.002 (recurrence intervals of 2, 5, 10, 25, 50, 100, 200, and 500 years, respectively) for watersheds in Illinois based on annual maximum peak discharge data from 117 watersheds in and near northeastern Illinois. One set of equations was developed through a temporal analysis with a two-step least squares-quantile regression technique that measures the average effect of changes in the urbanization of the watersheds used in the study. The resulting equations can be used to adjust rural peak discharge quantiles for the effect of urbanization, and in this study the equations also were used to adjust the annual maximum peak discharges from the study watersheds to 2010 urbanization conditions. The other set of equations was developed by a spatial analysis. This analysis used generalized least-squares regression to fit the peak discharge quantiles computed from the urbanization-adjusted annual maximum peak discharges from the study watersheds to drainage-basin characteristics. The peak discharge quantiles were computed by using the Expected Moments Algorithm following the removal of potentially influential low floods defined by a multiple Grubbs-Beck test. To improve the quantile estimates, regional skew coefficients were obtained from a newly developed regional skew model in which the skew increases with the urbanized land use fraction. The drainage-basin characteristics used as explanatory variables in the spatial analysis include drainage area, the fraction of developed land, the fraction of land with poorly drained soils or likely water, and the basin slope estimated as the ratio of the basin relief to basin perimeter. This report also provides the following: (1) examples to illustrate the use of the spatial and urbanization-adjustment equations for estimating peak discharge quantiles at ungaged sites and to improve flood-quantile estimates at and near a gaged site; (2) the urbanization-adjusted annual maximum peak discharges and peak discharge quantile estimates at streamgages from 181 watersheds including the 117 study watersheds and 64 additional watersheds in the study region that were originally considered for use in the study but later deemed to be redundant. The urbanization-adjustment equations, spatial regression equations, and peak discharge quantile estimates developed in this study will be made available in the web application StreamStats, which provides automated regression-equation solutions for user-selected stream locations. Figures and tables comparing the observed and urbanization-adjusted annual maximum peak discharge records by streamgage are provided at https://doi.org/10.3133/sir20165050 for download.
Willis, Michael; Asseburg, Christian; Nilsson, Andreas; Johnsson, Kristina; Kartman, Bernt
2017-03-01
Type 2 diabetes mellitus (T2DM) is chronic and progressive, and the cost-effectiveness of new treatment interventions must be established over long time horizons. Given the limited durability of drugs, assumptions regarding downstream rescue medication can drive results. Especially for insulin, for which treatment effects and adverse events are known to depend on patient characteristics, this can be problematic for health economic evaluation involving modeling. The objective was to estimate parsimonious multivariate equations of treatment effects and hypoglycemic event risks for use in parameterizing insulin rescue therapy in model-based cost-effectiveness analysis. Clinical evidence for insulin use in T2DM was identified in PubMed and from published reviews and meta-analyses. Study and patient characteristics and treatment effects and adverse event rates were extracted, and the data were used to estimate parsimonious treatment effect and hypoglycemic event risk equations using multivariate regression analysis. Data from 91 studies featuring 171 usable study arms were identified, mostly for premix and basal insulin types. Multivariate prediction equations for glycated hemoglobin A1c lowering and weight change were estimated separately for insulin-naive and insulin-experienced patients. Goodness of fit (R^2) for both outcomes was generally good, ranging from 0.44 to 0.84. Multivariate prediction equations for symptomatic, nocturnal, and severe hypoglycemic events were also estimated, though considerable heterogeneity in definitions limits their usefulness. Parsimonious and robust multivariate prediction equations were estimated for glycated hemoglobin A1c and weight change, separately for insulin-naive and insulin-experienced patients. Using these in economic simulation modeling in T2DM can improve realism and flexibility in modeling insulin rescue medication. Copyright © 2017 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.
Flood characteristics of urban watersheds in the United States
Sauer, Vernon B.; Thomas, W.O.; Stricker, V.A.; Wilson, K.V.
1983-01-01
A nationwide study of flood magnitude and frequency in urban areas was made for the purpose of reviewing available literature, compiling an urban flood data base, and developing methods of estimating urban floodflow characteristics in ungaged areas. The literature review contains synopses of 128 recent publications related to urban floodflow. A data base of 269 gaged basins in 56 cities and 31 States, including Hawaii, contains a wide variety of topographic and climatic characteristics, land-use variables, indices of urbanization, and flood-frequency estimates. Three sets of regression equations were developed to estimate flood discharges for ungaged sites for recurrence intervals of 2, 5, 10, 25, 50, 100, and 500 years. Two sets of regression equations are based on seven independent parameters and the third is based on three independent parameters. The only difference in the two sets of seven-parameter equations is the use of basin lag time in one and lake and reservoir storage in the other. Of primary importance in these equations is an independent estimate of the equivalent rural discharge for the ungaged basin. The equations adjust the equivalent rural discharge to an urban condition. The primary adjustment factor, or index of urbanization, is the basin development factor, a measure of the extent of development of the drainage system in the basin. This measure includes evaluations of storm drains (sewers), channel improvements, and curb-and-gutter streets. The basin development factor is statistically very significant and offers a simple and effective way of accounting for drainage development and runoff response in urban areas. Percentage of impervious area is also included in the seven-parameter equations as an additional measure of urbanization and apparently accounts for increased runoff volumes. This factor is not highly significant for large floods, which supports the generally held concept that imperviousness is not a dominant factor when soils become more saturated during large storms. Other parameters in the seven-parameter equations include drainage area size, channel slope, rainfall intensity, lake and reservoir storage, and basin lag time. These factors are all statistically significant and provide logical indices of basin conditions. The three-parameter equations include only the three most significant parameters: rural discharge, basin-development factor, and drainage area size. All three sets of regression equations provide unbiased estimates of urban flood frequency. The seven-parameter regression equations without basin lag time have average standard errors of regression varying from ±37 percent for the 5-year flood to ±44 percent for the 100-year flood and ±49 percent for the 500-year flood. The other two sets of regression equations have similar accuracy. Several tests for bias, sensitivity, and hydrologic consistency are included which support the conclusion that the equations are useful throughout the United States. All estimating equations were developed from data collected on drainage basins where temporary in-channel storage, due to highway embankments, was not significant. Consequently, estimates made with these equations do not account for the reducing effect of this temporary detention storage.
The structure of the market for physicians' services.
McLean, R A
1980-01-01
In this paper, structural equations for the supply of and the demand for general practitioners' services are derived. Two variants of the model, based on alternative specifications of the role of health insurance, are tested using data drawn from the American Medical Association's Eighth Periodic Survey of Physicians (PSP8). While the results of the estimation require rejection of the hypothesis that the market for general practitioners' services is perfectly competitive, the implied elasticities of demand are quite high. Estimates of the supply relationships support the presence of "backward bending" supplies of physicians' services, but this finding should be interpreted cautiously. PMID:7204064
Time-lapse joint AVO inversion using generalized linear method based on exact Zoeppritz equations
NASA Astrophysics Data System (ADS)
Zhi, L.; Gu, H.
2017-12-01
The conventional method of time-lapse AVO (Amplitude Versus Offset) inversion is mainly based on approximate expressions of the Zoeppritz equations. Though the approximations are concise and convenient to use, they have certain limitations: they apply only when the contrast in elastic parameters between the upper and lower media is small and the incident angle is small, and the inversion of density is not stable. Therefore, we develop a method of time-lapse joint AVO inversion based on the exact Zoeppritz equations. In this method, we apply the exact Zoeppritz equations to calculate the reflection coefficient of the PP wave, and in constructing the objective function for inversion, we use a Taylor expansion to linearize the inversion problem. Through the joint AVO inversion of seismic data in the baseline and monitor surveys, we can obtain P-wave velocity, S-wave velocity, and density in the baseline survey and their time-lapse changes simultaneously. We can also estimate the oil saturation change from the inversion results. Compared with time-lapse difference inversion, the joint inversion has better applicability: it does not require certain assumptions and can estimate more parameters simultaneously. Meanwhile, by using the generalized linear method, the inversion is easily implemented and its computational cost is small. We use the Marmousi model to generate synthetic seismic records and analyze the influence of random noise. Without noise, all estimation results are relatively accurate. With increasing noise, the P-wave velocity change and oil saturation change remain stable and are least affected by noise, while the S-wave velocity change is the most affected. Finally, we apply the method to actual field data from time-lapse seismic prospecting, and the results demonstrate its availability and feasibility in practice.
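For contrast with the exact-Zoeppritz workflow described above, the conventional linearization most approximate AVO inversions rely on is the three-term Aki-Richards expression. The sketch below shows that standard approximation only, not the authors' exact formulation; the interface properties in the example are illustrative assumptions.

```python
import numpy as np

def aki_richards_pp(vp1, vs1, rho1, vp2, vs2, rho2, theta_deg):
    """Three-term Aki-Richards approximation of the PP reflection coefficient
    for an interface between medium 1 (upper) and medium 2 (lower)."""
    theta = np.radians(theta_deg)
    vp, vs, rho = 0.5 * (vp1 + vp2), 0.5 * (vs1 + vs2), 0.5 * (rho1 + rho2)
    dvp, dvs, drho = vp2 - vp1, vs2 - vs1, rho2 - rho1
    k = (vs / vp) ** 2
    return (0.5 * (1.0 - 4.0 * k * np.sin(theta) ** 2) * drho / rho
            + dvp / (2.0 * vp * np.cos(theta) ** 2)
            - 4.0 * k * np.sin(theta) ** 2 * dvs / vs)

# Example: reflection coefficients at 0-40 degrees for a hypothetical shale-over-sand interface
angles = np.arange(0, 41, 5)
rpp = aki_richards_pp(3000, 1500, 2400, 3200, 1800, 2300, angles)
```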
Local error estimates for discontinuous solutions of nonlinear hyperbolic equations
NASA Technical Reports Server (NTRS)
Tadmor, Eitan
1989-01-01
Let u(x,t) be the possibly discontinuous entropy solution of a nonlinear scalar conservation law with smooth initial data. Suppose u sub epsilon(x,t) is the solution of an approximate viscosity regularization, where epsilon greater than 0 is the small viscosity amplitude. It is shown that by post-processing the small viscosity approximation u sub epsilon, pointwise values of u and its derivatives can be recovered with an error as close to epsilon as desired. The analysis relies on the adjoint problem of the forward error equation, which in this case amounts to a backward linear transport with discontinuous coefficients. The novelty of this approach is to use a (generalized) E-condition of the forward problem in order to deduce a W(exp 1,infinity) energy estimate for the discontinuous backward transport equation; this, in turn, leads one to an epsilon-uniform estimate on moments of the error u(sub epsilon) - u. This approach does not follow the characteristics and, therefore, applies mutatis mutandis to other approximate solutions such as E-difference schemes.
An Optimization Principle for Deriving Nonequilibrium Statistical Models of Hamiltonian Dynamics
NASA Astrophysics Data System (ADS)
Turkington, Bruce
2013-08-01
A general method for deriving closed reduced models of Hamiltonian dynamical systems is developed using techniques from optimization and statistical estimation. Given a vector of resolved variables, selected to describe the macroscopic state of the system, a family of quasi-equilibrium probability densities on phase space corresponding to the resolved variables is employed as a statistical model, and the evolution of the mean resolved vector is estimated by optimizing over paths of these densities. Specifically, a cost function is constructed to quantify the lack-of-fit to the microscopic dynamics of any feasible path of densities from the statistical model; it is an ensemble-averaged, weighted, squared-norm of the residual that results from submitting the path of densities to the Liouville equation. The path that minimizes the time integral of the cost function determines the best-fit evolution of the mean resolved vector. The closed reduced equations satisfied by the optimal path are derived by Hamilton-Jacobi theory. When expressed in terms of the macroscopic variables, these equations have the generic structure of governing equations for nonequilibrium thermodynamics. In particular, the value function for the optimization principle coincides with the dissipation potential that defines the relation between thermodynamic forces and fluxes. The adjustable closure parameters in the best-fit reduced equations depend explicitly on the arbitrary weights that enter into the lack-of-fit cost function. Two particular model reductions are outlined to illustrate the general method. In each example the set of weights in the optimization principle contracts into a single effective closure parameter.
An Overview of Longitudinal Data Analysis Methods for Neurological Research
Locascio, Joseph J.; Atri, Alireza
2011-01-01
The purpose of this article is to provide a concise, broad and readily accessible overview of longitudinal data analysis methods, aimed to be a practical guide for clinical investigators in neurology. In general, we advise that older, traditional methods, including (1) simple regression of the dependent variable on a time measure, (2) analyzing a single summary subject level number that indexes changes for each subject and (3) a general linear model approach with a fixed-subject effect, should be reserved for quick, simple or preliminary analyses. We advocate the general use of mixed-random and fixed-effect regression models for analyses of most longitudinal clinical studies. Under restrictive situations or to provide validation, we recommend: (1) repeated-measure analysis of covariance (ANCOVA), (2) ANCOVA for two time points, (3) generalized estimating equations and (4) latent growth curve/structural equation models. PMID:22203825
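As a concrete illustration of the mixed-effects approach recommended above, a minimal sketch with the statsmodels package follows; the variable names and the simulated decline are illustrative assumptions, not data from the article.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_subjects, n_visits = 60, 4

# Hypothetical longitudinal data: repeated cognitive scores per subject
df = pd.DataFrame({
    "subject": np.repeat(np.arange(n_subjects), n_visits),
    "months": np.tile(np.arange(0, 24, 6), n_subjects),
})
subject_effect = np.repeat(rng.normal(0, 3, n_subjects), n_visits)
df["score"] = 30 - 0.15 * df["months"] + subject_effect + rng.normal(0, 1.5, len(df))

# Random intercept and random slope on time for each subject
model = smf.mixedlm("score ~ months", df, groups=df["subject"], re_formula="~months")
print(model.fit().summary())
```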
Generalized two-temperature model for coupled phonon-magnon diffusion.
Liao, Bolin; Zhou, Jiawei; Chen, Gang
2014-07-11
We generalize the two-temperature model [Sanders and Walton, Phys. Rev. B 15, 1489 (1977)] for coupled phonon-magnon diffusion to include the effect of the concurrent magnetization flow, with a particular emphasis on the thermal consequence of the magnon flow driven by a nonuniform magnetic field. Working within the framework of the Boltzmann transport equation, we derive the constitutive equations for coupled phonon-magnon transport driven by gradients of both temperature and external magnetic fields, and the corresponding conservation laws. Our equations reduce to the original Sanders-Walton two-temperature model under a uniform external field, but predict a new magnon cooling effect driven by a nonuniform magnetic field in a homogeneous single-domain ferromagnet. We estimate the magnitude of the cooling effect in an yttrium iron garnet, and show it is within current experimental reach. With properly optimized materials, the predicted cooling effect can potentially supplement the conventional magnetocaloric effect in cryogenic applications in the future.
Field dynamics inference via spectral density estimation
NASA Astrophysics Data System (ADS)
Frank, Philipp; Steininger, Theo; Enßlin, Torsten A.
2017-11-01
Stochastic differential equations are of utmost importance in various scientific and industrial areas. They are the natural description of dynamical processes whose precise equations of motion are either not known or too expensive to solve, e.g., when modeling Brownian motion. In some cases, the equations governing the dynamics of a physical system on macroscopic scales occur to be unknown since they typically cannot be deduced from general principles. In this work, we describe how the underlying laws of a stochastic process can be approximated by the spectral density of the corresponding process. Furthermore, we show how the density can be inferred from possibly very noisy and incomplete measurements of the dynamical field. Generally, inverse problems like these can be tackled with the help of Information Field Theory. For now, we restrict to linear and autonomous processes. To demonstrate its applicability, we employ our reconstruction algorithm on a time-series and spatiotemporal processes.
Derivation of Hunt equation for suspension distribution using Shannon entropy theory
NASA Astrophysics Data System (ADS)
Kundu, Snehasis
2017-12-01
In this study, the Hunt equation for computing suspension concentration in sediment-laden flows is derived using Shannon entropy theory. Considering the inverse of the void ratio as a random variable and using the principle of maximum entropy, the probability density function and cumulative distribution function of suspension concentration are derived. A new and more general cumulative distribution function for the flow domain is proposed, which includes several other specific CDF models reported in the literature. This general form of the cumulative distribution function also allows the Rouse equation to be derived. The entropy-based approach helps to estimate the model parameters from measured sediment concentration data, which shows the advantage of using entropy theory. Finally, the model parameters in the entropy-based model are also expressed as functions of the Rouse number to establish a link between the parameters of the deterministic and probabilistic approaches.
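The classical Rouse profile, which the entropy-based CDF above reduces to, can be written directly; the parameter names below follow the usual deterministic formulation and are not taken from the paper.

```python
import numpy as np

def rouse_concentration(z, h, a, c_a, settling_velocity, shear_velocity, kappa=0.4):
    """Classical Rouse suspended-sediment profile:
    C(z)/C_a = [((h - z)/z) * (a/(h - a))]**Z, with Rouse number Z = w_s / (kappa * u_*)."""
    Z = settling_velocity / (kappa * shear_velocity)
    return c_a * (((h - z) / z) * (a / (h - a))) ** Z

# Example: profile above a reference level a = 0.1 m in a 2 m deep flow
z = np.linspace(0.1, 1.9, 50)
c = rouse_concentration(z, h=2.0, a=0.1, c_a=1.0, settling_velocity=0.01, shear_velocity=0.05)
```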
Monte Carlo methods and their analysis for Coulomb collisions in multicomponent plasmas
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bobylev, A.V., E-mail: alexander.bobylev@kau.se; Potapenko, I.F., E-mail: firena@yandex.ru
2013-08-01
Highlights: •A general approach to Monte Carlo methods for multicomponent plasmas is proposed. •We show numerical tests for the two-component (electrons and ions) case. •An optimal choice of parameters for speeding up the computations is discussed. •A rigorous estimate of the error of approximation is proved. Abstract: A general approach to Monte Carlo methods for Coulomb collisions is proposed. Its key idea is an approximation of the Landau–Fokker–Planck equations by Boltzmann equations of quasi-Maxwellian kind, meaning that the total collision frequency for the corresponding Boltzmann equation does not depend on the velocities. This makes the simulation process very simple, since the collision pairs can be chosen arbitrarily, without restriction. It is shown that this approach includes the well-known methods of Takizuka and Abe (1977) [12] and Nanbu (1997) as particular cases, and generalizes the approach of Bobylev and Nanbu (2000). The numerical scheme of this paper is simpler than the schemes by Takizuka and Abe [12] and by Nanbu. We derive it for the general case of multicomponent plasmas and show some numerical tests for the two-component (electrons and ions) case. An optimal choice of parameters for speeding up the computations is also discussed. It is also proved that the order of approximation is not worse than O(√ε), where ε is a parameter of approximation equivalent to the time step Δt in earlier methods. A similar estimate is obtained for the methods of Takizuka and Abe and Nanbu.
Methods for estimating flow-duration and annual mean-flow statistics for ungaged streams in Oklahoma
Esralew, Rachel A.; Smith, S. Jerrod
2010-01-01
Flow statistics can be used to provide decision makers with surface-water information needed for activities such as water-supply permitting, flow regulation, and other water rights issues. Flow statistics could be needed at any location along a stream. Most often, streamflow statistics are needed at ungaged sites, where no flow data are available to compute the statistics. Methods are presented in this report for estimating flow-duration and annual mean-flow statistics for ungaged streams in Oklahoma. Flow statistics included the (1) annual (period of record), (2) seasonal (summer-autumn and winter-spring), and (3) 12 monthly duration statistics, including the 20th, 50th, 80th, 90th, and 95th percentile flow exceedances, and the annual mean-flow (mean of daily flows for the period of record). Flow statistics were calculated from daily streamflow information collected from 235 streamflow-gaging stations throughout Oklahoma and areas in adjacent states. A drainage-area ratio method is the preferred method for estimating flow statistics at an ungaged location that is on a stream near a gage. The method generally is reliable only if the drainage-area ratio of the two sites is between 0.5 and 1.5. Regression equations that relate flow statistics to drainage-basin characteristics were developed for the purpose of estimating selected flow-duration and annual mean-flow statistics for ungaged streams that are not near gaging stations on the same stream. Regression equations were developed from flow statistics and drainage-basin characteristics for 113 unregulated gaging stations. Separate regression equations were developed by using U.S. Geological Survey streamflow-gaging stations in regions with similar drainage-basin characteristics. These equations can increase the accuracy of regression equations used for estimating flow-duration and annual mean-flow statistics at ungaged stream locations in Oklahoma. Streamflow-gaging stations were grouped by selected drainage-basin characteristics by using a k-means cluster analysis. Three regions were identified for Oklahoma on the basis of the clustering of gaging stations and a manual delineation of distinguishable hydrologic and geologic boundaries: Region 1 (western Oklahoma excluding the Oklahoma and Texas Panhandles), Region 2 (north- and south-central Oklahoma), and Region 3 (eastern and central Oklahoma). A total of 228 regression equations (225 flow-duration regressions and three annual mean-flow regressions) were developed using ordinary least-squares and left-censored (Tobit) multiple-regression techniques. These equations can be used to estimate 75 flow-duration statistics and annual mean-flow for ungaged streams in the three regions. Drainage-basin characteristics that were statistically significant independent variables in the regression analyses were (1) contributing drainage area; (2) station elevation; (3) mean drainage-basin elevation; (4) channel slope; (5) percentage of forested canopy; (6) mean drainage-basin hillslope; (7) soil permeability; and (8) mean annual, seasonal, and monthly precipitation. The accuracy of flow-duration regression equations generally decreased from high-flow exceedance (low-exceedance probability) to low-flow exceedance (high-exceedance probability) . This decrease may have happened because a greater uncertainty exists for low-flow estimates and low-flow is largely affected by localized geology that was not quantified by the drainage-basin characteristics selected. 
The standard errors of estimate of the regression equations for Region 1 (western Oklahoma) were substantially larger than those for the other regions, especially for low-flow exceedances. These errors may be a result of greater variability in low flow because of increased irrigation activities in this region. Regression equations may not be reliable for sites where the drainage-basin characteristics are outside the range of values of independent vari
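The drainage-area ratio transfer described above is a one-line computation. The sketch below assumes a unit exponent (direct proportionality), which is the simplest common convention, and enforces the 0.5-1.5 applicability range noted in the report; the function name and example values are illustrative.

```python
def transfer_flow_statistic(q_gaged, area_gaged, area_ungaged, exponent=1.0):
    """Estimate a flow statistic at an ungaged site on the same stream by
    scaling the gaged value with the drainage-area ratio."""
    ratio = area_ungaged / area_gaged
    if not 0.5 <= ratio <= 1.5:
        raise ValueError("drainage-area ratio outside the recommended 0.5-1.5 range")
    return q_gaged * ratio ** exponent

# Example: a median flow of 12.0 cfs at a gage draining 150 mi^2,
# transferred to an ungaged site draining 180 mi^2 on the same stream
q_ungaged = transfer_flow_statistic(12.0, 150.0, 180.0)
```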
Mixed problems for the Korteweg-de Vries equation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Faminskii, A V
1999-06-30
Results are established concerning the non-local solubility and well-posedness in various function spaces of the mixed problem for the Korteweg-de Vries equation u_t + u_xxx + a u_x + u u_x = f(t,x) in the half-strip (0,T) × (-∞, 0). Some a priori estimates of the solutions are obtained using a special solution J(t,x) of the linearized KdV equation of boundary potential type. Properties of J are studied which differ essentially as x → +∞ or x → -∞. Application of this boundary potential enables us, in particular, to prove the existence of generalized solutions with non-regular boundary values.
Barth, Nancy A.; Veilleux, Andrea G.
2012-01-01
The U.S. Geological Survey (USGS) is currently updating at-site flood frequency estimates for USGS streamflow-gaging stations in the desert region of California. The at-site flood-frequency analysis is complicated by short record lengths (less than 20 years is common) and numerous zero flows/low outliers at many sites. Estimates of the three parameters (mean, standard deviation, and skew) required for fitting the log Pearson Type 3 (LP3) distribution are likely to be highly unreliable based on the limited and heavily censored at-site data. In a generalization of the recommendations in Bulletin 17B, a regional analysis was used to develop regional estimates of all three parameters (mean, standard deviation, and skew) of the LP3 distribution. A regional skew value of zero from a previously published report was used with a new estimated mean squared error (MSE) of 0.20. A weighted least squares (WLS) regression method was used to develop both a regional standard deviation and a mean model based on annual peak-discharge data for 33 USGS stations throughout California's desert region. At-site standard deviation and mean values were determined by using an expected moments algorithm (EMA) method for fitting the LP3 distribution to the logarithms of annual peak-discharge data. Additionally, a multiple Grubbs-Beck (MGB) test, a generalization of the test recommended in Bulletin 17B, was used for detecting multiple potentially influential low outliers in a flood series. The WLS regression found that no basin characteristics could explain the variability of standard deviation. Consequently, a constant regional standard deviation model was selected, resulting in a log-space value of 0.91 with a MSE of 0.03 log units. Yet drainage area was found to be statistically significant at explaining the site-to-site variability in mean. The linear WLS regional mean model based on drainage area had a pseudo-R^2 of 51 percent and a MSE of 0.32 log units. The regional parameter estimates were then used to develop a set of equations for estimating flows with 50-, 20-, 10-, 4-, 2-, 1-, 0.5-, and 0.2-percent annual exceedance probabilities for ungaged basins. The final equations are functions of drainage area. Average standard errors of prediction for these regression equations range from 214.2 to 856.2 percent.
NASA Astrophysics Data System (ADS)
Ebtehaj, Isa; Bonakdari, Hossein; Khoshbin, Fatemeh
2016-10-01
To determine the minimum velocity required to prevent sedimentation, six different models were proposed to estimate the densimetric Froude number (Fr). The dimensionless parameters of the models were applied along with a combination of the group method of data handling (GMDH) and the multi-target genetic algorithm. Therefore, an evolutionary design of the generalized GMDH was developed using a genetic algorithm with a specific coding scheme so as not to restrict connectivity configurations to abutting layers only. In addition, a new preserving mechanism by the multi-target genetic algorithm was utilized for the Pareto optimization of GMDH. The results indicated that the most accurate model was the one that used the volumetric concentration of sediment (CV), relative hydraulic radius (d/R), dimensionless particle number (Dgr) and overall sediment friction factor (λs) in estimating Fr. Furthermore, the comparison between the proposed method and traditional equations indicated that GMDH is more accurate than existing equations.
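The target quantity above has a simple definition. Below is a minimal sketch of the standard densimetric Froude number and the minimum (self-cleansing) velocity it implies; this is the conventional definition, not the GMDH model itself, and the example values are assumptions.

```python
import math

def densimetric_froude(velocity, particle_diameter, specific_gravity=2.65, g=9.81):
    """Densimetric particle Froude number Fr = V / sqrt(g * (s - 1) * d)."""
    return velocity / math.sqrt(g * (specific_gravity - 1.0) * particle_diameter)

def minimum_velocity(fr_limit, particle_diameter, specific_gravity=2.65, g=9.81):
    """Self-cleansing velocity implied by a limiting densimetric Froude number."""
    return fr_limit * math.sqrt(g * (specific_gravity - 1.0) * particle_diameter)

# Example: limiting Fr of 4 for 1 mm sand grains gives roughly 0.51 m/s
print(minimum_velocity(4.0, 0.001))
```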
Lee, Jinhyung; Choi, Jae-Young
2016-04-05
The benefits of health information technology (IT) adoption have been reported in the literature, but whether health IT investment increases revenue generation remains an important research question. Texas hospital data obtained from the American Hospital Association (AHA) for 2007-2010 were used to investigate the association between health IT expenses and hospital revenue. A generalized estimating equation (GEE) with an independent error component was used to model the data, controlling for clustered errors within hospitals. We found that health IT expenses were significantly and positively associated with hospital revenue. Our model predicted that a 100% increase in health IT expenditure would result in an 8% increase in total revenue. The effect of health IT was more strongly associated with gross outpatient revenue than with gross inpatient revenue. Increased health IT expenses were associated with greater hospital revenue. Future research needs to confirm our findings with a national sample of hospitals.
Noori, Nazanin; Wald, Ron; Sharma Parpia, Arti; Goldstein, Marc B
2018-01-01
Accurate assessment of total body water (TBW) is essential for the evaluation of dialysis adequacy (Kt/Vurea). The Watson formula, which is recommended for the calculation of TBW, was derived in healthy volunteers, thereby leading to potentially inaccurate TBW estimates in maintenance hemodialysis recipients. Bioimpedance spectroscopy (BIS) may be a robust alternative for the measurement of TBW in hemodialysis recipients. The primary objective of this study was to evaluate the accuracy of Watson formula-derived TBW estimates as compared with TBW measured with BIS. Second, we aimed to identify the anthropometric characteristics that are most likely to generate inaccuracy when using the Watson formula to calculate TBW. Finally, we derived novel anthropometric equations for the more accurate estimation of TBW. This was a cross-sectional study of prevalent in-center HD patients at St Michael's Hospital. One hundred eighty-four hemodialysis patients (109 men and 75 women) were evaluated in this study. Anthropometric measurements including weight, height, waist circumference, midarm circumference, and 4-site skinfold (biceps, triceps, subscapular, and suprailiac) thickness were taken; fat mass was estimated using the formula of Durnin and Womersley. We measured TBW by BIS using the Body Composition Monitor (Fresenius Medical Care, Bad Homburg, Germany). We used the Bland-Altman method to calculate the difference between the TBW derived from the Watson method and the BIS. To derive new equations for TBW estimation, Pearson's correlation coefficients between BIS-TBW (the reference test) and other variables were examined. We used least squares regression analysis to develop parsimonious equations to predict TBW. TBW values based on the Watson method had a high correlation with BIS-TBW (correlation coefficients = 0.87 and P < .001). Despite the high correlation, the Watson formula overestimated TBW by 5.1 (4.5-5.8) liters and 3.8 (3.0-4.5) liters in men and women, respectively. Higher fat mass and waist circumference (general and abdominal obesity) were correlated with greater TBW overestimation by the Watson formula. We created separate equations for men and women based on weight and waist circumference. The main limitation of our study was the lack of external validation for our novel estimating equation. Furthermore, though BIS has been validated against traditional reference standards, our assumption that it represents the "gold standard" for body compartment assessment may be flawed. The Watson formula generally overestimates TBW in chronic dialysis recipients, particularly in patients with the highest waist circumference. Widespread reliance on the Watson formula for derivation of TBW may lead to the underestimation of Kt/Vurea.
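For reference, the Watson formula discussed above is an anthropometric regression. The coefficients below are the ones commonly cited from Watson et al. (1980) and should be checked against the original publication before use; the study's own replacement equations are not reproduced here.

```python
def watson_tbw(sex, age_yr, height_cm, weight_kg):
    """Watson formula for total body water (litres); coefficients as commonly cited."""
    if sex == "male":
        return 2.447 - 0.09516 * age_yr + 0.1074 * height_cm + 0.3362 * weight_kg
    return -2.097 + 0.1069 * height_cm + 0.2466 * weight_kg   # female (age not used)

# Example: 65-year-old man, 175 cm, 80 kg -> approximately 42 L
print(watson_tbw("male", 65, 175, 80))
```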
Entropy Splitting for High Order Numerical Simulation of Vortex Sound at Low Mach Numbers
NASA Technical Reports Server (NTRS)
Mueller, B.; Yee, H. C.; Mansour, Nagi (Technical Monitor)
2001-01-01
A method for minimizing numerical errors and improving nonlinear stability and accuracy in low Mach number computational aeroacoustics (CAA) is proposed. The method consists of two levels. At the governing equation level, we condition the Euler equations in two steps. The first step is to split the inviscid flux derivatives into a conservative and a non-conservative portion that satisfies a so-called generalized energy estimate. This involves the symmetrization of the Euler equations via a transformation of variables that are functions of the physical entropy. Owing to the large disparity of acoustic and stagnation quantities in low Mach number aeroacoustics, the second step is to reformulate the split Euler equations in perturbation form, with the new unknowns being the small changes of the conservative variables with respect to their large stagnation values. At the numerical scheme level, a stable sixth-order central interior scheme with third-order boundary schemes is employed that satisfies the discrete analogue of the integration-by-parts procedure used in the continuous energy estimate (the summation-by-parts property).
NASA Astrophysics Data System (ADS)
Khaleghi, Mohammad Reza; Varvani, Javad
2018-02-01
The complex and variable nature of river sediment yield causes many problems in estimating long-term sediment yield and the sediment input to reservoirs. Sediment Rating Curves (SRCs) are generally used to estimate the suspended sediment load of rivers and drainage watersheds. Since the regression equations of the SRCs are obtained by logarithmic retransformation and contain few independent variables, they overestimate or underestimate the true sediment load of the rivers. To evaluate the bias correction factors in the Kalshor and Kashafroud watersheds, seven hydrometric stations of this region with suitable upstream watersheds and spatial distribution were selected. Investigation of the accuracy index (the ratio of estimated to observed sediment yield) and the precision index of the different bias correction factors of FAO, the Quasi-Maximum Likelihood Estimator (QMLE), Smearing, and the Minimum-Variance Unbiased Estimator (MVUE) with the LSD test showed that the FAO coefficient increases the estimation error at all of the stations. Application of MVUE in linear and mean-load rating curves has no statistically meaningful effect. The QMLE and smearing factors increased the estimation error in the mean-load rating curve but have no effect on the linear rating curve estimation.
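The correction factors compared above multiply the back-transformed rating-curve estimate. Below is a minimal sketch of the two most common ones, assuming natural-log residuals from the fitted rating curve; the FAO and MVUE factors used in the paper are not reproduced, and the function name is illustrative.

```python
import numpy as np

def corrected_sediment_load(q, a, b, log_residuals, method="smearing"):
    """Sediment load from a log-log rating curve Qs = a * Q**b times a bias
    correction factor. QMLE: CF = exp(s^2 / 2); Duan smearing: CF = mean(exp(e))."""
    if method == "qmle":
        cf = np.exp(np.var(log_residuals, ddof=1) / 2.0)
    elif method == "smearing":
        cf = np.mean(np.exp(log_residuals))
    else:
        cf = 1.0                                  # uncorrected rating curve
    return cf * a * np.asarray(q, dtype=float) ** b
```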
Wang, Jinghua; Xie, Peng; Huang, Jian-Min; Qu, Yan; Zhang, Fang; Wei, Ling-Ge; Fu, Peng; Huang, Xiao-Jie
2016-12-01
To verify whether the new Asian modified CKD-EPI equation improves the performance of the original one in determining GFR in Chinese patients with CKD, a well-designed paired cohort was set up. Measured GFR (mGFR) was obtained with the 99mTc-diethylenetriaminepentaacetic acid (99mTc-DTPA) dual plasma sample clearance method. The estimated GFR (eGFR) was obtained with the CKD-EPI equation (eGFR1) and the new Asian modified CKD-EPI equation (eGFR2). Comparisons were performed to evaluate the superiority of eGFR2 in bias, accuracy, precision, concordance correlation coefficient, the slope of the regression equation, and measures of agreement. A total of 195 patients were enrolled and analyzed. The new Asian modified CKD-EPI equation improved the performance of the original one in bias and accuracy. However, nearly identical performance was observed with respect to precision, concordance correlation coefficient, slope of eGFR against mGFR, and 95% limits of agreement. In the subgroup with GFR < 60 mL/min/1.73 m2, the bias of eGFR1 was less than that of eGFR2, but they had comparable precision and accuracy. In the subgroup with GFR > 60 mL/min/1.73 m2, eGFR2 performed better than eGFR1 in terms of bias and accuracy. The new Asian modified CKD-EPI equation can lead to more accurate GFR estimation in Chinese patients with CKD in general practice, especially in the higher GFR group.
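For context, a minimal sketch of the original 2009 CKD-EPI creatinine equation is given below; the Asian modification evaluated above applies an additional population coefficient that is not reproduced here. The example values are hypothetical.

```python
def ckd_epi_2009(scr_mg_dl, age_yr, female, black=False):
    """Original 2009 CKD-EPI creatinine equation, eGFR in mL/min/1.73 m^2."""
    kappa = 0.7 if female else 0.9
    alpha = -0.329 if female else -0.411
    ratio = scr_mg_dl / kappa
    egfr = 141 * min(ratio, 1.0) ** alpha * max(ratio, 1.0) ** -1.209 * 0.993 ** age_yr
    if female:
        egfr *= 1.018
    if black:
        egfr *= 1.159
    return egfr

# Hypothetical patient: serum creatinine 1.1 mg/dL, 55-year-old woman
print(round(ckd_epi_2009(1.1, 55, female=True), 1))
```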
Predictive Variables of Half-Marathon Performance for Male Runners
Gómez-Molina, Josué; Ogueta-Alday, Ana; Camara, Jesus; Stickley, Christoper; Rodríguez-Marroyo, José A.; García-López, Juan
2017-01-01
The aims of this study were to establish and validate various predictive equations of half-marathon performance. Seventy-eight half-marathon male runners participated in two different phases. Phase 1 (n = 48) was used to establish the equations for estimating half-marathon performance, and Phase 2 (n = 30) to validate these equations. Apart from half-marathon performance, training-related and anthropometric variables were recorded, and an incremental test on a treadmill was performed, in which physiological (VO2max, speed at the anaerobic threshold, peak speed) and biomechanical variables (contact and flight times, step length and step rate) were registered. In Phase 1, half-marathon performance could be predicted to 90.3% by variables related to training and anthropometry (Equation 1), 94.9% by physiological variables (Equation 2), 93.7% by biomechanical parameters (Equation 3) and 96.2% by a general equation (Equation 4). Using these equations, in Phase 2 the predicted time was significantly correlated with performance (r = 0.78, 0.92, 0.90 and 0.95, respectively). The proposed equations and their validation showed a high prediction of half-marathon performance in long distance male runners, considered from different approaches. Furthermore, they improved on the predictive performance of previous studies, which makes them highly practical tools in the field of training and performance. Key points: The present study obtained four equations involving anthropometric, training, physiological and biomechanical variables to estimate half-marathon performance. These equations were validated in a different population, demonstrating narrower prediction ranges than previous studies and confirming their consistency. As a novelty, some biomechanical variables (i.e. step length and step rate at RCT, and maximal step length) have been related to half-marathon performance. PMID:28630571
Van Vlaenderen, Ilse; Van Bellinghen, Laure-Anne; Meier, Genevieve; Nautrup, Barbara Poulsen
2013-01-22
Indirect herd effect from vaccination of children offers potential for improving the effectiveness of influenza prevention in the remaining unvaccinated population. Static models used in cost-effectiveness analyses cannot dynamically capture herd effects. The objective of this study was to develop a methodology to allow herd effect associated with vaccinating children against seasonal influenza to be incorporated into static models evaluating the cost-effectiveness of influenza vaccination. Two previously published linear equations for approximation of herd effects in general were compared with the results of a structured literature review undertaken using PubMed searches to identify data on herd effects specific to influenza vaccination. A linear function was fitted to point estimates from the literature using the sum of squared residuals. The literature review identified 21 publications on 20 studies for inclusion. Six studies provided data on a mathematical relationship between effective vaccine coverage in subgroups and reduction of influenza infection in a larger unvaccinated population. These supported a linear relationship when effective vaccine coverage in a subgroup population was between 20% and 80%. Three studies evaluating herd effect at a community level, specifically induced by vaccinating children, provided point estimates for fitting linear equations. The fitted linear equation for herd protection in the target population for vaccination (children) was slightly less conservative than a previously published equation for herd effects in general. The fitted linear equation for herd protection in the non-target population was considerably less conservative than the previously published equation. This method of approximating herd effect requires simple adjustments to the annual baseline risk of influenza in static models: (1) for the age group targeted by the childhood vaccination strategy (i.e. children); and (2) for other age groups not targeted (e.g. adults and/or elderly). Two approximations provide a linear relationship between effective coverage and reduction in the risk of infection. The first is a conservative approximation, recommended as a base-case for cost-effectiveness evaluations. The second, fitted to data extracted from a structured literature review, provides a less conservative estimate of herd effect, recommended for sensitivity analyses.
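As a rough illustration of how such a linear herd-effect approximation can be folded into a static model, the sketch below scales the annual baseline risk of infection by an amount proportional to effective coverage. The slopes and risks are hypothetical placeholders, not the fitted values from the review.

```python
def adjusted_risk(baseline_risk, effective_coverage, slope):
    """Reduce the annual baseline infection risk in proportion to effective
    vaccine coverage, following a linear herd-effect approximation."""
    reduction = min(1.0, slope * effective_coverage)   # cap the reduction at 100%
    return baseline_risk * (1.0 - reduction)

# Hypothetical inputs: 10% baseline risk, 40% effective coverage in children,
# and illustrative (not published) slopes for the target and non-target groups
risk_children = adjusted_risk(0.10, 0.40, slope=0.9)
risk_adults = adjusted_risk(0.10, 0.40, slope=0.5)
print(risk_children, risk_adults)
```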
Computation and visualization of geometric partial differential equations
NASA Astrophysics Data System (ADS)
Tiee, Christopher L.
The chief goal of this work is to explore a modern framework for the study and approximation of partial differential equations, recast common partial differential equations into this framework, and prove theorems about such equations and their approximations. A central motivation is to recognize and respect the essential geometric nature of such problems, and to take it into consideration when approximating. The hope is that this process will lead to the discovery of more refined algorithms and processes and to their application to new problems. In the first part, we introduce our quantities of interest and reformulate traditional boundary value problems in the modern framework. We see how Hilbert complexes capture and abstract the most important properties of such boundary value problems, leading to generalizations of important classical results such as the Hodge decomposition theorem. They also provide the proper setting for numerical approximations. We also provide an abstract framework for evolution problems in these spaces: Bochner spaces. We next turn to approximation. We build layers of abstraction, progressing from functions, to differential forms, and finally, to Hilbert complexes. We explore finite element exterior calculus (FEEC), which allows us to approximate solutions involving differential forms, and analyze the approximation error. In the second part, we prove our central results. We first prove an extension of current error estimates for the elliptic problem in Hilbert complexes. This extension handles solutions with nonzero harmonic part. Next, we consider evolution problems in Hilbert complexes and prove abstract error estimates. We apply these estimates to the problem for Riemannian hypersurfaces in R^{n+1}, generalizing current results for open subsets of R^n. Finally, we apply some of the concepts to a nonlinear problem, the Ricci flow on surfaces, and use tools from nonlinear analysis to help develop and analyze the equations. In the appendices, we detail some additional motivation and a source for further examples: canonical geometries that are realized as steady-state solutions to parabolic equations similar to that of Ricci flow. An eventual goal is to compute such solutions using the methods of the previous chapters.
A novel body circumferences-based estimation of percentage body fat.
Lahav, Yair; Epstein, Yoram; Kedem, Ron; Schermann, Haggai
2018-03-01
Anthropometric measures of body composition are often used for rapid and cost-effective estimation of percentage body fat (%BF) in field research, serial measurements and screening. Our aim was to develop a validated estimate of %BF for the general population, based on simple body circumference measures. The study cohort consisted of two consecutive samples of health club members, designated as 'development' (n = 476, 61% men, 39% women) and 'validation' (n = 224, 50% men, 50% women) groups. All subjects underwent anthropometric measurements as part of their registration to a health club. A dual-energy X-ray absorptiometry (DEXA) scan was used as the 'gold standard' estimate of %BF. Linear regressions were used to construct the predictive equation (%BFcal). Bland-Altman statistics, Lin concordance coefficients and the percentage of subjects falling within 5% of the %BF estimate by DEXA were used to evaluate accuracy and precision of the equation. The variance inflation factor was used to check multicollinearity. Two distinct equations were developed for men and women: %BFcal (men) = 10.1 - 0.239H + 0.8A - 0.5N; %BFcal (women) = 19.2 - 0.239H + 0.8A - 0.5N (H, height; A, abdomen; N, neck, all in cm). Bland-Altman differences were randomly distributed and showed no fixed bias. Lin concordance coefficients of %BFcal were 0.89 in men and 0.86 in women. About 79.5% of %BF predictions in both sexes were within ±5% of the DEXA value. The Durnin-Womersley skinfolds equation was less accurate in our study group for prediction of %BF than %BFcal. We conclude that %BFcal offers the advantage of obtaining a reliable estimate of %BF from simple measurements that require no sophisticated tools and only minimal prior training and experience.
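The reported equations are simple enough to code directly; the sketch below evaluates %BFcal as written in the abstract (the example measurements are hypothetical).

```python
def percent_body_fat(sex, height_cm, abdomen_cm, neck_cm):
    """%BFcal from the circumference equations reported above
    (men and women share slopes and differ only in the intercept)."""
    intercept = 10.1 if sex == "male" else 19.2
    return intercept - 0.239 * height_cm + 0.8 * abdomen_cm - 0.5 * neck_cm

# Hypothetical subject: 178 cm tall, 92 cm abdomen, 39 cm neck
print(round(percent_body_fat("male", 178, 92, 39), 1))
```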
Yorkston, Emily; Turner, Catherine; Schluter, Philip J; McClure, Rod
2007-06-01
To develop a generalized estimating equation (GEE) model of childhood injury rates to quantify the effectiveness of a community-based injury prevention program implemented in 2 communities in Australia, in order to contribute to the discussion of community-based injury prevention program evaluation. An ecological study was conducted comparing injury rates in two intervention communities in rural and remote Queensland, Australia, with those of 16 control regions. A model of childhood injury was built using hospitalization injury rate data from 1 July 1991 to 30 June 2005 and 16 social variables. The model was built using GEE analysis and was used to estimate parameters and to test the effectiveness of the intervention. When social variables were controlled for, the intervention was associated with a decrease of 0.09 injuries/10,000 children aged 0-4 years (95% CI -0.29 to 0.11) in logarithmically transformed injury rates; however, this decrease was not significant (p = 0.36). The evaluation methods proposed in this study provide a way of determining the effectiveness of a community-based injury prevention program while considering the effect of baseline differences and secular changes in social variables.
Methods for estimating low-flow statistics for Massachusetts streams
Ries, Kernell G.; Friesz, Paul J.
2000-01-01
Methods and computer software are described in this report for determining flow duration, low-flow frequency statistics, and August median flows. These low-flow statistics can be estimated for unregulated streams in Massachusetts using different methods depending on whether the location of interest is at a streamgaging station, a low-flow partial-record station, or an ungaged site where no data are available. Low-flow statistics for streamgaging stations can be estimated using standard U.S. Geological Survey methods described in the report. The MOVE.1 mathematical method and a graphical correlation method can be used to estimate low-flow statistics for low-flow partial-record stations. The MOVE.1 method is recommended when the relation between measured flows at a partial-record station and daily mean flows at a nearby, hydrologically similar streamgaging station is linear, and the graphical method is recommended when the relation is curved. Equations are presented for computing the variance and equivalent years of record for estimates of low-flow statistics for low-flow partial-record stations when either a single or multiple index stations are used to determine the estimates. The drainage-area ratio method or regression equations can be used to estimate low-flow statistics for ungaged sites where no data are available. The drainage-area ratio method is generally as accurate as or more accurate than regression estimates when the drainage-area ratio for an ungaged site is between 0.3 and 1.5 times the drainage area of the index data-collection site. Regression equations were developed to estimate the natural, long-term 99-, 98-, 95-, 90-, 85-, 80-, 75-, 70-, 60-, and 50-percent duration flows; the 7-day, 2-year and the 7-day, 10-year low flows; and the August median flow for ungaged sites in Massachusetts. Streamflow statistics and basin characteristics for 87 to 133 streamgaging stations and low-flow partial-record stations were used to develop the equations. The streamgaging stations had from 2 to 81 years of record, with a mean record length of 37 years. The low-flow partial-record stations had from 8 to 36 streamflow measurements, with a median of 14 measurements. All basin characteristics were determined from digital map data. The basin characteristics that were statistically significant in most of the final regression equations were drainage area, the area of stratified-drift deposits per unit of stream length plus 0.1, mean basin slope, and an indicator variable that was 0 in the eastern region and 1 in the western region of Massachusetts. The equations were developed by use of weighted-least-squares regression analyses, with weights assigned proportional to the years of record and inversely proportional to the variances of the streamflow statistics for the stations. Standard errors of prediction ranged from 70.7 to 17.5 percent for the equations to predict the 7-day, 10-year low flow and 50-percent duration flow, respectively. The equations are not applicable for use in the Southeast Coastal region of the State, or where basin characteristics for the selected ungaged site are outside the ranges of those for the stations used in the regression analyses. A World Wide Web application was developed that provides streamflow statistics for data collection stations from a data base and for ungaged sites by measuring the necessary basin characteristics for the site and solving the regression equations. 
Output provided by the Web application for ungaged sites includes a map of the drainage-basin boundary determined for the site, the measured basin characteristics, the estimated streamflow statistics, and 90-percent prediction intervals for the estimates. An equation is provided for combining regression and correlation estimates to obtain improved estimates of the streamflow statistics for low-flow partial-record stations. An equation is also provided for combining regression and drainage-area ratio estimates to obtain improved estimates.
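As an aside, the MOVE.1 record-extension technique recommended above for low-flow partial-record stations can be sketched in a few lines; this is a minimal illustration of the standard MOVE.1 idea (preserving the mean and variance of the short record in log space), with placeholder flow values rather than Massachusetts data.

```python
import numpy as np

def move1(y_partial, x_concurrent, x_new):
    """MOVE.1: estimate flows at a partial-record station (y) from an index
    streamgage (x), preserving the mean and variance of y in log space."""
    ly, lx = np.log10(y_partial), np.log10(x_concurrent)
    slope = ly.std(ddof=1) / lx.std(ddof=1)
    return 10 ** (ly.mean() + slope * (np.log10(x_new) - lx.mean()))

# Placeholder concurrent low-flow measurements (cfs) and new index-station flows
y = np.array([2.1, 3.4, 1.8, 5.0, 2.7])
x = np.array([10.0, 16.0, 9.0, 25.0, 13.0])
print(move1(y, x, np.array([8.0, 30.0])))
```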
Wagner, Daniel M.; Krieger, Joshua D.; Veilleux, Andrea G.
2016-08-04
In 2013, the U.S. Geological Survey initiated a study to update regional skew, annual exceedance probability discharges, and regional regression equations used to estimate annual exceedance probability discharges for ungaged locations on streams in the study area with the use of recent geospatial data, new analytical methods, and available annual peak-discharge data through the 2013 water year. An analysis of regional skew using Bayesian weighted least-squares/Bayesian generalized-least squares regression was performed for Arkansas, Louisiana, and parts of Missouri and Oklahoma. The newly developed constant regional skew of -0.17 was used in the computation of annual exceedance probability discharges for 281 streamgages used in the regional regression analysis. Based on analysis of covariance, four flood regions were identified for use in the generation of regional regression models. Thirty-nine basin characteristics were considered as potential explanatory variables, and ordinary least-squares regression techniques were used to determine the optimum combinations of basin characteristics for each of the four regions. Basin characteristics in candidate models were evaluated based on multicollinearity with other basin characteristics (variance inflation factor < 2.5) and statistical significance at the 95-percent confidence level (p ≤ 0.05). Generalized least-squares regression was used to develop the final regression models for each flood region. Average standard errors of prediction of the generalized least-squares models ranged from 32.76 to 59.53 percent, with the largest range in flood region D. Pseudo coefficients of determination of the generalized least-squares models ranged from 90.29 to 97.28 percent, with the largest range also in flood region D. The regional regression equations apply only to locations on streams in Arkansas where annual peak discharges are not substantially affected by regulation, diversion, channelization, backwater, or urbanization. The applicability and accuracy of the regional regression equations depend on the basin characteristics measured for an ungaged location on a stream being within range of those used to develop the equations.
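The variable-screening step described above (retaining basin characteristics with VIF < 2.5 and p ≤ 0.05) can be illustrated with a short statsmodels sketch. The synthetic basin characteristics and the simple OLS fit stand in for the ordinary least-squares stage only; the final generalized least-squares models of the study are not reproduced.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

rng = np.random.default_rng(0)
# Synthetic stand-ins for basin characteristics and log10 1-percent AEP discharge
df = pd.DataFrame({
    "log_area": rng.normal(2.0, 0.5, 60),
    "slope": rng.normal(15.0, 4.0, 60),
    "precip": rng.normal(50.0, 8.0, 60),
})
df["log_q1pct"] = 1.5 + 0.8 * df["log_area"] + 0.01 * df["precip"] + rng.normal(0, 0.1, 60)

X = sm.add_constant(df[["log_area", "slope", "precip"]])
fit = sm.OLS(df["log_q1pct"], X).fit()

# Screen candidate explanatory variables: multicollinearity (VIF) and significance
vifs = {name: variance_inflation_factor(X.values, i)
        for i, name in enumerate(X.columns) if name != "const"}
print(fit.pvalues.round(3))
print({k: round(v, 2) for k, v in vifs.items()})
```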
García-Ramos, Amador; Haff, Guy Gregory; Pestaña-Melero, Francisco Luis; Pérez-Castilla, Alejandro; Rojas, Francisco Javier; Balsalobre-Fernández, Carlos; Jaric, Slobodan
2017-09-05
This study compared the concurrent validity and reliability of previously proposed generalized group equations for estimating the bench press (BP) one-repetition maximum (1RM) with the individualized load-velocity relationship modelled with a two-point method. Thirty men (BP 1RM relative to body mass: 1.08 ± 0.18 kg·kg-1) performed two incremental loading tests in the concentric-only BP exercise and another two in the eccentric-concentric BP exercise to assess their actual 1RM and load-velocity relationships. A high velocity (≈1 m·s-1) and a low velocity (≈0.5 m·s-1) were selected from their load-velocity relationships to estimate the 1RM from generalized group equations and through an individual linear model obtained from the two velocities. The directly measured 1RM was highly correlated with all predicted 1RMs (r range: 0.847-0.977). The generalized group equations systematically underestimated the actual 1RM when predicted from the concentric-only BP (P < 0.001; effect size [ES] range: 0.15-0.94), but overestimated it when predicted from the eccentric-concentric BP (P < 0.001; ES range: 0.36-0.98). Conversely, a low systematic bias (range: -2.3 to 0.5 kg) and random errors (range: 3.0-3.8 kg), no heteroscedasticity of errors (r2 range: 0.053-0.082), and trivial ES (range: -0.17 to 0.04) were observed when the prediction was based on the two-point method. Although all examined methods reported the 1RM with high reliability (CV ≤ 5.1%; ICC ≥ 0.89), the direct method was the most reliable (CV < 2.0%; ICC ≥ 0.98). The quick, fatigue-free, and practical two-point method was able to predict the BP 1RM with high reliability and practically perfect validity, and therefore we recommend its use over generalized group equations.
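The two-point method reduces to fitting a line through two load-velocity observations and extrapolating it to an assumed velocity at 1RM; the sketch below does exactly that. The minimal-velocity value and the example loads are illustrative assumptions, not parameters from the study.

```python
def two_point_1rm(load_light, v_light, load_heavy, v_heavy, v_1rm=0.17):
    """Estimate the bench-press 1RM by linear extrapolation of the individual
    load-velocity line to an assumed velocity at 1RM (v_1rm, m/s)."""
    slope = (load_heavy - load_light) / (v_heavy - v_light)   # kg per (m/s), negative
    intercept = load_light - slope * v_light
    return intercept + slope * v_1rm

# Illustrative session: 40 kg moved at ~1.0 m/s and 70 kg at ~0.5 m/s
print(round(two_point_1rm(40, 1.00, 70, 0.50), 1))
```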
Statistical Mechanics of Node-perturbation Learning with Noisy Baseline
NASA Astrophysics Data System (ADS)
Hara, Kazuyuki; Katahira, Kentaro; Okada, Masato
2017-02-01
Node-perturbation learning is a type of statistical gradient descent algorithm that can be applied to problems where the objective function is not explicitly formulated, including reinforcement learning. It estimates the gradient of an objective function by using the change in the objective function in response to a perturbation. The value of the objective function for an unperturbed output is called a baseline. Cho et al. proposed node-perturbation learning with a noisy baseline. In this paper, we report on building the statistical mechanics of Cho's model and on deriving coupled differential equations of order parameters that depict the learning dynamics. We also show how to derive the generalization error by solving the differential equations of order parameters. On the basis of the results, we show that Cho's results also apply in general cases, and we describe some general performance properties of Cho's model.
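A toy sketch of the node-perturbation update with a noisy baseline is given below for a linear student-teacher setup: the change in the objective relative to the (noisy) baseline, multiplied by the output perturbation, serves as the gradient estimate. Network size, noise levels and learning rate are illustrative choices, not those analyzed in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
n_in, n_out, sigma, eta = 5, 1, 0.1, 0.05
w_true = rng.normal(size=(n_out, n_in))   # teacher weights
w = np.zeros((n_out, n_in))               # student weights

for _ in range(2000):
    x = rng.normal(size=n_in)
    y = w_true @ x                                                  # teacher output
    out = w @ x                                                     # student output
    baseline = 0.5 * np.sum((out - y) ** 2) + rng.normal(0, 0.01)   # noisy baseline
    xi = rng.normal(0, sigma, size=n_out)                           # perturbation of the node output
    perturbed = 0.5 * np.sum((out + xi - y) ** 2)
    # Gradient estimate: objective change times the perturbation, projected onto inputs
    w -= eta * (perturbed - baseline) / sigma ** 2 * np.outer(xi, x)

print(round(float(np.linalg.norm(w - w_true)), 3))   # should shrink toward 0
```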
Comparison of prognostic and diagnostic surface flux modeling approaches over the Nile River Basin
USDA-ARS?s Scientific Manuscript database
Regional evapotranspiration (ET) can be estimated using diagnostic remote sensing models, generally based on principles of energy balance, or with spatially distributed prognostic models that simultaneously balance both the energy and water budgets over landscapes using predictive equations for land...
NASA Astrophysics Data System (ADS)
Tikhonov, D. A.; Sobolev, E. V.
2011-04-01
A method of integral equations of the theory of liquids in the reference interaction site model (RISM) approximation is used to estimate the Gibbs energy averaged over equilibrium trajectories computed by molecular mechanics. Peptide oxytocin is selected as the object of interest. The Gibbs energy is calculated using all chemical potential formulas introduced in the RISM approach for the excess chemical potential of solvation and is compared with estimates by the generalized Born model. Some formulas are shown to give the wrong sign of Gibbs energy changes when peptide passes from the gas phase into water environment; the other formulas give overestimated Gibbs energy changes with the right sign. Note that allowance for the repulsive correction in the approximate analytical expressions for the Gibbs energy derived by thermodynamic perturbation theory is not a remedy.
Demidenko, Eugene
2017-09-01
The exact density distribution of the nonlinear least squares estimator in the one-parameter regression model is derived in closed form and expressed through the cumulative distribution function of the standard normal variable. Several proposals to generalize this result are discussed. The exact density is extended to the estimating equation (EE) approach and the nonlinear regression with an arbitrary number of linear parameters and one intrinsically nonlinear parameter. For a very special nonlinear regression model, the derived density coincides with the distribution of the ratio of two normally distributed random variables previously obtained by Fieller (1932), unlike other approximations previously suggested by other authors. Approximations to the density of the EE estimators are discussed in the multivariate case. Numerical complications associated with the nonlinear least squares are illustrated, such as nonexistence and/or multiple solutions, as major factors contributing to poor density approximation. The nonlinear Markov-Gauss theorem is formulated based on the near exact EE density approximation.
Techniques for estimating magnitude and frequency of floods in Minnesota
Guetzkow, Lowell C.
1977-01-01
Estimating relations have been developed to provide engineers and designers with improved techniques for defining flow-frequency characteristics to satisfy hydraulic planning and design requirements. The magnitude and frequency of floods up to the 100-year recurrence interval can be determined for most streams in Minnesota by the methods presented. By multiple regression analysis, equations have been developed for estimating flood-frequency relations at ungaged sites on natural-flow streams. Eight distinct hydrologic regions are delineated within the State, with boundaries defined generally by river basin divides. Regression equations are provided for each region which relate selected frequency floods to significant basin parameters. For main-stem streams, graphs are presented showing floods for selected recurrence intervals plotted against contributing drainage area. Flow-frequency estimates for intervening sites along the Minnesota River, Mississippi River, and the Red River of the North can be derived from these graphs. Flood-frequency characteristics are tabulated for 201 gaging stations having 10 or more years of record.
NASA Astrophysics Data System (ADS)
Liu, Xiaomang; Liu, Changming; Brutsaert, Wilfried
2016-12-01
The performance of a nonlinear formulation of the complementary principle for evaporation estimation was investigated in 241 catchments with different climate conditions in the eastern monsoon region of China. Evaporation (Ea) calculated by the water balance equation was used as the reference. Ea estimated by the calibrated nonlinear formulation was generally in good agreement with the water balance results, especially in relatively dry catchments. The single parameter in the nonlinear formulation, namely αe as a weak analog of the alpha parameter of Priestley and Taylor, tended to exhibit larger values in warmer and humid near-coastal areas, but smaller values in colder, drier environments inland, with a significant dependency on the aridity index (AI). The nonlinear formulation combined with the equation relating the single parameter and AI provides a promising method to estimate regional Ea with standard and routinely measured meteorological data.
Net Carbon Balance for the Brazilian Amazon
NASA Technical Reports Server (NTRS)
Houghton, R. A.
1998-01-01
The general purpose of this research was to use recent satellite-based estimates of deforestation in Brazilian Amazonia to calculate the net flux of carbon associated with deforestation and subsequent regrowth of secondary forests. We have made such a calculation, in the process comparing two estimates of deforestation and two estimates of biomass for the region. Both biomass estimates were based on the RADAMBRASIL survey; they differed in the equations used to convert wood volumes to total biomass. The net flux of carbon from changes in land use seems to vary from year to year, perhaps by as much as a factor of 4.
NASA Technical Reports Server (NTRS)
Suit, W. T.; Cannaday, R. L.
1979-01-01
The longitudinal and lateral stability and control parameters for a high-wing, general aviation airplane are examined. Estimates obtained from flight data at various flight conditions within the normal operating range of the aircraft are presented. The estimation techniques, an output-error technique (maximum likelihood) and an equation-error technique (linear regression), are presented. The longitudinal static parameters are estimated from climbing, descending, and quasi-steady-state flight data. The lateral excitations involve a combination of rudder and ailerons. The sensitivity of the aircraft modes of motion to variations in the parameter estimates is discussed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hooker, J.N.
This report describes an investigation of energy consumption and efficiency of oil pipelines in the US in 1978. It is based on a simulation of the actual movement of oil on a very detailed representation of the pipeline network, and it uses engineering equations to calculate the energy that pipeline pumps must have exerted on the oil to move it in this manner. The efficiencies of pumps and drivers are estimated so as to arrive at the amount of energy consumed at pumping stations. The throughput in each pipeline segment is estimated by distributing each pipeline company's reported oil movements over its segments in proportions predicted by regression equations that show typical throughput and throughput capacity as functions of pipe diameter. The form of the equations is justified by a generalized cost-engineering study of pipelining, and their parameters are estimated using new techniques developed for the purpose. A simplified model of flow scheduling is chosen on the basis of actual energy use data obtained from a few companies. The study yields energy consumption and intensiveness estimates for crude oil trunk lines, crude oil gathering lines and oil products lines, for the nation as well as by state and by pipe diameter. It characterizes the efficiency of typical pipelines of various diameters operating at capacity. Ancillary results include estimates of oil movements by state and by diameter and approximate pipeline capacity utilization nationwide.
Sun, Xiaodian; Jin, Li; Xiong, Momiao
2008-01-01
It is system dynamics that determines the function of cells, tissues and organisms. Developing mathematical models and estimating their parameters are essential steps in studying the dynamic behavior of biological systems, including metabolic networks, genetic regulatory networks and signal transduction pathways, under perturbation by external stimuli. In general, biological dynamic systems are partially observed. Therefore, a natural way to model dynamic biological systems is to employ nonlinear state-space equations. Although statistical methods for parameter estimation of linear models in biological dynamic systems have been developed intensively in recent years, the estimation of both states and parameters of nonlinear dynamic systems remains a challenging task. In this report, we apply the extended Kalman filter (EKF) to the estimation of both states and parameters of nonlinear state-space models. To evaluate the performance of the EKF for parameter estimation, we apply the EKF to a simulation dataset and two real datasets: the JAK-STAT signal transduction pathway and the Ras/Raf/MEK/ERK signaling transduction pathway datasets. The preliminary results show that the EKF can accurately estimate the parameters and predict states in nonlinear state-space equations for modeling dynamic biochemical networks. PMID:19018286
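The joint state-and-parameter idea is usually implemented by augmenting the state vector with the unknown parameters and linearizing at each step. The sketch below does this for a toy one-compartment decay model with an unknown rate constant; the system, noise levels and tuning are illustrative and unrelated to the pathways analyzed in the report.

```python
import numpy as np

rng = np.random.default_rng(0)
dt, k_true, x_true, n_steps = 0.1, 0.8, 5.0, 100

# Augmented state z = [x, k]: track the state and the unknown rate constant jointly
z = np.array([4.0, 0.3])                  # initial guesses
P = np.diag([1.0, 1.0])
Q = np.diag([1e-4, 1e-5])                 # process noise (parameter nearly constant)
R = np.array([[0.05]])
H = np.array([[1.0, 0.0]])                # we observe x only

for _ in range(n_steps):
    # Simulate the true system dx/dt = -k*x and a noisy observation
    x_true = x_true - k_true * x_true * dt
    y = x_true + rng.normal(0, R[0, 0] ** 0.5)

    # EKF predict: propagate the augmented state and linearize around it
    x, k = z
    z = np.array([x - k * x * dt, k])
    F = np.array([[1 - k * dt, -x * dt], [0.0, 1.0]])
    P = F @ P @ F.T + Q

    # EKF update with the scalar observation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    z = z + (K @ (np.array([[y]]) - H @ z.reshape(-1, 1))).ravel()
    P = (np.eye(2) - K @ H) @ P

print(np.round(z, 3))   # estimated [x, k]; k typically moves toward 0.8
```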
Mortensen, Stig B; Klim, Søren; Dammann, Bernd; Kristensen, Niels R; Madsen, Henrik; Overgaard, Rune V
2007-10-01
The non-linear mixed-effects model based on stochastic differential equations (SDEs) provides an attractive residual error model that is able to handle serially correlated residuals typically arising from structural mis-specification of the true underlying model. The use of SDEs also opens up new tools for model development and easily allows for tracking of unknown inputs and parameters over time. An algorithm for maximum likelihood estimation of the model has earlier been proposed, and the present paper presents the first general implementation of this algorithm. The implementation is done in Matlab and also demonstrates the use of parallel computing for improved estimation times. The use of the implementation is illustrated by two examples of application, which focus on the ability of the model to estimate unknown inputs facilitated by the extension to SDEs. The first application is a deconvolution-type estimation of the insulin secretion rate based on a linear two-compartment model for C-peptide measurements. In the second application the model is extended to also give an estimate of the time-varying liver extraction based on both C-peptide and insulin measurements.
Uncertainty analysis of a three-parameter Budyko-type equation at annual and monthly time scales
NASA Astrophysics Data System (ADS)
Mianabadi, Ameneh; Alizadeh, Amin; Sanaeinejad, Hossein; Ghahraman, Bijan; Davary, Kamran; Shahedi, Mehri; Talebi, Fatemeh
2017-04-01
The Budyko curves can estimate mean annual evaporation at catchment scale as a function of precipitation and potential evaporation. They apply to steady-state catchments with negligible water-storage change. In non-steady-state catchments, especially irrigated ones, and at small spatial and temporal scales, the water-storage change is not negligible and the Budyko curves are therefore limited. In these cases, water sources other than precipitation, including groundwater depletion and initial soil moisture, are available for evaporation. Therefore, evaporation exceeds precipitation and the data do not follow the original Budyko framework. In this study, the two-parameter Budyko equation of Greve et al. (2016) was considered. They proposed a Budyko-type equation in which they changed the boundary condition of the water-limited line and added a new parameter to the Fu equation. Following the suggestion of Chen et al. (2013), in arid regions where the aridity index is greater than one, the Budyko curve can be shifted to the right along the aridity-index axis. Therefore, in this study, we combined the equations of Greve et al. (2016) and Chen et al. (2013) and proposed a new equation with three parameters (y0, k, c) to estimate the monthly and annual evaporation of five semi-arid watersheds in the Kavir-e-Markazi basin: E/P = F(φ, y0, k, c) = 1 + (φ − c) − [1 + (1 − y0)^(k−1) (φ − c)^k]^(1/k). In this equation, E, P and φ are evaporation, precipitation and the aridity index, respectively. To calibrate the new Budyko curve, we used the evaporation estimated by the water balance equation for 11 water years (2002-2012). Because of the variability of watershed characteristics and climate conditions, we used GLUE (Generalized Likelihood Uncertainty Estimation) to calibrate the proposed equation and increase the reliability of the model. Based on GLUE, the parameter sets with the highest likelihood were estimated as y0 = 0.02, k = 3.70 and c = 3.61 at the annual scale and y0 = 0.07, k = 2.50 and c = 0.97 at the monthly scale. The results showed that the proposed equation can estimate annual evaporation reasonably well, with R2 = 0.93 and RMSE = 18.5 mm year-1. It can also estimate evaporation at the monthly scale with R2 = 0.88 and RMSE = 7.9 mm month-1. The posterior distribution functions of the parameters showed that parameter uncertainty decreases with the GLUE method; this uncertainty reduction (and therefore the sensitivity of the equation to the parameters) differs for each parameter. Chen, X., Alimohammadi, N., Wang, D. 2013. Modeling interannual variability of seasonal evaporation and storage change based on the extended Budyko framework. Water Resources Research, 49(9): 6067-6078. Greve, P., Gudmundsson, L., Orlowsky, B., Seneviratne, S.I. 2016. A two-parameter Budyko function to represent conditions under which evapotranspiration exceeds precipitation. Hydrology and Earth System Sciences, 20(6): 2195-2205. DOI:10.5194/hess-20-2195-2016.
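For concreteness, the proposed three-parameter curve can be evaluated directly; the sketch below uses the annual-scale parameters reported above (y0 = 0.02, k = 3.70, c = 3.61) for a few illustrative aridity-index values.

```python
def budyko_three_param(phi, y0, k, c):
    """Evaporation ratio E/P from the shifted Budyko-type curve
    E/P = 1 + (phi - c) - [1 + (1 - y0)^(k-1) * (phi - c)^k]^(1/k),
    intended for the shifted range phi > c."""
    s = phi - c
    return 1.0 + s - (1.0 + (1.0 - y0) ** (k - 1.0) * s ** k) ** (1.0 / k)

# Annual-scale parameters reported in the abstract
for phi in (4.0, 5.0, 6.0):
    print(phi, round(budyko_three_param(phi, 0.02, 3.70, 3.61), 3))
```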
Arvanitis, Spyros; Loukis, Euripidis N
2016-05-01
Hospitals are making big investments in various types of ICT, so it is important to investigate their effects on innovation and performance. This paper presents an empirical study in this direction, based on data for 743 hospitals from 18 European countries. We specified and estimated econometrically five equations: one for product innovation, one for process innovation and three equations for the three different dimensions of (ICT-enabled) hospital performance. All five equations included various ICT-related variables reflecting ICT infrastructure and a series of important ICT applications, some of them hospital-specific, and some others of general business use, and also ICT personnel (viewed as a kind of 'soft' ICT investment), while the performance equations also included the two innovation measures.
Relations for estimating unit-hydrograph parameters in New Mexico
Waltemeyer, Scott D.
2001-01-01
Data collected from 20 U.S. Geological Survey streamflow-gaging stations, most of which were operated in New Mexico between about 1969 and 1977, were used to define hydrograph characteristics for small New Mexico streams. Drainage areas for the gaging stations ranged from 0.23 to 18.2 square miles. Observed values for the hydrograph characteristics were determined for 87 of the most significant rainfall-runoff events at these gaging stations and were used to define regional regression relations with basin characteristics. Regional relations defined lag time (tl), time of concentration (tc), and time to peak (tp) as functions of stream length and basin shape. The regional equation developed for time of concentration for New Mexico agrees well with the Kirpich equation developed for Tennessee. The Kirpich equation is based on stream length and channel slope, whereas the New Mexico equation is based on stream length and basin shape. Both equations, however, underestimate tc when applied to larger basins where tc is greater than about 2 hours. The median ratio between tp and tc for the observed data was 0.66, which equals the value (0.67) recommended by the Natural Resources Conservation Service (formerly the Soil Conservation Service). However, the median ratio between tl and tc was only 0.42, whereas the commonly used ratio is 0.60. A relation also was developed between unit-peak discharge (qu) and time of concentration. The unit-peak discharge relation is similar in slope to the Natural Resources Conservation Service equation, but the equation developed for New Mexico in this study produces estimates of qu that range from two to three times as large as those estimated from the Natural Resources Conservation Service equation. An average value of 833 was determined for the empirical constant Kp. A default value of 484 has been used by the Natural Resources Conservation Service when site-specific data are not available. The use of a lower value of Kp in calculations generally results in a lower peak discharge. A relation between the empirical constant Kp and average channel slope was defined in this study. The predicted Kp values from the equation ranged from 530 to 964 for the 20 flood-hydrograph gaging stations. The standard error of estimate for the equation is 36 percent.
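The empirical constant Kp discussed above enters through the NRCS (SCS) unit-hydrograph peak relation, which in customary units is qp = Kp·A·Q/Tp; the sketch below evaluates it with the default Kp of 484 and the study-average value of 833. The basin values are hypothetical.

```python
def unit_hydrograph_peak(area_mi2, runoff_in, tp_hr, kp=484.0):
    """Unit-hydrograph peak discharge (ft^3/s): qp = Kp * A * Q / Tp,
    with drainage area in mi^2, runoff in inches, and time to peak in hours."""
    return kp * area_mi2 * runoff_in / tp_hr

print(unit_hydrograph_peak(5.0, 1.0, 2.0))            # NRCS default Kp = 484
print(unit_hydrograph_peak(5.0, 1.0, 2.0, kp=833.0))  # study-average Kp for New Mexico
```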
Cattell-Horn-Carroll Cognitive Abilities and Reading Achievement
ERIC Educational Resources Information Center
Benson, Nicholas
2008-01-01
Structural equation modeling procedures are applied to the standardization sample of the Woodcock-Johnson III to simultaneously estimate the effects of a psychometric general factor (g), specific cognitive abilities, and reading skills on reading achievement. The results of this study indicate that g has a strong direct relationship with basic…
ERIC Educational Resources Information Center
Cham, Heining; West, Stephen G.; Ma, Yue; Aiken, Leona S.
2012-01-01
A Monte Carlo simulation was conducted to investigate the robustness of 4 latent variable interaction modeling approaches (Constrained Product Indicator [CPI], Generalized Appended Product Indicator [GAPI], Unconstrained Product Indicator [UPI], and Latent Moderated Structural Equations [LMS]) under high degrees of nonnormality of the observed…
Estimation of watershed lag times and times of concentration for the Kansas City Area.
DOT National Transportation Integrated Search
2016-04-01
Lag time (TL) and time of concentration (TC) are two related measures of how quickly a stream responds to : runoff-producing rainfall over its watershed. In this report, a general relationship for lag time is derived from the : Manning equation for f...
NASA Technical Reports Server (NTRS)
Franca, Leopoldo P.; Loula, Abimael F. D.; Hughes, Thomas J. R.; Miranda, Isidoro
1989-01-01
By adding a residual form of the equilibrium equation to the classical Hellinger-Reissner formulation, a new Galerkin/least-squares finite element method is derived. It fits within the framework of a mixed finite element method and is stable for rather general combinations of stress and velocity interpolations, including equal-order discontinuous stress and continuous velocity interpolations, which are unstable within the Galerkin approach. Error estimates are presented based on a generalization of the Babuska-Brezzi theory. Numerical results (not presented herein) have confirmed these estimates as well as the good accuracy and stability of the method.
Mann, Michael P.; Rizzardo, Jule; Satkowski, Richard
2004-01-01
Accurate streamflow statistics are essential to water resource agencies involved in both science and decision-making. When long-term streamflow data are lacking at a site, estimation techniques are often employed to generate streamflow statistics. However, procedures for accurately estimating streamflow statistics often are lacking. When estimation procedures are developed, they often are not evaluated properly before being applied. Use of unevaluated or underevaluated flow-statistic estimation techniques can result in improper water-resources decision-making. The California State Water Resources Control Board (SWRCB) uses two key techniques, a modified rational equation and drainage-basin area-ratio transfer, to estimate streamflow statistics at ungaged locations. These techniques have been implemented to varying degrees but have not been formally evaluated. For estimating peak flows at the 2-, 5-, 10-, 25-, 50-, and 100-year recurrence intervals, the SWRCB uses the U.S. Geological Survey's (USGS) regional peak-flow equations. In this study, done cooperatively by the USGS and SWRCB, the SWRCB estimated several flow statistics at 40 USGS streamflow gaging stations in the north coast region of California. The SWRCB estimates were made without reference to USGS flow data. The USGS used the streamflow data provided by the 40 stations to generate flow statistics that could be compared with SWRCB estimates for accuracy. While some SWRCB estimates compared favorably with USGS statistics, results were subject to varying degrees of error over the region. Flow-based estimation techniques generally performed better than rain-based methods, especially for estimation of December 15 to March 31 mean daily flows. The USGS peak-flow equations also performed well but tended to underestimate peak flows. The USGS equations performed within reported error bounds, but will require updating in the future as peak-flow data sets grow larger. Little correlation was discovered between estimation errors and geographic locations or various basin characteristics. However, for 25-percentile-year mean-daily-flow estimates for December 15 to March 31, the greatest estimation errors were at east San Francisco Bay area stations with mean annual precipitation less than or equal to 30 inches and estimated 2-year/24-hour rainfall intensity less than 3 inches.
Wiley, J.B.; Atkins, John T.; Tasker, Gary D.
2000-01-01
Multiple and simple least-squares regression models for the log10-transformed 100-year discharge with independent variables describing the basin characteristics (log10-transformed and untransformed) for 267 streamflow-gaging stations were evaluated, and the regression residuals were plotted as areal distributions that defined three regions of the State, designated East, North, and South. Exploratory data analysis procedures identified 31 gaging stations at which discharges are different than would be expected for West Virginia. Regional equations for the 2-, 5-, 10-, 25-, 50-, 100-, 200-, and 500-year peak discharges were determined by generalized least-squares regression using data from 236 gaging stations. Log10-transformed drainage area was the most significant independent variable for all regions.Equations developed in this study are applicable only to rural, unregulated, streams within the boundaries of West Virginia. The accuracy of estimating equations is quantified by measuring the average prediction error (from 27.7 to 44.7 percent) and equivalent years of record (from 1.6 to 20.0 years).
Waltemeyer, Scott D.
2008-01-01
Estimates of the magnitude and frequency of peak discharges are necessary for the reliable design of bridges and culverts, for open-channel hydraulic analysis, and for flood-hazard mapping in New Mexico and surrounding areas. The U.S. Geological Survey, in cooperation with the New Mexico Department of Transportation, updated estimates of peak-discharge magnitude for gaging stations in the region and updated regional equations for estimation of peak discharge and frequency at ungaged sites. Equations were developed for estimating the magnitude of peak discharges for recurrence intervals of 2, 5, 10, 25, 50, 100, and 500 years at ungaged sites by use of data collected through 2004 for 293 gaging stations on unregulated streams that have 10 or more years of record. Peak discharges for selected recurrence intervals were determined at gaging stations by fitting observed data to a log-Pearson Type III distribution with adjustments for a low-discharge threshold and a zero skew coefficient. A low-discharge threshold was applied in the frequency analysis of 140 of the 293 gaging stations; this application provides an improved fit of the log-Pearson Type III frequency distribution. Use of the low-discharge threshold generally eliminated peak discharges having a recurrence interval of less than 1.4 years from the probability-density function. Within each of the nine regions, logarithms of the maximum peak discharges for selected recurrence intervals were related to logarithms of basin and climatic characteristics by using stepwise ordinary least-squares regression techniques for exploratory data analysis. Generalized least-squares regression techniques, an improved regression procedure that accounts for time and spatial sampling errors, then were applied to the same data used in the ordinary least-squares regression analyses. The average standard error of prediction, which includes average sampling error and average standard error of regression, ranged from 38 to 93 percent (mean value of 62 and median value of 59) for the 100-year flood. In the 1996 investigation, the standard error of prediction for the flood regions ranged from 41 to 96 percent (mean value of 67 and median value of 68) for the 100-year flood analyzed by using generalized least-squares regression. Overall, the equations based on generalized least-squares regression techniques are more reliable than those in the 1996 report because of the increased length of record and an improved geographic information system (GIS) method to determine basin and climatic characteristics. Flood-frequency estimates can be made for ungaged sites upstream or downstream from gaging stations by using a method that transfers flood-frequency data from the gaging station to the ungaged site with a drainage-area ratio adjustment equation. The peak discharge for a given recurrence interval at the gaging station, the drainage-area ratio, and the drainage-area exponent from the regional regression equation of the respective region are used to transfer the peak discharge for that recurrence interval to the ungaged site. Maximum observed peak discharge as related to drainage area was determined for New Mexico. Extreme events are commonly used in the design and appraisal of bridge crossings and other structures; bridge-scour evaluations are commonly made by using the 500-year peak discharge for these appraisals.
Peak-discharge data collected at 293 gaging stations and 367 miscellaneous sites were used to develop a maximum peak-discharge relation as an alternative method of estimating peak discharge of an extreme event such as a maximum probable flood.
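The drainage-area ratio transfer described above has a direct one-line implementation; in the sketch below the gage discharge, drainage areas, and regional drainage-area exponent are hypothetical values chosen only to show the computation.

```python
def transfer_peak_discharge(q_gaged, area_gaged, area_ungaged, exponent):
    """Transfer a peak discharge from a gaging station to an ungaged site on the
    same stream using the drainage-area ratio raised to the regional
    drainage-area exponent."""
    return q_gaged * (area_ungaged / area_gaged) ** exponent

# Hypothetical 100-year peak of 3,500 ft3/s at a gage draining 120 mi2,
# transferred to an ungaged site draining 150 mi2 with a regional exponent of 0.6
print(round(transfer_peak_discharge(3500, 120, 150, 0.6)))
```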
NASA Astrophysics Data System (ADS)
Mehdizadeh, Saeid; Behmanesh, Javad; Khalili, Keivan
2016-08-01
In the present research, three artificial intelligence methods, including Gene Expression Programming (GEP), Artificial Neural Networks (ANN) and the Adaptive Neuro-Fuzzy Inference System (ANFIS), as well as 48 empirical equations (10, 12 and 26 equations were temperature-based, sunshine-based and meteorological parameters-based, respectively) were used to estimate daily solar radiation in Kerman, Iran, in the period 1992-2009. To develop the GEP, ANN and ANFIS models, depending on the empirical equations used, various combinations of minimum air temperature, maximum air temperature, mean air temperature, extraterrestrial radiation, actual sunshine duration, maximum possible sunshine duration, sunshine duration ratio, relative humidity and precipitation were considered as inputs in the mentioned intelligent methods. To compare the accuracy of the empirical equations and intelligent models, root mean square error (RMSE), mean absolute error (MAE), mean absolute relative error (MARE) and the determination coefficient (R2) were used. The results showed that, in general, the sunshine-based and meteorological parameters-based scenarios in the ANN and ANFIS models were more accurate than the corresponding empirical equations. Moreover, the most accurate method in the studied region was the ANN11 scenario with five inputs. The values of the RMSE, MAE, MARE and R2 indices for this model were 1.850 MJ m-2 day-1, 1.184 MJ m-2 day-1, 9.58% and 0.935, respectively.
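The four comparison indices named above are simple to compute; a minimal sketch follows, with placeholder daily radiation values rather than the Kerman data (R2 is taken here as the squared correlation between observed and estimated series).

```python
import numpy as np

def error_metrics(observed, estimated):
    """Return RMSE, MAE, MARE (%) and the determination coefficient R^2."""
    obs, est = np.asarray(observed), np.asarray(estimated)
    err = est - obs
    rmse = np.sqrt(np.mean(err ** 2))
    mae = np.mean(np.abs(err))
    mare = 100 * np.mean(np.abs(err) / obs)
    r2 = np.corrcoef(obs, est)[0, 1] ** 2
    return rmse, mae, mare, r2

# Placeholder daily solar radiation values (MJ m-2 day-1)
obs = np.array([18.2, 22.5, 25.1, 15.8, 20.3])
est = np.array([17.0, 23.1, 24.0, 17.2, 19.8])
print(np.round(error_metrics(obs, est), 3))
```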
Bell, Melanie L; Horton, Nicholas J; Dhillon, Haryana M; Bray, Victoria J; Vardy, Janette
2018-05-26
Patient reported outcomes (PROs) are important in oncology research; however, missing data can pose a threat to the validity of results. Psycho-oncology researchers should be aware of the statistical options for handling missing data robustly. One rarely used set of methods, which includes extensions for handling missing data, is generalized estimating equations (GEEs). Our objective was to demonstrate the use of GEEs to analyze PROs with missing data in randomized trials with assessments at fixed time points. We introduce GEEs and show, with a worked example, how to use GEEs that account for missing data: inverse probability weighted GEEs and multiple imputation with GEE. We use data from an RCT evaluating a web-based brain training for cancer survivors reporting cognitive symptoms after chemotherapy treatment. The primary outcome for this demonstration is the binary outcome of cognitive impairment. Several methods are used, and results are compared. We demonstrate that estimates can vary depending on the choice of analytical approach, with odds ratios for no cognitive impairment ranging from 2.04 to 5.74. While most of these estimates were statistically significant (P < 0.05), a few were not. Researchers using PROs should use statistical methods that handle missing data in a way that results in unbiased estimates. GEE extensions are analytic options for handling dropouts in longitudinal RCTs, particularly if the outcome is not continuous. Copyright © 2018 John Wiley & Sons, Ltd.
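For readers who want a starting point, the sketch below fits a basic GEE with a logit link and exchangeable working correlation on synthetic longitudinal data using statsmodels; the inverse-probability-weighting and multiple-imputation extensions discussed above would be layered on top of a fit like this, and the data here are simulated placeholders.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n_patients, n_visits = 50, 3

# Synthetic longitudinal PRO data: binary cognitive impairment at each visit
df = pd.DataFrame({
    "id": np.repeat(np.arange(n_patients), n_visits),
    "visit": np.tile(np.arange(n_visits), n_patients),
    "arm": np.repeat(rng.integers(0, 2, n_patients), n_visits),
})
logit = -0.5 - 0.8 * df["arm"] + 0.2 * df["visit"]
df["impaired"] = rng.binomial(1, 1 / (1 + np.exp(-logit)))

# GEE with a logit link and exchangeable working correlation within patients
model = sm.GEE.from_formula(
    "impaired ~ arm + visit",
    groups="id",
    data=df,
    family=sm.families.Binomial(),
    cov_struct=sm.cov_struct.Exchangeable(),
)
print(model.fit().summary())
```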
Hosseinbor, Ameer Pasha; Chung, Moo K; Wu, Yu-Chien; Alexander, Andrew L
2011-01-01
The estimation of the ensemble average propagator (EAP) directly from q-space DWI signals is an open problem in diffusion MRI. Diffusion spectrum imaging (DSI) is one common technique to compute the EAP directly from the diffusion signal, but it is burdened by the large sampling required. Recently, several analytical EAP reconstruction schemes for multiple q-shell acquisitions have been proposed. One in particular is Diffusion Propagator Imaging (DPI), which is based on the Laplace's equation estimation of the diffusion signal for each shell acquisition. Viewed intuitively in terms of the heat equation, the DPI solution is obtained when the heat distribution between temperature measurements at each shell is at steady state. We propose a generalized extension of DPI, Bessel Fourier Orientation Reconstruction (BFOR), whose solution is based on heat equation estimation of the diffusion signal for each shell acquisition. That is, the heat distribution between shell measurements is no longer at steady state. In addition to being analytical, the BFOR solution also includes an intrinsic exponential smoothing term. We illustrate the effectiveness of the proposed method by showing results on both synthetic and real MR datasets.
Dynamic Programming and Error Estimates for Stochastic Control Problems with Maximum Cost
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bokanowski, Olivier, E-mail: boka@math.jussieu.fr; Picarelli, Athena, E-mail: athena.picarelli@inria.fr; Zidani, Hasnaa, E-mail: hasnaa.zidani@ensta.fr
2015-02-15
This work is concerned with stochastic optimal control for a running maximum cost. A direct approach based on dynamic programming techniques is studied, leading to the characterization of the value function as the unique viscosity solution of a second order Hamilton–Jacobi–Bellman (HJB) equation with an oblique derivative boundary condition. A general numerical scheme is proposed and a convergence result is provided. Error estimates are obtained for the semi-Lagrangian scheme. These results can apply to the case of lookback options in finance. Moreover, optimal control problems with maximum cost arise in the characterization of the reachable sets for a system of controlled stochastic differential equations. Some numerical simulations on examples of reachability analysis are included to illustrate our approach.
Empirical Likelihood in Nonignorable Covariate-Missing Data Problems.
Xie, Yanmei; Zhang, Biao
2017-04-20
Missing covariate data occurs often in regression analysis, which frequently arises in the health and social sciences as well as in survey sampling. We study methods for the analysis of a nonignorable covariate-missing data problem in an assumed conditional mean function when some covariates are completely observed but other covariates are missing for some subjects. We adopt the semiparametric perspective of Bartlett et al. (Improving upon the efficiency of complete case analysis when covariates are MNAR. Biostatistics 2014;15:719-30) on regression analyses with nonignorable missing covariates, in which they have introduced the use of two working models, the working probability model of missingness and the working conditional score model. In this paper, we study an empirical likelihood approach to nonignorable covariate-missing data problems with the objective of effectively utilizing the two working models in the analysis of covariate-missing data. We propose a unified approach to constructing a system of unbiased estimating equations, where there are more equations than unknown parameters of interest. One useful feature of these unbiased estimating equations is that they naturally incorporate the incomplete data into the data analysis, making it possible to seek efficient estimation of the parameter of interest even when the working regression function is not specified to be the optimal regression function. We apply the general methodology of empirical likelihood to optimally combine these unbiased estimating equations. We propose three maximum empirical likelihood estimators of the underlying regression parameters and compare their efficiencies with other existing competitors. We present a simulation study to compare the finite-sample performance of various methods with respect to bias, efficiency, and robustness to model misspecification. The proposed empirical likelihood method is also illustrated by an analysis of a data set from the US National Health and Nutrition Examination Survey (NHANES).
Bisese, James A.
1995-01-01
Methods are presented for estimating the peak discharges of rural, unregulated streams in Virginia. A Pearson Type III distribution is fitted to the logarithms of the unregulated annual peak-discharge records from 363 stream-gaging stations in Virginia to estimate the peak discharge at these stations for recurrence intervals of 2 to 500 years. Peak-discharge characteristics for 284 unregulated stations are divided into eight regions based on physiographic province and regressed on basin characteristics, including drainage area, main channel length, main channel slope, mean basin elevation, percentage of forest cover, mean annual precipitation, and maximum rainfall intensity. Regression equations for each region are computed by use of the generalized least-squares method, which accounts for spatial and temporal correlation between nearby gaging stations. This regression technique weights the significance of each station to the regional equation based on the length of record collected at each station, the correlation between annual peak discharges among the stations, and the standard deviation of the annual peak discharge for each station. Drainage area proved to be the only significant explanatory variable in four regions, while other regions have as many as three significant variables. Standard errors of the regression equations range from 30 to 80 percent. Alternate equations using drainage area only are provided for the five regions with more than one significant explanatory variable. Methods and sample computations are provided to estimate peak discharges at gaged and ungaged sites in Virginia for recurrence intervals of 2, 5, 10, 25, 50, 100, 200, and 500 years, and to adjust the regression estimates for sites on gaged streams where nearby gaging-station records are available.
Using machine learning to model dose-response relationships.
Linden, Ariel; Yarnold, Paul R; Nallamothu, Brahmajee K
2016-12-01
Establishing the relationship between various doses of an exposure and a response variable is integral to many studies in health care. Linear parametric models, widely used for estimating dose-response relationships, have several limitations. This paper employs the optimal discriminant analysis (ODA) machine-learning algorithm to determine the degree to which exposure dose can be distinguished based on the distribution of the response variable. By framing the dose-response relationship as a classification problem, machine learning can provide the same functionality as conventional models, but can additionally make individual-level predictions, which may be helpful in practical applications like establishing responsiveness to prescribed drug regimens. Using data from a study measuring the responses of blood flow in the forearm to the intra-arterial administration of isoproterenol (separately for 9 black and 13 white men, and pooled), we compare the results estimated from a generalized estimating equations (GEE) model with those estimated using ODA. Generalized estimating equations and ODA both identified many statistically significant dose-response relationships, separately by race and for pooled data. Post hoc comparisons between doses indicated ODA (based on exact P values) was consistently more conservative than GEE (based on estimated P values). Compared with ODA, GEE produced twice as many instances of paradoxical confounding (findings from analysis of pooled data that are inconsistent with findings from analyses stratified by race). Given its unique advantages and greater analytic flexibility, maximum-accuracy machine-learning methods like ODA should be considered as the primary analytic approach in dose-response applications. © 2016 John Wiley & Sons, Ltd.
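As a hedged illustration of the GEE side of the comparison above (synthetic data, not the study's blood-flow measurements), a dose-response model can be fit with repeated measurements clustered within subjects:

```python
# Sketch: dose-response via generalized estimating equations with an
# exchangeable working correlation, clustering repeated doses within subjects.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(2)
subjects = np.repeat(np.arange(20), 5)                  # 20 subjects, 5 doses each
dose = np.tile([0, 1, 2, 4, 8], 20)
subj_effect = rng.normal(scale=1.0, size=20)[subjects]  # induces within-subject correlation
response = 3.0 + 0.6 * dose + subj_effect + rng.normal(scale=0.5, size=100)

df = pd.DataFrame({"subject": subjects, "dose": dose, "response": response})
fit = sm.GEE.from_formula(
    "response ~ dose", groups="subject", data=df,
    family=sm.families.Gaussian(), cov_struct=sm.cov_struct.Exchangeable(),
).fit()
print(fit.summary())
```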
Connectivity equation and shaly-sand correction for electrical resistivity
Lee, Myung W.
2011-01-01
Estimating the amount of conductive and nonconductive constituents in the pore space of sediments by using electrical resistivity logs generally loses accuracy where clays are present in the reservoir. Many different methods and clay models have been proposed to account for the conductivity of clay (termed the shaly-sand correction). In this study, the connectivity equation (CE), which is a new approach to model non-Archie rocks, is used to correct for the clay effect and is compared with results using the Waxman and Smits method. The CE presented here requires no parameters other than an adjustable constant, which can be derived from the resistivity of water-saturated sediments. The new approach was applied to estimate water saturation of laboratory data and to estimate gas hydrate saturations at the Mount Elbert well on the Alaska North Slope. Although not as accurate as the Waxman and Smits method to estimate water saturations for the laboratory measurements, gas hydrate saturations estimated at the Mount Elbert well using the proposed CE are comparable to estimates from the Waxman and Smits method. Considering its simplicity, it has high potential to be used to account for the clay effect on electrical resistivity measurement in other systems.
Effects of temperature on embryonic development of lake herring (Coregonus artedii)
Colby, Peter J.; Brooke, L.T.
1973-01-01
Embryonic development of lake herring (Coregonus artedii) was observed in the laboratory at 13 constant temperatures from 0.0 to 12.1 C and in Pickerel Lake (Washtenaw County, Michigan) at natural temperature regimes. Rate of development during incubation was based on progression of the embryos through 20 identifiable stages. An equation was derived to predict development stage at constant temperatures, on the general assumption that development stage (DS) is a function of time (days, D) and temperature (T). The equation should also be useful in interpreting estimates from future regressions that include other environmental variables that affect egg development. A second regression model, derived primarily for fluctuating temperatures, related development rate for stage j (DRj), expressed as the reciprocal of time, to temperature (x). The generalized equation for a development stage is: DRj = a + bx + cx^2 + dx^3. In general, time required for embryos to reach each stage of development in Pickerel Lake agreed closely with the time predicted from this equation, derived from our laboratory observations. Hatching time was predicted within 1 day in 1969 and within 2 days in 1970. We used the equations derived with the second model to predict the effect of the superimposition of temperature increases of 1 and 2 C on the measured temperatures in Pickerel Lake. Conceivably, hatching dates could be affected sufficiently to jeopardize the first feeding of lake herring through loss of harmony between hatching date and seasonal food availability.
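A short sketch of fitting the cubic development-rate model described above; the temperature and rate values here are hypothetical placeholders, not the study's observations.

```python
# Sketch: fit DRj = a + b*x + c*x**2 + d*x**3, where x is temperature (C) and
# DRj is the reciprocal of days to reach stage j, then invert to predict days.
import numpy as np

temp = np.array([2.0, 4.0, 6.0, 8.0, 10.0, 12.0])
dr = np.array([0.010, 0.016, 0.024, 0.035, 0.048, 0.065])   # hypothetical 1/days

coeffs = np.polyfit(temp, dr, deg=3)                         # [d, c, b, a], highest order first
predict_days = lambda t: 1.0 / np.polyval(coeffs, t)
print(f"predicted days to stage j at 7.5 C: {predict_days(7.5):.1f}")
```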
Kirkham, Amy A; Pauhl, Katherine E; Elliott, Robyn M; Scott, Jen A; Doria, Silvana C; Davidson, Hanan K; Neil-Sztramko, Sarah E; Campbell, Kristin L; Camp, Pat G
2015-01-01
To determine the utility of equations that use the 6-minute walk test (6MWT) results to estimate peak oxygen uptake (VO2) and peak work rate with chronic obstructive pulmonary disease (COPD) patients in a clinical setting. This study included a systematic review to identify published equations estimating peak VO2 and peak work rate in watts in COPD patients and a retrospective chart review of data from a hospital-based pulmonary rehabilitation program. The following variables were abstracted from the records of 42 consecutively enrolled COPD patients: measured peak VO2 and peak work rate achieved during a cycle ergometer cardiopulmonary exercise test, 6MWT distance, age, sex, weight, height, forced expiratory volume in 1 second, forced vital capacity, and lung diffusion capacity. Estimated peak VO2 and peak work rate were estimated from 6MWT distance using published equations. The error associated with using estimated peak VO2 or peak work rate to prescribe aerobic exercise intensities of 60% and 80% was calculated. Eleven equations from 6 studies were identified. Agreement between estimated and measured values was poor to moderate (intraclass correlation coefficients = 0.11-0.63). The error associated with using estimated peak VO2 or peak work rate to prescribe exercise intensities of 60% and 80% of measured values ranged from mean differences of 12 to 35 and 16 to 47 percentage points, respectively. There is poor to moderate agreement between measured peak VO2 and peak work rate and estimations from equations that use 6MWT distance, and the use of the estimated values for prescription of aerobic exercise intensity would result in large error. Equations estimating peak VO2 and peak work rate are of low utility for prescribing exercise intensity in pulmonary rehabilitation programs.
Zemski, Adam J; Broad, Elizabeth M; Slater, Gary J
2018-01-01
Body composition in elite rugby union athletes is routinely assessed using surface anthropometry, which can be utilized to provide estimates of absolute body composition using regression equations. This study aims to assess the ability of available skinfold equations to estimate body composition in elite rugby union athletes who have unique physique traits and divergent ethnicity. The development of sport-specific and ethnicity-sensitive equations was also pursued. Forty-three male international Australian rugby union athletes of Caucasian and Polynesian descent underwent surface anthropometry and dual-energy X-ray absorptiometry (DXA) assessment. Body fat percent (BF%) was estimated using five previously developed equations and compared to DXA measures. Novel sport and ethnicity-sensitive prediction equations were developed using forward selection multiple regression analysis. Existing skinfold equations provided unsatisfactory estimates of BF% in elite rugby union athletes, with all equations demonstrating a 95% prediction interval in excess of 5%. The equations tended to underestimate BF% at low levels of adiposity, whilst overestimating BF% at higher levels of adiposity, regardless of ethnicity. The novel equations created explained a similar amount of variance to those previously developed (Caucasians 75%, Polynesians 90%). The use of skinfold equations, including the created equations, cannot be supported to estimate absolute body composition. Until a population-specific equation is established that can be validated to precisely estimate body composition, it is advocated to use a proven method, such as DXA, when absolute measures of lean and fat mass are desired, and raw anthropometry data routinely to derive an estimate of body composition change.
Phiri, Sam; Rothenbacher, Dietrich; Neuhann, Florian
2015-01-01
Background Chronic kidney disease (CKD) is a probably underrated public health problem in Sub-Saharan-Africa, in particular in combination with HIV-infection. Knowledge about the CKD prevalence is scarce and in the available literature different methods to classify CKD are used impeding comparison and general prevalence estimates. Methods This study assessed different serum-creatinine based equations for glomerular filtration rates (eGFR) and compared them to a cystatin C based equation. The study was conducted in Lilongwe, Malawi enrolling a population of 363 adults of which 32% were HIV-positive. Results Comparison of formulae based on Bland-Altman-plots and accuracy revealed best performance for the CKD-EPI equation without the correction factor for black Americans. Analyzing the differences between HIV-positive and –negative individuals CKD-EPI systematically overestimated eGFR in comparison to cystatin C and therefore lead to underestimation of CKD in HIV-positives. Conclusions Our findings underline the importance for standardization of eGFR calculation in a Sub-Saharan African setting, to further investigate the differences with regard to HIV status and to develop potential correction factors as established for age and sex. PMID:26083345
A-posteriori error estimation for the finite point method with applications to compressible flow
NASA Astrophysics Data System (ADS)
Ortega, Enrique; Flores, Roberto; Oñate, Eugenio; Idelsohn, Sergio
2017-08-01
An a-posteriori error estimate with application to inviscid compressible flow problems is presented. The estimate is a surrogate measure of the discretization error, obtained from an approximation to the truncation terms of the governing equations. This approximation is calculated from the discrete nodal differential residuals using a reconstructed solution field on a modified stencil of points. Both the error estimation methodology and the flow solution scheme are implemented using the Finite Point Method, a meshless technique enabling higher-order approximations and reconstruction procedures on general unstructured discretizations. The performance of the proposed error indicator is studied and applications to adaptive grid refinement are presented.
NASA Astrophysics Data System (ADS)
Meshgi, Ali; Schmitter, Petra; Babovic, Vladan; Chui, Ting Fong May
2014-11-01
Developing reliable methods to estimate stream baseflow has been a subject of interest due to its importance in catchment response and sustainable watershed management. However, to date, in the absence of complex numerical models, baseflow is most commonly estimated using statistically derived empirical approaches that do not directly incorporate physically-meaningful information. On the other hand, Artificial Intelligence (AI) tools such as Genetic Programming (GP) offer unique capabilities to reduce the complexities of hydrological systems without losing relevant physical information. This study presents a simple-to-use empirical equation to estimate baseflow time series using GP so that minimal data is required and physical information is preserved. A groundwater numerical model was first adopted to simulate baseflow for a small semi-urban catchment (0.043 km2) located in Singapore. GP was then used to derive an empirical equation relating baseflow time series to time series of groundwater table fluctuations, which are relatively easily measured and are physically related to baseflow generation. The equation was then generalized for approximating baseflow in other catchments and validated for a larger vegetation-dominated basin located in the US (24 km2). Overall, this study used GP to propose a simple-to-use equation to predict baseflow time series based on only three parameters: minimum daily baseflow of the entire period, area of the catchment and groundwater table fluctuations. It serves as an alternative approach for baseflow estimation in un-gauged systems when only groundwater table and soil information is available, and is thus complementary to other methods that require discharge measurements.
Optimal Estimation of Clock Values and Trends from Finite Data
NASA Technical Reports Server (NTRS)
Greenhall, Charles
2005-01-01
We show how to solve two problems of optimal linear estimation from a finite set of phase data. Clock noise is modeled as a stochastic process with stationary dth increments. The covariance properties of such a process are contained in the generalized autocovariance function (GACV). We set up two principles for optimal estimation: with the help of the GACV, these principles lead to a set of linear equations for the regression coefficients and some auxiliary parameters. The mean square errors of the estimators are easily calculated. The method can be used to check the results of other methods and to find good suboptimal estimators based on a small subset of the available data.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pettersson, Per, E-mail: per.pettersson@uib.no; Nordström, Jan, E-mail: jan.nordstrom@liu.se; Doostan, Alireza, E-mail: alireza.doostan@colorado.edu
2016-02-01
We present a well-posed stochastic Galerkin formulation of the incompressible Navier–Stokes equations with uncertainty in model parameters or the initial and boundary conditions. The stochastic Galerkin method involves representation of the solution through generalized polynomial chaos expansion and projection of the governing equations onto stochastic basis functions, resulting in an extended system of equations. A relatively low-order generalized polynomial chaos expansion is sufficient to capture the stochastic solution for the problem considered. We derive boundary conditions for the continuous form of the stochastic Galerkin formulation of the velocity and pressure equations. The resulting problem formulation leads to an energy estimate for the divergence. With suitable boundary data on the pressure and velocity, the energy estimate implies zero divergence of the velocity field. Based on the analysis of the continuous equations, we present a semi-discretized system where the spatial derivatives are approximated using finite difference operators with a summation-by-parts property. With a suitable choice of dissipative boundary conditions imposed weakly through penalty terms, the semi-discrete scheme is shown to be stable. Numerical experiments in the laminar flow regime corroborate the theoretical results and we obtain high-order accurate results for the solution variables and the velocity divergence converges to zero as the mesh is refined.
ERIC Educational Resources Information Center
Garnier-Dykstra, Laura M.; Caldeira, Kimberly M.; Vincent, Kathryn B.; O'Grady, Kevin E.; Arria, Amelia M.
2012-01-01
Objective: Examine trends in nonmedical use of prescription stimulants (NPS), including motives, routes of administration, sources, cost, and risk factors. Participants: 1,253 college students. Methods: Data were collected annually during academic years 2004-2005 through 2008-2009. Generalized estimating equations analyses evaluated longitudinal…
The Economic Consequences of Being Left-Handed: Some Sinister Results
ERIC Educational Resources Information Center
Denny, Kevin; O' Sullivan, Vincent
2007-01-01
This paper estimates the effects of handedness on earnings. Augmenting a conventional earnings equation with an indicator of left-handedness shows there is a positive effect on male earnings with manual workers enjoying a slightly larger premium. These results are inconsistent with the view that left-handers in general are handicapped either…
ERIC Educational Resources Information Center
Paalman, Carmen; van Domburgh, Lieke; Stevens, Gonneke; Vermeiren, Robert; van de Ven, Peter; Branje, Susan; Frijns, Tom; Meeus, Wim; Koot, Hans; van Lier, Pol; Jansen, Lucres; Doreleijers, Theo
2015-01-01
This longitudinal study explores differences between native Dutch and immigrant Moroccan adolescents in the relationship between internalizing and externalizing problems across time. By using generalized estimating equations (GEE), the strength and stability of associations between internalizing and externalizing problems in 159 Moroccan and 159…
Convergence Analysis of Triangular MAC Schemes for Two Dimensional Stokes Equations
Wang, Ming; Zhong, Lin
2015-01-01
In this paper, we consider the use of H(div) elements in the velocity–pressure formulation to discretize Stokes equations in two dimensions. We address the error estimate of the element pair RT0–P0, which is known to be suboptimal, and render the error estimate optimal by the symmetry of the grids and by the superconvergence result of the Lagrange interpolant. By enlarging RT0 such that it becomes a modified BDM-type element, we develop a new discretization BDM1b–P0. We, therefore, generalize the classical MAC scheme on rectangular grids to triangular grids and retain all the desirable properties of the MAC scheme: exact divergence-free, solver-friendly, and local conservation of physical quantities. Further, we prove that the proposed discretization BDM1b–P0 achieves the optimal convergence rate for both velocity and pressure on general quasi-uniform grids, and a one-and-a-half-order convergence rate for the vorticity and a recovered pressure. We demonstrate the validity of theories developed here by numerical experiments. PMID:26041948
Donato, David I.
2013-01-01
A specialized technique is used to compute weighted ordinary least-squares (OLS) estimates of the parameters of the National Descriptive Model of Mercury in Fish (NDMMF) in less time using less computer memory than general methods. The characteristics of the NDMMF allow the two products X'X and X'y in the normal equations to be filled out in a second or two of computer time during a single pass through the N data observations. As a result, the matrix X does not have to be stored in computer memory and the computationally expensive matrix multiplications generally required to produce X'X and X'y do not have to be carried out. The normal equations may then be solved to determine the best-fit parameters in the OLS sense. The computational solution based on this specialized technique requires O(8p^2 + 16p) bytes of computer memory for p parameters on a machine with 8-byte double-precision numbers. This publication includes a reference implementation of this technique and a Gaussian-elimination solver in preliminary custom software.
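The single-pass accumulation idea can be sketched as follows; the data, dimensions, and (unit) weights here are illustrative rather than the NDMMF implementation.

```python
# Sketch: accumulate X'X and X'y row by row so the full design matrix never has
# to be held in memory, then solve the normal equations. Memory use is O(p^2 + p).
import numpy as np

p = 4
XtX = np.zeros((p, p))
Xty = np.zeros(p)

rng = np.random.default_rng(3)
true_beta = np.array([1.0, -2.0, 0.5, 3.0])
for _ in range(50_000):                       # stream over N observations
    x_row = rng.normal(size=p)                # one observation's regressors
    y_i = x_row @ true_beta + rng.normal()
    XtX += np.outer(x_row, x_row)             # rank-one update of X'X (add a weight here for WLS)
    Xty += x_row * y_i

beta = np.linalg.solve(XtX, Xty)              # e.g. by Gaussian elimination
print("OLS estimates:", beta.round(3))
```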
Renal function assessment in patients with systemic lupus erythematosus.
Martínez-Martínez, Marco Ulises; Borjas-García, Jaime Antonio; Magaña-Aquino, Martín; Cuevas-Orta, Enrique; Llamazares-Azuara, Lilia; Abud-Mendoza, Carlos
2012-08-01
Few studies have evaluated the glomerular filtration rate (GFR) in patients with systemic lupus erythematosus (SLE). Even though the National Kidney Foundation (NKF) suggests using the equations to estimate GFR, rheumatologists continue using creatinine clearance (CCl). The main objective of our study was the assessment of different equations to estimate GFR in patients with SLE: Simplified MDRD study equation (sMDRD), CCl, Cockcroft Gault (CG), CG calculated with ideal weight (CGi), Mayo Clinic Quadratic (MCQ), and Chronic Kidney Disease Epidemiology Collaboration Equation (CKD-EPI). CKD-EPI was considered as the reference standard, and it was compared with the other equations to evaluate bias, correlation (r), sensitivity, specificity, positive predictive value (PPV), negative predictive value (NPV), percentage of measurement of GFR between 70-130% of GFR measured through CKD-EPI (P30) and to compute the ROC curves. Adequacy of the 24-h urine collection was evaluated. To classify patients into GFR < 60 ml/min/1.73 m2, the best sensitivity and NPV were obtained with sMDRD; the best PPV and specificity with MCQ. P30 was 99.3% with sMDRD, 77.5% CCl, 91.7% CG, 96.7% CGi, and 77.2% with MCQ. The lowest bias was for sMDRD and the highest for CCl. Only 159 (52.6%) urine collections were considered adequate, and when these patients were re-evaluated, the statistical results improved for CCl. CGi was better in general than CG. CCl should not be considered as an adequate GFR estimation. Ideal weight is better than real weight to calculate GFR through CG in patients with SLE.
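For orientation, two of the estimating equations compared above are sketched below in their commonly cited forms; the coefficients should be checked against the primary references before any clinical use, and the inputs shown are placeholders.

```python
# Hedged sketch of two GFR estimating equations (serum creatinine in mg/dL).
def cockcroft_gault(age, weight_kg, scr, female):
    """Creatinine clearance in mL/min (commonly cited Cockcroft-Gault form)."""
    crcl = (140 - age) * weight_kg / (72.0 * scr)
    return crcl * 0.85 if female else crcl

def mdrd_simplified(age, scr, female, black):
    """eGFR in mL/min/1.73 m^2 (commonly cited simplified MDRD form)."""
    egfr = 175.0 * scr ** -1.154 * age ** -0.203
    if female:
        egfr *= 0.742
    if black:
        egfr *= 1.212
    return egfr

print(round(cockcroft_gault(age=35, weight_kg=60, scr=0.9, female=True), 1))
print(round(mdrd_simplified(age=35, scr=0.9, female=True, black=False), 1))
```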
Gingerich, Stephen B.
2005-01-01
Flow-duration statistics under natural (undiverted) and diverted flow conditions were estimated for gaged and ungaged sites on 21 streams in northeast Maui, Hawaii. The estimates were made using the optimal combination of continuous-record gaging-station data, low-flow measurements, and values determined from regression equations developed as part of this study. Estimated 50- and 95-percent flow duration statistics for streams are presented and the analyses done to develop and evaluate the methods used in estimating the statistics are described. Estimated streamflow statistics are presented for sites where various amounts of streamflow data are available as well as for locations where no data are available. Daily mean flows were used to determine flow-duration statistics for continuous-record stream-gaging stations in the study area following U.S. Geological Survey established standard methods. Duration discharges of 50- and 95-percent were determined from total flow and base flow for each continuous-record station. The index-station method was used to adjust all of the streamflow records to a common, long-term period. The gaging station on West Wailuaiki Stream (16518000) was chosen as the index station because of its record length (1914-2003) and favorable geographic location. Adjustments based on the index-station method resulted in decreases to the 50-percent duration total flow, 50-percent duration base flow, 95-percent duration total flow, and 95-percent duration base flow computed on the basis of short-term records that averaged 7, 3, 4, and 1 percent, respectively. For the drainage basin of each continuous-record gaged site and selected ungaged sites, morphometric, geologic, soil, and rainfall characteristics were quantified using Geographic Information System techniques. Regression equations relating the non-diverted streamflow statistics to basin characteristics of the gaged basins were developed using ordinary-least-squares regression analyses. Rainfall rate, maximum basin elevation, and the elongation ratio of the basin were the basin characteristics used in the final regression equations for 50-percent duration total flow and base flow. Rainfall rate and maximum basin elevation were used in the final regression equations for the 95-percent duration total flow and base flow. The relative errors between observed and estimated flows ranged from 10 to 20 percent for the 50-percent duration total flow and base flow, and from 29 to 56 percent for the 95-percent duration total flow and base flow. The regression equations developed for this study were used to determine the 50-percent duration total flow, 50-percent duration base flow, 95-percent duration total flow, and 95-percent duration base flow at selected ungaged diverted and undiverted sites. Estimated streamflow, prediction intervals, and standard errors were determined for 48 ungaged sites in the study area and for three gaged sites west of the study area. Relative errors were determined for sites where measured values of 95-percent duration discharge of total flow were available. East of Keanae Valley, the 95-percent duration discharge equation generally underestimated flow, and within and west of Keanae Valley, the equation generally overestimated flow. Reduction in 50- and 95-percent flow-duration values in stream reaches affected by diversions throughout the study area average 58 to 60 percent.
The use of generalized estimating equations in the analysis of motor vehicle crash data.
Hutchings, Caroline B; Knight, Stacey; Reading, James C
2003-01-01
The purpose of this study was to determine if it is necessary to use generalized estimating equations (GEEs) in the analysis of seat belt effectiveness in preventing injuries in motor vehicle crashes. The 1992 Utah crash dataset was used, excluding crash participants where seat belt use was not appropriate (n=93,633). The model used in the 1996 Report to Congress [Report to congress on benefits of safety belts and motorcycle helmets, based on data from the Crash Outcome Data Evaluation System (CODES). National Center for Statistics and Analysis, NHTSA, Washington, DC, February 1996] was analyzed for all occupants with logistic regression, one level of nesting (occupants within crashes), and two levels of nesting (occupants within vehicles within crashes) to compare the use of GEEs with logistic regression. When using one level of nesting compared to logistic regression, 13 of 16 variance estimates changed more than 10%, and eight of 16 parameter estimates changed more than 10%. In addition, three of the independent variables changed from significant to insignificant (alpha=0.05). With the use of two levels of nesting, two of 16 variance estimates and three of 16 parameter estimates changed more than 10% from the variance and parameter estimates in one level of nesting. One of the independent variables changed from insignificant to significant (alpha=0.05) in the two levels of nesting model; therefore, only two of the independent variables changed from significant to insignificant when the logistic regression model was compared to the two levels of nesting model. The odds ratio of seat belt effectiveness in preventing injuries was 12% lower when a one-level nested model was used. Based on these results, we stress the need to use a nested model and GEEs when analyzing motor vehicle crash data.
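A hedged illustration of the one-level nesting described above, with synthetic crash data and hypothetical variable names: occupants are clustered within crashes and a logistic GEE with an exchangeable working correlation is fit.

```python
# Sketch: logistic GEE for injury outcome, clustering occupants within crashes.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(4)
n_crashes, occupants_per_crash = 2000, 2
crash_id = np.repeat(np.arange(n_crashes), occupants_per_crash)
belted = rng.integers(0, 2, size=crash_id.size)
crash_severity = rng.normal(size=n_crashes)[crash_id]        # shared within a crash
logit = -1.0 - 0.8 * belted + 1.2 * crash_severity
injured = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit)))

df = pd.DataFrame({"crash": crash_id, "belted": belted, "injured": injured})
fit = sm.GEE.from_formula(
    "injured ~ belted", groups="crash", data=df,
    family=sm.families.Binomial(), cov_struct=sm.cov_struct.Exchangeable(),
).fit()
print("odds ratio for seat belt use:", round(float(np.exp(fit.params["belted"])), 3))
```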
Fully anisotropic goal-oriented mesh adaptation for 3D steady Euler equations
NASA Astrophysics Data System (ADS)
Loseille, A.; Dervieux, A.; Alauzet, F.
2010-04-01
This paper studies the coupling between anisotropic mesh adaptation and goal-oriented error estimate. The former is very well suited to the control of the interpolation error. It is generally interpreted as a local geometric error estimate. On the contrary, the latter is preferred when studying approximation errors for PDEs. It generally involves non-local error contributions. Consequently, a full and strong coupling between both is hard to achieve due to this apparent incompatibility. This paper shows how to achieve this coupling in three steps. First, a new a priori error estimate is proved in a formal framework adapted to goal-oriented mesh adaptation for output functionals. This estimate is based on a careful analysis of the contributions of the implicit error and of the interpolation error. Second, the error estimate is applied to the set of steady compressible Euler equations which are solved by a stabilized Galerkin finite element discretization. A goal-oriented error estimation is derived. It involves the interpolation error of the Euler fluxes weighted by the gradient of the adjoint state associated with the observed functional. Third, rewritten in the continuous mesh framework, the previous estimate is minimized on the set of continuous meshes thanks to a calculus of variations. The optimal continuous mesh is then derived analytically. Thus, it can be used as a metric tensor field to drive the mesh adaptation. From a numerical point of view, this method is completely automatic, intrinsically anisotropic, and does not depend on any a priori choice of variables to perform the adaptation. 3D examples of steady flows around supersonic and transonic jets are presented to validate the current approach and to demonstrate its efficiency.
Time-lapse joint AVO inversion using generalized linear method based on exact Zoeppritz equations
NASA Astrophysics Data System (ADS)
Zhi, Longxiao; Gu, Hanming
2018-03-01
The conventional method of time-lapse AVO (Amplitude Versus Offset) inversion is mainly based on the approximate expression of Zoeppritz equations. Though the approximate expression is concise and convenient to use, it has certain limitations. For example, its application condition is that the difference of elastic parameters between the upper medium and lower medium is little and the incident angle is small. In addition, the inversion of density is not stable. Therefore, we develop the method of time-lapse joint AVO inversion based on exact Zoeppritz equations. In this method, we apply exact Zoeppritz equations to calculate the reflection coefficient of PP wave. And in the construction of objective function for inversion, we use Taylor series expansion to linearize the inversion problem. Through the joint AVO inversion of seismic data in baseline survey and monitor survey, we can obtain the P-wave velocity, S-wave velocity, density in baseline survey and their time-lapse changes simultaneously. We can also estimate the oil saturation change according to inversion results. Compared with the time-lapse difference inversion, the joint inversion doesn't need certain assumptions and can estimate more parameters simultaneously. It has a better applicability. Meanwhile, by using the generalized linear method, the inversion is easily implemented and its calculation cost is small. We use the theoretical model to generate synthetic seismic records to test and analyze the influence of random noise. The results can prove the availability and anti-noise-interference ability of our method. We also apply the inversion to actual field data and prove the feasibility of our method in actual situation.
Dynamically consistent hydrography and absolute velocity in the eastern North Atlantic Ocean
NASA Technical Reports Server (NTRS)
Wunsch, Carl
1994-01-01
The problem of mapping a dynamically consistent hydrographic field and associated absolute geostrophic flow in the eastern North Atlantic between 24 deg and 36 deg N is related directly to the solution of the so-called thermocline equations. A nonlinear optimization problem involving Needler's P equation is solved to find the hydrography and resulting flow that minimizes the vertical mixing above about 1500 m in the ocean and is simultaneously consistent with the observations. A sharp minimum (at least in some dimensions) is found, apparently corresponding to a solution nearly conserving potential vorticity and with vertical eddy coefficient less than about 10(exp -5) sq m/s. Estimates of `residual' quantities such as eddy coefficients are extremely sensitive to slight modifications to the observed fields. Boundary conditions, vertical velocities, etc., are a product of the optimization and produce estimates differing quantitatively from prior ones relying directly upon observed hydrography. The results are generally insensitive to particular elements of the solution methodology, but many questions remain concerning the extent to which different synoptic sections can be asserted to represent the same ocean. The method can be regarded as a practical generalization of the beta spiral and geostrophic balance inverses for the estimate of absolute geostrophic flows. Numerous improvements to the methodology used in this preliminary attempt are possible.
Guidelines for a graph-theoretic implementation of structural equation modeling
Grace, James B.; Schoolmaster, Donald R.; Guntenspergen, Glenn R.; Little, Amanda M.; Mitchell, Brian R.; Miller, Kathryn M.; Schweiger, E. William
2012-01-01
Structural equation modeling (SEM) is increasingly being chosen by researchers as a framework for gaining scientific insights from the quantitative analyses of data. New ideas and methods emerging from the study of causality, influences from the field of graphical modeling, and advances in statistics are expanding the rigor, capability, and even purpose of SEM. Guidelines for implementing the expanded capabilities of SEM are currently lacking. In this paper we describe new developments in SEM that we believe constitute a third-generation of the methodology. Most characteristic of this new approach is the generalization of the structural equation model as a causal graph. In this generalization, analyses are based on graph theoretic principles rather than analyses of matrices. Also, new devices such as metamodels and causal diagrams, as well as an increased emphasis on queries and probabilistic reasoning, are now included. Estimation under a graph theory framework permits the use of Bayesian or likelihood methods. The guidelines presented start from a declaration of the goals of the analysis. We then discuss how theory frames the modeling process, requirements for causal interpretation, model specification choices, selection of estimation method, model evaluation options, and use of queries, both to summarize retrospective results and for prospective analyses. The illustrative example presented involves monitoring data from wetlands on Mount Desert Island, home of Acadia National Park. Our presentation walks through the decision process involved in developing and evaluating models, as well as drawing inferences from the resulting prediction equations. In addition to evaluating hypotheses about the connections between human activities and biotic responses, we illustrate how the structural equation (SE) model can be queried to understand how interventions might take advantage of an environmental threshold to limit Typha invasions. The guidelines presented provide for an updated definition of the SEM process that subsumes the historical matrix approach under a graph-theory implementation. The implementation is also designed to permit complex specifications and to be compatible with various estimation methods. Finally, they are meant to foster the use of probabilistic reasoning in both retrospective and prospective considerations of the quantitative implications of the results.
Weather adjustment using seemingly unrelated regression
DOE Office of Scientific and Technical Information (OSTI.GOV)
Noll, T.A.
1995-05-01
Seemingly unrelated regression (SUR) is a system estimation technique that accounts for time-contemporaneous correlation between individual equations within a system of equations. SUR is suited to weather adjustment estimations when the estimation is: (1) composed of a system of equations and (2) the system of equations represents either different weather stations, different sales sectors or a combination of different weather stations and different sales sectors. SUR utilizes the cross-equation error values to develop more accurate estimates of the system coefficients than are obtained using ordinary least-squares (OLS) estimation. SUR estimates can be generated using a variety of statistical software packages, including MicroTSP and SAS.
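A minimal two-equation SUR sketch follows, with synthetic data and hypothetical sector names: equation-by-equation OLS first, then the cross-equation error covariance, then feasible GLS on the stacked system. It is not tied to any particular utility's weather-adjustment model.

```python
# Sketch: seemingly unrelated regression via feasible GLS on a stacked system.
import numpy as np
from scipy.linalg import block_diag

rng = np.random.default_rng(5)
n = 200
hdd = rng.gamma(5, 20, size=n)                        # heating degree days
cdd = rng.gamma(3, 15, size=n)                        # cooling degree days
err = rng.multivariate_normal([0, 0], [[4.0, 2.5], [2.5, 3.0]], size=n)
sales1 = 50 + 0.30 * hdd + err[:, 0]                  # e.g. station/sector 1
sales2 = 40 + 0.25 * cdd + err[:, 1]                  # e.g. station/sector 2

X1 = np.column_stack([np.ones(n), hdd])
X2 = np.column_stack([np.ones(n), cdd])

# Step 1: per-equation OLS residuals give the cross-equation covariance.
b1 = np.linalg.lstsq(X1, sales1, rcond=None)[0]
b2 = np.linalg.lstsq(X2, sales2, rcond=None)[0]
R = np.column_stack([sales1 - X1 @ b1, sales2 - X2 @ b2])
Sigma = R.T @ R / n

# Step 2: feasible GLS on the stacked system (error covariance Sigma kron I).
X = block_diag(X1, X2)
y = np.concatenate([sales1, sales2])
Omega_inv = np.kron(np.linalg.inv(Sigma), np.eye(n))
beta_sur = np.linalg.solve(X.T @ Omega_inv @ X, X.T @ Omega_inv @ y)
print("SUR coefficients [b0_1, b1_1, b0_2, b1_2]:", beta_sur.round(3))
```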
A unified Fourier theory for time-of-flight PET data
Li, Yusheng; Matej, Samuel; Metzler, Scott D
2016-01-01
Fully 3D time-of-flight (TOF) PET scanners offer the potential of previously unachievable image quality in clinical PET imaging. TOF measurements add another degree of redundancy for cylindrical PET scanners and make photon-limited TOF-PET imaging more robust than non-TOF PET imaging. The data space for 3D TOF-PET data is five-dimensional with two degrees of redundancy. Previously, consistency equations were used to characterize the redundancy of TOF-PET data. In this paper, we first derive two Fourier consistency equations and Fourier-John equation for 3D TOF PET based on the generalized projection-slice theorem; the three partial differential equations (PDEs) are the dual of the sinogram consistency equations and John's equation. We then solve the three PDEs using the method of characteristics. The two degrees of entangled redundancy of the TOF-PET data can be explicitly elicited and exploited by the solutions of the PDEs along the characteristic curves, which gives a complete understanding of the rich structure of the 3D X-ray transform with TOF measurement. Fourier rebinning equations and other mapping equations among different types of PET data are special cases of the general solutions. We also obtain new Fourier rebinning and consistency equations (FORCEs) from other special cases of the general solutions, and thus we obtain a complete scheme to convert among different types of PET data: 3D TOF, 3D non-TOF, 2D TOF and 2D non-TOF data. The new FORCEs can be used as new Fourier-based rebinning algorithms for TOF-PET data reduction, inverse rebinnings for designing fast projectors, or consistency conditions for estimating missing data. Further, we give a geometric interpretation of the general solutions—the two families of characteristic curves can be obtained by respectively changing the azimuthal and co-polar angles of the biorthogonal coordinates in Fourier space. We conclude the unified Fourier theory by showing that the Fourier consistency equations are necessary and sufficient for 3D X-ray transform with TOF measurement. Finally, we give numerical examples of inverse rebinning for a 3D TOF PET and Fourier-based rebinning for a 2D TOF PET using the FORCEs to show the efficacy of the unified Fourier solutions. PMID:26689836
Techniques for estimating flood-peak discharges of rural, unregulated streams in Ohio
Koltun, G.F.; Roberts, J.W.
1990-01-01
Multiple-regression equations are presented for estimating flood-peak discharges having recurrence intervals of 2, 5, 10, 25, 50, and 100 years at ungaged sites on rural, unregulated streams in Ohio. The average standard errors of prediction for the equations range from 33.4% to 41.4%. Peak discharge estimates determined by log-Pearson Type III analysis using data collected through the 1987 water year are reported for 275 streamflow-gaging stations. Ordinary least-squares multiple-regression techniques were used to divide the State into three regions and to identify a set of basin characteristics that help explain station-to-station variation in the log-Pearson estimates. Contributing drainage area, main-channel slope, and storage area were identified as suitable explanatory variables. Generalized least-square procedures, which include historical flow data and account for differences in the variance of flows at different gaging stations, spatial correlation among gaging station records, and variable lengths of station record were used to estimate the regression parameters. Weighted peak-discharge estimates computed as a function of the log-Pearson Type III and regression estimates are reported for each station. A method is provided to adjust regression estimates for ungaged sites by use of weighted and regression estimates for a gaged site located on the same stream. Limitations and shortcomings cited in an earlier report on the magnitude and frequency of floods in Ohio are addressed in this study. Geographic bias is no longer evident for the Maumee River basin of northwestern Ohio. No bias is found to be associated with the forested-area characteristic for the range used in the regression analysis (0.0 to 99.0%), nor is this characteristic significant in explaining peak discharges. Surface-mined area likewise is not significant in explaining peak discharges, and the regression equations are not biased when applied to basins having approximately 30% or less surface-mined area. Analyses of residuals indicate that the equations tend to overestimate flood-peak discharges for basins having approximately 30% or more surface-mined area. (USGS)
Characteristics of the April 2007 Flood at 10 Streamflow-Gaging Stations in Massachusetts
Zarriello, Phillip J.; Carlson, Carl S.
2009-01-01
A large 'nor'easter' storm on April 15-18, 2007, brought heavy rains to the southern New England region that, coupled with normal seasonal high flows and associated wet soil-moisture conditions, caused extensive flooding in many parts of Massachusetts and neighboring states. To characterize the magnitude of the April 2007 flood, a peak-flow frequency analysis was undertaken at 10 selected streamflow-gaging stations in Massachusetts to determine the magnitude of flood flows at 5-, 10-, 25-, 50-, 100-, 200-, and 500-year return intervals. The magnitude of flood flows at various return intervals was determined from the logarithms of the annual peaks fit to a Pearson Type III probability distribution. Analysis included augmenting the station record with longer-term records from one or more nearby stations to provide a common period of comparison that includes notable floods in 1936, 1938, and 1955. The April 2007 peak flow was among the highest recorded or estimated since 1936, often ranking between the 3rd and 5th highest peak for that period. In general, the peak-flow frequency analysis indicates the April 2007 peak flow has an estimated return interval between 25 and 50 years; at stations in the northeastern and central areas of the state, the storm was less severe resulting in flows with return intervals of about 5 and 10 years, respectively. At Merrimack River at Lowell, the April 2007 peak flow approached a 100-year return interval that was computed from post-flood control records and the 1936 and 1938 peak flows adjusted for flood control. In general, the magnitude of flood flow for a given return interval computed from the streamflow-gaging station period-of-record was greater than those used to calculate flood profiles in various community flood-insurance studies. In addition, the magnitude of the updated flood flow and current (2008) stage-discharge relation at a given streamflow-gaging station often produced a flood stage that was considerably different than the flood stage indicated in the flood-insurance study flood profile at that station. Equations for estimating the flow magnitudes for 5-, 10-, 25-, 50-, 100-, 200-, and 500-year floods were developed from the relation of the magnitude of flood flows to drainage area calculated from the six streamflow-gaging stations with the longest unaltered record. These equations produced a more conservative estimate of flood flows (higher discharges) than the existing regional equations for estimating flood flows at ungaged rivers in Massachusetts. Large differences in the magnitude of flood flows for various return intervals determined in this study compared to results from existing regional equations and flood insurance studies indicate a need for updating regional analyses and equations for estimating the expected magnitude of flood flows in Massachusetts.
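A hedged sketch of the core frequency calculation above, using a synthetic annual-peak record and a simplified method-of-moments fit rather than the full Bulletin 17 procedure:

```python
# Sketch: quantiles of a Pearson Type III distribution fitted to the logarithms
# of annual peak flows (log-Pearson Type III flood frequency, simplified).
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)
annual_peaks_cfs = rng.lognormal(mean=8.0, sigma=0.5, size=70)   # hypothetical record

logq = np.log10(annual_peaks_cfs)
mu, sigma, skew = logq.mean(), logq.std(ddof=1), stats.skew(logq, bias=False)

for T in (2, 10, 50, 100, 500):
    q = stats.pearson3.ppf(1 - 1.0 / T, skew, loc=mu, scale=sigma)
    print(f"{T:>3}-year flood: {10**q:,.0f} cfs")
```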
An Application of the H-Function to Curve-Fitting and Density Estimation.
1983-12-01
equations into a model that is linear in its coefficients. Nonlinear least squares estimation is a relatively new area developed to accommodate models which...to converge on a solution (10:9-10). For the simple linear model and when general assumptions are made, the Gauss-Markov theorem states that the...distribution. For example, if the analyst wants to model the time between arrivals to a queue for a computer simulation, he infers the true probability
Ryberg, Karen R.
2006-01-01
This report presents the results of a study by the U.S. Geological Survey, done in cooperation with the Bureau of Reclamation, U.S. Department of the Interior, to estimate water-quality constituent concentrations in the Red River of the North at Fargo, North Dakota. Regression analysis of water-quality data collected in 2003-05 was used to estimate concentrations and loads for alkalinity, dissolved solids, sulfate, chloride, total nitrite plus nitrate, total nitrogen, total phosphorus, and suspended sediment. The explanatory variables examined for regression relation were continuously monitored physical properties of water: streamflow, specific conductance, pH, water temperature, turbidity, and dissolved oxygen. For the conditions observed in 2003-05, streamflow was a significant explanatory variable for all estimated constituents except dissolved solids. pH, water temperature, and dissolved oxygen were not statistically significant explanatory variables for any of the constituents in this study. Specific conductance was a significant explanatory variable for alkalinity, dissolved solids, sulfate, and chloride. Turbidity was a significant explanatory variable for total phosphorus and suspended sediment. For the nutrients, total nitrite plus nitrate, total nitrogen, and total phosphorus, cosine and sine functions of time also were used to explain the seasonality in constituent concentrations. The regression equations were evaluated using common measures of variability, including R2, or the proportion of variability in the estimated constituent explained by the regression equation. R2 values ranged from 0.703 for total nitrogen concentration to 0.990 for dissolved-solids concentration. The regression equations also were evaluated by calculating the median relative percentage difference (RPD) between measured constituent concentration and the constituent concentration estimated by the regression equations. Median RPDs ranged from 1.1 for dissolved solids to 35.2 for total nitrite plus nitrate. Regression equations also were used to estimate daily constituent loads. Load estimates can be used by water-quality managers for comparison of current water-quality conditions to water-quality standards expressed as total maximum daily loads (TMDLs). TMDLs are a measure of the maximum amount of chemical constituents that a water body can receive and still meet established water-quality standards. The peak loads generally occurred in June and July when streamflow also peaked.
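The general form of such a regression, log concentration on log streamflow plus seasonal sine and cosine terms, can be sketched as follows; the record, coefficients, and units are synthetic placeholders, not the Red River data.

```python
# Sketch: ln(concentration) ~ ln(flow) + sin(2*pi*t) + cos(2*pi*t), fit by OLS,
# then daily load as concentration times streamflow (bias correction omitted).
import numpy as np

rng = np.random.default_rng(7)
t = np.arange(3 * 365) / 365.25                       # decimal years
flow = 500 + 400 * np.sin(2 * np.pi * (t - 0.25)) + rng.gamma(2, 50, size=t.size)
ln_conc = (0.5 + 0.4 * np.log(flow) + 0.3 * np.sin(2 * np.pi * t)
           + 0.2 * np.cos(2 * np.pi * t) + rng.normal(scale=0.2, size=t.size))

X = np.column_stack([np.ones_like(t), np.log(flow),
                     np.sin(2 * np.pi * t), np.cos(2 * np.pi * t)])
coef, *_ = np.linalg.lstsq(X, ln_conc, rcond=None)
conc_est = np.exp(X @ coef)                           # estimated concentration
daily_load = conc_est * flow                          # concentration x streamflow
print("fitted coefficients:", coef.round(3))
```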
NASA Technical Reports Server (NTRS)
Yan, Jue; Shu, Chi-Wang; Bushnell, Dennis M. (Technical Monitor)
2002-01-01
In this paper we review the existing and develop new discontinuous Galerkin methods for solving time dependent partial differential equations with higher order derivatives in one and multiple space dimensions. We review local discontinuous Galerkin methods for convection diffusion equations involving second derivatives and for KdV type equations involving third derivatives. We then develop new local discontinuous Galerkin methods for the time dependent bi-harmonic type equations involving fourth derivatives, and partial differential equations involving fifth derivatives. For these new methods we present correct interface numerical fluxes and prove L(exp 2) stability for general nonlinear problems. Preliminary numerical examples are shown to illustrate these methods. Finally, we present new results on a post-processing technique, originally designed for methods with good negative-order error estimates, on the local discontinuous Galerkin methods applied to equations with higher derivatives. Numerical experiments show that this technique works as well for the new higher derivative cases, in effectively doubling the rate of convergence with negligible additional computational cost, for linear as well as some nonlinear problems, with a local uniform mesh.
Estimating Soil Hydraulic Parameters using Gradient Based Approach
NASA Astrophysics Data System (ADS)
Rai, P. K.; Tripathi, S.
2017-12-01
The conventional way of estimating parameters of a differential equation is to minimize the error between the observations and their estimates. The estimates are produced from forward solution (numerical or analytical) of differential equation assuming a set of parameters. Parameter estimation using the conventional approach requires high computational cost, setting-up of initial and boundary conditions, and formation of difference equations in case the forward solution is obtained numerically. Gaussian process based approaches like Gaussian Process Ordinary Differential Equation (GPODE) and Adaptive Gradient Matching (AGM) have been developed to estimate the parameters of Ordinary Differential Equations without explicitly solving them. Claims have been made that these approaches can straightforwardly be extended to Partial Differential Equations; however, it has been never demonstrated. This study extends AGM approach to PDEs and applies it for estimating parameters of Richards equation. Unlike the conventional approach, the AGM approach does not require setting-up of initial and boundary conditions explicitly, which is often difficult in real world application of Richards equation. The developed methodology was applied to synthetic soil moisture data. It was seen that the proposed methodology can estimate the soil hydraulic parameters correctly and can be a potential alternative to the conventional method.
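As a hedged illustration of the conventional route described above, minimizing the error between observations and model estimates, the sketch below fits van Genuchten retention-curve parameters to synthetic data by nonlinear least squares. This is not the AGM approach of the abstract; the parameter names are the standard van Genuchten symbols and the observations are placeholders.

```python
# Sketch: conventional least-squares estimation of soil hydraulic (retention)
# parameters from (suction head, water content) observations.
import numpy as np
from scipy.optimize import curve_fit

def van_genuchten(h, theta_r, theta_s, alpha, n):
    """Water content as a function of suction head h (positive, in cm)."""
    m = 1.0 - 1.0 / n
    return theta_r + (theta_s - theta_r) / (1.0 + (alpha * h) ** n) ** m

h_obs = np.array([1, 5, 10, 30, 100, 300, 1000, 5000], dtype=float)
true_params = (0.05, 0.40, 0.02, 1.8)
theta_obs = van_genuchten(h_obs, *true_params) \
    + np.random.default_rng(8).normal(0, 0.005, h_obs.size)

popt, _ = curve_fit(van_genuchten, h_obs, theta_obs,
                    p0=(0.03, 0.35, 0.01, 1.5),
                    bounds=([0, 0.2, 1e-4, 1.01], [0.2, 0.6, 1.0, 4.0]))
print("theta_r, theta_s, alpha, n =", np.round(popt, 3))
```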
DOE Office of Scientific and Technical Information (OSTI.GOV)
Torquato, S.; Kim, I.C.; Cule, D.
1999-02-01
We generalize the Brownian motion simulation method of Kim and Torquato [J. Appl. Phys. 68, 3892 (1990)] to compute the effective conductivity, dielectric constant and diffusion coefficient of digitized composite media. This is accomplished by first generalizing the first-passage-time equations to treat first-passage regions of arbitrary shape. We then develop the appropriate first-passage-time equations for digitized media: first-passage squares in two dimensions and first-passage cubes in three dimensions. A severe test case to prove the accuracy of the method is the two-phase periodic checkerboard in which conduction, for sufficiently large phase contrasts, is dominated by corners that join two conducting-phase pixels. Conventional numerical techniques (such as finite differences or elements) do not accurately capture the local fields here for reasonable grid resolution and hence lead to inaccurate estimates of the effective conductivity. By contrast, we show that our algorithm yields accurate estimates of the effective conductivity of the periodic checkerboard for widely different phase conductivities. Finally, we illustrate our method by computing the effective conductivity of the random checkerboard for a wide range of volume fractions and several phase contrast ratios. These results always lie within rigorous four-point bounds on the effective conductivity. © 1999 American Institute of Physics.
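A simple first-passage sketch in the same spirit, though for a plain Laplace problem rather than the digitized-composite algorithm above: walk-on-spheres estimates a harmonic function at an interior point of the unit disk from its boundary data (here g(theta) = cos(theta), whose exact solution is u(x, y) = x).

```python
# Sketch: walk-on-spheres first-passage estimate of a harmonic function value.
import numpy as np

rng = np.random.default_rng(9)

def walk_on_spheres(x0, y0, eps=1e-4, n_walks=10_000):
    total = 0.0
    for _ in range(n_walks):
        x, y = x0, y0
        while True:
            r_to_boundary = 1.0 - np.hypot(x, y)     # distance to unit circle
            if r_to_boundary < eps:
                theta = np.arctan2(y, x)             # nearest boundary point
                total += np.cos(theta)               # boundary data g(theta)
                break
            phi = rng.uniform(0, 2 * np.pi)          # jump to the first-passage circle
            x += r_to_boundary * np.cos(phi)
            y += r_to_boundary * np.sin(phi)
    return total / n_walks

print(walk_on_spheres(0.5, 0.0))   # should be close to the exact value 0.5
```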
2013-01-01
Background Indirect herd effect from vaccination of children offers potential for improving the effectiveness of influenza prevention in the remaining unvaccinated population. Static models used in cost-effectiveness analyses cannot dynamically capture herd effects. The objective of this study was to develop a methodology to allow herd effect associated with vaccinating children against seasonal influenza to be incorporated into static models evaluating the cost-effectiveness of influenza vaccination. Methods Two previously published linear equations for approximation of herd effects in general were compared with the results of a structured literature review undertaken using PubMed searches to identify data on herd effects specific to influenza vaccination. A linear function was fitted to point estimates from the literature using the sum of squared residuals. Results The literature review identified 21 publications on 20 studies for inclusion. Six studies provided data on a mathematical relationship between effective vaccine coverage in subgroups and reduction of influenza infection in a larger unvaccinated population. These supported a linear relationship when effective vaccine coverage in a subgroup population was between 20% and 80%. Three studies evaluating herd effect at a community level, specifically induced by vaccinating children, provided point estimates for fitting linear equations. The fitted linear equation for herd protection in the target population for vaccination (children) was slightly less conservative than a previously published equation for herd effects in general. The fitted linear equation for herd protection in the non-target population was considerably less conservative than the previously published equation. Conclusions This method of approximating herd effect requires simple adjustments to the annual baseline risk of influenza in static models: (1) for the age group targeted by the childhood vaccination strategy (i.e. children); and (2) for other age groups not targeted (e.g. adults and/or elderly). Two approximations provide a linear relationship between effective coverage and reduction in the risk of infection. The first is a conservative approximation, recommended as a base-case for cost-effectiveness evaluations. The second, fitted to data extracted from a structured literature review, provides a less conservative estimate of herd effect, recommended for sensitivity analyses. PMID:23339290
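The fitting step described above reduces to a straight line through published point estimates, chosen to minimize the sum of squared residuals. The sketch below uses placeholder numbers, not the review's extracted data.

```python
# Sketch: fit reduction-in-risk = intercept + slope * effective coverage by
# ordinary least squares, then predict at a new coverage level.
import numpy as np

coverage = np.array([0.20, 0.35, 0.50, 0.65, 0.80])        # hypothetical point estimates
risk_reduction = np.array([0.10, 0.22, 0.33, 0.45, 0.55])  # hypothetical point estimates

slope, intercept = np.polyfit(coverage, risk_reduction, deg=1)
predict = lambda c: float(np.clip(intercept + slope * c, 0.0, 1.0))
print(f"reduction = {intercept:.3f} + {slope:.3f} * coverage")
print("predicted reduction at 60% effective coverage:", round(predict(0.6), 3))
```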
NASA Astrophysics Data System (ADS)
Mehdinejadiani, Behrouz
2017-08-01
This study represents the first attempt to estimate the solute transport parameters of the spatial fractional advection-dispersion equation using the Bees Algorithm. The numerical studies as well as the experimental studies were performed to certify the integrity of the Bees Algorithm. The experimental ones were conducted in a sandbox for homogeneous and heterogeneous soils. A detailed comparative study was carried out between the results obtained from the Bees Algorithm and those from the Genetic Algorithm and LSQNONLIN routines in the FracFit toolbox. The results indicated that, in general, the Bees Algorithm much more accurately appraised the sFADE parameters in comparison with the Genetic Algorithm and LSQNONLIN, especially in the heterogeneous soil and for α values near 1 in the numerical study. Also, the results obtained from the Bees Algorithm were more reliable than those from the Genetic Algorithm. The Bees Algorithm showed relatively similar performance for all cases, while the Genetic Algorithm and the LSQNONLIN yielded different performances for various cases. The performance of LSQNONLIN strongly depends on the initial guess values so that, compared to the Genetic Algorithm, it can more accurately estimate the sFADE parameters by taking into consideration the suitable initial guess values. To sum up, the Bees Algorithm was found to be a very simple, robust, and accurate approach to estimate the transport parameters of the spatial fractional advection-dispersion equation.
Gibbs Sampler-Based λ-Dynamics and Rao-Blackwell Estimator for Alchemical Free Energy Calculation.
Ding, Xinqiang; Vilseck, Jonah Z; Hayes, Ryan L; Brooks, Charles L
2017-06-13
λ-dynamics is a generalized ensemble method for alchemical free energy calculations. In traditional λ-dynamics, the alchemical switch variable λ is treated as a continuous variable ranging from 0 to 1 and an empirical estimator is utilized to approximate the free energy. In the present article, we describe an alternative formulation of λ-dynamics that utilizes the Gibbs sampler framework, which we call Gibbs sampler-based λ-dynamics (GSLD). GSLD, like traditional λ-dynamics, can be readily extended to calculate free energy differences between multiple ligands in one simulation. We also introduce a new free energy estimator, the Rao-Blackwell estimator (RBE), for use in conjunction with GSLD. Compared with the current empirical estimator, the RBE has the advantage of being unbiased, and its variance is usually smaller. We also show that the multistate Bennett acceptance ratio equation or the unbinned weighted histogram analysis method equation can be derived using the RBE. We illustrate the use and performance of this new free energy computational framework by application to a simple harmonic system as well as relevant calculations of small molecule relative free energies of solvation and binding to a protein receptor. Our findings demonstrate consistent and improved performance compared with conventional alchemical free energy methods.
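The Rao-Blackwell idea itself, conditioning an unbiased estimator on a richer statistic to reduce its variance, can be demonstrated on a toy problem. The sketch below is not the paper's RBE for λ-dynamics; it estimates P(X = 0) for Poisson data, comparing a naive indicator estimator with its Rao-Blackwellized version E[1{X1=0} | ΣX] = (1 − 1/n)^ΣX.

```python
import numpy as np

rng = np.random.default_rng(42)
lam, n, n_rep = 2.0, 10, 20000

naive = np.empty(n_rep)
rao_blackwell = np.empty(n_rep)
for r in range(n_rep):
    x = rng.poisson(lam, size=n)
    # Naive unbiased estimator of P(X = 0) = exp(-lam): indicator of the first draw.
    naive[r] = float(x[0] == 0)
    # Rao-Blackwellized estimator: E[1{X1 = 0} | sum(X)] = (1 - 1/n) ** sum(X),
    # since X1 | sum(X) = s is Binomial(s, 1/n).
    rao_blackwell[r] = (1.0 - 1.0 / n) ** x.sum()

print("true value       :", np.exp(-lam))
print("naive     mean/var:", naive.mean(), naive.var())
print("Rao-Blackwell m/v :", rao_blackwell.mean(), rao_blackwell.var())
```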
Virial Coefficients and Equations of State for Hard Polyhedron Fluids.
Irrgang, M Eric; Engel, Michael; Schultz, Andrew J; Kofke, David A; Glotzer, Sharon C
2017-10-24
Hard polyhedra are a natural extension of the hard sphere model for simple fluids, but there is no general scheme for predicting the effect of shape on thermodynamic properties, even in moderate-density fluids. Only the second virial coefficient is known analytically for general convex shapes, so higher-order equations of state have been elusive. Here we investigate high-precision state functions in the fluid phase of 14 representative polyhedra with different assembly behaviors. We discuss historic efforts in analytically approximating virial coefficients up to B4 and numerically evaluating them to B8. Using virial coefficients as inputs, we show the convergence properties for four equations of state for hard convex bodies. In particular, the exponential approximant of Barlow et al. (J. Chem. Phys. 2012, 137, 204102) is found to be useful up to the first ordering transition for most polyhedra. The convergence behavior we explore can guide choices in expending additional resources for improved estimates. Fluids of arbitrary hard convex bodies are too complicated to be described in a general way at high densities, so the high-precision state data we provide can serve as a reference for future work in calculating state data or as a basis for thermodynamic integration.
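For orientation, a truncated virial equation of state Z = 1 + Σ_n B_n ρ^(n−1) can be evaluated directly once the coefficients are known. The sketch below uses the exact hard-sphere values B2 = 2πσ³/3 and B3 = (5/8)B2² purely as an example; it does not reproduce the polyhedron coefficients or the exponential approximant discussed in the paper.

```python
import numpy as np

sigma = 1.0                        # hard-particle diameter (reduced units)
B2 = 2.0 * np.pi * sigma**3 / 3.0  # exact second virial coefficient for hard spheres
B3 = 0.625 * B2**2                 # exact hard-sphere third virial coefficient, B3/B2^2 = 5/8

def compressibility_factor(rho, virial_coeffs):
    """Z = P/(rho*kT) from a truncated virial series Z = 1 + sum_n B_n rho^(n-1)."""
    Z = np.ones_like(rho)
    for n, Bn in enumerate(virial_coeffs, start=2):
        Z += Bn * rho ** (n - 1)
    return Z

rho = np.linspace(0.0, 0.5, 6)   # number densities in the moderate-density fluid range
print(compressibility_factor(rho, [B2, B3]))
```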
SUPERPOSITION OF POLYTROPES IN THE INNER HELIOSHEATH
DOE Office of Scientific and Technical Information (OSTI.GOV)
Livadiotis, G., E-mail: glivadiotis@swri.edu
2016-03-15
This paper presents a possible generalization of the equation of state and Bernoulli's integral when a superposition of polytropic processes applies in space and astrophysical plasmas. The theory of polytropic thermodynamic processes for a fixed polytropic index is extended for a superposition of polytropic indices. In general, the superposition may be described by any distribution of polytropic indices, but emphasis is placed on a Gaussian distribution. The polytropic density–temperature relation has been used in numerous analyses of space plasma data. This linear relation on a log–log scale is now generalized to a concave-downward parabola that is able to describe the observations better. The model of the Gaussian superposition of polytropes is successfully applied in the proton plasma of the inner heliosheath. The estimated mean polytropic index is near zero, indicating the dominance of isobaric thermodynamic processes in the sheath, similar to other previously published analyses. By computing Bernoulli's integral and applying its conservation along the equator of the inner heliosheath, the magnetic field in the inner heliosheath is estimated, B ∼ 2.29 ± 0.16 μG. The constructed normalized histogram of the values of the magnetic field is similar to that derived from a different method that uses the concept of large-scale quantization, bringing incredible insights to this novel theory.
Superposition of Polytropes in the Inner Heliosheath
NASA Astrophysics Data System (ADS)
Livadiotis, G.
2016-03-01
This paper presents a possible generalization of the equation of state and Bernoulli's integral when a superposition of polytropic processes applies in space and astrophysical plasmas. The theory of polytropic thermodynamic processes for a fixed polytropic index is extended for a superposition of polytropic indices. In general, the superposition may be described by any distribution of polytropic indices, but emphasis is placed on a Gaussian distribution. The polytropic density-temperature relation has been used in numerous analyses of space plasma data. This linear relation on a log-log scale is now generalized to a concave-downward parabola that is able to describe the observations better. The model of the Gaussian superposition of polytropes is successfully applied in the proton plasma of the inner heliosheath. The estimated mean polytropic index is near zero, indicating the dominance of isobaric thermodynamic processes in the sheath, similar to other previously published analyses. By computing Bernoulli's integral and applying its conservation along the equator of the inner heliosheath, the magnetic field in the inner heliosheath is estimated, B ˜ 2.29 ± 0.16 μG. The constructed normalized histogram of the values of the magnetic field is similar to that derived from a different method that uses the concept of large-scale quantization, bringing incredible insights to this novel theory.
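In schematic form, the generalization described above can be written as follows; the coefficients are illustrative placeholders, not the fitted inner-heliosheath values.

```latex
% Single polytrope of index \nu: linear on a log-log scale.
T \propto n^{\,\nu-1} \quad\Longrightarrow\quad \log T = c_0 + (\nu-1)\,\log n .
% Gaussian superposition of polytropic indices (mean \bar{\nu}): concave-downward parabola.
\log T \;\approx\; c_0 + (\bar{\nu}-1)\,\log n \;-\; c_2\,(\log n)^2 , \qquad c_2 > 0 .
```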
A novel algorithm for laser self-mixing sensors used with the Kalman filter to measure displacement
NASA Astrophysics Data System (ADS)
Sun, Hui; Liu, Ji-Gou
2018-07-01
This paper proposes a simple and effective method for estimating the feedback level factor C in a self-mixing interferometric sensor; it is used with a Kalman filter to retrieve the displacement. Unlike the general C estimation method, which requires a complicated and onerous calculation process, the proposed method yields a closed-form final equation, so estimating C involves only a few simple calculations. The method successfully retrieves sinusoidal and random displacements from simulated self-mixing signals in both the weak and moderate feedback regimes. To deal with errors resulting from noise and from bias in the estimate of C, and to further improve the retrieval precision, a Kalman filter is employed after the general phase unwrapping method. The simulation and experimental results show that the displacement retrieved using the C obtained with the proposed method is comparable to that obtained by joint estimation of C and α. In addition, the Kalman filter significantly decreases measurement errors, especially errors caused by incorrectly locating the peak and valley positions of the signal.
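A minimal sketch of the Kalman-filtering step, assuming a scalar random-walk state model and hypothetical noise variances, is given below; it smooths a noisy displacement retrieval but does not implement the self-mixing signal processing or the proposed C estimation itself.

```python
import numpy as np

def kalman_smooth_displacement(measured, q=1e-4, r=1e-2):
    """Scalar random-walk Kalman filter: state = displacement, observation = noisy retrieval.
    q is the process-noise variance, r the measurement-noise variance (hypothetical values)."""
    x_est, p_est = measured[0], 1.0
    filtered = np.empty_like(measured)
    for k, z in enumerate(measured):
        # Predict (random-walk model: displacement assumed slowly varying).
        p_pred = p_est + q
        # Update with the displacement retrieved from the self-mixing signal.
        gain = p_pred / (p_pred + r)
        x_est = x_est + gain * (z - x_est)
        p_est = (1.0 - gain) * p_pred
        filtered[k] = x_est
    return filtered

t = np.linspace(0, 1, 200)
true_disp = 2e-6 * np.sin(2 * np.pi * 5 * t)   # sinusoidal target motion (meters)
noisy_retrieval = true_disp + np.random.default_rng(3).normal(0, 2e-7, t.size)
print(np.abs(kalman_smooth_displacement(noisy_retrieval) - true_disp).mean())
```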
Ding, A Adam; Wu, Hulin
2014-10-01
We propose a new method to use a constrained local polynomial regression to estimate the unknown parameters in ordinary differential equation models with a goal of improving the smoothing-based two-stage pseudo-least squares estimate. The equation constraints are derived from the differential equation model and are incorporated into the local polynomial regression in order to estimate the unknown parameters in the differential equation model. We also derive the asymptotic bias and variance of the proposed estimator. Our simulation studies show that our new estimator is clearly better than the pseudo-least squares estimator in estimation accuracy, at a small additional computational cost. An application example on immune cell kinetics and trafficking for influenza infection further illustrates the benefits of the proposed new method.
Ding, A. Adam; Wu, Hulin
2015-01-01
We propose a new method to use a constrained local polynomial regression to estimate the unknown parameters in ordinary differential equation models with a goal of improving the smoothing-based two-stage pseudo-least squares estimate. The equation constraints are derived from the differential equation model and are incorporated into the local polynomial regression in order to estimate the unknown parameters in the differential equation model. We also derive the asymptotic bias and variance of the proposed estimator. Our simulation studies show that our new estimator is clearly better than the pseudo-least squares estimator in estimation accuracy, at a small additional computational cost. An application example on immune cell kinetics and trafficking for influenza infection further illustrates the benefits of the proposed new method. PMID:26401093
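For context, the smoothing-based two-stage pseudo-least-squares baseline that the authors improve upon can be sketched on a simple logistic ODE: smooth the data and estimate the derivative (a Savitzky-Golay filter stands in here for local polynomial regression), then solve a least-squares problem for the parameters in which the ODE is linear. The model, data, and tuning choices below are illustrative assumptions, not the authors' constrained estimator.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.signal import savgol_filter

# Simulate noisy observations from a logistic ODE dx/dt = r*x*(1 - x/K).
r_true, K_true = 0.8, 10.0
t = np.linspace(0, 12, 121)
sol = solve_ivp(lambda t, x: r_true * x * (1 - x / K_true), (0, 12), [0.5], t_eval=t)
x_obs = sol.y[0] + np.random.default_rng(7).normal(0, 0.05, t.size)

# Stage 1: smooth the trajectory and estimate its derivative
# (Savitzky-Golay filtering stands in for local polynomial regression).
dt = t[1] - t[0]
x_s = savgol_filter(x_obs, window_length=21, polyorder=3)
dx_s = savgol_filter(x_obs, window_length=21, polyorder=3, deriv=1, delta=dt)

# Stage 2: pseudo-least squares. The ODE is linear in (r, r/K):
# dx/dt = r*x - (r/K)*x**2, so regress the estimated derivative on [x, -x**2].
A = np.column_stack([x_s, -x_s**2])
(r_hat, rK_hat), *_ = np.linalg.lstsq(A, dx_s, rcond=None)
print(f"r ~ {r_hat:.3f} (true {r_true}),  K ~ {r_hat / rK_hat:.2f} (true {K_true})")
```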
Williams, Brian A.; Dang, Qainyu; Bost, James E.; Irrgang, James J.; Orebaugh, Steven L.; Bottegal, Matthew T.; Kentor, Michael L.
2010-01-01
Background We previously reported that continuous perineural femoral analgesia reduces pain with movement during the first 2 days after anterior cruciate ligament reconstruction (ACLR, n=270), when compared with multimodal analgesia and placebo perineural femoral infusion. We now report the prospectively collected general health and knee function outcomes in the 7 days to 12 weeks after surgery in these same patients. Methods At 3 points during 12 weeks after ACLR surgery, patients completed the SF-36 General Health Survey, and the Knee Outcome Survey (KOS). Generalized Estimating Equations were implemented to evaluate the association between patient-reported survey outcomes and (i) preoperative baseline survey scores, (ii) time after surgery, and (iii) 3 nerve block treatment groups. Results Two-hundred-seventeen patients’ data were complete for analysis. In univariate and multiple regression Generalized Estimating Equations models, nerve block treatment group was not associated with SF-36 and KOS scores after surgery (all with P≥0.05). The models showed that the physical component summary of the SF-36 (P < 0.0001) and the KOS total score (P < 0.0001) increased (improved) over time after surgery and were also influenced by baseline scores. Conclusions After spinal anesthesia and multimodal analgesia for ACLR, the nerve block treatment group did not predict SF-36 or knee function outcomes from 7 days to 12 weeks after surgery. Further research is needed to determine whether these conclusions also apply to a nonstandardized anesthetic, or one that includes general anesthesia and/or high-dose opioid analgesia. PMID:19299803
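A minimal sketch of this kind of GEE analysis, using the statsmodels package with an exchangeable working correlation and entirely hypothetical long-format data and variable names, is shown below.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hypothetical long-format data: one row per patient per follow-up visit.
rng = np.random.default_rng(0)
n_pat, n_visits = 60, 3
df = pd.DataFrame({
    "patient": np.repeat(np.arange(n_pat), n_visits),
    "visit": np.tile(np.arange(1, n_visits + 1), n_pat),
    "group": np.repeat(rng.integers(0, 3, n_pat), n_visits),   # 3 treatment groups
    "baseline": np.repeat(rng.normal(50, 10, n_pat), n_visits),
})
df["score"] = 40 + 0.5 * df["baseline"] + 3 * df["visit"] + rng.normal(0, 5, len(df))

# GEE with an exchangeable working correlation within patients.
model = sm.GEE.from_formula(
    "score ~ C(group) + visit + baseline",
    groups="patient",
    data=df,
    family=sm.families.Gaussian(),
    cov_struct=sm.cov_struct.Exchangeable(),
)
print(model.fit().summary())
```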
A method for estimating mean and low flows of streams in national forests of Montana
Parrett, Charles; Hull, J.A.
1985-01-01
Equations were developed for estimating mean annual discharge, 80-percent exceedance discharge, and 95-percent exceedance discharge for streams on national forest lands in Montana. The equations for mean annual discharge used active-channel width, drainage area, and mean annual precipitation as independent variables, with active-channel width being most significant. The equations for 80-percent exceedance discharge and 95-percent exceedance discharge used only active-channel width as an independent variable. The standard error of estimate for the best equation for estimating mean annual discharge was 27 percent. The standard errors of estimate for the equations were 67 percent for estimating 80-percent exceedance discharge and 75 percent for estimating 95-percent exceedance discharge. (USGS)
NASA Astrophysics Data System (ADS)
Lesieur, Thibault; Krzakala, Florent; Zdeborová, Lenka
2017-07-01
This article is an extended version of previous work of Lesieur et al (2015 IEEE Int. Symp. on Information Theory Proc. pp 1635-9 and 2015 53rd Annual Allerton Conf. on Communication, Control and Computing (IEEE) pp 680-7) on low-rank matrix estimation in the presence of constraints on the factors into which the matrix is factorized. Low-rank matrix factorization is one of the basic methods used in data analysis for unsupervised learning of relevant features and other types of dimensionality reduction. We present a framework to study the constrained low-rank matrix estimation for a general prior on the factors, and a general output channel through which the matrix is observed. We draw a parallel with the study of vector-spin glass models—presenting a unifying way to study a number of problems considered previously in separate statistical physics works. We present a number of applications for the problem in data analysis. We derive in detail a general form of the low-rank approximate message passing (Low-RAMP) algorithm, that is known in statistical physics as the TAP equations. We thus unify the derivation of the TAP equations for models as different as the Sherrington-Kirkpatrick model, the restricted Boltzmann machine, the Hopfield model or vector (XY, Heisenberg and other) spin glasses. The state evolution of the Low-RAMP algorithm is also derived, and is equivalent to the replica symmetric solution for the large class of vector-spin glass models. In the section devoted to results we study in detail phase diagrams and phase transitions for the Bayes-optimal inference in low-rank matrix estimation. We present a typology of phase transitions and their relation to performance of algorithms such as the Low-RAMP or commonly used spectral methods.
An Alternative to the Stay/Switch Equation Assessed When Using a Changeover-Delay
MacDonall, James S.
2015-01-01
An alternative to the generalized matching equation for understanding concurrent performances is the stay/switch model. For the stay/switch model, the important events are the contingencies and behaviors at each alternative. The current experiment compares the descriptions by two stay/switch equations, the original, empirically derived stay/switch equation and a more theoretically derived equation based on ratios of stay to switch responses matching ratios of stay to switch reinforcers. The present experiment compared descriptions by the original stay/switch equation when using and not using a changeover delay. It also compared descriptions by the more theoretical equation with and without a changeover delay. Finally, it compared descriptions of the concurrent performances by these two equations. Rats were trained in 15 conditions on identical concurrent random-interval schedules in each component of a multiple schedule. A COD operated in only one component. There were no consistent differences in the variance accounted for by each equation of concurrent performances whether or not a COD was used. The simpler equation found greater sensitivity to stay than to switch reinforcers. It also found a COD eliminated the influence of switch reinforcers. Because estimates of parameters were more meaningful when using the more theoretical stay/switch equation it is preferred. PMID:26299548
An alternative to the stay/switch equation assessed when using a changeover-delay.
MacDonall, James S
2015-11-01
An alternative to the generalized matching equation for understanding concurrent performances is the stay/switch model. For the stay/switch model, the important events are the contingencies and behaviors at each alternative. The current experiment compares the descriptions by two stay/switch equations, the original, empirically derived stay/switch equation and a more theoretically derived equation based on ratios of stay to switch responses matching ratios of stay to switch reinforcers. The present experiment compared descriptions by the original stay/switch equation when using and not using a changeover delay. It also compared descriptions by the more theoretical equation with and without a changeover delay. Finally, it compared descriptions of the concurrent performances by these two equations. Rats were trained in 15 conditions on identical concurrent random-interval schedules in each component of a multiple schedule. A COD operated in only one component. There were no consistent differences in the variance accounted for by each equation of concurrent performances whether or not a COD was used. The simpler equation found greater sensitivity to stay than to switch reinforcers. It also found a COD eliminated the influence of switch reinforcers. Because estimates of parameters were more meaningful when using the more theoretical stay/switch equation it is preferred. Copyright © 2015 Elsevier B.V. All rights reserved.
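For comparison, the generalized matching equation that the stay/switch model is offered as an alternative to, log(B1/B2) = a·log(R1/R2) + log c, can be fit by ordinary least squares on log ratios. The response and reinforcer counts below are invented for illustration, not drawn from the experiment.

```python
import numpy as np

# Hypothetical response (B) and reinforcer (R) counts on two alternatives,
# one row per condition.
B1 = np.array([820, 640, 450, 300, 180])
B2 = np.array([180, 300, 430, 600, 790])
R1 = np.array([90, 70, 50, 30, 15])
R2 = np.array([15, 30, 50, 70, 90])

# Generalized matching: log(B1/B2) = a*log(R1/R2) + log(c).
x = np.log10(R1 / R2)
y = np.log10(B1 / B2)
a, log_c = np.polyfit(x, y, 1)
y_hat = a * x + log_c
vac = 1 - np.sum((y - y_hat) ** 2) / np.sum((y - y.mean()) ** 2)  # variance accounted for
print(f"sensitivity a = {a:.2f}, bias c = {10**log_c:.2f}, VAC = {vac:.3f}")
```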
NASA Astrophysics Data System (ADS)
Casas-Castillo, M. Carmen; Rodríguez-Solà, Raúl; Navarro, Xavier; Russo, Beniamino; Lastra, Antonio; González, Paula; Redaño, Angel
2018-01-01
The fractal behavior of extreme rainfall intensities registered between 1940 and 2012 by the Retiro Observatory of Madrid (Spain) has been examined, and a simple scaling regime ranging from 25 min to 3 days of duration has been identified. Thus, an intensity-duration-frequency (IDF) master equation of the location has been constructed in terms of the simple scaling formulation. The scaling behavior of probable maximum precipitation (PMP) for durations between 5 min and 24 h has also been verified. For the statistical estimation of the PMP, an envelope curve of the frequency factor (k_m) based on a total of 10,194 station-years of annual maximum rainfall from 258 stations in Spain has been developed. This curve could be useful to estimate suitable values of PMP at any point of the Iberian Peninsula from basic statistical parameters (mean and standard deviation) of its rainfall series.
David C. Chojnacky; Jennifer C. Jenkins; Amanda K. Holland
2009-01-01
Thousands of published equations purport to estimate biomass of individual trees. These equations are often based on very small samples, however, and can provide widely different estimates for trees of the same species. We addressed this issue in a previous study by devising 10 new equations that estimated total aboveground biomass for all species in North America (...
Magnitude and Frequency of Floods for Urban and Small Rural Streams in Georgia, 2008
Gotvald, Anthony J.; Knaak, Andrew E.
2011-01-01
A study was conducted that updated methods for estimating the magnitude and frequency of floods in ungaged urban basins in Georgia that are not substantially affected by regulation or tidal fluctuations. Annual peak-flow data for urban streams from September 2008 were analyzed for 50 streamgaging stations (streamgages) in Georgia and 6 streamgages on adjacent urban streams in Florida and South Carolina having 10 or more years of data. Flood-frequency estimates were computed for the 56 urban streamgages by fitting logarithms of annual peak flows for each streamgage to a Pearson Type III distribution. Additionally, basin characteristics for the streamgages were computed by using a geographical information system and computer algorithms. Regional regression analysis, using generalized least-squares regression, was used to develop a set of equations for estimating flows with 50-, 20-, 10-, 4-, 2-, 1-, 0.5-, and 0.2-percent annual exceedance probabilities for ungaged urban basins in Georgia. In addition to the 56 urban streamgages, 171 rural streamgages were included in the regression analysis to maintain continuity between flood estimates for urban and rural basins as the basin characteristics pertaining to urbanization approach zero. Because 21 of the rural streamgages have drainage areas less than 1 square mile, the set of equations developed for this study can also be used for estimating small ungaged rural streams in Georgia. Flood-frequency estimates and basin characteristics for 227 streamgages were combined to form the final database used in the regional regression analysis. Four hydrologic regions were developed for Georgia. The final equations are functions of drainage area and percentage of impervious area for three of the regions and drainage area, percentage of developed land, and mean basin slope for the fourth region. Average standard errors of prediction for these regression equations range from 20.0 to 74.5 percent.
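The at-site flood-frequency step described above (fitting a Pearson Type III distribution to the logarithms of annual peaks) can be sketched as follows; this is a simple method-of-moments illustration with synthetic peak flows, not the report's regional generalized least-squares regression or its Bulletin-style adjustments.

```python
import numpy as np
from scipy import stats

# Hypothetical annual peak flows (cubic feet per second) at one streamgage.
rng = np.random.default_rng(5)
peaks = rng.lognormal(mean=7.0, sigma=0.6, size=40)

# Fit a Pearson Type III distribution to log10 of the peaks by the method
# of moments (the log-Pearson Type III approach).
logq = np.log10(peaks)
skew = stats.skew(logq, bias=False)
dist = stats.pearson3(skew, loc=logq.mean(), scale=logq.std(ddof=1))

for aep in (0.50, 0.10, 0.01):             # annual exceedance probabilities
    q = 10 ** dist.ppf(1.0 - aep)
    print(f"{aep:.0%} AEP flow ~ {q:,.0f} ft3/s")
```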
Maximum likelihood clustering with dependent feature trees
NASA Technical Reports Server (NTRS)
Chittineni, C. B. (Principal Investigator)
1981-01-01
The decomposition of mixture density of the data into its normal component densities is considered. The densities are approximated with first order dependent feature trees using criteria of mutual information and distance measures. Expressions are presented for the criteria when the densities are Gaussian. By defining different types of nodes in a general dependent feature tree, maximum likelihood equations are developed for the estimation of parameters using fixed point iterations. The field structure of the data is also taken into account in developing maximum likelihood equations. Experimental results from the processing of remotely sensed multispectral scanner imagery data are included.
Quasi-Newton methods for parameter estimation in functional differential equations
NASA Technical Reports Server (NTRS)
Brewer, Dennis W.
1988-01-01
A state-space approach to parameter estimation in linear functional differential equations is developed using the theory of linear evolution equations. A locally convergent quasi-Newton type algorithm is applied to distributed systems with particular emphasis on parameters that induce unbounded perturbations of the state. The algorithm is computationally implemented on several functional differential equations, including coefficient and delay estimation in linear delay-differential equations.
A general diagram for estimating pore size of ultrafiltration and reverse osmosis membranes
NASA Technical Reports Server (NTRS)
Sarbolouki, M. N.
1982-01-01
A slit sieve model has been used to develop a general correlation between the average pore size of the upstream surface of a membrane and the molecular weight of the solute which it retains by better than 80%. The pore size is determined by means of the correlation using the high retention data from an ultrafiltration (UF) or a reverse osmosis (RO) experiment. The pore population density can also be calculated from the flux data via appropriate equations.
Programmer's manual for MMLE3, a general FORTRAN program for maximum likelihood parameter estimation
NASA Technical Reports Server (NTRS)
Maine, R. E.
1981-01-01
The MMLE3 is a maximum likelihood parameter estimation program capable of handling general bilinear dynamic equations of arbitrary order with measurement noise and/or state noise (process noise). The basic MMLE3 program is quite general and, therefore, applicable to a wide variety of problems. The basic program can interact with a set of user written problem specific routines to simplify the use of the program on specific systems. A set of user routines for the aircraft stability and control derivative estimation problem is provided with the program. The implementation of the program on specific computer systems is discussed. The structure of the program is diagrammed, and the function and operation of individual routines is described. Complete listings and reference maps of the routines are included on microfiche as a supplement. Four test cases are discussed; listings of the input cards and program output for the test cases are included on microfiche as a supplement.
Liu, Jingxia; Colditz, Graham A
2018-05-01
There is growing interest in conducting cluster randomized trials (CRTs). For simplicity in sample size calculation, the cluster sizes are assumed to be identical across all clusters. However, equal cluster sizes are not guaranteed in practice. Therefore, the relative efficiency (RE) of unequal versus equal cluster sizes has been investigated when testing the treatment effect. One of the most important approaches to analyze a set of correlated data is the generalized estimating equation (GEE) proposed by Liang and Zeger, in which the "working correlation structure" is introduced and the association pattern depends on a vector of association parameters denoted by ρ. In this paper, we utilize GEE models to test the treatment effect in a two-group comparison for continuous, binary, or count data in CRTs. The variances of the estimator of the treatment effect are derived for the different types of outcome. RE is defined as the ratio of variance of the estimator of the treatment effect for equal to unequal cluster sizes. We discuss a correlation structure commonly used in CRTs, the exchangeable structure, and derive simpler formulas for the RE with continuous, binary, and count outcomes. Finally, REs are investigated for several scenarios of cluster size distributions through simulation studies. We propose an adjusted sample size due to efficiency loss. We also propose an optimal sample size estimation based on the GEE models under a fixed budget for known and unknown association parameter (ρ) in the working correlation structure within the cluster. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
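A sketch of the relative-efficiency calculation for a cluster-level treatment with a continuous outcome and an exchangeable working correlation is given below. It uses the standard result that the variance of the treatment-effect estimator is inversely proportional to Σ n_i/(1 + (n_i − 1)ρ); the cluster sizes are hypothetical and the formula is the textbook version, which may differ in detail from the paper's derivations for binary and count outcomes.

```python
import numpy as np

def effective_info(sizes, rho):
    """Sum of n_i / (1 + (n_i - 1)*rho): information contributed by clusters of
    sizes n_i under an exchangeable correlation rho (cluster-level treatment,
    continuous outcome)."""
    sizes = np.asarray(sizes, dtype=float)
    return np.sum(sizes / (1.0 + (sizes - 1.0) * rho))

def relative_efficiency(sizes, rho):
    """RE = Var(beta_hat | equal sizes) / Var(beta_hat | unequal sizes), holding
    the number of clusters and total observations fixed."""
    sizes = np.asarray(sizes, dtype=float)
    equal = np.full_like(sizes, sizes.mean())
    return effective_info(sizes, rho) / effective_info(equal, rho)

unequal = [5, 10, 20, 40, 75, 150]      # hypothetical, highly variable cluster sizes
for rho in (0.01, 0.05, 0.20):
    print(f"rho = {rho:.2f}: RE = {relative_efficiency(unequal, rho):.3f}")
```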
Oberg, Kevin A.; Mades, Dean M.
1987-01-01
Four techniques for estimating generalized skew in Illinois were evaluated: (1) a generalized skew map of the US; (2) an isoline map; (3) a prediction equation; and (4) a regional-mean skew. Peak-flow records at 730 gaging stations having 10 or more annual peaks were selected for computing station skews. Station skew values ranged from -3.55 to 2.95, with a mean of -0.11. Frequency curves computed for 30 gaging stations in Illinois using the variations of the regional-mean skew technique are similar to frequency curves computed using a skew map developed by the US Water Resources Council (WRC). Estimates of the 50-, 100-, and 500-yr floods computed for 29 of these gaging stations using the regional-mean skew techniques are within the 50% confidence limits of frequency curves computed using the WRC skew map. Although the three variations of the regional-mean skew technique were slightly more accurate than the WRC map, there is no appreciable difference between flood estimates computed using the variations of the regional-mean technique and flood estimates computed using the WRC skew map. (Peters-PTT)
On parametrised cold dense matter equation of state inference
NASA Astrophysics Data System (ADS)
Riley, Thomas E.; Raaijmakers, Geert; Watts, Anna L.
2018-04-01
Constraining the equation of state of cold dense matter in compact stars is a major science goal for observing programmes being conducted using X-ray, radio, and gravitational wave telescopes. We discuss Bayesian hierarchical inference of parametrised dense matter equations of state. In particular we generalise and examine two inference paradigms from the literature: (i) direct posterior equation of state parameter estimation, conditioned on observations of a set of rotating compact stars; and (ii) indirect parameter estimation, via transformation of an intermediary joint posterior distribution of exterior spacetime parameters (such as gravitational masses and coordinate equatorial radii). We conclude that the former paradigm is not only tractable for large-scale analyses, but is principled and flexible from a Bayesian perspective whilst the latter paradigm is not. The thematic problem of Bayesian prior definition emerges as the crux of the difference between these paradigms. The second paradigm should in general only be considered as an ill-defined approach to the problem of utilising archival posterior constraints on exterior spacetime parameters; we advocate for an alternative approach whereby such information is repurposed as an approximative likelihood function. We also discuss why conditioning on a piecewise-polytropic equation of state model - currently standard in the field of dense matter study - can easily violate conditions required for transformation of a probability density distribution between spaces of exterior (spacetime) and interior (source matter) parameters.
ERIC Educational Resources Information Center
Haines, Amanda; Spruance, Lori Andersen
2018-01-01
Purpose/Objectives: The study aim was to evaluate parent support for breakfast after the bell programs (BABPs). Methods: Data were collected through an online survey from parents (n=488) of school-aged children enrolled in public schools in Utah. Data were analyzed using generalized estimating equation (GEE) regression methods. Results: Parents…
Effect of Changes in Sleep Quantity and Quality on Depressive Symptoms among Korean Children
ERIC Educational Resources Information Center
Lee, Joo Eun; Park, Sohee; Nam, Jin-Young; Ju, Young Jun; Park, Eun-Cheol
2017-01-01
This study aims to determine whether changes in sleep quantity and quality in childhood are associated with incidence of depressive symptoms. We used the three waves of the Korean Children & Youth Panel Survey (2011-2013). Statistical analysis using a generalized estimating equation model was performed. The 2,605 subjects analyzed included…
ERIC Educational Resources Information Center
Muth, Chelsea; Bales, Karen L.; Hinde, Katie; Maninger, Nicole; Mendoza, Sally P.; Ferrer, Emilio
2016-01-01
Unavoidable sample size issues beset psychological research that involves scarce populations or costly laboratory procedures. When incorporating longitudinal designs these samples are further reduced by traditional modeling techniques, which perform listwise deletion for any instance of missing data. Moreover, these techniques are limited in their…
ERIC Educational Resources Information Center
Conner, Kenneth R.; Meldrum, Sean; Wieczorek, William F.; Duberstein, Paul R.; Welte, John W.
2004-01-01
Information on the association of impulsivity and measures of aggression with suicidal ideation in adolescents and young adults is limited. Data were gathered from a community sample of 625 adolescent and young adult males. Analyses were based on multivariate generalized estimating equations. Impulsivity and irritability were associated strongly…
ERIC Educational Resources Information Center
Yan, Jun; Aseltine, Robert H., Jr.; Harel, Ofer
2013-01-01
Comparing regression coefficients between models when one model is nested within another is of great practical interest when two explanations of a given phenomenon are specified as linear models. The statistical problem is whether the coefficients associated with a given set of covariates change significantly when other covariates are added into…
ERIC Educational Resources Information Center
Feingold, Alan
2009-01-01
The use of growth-modeling analysis (GMA)--including hierarchical linear models, latent growth models, and general estimating equations--to evaluate interventions in psychology, psychiatry, and prevention science has grown rapidly over the last decade. However, an effect size associated with the difference between the trajectories of the…
ERIC Educational Resources Information Center
Lee, Chung Gun; Seo, Dong-Chul; Torabi, Mohammad R.; Lohrmann, David K.; Song, Tae Min
2018-01-01
Background: We examined the longitudinal trajectory of substance use (binge drinking, marijuana use, and cocaine use) in relation to self-esteem from adolescence to young adulthood. Methods: Generalized estimating equation models were fit using SAS to investigate changes in the relation between self-esteem and each substance use (binge drinking,…
Calculating the True and Observed Rates of Complex Heterogeneous Catalytic Reactions
NASA Astrophysics Data System (ADS)
Avetisov, A. K.; Zyskin, A. G.
2018-06-01
Equations of the theory of steady-state complex reactions are considered in matrix form. A set of stage stationarity equations is given, and an algorithm is described for deriving the canonical set of stationarity equations with appropriate corrections for the existence of fast stages in a mechanism. A formula for calculating the number of key compounds is presented. The applicability of the Gibbs rule to estimating the number of independent compounds in a complex reaction is analyzed. Some matrix equations relating the rates of dependent and key substances are derived. They are used as a basis to determine the general diffusion stoichiometry relationships between temperature, the concentrations of dependent reaction participants, and the concentrations of key reaction participants in a catalyst grain. An algorithm is described for calculating heat and mass transfer in a catalyst grain with respect to arbitrary complex heterogeneous catalytic reactions.
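One elementary linear-algebra step behind such counts is illustrated below: for a hypothetical two-step catalytic mechanism, the rank of the stoichiometric matrix gives the number of independent composition changes (the key compounds, in one common convention), and the remaining columns correspond to conservation relations. This is only a sketch of the idea, not the paper's matrix formalism.

```python
import numpy as np

# Hypothetical two-step catalytic mechanism (Z = free site, AZ = adsorbed A):
#   step 1:  A + Z  -> AZ
#   step 2:  AZ + B -> C + Z
# Rows are stages, columns are species in the order [A, B, C, Z, AZ].
nu = np.array([
    [-1,  0,  0, -1,  1],
    [ 0, -1,  1,  1, -1],
])

rank = np.linalg.matrix_rank(nu)
n_species = nu.shape[1]
print("rank of stoichiometric matrix        :", rank)   # independent composition changes
print("number of key compounds              :", rank)
print("conservation relations among species :", n_species - rank)
```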
Discrete sensitivity derivatives of the Navier-Stokes equations with a parallel Krylov solver
NASA Technical Reports Server (NTRS)
Ajmani, Kumud; Taylor, Arthur C., III
1994-01-01
This paper solves an 'incremental' form of the sensitivity equations derived by differentiating the discretized thin-layer Navier Stokes equations with respect to certain design variables of interest. The equations are solved with a parallel, preconditioned Generalized Minimal RESidual (GMRES) solver on a distributed-memory architecture. The 'serial' sensitivity analysis code is parallelized by using the Single Program Multiple Data (SPMD) programming model, domain decomposition techniques, and message-passing tools. Sensitivity derivatives are computed for low and high Reynolds number flows over a NACA 1406 airfoil on a 32-processor Intel Hypercube, and found to be identical to those computed on a single-processor Cray Y-MP. It is estimated that the parallel sensitivity analysis code has to be run on 40-50 processors of the Intel Hypercube in order to match the single-processor processing time of a Cray Y-MP.
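A minimal sketch of the core linear-algebra step, solving a sparse nonsymmetric sensitivity system with preconditioned GMRES, is shown below using SciPy; the matrix is a generic asymmetric tridiagonal stand-in, not a Navier-Stokes discretization, and the Jacobi preconditioner is only an illustrative choice.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# Generic sparse, nonsymmetric system A * dq_dD = rhs, standing in for the
# incremental sensitivity equations dR/dq * dq/dD = -dR/dD.
n = 200
main = 2.0 * np.ones(n)
lower = -1.2 * np.ones(n - 1)   # asymmetry mimics a convective contribution
upper = -0.8 * np.ones(n - 1)
A = sp.diags([lower, main, upper], offsets=[-1, 0, 1], format="csr")
rhs = np.ones(n)

# Simple diagonal (Jacobi) preconditioner.
M = spla.LinearOperator((n, n), matvec=lambda v: v / main)

dq_dD, info = spla.gmres(A, rhs, M=M)
print("GMRES converged" if info == 0 else f"GMRES info = {info}",
      "| residual norm:", np.linalg.norm(A @ dq_dD - rhs))
```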
Stable Algorithm For Estimating Airdata From Flush Surface Pressure Measurements
NASA Technical Reports Server (NTRS)
Whitmore, Stephen, A. (Inventor); Cobleigh, Brent R. (Inventor); Haering, Edward A., Jr. (Inventor)
2001-01-01
An airdata estimation and evaluation system and method, including a stable algorithm for estimating airdata from nonintrusive surface pressure measurements. The airdata estimation and evaluation system is preferably implemented in a flush airdata sensing (FADS) system. The system and method of the present invention take a flow model equation and transform it into a triples formulation equation. The triples formulation equation eliminates the pressure related states from the flow model equation by strategically taking the differences of three surface pressures, known as triples. This triples formulation equation is then used to accurately estimate and compute vital airdata from nonintrusive surface pressure measurements.
Predictive Variables of Half-Marathon Performance for Male Runners.
Gómez-Molina, Josué; Ogueta-Alday, Ana; Camara, Jesus; Stickley, Christoper; Rodríguez-Marroyo, José A; García-López, Juan
2017-06-01
The aims of this study were to establish and validate various predictive equations of half-marathon performance. Seventy-eight half-marathon male runners participated in two different phases. Phase 1 (n = 48) was used to establish the equations for estimating half-marathon performance, and Phase 2 (n = 30) to validate these equations. Apart from half-marathon performance, training-related and anthropometric variables were recorded, and an incremental test on a treadmill was performed, in which physiological (VO2max, speed at the anaerobic threshold, peak speed) and biomechanical variables (contact and flight times, step length and step rate) were registered. In Phase 1, half-marathon performance could be predicted to 90.3% by variables related to training and anthropometry (Equation 1), 94.9% by physiological variables (Equation 2), 93.7% by biomechanical parameters (Equation 3) and 96.2% by a general equation (Equation 4). Using these equations, in Phase 2 the predicted time was significantly correlated with performance (r = 0.78, 0.92, 0.90 and 0.95, respectively). The proposed and validated equations, based on these different approaches, provide a high level of prediction of half-marathon performance in long-distance male runners. Furthermore, they improve on the predictive performance of previous studies, which makes them highly practical for application in the field of training and performance.
Christ, Sharon L; Lee, David J; Lam, Byron L; Zheng, D Diane; Arheart, Kristopher L
2008-08-01
To estimate the direct effects of self-reported visual impairment (VI) on health, disability, and mortality and to estimate the indirect effects of VI on mortality through health and disability mediators. The National Health Interview Survey (NHIS) is a population-based annual survey designed to be representative of the U.S. civilian noninstitutionalized population. The National Death Index of 135,581 NHIS adult participants, 18 years of age and older, from 1986 to 1996 provided the mortality linkage through 2002. A generalized linear structural equation model (GSEM) with latent variable was used to estimate the results of a system of equations with various outcomes. Standard errors and test statistics were corrected for weighting, clustering, and stratification. VI affects mortality, when direct adjustment was made for the covariates. Severe VI increases the hazard rate by a factor of 1.28 (95% CI: 1.07-1.53) compared with no VI, and some VI increases the hazard by a factor of 1.13 (95% CI: 1.07-1.20). VI also affects mortality indirectly through self-rated health and disability. The total effects (direct effects plus mediated effects) on the hazard of mortality of severe VI and some VI relative to no VI are hazard ratio (HR) 1.54 (95% CI: 1.28-1.86) and HR 1.23 (95% CI: 1.16-1.31), respectively. In addition to the direct link between VI and mortality, the effects of VI on general health and disability contribute to an increased risk of death. Ignoring the latter may lead to an underestimation of the substantive impact of VI on mortality.
NASA Astrophysics Data System (ADS)
Dymond, J. H.; Young, K. J.; Isdale, J. D.
1980-12-01
Viscosity coefficients measured with an estimated accuracy of 2% using a self-centering falling body viscometer are reported for n-hexane, n-hexadecane, and four binary mixtures at 25, 50, 75, and 100°C at pressures up to the freezing pressure or 500 MPa. The data for a given composition at different temperatures and pressures are very satisfactorily correlated by a plot of the reduced viscosity η', defined as 10^4 ηV^{2/3}/(MT)^{1/2} in the cgs system of units, or generally 9.118×10^7 ηV^{2/3}/(MRT)^{1/2}, versus log V', as suggested by the hard-sphere theories, where V' = V·V_0(T_R)/V_0(T) and V_0 represents the close-packed volume at temperature T and reference temperature T_R. The experimental results for all compositions are fitted, generally well within the estimated uncertainty, by the equation ln η' = -1.0 + B·V_0/(V - V_0), where B and V_0 are temperature and composition dependent. Values of B and V_0 for the mixtures are simply related to values for the pure liquids, and viscosity coefficients calculated on the basis of this equation have an estimated accuracy of 3%. The effectiveness of the recently recommended empirical Grunberg and Nissan equation is investigated. It is found that the parameter G is pressure dependent, as well as composition dependent, but is practically temperature independent.
Lima, Robson B DE; Alves, Francisco T; Oliveira, Cinthia P DE; Silva, José A A DA; Ferreira, Rinaldo L C
2017-01-01
Dry tropical forests are a key component of the global carbon cycle, and their biomass estimates depend almost exclusively on equations fitted to multi-species or individual-species data. Therefore, a systematic evaluation of statistical models, through validation of estimates of aboveground biomass stocks, is justified. In this study, we analyzed the capacity of generic and specific equations obtained from different locations in Mexico and Brazil to estimate aboveground biomass at the multi-species level and for four individual species. Generic equations developed in Mexico and Brazil performed better in estimating tree biomass for multi-species data. For Poincianella bracteosa and Mimosa ophthalmocentra, only the generic equation of Sampaio and Silva (2005) was recommended. These equations showed less systematic deviation and lower bias, and their biomass estimates are similar. For the species Mimosa tenuiflora and Aspidosperma pyrifolium and for the genus Croton, the specific regional equations are recommended, although the generic equation of Sampaio and Silva (2005) is not ruled out for biomass estimation. Models considering genus, family, successional group, climatic variables, and wood specific gravity should be fitted and tested, and the resulting equations should be validated at both local and regional levels as well as across the dry-forest-dominated tropics.
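A sketch of how such allometric equations are commonly fitted and checked, using a log-log regression ln(AGB) = a + b·ln(DBH) with a log-bias correction factor on back-transformation, is given below; the diameter and biomass values are invented, not the study's field data.

```python
import numpy as np

# Hypothetical destructive-sampling data: diameter at breast height (cm)
# and measured aboveground dry biomass (kg).
dbh = np.array([3.2, 4.8, 6.1, 7.5, 9.0, 11.3, 13.8, 16.4, 19.9, 24.1])
agb = np.array([2.1, 5.5, 9.8, 15.2, 24.0, 41.5, 66.0, 98.0, 160.0, 255.0])

# Fit ln(AGB) = a + b*ln(DBH) by ordinary least squares.
b, a = np.polyfit(np.log(dbh), np.log(agb), 1)
resid = np.log(agb) - (a + b * np.log(dbh))
cf = np.exp(resid.var(ddof=2) / 2.0)          # log-bias correction factor

def predict_agb(dbh_cm):
    """Back-transformed allometric prediction with correction factor."""
    return cf * np.exp(a) * dbh_cm ** b

# Validation-style check: mean bias of predictions against the fitting data.
bias = np.mean(predict_agb(dbh) - agb)
print(f"a = {a:.2f}, b = {b:.2f}, CF = {cf:.3f}, mean bias = {bias:.2f} kg")
```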
DOE Office of Scientific and Technical Information (OSTI.GOV)
Scott Stewart, D., E-mail: dss@illinois.edu; Hernández, Alberto; Lee, Kibaek
The estimation of pressure and temperature histories, which are required to understand chemical pathways in condensed phase explosives during detonation, is discussed. We argue that estimates made from continuum models, calibrated by macroscopic experiments, are essential to inform modern, atomistic-based reactive chemistry simulations at detonation pressures and temperatures. We present easy to implement methods for general equation of state and arbitrarily complex chemical reaction schemes that can be used to compute reactive flow histories for the constant volume, the energy process, and the expansion process on the Rayleigh line of a steady Chapman-Jouguet detonation. A brief review of the state of the art of two-component reactive flow models is given that highlights the Ignition and Growth model of Lee and Tarver [Phys. Fluids 23, 2362 (1980)] and the Wide-Ranging Equation of State model of Wescott, Stewart, and Davis [J. Appl. Phys. 98, 053514 (2005)]. We discuss evidence from experiments and reactive molecular dynamic simulations that motivate models that have several components, instead of the two that have traditionally been used to describe the results of macroscopic detonation experiments. We present simplified examples of a formulation for a hypothetical explosive that uses simple (ideal) equation of state forms and detailed comparisons. Then, we estimate pathways computed from two-component models of real explosive materials that have been calibrated with macroscopic experiments.
Estimating mercury exposure of piscivorous birds and sport fish using prey fish monitoring
Ackerman, Joshua T.; Hartman, C. Alex; Eagles-Smith, Collin A.; Herzog, Mark P.; Davis, Jay; Ichikawa, Gary; Bonnema, Autumn
2015-01-01
Methylmercury is a global pollutant of aquatic ecosystems, and monitoring programs need tools to predict mercury exposure of wildlife. We developed equations to estimate methylmercury exposure of piscivorous birds and sport fish using mercury concentrations in prey fish. We collected original data on western grebes (Aechmophorus occidentalis) and Clark’s grebes (Aechmophorus clarkii) and summarized the published literature to generate predictive equations specific to grebes and a general equation for piscivorous birds. We measured mercury concentrations in 354 grebes (blood averaged 1.06 ± 0.08 μg/g ww), 101 grebe eggs, 230 sport fish (predominantly largemouth bass and rainbow trout), and 505 prey fish (14 species) at 25 lakes throughout California. Mercury concentrations in grebe blood, grebe eggs, and sport fish were strongly related to mercury concentrations in prey fish among lakes. Each 1.0 μg/g dw (∼0.24 μg/g ww) increase in prey fish resulted in an increase in mercury concentrations of 103% in grebe blood, 92% in grebe eggs, and 116% in sport fish. We also found strong correlations between mercury concentrations in grebes and sport fish among lakes. Our results indicate that prey fish monitoring can be used to estimate mercury exposure of piscivorous birds and sport fish when wildlife cannot be directly sampled.
NASA Astrophysics Data System (ADS)
Fomina, E. V.; Kozhukhova, N. I.; Sverguzova, S. V.; Fomin, A. E.
2018-05-01
In this paper, the regression equations method for the design of construction materials was studied. Regression and polynomial equations representing the correlation between the studied parameters were proposed. The logic and software interface of the regression equations method were focused on parameter optimization to provide an energy-saving effect at the design stage of autoclaved aerated concrete, considering the replacement of traditionally used quartz sand by a coal-mining by-product, argillite. A mathematical model represented by a quadratic polynomial for the design of the experiment was obtained using calculated and experimental data. This allowed estimation of the relationship between the composition and the final properties of the aerated concrete. The response surface, presented graphically as a nomogram, allowed estimation of concrete properties as the composition varied within the factor space. The optimal range of argillite content was obtained, leading to a reduction in raw material demand, development of the target plastic strength of the aerated concrete, and a reduction in curing time before autoclave treatment. Generally, this method allows the design of autoclaved aerated concrete with the required performance without additional resource and time costs.
Wiley, Jeffrey B.; Atkins, John T.; Newell, Dawn A.
2002-01-01
Multiple and simple least-squares regression models for the log10-transformed 1.5- and 2-year recurrence intervals of peak discharges with independent variables describing the basin characteristics (log10-transformed and untransformed) for 236 streamflow-gaging stations were evaluated, and the regression residuals were plotted as areal distributions that defined three regions in West Virginia designated as East, North, and South. Regional equations for the 1.1-, 1.2-, 1.3-, 1.4-, 1.5-, 1.6-, 1.7-, 1.8-, 1.9-, 2.0-, 2.5-, and 3-year recurrence intervals of peak discharges were determined by generalized least-squares regression. Log10-transformed drainage area was the most significant independent variable for all regions. Equations developed in this study are applicable only to rural, unregulated streams within the boundaries of West Virginia. The accuracies of estimating equations are quantified by measuring the average prediction error (from 27.4 to 52.4 percent) and equivalent years of record (from 1.1 to 3.4 years).
Sparse-grid, reduced-basis Bayesian inversion: Nonaffine-parametric nonlinear equations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Peng, E-mail: peng@ices.utexas.edu; Schwab, Christoph, E-mail: christoph.schwab@sam.math.ethz.ch
2016-07-01
We extend the reduced basis (RB) accelerated Bayesian inversion methods for affine-parametric, linear operator equations which are considered in [16,17] to non-affine, nonlinear parametric operator equations. We generalize the analysis of sparsity of parametric forward solution maps in [20] and of Bayesian inversion in [48,49] to the fully discrete setting, including Petrov–Galerkin high-fidelity (“HiFi”) discretization of the forward maps. We develop adaptive, stochastic collocation based reduction methods for the efficient computation of reduced bases on the parametric solution manifold. The nonaffinity and nonlinearity with respect to (w.r.t.) the distributed, uncertain parameters and the unknown solution is collocated; specifically, by the so-called Empirical Interpolation Method (EIM). For the corresponding Bayesian inversion problems, computational efficiency is enhanced in two ways: first, expectations w.r.t. the posterior are computed by adaptive quadratures with dimension-independent convergence rates proposed in [49]; the present work generalizes [49] to account for the impact of the PG discretization in the forward maps on the convergence rates of the Quantities of Interest (QoI for short). Second, we propose to perform the Bayesian estimation only w.r.t. a parsimonious, RB approximation of the posterior density. Based on the approximation results in [49], the infinite-dimensional parametric, deterministic forward map and operator admit N-term RB and EIM approximations which converge at rates which depend only on the sparsity of the parametric forward map. In several numerical experiments, the proposed algorithms exhibit dimension-independent convergence rates which equal, at least, the currently known rate estimates for N-term approximation. We propose to accelerate Bayesian estimation by first offline construction of reduced basis surrogates of the Bayesian posterior density. The parsimonious surrogates can then be employed for online data assimilation and for Bayesian estimation. They also open a perspective for optimal experimental design.
Cabrerizo-García, José Luis; Díez-Manglano, Jesús; García-Arilla, Ernesto; Revillo-Pinilla, Paz; Ramón-Puertas, José; Sebastián-Royo, Mariano
2015-01-06
The Modification of Diet in Renal Disease (MDRD) equation is recommended by most scientific societies to calculate the estimated glomerular filtration rate (GFR). Recently the Chronic Kidney Disease Epidemiology Collaboration (CKD-EPI) group published a new, more precise and accurate equation. We analyzed its behavior in a group of polypathological patients (PP) and compared it with the classic MDRD-4 version. This was a multicenter, observational, descriptive, cross-sectional study. We calculated GFR by MDRD-4 and CKD-EPI in 425 PP. Each stage was assigned according to the GFR: 1: >90; 2: 60-89; 3: 30-59; 4: 15-29; and 5: <15 ml/min/1.73 m2. We analyzed the correlation between the two equations and the patients reclassified by CKD-EPI. Mean (SD) age was 81.7 (7.9) years, and 55.3% were women. The mean estimated GFR was 58.6 (26.3) ml/min/1.73 m2 by MDRD-4 and 52.7 (23.0) ml/min/1.73 m2 by CKD-EPI (P<.001; Spearman's rho correlation and Lin concordance coefficients: 0.993 and 0.948). The Bland-Altman plots showed lower GFR values for the CKD-EPI equation. In stage 2, 21.2% of patients were reclassified by CKD-EPI to stage 3, with women older than 83 years being the most disadvantaged subgroup, with 27.3% reclassified. When applied to PP, the CKD-EPI equation yields lower values than MDRD-4. In general, it produces lower GFR values and increases the assigned degree of renal insufficiency, especially in older women. Copyright © 2013 Elsevier España, S.L.U. All rights reserved.
On the solution of the generalized wave and generalized sine-Gordon equations
NASA Technical Reports Server (NTRS)
Ablowitz, M. J.; Beals, R.; Tenenblat, K.
1986-01-01
The generalized wave equation and generalized sine-Gordon equations are known to be natural multidimensional differential geometric generalizations of the classical two-dimensional versions. In this paper, a system of linear differential equations is associated with these equations, and it is shown how the direct and inverse problems can be solved for appropriately decaying data on suitable lines. An initial-boundary value problem is solved for these equations.
Comparison of methods for estimating carbon dioxide storage by Sacramento's urban forest
Elena Aguaron; E. Gregory McPherson
2012-01-01
The limited availability of biomass equations for open-grown urban tree species has necessitated the use of forest-derived equations, with diverse conclusions about the accuracy of these equations for estimating urban biomass and carbon storage. Our goal was to determine and explain variability among estimates of CO2 storage from four sets of allometric equations for the same...
Mindikoglu, Ayse L.; Dowling, Thomas C.; Weir, Matthew R.; Seliger, Stephen L.; Christenson, Robert H.; Magder, Laurence S.
2013-01-01
Conventional creatinine-based glomerular filtration rate (GFR) equations are insufficiently accurate for estimating GFR in cirrhosis. The Chronic Kidney Disease Epidemiology Collaboration (CKD-EPI) recently proposed an equation to estimate GFR in subjects without cirrhosis using both serum creatinine and cystatin C levels. Performance of the new CKD-EPI creatinine-cystatin C equation (2012) was superior to previous creatinine- or cystatin C-based GFR equations. To evaluate the performance of the CKD-EPI creatinine-cystatin C equation in subjects with cirrhosis, we compared it to GFR measured by non-radiolabeled iothalamate plasma clearance (mGFR) in 72 subjects with cirrhosis. We compared the “bias”, “precision” and “accuracy” of the new CKD-EPI creatinine-cystatin C equation to that of 24-hour urinary creatinine clearance (CrCl), Cockcroft-Gault (CG) and previously reported creatinine- and/or cystatin C-based GFR-estimating equations. Accuracy of CKD-EPI creatinine-cystatin C equation as quantified by root mean squared error of difference scores [differences between mGFR and estimated GFR (eGFR) or between mGFR and CrCl, or between mGFR and CG equation for each subject] (RMSE=23.56) was significantly better than that of CrCl (37.69, P=0.001), CG (RMSE=36.12, P=0.002) and GFR-estimating equations based on cystatin C only. Its accuracy as quantified by percentage of eGFRs that differed by greater than 30% with respect to mGFR was significantly better compared to CrCl (P=0.024), CG (P=0.0001), 4-variable MDRD (P=0.027) and CKD-EPI creatinine 2009 (P=0.012) equations. However, for 23.61% of the subjects, GFR estimated by CKD-EPI creatinine-cystatin C equation differed from the mGFR by more than 30%. CONCLUSIONS The diagnostic performance of CKD-EPI creatinine-cystatin C equation (2012) in patients with cirrhosis was superior to conventional equations in clinical practice for estimating GFR. However, its diagnostic performance was substantially worse than reported in subjects without cirrhosis. PMID:23744636
Seismic waves in a self-gravitating planet
NASA Astrophysics Data System (ADS)
Brazda, Katharina; de Hoop, Maarten V.; Hörmann, Günther
2013-04-01
The elastic-gravitational equations describe the propagation of seismic waves including the effect of self-gravitation. We rigorously derive and analyze this system of partial differential equations and boundary conditions for a general, uniformly rotating, elastic, but aspherical, inhomogeneous, and anisotropic, fluid-solid earth model, under minimal assumptions concerning the smoothness of material parameters and geometry. For this purpose we first establish a consistent mathematical formulation of the low regularity planetary model within the framework of nonlinear continuum mechanics. Using calculus of variations in a Sobolev space setting, we then show how the weak form of the linearized elastic-gravitational equations directly arises from Hamilton's principle of stationary action. Finally we prove existence and uniqueness of weak solutions by the method of energy estimates and discuss additional regularity properties.
Equations for estimating selected streamflow statistics in Rhode Island
Bent, Gardner C.; Steeves, Peter A.; Waite, Andrew M.
2014-01-01
The equations, which are based on data from streams with little to no flow alterations, will provide an estimate of the natural flows for a selected site. They will not estimate flows for altered sites with dams, surface-water withdrawals, groundwater withdrawals (pumping wells), diversions, and wastewater discharges. If the equations are used to estimate streamflow statistics for altered sites, the user should adjust the flow estimates for the alterations. The regression equations should be used only for ungaged sites with drainage areas between 0.52 and 294 square miles and stream densities between 0.94 and 3.49 miles per square mile; these are the ranges of the explanatory variables in the equations.
Painter, Colin C.; Heimann, David C.; Lanning-Rush, Jennifer L.
2017-08-14
A study was done by the U.S. Geological Survey in cooperation with the Kansas Department of Transportation and the Federal Emergency Management Agency to develop regression models to estimate peak streamflows of annual exceedance probabilities of 50, 20, 10, 4, 2, 1, 0.5, and 0.2 percent at ungaged locations in Kansas. Peak streamflow frequency statistics from selected streamgages were related to contributing drainage area and average precipitation using generalized least-squares regression analysis. The peak streamflow statistics were derived from 151 streamgages with at least 25 years of streamflow data through 2015. The developed equations can be used to predict peak streamflow magnitude and frequency within two hydrologic regions that were defined based on the effects of irrigation. The equations developed in this report are applicable to streams in Kansas that are not substantially affected by regulation, surface-water diversions, or urbanization. The equations are intended for use on streams with contributing drainage areas ranging from 0.17 to 14,901 square miles in the nonirrigation effects region and from 1.02 to 3,555 square miles in the irrigation-affected region, corresponding to the range of drainage areas of the streamgages used in the development of the regional equations.
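Regional regression equations of this kind are typically fitted and applied in log space. The sketch below shows the general form with hypothetical coefficients b0, b1, b2; these are illustrative placeholders, not the published Kansas values.

```python
# Illustrative regional peak-flow regression: log10(Qp) = b0 + b1*log10(DA) + b2*log10(P).
# Coefficients are hypothetical; DA is drainage area (mi^2), P is mean precipitation (in).
import math

def peak_flow_estimate(drainage_area_mi2: float, mean_precip_in: float,
                       b0: float = 1.5, b1: float = 0.70, b2: float = 0.90) -> float:
    """Return an estimated peak streamflow (ft^3/s) from a log-log regression."""
    log_q = b0 + b1 * math.log10(drainage_area_mi2) + b2 * math.log10(mean_precip_in)
    return 10 ** log_q

print(peak_flow_estimate(drainage_area_mi2=250.0, mean_precip_in=30.0))
```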
NASA Technical Reports Server (NTRS)
Baker, J. R. (Principal Investigator)
1979-01-01
The author has identified the following significant results. Least squares techniques were applied for parameter estimation of functions to predict winter wheat phenological stage with daily maximum temperature, minimum temperature, daylength, and precipitation as independent variables. After parameter estimation, tests were conducted using independent data. It may generally be concluded that exponential functions have little advantage over polynomials. Precipitation was not found to significantly affect the fits. The Robertson triquadratic form, in general use for spring wheat, yielded good results, but special techniques and care are required. In most instances, equations with nonlinear effects were found to yield erratic results when utilized with averaged daily environmental values as independent variables.
Using surface impedance for calculating wakefields in flat geometry
Bane, Karl; Stupakov, Gennady
2015-03-18
Beginning with Maxwell's equations and assuming only that the wall interaction can be approximated by a surface impedance, we derive formulas for the generalized longitudinal and transverse impedance in flat geometry, from which the wakefields can also be obtained. From the generalized impedances, by taking the proper limits, we obtain the normal longitudinal, dipole, and quad impedances in flat geometry. These equations can be applied to any surface impedance, such as the known dc, ac, and anomalous skin models of wall resistance, a model of wall roughness, or one for a pipe with small, periodic corrugations. We show that, for the particular case of dc wall resistance, the longitudinal impedance obtained here agrees with a known result in the literature, a result that was derived from a very general formula by Henke and Napoly. As an example, we apply our results to representative beam and machine parameters in the undulator region of LCLS-II and estimate the impact of the transverse wakes on the machine performance.
Nonlinearly Activated Neural Network for Solving Time-Varying Complex Sylvester Equation.
Li, Shuai; Li, Yangming
2013-10-28
The Sylvester equation is often encountered in mathematics and control theory. For the general time-invariant Sylvester equation problem, which is defined in the domain of complex numbers, the Bartels-Stewart algorithm and its extensions are effective and widely used with an O(n³) time complexity. When applied to solving the time-varying Sylvester equation, the computation burden increases intensively with the decrease of sampling period and cannot satisfy continuous real-time calculation requirements. For the special case of the general Sylvester equation problem defined in the domain of real numbers, gradient-based recurrent neural networks are able to solve the time-varying Sylvester equation in real time, but there always exists an estimation error, whereas a recently proposed recurrent neural network by Zhang et al. [this type of neural network is called the Zhang neural network (ZNN)] converges to the solution ideally. Advancements in complex-valued neural networks cast light on extending the existing real-valued ZNN for solving the time-varying real-valued Sylvester equation to its counterpart in the domain of complex numbers. In this paper, a complex-valued ZNN for solving the complex-valued Sylvester equation problem is investigated and the global convergence of the neural network is proven with the proposed nonlinear complex-valued activation functions. Moreover, a special type of activation function with a core function, called the sign-bi-power function, is proven to enable the ZNN to converge in finite time, which further enhances its advantage in online processing. In this case, the upper bound of the convergence time is also derived analytically. Simulations are performed to evaluate and compare the performance of the neural network with different parameters and activation functions. Both theoretical analysis and numerical simulations validate the effectiveness of the proposed method.
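For orientation, the sketch below implements the simpler gradient-based recurrent scheme mentioned above for a time-invariant complex Sylvester equation A X + X B = C, integrated with forward Euler. This is not the finite-time ZNN design of the paper; the matrices, step size, and iteration count are illustrative.

```python
# Gradient-descent flow dX/dt = -gamma*(A^H E + E B^H), E = A X + X B - C,
# discretized with forward Euler. A and B are built Hermitian positive definite
# so the Sylvester operator is well conditioned and the flow converges.
import numpy as np

rng = np.random.default_rng(0)
n = 4
A0 = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
B0 = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
A = A0 @ A0.conj().T / n + np.eye(n)
B = B0 @ B0.conj().T / n + np.eye(n)
C = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))

X = np.zeros((n, n), dtype=complex)
step = 2e-3                                   # gamma * dt, kept small for stability
for _ in range(20000):
    E = A @ X + X @ B - C                     # residual of A X + X B = C
    X = X - step * (A.conj().T @ E + E @ B.conj().T)

print(np.linalg.norm(A @ X + X @ B - C))      # residual norm, should be near zero
```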
Estimation of potential bridge scour at bridges on state routes in South Dakota, 2003-07
Thompson, Ryan F.; Fosness, Ryan L.
2008-01-01
Flowing water can erode (scour) soils and cause structural failure of a bridge by exposing or undermining bridge foundations (abutments and piers). A rapid scour-estimation technique, known as the level-1.5 method and developed by the U.S. Geological Survey, was used to evaluate potential scour at bridges in South Dakota in a study conducted in cooperation with the South Dakota Department of Transportation. This method was used during 2003-07 to estimate scour for the 100-year and 500-year floods at 734 selected bridges managed by the South Dakota Department of Transportation on State routes in South Dakota. Scour depths and other parameters estimated from the level-1.5 analyses are presented in tabular form. Estimates of potential contraction scour at the 734 bridges ranged from 0 to 33.9 feet for the 100-year flood and from 0 to 35.8 feet for the 500-year flood. Abutment scour ranged from 0 to 36.9 feet for the 100-year flood and from 0 to 45.9 feet for the 500-year flood. Pier scour ranged from 0 to 30.8 feet for the 100-year flood and from 0 to 30.7 feet for the 500-year flood. The scour depths estimated by using the level-1.5 method can be used by the South Dakota Department of Transportation and others to identify bridges that may be susceptible to scour. Scour at 19 selected bridges also was estimated by using the level-2 method. Estimates of contraction, abutment, and pier scour calculated by using the level-1.5 and level-2 methods are presented in tabular and graphical formats. Compared to level-2 scour estimates, the level-1.5 method generally overestimated scour as designed, or in a few cases slightly underestimated scour. Results of the level-2 analyses were used to develop regression equations for change in head and average velocity through the bridge opening. These regression equations derived from South Dakota data are compared to similar regression equations derived from Montana and Colorado data. Future level-1.5 scour investigations in South Dakota may benefit from the use of these South Dakota-specific regression equations for estimating change in stream head and average velocity at the bridge.
Herschlag, Gregory J; Mitran, Sorin; Lin, Guang
2015-06-21
We develop a hierarchy of approximations to the master equation for systems that exhibit translational invariance and finite-range spatial correlation. Each approximation within the hierarchy is a set of ordinary differential equations that considers spatial correlations of varying lattice distance; the assumption is that the full system will have finite spatial correlations and thus the behavior of the models within the hierarchy will approach that of the full system. We provide evidence of this convergence in the context of one- and two-dimensional numerical examples. Lower levels within the hierarchy that consider shorter spatial correlations are shown to be up to three orders of magnitude faster than traditional kinetic Monte Carlo methods (KMC) for one-dimensional systems, while predicting similar system dynamics and steady states as KMC methods. We then test the hierarchy on a two-dimensional model for the oxidation of CO on RuO2(110), showing that low-order truncations of the hierarchy efficiently capture the essential system dynamics. By considering sequences of models in the hierarchy that account for longer spatial correlations, successive model predictions may be used to establish empirical approximations of error estimates. The hierarchy may be thought of as a class of generalized phenomenological kinetic models since each element of the hierarchy approximates the master equation and the lowest level in the hierarchy is identical to a simple existing phenomenological kinetic model.
Flood-frequency characteristics of Wisconsin streams
Walker, John F.; Peppler, Marie C.; Danz, Mari E.; Hubbard, Laura E.
2017-05-22
Flood-frequency characteristics for 360 gaged sites on unregulated rural streams in Wisconsin are presented for percent annual exceedance probabilities ranging from 0.2 to 50 using a statewide skewness map developed for this report. Equations of the relations between flood-frequency and drainage-basin characteristics were developed by multiple-regression analyses. Flood-frequency characteristics for ungaged sites on unregulated, rural streams can be estimated by use of the equations presented in this report. The State was divided into eight areas of similar physiographic characteristics. The most significant basin characteristics are drainage area, soil saturated hydraulic conductivity, main-channel slope, and several land-use variables. The standard error of prediction for the equation for the 1-percent annual exceedance probability flood ranges from 56 to 70 percent for Wisconsin Streams; these values are larger than results presented in previous reports. The increase in the standard error of prediction is likely due to increased variability of the annual-peak discharges, resulting in increased variability in the magnitude of flood peaks at higher frequencies. For each of the unregulated rural streamflow-gaging stations, a weighted estimate based on the at-site log Pearson type III analysis and the multiple regression results was determined. The weighted estimate generally has a lower uncertainty than either the Log Pearson type III or multiple regression estimates. For regulated streams, a graphical method for estimating flood-frequency characteristics was developed from the relations of discharge and drainage area for selected annual exceedance probabilities. Graphs for the major regulated streams in Wisconsin are presented in the report.
Estimation of nonlinear pilot model parameters including time delay.
NASA Technical Reports Server (NTRS)
Schiess, J. R.; Roland, V. R.; Wells, W. R.
1972-01-01
Investigation of the feasibility of using a Kalman filter estimator for the identification of unknown parameters in nonlinear dynamic systems with a time delay. The problem considered is the application of estimation theory to determine the parameters of a family of pilot models containing delayed states. In particular, the pilot-plant dynamics are described by differential-difference equations of the retarded type. The pilot delay, included as one of the unknown parameters to be determined, is kept in pure form as opposed to the Pade approximations generally used for these systems. Problem areas associated with processing real pilot response data are included in the discussion.
Stochastic estimates of gradient from laser measurements for an autonomous Martian roving vehicle
NASA Technical Reports Server (NTRS)
Burger, P. A.
1973-01-01
The general problem of estimating the state vector x from the state equation h = Ax, where h, A, and x are all stochastic, is presented. Specifically, the problem is for an autonomous Martian roving vehicle to utilize laser measurements in estimating the gradient of the terrain. Error exists due to two factors: surface roughness and instrumental measurements. The errors in slope depend on the standard deviations of these noise factors. Numerically, the error in gradient is expressed as a function of instrumental inaccuracies. Certain guidelines for the permissible accuracy of the estimated gradient must be set. It is found that present technology can meet these guidelines.
Mirabelli, Maria C; Preisser, John S; Loehr, Laura R; Agarwal, Sunil K; Barr, R Graham; Couper, David J; Hankinson, John L; Hyun, Noorie; Folsom, Aaron R; London, Stephanie J
2016-04-01
Interpretation of longitudinal information about lung function decline from middle to older age has been limited by loss to follow-up that may be correlated with baseline lung function or the rate of decline. We conducted these analyses to estimate age-related decline in lung function across groups of race, sex, and smoking status while accounting for dropout from the Atherosclerosis Risk in Communities Study. We analyzed data from 13,896 black and white participants, aged 45-64 years at the 1987-1989 baseline clinical examination. Using spirometry data collected at baseline and two follow-up visits, we estimated annual population-averaged mean changes in forced expiratory volume in one second (FEV1) and forced vital capacity (FVC) by race, sex, and smoking status using inverse-probability-weighted independence estimating equations conditioning on being alive. Rates of FEV1 decline estimated using inverse-probability-weighted independence estimating equations conditioning on being alive were higher among white than black participants at age 45 years (e.g., male never smokers: black: -29.5 ml/year; white: -51.9 ml/year), but higher among black than white participants by age 75 (black: -51.2 ml/year; white: -26 ml/year). Observed differences by race were more pronounced among men than among women. By smoking status, FEV1 declines were larger among current than former or never smokers at age 45 across all categories of race and sex. By age 60, FEV1 decline was larger among former and never smokers than among current smokers. Estimated annual declines generated using unweighted generalized estimating equations were smaller for current smokers at younger ages in all four groups of race and sex compared with results from weighted analyses that accounted for attrition. Using methods accounting for dropout from an approximately 25-year health study, estimated rates of lung function decline varied by age, race, sex, and smoking status, with the largest declines observed among current smokers at younger ages. Published by Elsevier Ltd.
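A weighted independence estimating equation of the kind described above can be fit with statsmodels' GEE by supplying an independence working correlation and per-observation inverse-probability weights. The sketch below uses a small synthetic longitudinal data set; the column names and the data-generating values are assumptions, not the ARIC data.

```python
# Inverse-probability-weighted GEE with an independence working correlation.
# Synthetic data: 200 subjects, 3 visits each; 'ipw' stands in for the inverse
# probability of remaining in the study.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n_subjects, n_visits = 200, 3
df = pd.DataFrame({
    "subject_id": np.repeat(np.arange(n_subjects), n_visits),
    "age": np.tile([45.0, 48.0, 51.0], n_subjects),
    "smoker": np.repeat(rng.integers(0, 2, n_subjects), n_visits).astype(float),
    "ipw": rng.uniform(1.0, 2.0, n_subjects * n_visits),
})
df["fev1"] = (3.5 - 0.03 * (df["age"] - 45) - 0.02 * df["smoker"] * (df["age"] - 45)
              + rng.normal(0, 0.2, len(df)))

# design matrix: intercept, age, smoking status, and an age-by-smoking interaction
exog = sm.add_constant(np.column_stack([df["age"], df["smoker"], df["age"] * df["smoker"]]))
model = sm.GEE(df["fev1"], exog, groups=df["subject_id"],
               family=sm.families.Gaussian(),
               cov_struct=sm.cov_struct.Independence(),
               weights=df["ipw"])
print(model.fit().summary())
```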
Considerations of "Combined Probability of Injury" in the next-generation USA frontal NCAP.
Laituri, Tony R; Henry, Scott; Sullivan, Kaye; Nutt, Marvin
2010-08-01
The numerical basis for assigning star ratings in the next-generation USA New Car Assessment Program (NCAP) for frontal impacts was assessed. That basis, the Combined Probability of Injury, or CPI, is the probability of an occupant sustaining an injury to any of the specified body regions. For an NCAP test, a CPI value is computed by (a) using risk curves to convert body-region responses from a test dummy into body-region risks and (b) using a theoretical, overarching CPI equation to convert those separate body-region risks into a single CPI value. Though the general concept of applying a CPI equation to assign star ratings has existed since 1994, there will be numerous changes to the 2011 frontal NCAP: there will be two additional body regions (n = 4 vs. 2), the injury probabilities will be evaluated for lower-severity (more likely) injury levels, and some of the occupant responses will change. These changes could yield more dispersed CPIs that could yield more dispersed ratings. However, the reasons for this increased dispersion should be consistent with real-world findings. Related assessments were the topic of this two-part study, focused on drivers. In Part 1, the CPI equation was assessed without applying risk curves. Specifically, field injury probabilities for the four body regions were used as inputs to the CPI equation, and the resulting equation-produced CPIs were compared with the field CPIs. In Part 2, subject to analyses of test dummy responses from recent NCAP tests, the effect of risk curve choice on CPIs was assessed. Specifically, dispersion statistics were compared for CPIs based on various underlying risk curves applied to data from 2001-2005 model year vehicles (n = 183). From Part 1, the theoretical CPI equation for four body regions demonstrated acceptable fidelity when provided field injury rates (R² = 0.92), with the equation-based CPIs being approximately 12 percent lower than those of ideal correlation. From Part 2, the 2011 NCAP protocol (i.e., application of a four-body-region CPI equation whose inputs were from risk curves) generally increased both the CPIs and their dispersion relative to the current NCAP protocol. However, the CPIs generally increased due to an emphasis on neck injury, an emphasis not observed in real-world crashes. Subject to alternative risk curves for the neck and chest, again there was increased dispersion of the CPIs, but the unrealistic emphasis on the neck was eliminated. However, risk estimates for the knee/thigh/hip (KTH) for NCAP-type events remained understated and did not fall within the confidence bands of the field data. Accordingly, KTH risk estimation is an area for future research.
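The overarching CPI equation treats the body-region risks as statistically independent, so the probability of injury to any region is one minus the product of the per-region probabilities of escaping injury. The example below assumes that standard form; the risk values are made up.

```python
# Combined probability of injury over four body regions, assuming independent risks:
# CPI = 1 - (1 - p_head)(1 - p_neck)(1 - p_chest)(1 - p_kth).
def combined_probability_of_injury(p_head: float, p_neck: float,
                                   p_chest: float, p_kth: float) -> float:
    return 1.0 - (1 - p_head) * (1 - p_neck) * (1 - p_chest) * (1 - p_kth)

print(combined_probability_of_injury(0.05, 0.03, 0.10, 0.08))  # about 0.237
```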
Evaluation of equations that estimate glomerular filtration rate in renal transplant recipients.
De Alencastro, M G; Veronese, F V; Vicari, A R; Gonçalves, L F; Manfro, R C
2014-03-01
The accuracy of equations that estimate the glomerular filtration rate (GFR) in renal transplant patients has not been established; thus their performance was assessed in stable renal transplant patients. Renal transplant patients (N=213) with stable graft function were enrolled. The Chronic Kidney Disease Epidemiology Collaboration (CKD-EPI) equation was used as the reference method and compared with the Cockcroft-Gault (CG), Modification of Diet in Renal Disease (MDRD), Mayo Clinic (MC) and Nankivell equations. Bias, accuracy and concordance rates were determined for all equations relative to CKD-EPI. Mean estimated GFR values of the equations differed significantly from the CKD-EPI values, though the correlations with the reference method were significant. Values of MDRD differed from the CG, MC and Nankivell estimations. The best agreement to classify the chronic kidney disease (CKD) stages was for the MDRD (Kappa=0.649, P<0.001), and for the other equations the agreement was moderate. The MDRD had less bias and narrower agreement limits but underestimated the GFR at levels above 60 mL/min/1.73 m². Conversely, the CG, MC and Nankivell equations overestimated the GFR, and the Nankivell equation had the worst performance. The P15 and P30 values of the MDRD equation were higher than those of the other equations (P<0.001). Despite their correlations, the equations estimated the GFR and CKD stage differently. The MDRD equation was the most accurate, but the sub-optimal performance of all the equations precludes their accurate use in clinical practice.
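Of the estimators compared above, the Cockcroft-Gault equation has the simplest closed form; a commonly cited version is sketched below with illustrative inputs (this is the general formula, not a transplant-specific recalibration).

```python
# Cockcroft-Gault estimated creatinine clearance:
# CrCl = (140 - age) * weight / (72 * serum creatinine), times 0.85 for women.
def cockcroft_gault(age_yr: float, weight_kg: float, scr_mg_dl: float, female: bool) -> float:
    """Estimated creatinine clearance in mL/min."""
    crcl = (140 - age_yr) * weight_kg / (72 * scr_mg_dl)
    return crcl * 0.85 if female else crcl

print(cockcroft_gault(age_yr=50, weight_kg=70, scr_mg_dl=1.2, female=False))  # about 72.9 mL/min
```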
Mohr, Nicholas M; Harland, Karisa K; Shane, Dan M; Ahmed, Azeemuddin; Fuller, Brian M; Torner, James C
2016-12-01
The objective of this study was to evaluate the impact of regionalization on sepsis survival, to describe the role of inter-hospital transfer in rural sepsis care, and to measure the cost of inter-hospital transfer in a predominantly rural state. Observational case-control study using statewide administrative claims data from 2005 to 2014 in a predominantly rural Midwestern state. Mortality and marginal costs were estimated with multivariable generalized estimating equations models and with instrumental variables models. A total of 18 246 patients were included, of which 59% were transferred between hospitals. Transferred patients had higher mortality and longer hospital length-of-stay than non-transferred patients. Using a multivariable generalized estimating equations (GEE) model to adjust for potentially confounding factors, inter-hospital transfer was associated with increased mortality (aOR 1.7, 95% CI 1.5-1.9). Using an instrumental variables model, transfer was associated with a 9.2% increased risk of death. Transfer was associated with additional costs of $6897 (95% CI $5769-8024). Even when limiting to only those patients who received care in the largest hospitals, transfer was still associated with $5167 (95% CI $3696-6638) in additional cost. The majority of rural sepsis patients are transferred, and these transferred patients have higher mortality and significantly increased cost of care. Copyright © 2016 Elsevier Inc. All rights reserved.
Using exogenous variables in testing for monotonic trends in hydrologic time series
Alley, William M.
1988-01-01
One approach that has been used in performing a nonparametric test for monotonic trend in a hydrologic time series consists of a two-stage analysis. First, a regression equation is estimated for the variable being tested as a function of an exogenous variable. A nonparametric trend test such as the Kendall test is then performed on the residuals from the equation. By analogy to stagewise regression and through Monte Carlo experiments, it is demonstrated that this approach will tend to underestimate the magnitude of the trend and to result in some loss in power as a result of ignoring the interaction between the exogenous variable and time. An alternative approach, referred to as the adjusted variable Kendall test, is demonstrated to generally have increased statistical power and to provide more reliable estimates of the trend slope. In addition, the utility of including an exogenous variable in a trend test is examined under selected conditions.
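The two-stage procedure criticized above is easy to reproduce: regress the hydrologic variable on the exogenous covariate, then apply the Kendall test to the residuals against time. The sketch below uses synthetic data and illustrates the two-stage variant, not the adjusted variable Kendall test the author recommends.

```python
# Two-stage trend test: OLS on an exogenous variable, then Kendall's tau on residuals vs time.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
t = np.arange(40)                                    # years
precip = rng.normal(800, 100, size=40)               # exogenous variable
flow = 0.5 * precip + 2.0 * t + rng.normal(0, 30, size=40)   # series with a hidden trend

slope, intercept, *_ = stats.linregress(precip, flow)
residuals = flow - (intercept + slope * precip)

tau, p_value = stats.kendalltau(t, residuals)
print(f"Kendall tau = {tau:.2f}, p = {p_value:.4f}")
```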
NASA Technical Reports Server (NTRS)
Peters, B. C., Jr.; Walker, H. F.
1978-01-01
This paper addresses the problem of obtaining numerically maximum-likelihood estimates of the parameters for a mixture of normal distributions. In recent literature, a certain successive-approximations procedure, based on the likelihood equations, was shown empirically to be effective in numerically approximating such maximum-likelihood estimates; however, the reliability of this procedure was not established theoretically. Here, we introduce a general iterative procedure, of the generalized steepest-ascent (deflected-gradient) type, which is just the procedure known in the literature when the step-size is taken to be 1. We show that, with probability 1 as the sample size grows large, this procedure converges locally to the strongly consistent maximum-likelihood estimate whenever the step-size lies between 0 and 2. We also show that the step-size which yields optimal local convergence rates for large samples is determined in a sense by the 'separation' of the component normal densities and is bounded below by a number between 1 and 2.
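A minimal sketch of the relaxed successive-approximations idea described above, for a two-component univariate normal mixture: the ordinary EM update (step size 1) is computed, then the parameters are moved a fraction omega of the way toward it, with omega between 0 and 2. The data, starting values, and omega = 1.2 are illustrative.

```python
# Generalized steepest-ascent iteration for a two-component 1-D normal mixture.
import numpy as np

rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(-2, 1, 300), rng.normal(3, 1.5, 200)])

def normal_pdf(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

pi, mu, sigma = np.array([0.5, 0.5]), np.array([-1.0, 1.0]), np.array([1.0, 1.0])
omega = 1.2   # step size in (0, 2); omega = 1 recovers ordinary EM

for _ in range(200):
    # E-step: posterior probability that each point came from each component
    dens = np.stack([p * normal_pdf(x, m, s) for p, m, s in zip(pi, mu, sigma)])
    resp = dens / dens.sum(axis=0)
    # M-step targets (the ordinary EM update)
    nk = resp.sum(axis=1)
    pi_em = nk / len(x)
    mu_em = (resp * x).sum(axis=1) / nk
    sigma_em = np.sqrt((resp * (x - mu_em[:, None]) ** 2).sum(axis=1) / nk)
    # relaxed (deflected-gradient) step toward the EM fixed point
    pi = pi + omega * (pi_em - pi)
    mu = mu + omega * (mu_em - mu)
    sigma = np.maximum(sigma + omega * (sigma_em - sigma), 1e-6)
    pi = pi / pi.sum()

print(mu, sigma, pi)
```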
NASA Technical Reports Server (NTRS)
Peters, B. C., Jr.; Walker, H. F.
1976-01-01
The problem of obtaining numerically maximum likelihood estimates of the parameters for a mixture of normal distributions is addressed. In recent literature, a certain successive approximations procedure, based on the likelihood equations, is shown empirically to be effective in numerically approximating such maximum-likelihood estimates; however, the reliability of this procedure was not established theoretically. Here, a general iterative procedure is introduced, of the generalized steepest-ascent (deflected-gradient) type, which is just the procedure known in the literature when the step-size is taken to be 1. With probability 1 as the sample size grows large, it is shown that this procedure converges locally to the strongly consistent maximum-likelihood estimate whenever the step-size lies between 0 and 2. The step-size which yields optimal local convergence rates for large samples is determined in a sense by the separation of the component normal densities and is bounded below by a number between 1 and 2.
Dichotomies for generalized ordinary differential equations and applications
NASA Astrophysics Data System (ADS)
Bonotto, E. M.; Federson, M.; Santos, F. L.
2018-03-01
In this work we establish the theory of dichotomies for generalized ordinary differential equations, introducing the concepts of dichotomies for these equations, investigating their properties and proposing new results. We establish conditions for the existence of exponential dichotomies and bounded solutions. Using the correspondences between generalized ordinary differential equations and other equations, we translate our results to measure differential equations and impulsive differential equations. The fact that we work in the framework of generalized ordinary differential equations allows us to manage functions with many discontinuities and of unbounded variation.
Estimation procedures for understory biomass and fuel loads in sagebrush steppe invaded by woodlands
Alicia L. Reiner; Robin J. Tausch; Roger F. Walker
2010-01-01
Regression equations were developed to predict biomass for 9 shrubs, 9 grasses, and 10 forbs that generally dominate sagebrush ecosystems in central Nevada. Independent variables included percent cover, average height, and plant volume. We explored 2 ellipsoid volumes: one with maximum plant height and 2 crown diameters and another with live crown height and 2 crown...
Relationship Satisfaction and Dyadic Coping in Couples with a Child with Autism Spectrum Disorder
ERIC Educational Resources Information Center
Sim, Angela; Cordier, Reinie; Vaz, Sharmila; Parsons, Richard; Falkmer, Torbjörn
2017-01-01
Dyadic coping strategies may play a pivotal role in relationship satisfaction and explain why some couples adapt positively to the challenges associated with raising a child with ASD and others do not. Survey data from 127 caregivers of a child with ASD were used in generalized estimating equation analyses to investigate the factors associated…
Eric H. Wharton; Douglas M. Griffith
1993-01-01
Presents methods for synthesizing information from existing biomass literature when making biomass assessments over extensive geographic areas, such as for a state or region. Described are general applications to the northeastern United States, and specific applications to Ohio. Tables of appropriate regression equations and the tree and shrub species to which these...
ERIC Educational Resources Information Center
Springvloet, L.; Willemsen, M. C.; Mons, U.; van den Putte, B.; Kunst, A. E.; Guignard, R.; Hummel, K.; Allwright, S.; Siahpush, M.; de Vries, H.; Nagelhout, G. E.
2015-01-01
This study examined educational differences in associations of noticing anti-tobacco information with smoking-related attitudes and quit intentions among adult smokers. Longitudinal data (N = 7571) from two waves of six countries of the International Tobacco Control (ITC) Europe Surveys were included. Generalized estimating equation analyses and…
The Interface Between Theory and Data in Structural Equation Models
Grace, James B.; Bollen, Kenneth A.
2006-01-01
Structural equation modeling (SEM) holds the promise of providing natural scientists the capacity to evaluate complex multivariate hypotheses about ecological systems. Building on its predecessors, path analysis and factor analysis, SEM allows for the incorporation of both observed and unobserved (latent) variables into theoretically based probabilistic models. In this paper we discuss the interface between theory and data in SEM and the use of an additional variable type, the composite, for representing general concepts. In simple terms, composite variables specify the influences of collections of other variables and can be helpful in modeling general relationships of the sort commonly of interest to ecologists. While long recognized as a potentially important element of SEM, composite variables have received very limited use, in part because of a lack of theoretical consideration, but also because of difficulties that arise in parameter estimation when using conventional solution procedures. In this paper we present a framework for discussing composites and demonstrate how the use of partially reduced form models can help to overcome some of the parameter estimation and evaluation problems associated with models containing composites. Diagnostic procedures for evaluating the most appropriate and effective use of composites are illustrated with an example from the ecological literature. It is argued that an ability to incorporate composite variables into structural equation models may be particularly valuable in the study of natural systems, where concepts are frequently multifaceted and the influences of suites of variables are often of interest.
Argenti, Fabrizio; Bianchi, Tiziano; Alparone, Luciano
2006-11-01
In this paper, a new despeckling method based on undecimated wavelet decomposition and maximum a posteriori (MAP) estimation is proposed. Such a method relies on the assumption that the probability density function (pdf) of each wavelet coefficient is generalized Gaussian (GG). The major novelty of the proposed approach is that the parameters of the GG pdf are taken to be space-varying within each wavelet frame. Thus, they may be adjusted to spatial image context, not only to scale and orientation. Since the MAP equation to be solved is a function of the parameters of the assumed pdf model, the variance and shape factor of the GG function are derived from the theoretical moments, which depend on the moments and joint moments of the observed noisy signal and on the statistics of speckle. The solution of the MAP equation yields the MAP estimate of the wavelet coefficients of the noise-free image. The restored SAR image is synthesized from such coefficients. Experimental results, carried out on both synthetic speckled images and true SAR images, demonstrate that MAP filtering can be successfully applied to SAR images represented in the shift-invariant wavelet domain, without resorting to a logarithmic transformation.
Stable boundary conditions and difference schemes for Navier-Stokes equations
NASA Technical Reports Server (NTRS)
Dutt, P.
1985-01-01
The Navier-Stokes equations can be viewed as an incompletely elliptic perturbation of the Euler equations. By using the entropy function for the Euler equations as a measure of energy for the Navier-Stokes equations, it was possible to obtain nonlinear energy estimates for the mixed initial boundary value problem. These estimates are used to derive boundary conditions which guarantee L2 boundedness even when the Reynolds number tends to infinity. Finally, a new difference scheme for modelling the Navier-Stokes equations in multiple dimensions was proposed, for which it is possible to obtain discrete energy estimates exactly analogous to those obtained for the differential equation.
Novel Equations for Estimating Lean Body Mass in Patients With Chronic Kidney Disease.
Tian, Xue; Chen, Yuan; Yang, Zhi-Kai; Qu, Zhen; Dong, Jie
2018-05-01
Simplified methods to estimate lean body mass (LBM), an important nutritional measure representing muscle mass and somatic protein, are lacking in nondialyzed patients with chronic kidney disease (CKD). We developed and tested 2 reliable equations for estimation of LBM in daily clinical practice. The development and validation groups both included 150 nondialyzed patients with CKD Stages 3 to 5. Two equations for estimating LBM, based on mid-arm muscle circumference (MAMC) or handgrip strength (HGS) and also incorporating sex, height, and weight, were developed and validated in CKD patients with dual-energy x-ray absorptiometry as the reference gold-standard method. The new equations were found to exhibit only small biases when compared with dual-energy x-ray absorptiometry, with median differences of 0.94 and 0.46 kg observed in the HGS and MAMC equations, respectively. Good precision and accuracy were achieved for both equations, as reflected by small interquartile ranges in the differences and in the percentages of estimates that were within 20% of measured LBM. The bias, precision, and accuracy of each equation were found to be similar when it was applied to groups of patients divided by the median measured LBM, the median ratio of extracellular to total body water, and the stages of CKD. The equations based on MAMC or HGS were found to provide accurate estimates of LBM in nondialyzed patients with CKD. Copyright © 2017 National Kidney Foundation, Inc. Published by Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Benguria, Rafael D.; Depassier, M. Cristina; Loss, Michael
2012-12-01
We study the effect of a cutoff on the speed of pulled fronts of the one-dimensional reaction diffusion equation. To accomplish this, we first use variational techniques to prove the existence of a heteroclinic orbit in phase space for traveling wave solutions of the corresponding reaction diffusion equation under conditions that include discontinuous reaction profiles. This existence result allows us to prove rigorous upper and lower bounds on the minimal speed of monotonic fronts in terms of the cut-off parameter ɛ. From these bounds we estimate the range of validity of the Brunet-Derrida formula for a general class of reaction terms.
Macroscopic behavior and fluctuation-dissipation response of stochastic ecohydrological systems
NASA Astrophysics Data System (ADS)
Porporato, A. M.
2017-12-01
The coupled dynamics of water, carbon and nutrient cycles in ecohydrological systems is forced by unpredictable and intermittent hydroclimatic fluctuations at different time scales. While modeling and long-term prediction of these complex interactions often requires a probabilistic approach, the resulting stochastic equations however are only solvable in special cases. To obtain information on the behavior of the system one typically has to resort to approximation methods. Here we discuss macroscopic equations for the averages and fluctuation-dissipation estimates for the general correlations between the forcing and the ecohydrological response for the soil moisture-plant biomass interaction and the problem of primary salinization and nitrogen retention in soils.
NASA Technical Reports Server (NTRS)
Chang, S. C.
1984-01-01
Generally, fast direct solvers are not directly applicable to a nonseparable elliptic partial differential equation. This limitation, however, is circumvented by a semi-direct procedure, i.e., an iterative procedure using fast direct solvers. An efficient semi-direct procedure which is easy to implement and applicable to a variety of boundary conditions is presented. The current procedure also possesses other highly desirable properties, i.e.: (1) the convergence rate does not decrease with an increase of grid cell aspect ratio, and (2) the convergence rate is estimated using the coefficients of the partial differential equation being solved.
Sun, Yanqing; Sun, Liuquan; Zhou, Jie
2013-07-01
This paper studies the generalized semiparametric regression model for longitudinal data where the covariate effects are constant for some covariates and time-varying for others. Different link functions can be used to allow more flexible modelling of longitudinal data. The nonparametric components of the model are estimated using a local linear estimating equation and the parametric components are estimated through a profile estimating function. The method automatically adjusts for heterogeneity of sampling times, allowing the sampling strategy to depend on the past sampling history as well as possibly time-dependent covariates without specifically modelling such dependence. A K-fold cross-validation bandwidth selection is proposed as a working tool for locating an appropriate bandwidth. A criterion for selecting the link function is proposed to provide a better fit of the data. Large sample properties of the proposed estimators are investigated. Large sample pointwise and simultaneous confidence intervals for the regression coefficients are constructed. Formal hypothesis testing procedures are proposed to check for the covariate effects and whether the effects are time-varying. A simulation study is conducted to examine the finite sample performances of the proposed estimation and hypothesis testing procedures. The methods are illustrated with a data example.
Emerson, Douglas G.; Vecchia, Aldo V.; Dahl, Ann L.
2005-01-01
The drainage-area ratio method commonly is used to estimate streamflow for sites where no streamflow data were collected. To evaluate the validity of the drainage-area ratio method and to determine if an improved method could be developed to estimate streamflow, a multiple-regression technique was used to determine if drainage area, main channel slope, and precipitation were significant variables for estimating streamflow in the Red River of the North Basin. A separate regression analysis was performed for streamflow for each of three seasons-- winter, spring, and summer. Drainage area and summer precipitation were the most significant variables. However, the regression equations generally overestimated streamflows for North Dakota stations and underestimated streamflows for Minnesota stations. To correct the bias in the residuals for the two groups of stations, indicator variables were included to allow both the intercept and the coefficient for the logarithm of drainage area to depend on the group. Drainage area was the only significant variable in the revised regression equations. The exponents for the drainage-area ratio were 0.85 for the winter season, 0.91 for the spring season, and 1.02 for the summer season.
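The transfer rule behind the drainage-area ratio method is a simple power law; the sketch below applies it with the seasonal exponents reported above (0.85 winter, 0.91 spring, 1.02 summer). The gaged flow and areas are illustrative numbers.

```python
# Drainage-area ratio transfer: Q_ungaged = Q_gaged * (A_ungaged / A_gaged) ** exponent.
SEASON_EXPONENT = {"winter": 0.85, "spring": 0.91, "summer": 1.02}

def transfer_flow(q_gaged_cfs: float, area_gaged_mi2: float,
                  area_ungaged_mi2: float, season: str) -> float:
    """Estimate streamflow at an ungaged site from a nearby gaged site."""
    ratio = area_ungaged_mi2 / area_gaged_mi2
    return q_gaged_cfs * ratio ** SEASON_EXPONENT[season]

print(transfer_flow(q_gaged_cfs=120.0, area_gaged_mi2=400.0,
                    area_ungaged_mi2=150.0, season="summer"))
```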
Causal mediation analysis with a latent mediator.
Albert, Jeffrey M; Geng, Cuiyu; Nelson, Suchitra
2016-05-01
Health researchers are often interested in assessing the direct effect of a treatment or exposure on an outcome variable, as well as its indirect (or mediation) effect through an intermediate variable (or mediator). For an outcome following a nonlinear model, the mediation formula may be used to estimate causally interpretable mediation effects. This method, like others, assumes that the mediator is observed. However, as is common in structural equations modeling, we may wish to consider a latent (unobserved) mediator. We follow a potential outcomes framework and assume a generalized structural equations model (GSEM). We provide maximum-likelihood estimation of GSEM parameters using an approximate Monte Carlo EM algorithm, coupled with a mediation formula approach to estimate natural direct and indirect effects. The method relies on an untestable sequential ignorability assumption; we assess robustness to this assumption by adapting a recently proposed method for sensitivity analysis. Simulation studies show good properties of the proposed estimators in plausible scenarios. Our method is applied to a study of the effect of mother education on occurrence of adolescent dental caries, in which we examine possible mediation through latent oral health behavior. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Optimal estimation for discrete time jump processes
NASA Technical Reports Server (NTRS)
Vaca, M. V.; Tretter, S. A.
1977-01-01
Optimum estimates of nonobservable random variables or random processes which influence the rate functions of a discrete time jump process (DTJP) are obtained. The approach is based on the a posteriori probability of a nonobservable event expressed in terms of the a priori probability of that event and of the sample function probability of the DTJP. A general representation for optimum estimates and recursive equations for minimum mean squared error (MMSE) estimates are obtained. MMSE estimates are nonlinear functions of the observations. The problem of estimating the rate of a DTJP when the rate is a random variable with a probability density function of the form cx super K (l-x) super m and show that the MMSE estimates are linear in this case. This class of density functions explains why there are insignificant differences between optimum unconstrained and linear MMSE estimates in a variety of problems.
Belaineh, Getachew; Sumner, David; Carter, Edward; Clapp, David
2013-01-01
Potential evapotranspiration (PET) and reference evapotranspiration (RET) data are usually critical components of hydrologic analysis. Many different equations are available to estimate PET and RET. Most of these equations, such as the Priestley-Taylor and Penman-Monteith methods, rely on detailed meteorological data collected at ground-based weather stations. Few weather stations collect enough data to estimate PET or RET using one of the more complex evapotranspiration equations. Currently, satellite data integrated with ground meteorological data are used with one of these evapotranspiration equations to accurately estimate PET and RET. For periods earlier than the last few decades, however, the historical reconstructions of PET and RET needed for many hydrologic analyses are limited by the paucity of satellite data and of some types of ground data. Air temperature stands out as the most generally available meteorological ground data type over the last century. Temperature-based approaches used with readily available historical temperature data offer the potential for long period-of-record PET and RET historical reconstructions. A challenge is the inconsistency between the more accurate, but more data intensive, methods appropriate for more recent periods and the less accurate, but less data intensive, methods appropriate to the more distant past. In this study, multiple methods are harmonized in a seamless reconstruction of historical PET and RET by quantifying and eliminating the biases of the simple Hargreaves-Samani method relative to the more complex and accurate Priestley-Taylor and Penman-Monteith methods. This harmonization process is used to generate long-term, internally consistent, spatiotemporal databases of PET and RET.
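The harmonization idea can be sketched as computing the temperature-based Hargreaves-Samani reference ET and rescaling it by a bias factor estimated from a period of overlap with a more complete method such as Penman-Monteith. The coefficient 0.0023 form shown is the commonly cited Hargreaves-Samani equation with extraterrestrial radiation expressed as equivalent evaporation; all numeric inputs below are illustrative.

```python
# Hargreaves-Samani ET0 with a multiplicative bias correction derived from an overlap period.
import math

def hargreaves_samani_et0(tmax_c: float, tmin_c: float, ra_mm_per_day: float) -> float:
    """Reference ET (mm/day) from daily max/min temperature and extraterrestrial radiation."""
    tmean = (tmax_c + tmin_c) / 2.0
    return 0.0023 * ra_mm_per_day * (tmean + 17.8) * math.sqrt(tmax_c - tmin_c)

def bias_corrected_et0(tmax_c, tmin_c, ra_mm_per_day, bias_factor):
    # bias_factor = mean(ET0, reference method) / mean(ET0, Hargreaves-Samani) over overlap years
    return bias_factor * hargreaves_samani_et0(tmax_c, tmin_c, ra_mm_per_day)

print(bias_corrected_et0(tmax_c=31.0, tmin_c=21.0, ra_mm_per_day=16.0, bias_factor=1.08))
```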
Do group-specific equations provide the best estimates of stature?
Albanese, John; Osley, Stephanie E; Tuck, Andrew
2016-04-01
An estimate of stature can be used by a forensic anthropologist in the preliminary identification of an unknown individual when human skeletal remains are recovered. Fordisc is a computer application that can be used to estimate stature; like many other methods it requires the user to assign an unknown individual to a specific group defined by sex, race/ancestry, and century of birth before an equation is applied. The assumption is that a group-specific equation controls for group differences and should provide the best results most often. In this paper we assess the utility and benefits of using group-specific equations to estimate stature using Fordisc. Using the maximum length of the humerus and the maximum length of the femur from individuals with documented stature, we address the question: Do sex-, race/ancestry- and century-specific stature equations provide the best results when estimating stature? The data for our sample of 19th Century White males (n=28) were entered into Fordisc and stature was estimated using 22 different equation options for a total of 616 trials: 19th and 20th Century Black males, 19th and 20th Century Black females, 19th and 20th Century White females, 19th and 20th Century White males, 19th and 20th Century any, and 20th Century Hispanic males. The equations were assessed for utility in any one case (how many times the estimated range bracketed the documented stature) and in aggregate using 1-way ANOVA and other approaches. The group-specific equation that should have provided the best results was outperformed by several other equations for both the femur and the humerus. These results suggest that group-specific equations do not provide better results for estimating stature and are at the same time more difficult to apply, because an unknown must be allocated to a given group before stature can be estimated. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
NASA Technical Reports Server (NTRS)
Meneghini, Robert; Liao, Liang
2013-01-01
As shown by Takahashi et al., multiple path attenuation estimates over the field of view of an airborne or spaceborne weather radar are feasible for off-nadir incidence angles. This follows from the fact that the surface reference technique, which provides path attenuation estimates, can be applied to each radar range gate that intersects the surface. This study builds on this result by showing that three of the modified Hitschfeld-Bordan estimates for the attenuation-corrected radar reflectivity factor can be generalized to the case where multiple path attenuation estimates are available, thereby providing a correction to the effects of nonuniform beamfilling. A simple simulation is presented showing some strengths and weaknesses of the approach.
Genetic network inference as a series of discrimination tasks.
Kimura, Shuhei; Nakayama, Satoshi; Hatakeyama, Mariko
2009-04-01
Genetic network inference methods based on sets of differential equations generally require a great deal of time, as the equations must be solved many times. To reduce the computational cost, researchers have proposed other methods for inferring genetic networks by solving sets of differential equations only a few times, or even without solving them at all. When we try to obtain reasonable network models using these methods, however, we must estimate the time derivatives of the gene expression levels with great precision. In this study, we propose a new method to overcome the drawbacks of inference methods based on sets of differential equations. Our method infers genetic networks by obtaining classifiers capable of predicting the signs of the derivatives of the gene expression levels. For this purpose, we defined a genetic network inference problem as a series of discrimination tasks, then solved the defined series of discrimination tasks with a linear programming machine. Our experimental results demonstrated that the proposed method is capable of correctly inferring genetic networks, and doing so more than 500 times faster than the other inference methods based on sets of differential equations. Next, we applied our method to actual expression data of the bacterial SOS DNA repair system. And finally, we demonstrated that our approach relates to the inference method based on the S-system model. Though our method provides no estimation of the kinetic parameters, it should be useful for researchers interested only in the network structure of a target system. Supplementary data are available at Bioinformatics online.
Models for nearly every occasion: Part III - One box decreasing emission models.
Hewett, Paul; Ganser, Gary H
2017-11-01
New one box "well-mixed room" decreasing emission (DE) models are introduced that allow for local exhaust or local exhaust with filtered return, as well as the recirculation of a filtered (or cleaned) portion of the general room ventilation. For each control device scenario, a steady state and transient model is presented. The transient equations predict the concentration at any time t after the application of a known mass of a volatile substance to a surface, and can be used to predict the task exposure profile, the average task exposure, as well as peak and short-term exposures. The steady state equations can be used to predict the "average concentration per application" that is reached whenever the substance is repeatedly applied. Whenever the beginning and end concentrations are expected to be zero (or near zero), the steady state equations can also be used to predict the average concentration for a single task with multiple applications during the task, or even a series of such tasks. The transient equations should be used whenever these criteria cannot be met. A structured calibration procedure is proposed that utilizes a mass balance approach. Depending upon the DE model selected, one or more calibration measurements are collected. Using rearranged versions of the steady state equations, estimates of the model variables (e.g., the mass of the substance applied during each application, local exhaust capture efficiency, and the various cleaning or filtration efficiencies) can be calculated. A new procedure is proposed for estimating the emission rate constant.
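For context, the simplest one box decreasing emission model (without the local-exhaust or filtered-recirculation terms the article adds) is V dC/dt = G0 exp(-alpha t) - Q C with C(0) = 0, which has the closed-form transient solution used below. The parameter values are illustrative only.

```python
# Transient solution of the basic one-box decreasing-emission model,
# C(t) = G0/(Q - alpha*V) * (exp(-alpha*t) - exp(-Q*t/V)), valid when Q/V != alpha.
import math

def concentration(t_min: float, g0_mg_min: float, alpha_per_min: float,
                  q_m3_min: float, v_m3: float) -> float:
    """Room concentration (mg/m^3) at time t for the simple decreasing-emission model."""
    return (g0_mg_min / (q_m3_min - alpha_per_min * v_m3)) * (
        math.exp(-alpha_per_min * t_min) - math.exp(-q_m3_min * t_min / v_m3)
    )

# peak of the task exposure profile, scanned over the first hour
profile = [concentration(t, g0_mg_min=100.0, alpha_per_min=0.1,
                         q_m3_min=3.0, v_m3=60.0) for t in range(0, 61)]
print(max(profile))
```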
Carlsohn, Anja; Scharhag-Rosenberger, Friederike; Cassel, Michael; Mayer, Frank
2011-01-01
Athletes may differ in their resting metabolic rate (RMR) from the general population. However, to estimate the RMR in athletes, prediction equations that have not been validated in athletes are often used. The purpose of this study was therefore to verify the applicability of commonly used RMR predictions for use in athletes. The RMR was measured by indirect calorimetry in 17 highly trained rowers and canoeists of the German national teams (BMI 24 ± 2 kg/m², fat-free mass 69 ± 15 kg). In addition, the RMR was predicted using the Cunningham (CUN) and Harris-Benedict (HB) equations. A two-way repeated measures ANOVA was calculated to test for differences between predicted and measured RMR (α = 0.05). The root mean square percentage error (RMSPE) was calculated and the Bland-Altman procedure was used to quantify the bias for each prediction. Prediction equations significantly underestimated the RMR in males (p < 0.001). The RMSPE was calculated to be 18.4% (CUN) and 20.9% (HB) in the entire group. The bias was 133 kcal/24 h for CUN and 202 kcal/24 h for HB. Predictions significantly underestimate the RMR in male heavyweight endurance athletes but not in females. In athletes with a high fat-free mass, prediction equations might therefore not be applicable to estimate energy requirements. Instead, measurement of the resting energy expenditure or specific prediction equations might be needed for the individual heavyweight athlete. Copyright © 2011 S. Karger AG, Basel.
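The two predictions compared above have simple closed forms; commonly cited versions are shown below (Cunningham from fat-free mass, Harris-Benedict from weight, height, age, and sex). The example inputs are illustrative, not the study data.

```python
# Commonly cited Cunningham and Harris-Benedict resting metabolic rate predictions (kcal/24 h).
def cunningham_rmr(ffm_kg: float) -> float:
    return 500 + 22 * ffm_kg

def harris_benedict_rmr(weight_kg: float, height_cm: float, age_yr: float, male: bool) -> float:
    if male:
        return 66.5 + 13.75 * weight_kg + 5.003 * height_cm - 6.775 * age_yr
    return 655.1 + 9.563 * weight_kg + 1.850 * height_cm - 4.676 * age_yr

print(cunningham_rmr(ffm_kg=69))                                            # about 2018 kcal/24 h
print(harris_benedict_rmr(weight_kg=90, height_cm=190, age_yr=25, male=True))
```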
Feaster, Toby D.; Tasker, Gary D.
2002-01-01
Data from 167 streamflow-gaging stations in or near South Carolina with 10 or more years of record through September 30, 1999, were used to develop two methods for estimating the magnitude and frequency of floods in South Carolina for rural ungaged basins that are not significantly affected by regulation. Flood frequency estimates for 54 gaged sites in South Carolina were computed by fitting the water-year peak flows for each site to a log-Pearson Type III distribution. As part of the computation of flood-frequency estimates for gaged sites, new values for generalized skew coefficients were developed. Flood-frequency analyses also were made for gaging stations that drain basins from more than one physiographic province. The U.S. Geological Survey, in cooperation with the South Carolina Department of Transportation, updated these data from previous flood-frequency reports to aid officials who are active in floodplain management as well as those who design bridges, culverts, and levees, or other structures near streams where flooding is likely to occur. Regional regression analysis, using generalized least squares regression, was used to develop a set of predictive equations that can be used to estimate the 2-, 5-, 10-, 25-, 50-, 100-, 200-, and 500-year recurrence-interval flows for rural ungaged basins in the Blue Ridge, Piedmont, upper Coastal Plain, and lower Coastal Plain physiographic provinces of South Carolina. The predictive equations are all functions of drainage area. Average errors of prediction for these regression equations ranged from -16 to 19 percent for the 2-year recurrence-interval flow in the upper Coastal Plain to -34 to 52 percent for the 500-year recurrence interval flow in the lower Coastal Plain. A region-of-influence method also was developed that interactively estimates recurrence- interval flows for rural ungaged basins in the Blue Ridge of South Carolina. The region-of-influence method uses regression techniques to develop a unique relation between flow and basin characteristics for an individual watershed. This, then, can be used to estimate flows at ungaged sites. Because the computations required for this method are somewhat complex, a computer application was developed that performs the computations and compares the predictive errors for this method. The computer application includes the option of using the region-of-influence method, or the generalized least squares regression equations from this report to compute estimated flows and errors of prediction specific to each ungaged site. From a comparison of predictive errors using the region-of-influence method with those computed using the regional regression method, the region-of-influence method performed systematically better only in the Blue Ridge and is, therefore, not recommended for use in the other physiographic provinces. Peak-flow data for the South Carolina stations used in the regionalization study are provided in appendix A, which contains gaging station information, log-Pearson Type III statistics, information on stage-flow relations, and water-year peak stages and flows. For informational purposes, water-year peak-flow data for stations on regulated streams in South Carolina also are provided in appendix D. Other information pertaining to the regulated streams is provided in the text of the report.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Danielson, Thomas; Hin, Celine; Savara, Aditya
Lattice based kinetic Monte Carlo (KMC) simulations have been used to determine a functional form for the second order adsorption isotherms on two commonly investigated crystal surfaces: the (111) fluorite surface and the (100) perovskite surface which has the same geometric symmetry as the NaCl (100) surface. The functional form is generalized to be applicable to all values of the equilibrium constant by a shift along the pressure axis. Functions have been determined for estimating the pressure at which a desired coverage would be achieved and for estimating the coverage at a certain pressure. The generalized form has been calculated by investigating the surface adsorbate coverage across a range of thermodynamic equilibrium constants that span the range 10^-26 to 10^13. Finally, the equations have been shown to be general for any value of the adsorption equilibrium constant.
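The pressure-axis shift described above can be illustrated with the classic dissociative (second-order) Langmuir isotherm, theta = sqrt(K*P)/(1 + sqrt(K*P)): coverage depends on K and P only through their product, so changing the equilibrium constant shifts the curve along a log-pressure axis. This is a generic illustration, not the fitted functional form from the paper.

```python
# Dissociative Langmuir isotherm and its inverse (pressure needed for a target coverage).
import math

def dissociative_langmuir_coverage(pressure: float, k_eq: float) -> float:
    x = math.sqrt(k_eq * pressure)
    return x / (1.0 + x)

def pressure_for_coverage(theta: float, k_eq: float) -> float:
    # invert the isotherm to estimate the pressure giving a desired coverage
    return (theta / (1.0 - theta)) ** 2 / k_eq

print(dissociative_langmuir_coverage(pressure=1e-3, k_eq=1e6))   # about 0.969
print(pressure_for_coverage(theta=0.5, k_eq=1e6))                # 1e-06
```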
DOE Office of Scientific and Technical Information (OSTI.GOV)
Danielson, Thomas; Hin, Celine; Department of Mechanical Engineering, Virginia Polytechnic Institute and State University, Blacksburg, Virginia 24061
Lattice based kinetic Monte Carlo simulations have been used to determine a functional form for the second order adsorption isotherms on two commonly investigated crystal surfaces: the (111) fluorite surface and the (100) perovskite surface which has the same geometric symmetry as the NaCl (100) surface. The functional form is generalized to be applicable to all values of the equilibrium constant by a shift along the pressure axis. Functions have been determined for estimating the pressure at which a desired coverage would be achieved and, conversely, for estimating the coverage at a certain pressure. The generalized form has been calculated by investigating the surface adsorbate coverage across a range of thermodynamic equilibrium constants that span the range 10^-26 to 10^13. The equations have been shown to be general for any value of the adsorption equilibrium constant.
ERIC Educational Resources Information Center
Choi, Sae Il
2009-01-01
This study used simulation (a) to compare the kernel equating method to traditional equipercentile equating methods under the equivalent-groups (EG) design and the nonequivalent-groups with anchor test (NEAT) design and (b) to apply the parametric bootstrap method for estimating standard errors of equating. A two-parameter logistic item response…
Fractional Order Two-Temperature Dual-Phase-Lag Thermoelasticity with Variable Thermal Conductivity
Mallik, Sadek Hossain; Kanoria, M.
2014-01-01
A new theory of two-temperature generalized thermoelasticity is constructed in the context of a new consideration of dual-phase-lag heat conduction with fractional orders. The theory is then adopted to study thermoelastic interaction in an isotropic homogeneous semi-infinite generalized thermoelastic solid with variable thermal conductivity whose boundary is subjected to thermal and mechanical loading. The basic equations of the problem have been written in the form of a vector-matrix differential equation in the Laplace transform domain, which is then solved by using a state-space approach. The inversion of the Laplace transforms is computed numerically using a Fourier series expansion technique. The numerical estimates of the quantities of physical interest are obtained and depicted graphically. Some comparisons of the thermophysical quantities are shown in figures to study the effects of the variable thermal conductivity, temperature discrepancy, and the fractional order parameter. PMID:27419210
Zajac, M
1977-01-01
General (k) and specific (k1 and k2) first-order rate constants for the parallel hydrolysis reactions catalyzed by H+ ions were estimated for sulfadiazine (I), sulfamerazine (II), sulfadimidine (III), sulfaperine (IV) and sulfamethoxydiazine (V), hydrolyzed in 1 mol/dm³ HCl at 333, 343, 355 and 363 K. General first-order rate constants for the spontaneous hydrolysis of I-V in borate buffer pH 9.20 at 403, 411 and 418 K were also determined. Thermodynamic parameters of the reaction (ΔHa, ΔH≠, ΔS≠, ΔG≠ and log A) were calculated. The effect of substituents in positions 4, 5 and 6 of the pyrimidine ring on the rate of hydrolysis was interpreted in terms of the Hammett equation.
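The Arrhenius-type regression behind parameters such as log A can be sketched as follows; the rate constants are invented placeholders, not values from this study:

```python
# Sketch: Arrhenius regression of first-order rate constants measured at several
# temperatures, of the kind used to obtain an activation energy and log A.
import numpy as np

R = 8.314  # J/(mol K)
T = np.array([333.0, 343.0, 355.0, 363.0])          # K
k = np.array([2.1e-6, 5.6e-6, 1.9e-5, 3.8e-5])      # 1/s (hypothetical values)

slope, intercept = np.polyfit(1.0 / T, np.log(k), 1)   # ln k = ln A - Ea/(R T)
Ea = -slope * R                                         # activation energy, J/mol
logA = intercept / np.log(10.0)                         # log10 of pre-exponential factor

print(f"Ea = {Ea/1000:.1f} kJ/mol, log A = {logA:.2f}")
```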
NASA Technical Reports Server (NTRS)
Choudhury, B. J.; Blanchard, B. J.
1981-01-01
The antecedent precipitation index (API) is a useful indicator of soil moisture conditions for watershed runoff calculations, and recent attempts to correlate this index with spaceborne microwave observations have been fairly successful. It is shown that the prognostic equation for soil moisture used in some atmospheric general circulation models, together with the Thornthwaite-Mather parameterization of actual evapotranspiration, leads to API equations. The recession coefficient for API is found to depend on climatic factors through potential evapotranspiration and on soil texture through the field capacity and the permanent wilting point. Climatological data for Wisconsin, together with a recently developed model for global insolation, are used to simulate the annual trend of the recession coefficient. Good quantitative agreement is shown with the observed trend at the Fennimore and Colby watersheds in Wisconsin. It is suggested that API could be a unifying vocabulary for watershed and atmospheric general circulation modelers.
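A minimal sketch of the API recursion underlying the discussion above, with an assumed recession coefficient; nothing here reproduces the Wisconsin parameterization:

```python
# Sketch of an antecedent precipitation index (API) recursion with a recession
# coefficient k that may vary through the year. Values are illustrative only.
import numpy as np

def api_series(precip_mm, k):
    """API_t = k_t * API_{t-1} + P_t, with k given per time step (or as a scalar)."""
    k = np.broadcast_to(np.asarray(k, dtype=float), (len(precip_mm),))
    api = np.zeros(len(precip_mm))
    prev = 0.0
    for t, (p, kt) in enumerate(zip(precip_mm, k)):
        prev = kt * prev + p
        api[t] = prev
    return api

rng = np.random.default_rng(0)
daily_precip = rng.gamma(shape=0.3, scale=8.0, size=30)   # hypothetical daily rain (mm)
print(np.round(api_series(daily_precip, k=0.9), 1))
```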
Novel Equations for Estimating Lean Body Mass in Peritoneal Dialysis Patients
Dong, Jie; Li, Yan-Jun; Xu, Rong; Yang, Zhi-Kai; Zheng, Ying-Dong
2015-01-01
♦ Objectives: To develop and validate equations for estimating lean body mass (LBM) in peritoneal dialysis (PD) patients. ♦ Methods: Two equations for estimating LBM, one based on mid-arm muscle circumference (MAMC) and hand grip strength (HGS), i.e., LBM-M-H, and the other based on HGS, i.e., LBM-H, were developed and validated with LBM obtained by dual-energy X-ray absorptiometry (DEXA). The developed equations were compared to LBM estimated from creatinine kinetics (LBM-CK) and anthropometry (LBM-A) in terms of bias, precision, and accuracy. The prognostic values of LBM estimated from the equations in all-cause mortality risk were assessed. ♦ Results: The developed equations incorporated gender, height, weight, and dialysis duration. Compared to LBM-DEXA, the bias of the developed equations was lower than that of LBM-CK and LBM-A. Additionally, LBM-M-H and LBM-H had better accuracy and precision. The prognostic values of LBM in all-cause mortality risk based on LBM-M-H, LBM-H, LBM-CK, and LBM-A were similar. ♦ Conclusions: Lean body mass estimated by the new equations based on MAMC and HGS was correlated with LBM obtained by DEXA and may serve as practical surrogate markers of LBM in PD patients. PMID:26293839
Simple effective rule to estimate the jamming packing fraction of polydisperse hard spheres.
Santos, Andrés; Yuste, Santos B; López de Haro, Mariano; Odriozola, Gerardo; Ogarko, Vitaliy
2014-04-01
A recent proposal in which the equation of state of a polydisperse hard-sphere mixture is mapped onto that of the one-component fluid is extrapolated beyond the freezing point to estimate the jamming packing fraction ϕJ of the polydisperse system as a simple function of M₁M₃/M₂², where Mₖ is the kth moment of the size distribution. An analysis of experimental and simulation data of ϕJ for a large number of different mixtures shows a remarkable general agreement with the theoretical estimate. To give extra support to the procedure, simulation data for seventeen mixtures in the high-density region are used to infer the equation of state of the pure hard-sphere system in the metastable region. An excellent collapse of the inferred curves up to the glass transition and a significant narrowing of the different out-of-equilibrium glass branches all the way to jamming are observed. Thus, the present approach provides an extremely simple criterion to unify in a common framework and to give coherence to data coming from very different polydisperse hard-sphere mixtures.
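The moment combination used in the estimate can be computed directly from a sampled size distribution, as in this sketch (the log-normal distribution is just an example; the mapping from the ratio to ϕJ itself is not reproduced):

```python
# Sketch: the moment combination M1*M3/M2^2 evaluated for a polydisperse
# hard-sphere size distribution. The distribution below is hypothetical.
import numpy as np

def moment_ratio(diameters, weights=None):
    """Return M1*M3/M2^2, with Mk the k-th moment of the size distribution."""
    d = np.asarray(diameters, dtype=float)
    w = np.ones_like(d) if weights is None else np.asarray(weights, dtype=float)
    w = w / w.sum()
    M = {k: np.sum(w * d**k) for k in (1, 2, 3)}
    return M[1] * M[3] / M[2] ** 2

rng = np.random.default_rng(1)
sizes = rng.lognormal(mean=0.0, sigma=0.25, size=100_000)   # log-normal polydispersity
print(f"M1*M3/M2^2 = {moment_ratio(sizes):.4f}  (equals 1.0 for a monodisperse system)")
```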
Sample size determination for GEE analyses of stepped wedge cluster randomized trials.
Li, Fan; Turner, Elizabeth L; Preisser, John S
2018-06-19
In stepped wedge cluster randomized trials, intact clusters of individuals switch from control to intervention from a randomly assigned period onwards. Such trials are becoming increasingly popular in health services research. When a closed cohort is recruited from each cluster for longitudinal follow-up, proper sample size calculation should account for three distinct types of intraclass correlations: the within-period, the inter-period, and the within-individual correlations. Setting the latter two correlation parameters to be equal accommodates cross-sectional designs. We propose sample size procedures for continuous and binary responses within the framework of generalized estimating equations that employ a block exchangeable within-cluster correlation structure defined from the distinct correlation types. For continuous responses, we show that the intraclass correlations affect power only through two eigenvalues of the correlation matrix. We demonstrate that analytical power agrees well with simulated power for as few as eight clusters, when data are analyzed using bias-corrected estimating equations for the correlation parameters concurrently with a bias-corrected sandwich variance estimator. © 2018, The International Biometric Society.
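A sketch of the block-exchangeable within-cluster correlation matrix and its distinct eigenvalues, assuming a closed cohort of n individuals followed over T periods; the correlation values are illustrative only:

```python
# Sketch of a block-exchangeable within-cluster correlation matrix built from the
# within-period (a0), inter-period (a1) and within-individual (a2) correlations.
import numpy as np

def block_exchangeable(n, T, a0, a1, a2):
    """Correlation matrix for n*T observations ordered period by period."""
    J_n = np.ones((n, n))
    I_n = np.eye(n)
    within = (1 - a0) * I_n + a0 * J_n          # same-period block
    between = (a2 - a1) * I_n + a1 * J_n        # different-period block
    return np.kron(np.eye(T), within) + np.kron(np.ones((T, T)) - np.eye(T), between)

R = block_exchangeable(n=10, T=4, a0=0.05, a1=0.025, a2=0.4)
eigvals = np.unique(np.round(np.linalg.eigvalsh(R), 6))
print("distinct eigenvalues:", eigvals)   # power depends on only a subset of these
```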
Chu, Khim Hoong
2017-11-09
Surface diffusion coefficients may be estimated by fitting solutions of a diffusion model to batch kinetic data. For non-linear systems, a numerical solution of the diffusion model's governing equations is generally required. We report here the application of the classic Langmuir kinetics model to extract surface diffusion coefficients from batch kinetic data. The use of the Langmuir kinetics model in lieu of the conventional surface diffusion model allows derivation of an analytical expression. The parameter estimation procedure requires determining the Langmuir rate coefficient from which the pertinent surface diffusion coefficient is calculated. Surface diffusion coefficients within the 10⁻⁹ to 10⁻⁶ cm²/s range obtained by fitting the Langmuir kinetics model to experimental kinetic data taken from the literature are found to be consistent with the corresponding values obtained from the traditional surface diffusion model. The virtue of this simplified parameter estimation method is that it reduces the computational complexity, as the analytical expression involves only an algebraic equation in closed form which is easily evaluated by spreadsheet computation.
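As a schematic of the fitting step, the sketch below fits a pseudo-first-order approach-to-equilibrium curve to hypothetical batch uptake data; it stands in for the paper's closed-form Langmuir-kinetics expression, and the conversion of the rate coefficient to a surface diffusion coefficient (which needs particle size and equilibrium parameters) is not shown:

```python
# Sketch: extracting a rate coefficient from batch uptake data by fitting a simple
# closed-form kinetic expression with scipy's curve_fit.
import numpy as np
from scipy.optimize import curve_fit

def uptake(t, q_eq, k):
    """q(t) = q_eq * (1 - exp(-k t)): pseudo-first-order approach to equilibrium."""
    return q_eq * (1.0 - np.exp(-k * t))

# Hypothetical batch kinetic data: time (min) and solid-phase loading (mg/g)
t_obs = np.array([0, 5, 10, 20, 40, 60, 120, 240], dtype=float)
q_obs = np.array([0.0, 1.8, 3.1, 4.9, 6.6, 7.4, 8.1, 8.3])

(q_eq, k), _ = curve_fit(uptake, t_obs, q_obs, p0=[8.0, 0.05])
print(f"fitted q_eq = {q_eq:.2f} mg/g, rate coefficient k = {k:.4f} 1/min")
```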
NASA Astrophysics Data System (ADS)
Mahmood, H.; Siddique, M. R. H.; Akhter, M.
2016-08-01
Estimations of biomass, volume and carbon stock are important in the decision-making process for the sustainable management of a forest. These estimations can be conducted by using available allometric equations of biomass and volume. The present study aims to: i. develop a compilation of verified allometric equations of biomass, volume, and carbon for trees and shrubs of Bangladesh, and ii. find out the gaps and scope for further development of allometric equations for different trees and shrubs of Bangladesh. Key stakeholders (government departments, research organizations, academic institutions, and potential individual researchers) were identified considering their involvement in the use and development of allometric equations. A list of documents containing allometric equations was prepared from secondary sources. The documents were collected, examined, and sorted to avoid repetition, yielding 50 documents. These equations were tested through a quality control scheme involving operational verification, conceptual verification, applicability, and statistical credibility. A total of 517 allometric equations for 80 species of trees, shrubs, palm, and bamboo were recorded. In addition, 222 allometric equations for 39 species were validated through the quality control scheme. Among the verified equations, 20%, 12% and 62% of equations were for green biomass, oven-dried biomass, and volume, respectively, and 4 tree species contributed 37% of the total verified equations. Five gaps have been pinpointed for the existing allometric equations of Bangladesh: a. little work on allometric equations of common tree and shrub species, b. concentration of most of the work on certain species, c. a very small proportion of allometric equations for biomass estimation, d. no allometric equations for belowground biomass and carbon estimation, and e. a low proportion of valid allometric equations. It is recommended that site- and species-specific allometric equations should be developed and that consistency in field sampling, sample processing, data recording and selection of allometric equations should be maintained to ensure accuracy in estimation of biomass, volume, and carbon stock in different forest types of Bangladesh.
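A typical allometric equation of the kind being compiled is a power law fitted on log-transformed data; the sketch below uses invented diameter-biomass pairs, not data from the Bangladesh compilation:

```python
# Sketch: fitting an allometric power law B = a * D**b on log-transformed data.
import numpy as np

dbh_cm = np.array([6.0, 9.5, 14.0, 18.5, 24.0, 30.0, 37.5, 45.0])        # diameter
biomass_kg = np.array([4.1, 12.8, 34.0, 70.5, 140.0, 255.0, 470.0, 790.0])

b, log_a = np.polyfit(np.log(dbh_cm), np.log(biomass_kg), 1)
a = np.exp(log_a)
print(f"B = {a:.3f} * D^{b:.2f}")
print("predicted aboveground biomass at D = 20 cm:", round(a * 20.0 ** b, 1), "kg")
```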
NASA Astrophysics Data System (ADS)
Jaenisch, Holger; Handley, James
2013-06-01
We introduce a generalized numerical prediction and forecasting algorithm. We have previously published it for malware byte sequence feature prediction and generalized distribution modeling for disparate test article analysis. We show how non-trivial non-periodic extrapolation of a numerical sequence (forecast and backcast) from the starting data is possible. Our ancestor-progeny prediction can yield new options for evolutionary programming. Our equations enable analytical integrals and derivatives to any order. Interpolation is controllable from smooth continuous to fractal structure estimation. We show how our generalized trigonometric polynomial can be derived using a Fourier transform.
Technique for estimating depth of floods in Tennessee
Gamble, C.R.
1983-01-01
Estimates of flood depths are needed for design of roadways across flood plains and for other types of construction along streams. Equations for estimating flood depths in Tennessee were derived using data for 150 gaging stations. The equations are based on drainage basin size and can be used to estimate depths of the 10-year and 100-year floods for four hydrologic areas. A method also was developed for estimating depth of floods having recurrence intervals between 10 and 100 years. Standard errors range from 22 to 30 percent for the 10-year depth equations and from 23 to 30 percent for the 100-year depth equations. (USGS)
Observational constraints on cosmological models with Chaplygin gas and quadratic equation of state
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sharov, G.S., E-mail: german.sharov@mail.ru
Observational manifestations of accelerated expansion of the universe, in particular, recent data for Type Ia supernovae, baryon acoustic oscillations, for the Hubble parameter H(z) and cosmic microwave background constraints are described with different cosmological models. We compare the ΛCDM, the models with generalized and modified Chaplygin gas and the model with quadratic equation of state. For these models we estimate optimal model parameters and their permissible errors with different approaches to calculation of the sound horizon scale r_s(z_d). Among the considered models the best value of χ² is achieved for the model with quadratic equation of state, but it has 2 additional parameters in comparison with the ΛCDM and therefore is not favored by the Akaike information criterion.
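The model-selection step can be sketched as a simple AIC comparison, AIC = χ²_min + 2k; the χ² values and parameter counts below are placeholders, not the fitted results:

```python
# Sketch: penalizing the best-fit chi-square by the number of free parameters
# via the Akaike information criterion, AIC = chi2_min + 2k.
def aic(chi2_min, n_params):
    return chi2_min + 2 * n_params

models = {
    "LCDM":                        (580.0, 2),   # (chi2_min, free parameters), hypothetical
    "generalized Chaplygin gas":   (577.5, 3),
    "quadratic equation of state": (575.0, 4),
}
for name, (chi2, k) in models.items():
    print(f"{name:28s} chi2={chi2:6.1f}  k={k}  AIC={aic(chi2, k):6.1f}")
```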
Gries, Katharine S; Regier, Dean A; Ramsey, Scott D; Patrick, Donald L
2017-06-01
To develop a statistical model generating utility estimates for prostate cancer-specific health states, using preference weights derived from the perspectives of prostate cancer patients, men at risk for prostate cancer, and society. Utility estimates were calculated using standard gamble (SG) methodology. Study participants valued 18 prostate-specific health states with five attributes: sexual function, urinary function, bowel function, pain, and emotional well-being. The appropriateness of the model (linear regression, mixed effects, or generalized estimating equation) used to generate prostate cancer utility estimates was determined by paired t-tests comparing observed and predicted values. Mixed-corrected standard SG utility estimates to account for loss aversion were calculated based on prospect theory. A total of 132 study participants assigned values to the health states (n = 40 men at risk for prostate cancer; n = 43 men with prostate cancer; n = 49 general population). In total, 792 valuations were elicited (six health states for each of the 132 participants). The most appropriate model for the classification system was a mixed effects model; correlations between the mean observed and predicted utility estimates were greater than 0.80 for each perspective. Developing a health-state classification system with preference weights for three different perspectives demonstrates the relative importance of main effects between populations. The predicted values for men with prostate cancer support the hypothesis that patients experiencing the disease state assign higher utility estimates to health states, and there is a difference in valuations made by patients and the general population.
p-Euler equations and p-Navier-Stokes equations
NASA Astrophysics Data System (ADS)
Li, Lei; Liu, Jian-Guo
2018-04-01
We propose in this work new systems of equations which we call p-Euler equations and p-Navier-Stokes equations. p-Euler equations are derived as the Euler-Lagrange equations for the action represented by the Benamou-Brenier characterization of Wasserstein-p distances, with incompressibility constraint. p-Euler equations have similar structures with the usual Euler equations but the 'momentum' is the signed (p - 1)-th power of the velocity. In the 2D case, the p-Euler equations have streamfunction-vorticity formulation, where the vorticity is given by the p-Laplacian of the streamfunction. By adding diffusion presented by γ-Laplacian of the velocity, we obtain what we call p-Navier-Stokes equations. If γ = p, the a priori energy estimates for the velocity and momentum have dual symmetries. Using these energy estimates and a time-shift estimate, we show the global existence of weak solutions for the p-Navier-Stokes equations in Rd for γ = p and p ≥ d ≥ 2 through a compactness criterion.
Dräger, Andreas; Kronfeld, Marcel; Ziller, Michael J; Supper, Jochen; Planatscher, Hannes; Magnus, Jørgen B; Oldiges, Marco; Kohlbacher, Oliver; Zell, Andreas
2009-01-01
Background To understand the dynamic behavior of cellular systems, mathematical modeling is often necessary and comprises three steps: (1) experimental measurement of participating molecules, (2) assignment of rate laws to each reaction, and (3) parameter calibration with respect to the measurements. In each of these steps the modeler is confronted with a plethora of alternative approaches, e. g., the selection of approximative rate laws in step two as specific equations are often unknown, or the choice of an estimation procedure with its specific settings in step three. This overall process with its numerous choices and the mutual influence between them makes it hard to single out the best modeling approach for a given problem. Results We investigate the modeling process using multiple kinetic equations together with various parameter optimization methods for a well-characterized example network, the biosynthesis of valine and leucine in C. glutamicum. For this purpose, we derive seven dynamic models based on generalized mass action, Michaelis-Menten and convenience kinetics as well as the stochastic Langevin equation. In addition, we introduce two modeling approaches for feedback inhibition to the mass action kinetics. The parameters of each model are estimated using eight optimization strategies. To determine the most promising modeling approaches together with the best optimization algorithms, we carry out a two-step benchmark: (1) coarse-grained comparison of the algorithms on all models and (2) fine-grained tuning of the best optimization algorithms and models. To analyze the space of the best parameters found for each model, we apply clustering, variance, and correlation analysis. Conclusion A mixed model based on the convenience rate law and the Michaelis-Menten equation, in which all reactions are assumed to be reversible, is the most suitable deterministic modeling approach followed by a reversible generalized mass action kinetics model. A Langevin model is advisable to take stochastic effects into account. To estimate the model parameters, three algorithms are particularly useful: For first attempts the settings-free Tribes algorithm yields valuable results. Particle swarm optimization and differential evolution provide significantly better results with appropriate settings. PMID:19144170
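To make the rate-law choice concrete, the sketch below integrates a single Michaelis-Menten reaction with scipy; the parameter values are arbitrary placeholders, not calibrated values from the valine/leucine network:

```python
# Sketch: a Michaelis-Menten rate law inside an ODE, integrated numerically.
import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, y, vmax, km):
    s, p = y
    rate = vmax * s / (km + s)          # Michaelis-Menten kinetics
    return [-rate, rate]

sol = solve_ivp(rhs, (0.0, 60.0), y0=[10.0, 0.0], args=(0.8, 2.5),
                t_eval=np.linspace(0.0, 60.0, 7))
for t, s, p in zip(sol.t, *sol.y):
    print(f"t={t:5.1f}  substrate={s:6.3f}  product={p:6.3f}")
```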
Pedan, Alex; Wu, Hongsheng
2011-04-01
This article examines the impact of direct-to-physician, direct-to-consumer, and other marketing activities by pharmaceutical companies on a mature drug category which is in the later stage of its life cycle and in which generics have accrued a significant market share. The main objective of this article is to quantitatively estimate the impact of pharmaceutical promotions on physician prescribing behavior for three different statin brands, after controlling for factors such as patient, physician and physician practice characteristics, generic pressure, et cetera. Using unique panel data of physicians, combined with patient pharmacy prescription records, the authors developed a physician level generalized linear regression model. The generalized estimating equations method was used to account for within physician serial correlations and estimate physician population averaged effects. The findings reveal that even though on average the marketing efforts affect the brand share positively, the magnitude of the effects is very brand specific. Generally, each statin brand has its own trend and because of this, the best choice of predictors for one brand could be suboptimal for another.
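A population-averaged GEE of this general kind might be set up with statsmodels as sketched below; the data frame, variable names, and working correlation are hypothetical, not the study's specification:

```python
# Sketch: a population-averaged GEE with an exchangeable working correlation over
# repeated monthly observations within physicians. All data are simulated.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n_phys, n_months = 50, 12
df = pd.DataFrame({
    "physician": np.repeat(np.arange(n_phys), n_months),
    "detailing": rng.poisson(2.0, n_phys * n_months),        # promotion exposure
    "dtc_spend": rng.gamma(2.0, 1.0, n_phys * n_months),     # consumer advertising
})
# Simulated brand share of new prescriptions (illustrative outcome)
df["brand_share"] = 0.2 + 0.01 * df["detailing"] + rng.normal(0, 0.05, len(df))

model = smf.gee("brand_share ~ detailing + dtc_spend", groups="physician", data=df,
                family=sm.families.Gaussian(), cov_struct=sm.cov_struct.Exchangeable())
print(model.fit().summary())
```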
Zheng, Xueying; Qin, Guoyou; Tu, Dongsheng
2017-05-30
Motivated by the analysis of quality of life data from a clinical trial on early breast cancer, we propose in this paper a generalized partially linear mean-covariance regression model for longitudinal proportional data, which are bounded in a closed interval. Cholesky decomposition of the covariance matrix for within-subject responses and generalized estimating equations are used to estimate unknown parameters and the nonlinear function in the model. Simulation studies are performed to evaluate the performance of the proposed estimation procedures. Our new model is also applied to analyze the data from the cancer clinical trial that motivated this research. In comparison with available models in the literature, the proposed model does not require specific parametric assumptions on the density function of the longitudinal responses and the probability function of the boundary values, and it can capture dynamic changes of time or other variables of interest on both the mean and covariance of the correlated proportional responses. Copyright © 2017 John Wiley & Sons, Ltd.
Ahearn, Elizabeth A.
2010-01-01
Multiple linear regression equations for determining flow-duration statistics were developed to estimate select flow exceedances ranging from 25 to 99 percent for six 'bioperiods' in Connecticut: Salmonid Spawning (November), Overwinter (December-February), Habitat Forming (March-April), Clupeid Spawning (May), Resident Spawning (June), and Rearing and Growth (July-October). Regression equations also were developed to estimate the 25- and 99-percent flow exceedances without reference to a bioperiod. In total, 32 equations were developed. The predictive equations were based on regression analyses relating flow statistics from streamgages to GIS-determined basin and climatic characteristics for the drainage areas of those streamgages. Thirty-nine streamgages (and an additional 6 short-term streamgages and 28 partial-record sites for the non-bioperiod 99-percent exceedance) in Connecticut and adjacent areas of neighboring States were used in the regression analysis. Weighted least squares regression analysis was used to determine the predictive equations; weights were assigned based on record length. The basin characteristics used as explanatory variables in the equations are drainage area, percentage of area with coarse-grained stratified deposits, percentage of area with wetlands, mean monthly precipitation (November), mean seasonal precipitation (December, January, and February), and mean basin elevation. Standard errors of estimate of the 32 equations ranged from 10.7 to 156 percent, with medians of 19.2 and 55.4 percent for the equations predicting the 25- and 99-percent exceedances, respectively. Regression equations to estimate high and median flows (25- to 75-percent exceedances) are better predictors (smaller variability of the residual values around the regression line) than the equations to estimate low flows (greater than 75-percent exceedance). The Habitat Forming (March-April) bioperiod had the smallest standard errors of estimate, ranging from 10.7 to 20.9 percent. In contrast, the Rearing and Growth (July-October) bioperiod had the largest standard errors, ranging from 30.9 to 156 percent. The adjusted coefficient of determination of the equations ranged from 77.5 to 99.4 percent, with medians of 98.5 and 90.6 percent for the equations predicting the 25- and 99-percent exceedances, respectively. Descriptive information on the streamgages used in the regression, measured basin and climatic characteristics, and estimated flow-duration statistics are provided in this report. Flow-duration statistics and the 32 regression equations for estimating flow-duration statistics in Connecticut are stored on the U.S. Geological Survey World Wide Web application "StreamStats" (http://water.usgs.gov/osw/streamstats/index.html). The regression equations developed in this report can be used to produce unbiased estimates of select flow exceedances statewide.
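The weighted least squares step can be sketched as follows, with record length supplying the weights; the streamgage values and the chosen explanatory variables are hypothetical:

```python
# Sketch: weighted least squares fit of a flow-duration statistic against basin
# characteristics, with weights based on record length. All data are simulated.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 39
drainage_area = rng.uniform(5, 300, n)          # mi^2
pct_stratified = rng.uniform(0, 60, n)          # percent coarse stratified deposits
record_years = rng.integers(10, 80, n)

# Hypothetical 25-percent-exceedance flow, roughly proportional to drainage area
q25 = 1.6 * drainage_area ** 0.95 * (1 + 0.004 * pct_stratified) * rng.lognormal(0, 0.1, n)

X = sm.add_constant(np.column_stack([np.log10(drainage_area), pct_stratified]))
wls = sm.WLS(np.log10(q25), X, weights=record_years).fit()
print(wls.params)      # intercept, log10(drainage area) and stratified-deposit terms
```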
Hippisley-Cox, Julia; Coupland, Carol
2015-11-11
Is it possible to develop and externally validate risk prediction equations to estimate the 10 year risk of blindness and lower limb amputation in patients with diabetes aged 25-84 years? This was a prospective cohort study using routinely collected data from general practices in England contributing to the QResearch and Clinical Practice Research Datalink (CPRD) databases during the study period 1998-2014. The equations were developed using 763 QResearch practices (n=454,575 patients with diabetes) and validated in 254 different QResearch practices (n=142,419) and 357 CPRD practices (n=206,050). Cox proportional hazards models were used to derive separate risk equations for blindness and amputation in men and women that could be evaluated at 10 years. Measures of calibration and discrimination were calculated in the two validation cohorts. Risk prediction equations to quantify absolute risk of blindness and amputation in men and women with diabetes have been developed and externally validated. In the QResearch derivation cohort, 4822 new cases of lower limb amputation and 8063 new cases of blindness occurred during follow-up. The risk equations were well calibrated in both validation cohorts. Discrimination was good in men in the external CPRD cohort for amputation (D statistic 1.69, Harrell's C statistic 0.77) and blindness (D statistic 1.40, Harrell's C statistic 0.73), with similar results in women and in the QResearch validation cohort. The algorithms are based on variables that patients are likely to know or that are routinely recorded in general practice computer systems. They can be used to identify patients at high risk for prevention or further assessment. Limitations include lack of formally adjudicated outcomes, information bias, and missing data. Patients with type 1 or type 2 diabetes are at increased risk of blindness and amputation but generally do not have accurate assessments of the magnitude of their individual risks. The new algorithms calculate the absolute risk of developing these complications over a 10 year period in patients with diabetes, taking account of their individual risk factors. JH-C is co-director of QResearch, a not for profit organisation which is a joint partnership between the University of Nottingham and Egton Medical Information Systems, and is also a paid director of ClinRisk Ltd. CC is a paid consultant statistician for ClinRisk Ltd. © Hippisley-Cox et al 2015.
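Discrimination statistics such as Harrell's C can be computed directly from risk scores and follow-up data; the sketch below is a plain O(n²) implementation on simulated values, not the validated pipeline used in the study:

```python
# Sketch: Harrell's concordance statistic from risk scores, times and event flags.
import numpy as np

def harrells_c(risk, time, event):
    """Fraction of comparable pairs in which the higher-risk subject fails earlier."""
    concordant, ties, comparable = 0.0, 0.0, 0
    n = len(risk)
    for i in range(n):
        for j in range(n):
            # pair is comparable if subject i is observed to fail before time[j]
            if event[i] == 1 and time[i] < time[j]:
                comparable += 1
                if risk[i] > risk[j]:
                    concordant += 1
                elif risk[i] == risk[j]:
                    ties += 1
    return (concordant + 0.5 * ties) / comparable

rng = np.random.default_rng(7)
risk = rng.normal(size=200)
time = rng.exponential(scale=np.exp(-risk))     # higher risk -> shorter time
event = rng.integers(0, 2, size=200)            # random censoring indicator
print(f"Harrell's C = {harrells_c(risk, time, event):.2f}")
```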
F. N. Scatena
1990-01-01
This paper describes the hydraulics of unsubmerged flow for 5 culverts in the Luquillo Experimental Forest of Puerto Rico. A general equation based on empirical data is presented to estimate culvert discharge during unsubmerged conditions. Larger culverts are needed in humid tropical montane areas than in humid temperate watersheds and are usually appropriate only...
Model verification of large structural systems
NASA Technical Reports Server (NTRS)
Lee, L. T.; Hasselman, T. K.
1977-01-01
A methodology was formulated, and a general computer code was implemented, for processing sinusoidal vibration test data to simultaneously make adjustments to a prior mathematical model of a large structural system and resolve measured response data to obtain a set of orthogonal modes representative of the test model. The derivation of the estimator equations is shown along with example problems. A method for improving the prior analytic model is included.
ERIC Educational Resources Information Center
Huang, Li-Ling; Thrasher, James F.; Abad, Erika Nayeli; Cummings, K. Michael; Bansal-Travers, Maansi; Brown, Abraham; Nagelhout, Gera E.
2015-01-01
Objective: Evaluate the second flight of the U.S. "Tips From Former Smokers" (Tips) campaign. Method: Data were analyzed from an online consumer panel of U.S. adult smokers before (n = 1,404) and after (n = 1,401) the 2013 Tips campaign launch. Generalized estimating equation models assessed whether the Tips advertisement recall was…
Estimating population salt intake in India using spot urine samples.
Petersen, Kristina S; Johnson, Claire; Mohan, Sailesh; Rogers, Kris; Shivashankar, Roopa; Thout, Sudhir Raj; Gupta, Priti; He, Feng J; MacGregor, Graham A; Webster, Jacqui; Santos, Joseph Alvin; Krishnan, Anand; Maulik, Pallab K; Reddy, K Srinath; Gupta, Ruby; Prabhakaran, Dorairaj; Neal, Bruce
2017-11-01
To compare estimates of mean population salt intake in North and South India derived from spot urine samples versus 24-h urine collections. In a cross-sectional survey, participants were sampled from slum, urban and rural communities in North and in South India. Participants provided 24-h urine collections, and random morning spot urine samples. Salt intake was estimated from the spot urine samples using a series of established estimating equations. Salt intake data from the 24-h urine collections and spot urine equations were weighted to provide estimates of salt intake for Delhi and Haryana, and Andhra Pradesh. A total of 957 individuals provided a complete 24-h urine collection and a spot urine sample. Weighted mean salt intake based on the 24-h urine collection, was 8.59 (95% confidence interval 7.73-9.45) and 9.46 g/day (8.95-9.96) in Delhi and Haryana, and Andhra Pradesh, respectively. Corresponding estimates based on the Tanaka equation [9.04 (8.63-9.45) and 9.79 g/day (9.62-9.96) for Delhi and Haryana, and Andhra Pradesh, respectively], the Mage equation [8.80 (7.67-9.94) and 10.19 g/day (95% CI 9.59-10.79)], the INTERSALT equation [7.99 (7.61-8.37) and 8.64 g/day (8.04-9.23)] and the INTERSALT equation with potassium [8.13 (7.74-8.52) and 8.81 g/day (8.16-9.46)] were all within 1 g/day of the estimate based upon 24-h collections. For the Toft equation, estimates were 1-2 g/day higher [9.94 (9.24-10.64) and 10.69 g/day (9.44-11.93)] and for the Kawasaki equation they were 3-4 g/day higher [12.14 (11.30-12.97) and 13.64 g/day (13.15-14.12)]. In urban and rural areas in North and South India, most spot urine-based equations provided reasonable estimates of mean population salt intake. Equations that did not provide good estimates may have failed because specimen collection was not aligned with the original method.
Assessment of a 3-D boundary layer code to predict heat transfer and flow losses in a turbine
NASA Technical Reports Server (NTRS)
Anderson, O. L.
1984-01-01
Zonal concepts are utilized to delineate regions of application of three-dimensional boundary layer (3-DBL) theory. The zonal approach requires three distinct analyses. A modified version of the 3-DBL code named TABLET is used to analyze the boundary layer flow. This modified code solves the finite difference form of the compressible 3-DBL equations in a nonorthogonal surface coordinate system which includes Coriolis forces produced by coordinate rotation. These equations are solved using an efficient, implicit, fully coupled finite difference procedure. The nonorthogonal surface coordinate system is calculated using a general analysis based on the transfinite mapping of Gordon, which is valid for any arbitrary surface. Experimental data are used to determine the boundary layer edge conditions. The boundary layer edge conditions are determined by integrating the boundary layer edge equations, which are the Euler equations at the edge of the boundary layer, using the known experimental wall pressure distribution. Starting solutions along the inflow boundaries are estimated by solving the appropriate limiting form of the 3-DBL equations.
An O(Nm(sup 2)) Plane Solver for the Compressible Navier-Stokes Equations
NASA Technical Reports Server (NTRS)
Thomas, J. L.; Bonhaus, D. L.; Anderson, W. K.; Rumsey, C. L.; Biedron, R. T.
1999-01-01
A hierarchical multigrid algorithm for efficient steady solutions to the two-dimensional compressible Navier-Stokes equations is developed and demonstrated. The algorithm applies multigrid in two ways: a Full Approximation Scheme (FAS) for a nonlinear residual equation and a Correction Scheme (CS) for a linearized defect correction implicit equation. Multigrid analyses which include the effect of boundary conditions in one direction are used to estimate the convergence rate of the algorithm for a model convection equation. Three alternating-line-implicit algorithms are compared in terms of efficiency. The analyses indicate that full multigrid efficiency is not attained in the general case; the number of cycles to attain convergence is dependent on the mesh density for high-frequency cross-stream variations. However, the dependence is reasonably small and fast convergence is eventually attained for any given frequency with either the FAS or the CS scheme alone. The paper summarizes numerical computations for which convergence has been attained to within truncation error in a few multigrid cycles for both inviscid and viscous flow simulations on highly stretched meshes.
Generalization of Einstein's gravitational field equations
NASA Astrophysics Data System (ADS)
Moulin, Frédéric
2017-12-01
The Riemann tensor is the cornerstone of general relativity, but as is well known it does not appear explicitly in Einstein's equation of gravitation. This suggests that the latter may not be the most general equation. We propose here for the first time, following a rigorous mathematical treatment based on the variational principle, that there exists a generalized 4-index gravitational field equation containing the Riemann curvature tensor linearly, and thus the Weyl tensor as well. We show that this equation, written in n dimensions, contains the energy-momentum tensor for matter and that of the gravitational field itself. This new 4-index equation remains completely within the framework of general relativity and emerges as a natural generalization of the familiar 2-index Einstein equation. Due to the presence of the Weyl tensor, we show that this equation contains much more information, which fully justifies the use of a fourth-order theory.
Ye, Yu; Kerr, William C
2011-01-01
To explore various model specifications in estimating relationships between liver cirrhosis mortality rates and per capita alcohol consumption in aggregate-level cross-section time-series data. Using a series of liver cirrhosis mortality rates from 1950 to 2002 for 47 U.S. states, the effects of alcohol consumption were estimated from pooled autoregressive integrated moving average (ARIMA) models and 4 types of panel data models: generalized estimating equation, generalized least squares, fixed effect, and multilevel models. Various specifications of error term structure under each type of model were also examined. Different approaches to controlling for time trends and to using concurrent or accumulated consumption as predictors were also evaluated. When cirrhosis mortality was predicted by total alcohol, highly consistent estimates were found between ARIMA and panel data analyses, with an average overall effect of 0.07 to 0.09. Less consistent estimates were derived using spirits, beer, and wine consumption as predictors. When multiple geographic time series are combined as panel data, none of the existing models can accommodate all sources of heterogeneity, so any type of panel model must employ some form of generalization. Different types of panel data models should thus be estimated to examine the robustness of findings. We also suggest cautious interpretation when beverage-specific volumes are used as predictors. Copyright © 2010 by the Research Society on Alcoholism.
Bell, Kristie L; Boyd, Roslyn N; Walker, Jacqueline L; Stevenson, Richard D; Davies, Peter S W
2013-08-01
Body composition assessment is an essential component of nutritional evaluation in children with cerebral palsy. This study aimed to validate bioelectrical impedance to estimate total body water in young children with cerebral palsy and to determine the best electrode placement in unilateral impairment. 55 young children with cerebral palsy across all functional ability levels were included. Height/length was measured or estimated from knee height. Total body water was estimated using a Bodystat 1500MDD and three equations, and measured using the gold standard deuterium dilution technique. Comparisons were made using Bland-Altman analysis. For children with bilateral impairment, the Fjeld equation estimated total body water with the least bias (limits of agreement): 0.0 L (-1.4 L to 1.5 L); the Pencharz equation produced the greatest: 2.7 L (0.6 L to 4.8 L). For children with unilateral impairment, differences between measured and estimated total body water were lowest on the unimpaired side using the Fjeld equation (0.1 L (-1.5 L to 1.6 L)) and greatest for the Pencharz equation. The ability of bioelectrical impedance to estimate total body water depends on the equation chosen. The Fjeld equation was the most accurate for the group; however, individual results varied by up to 18%. A population-specific equation was developed and may enhance the accuracy of estimates. Australian New Zealand Clinical Trials Registry (ANZCTR) number: ACTRN12611000616976. Copyright © 2012 Elsevier Ltd and European Society for Clinical Nutrition and Metabolism. All rights reserved.
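The Bland-Altman comparison used here reduces to a bias and limits of agreement on paired differences, as in this sketch with hypothetical total-body-water values:

```python
# Sketch: Bland-Altman bias and 95% limits of agreement between an equation-based
# estimate and a reference measurement. The paired values are invented.
import numpy as np

def bland_altman(estimated, measured):
    diff = np.asarray(estimated) - np.asarray(measured)
    bias = diff.mean()
    sd = diff.std(ddof=1)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

tbw_deuterium = np.array([7.9, 8.4, 9.1, 10.2, 11.0, 12.3, 13.5, 14.8])   # litres
tbw_bia_eq = np.array([8.1, 8.2, 9.4, 10.0, 11.5, 12.0, 13.9, 15.1])

bias, loa = bland_altman(tbw_bia_eq, tbw_deuterium)
print(f"bias = {bias:+.2f} L, limits of agreement = ({loa[0]:.2f}, {loa[1]:.2f}) L")
```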
A new equation of state for better liquid density prediction of natural gas systems
NASA Astrophysics Data System (ADS)
Nwankwo, Princess C.
Equation of state formulations, modifications and applications have remained active research areas since the success of the van der Waals equation in 1873. Better reservoir fluid modeling and characterization is of great importance to petroleum engineers, who deal with thermodynamic properties of petroleum fluids at every stage of the petroleum life span, from drilling, to production through the wellbore, to transportation, metering and storage. Equation of state methods are far less expensive, in terms of material cost and time, than laboratory or experimental campaigns, and their results are not too far removed from the limits of acceptable accuracy. In most cases, the degree of accuracy obtained using various EOSs, though not outstanding, has been acceptable when the saving in time is considered. The possibility of obtaining an equation of state that, though simple in form and in use, could further narrow the existing bias between experimentally determined and popular EOS-estimated results spurred the interest that resulted in this study. The chief objective of this research was to develop a new equation of state that more efficiently captures the thermodynamic properties of gas condensate fluids, especially the liquid-phase density, which is the major weakness of other established and popular cubic equations of state. This objective was satisfied by a new semi-analytical three-parameter cubic equation of state, derived by modifying the attraction-term contribution to pressure of the van der Waals EOS without compromising either structural simplicity or accuracy in estimating other vapor-liquid equilibrium properties. Applied to single- and multi-component light hydrocarbon fluids, the new EOS recorded far lower errors than the popular two-parameter Peng-Robinson (PR) and three-parameter Patel-Teja (PT) equations of state. Furthermore, this research extended the application of the generalized cubic equation of Coats (1985) to three-parameter cubic equations of state, a result not previously reported in the literature.
Feaster, Toby D.; Gotvald, Anthony J.; Weaver, J. Curtis
2014-01-01
Reliable estimates of the magnitude and frequency of floods are essential for the design of transportation and water-conveyance structures, flood-insurance studies, and flood-plain management. Such estimates are particularly important in densely populated urban areas. In order to increase the number of streamflow-gaging stations (streamgages) available for analysis, expand the geographical coverage that would allow for application of regional regression equations across State boundaries, and build on a previous flood-frequency investigation of rural U.S Geological Survey streamgages in the Southeast United States, a multistate approach was used to update methods for determining the magnitude and frequency of floods in urban and small, rural streams that are not substantially affected by regulation or tidal fluctuations in Georgia, South Carolina, and North Carolina. The at-site flood-frequency analysis of annual peak-flow data for urban and small, rural streams (through September 30, 2011) included 116 urban streamgages and 32 small, rural streamgages, defined in this report as basins draining less than 1 square mile. The regional regression analysis included annual peak-flow data from an additional 338 rural streamgages previously included in U.S. Geological Survey flood-frequency reports and 2 additional rural streamgages in North Carolina that were not included in the previous Southeast rural flood-frequency investigation for a total of 488 streamgages included in the urban and small, rural regression analysis. The at-site flood-frequency analyses for the urban and small, rural streamgages included the expected moments algorithm, which is a modification of the Bulletin 17B log-Pearson type III method for fitting the statistical distribution to the logarithms of the annual peak flows. Where applicable, the flood-frequency analysis also included low-outlier and historic information. Additionally, the application of a generalized Grubbs-Becks test allowed for the detection of multiple potentially influential low outliers. Streamgage basin characteristics were determined using geographical information system techniques. Initial ordinary least squares regression simulations reduced the number of basin characteristics on the basis of such factors as statistical significance, coefficient of determination, Mallow’s Cp statistic, and ease of measurement of the explanatory variable. Application of generalized least squares regression techniques produced final predictive (regression) equations for estimating the 50-, 20-, 10-, 4-, 2-, 1-, 0.5-, and 0.2-percent annual exceedance probability flows for urban and small, rural ungaged basins for three hydrologic regions (HR1, Piedmont–Ridge and Valley; HR3, Sand Hills; and HR4, Coastal Plain), which previously had been defined from exploratory regression analysis in the Southeast rural flood-frequency investigation. Because of the limited availability of urban streamgages in the Coastal Plain of Georgia, South Carolina, and North Carolina, additional urban streamgages in Florida and New Jersey were used in the regression analysis for this region. Including the urban streamgages in New Jersey allowed for the expansion of the applicability of the predictive equations in the Coastal Plain from 3.5 to 53.5 square miles. 
The average standard error of prediction for the predictive equations, which is a measure of the average accuracy of the regression equations when predicting flood estimates for ungaged sites, ranges from 25.0 percent for the 10-percent annual exceedance probability regression equation for the Piedmont–Ridge and Valley region to 73.3 percent for the 0.2-percent annual exceedance probability regression equation for the Sand Hills region.
Nestler, Steffen
2014-05-01
Parameters in structural equation models are typically estimated using the maximum likelihood (ML) approach. Bollen (1996) proposed an alternative non-iterative, equation-by-equation estimator that uses instrumental variables. Although this two-stage least squares/instrumental variables (2SLS/IV) estimator has good statistical properties, one problem with its application is that parameter equality constraints cannot be imposed. This paper presents a mathematical solution to this problem that is based on an extension of the 2SLS/IV approach to a system of equations. We present an example in which our approach was used to examine strong longitudinal measurement invariance. We also investigated the new approach in a simulation study that compared it with ML in the examination of the equality of two latent regression coefficients and strong measurement invariance. Overall, the results show that the suggested approach is a useful extension of the original 2SLS/IV estimator and allows for the effective handling of equality constraints in structural equation models. © 2013 The British Psychological Society.
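The equation-by-equation 2SLS/IV idea can be sketched in a few lines: project the endogenous regressor on the instruments, then regress the outcome on the fitted values; the data-generating process below is artificial:

```python
# Sketch: two-stage least squares with instruments, compared to ordinary least squares.
import numpy as np

rng = np.random.default_rng(11)
n = 500
z = rng.normal(size=(n, 2))                      # instruments
u = rng.normal(size=n)                           # shared disturbance
x = z @ np.array([0.8, -0.5]) + 0.9 * u + rng.normal(size=n)   # endogenous regressor
y = 1.0 + 0.6 * x + u + rng.normal(size=n)       # structural equation, true slope 0.6

def add_const(a):
    return np.column_stack([np.ones(len(a)), a])

# Stage 1: project x on the instruments; Stage 2: regress y on the fitted values
x_hat = add_const(z) @ np.linalg.lstsq(add_const(z), x, rcond=None)[0]
beta = np.linalg.lstsq(add_const(x_hat), y, rcond=None)[0]
ols = np.linalg.lstsq(add_const(x), y, rcond=None)[0]
print(f"2SLS slope = {beta[1]:.3f} (OLS slope {ols[1]:.3f} is biased upward here)")
```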
Chen, Jinsong; Liu, Lei; Shih, Ya-Chen T; Zhang, Daowen; Severini, Thomas A
2016-03-15
We propose a flexible model for correlated medical cost data with several appealing features. First, the mean function is partially linear. Second, the distributional form for the response is not specified. Third, the covariance structure of correlated medical costs has a semiparametric form. We use extended generalized estimating equations to simultaneously estimate all parameters of interest. B-splines are used to estimate unknown functions, and a modification to Akaike information criterion is proposed for selecting knots in spline bases. We apply the model to correlated medical costs in the Medical Expenditure Panel Survey dataset. Simulation studies are conducted to assess the performance of our method. Copyright © 2015 John Wiley & Sons, Ltd.
Stochastic estimates of gradient from laser measurements for an autonomous Martian Roving Vehicle
NASA Technical Reports Server (NTRS)
Shen, C. N.; Burger, P.
1973-01-01
The general problem presented in this paper is one of estimating the state vector x from the state equation h = Ax, where h, A, and x are all stochastic. Specifically, the problem is for an autonomous Martian Roving Vehicle to utilize laser measurements in estimating the gradient of the terrain. Error exists due to two factors: surface roughness and instrumental measurements. The errors in slope depend on the standard deviations of these noise factors. Numerically, the error in gradient is expressed as a function of instrumental inaccuracies. Certain guidelines for the accuracy of the permissible gradient must be set. It is found that present technology can meet these guidelines.
Damping factor estimation using spin wave attenuation in permalloy film
DOE Office of Scientific and Technical Information (OSTI.GOV)
Manago, Takashi, E-mail: manago@fukuoka-u.ac.jp; Yamanoi, Kazuto; Kasai, Shinya
2015-05-07
The damping factor of a Permalloy (Py) thin film is estimated by using magnetostatic spin wave propagation. The attenuation lengths are obtained from the dependence of the transmission intensity on the antenna distance, and decrease with increasing magnetic field. The relationship between the attenuation length, damping factor, and external magnetic field is derived theoretically, and the damping factor was determined to be 0.0063 by fitting the magnetic field dependence of the attenuation length using the derived equation. The obtained value is in good agreement with the general value for Py. Thus, this method of estimating the damping factor from spin wave attenuation can be a useful tool for ferromagnetic thin films.
Advanced Earth Observation System Instrumentation Study (AEOSIS)
NASA Technical Reports Server (NTRS)
White, R.; Grant, F.; Malchow, H.; Walker, B.
1975-01-01
Various types of measurements were studied for estimating the orbit and/or attitude of an Earth Observation Satellite. An investigation was made into the use of known ground targets in the earth sensor imagery, in combination with onboard star sightings and/or range and range rate measurements by ground tracking stations or tracking satellites (TDRSS), to estimate satellite attitude, orbital ephemeris, and gyro bias drift. Generalized measurement equations were derived for star measurements with a particular type of star tracker, and for landmark measurements with a multispectral scanner being proposed for an advanced Earth Observation Satellite. The use of infra-red horizon measurements to estimate the attitude and gyro bias drift of a geosynchronous satellite was explored.
Item Response Theory Equating Using Bayesian Informative Priors.
ERIC Educational Resources Information Center
de la Torre, Jimmy; Patz, Richard J.
This paper seeks to extend the application of Markov chain Monte Carlo (MCMC) methods in item response theory (IRT) to include the estimation of equating relationships along with the estimation of test item parameters. A method is proposed that incorporates estimation of the equating relationship in the item calibration phase. Item parameters from…
Nugis, V Yu; Khvostunov, I K; Goloub, E V; Kozlova, M G; Nadejinal, N M; Galstian, I A
2015-01-01
A method for retrospective dose assessment based on the analysis of the distribution of cells by the number of dicentrics and unstable aberrations, implemented in a special computer program, was developed earlier from data on persons irradiated as a result of the accident at the Chernobyl nuclear power plant. This method was applied, for the same purpose, to data from repeated cytogenetic studies of patients exposed to γ-, γ-β- or γ-neutron radiation in various situations. As a whole, this group was followed up at more distant times (17-50 years) after exposure than the Chernobyl patients (up to 25 years). The use for retrospective dose assessment of the multiple regression equations obtained for the Chernobyl cohort showed that the equation which includes the computer-recovered estimate of the dose and the time elapsed after irradiation was generally unsatisfactory (r = 0.069 at p = 0.599). Similar equations with the recovered estimate of the dose and the frequency of abnormal chromosomes in a distant period, or with all three parameters as variables, gave better results (r = 0.686 at p = 0.000000001 and r = 0.542 at p = 0.000008, respectively).
Parrett, Charles; Omang, R.J.; Hull, J.A.
1983-01-01
Equations for estimating mean annual runoff and peak discharge from measurements of channel geometry were developed for western and northeastern Montana. The study area was divided into two regions for the mean annual runoff analysis, and separate multiple-regression equations were developed for each region. The active-channel width was determined to be the most important independent variable in each region. The standard error of estimate for the estimating equation using active-channel width was 61 percent in the Northeast Region and 38 percent in the West Region. The study area was divided into six regions for the peak discharge analysis, and multiple-regression equations relating channel geometry and basin characteristics to peak discharges having recurrence intervals of 2, 5, 10, 25, 50, and 100 years were developed for each region. The standard errors of estimate for the regression equations using only channel width as an independent variable ranged from 35 to 105 percent. The standard errors improved in four regions as basin characteristics were added to the estimating equations. (USGS)
Computing generalized Langevin equations and generalized Fokker-Planck equations.
Darve, Eric; Solomon, Jose; Kia, Amirali
2009-07-07
The Mori-Zwanzig formalism is an effective tool to derive differential equations describing the evolution of a small number of resolved variables. In this paper we present its application to the derivation of generalized Langevin equations and generalized non-Markovian Fokker-Planck equations. We show how long-time-scale rates and metastable basins can be extracted from these equations. Numerical algorithms are proposed to discretize these equations. An important aspect is the numerical solution of the orthogonal dynamics equation, which is a partial differential equation in a high-dimensional space. We propose efficient numerical methods to solve this orthogonal dynamics equation. In addition, we present a projection formalism of the Mori-Zwanzig type that is applicable to discrete maps. Numerical applications are presented from the field of Hamiltonian systems.
Bayesian parameter estimation for nonlinear modelling of biological pathways.
Ghasemi, Omid; Lindsey, Merry L; Yang, Tianyi; Nguyen, Nguyen; Huang, Yufei; Jin, Yu-Fang
2011-01-01
The availability of temporal measurements on biological experiments has significantly promoted research areas in systems biology. To gain insight into the interaction and regulation of biological systems, mathematical frameworks such as ordinary differential equations have been widely applied to model biological pathways and interpret the temporal data. Hill equations are the preferred formats to represent the reaction rate in differential equation frameworks, due to their simple structures and their capabilities for easy fitting to saturated experimental measurements. However, Hill equations are highly nonlinearly parameterized functions, and parameters in these functions cannot be measured easily. Additionally, because of their high nonlinearity, adaptive parameter estimation algorithms developed for linearly parameterized differential equations cannot be applied. Therefore, parameter estimation in nonlinearly parameterized differential equation models for biological pathways is both challenging and rewarding. In this study, we propose a Bayesian parameter estimation algorithm to estimate parameters in nonlinear mathematical models for biological pathways using time series data. We used the Runge-Kutta method to transform differential equations to difference equations assuming a known structure of the differential equations. This transformation allowed us to generate predictions dependent on previous states and to apply a Bayesian approach, namely, the Markov chain Monte Carlo (MCMC) method. We applied this approach to the biological pathways involved in the left ventricle (LV) response to myocardial infarction (MI) and verified our algorithm by estimating two parameters in a Hill equation embedded in the nonlinear model. We further evaluated our estimation performance with different parameter settings and signal-to-noise ratios. Our results demonstrated the effectiveness of the algorithm for both linearly and nonlinearly parameterized dynamic systems. Our proposed Bayesian algorithm successfully estimated parameters in nonlinear mathematical models for biological pathways. This method can be further extended to high-order systems and thus provides a useful tool to analyze biological dynamics and extract information using temporal data.
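A minimal sketch of the general approach the abstract describes (not the authors' code): two parameters of a single Hill-type ODE, dx/dt = Vmax*u^n/(K^n + u^n) - d*x, are estimated by random-walk Metropolis MCMC after a Runge-Kutta discretization. The rate constants Vmax and d, the input u(t), the noise level, and the priors are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(0)
    Vmax, d, dt, T = 1.0, 0.3, 0.1, 200            # assumed known constants and grid
    u = 2.0 + np.sin(0.05 * np.arange(T))          # assumed input signal

    def rk4_trajectory(K, n, x0=0.0):
        """Integrate the Hill ODE with an RK4-type step and return the state sequence."""
        def f(x, ut):
            return Vmax * ut**n / (K**n + ut**n) - d * x
        xs, x = [x0], x0
        for t in range(T - 1):
            k1 = f(x, u[t]); k2 = f(x + 0.5*dt*k1, u[t])
            k3 = f(x + 0.5*dt*k2, u[t]); k4 = f(x + dt*k3, u[t+1])
            x = x + dt/6.0 * (k1 + 2*k2 + 2*k3 + k4)
            xs.append(x)
        return np.array(xs)

    # Synthetic "measurements" from the true parameters plus Gaussian noise.
    K_true, n_true, sigma = 1.5, 2.0, 0.05
    y = rk4_trajectory(K_true, n_true) + rng.normal(0.0, sigma, T)

    def log_post(theta):
        K, n = theta
        if K <= 0 or n <= 0:                       # flat priors on the positive axis
            return -np.inf
        resid = y - rk4_trajectory(K, n)
        return -0.5 * np.sum(resid**2) / sigma**2

    # Random-walk Metropolis over (K, n).
    theta, lp, samples = np.array([1.0, 1.0]), None, []
    lp = log_post(theta)
    for it in range(5000):
        prop = theta + rng.normal(0.0, 0.05, 2)
        lp_prop = log_post(prop)
        if np.log(rng.uniform()) < lp_prop - lp:   # Metropolis acceptance step
            theta, lp = prop, lp_prop
        samples.append(theta.copy())

    print("posterior mean (K, n):", np.mean(samples[1000:], axis=0))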
Validity of Bioelectrical Impedance Analysis to Estimation Fat-Free Mass in the Army Cadets.
Langer, Raquel D; Borges, Juliano H; Pascoa, Mauro A; Cirolini, Vagner X; Guerra-Júnior, Gil; Gonçalves, Ezequiel M
2016-03-11
Bioelectrical Impedance Analysis (BIA) is a fast, practical, non-invasive, and frequently used method for fat-free mass (FFM) estimation. The aims of this study were to validate published predictive BIA equations for FFM estimation in Army cadets and to develop and validate a specific BIA equation for this population. A total of 396 male Brazilian Army cadets, aged 17-24 years, were included. The study used eight published predictive BIA equations, a specific equation developed for FFM estimation in this sample, and dual-energy X-ray absorptiometry (DXA) as the reference method. Student's t-test (for paired samples), linear regression analysis, and the Bland-Altman method were used to test the validity of the BIA equations. The published predictive BIA equations showed significant differences in FFM compared to DXA (p < 0.05) and large limits of agreement by Bland-Altman. Predictive BIA equations explained 68% to 88% of FFM variance. The specific BIA equation showed no significant differences in FFM compared to DXA values. Published BIA predictive equations showed poor accuracy in this sample. The specific BIA equation developed in this study demonstrated validity for this sample, although it should be used with caution in samples with a large range of FFM.
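A minimal sketch of the three agreement checks named in the abstract: a paired t-test, a regression of BIA-predicted FFM on DXA FFM, and Bland-Altman bias with 95% limits of agreement. The two short arrays are placeholder values, not the study's data.

    import numpy as np
    from scipy import stats

    ffm_dxa = np.array([55.2, 60.1, 58.7, 63.4, 57.9])   # illustrative values only (kg)
    ffm_bia = np.array([53.8, 61.0, 57.2, 61.9, 56.5])

    t, p = stats.ttest_rel(ffm_bia, ffm_dxa)              # paired Student's t-test
    slope, intercept, r, _, _ = stats.linregress(ffm_dxa, ffm_bia)

    diff = ffm_bia - ffm_dxa                              # Bland-Altman differences
    bias = diff.mean()
    loa = (bias - 1.96 * diff.std(ddof=1), bias + 1.96 * diff.std(ddof=1))
    print(f"paired t p={p:.3f}, R^2={r**2:.2f}, bias={bias:.2f} kg, 95% LoA={loa}")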
Empirical improvements for estimating earthquake response spectra with random‐vibration theory
Boore, David; Thompson, Eric M.
2012-01-01
The stochastic method of ground‐motion simulation is often used in combination with the random‐vibration theory to directly compute ground‐motion intensity measures, thereby bypassing the more computationally intensive time‐domain simulations. Key to the application of random‐vibration theory to simulate response spectra is determining the duration (Drms) used in computing the root‐mean‐square oscillator response. Boore and Joyner (1984) originally proposed an equation for Drms, which was improved upon by Liu and Pezeshk (1999). Though these equations are both substantial improvements over using the duration of the ground‐motion excitation for Drms, we document systematic differences between the ground‐motion intensity measures derived from the random‐vibration and time‐domain methods for both of these Drms equations. These differences are generally less than 10% for most magnitudes, distances, and periods of engineering interest. Given the systematic nature of the differences, however, we feel that improved equations are warranted. We empirically derive new equations from time‐domain simulations for eastern and western North America seismological models. The new equations improve the random‐vibration simulations over a wide range of magnitudes, distances, and oscillator periods.
Generalized continued fractions and ergodic theory
NASA Astrophysics Data System (ADS)
Pustyl'nikov, L. D.
2003-02-01
In this paper a new theory of generalized continued fractions is constructed and applied to numbers, multidimensional vectors belonging to a real space, and infinite-dimensional vectors with integral coordinates. The theory is based on a concept generalizing the procedure for constructing the classical continued fractions and substantially using ergodic theory. One of the versions of the theory is related to differential equations. In the finite-dimensional case the constructions thus introduced are used to solve problems posed by Weyl in analysis and number theory concerning estimates of trigonometric sums and of the remainder in the distribution law for the fractional parts of the values of a polynomial, and also the problem of characterizing algebraic and transcendental numbers with the use of generalized continued fractions. Infinite-dimensional generalized continued fractions are applied to estimate sums of Legendre symbols and to obtain new results in the classical problem of the distribution of quadratic residues and non-residues modulo a prime. In the course of constructing these continued fractions, an investigation is carried out of the ergodic properties of a class of infinite-dimensional dynamical systems which are also of independent interest.
Age Estimation of Infants Through Metric Analysis of Developing Anterior Deciduous Teeth.
Viciano, Joan; De Luca, Stefano; Irurita, Javier; Alemán, Inmaculada
2018-01-01
This study provides regression equations for estimation of age of infants from the dimensions of their developing deciduous teeth. The sample comprises 97 individuals of known sex and age (62 boys, 35 girls), aged between 2 days and 1,081 days. The age-estimation equations were obtained for the sexes combined, as well as for each sex separately, thus including "sex" as an independent variable. The values of the correlations and determination coefficients obtained for each regression equation indicate good fits for most of the equations obtained. The "sex" factor was statistically significant when included as an independent variable in seven of the regression equations. However, the "sex" factor provided an advantage for age estimation in only three of the equations, compared to those that did not include "sex" as a factor. These data suggest that the ages of infants can be accurately estimated from measurements of their developing deciduous teeth. © 2017 American Academy of Forensic Sciences.
Investigation of Hill's optical turbulence model by means of direct numerical simulation.
Muschinski, Andreas; de Bruyn Kops, Stephen M
2015-12-01
For almost four decades, Hill's "Model 4" [J. Fluid Mech. 88, 541 (1978)] has played a central role in research and technology of optical turbulence. Based on Batchelor's generalized Obukhov-Corrsin theory of scalar turbulence, Hill's model predicts the dimensionless function h(κl_0, Pr) that appears in Tatarskii's well-known equation for the 3D refractive-index spectrum in the case of homogeneous and isotropic turbulence, Φ_n(κ) = 0.033 C_n^2 κ^(-11/3) h(κl_0, Pr). Here we investigate Hill's model by comparing numerical solutions of Hill's differential equation with scalar spectra estimated from direct numerical simulation (DNS) output data. Our DNS solves the Navier-Stokes equation for the 3D velocity field and the transport equation for the scalar field on a numerical grid containing 4096^3 grid points. Two independent DNS runs are analyzed: one with the Prandtl number Pr = 0.7 and a second run with Pr = 1.0. We find very good agreement between h(κl_0, Pr) estimated from the DNS output data and h(κl_0, Pr) predicted by the Hill model. We find that the height of the Hill bump is 1.79 Pr^(1/3), implying that there is no bump if Pr < 0.17. Both the DNS and the Hill model predict that the viscous-diffusive "tail" of h(κl_0, Pr) is exponential, not Gaussian.
NASA Astrophysics Data System (ADS)
Tangdamrongsub, Natthachet; Han, Shin-Chan; Decker, Mark; Yeo, In-Young; Kim, Hyungjun
2018-03-01
An accurate estimation of soil moisture and groundwater is essential for monitoring the availability of water supply in domestic and agricultural sectors. In order to improve water storage estimates, previous studies assimilated terrestrial water storage variation (ΔTWS) derived from the Gravity Recovery and Climate Experiment (GRACE) into land surface models (LSMs). However, the GRACE-derived ΔTWS was generally computed from high-level products (e.g. the time-variable gravity fields of the level 2 product or the land grid of the level 3 product). The gridded data products are subject to several drawbacks, such as signal attenuation and/or distortion caused by a posteriori filters and a lack of error covariance information. This post-processing of GRACE data might lead to undesired alteration of the signal and its statistical properties. This study uses the GRACE least-squares normal equation data to exploit the GRACE information rigorously and negate these limitations. Our approach combines GRACE's least-squares normal equations (obtained from the ITSG-Grace2016 product) with the results from the Community Atmosphere Biosphere Land Exchange (CABLE) model to improve soil moisture and groundwater estimates. This study demonstrates, for the first time, the importance of using the GRACE raw data. The GRACE-combined (GC) approach is developed for optimal least-squares combination, and the approach is applied to estimate soil moisture and groundwater over 10 Australian river basins. The results are validated against satellite soil moisture observations and in situ groundwater data. Compared to CABLE, the GC approach delivers a clear improvement in water storage estimates, consistently across all basins, yielding better agreement on seasonal and inter-annual timescales. Significant improvement is found in groundwater storage, while marginal improvement is observed in surface soil moisture estimates.
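A minimal sketch of the core idea of a least-squares combination of normal equations, in the spirit of the GC approach described above: two normal-equation systems are added and solved jointly. Here N_grace/b_grace stand in for satellite normal equations and N_model/b_model for model pseudo-observations; the dimensions, weights, and data are illustrative assumptions, not the paper's actual products.

    import numpy as np

    rng = np.random.default_rng(1)
    m = 5                                     # number of storage parameters (illustrative)
    x_true = rng.normal(size=m)

    A1 = rng.normal(size=(20, m))             # "satellite" design matrix
    y1 = A1 @ x_true + rng.normal(scale=0.5, size=20)
    N_grace, b_grace = A1.T @ A1, A1.T @ y1   # normal equations from the observations

    x_model = x_true + rng.normal(scale=0.2, size=m)   # model prior estimate
    P_model = np.eye(m) / 0.2**2                       # its weight (inverse covariance)
    N_model, b_model = P_model, P_model @ x_model

    # Optimal combination: solve (N1 + N2) x = (b1 + b2).
    x_combined = np.linalg.solve(N_grace + N_model, b_grace + b_model)
    print(x_combined)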
General Tricomi-Rassias problem and oblique derivative problem for generalized Chaplygin equations
NASA Astrophysics Data System (ADS)
Wen, Guochun; Chen, Dechang; Cheng, Xiuzhen
2007-09-01
Many authors have discussed the Tricomi problem for some second order equations of mixed type, which has important applications in gas dynamics. In particular, Bers proposed the Tricomi problem for Chaplygin equations in multiply connected domains [L. Bers, Mathematical Aspects of Subsonic and Transonic Gas Dynamics, Wiley, New York, 1958]. And Rassias proposed the exterior Tricomi problem for mixed equations in a doubly connected domain and proved the uniqueness of solutions for the problem [J.M. Rassias, Lecture Notes on Mixed Type Partial Differential Equations, World Scientific, Singapore, 1990]. In the present paper, we discuss the general Tricomi-Rassias problem for generalized Chaplygin equations. This is one general oblique derivative problem that includes the exterior Tricomi problem as a special case. We first give the representation of solutions of the general Tricomi-Rassias problem, and then prove the uniqueness and existence of solutions for the problem by a new method. In this paper, we shall also discuss another general oblique derivative problem for generalized Chaplygin equations.
Conservation laws and symmetries of a generalized Kawahara equation
NASA Astrophysics Data System (ADS)
Gandarias, Maria Luz; Rosa, Maria; Recio, Elena; Anco, Stephen
2017-06-01
The generalized Kawahara equation u_t = a(t)u_xxxxx + b(t)u_xxx + c(t)f(u)u_x appears in many physical applications. A complete classification of low-order conservation laws and point symmetries is obtained for this equation, which includes as a special case the usual Kawahara equation u_t = αuu_x + βu²u_x + γu_xxx + μu_xxxxx. A general connection between conservation laws and symmetries for the generalized Kawahara equation is derived through the Hamiltonian structure of this equation and its relationship to Noether's theorem using a potential formulation.
Asquith, William H.; Thompson, David B.
2008-01-01
The U.S. Geological Survey, in cooperation with the Texas Department of Transportation and in partnership with Texas Tech University, investigated a refinement of the regional regression method and developed alternative equations for estimation of peak-streamflow frequency for undeveloped watersheds in Texas. A common model for estimation of peak-streamflow frequency is based on the regional regression method. The current (2008) regional regression equations for 11 regions of Texas are based on log10 transformations of all regression variables (drainage area, main-channel slope, and watershed shape). Exclusive use of log10-transformation does not fully linearize the relations between the variables. As a result, some systematic bias remains in the current equations. The bias results in overestimation of peak streamflow for both the smallest and largest watersheds. The bias increases with increasing recurrence interval. The primary source of the bias is the discernible curvilinear relation in log10 space between peak streamflow and drainage area. Bias is demonstrated by selected residual plots with superimposed LOWESS trend lines. To address the bias, a statistical framework based on minimization of the PRESS statistic through power transformation of drainage area is described and implemented, and the resulting regression equations are reported. Compared to log10-exclusive equations, the equations derived from PRESS minimization have smaller PRESS statistics and residual standard errors. Selected residual plots for the PRESS-minimized equations are presented to demonstrate that systematic bias in regional regression equations for peak-streamflow frequency estimation in Texas can be reduced. Because the overall error is similar to the error associated with previous equations and because the bias is reduced, the PRESS-minimized equations reported here provide alternative equations for peak-streamflow frequency estimation.
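A minimal sketch of the technique named above: choosing a power transformation of drainage area by minimizing the PRESS statistic, the sum of squared leave-one-out prediction residuals (computed here with the hat-matrix shortcut). The single-predictor model and the synthetic data are illustrative assumptions, not the study's regional data set.

    import numpy as np

    rng = np.random.default_rng(2)
    area = 10 ** rng.uniform(0, 4, 60)                      # drainage area (synthetic)
    logq = 2.0 + 0.7 * area**0.3 + rng.normal(0, 0.2, 60)   # log10 peak flow (synthetic)

    def press(lmbda):
        """PRESS for the model log10(Q) ~ area**lmbda, via leave-one-out residuals."""
        X = np.column_stack([np.ones_like(area), area**lmbda])
        H = X @ np.linalg.solve(X.T @ X, X.T)               # hat matrix
        resid = logq - H @ logq
        return np.sum((resid / (1.0 - np.diag(H)))**2)

    grid = np.linspace(0.05, 1.0, 96)
    best = grid[np.argmin([press(l) for l in grid])]
    print("power transformation minimizing PRESS:", round(best, 2))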
NASA Astrophysics Data System (ADS)
Beraldo e Silva, Leandro; de Siqueira Pedra, Walter; Sodré, Laerte; Perico, Eder L. D.; Lima, Marcos
2017-09-01
The collapse of a collisionless self-gravitating system, with the fast achievement of a quasi-stationary state, is driven by violent relaxation, with a typical particle interacting with the time-changing collective potential. It is traditionally assumed that this evolution is governed by the Vlasov-Poisson equation, in which case entropy must be conserved. We run N-body simulations of isolated self-gravitating systems, using three simulation codes, NBODY-6 (direct summation without softening), NBODY-2 (direct summation with softening), and GADGET-2 (tree code with softening), for different numbers of particles and initial conditions. At each snapshot, we estimate the Shannon entropy of the distribution function with three different techniques: Kernel, Nearest Neighbor, and EnBiD. For all simulation codes and estimators, the entropy evolution converges to the same limit as N increases. During violent relaxation, the entropy has a fast increase followed by damping oscillations, indicating that violent relaxation must be described by a kinetic equation other than the Vlasov-Poisson equation, even for N as large as that of astronomical structures. This indicates that violent relaxation cannot be described by a time-reversible equation, shedding some light on the so-called “fundamental paradox of stellar dynamics.” The long-term evolution is well-described by the orbit-averaged Fokker-Planck model, with Coulomb logarithm values in the expected range 10–12. By means of NBODY-2, we also study the dependence of the two-body relaxation timescale on the softening length. The approach presented in the current work can potentially provide a general method for testing any kinetic equation intended to describe the macroscopic evolution of N-body systems.
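A minimal sketch of one of the estimator families named above: a nearest-neighbor (Kozachenko-Leonenko) estimate of the Shannon entropy of a distribution from samples. The 6-D Gaussian sample below is a stand-in for phase-space snapshots; its analytic entropy makes the check easy. This is a generic estimator, not the codes used in the paper.

    import numpy as np
    from scipy.spatial import cKDTree
    from scipy.special import digamma, gammaln

    def knn_entropy(x):
        """Kozachenko-Leonenko entropy estimate (nats) from samples x of shape (N, d)."""
        n, d = x.shape
        r, _ = cKDTree(x).query(x, k=2)            # k=2: first neighbor is the point itself
        eps = r[:, 1]                              # distance to the nearest other sample
        log_cd = (d / 2) * np.log(np.pi) - gammaln(d / 2 + 1)   # log volume of unit d-ball
        return digamma(n) - digamma(1) + log_cd + d * np.mean(np.log(eps))

    rng = np.random.default_rng(3)
    d, n = 6, 20000
    x = rng.normal(size=(n, d))                    # unit Gaussian in 6-D "phase space"
    h_true = 0.5 * d * np.log(2 * np.pi * np.e)    # analytic entropy of the unit Gaussian
    print(knn_entropy(x), "vs analytic", h_true)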
Moon, J R
2013-01-01
The purpose of the current review was to evaluate how body composition can be utilised in athletes, paying particular attention to the bioelectrical impedance analysis (BIA) technique. Various body composition methods are discussed, as well as the unique characteristics of athletes that can lead to large errors when predicting fat mass (FM) and fat-free mass (FFM). Basic principles of BIA are discussed, and past uses of the BIA technique in athletes are explored. Single-prediction validation studies and studies tracking changes in FM and FFM are discussed with applications for athletes. Although extensive research in the area of BIA and athletes has been conducted, there remains a large gap in the literature pertaining to a single generalised athlete equation developed using a multiple-compartment model that includes total body water (TBW). Until a generalised athlete-specific BIA equation developed from a multiple-compartment model is published, it is recommended that generalised equations such as those published by Lukaski and Bolonchuk and Lohman be used in athletes. However, BIA equations developed for specific athletes may also produce acceptable values and are still acceptable for use until more research is conducted. The use of a valid BIA equation/device should produce values similar to those of hydrostatic weighing and dual-energy X-ray absorptiometry. However, researchers and practitioners need to understand the individual variability associated with BIA estimations for both single assessments and repeated measurements. Although the BIA method shows promise for estimating body composition in athletes, future research should focus on the development of general athlete-specific equations using a TBW-based three- or four-compartment model.
Flux estimation of the FIFE planetary boundary layer (PBL) with 10.6 micron Doppler lidar
NASA Technical Reports Server (NTRS)
Gal-Chen, Tzvi; Xu, Mei; Eberhard, Wynn
1990-01-01
A method is devised for calculating wind, momentum, and other flux parameters that characterize the planetary boundary layer (PBL) and thereby facilitate the calibration of spaceborne vs. in situ flux estimates. Single Doppler lidar data are used to estimate the variance of the mean wind and the covariance related to the vertically pointing fluxes of horizontal momentum. The skewness of the vertical velocity and the range of kinetic energy dissipation are also estimated, and the surface heat flux is determined by means of a statistical Navier-Stokes equation. The conclusion shows that the PBL structure combines both 'bottom-up' and 'top-down' processes suggesting that the relevant parameters for the atmospheric boundary layer be revised. The conclusions are of significant interest to the modeling techniques used in General Circulation Models as well as to flux estimation.
Donato, David I.
2012-01-01
This report presents the mathematical expressions and the computational techniques required to compute maximum-likelihood estimates for the parameters of the National Descriptive Model of Mercury in Fish (NDMMF), a statistical model used to predict the concentration of methylmercury in fish tissue. The expressions and techniques reported here were prepared to support the development of custom software capable of computing NDMMF parameter estimates more quickly and using less computer memory than is currently possible with available general-purpose statistical software. Computation of maximum-likelihood estimates for the NDMMF by numerical solution of a system of simultaneous equations through repeated Newton-Raphson iterations is described. This report explains the derivation of the mathematical expressions required for computational parameter estimation in sufficient detail to facilitate future derivations for any revised versions of the NDMMF that may be developed.
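A minimal sketch of the general technique the report applies to the NDMMF: maximum-likelihood estimation by repeated Newton-Raphson iterations on a system of simultaneous score equations. The model below is ordinary logistic regression on synthetic data, used only to illustrate the update step beta <- beta - H^{-1} * score; it is not the NDMMF itself.

    import numpy as np

    rng = np.random.default_rng(4)
    X = np.column_stack([np.ones(500), rng.normal(size=(500, 2))])
    beta_true = np.array([-0.5, 1.0, 2.0])
    y = rng.binomial(1, 1.0 / (1.0 + np.exp(-X @ beta_true)))

    beta = np.zeros(3)
    for it in range(25):
        p = 1.0 / (1.0 + np.exp(-X @ beta))
        score = X.T @ (y - p)                        # gradient of the log-likelihood
        hessian = -(X * (p * (1 - p))[:, None]).T @ X
        step = np.linalg.solve(hessian, score)       # Newton-Raphson direction
        beta = beta - step
        if np.max(np.abs(step)) < 1e-10:             # convergence of the iterations
            break

    print("maximum-likelihood estimates:", beta)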
Estimating Slash Quantity from Standing Loblolly Pine
Dale D. Wade
1969-01-01
No significant differences were found between the variances of two prediction equations for estimating loblolly pine crown weight from diameter at breast height (d.b.h.). One equation was developed from trees on the Georgia Piedmont and the other from trees on the South Carolina Coastal Plain. An equation and table are presented for estimating loblolly pine slash weights from...
Parameter Estimates in Differential Equation Models for Chemical Kinetics
ERIC Educational Resources Information Center
Winkel, Brian
2011-01-01
We discuss the need for devoting time in differential equations courses to modelling and the completion of the modelling process with efforts to estimate the parameters in the models using data. We estimate the parameters present in several differential equation models of chemical reactions of order n, where n = 0, 1, 2, and apply more general…
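A minimal sketch of the kind of exercise described above: estimating the rate constant k in an order-n rate law dC/dt = -k*C^n from concentration-time data by nonlinear least squares on the integrated model. The data, the choice n = 2, and the initial concentration are illustrative assumptions.

    import numpy as np
    from scipy.integrate import solve_ivp
    from scipy.optimize import curve_fit

    t_obs = np.array([0., 1., 2., 4., 6., 8., 10.])
    c_obs = np.array([1.00, 0.67, 0.50, 0.33, 0.25, 0.20, 0.17])   # synthetic, k near 0.5

    def model(t, k, n=2, c0=1.0):
        """Integrate dC/dt = -k*C**n and return C at the observation times."""
        sol = solve_ivp(lambda _, c: -k * c**n, (t[0], t[-1]), [c0], t_eval=t)
        return sol.y[0]

    k_hat, cov = curve_fit(model, t_obs, c_obs, p0=[0.1])
    print("estimated rate constant k =", k_hat[0])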
van Deventer, Hendrick E; George, Jaya A; Paiker, Janice E; Becker, Piet J; Katz, Ivor J
2008-07-01
The 4-variable Modification of Diet in Renal Disease (4-v MDRD) and Cockcroft-Gault (CG) equations are commonly used for estimating glomerular filtration rate (GFR); however, neither of these equations has been validated in an indigenous African population. The aim of this study was to evaluate the performance of the 4-v MDRD and CG equations for estimating GFR in black South Africans against measured GFR and to assess the appropriateness for the local population of the ethnicity factor established for African Americans in the 4-v MDRD equation. We enrolled 100 patients in the study. The plasma clearance of chromium-51-EDTA (51Cr-EDTA) was used to measure GFR, and serum creatinine was measured using an isotope dilution mass spectrometry (IDMS) traceable assay. We estimated GFR using both the reexpressed 4-v MDRD and CG equations and compared it to measured GFR using 4 modalities: correlation coefficient, weighted Deming regression analysis, percentage bias, and proportion of estimated GFR within 30% of measured GFR (P30). The Spearman correlation coefficient between measured and estimated GFR for both equations was similar (4-v MDRD R^2 = 0.80 and CG R^2 = 0.79). Using the 4-v MDRD equation with the ethnicity factor of 1.212 as established for African Americans resulted in a median positive bias of 13.1 (95% CI 5.5 to 18.3) mL/min/1.73 m^2. Without the ethnicity factor, median bias was 1.9 (95% CI -0.8 to 4.5) mL/min/1.73 m^2. The 4-v MDRD equation, without the ethnicity factor of 1.212, can be used for estimating GFR in black South Africans.
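A minimal sketch of the two estimating equations compared above, written in their commonly cited forms. The abstract itself confirms only the 1.212 ethnicity factor; the remaining coefficients are quoted from the usual published versions and should be verified against the primary sources (this is an illustration, not a clinical tool).

    def mdrd_4v(scr_mg_dl, age, female, black, use_ethnicity_factor=True):
        """Re-expressed (IDMS-traceable) 4-v MDRD eGFR in mL/min/1.73 m^2 (commonly cited form)."""
        egfr = 175.0 * scr_mg_dl**-1.154 * age**-0.203
        if female:
            egfr *= 0.742
        if black and use_ethnicity_factor:
            egfr *= 1.212
        return egfr

    def cockcroft_gault(scr_mg_dl, age, weight_kg, female):
        """Cockcroft-Gault creatinine clearance in mL/min (not BSA-normalized)."""
        crcl = (140 - age) * weight_kg / (72.0 * scr_mg_dl)
        return crcl * 0.85 if female else crcl

    # The study's finding suggests dropping the 1.212 factor for this population.
    print(mdrd_4v(1.1, 45, female=False, black=True, use_ethnicity_factor=False))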
Temple, Derry; Denis, Romain; Walsh, Marianne C; Dicker, Patrick; Byrne, Annette T
2015-02-01
To evaluate the accuracy of the most commonly used anthropometric-based equations in the estimation of percentage body fat (%BF) in both normal-weight and overweight women using air-displacement plethysmography (ADP) as the criterion measure. A comparative study in which the equations of Durnin and Womersley (1974; DW) and Jackson, Pollock and Ward (1980) at three, four and seven sites (JPW₃, JPW₄ and JPW₇) were validated against ADP in three groups. Group 1 included all participants, group 2 included participants with a BMI <25·0 kg/m² and group 3 included participants with a BMI ≥25·0 kg/m². Human Performance Laboratory, Institute for Sport and Health, University College Dublin, Republic of Ireland. Forty-three female participants aged between 18 and 55 years. In all three groups, the %BF values estimated from the DW equation were closer to the criterion measure (i.e. ADP) than those estimated from the other equations. Of the three JPW equations, JPW₃ provided the most accurate estimation of %BF when compared with ADP in all three groups. In comparison to ADP, these findings suggest that the DW equation is the most accurate anthropometric method for the estimation of %BF in both normal-weight and overweight females.
NASA Technical Reports Server (NTRS)
Peters, C. (Principal Investigator)
1980-01-01
A general theorem is given which establishes the existence and uniqueness of a consistent solution of the likelihood equations given a sequence of independent random vectors whose distributions are not identical but have the same parameter set. In addition, it is shown that the consistent solution is a MLE and that it is asymptotically normal and efficient. Two applications are discussed: one in which independent observations of a normal random vector have missing components, and the other in which the parameters in a mixture from an exponential family are estimated using independent homogeneous sample blocks of different sizes.
Risser, Dennis W.; Thompson, Ronald E.; Stuckey, Marla H.
2008-01-01
A method was developed for making estimates of long-term, mean annual ground-water recharge from streamflow data at 80 streamflow-gaging stations in Pennsylvania. The method relates mean annual base-flow yield derived from the streamflow data (as a proxy for recharge) to the climatic, geologic, hydrologic, and physiographic characteristics of the basins (basin characteristics) by use of a regression equation. Base-flow yield is the base flow of a stream divided by the drainage area of the basin, expressed in inches of water basinwide. Mean annual base-flow yield was computed for the period of available streamflow record at continuous streamflow-gaging stations by use of the computer program PART, which separates base flow from direct runoff on the streamflow hydrograph. Base flow provides a reasonable estimate of recharge for basins where streamflow is mostly unaffected by upstream regulation, diversion, or mining. Twenty-eight basin characteristics were included in the exploratory regression analysis as possible predictors of base-flow yield. Basin characteristics found to be statistically significant predictors of mean annual base-flow yield during 1971-2000 at the 95-percent confidence level were (1) mean annual precipitation, (2) average maximum daily temperature, (3) percentage of sand in the soil, (4) percentage of carbonate bedrock in the basin, and (5) stream channel slope. The equation for predicting recharge was developed using ordinary least-squares regression. The standard error of prediction for the equation on log-transformed data was 9.7 percent, and the coefficient of determination was 0.80. The equation can be used to predict long-term, mean annual recharge rates for ungaged basins, providing that the explanatory basin characteristics can be determined and that the underlying assumption is accepted that base-flow yield derived from PART is a reasonable estimate of ground-water recharge rates. For example, application of the equation for 370 hydrologic units in Pennsylvania predicted a range of ground-water recharge from about 6.0 to 22 inches per year. A map of the predicted recharge illustrates the general magnitude and variability of recharge throughout Pennsylvania.
NASA Astrophysics Data System (ADS)
Castellarin, A.; Montanari, A.; Brath, A.
2002-12-01
The study derives Regional Depth-Duration-Frequency (RDDF) equations for a wide region of northern-central Italy (37,200 km²) by following an adaptation of the approach originally proposed by Alila [WRR, 36(7), 2000]. The proposed RDDF equations have a rather simple structure and allow an estimation of the design storm, defined as the rainfall depth expected for a given storm duration and recurrence interval, in any location of the study area for storm durations from 1 to 24 hours and for recurrence intervals up to 100 years. The reliability of the proposed RDDF equations represents the main concern of the study and it is assessed at two different levels. The first level considers the gauged sites and compares estimates of the design storm obtained with the RDDF equations with at-site estimates based upon the observed annual maximum series of rainfall depth and with design storm estimates resulting from a regional estimator recently developed for the study area through a Hierarchical Regional Approach (HRA) [Gabriele and Arnell, WRR, 27(6), 1991]. The second level performs a reliability assessment of the RDDF equations for ungauged sites by means of a jack-knife procedure. Using the HRA estimator as a reference term, the jack-knife procedure assesses the reliability of design storm estimates provided by the RDDF equations for a given location when dealing with the complete absence of pluviometric information. The results of the analysis show that the proposed RDDF equations represent practical and effective computational means for producing a first guess of the design storm at the available raingauges and reliable design storm estimates for ungauged locations. The first author gratefully acknowledges D.H. Burn for sponsoring the submission of the present abstract.
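A minimal sketch of the jack-knife idea used above: each gauged site is left out in turn, the regression is re-fitted without it, and the prediction at the omitted site is compared with its at-site value. The single-predictor linear model and the synthetic data are illustrative assumptions, not the study's RDDF equations.

    import numpy as np

    rng = np.random.default_rng(7)
    elev = rng.uniform(0, 1500, 30)                        # e.g. raingauge elevation (m)
    h24_100yr = 60 + 0.04 * elev + rng.normal(0, 5, 30)    # at-site 24-h, 100-yr depth (mm)

    errors = []
    for i in range(len(elev)):
        keep = np.arange(len(elev)) != i
        coef = np.polyfit(elev[keep], h24_100yr[keep], 1)  # re-fit without site i
        pred = np.polyval(coef, elev[i])
        errors.append((pred - h24_100yr[i]) / h24_100yr[i])

    print("jack-knife mean absolute relative error: %.1f%%" % (100 * np.mean(np.abs(errors))))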
Estimating value and volume of ponderosa pine trees by equations.
Martin E. Plank
1981-01-01
Equations for estimating the selling value and tally volume for ponderosa pine lumber from the standing trees are described. Only five characteristics are required for the equations. Development and application of the system are described.
Image Motion Detection And Estimation: The Modified Spatio-Temporal Gradient Scheme
NASA Astrophysics Data System (ADS)
Hsin, Cheng-Ho; Inigo, Rafael M.
1990-03-01
The detection and estimation of motion are generally involved in computing a velocity field of time-varying images. A completely new modified spatio-temporal gradient scheme to determine motion is proposed. This is derived by using gradient methods and properties of biological vision. A set of general constraints is proposed to derive motion constraint equations. The constraints are that the second directional derivatives of image intensity at an edge point in the smoothed image will be constant at times t and t+L. This scheme basically has two stages: spatio-temporal filtering, and velocity estimation. Initially, image sequences are processed by a set of oriented spatio-temporal filters which are designed using a Gaussian derivative model. The velocity is then estimated for these filtered image sequences based on the gradient approach. From a computational standpoint, this scheme offers at least three advantages over current methods. The greatest advantage of the modified spatio-temporal gradient scheme over the traditional ones is that an infinite number of motion constraint equations are derived instead of only one. Therefore, it solves the aperture problem without requiring any additional assumptions and is simply a local process. The second advantage is that because of the spatio-temporal filtering, the direct computation of image gradients (discrete derivatives) is avoided. Therefore the error in gradient measurement is reduced significantly. The third advantage is that during the processing of the motion detection and estimation algorithm, image features (edges) are produced concurrently with motion information. The reliable range of detected velocity is determined by parameters of the oriented spatio-temporal filters. Knowing the velocity sensitivity of a single motion detection channel, a multiple-channel mechanism for estimating image velocity, seldom addressed by other motion schemes in machine vision, can be constructed by appropriately choosing and combining different sets of parameters. By applying this mechanism, a wide range of velocities can be detected. The scheme has been tested for both synthetic and real images. The results of simulations are very satisfactory.
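For context, a minimal sketch of the traditional spatio-temporal gradient approach that the modified scheme above builds on: the single brightness-constancy constraint Ix*u + Iy*v + It = 0, solved in the least-squares sense over a local window. This is the classical baseline, not the authors' oriented-filter scheme; the patch and displacement are illustrative.

    import numpy as np

    def local_flow(I0, I1):
        """Estimate one (u, v) for a small patch from two consecutive frames."""
        Ix = 0.5 * (np.roll(I0, -1, axis=1) - np.roll(I0, 1, axis=1))   # central differences
        Iy = 0.5 * (np.roll(I0, -1, axis=0) - np.roll(I0, 1, axis=0))
        It = I1 - I0
        A = np.column_stack([Ix.ravel(), Iy.ravel()])
        b = -It.ravel()
        return np.linalg.lstsq(A, b, rcond=None)[0]    # least-squares (u, v)

    # Synthetic test: a smooth blob translated by about (1, 0) pixels between frames.
    y, x = np.mgrid[0:15, 0:15]
    I0 = np.exp(-((x - 7.0)**2 + (y - 7.0)**2) / 20.0)
    I1 = np.exp(-((x - 8.0)**2 + (y - 7.0)**2) / 20.0)
    print(local_flow(I0, I1))   # should be close to (1, 0)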
Local regularity for time-dependent tug-of-war games with varying probabilities
NASA Astrophysics Data System (ADS)
Parviainen, Mikko; Ruosteenoja, Eero
2016-07-01
We study local regularity properties of value functions of time-dependent tug-of-war games. For games with constant probabilities we get local Lipschitz continuity. For more general games with probabilities depending on space and time we obtain Hölder and Harnack estimates. The games have a connection to the normalized p(x,t)-parabolic equation u_t = Δu + (p(x,t) - 2) Δ_∞^N u.
Generalized Distributed Consensus-based Algorithms for Uncertain Systems and Networks
2010-01-01
NASA Astrophysics Data System (ADS)
Deetjen, Thomas A.; Reimers, Andrew S.; Webber, Michael E.
2018-02-01
This study estimates changes in grid-wide energy consumption caused by load shifting via cooling thermal energy storage (CTES) in the building sector. It develops a general equation relating generator fleet fuel consumption to building cooling demand as a function of ambient temperature, relative humidity, transmission and distribution current, and baseline power plant efficiency. The results present a graphical sensitivity analysis that can be used to estimate how shifting load from cooling demand to cooling storage could affect overall grid-wide energy consumption. In particular, because power plants, air conditioners, and transmission systems all have higher efficiencies at cooler ambient temperatures, it is possible to identify operating conditions such that CTES increases system efficiency rather than decreasing it, as is typical for conventional storage approaches. A case study of the Dallas-Fort Worth metro area in Texas, USA shows that using CTES to shift daytime cooling load to nighttime cooling storage can reduce annual system-wide primary fuel consumption by 17.6 MWh for each MWh of installed CTES capacity. The study concludes that, under the right circumstances, cooling thermal energy storage can reduce grid-wide energy consumption, challenging the perception of energy storage as a net energy consumer.
Chen, Yuh-Min; Ji, Jeng-Yi
2015-09-01
This preliminary study examined the effect of horticultural therapy on psychosocial health in older nursing home residents. A combined quantitative and qualitative design was adopted. Convenience sampling was used to recruit 10 older residents from a nursing home in Taichung, Taiwan. Participants joined a 10-week indoor horticultural program once a week, with each session lasting about 1.5 hours. A single-group design with multiple measurements was adopted for the quantitative component of this study. Interviews held 1-2 days before the intervention (T0) were used to collect baseline data. The two outcome variables of this study, depression and loneliness, were reassessed during the 5th (T1) and 10th (T2) weeks of the intervention. Generalized estimating equations were used to test the mean differences among the T0, T1, and T2 measures. After the 10-week program, qualitative data were collected by asking participants to share their program participation experiences. The results of the generalized estimating equation analysis showed significant improvements in depression and loneliness. Four categories emerged from the qualitative data content analysis: social connection, anticipation and hope, sense of achievement, and companionship. Given the beneficial effects of the horticultural therapy, the inclusion of horticultural activities in nursing home activity programs is recommended.
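A minimal sketch of the analysis strategy named above: a generalized estimating equation model for repeated outcome scores measured at T0, T1, and T2 on the same residents, with an exchangeable working correlation. The variable names, effect sizes, and data are placeholders, not the study's dataset.

    import numpy as np
    import pandas as pd
    import statsmodels.api as sm
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(5)
    n = 10
    df = pd.DataFrame({
        "resident": np.repeat(np.arange(n), 3),
        "time": np.tile([0, 1, 2], n),                  # T0, T1, T2
    })
    # Synthetic depression scores that decline over the intervention.
    df["depression"] = 15 - 2.0 * df["time"] + rng.normal(0, 2, 3 * n)

    model = smf.gee("depression ~ time", groups="resident", data=df,
                    cov_struct=sm.cov_struct.Exchangeable(),
                    family=sm.families.Gaussian())
    print(model.fit().summary())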
Accounting for Multiple Births in Neonatal and Perinatal Trials: Systematic Review and Case Study
Hibbs, Anna Maria; Black, Dennis; Palermo, Lisa; Cnaan, Avital; Luan, Xianqun; Truog, William E; Walsh, Michele C; Ballard, Roberta A
2010-01-01
Objectives: To determine the prevalence in the neonatal literature of statistical approaches accounting for the unique clustering patterns of multiple births. To explore the sensitivity of an actual trial to several analytic approaches to multiples. Methods: A systematic review of recent perinatal trials assessed the prevalence of studies accounting for clustering of multiples. The NO CLD trial served as a case study of the sensitivity of the outcome to several statistical strategies. We calculated odds ratios using non-clustered (logistic regression) and clustered (generalized estimating equations, multiple outputation) analyses. Results: In the systematic review, most studies did not describe the randomization of twins and did not account for clustering. Of those studies that did, exclusion of multiples and generalized estimating equations were the most common strategies. The NO CLD study included 84 infants with a sibling enrolled in the study. Multiples were more likely than singletons to be white and were born to older mothers (p<0.01). Analyses that accounted for clustering were statistically significant; analyses assuming independence were not. Conclusions: The statistical approach to multiples can influence the odds ratio and width of confidence intervals, thereby affecting the interpretation of a study outcome. A minority of perinatal studies address this issue. PMID:19969305
Accounting for multiple births in neonatal and perinatal trials: systematic review and case study.
Hibbs, Anna Maria; Black, Dennis; Palermo, Lisa; Cnaan, Avital; Luan, Xianqun; Truog, William E; Walsh, Michele C; Ballard, Roberta A
2010-02-01
To determine the prevalence in the neonatal literature of statistical approaches accounting for the unique clustering patterns of multiple births and to explore the sensitivity of an actual trial to several analytic approaches to multiples. A systematic review of recent perinatal trials assessed the prevalence of studies accounting for clustering of multiples. The Nitric Oxide to Prevent Chronic Lung Disease (NO CLD) trial served as a case study of the sensitivity of the outcome to several statistical strategies. We calculated odds ratios using nonclustered (logistic regression) and clustered (generalized estimating equations, multiple outputation) analyses. In the systematic review, most studies did not describe the random assignment of twins and did not account for clustering. Of those studies that did, exclusion of multiples and generalized estimating equations were the most common strategies. The NO CLD study included 84 infants with a sibling enrolled in the study. Multiples were more likely than singletons to be white and were born to older mothers (P < .01). Analyses that accounted for clustering were statistically significant; analyses assuming independence were not. The statistical approach to multiples can influence the odds ratio and width of confidence intervals, thereby affecting the interpretation of a study outcome. A minority of perinatal studies address this issue. Copyright 2010 Mosby, Inc. All rights reserved.
Parent-child agreement regarding children's acute stress: the role of parent acute stress reactions.
Kassam-Adams, Nancy; García-España, J Felipe; Miller, Victoria A; Winston, Flaura
2006-12-01
We examined parent-child agreement regarding child acute stress disorder (ASD) and the relationship between parent ASD symptoms and parent ratings of child ASD. Parent-child dyads (N = 219; child age 8-17 years) were assessed within 1 month of child injury. Parent-child agreement was examined regarding child ASD presence, severity, and specific symptoms. Relationships among parent ASD and parent- and child-reported child ASD were examined using regression analysis and generalized estimating equations (GEE). Parent-child agreement was low for presence of child ASD (kappa = 0.22) and for individual symptoms. Parent and child ratings of child ASD severity were moderately correlated (r = 0.35). Parent ASD was independently associated with parent-rated child ASD, after accounting for child self-rating (beta = 0.65). Generalized estimating equations indicated that parents with ASD overestimated child ASD and parents without ASD underestimated child ASD, compared to the child's self-rating. Parents' own responses to a potentially traumatic event appear to influence their assessment of child symptoms. Clinicians should obtain child self-report of ASD whenever possible and take parent symptoms into account when interpreting parent reports. Helping parents to assess a child's needs following a potentially traumatic event may be a relevant target for clinical attention.
Universal equation for estimating ideal body weight and body weight at any BMI
Peterson, Courtney M; Thomas, Diana M; Blackburn, George L; Heymsfield, Steven B
2016-01-01
Background: Ideal body weight (IBW) equations and body mass index (BMI) ranges have both been used to delineate healthy or normal weight ranges, although these 2 different approaches are at odds with each other. In particular, past IBW equations are misaligned with BMI values, and unlike BMI, the equations have failed to recognize that there is a range of ideal or target body weights. Objective: For the first time, to our knowledge, we merged the concepts of a linear IBW equation and of defining target body weights in terms of BMI. Design: With the use of calculus and approximations, we derived an easy-to-use linear equation that clinicians can use to calculate both IBW and body weight at any target BMI value. We measured the empirical accuracy of the equation with the use of NHANES data and performed a comparative analysis with past IBW equations. Results: Our linear equation allowed us to calculate body weights for any BMI and height with a mean empirical accuracy of 0.5–0.7% on the basis of NHANES data. Moreover, we showed that our body weight equation directly aligns with BMI values for both men and women, which avoids the overestimation and underestimation problems at the upper and lower ends of the height spectrum that have plagued past IBW equations. Conclusions: Our linear equation increases the sophistication of IBW equations by replacing them with a single universal equation that calculates both IBW and body weight at any target BMI and height. Therefore, our equation is compatible with BMI and can be applied with the use of mental math or a calculator without the need for an app, which makes it a useful tool for both health practitioners and the general public. PMID:27030535
Universal equation for estimating ideal body weight and body weight at any BMI.
Peterson, Courtney M; Thomas, Diana M; Blackburn, George L; Heymsfield, Steven B
2016-05-01
Ideal body weight (IBW) equations and body mass index (BMI) ranges have both been used to delineate healthy or normal weight ranges, although these 2 different approaches are at odds with each other. In particular, past IBW equations are misaligned with BMI values, and unlike BMI, the equations have failed to recognize that there is a range of ideal or target body weights. For the first time, to our knowledge, we merged the concepts of a linear IBW equation and of defining target body weights in terms of BMI. With the use of calculus and approximations, we derived an easy-to-use linear equation that clinicians can use to calculate both IBW and body weight at any target BMI value. We measured the empirical accuracy of the equation with the use of NHANES data and performed a comparative analysis with past IBW equations. Our linear equation allowed us to calculate body weights for any BMI and height with a mean empirical accuracy of 0.5-0.7% on the basis of NHANES data. Moreover, we showed that our body weight equation directly aligns with BMI values for both men and women, which avoids the overestimation and underestimation problems at the upper and lower ends of the height spectrum that have plagued past IBW equations. Our linear equation increases the sophistication of IBW equations by replacing them with a single universal equation that calculates both IBW and body weight at any target BMI and height. Therefore, our equation is compatible with BMI and can be applied with the use of mental math or a calculator without the need for an app, which makes it a useful tool for both health practitioners and the general public. © 2016 American Society for Nutrition.
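A minimal sketch contrasting the exact body weight at a target BMI, weight = BMI × height², with a linear-in-height form of the kind the article describes. The coefficients in weight_linear are illustrative placeholders, not the validated published equation; see the original article for the latter.

    def weight_exact(bmi, height_m):
        """Exact body weight (kg) at a target BMI, from the BMI definition."""
        return bmi * height_m**2

    def weight_linear(bmi, height_m, ref_height=1.5, slope=3.5, base=2.2):
        """Hypothetical linear form: base*BMI + slope*BMI*(height - ref_height)."""
        return base * bmi + slope * bmi * (height_m - ref_height)

    for h in (1.55, 1.70, 1.85):   # compare the two forms at a target BMI of 25
        print(h, round(weight_exact(25, h), 1), round(weight_linear(25, h), 1))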
Identifiability of large-scale non-linear dynamic network models applied to the ADM1-case study.
Nimmegeers, Philippe; Lauwers, Joost; Telen, Dries; Logist, Filip; Impe, Jan Van
2017-06-01
In this work, both the structural and practical identifiability of the Anaerobic Digestion Model no. 1 (ADM1) is investigated, which serves as a relevant case study of large non-linear dynamic network models. The structural identifiability is investigated using the probabilistic algorithm, adapted to deal with the specifics of the case study (i.e., a large-scale non-linear dynamic system of differential and algebraic equations). The practical identifiability is analyzed using a Monte Carlo parameter estimation procedure for a 'non-informative' and 'informative' experiment, which are heuristically designed. The model structure of ADM1 has been modified by replacing parameters by parameter combinations, to provide a generally locally structurally identifiable version of ADM1. This means that in an idealized theoretical situation, the parameters can be estimated accurately. Furthermore, the generally positive structural identifiability results can be explained from the large number of interconnections between the states in the network structure. This interconnectivity, however, is also observed in the parameter estimates, making uncorrelated parameter estimations in practice difficult. Copyright © 2017. Published by Elsevier Inc.
Gurka, Matthew J; Kuperminc, Michelle N; Busby, Marjorie G; Bennis, Jacey A; Grossberg, Richard I; Houlihan, Christine M; Stevenson, Richard D; Henderson, Richard C
2010-02-01
To assess the accuracy of skinfold equations in estimating percentage body fat in children with cerebral palsy (CP), compared with assessment of body fat from dual energy X-ray absorptiometry (DXA). Data were collected from 71 participants (30 females, 41 males) with CP (Gross Motor Function Classification System [GMFCS] levels I-V) between the ages of 8 and 18 years. Estimated percentage body fat was computed using established (Slaughter) equations based on the triceps and subscapular skinfolds. A linear model was fitted to assess the use of a simple correction to these equations for children with CP. Slaughter's equations consistently underestimated percentage body fat (mean difference compared with DXA percentage body fat -9.6/100 [SD 6.2]; 95% confidence interval [CI] -11.0 to -8.1). New equations were developed in which a correction factor was added to the existing equations based on sex, race, GMFCS level, size, and pubertal status. These corrected equations for children with CP agree better with DXA (mean difference 0.2/100 [SD=4.8]; 95% CI -1.0 to 1.3) than existing equations. A simple correction factor to commonly used equations substantially improves the ability to estimate percentage body fat from two skinfold measures in children with CP.
Norwich, K H
2001-10-01
One can relate the saltiness of a solution of a given substance to the concentration of the solution by means of one of the well-known psychophysical laws. One can also compare the saltiness of solutions of different solutes which have the same concentration, since different substances are intrinsically more salty or less salty. We develop here an equation that relates saltiness both to the concentration of the substance (psychophysical) and to a distinguishing physical property of the salt (intrinsic). For a fixed standard molar entropy of the salt being tasted, the equation simplifies to Fechner's law. When one allows for the intrinsic 'noise' in the chemoreceptor, the equation generalizes to include Stevens's law, with corresponding decrease in the threshold for taste. This threshold reduction exemplifies the principle of stochastic resonance. The theory is validated with reference to experimental data.
Quantification of uncertainty for fluid flow in heterogeneous petroleum reservoirs
NASA Astrophysics Data System (ADS)
Zhang, Dongxiao
Detailed description of the heterogeneity of oil/gas reservoirs is needed to make performance predictions of oil/gas recovery. However, only limited measurements at a few locations are usually available. This combination of significant spatial heterogeneity with incomplete information about it leads to uncertainty about the values of reservoir properties and thus, to uncertainty in estimates of production potential. The theory of stochastic processes provides a natural method for evaluating these uncertainties. In this study, we present a stochastic analysis of transient, single phase flow in heterogeneous reservoirs. We derive general equations governing the statistical moments of flow quantities by perturbation expansions. These moments can be used to construct confidence intervals for the flow quantities (e.g., pressure and flow rate). The moment equations are deterministic and can be solved numerically with existing solvers. The proposed moment equation approach has certain advantages over the commonly used Monte Carlo approach.
Martínez-López, Brais; Gontard, Nathalie; Peyron, Stéphane
2018-03-01
A reliable prediction of migration levels of plastic additives into food requires a robust estimation of diffusivity. Predictive modelling of diffusivity as recommended by the EU commission is carried out using a semi-empirical equation that relies on two polymer-dependent parameters. These parameters were determined for the polymers most used by the packaging industry (LLDPE, HDPE, PP, PET, PS, HIPS) from the diffusivity data available at that time. In the specific case of general purpose polystyrene, the diffusivity data published since then show that the use of the equation with the original parameters results in systematic underestimation of diffusivity. The goal of this study was therefore to propose an update of the aforementioned parameters for PS on the basis of up-to-date diffusivity data, so the equation can be used for a reasoned overestimation of diffusivity.
Lax representations for matrix short pulse equations
NASA Astrophysics Data System (ADS)
Popowicz, Z.
2017-10-01
The Lax representation for different matrix generalizations of Short Pulse Equations (SPEs) is considered. The four-dimensional Lax representations of four-component Matsuno, Feng, and Dimakis-Müller-Hoissen-Matsuno equations are obtained. The four-component Feng system is defined by generalization of the two-dimensional Lax representation to the four-component case. This system reduces to the original Feng equation, to the two-component Matsuno equation, or to the Yao-Zang equation. The three-component version of the Feng equation is presented. The four-component version of the Matsuno equation with its Lax representation is given. This equation reduces to the new two-component Feng system. The two-component Dimakis-Müller-Hoissen-Matsuno equations are generalized to the four-parameter family of the four-component SPE. The bi-Hamiltonian structure of this generalization, for special values of parameters, is defined. This four-component SPE in special cases reduces to the new two-component SPE.
Evolution of the concentration PDF in random environments modeled by global random walk
NASA Astrophysics Data System (ADS)
Suciu, Nicolae; Vamos, Calin; Attinger, Sabine; Knabner, Peter
2013-04-01
The evolution of the probability density function (PDF) of concentrations of chemical species transported in random environments is often modeled by ensembles of notional particles. The particles move in physical space along stochastic-Lagrangian trajectories governed by Ito equations, with drift coefficients given by the local values of the resolved velocity field and diffusion coefficients obtained by stochastic or space-filtering upscaling procedures. A general model for the sub-grid mixing also can be formulated as a system of Ito equations solving for trajectories in the composition space. The PDF is finally estimated by the number of particles in space-concentration control volumes. In spite of their efficiency, Lagrangian approaches suffer from two severe limitations. Since the particle trajectories are constructed sequentially, the demanded computing resources increase linearly with the number of particles. Moreover, the need to gather particles at the center of computational cells to perform the mixing step and to estimate statistical parameters, as well as the interpolation of various terms to particle positions, inevitably produce numerical diffusion in either particle-mesh or grid-free particle methods. To overcome these limitations, we introduce a global random walk method to solve the system of Ito equations in physical and composition spaces, which models the evolution of the random concentration's PDF. The algorithm consists of a superposition on a regular lattice of many weak Euler schemes for the set of Ito equations. Since all particles starting from a site of the space-concentration lattice are spread in a single numerical procedure, one obtains PDF estimates at the lattice sites at computational costs comparable with those for solving the system of Ito equations associated to a single particle. The new method avoids the limitations concerning the number of particles in Lagrangian approaches, completely removes the numerical diffusion, and speeds up the computation by orders of magnitude. The approach is illustrated for the transport of passive scalars in heterogeneous aquifers, with hydraulic conductivity modeled as a random field.
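A minimal 1-D sketch of the global random walk idea described above: instead of tracking particles one by one, the particles occupying each lattice site are split in a few binomial draws among the neighboring sites, realizing a weak Euler scheme for dX = v dt + sqrt(2 D dt) dW and yielding a PDF estimate directly on the lattice. The parameters are illustrative, and the full space-concentration algorithm of the paper is not reproduced here.

    import numpy as np

    rng = np.random.default_rng(6)
    L, dx, dt = 400, 1.0, 0.1
    v, D = 0.5, 0.5
    p_left = D * dt / dx**2 - v * dt / (2 * dx)    # jump probabilities of the weak Euler scheme
    p_right = D * dt / dx**2 + v * dt / (2 * dx)
    p_stay = 1.0 - p_left - p_right
    assert min(p_left, p_right, p_stay) >= 0.0      # lattice consistency condition

    n = np.zeros(L, dtype=np.int64)
    n[L // 4] = 10**6                               # 1e6 particles released at one lattice site

    for _ in range(1000):
        stay = rng.binomial(n, p_stay)              # all "stayers" of a site in one draw
        movers = n - stay
        right = rng.binomial(movers, p_right / (p_left + p_right))
        left = movers - right
        new = stay.copy()
        new[1:] += right[:-1]                       # right-jumpers of each site move together
        new[:-1] += left[1:]                        # left-jumpers likewise
        n = new

    pdf = n / (n.sum() * dx)                        # concentration PDF estimated on the lattice
    mean_x = np.sum(np.arange(L) * dx * pdf) * dx
    print("mean plume position:", mean_x, "expected:", L // 4 * dx + v * 1000 * dt)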
Yi, S.; Li, N.; Xiang, B.; Wang, X.; Ye, B.; McGuire, A.D.
2013-01-01
Soil surface temperature is a critical boundary condition for the simulation of soil temperature by environmental models. It is influenced by atmospheric and soil conditions and by vegetation cover. In sophisticated land surface models, it is simulated iteratively by solving surface energy budget equations. In ecosystem, permafrost, and hydrology models, the consideration of soil surface temperature is generally simple. In this study, we developed a methodology for representing the effects of vegetation cover and atmospheric factors on the estimation of soil surface temperature for alpine grassland ecosystems on the Qinghai-Tibetan Plateau. Our approach integrated measurements from meteorological stations with simulations from a sophisticated land surface model to develop an equation set for estimating soil surface temperature. After implementing this equation set into an ecosystem model and evaluating the performance of the ecosystem model in simulating soil temperature at different depths in the soil profile, we applied the model to simulate interactions among vegetation cover, freeze-thaw cycles, and soil erosion to demonstrate potential applications made possible through the implementation of the methodology developed in this study. Results showed that (1) to properly estimate daily soil surface temperature, algorithms should use air temperature, downward solar radiation, and vegetation cover as independent variables; (2) the equation set developed in this study performed better than soil surface temperature algorithms used in other models; and (3) the ecosystem model performed well in simulating soil temperature throughout the soil profile using the equation set developed in this study. Our application of the model indicates that the representation in ecosystem models of the effects of vegetation cover on the simulation of soil thermal dynamics has the potential to substantially improve our understanding of the vulnerability of alpine grassland ecosystems to changes in climate and grazing regimes.
Validation of Core Temperature Estimation Algorithm
2016-01-29
[Figure captions] (a) Observed versus estimated core temperature, and observed versus estimated PSI, each plotted with the line of identity (dashed), the least-squares regression line (solid), and the regression equation in the top left corner. (b) Bland-Altman plots for comparison. The root mean squared error (RMSE) was also computed, as given by Equation 2.
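For reference, the RMSE cited as Equation 2 presumably has the standard form (a reconstruction; the report's own equation is not reproduced in this excerpt)

\mathrm{RMSE} = \sqrt{\tfrac{1}{n}\sum_{i=1}^{n}\bigl(\hat{T}_{c,i} - T_{c,i}\bigr)^{2}},

where T_{c,i} is the observed core temperature, \hat{T}_{c,i} the estimated value, and n the number of paired observations.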
Gradient estimates on the weighted p-Laplace heat equation
NASA Astrophysics Data System (ADS)
Wang, Lin Feng
2018-01-01
In this paper, by a regularization process we derive new gradient estimates for positive solutions to the weighted p-Laplace heat equation when the m-Bakry-Émery curvature is bounded from below by -K for some constant K ≥ 0. When the potential function is constant, these estimates reduce, as K ↘ 0, to the gradient estimate established by Ni and Kotschwar for positive solutions to the p-Laplace heat equation on closed manifolds with nonnegative Ricci curvature, and, for p = 2, to the gradient estimates of Davies, Hamilton, and Li-Xu for positive solutions to the heat equation on closed manifolds with Ricci curvature bounded from below.
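In the notation commonly used for weighted manifolds (an assumption about conventions, since the abstract does not state them), the equation and curvature condition studied here are

\partial_t u = \Delta_{p,f}\, u := e^{f}\,\operatorname{div}\!\bigl(e^{-f}\,|\nabla u|^{p-2}\nabla u\bigr), \qquad \mathrm{Ric}_f^m := \mathrm{Ric} + \mathrm{Hess}\, f - \frac{df \otimes df}{m-n} \ge -K,

where f is the potential function; taking f constant recovers the ordinary p-Laplace heat equation, and p = 2 recovers the linear heat equation.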
NASA Astrophysics Data System (ADS)
Yamaguchi, Makoto; Midorikawa, Saburoh
The empirical equation for estimating the site amplification factor of ground motion from the average shear-wave velocity of the ground (AVS) is examined. In existing equations, the coefficient describing the dependence of the amplification factor on the AVS is treated as constant. The analysis showed that, at short periods, this coefficient varies with the AVS. A new estimation equation that accounts for this dependence was proposed. The new equation can represent the soil characteristic that softer soil has a longer predominant period, and gives better estimates at short periods than the existing method.
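Such amplification equations are typically log-linear in the AVS; a hypothetical example of the form discussed (not the authors' fitted equation) is

\log A(T) = a(T) + b(T)\,\log \mathrm{AVS},

where A(T) is the amplification factor at period T. Existing equations fix b(T) for each period, whereas the proposed equation allows the coefficient to vary with the AVS itself at short periods.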
Convergence of Newton's method for a single real equation
NASA Technical Reports Server (NTRS)
Campbell, C. W.
1985-01-01
Newton's method for finding the zeroes of a single real function is investigated in some detail. Convergence is generally checked using the Contraction Mapping Theorem, which yields sufficient but not necessary conditions for convergence of the general single point iteration method. The resulting convergence intervals are frequently considerably smaller than the actual convergence zones. For a specific single point iteration method, such as Newton's method, better estimates of the regions of convergence should be possible. A technique is described which, under certain conditions (frequently satisfied by well-behaved functions), gives much larger zones where convergence is guaranteed.
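As a minimal sketch of the single-point iteration under discussion (the convergence-zone analysis itself is not reproduced), Newton's method for one real equation reads:

def newton(f, fprime, x0, tol=1e-12, max_iter=50):
    """Newton's method for a single real equation f(x) = 0."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / fprime(x)   # the single-point iteration x -> x - f(x)/f'(x)
        x -= step
        if abs(step) < tol:
            return x
    raise RuntimeError("no convergence within max_iter iterations")

# Example: starting from x0 = 1.5 the iteration converges rapidly to sqrt(2).
root = newton(lambda x: x**2 - 2.0, lambda x: 2.0 * x, x0=1.5)

A contraction-mapping check would require |g'(x)| = |f(x) f''(x) / f'(x)^2| < 1 on an interval around the root, a sufficient but often pessimistic condition, which is the point the abstract makes.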
Development of a winter wheat adjustable crop calendar model
NASA Technical Reports Server (NTRS)
Baker, J. R. (Principal Investigator)
1978-01-01
The author has identified the following significant results. After parameter estimation, tests were conducted with variances from the fits, and on independent data. From these tests, it was generally concluded that exponential functions have little advantage over polynomials. Precipitation was not found to significantly affect the fits. Robertson's triquadratic form, in general use for spring wheat, was found to show promise for winter wheat, but special techniques and care were required for its use. In most instances, equations with nonlinear effects were found to yield erratic results when used with daily environmental values as independent variables.
Unified Computational Methods for Regression Analysis of Zero-Inflated and Bound-Inflated Data
Yang, Yan; Simpson, Douglas
2010-01-01
Bounded data with excess observations at the boundary are common in many areas of application. Various individual cases of inflated mixture models have been studied in the literature for bound-inflated data, yet the computational methods have been developed separately for each type of model. In this article we use a common framework for computing these models, and expand the range of models for both discrete and semi-continuous data with point inflation at the lower boundary. The quasi-Newton and EM algorithms are adapted and compared for estimation of model parameters. The numerical Hessian and generalized Louis method are investigated as means for computing standard errors after optimization. Correlated data are included in this framework via generalized estimating equations. The estimation of parameters and effectiveness of standard errors are demonstrated through simulation and in the analysis of data from an ultrasound bioeffect study. The unified approach enables reliable computation for a wide class of inflated mixture models and comparison of competing models. PMID:20228950
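The point-inflated mixture referred to above has the generic form (standard notation for such models, not necessarily the article's): with probability \pi the response sits at the boundary value y_0, and otherwise it follows a parametric distribution f(y; \theta), so that

P(Y = y_0) = \pi + (1 - \pi)\, f(y_0; \theta), \qquad p(y) = (1 - \pi)\, f(y; \theta) \quad \text{for } y > y_0,

where f may be discrete (e.g., Poisson, giving the zero-inflated Poisson model) or continuous on (y_0, \infty) for semi-continuous data, in which case f places no mass at y_0.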
Bayesian source term estimation of atmospheric releases in urban areas using LES approach.
Xue, Fei; Kikumoto, Hideki; Li, Xiaofeng; Ooka, Ryozo
2018-05-05
The estimation of source information from limited measurements of a sensor network is a challenging inverse problem, which can be viewed as an assimilation process of the observed concentration data and the predicted concentration data. When dealing with releases in built-up areas, the predicted data are generally obtained by the Reynolds-averaged Navier-Stokes (RANS) equations, which yields building-resolving results; however, RANS-based models are outperformed by large-eddy simulation (LES) in the predictions of both airflow and dispersion. Therefore, it is important to explore the possibility of improving the estimation of the source parameters by using the LES approach. In this paper, a novel source term estimation method is proposed based on LES approach using Bayesian inference. The source-receptor relationship is obtained by solving the adjoint equations constructed using the time-averaged flow field simulated by the LES approach based on the gradient diffusion hypothesis. A wind tunnel experiment with a constant point source downwind of a single building model is used to evaluate the performance of the proposed method, which is compared with that of the existing method using a RANS model. The results show that the proposed method reduces the errors of source location and releasing strength by 77% and 28%, respectively. Copyright © 2018 Elsevier B.V. All rights reserved.
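In outline, and in standard Bayesian notation rather than the paper's exact formulation, the source parameters \theta (location and release rate) are inferred from the sensor data d via

p(\theta \mid d) \propto p(d \mid \theta)\, p(\theta), \qquad p(d \mid \theta) \propto \exp\!\Bigl(-\tfrac{1}{2}\,\bigl(d - c(\theta)\bigr)^{\top}\Sigma^{-1}\bigl(d - c(\theta)\bigr)\Bigr),

where the predicted concentrations c(\theta) come from the adjoint (source-receptor) solution driven by the LES time-averaged flow field, and \Sigma is an assumed measurement-error covariance.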
NASA Technical Reports Server (NTRS)
Ungar, E. E.; Chandiramani, K. L.; Barger, J. E.
1972-01-01
Means for predicting the fluctuating pressures acting on externally blown flap surfaces are developed on the basis of generalizations derived from non-dimensionalized empirical data. Approaches for estimation of the fatigue lives of skin-stringer and honeycomb-core sandwich flap structures are derived from vibration response analyses and panel fatigue data. Approximate expressions for fluctuating pressures, structural response, and fatigue life are combined to reveal the important parametric dependences. The two-dimensional equations of motion of multi-element flap systems are derived in general form, so that they can be specialized readily for any particular system. An approach to characterizing the excitation pressures and structural responses that makes use of space-time spectral concepts is introduced; it promises to provide useful insights as well as experimental and analytical savings.
Miyawaki, Osato; Omote, Chiaki; Matsuhira, Keiko
2015-12-01
Sol-gel transition of gelatin was analyzed as a multisite stoichiometric reaction of a gelatin molecule with water and solute molecules. The equilibrium sol-gel transition temperature, Tt, was estimated from the average of the gelation and melting temperatures measured by differential scanning calorimetry. From Tt and the melting enthalpy, ΔHsol, the equilibrium sol-to-gel ratio was estimated by the van't Hoff equation. The reciprocal form of the Wyman-Tanford equation, which describes the sol-to-gel ratio as a function of water activity, was successfully applied to obtain a good linear relationship. From this analysis, the role of water activity in the sol-gel transition of gelatin was clearly explained, and the contributions of hydration and of solute binding to gelatin molecules were discussed separately. The general solution for the free energy of gel stabilization in various solutions was obtained as a simple function of solute concentration. © 2015 Wiley Periodicals, Inc.
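Assuming the equilibrium constant K is defined as the sol-to-gel ratio with K(T_t) = 1 at the transition temperature (a standard reconstruction, not quoted from the paper), the integrated van't Hoff equation underlying this estimate is

\ln K(T) = -\frac{\Delta H_{\mathrm{sol}}}{R}\left(\frac{1}{T} - \frac{1}{T_t}\right),

so that K, and hence the sol fraction, increases with temperature for the endothermic gel-to-sol transition (\Delta H_{\mathrm{sol}} > 0).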
On the dynamics of approximating schemes for dissipative nonlinear equations
NASA Technical Reports Server (NTRS)
Jones, Donald A.
1993-01-01
Since one can rarely write down the analytical solutions to nonlinear dissipative partial differential equations (PDEs), it is important to understand whether, and in what sense, the behavior of approximating schemes for these equations reflects the true dynamics of the original equations. Further, because standard error estimates between the exact solutions and approximations coming from spectral, finite difference, or finite element schemes grow exponentially in time, such estimates provide little insight into the infinite-time behavior of a given approximating scheme. The notion of the global attractor has been useful in quantifying the infinite-time behavior of dissipative PDEs, such as the Navier-Stokes equations. Loosely speaking, the global attractor is all that remains of a sufficiently large bounded set in phase space mapped infinitely forward in time under the evolution of the PDE. Though the attractor has been shown to have some nice properties (it is compact, connected, and finite dimensional, for example), it is in general quite complicated. Nevertheless, the global attractor gives a way to understand how the infinite-time behavior of approximating schemes, such as those coming from a finite difference, finite element, or spectral method, relates to that of the original PDE. Indeed, one can often show that such approximations also have a global attractor. We therefore only need to understand how the structure of the attractor for the PDE behaves under approximation. This is by no means a trivial task. Several interesting results have been obtained in this direction. However, we will not go into the details. We mention here that approximations generally lose information about the system no matter how accurate they are. There are examples that show certain parts of the attractor may be lost by arbitrarily small perturbations of the original equations.
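The exponential growth mentioned above refers to estimates of the typical form (a generic statement of such bounds, not taken from this report)

\|u(t) - u_h(t)\| \le C(u)\, h^{k}\, e^{L t},

where u_h is the numerical approximation with discretization parameter h, k is the order of the scheme, and L is a Gronwall-type constant; the factor e^{Lt} makes the bound uninformative as t \to \infty, which is why attractor-based comparisons are used instead.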
ERIC Educational Resources Information Center
Chen, Haiwen
2012-01-01
In this article, linear item response theory (IRT) observed-score equating is compared under a generalized kernel equating framework with Levine observed-score equating for nonequivalent groups with anchor test design. Interestingly, these two equating methods are closely related despite being based on different methodologies. Specifically, when…
Application of perturbation theory to lattice calculations based on method of cyclic characteristics
NASA Astrophysics Data System (ADS)
Assawaroongruengchot, Monchai
Perturbation theory is a technique used for estimating changes in performance functionals, such as linear reaction rate ratios and the eigenvalue, caused by small variations in reactor core composition. Here the algorithm of perturbation theory is developed for the multigroup integral neutron transport problems in 2D fuel assemblies with isotropic scattering. The integral transport equation is used in the perturbative formulation because it represents the interconnecting neutronic systems of the lattice assemblies via the tracking lines. When the integral neutron transport equation is used in the formulation, one needs to solve the resulting integral transport equations for the flux importance and generalized flux importance functions. The relationship between the generalized flux importance and generalized source importance functions is defined in order to transform the generalized flux importance transport equations into the integro-differential equations for the generalized adjoints. Next we develop the adjoint and generalized adjoint transport solution algorithms based on the method of cyclic characteristics (MOCC) in the DRAGON code. In the MOCC method, the adjoint characteristics equations associated with a cyclic tracking line are formulated in such a way that a closed form for the adjoint angular function can be obtained. The MOCC method then requires only one cycle of scanning over the cyclic tracking lines in each spatial iteration. We also show that the source importance function by the CP method is mathematically equivalent to the adjoint function by the MOCC method. In order to speed up the MOCC solution algorithm, group-reduction and group-splitting techniques based on the structure of the adjoint scattering matrix are implemented. A combined forward flux/adjoint function iteration scheme, based on the group-splitting technique and the common use of a large number of variables storing tracking-line data and exponential values, is proposed to reduce the computing time when both direct and adjoint solutions are required. A problem that arises for the generalized adjoint problem is that the direct use of the negative external generalized adjoint sources in the adjoint solution algorithm results in negative generalized adjoint functions. A coupled flux biasing/decontamination scheme is applied to make the generalized adjoint functions positive using the adjoint functions in such a way that they can be used for the multigroup rebalance technique. Next we consider the application of the perturbation theory to reactor problems. Since the coolant void reactivity (CVR) is an important factor in reactor safety analysis, we have selected this parameter for optimization studies. We consider the optimization and adjoint sensitivity techniques for the adjustment of CVR at beginning of burnup cycle (BOC) and keff at end of burnup cycle (EOC) for a 2D Advanced CANDU Reactor (ACR) lattice. The sensitivity coefficients are evaluated using the perturbation theory based on the integral transport equations. Three sets of parameters for CVR-BOC and keff-EOC adjustments are studied: (1) Dysprosium density in the central pin with Uranium enrichment in the outer fuel rings, (2) Dysprosium density and Uranium enrichment both in the central pin, and (3) the same parameters as in the first case but with the objective of obtaining a negative checkerboard CVR at beginning of cycle (CBCVR-BOC).
To approximate the sensitivity coefficient at EOC, we perform constant-power burnup/depletion calculations for 600 full power days (FPD) using a slightly perturbed nuclear library and the unperturbed neutron fluxes to estimate the variation of nuclide densities at EOC. Sensitivity analyses of CVR and eigenvalue are included in the study. In addition, the optimization and adjoint sensitivity techniques are applied to the CBCVR-BOC and keff-EOC adjustment of the ACR lattices with Gadolinium in the central pin. Finally we apply these techniques to the CVR-BOC, CVR-EOC and keff-EOC adjustment of a CANDU lattice whose burnup period is extended from 300 to 450 FPD. The cases with the central pin containing either Dysprosium or Gadolinium in the natural Uranium are considered in our study. (Abstract shortened by UMI.)
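In outline, and in standard reactor-physics notation rather than the thesis's exact formulation, the sensitivity coefficients follow from first-order perturbation theory for the transport eigenvalue problem A\phi = \tfrac{1}{k}F\phi with adjoint flux \phi^{\dagger}:

\delta\!\left(\frac{1}{k}\right) \approx \frac{\bigl\langle \phi^{\dagger},\,\bigl(\delta A - \tfrac{1}{k}\,\delta F\bigr)\phi \bigr\rangle}{\bigl\langle \phi^{\dagger},\, F\phi \bigr\rangle},

where \delta A and \delta F are the changes in the loss and production operators induced by a change in a design parameter (e.g., Dysprosium density or Uranium enrichment); generalized adjoints play the analogous role for ratio functionals such as the CVR.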
Bardhan, Jaydeep P
2008-10-14
The importance of molecular electrostatic interactions in aqueous solution has motivated extensive research into physical models and numerical methods for their estimation. The computational costs associated with simulations that include many explicit water molecules have driven the development of implicit-solvent models, with generalized-Born (GB) models among the most popular of these. In this paper, we analyze a boundary-integral equation interpretation for the Coulomb-field approximation (CFA), which plays a central role in most GB models. This interpretation offers new insights into the nature of the CFA, which traditionally has been assessed using only a single point charge in the solute. The boundary-integral interpretation of the CFA allows the use of multiple point charges, or even continuous charge distributions, leading naturally to methods that eliminate the interpolation inaccuracies associated with the Still equation. This approach, which we call boundary-integral-based electrostatic estimation by the CFA (BIBEE/CFA), is most accurate when the molecular charge distribution generates a smooth normal displacement field at the solute-solvent boundary, and CFA-based GB methods perform similarly. Conversely, both methods are least accurate for charge distributions that give rise to rapidly varying or highly localized normal displacement fields. Supporting this analysis are comparisons of the reaction-potential matrices calculated using GB methods and boundary-element-method (BEM) simulations. An approximation similar to BIBEE/CFA exhibits complementary behavior, with superior accuracy for charge distributions that generate rapidly varying normal fields and poorer accuracy for distributions that produce smooth fields. This approximation, BIBEE by preconditioning (BIBEE/P), essentially generates initial guesses for preconditioned Krylov-subspace iterative BEMs. Thus, iterative refinement of the BIBEE/P results recovers the BEM solution; excellent agreement is obtained in only a few iterations. The boundary-integral-equation framework may also provide a means to derive rigorous results explaining how the empirical correction terms in many modern GB models significantly improve accuracy despite their simple analytical forms.
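For context, the Still equation referred to above is the standard generalized-Born expression (quoted from the GB literature, not from this paper):

\Delta G_{\mathrm{solv}} \approx -\frac{1}{2}\left(\frac{1}{\epsilon_{\mathrm{in}}} - \frac{1}{\epsilon_{\mathrm{out}}}\right)\sum_{i,j}\frac{q_i q_j}{f_{\mathrm{GB}}(r_{ij})}, \qquad f_{\mathrm{GB}}(r_{ij}) = \sqrt{r_{ij}^2 + R_i R_j \exp\!\left(-\frac{r_{ij}^2}{4 R_i R_j}\right)},

where the q_i are solute partial charges, the R_i are effective Born radii (obtained in most GB models via the Coulomb-field approximation), and \epsilon_{\mathrm{in}}, \epsilon_{\mathrm{out}} are the solute and solvent dielectric constants.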
NASA Astrophysics Data System (ADS)
Chung, Kyung Tae; Lee, Jong Woo
1989-08-01
A connection which is both Einstein and semisymmetric is called an SE connection, and a generalized n-dimensional Riemannian manifold on which the differential geometric structure is imposed by the tensor g_{λμ} through an SE connection is called an n-dimensional SE manifold and denoted by SEX_n. This paper is a direct continuation of earlier work. In this paper, we derive the generalized fundamental equations for the hypersubmanifold of SEX_n, including generalized Gauss formulas, generalized Weingarten equations, and generalized Gauss-Codazzi equations.
Online sequential Monte Carlo smoother for partially observed diffusion processes
NASA Astrophysics Data System (ADS)
Gloaguen, Pierre; Étienne, Marie-Pierre; Le Corff, Sylvain
2018-12-01
This paper introduces a new algorithm to approximate smoothed additive functionals of partially observed diffusion processes. The method relies on a new sequential Monte Carlo approach which allows such approximations to be computed online, i.e., as the observations are received, with a computational complexity growing linearly with the number of Monte Carlo samples. The original algorithm cannot be used in the case of partially observed stochastic differential equations, since the transition density of the latent data is usually unknown. We prove that it may be extended to partially observed continuous processes by replacing this unknown quantity with an unbiased estimator obtained, for instance, using general Poisson estimators. The resulting estimator is proved to be consistent and its performance is illustrated using data from two models.
Efficacy of generic allometric equations for estimating biomass: a test in Japanese natural forests.
Ishihara, Masae I; Utsugi, Hajime; Tanouchi, Hiroyuki; Aiba, Masahiro; Kurokawa, Hiroko; Onoda, Yusuke; Nagano, Masahiro; Umehara, Toru; Ando, Makoto; Miyata, Rie; Hiura, Tsutom
2015-07-01
Accurate estimation of tree and forest biomass is key to evaluating forest ecosystem functions and the global carbon cycle. Allometric equations that estimate tree biomass from a set of predictors, such as stem diameter and tree height, are commonly used. Most allometric equations are site specific, usually developed from a small number of trees harvested in a small area, and are either species specific or ignore interspecific differences in allometry. Due to the lack of site-specific allometries, local equations are often applied to sites for which they were not originally developed (foreign sites), sometimes leading to large errors in biomass estimates. In this study, we developed generic allometric equations for aboveground biomass and component (stem, branch, leaf, and root) biomass using large, compiled data sets of 1203 harvested trees belonging to 102 species (60 deciduous angiosperm, 32 evergreen angiosperm, and 10 evergreen gymnosperm species) from 70 boreal, temperate, and subtropical natural forests in Japan. The best generic equations provided better biomass estimates than did local equations that were applied to foreign sites. The best generic equations included explanatory variables that represent interspecific differences in allometry in addition to stem diameter, reducing error by 4-12% compared to generic equations that did not include the interspecific differences. Different explanatory variables were selected for different components. For aboveground and stem biomass, the best generic equations had species-specific wood specific gravity as an explanatory variable. For branch, leaf, and root biomass, the best equations had functional types (deciduous angiosperm, evergreen angiosperm, and evergreen gymnosperm) instead of functional traits (wood specific gravity or leaf mass per area), suggesting the importance of traits other than these, such as canopy and root architecture. Inclusion of tree height in addition to stem diameter improved the performance of the generic equation only for stem biomass and had no apparent effect on aboveground, branch, leaf, and root biomass at the site level. The development of a generic allometric equation taking account of interspecific differences is an effective approach for accurately estimating aboveground and component biomass in boreal, temperate, and subtropical natural forests.
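A hypothetical example of the functional form such generic equations take (illustrative only; the fitted coefficients and the exact variable set differ by component in the paper) is

\ln B = a + b\,\ln D + c\,\ln \rho,

where B is aboveground or stem biomass, D is stem diameter, and \rho is species-specific wood specific gravity; for branch, leaf, and root biomass the \rho term would be replaced by functional-type indicator variables, as described above.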
Estimating residual kidney function in dialysis patients without urine collection
Shafi, Tariq; Michels, Wieneke M.; Levey, Andrew S.; Inker, Lesley A.; Dekker, Friedo W.; Krediet, Raymond T.; Hoekstra, Tiny; Schwartz, George J.; Eckfeldt, John H.; Coresh, Josef
2016-01-01
Residual kidney function contributes substantially to solute clearance in dialysis patients but cannot be assessed without urine collection. We used serum filtration markers to develop dialysis-specific equations to estimate urinary urea clearance without the need for urine collection. In our development cohort, we measured 24-hour urine clearances under close supervision in 44 patients and validated these equations in 826 patients from the Netherlands Cooperative Study on the Adequacy of Dialysis. For the development and validation cohorts, median urinary urea clearance was 2.6 and 2.4 mL/min, respectively. During the 24-hour visit in the development cohort, serum β-trace protein concentrations remained in steady state but concentrations of all other markers increased. In the validation cohort, bias (median measured minus estimated clearance) was low for all equations. Precision was significantly better for the β-trace protein and β2-microglobulin equations, and accuracy was significantly greater for the β-trace protein, β2-microglobulin, and cystatin C equations, compared with the urea plus creatinine equation. Areas under the receiver operating characteristic curve for detecting measured urinary urea clearance by equation-estimated urinary urea clearance (both 2 mL/min or more) were 0.821, 0.850, and 0.796 for the β-trace protein, β2-microglobulin, and cystatin C equations, respectively; significantly greater than the 0.663 for the urea plus creatinine equation. Thus, residual renal function can be estimated in dialysis patients without urine collections. PMID:26924062
A generalized simplest equation method and its application to the Boussinesq-Burgers equation.
Sudao, Bilige; Wang, Xiaomin
2015-01-01
In this paper, a generalized simplest equation method is proposed to seek exact solutions of nonlinear evolution equations (NLEEs). In the method, we choose a solution expression with variable coefficients and a variable-coefficient auxiliary ordinary differential equation. This method can yield a Bäcklund transformation between NLEEs and a related constraint equation. By dealing with the constraint equation, we can derive an infinite number of exact solutions for NLEEs. These solutions include traveling wave solutions, non-traveling wave solutions, multi-soliton solutions, rational solutions, and other types of solutions. As applications, we obtain wide classes of exact solutions for the Boussinesq-Burgers equation by using the generalized simplest equation method.
ERIC Educational Resources Information Center
Tisdell, C. C.
2017-01-01
Solution methods to exact differential equations via integrating factors have a rich history dating back to Euler (1740) and the ideas enjoy applications to thermodynamics and electromagnetism. Recently, Azevedo and Valentino presented an analysis of the generalized Bernoulli equation, constructing a general solution by linearizing the problem…
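For reference (standard textbook material, not specific to the article), an equation M(x,y)\,dx + N(x,y)\,dy = 0 is exact when \partial M/\partial y = \partial N/\partial x, and an integrating factor \mu(x,y) is chosen so that \mu M\,dx + \mu N\,dy = 0 becomes exact; for the Bernoulli equation y' + p(x)\,y = q(x)\,y^{n}, the substitution v = y^{1-n} reduces the problem to a linear first-order equation.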
[Comparison of three stand-level biomass estimation methods].
Dong, Li Hu; Li, Feng Ri
2016-12-01
At present, methods for estimating forest biomass at regional scales attract considerable research attention, and developing stand-level biomass models is a popular approach. Based on forest inventory data for larch (Larix olgensis) plantations in Jilin Province, we used nonlinear seemingly unrelated regression (NSUR) to estimate the parameters of two additive systems of stand-level biomass equations, i.e., stand-level biomass equations including stand variables and stand biomass equations including the biomass expansion factor (Model system 1 and Model system 2, respectively), tabulated the constant biomass expansion factor for larch plantations, and compared the prediction accuracy of the three stand-level biomass estimation methods. The results indicated that for the two additive systems of biomass equations, the adjusted coefficient of determination (R_a^2) of the total and stem equations exceeded 0.95, and the root mean squared error (RMSE), mean prediction error (MPE), and mean absolute error (MAE) were smaller. The branch and foliage biomass equations performed worse than the total and stem biomass equations, with R_a^2 less than 0.95. The prediction accuracy of the constant biomass expansion factor was lower than that of Model system 1 and Model system 2. Overall, although the stand-level biomass equation including the biomass expansion factor belongs to the volume-derived biomass estimation methods and differs in essence from the stand biomass equations including stand variables, the prediction accuracies of the two methods were similar. The constant biomass expansion factor had the lowest prediction accuracy and is not recommended. In addition, to make parameter estimation more effective, stand-level biomass equations should ensure additivity across the system of all tree component biomass and total biomass equations.
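A hypothetical sketch of such an additive system (illustrative form only; the paper's fitted equations and predictor variables are not reproduced) is

B_{\mathrm{stem}} = a_1 M^{b_1}, \quad B_{\mathrm{branch}} = a_2 M^{b_2}, \quad B_{\mathrm{foliage}} = a_3 M^{b_3}, \quad B_{\mathrm{total}} = B_{\mathrm{stem}} + B_{\mathrm{branch}} + B_{\mathrm{foliage}},

where M stands for one or more stand variables (e.g., stand volume or basal area per hectare). NSUR estimates the component equations jointly so that the additivity constraint holds and cross-equation error correlations are accounted for, whereas the volume-derived alternative computes total biomass from stand volume through a biomass expansion factor, e.g. B_{\mathrm{total}} = \mathrm{BEF} \times V.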