Sample records for mixed estimation method

  1. Multivariate statistical approach to estimate mixing proportions for unknown end members

    USGS Publications Warehouse

    Valder, Joshua F.; Long, Andrew J.; Davis, Arden D.; Kenner, Scott J.

    2012-01-01

    A multivariate statistical method is presented, which includes principal components analysis (PCA) and an end-member mixing model to estimate unknown end-member hydrochemical compositions and the relative mixing proportions of those end members in mixed waters. PCA, together with the Hotelling T2 statistic and a conceptual model of groundwater flow and mixing, was used in selecting samples that best approximate end members, which then were used as initial values in optimization of the end-member mixing model. This method was tested on controlled datasets (i.e., true values of estimates were known a priori) and found effective in estimating these end members and mixing proportions. The controlled datasets included synthetically generated hydrochemical data, synthetically generated mixing proportions, and laboratory analyses of sample mixtures, which were used in an evaluation of the effectiveness of this method for potential use in actual hydrological settings. For three different scenarios tested, correlation coefficients (R2) for linear regression between the estimated and known values ranged from 0.968 to 0.993 for mixing proportions and from 0.839 to 0.998 for end-member compositions. The method also was applied to field data from a study of end-member mixing in groundwater as a field example and partial method validation.
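
    A minimal numerical sketch of the two-step idea above (PCA screening of sample structure, then constrained estimation of mixing proportions). The synthetic end members, data sizes, and the use of nonnegative least squares are illustrative assumptions, not the paper's exact Hotelling T2 screening and optimization.

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(0)

# hypothetical end-member compositions: 3 end members x 4 solutes (mg/L)
E = np.array([[120., 10.,  5., 40.],
              [ 20., 80., 30., 10.],
              [ 60., 25., 90., 70.]])

# synthetic mixed-water samples: random proportions summing to 1, plus noise
P_true = rng.dirichlet(np.ones(3), size=50)
X = P_true @ E + rng.normal(0, 1.0, size=(50, 4))

# PCA via SVD of the centered data; extreme samples in score space are the
# natural candidates for approximate end members (the paper's Hotelling T2
# screening is omitted here)
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = U * s

# mixing proportions per sample by nonnegative least squares, renormalized
# to sum to one (a simple stand-in for the paper's optimization step)
P_est = np.array([nnls(E.T, xi)[0] for xi in X])
P_est /= P_est.sum(axis=1, keepdims=True)
print("mean abs. error in proportions:", np.abs(P_est - P_true).mean().round(3))
```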

  2. A New Expanded Mixed Element Method for Convection-Dominated Sobolev Equation

    PubMed Central

    Wang, Jinfeng; Li, Hong; Fang, Zhichao

    2014-01-01

    We propose and analyze a new expanded mixed element method, whose gradient belongs to the simple square-integrable space instead of the classical H(div; Ω) space of Chen's expanded mixed element method. We study the new expanded mixed element method for the convection-dominated Sobolev equation, prove existence and uniqueness of the finite element solution, and introduce a new expanded mixed projection. We derive the optimal a priori error estimates in the L²-norm for the scalar unknown u and a priori error estimates in the (L²)²-norm for its gradient λ and its flux σ. Moreover, we obtain the optimal a priori error estimates in the H¹-norm for the scalar unknown u. Finally, we present some numerical results to illustrate the efficiency of the new method. PMID:24701153

  3. Estimation of the mixing layer height over a high altitude site in Central Himalayan region by using Doppler lidar

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shukla, K. K.; Phanikumar, D. V.; Newsom, Rob K.

    2014-03-01

    A Doppler lidar was installed at Manora Peak, Nainital (29.4°N, 79.2°E; 1958 m above mean sea level) to estimate mixing layer height for the first time at this site, using vertical velocity variance as the basic measurement parameter for the period September-November 2011. The mixing layer height was found to be located ~0.57 +/- 0.10 km and ~0.45 +/- 0.05 km AGL during daytime and nighttime, respectively. The mixing layer height estimates show good correlation (R > 0.8) between different instruments and with different methods. Our results show that the wavelet covariance transform is a robust method for mixing layer height estimation.
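
    Since the abstract highlights the wavelet covariance transform, a minimal sketch of that estimator on a synthetic backscatter-like profile follows; the profile shape, dilation length, and sign convention are assumptions for illustration.

```python
import numpy as np

z = np.arange(0.0, 3000.0, 10.0)                    # altitude grid (m)
profile = 1.0 / (1.0 + np.exp((z - 800.0) / 60.0))  # synthetic profile dropping near 800 m

def haar_wct(f, z, a=200.0):
    """Wavelet covariance transform with a Haar wavelet of dilation a (m)."""
    dz = z[1] - z[0]
    W = np.zeros_like(z)
    for i, b in enumerate(z):
        lower = (z >= b - a / 2) & (z < b)          # +1 lobe below candidate height b
        upper = (z >= b) & (z <= b + a / 2)         # -1 lobe above it
        W[i] = (f[lower].sum() - f[upper].sum()) * dz / a
    return W

# the transform peaks where the profile drops sharply: the mixing layer top
W = haar_wct(profile, z)
print(f"estimated mixing layer height: {z[np.argmax(W)]:.0f} m AGL")
```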

  4. Functional Mixed Effects Model for Small Area Estimation.

    PubMed

    Maiti, Tapabrata; Sinha, Samiran; Zhong, Ping-Shou

    2016-09-01

    Functional data analysis has become an important area of research due to its ability to handle high-dimensional and complex data structures. However, development has been limited in the context of linear mixed effect models and, in particular, for small area estimation. Linear mixed effect models are the backbone of small area estimation. In this article, we consider area-level data and fit a varying coefficient linear mixed effect model where the varying coefficients are semi-parametrically modeled via B-splines. We propose a method of estimating the fixed effect parameters and consider prediction of random effects that can be implemented using standard software. For measuring prediction uncertainties, we derive an analytical expression for the mean squared errors and propose a method of estimating them. The procedure is illustrated via a real data example, and the operating characteristics of the method are judged using finite sample simulation studies.

  5. Estimating growth and yield of mixed stands

    Treesearch

    Stephen R. Shifley; Burnell C. Fischer

    1989-01-01

    A mixed stand is defined as one in which no single species comprises more than 80 percent of the stocking. The growth estimation methods described below can be used not only in mixed stands but in almost any stand, regardless of species composition, age structure, or size structure. The methods described are necessary to accommodate the complex species mixtures and...

  6. The effect of different methods to compute N on estimates of mixing in stratified flows

    NASA Astrophysics Data System (ADS)

    Fringer, Oliver; Arthur, Robert; Venayagamoorthy, Subhas; Koseff, Jeffrey

    2017-11-01

    The background stratification is typically well defined in idealized numerical models of stratified flows, but it is more difficult to define in observations. This may have important ramifications for estimates of mixing, which rely on knowledge of the background stratification against which turbulence must work to mix the density field. Using direct numerical simulation data of breaking internal waves on slopes, we demonstrate a discrepancy in ocean mixing estimates that depends on how the background stratification is computed. Two common methods are employed to calculate the buoyancy frequency N: a three-dimensionally resorted density field (often used in numerical models) and a locally resorted vertical density profile (often used in the field). We show that how N is calculated has a significant effect on the flux Richardson number Rf, which is often used to parameterize turbulent mixing, and on the turbulence activity number Gi, leading to errors when estimating the mixing efficiency using Gi-based parameterizations. Supported by ONR Grant N00014-08-1-0904 and LLNL Contract DE-AC52-07NA27344.
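
    A toy illustration of the two background-stratification choices on a synthetic density field; the domain, noise level, and averaging are assumptions, not the DNS setup of the study.

```python
import numpy as np

g, rho0 = 9.81, 1000.0
nz, nx = 64, 32
z = np.linspace(-100.0, 0.0, nz)                  # z increases upward
rho = 1000.0 - 0.01 * z[:, None] + np.random.default_rng(1).normal(0, 0.05, (nz, nx))

def n2_from_background(rho_b, z):
    """N^2 = -(g/rho0) d(rho_b)/dz for a background profile rho_b(z)."""
    return -(g / rho0) * np.gradient(rho_b, z)

# (a) 3-D resort: sort every density value in the whole domain into a
#     monotonic background (densest water at the bottom), then average
rho_3d = np.sort(rho.ravel())[::-1].reshape(nz, nx).mean(axis=1)
n2_3d = n2_from_background(rho_3d, z)

# (b) local resort: sort each vertical profile independently, then average
rho_local = np.sort(rho, axis=0)[::-1].mean(axis=1)
n2_local = n2_from_background(rho_local, z)

print("mean N^2 (3-D resort):  ", n2_3d.mean())
print("mean N^2 (local resort):", n2_local.mean())
```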

  7. A novel algorithm for laser self-mixing sensors used with the Kalman filter to measure displacement

    NASA Astrophysics Data System (ADS)

    Sun, Hui; Liu, Ji-Gou

    2018-07-01

    This paper proposes a simple and effective method for estimating the feedback level factor C in a self-mixing interferometric sensor, used together with a Kalman filter to retrieve displacement. Without the complicated and onerous calculation process of the general C estimation method, a closed-form final equation is obtained, so estimating C involves only a few simple calculations. The method successfully retrieves sinusoidal and random displacements from simulated self-mixing signals in both the weak and moderate feedback regimes. To deal with the errors resulting from noise and from estimation bias in C, and to further improve retrieval precision, a Kalman filter is employed after the general phase unwrapping method. Simulation and experiment results show that the displacement retrieved using the C obtained with the proposed method is comparable to that from joint estimation of C and α. In addition, the Kalman filter can significantly decrease measurement errors, especially errors caused by incorrectly locating the peak and valley positions of the signal.
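
    The abstract's C-factor derivation is specific to self-mixing interferometry and is not reproduced here; the following is a minimal sketch of the final Kalman filtering stage only, applied to a noisy displacement series under an assumed constant-velocity model with hand-tuned noise covariances.

```python
import numpy as np

dt = 1e-3
F = np.array([[1.0, dt], [0.0, 1.0]])    # constant-velocity state transition
H = np.array([[1.0, 0.0]])               # only displacement is observed
Q = np.diag([1e-10, 1e-2])               # assumed process noise (hand-tuned)
R = np.array([[1e-2]])                   # assumed measurement noise (std 0.1)

t = np.arange(0.0, 1.0, dt)
true_disp = 2.0 * np.sin(2 * np.pi * 1.0 * t)    # 1 Hz sinusoidal target motion
meas = true_disp + np.random.default_rng(2).normal(0, 0.1, t.size)

x = np.zeros(2)                          # state: [displacement, velocity]
P = np.eye(2)
est = []
for zk in meas:
    x = F @ x                            # predict
    P = F @ P @ F.T + Q
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)   # Kalman gain
    x = x + K @ (np.array([zk]) - H @ x)           # update with the measurement
    P = (np.eye(2) - K @ H) @ P
    est.append(x[0])

err_raw = np.sqrt(np.mean((meas - true_disp) ** 2))
err_kf = np.sqrt(np.mean((np.array(est) - true_disp) ** 2))
print(f"RMS error raw {err_raw:.3f}, filtered {err_kf:.3f}")
```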

  8. Spurious Latent Class Problem in the Mixed Rasch Model: A Comparison of Three Maximum Likelihood Estimation Methods under Different Ability Distributions

    ERIC Educational Resources Information Center

    Sen, Sedat

    2018-01-01

    Recent research has shown that over-extraction of latent classes can be observed in the Bayesian estimation of the mixed Rasch model when the distribution of ability is non-normal. This study examined the effect of non-normal ability distributions on the number of latent classes in the mixed Rasch model when estimated with maximum likelihood…

  9. A Two-Stage Estimation Method for Random Coefficient Differential Equation Models with Application to Longitudinal HIV Dynamic Data.

    PubMed

    Fang, Yun; Wu, Hulin; Zhu, Li-Xing

    2011-07-01

    We propose a two-stage estimation method for random coefficient ordinary differential equation (ODE) models. A maximum pseudo-likelihood estimator (MPLE) is derived based on a mixed-effects modeling approach, and its asymptotic properties for population parameters are established. The proposed method does not require repeatedly solving ODEs and is computationally efficient, although at the price of some loss of estimation efficiency. However, the method offers an alternative approach when the exact likelihood approach fails due to model complexity and a high-dimensional parameter space, and it can also serve as a way to obtain starting estimates for more accurate estimation methods. In addition, the proposed method does not need to specify the initial values of state variables and preserves all the advantages of the mixed-effects modeling approach. The finite sample properties of the proposed estimator are studied via Monte Carlo simulations, and the methodology is also illustrated with an application to an AIDS clinical data set.

  10. A new method for estimating the turbulent heat flux at the bottom of the daily mixed layer

    NASA Technical Reports Server (NTRS)

    Imawaki, Shiro; Niiler, Pearn P.; Gautier, Catherine H.; Knox, Robert A.; Halpern, David

    1988-01-01

    Temperature data in the mixed layer and net solar irradiance data at the sea surface are used to estimate the vertical turbulent heat flux at the bottom of the daily mixed layer. The method is applied to data obtained in the eastern tropical Pacific, where the daily cycle in the temperature field is confined to the upper 10-25 m. Equatorial turbulence measurements indicate that the turbulent heat flux is much greater during nighttime than daytime.
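
    A worked form of the slab heat-budget residual that this kind of estimate rests on, with made-up values; the paper's exact formulation and coefficients are not reproduced here.

```python
# illustrative residual of a slab mixed-layer heat budget:
# rho*cp*h*dT/dt = Q_net_surface - Q_turb_bottom
rho, cp = 1025.0, 3990.0      # seawater density (kg/m^3), heat capacity (J/kg/K)
h = 10.0                      # daily mixed layer depth (m)
dT_dt = 0.2 / 86400.0         # warming rate: 0.2 K per day, in K/s
q_net_surface = 180.0         # net surface heat input (W/m^2), solar-dominated

# turbulent flux at the mixed-layer base as the budget residual (~85 W/m^2 here)
q_turb_bottom = q_net_surface - rho * cp * h * dT_dt
print(f"turbulent heat flux at mixed-layer base: {q_turb_bottom:.0f} W/m^2")
```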

  11. A comparison of moment-based methods of estimation for the log Pearson type 3 distribution

    NASA Astrophysics Data System (ADS)

    Koutrouvelis, I. A.; Canavos, G. C.

    2000-06-01

    The log Pearson type 3 distribution is a very important model in statistical hydrology, especially for modeling annual flood series. In this paper we compare the various methods based on moments for estimating quantiles of this distribution. Besides the methods of direct and mixed moments which were found most successful in previous studies and the well-known indirect method of moments, we develop generalized direct moments and generalized mixed moments methods and a new method of adaptive mixed moments. The last method chooses the orders of two moments for the original observations by utilizing information contained in the sample itself. The results of Monte Carlo experiments demonstrated the superiority of this method in estimating flood events of high return periods when a large sample is available and in estimating flood events of low return periods regardless of the sample size. In addition, a comparison of simulation and asymptotic results shows that the adaptive method may be used for the construction of meaningful confidence intervals for design events based on the asymptotic theory even with small samples. The simulation results also point to the specific members of the class of generalized moments estimates which maintain small values for bias and/or mean square error.
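
    A sketch of the classical indirect method of moments mentioned in the abstract (fitting a Pearson type 3 to the logarithms by matching mean, standard deviation, and skew); the adaptive mixed-moments method itself is not reproduced. The synthetic annual peaks and the use of scipy's pearson3 are illustrative.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
peaks = rng.lognormal(mean=6.0, sigma=0.5, size=80)   # synthetic annual flood peaks

# match the first three moments of the log-transformed data
y = np.log(peaks)
m, s, g = y.mean(), y.std(ddof=1), stats.skew(y, bias=False)

# 100-year event: P3 quantile in log space, exponentiated back
T = 100.0
q_log = stats.pearson3.ppf(1 - 1 / T, skew=g, loc=m, scale=s)
print(f"estimated {T:.0f}-year flood: {np.exp(q_log):.0f}")
```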

  12. Estimation of the four-wave mixing noise probability-density function by the multicanonical Monte Carlo method.

    PubMed

    Neokosmidis, Ioannis; Kamalakis, Thomas; Chipouras, Aristides; Sphicopoulos, Thomas

    2005-01-01

    The performance of high-powered wavelength-division multiplexed (WDM) optical networks can be severely degraded by four-wave-mixing- (FWM-) induced distortion. The multicanonical Monte Carlo method (MCMC) is used to calculate the probability-density function (PDF) of the decision variable of a receiver, limited by FWM noise. Compared with the conventional Monte Carlo method previously used to estimate this PDF, the MCMC method is much faster and can accurately estimate smaller error probabilities. The method takes into account the correlation between the components of the FWM noise, unlike the Gaussian model, which is shown not to provide accurate results.

  13. Efficiency of circulant diallels via mixed models in the selection of papaya genotypes resistant to foliar fungal diseases.

    PubMed

    Vivas, M; Silveira, S F; Viana, A P; Amaral, A T; Cardoso, D L; Pereira, M G

    2014-07-02

    Diallel crossing methods provide information regarding the performance of genitors between themselves and in their hybrid combinations. However, with a large number of parents, the number of hybrid combinations that can be obtained and evaluated becomes limiting. One option for reducing the number of crosses involved is the adoption of circulant diallels. However, information is lacking regarding diallel analysis using mixed models. This study aimed to evaluate the efficacy of the linear mixed model method to estimate, for resistance to foliar fungal diseases, components of general and specific combining ability in a circulant diallel with different s values (numbers of crosses per parent). Subsequently, 50 diallels were simulated for each s value, and the correlations and estimates of the combining abilities of the different diallel combinations were analyzed. The circulant diallel method using mixed modeling was effective in classifying genitors by their combining abilities relative to complete diallels. The number of crosses s per genitor composing the circulant diallel and the estimated heritability affect the combining ability estimates. With three crosses per parent, it is possible to obtain good concordance (correlation above 0.8) between the combining ability estimates.

  14. A Concurrent Mixed Methods Approach to Examining the Quantitative and Qualitative Meaningfulness of Absolute Magnitude Estimation Scales in Survey Research

    ERIC Educational Resources Information Center

    Koskey, Kristin L. K.; Stewart, Victoria C.

    2014-01-01

    This small "n" observational study used a concurrent mixed methods approach to address a void in the literature with regard to the qualitative meaningfulness of the data yielded by absolute magnitude estimation scaling (MES) used to rate subjective stimuli. We investigated whether respondents' scales progressed from less to more and…

  15. A New Linearized Crank-Nicolson Mixed Element Scheme for the Extended Fisher-Kolmogorov Equation

    PubMed Central

    Wang, Jinfeng; Li, Hong; He, Siriguleng; Gao, Wei

    2013-01-01

    We present a new mixed finite element method for solving the extended Fisher-Kolmogorov (EFK) equation. We first decompose the EFK equation into two second-order equations, then deal with one second-order equation employing the finite element method, and handle the other second-order equation using a new mixed finite element method. In the new mixed finite element method, the gradient ∇u belongs to the weaker (L²(Ω))² space taking the place of the classical H(div; Ω) space. We prove some a priori bounds for the solution of the semidiscrete scheme and derive a fully discrete mixed scheme based on a linearized Crank-Nicolson method. At the same time, we obtain the optimal a priori error estimates in the L²- and H¹-norms for both the scalar unknown u and the diffusion term w = −Δu, and a priori error estimates in the (L²)²-norm for its gradient χ = ∇u, for both semidiscrete and fully discrete schemes. PMID:23864831

  16. A new linearized Crank-Nicolson mixed element scheme for the extended Fisher-Kolmogorov equation.

    PubMed

    Wang, Jinfeng; Li, Hong; He, Siriguleng; Gao, Wei; Liu, Yang

    2013-01-01

    We present a new mixed finite element method for solving the extended Fisher-Kolmogorov (EFK) equation. We first decompose the EFK equation into two second-order equations, then deal with one second-order equation employing the finite element method, and handle the other second-order equation using a new mixed finite element method. In the new mixed finite element method, the gradient ∇u belongs to the weaker (L²(Ω))² space taking the place of the classical H(div; Ω) space. We prove some a priori bounds for the solution of the semidiscrete scheme and derive a fully discrete mixed scheme based on a linearized Crank-Nicolson method. At the same time, we obtain the optimal a priori error estimates in the L²- and H¹-norms for both the scalar unknown u and the diffusion term w = -Δu, and a priori error estimates in the (L²)²-norm for its gradient χ = ∇u, for both semidiscrete and fully discrete schemes.

  17. Logistic Regression with Multiple Random Effects: A Simulation Study of Estimation Methods and Statistical Packages.

    PubMed

    Kim, Yoonsang; Choi, Young-Ku; Emery, Sherry

    2013-08-01

    Several statistical packages are capable of estimating generalized linear mixed models and these packages provide one or more of three estimation methods: penalized quasi-likelihood, Laplace, and Gauss-Hermite. Many studies have investigated these methods' performance for the mixed-effects logistic regression model. However, the authors focused on models with one or two random effects and assumed a simple covariance structure between them, which may not be realistic. When there are multiple correlated random effects in a model, the computation becomes intensive, and often an algorithm fails to converge. Moreover, in our analysis of smoking status and exposure to anti-tobacco advertisements, we have observed that when a model included multiple random effects, parameter estimates varied considerably from one statistical package to another even when using the same estimation method. This article presents a comprehensive review of the advantages and disadvantages of each estimation method. In addition, we compare the performances of the three methods across statistical packages via simulation, which involves two- and three-level logistic regression models with at least three correlated random effects. We apply our findings to a real dataset. Our results suggest that two packages-SAS GLIMMIX Laplace and SuperMix Gaussian quadrature-perform well in terms of accuracy, precision, convergence rates, and computing speed. We also discuss the strengths and weaknesses of the two packages in regard to sample sizes.
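
    As a concrete illustration of what Gauss-Hermite estimation integrates, here is a minimal sketch of the marginal log-likelihood of one cluster in a random-intercept logistic model; the node count, parameter values, and toy data are assumptions.

```python
import numpy as np

nodes, weights = np.polynomial.hermite.hermgauss(15)   # 15-point Gauss-Hermite rule

def cluster_loglik(y, x, beta0, beta1, sigma_u):
    """log of integral prod_j p(y_j | u) dPhi(u), u ~ N(0, sigma_u^2)."""
    # change of variables u = sqrt(2)*sigma_u*t turns the normal integral into
    # sum_k w_k/sqrt(pi) * prod_j Bernoulli(y_j | logit = beta0 + beta1*x_j + u_k)
    u = np.sqrt(2.0) * sigma_u * nodes
    eta = beta0 + beta1 * x[:, None] + u[None, :]      # (n_obs, n_nodes)
    p = 1.0 / (1.0 + np.exp(-eta))
    lik = np.prod(np.where(y[:, None] == 1, p, 1 - p), axis=0)
    return np.log(np.sum(weights * lik) / np.sqrt(np.pi))

# toy cluster of 10 observations with a shared random intercept
rng = np.random.default_rng(4)
x = rng.normal(size=10)
y = (rng.random(10) < 1 / (1 + np.exp(-(0.5 + 1.0 * x + 0.8)))).astype(int)
print(cluster_loglik(y, x, beta0=0.5, beta1=1.0, sigma_u=0.8))
```

    Maximizing the sum of such cluster log-likelihoods over the parameters is the Gauss-Hermite route; Laplace replaces the quadrature with a single mode-based approximation, which is where the packages compared above differ.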

  18. Mixture models reveal multiple positional bias types in RNA-Seq data and lead to accurate transcript concentration estimates.

    PubMed

    Tuerk, Andreas; Wiktorin, Gregor; Güler, Serhat

    2017-05-01

    Accuracy of transcript quantification with RNA-Seq is negatively affected by positional fragment bias. This article introduces Mix2 (read "mixquare"), a transcript quantification method which uses a mixture of probability distributions to model, and thereby neutralize, the effects of positional fragment bias. The parameters of Mix2 are trained by Expectation Maximization, resulting in simultaneous transcript abundance and bias estimates. We compare Mix2 to Cufflinks, RSEM, eXpress and PennSeq, state-of-the-art quantification methods implementing some form of bias correction. On four synthetic biases we show that the accuracy of Mix2 overall exceeds that of the other methods and that its bias estimates converge to the correct solution. We further evaluate Mix2 on real RNA-Seq data from the Microarray and Sequencing Quality Control (MAQC, SEQC) Consortia. On MAQC data, Mix2 achieves improved correlation to qPCR measurements with a relative increase in R² between 4% and 50%. Mix2 also yields repeatable concentration estimates across technical replicates, with a relative increase in R² between 8% and 47% and reduced standard deviation across the full concentration range. We further observe more accurate detection of differential expression, with a relative increase in true positives between 74% and 378% at 5% false positives. In addition, Mix2 reveals 5 dominant biases in MAQC data deviating from the common assumption of a uniform fragment distribution. On SEQC data, Mix2 yields higher consistency between measured and predicted concentration ratios: a relative error of 20% or less is obtained for 51% of transcripts by Mix2, 40% of transcripts by Cufflinks and RSEM, and 30% by eXpress. Titration order consistency is correct for 47% of transcripts for Mix2, 41% for Cufflinks and RSEM, and 34% for eXpress. We further observe improved repeatability across laboratory sites, with a relative increase in R² between 8% and 44% and reduced standard deviation.
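
    Mix2 itself models positional fragment distributions; as a hedged stand-in, the following sketches the underlying EM machinery on a simple 1-D Gaussian mixture, where the weights play the role of mixture proportions.

```python
import numpy as np

rng = np.random.default_rng(5)
data = np.concatenate([rng.normal(0.2, 0.05, 300), rng.normal(0.7, 0.1, 700)])

K = 2
pi = np.full(K, 1 / K)                 # mixture weights
mu = np.array([0.1, 0.9])              # initial component means
sd = np.array([0.2, 0.2])              # initial component spreads

def normal_pdf(x, mu, sd):
    return np.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * np.sqrt(2 * np.pi))

for _ in range(100):
    # E-step: posterior responsibility of each component for each point
    resp = pi * normal_pdf(data[:, None], mu, sd)
    resp /= resp.sum(axis=1, keepdims=True)
    # M-step: re-estimate weights, means, spreads from the responsibilities
    nk = resp.sum(axis=0)
    pi = nk / data.size
    mu = (resp * data[:, None]).sum(axis=0) / nk
    sd = np.sqrt((resp * (data[:, None] - mu) ** 2).sum(axis=0) / nk)

print("weights:", pi.round(3), "means:", mu.round(3))
```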

  19. Performance comparison of two efficient genomic selection methods (gsbay & MixP) applied in aquacultural organisms

    NASA Astrophysics Data System (ADS)

    Su, Hailin; Li, Hengde; Wang, Shi; Wang, Yangfan; Bao, Zhenmin

    2017-02-01

    Genomic selection is increasingly popular in animal and plant breeding industries around the world, as it can be applied early in life without impacting selection candidates. The objective of this study was to bring the advantages of genomic selection to scallop breeding. Two different genomic selection tools, MixP and gsbay, were applied to the genomic evaluation of simulated data and Zhikong scallop (Chlamys farreri) field data, and compared with the genomic best linear unbiased prediction (GBLUP) method, which has been applied widely. Our results showed that both MixP and gsbay could accurately estimate single-nucleotide polymorphism (SNP) marker effects, and thereby could be applied to the analysis of genomic estimated breeding values (GEBV). In simulated data from different scenarios, the accuracy of GEBV ranged from 0.20 to 0.78 for MixP, from 0.21 to 0.67 for gsbay, and from 0.21 to 0.61 for GBLUP, so estimates from MixP and gsbay are expected to be more reliable than those from GBLUP. Predictions made by gsbay were more robust, while MixP computed much faster, especially on large-scale data. These results suggest that the algorithms implemented by MixP and gsbay are both feasible for genomic selection in scallop breeding, and that more genotype data will be necessary to produce genomic estimated breeding values with higher accuracy for the industry.
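
    MixP and gsbay are not reproduced here; as a sketch of the GBLUP baseline they are compared against, the following builds a VanRaden-style genomic relationship matrix from simulated genotypes and solves the BLUP equations with an assumed variance ratio.

```python
import numpy as np

rng = np.random.default_rng(6)
n, m = 200, 500                                     # animals, SNP markers
M = rng.integers(0, 3, size=(n, m)).astype(float)   # 0/1/2 genotype codes
p = M.mean(axis=0) / 2                              # allele frequencies
Z = M - 2 * p                                       # centered genotypes
G = Z @ Z.T / (2 * np.sum(p * (1 - p)))             # genomic relationship matrix

beta = rng.normal(0, 0.05, m)                       # true marker effects
tbv = M @ beta - (M @ beta).mean()                  # true breeding values
y = tbv + rng.normal(0, 1.0, n)                     # centered phenotypes

# BLUP with an identity design: gebv = G (G + lambda I)^{-1} y,
# where lambda = sigma_e^2 / sigma_g^2 is assumed known here
lam = 1.0
gebv = G @ np.linalg.solve(G + lam * np.eye(n), y)
print("accuracy (corr with true breeding value):",
      np.corrcoef(gebv, tbv)[0, 1].round(3))
```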

  20. Circumpolar Estimates of Isopycnal Mixing in the ACC from Argo Floats

    NASA Astrophysics Data System (ADS)

    Roach, C. J.; Balwada, D.; Speer, K. G.

    2015-12-01

    There are few direct observations of cross-stream isopycnal mixing in the interior of the Southern Ocean, yet such measurements are needed to determine the role of eddies in transporting properties across the ACC and are key to progress toward testing theories of the meridional overturning. In light of this, we examine whether it is possible to obtain estimates of mixing from Argo float trajectories. We divided the Southern Ocean into overlapping 15° longitude bins before estimating mixing. The resulting diffusivities ranged from 300 to 3000 m² s⁻¹, with peaks corresponding to the Scotia Sea and the Kerguelen and Campbell Plateaus. Comparison of our diffusivities with previous regional studies demonstrated good agreement. Tests of the methodology in the DIMES region found that mixing estimates from Argo floats agreed closely with those from RAFOS floats. To further test the method, we used the Southern Ocean State Estimate velocity fields to advect particles with Argo- and RAFOS-float-like behaviours. Stirring estimates from the particles agreed well with each other in the Kerguelen Island region, South Pacific, and Scotia Sea, despite the differences in the imposed behaviour. Finally, these estimates were compared to the mixing length suppression theory of Ferrari and Nikurashin (2010), which quantifies horizontal diffusivity in the manner of Prandtl (1925) but with the mixing length suppressed in the presence of mean flows and eddy phase propagation. Our results suggest that the theory can explain both the structure and magnitude of mixing using mean flow data, with an exception near the Kerguelen and Campbell Plateaus, where the theory underestimates mixing relative to our results.
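
    A minimal sketch of the single-particle dispersion idea behind trajectory-based diffusivity estimates: K is half the growth rate of the cross-stream displacement variance. The random-walk trajectories and units are illustrative, not the Argo processing chain.

```python
import numpy as np

rng = np.random.default_rng(7)
n_floats, n_days = 500, 120
dt = 86400.0                                   # one day in seconds
K_true = 1000.0                                # m^2/s

# random-walk cross-stream positions: step std = sqrt(2 K dt)
steps = rng.normal(0, np.sqrt(2 * K_true * dt), size=(n_floats, n_days))
x = np.cumsum(steps, axis=1)

# dispersion vs. time; in the diffusive regime var(t) grows as 2 K t
var = ((x - x.mean(axis=0)) ** 2).mean(axis=0)
t = np.arange(1, n_days + 1) * dt
K_est = 0.5 * np.polyfit(t, var, 1)[0]         # half the fitted slope
print(f"estimated diffusivity: {K_est:.0f} m^2/s (true {K_true:.0f})")
```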

  1. Simple method for direct crown base height estimation of individual conifer trees using airborne LiDAR data.

    PubMed

    Luo, Laiping; Zhai, Qiuping; Su, Yanjun; Ma, Qin; Kelly, Maggi; Guo, Qinghua

    2018-05-14

    Crown base height (CBH) is an essential tree biophysical parameter for many applications in forest management, forest fuel treatment, wildfire modeling, ecosystem modeling and global climate change studies. Accurate and automatic estimation of CBH for individual trees is still a challenging task. Airborne light detection and ranging (LiDAR) provides reliable and promising data for estimating CBH. Various methods have been developed to calculate CBH indirectly using regression-based means from airborne LiDAR data and field measurements. However, little attention has been paid to calculating CBH directly at the individual tree scale in mixed-species forests without field measurements. In this study, we propose a new method for directly estimating individual-tree CBH from airborne LiDAR data. Our method involves two main strategies: 1) removing noise and understory vegetation for each tree; and 2) estimating CBH by generating a percentile ranking profile for each tree and using a spline curve to identify its inflection points. These two strategies give our method the advantages of requiring no field measurements and of being efficient and effective in mixed-species forests. The proposed method was applied to a mixed conifer forest in the Sierra Nevada, California and was validated by field measurements. The results showed that our method can directly estimate CBH at the individual tree level with a root-mean-squared error of 1.62 m, a coefficient of determination of 0.88 and a relative bias of 3.36%. Furthermore, we systematically analyzed the accuracies among different height groups and tree species by comparing with field measurements. Our results implied that taller trees had relatively higher uncertainties than shorter trees. Our findings also show that the accuracy of CBH estimation was highest for black oak trees, with an RMSE of 0.52 m. The conifer species results were also good, with uniformly high R² ranging from 0.82 to 0.93. In general, our method has demonstrated high accuracy for individual tree CBH estimation and strong potential for applications in mixed species over large areas.
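
    A rough sketch of the percentile-profile-plus-spline-inflection idea on a synthetic point cloud; the smoothing factor, spline order, and the choice of the first inflection point are assumptions, not the paper's tuned procedure.

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

rng = np.random.default_rng(8)
# synthetic single-tree returns: sparse trunk hits below 8 m, dense crown 8-25 m
heights = np.concatenate([rng.uniform(0.5, 8.0, 40), rng.uniform(8.0, 25.0, 400)])

pct = np.linspace(1, 99, 99)
prof = np.percentile(heights, pct)                 # percentile ranking profile

# smooth the profile and look for inflections (second-derivative sign changes)
spline = UnivariateSpline(pct, prof, k=4, s=len(pct))
second = spline.derivative(2)(pct)
sign_change = np.where(np.diff(np.sign(second)) != 0)[0]
cbh = prof[sign_change[0]] if sign_change.size else np.nan
print(f"candidate crown base height: {cbh:.1f} m")
```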

  2. Logistic Regression with Multiple Random Effects: A Simulation Study of Estimation Methods and Statistical Packages

    PubMed Central

    Kim, Yoonsang; Emery, Sherry

    2013-01-01

    Several statistical packages are capable of estimating generalized linear mixed models and these packages provide one or more of three estimation methods: penalized quasi-likelihood, Laplace, and Gauss-Hermite. Many studies have investigated these methods’ performance for the mixed-effects logistic regression model. However, the authors focused on models with one or two random effects and assumed a simple covariance structure between them, which may not be realistic. When there are multiple correlated random effects in a model, the computation becomes intensive, and often an algorithm fails to converge. Moreover, in our analysis of smoking status and exposure to anti-tobacco advertisements, we have observed that when a model included multiple random effects, parameter estimates varied considerably from one statistical package to another even when using the same estimation method. This article presents a comprehensive review of the advantages and disadvantages of each estimation method. In addition, we compare the performances of the three methods across statistical packages via simulation, which involves two- and three-level logistic regression models with at least three correlated random effects. We apply our findings to a real dataset. Our results suggest that two packages—SAS GLIMMIX Laplace and SuperMix Gaussian quadrature—perform well in terms of accuracy, precision, convergence rates, and computing speed. We also discuss the strengths and weaknesses of the two packages in regard to sample sizes. PMID:24288415

  3. Small area estimation for semicontinuous data.

    PubMed

    Chandra, Hukum; Chambers, Ray

    2016-03-01

    Survey data often contain measurements for variables that are semicontinuous in nature, i.e. they either take a single fixed value (we assume this is zero) or they have a continuous, often skewed, distribution on the positive real line. Standard methods for small area estimation (SAE) based on the use of linear mixed models can be inefficient for such variables. We discuss SAE techniques for semicontinuous variables under a two part random effects model that allows for the presence of excess zeros as well as the skewed nature of the nonzero values of the response variable. In particular, we first model the excess zeros via a generalized linear mixed model fitted to the probability of a nonzero, i.e. strictly positive, value being observed, and then model the response, given that it is strictly positive, using a linear mixed model fitted on the logarithmic scale. Empirical results suggest that the proposed method leads to efficient small area estimates for semicontinuous data of this type. We also propose a parametric bootstrap method to estimate the MSE of the proposed small area estimator. These bootstrap estimates of the MSE are compared to the true MSE in a simulation study. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
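
    A minimal sketch of a two-part model of this kind, omitting the small-area random effects: a logistic model for the zero/positive split and a linear model on the log scale for the positives. The covariates are simulated, scikit-learn is used for brevity, and the lognormal back-transform is an assumption.

```python
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression

rng = np.random.default_rng(9)
n = 2000
x = rng.normal(size=(n, 1))
p_pos = 1 / (1 + np.exp(-(0.3 + 1.2 * x[:, 0])))     # true P(y > 0)
positive = rng.random(n) < p_pos
y = np.where(positive, np.exp(1.0 + 0.8 * x[:, 0] + rng.normal(0, 0.5, n)), 0.0)

part1 = LogisticRegression().fit(x, (y > 0).astype(int))    # zero vs. positive
part2 = LinearRegression().fit(x[y > 0], np.log(y[y > 0]))  # log scale, positives

# combined mean prediction: P(y>0) * E[y | y>0], lognormal back-transform
sigma2 = np.var(np.log(y[y > 0]) - part2.predict(x[y > 0]))
mean_pred = part1.predict_proba(x)[:, 1] * np.exp(part2.predict(x) + sigma2 / 2)
print("corr(predicted mean, y):", np.corrcoef(mean_pred, y)[0, 1].round(2))
```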

  4. 43 CFR Appendix I to Part 11 - Methods for Estimating the Areas of Ground Water and Surface Water Exposure During the...

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... the initial mixing distance, is estimated by: Cp = 25(Wi)/(T^0.7 Q), where Cp is the peak concentration... equation: Tp = 9.25×10^6 Wi/(Q Cp), where Tp is the time estimate, in hours, and Wi, Cp, and Q are defined above... downstream location, past the initial mixing distance, is estimated by: Cp = C(q)/(Q + ..., where Cp and Q are...

  5. 43 CFR Appendix I to Part 11 - Methods for Estimating the Areas of Ground Water and Surface Water Exposure During the...

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... the initial mixing distance, is estimated by: Cp = 25(Wi)/(T^0.7 Q), where Cp is the peak concentration... equation: Tp = 9.25×10^6 Wi/(Q Cp), where Tp is the time estimate, in hours, and Wi, Cp, and Q are defined above... downstream location, past the initial mixing distance, is estimated by: Cp = C(q)/(Q + ..., where Cp and Q are...

  6. 43 CFR Appendix I to Part 11 - Methods for Estimating the Areas of Ground Water and Surface Water Exposure During the...

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... the initial mixing distance, is estimated by: Cp = 25(Wi)/(T^0.7 Q), where Cp is the peak concentration... equation: Tp = 9.25×10^6 Wi/(Q Cp), where Tp is the time estimate, in hours, and Wi, Cp, and Q are defined above... downstream location, past the initial mixing distance, is estimated by: Cp = C(q)/(Q + ..., where Cp and Q are...

  7. 43 CFR Appendix I to Part 11 - Methods for Estimating the Areas of Ground Water and Surface Water Exposure During the...

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... the initial mixing distance, is estimated by: Cp = 25(Wi)/(T^0.7 Q), where Cp is the peak concentration... equation: Tp = 9.25×10^6 Wi/(Q Cp), where Tp is the time estimate, in hours, and Wi, Cp, and Q are defined above... downstream location, past the initial mixing distance, is estimated by: Cp = C(q)/(Q + ..., where Cp and Q are...

  8. 43 CFR Appendix I to Part 11 - Methods for Estimating the Areas of Ground Water and Surface Water Exposure During the...

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... the initial mixing distance, is estimated by: Cp = 25(Wi)/(T^0.7 Q), where Cp is the peak concentration... equation: Tp = 9.25×10^6 Wi/(Q Cp), where Tp is the time estimate, in hours, and Wi, Cp, and Q are defined above... downstream location, past the initial mixing distance, is estimated by: Cp = C(q)/(Q + ..., where Cp and Q are...

  9. Item Selection and Ability Estimation Procedures for a Mixed-Format Adaptive Test

    ERIC Educational Resources Information Center

    Ho, Tsung-Han; Dodd, Barbara G.

    2012-01-01

    In this study we compared five item selection procedures using three ability estimation methods in the context of a mixed-format adaptive test based on the generalized partial credit model. The item selection procedures used were maximum posterior weighted information, maximum expected information, maximum posterior weighted Kullback-Leibler…

  10. Integrating Stomach Content and Stable Isotope Analyses to Quantify the Diets of Pygoscelid Penguins

    PubMed Central

    Polito, Michael J.; Trivelpiece, Wayne Z.; Karnovsky, Nina J.; Ng, Elizabeth; Patterson, William P.; Emslie, Steven D.

    2011-01-01

    Stomach content analysis (SCA) and, more recently, stable isotope analysis (SIA) integrated with isotopic mixing models have become common methods for dietary studies and provide insight into the foraging ecology of seabirds. However, both methods have drawbacks and biases that may result in difficulties in quantifying inter-annual and species-specific differences in diets. We used these two methods to simultaneously quantify the chick-rearing diet of Chinstrap (Pygoscelis antarctica) and Gentoo (P. papua) penguins and highlight methods of integrating SCA data to increase the accuracy of diet composition estimates using SIA. SCA biomass estimates were highly variable and underestimated the importance of soft-bodied prey such as fish. Two-source isotopic mixing model predictions were less variable and identified inter-annual and species-specific differences in the relative amounts of fish and krill in penguin diets not readily apparent using SCA. In contrast, multi-source isotopic mixing models had difficulty estimating the dietary contribution of fish species occupying similar trophic levels without refinement using SCA-derived otolith data. Overall, our ability to track inter-annual and species-specific differences in penguin diets using SIA was enhanced by integrating SCA data into isotopic mixing models in three ways: 1) selecting appropriate prey sources, 2) weighting combinations of isotopically similar prey in two-source mixing models and 3) refining predicted contributions of isotopically similar prey in multi-source models. PMID:22053199
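
    For reference, the two-source, single-tracer mixing model reduces to one line of algebra; the delta values below are illustrative, and the trophic discrimination correction a real analysis would apply is omitted.

```python
def two_source_proportion(delta_mix, delta_a, delta_b):
    """Proportion of source A implied by linear mixing of one tracer."""
    return (delta_mix - delta_b) / (delta_a - delta_b)

# e.g. penguin tissue d15N = 9.0 permil, fish = 10.5, krill = 7.5
p_fish = two_source_proportion(9.0, 10.5, 7.5)
print(f"estimated proportion of fish in diet: {p_fish:.2f}")   # 0.50
```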

  11. Analytical and quasi-Bayesian methods as development of the iterative approach for mixed radiation biodosimetry.

    PubMed

    Słonecka, Iwona; Łukasik, Krzysztof; Fornalski, Krzysztof W

    2018-06-04

    The present paper proposes two methods of calculating components of the dose absorbed by the human body after exposure to a mixed neutron and gamma radiation field. The article presents a novel approach to replace the common iterative method in its analytical form, thus reducing the calculation time. It also shows a possibility of estimating the neutron and gamma doses when their ratio in a mixed beam is not precisely known.

  12. Improving statistical inference on pathogen densities estimated by quantitative molecular methods: malaria gametocytaemia as a case study.

    PubMed

    Walker, Martin; Basáñez, María-Gloria; Ouédraogo, André Lin; Hermsen, Cornelus; Bousema, Teun; Churcher, Thomas S

    2015-01-16

    Quantitative molecular methods (QMMs) such as quantitative real-time polymerase chain reaction (q-PCR), reverse-transcriptase PCR (qRT-PCR) and quantitative nucleic acid sequence-based amplification (QT-NASBA) are increasingly used to estimate pathogen density in a variety of clinical and epidemiological contexts. These methods are often classified as semi-quantitative, yet estimates of reliability or sensitivity are seldom reported. Here, a statistical framework is developed for assessing the reliability (uncertainty) of pathogen densities estimated using QMMs and the associated diagnostic sensitivity. The method is illustrated with quantification of Plasmodium falciparum gametocytaemia by QT-NASBA. The reliability of pathogen (e.g. gametocyte) densities, and the accompanying diagnostic sensitivity, estimated by two contrasting statistical calibration techniques, are compared; a traditional method and a mixed model Bayesian approach. The latter accounts for statistical dependence of QMM assays run under identical laboratory protocols and permits structural modelling of experimental measurements, allowing precision to vary with pathogen density. Traditional calibration cannot account for inter-assay variability arising from imperfect QMMs and generates estimates of pathogen density that have poor reliability, are variable among assays and inaccurately reflect diagnostic sensitivity. The Bayesian mixed model approach assimilates information from replica QMM assays, improving reliability and inter-assay homogeneity, providing an accurate appraisal of quantitative and diagnostic performance. Bayesian mixed model statistical calibration supersedes traditional techniques in the context of QMM-derived estimates of pathogen density, offering the potential to improve substantially the depth and quality of clinical and epidemiological inference for a wide variety of pathogens.

  13. Comparison of fundamental and simulative test methods for evaluating permanent deformation of hot mix asphalt

    DOT National Transportation Integrated Search

    2002-10-01

    Rutting has long been a problem in hot mix asphalt (HMA) pavement. Through the years, researchers have used different kinds of fundamental and simulative test methods to estimate the rutting performance of HMA. It has been recognized that most fundam...

  14. Solvency supervision based on a total balance sheet approach

    NASA Astrophysics Data System (ADS)

    Pitselis, Georgios

    2009-11-01

    In this paper we investigate the adequacy of the own funds a company requires in order to remain healthy and avoid insolvency. Two methods are applied here: the quantile regression method and the method of mixed effects models. Quantile regression is capable of providing a more complete statistical analysis of the stochastic relationship among random variables than least squares estimation. The estimated mixed effects line can be considered an internal industry equation (norm), which explains a systematic relation between a dependent variable (such as own funds) and independent variables (e.g. financial characteristics, such as assets, provisions, etc.). The two methods are implemented on two data sets.
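
    A minimal sketch contrasting the two regression flavours on simulated, heavy-tailed data (statsmodels is used; variable names and figures are illustrative, and the mixed-effects component is omitted):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(10)
assets = rng.lognormal(3, 1, 300)
# heavy-tailed, size-dependent errors, where medians and means diverge
own_funds = 0.15 * assets + rng.standard_t(3, 300) * 0.05 * assets

X = sm.add_constant(assets)
ols = sm.OLS(own_funds, X).fit()                 # least squares: conditional mean
qr = sm.QuantReg(own_funds, X).fit(q=0.5)        # quantile regression: median
print("OLS slope:      ", ols.params[1].round(3))
print("median-QR slope:", qr.params[1].round(3))
```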

  15. Crowd density estimation based on convolutional neural networks with mixed pooling

    NASA Astrophysics Data System (ADS)

    Zhang, Li; Zheng, Hong; Zhang, Ying; Zhang, Dongming

    2017-09-01

    Crowd density estimation is an important topic in the fields of machine learning and video surveillance. Existing methods do not provide satisfactory classification accuracy; moreover, they have difficulty adapting to complex scenes. Therefore, we propose a method based on convolutional neural networks (CNNs). The proposed method improves the performance of crowd density estimation in two key ways. First, we propose a feature pooling method named mixed pooling to regularize the CNNs; it replaces deterministic pooling operations with a learned parameter that combines conventional max pooling and average pooling. Second, we present a classification strategy in which an image is divided into two cells that are categorized separately. The proposed approach was evaluated on three datasets: two ground truth image sequences and the University of California, San Diego, anomaly detection dataset. The results demonstrate that the proposed approach performs more effectively and easily than other methods.
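
    A plain-numpy sketch of the mixed pooling operation as described: a scalar alpha (learned during training in the paper; fixed here for illustration) blends max and average pooling over each window.

```python
import numpy as np

def mixed_pool2d(x, alpha, size=2):
    """alpha * maxpool + (1 - alpha) * avgpool over non-overlapping windows."""
    h, w = x.shape
    x = x[: h - h % size, : w - w % size]           # crop to a multiple of size
    windows = x.reshape(h // size, size, w // size, size).swapaxes(1, 2)
    return (alpha * windows.max(axis=(2, 3))
            + (1 - alpha) * windows.mean(axis=(2, 3)))

fmap = np.arange(16, dtype=float).reshape(4, 4)
print(mixed_pool2d(fmap, alpha=0.7))   # between the max- and average-pooled maps
```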

  16. Interpretable inference on the mixed effect model with the Box-Cox transformation.

    PubMed

    Maruo, K; Yamaguchi, Y; Noma, H; Gosho, M

    2017-07-10

    We derived results for inference on the parameters of the marginal model of the mixed effects model with the Box-Cox transformation, based on an asymptotic theory approach. We also provide a robust variance estimator of the maximum likelihood estimator of the parameters of this model in consideration of model misspecification. Using these results, we developed an inference procedure for the difference of the model median between treatment groups at a specified occasion in the context of mixed-effects models for repeated measures analysis in randomized clinical trials, which provides interpretable estimates of the treatment effect. Simulation studies showed that our proposed method controlled the type I error of the statistical test for the model median difference in almost all situations and had moderate to high power compared with existing methods. We illustrate our method with cluster of differentiation 4 (CD4) data from an AIDS clinical trial, where the interpretability of analysis results based on the proposed method is demonstrated. Copyright © 2017 John Wiley & Sons, Ltd.

  17. Adapt-Mix: learning local genetic correlation structure improves summary statistics-based analyses

    PubMed Central

    Park, Danny S.; Brown, Brielin; Eng, Celeste; Huntsman, Scott; Hu, Donglei; Torgerson, Dara G.; Burchard, Esteban G.; Zaitlen, Noah

    2015-01-01

    Motivation: Approaches to identifying new risk loci, training risk prediction models, imputing untyped variants and fine-mapping causal variants from summary statistics of genome-wide association studies are playing an increasingly important role in the human genetics community. Current summary statistics-based methods rely on global ‘best guess’ reference panels to model the genetic correlation structure of the dataset being studied. This approach, especially in admixed populations, has the potential to produce misleading results, ignores variation in local structure and is not feasible when appropriate reference panels are missing or small. Here, we develop a method, Adapt-Mix, that combines information across all available reference panels to produce estimates of local genetic correlation structure for summary statistics-based methods in arbitrary populations. Results: We applied Adapt-Mix to estimate the genetic correlation structure of both admixed and non-admixed individuals using simulated and real data. We evaluated our method by measuring the performance of two summary statistics-based methods: imputation and joint-testing. When using our method as opposed to the current standard of ‘best guess’ reference panels, we observed a 28% decrease in mean-squared error for imputation and a 73.7% decrease in mean-squared error for joint-testing. Availability and implementation: Our method is publicly available in a software package called ADAPT-Mix available at https://github.com/dpark27/adapt_mix. Contact: noah.zaitlen@ucsf.edu PMID:26072481

  18. HIV quality report cards: impact of case-mix adjustment and statistical methods.

    PubMed

    Ohl, Michael E; Richardson, Kelly K; Goto, Michihiko; Vaughan-Sarrazin, Mary; Schweizer, Marin L; Perencevich, Eli N

    2014-10-15

    There will be increasing pressure to publicly report and rank the performance of healthcare systems on human immunodeficiency virus (HIV) quality measures. To inform discussion of public reporting, we evaluated the influence of case-mix adjustment when ranking individual care systems on the viral control quality measure. We used data from the Veterans Health Administration (VHA) HIV Clinical Case Registry and administrative databases to estimate case-mix adjusted viral control for 91 local systems caring for 12 368 patients. We compared results using 2 adjustment methods, the observed-to-expected estimator and the risk-standardized ratio. Overall, 10 913 patients (88.2%) achieved viral control (viral load ≤400 copies/mL). Prior to case-mix adjustment, system-level viral control ranged from 51% to 100%. Seventeen (19%) systems were labeled as low outliers (performance significantly below the overall mean) and 11 (12%) as high outliers. Adjustment for case mix (patient demographics, comorbidity, CD4 nadir, time on therapy, and income from VHA administrative databases) reduced the number of low outliers by approximately one-third, but results differed by method. The adjustment model had moderate discrimination (c statistic = 0.66), suggesting potential for unadjusted risk when using administrative data to measure case mix. Case-mix adjustment affects rankings of care systems on the viral control quality measure. Given the sensitivity of rankings to the selection of case-mix adjustment methods, and the potential for unadjusted risk when using variables limited to current administrative databases, the HIV care community should explore optimal methods for case-mix adjustment before moving forward with public reporting. Published by Oxford University Press on behalf of the Infectious Diseases Society of America 2014. This work is written by (a) US Government employee(s) and is in the public domain in the US.

  19. Dealing with gene expression missing data.

    PubMed

    Brás, L P; Menezes, J C

    2006-05-01

    A comparative evaluation of different methods for estimating missing values in microarray data is presented: weighted K-nearest neighbours imputation (KNNimpute), regression-based methods such as local least squares imputation (LLSimpute) and partial least squares imputation (PLSimpute), and Bayesian principal component analysis (BPCA). The influence on prediction accuracy of several factors is elucidated: the methods' parameters, the type of data relationships used in the estimation process (i.e. row-wise, column-wise or both), the missing rate and pattern, and the type of experiment [time series (TS), non-time series (NTS) or mixed (MIX) experiments]. Improvements based on the iterative use of data (iterative LLS and PLS imputation--ILLSimpute and IPLSimpute), the need to perform initial imputations (modified PLS and Helland PLS imputation--MPLSimpute and HPLSimpute) and the type of relationships employed (KNNarray, LLSarray, HPLSarray and alternating PLS--APLSimpute) are proposed. Overall, it is shown that data set properties (type of experiment, missing rate and pattern) affect the data similarity structure and therefore influence the methods' performance. LLSimpute and ILLSimpute are preferable in the presence of data with a stronger similarity structure (TS and MIX experiments), whereas PLS-based methods (MPLSimpute, IPLSimpute and APLSimpute) are preferable when estimating NTS missing data.
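
    A minimal numpy sketch of row-wise KNN imputation in the spirit of KNNimpute; the Euclidean distance over jointly observed columns and the choice of K are illustrative simplifications.

```python
import numpy as np

def knn_impute(X, k=5):
    """Fill each missing entry from the k most similar rows (genes)."""
    X = X.copy()
    miss_rows, miss_cols = np.where(np.isnan(X))
    for i, j in zip(miss_rows, miss_cols):
        candidates = np.where(~np.isnan(X[:, j]))[0]    # rows observed in column j
        dists = []
        for r in candidates:
            common = ~np.isnan(X[i]) & ~np.isnan(X[r])  # jointly observed columns
            if not common.any():
                dists.append(np.inf)
                continue
            dists.append(np.sqrt(np.mean((X[i, common] - X[r, common]) ** 2)))
        nearest = candidates[np.argsort(dists)[:k]]
        X[i, j] = X[nearest, j].mean()                  # average of the k neighbours
    return X

rng = np.random.default_rng(11)
data = rng.normal(size=(50, 8))
data[rng.random(data.shape) < 0.05] = np.nan            # 5% missing at random
print("remaining missing:", np.isnan(knn_impute(data)).sum())
```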

  20. Estimation of uncertainty in tracer gas measurement of air change rates.

    PubMed

    Iizuka, Atsushi; Okuizumi, Yumiko; Yanagisawa, Yukio

    2010-12-01

    Simple and economical measurement of air change rates can be achieved with a passive-type tracer gas doser and sampler. However, the measurement is complicated by the fact that many buildings are not a single, fully mixed zone, so many measurements are required to obtain information on ventilation conditions. In this study, we evaluated the uncertainty of tracer gas measurement of the air change rate in n completely mixed zones. A single measurement with one tracer gas could be used to simply estimate the air change rate when n = 2. Accurate air change rates could not be obtained for n ≥ 2 due to a lack of information. However, the proposed method can be used to estimate an air change rate with an uncertainty of less than 33%. Using this method, overestimation of the air change rate can be avoided. The proposed estimation method will be useful in practical ventilation measurements.
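
    For orientation, the single fully mixed zone case reduces to a one-line mass balance; this constant-emission steady-state form with made-up values is an illustration, not the paper's multi-zone passive-doser analysis.

```python
# steady-state tracer mass balance in one fully mixed zone: N = q / (V * C)
q = 5.0e-3        # tracer emission rate (m^3/h)
V = 50.0          # room volume (m^3)
C = 40e-6         # measured steady-state tracer concentration (volume fraction)

N = q / (V * C)
print(f"air change rate: {N:.1f} 1/h")   # 2.5 air changes per hour
```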

  1. A new family of stable elements for the Stokes problem based on a mixed Galerkin/least-squares finite element formulation

    NASA Technical Reports Server (NTRS)

    Franca, Leopoldo P.; Loula, Abimael F. D.; Hughes, Thomas J. R.; Miranda, Isidoro

    1989-01-01

    Adding to the classical Hellinger-Reissner formulation, a residual form of the equilibrium equation, a new Galerkin/least-squares finite element method is derived. It fits within the framework of a mixed finite element method and is stable for rather general combinations of stress and velocity interpolations, including equal-order discontinuous stress and continuous velocity interpolations which are unstable within the Galerkin approach. Error estimates are presented based on a generalization of the Babuska-Brezzi theory. Numerical results (not presented herein) have confirmed these estimates as well as the good accuracy and stability of the method.

  2. A joint modeling and estimation method for multivariate longitudinal data with mixed types of responses to analyze physical activity data generated by accelerometers.

    PubMed

    Li, Haocheng; Zhang, Yukun; Carroll, Raymond J; Keadle, Sarah Kozey; Sampson, Joshua N; Matthews, Charles E

    2017-11-10

    A mixed effect model is proposed to jointly analyze multivariate longitudinal data with continuous, proportion, count, and binary responses. The association of the variables is modeled through the correlation of random effects. We use a quasi-likelihood type approximation for nonlinear variables and transform the proposed model into a multivariate linear mixed model framework for estimation and inference. Via an extension to the EM approach, an efficient algorithm is developed to fit the model. The method is applied to physical activity data, which uses a wearable accelerometer device to measure daily movement and energy expenditure information. Our approach is also evaluated by a simulation study. Copyright © 2017 John Wiley & Sons, Ltd.

  3. Classification of longitudinal data through a semiparametric mixed-effects model based on lasso-type estimators.

    PubMed

    Arribas-Gil, Ana; De la Cruz, Rolando; Lebarbier, Emilie; Meza, Cristian

    2015-06-01

    We propose a classification method for longitudinal data. The Bayes classifier is classically used to determine a classification rule, where the underlying density in each class needs to be well modeled and estimated. This work is motivated by a real dataset of hormone levels measured at the early stages of pregnancy that can be used to predict normal versus abnormal pregnancy outcomes. The proposed model, a semiparametric linear mixed-effects model (SLMM), is a particular case of the semiparametric nonlinear mixed-effects class of models (SNMM), in which finite-dimensional (fixed effects and variance components) and infinite-dimensional (an unknown function) parameters have to be estimated. In SNMMs, maximum likelihood estimation is performed iteratively, alternating parametric and nonparametric procedures. However, if one can assume that the random effects and the unknown function interact in a linear way, more efficient estimation methods can be used. Our contribution is the proposal of a unified estimation procedure based on a penalized EM-type algorithm. The Expectation and Maximization steps are explicit; in the latter step, the unknown function is estimated in a nonparametric fashion using a lasso-type procedure. A simulation study and an application to real data are performed. © 2015, The International Biometric Society.

  4. Evaluation of alternative approaches for landscape-scale biomass estimation in a mixed-species northern forest

    Treesearch

    Coeli M. Hoover; Mark J. Ducey; R. Andy Colter; Mariko Yamasaki

    2018-01-01

    There is growing interest in estimating and mapping biomass and carbon content of forests across large landscapes. LiDAR-based inventory methods are increasingly common and have been successfully implemented in multiple forest types. Asner et al. (2011) developed a simple universal forest carbon estimation method for tropical forests that reduces the amount of required...

  5. An Examination of Rater Performance on a Local Oral English Proficiency Test: A Mixed-Methods Approach

    ERIC Educational Resources Information Center

    Yan, Xun

    2014-01-01

    This paper reports on a mixed-methods approach to evaluate rater performance on a local oral English proficiency test. Three types of reliability estimates were reported to examine rater performance from different perspectives. Quantitative results were also triangulated with qualitative rater comments to arrive at a more representative picture of…

  6. Performance of maximum likelihood mixture models to estimate nursery habitat contributions to fish stocks: a case study on sea bream Sparus aurata

    PubMed Central

    Darnaude, Audrey M.

    2016-01-01

    Background Mixture models (MM) can be used to describe mixed stocks considering three sets of parameters: the total number of contributing sources, their chemical baseline signatures, and their mixing proportions. When all nursery sources have been previously identified and sampled for juvenile fish to produce baseline nursery-signatures, mixing proportions are the only unknown set of parameters to be estimated from the mixed-stock data. Otherwise, the number of sources, as well as some or all nursery-signatures, may also need to be estimated from the mixed-stock data. Our goal was to assess bias and uncertainty in these MM parameters when estimated using unconditional maximum likelihood approaches (ML-MM), under several incomplete sampling and nursery-signature separation scenarios. Methods We used a comprehensive dataset containing otolith elemental signatures of 301 juvenile Sparus aurata, sampled in three contrasting years (2008, 2010, 2011) from four distinct nursery habitats (Mediterranean lagoons). Artificial nursery-source and mixed-stock datasets were produced considering five different sampling scenarios, where 0-4 lagoons were excluded from the nursery-source dataset, and six nursery-signature separation scenarios, simulating data separated by 0.5, 1.5, 2.5, 3.5, 4.5 and 5.5 standard deviations among nursery-signature centroids. Bias (BI) and uncertainty (SE) were computed to assess reliability for each of the three sets of MM parameters. Results Both bias and uncertainty in mixing proportion estimates were low (BI ≤ 0.14, SE ≤ 0.06) when all nursery-sources were sampled, but exhibited large variability among cohorts and increased with the number of non-sampled sources, up to BI = 0.24 and SE = 0.11. Bias and variability in baseline signature estimates also increased with the number of non-sampled sources, but these estimates tended to be less biased, and more uncertain, than mixing proportion ones across all sampling scenarios (BI < 0.13, SE < 0.29). Increasing separation among nursery signatures improved the reliability of mixing proportion estimates but led to non-linear responses in baseline signature parameters. Low uncertainty, but a consistent underestimation bias, affected the estimated number of nursery sources across all incomplete sampling scenarios. Discussion ML-MM produced reliable estimates of mixing proportions and nursery-signatures under an important range of incomplete sampling and nursery-signature separation scenarios. The method failed, however, in estimating the true number of nursery sources, reflecting a pervasive issue affecting mixture models within and beyond the ML framework. Large differences in bias and uncertainty found among cohorts were linked to differences in the separation of chemical signatures among nursery habitats. Simulation approaches, such as those presented here, could be useful to evaluate the sensitivity of MM results to separation and variability in nursery-signatures for other species, habitats or cohorts. PMID:27761305

  7. Estimation and confidence intervals for empirical mixing distributions

    USGS Publications Warehouse

    Link, W.A.; Sauer, J.R.

    1995-01-01

    Questions regarding collections of parameter estimates can frequently be expressed in terms of an empirical mixing distribution (EMD). This report discusses empirical Bayes estimation of an EMD, with emphasis on the construction of interval estimates. Estimation of the EMD is accomplished by substitution of estimates of prior parameters in the posterior mean of the EMD. This procedure is examined in a parametric model (the normal-normal mixture) and in a semi-parametric model. In both cases, the empirical Bayes bootstrap of Laird and Louis (1987, Journal of the American Statistical Association 82, 739-757) is used to assess the variability of the estimated EMD arising from the estimation of prior parameters. The proposed methods are applied to a meta-analysis of population trend estimates for groups of birds.

  8. POSTPROCESSING MIXED FINITE ELEMENT METHODS FOR SOLVING CAHN-HILLIARD EQUATION: METHODS AND ERROR ANALYSIS

    PubMed Central

    Wang, Wansheng; Chen, Long; Zhou, Jie

    2015-01-01

    A postprocessing technique for mixed finite element methods for the Cahn-Hilliard equation is developed and analyzed. Once the mixed finite element approximations have been computed at a fixed time on the coarser mesh, the approximations are postprocessed by solving two decoupled Poisson equations in an enriched finite element space (either on a finer grid or in a higher-order space), for which many fast Poisson solvers can be applied. The nonlinear iteration is applied only to a much smaller problem, and the computational cost of using Newton and direct solvers on it is negligible compared with the cost of the linear problem. The analysis presented here shows that this technique retains the optimal rate of convergence for both the concentration and the chemical potential approximations. The corresponding error estimates obtained in our paper, especially the negative-norm error estimates, are non-trivial and differ from existing results in the literature. PMID:27110063

  9. A mixed methods approach to assess animal vaccination programmes: The case of rabies control in Bamako, Mali.

    PubMed

    Mosimann, Laura; Traoré, Abdallah; Mauti, Stephanie; Léchenne, Monique; Obrist, Brigit; Véron, René; Hattendorf, Jan; Zinsstag, Jakob

    2017-01-01

    In the framework of the research network on integrated control of zoonoses in Africa (ICONZ), a dog rabies mass vaccination campaign was carried out in two communes of Bamako (Mali) in September 2014. A mixed method approach, combining quantitative and qualitative tools, was developed to evaluate the effectiveness of the intervention and to optimize it for future scale-up. Actions to control rabies occur at the household level, where individuals take the decision to vaccinate their dogs; however, control also depends on the provision of vaccination services and community participation at the intermediate level of social resilience. Mixed methods seem necessary because this problem-driven transdisciplinary project includes epidemiological components in addition to social dynamics and cultural, political and institutional issues. Adapting earlier effectiveness models for health interventions to rabies control, we propose a mixed method assessment of individual effectiveness parameters such as availability, affordability, accessibility, adequacy and acceptability. Triangulation of quantitative methods (household survey, empirical coverage estimation and spatial analysis) with qualitative findings (participant observation, focus group discussions) facilitates a better understanding of the weight of each effectiveness determinant and of the underlying reasons embedded in the local understandings, cultural practices, and social and political realities of the setting. Using this method, a final effectiveness of 33% for commune Five and 28% for commune Six was estimated, with vaccination coverage of 27% and 20%, respectively. Availability was identified as the most sensitive effectiveness parameter, attributed to a lack of information about the campaign. We propose a mixed methods approach to optimize intervention design, using an "intervention effectiveness optimization cycle" with the aim of maximizing effectiveness. Empirical vaccination coverage estimates are compared to the effectiveness model with its determinants. In addition, qualitative data provide an explanatory framework for deeper insight, validation and interpretation of results, which should improve the intervention design while involving all stakeholders and increasing community participation. This work contributes vital information for the optimization and scale-up of future vaccination campaigns in Bamako, Mali. The proposed mixed method, although incompletely applied in this case study, should be applicable to similar rabies interventions targeting elimination in other settings. Copyright © 2016 Elsevier B.V. All rights reserved.

  10. Estimating age-specific reproductive numbers-A comparison of methods.

    PubMed

    Moser, Carlee B; White, Laura F

    2016-01-01

    Large outbreaks, such as those caused by influenza, put a strain on the resources necessary for their control. In particular, children have been shown to play a key role in influenza transmission during recent outbreaks, and targeted interventions, such as school closures, could positively impact the course of emerging epidemics. As an outbreak unfolds, it is important to be able to estimate reproductive numbers that incorporate this heterogeneity, and to use routinely collected surveillance data to target interventions more effectively and obtain an accurate understanding of transmission dynamics. A growing number of methods estimate age-group-specific reproductive numbers from limited data, building on methods that assume a homogeneously mixing population. In this article, we introduce a new approach that is flexible and improves on many aspects of existing methods. We apply this method to influenza data from two outbreaks, the 2009 H1N1 outbreaks in South Africa and Japan, to estimate age-group-specific reproductive numbers, and compare it to three other methods that also use existing data from social mixing surveys to quantify contact rates among different age groups. In this exercise, all estimates of the reproductive numbers for children exceeded the critical threshold of one, and in most cases they exceeded those of adults. We introduce a flexible new method to estimate reproductive numbers that describe heterogeneity in the population.

  11. Study on Raman spectral imaging method for simultaneous estimation of ingredients concentration in food powder

    USDA-ARS?s Scientific Manuscript database

    This study investigated the potential of point scan Raman spectral imaging method for estimation of different ingredients and chemical contaminant concentration in food powder. Food powder sample was prepared by mixing sugar, vanillin, melamine and non-dairy cream at 5 different concentrations in a ...

  12. The use of simple reparameterizations to improve the efficiency of Markov chain Monte Carlo estimation for multilevel models with applications to discrete time survival models.

    PubMed

    Browne, William J; Steele, Fiona; Golalizadeh, Mousa; Green, Martin J

    2009-06-01

    We consider the application of Markov chain Monte Carlo (MCMC) estimation methods to random-effects models, and in particular to the family of discrete time survival models. Survival models can be used in many situations in the medical and social sciences, and we illustrate their use through two examples that differ in both substantive area and data structure. A multilevel discrete time survival analysis involves expanding the data set so that the model can be cast as a standard multilevel binary response model. For such models it has been shown that MCMC methods have advantages in terms of reducing estimate bias. However, the data expansion results in very large data sets for which MCMC estimation is often slow and can produce chains that exhibit poor mixing. Any improvement in mixing will both speed up the methods and increase confidence in the estimates produced. The MCMC methodological literature is full of alternative algorithms designed to improve the mixing of chains, and we describe three reparameterization techniques that are easy to implement in available software. We consider two examples of multilevel survival analysis: incidence of mastitis in dairy cattle and contraceptive use dynamics in Indonesia. For each application we show where the reparameterization techniques can be used and assess their performance.
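
    One widely used reparameterization of this kind is hierarchical (non-)centering. The record gives no code, so the following is a minimal numpy sketch on a Gaussian two-level model rather than the binary survival models studied above; the data values and variance settings are made up for illustration. It contrasts the lag-1 autocorrelation of the chain for the overall mean under the centered and non-centered forms.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    J, n, sigma2, tau2 = 10, 5, 1.0, 0.1       # tau2 < sigma2/n: centered form mixes poorly
    y = rng.normal(0.0, np.sqrt(tau2), J)[:, None] + rng.normal(0.0, np.sqrt(sigma2), (J, n))
    ybar = y.mean(axis=1)

    def gibbs(centered, iters=5000):
        """Gibbs sampler for y_ij = mu + u_j + e_ij with known variances and a flat
        prior on mu, in centered (eta_j = mu + u_j) or non-centered (u_j) form."""
        mu, mus = 0.0, np.empty(iters)
        prec = n / sigma2 + 1.0 / tau2         # full-conditional precision of each effect
        for t in range(iters):
            if centered:
                eta = (n * ybar / sigma2 + mu / tau2) / prec \
                    + rng.normal(0.0, np.sqrt(1.0 / prec), J)
                mu = rng.normal(eta.mean(), np.sqrt(tau2 / J))
            else:
                u = (n * (ybar - mu) / sigma2) / prec \
                    + rng.normal(0.0, np.sqrt(1.0 / prec), J)
                mu = rng.normal((ybar - u).mean(), np.sqrt(sigma2 / (J * n)))
            mus[t] = mu
        return mus

    for c in (True, False):
        chain = gibbs(c)[1000:]                # discard burn-in
        rho1 = np.corrcoef(chain[:-1], chain[1:])[0, 1]
        print("centered" if c else "non-centered", "lag-1 autocorrelation:", round(rho1, 3))
    ```

    Which form mixes better depends on how well the random effects are identified by the data, which is why such reparameterizations are assessed per application, as in the two examples of the record.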

  13. A comparison of methods for estimating the random effects distribution of a linear mixed model.

    PubMed

    Ghidey, Wendimagegn; Lesaffre, Emmanuel; Verbeke, Geert

    2010-12-01

    This article reviews various recently suggested approaches to estimating the random effects distribution in a linear mixed model: (1) the smoothing-by-roughening approach of Shen and Louis (1), (2) the semi-non-parametric approach of Zhang and Davidian (2), (3) the heterogeneity model of Verbeke and Lesaffre (3), and (4) the flexible approach of Ghidey et al. (4). These four approaches are compared via an extensive simulation study. We conclude that, for the cases considered, the approach of Ghidey et al. (4) often has the smallest integrated mean squared error for estimating the random effects distribution. An analysis of a longitudinal dental data set illustrates the performance of the methods in a practical example.

  14. Learning Multiple Band-Pass Filters for Sleep Stage Estimation: Towards Care Support for Aged Persons

    NASA Astrophysics Data System (ADS)

    Takadama, Keiki; Hirose, Kazuyuki; Matsushima, Hiroyasu; Hattori, Kiyohiko; Nakajima, Nobuo

    This paper proposes a sleep stage estimation method that can provide an accurate estimate for each person without attaching any devices to the body. In particular, our method learns appropriate multiple band-pass filters to extract the specific wave pattern of the heartbeat, which is required to estimate the sleep stage. For an accurate estimation, this paper employs a Learning Classifier System (LCS) as the data-mining technique and extends it to estimate the sleep stage. Extensive experiments on five subjects in mixed states of health confirm the following implications: (1) the proposed method provides more accurate sleep stage estimation than the conventional method, and (2) the sleep stage estimation calculated by the proposed method is robust regardless of the physical condition of the subject.
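
    The record does not specify the filter bank, so the following Python sketch only illustrates the building block: a zero-phase Butterworth band-pass applied to a toy heartbeat-like trace, with candidate bands over which a learner such as an LCS could search. The sampling rate, cutoffs and signal are all assumptions.

    ```python
    import numpy as np
    from scipy.signal import butter, filtfilt

    def bandpass(x, low_hz, high_hz, fs, order=4):
        """Zero-phase Butterworth band-pass filter."""
        b, a = butter(order, [low_hz, high_hz], btype="band", fs=fs)
        return filtfilt(b, a, x)

    # Toy trace: a 1 Hz heartbeat-like component buried in drift and noise
    fs = 50.0                                   # sampling rate in Hz (assumed)
    rng = np.random.default_rng(0)
    t = np.arange(0.0, 60.0, 1.0 / fs)
    x = np.sin(2 * np.pi * 1.0 * t) + 0.5 * np.sin(2 * np.pi * 0.05 * t) \
        + 0.3 * rng.standard_normal(t.size)

    # Candidate bands over which a learner (e.g. an LCS) could search
    bands = [(0.5, 2.0), (2.0, 5.0), (5.0, 10.0)]
    energy = [bandpass(x, lo, hi, fs).std() for lo, hi in bands]   # per-band signal level
    ```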

  15. An algorithm for separation of mixed sparse and Gaussian sources

    PubMed Central

    Akkalkotkar, Ameya

    2017-01-01

    Independent component analysis (ICA) is a ubiquitous method for decomposing complex signal mixtures into a small set of statistically independent source signals. However, in cases in which the signal mixture consists of both nongaussian and Gaussian sources, the Gaussian sources will not be recoverable by ICA and will pollute estimates of the nongaussian sources. Therefore, it is desirable to have methods for mixed ICA/PCA which can separate mixtures of Gaussian and nongaussian sources. For mixtures of purely Gaussian sources, principal component analysis (PCA) can provide a basis for the Gaussian subspace. We introduce a new method for mixed ICA/PCA which we call Mixed ICA/PCA via Reproducibility Stability (MIPReSt). Our method uses a repeated estimation technique to rank sources by reproducibility, combined with decomposition of multiple subsamplings of the original data matrix. These multiple decompositions allow us to assess component stability as the size of the data matrix changes, which can be used to determine the dimension of the nongaussian subspace in a mixture. We demonstrate the utility of MIPReSt for signal mixtures consisting of simulated sources and real-world (speech) sources, as well as a mixture of unknown composition. PMID:28414814

  16. An algorithm for separation of mixed sparse and Gaussian sources.

    PubMed

    Akkalkotkar, Ameya; Brown, Kevin Scott

    2017-01-01

    Independent component analysis (ICA) is a ubiquitous method for decomposing complex signal mixtures into a small set of statistically independent source signals. However, in cases in which the signal mixture consists of both nongaussian and Gaussian sources, the Gaussian sources will not be recoverable by ICA and will pollute estimates of the nongaussian sources. Therefore, it is desirable to have methods for mixed ICA/PCA which can separate mixtures of Gaussian and nongaussian sources. For mixtures of purely Gaussian sources, principal component analysis (PCA) can provide a basis for the Gaussian subspace. We introduce a new method for mixed ICA/PCA which we call Mixed ICA/PCA via Reproducibility Stability (MIPReSt). Our method uses a repeated estimation technique to rank sources by reproducibility, combined with decomposition of multiple subsamplings of the original data matrix. These multiple decompositions allow us to assess component stability as the size of the data matrix changes, which can be used to determine the dimension of the nongaussian subspace in a mixture. We demonstrate the utility of MIPReSt for signal mixtures consisting of simulated sources and real-world (speech) sources, as well as a mixture of unknown composition.
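
    The exact MIPReSt algorithm is not given in these records, but the core idea, ranking components by how reproducibly they reappear across decompositions of subsampled data, can be sketched with scikit-learn's FastICA. The matching-by-correlation scheme below is a simplification of the record's description, and all parameters are illustrative.

    ```python
    import numpy as np
    from sklearn.decomposition import FastICA

    def reproducibility_scores(X, n_components, n_runs=10, frac=0.8, seed=0):
        """Score ICA components by how reproducibly they reappear across random
        subsamples of X (rows = observations, columns = mixed channels)."""
        rng = np.random.default_rng(seed)
        ref = FastICA(n_components=n_components, random_state=0).fit(X)
        ref_S = ref.transform(X)                              # reference source estimates
        scores = np.zeros(n_components)
        for r in range(n_runs):
            idx = rng.choice(X.shape[0], int(frac * X.shape[0]), replace=False)
            S = FastICA(n_components=n_components, random_state=r + 1).fit_transform(X[idx])
            # absolute correlations between reference and subsample sources
            C = np.abs(np.corrcoef(ref_S[idx].T, S.T)[:n_components, n_components:])
            scores += C.max(axis=1)                           # best match per reference source
        return scores / n_runs    # near 1: stable (nongaussian); low: Gaussian subspace
    ```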

  17. Manure sampling procedures and nutrient estimation by the hydrometer method for gestation pigs.

    PubMed

    Zhu, Jun; Ndegwa, Pius M; Zhang, Zhijian

    2004-05-01

    Three manure agitation procedures were examined in this study (vertical mixing, horizontal mixing, and no mixing) to determine the efficacy of producing a representative manure sample. The total solids content for manure from gestation pigs was found to be well correlated with the total nitrogen (TN) and total phosphorus (TP) concentrations in the manure, with highly significant correlation coefficients of 0.988 and 0.994, respectively. Linear correlations were observed between the TN and TP contents and the manure specific gravity (correlation coefficients: 0.991 and 0.987, respectively). Therefore, it may be inferred that the nutrients in pig manure can be estimated with reasonable accuracy by measuring the liquid manure specific gravity. A rapid testing method for manure nutrient contents (TN and TP) using a soil hydrometer was also evaluated. The results showed that the estimating error increased from +/-10% to +/-30% with the decrease in TN (from 1000 to 100 ppm) and TP (from 700 to 50 ppm) concentrations in the manure. Data also showed that the hydrometer readings had to be taken within 10 s after mixing to avoid reading drift in specific gravity due to the settling of manure solids.
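
    The practical upshot of the record is a linear calibration from specific gravity to nutrient content. A minimal Python sketch of such a calibration follows; the data points are made up for illustration and are not the paper's measurements.

    ```python
    import numpy as np
    from scipy import stats

    # Hypothetical calibration pairs: hydrometer specific gravity vs. lab TN (ppm)
    sg = np.array([1.002, 1.005, 1.009, 1.014, 1.018, 1.023])
    tn = np.array([150.0, 320.0, 480.0, 660.0, 810.0, 1000.0])

    fit = stats.linregress(sg, tn)        # linear calibration; the record reports r ~ 0.99
    print(f"TN ~ {fit.slope:.0f} * SG {fit.intercept:+.0f}  (r = {fit.rvalue:.3f})")

    # Field use: read the hydrometer within ~10 s (before solids settle) and predict TN
    tn_estimate = fit.slope * 1.012 + fit.intercept
    ```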

  18. Mixed-norm estimates for the M/EEG inverse problem using accelerated gradient methods.

    PubMed

    Gramfort, Alexandre; Kowalski, Matthieu; Hämäläinen, Matti

    2012-04-07

    Magneto- and electroencephalography (M/EEG) measure the electromagnetic fields produced by the neural electrical currents. Given a conductor model for the head, and the distribution of source currents in the brain, Maxwell's equations allow one to compute the ensuing M/EEG signals. Given the actual M/EEG measurements and the solution of this forward problem, one can localize, in space and in time, the brain regions that have produced the recorded data. However, due to the physics of the problem, the limited number of sensors compared to the number of possible source locations, and measurement noise, this inverse problem is ill-posed. Consequently, additional constraints are needed. Classical inverse solvers, often called minimum norm estimates (MNE), promote source estimates with a small ℓ₂ norm. Here, we consider a more general class of priors based on mixed norms. Such norms have the ability to structure the prior in order to incorporate some additional assumptions about the sources. We refer to such solvers as mixed-norm estimates (MxNE). In the context of M/EEG, MxNE can promote spatially focal sources with smooth temporal estimates with a two-level ℓ₁/ℓ₂ mixed-norm, while a three-level mixed-norm can be used to promote spatially non-overlapping sources between different experimental conditions. In order to efficiently solve the optimization problems of MxNE, we introduce fast first-order iterative schemes that for the ℓ₁/ℓ₂ norm give solutions in a few seconds making such a prior as convenient as the simple MNE. Furthermore, thanks to the convexity of the optimization problem, we can provide optimality conditions that guarantee global convergence. The utility of the methods is demonstrated both with simulations and experimental MEG data.
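
    The two-level ℓ₁/ℓ₂ prior has a simple proximal operator, row-wise group soft-thresholding, which is the workhorse of the accelerated first-order schemes the record refers to. Below is a hedged Python sketch of that operator and one FISTA-style step for the linear model M ≈ GX; variable names and the step-size convention are assumptions, not the authors' implementation.

    ```python
    import numpy as np

    def prox_l21(V, alpha):
        """Proximal operator of alpha * ||X||_{2,1}: shrink the l2 norm of each row
        (source) toward zero; rows with norm <= alpha vanish, giving focal sources."""
        norms = np.sqrt((V ** 2).sum(axis=1, keepdims=True))
        return V * np.maximum(1.0 - alpha / np.maximum(norms, 1e-12), 0.0)

    def fista_step(X_prev, Y, t, M, G, L, alpha):
        """One accelerated (FISTA-type) step for
        min 0.5 * ||M - G X||_F^2 + alpha * ||X||_{2,1};
        L is a Lipschitz constant of the gradient, e.g. the squared spectral norm of G."""
        grad = G.T @ (G @ Y - M)                       # gradient of the data-fit term at Y
        X = prox_l21(Y - grad / L, alpha / L)          # forward-backward step
        t_next = (1.0 + np.sqrt(1.0 + 4.0 * t ** 2)) / 2.0
        Y_next = X + ((t - 1.0) / t_next) * (X - X_prev)
        return X, Y_next, t_next
    ```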

  19. Mixed-norm estimates for the M/EEG inverse problem using accelerated gradient methods

    PubMed Central

    Gramfort, Alexandre; Kowalski, Matthieu; Hämäläinen, Matti

    2012-01-01

    Magneto- and electroencephalography (M/EEG) measure the electromagnetic fields produced by the neural electrical currents. Given a conductor model for the head, and the distribution of source currents in the brain, Maxwell’s equations allow one to compute the ensuing M/EEG signals. Given the actual M/EEG measurements and the solution of this forward problem, one can localize, in space and in time, the brain regions that have produced the recorded data. However, due to the physics of the problem, the limited number of sensors compared to the number of possible source locations, and measurement noise, this inverse problem is ill-posed. Consequently, additional constraints are needed. Classical inverse solvers, often called Minimum Norm Estimates (MNE), promote source estimates with a small ℓ2 norm. Here, we consider a more general class of priors based on mixed-norms. Such norms have the ability to structure the prior in order to incorporate some additional assumptions about the sources. We refer to such solvers as Mixed-Norm Estimates (MxNE). In the context of M/EEG, MxNE can promote spatially focal sources with smooth temporal estimates with a two-level ℓ1/ℓ2 mixed-norm, while a three-level mixed-norm can be used to promote spatially non-overlapping sources between different experimental conditions. In order to efficiently solve the optimization problems of MxNE, we introduce fast first-order iterative schemes that for the ℓ1/ℓ2 norm give solutions in a few seconds, making such a prior as convenient as the simple MNE. Furthermore, thanks to the convexity of the optimization problem, we can provide optimality conditions that guarantee global convergence. The utility of the methods is demonstrated both with simulations and experimental MEG data. PMID:22421459

  20. Doing more for less: identifying opportunities to expand public sector access to safe abortion in South Africa through budget impact analysis.

    PubMed

    Lince-Deroche, Naomi; Harries, Jane; Constant, Deborah; Morroni, Chelsea; Pleaner, Melanie; Fetters, Tamara; Grossman, Daniel; Blanchard, Kelly; Sinanovic, Edina

    2018-02-01

    To estimate the costs of public-sector abortion provision in South Africa and to explore the potential for expanding access at reduced cost by changing the mix of technologies used. We conducted a budget impact analysis using public sector abortion statistics and published cost data. We estimated the total costs to the public health service over 10 years, starting in South Africa's financial year 2016/17, given four scenarios: (1) holding service provision constant, (2) expanding public sector provision, (3) changing the abortion technologies used (i.e. the method mix), and (4) expansion plus changing the method mix. The public sector performed an estimated 20% of the expected total number of abortions in 2016/17; 26% and 54% of all abortions were performed illegally or in the private sector, respectively. Costs were lowest in scenarios where method mix shifting occurred. Holding the proportion of abortions performed in the public sector constant, shifting to more cost-effective service provision (more first-trimester services with more medication abortion and using the combined regimen for medical induction in the second trimester) could result in savings of $28.1 million in the public health service over the 10-year period. Expanding public sector provision through elimination of unsafe abortions would require an additional $192.5 million. South Africa can provide more safe abortions for less money in the public sector through shifting the methods provided. More research is needed to understand whether the cost of expanding access could be offset by savings from averting costs of managing unsafe abortions. South Africa can provide more safe abortions for less money in the public sector through shifting to more first-trimester methods, including more medication abortion, and shifting to a combined mifepristone plus misoprostol regimen for second trimester medical induction. Expanding access in addition to method mix changes would require additional funds. Copyright © 2017 Elsevier Inc. All rights reserved.

  1. Assessment of mixed-layer height estimation from single-wavelength ceilometer profiles

    EPA Science Inventory

    Differing boundary/mixed-layer height measurement methods were assessed in moderately polluted and clean environments, with a focus on the Vaisala CL51 ceilometer. This intercomparison was performed as part of ongoing measurements at the Chemistry And Physics of the Atmospheric B...

  2. Mixed time integration methods for transient thermal analysis of structures

    NASA Technical Reports Server (NTRS)

    Liu, W. K.

    1982-01-01

    The computational methods used to predict and optimize the thermal structural behavior of aerospace vehicle structures are reviewed. In general, two classes of algorithms, implicit and explicit, are used in transient thermal analysis of structures. Each of these two methods has its own merits. Due to the different time scales of the mechanical and thermal responses, the selection of a time integration method can be a difficult yet critical factor in the efficient solution of such problems. Therefore, mixed time integration methods for transient thermal analysis of structures are being developed. The computer implementation aspects and numerical evaluation of these mixed time implicit-explicit algorithms in thermal analysis of structures are presented. A computationally useful method of estimating the critical time step for linear quadrilateral elements is also given. Numerical tests confirm the stability criterion and accuracy characteristics of the methods. The superiority of these mixed time methods over the fully implicit method or the fully explicit method is also demonstrated.

  3. Mixed time integration methods for transient thermal analysis of structures

    NASA Technical Reports Server (NTRS)

    Liu, W. K.

    1983-01-01

    The computational methods used to predict and optimize the thermal-structural behavior of aerospace vehicle structures are reviewed. In general, two classes of algorithms, implicit and explicit, are used in transient thermal analysis of structures. Each of these two methods has its own merits. Due to the different time scales of the mechanical and thermal responses, the selection of a time integration method can be a difficult yet critical factor in the efficient solution of such problems. Therefore, mixed time integration methods for transient thermal analysis of structures are being developed. The computer implementation aspects and numerical evaluation of these mixed time implicit-explicit algorithms in thermal analysis of structures are presented. A computationally useful method of estimating the critical time step for linear quadrilateral elements is also given. Numerical tests confirm the stability criterion and accuracy characteristics of the methods. The superiority of these mixed time methods over the fully implicit method or the fully explicit method is also demonstrated.
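
    The two records above describe the same idea, so a single sketch suffices: explicit updates are cheap but restricted by a critical time step, implicit updates are unconditionally stable but require a solve, and a mixed method applies each where it pays off. The following Python sketch illustrates the trade-off on a 1D heat-conduction analogue, with a 1D critical-step estimate standing in for the quadrilateral-element estimate mentioned in the records; all parameters are illustrative.

    ```python
    import numpy as np

    # 1D heat conduction u_t = a * u_xx, central differences in space
    a, length, nx = 1.0, 1.0, 51
    dx = length / (nx - 1)
    dt_crit = dx ** 2 / (2.0 * a)          # explicit critical time step (1D analogue of the
                                           # element-based estimate discussed in the records)
    A = np.diag(-2.0 * np.ones(nx)) + np.diag(np.ones(nx - 1), 1) + np.diag(np.ones(nx - 1), -1)
    A *= a / dx ** 2
    A[0, :] = A[-1, :] = 0.0               # hold the end temperatures fixed (Dirichlet)

    def step_explicit(u, dt):              # forward Euler: cheap but conditionally stable
        return u + dt * (A @ u)

    def step_implicit(u, dt):              # backward Euler: a solve per step, unconditionally stable
        return np.linalg.solve(np.eye(nx) - dt * A, u)

    u = np.sin(np.pi * np.linspace(0.0, length, nx))   # initial temperature profile
    for _ in range(200):
        u = step_explicit(u, 0.9 * dt_crit)            # stable; with dt > dt_crit it blows up
    ```

    A mixed implicit-explicit partition would advance, say, a stiff, finely meshed region with step_implicit and the remainder with step_explicit within the same time loop.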

  4. Parameters effective on estimating a nonstationary mixed-phase wavelet using cumulant matching approach

    NASA Astrophysics Data System (ADS)

    Vosoughi, Ehsan; Javaherian, Abdolrahim

    2018-01-01

    Seismic inversion is a process performed to remove the effects of propagated wavelets in order to recover the acoustic impedance. To obtain valid velocity and density values for the subsurface layers through inversion, reliable wavelet estimation, for example by the cumulant matching approach, is essential. For this purpose, the seismic data were windowed in this work in such a way that two consecutive windows were only one sample apart. Also, we did not assume a fixed wavelet within any window, but let the phase of each wavelet rotate at each sample in the window. Comparing the fourth-order cumulant of the whitened trace with the fourth-order moment of the all-pass operator in each window generated a cost function to be minimized with a non-linear optimization method. In this regard, the parameters affecting the estimation of nonstationary mixed-phase wavelets were tested on a synthetic nonstationary seismic trace at 0.82 s and 1.6 s, and we compared the effect of each parameter on the wavelets estimated at these two times. The parameters studied in this work are window length, taper type, number of iterations, signal-to-noise ratio, bandwidth-to-central-frequency ratio, and Q factor. The results show that, applying the optimum values of the effective parameters, the average correlation of the estimated mixed-phase wavelets with the original ones is about 87%. Moreover, the effectiveness of the proposed approach was examined on a synthetic nonstationary seismic section with variable Q factor values along the time and offset axes. The cumulant matching method was then applied to a crossline of the migrated data from a 3D data set of an oilfield in the Persian Gulf, and the effect of a wrong Q estimate on the estimated mixed-phase wavelet was also considered for this real data set. It is concluded that the accuracy of the estimated wavelet relies on the estimated Q, and that an error of more than 10% in the estimated value of Q is still acceptable. Finally, an 88% correlation was found between the estimated mixed-phase wavelets and the original ones for three horizons; the estimated wavelets were applied to the data, and the results of the deconvolution process are presented.

  5. Mixed H2/H∞-Based Fusion Estimation for Energy-Limited Multi-Sensors in Wearable Body Networks

    PubMed Central

    Li, Chao; Zhang, Zhenjiang; Chao, Han-Chieh

    2017-01-01

    In wireless sensor networks, sensor nodes collect large amounts of data in each time period. If all of these data are transmitted to a Fusion Center (FC), the power of the sensor nodes runs out rapidly; on the other hand, the data also need filtering to remove noise. Therefore, an efficient fusion estimation model that can save the energy of the sensor nodes while maintaining high accuracy is needed. This paper proposes a novel mixed H2/H∞-based energy-efficient fusion estimation model (MHEEFE) for energy-limited Wearable Body Networks. In the proposed model, the communication cost is first reduced efficiently while keeping the estimation accuracy. Then, the parameters in the quantization method are discussed and confirmed by an optimization method with some prior knowledge. In addition, calculation methods for some important parameters are investigated, which make the final estimates more stable. Finally, an iteration-based weight calculation algorithm is presented, which improves the fault tolerance of the final estimate. In the simulations, the impacts of some pivotal parameters are discussed, and, compared with related models, MHEEFE shows better performance in accuracy, energy efficiency and fault tolerance. PMID:29280950

  6. Comment on Hoffman and Rovine (2007): SPSS MIXED can estimate models with heterogeneous variances.

    PubMed

    Weaver, Bruce; Black, Ryan A

    2015-06-01

    Hoffman and Rovine (Behavior Research Methods, 39:101-117, 2007) have provided a very nice overview of how multilevel models can be useful to experimental psychologists. They included two illustrative examples and provided both SAS and SPSS commands for estimating the models they reported. However, upon examining the SPSS syntax for the models reported in their Table 3, we found no syntax for models 2B and 3B, both of which have heterogeneous error variances. Instead, there is syntax that estimates similar models with homogeneous error variances and a comment stating that SPSS does not allow heterogeneous errors. But that is not correct. We provide SPSS MIXED commands to estimate models 2B and 3B with heterogeneous error variances and obtain results nearly identical to those reported by Hoffman and Rovine in their Table 3. Therefore, contrary to the comment in Hoffman and Rovine's syntax file, SPSS MIXED can estimate models with heterogeneous error variances.

  7. The use of copulas to practical estimation of multivariate stochastic differential equation mixed effects models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rupšys, P.

    A system of stochastic differential equations (SDEs) with mixed-effects parameters and a multivariate normal copula density function was used to develop a tree height model for Scots pine trees in Lithuania. A two-step maximum likelihood parameter estimation method is used and computational guidelines are given. After fitting the conditional probability density functions to the outside-bark diameter at breast height and the total tree height, a bivariate normal copula distribution model was constructed. Predictions from the mixed-effects parameters SDE tree height model calculated during this research were compared to regression tree height equations. The results are implemented in the symbolic computational language MAPLE.

  8. Mixed Effects Modeling Using Stochastic Differential Equations: Illustrated by Pharmacokinetic Data of Nicotinic Acid in Obese Zucker Rats.

    PubMed

    Leander, Jacob; Almquist, Joachim; Ahlström, Christine; Gabrielsson, Johan; Jirstrand, Mats

    2015-05-01

    Inclusion of stochastic differential equations in mixed effects models provides means to quantify and distinguish three sources of variability in data. In addition to the two commonly encountered sources, measurement error and interindividual variability, we also consider uncertainty in the dynamical model itself. To this end, we extend the ordinary differential equation setting used in nonlinear mixed effects models to include stochastic differential equations. The approximate population likelihood is derived using the first-order conditional estimation with interaction method and extended Kalman filtering. To illustrate the application of the stochastic differential mixed effects model, two pharmacokinetic models are considered. First, we use a stochastic one-compartmental model with first-order input and nonlinear elimination to generate synthetic data in a simulated study. We show that by using the proposed method, the three sources of variability can be successfully separated. If the stochastic part is neglected, the parameter estimates become biased, and the measurement error variance is significantly overestimated. Second, we consider an extension to a stochastic pharmacokinetic model in a preclinical study of nicotinic acid kinetics in obese Zucker rats. The parameter estimates are compared between a deterministic and a stochastic NiAc disposition model, respectively. Discrepancies between model predictions and observations, previously described as measurement noise only, are now separated into a comparatively lower level of measurement noise and a significant uncertainty in model dynamics. These examples demonstrate that stochastic differential mixed effects models are useful tools for identifying incomplete or inaccurate model dynamics and for reducing potential bias in parameter estimates due to such model deficiencies.

  9. Mixed Estimation for a Forest Survey Sample Design

    Treesearch

    Francis A. Roesch

    1999-01-01

    Three methods of estimating the current state of forest attributes over small areas for the USDA Forest Service Southern Research Station's annual forest sampling design are compared. The three methods were (I) simple moving average, (II) single imputation of plot data that had been updated by externally developed models, and (III) local application of a global...

  10. Attempts at estimating mixed venous carbon dioxide tension by the single-breath method.

    PubMed

    Ohta, H; Takatani, O; Matsuoka, T

    1989-01-01

    The single-breath method was originally proposed by Kim et al. [1] for estimating the blood carbon dioxide tension and cardiac output. Its reliability has not been proven. The present study was undertaken, using dogs, to compare the mixed venous carbon dioxide tension (PVCO2) calculated by the single-breath method with the PVCO2 measured in mixed venous blood, and to evaluate the influence of variations in the exhalation duration and the volume of expired air usually discarded from computations as the deadspace. Among the exhalation durations of 15, 30 and 45 s tested, the 15 s duration was found to be too short to obtain an analyzable O2-CO2 curve, but at either 30 or 45 s, the calculated values of PVCO2 were comparable to the measured PVCO2. A significant agreement between calculated and measured PVCO2 was obtained when the expired gas with PCO2 less than 22 Torr was considered as deadspace gas.

  11. How we compute N matters to estimates of mixing in stratified flows

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Arthur, Robert S.; Venayagamoorthy, Subhas K.; Koseff, Jeffrey R.

    Most commonly used models for turbulent mixing in the ocean rely on a background stratification against which turbulence must work to stir the fluid. While this background stratification is typically well defined in idealized numerical models, it is more difficult to capture in observations. Here, a potential discrepancy in ocean mixing estimates due to the chosen calculation of the background stratification is explored using direct numerical simulation data of breaking internal waves on slopes. Two different methods for computing the buoyancy frequency N, one based on a three-dimensionally sorted density field (often used in numerical models) and the other based on locally sorted vertical density profiles (often used in the field), are used to quantify the effect of N on turbulence quantities. It is shown that how N is calculated changes not only the flux Richardson number R_f, which is often used to parameterize turbulent mixing, but also the turbulence activity number, or Gibson number Gi, leading to potential errors in estimates of the mixing efficiency from Gi-based parameterizations.

  12. How we compute N matters to estimates of mixing in stratified flows

    DOE PAGES

    Arthur, Robert S.; Venayagamoorthy, Subhas K.; Koseff, Jeffrey R.; ...

    2017-10-13

    Most commonly used models for turbulent mixing in the ocean rely on a background stratification against which turbulence must work to stir the fluid. While this background stratification is typically well defined in idealized numerical models, it is more difficult to capture in observations. Here, a potential discrepancy in ocean mixing estimates due to the chosen calculation of the background stratification is explored using direct numerical simulation data of breaking internal waves on slopes. Two different methods for computing the buoyancy frequency N, one based on a three-dimensionally sorted density field (often used in numerical models) and the other based on locally sorted vertical density profiles (often used in the field), are used to quantify the effect of N on turbulence quantities. It is shown that how N is calculated changes not only the flux Richardson number R_f, which is often used to parameterize turbulent mixing, but also the turbulence activity number, or Gibson number Gi, leading to potential errors in estimates of the mixing efficiency from Gi-based parameterizations.
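
    The two records above index the same study, and the contrast they describe can be sketched in a few lines of Python: N² is computed from a synthetic 2D density slice both from a domain-wide resorting and from column-by-column sorting. The constants, the uniform-cell assumption and the synthetic field are illustrative, not the study's setup.

    ```python
    import numpy as np

    g, rho0 = 9.81, 1025.0                 # gravity (m s^-2) and reference density (kg m^-3)

    def n2_global(rho, z):
        """Method 1: N^2 from a domain-wide resorting of the density field into a
        statically stable background state (uniform cell volumes assumed).
        rho: (nz, nx) with row 0 at the bottom; z: (nz,) heights increasing upward."""
        nz, nx = rho.shape
        flat = np.sort(rho.ravel())[::-1]             # densest water to the bottom
        rho_bg = flat.reshape(nz, nx).mean(axis=1)    # one background profile for the domain
        return -(g / rho0) * np.gradient(rho_bg, z)

    def n2_local(rho, z):
        """Method 2: N^2 from sorting each vertical profile separately, as is
        typically done with individual field casts."""
        rho_sorted = -np.sort(-rho, axis=0)           # each column: density decreasing upward
        return -(g / rho0) * np.gradient(rho_sorted, z, axis=0)

    # Synthetic slice: a stable background plus noise mimicking overturns
    z = np.linspace(0.0, 10.0, 40)
    rho = 1028.0 - 0.3 * z[:, None] + 0.2 * np.random.default_rng(0).standard_normal((40, 80))
    print(n2_global(rho, z).mean(), n2_local(rho, z).mean())
    ```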

  13. Comparison of Methods for Analyzing Left-Censored Occupational Exposure Data

    PubMed Central

    Huynh, Tran; Ramachandran, Gurumurthy; Banerjee, Sudipto; Monteiro, Joao; Stenzel, Mark; Sandler, Dale P.; Engel, Lawrence S.; Kwok, Richard K.; Blair, Aaron; Stewart, Patricia A.

    2014-01-01

    The National Institute for Environmental Health Sciences (NIEHS) is conducting an epidemiologic study (GuLF STUDY) to investigate the health of the workers and volunteers who participated from April to December of 2010 in the response and cleanup of the oil release after the Deepwater Horizon explosion in the Gulf of Mexico. The exposure assessment component of the study involves analyzing thousands of personal monitoring measurements that were collected during this effort. A substantial portion of these data has values reported by the analytic laboratories to be below the limits of detection (LOD). A simulation study was conducted to evaluate three established methods for analyzing data with censored observations to estimate the arithmetic mean (AM), geometric mean (GM), geometric standard deviation (GSD), and the 95th percentile (X0.95) of the exposure distribution: the maximum likelihood (ML) estimation, the β-substitution, and the Kaplan–Meier (K-M) methods. Each method was challenged with computer-generated exposure datasets drawn from lognormal and mixed lognormal distributions with sample sizes (N) varying from 5 to 100, GSDs ranging from 2 to 5, and censoring levels ranging from 10 to 90%, with single and multiple LODs. Using relative bias and relative root mean squared error (rMSE) as the evaluation metrics, the β-substitution method generally performed as well or better than the ML and K-M methods in most simulated lognormal and mixed lognormal distribution conditions. The ML method was suitable for large sample sizes (N ≥ 30) up to 80% censoring for lognormal distributions with small variability (GSD = 2–3). The K-M method generally provided accurate estimates of the AM when the censoring was <50% for lognormal and mixed distributions. The accuracy and precision of all methods decreased under high variability (GSD = 4 and 5) and small to moderate sample sizes (N < 20) but the β-substitution was still the best of the three methods. When using the ML method, practitioners are cautioned to be aware of different ways of estimating the AM as they could lead to biased interpretation. A limitation of the β-substitution method is the absence of a confidence interval for the estimate. More research is needed to develop methods that could improve the estimation accuracy for small sample sizes and high percent censored data and also provide uncertainty intervals. PMID:25261453
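
    Of the three methods compared, the ML approach is the most compact to write down: detected values contribute a density term and non-detects a CDF term at the LOD. The Python sketch below fits a censored lognormal by ML and derives the GM, GSD, AM and 95th percentile; it is a minimal illustration, not the record's simulation framework, and the AM formula shown is just one of the estimators the record cautions practitioners about.

    ```python
    import numpy as np
    from scipy import stats, optimize

    def censored_lognormal_mle(x, detected):
        """ML fit of a lognormal exposure distribution with non-detects.
        x: measured value if detected, otherwise the LOD; detected: boolean mask."""
        logx = np.log(np.asarray(x, float))
        det = np.asarray(detected, bool)

        def nll(params):
            mu, log_sigma = params
            sigma = np.exp(log_sigma)                                # keeps sigma positive
            ll = stats.norm.logpdf(logx[det], mu, sigma).sum()       # detects: density
            ll += stats.norm.logcdf(logx[~det], mu, sigma).sum()     # non-detects: P(X < LOD)
            return -ll

        res = optimize.minimize(nll, x0=[logx.mean(), 0.0], method="Nelder-Mead")
        mu, sigma = res.x[0], np.exp(res.x[1])
        return {"GM": np.exp(mu), "GSD": np.exp(sigma),
                "AM": np.exp(mu + sigma ** 2 / 2),                   # one of several AM estimators
                "X95": np.exp(mu + 1.645 * sigma)}
    ```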

  14. [Theory, method and application of method R on estimation of (co)variance components].

    PubMed

    Liu, Wen-Zhong

    2004-07-01

    Theory, method and application of Method R for the estimation of (co)variance components are reviewed, with the aim of supporting reasonable use of the method. Estimation requires R values, which are regressions of predicted random effects calculated using the complete dataset on predicted random effects calculated using random subsets of the same data. By using a multivariate iteration algorithm based on a transformation matrix, combined with the preconditioned conjugate gradient method to solve the mixed model equations, the computational efficiency of Method R is much improved. Method R is computationally inexpensive, and the sampling errors and approximate credible intervals of the estimates can be obtained. Disadvantages of Method R include a larger sampling variance than other methods for the same data, and biased estimates in small datasets. As an alternative method, Method R can be used for larger datasets. It is necessary to study its theoretical properties and broaden its application range further.

  15. MIXED-STATUS FAMILIES AND WIC UPTAKE: THE EFFECTS OF RISK OF DEPORTATION ON PROGRAM USE

    PubMed Central

    Vargas, Edward D.; Pirog, Maureen A.

    2016-01-01

    Objective To develop and test measures of the effects of risk of deportation and mixed-status family composition on WIC uptake. Mixed status is a situation in which some family members are U.S. citizens and other family members are in the U.S. without proper authorization. Methods We estimate a series of logistic regressions of WIC uptake, merging data from the Fragile Families and Child Well-being Survey with deportation data from U.S. Immigration and Customs Enforcement. Results The findings of this study suggest that risk of deportation is negatively associated with WIC uptake, and that among mixed-status families, Mexican-origin families are the most sensitive to deportations in their program use. Conclusion Our analysis provides a typology and framework to study mixed-status families and evaluate their usage of social services by including an innovative measure of risk of deportation. PMID:27642194

  16. A Simulation Study Comparison of Bayesian Estimation with Conventional Methods for Estimating Unknown Change Points

    ERIC Educational Resources Information Center

    Wang, Lijuan; McArdle, John J.

    2008-01-01

    The main purpose of this research is to evaluate the performance of a Bayesian approach for estimating unknown change points using Monte Carlo simulations. The univariate and bivariate unknown change point mixed models were presented and the basic idea of the Bayesian approach for estimating the models was discussed. The performance of Bayesian…

  17. An Investigation of the Accuracy of Alternative Methods of True Score Estimation in High-Stakes Mixed-Format Examinations.

    ERIC Educational Resources Information Center

    Klinger, Don A.; Rogers, W. Todd

    2003-01-01

    The estimation accuracy of procedures based on classical test score theory and item response theory (generalized partial credit model) were compared for examinations consisting of multiple-choice and extended-response items. Analysis of British Columbia Scholarship Examination results found an error rate of about 10 percent for both methods, with…

  18. A Simulation Study on Methods of Correcting for the Effects of Extreme Response Style

    ERIC Educational Resources Information Center

    Wetzel, Eunike; Böhnke, Jan R.; Rose, Norman

    2016-01-01

    The impact of response styles such as extreme response style (ERS) on trait estimation has long been a matter of concern to researchers and practitioners. This simulation study investigated three methods that have been proposed for the correction of trait estimates for ERS effects: (a) mixed Rasch models, (b) multidimensional item response models,…

  19. Productivity growth in outpatient child and adolescent mental health services: the impact of case-mix adjustment.

    PubMed

    Halsteinli, Vidar; Kittelsen, Sverre A; Magnussen, Jon

    2010-02-01

    The performance of health service providers may be monitored by measuring productivity. However, the policy value of such measures may depend crucially on the accuracy of the input and output measures. In particular, an important question is how to adjust adequately for case-mix in the production of health care. In this study, we assess productivity growth in Norwegian outpatient child and adolescent mental health service units (CAMHS) over a period characterized by governmental use of simple productivity indices, a substantial increase in capacity and a concurrent change in case-mix. We analyze the sensitivity of the productivity growth estimates to different specifications of output used to adjust for case-mix differences. Case-mix adjustment is achieved by distributing patients into eight groups depending on reason for referral, age and gender, as well as correcting for the number of consultations. We utilize the nonparametric Data Envelopment Analysis (DEA) method to implicitly calculate weights that maximize each unit's efficiency. Malmquist indices of technical productivity growth are estimated, and bootstrap procedures are performed to calculate confidence intervals and to test alternative specifications of outputs. The dataset consists of an unbalanced panel of 48-60 CAMHS over the period 1998-2006. The mean productivity growth estimate from a simple unadjusted patient model (a single output) is 35%; adjusting for case-mix (eight outputs) reduces the growth estimate to 15%, while adding consultations increases the estimate to 28%. The latter reflects an increase in the number of consultations per patient. We find that the governmental productivity indices strongly tend to overestimate productivity growth. Case-mix adjustment is of major importance, and governmental use of performance indicators necessitates careful consideration of output specifications. Copyright 2009 Elsevier Ltd. All rights reserved.

  20. Three novel approaches to structural identifiability analysis in mixed-effects models.

    PubMed

    Janzén, David L I; Jirstrand, Mats; Chappell, Michael J; Evans, Neil D

    2016-05-06

    Structural identifiability is a concept that considers whether the structure of a model together with a set of input-output relations uniquely determines the model parameters. In the mathematical modelling of biological systems, structural identifiability is an important concept, since biological interpretations are typically made from the parameter estimates. For a system defined by ordinary differential equations, several methods have been developed to analyse whether the model is structurally identifiable or otherwise. Another well-used modelling framework, which is particularly useful when the experimental data are sparsely sampled and the population variance is of interest, is mixed-effects modelling. However, established identifiability analysis techniques for ordinary differential equations are not directly applicable to such models. In this paper, we present and apply three different methods that can be used to study structural identifiability in mixed-effects models. The first method, called the repeated measurement approach, is based on applying a set of previously established statistical theorems. The second method, called the augmented system approach, is based on augmenting the mixed-effects model to an extended state-space form. The third method, called the Laplace transform mixed-effects extension, is based on considering the moment invariants of the system's transfer function as functions of random variables. To illustrate, compare and contrast the application of the three methods, they are applied to a set of mixed-effects models. Three structural identifiability analysis methods applicable to mixed-effects models have thus been presented in this paper. As the development of structural identifiability techniques for mixed-effects models has received very little attention, despite mixed-effects models being widely used, the methods presented in this paper provide a way of handling structural identifiability in mixed-effects models that was previously not possible. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  1. Adaptive mixed finite element methods for Darcy flow in fractured porous media

    NASA Astrophysics Data System (ADS)

    Chen, Huangxin; Salama, Amgad; Sun, Shuyu

    2016-10-01

    In this paper, we propose adaptive mixed finite element methods for simulating the single-phase Darcy flow in two-dimensional fractured porous media. The reduced model that we use for the simulation is a discrete fracture model coupling Darcy flows in the matrix and the fractures, and the fractures are modeled by one-dimensional entities. The Raviart-Thomas mixed finite element methods are utilized for the solution of the coupled Darcy flows in the matrix and the fractures. In order to improve the efficiency of the simulation, we use adaptive mixed finite element methods based on novel residual-based a posteriori error estimators. In addition, we develop an efficient upscaling algorithm to compute the effective permeability of the fractured porous media. Several interesting examples of Darcy flow in the fractured porous media are presented to demonstrate the robustness of the algorithm.

  2. The impact of composite AUC estimates on the prediction of systemic exposure in toxicology experiments.

    PubMed

    Sahota, Tarjinder; Danhof, Meindert; Della Pasqua, Oscar

    2015-06-01

    Current toxicity protocols relate measures of systemic exposure (i.e. AUC, Cmax) as obtained by non-compartmental analysis to observed toxicity. A complicating factor in this practice is the potential bias in the estimates defining safe drug exposure; moreover, it prevents the assessment of variability. The objective of the current investigation was therefore (a) to demonstrate the feasibility of applying nonlinear mixed effects modelling for the evaluation of toxicokinetics and (b) to assess the bias and accuracy in summary measures of systemic exposure for each method. Here, simulation scenarios mimicking toxicology protocols in rodents were evaluated. To ensure that differences in pharmacokinetic properties were accounted for, hypothetical drugs with varying disposition properties were considered. Data analysis was performed using non-compartmental methods and nonlinear mixed effects modelling. Exposure levels were expressed as the area under the concentration versus time curve (AUC), peak concentration (Cmax) and time above a predefined threshold (TAT). Results were then compared with the reference values to assess the bias and precision of the parameter estimates. Higher accuracy and precision were observed for model-based estimates (i.e. AUC, Cmax and TAT), irrespective of group or treatment duration, as compared with non-compartmental analysis. Despite the focus of guidelines on establishing safety thresholds for the evaluation of new molecules in humans, current methods neglect uncertainty, lack of precision and bias in parameter estimates. The use of nonlinear mixed effects modelling for the analysis of toxicokinetics provides insight into variability and should be considered for predicting safe exposure in humans.
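
    For contrast with the model-based estimates discussed above, the non-compartmental summary measures themselves are simple to compute. The following Python sketch obtains AUC, Cmax and TAT from a single concentration-time profile; the one-compartment curve and the threshold are invented for illustration.

    ```python
    import numpy as np

    def nca_summary(t, conc, threshold):
        """Non-compartmental exposure metrics from one concentration-time profile:
        AUC by the trapezoidal rule, Cmax, and time above a threshold (TAT)."""
        dt = np.diff(t)
        mid = 0.5 * (conc[1:] + conc[:-1])        # segment midpoints
        auc = (mid * dt).sum()                    # trapezoidal AUC
        tat = ((mid >= threshold) * dt).sum()     # crude TAT on the same grid
        return auc, conc.max(), tat

    # Invented one-compartment oral-absorption profile
    t = np.linspace(0.0, 24.0, 97)
    ka, ke, scale = 1.2, 0.2, 10.0
    conc = scale * ka / (ka - ke) * (np.exp(-ke * t) - np.exp(-ka * t))
    print(nca_summary(t, conc, threshold=2.0))
    ```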

  3. Robust geostatistical analysis of spatial data

    NASA Astrophysics Data System (ADS)

    Papritz, Andreas; Künsch, Hans Rudolf; Schwierz, Cornelia; Stahel, Werner A.

    2013-04-01

    Most of the geostatistical software tools rely on non-robust algorithms. This is unfortunate, because outlying observations are rather the rule than the exception, in particular in environmental data sets. Outliers affect the modelling of the large-scale spatial trend, the estimation of the spatial dependence of the residual variation and the predictions by kriging. Identifying outliers manually is cumbersome and requires expertise, because one needs parameter estimates to decide which observation is a potential outlier. Moreover, inference after the rejection of some observations is problematic. A better approach is to use robust algorithms that automatically prevent outlying observations from having undue influence. Former studies on robust geostatistics focused on robust estimation of the sample variogram and ordinary kriging without external drift. Furthermore, Richardson and Welsh (1995) proposed a robustified version of (restricted) maximum likelihood ([RE]ML) estimation for the variance components of a linear mixed model, which was later used by Marchant and Lark (2007) for robust REML estimation of the variogram. We propose here a novel method for robust REML estimation of the variogram of a Gaussian random field that is possibly contaminated by independent errors from a long-tailed distribution. It is based on robustification of the estimating equations for Gaussian REML estimation (Welsh and Richardson, 1997). Besides robust estimates of the parameters of the external drift and of the variogram, the method also provides standard errors for the estimated parameters, robustified kriging predictions at both sampled and non-sampled locations, and kriging variances. Apart from presenting our modelling framework, we shall present selected simulation results by which we explored the properties of the new method. This will be complemented by an analysis of a data set on heavy metal contamination of the soil in the vicinity of a metal smelter. Marchant, B.P. and Lark, R.M. 2007. Robust estimation of the variogram by residual maximum likelihood. Geoderma 140: 62-72. Richardson, A.M. and Welsh, A.H. 1995. Robust restricted maximum likelihood in mixed linear models. Biometrics 51: 1429-1439. Welsh, A.H. and Richardson, A.M. 1997. Approaches to the robust estimation of mixed models. In: Handbook of Statistics Vol. 15, Elsevier, pp. 343-384.

  4. Results of Propellant Mixing Variable Study Using Precise Pressure-Based Burn Rate Calculations

    NASA Technical Reports Server (NTRS)

    Stefanski, Philip L.

    2014-01-01

    A designed experiment was conducted in which three mix processing variables (pre-curative addition mix temperature, pre-curative addition mixing time, and mixer speed) were varied to estimate their effects on within-mix propellant burn rate variability. The chosen discriminator for the experiment was the 2-inch diameter by 4-inch long (2x4) Center-Perforated (CP) ballistic evaluation motor. Motor nozzle throat diameters were sized to produce a common targeted chamber pressure. Initial data analysis did not show a statistically significant effect. Because propellant burn rate must be directly related to chamber pressure, a method was developed that showed statistically significant effects on chamber pressure (either maximum or average) by adjustments to the process settings. Burn rates were calculated from chamber pressures and these were then normalized to a common pressure for comparative purposes. The pressure-based method of burn rate determination showed significant reduction in error when compared to results obtained from the Brooks' modification of the propellant web-bisector burn rate determination method. Analysis of effects using burn rates calculated by the pressure-based method showed a significant correlation of within-mix burn rate dispersion to mixing duration and the quadratic of mixing duration. The findings were confirmed in a series of mixes that examined the effects of mixing time on burn rate variation, which yielded the same results.

  5. Estimating continuous floodplain and major river bed topography mixing ordinal contour lines and topographic points

    NASA Astrophysics Data System (ADS)

    Bailly, J. S.; Dartevelle, M.; Delenne, C.; Rousseau, A.

    2017-12-01

    Floodplain and major river bed topography govern many river biophysical processes during floods. Despite the growth of direct topographic measurement from LiDAR on riverine systems, there is still room to develop methods for large (e.g. deltas) or very local (e.g. ponds) riverine systems that take advantage of information coming from simple SAR or optical image processing on the floodplain, resulting from waterbody delineation during flood rise or fall and producing ordered contour lines. The challenge is then to exploit such data in order to estimate continuous topography on the floodplain by combining heterogeneous data: a topographic point dataset and a located but unvalued, ordered contour-line dataset. This article compares two methods designed to estimate continuous floodplain topography by mixing ordinal contour lines and topographic points. For both methods, a first estimation step assigns an elevation value to each contour line, and a second step estimates the continuous field from both the topographic points and the valued contour lines. The first proposed method is a stochastic method based on multi-Gaussian random fields and conditional simulation. The second is a deterministic method based on thin-plate radial spline functions, used for approximate bivariate surface construction. Results are first shown and discussed for a set of synthetic case studies with various topographic point densities and degrees of topographic smoothness. Next, results are shown and discussed for an actual case study in the Montagua lagoon, located north of Valparaiso, Chile.

  6. Estimating continuous floodplain and major river bed topography mixing ordinal contour lines and topographic points

    NASA Astrophysics Data System (ADS)

    Brown, T. G.; Lespez, L.; Sear, D. A.; Houben, P.; Klimek, K.

    2016-12-01

    Floodplain and major river bed topography govern many river biophysical processes during floods. Despite the growth of direct topographic measurement from LiDAR on riverine systems, there is still room to develop methods for large (e.g. deltas) or very local (e.g. ponds) riverine systems that take advantage of information coming from simple SAR or optical image processing on the floodplain, resulting from waterbody delineation during flood rise or fall and producing ordered contour lines. The challenge is then to exploit such data in order to estimate continuous topography on the floodplain by combining heterogeneous data: a topographic point dataset and a located but unvalued, ordered contour-line dataset. This article compares two methods designed to estimate continuous floodplain topography by mixing ordinal contour lines and topographic points. For both methods, a first estimation step assigns an elevation value to each contour line, and a second step estimates the continuous field from both the topographic points and the valued contour lines. The first proposed method is a stochastic method based on multi-Gaussian random fields and conditional simulation. The second is a deterministic method based on thin-plate radial spline functions, used for approximate bivariate surface construction. Results are first shown and discussed for a set of synthetic case studies with various topographic point densities and degrees of topographic smoothness. Next, results are shown and discussed for an actual case study in the Montagua lagoon, located north of Valparaiso, Chile.

  7. Performance of nonlinear mixed effects models in the presence of informative dropout.

    PubMed

    Björnsson, Marcus A; Friberg, Lena E; Simonsson, Ulrika S H

    2015-01-01

    Informative dropout can lead to bias in statistical analyses if not handled appropriately. The objective of this simulation study was to investigate the performance of nonlinear mixed effects models with regard to bias and precision, with and without handling informative dropout. An efficacy variable and dropout depending on that efficacy variable were simulated and model parameters were reestimated, with or without including a dropout model. The Laplace and FOCE-I estimation methods in NONMEM 7, and the stochastic simulations and estimations (SSE) functionality in PsN, were used in the analysis. For the base scenario, bias was low, less than 5% for all fixed effects parameters, when a dropout model was used in the estimations. When a dropout model was not included, bias increased up to 8% for the Laplace method and up to 21% if the FOCE-I estimation method was applied. The bias increased with decreasing number of observations per subject, increasing placebo effect and increasing dropout rate, but was relatively unaffected by the number of subjects in the study. This study illustrates that ignoring informative dropout can lead to biased parameters in nonlinear mixed effects modeling, but even in cases with few observations or high dropout rate, the bias is relatively low and only translates into small effects on predictions of the underlying effect variable. A dropout model is, however, crucial in the presence of informative dropout in order to make realistic simulations of trial outcomes.
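
    The mechanism behind this bias is easy to reproduce. Below is a minimal toy simulation (ours, not the paper's NONMEM/PsN workflow) in which the per-visit dropout probability depends on the simulated efficacy variable, so a naive completers-only summary is biased upward; all parameter values are illustrative assumptions.

      # Toy demonstration of informative dropout bias.
      import numpy as np

      rng = np.random.default_rng(5)
      n, n_obs = 500, 6
      slope = rng.normal(1.0, 0.3, size=(n, 1))                 # subject-level effect
      tt = np.arange(n_obs)
      y = slope * tt + rng.normal(0.0, 0.5, size=(n, n_obs))    # efficacy over visits

      # Informative dropout: low responders are more likely to leave the study.
      p_drop = 1.0 / (1.0 + np.exp(2.0 + y))                    # per-visit probability
      dropped = np.cumsum(rng.uniform(size=y.shape) < p_drop, axis=1) > 0
      y_obs = np.where(dropped, np.nan, y)

      true_mean = y[:, -1].mean()
      naive_mean = np.nanmean(y_obs[:, -1])                     # completers only
      print(f"true last-visit mean: {true_mean:.2f}, naive completer mean: {naive_mean:.2f}")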

  8. Likelihood-Based Random-Effect Meta-Analysis of Binary Events.

    PubMed

    Amatya, Anup; Bhaumik, Dulal K; Normand, Sharon-Lise; Greenhouse, Joel; Kaizar, Eloise; Neelon, Brian; Gibbons, Robert D

    2015-01-01

    Meta-analysis has been used extensively for evaluation of efficacy and safety of medical interventions. Its advantages and utilities are well known. However, recent studies have raised questions about the accuracy of the commonly used moment-based meta-analytic methods in general and for rare binary outcomes in particular. The issue is further complicated for studies with heterogeneous effect sizes. Likelihood-based mixed-effects modeling provides an alternative to moment-based methods such as inverse-variance weighted fixed- and random-effects estimators. In this article, we compare and contrast different mixed-effect modeling strategies in the context of meta-analysis. Their performance in estimation and testing of overall effect and heterogeneity are evaluated when combining results from studies with a binary outcome. Models that allow heterogeneity in both baseline rate and treatment effect across studies have low type I and type II error rates, and their estimates are the least biased among the models considered.

  9. A Note on Recurring Misconceptions When Fitting Nonlinear Mixed Models.

    PubMed

    Harring, Jeffrey R; Blozis, Shelley A

    2016-01-01

    Nonlinear mixed-effects (NLME) models are used when analyzing continuous repeated-measures data taken on each of a number of individuals where the focus is on characteristics of complex, nonlinear individual change. Challenges with fitting NLME models and interpreting analytic results have been well documented in the statistical literature. However, parameter estimates as well as fitted functions from NLME analyses in recent articles have been misinterpreted, suggesting the need for clarification of these issues before these misconceptions become fact. These misconceptions arise from the choice of popular estimation algorithms, namely the first-order linearization (FO) and Gaussian-Hermite quadrature (GHQ) methods, and from how these choices necessarily lead to population-average (PA) or subject-specific (SS) interpretations of model parameters, respectively. These estimation approaches also affect the fitted function for the typical individual and the lack of fit of individuals' predicted trajectories.

  10. Separation of pedogenic and lithogenic components of magnetic susceptibility in the Chinese loess/palaeosol sequence as determined by the CBD procedure and a mixing analysis

    NASA Astrophysics Data System (ADS)

    Vidic, Nataša. J.; TenPas, Jeff D.; Verosub, Kenneth L.; Singer, Michael J.

    2000-08-01

    Magnetic susceptibility variations in the Chinese loess/palaeosol sequences have been used extensively for palaeoclimatic interpretations. The magnetic signal of these sequences must be divided into lithogenic and pedogenic components because the palaeoclimatic record is primarily reflected in the pedogenic component. In this paper we compare two methods for separating the pedogenic and lithogenic components of the magnetic susceptibility signal: the citrate-bicarbonate-dithionite (CBD) extraction procedure, and a mixing analysis. Both methods yield good estimates of the pedogenic component, especially for the palaeosols. The CBD procedure underestimates the lithogenic component and overestimates the pedogenic component. The magnitude of this effect is moderately high in loess layers but almost negligible in palaeosols. The mixing model overestimates the lithogenic component and underestimates the pedogenic component. Both methods can be adjusted to yield better estimates of both components. The lithogenic susceptibility, as determined by either method, suggests that palaeoclimatic interpretations based only on total susceptibility will be in error and that a single estimate of the average lithogenic susceptibility is not an accurate basis for adjusting the total susceptibility. A long-term decline in lithogenic susceptibility with depth in the section suggests more intense or prolonged periods of weathering associated with the formation of the older palaeosols. The CBD procedure provides the most comprehensive information on the magnitude of the components and magnetic mineralogy of loess and palaeosols. However, the mixing analysis provides a sensitive, rapid, and easily applied alternative to the CBD procedure. A combination of the two approaches provides the most powerful and perhaps the most accurate way of separating the magnetic susceptibility components.
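
    As a rough illustration of the mixing-analysis idea (not the authors' implementation), the sketch below treats each sample's susceptibility parameters as a linear mixture of a pedogenic and a lithogenic end member and solves for the pedogenic fraction by least squares. The end-member values are invented for the example.

      # Two-end-member unmixing of susceptibility measurements (illustrative values).
      import numpy as np

      # Assumed end members (10^-8 m^3/kg): [low-frequency, frequency-dependent].
      pedogenic = np.array([120.0, 12.0])
      lithogenic = np.array([25.0, 0.5])

      def unmix(sample):
          """Least-squares pedogenic fraction f in: sample ~ f*pedogenic + (1-f)*lithogenic."""
          # Rearranged: sample - lithogenic ~ f * (pedogenic - lithogenic)
          d = pedogenic - lithogenic
          f = float(np.dot(sample - lithogenic, d) / np.dot(d, d))
          return min(max(f, 0.0), 1.0)  # clamp to the physical range

      sample = np.array([60.0, 4.5])          # a hypothetical loess measurement
      f = unmix(sample)
      print(f"pedogenic fraction: {f:.2f}")
      print(f"pedogenic susceptibility: {f * pedogenic[0]:.1f}")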

  11. Analysis of mixing-layer height retrieval methods using backscatter lidar returns and microwave-radiometer temperature observations in the context of synergy

    NASA Astrophysics Data System (ADS)

    Saeed, Umar; Rocadenbosch, Francesc

    2017-04-01

    Mixing Layer Height (MLH) is an important parameter in many different atmospheric and meteorological applications. However, no single instrument or method provides accurate and physically consistent estimates of MLH. Instead, there are several methods for MLH estimation based on the measurements of different atmospheric tracers using different instruments [1, 2]. In this work, MLH retrieval methods using backscattered lidar signals and Microwave Radiometer (MWR)-retrieved potential-temperature profiles are compared in terms of their associated uncertainties. The Extended Kalman Filter (EKF) is used for MLH retrieval from backscattered lidar signals [3], and the parcel method [4] is used for MLH retrieval from MWR-retrieved potential-temperature profiles. Measurement and retrieval errors are revisited and incorporated into the MLH estimation methods used. Uncertainties in MLH estimates from the two methods are compared along with a combined MLH-retrieval discussion case. The uncertainty analysis is validated using long-term lidar and MWR measurement data, under different atmospheric conditions, from the HD(CP)2 Observational Prototype Experiment (HOPE) campaign at Jülich, Germany [5]. MLH estimates from a Doppler wind lidar and radiosondes are used as reference. This work has received funding from the European Union Seventh Framework Programme, FP7 People, ITN Marie Curie Actions Programme (2012-2016) in the frame of the ITaRS project (GA 289923), the H2020 programme under the ACTRIS-2 project (GA 654109), the Spanish Ministry of Economy and Competitiveness - European Regional Development Funds under the TEC2015-63832-P project, and the Generalitat de Catalunya (Grup de Recerca Consolidat) 2014-SGR-583. [1] S. Emeis, Surface-based Remote Sensing of the Atmospheric Boundary Layer. 978-90-481-9339-4, Springer, 2010. [2] P. Seibert, F. Beyrich, S.-E. Gryning, S. Joffre, A. Rasmussen, and P. Tercier, "Review and intercomparison of operational methods for the determination of the mixing height," Atmospheric Environment, vol. 34, pp. 1001-1027, 2000. [3] D. Lange, J. Tiana-Alsina, U. Saeed, S. Tomás, and F. Rocadenbosch, "Atmospheric-boundary-layer height monitoring using a Kalman filter and backscatter lidar returns," IEEE Transactions on Geoscience and Remote Sensing, vol. 52, no. 8, pp. 4717-4728, 2014. [4] G. Holzworth, "Estimates of mean maximum mixing depths in the contiguous United States," Monthly Weather Review, vol. 92, pp. 235-242, 1964. [5] U. Löhnert, J. H. Schween, C. Acquistapace, K. Ebell, M. Maahn, M. Barrera-Verdejo, A. Hirsikko, B. Bohn, A. Knaps, E. O'Connor, C. Simmer, A. Wahner, and S. Crewell, "JOYCE: Jülich Observatory for Cloud Evolution," Bull. Amer. Meteor. Soc., vol. 96, no. 7, pp. 1157-1174, 2015.
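
    The parcel method cited as [4] lends itself to a compact illustration. In the sketch below (ours; the synthetic profile is an assumption, not HOPE data), the MLH is taken as the lowest level at which the potential-temperature profile exceeds the surface parcel's potential temperature.

      # Parcel-method MLH on a synthetic potential-temperature profile.
      import numpy as np

      z = np.arange(0.0, 3000.0, 25.0)                           # height AGL (m)
      theta = np.where(z < 1200.0, 300.0, 300.0 + 0.004 * (z - 1200.0))  # K
      theta_sfc = 300.5   # surface parcel, e.g. from the daytime maximum temperature

      above = np.nonzero(theta >= theta_sfc)[0]                  # first crossing level
      mlh = z[above[0]] if above.size else np.nan
      print(mlh)  # -> 1325.0 m for this synthetic profile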

  12. Mixed Model Association with Family-Biased Case-Control Ascertainment.

    PubMed

    Hayeck, Tristan J; Loh, Po-Ru; Pollack, Samuela; Gusev, Alexander; Patterson, Nick; Zaitlen, Noah A; Price, Alkes L

    2017-01-05

    Mixed models have become the tool of choice for genetic association studies; however, standard mixed model methods may be poorly calibrated or underpowered under family sampling bias and/or case-control ascertainment. Previously, we introduced a liability threshold-based mixed model association statistic (LTMLM) to address case-control ascertainment in unrelated samples. Here, we consider family-biased case-control ascertainment, where case and control subjects are ascertained non-randomly with respect to family relatedness. Previous work has shown that this type of ascertainment can severely bias heritability estimates; we show here that it also impacts mixed model association statistics. We introduce a family-based association statistic (LT-Fam) that is robust to this problem. Similar to LTMLM, LT-Fam is computed from posterior mean liabilities (PML) under a liability threshold model; however, LT-Fam uses published narrow-sense heritability estimates to avoid the problem of biased heritability estimation, enabling correct calibration. In simulations with family-biased case-control ascertainment, LT-Fam was correctly calibrated (average χ2 = 1.00-1.02 for null SNPs), whereas the Armitage trend test (ATT), standard mixed model association (MLM), and case-control retrospective association test (CARAT) were mis-calibrated (e.g., average χ2 = 0.50-1.22 for MLM, 0.89-2.65 for CARAT). LT-Fam also attained higher power than other methods in some settings. In 1,259 type 2 diabetes-affected case subjects and 5,765 control subjects from the CARe cohort, downsampled to induce family-biased ascertainment, LT-Fam was correctly calibrated whereas ATT, MLM, and CARAT were again mis-calibrated. Our results highlight the importance of modeling family sampling bias in case-control datasets with related samples. Copyright © 2017 American Society of Human Genetics. Published by Elsevier Inc. All rights reserved.

  13. A simple method for estimation of coagulation efficiency in mixed aerosols. [environmental pollution control

    NASA Technical Reports Server (NTRS)

    Dimmick, R. L.; Boyd, A.; Wolochow, H.

    1975-01-01

    Aerosols of KBr and AgNO3 were mixed, exposed to light in a glass tube and collected in the dark. About 15% of the collected material was reduced to silver upon development. Thus, two aerosols of particles that react to form a photo-reducible compound can be used to measure coagulation efficiency.

  14. Nonlinear mixed modeling of basal area growth for shortleaf pine

    Treesearch

    Chakra B. Budhathoki; Thomas B. Lynch; James M. Guldin

    2008-01-01

    Mixed model estimation methods were used to fit individual-tree basal area growth models to tree and stand-level measurements available from permanent plots established in naturally regenerated shortleaf pine (Pinus echinata Mill.) even-aged stands in western Arkansas and eastern Oklahoma in the USA. As a part of the development of a comprehensive...

  15. Generalized Path Analysis and Generalized Simultaneous Equations Model for Recursive Systems with Responses of Mixed Types

    ERIC Educational Resources Information Center

    Tsai, Tien-Lung; Shau, Wen-Yi; Hu, Fu-Chang

    2006-01-01

    This article generalizes linear path analysis (PA) and simultaneous equations models (SiEM) to deal with mixed responses of different types in a recursive or triangular system. An efficient instrumental variable (IV) method for estimating the structural coefficients of a 2-equation partially recursive generalized path analysis (GPA) model and…

  16. Global Ocean Circulation in Thermohaline Coordinates and Small-scale and Mesoscale mixing: An Inverse Estimate.

    NASA Astrophysics Data System (ADS)

    Groeskamp, S.; Zika, J. D.; McDougall, T. J.; Sloyan, B.

    2016-02-01

    I will present results of a new inverse technique that infers small-scale turbulent diffusivities and mesoscale eddy diffusivities from an ocean climatology of Salinity (S) and Temperature (T) in combination with surface freshwater and heat fluxes. First, the ocean circulation is represented in (S,T) coordinates by the diathermohaline streamfunction. Framing the ocean circulation in (S,T) coordinates isolates the component of the circulation that is directly related to water-mass transformation. Because water-mass transformation is directly related to fluxes of salt and heat, this framework allows for the formulation of an inverse method in which the diathermohaline streamfunction is balanced with known air-sea forcing and unknown mixing. When applying this inverse method to observations, we obtain observationally based estimates for both the streamfunction and the mixing. The results reveal new information about the component of the global ocean circulation due to water-mass transformation and its relation to surface freshwater and heat fluxes and small-scale and mesoscale mixing. The results provide global constraints on spatially varying patterns of diffusivities, in order to obtain a realistic overturning circulation. We find that mesoscale isopycnal mixing is much smaller than expected. These results are important for our understanding of the relation between global ocean circulation and mixing and may lead to improved parameterisations in numerical ocean models.

  17. Mixed-Poisson Point Process with Partially-Observed Covariates: Ecological Momentary Assessment of Smoking.

    PubMed

    Neustifter, Benjamin; Rathbun, Stephen L; Shiffman, Saul

    2012-01-01

    Ecological Momentary Assessment is an emerging method of data collection in behavioral research that may be used to capture the times of repeated behavioral events on electronic devices, and information on subjects' psychological states through the electronic administration of questionnaires at times selected from a probability-based design as well as the event times. A method for fitting a mixed Poisson point process model is proposed for the impact of partially-observed, time-varying covariates on the timing of repeated behavioral events. A random frailty is included in the point-process intensity to describe variation among subjects in baseline rates of event occurrence. Covariate coefficients are estimated using estimating equations constructed by replacing the integrated intensity in the Poisson score equations with a design-unbiased estimator. An estimator is also proposed for the variance of the random frailties. Our estimators are robust in the sense that no model assumptions are made regarding the distribution of the time-varying covariates or the distribution of the random effects. However, subject effects are estimated under gamma frailties using an approximate hierarchical likelihood. The proposed approach is illustrated using smoking data.

  18. Self-rated health: small area large area comparisons amongst older adults at the state, district and sub-district level in India.

    PubMed

    Hirve, Siddhivinayak; Vounatsou, Penelope; Juvekar, Sanjay; Blomstedt, Yulia; Wall, Stig; Chatterji, Somnath; Ng, Nawi

    2014-03-01

    We compared prevalence estimates of self-rated health (SRH) derived indirectly using four different small area estimation methods for the Vadu (small) area from the national Study on Global AGEing (SAGE) survey with estimates derived directly from the Vadu SAGE survey. The indirect synthetic estimate for Vadu was 24%, whereas the model-based estimates were 45.6% and 45.7%, with smaller prediction errors and comparable to the direct survey estimate of 50%. The model-based techniques were better suited to estimating the prevalence of SRH than the indirect synthetic method. We conclude that a simplified mixed effects regression model can produce valid small area estimates of SRH. © 2013 Published by Elsevier Ltd.

  19. Discrete element simulation of charging and mixed layer formation in the ironmaking blast furnace

    NASA Astrophysics Data System (ADS)

    Mitra, Tamoghna; Saxén, Henrik

    2016-11-01

    The burden distribution in the ironmaking blast furnace plays an important role for the operation as it affects the gas flow distribution, heat and mass transfer, and chemical reactions in the shaft. This work studies certain aspects of burden distribution by small-scale experiments and numerical simulation by the discrete element method (DEM). Particular attention is focused on the complex layer-formation process and the problems associated with estimating the burden layer distribution by burden profile measurements. The formation of mixed layers is studied, and a computational method for estimating the extent of the mixed layer, as well as its voidage, is proposed and applied on the results of the DEM simulations. In studying a charging program and its resulting burden distribution, the mixed layers of coke and pellets were found to show lower voidage than the individual burden layers. The dynamic evolution of the mixed layer during the charging process is also analyzed. The results of the study can be used to gain deeper insight into the complex charging process of the blast furnace, which is useful in the design of new charging programs and for mathematical models that do not consider the full behavior of the particles in the burden layers.

  20. Mixing the Green-Ampt model and Curve Number method as an empirical tool for rainfall excess estimation in small ungauged catchments.

    NASA Astrophysics Data System (ADS)

    Grimaldi, S.; Petroselli, A.; Romano, N.

    2012-04-01

    The Soil Conservation Service - Curve Number (SCS-CN) method is a popular rainfall-runoff model that is widely used to estimate direct runoff from small and ungauged basins. The SCS-CN is a simple and valuable approach to estimate the total stream-flow volume generated by a storm rainfall, but it was developed to be used with daily rainfall data. To overcome this drawback, we propose to include the Green-Ampt (GA) infiltration model into a mixed procedure, which is referred to as CN4GA (Curve Number for Green-Ampt), aiming to distribute in time the information provided by the SCS-CN method so as to provide estimates of sub-daily incremental rainfall excess. For a given storm, the computed SCS-CN total net rainfall amount is used to calibrate the soil hydraulic conductivity parameter of the Green-Ampt model. The proposed procedure was evaluated by analyzing 100 rainfall-runoff events observed in four small catchments of varying size. CN4GA appears to be an encouraging tool for predicting the net rainfall peak and duration values and has shown, at least for the test cases considered in this study, a better agreement with observed hydrographs than that of the classic SCS-CN method.
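
    A minimal sketch of the mixed procedure as the abstract describes it: compute the SCS-CN storm runoff Q, calibrate the Green-Ampt conductivity so that storm-total infiltration matches P - Q, then difference the cumulative curves for sub-daily excess. The ponded-conditions simplification, the uniform hyetograph, and all parameter values are our assumptions, not the paper's calibration.

      # CN4GA-style sketch: SCS-CN storm total pins down Green-Ampt's Ks.
      import numpy as np
      from scipy.optimize import brentq

      P, CN = 60.0, 80.0                      # storm depth (mm), curve number
      S = 25400.0 / CN - 254.0                # potential retention (mm)
      Ia = 0.2 * S
      Q = (P - Ia) ** 2 / (P - Ia + S) if P > Ia else 0.0   # SCS-CN runoff (mm)
      F_target = P - Q                        # total infiltration GA must match

      psi_dtheta = 50.0                       # suction head * moisture deficit (mm), assumed
      T = 6.0                                 # storm duration (h)

      def ga_cumulative(Ks, t):
          """Green-Ampt cumulative infiltration F(t), ponded conditions (implicit eq.)."""
          g = lambda F: F - psi_dtheta * np.log(1.0 + F / psi_dtheta) - Ks * t
          return brentq(g, 1e-9, 1e4)

      # Calibrate Ks so cumulative infiltration at the end of the storm equals F_target.
      Ks = brentq(lambda k: ga_cumulative(k, T) - F_target, 1e-6, 1e3)

      # Incremental rainfall excess: rainfall minus Green-Ampt infiltration per step.
      hours = np.linspace(0.0, T, 13)                       # 30-min steps
      F = np.array([ga_cumulative(Ks, t) if t > 0 else 0.0 for t in hours])
      rain = P * hours / T                                  # uniform hyetograph, assumed
      excess = np.diff(np.maximum(rain - F, 0.0))
      print(f"Q={Q:.1f} mm, Ks={Ks:.2f} mm/h, peak excess={excess.max():.2f} mm/30min")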

  1. Estimation water vapor content using the mixing ratio method and validated with the ANFIS PWV model

    NASA Astrophysics Data System (ADS)

    Suparta, W.; Alhasa, K. M.; Singh, M. S. J.

    2017-05-01

    This study reports a comparison between the water vapor content estimated from surface meteorological data (pressure, temperature, and relative humidity) and the precipitable water vapor (PWV) produced by an adaptive neuro-fuzzy inference system (ANFIS) PWV model for the Universiti Kebangsaan Malaysia Bangi (UKMB) station area. The water vapor content was estimated with the mixing ratio method, using the surface meteorological data as parameter inputs. The accuracy of the water vapor content was validated against PWV from the ANFIS PWV model for the period of 20-23 December 2016. The result showed that the water vapor content has a similar trend to the PWV produced by the ANFIS PWV model (r = 0.975 at the 99% confidence level). This indicates that the water vapor content obtained with the mixing ratio method agreed very well with the ANFIS PWV model. In addition, this study also found that the patterns of water vapor content and PWV are influenced mostly by relative humidity.
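
    The mixing-ratio step can be illustrated compactly. In the sketch below, the Magnus saturation-vapor-pressure approximation and the sample inputs are our assumptions; the study's exact formulation may differ.

      # Water vapor mixing ratio from surface pressure, temperature, and RH.
      import math

      def mixing_ratio(p_hpa, t_celsius, rh_percent):
          """Water vapor mixing ratio (g/kg) from surface meteorological data."""
          # Saturation vapor pressure (hPa), Magnus approximation over water.
          es = 6.112 * math.exp(17.62 * t_celsius / (243.12 + t_celsius))
          e = es * rh_percent / 100.0                  # actual vapor pressure (hPa)
          w = 0.622 * e / (p_hpa - e)                  # kg/kg
          return 1000.0 * w                            # g/kg

      # Typical humid-tropics values near a station like UKMB (illustrative only).
      print(f"{mixing_ratio(1008.0, 28.0, 85.0):.1f} g/kg")   # about 20 g/kg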

  2. Inter-provider comparison of patient-reported outcomes: developing an adjustment to account for differences in patient case mix.

    PubMed

    Nuttall, David; Parkin, David; Devlin, Nancy

    2015-01-01

    This paper describes the development of a methodology for the case-mix adjustment of patient-reported outcome measures (PROMs) data permitting the comparison of outcomes between providers on a like-for-like basis. Statistical models that take account of provider-specific effects form the basis of the proposed case-mix adjustment methodology. Indirect standardisation provides a transparent means of case mix adjusting the PROMs data, which are updated on a monthly basis. Recently published PROMs data for patients undergoing unilateral knee replacement are used to estimate empirical models and to demonstrate the application of the proposed case-mix adjustment methodology in practice. The results are illustrative and are used to highlight a number of theoretical and empirical issues that warrant further exploration. For example, because of differences between PROMs instruments, case-mix adjustment methodologies may require instrument-specific approaches. A number of key assumptions are made in estimating the empirical models, which could be open to challenge. The covariates of post-operative health status could be expanded, and alternative econometric methods could be employed. © 2013 Crown copyright.

  3. Software engineering the mixed model for genome-wide association studies on large samples.

    PubMed

    Zhang, Zhiwu; Buckler, Edward S; Casstevens, Terry M; Bradbury, Peter J

    2009-11-01

    Mixed models improve the ability to detect phenotype-genotype associations in the presence of population stratification and multiple levels of relatedness in genome-wide association studies (GWAS), but for large data sets the resource consumption becomes impractical. At the same time, the sample size and number of markers used for GWAS is increasing dramatically, resulting in greater statistical power to detect those associations. The use of mixed models with increasingly large data sets depends on the availability of software for analyzing those models. While multiple software packages implement the mixed model method, no single package provides the best combination of fast computation, ability to handle large samples, flexible modeling and ease of use. Key elements of association analysis with mixed models are reviewed, including modeling phenotype-genotype associations using mixed models, population stratification, kinship and its estimation, variance component estimation, use of best linear unbiased predictors or residuals in place of raw phenotype, improving efficiency and software-user interaction. The available software packages are evaluated, and suggestions made for future software development.

  4. Analysis of the Δ(X) - L intervalley mixing in group-IV heterostructures

    NASA Astrophysics Data System (ADS)

    Kiselev, A. A.; Kim, K. W.; Yablonovitch, E.

    2005-06-01

    We provide a treatment of the problem of Δ(X) - L intervalley mixing in differently oriented SiGe heterostructures in the transparent effective mass method. Mixing potentials can be calculated by considering changes in the constituent concentrations of individual heterolayers from some "virtual crystal level" as a set of microscopic single-ion perturbations. Strong mixing between the lowest localized Δ and L states can be achieved in (113) structures, making them favorable for electrically controlled gigantic intervalley g-factor modulation. We provide estimates for the mixing potential and further consider limitations related to the strength of the in-plane localization and the quality of the interface.

  5. Bias in diet determination: incorporating traditional methods in Bayesian mixing models.

    PubMed

    Franco-Trecu, Valentina; Drago, Massimiliano; Riet-Sapriza, Federico G; Parnell, Andrew; Frau, Rosina; Inchausti, Pablo

    2013-01-01

    There are no "universal methods" to determine the diet composition of predators. Most traditional methods are biased because of their reliance on differential digestibility and the recovery of hard items. By relying on assimilated food, stable isotope and Bayesian mixing models (SIMMs) resolve many biases of traditional methods. SIMMs can incorporate prior information (i.e. proportional diet composition) that may improve the precision of the estimated dietary composition. However, few studies have assessed the performance of traditional methods and SIMMs with and without informative priors in studying predators' diets. Here we compare the diet compositions of the South American fur seal and sea lion obtained by scat analysis and by SIMMs-UP (uninformative priors), and assess whether informative priors (SIMMs-IP) from the scat analysis improved the estimated diet composition compared to SIMMs-UP. According to the SIMM-UP, while pelagic species dominated the fur seal's diet, the sea lion's diet did not show a clear dominance of any prey. In contrast, SIMM-IP diet compositions were dominated by the same prey as in the scat analyses. When prior information influenced SIMMs' estimates, incorporating informative priors improved the precision of the estimated diet composition at the risk of inducing biases in the estimates. If prey isotopic data allow discriminating prey contributions to diets, informative priors should lead to more precise but unbiased estimates of diet composition. Just as estimates of diet composition obtained from traditional methods are critically interpreted because of their biases, care must be exercised when interpreting diet composition obtained by SIMMs-IP. The best approach to obtain a near-complete view of predators' diet composition should involve the simultaneous consideration of different sources of partial evidence (traditional methods, SIMM-UP and SIMM-IP) in the light of the natural history of the predator species, so as to reliably ascertain and weigh the information yielded by each method.

  6. Supervised nonlinear spectral unmixing using a postnonlinear mixing model for hyperspectral imagery.

    PubMed

    Altmann, Yoann; Halimi, Abderrahim; Dobigeon, Nicolas; Tourneret, Jean-Yves

    2012-06-01

    This paper presents a nonlinear mixing model for hyperspectral image unmixing. The proposed model assumes that the pixel reflectances are nonlinear functions of pure spectral components contaminated by an additive white Gaussian noise. These nonlinear functions are approximated using polynomial functions leading to a polynomial postnonlinear mixing model. A Bayesian algorithm and optimization methods are proposed to estimate the parameters involved in the model. The performance of the unmixing strategies is evaluated by simulations conducted on synthetic and real data.
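
    A minimal sketch of a polynomial post-nonlinear model in the spirit of the abstract, with g(x) = x + b*x^2 applied element-wise and plain least squares standing in for the paper's Bayesian algorithm. The endmember matrix and pixel are synthetic assumptions.

      # Polynomial post-nonlinear unmixing of a single synthetic pixel.
      import numpy as np
      from scipy.optimize import least_squares

      rng = np.random.default_rng(1)
      L, R = 30, 3                                 # spectral bands, endmembers
      M = rng.uniform(0.1, 0.9, size=(L, R))       # endmember spectra (assumed known)

      a_true = np.array([0.5, 0.3, 0.2])
      b_true = 0.4
      lin = M @ a_true
      y = lin + b_true * lin**2 + rng.normal(0.0, 0.005, size=L)   # observed pixel

      def residuals(theta):
          a, b = theta[:R], theta[R]
          x = M @ a
          res = y - (x + b * x**2)
          # Soft sum-to-one constraint on the abundances.
          return np.concatenate([res, [10.0 * (a.sum() - 1.0)]])

      theta0 = np.concatenate([np.full(R, 1.0 / R), [0.0]])
      fit = least_squares(residuals, theta0,
                          bounds=([0.0] * R + [-1.0], [1.0] * R + [1.0]))
      print("abundances:", np.round(fit.x[:R], 3), "b:", round(fit.x[R], 3))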

  7. The Iterative Reweighted Mixed-Norm Estimate for Spatio-Temporal MEG/EEG Source Reconstruction.

    PubMed

    Strohmeier, Daniel; Bekhti, Yousra; Haueisen, Jens; Gramfort, Alexandre

    2016-10-01

    Source imaging based on magnetoencephalography (MEG) and electroencephalography (EEG) allows for the non-invasive analysis of brain activity with high temporal and good spatial resolution. As the bioelectromagnetic inverse problem is ill-posed, constraints are required. For the analysis of evoked brain activity, spatial sparsity of the neuronal activation is a common assumption. It is often taken into account using convex constraints based on the l1-norm. The resulting source estimates are however biased in amplitude and often suboptimal in terms of source selection due to high correlations in the forward model. In this work, we demonstrate that an inverse solver based on a block-separable penalty with a Frobenius norm per block and an l0.5-quasinorm over blocks addresses both of these issues. For solving the resulting non-convex optimization problem, we propose the iterative reweighted Mixed Norm Estimate (irMxNE), an optimization scheme based on iterative reweighted convex surrogate optimization problems, which are solved efficiently using a block coordinate descent scheme and an active set strategy. We compare the proposed sparse imaging method to the dSPM and the RAP-MUSIC approach based on two MEG data sets. We provide empirical evidence based on simulations and analysis of MEG data that the proposed method improves on the standard Mixed Norm Estimate (MxNE) in terms of amplitude bias, support recovery, and stability.
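
    The reweighting idea can be sketched with an off-the-shelf multi-task lasso standing in for the MxNE subproblem. This is our approximation, not the authors' MNE-Python implementation: each block weight 1/(2*sqrt(||B_j|| + eps)) majorizes the l0.5-quasinorm, and weights are absorbed into the gain matrix by column scaling. The gain matrix, sources, and regularization value are synthetic assumptions.

      # Iterative reweighting around a multi-task lasso (irMxNE-style sketch).
      import numpy as np
      from sklearn.linear_model import MultiTaskLasso

      rng = np.random.default_rng(2)
      n_sensors, n_sources, n_times = 50, 200, 20
      G = rng.normal(size=(n_sensors, n_sources))            # forward (gain) matrix
      B_true = np.zeros((n_sources, n_times))
      B_true[[10, 90], :] = rng.normal(size=(2, n_times))    # two active sources
      Y = G @ B_true + 0.05 * rng.normal(size=(n_sensors, n_times))

      w = np.ones(n_sources)
      for it in range(5):
          Gw = G / w                                         # absorb weights into G
          B = MultiTaskLasso(alpha=0.05, max_iter=5000).fit(Gw, Y).coef_.T / w[:, None]
          norms = np.sqrt(np.sum(B**2, axis=1))
          w = 1.0 / (2.0 * np.sqrt(norms + 1e-10))           # l0.5 reweighting

      active = np.nonzero(np.sqrt(np.sum(B**2, axis=1)) > 1e-6)[0]
      print("recovered active sources:", active)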

  8. Determination of timescales of nitrate contamination by groundwater age models in a complex aquifer system

    NASA Astrophysics Data System (ADS)

    Koh, E. H.; Lee, E.; Kaown, D.; Lee, K. K.; Green, C. T.

    2017-12-01

    Timing and magnitudes of nitrate contamination are determined by various factors such as contaminant loading, recharge characteristics, and the geologic system. Information on the elapsed time since recharged water traveled to a given outlet location, which is defined as groundwater age, can provide indirect insight into the hydrologic characteristics of the aquifer system. There are three major methods (apparent ages, lumped parameter models, and numerical models) for dating groundwater, which characterize differently the groundwater mixing that results from the various flow pathways in a heterogeneous aquifer system. Therefore, in this study, we compared the three age models in a complex aquifer system by using observed age-tracer data and a reconstructed history of nitrate contamination from long-term source loading. The 3H-3He and CFC-12 apparent ages, which do not consider groundwater mixing, gave the most delayed response times and implied that the peak period of nitrate loading had not yet arrived. However, the lumped parameter model could generate a more recent loading response than the apparent ages, in which the peak loading period influenced the water quality. The numerical model could delineate the various groundwater mixing components and their different impacts on nitrate dynamics in the complex aquifer system. The different age estimation methods lead to variations in the estimated contaminant loading history, with the discrepancies among age estimates most pronounced in the complex aquifer system.

  9. Development of a shortleaf pine individual-tree growth equation using non-linear mixed modeling techniques

    Treesearch

    Chakra B. Budhathoki; Thomas B. Lynch; James M. Guldin

    2010-01-01

    Nonlinear mixed-modeling methods were used to estimate parameters in an individual-tree basal area growth model for shortleaf pine (Pinus echinata Mill.). Shortleaf pine individual-tree growth data were available from over 200 permanently established 0.2-acre fixed-radius plots located in naturally-occurring even-aged shortleaf pine forests on the...

  10. Estimating a graphical intra-class correlation coefficient (GICC) using multivariate probit-linear mixed models.

    PubMed

    Yue, Chen; Chen, Shaojie; Sair, Haris I; Airan, Raag; Caffo, Brian S

    2015-09-01

    Data reproducibility is a critical issue in all scientific experiments. In this manuscript, the problem of quantifying the reproducibility of graphical measurements is considered. The image intra-class correlation coefficient (I2C2) is generalized and the graphical intra-class correlation coefficient (GICC) is proposed for this purpose. The concept for GICC is based on multivariate probit-linear mixed effect models. A Markov chain Monte Carlo EM (MCMC-EM) algorithm is used for estimating the GICC. Simulation results with varied settings are demonstrated and our method is applied to the KIRBY21 test-retest dataset.

  11. Estimation of Groundwater Recharge at Pahute Mesa using the Chloride Mass-Balance Method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cooper, Clay A; Hershey, Ronald L; Healey, John M

    Groundwater recharge on Pahute Mesa was estimated using the chloride mass-balance (CMB) method. This method relies on the conservative properties of chloride to trace its movement from the atmosphere as dry- and wet-deposition through the soil zone and ultimately to the saturated zone. Typically, the CMB method assumes no mixing of groundwater with different chloride concentrations; however, because groundwater is thought to flow into Pahute Mesa from valleys north of Pahute Mesa, groundwater flow rates (i.e., underflow) and chloride concentrations from Kawich Valley and Gold Flat were carefully considered. Precipitation was measured with bulk and tipping-bucket precipitation gauges installed for this study at six sites on Pahute Mesa. These data, along with historical precipitation amounts from gauges on Pahute Mesa and estimates from the PRISM model, were evaluated to estimate mean annual precipitation. Chloride deposition from the atmosphere was estimated by analyzing quarterly samples of wet- and dry-deposition for chloride in the bulk gauges and evaluating chloride wet-deposition amounts measured at other locations by the National Atmospheric Deposition Program. Mean chloride concentrations in groundwater were estimated using data from the UGTA Geochemistry Database, data from other reports, and data from samples collected from emplacement boreholes for this study. Calculations were conducted assuming both no underflow and underflow from Kawich Valley and Gold Flat. Model results estimate recharge to be 30 mm/yr with a standard deviation of 18 mm/yr on Pahute Mesa, for elevations >1800 m amsl. These estimates assume Pahute Mesa recharge mixes completely with underflow from Kawich Valley and Gold Flat. The model assumes that precipitation, chloride concentration in bulk deposition, underflow and its chloride concentration, have been constant over the length of time of recharge.
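
    The CMB calculation itself is compact. A minimal sketch follows, with a simple complete-mixing correction for underflow; all numeric inputs are illustrative assumptions, not the report's data (its estimate was 30 mm/yr with a standard deviation of 18 mm/yr).

      # Chloride mass-balance recharge, with and without underflow mixing.
      def cmb_recharge(p_mm, cl_precip, cl_gw):
          """Recharge (mm/yr) = P * Cl_precip / Cl_gw (no underflow)."""
          return p_mm * cl_precip / cl_gw

      def cmb_recharge_underflow(p_mm, cl_precip, cl_mix, q_under, cl_under):
          """Complete mixing: (R + U) * Cl_mix = P * Cl_p + U * Cl_u, solved for R.

          q_under is the underflow expressed as an equivalent flux (mm/yr).
          """
          return (p_mm * cl_precip + q_under * cl_under) / cl_mix - q_under

      print(cmb_recharge(320.0, 0.4, 4.0))                       # -> 32.0 mm/yr
      print(cmb_recharge_underflow(320.0, 0.4, 4.0, 10.0, 6.0))  # -> 37.0 mm/yr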

  12. Regression analysis of mixed recurrent-event and panel-count data

    PubMed Central

    Zhu, Liang; Tong, Xinwei; Sun, Jianguo; Chen, Manhua; Srivastava, Deo Kumar; Leisenring, Wendy; Robison, Leslie L.

    2014-01-01

    In event history studies concerning recurrent events, two types of data have been extensively discussed. One is recurrent-event data (Cook and Lawless, 2007. The Analysis of Recurrent Event Data. New York: Springer), and the other is panel-count data (Zhao and others, 2010. Nonparametric inference based on panel-count data. Test 20, 1–42). In the former case, all study subjects are monitored continuously; thus, complete information is available for the underlying recurrent-event processes of interest. In the latter case, study subjects are monitored periodically; thus, only incomplete information is available for the processes of interest. In reality, however, a third type of data could occur in which some study subjects are monitored continuously, but others are monitored periodically. When this occurs, we have mixed recurrent-event and panel-count data. This paper discusses regression analysis of such mixed data and presents two estimation procedures for the problem. One is a maximum likelihood estimation procedure, and the other is an estimating equation procedure. The asymptotic properties of both resulting estimators of regression parameters are established. Also, the methods are applied to a set of mixed recurrent-event and panel-count data that arose from a Childhood Cancer Survivor Study and motivated this investigation. PMID:24648408

  13. Bias in Diet Determination: Incorporating Traditional Methods in Bayesian Mixing Models

    PubMed Central

    Franco-Trecu, Valentina; Drago, Massimiliano; Riet-Sapriza, Federico G.; Parnell, Andrew; Frau, Rosina; Inchausti, Pablo

    2013-01-01

    There are no “universal methods” to determine the diet composition of predators. Most traditional methods are biased because of their reliance on differential digestibility and the recovery of hard items. By relying on assimilated food, stable isotope and Bayesian mixing models (SIMMs) resolve many biases of traditional methods. SIMMs can incorporate prior information (i.e. proportional diet composition) that may improve the precision of the estimated dietary composition. However, few studies have assessed the performance of traditional methods and SIMMs with and without informative priors in studying predators’ diets. Here we compare the diet compositions of the South American fur seal and sea lion obtained by scat analysis and by SIMMs-UP (uninformative priors), and assess whether informative priors (SIMMs-IP) from the scat analysis improved the estimated diet composition compared to SIMMs-UP. According to the SIMM-UP, while pelagic species dominated the fur seal’s diet, the sea lion’s diet did not show a clear dominance of any prey. In contrast, SIMM-IP diet compositions were dominated by the same prey as in the scat analyses. When prior information influenced SIMMs’ estimates, incorporating informative priors improved the precision of the estimated diet composition at the risk of inducing biases in the estimates. If prey isotopic data allow discriminating prey contributions to diets, informative priors should lead to more precise but unbiased estimates of diet composition. Just as estimates of diet composition obtained from traditional methods are critically interpreted because of their biases, care must be exercised when interpreting diet composition obtained by SIMMs-IP. The best approach to obtain a near-complete view of predators’ diet composition should involve the simultaneous consideration of different sources of partial evidence (traditional methods, SIMM-UP and SIMM-IP) in the light of the natural history of the predator species, so as to reliably ascertain and weigh the information yielded by each method. PMID:24224031

  14. Modelling rainfall amounts using mixed-gamma model for Kuantan district

    NASA Astrophysics Data System (ADS)

    Zakaria, Roslinazairimah; Moslim, Nor Hafizah

    2017-05-01

    An efficient design of flood mitigation and construction of crop growth models depend upon a good understanding of the rainfall process and characteristics. The gamma distribution is usually used to model nonzero rainfall amounts. In this study, the mixed-gamma model is applied to accommodate both zero and nonzero rainfall amounts. The mixed-gamma model presented is for the independent case. The formulae of mean and variance are derived for the sum of two and three independent mixed-gamma variables, respectively. Firstly, the gamma distribution is used to model the nonzero rainfall amounts and the parameters of the distribution (shape and scale) are estimated using the maximum likelihood estimation method. Then, the mixed-gamma model is defined for both zero and nonzero rainfall amounts simultaneously. The formulae of mean and variance for the sum of two and three independent mixed-gamma variables derived are tested using the monthly rainfall amounts from rainfall stations within Kuantan district in Pahang, Malaysia. Based on the Kolmogorov-Smirnov goodness of fit test, the results demonstrate that the descriptive statistics of the observed sum of rainfall amounts are not significantly different at the 5% significance level from the generated sum of independent mixed-gamma variables. The methodology and formulae demonstrated can be applied to find the sum of more than three independent mixed-gamma variables.
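
    For reference, the first two moments under this model take a simple closed form. The derivation below is ours, assuming X = BG with B ~ Bernoulli(p) (rain occurrence) independent of G ~ Gamma(alpha, beta) in the shape-scale parameterization; the paper's notation may differ.

      \[
        \mathbb{E}[X] = p\,\alpha\beta, \qquad
        \operatorname{Var}(X) = p\,\alpha\beta^{2}(1+\alpha) - p^{2}\alpha^{2}\beta^{2},
      \]
      and for a sum \(S = X_1 + \dots + X_k\) of independent mixed-gamma variables the moments add:
      \[
        \mathbb{E}[S] = \sum_{i=1}^{k} p_i\,\alpha_i\beta_i, \qquad
        \operatorname{Var}(S) = \sum_{i=1}^{k}\bigl[p_i\,\alpha_i\beta_i^{2}(1+\alpha_i) - p_i^{2}\alpha_i^{2}\beta_i^{2}\bigr].
      \]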

  15. Amperometric Enzyme Sensor to Check the Total Antioxidant Capacity of Several Mixed Berries. Comparison with Two Other Spectrophotometric and Fluorimetric Methods

    PubMed Central

    Tomassetti, Mauro; Serone, Maruschka; Angeloni, Riccardo; Campanella, Luigi; Mazzone, Elisa

    2015-01-01

    The aim of this research was to test the correctness of response of a superoxide dismutase amperometric biosensor used to measure and rank the total antioxidant capacity of several systematically analysed mixed berries. Several methods are described in the literature for determining antioxidant capacity, each culminating in the construction of an antioxidant capacity scale and each using its own unit of measurement. We therefore endeavoured to correlate and compare the results obtained using the present amperometric biosensor method with those from two other methods for determining total antioxidant capacity, selected from among those more frequently cited in the literature. The purpose was to establish a methodological approach, consisting of the simultaneous application of different methods, that could be used to obtain an accurate estimate of the total antioxidant capacity of different mixed berries and the food products containing them. Testing was therefore extended to also cover jams, yoghurts and juices containing mixed berries. PMID:25654720

  16. Tidal Energy Available for Deep Ocean Mixing: Bounds from Altimetry Data

    NASA Technical Reports Server (NTRS)

    Egbert, Gary D.; Ray, Richard D.

    1999-01-01

    Maintenance of the large-scale thermohaline circulation has long presented a problem to oceanographers. Observed mixing rates in the pelagic ocean are an order of magnitude too small to balance the rate at which dense bottom water is created at high latitudes. Recent observational and theoretical work suggests that much of this mixing may occur in hot spots near areas of rough topography (e.g., mid-ocean ridges and island arcs). Barotropic tidal currents provide a very plausible source of energy to maintain these mixing processes. Topex/Poseidon (T/P) satellite altimetry data have made precise mapping of open ocean tidal elevations possible for the first time. We can thus obtain empirical, spatially localized, estimates of barotropic tidal dissipation. These provide an upper bound on the amount of tidal energy that is dissipated in the deep ocean, and hence is available for deep mixing. We will present and compare maps of open ocean tidal energy flux divergence, and estimates of tidal energy flux into shallow seas, derived from T/P altimetry data using both formal data assimilation methods and empirical approaches. With the data assimilation methods we can place formal error bars on the fluxes. Our results show that 20-25% of tidal energy dissipation occurs outside of the shallow seas, the traditional sink for tidal energy. This suggests that up to 1 TW of energy may be available from the tides (lunar and solar) for mixing the deep ocean. The dissipation indeed appears to be concentrated over areas of rough topography.

  17. Tidal Energy Available for Deep Ocean Mixing: Bounds From Altimetry Data

    NASA Technical Reports Server (NTRS)

    Egbert, Gary D.; Ray, Richard D.

    1999-01-01

    Maintenance of the large-scale thermohaline circulation has long presented a problem to oceanographers. Observed mixing rates in the pelagic ocean are an order of magnitude too small to balance the rate at which dense bottom water is created at high latitudes. Recent observational and theoretical work suggests that much of this mixing may occur in hot spots near areas of rough topography (e.g., mid-ocean ridges and island arcs). Barotropic tidal currents provide a very plausible source of energy to maintain these mixing processes. Topex/Poseidon satellite altimetry data have made precise mapping of open ocean tidal elevations possible for the first time. We can thus obtain empirical, spatially localized, estimates of barotropic tidal dissipation. These provide an upper bound on the amount of tidal energy that is dissipated in the deep ocean, and hence is available for deep mixing. We will present and compare maps of open ocean tidal energy flux divergence, and estimates of tidal energy flux into shallow seas, derived from T/P altimetry data using both formal data assimilation methods and empirical approaches. With the data assimilation methods we can place formal error bars on the fluxes. Our results show that 20-25% of tidal energy dissipation occurs outside of the shallow seas, the traditional sink for tidal energy. This suggests that up to 1 TW of energy may be available from the tides (lunar and solar) for mixing the deep ocean. The dissipation indeed appears to be concentrated over areas of rough topography.

  18. Tidal Energy Available for Deep Ocean Mixing: Bounds from Altimetry Data

    NASA Technical Reports Server (NTRS)

    Ray, Richard D.; Egbert, Gary D.

    1999-01-01

    Maintenance of the large-scale thermohaline circulation has long presented an interesting problem. Observed mixing rates in the pelagic ocean are an order of magnitude too small to balance the rate at which dense bottom water is created at high latitudes. Recent observational and theoretical work suggests that much of this mixing may occur in hot spots near areas of rough topography (e.g., mid-ocean ridges and island arcs). Barotropic tidal currents provide a very plausible source of energy to maintain these mixing processes. Topex/Poseidon satellite altimetry data have made precise mapping of open ocean tidal elevations possible for the first time. We can thus obtain empirical, spatially localized, estimates of barotropic tidal dissipation. These provide an upper bound on the amount of tidal energy that is dissipated in the deep ocean, and hence is available for deep mixing. We will present and compare maps of open ocean tidal energy flux divergence, and estimates of tidal energy flux into shallow seas, derived from T/P altimetry data using both formal data assimilation methods and empirical approaches. With the data assimilation methods we can place formal error bars on the fluxes. Our results show that 20-25% of tidal energy dissipation occurs outside of the shallow seas, the traditional sink for tidal energy. This suggests that up to 1 TW of energy may be available from the tides (lunar and solar) for mixing the deep ocean. The dissipation indeed appears to be concentrated over areas of rough topography.

  19. Estimation of bias with the single-zone assumption in measurement of residential air exchange using the perfluorocarbon tracer gas method.

    PubMed

    Van Ryswyk, K; Wallace, L; Fugler, D; MacNeill, M; Héroux, M È; Gibson, M D; Guernsey, J R; Kindzierski, W; Wheeler, A J

    2015-12-01

    Residential air exchange rates (AERs) are vital in understanding the temporal and spatial drivers of indoor air quality (IAQ). Several methods to quantify AERs have been used in IAQ research, often with the assumption that the home is a single, well-mixed air zone. Since 2005, Health Canada has conducted IAQ studies across Canada in which AERs were measured using the perfluorocarbon tracer (PFT) gas method. Emitters and detectors of a single PFT gas were placed on the main floor to estimate a single-zone AER (AER(1z)). In three of these studies, a second set of emitters and detectors were deployed in the basement or second floor in approximately 10% of homes for a two-zone AER estimate (AER(2z)). In total, 287 daily pairs of AER(2z) and AER(1z) estimates were made from 35 homes across three cities. In 87% of the cases, AER(2z) was higher than AER(1z). Overall, the AER(1z) estimates underestimated AER(2z) by approximately 16% (IQR: 5-32%). This underestimate occurred in all cities and seasons and varied in magnitude seasonally, between homes, and daily, indicating that when measuring residential air exchange using a single PFT gas, the assumption of a single well-mixed air zone very likely results in an underprediction of the AER. The results of this study suggest that the long-standing assumption that a home represents a single well-mixed air zone may result in a substantial negative bias in air exchange estimates. Indoor air quality professionals should take this finding into consideration when developing study designs or making decisions related to the recommendation and installation of residential ventilation systems. © 2014 Her Majesty the Queen in Right of Canada. Indoor Air published by John Wiley & Sons Ltd. Reproduced with the permission of the Minister of Health Canada.

  20. A novel approach to mixing qualitative and quantitative methods in HIV and STI prevention research.

    PubMed

    Penman-Aguilar, Ana; Macaluso, Maurizio; Peacock, Nadine; Snead, M Christine; Posner, Samuel F

    2014-04-01

    Mixed-method designs are increasingly used in sexually transmitted infection (STI) and HIV prevention research. The authors designed a mixed-method approach and applied it to estimate and evaluate a predictor of continued female condom use (6+ uses, among those who used it at least once) in a 6-month prospective cohort study. The analysis included 402 women who received an intervention promoting use of female and male condoms for STI prevention and completed monthly quantitative surveys; 33 also completed a semistructured qualitative interview. The authors identified a qualitative theme (couples' female condom enjoyment [CFCE]), applied discriminant analysis techniques to estimate CFCE for all participants, and added CFCE to a multivariable logistic regression model of continued female condom use. CFCE related to comfort, naturalness, pleasure, feeling protected, playfulness, ease of use, intimacy, and feeling in control of protection. CFCE was associated with continued female condom use (adjusted odds ratio: 2.8, 95% confidence interval: 1.4-5.6) and significantly improved model fit (p < .001). CFCE predicted continued female condom use. Mixed-method approaches for "scaling up" qualitative findings from small samples to larger numbers of participants can benefit HIV and STI prevention research.

  1. Aliasing Signal Separation of Superimposed Abrasive Debris Based on Degenerate Unmixing Estimation Technique.

    PubMed

    Li, Tongyang; Wang, Shaoping; Zio, Enrico; Shi, Jian; Hong, Wei

    2018-03-15

    Leakage is the most important failure mode in aircraft hydraulic systems, caused by wear and tear between the friction pairs of components. Accurate detection of abrasive debris can reveal the wear condition and predict a system's lifespan. The radial magnetic field (RMF)-based debris detection method provides an online solution for monitoring the wear condition intuitively, which potentially enables more accurate diagnosis and prognosis of an aviation hydraulic system's ongoing failures. To address the serious mixing of pipe abrasive debris, this paper focuses on the separation of superimposed abrasive debris signals from an RMF abrasive sensor based on the degenerate unmixing estimation technique. By accurately separating the debris and computing its morphology and amount, the RMF-based abrasive sensor can provide wear-trend and wear-particle-size estimates for the system. A well-designed experiment was conducted, and the results show that the proposed method can effectively separate the mixed debris and give an accurate count of the debris based on RMF abrasive sensor detection.

  2. Estimation of bias with the single-zone assumption in measurement of residential air exchange using the perfluorocarbon tracer gas method

    PubMed Central

    Van Ryswyk, K; Wallace, L; Fugler, D; MacNeill, M; Héroux, M È; Gibson, M D; Guernsey, J R; Kindzierski, W; Wheeler, A J

    2015-01-01

    Residential air exchange rates (AERs) are vital in understanding the temporal and spatial drivers of indoor air quality (IAQ). Several methods to quantify AERs have been used in IAQ research, often with the assumption that the home is a single, well-mixed air zone. Since 2005, Health Canada has conducted IAQ studies across Canada in which AERs were measured using the perfluorocarbon tracer (PFT) gas method. Emitters and detectors of a single PFT gas were placed on the main floor to estimate a single-zone AER (AER1z). In three of these studies, a second set of emitters and detectors were deployed in the basement or second floor in approximately 10% of homes for a two-zone AER estimate (AER2z). In total, 287 daily pairs of AER2z and AER1z estimates were made from 35 homes across three cities. In 87% of the cases, AER2z was higher than AER1z. Overall, the AER1z estimates underestimated AER2z by approximately 16% (IQR: 5–32%). This underestimate occurred in all cities and seasons and varied in magnitude seasonally, between homes, and daily, indicating that when measuring residential air exchange using a single PFT gas, the assumption of a single well-mixed air zone very likely results in an underprediction of the AER. PMID:25399878

  3. Estimating Mixed Broadleaves Forest Stand Volume Using DSM Extracted from Digital Aerial Images

    NASA Astrophysics Data System (ADS)

    Sohrabi, H.

    2012-07-01

    In mixed old-growth broadleaf stands of the Hyrcanian forests, it is difficult to estimate stand volume at the plot level from remotely sensed data when LiDAR data are absent. In this paper, a new approach is proposed and tested for estimating stand forest volume. The approach is based on the idea that forest volume can be estimated from the variation of tree heights within plots. In other words, the greater the height variation in a plot, the greater the expected stand volume. To test this idea, 120 circular 0.1-ha sample plots were collected with a systematic random design in the Tonekaon forest, located in the Hyrcanian zone. A digital surface model (DSM) measures the heights of the first surface on the ground, including terrain features, trees, buildings, etc., providing a topographic model of the earth's surface. The DSMs were extracted automatically from aerial UltraCamD images, with ground pixel sizes varying from 1 to 10 m in 1-m steps. DSMs were checked manually for probable errors. For each ground sample, the standard deviation and range of the corresponding DSM pixels were calculated. For modeling, a nonlinear regression method was used. The results showed that the standard deviation of plot pixels at 5-m resolution was the most appropriate predictor for modeling. Relative bias and RMSE of the estimation were 5.8 and 49.8 percent, respectively. Compared with other approaches for estimating stand volume from passive remote sensing data in mixed broadleaf forests, these results are encouraging. One major problem with this method occurs when the canopy cover is completely closed: the standard deviation of height is then low while the stand volume is high. Future studies could examine the application of forest stratification.
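
    A minimal sketch of the modeling step described above: the per-plot standard deviation of 5-m DSM heights as the predictor in a nonlinear regression for stand volume. The data, and the power-law model form itself, are illustrative assumptions; the paper states only that a nonlinear regression was used.

      # Nonlinear regression of stand volume on the per-plot std of DSM heights.
      import numpy as np
      from scipy.optimize import curve_fit

      rng = np.random.default_rng(3)

      # Per-plot std of DSM pixel heights (m) and field-measured volume (m^3/ha),
      # both synthetic stand-ins for the study's 120 plots.
      dsm_std = rng.uniform(2.0, 12.0, size=120)
      volume = 15.0 * dsm_std**1.4 * rng.lognormal(0.0, 0.2, size=120)

      power_model = lambda s, a, b: a * s**b
      (a, b), _ = curve_fit(power_model, dsm_std, volume, p0=(10.0, 1.0))

      pred = power_model(dsm_std, a, b)
      bias = 100.0 * np.mean(pred - volume) / np.mean(volume)
      rmse = 100.0 * np.sqrt(np.mean((pred - volume) ** 2)) / np.mean(volume)
      print(f"a={a:.1f}, b={b:.2f}, relative bias={bias:.1f}%, relative RMSE={rmse:.1f}%")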

  4. Discrimination of Mixed Taste Solutions using Ultrasonic Wave and Soft Computing

    NASA Astrophysics Data System (ADS)

    Kojima, Yohichiro; Kimura, Futoshi; Mikami, Tsuyoshi; Kitama, Masataka

    In this study, ultrasonic wave acoustic properties of mixed taste solutions were investigated, and the possibility of taste sensing based on the acoustic properties obtained was examined. In previous studies, properties of solutions were discriminated based on sound velocity, amplitude and frequency characteristics of ultrasonic waves propagating through the five basic taste solutions and marketed beverages. However, to make this method applicable to beverages that contain many taste substances, further studies are required. In this paper, the waveform of an ultrasonic wave with a frequency of approximately 5 MHz propagating through mixed solutions composed of sweet and salty substances was measured. As a result, differences among solutions were clearly observed as differences in their properties. Furthermore, these mixed solutions were discriminated by a self-organizing neural network. The volume ratio of the mixed solutions was estimated by a distance-type fuzzy reasoning method. Therefore, the possibility of taste sensing was shown by using ultrasonic wave acoustic properties and soft computing techniques, such as the self-organizing neural network and the distance-type fuzzy reasoning method.

  5. Profile-Likelihood Approach for Estimating Generalized Linear Mixed Models with Factor Structures

    ERIC Educational Resources Information Center

    Jeon, Minjeong; Rabe-Hesketh, Sophia

    2012-01-01

    In this article, the authors suggest a profile-likelihood approach for estimating complex models by maximum likelihood (ML) using standard software and minimal programming. The method works whenever setting some of the parameters of the model to known constants turns the model into a standard model. An important class of models that can be…
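    A minimal sketch of the profile-likelihood idea, under an assumed example model y = a + b*x**c and synthetic data (neither is from the article): fixing the "hard" parameter c turns the model into ordinary linear regression, a standard model, and c itself is then estimated by maximizing the profiled likelihood over a grid.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    x = np.linspace(0.5, 3.0, 60)
    y = 1.0 + 2.0 * x ** 1.5 + rng.normal(0, 0.2, x.size)  # synthetic data

    def profile_loglik(c):
        # With c fixed, y = a + b*x**c is plain linear regression.
        X = np.column_stack([np.ones_like(x), x ** c])
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        rss = np.sum((y - X @ beta) ** 2)
        n = y.size
        # Gaussian log-likelihood profiled over (a, b, sigma^2):
        return -0.5 * n * (np.log(2 * np.pi * rss / n) + 1)

    grid = np.linspace(0.5, 3.0, 251)
    ll = np.array([profile_loglik(c) for c in grid])
    print("profile-ML estimate of c:", round(grid[np.argmax(ll)], 3))
    ```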

  6. Biological effects of mixed-ion beams. Part 1: Effect of irradiation of the CHO-K1 cells with a mixed-ion beam containing the carbon and oxygen ions.

    PubMed

    Czub, Joanna; Banaś, Dariusz; Braziewicz, Janusz; Buraczewska, Iwona; Jaskóła, Marian; Kaźmierczak, Urszula; Korman, Andrzej; Lankoff, Anna; Lisowska, Halina; Szefliński, Zygmunt; Wojewódzka, Maria; Wójcik, Andrzej

    2018-05-30

    Carbon and oxygen ions were accelerated simultaneously to estimate the effect of irradiating living cells with two different ions at once. This mixed-ion beam was used to irradiate CHO-K1 cells, and a survival test was performed. The type of effect the mixed-ion beam had on the cells was determined with the isobologram method, using survival curves from irradiations with the individual ion beams. An additive effect of irradiation with the two ions was found.

  7. Estimating the asbestos-related lung cancer burden from mesothelioma mortality

    PubMed Central

    McCormack, V; Peto, J; Byrnes, G; Straif, K; Boffetta, P

    2012-01-01

    Background: Quantifying the asbestos-related lung cancer burden is difficult in the presence of this disease's multiple causes. We explore two methods to estimate this burden using mesothelioma deaths as a proxy for asbestos exposure. Methods: From the follow-up of 55 asbestos cohorts, we estimated ratios of (i) the absolute number of asbestos-related lung cancers to mesothelioma deaths; (ii) the excess lung cancer relative risk (%) to mesothelioma mortality per 1000 non-asbestos-related deaths. Results: Ratios varied by asbestos type; there was a mean of 0.7 (95% confidence interval 0.5, 1.0) asbestos-related lung cancers per mesothelioma death in crocidolite cohorts (n=6 estimates), 6.1 (3.6, 10.5) in chrysotile (n=16), 4.0 (2.8, 5.9) in amosite (n=4) and 1.9 (1.4, 2.6) in mixed asbestos fibre cohorts (n=31). In a population with 2 mesothelioma deaths per 1000 deaths at ages 40–84 years (e.g., US men), the lung cancer population attributable fraction due to mixed asbestos was estimated to be 4.0%. Conclusion: All types of asbestos fibres kill at least twice as many people through lung cancer as through mesothelioma, except for crocidolite. For chrysotile, widely consumed today, asbestos-related lung cancers cannot be robustly estimated from the few mesothelioma deaths, and the latter cannot be used to infer no excess risk of lung or other cancers. PMID:22233924

  8. Reliability Estimation of Aero-engine Based on Mixed Weibull Distribution Model

    NASA Astrophysics Data System (ADS)

    Yuan, Zhongda; Deng, Junxiang; Wang, Dawei

    2018-02-01

    The aero-engine is a complex mechanical-electronic system, and in the reliability analysis of such systems the Weibull distribution model plays an irreplaceable role. To date, only the two-parameter and three-parameter Weibull distribution models are widely used. Because engine failure modes are diverse, a single Weibull distribution model carries a large error. By contrast, a mixed Weibull distribution model can take a variety of engine failure modes into account, making it a good statistical analysis model. In addition to the concept of a dynamic weight coefficient, a three-parameter correlation coefficient optimization method is applied to enhance the Weibull distribution model and make the reliability estimates more accurate, greatly improving the precision of the mixed-distribution reliability model. All of this is advantageous for popularizing the Weibull distribution model in engineering applications.
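    The following is a minimal sketch of fitting a two-component mixed Weibull model by maximum likelihood. The failure-time data, starting values, and direct numerical optimization (rather than the paper's correlation coefficient optimization method) are illustrative assumptions:

    ```python
    import numpy as np
    from scipy.optimize import minimize
    from scipy.stats import weibull_min

    # Hypothetical failure times (hours): a two-component mixture standing in
    # for two engine failure modes (early failures plus wear-out).
    t = np.concatenate([
        weibull_min.rvs(1.2, scale=500, size=80, random_state=1),
        weibull_min.rvs(3.5, scale=2000, size=120, random_state=2),
    ])

    def nll(p):
        # Negative log-likelihood of the two-component mixed Weibull model.
        w = 1 / (1 + np.exp(-p[0]))          # mixture weight kept in (0, 1)
        k1, s1, k2, s2 = np.exp(p[1:])       # shapes/scales kept positive
        pdf = (w * weibull_min.pdf(t, k1, scale=s1)
               + (1 - w) * weibull_min.pdf(t, k2, scale=s2))
        return -np.sum(np.log(pdf + 1e-300))

    start = np.array([0.0, np.log(1.0), np.log(300.0), np.log(3.0), np.log(1500.0)])
    fit = minimize(nll, start, method="Nelder-Mead", options={"maxiter": 5000})
    print("weight:", round(1 / (1 + np.exp(-fit.x[0])), 3),
          "shape/scale pairs:", np.exp(fit.x[1:]).round(1))
    ```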

  9. Tracking control of WMRs on loose soil based on mixed H2/H∞ control with longitudinal slip ratio estimation

    NASA Astrophysics Data System (ADS)

    Gao, Haibo; Chen, Chao; Ding, Liang; Li, Weihua; Yu, Haitao; Xia, Kerui; Liu, Zhen

    2017-11-01

    Wheeled mobile robots (WMRs) often suffer from longitudinal slipping when moving on loose soil, such as the lunar surface, during exploration. Longitudinal slip is the main cause of a WMR's delay in trajectory tracking. In this paper, a nonlinear extended state observer (NESO) is introduced to estimate the longitudinal velocity, and from it the slip ratio and the derivative of the velocity loss, which are used in modelled-disturbance compensation. Owing to the uncertainty and disturbance caused by estimation errors, a multi-objective controller using the mixed H2/H∞ method is employed to ensure the robust stability and performance of the WMR system. The final trajectory-tracking inputs consist of the feedforward compensation, the compensation for the modelled disturbances, and the designed multi-objective control inputs. Finally, simulation results demonstrate the effectiveness of the controller, which exhibits satisfactory tracking performance.

  10. Measurement of formaldehyde in clean air

    NASA Astrophysics Data System (ADS)

    Neitzert, Volker; Seiler, Wolfgang

    1981-01-01

    A method for the measurement of small amounts of formaldehyde in air has been developed. The method is based on the derivatization of HCHO with 2,4-dinitrophenylhydrazine, forming the 2,4-dinitrophenylhydrazone, which is measured with a GC-ECD technique. HCHO is preconcentrated using a cryogenic sampling technique. The detection limit is 0.05 ppbv for a sampling volume of 200 liters. The method has been applied to measurements in continental and marine air masses, showing HCHO mixing ratios of 0.4-5.0 ppbv and 0.2-1.0 ppbv, respectively. HCHO mixing ratios show diurnal variations with maximum values during the early afternoon and minimum values during the early morning. In continental air, HCHO mixing ratios are positively correlated with CO and SO2, indicating anthropogenic HCHO sources, which are estimated to be 6-11 × 10¹² g yr⁻¹ on a global scale.

  11. Improved prediction of heat of mixing and segregation in metallic alloys using tunable mixing rule for embedded atom method

    NASA Astrophysics Data System (ADS)

    Divi, Srikanth; Agrahari, Gargi; Ranjan Kadulkar, Sanket; Kumar, Sanjeet; Chatterjee, Abhijit

    2017-12-01

    Capturing segregation behavior in metal alloy nanoparticles accurately using computer simulations is contingent upon the availability of high-fidelity interatomic potentials. The embedded atom method (EAM) potential is a widely trusted interatomic potential form used with pure metals and their alloys. When limited experimental data are available, the A-B EAM cross-interaction potential for a metal alloy AxB1-x is often constructed from the pure metal A and B potentials by employing a pre-defined 'mixing rule' without any adjustable parameters. While this approach is convenient, we show that for AuPt, NiPt, AgAu, AgPd, AuNi, NiPd, PtPd and AuPd such mixing rules may not even yield the correct alloy properties, e.g., heats of mixing, that are closely related to the segregation behavior. A general theoretical formulation based on scaling invariance arguments is introduced that addresses this issue by tuning the mixing rule to better describe alloy properties. Starting with an existing pure metal EAM potential that is used extensively in the literature, we find that a mixing rule fitted to heats of mixing for metal solutions usually provides good estimates of segregation energies, lattice parameters and cohesive energy, as well as the equilibrium distribution of metals within a nanoparticle in Monte Carlo simulations. While the tunable mixing rule generally performs better than non-adjustable mixing rules, its use may still require some caution. For example, in the Pt-Ni system we find that the segregation behavior can deviate from the experimentally observed one at Ni-rich compositions. Despite this, the overall results suggest that the same approach may be useful for developing improved cross-potentials with other existing pure metal EAM potentials as well. As a further test of our approach, a mixing rule estimated from binary data is used to calculate heats of mixing in AuPdPt, AuNiPd, AuPtNi, AgAuPd and NiPtPd. Excellent agreement with experiments is observed for AuPdPt.
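    For concreteness, here is a sketch of a cross-pair mixing rule in the spirit described above. With t = 1 it reduces to the classic Johnson-style density-weighted rule; the tuning knob t is an illustrative assumption, not the authors' exact parameterization, and the toy density and pair functions are made up:

    ```python
    import numpy as np

    def cross_pair(r, phi_AA, phi_BB, f_A, f_B, t=1.0):
        """Cross pair potential between species A and B, weighting the pure
        pair potentials by electron-density ratios; t could be fitted to
        alloy heats of mixing (hypothetical tunable variant)."""
        return 0.5 * (t * (f_B(r) / f_A(r)) * phi_AA(r)
                      + (1.0 / t) * (f_A(r) / f_B(r)) * phi_BB(r))

    # Toy demo with made-up exponential densities and Morse-like pair terms:
    f_A = lambda r: np.exp(-2.0 * r)
    f_B = lambda r: np.exp(-1.6 * r)
    phi_AA = lambda r: 0.5 * (np.exp(-4 * (r - 2.5)) - 2 * np.exp(-2 * (r - 2.5)))
    phi_BB = lambda r: 0.6 * (np.exp(-4 * (r - 2.7)) - 2 * np.exp(-2 * (r - 2.7)))
    print(cross_pair(3.0, phi_AA, phi_BB, f_A, f_B, t=1.1))
    ```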

  12. Regression analysis of mixed recurrent-event and panel-count data.

    PubMed

    Zhu, Liang; Tong, Xinwei; Sun, Jianguo; Chen, Manhua; Srivastava, Deo Kumar; Leisenring, Wendy; Robison, Leslie L

    2014-07-01

    In event history studies concerning recurrent events, two types of data have been extensively discussed. One is recurrent-event data (Cook and Lawless, 2007. The Analysis of Recurrent Event Data. New York: Springer), and the other is panel-count data (Zhao and others, 2010. Nonparametric inference based on panel-count data. Test 20, 1-42). In the former case, all study subjects are monitored continuously; thus, complete information is available for the underlying recurrent-event processes of interest. In the latter case, study subjects are monitored periodically; thus, only incomplete information is available for the processes of interest. In reality, however, a third type of data could occur in which some study subjects are monitored continuously, but others are monitored periodically. When this occurs, we have mixed recurrent-event and panel-count data. This paper discusses regression analysis of such mixed data and presents two estimation procedures for the problem. One is a maximum likelihood estimation procedure, and the other is an estimating equation procedure. The asymptotic properties of both resulting estimators of regression parameters are established. Also, the methods are applied to a set of mixed recurrent-event and panel-count data that arose from a Childhood Cancer Survivor Study and motivated this investigation.

  13. Estimating proportions in petrographic mixing equations by least-squares approximation.

    PubMed

    Bryan, W B; Finger, L W; Chayes, F

    1969-02-28

    Petrogenetic hypotheses involving fractional crystallization, assimilation, or mixing of magmas may be expressed and tested as problems in least-squares approximation. The calculation uses all of the data and yields a unique solution for each model, thus avoiding the ambiguity inherent in graphical or trial-and-error procedures. The compositional change in the 1960 lavas of Kilauea Volcano, Hawaii, is used to illustrate the method of calculation.
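    The calculation reduces to an overdetermined linear system: each candidate end member contributes a column of oxide compositions, and the mixing proportions are the least-squares solution. A minimal sketch with made-up compositions (not the Kilauea data):

    ```python
    import numpy as np

    # Columns: oxide compositions (wt%) of candidate end members (e.g. parent
    # magma, olivine, plagioclase); rows: SiO2, MgO, CaO, Al2O3. Illustrative.
    E = np.array([
        [50.0, 40.5, 47.5],
        [ 7.5, 48.0,  0.2],
        [11.0,  0.3, 16.5],
        [13.5,  0.5, 32.0],
    ])
    mix = np.array([49.2, 12.1, 10.0, 11.8])  # observed daughter lava

    # Unconstrained least squares, as in the petrographic mixing approach:
    x, *_ = np.linalg.lstsq(E, mix, rcond=None)
    print("proportions:", x.round(3), "sum:", x.sum().round(3))
    # A sum far from 1, or negative entries, would flag an inadequate model.
    ```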

  14. Integrated ensemble noise-reconstructed empirical mode decomposition for mechanical fault detection

    NASA Astrophysics Data System (ADS)

    Yuan, Jing; Ji, Feng; Gao, Yuan; Zhu, Jun; Wei, Chenjun; Zhou, Yu

    2018-05-01

    A growing branch of fault detection utilizes noise (by enhancing, adding, or estimating it) to improve the signal-to-noise ratio (SNR) and extract fault signatures. Among these approaches, ensemble noise-reconstructed empirical mode decomposition (ENEMD) is a novel noise-utilization method that ameliorates mode mixing and denoises the intrinsic mode functions (IMFs). Despite its potential for detecting weak and multiple faults, the method still suffers from two major problems: a user-defined parameter and weak performance in high-SNR cases. Hence, integrated ensemble noise-reconstructed empirical mode decomposition is proposed to overcome these drawbacks, improved by two noise estimation techniques for different SNRs together with a noise estimation strategy. Independent of any artificial setup, noise estimation by minimax thresholding is improved for the low-SNR case, which is especially effective for signature enhancement. To approximate weak noise precisely, noise estimation by local reconfiguration using singular value decomposition (SVD) is proposed for the high-SNR case, which is particularly powerful for reducing mode mixing. Here, the sliding window for projecting the phase space is optimally designed by correlation minimization, and the singular order for the local reconfiguration used to estimate the noise is determined by the inflection point of the increasing trend of normalized singular entropy. Furthermore, a noise estimation strategy, i.e., an approach for selecting between the two estimation techniques along with handling the critical case, is developed for different SNRs by means of the possible noise-only IMF family. The method is validated by repeatable simulations to demonstrate its overall performance and, in particular, to confirm its noise estimation capability. Finally, the method is applied to detect a local wear fault in a dual-axis stabilized platform and a gear crack in an operating electric locomotive, verifying its effectiveness and feasibility.
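    A minimal sketch of the SVD-based local reconfiguration idea for the high-SNR case, with a fixed singular order standing in for the singular-entropy inflection criterion described in the abstract:

    ```python
    import numpy as np

    def svd_noise_estimate(x, window=64, order=4):
        """Embed the signal in a sliding-window (Hankel) matrix, zero out the
        leading 'signal' singular values, and average anti-diagonals back
        into a 1-D noise series. 'order' is an illustrative fixed split."""
        n = len(x) - window + 1
        H = np.column_stack([x[i:i + window] for i in range(n)])  # H[i, j] = x[i + j]
        U, s, Vt = np.linalg.svd(H, full_matrices=False)
        s[:order] = 0.0                                  # discard signal subspace
        Hn = (U * s) @ Vt                                # noise-only reconstruction
        noise, counts = np.zeros(len(x)), np.zeros(len(x))
        for i in range(window):                          # anti-diagonal averaging
            for j in range(n):
                noise[i + j] += Hn[i, j]
                counts[i + j] += 1
        return noise / counts

    t = np.linspace(0, 1, 2048)
    x = np.sin(2 * np.pi * 40 * t) + 0.3 * np.random.default_rng(0).normal(size=t.size)
    print(np.std(svd_noise_estimate(x)))                 # roughly the noise std, 0.3
    ```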

  15. Evaluation of Quantitative Exposure Assessment Method for Nanomaterials in Mixed Dust Environments: Application in Tire Manufacturing Facilities.

    PubMed

    Kreider, Marisa L; Cyrs, William D; Tosiano, Melissa A; Panko, Julie M

    2015-11-01

    Current recommendations for nanomaterial-specific exposure assessment require adaptation in order to be applied to complicated manufacturing settings, where a variety of particle types may contribute to the potential exposure. The purpose of this work was to evaluate a method that would allow for exposure assessment of nanostructured materials by chemical composition and size in a mixed dust setting, using carbon black (CB) and amorphous silica (AS) from tire manufacturing as an example. This method combined air sampling with a low-pressure cascade impactor and analysis of elemental composition by size to quantitatively assess potential exposures in the workplace. The method was first pilot-tested in one tire manufacturing facility; air samples were collected with a Dekati Low Pressure Impactor (DLPI) during mixing where either CB or AS was used as the primary filler. Air samples were analyzed via scanning transmission electron microscopy (STEM) coupled with energy dispersive spectroscopy (EDS) to identify what fraction of particles were CB, AS, or 'other'. From this pilot study, it was determined that ~95% of all nanoscale particles were identified as CB or AS. Subsequent samples were collected with the Dekati Electrical Low Pressure Impactor (ELPI) at two tire manufacturing facilities and analyzed using the same methodology to quantify exposure to these materials. This analysis confirmed that CB and AS were the predominant nanoscale particle types in the mixing area at both facilities. Air concentrations of CB and AS ranged from ~8900 to 77600 and 400 to 22200 particles cm⁻³, respectively. This method offers the potential to provide quantitative estimates of worker exposure to nanoparticles of specific materials in a mixed dust environment. With the pending development of occupational exposure limits for nanomaterials, this methodology will allow occupational health and safety practitioners to estimate worker exposures to specific materials, even in scenarios where many particle types are present.

  16. Advantage of population pharmacokinetic method for evaluating the bioequivalence and accuracy of parameter estimation of pidotimod.

    PubMed

    Huang, Jihan; Li, Mengying; Lv, Yinghua; Yang, Juan; Xu, Ling; Wang, Jingjing; Chen, Junchao; Wang, Kun; He, Yingchun; Zheng, Qingshan

    2016-09-01

    This study was aimed at exploring the accuracy of the population pharmacokinetic method in evaluating the bioequivalence of pidotimod with sparse data profiles, and whether this method is suitable for bioequivalence evaluation in special populations, such as children, where fewer samples can be collected. In this single-dose, two-period crossover study, 20 healthy male Chinese volunteers were randomized 1 : 1 to receive either the test or reference formulation, with a 1-week washout before receiving the alternative formulation. Noncompartmental and population compartmental pharmacokinetic analyses were conducted. Simulated data were analyzed to graphically evaluate the model and the pharmacokinetic characteristics of the two pidotimod formulations. Various sparse sampling scenarios were generated from the real bioequivalence clinical trial data and evaluated by the population pharmacokinetic method. The 90% confidence intervals (CIs) for AUC0-12h, AUC0-∞, and Cmax were 97.3-118.7%, 96.9-118.7%, and 95.1-109.8%, respectively, all within the 80-125% range for bioequivalence using noncompartmental analysis. The population compartmental pharmacokinetics of pidotimod were described using a one-compartment model with first-order absorption and lag time. In comparisons across datasets, random three-point and fixed four-point sampling strategies provided results similar to those obtained through rich sampling. The nonlinear mixed-effects model requires fewer data points. Moreover, compared with the noncompartmental analysis method, the pharmacokinetic parameters can be more accurately estimated using the nonlinear mixed-effects model. The population pharmacokinetic modeling method was used to assess the bioequivalence of two pidotimod formulations with relatively few sampling points and further validated the bioequivalence of the two formulations. This method may provide useful information for regulating bioequivalence evaluation in special populations.
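    The structural model named above, a one-compartment model with first-order absorption and lag time, has a closed-form concentration curve, sketched below. The dose and parameter values are placeholders, not the study's estimates:

    ```python
    import numpy as np

    def conc_1cmt_oral(t, dose, ka, CL, V, tlag):
        """Plasma concentration for a one-compartment model with first-order
        absorption and lag time (generic parameter names; assumes ka != ke
        and complete bioavailability for simplicity)."""
        ke = CL / V
        tt = np.maximum(t - tlag, 0.0)   # concentration is zero before tlag
        return (dose * ka / (V * (ka - ke))) * (np.exp(-ke * tt) - np.exp(-ka * tt))

    t = np.linspace(0, 12, 121)          # hours
    c = conc_1cmt_oral(t, dose=800.0, ka=1.5, CL=20.0, V=40.0, tlag=0.25)
    print(round(t[np.argmax(c)], 1), round(c.max(), 2))  # Tmax, Cmax
    ```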

  17. Illicit and nonmedical drug use among Asian Americans, Native Hawaiians/Pacific Islanders, and mixed-race individuals

    PubMed Central

    Wu, Li-Tzy; Blazer, Dan G.; Swartz, Marvin S.; Burchett, Bruce; Brady, Kathleen T.

    2013-01-01

    Background The racial/ethnic composition of the United States is shifting rapidly, with non-Hispanic Asian-Americans, Native Hawaiians/Pacific Islanders (NHs/PIs), and mixed-race individuals the fastest growing segments of the population. We determined new drug use estimates for these rising groups. Prevalences among Whites were included as a comparison. Methods Data were from the 2005–2011 National Surveys on Drug Use and Health. Substance use among respondents aged ≥12 years was assessed by computer-assisted self-interviewing methods. Respondents’ self-reported race/ethnicity, age, gender, household income, government assistance, county type, residential stability, major depressive episode, history of being arrested, tobacco use, and alcohol use were examined as correlates. We stratified the analysis by race/ethnicity and used logistic regression to estimate odds of drug use. Results Prevalence of past-year marijuana use among Whites increased from 10.7% in 2005 to 11.6–11.8% in 2009–2011 (P<0.05). There were no significant yearly changes in drug use prevalences among Asian-Americans, NHs/PIs, and mixed-race people; but use of any drug, especially marijuana, was prevalent among NHs/PIs and mixed-race people (21.2% and 23.3%, respectively, in 2011). Compared with Asian-Americans, NHs/PIs had higher odds of marijuana use, and mixed-race individuals had higher odds of using marijuana, cocaine, hallucinogens, stimulants, sedatives, and tranquilizers. Compared with Whites, mixed-race individuals had greater odds of any drug use, mainly marijuana, and NHs/PIs resembled Whites in odds of any drug use. Conclusions Findings reveal alarmingly prevalent drug use among NHs/PIs and mixed-race people. Research on drug use is needed in these rising populations to inform prevention and treatment efforts. PMID:23890491

  18. A mixed model for the relationship between climate and human cranial form.

    PubMed

    Katz, David C; Grote, Mark N; Weaver, Timothy D

    2016-08-01

    We expand upon a multivariate mixed model from quantitative genetics in order to estimate the magnitude of climate effects in a global sample of recent human crania. In humans, genetic distances are correlated with distances based on cranial form, suggesting that population structure influences both genetic and quantitative trait variation. Studies controlling for this structure have demonstrated significant underlying associations of cranial distances with ecological distances derived from climate variables. However, to assess the biological importance of an ecological predictor, estimates of effect size and uncertainty in the original units of measurement are clearly preferable to significance claims based on units of distance. Unfortunately, the magnitudes of ecological effects are difficult to obtain with distance-based methods, while models that produce estimates of effect size generally do not scale to high-dimensional data like cranial shape and form. Using recent innovations that extend quantitative genetics mixed models to highly multivariate observations, we estimate morphological effects associated with a climate predictor for a subset of the Howells craniometric dataset. Several measurements, particularly those associated with cranial vault breadth, show a substantial linear association with climate, and the multivariate model incorporating a climate predictor is preferred in model comparison. Previous studies demonstrated the existence of a relationship between climate and cranial form. The mixed model quantifies this relationship concretely. Evolutionary questions that require population structure and phylogeny to be disentangled from potential drivers of selection may be particularly well addressed by mixed models. Am J Phys Anthropol 160:593-603, 2016.

  19. Double-path acquisition of pulse wave transit time and heartbeat using self-mixing interferometry

    NASA Astrophysics Data System (ADS)

    Wei, Yingbin; Huang, Wencai; Wei, Zheng; Zhang, Jie; An, Tong; Wang, Xiulin; Xu, Huizhen

    2017-06-01

    We present a technique based on self-mixing interferometry for acquiring the pulse wave transit time (PWTT) and heartbeat. A signal processing method based on the Continuous Wavelet Transform and the Hilbert Transform is applied to extract potentially useful information in the self-mixing interference (SMI) signal, including the PWTT and heartbeat. Some cardiovascular characteristics of the human body can then be acquired easily, without retrieving the SMI signal through complicated algorithms. Experimentally, the PWTT is measured at the finger and the toe of the human body using double-path self-mixing interferometry. Experimental statistical data show a relation between the PWTT and blood pressure, which can be used to estimate the systolic pressure value by fitting. Moreover, the measured heartbeat shows good agreement with that obtained by a photoplethysmography sensor. The method we demonstrate, based on self-mixing interferometry with its significant advantages of simplicity, compactness and non-invasiveness, effectively illustrates the viability of the SMI technique for measuring other cardiovascular signals.

  20. A general method to determine sampling windows for nonlinear mixed effects models with an application to population pharmacokinetic studies.

    PubMed

    Foo, Lee Kien; McGree, James; Duffull, Stephen

    2012-01-01

    Optimal design methods have been proposed to determine the best sampling times when sparse blood sampling is required in clinical pharmacokinetic studies. However, the optimal blood sampling time points may not be feasible in clinical practice. Sampling windows, a time interval for blood sample collection, have been proposed to provide flexibility in blood sampling times while preserving efficient parameter estimation. Because of the complexity of the population pharmacokinetic models, which are generally nonlinear mixed effects models, there is no analytical solution available to determine sampling windows. We propose a method for determination of sampling windows based on MCMC sampling techniques. The proposed method attains a stationary distribution rapidly and provides time-sensitive windows around the optimal design points. The proposed method is applicable to determine sampling windows for any nonlinear mixed effects model although our work focuses on an application to population pharmacokinetic models.

  1. Strengthen forensic entomology in court--the need for data exploration and the validation of a generalised additive mixed model.

    PubMed

    Baqué, Michèle; Amendt, Jens

    2013-01-01

    Developmental data for juvenile blow flies (Diptera: Calliphoridae) are typically used to calculate the age of immature stages found on or around a corpse and thus to estimate a minimum post-mortem interval (PMI(min)). However, many of those datasets do not take into account that immature blow flies grow in a non-linear fashion. Linear models do not provide sufficiently reliable age estimates and may even lead to an erroneous determination of the PMI(min). In line with the Daubert standard and the need for improvements in forensic science, new statistical tools such as smoothing methods and mixed models allow the modelling of non-linear relationships and expand the field of statistical analyses. The present study introduces the background and application of these statistical techniques by analysing a model which describes the development of the forensically important blow fly Calliphora vicina at different temperatures. The comparison of three statistical methods (linear regression, generalised additive modelling and generalised additive mixed modelling) clearly demonstrates that only the latter provided regression parameters that reflect the data adequately. We focus explicitly both on the exploration of the data (to assure their quality and to show the importance of checking it carefully prior to conducting the statistical tests) and on the validation of the resulting models. Hence, we present a common method for evaluating and testing forensic entomological datasets, using generalised additive mixed models for the first time.

  2. The combined use of Green-Ampt model and Curve Number method as an empirical tool for loss estimation

    NASA Astrophysics Data System (ADS)

    Petroselli, A.; Grimaldi, S.; Romano, N.

    2012-12-01

    The Soil Conservation Service Curve Number (SCS-CN) method is a popular rainfall-runoff model widely used to estimate losses and direct runoff from a given rainfall event, but its use is not appropriate at sub-daily time resolution. To overcome this drawback, a mixed procedure, referred to as CN4GA (Curve Number for Green-Ampt), was recently developed; it incorporates the Green-Ampt (GA) infiltration model and aims to distribute in time the information provided by the SCS-CN method. The main concept of the proposed mixed procedure is to use the initial abstraction and the total volume given by the SCS-CN method to calibrate the Green-Ampt soil hydraulic conductivity parameter. The procedure is applied here to a real case study, and a sensitivity analysis concerning the remaining parameters is presented; the results show that the CN4GA approach is an ideal candidate for rainfall excess analysis at sub-daily time resolution, in particular for ungauged basins lacking discharge observations.
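    For reference, a minimal sketch of the SCS-CN event-runoff computation whose initial abstraction and total volume CN4GA uses for calibration, in metric form with the conventional lambda = 0.2:

    ```python
    def scs_cn_runoff(P_mm, CN, lam=0.2):
        """Event runoff depth (mm) from the SCS-CN method. S is the potential
        maximum retention and Ia = lam * S the initial abstraction."""
        S = 25400.0 / CN - 254.0
        Ia = lam * S
        if P_mm <= Ia:
            return 0.0
        return (P_mm - Ia) ** 2 / (P_mm - Ia + S)

    # e.g. a 60 mm storm on a CN = 75 catchment yields roughly 14.5 mm of runoff:
    print(round(scs_cn_runoff(60.0, 75), 1))
    ```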

  3. MIXED MODEL AND ESTIMATING EQUATION APPROACHES FOR ZERO INFLATION IN CLUSTERED BINARY RESPONSE DATA WITH APPLICATION TO A DATING VIOLENCE STUDY

    PubMed Central

    Fulton, Kara A.; Liu, Danping; Haynie, Denise L.; Albert, Paul S.

    2016-01-01

    The NEXT Generation Health study investigates the dating violence of adolescents using a survey questionnaire. Each student is asked to affirm or deny multiple instances of violence in his/her dating relationship. There is, however, evidence suggesting that students not in a relationship responded to the survey, resulting in excessive zeros in the responses. This paper proposes likelihood-based and estimating equation approaches to analyze zero-inflated clustered binary response data. We adopt a mixed model method to account for the cluster effect, and the model parameters are estimated using a maximum-likelihood (ML) approach that requires a Gauss-Hermite quadrature (GHQ) approximation for implementation. Since an incorrect assumption about the random effects distribution may bias the results, we construct generalized estimating equations (GEE) that do not require the correct specification of within-cluster correlation. In a series of simulation studies, we examine the performance of ML and GEE methods in terms of their bias, efficiency and robustness. We illustrate the importance of properly accounting for this zero inflation by reanalyzing the NEXT data where this issue has previously been ignored. PMID:26937263
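    To make the zero-inflation structure concrete, here is a minimal sketch of the marginal likelihood for clustered binary items; the paper's within-cluster random effect is omitted, and all names and values are illustrative:

    ```python
    import numpy as np
    from scipy.optimize import minimize

    def zib_nll(params, s, m):
        """Negative log-likelihood: each subject answers m binary items and s
        counts their affirmative answers. pi is the probability of a
        structural zero (e.g. a student not in a relationship, answering
        'no' to everything); p is the per-item probability otherwise."""
        pi = 1 / (1 + np.exp(-params[0]))
        p = 1 / (1 + np.exp(-params[1]))
        all_zero = pi + (1 - pi) * (1 - p) ** m          # two ways to be all-zero
        some_ones = (1 - pi) * p ** s * (1 - p) ** (m - s)
        return -np.sum(np.log(np.where(s == 0, all_zero, some_ones)))

    m = 5                                                # items per respondent
    s = np.array([0] * 60 + [1] * 20 + [2] * 15 + [4] * 5)  # illustrative counts
    fit = minimize(zib_nll, x0=np.zeros(2), args=(s, m), method="BFGS")
    print(1 / (1 + np.exp(-fit.x)))                      # [pi_hat, p_hat]
    ```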

  4. Two methods for parameter estimation using multiple-trait models and beef cattle field data.

    PubMed

    Bertrand, J K; Kriese, L A

    1990-08-01

    Two methods are presented for estimating variances and covariances from beef cattle field data using multiple-trait sire models. Both methods require that the first trait have no missing records and that the contemporary groups for the second trait be subsets of the contemporary groups for the first trait; however, the second trait may have missing records. One method uses pseudo expectations involving quadratics composed of the solutions and the right-hand sides of the mixed model equations. The other method is an extension of Henderson's Simple Method to the multiple trait case. Neither of these methods requires any inversions of large matrices in the computation of the parameters; therefore, both methods can handle very large sets of data. Four simulated data sets were generated to evaluate the methods. In general, both methods estimated genetic correlations and heritabilities that were close to the Restricted Maximum Likelihood estimates and the true data set values, even when selection within contemporary groups was practiced. The estimates of residual correlations by both methods, however, were biased by selection. These two methods can be useful in estimating variances and covariances from multiple-trait models in large populations that have undergone a minimal amount of selection within contemporary groups.

  5. Reproducibility of the exponential rise technique of CO2 rebreathing for measuring PvCO2 and CvCO2 to non-invasively estimate cardiac output during incremental, maximal treadmill exercise.

    PubMed

    Cade, W Todd; Nabar, Sharmila R; Keyser, Randall E

    2004-05-01

    The purpose of this study was to determine the reproducibility of the indirect Fick method for the measurement of mixed venous carbon dioxide partial pressure (PvCO2) and venous carbon dioxide content (CvCO2) for estimation of cardiac output (Qc), using the exponential rise method of carbon dioxide rebreathing, during non-steady-state treadmill exercise. Ten healthy participants (eight female and two male) performed three incremental, maximal exercise treadmill tests to exhaustion within 1 week. Non-invasive Qc measurements were evaluated at rest, during each 3-min stage, and at peak exercise, across three identical treadmill tests, using the exponential rise technique for measuring mixed venous PCO2 and CCO2 and estimating the venous-arterial carbon dioxide content difference (Cv-aCO2). Measurements were divided into measured or estimated variables [heart rate (HR), oxygen consumption (VO2), volume of expired carbon dioxide (VCO2), end-tidal carbon dioxide (PETCO2), arterial carbon dioxide partial pressure (PaCO2), venous carbon dioxide partial pressure (PvCO2), and Cv-aCO2] and cardiorespiratory variables derived from the measured variables [Qc, stroke volume (Vs), and arteriovenous oxygen difference (Ca-vO2)]. In general, the derived cardiorespiratory variables demonstrated acceptable (R=0.61) to high (R>0.80) reproducibility, especially at higher intensities and peak exercise. Measured variables, excluding PaCO2 and Cv-aCO2, also demonstrated acceptable (R=0.6 to 0.79) to high reliability. The current study demonstrated acceptable to high reproducibility of the exponential rise indirect Fick method in the measurement of mixed venous PCO2 and CCO2 for estimation of Qc during incremental treadmill exercise testing, especially at high-intensity and peak exercise.

  6. Quantifying uncertainty in stable isotope mixing models

    DOE PAGES

    Davis, Paul; Syme, James; Heikoop, Jeffrey; ...

    2015-05-19

    Mixing models are powerful tools for identifying biogeochemical sources and determining mixing fractions in a sample. However, identification of the actual source contributors is often not simple, and source compositions typically vary or even overlap, significantly increasing model uncertainty in calculated mixing fractions. This study compares three probabilistic methods: SIAR [Parnell et al., 2010], a pure Monte Carlo technique (PMC), and the Stable Isotope Reference Source (SIRS) mixing model, a new technique that estimates mixing in systems with more than three sources and/or uncertain source compositions. In this paper, we use nitrate stable isotope examples (δ15N and δ18O), but all methods tested are applicable to other tracers. In Phase I of a three-phase blind test, we compared methods for a set of six-source nitrate problems. PMC was unable to find solutions for two of the target water samples. The Bayesian method, SIAR, experienced anchoring problems, and SIRS calculated mixing fractions that most closely approximated the known mixing fractions. For that reason, SIRS was the only approach used in the next phase of testing. In Phase II, the problem was broadened so that any subset of the six sources could be a possible solution to the mixing problem. Results showed a high rate of Type I errors, where solutions included sources that were not contributing to the sample. In Phase III, eliminating some sources based on assumed site knowledge and assumed nitrate concentrations substantially reduced the mixing fraction uncertainties and lowered the Type I error rate. These results demonstrate that valuable insights into stable isotope mixing problems result from probabilistic mixing model approaches like SIRS. The results also emphasize the importance of identifying a minimal set of potential sources and quantifying uncertainties in source isotopic composition, as well as demonstrating the value of additional information in reducing the uncertainty in calculated mixing fractions.
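    The pure Monte Carlo flavor of such models can be sketched in a few lines: draw candidate fractions from a Dirichlet prior, perturb the source signatures, and keep draws whose mass balance matches the measured mixture. All values below are illustrative, and this is not the SIRS implementation:

    ```python
    import numpy as np

    rng = np.random.default_rng(42)

    # Illustrative source means for (d15N, d18O), per mil, with uncertainty sd:
    sources = np.array([[ 5.0,  2.0],   # e.g. soil N
                        [15.0,  0.0],   # e.g. manure/septic
                        [-2.0, 22.0]])  # e.g. atmospheric deposition
    src_sd = 1.0
    mixture = np.array([7.0, 6.0])      # measured sample
    tol = 0.8                           # acceptance tolerance (per mil)

    keep = []
    for _ in range(100_000):
        f = rng.dirichlet(np.ones(len(sources)))      # candidate fractions
        src = rng.normal(sources, src_sd)             # perturbed source signatures
        if np.all(np.abs(f @ src - mixture) < tol):   # isotope mass-balance check
            keep.append(f)

    keep = np.array(keep)
    print("posterior mean fractions:", keep.mean(axis=0).round(2))
    ```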

  7. Mixed ethnicity and behavioural problems in the Millennium Cohort Study

    PubMed Central

    Zilanawala, Afshin; Sacker, Amanda; Kelly, Yvonne

    2018-01-01

    Background The population of mixed ethnicity individuals in the UK is growing. Despite this demographic trend, little is known about mixed ethnicity children and their problem behaviours. We examine trajectories of behavioural problems among non-mixed and mixed ethnicity children from early to middle childhood using nationally representative cohort data in the UK. Methods Data from 16 330 children from the Millennium Cohort Study with total difficulties scores were analysed. We estimated trajectories of behavioural problems by mixed ethnicity using growth curve models. Results White mixed (mean total difficulties score: 8.3), Indian mixed (7.7), Pakistani mixed (8.9) and Bangladeshi mixed (7.2) children had fewer problem behaviours than their non-mixed counterparts at age 3 (9.4, 10.1, 13.1 and 11.9, respectively). White mixed, Pakistani mixed and Bangladeshi mixed children had growth trajectories in problem behaviours significantly different from that of their non-mixed counterparts. Conclusions Using a detailed mixed ethnic classification revealed diverging trajectories between some non-mixed and mixed children across the early life course. Future studies should investigate the mechanisms, which may influence increasing behavioural problems in mixed ethnicity children. PMID:26912571

  8. Multilevel mixed effects parametric survival models using adaptive Gauss-Hermite quadrature with application to recurrent events and individual participant data meta-analysis.

    PubMed

    Crowther, Michael J; Look, Maxime P; Riley, Richard D

    2014-09-28

    Multilevel mixed effects survival models are used in the analysis of clustered survival data, such as repeated events, multicenter clinical trials, and individual participant data (IPD) meta-analyses, to investigate heterogeneity in baseline risk and covariate effects. In this paper, we extend parametric frailty models including the exponential, Weibull and Gompertz proportional hazards (PH) models and the log logistic, log normal, and generalized gamma accelerated failure time models to allow any number of normally distributed random effects. Furthermore, we extend the flexible parametric survival model of Royston and Parmar, modeled on the log-cumulative hazard scale using restricted cubic splines, to include random effects while also allowing for non-PH (time-dependent effects). Maximum likelihood is used to estimate the models utilizing adaptive or nonadaptive Gauss-Hermite quadrature. The methods are evaluated through simulation studies representing clinically plausible scenarios of a multicenter trial and IPD meta-analysis, showing good performance of the estimation method. The flexible parametric mixed effects model is illustrated using a dataset of patients with kidney disease and repeated times to infection and an IPD meta-analysis of prognostic factor studies in patients with breast cancer. User-friendly Stata software is provided to implement the methods.
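    The core numerical step, integrating a cluster's conditional likelihood over a normal random effect with Gauss-Hermite quadrature, can be sketched as follows. This is a non-adaptive simplification with an exponential hazard and all times treated as events, not the models of the paper:

    ```python
    import numpy as np

    nodes, weights = np.polynomial.hermite.hermgauss(15)   # Gauss-Hermite rule

    def cluster_loglik(times, beta0, sigma):
        """Marginal log-likelihood of one cluster's exponential survival times
        with a N(0, sigma^2) random intercept on the log hazard, using the
        substitution b = sqrt(2) * sigma * node."""
        b = np.sqrt(2.0) * sigma * nodes
        h = np.exp(beta0 + b)                              # hazard at each node
        # Conditional likelihood of the whole cluster at each node:
        cond = np.prod(h[:, None] * np.exp(-h[:, None] * times[None, :]), axis=1)
        return np.log(np.sum(weights * cond) / np.sqrt(np.pi))

    print(cluster_loglik(np.array([0.8, 1.5, 2.2]), beta0=-0.5, sigma=0.7))
    ```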

  9. Close-in characteristics of LH2/LOX reactions

    NASA Technical Reports Server (NTRS)

    Riehl, W. A.; Ullian, L. J.

    1985-01-01

    In deriving shock overpressures for space vehicles employing LH2 and LOX, separate methods of analysis and prediction are recommended as a function of distance. Three methods of treatment are recommended. For the far field, where the expected shock overpressure is less than 40 psi (lambda = 5), use the classical PYRO approach to determine TNT yield, and employ the classical ordnance (Kingery) curve to obtain the overall value. For the close-in range, a suggested limit is 3D, i.e., a zone extending from the tank wall out to three times the tank diameter. Rather than estimate a specific distance from the center of the explosion to the target, it is only necessary to estimate whether the target could be within one, two, or three diameters of the wall, i.e., in the 1D, 2D, or 3D zone. Then assess whether the mixing mode is the PYRO CBGS (spill) mode or the CBM (internal mixing) mode. From the zone and mixing mode, the probability of attaining various shock overpressures is determined from the plots provided herein. For the transition zone, between 40 psi and the 3D distance, it is tentatively recommended that both of the preceding methods be used and, to be conservative, the higher resulting value be adopted.

  10. Estimating Surface Soil Moisture in a Mixed-Landscape using SMAP and MODIS/VIIRS Data

    NASA Astrophysics Data System (ADS)

    Tang, J.; Di, L.; Xiao, J.

    2017-12-01

    Soil moisture, a critical parameter of the earth ecosystem linking the land surface and the atmosphere, has been widely applied in many applications (Di, 1991; Njoku et al. 2003; Western 2002; Zhao et al. 2014; McColl et al. 2017) from regional to continental and even global scales. The advent of satellite-based remote sensing, particularly in the last two decades, has proven successful for mapping surface soil moisture (SSM) from space (Petropoulos et al. 2015; Kim et al. 2015; Molero et al. 2016). The current soil moisture products, however, are not able to fully characterize the spatial and temporal variability of soil moisture over mixed landscape types (Albergel et al. 2013; Zeng et al. 2015). In this research, we derived SSM at 1-km spatial resolution by using sensor observations and high-level products from SMAP and MODIS/VIIRS as well as meteorological, land cover, and soil data. Specifically, we proposed a practicable method to produce the originally planned SMAP L3_SM_A product with comparable quality by downscaling the SMAP L3_SM_P product through a proven method, geographically weighted regression, over a mixed landscape in southern New Hampshire. The estimated SSM was validated using the Soil Climate Analysis Network (SCAN) of the Natural Resources Conservation Service (NRCS) of the United States Department of Agriculture (USDA).
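    A minimal sketch of the geographically weighted regression step: fit a locally weighted least-squares model at one target location, with nearby samples receiving larger Gaussian kernel weights. The covariates and values are placeholders, not the study's processing chain:

    ```python
    import numpy as np

    def gwr_predict(X, y, coords, x0, coord0, bandwidth):
        """Geographically weighted regression prediction at one location."""
        d = np.linalg.norm(coords - coord0, axis=1)
        w = np.exp(-0.5 * (d / bandwidth) ** 2)          # Gaussian kernel weights
        Xa = np.column_stack([np.ones(len(X)), X])       # add intercept
        W = np.diag(w)
        beta = np.linalg.solve(Xa.T @ W @ Xa, Xa.T @ W @ y)
        return np.r_[1.0, x0] @ beta

    rng = np.random.default_rng(3)
    coords = rng.uniform(0, 10, (50, 2))                 # sample locations (km)
    X = rng.normal(size=(50, 2))                         # e.g. optical covariates
    y = 0.2 + 0.1 * X[:, 0] - 0.05 * X[:, 1] + rng.normal(0, 0.01, 50)
    print(gwr_predict(X, y, coords, x0=np.array([0.3, -0.2]),
                      coord0=np.array([5.0, 5.0]), bandwidth=2.0))
    ```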

  11. Microbubble cloud characterization by nonlinear frequency mixing.

    PubMed

    Cavaro, M; Payan, C; Moysan, J; Baqué, F

    2011-05-01

    Within the framework of the Generation IV forum, France decided to develop sodium-cooled fast nuclear reactors. The French Safety Authority requires the associated monitoring of argon gas entrained in the sodium. This implies estimating the void fraction and a histogram describing the bubble population. In this context, the present letter studies the possibility of achieving an accurate determination of the histogram with acoustic methods. A nonlinear, two-frequency mixing technique has been implemented, and a specific optical device has been developed in order to validate the experimental results. The acoustically reconstructed histograms are in excellent agreement with those obtained using optical methods.

  12. Unifying error structures in commonly used biotracer mixing models.

    PubMed

    Stock, Brian C; Semmens, Brice X

    2016-10-01

    Mixing models are statistical tools that use biotracers to probabilistically estimate the contribution of multiple sources to a mixture. These biotracers may include contaminants, fatty acids, or stable isotopes, the latter of which are widely used in trophic ecology to estimate the mixed diet of consumers. Bayesian implementations of mixing models using stable isotopes (e.g., MixSIR, SIAR) are regularly used by ecologists for this purpose, but basic questions remain about when each is most appropriate. In this study, we describe the structural differences between common mixing model error formulations in terms of their assumptions about the predation process. We then introduce a new parameterization that unifies these mixing model error structures, as well as implicitly estimates the rate at which consumers sample from source populations (i.e., consumption rate). Using simulations and previously published mixing model datasets, we demonstrate that the new error parameterization outperforms existing models and provides an estimate of consumption. Our results suggest that the error structure introduced here will improve future mixing model estimates of animal diet.

  13. Impact of Planetary Boundary Layer Depth on Climatological Tracer Transport in the GEOS-5 AGCM

    NASA Astrophysics Data System (ADS)

    McGrath-Spangler, E. L.; Molod, A.

    2013-12-01

    Planetary boundary layer (PBL) processes have large implications for tropospheric tracer transport since surface fluxes are diluted by the depth of the PBL through vertical mixing. However, no consensus on PBL depth definition currently exists and various methods for estimating this parameter can give results that differ by hundreds of meters or more. In order to facilitate comparisons between the Goddard Earth Observation System (GEOS-5) and other modeling and observational systems, seven PBL depth estimation methods are used to diagnose PBL depth and produce climatologies that are evaluated here. All seven methods evaluate a single atmosphere so differences are related solely to the definition chosen. PBL depths that are estimated using a Richardson number are shallower than those given by methods based on the scalar diffusivity during warm, moist conditions at midday and collapse to lower values at night. In GEOS-5, the PBL depth is used in the estimation of the turbulent length scale and so impacts vertical mixing. Changing the method used to determine the PBL depth for this length scale thus changes the tracer transport. Using a bulk Richardson number method instead of a scalar diffusivity method produces changes in the quantity of Saharan dust lofted into the free troposphere and advected to North America, with more surface dust in North America during boreal summer and less in boreal winter. Additionally, greenhouse gases are considerably impacted. During boreal winter, changing the PBL depth definition produces carbon dioxide differences of nearly 5 ppm over Siberia and gradients of about 5 ppm over 1000 km in Europe. PBL depth changes are responsible for surface carbon monoxide changes of 20 ppb or more over the biomass burning regions of Africa.

  14. Generating Health Estimates by Zip Code: A Semiparametric Small Area Estimation Approach Using the California Health Interview Survey.

    PubMed

    Wang, Yueyan; Ponce, Ninez A; Wang, Pan; Opsomer, Jean D; Yu, Hongjian

    2015-12-01

    We propose a method to meet challenges in generating health estimates for granular geographic areas in which the survey sample size is extremely small. Our generalized linear mixed model predicts health outcomes using both individual-level and neighborhood-level predictors. The model's feature of nonparametric smoothing function on neighborhood-level variables better captures the association between neighborhood environment and the outcome. Using 2011 to 2012 data from the California Health Interview Survey, we demonstrate an empirical application of this method to estimate the fraction of residents without health insurance for Zip Code Tabulation Areas (ZCTAs). Our method generated stable estimates of uninsurance for 1519 of 1765 ZCTAs (86%) in California. For some areas with great socioeconomic diversity across adjacent neighborhoods, such as Los Angeles County, the modeled uninsured estimates revealed much heterogeneity among geographically adjacent ZCTAs. The proposed method can increase the value of health surveys by providing modeled estimates for health data at a granular geographic level. It can account for variations in health outcomes at the neighborhood level as a result of both socioeconomic characteristics and geographic locations.

  15. Effects of aggregate angularity on mix design characteristics and pavement performance.

    DOT National Transportation Integrated Search

    2009-12-01

    This research targeted two primary purposes: to evaluate current aggregate angularity test methods and to evaluate current aggregate angularity requirements in the Nebraska asphalt mixture/pavement specification. To meet the first research object...

  16. Evaluating the efficacy of DNA differential extraction methods for sexual assault evidence.

    PubMed

    Klein, Sonja B; Buoncristiani, Martin R

    2017-07-01

    Analysis of sexual assault evidence, often a mixture of spermatozoa and victim epithelial cells, represents a significant portion of a forensic DNA laboratory's case load. Successful genotyping of sperm DNA from these mixed cell samples, particularly with low amounts of sperm, depends on maximizing sperm DNA recovery and minimizing non-sperm DNA carryover. For evaluating the efficacy of the differential extraction, we present a method which uses a Separation Potential Ratio (SPRED) to consider both sperm DNA recovery and non-sperm DNA removal as variables for determining separation efficiency. In addition, we describe how the ratio of male-to-female DNA in the sperm fraction may be estimated by using the SPRED of the differential extraction method in conjunction with the estimated ratio of male-to-female DNA initially present on the mixed swab. This approach may be useful for evaluating or modifying differential extraction methods, as we demonstrate by comparing experimental results obtained from the traditional differential extraction and the Erase Sperm Isolation Kit (PTC) procedures.

  17. Logit-normal mixed model for Indian Monsoon rainfall extremes

    NASA Astrophysics Data System (ADS)

    Dietz, L. R.; Chatterjee, S.

    2014-03-01

    Describing the nature and variability of Indian monsoon rainfall extremes is a topic of much debate in the current literature. We suggest the use of a generalized linear mixed model (GLMM), specifically the logit-normal mixed model, to describe the underlying structure of this complex climatic event. Several GLMM algorithms are described, and simulations are performed to vet these algorithms before applying them to the Indian precipitation data procured from the National Climatic Data Center. The logit-normal model was applied with fixed covariates of latitude, longitude, elevation, and daily minimum and maximum temperatures, and a random intercept by weather station. In general, the estimation methods concurred in their suggestion of a relationship between the El Niño Southern Oscillation (ENSO) and estimates of extreme rainfall variability. This work provides a valuable starting point for extending GLMMs to incorporate the intricate dependencies in extreme climate events.

  18. Estimating pathway-specific contributions to biodegradation in aquifers based on dual isotope analysis: theoretical analysis and reactive transport simulations.

    PubMed

    Centler, Florian; Heße, Falk; Thullner, Martin

    2013-09-01

    At field sites with varying redox conditions, different redox-specific microbial degradation pathways contribute to total contaminant degradation. The identification of pathway-specific contributions to total contaminant removal is of high practical relevance, yet difficult to achieve with current methods. Current stable-isotope-fractionation-based techniques focus on the identification of dominant biodegradation pathways under constant environmental conditions. We present an approach based on dual stable isotope data to estimate the individual contributions of two redox-specific pathways. We apply this approach to carbon and hydrogen isotope data obtained from reactive transport simulations of an organic contaminant plume in a two-dimensional aquifer cross section to test the applicability of the method. To take aspects typically encountered at field sites into account, additional simulations addressed the effects of transverse mixing, diffusion-induced stable-isotope fractionation, heterogeneities in the flow field, and mixing in sampling wells on isotope-based estimates for aerobic and anaerobic pathway contributions to total contaminant biodegradation. Results confirm the general applicability of the presented estimation method which is most accurate along the plume core and less accurate towards the fringe where flow paths receive contaminant mass and associated isotope signatures from the core by transverse dispersion. The presented method complements the stable-isotope-fractionation-based analysis toolbox. At field sites with varying redox conditions, it provides a means to identify the relative importance of individual, redox-specific degradation pathways.

  19. Linear least squares approach for evaluating crack tip fracture parameters using isochromatic and isoclinic data from digital photoelasticity

    NASA Astrophysics Data System (ADS)

    Patil, Prataprao; Vyasarayani, C. P.; Ramji, M.

    2017-06-01

    In this work, the digital photoelasticity technique is used to estimate the crack tip fracture parameters for different crack configurations. Conventionally, only isochromatic data surrounding the crack tip are used for SIF estimation, but with the advent of digital photoelasticity, the pixel-wise availability of both isoclinic and isochromatic data can be exploited for SIF estimation in a novel way. A linear least squares approach is proposed to estimate the mixed-mode crack tip fracture parameters by solving the multi-parameter stress field equation. The stress intensity factor (SIF) is extracted from those estimated fracture parameters. The isochromatic and isoclinic data around the crack tip are estimated using the ten-step phase shifting technique. To obtain the unwrapped data, the adaptive quality guided phase unwrapping algorithm (AQGPU) is used. The mixed-mode fracture parameters, especially the SIFs, are estimated for specimen configurations such as single edge notch (SEN), center crack, and a straight crack ahead of an inclusion using the proposed algorithm. The experimental SIF values estimated using the proposed method are compared with analytical/finite element analysis (FEA) results and are found to be in good agreement.

  20. Combined Recirculatory-compartmental Population Pharmacokinetic Modeling of Arterial and Venous Plasma S(+) and R(-) Ketamine Concentrations.

    PubMed

    Henthorn, Thomas K; Avram, Michael J; Dahan, Albert; Gustafsson, Lars L; Persson, Jan; Krejcie, Tom C; Olofsen, Erik

    2018-05-16

    The pharmacokinetics of infused drugs have been modeled without regard for recirculatory or mixing kinetics. We used a unique ketamine dataset with simultaneous arterial and venous blood sampling, during and after separate S(+) and R(-) ketamine infusions, to develop a simplified recirculatory model of arterial and venous plasma drug concentrations. S(+) or R(-) ketamine was infused over 30 min on two occasions to 10 healthy male volunteers. Frequent, simultaneous arterial and forearm venous blood samples were obtained for up to 11 h. A multicompartmental pharmacokinetic model with front-end arterial mixing and venous blood components was developed using nonlinear mixed effects analyses. A three-compartment base pharmacokinetic model with additional arterial mixing and arm venous compartments and with shared S(+)/R(-) distribution kinetics proved superior to standard compartmental modeling approaches. Total pharmacokinetic flow was estimated to be 7.59 ± 0.36 l/min (mean ± standard error of the estimate), and S(+) and R(-) elimination clearances were 1.23 ± 0.04 and 1.06 ± 0.03 l/min, respectively. The arm-tissue link rate constant was 0.18 ± 0.01 min⁻¹, and the fraction of arm blood flow estimated to exchange with arm tissue was 0.04 ± 0.01. Arterial drug concentrations measured during drug infusion have two kinetically distinct components: partially or lung-mixed drug and fully mixed-recirculated drug. Front-end kinetics suggest the partially mixed concentration is proportional to the ratio of the infusion rate to total pharmacokinetic flow. This simplified modeling approach could lead to more generalizable models for target-controlled infusions and improved methods for analyzing pharmacokinetic-pharmacodynamic data.

  1. Bayesian estimation of self-similarity exponent

    NASA Astrophysics Data System (ADS)

    Makarava, Natallia; Benmehdi, Sabah; Holschneider, Matthias

    2011-08-01

    In this study we propose a Bayesian approach to the estimation of the Hurst exponent in terms of linear mixed models. Our method is applicable even to unevenly sampled signals and signals with gaps. We test the method using artificial fractional Brownian motion of different lengths and compare it with the detrended fluctuation analysis (DFA) technique. The estimation of the Hurst exponent of a Rosenblatt process is shown as an example of an H-self-similar process with non-Gaussian finite-dimensional distributions. Additionally, we perform an analysis with real data, the Dow Jones Industrial Average closing values, and analyze the temporal variation of its Hurst exponent.
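    For comparison purposes, the DFA technique mentioned above can be sketched compactly. This is a standard first-order DFA on synthetic data, not the authors' Bayesian estimator:

    ```python
    import numpy as np

    def dfa(signal, scales):
        """First-order detrended fluctuation analysis; returns the scaling
        exponent alpha (approximately the Hurst exponent for fractional
        Gaussian noise)."""
        prof = np.cumsum(signal - np.mean(signal))       # integrated profile
        F = []
        for s in scales:
            n_seg = len(prof) // s
            segs = prof[: n_seg * s].reshape(n_seg, s)
            t = np.arange(s)
            # Linear detrend within each window, then rms of the residuals:
            ms = [np.mean((seg - np.polyval(np.polyfit(t, seg, 1), t)) ** 2)
                  for seg in segs]
            F.append(np.sqrt(np.mean(ms)))
        return np.polyfit(np.log(scales), np.log(F), 1)[0]

    x = np.random.default_rng(0).normal(size=4096)       # white noise: alpha ~ 0.5
    print(dfa(x, scales=[16, 32, 64, 128, 256]))
    ```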

  2. A Mixed QM/MM Scoring Function to Predict Protein-Ligand Binding Affinity

    PubMed Central

    Hayik, Seth A.; Dunbrack, Roland; Merz, Kenneth M.

    2010-01-01

    Computational methods for predicting protein-ligand binding free energy continue to be popular as a potential cost-cutting method in the drug discovery process. However, accurate predictions are often difficult to make, as estimates must be made for certain electronic and entropic terms in conventional force field based scoring functions. Mixed quantum mechanics/molecular mechanics (QM/MM) methods allow electronic effects for a small region of the protein to be calculated, treating the remaining atoms as a fixed charge background for the active site. Such a semi-empirical QM/MM scoring function has been implemented in AMBER using DivCon and tested on a set of 23 metalloprotein-ligand complexes, where QM/MM methods provide a particular advantage in the modeling of the metal ion. The binding affinity of this set of proteins can be calculated with an R2 of 0.64 and a standard deviation of 1.88 kcal/mol without fitting, and with an R2 of 0.71 and a standard deviation of 1.69 kcal/mol with fitted weighting of the individual scoring terms. In this study we explore various methods of calculating terms in the binding free energy equation, including entropy estimates and minimization standards. From these studies we found that using the rotatable bond estimate of ligand entropy results in a reasonable R2 of 0.63 without fitting. We also found that using the ESCF energy of the proteins without minimization resulted in an R2 of 0.57 when using the rotatable bond entropy estimate. PMID:21221417

  3. Aliasing Signal Separation of Superimposed Abrasive Debris Based on Degenerate Unmixing Estimation Technique

    PubMed Central

    Li, Tongyang; Wang, Shaoping; Zio, Enrico; Shi, Jian; Hong, Wei

    2018-01-01

    Leakage is the most important failure mode in aircraft hydraulic systems, caused by wear and tear between the friction pairs of components. Accurate detection of abrasive debris can reveal the wear condition and help predict a system's lifespan. The radial magnetic field (RMF)-based debris detection method provides an online solution for monitoring the wear condition intuitively, which potentially enables more accurate diagnosis and prognosis of ongoing failures in aviation hydraulic systems. To address the serious mixing of pipe abrasive debris, this paper focuses on separating the superimposed abrasive debris signals of an RMF abrasive sensor based on the degenerate unmixing estimation technique. By accurately separating the debris signals and characterizing the morphology and amount of the abrasive debris, the RMF-based abrasive sensor can provide wear trends and size estimates of the wear particles. A well-designed experiment was conducted, and the results show that the proposed method can effectively separate the mixed debris and give an accurate count of the debris based on RMF abrasive sensor detection. PMID:29543733

  4. Spatial patterns of mixing in the Solomon Sea

    NASA Astrophysics Data System (ADS)

    Alberty, M. S.; Sprintall, J.; MacKinnon, J.; Ganachaud, A.; Cravatte, S.; Eldin, G.; Germineaud, C.; Melet, A.

    2017-05-01

    The Solomon Sea is a marginal sea in the southwest Pacific that connects subtropical and equatorial circulation, constricting transport of South Pacific Subtropical Mode Water and Antarctic Intermediate Water through its deep, narrow channels. Marginal sea topography inhibits internal waves from propagating out and into the open ocean, making these regions hot spots for energy dissipation and mixing. Data from two hydrographic cruises and from Argo profiles are employed to indirectly infer mixing from observations for the first time in the Solomon Sea. Thorpe and finescale methods indirectly estimate the rate of dissipation of kinetic energy (ɛ) and indicate that it is maximum in the surface and thermocline layers and decreases by 2-3 orders of magnitude by 2000 m depth. Estimates of diapycnal diffusivity from the observations and a simple diffusive model agree in magnitude but have different depth structures, likely reflecting the combined influence of both diapycnal mixing and isopycnal stirring. Spatial variability of ɛ is large, spanning at least 2 orders of magnitude within isopycnal layers. Seasonal variability of ɛ reflects regional monsoonal changes in large-scale oceanic and atmospheric conditions with ɛ increased in July and decreased in March. Finally, tide power input and topographic roughness are well correlated with mean spatial patterns of mixing within intermediate and deep isopycnals but are not clearly correlated with thermocline mixing patterns.

  5. An open source Bayesian Monte Carlo isotope mixing model with applications in Earth surface processes

    NASA Astrophysics Data System (ADS)

    Arendt, Carli A.; Aciego, Sarah M.; Hetland, Eric A.

    2015-05-01

    The implementation of isotopic tracers as constraints on source contributions has become increasingly relevant to understanding Earth surface processes. Interpretation of these isotopic tracers has become more accessible with the development of Bayesian Monte Carlo (BMC) mixing models, which allow uncertainty in mixing end-members and provide methodology for systems with multicomponent mixing. This study presents an open source multiple isotope BMC mixing model that is applicable to Earth surface environments with sources exhibiting distinct end-member isotopic signatures. Our model is first applied to new δ18O and δD measurements from the Athabasca Glacier, which showed expected seasonal melt evolution trends, and we rigorously assessed the statistical relevance of the resulting fraction estimations. To highlight the broad applicability of our model to a variety of Earth surface environments and relevant isotopic systems, we expand our model to two additional case studies: deriving melt sources from δ18O, δD, and 222Rn measurements of Greenland Ice Sheet bulk water samples and assessing nutrient sources from ɛNd and 87Sr/86Sr measurements of Hawaiian soil cores. The model produces results for the Greenland Ice Sheet and Hawaiian soil data sets that are consistent with the originally published fractional contribution estimates. The advantage of this method is that it quantifies the error induced by variability in the end-member compositions, unrealized by the models previously applied to the above case studies. Results from all three case studies demonstrate the broad applicability of this statistical BMC isotopic mixing model for estimating source contribution fractions in a variety of Earth surface systems.
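
    The core idea (propagating end-member uncertainty through a mass balance by Monte Carlo sampling) can be sketched for a two-component δ18O system as follows; the end-member means and spreads are invented, and the published model additionally handles multiple isotopes within a full Bayesian treatment.

      import numpy as np

      rng = np.random.default_rng(1)
      n = 100_000

      # Hypothetical δ18O end-members (mean, sd) and one mixed-sample value
      a = rng.normal(-22.0, 0.8, n)       # end-member A (e.g., snow melt)
      b = rng.normal(-17.0, 0.6, n)       # end-member B (e.g., glacial ice)
      mix = -19.0                         # observed mixture

      fa = (mix - b) / (a - b)            # two-component mass balance
      fa = fa[(fa >= 0.0) & (fa <= 1.0)]  # keep physically meaningful fractions
      print(fa.mean(), np.percentile(fa, [2.5, 97.5]))  # fraction of A, 95% interval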

  6. Calibration of d.b.h.-height equations for southern hardwoods

    Treesearch

    Thomas B. Lynch; A. Gordon Holley; Douglas J. Stevenson

    2006-01-01

    Data from southern hardwood stands in East Texas were used to estimate parameters for d.b.h.-height equations. Mixed model estimation methods were used, so that the stand from which a tree was sampled was considered a random effect. This makes it possible to calibrate these equations using data collected in a local stand of interest, by using d.b.h. and total height...

  7. Do we really need a large number of particles to simulate bimolecular reactive transport with random walk methods? A kernel density estimation approach

    NASA Astrophysics Data System (ADS)

    Rahbaralam, Maryam; Fernàndez-Garcia, Daniel; Sanchez-Vila, Xavier

    2015-12-01

    Random walk particle tracking methods are a computationally efficient family of methods to solve reactive transport problems. While the number of particles in most realistic applications is on the order of 10⁶-10⁹, the number of reactive molecules even in diluted systems might be on the order of fractions of Avogadro's number. Thus, each particle actually represents a group of potentially reactive molecules. The use of a low number of particles may result not only in a loss of accuracy, but also in an improper reproduction of the mixing process, limited by diffusion. Recent works have used this effect as a proxy to model incomplete mixing in porous media. In this work, we propose using a Kernel Density Estimation (KDE) of the concentrations that recovers the expected results for a well-mixed solution with a limited number of particles. The idea consists of treating each particle as a sample drawn from the pool of molecules that it represents; this way, the actual location of a tracked particle is seen as a sample drawn from the density function of the location of molecules represented by that given particle, rigorously represented by a kernel density function. The probability of reaction can be obtained by combining the kernels associated with two potentially reactive particles. We demonstrate that the observed deviation in the reaction vs time curves in numerical experiments reported in the literature could be attributed to the statistical method used to reconstruct concentrations (fixed particle support) from discrete particle distributions, and not to the occurrence of true incomplete mixing. We further explore the evolution of the kernel size with time, linking it to the diffusion process. Our results show that KDEs are powerful tools to improve computational efficiency and robustness in reactive transport simulations, and indicate that incomplete mixing in diluted systems should be modeled based on alternative mechanistic models and not on a limited number of particles.
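
    A minimal illustration of the contrast between a fixed-support histogram and a kernel reconstruction of concentrations, using an invented Gaussian plume rather than the paper's reactive transport setup:

      import numpy as np
      from scipy.stats import gaussian_kde

      rng = np.random.default_rng(2)
      x = rng.normal(0.0, 1.0, 500)             # particle positions in a plume
      grid = np.linspace(-4.0, 4.0, 201)
      true = np.exp(-grid ** 2 / 2) / np.sqrt(2 * np.pi)

      hist, edges = np.histogram(x, bins=40, range=(-4, 4), density=True)
      binned = hist[np.clip(np.digitize(grid, edges) - 1, 0, 39)]  # fixed support
      smooth = gaussian_kde(x)(grid)            # kernel estimate (Scott bandwidth)

      # The kernel estimate is typically closer to the true concentration field
      print(np.abs(binned - true).mean(), np.abs(smooth - true).mean())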

  8. Galactic cosmic ray transport methods and radiation quality issues

    NASA Technical Reports Server (NTRS)

    Townsend, L. W.; Wilson, J. W.; Cucinotta, F. A.; Shinn, J. L.

    1992-01-01

    An overview of galactic cosmic ray (GCR) interaction and transport methods, as implemented in the Langley Research Center GCR transport code, is presented. Representative results for solar minimum, exo-magnetospheric GCR dose equivalents in water are presented on a component-by-component basis for various thicknesses of aluminum shielding. The impact of proposed changes to the currently used quality factors on exposure estimates and shielding requirements is quantified. Using the cellular track model of Katz, estimates of relative biological effectiveness (RBE) for the mixed GCR radiation fields are also made.

  9. Estimating site index from tree species composition in mixed stands of upland eastern hardwoods: Should shrubs be included?

    Treesearch

    W. Henry McNab

    2010-01-01

    Site index is the most widely used method for site quality assessment in hardwood forests of the eastern United States. Its application in most oak (Quercus sp. L.) dominated stands is often problematic, however, because available sample trees usually do not meet important underlying assumptions of the method. A prototype method for predicting site index from tree...

  10. Magnitude and sources of bias in the detection of mixed strain M. tuberculosis infection.

    PubMed

    Plazzotta, Giacomo; Cohen, Ted; Colijn, Caroline

    2015-03-07

    High resolution tests for genetic variation reveal that individuals may simultaneously host more than one distinct strain of Mycobacterium tuberculosis. Previous studies find that this phenomenon, which we will refer to as "mixed infection", may affect the outcomes of treatment for infected individuals and may influence the impact of population-level interventions against tuberculosis. In areas where the incidence of TB is high, mixed infections have been found in nearly 20% of patients; these studies may underestimate the actual prevalence of mixed infection given that tests may not be sufficiently sensitive for detecting minority strains. Specific reasons for failing to detect mixed infections include low initial numbers of minority strain cells in sputum, stochastic growth in culture, and the physical division of initial samples into parts (typically only one of which is genotyped). In this paper, we develop a mathematical framework that models the study designs aimed at detecting mixed infections. Using both a deterministic and a stochastic approach, we obtain posterior estimates of the prevalence of mixed infection. We find that the posterior estimate of the prevalence of mixed infection may be substantially higher than the fraction of cases in which it is detected. We characterize this bias in terms of the sensitivity of the genotyping method and the relative growth rates and initial population sizes of the different strains collected in sputum. Copyright © 2015 The Authors. Published by Elsevier Ltd. All rights reserved.
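
    The flavor of the bias correction can be conveyed with a simple Bayesian sketch: given an assumed detection sensitivity, the posterior prevalence of mixed infection sits above the raw detected fraction. The counts and sensitivity below are invented, and the paper's deterministic and stochastic growth models are not reproduced.

      import numpy as np

      rng = np.random.default_rng(3)
      n, k = 200, 30       # patients tested, mixed infections detected (15%)
      sens = 0.6           # assumed probability a true mixed infection is detected

      theta = rng.uniform(0.0, 1.0, 500_000)         # flat prior on prevalence
      p_detect = theta * sens                        # P(detected mixed | theta)
      w = p_detect ** k * (1 - p_detect) ** (n - k)  # binomial likelihood
      post = rng.choice(theta, p=w / w.sum(), size=50_000)
      print(np.percentile(post, [2.5, 50, 97.5]))    # centered near 0.25, not 0.15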

  11. Non-structural carbohydrates in woody plants compared among laboratories.

    PubMed

    Quentin, Audrey G; Pinkard, Elizabeth A; Ryan, Michael G; Tissue, David T; Baggett, L Scott; Adams, Henry D; Maillard, Pascale; Marchand, Jacqueline; Landhäusser, Simon M; Lacointe, André; Gibon, Yves; Anderegg, William R L; Asao, Shinichi; Atkin, Owen K; Bonhomme, Marc; Claye, Caroline; Chow, Pak S; Clément-Vidal, Anne; Davies, Noel W; Dickman, L Turin; Dumbur, Rita; Ellsworth, David S; Falk, Kristen; Galiano, Lucía; Grünzweig, José M; Hartmann, Henrik; Hoch, Günter; Hood, Sharon; Jones, Joanna E; Koike, Takayoshi; Kuhlmann, Iris; Lloret, Francisco; Maestro, Melchor; Mansfield, Shawn D; Martínez-Vilalta, Jordi; Maucourt, Mickael; McDowell, Nathan G; Moing, Annick; Muller, Bertrand; Nebauer, Sergio G; Niinemets, Ülo; Palacio, Sara; Piper, Frida; Raveh, Eran; Richter, Andreas; Rolland, Gaëlle; Rosas, Teresa; Saint Joanis, Brigitte; Sala, Anna; Smith, Renee A; Sterck, Frank; Stinziano, Joseph R; Tobias, Mari; Unda, Faride; Watanabe, Makoto; Way, Danielle A; Weerasinghe, Lasantha K; Wild, Birgit; Wiley, Erin; Woodruff, David R

    2015-11-01

    Non-structural carbohydrates (NSC) in plant tissue are frequently quantified to make inferences about plant responses to environmental conditions. Laboratories publishing estimates of NSC of woody plants use many different methods to evaluate NSC. We asked whether NSC estimates in the recent literature could be quantitatively compared among studies. We also asked whether any differences among laboratories were related to the extraction and quantification methods used to determine starch and sugar concentrations. These questions were addressed by sending sub-samples collected from five woody plant tissues, which varied in NSC content and chemical composition, to 29 laboratories. Each laboratory analyzed the samples with their laboratory-specific protocols, based on recent publications, to determine concentrations of soluble sugars, starch and their sum, total NSC. Laboratory estimates differed substantially for all samples. For example, estimates for Eucalyptus globulus leaves (EGL) varied from 23 to 116 (mean = 56) mg g⁻¹ for soluble sugars, 6-533 (mean = 94) mg g⁻¹ for starch and 53-649 (mean = 153) mg g⁻¹ for total NSC. Mixed model analysis of variance showed that much of the variability among laboratories was unrelated to the categories we used for extraction and quantification methods (method category R² = 0.05-0.12 for soluble sugars, 0.10-0.33 for starch and 0.01-0.09 for total NSC). For EGL, the difference between the highest and lowest least squares means for categories in the mixed model analysis was 33 mg g⁻¹ for total NSC, compared with the range of laboratory estimates of 596 mg g⁻¹. Laboratories were reasonably consistent in their ranks of estimates among tissues for starch (r = 0.41-0.91), but less so for total NSC (r = 0.45-0.84) and soluble sugars (r = 0.11-0.83). Our results show that NSC estimates for woody plant tissues cannot be compared among laboratories. The relative changes in NSC between treatments measured within a laboratory may be comparable within and between laboratories, especially for starch. To obtain comparable NSC estimates, we suggest that users can either adopt the reference method given in this publication, or report estimates for a portion of samples using the reference method, and report estimates for a standard reference material. Researchers interested in NSC estimates should work to identify and adopt standard methods. © The Author 2015. Published by Oxford University Press. All rights reserved.

  12. Small area estimation of proportions with different levels of auxiliary data.

    PubMed

    Chandra, Hukum; Kumar, Sushil; Aditya, Kaustav

    2018-03-01

    Binary data are often of interest in many small area estimation applications. The use of standard small area estimation methods based on linear mixed models becomes problematic for such data. An empirical plug-in predictor (EPP) under a unit-level generalized linear mixed model with logit link function is often used for the estimation of a small area proportion. However, this EPP requires the availability of unit-level population information for auxiliary data, which may not always be accessible. As a consequence, in many practical situations this EPP approach cannot be applied. Based on the level of auxiliary information available, different small area predictors for the estimation of proportions are proposed. Analytic and bootstrap approaches to estimating the mean squared error of the proposed small area predictors are also developed. Monte Carlo simulations based on both simulated and real data show that the proposed small area predictors work well for generating small area estimates of proportions and represent a practical alternative to the above approach. The developed predictor is applied to generate estimates of the proportions of indebted farm households at the district level using debt investment survey data from India. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  13. Tutorial on Biostatistics: Linear Regression Analysis of Continuous Correlated Eye Data

    PubMed Central

    Ying, Gui-shuang; Maguire, Maureen G; Glynn, Robert; Rosner, Bernard

    2017-01-01

    Purpose: To describe and demonstrate appropriate linear regression methods for analyzing correlated continuous eye data. Methods: We describe several approaches to regression analysis involving both eyes, including mixed effects and marginal models under various covariance structures to account for inter-eye correlation. We demonstrate, with SAS statistical software, applications in a study comparing baseline refractive error between one eye with choroidal neovascularization (CNV) and the unaffected fellow eye, and in a study determining factors associated with visual field data in the elderly. Results: When refractive error from both eyes was analyzed with standard linear regression without accounting for inter-eye correlation (adjusting for demographic and ocular covariates), the difference between eyes with CNV and fellow eyes was 0.15 diopters (D; 95% confidence interval, CI −0.03 to 0.32D, P=0.10). Using a mixed effects model or a marginal model, the estimated difference was the same but with a narrower 95% CI (0.01 to 0.28D, P=0.03). Standard regression for visual field data from both eyes provided biased estimates of the standard error (generally underestimated) and smaller P-values, while analysis of the worse eye provided larger P-values than mixed effects models and marginal models. Conclusion: In research involving both eyes, ignoring inter-eye correlation can lead to invalid inferences. Analysis using only right or left eyes is valid but decreases power. Worse-eye analysis can provide less power and biased estimates of effect. Mixed effects or marginal models using the eye as the unit of analysis should be used to appropriately account for inter-eye correlation and to maximize power and precision. PMID:28102741
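
    Outside SAS, the same random-intercept idea can be sketched with a linear mixed model in Python's statsmodels; the two-eyes-per-subject data below are simulated with invented effect sizes.

      import numpy as np
      import pandas as pd
      import statsmodels.formula.api as smf

      rng = np.random.default_rng(4)
      n = 80
      subj = np.repeat(np.arange(n), 2)        # two eyes per subject
      cnv = np.tile([1, 0], n)                 # 1 = CNV eye, 0 = fellow eye
      subj_eff = rng.normal(0, 0.8, n)[subj]   # shared effect -> inter-eye correlation
      refr = 0.15 * cnv + subj_eff + rng.normal(0, 0.5, 2 * n)
      df = pd.DataFrame({"refr": refr, "cnv": cnv, "subj": subj})

      # A random intercept per subject accounts for the inter-eye correlation
      fit = smf.mixedlm("refr ~ cnv", df, groups="subj").fit()
      print(fit.params["cnv"], fit.bse["cnv"])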

  14. Can we improve top-down GHG inverse methods through informed prior and better representations of atmospheric transport? Insights from the Atmospheric Carbon and Transport (ACT) - America Aircraft Mission

    NASA Astrophysics Data System (ADS)

    Feng, S.; Lauvaux, T.; Keller, K.; Davis, K. J.

    2016-12-01

    Current estimates of biogenic carbon fluxes over North America based on top-down atmospheric inversions are subject to considerable uncertainty. This uncertainty stems in large part from uncertain prior flux estimates with their associated error covariances, and from approximations in the atmospheric transport models that link observed carbon dioxide mixing ratios with surface fluxes. Specifically, approximations in the representation of vertical mixing associated with atmospheric turbulence or convective transport, together with largely under-determined prior fluxes and their error structures, significantly hamper our capacity to reliably estimate regional carbon fluxes. The Atmospheric Carbon and Transport - America (ACT-America) mission aims at reducing the uncertainties in inverse fluxes at the regional scale by deploying airborne and ground-based platforms to characterize atmospheric GHG mixing ratios and the concurrent atmospheric dynamics. Two aircraft measure the 3-dimensional distribution of greenhouse gases at synoptic scales, focusing on the atmospheric boundary layer and the free troposphere during both fair and stormy weather conditions. Here we analyze two main questions: (i) What level of information can we expect from the currently planned observations? (ii) How might ACT-America reduce the hindcast and predictive uncertainty of carbon estimates over North America?

  15. Optimal clinical trial design based on a dichotomous Markov-chain mixed-effect sleep model.

    PubMed

    Steven Ernest, C; Nyberg, Joakim; Karlsson, Mats O; Hooker, Andrew C

    2014-12-01

    D-optimal designs for discrete-type responses have been derived using generalized linear mixed models, simulation-based methods, and analytical approximations for computing the Fisher information matrix (FIM) of nonlinear mixed effect models with homogeneous probabilities over time. In this work, D-optimal designs using an analytical approximation of the FIM for a dichotomous, non-homogeneous, Markov-chain phase advanced sleep nonlinear mixed effect model were investigated. The nonlinear mixed effect model consisted of transition probabilities of dichotomous sleep data estimated as logistic functions using piecewise linear functions. Theoretical linear and nonlinear dose effects were added to the transition probabilities to modify the probability of being in either sleep stage. D-optimal designs were computed by determining an analytical approximation of the FIM for each Markov component (one where the previous state was awake and another where the previous state was asleep). Each Markov component FIM was weighted either equally or by the average probability of the response being awake or asleep over the night, and summed to derive the total FIM (FIM(total)). The reference designs were placebo, 0.1-, 1-, 6-, 10- and 20-mg dosing for a 2- to 6-way crossover study in six dosing groups. Optimized design variables were dose and number of subjects in each dose group. The designs were validated using stochastic simulation/re-estimation (SSE). Contrary to expectations, the predicted parameter uncertainty obtained via FIM(total) was larger than the uncertainty in parameter estimates computed by SSE. Nevertheless, the D-optimal designs decreased the uncertainty of parameter estimates relative to the reference designs. Additionally, the improvement for the D-optimal designs was more pronounced using SSE than predicted via FIM(total). Through the use of an approximate analytic solution and weighting schemes, the FIM(total) for a non-homogeneous, dichotomous Markov-chain phase advanced sleep model was computed and provided more efficient trial designs and increased nonlinear mixed-effects modeling parameter precision.

  16. The estimation of branching curves in the presence of subject-specific random effects.

    PubMed

    Elmi, Angelo; Ratcliffe, Sarah J; Guo, Wensheng

    2014-12-20

    Branching curves are a technique for modeling curves that change trajectory at a change (branching) point. Currently, the estimation framework is limited to independent data, and smoothing splines are used for estimation. This article aims to extend the branching curve framework to the longitudinal data setting where the branching point varies by subject. If the branching point is modeled as a random effect, then the longitudinal branching curve framework is a semiparametric nonlinear mixed effects model. Given existing issues with using random effects within a smoothing spline, we express the model as a B-spline based semiparametric nonlinear mixed effects model. Simple, clever smoothness constraints are enforced on the B-splines at the change point. The method is applied to Women's Health data where we model the shape of the labor curve (cervical dilation measured longitudinally) before and after treatment with oxytocin (a labor stimulant). Copyright © 2014 John Wiley & Sons, Ltd.

  17. Additive mixed effect model for recurrent gap time data.

    PubMed

    Ding, Jieli; Sun, Liuquan

    2017-04-01

    Gap times between recurrent events are often of primary interest in medical and observational studies. The additive hazards model, focusing on risk differences rather than risk ratios, has been widely used in practice. However, the marginal additive hazards model does not take the dependence among gap times into account. In this paper, we propose an additive mixed effect model to analyze gap time data, and the proposed model includes a subject-specific random effect to account for the dependence among the gap times. Estimating equation approaches are developed for parameter estimation, and the asymptotic properties of the resulting estimators are established. In addition, some graphical and numerical procedures are presented for model checking. The finite sample behavior of the proposed methods is evaluated through simulation studies, and an application to a data set from a clinic study on chronic granulomatous disease is provided.

  18. Quantifying Key Climate Parameter Uncertainties Using an Earth System Model with a Dynamic 3D Ocean

    NASA Astrophysics Data System (ADS)

    Olson, R.; Sriver, R. L.; Goes, M. P.; Urban, N.; Matthews, D.; Haran, M.; Keller, K.

    2011-12-01

    Climate projections hinge critically on uncertain climate model parameters such as climate sensitivity, vertical ocean diffusivity and anthropogenic sulfate aerosol forcings. Climate sensitivity is defined as the equilibrium global mean temperature response to a doubling of atmospheric CO2 concentrations. Vertical ocean diffusivity parameterizes sub-grid scale ocean vertical mixing processes. These parameters are typically estimated using Intermediate Complexity Earth System Models (EMICs) that lack a full 3D representation of the oceans, thereby neglecting the effects of mixing on ocean dynamics and meridional overturning. We improve on these studies by employing an EMIC with a dynamic 3D ocean model to estimate these parameters. We carry out historical climate simulations with the University of Victoria Earth System Climate Model (UVic ESCM) varying parameters that affect climate sensitivity, vertical ocean mixing, and effects of anthropogenic sulfate aerosols. We use a Bayesian approach whereby the likelihood of each parameter combination depends on how well the model simulates surface air temperature and upper ocean heat content. We use a Gaussian process emulator to interpolate the model output to an arbitrary parameter setting. We use a Markov chain Monte Carlo method to estimate the posterior probability distribution function (pdf) of these parameters. We explore the sensitivity of the results to prior assumptions about the parameters. In addition, we estimate the relative skill of different observations to constrain the parameters. We quantify the uncertainty in parameter estimates stemming from climate variability, model and observational errors. We explore the sensitivity of key decision-relevant climate projections to these parameters. We find that climate sensitivity and vertical ocean diffusivity estimates are consistent with previously published results. The climate sensitivity pdf is strongly affected by the prior assumptions, and by the scaling parameter for the aerosols. The estimation method is computationally fast and can be used with more complex models where climate sensitivity is diagnosed rather than prescribed. The parameter estimates can be used to create probabilistic climate projections using the UVic ESCM model in future studies.
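
    The estimation step can be caricatured in a few lines of random-walk Metropolis sampling; the "emulator" below is an arbitrary smooth stand-in for the Gaussian process surrogate, and the observation, error, and prior bounds are invented.

      import numpy as np

      rng = np.random.default_rng(5)

      def emulator(theta):
          # Stand-in for a GP emulator mapping (sensitivity S, diffusivity K)
          # to a model-data summary; purely illustrative.
          S, K = theta
          return 0.6 * S - 0.2 * np.sqrt(K)

      obs, sigma = 1.1, 0.15                         # pseudo-observation and error

      def log_post(theta):
          S, K = theta
          if not (0.5 < S < 10.0 and 0.05 < K < 5.0):  # uniform priors
              return -np.inf
          return -0.5 * ((emulator(theta) - obs) / sigma) ** 2

      theta = np.array([3.0, 1.0])
      lp, chain = log_post(theta), []
      for _ in range(20_000):
          prop = theta + rng.normal(0.0, [0.3, 0.1])   # random-walk proposal
          lp_prop = log_post(prop)
          if np.log(rng.uniform()) < lp_prop - lp:     # Metropolis accept step
              theta, lp = prop, lp_prop
          chain.append(theta)
      print(np.percentile(np.array(chain)[5000:], [5, 50, 95], axis=0))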

  19. Comparison of CTT and Rasch-based approaches for the analysis of longitudinal Patient Reported Outcomes.

    PubMed

    Blanchin, Myriam; Hardouin, Jean-Benoit; Le Neel, Tanguy; Kubis, Gildas; Blanchard, Claire; Mirallié, Eric; Sébille, Véronique

    2011-04-15

    Health sciences frequently deal with Patient Reported Outcomes (PRO) data for the evaluation of concepts, in particular health-related quality of life, which cannot be directly measured and are often called latent variables. Two approaches are commonly used for the analysis of such data: Classical Test Theory (CTT) and Item Response Theory (IRT). Longitudinal data are often collected to analyze the evolution of an outcome over time. The most adequate strategy to analyze longitudinal latent variables, which can be based either on CTT or on IRT models, remains to be identified. This strategy must take into account the latent characteristic of what PROs are intended to measure as well as the specificity of longitudinal designs. A simple and widely used IRT model is the Rasch model. The purpose of our study was to compare CTT and Rasch-based approaches to analyze longitudinal PRO data regarding type I error, power, and time effect estimation bias. Four methods were compared: the Score and Mixed models (SM) method, based on the CTT approach, and the Rasch and Mixed models (RM), Plausible Values (PV), and Longitudinal Rasch model (LRM) methods, all based on the Rasch model. All methods showed comparable results in terms of type I error, all close to 5 per cent. The LRM and SM methods presented comparable power and unbiased time effect estimations, whereas the RM and PV methods showed low power and biased time effect estimations. This suggests that the RM and PV methods should be avoided when analyzing longitudinal latent variables. Copyright © 2010 John Wiley & Sons, Ltd.

  20. BACOM2.0 facilitates absolute normalization and quantification of somatic copy number alterations in heterogeneous tumor

    NASA Astrophysics Data System (ADS)

    Fu, Yi; Yu, Guoqiang; Levine, Douglas A.; Wang, Niya; Shih, Ie-Ming; Zhang, Zhen; Clarke, Robert; Wang, Yue

    2015-09-01

    Most published copy number datasets on solid tumors were obtained from specimens comprised of mixed cell populations, for which the varying tumor-stroma proportions are unknown or unreported. The inability to correct for signal mixing represents a major limitation on the use of these datasets for subsequent analyses, such as discerning deletion types or detecting driver aberrations. We describe the BACOM2.0 method with enhanced accuracy and functionality to normalize copy number signals, detect deletion types, estimate tumor purity, quantify true copy numbers, and calculate average-ploidy value. While BACOM has been validated and used with promising results, subsequent BACOM analysis of the TCGA ovarian cancer dataset found that the estimated average tumor purity was lower than expected. In this report, we first show that this lowered estimate of tumor purity is the combined result of imprecise signal normalization and parameter estimation. Then, we describe effective allele-specific absolute normalization and quantification methods that can enhance BACOM applications in many biological contexts while in the presence of various confounders. Finally, we discuss the advantages of BACOM in relation to alternative approaches. Here we detail this revised computational approach, BACOM2.0, and validate its performance in real and simulated datasets.

  1. Identifying Pleiotropic Genes in Genome-Wide Association Studies for Multivariate Phenotypes with Mixed Measurement Scales

    PubMed Central

    Williams, L. Keoki; Buu, Anne

    2017-01-01

    We propose a multivariate genome-wide association test for mixed continuous, binary, and ordinal phenotypes. A latent response model is used to estimate the correlation between phenotypes with different measurement scales so that the empirical distribution of the Fisher's combination statistic under the null hypothesis is estimated efficiently. The simulation study shows that our proposed correlation estimation methods have high levels of accuracy. More importantly, our approach conservatively estimates the variance of the test statistic so that the type I error rate is controlled. The simulation also shows that the proposed test maintains power at a level very close to that of the ideal analysis based on known latent phenotypes while controlling the type I error. In contrast, conventional approaches, such as dichotomizing all observed phenotypes or treating them as continuous variables, could either reduce the power or employ a linear regression model unfit for the data. Furthermore, the statistical analysis on the database of the Study of Addiction: Genetics and Environment (SAGE) demonstrates that conducting a multivariate test on multiple phenotypes can increase the power of identifying markers that might not otherwise be chosen using marginal tests. The proposed method also offers a new approach to analyzing the Fagerström Test for Nicotine Dependence as multivariate phenotypes in genome-wide association studies. PMID:28081206
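
    To make the combination statistic concrete, the sketch below contrasts the naive chi-square null with an empirical null simulated under correlated phenotypes; the p-values and correlation matrix are invented, and the paper's latent response model for mixed measurement scales is not reproduced.

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(6)

      p = np.array([0.04, 0.20, 0.01])      # marginal p-values for one marker
      T = -2 * np.log(p).sum()              # Fisher's combination statistic
      k = len(p)
      print(stats.chi2.sf(T, 2 * k))        # null p-value assuming independence

      # Empirical null that respects correlation between phenotypes
      R = np.array([[1.0, 0.4, 0.3],
                    [0.4, 1.0, 0.5],
                    [0.3, 0.5, 1.0]])
      z = rng.multivariate_normal(np.zeros(k), R, size=100_000)
      T_null = -2 * np.log(2 * stats.norm.sf(np.abs(z))).sum(axis=1)
      print((T_null >= T).mean())           # empirical, correlation-aware p-value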

  2. Logit-normal mixed model for Indian monsoon precipitation

    NASA Astrophysics Data System (ADS)

    Dietz, L. R.; Chatterjee, S.

    2014-09-01

    Describing the nature and variability of Indian monsoon precipitation is a topic of much debate in the current literature. We suggest the use of a generalized linear mixed model (GLMM), specifically the logit-normal mixed model, to describe the underlying structure of this complex climatic event. Four GLMM algorithms are described, and simulations are performed to vet these algorithms before applying them to the Indian precipitation data. The logit-normal model was applied to light, moderate, and extreme rainfall. Findings indicated that physical constructs were preserved by the models, and random effects were significant in many cases. We also found that GLMM estimation methods were sensitive to tuning parameters and assumptions, and we therefore recommend the use of multiple methods in applications. This work provides a novel use of GLMM and promotes its addition to the gamut of tools for analysis in studying climate phenomena.

  3. Quantum speedup of Monte Carlo methods

    PubMed Central

    Montanaro, Ashley

    2015-01-01

    Monte Carlo methods use random sampling to estimate numerical quantities which are hard to compute deterministically. One important example is the use in statistical physics of rapidly mixing Markov chains to approximately compute partition functions. In this work, we describe a quantum algorithm which can accelerate Monte Carlo methods in a very general setting. The algorithm estimates the expected output value of an arbitrary randomized or quantum subroutine with bounded variance, achieving a near-quadratic speedup over the best possible classical algorithm. Combining the algorithm with the use of quantum walks gives a quantum speedup of the fastest known classical algorithms with rigorous performance bounds for computing partition functions, which use multiple-stage Markov chain Monte Carlo techniques. The quantum algorithm can also be used to estimate the total variation distance between probability distributions efficiently. PMID:26528079

  4. Deconvolution of mixing time series on a graph

    PubMed Central

    Blocker, Alexander W.; Airoldi, Edoardo M.

    2013-01-01

    In many applications we are interested in making inference on latent time series from indirect measurements, which are often low-dimensional projections resulting from mixing or aggregation. Positron emission tomography, super-resolution, and network traffic monitoring are some examples. Inference in such settings requires solving a sequence of ill-posed inverse problems, y_t = A x_t, where the projection mechanism provides information on A. We consider problems in which A specifies mixing on a graph of time series that are bursty and sparse. We develop a multilevel state-space model for mixing time series and an efficient approach to inference. A simple model is used to calibrate regularization parameters that lead to efficient inference in the multilevel state-space model. We apply this method to the problem of estimating point-to-point traffic flows on a network from aggregate measurements. Our solution outperforms existing methods for this problem, and our two-stage approach suggests an efficient inference strategy for multilevel models of multivariate time series. PMID:25309135
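
    A toy instance of one such ill-posed problem, solved with plain ridge regularization instead of the authors' multilevel state-space model, for an invented network with two aggregate measurements of three flows:

      import numpy as np

      rng = np.random.default_rng(7)

      A = np.array([[1.0, 1.0, 0.0],      # each row: one aggregate link load
                    [0.0, 1.0, 1.0]])
      x_true = np.array([2.0, 0.0, 5.0])  # sparse, bursty point-to-point flows
      y = A @ x_true + rng.normal(0.0, 0.1, 2)

      lam = 0.1  # regularization parameter (calibrated in the real method)
      x_hat = np.linalg.solve(A.T @ A + lam * np.eye(3), A.T @ y)
      print(x_hat)  # under-determined: regularization selects one solution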

  5. Soft sensor development for Mooney viscosity prediction in rubber mixing process based on GMMDJITGPR algorithm

    NASA Astrophysics Data System (ADS)

    Yang, Kai; Chen, Xiangguang; Wang, Li; Jin, Huaiping

    2017-01-01

    In the rubber mixing process, the key quality parameter (Mooney viscosity) can only be obtained offline with a 4-6 h delay. Online estimation of this parameter would therefore be quite helpful for industry. Various data-driven soft sensors have been used for prediction in rubber mixing, but they often perform poorly due to the multiphase and nonlinear nature of the process. The purpose of this paper is to develop an efficient soft sensing algorithm to solve this problem. Phase information is extracted during local modeling using the proposed GMMD local sample selection criterion. Gaussian process local modeling within a just-in-time (JIT) learning framework handles the nonlinearity of the process well. The efficiency of the new method is verified by comparing its performance with various mainstream soft sensors, using samples from a real industrial rubber mixing process.
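
    A rough sketch of the just-in-time local-modeling pattern follows, with plain Euclidean similarity standing in for the paper's GMMD criterion and scikit-learn's Gaussian process regressor; all data are invented.

      import numpy as np
      from sklearn.gaussian_process import GaussianProcessRegressor
      from sklearn.gaussian_process.kernels import RBF, WhiteKernel

      rng = np.random.default_rng(8)
      X = rng.normal(size=(300, 4))                  # historical batch features
      y = X[:, 0] ** 2 + 0.5 * X[:, 1] + rng.normal(0, 0.1, 300)  # "viscosity"

      def jit_predict(x_query, k=50):
          # Select the k most similar past batches, then fit a local GP
          idx = np.argsort(np.linalg.norm(X - x_query, axis=1))[:k]
          gp = GaussianProcessRegressor(RBF() + WhiteKernel(), normalize_y=True)
          gp.fit(X[idx], y[idx])
          return gp.predict(x_query[None, :], return_std=True)

      mean, std = jit_predict(X[0])
      print(mean, std)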

  6. SVM-Based Spectral Analysis for Heart Rate from Multi-Channel WPPG Sensor Signals.

    PubMed

    Xiong, Jiping; Cai, Lisang; Wang, Fei; He, Xiaowei

    2017-03-03

    Although wrist-type photoplethysmographic (hereafter referred to as WPPG) sensor signals can measure heart rate quite conveniently, the subjects' hand movements cause strong motion artifacts that heavily contaminate the WPPG signals. Hence, it is challenging to accurately estimate heart rate from WPPG signals during intense physical activities. The WPPG method has attracted more attention thanks to the popularity of wrist-worn wearable devices. In this paper, a mixed approach called Mix-SVM is proposed; it uses multi-channel WPPG sensor signals and simultaneous acceleration signals to measure heart rate. Firstly, we combine principal component analysis and adaptive filtering to remove part of the motion artifacts. Due to the strong correlation between motion artifacts and acceleration signals, the further denoising problem is regarded as a sparse signal reconstruction problem. Then, we use a spectrum subtraction method to eliminate motion artifacts effectively. Finally, the spectral peak corresponding to heart rate is sought by an SVM-based spectral analysis method. On the public PPG database of the 2015 IEEE Signal Processing Cup, the average absolute error was 1.01 beats per minute and the Pearson correlation was 0.9972. These results confirm that the proposed Mix-SVM approach has potential for multi-channel WPPG-based heart rate estimation in the presence of intense physical exercise.

  7. Recollection, not familiarity, decreases in healthy aging: Converging evidence from four estimation methods

    PubMed Central

    Koen, Joshua D.; Yonelinas, Andrew P.

    2014-01-01

    Although it is generally accepted that aging is associated with recollection impairments, there is considerable disagreement surrounding how healthy aging influences familiarity-based recognition. One factor that might contribute to the mixed findings regarding age differences in familiarity is the estimation method used to quantify the two mnemonic processes. Here, this issue is examined by having a group of older adults (N = 39) between 40 and 81 years of age complete Remember/Know (RK), receiver operating characteristic (ROC), and process dissociation (PD) recognition tests. Estimates of recollection, but not familiarity, showed a significant negative correlation with chronological age. Inconsistent with previous findings, the estimation method did not moderate the relationship between age and estimates of recollection and familiarity. In a final analysis, recollection and familiarity were estimated as latent factors in a confirmatory factor analysis (CFA) that modeled the covariance between measures of free recall and recognition, and the results converged with those from the RK, PD, and ROC tasks. These results are consistent with the hypothesis that episodic memory declines in older adults are primarily driven by recollection deficits, and also suggest that the estimation method plays little to no role in age-related decreases in familiarity. PMID:25485974

  8. Modeling reactive transport with particle tracking and kernel estimators

    NASA Astrophysics Data System (ADS)

    Rahbaralam, Maryam; Fernandez-Garcia, Daniel; Sanchez-Vila, Xavier

    2015-04-01

    Groundwater reactive transport models are useful to assess and quantify the fate and transport of contaminants in subsurface media and are an essential tool for the analysis of coupled physical, chemical, and biological processes in Earth systems. The Particle Tracking Method (PTM) provides a computationally efficient and adaptable approach to solve the solute transport partial differential equation. On a molecular level, chemical reactions are the result of collisions, combinations, and/or decay of different species. For a well-mixed system, the chemical reactions are controlled by the classical thermodynamic rate coefficient. Each of these actions occurs with some probability that is a function of solute concentrations. PTM is based on considering that each particle actually represents a group of molecules. To properly simulate this system, an infinite number of particles is required, which is computationally unfeasible. On the other hand, a finite number of particles leads to a poorly mixed system that is limited by diffusion. Recent works have used this effect to model incomplete mixing in naturally occurring porous media. In this work, we demonstrate that this effect should in most cases be attributed to a deficient estimation of the concentrations and not to the occurrence of true incomplete mixing processes in porous media. To illustrate this, we show that a Kernel Density Estimation (KDE) of the concentrations can approach the well-mixed solution with a limited number of particles. KDEs provide weighting functions of each particle mass that expand its region of influence, hence providing a wider region for chemical reactions with time. Simulation results show that KDEs are powerful tools to improve state-of-the-art simulations of chemical reactions and indicate that incomplete mixing in diluted systems should be modeled based on alternative conceptual models and not on a limited number of particles.

  9. A Conforming Multigrid Method for the Pure Traction Problem of Linear Elasticity: Mixed Formulation

    NASA Technical Reports Server (NTRS)

    Lee, Chang-Ock

    1996-01-01

    A multigrid method using conforming P-1 finite element is developed for the two-dimensional pure traction boundary value problem of linear elasticity. The convergence is uniform even as the material becomes nearly incompressible. A heuristic argument for acceleration of the multigrid method is discussed as well. Numerical results with and without this acceleration as well as performance estimates on a parallel computer are included.

  10. Estimation of evaporation from equilibrium diurnal boundary layer humidity

    NASA Astrophysics Data System (ADS)

    Salvucci, G.; Rigden, A. J.; Li, D.; Gentine, P.

    2017-12-01

    Simplified conceptual models of the convective boundary layer as a well-mixed profile of potential temperature (theta) and specific humidity (q) impinging on an initially stably stratified linear potential temperature profile have a long history in the atmospheric sciences. These one-dimensional representations of complex mixing are useful for gaining insights into land-atmosphere interactions and for prediction when state-of-the-art LES approaches are infeasible. As previously shown (e.g., by Betts), if one neglects the role of q in buoyancy, the framework yields a unique relation between mixed-layer theta, mixed-layer height (h), and cumulative sensible heat flux (SH) throughout the day. Similarly, an assumed initial q profile yields a simple relation between q, h, and cumulative latent heat flux (LH). The diurnal dynamics of theta and q are strongly dependent on SH and the initial lapse rates of theta (gamma_thet) and q (gamma_q). In the estimation method proposed here, we further constrain these relations with two more assumptions: (1) the specific humidity is the same at the start of the period of boundary layer growth and at its collapse; and (2) once the mixed layer reaches the LCL, further drying occurs in proportion to the Deardorff convective velocity scale (omega) multiplied by q. Assumption (1) is based on the idea that below the cloud layer there are no sinks of moisture within the mixed layer (neglecting lateral humidity divergence), so the net mixing of dry air aloft must balance evaporation from the surface. Including this simple model of moisture loss above the LCL in the bulk-CBL model allows definition of an equilibrium humidity condition at which the diurnal cycle of q repeats (i.e., additions of q from the surface balance entrainment of dry air from above). This framework allows estimation of LH from q, theta, and estimated net radiation by solving for the value of evaporative fraction (EF) for which the diurnal cycle of q repeats. Three parameters need specification: cloud area fraction, entrainment factor, and morning lapse rate. Surprisingly, a single set of values for these parameters is adequate to estimate EF at over 70 tested Ameriflux sites to within about 20%, though improvements are gained using a single regression model for gamma_thet that has been fitted to radiosonde data.

  11. Numerical Investigations of Wave-Induced Mixing in Upper Ocean Layer

    NASA Astrophysics Data System (ADS)

    Guan, Changlong

    2017-04-01

    The upper ocean layer plays an important role in ocean-atmosphere interaction. The typical characteristics depicting the upper ocean layer are the sea surface temperature (SST) and the mixed layer depth (MLD). Existing ocean models tend to over-estimate SST and under-estimate MLD due to inadequate mixing in the mixed layer, because several mixing-related physical processes are ignored in these models. Mixing induced by surface gravity waves is expected to enhance mixing in the upper ocean layer, and therefore the over-estimation of SST and under-estimation of MLD could be improved by including wave-induced mixing. Wave-induced mixing can arise through physical mechanisms such as wave breaking (WB), wave-induced Reynolds stress (WR), and wave-turbulence interaction (WT). The General Ocean Turbulence Model (GOTM) is employed to investigate the effects of these three mechanisms. The numerical investigation is carried out for three turbulence closure schemes, namely k-epsilon, k-omega, and Mellor-Yamada (1982), with observational data from the OSC Papa station and wave data from ECMWF. The mixing enhancement by the various wave-induced mixing mechanisms is investigated and verified.

  12. Testing Extensions of Our Quantitative Daily Mapping of San Joaquin Wintertime Aerosols Using MAIAC and Meteorology Without Transport/Transformation Assumptions

    NASA Technical Reports Server (NTRS)

    Chatfield, Robert B.; Sorek Hamer, Meytar; Esswein, Robert F.

    2017-01-01

    The Western US and many regions globally present daunting difficulties in understanding and mapping PM2.5 episodes. We evaluate extensions of a method that is independent of source description and transport/transformation assumptions. These regions suffer frequent few-day episodes due to shallow mixing; low satellite AOT and bright surfaces complicate the description. Nevertheless, we expect residual errors in our maps of less than 8 µg/m³ in episodes reaching 60-100 µg/m³, maps which detail pollution from Interstate 5. Our current success is due to the use of physically meaningful functions of MODIS-MAIAC-derived AOD, afternoon mixed-layer height, and relative humidity for a basin in which the latter are correlated. A mixed-effects model then describes a daily AOT-to-PM2.5 relationship. (Note: in other published mixed-effects models, AOT contributes minimally.) We seek to extend these to develop useful estimation methods for similar situations. We evaluate existing but more spotty information on size distribution (AERONET, MISR, MAIA, CALIPSO, other remote sensing). We also describe the usefulness of an equivalent mixing depth for water vapor versus meteorological boundary layer height; each has virtues and limitations. Finally, we begin to evaluate methods for removing the complications due to detached but polluted layers (which do not mix to the surface) using geographical, meteorological, and remotely sensed data.

  13. Estimating the variance for heterogeneity in arm-based network meta-analysis.

    PubMed

    Piepho, Hans-Peter; Madden, Laurence V; Roger, James; Payne, Roger; Williams, Emlyn R

    2018-04-19

    Network meta-analysis can be implemented by using arm-based or contrast-based models. Here we focus on arm-based models and fit them using generalized linear mixed model procedures. Full maximum likelihood (ML) estimation leads to biased trial-by-treatment interaction variance estimates for heterogeneity. Thus, our objective is to investigate alternative approaches to variance estimation that reduce bias compared with full ML. Specifically, we use penalized quasi-likelihood/pseudo-likelihood and hierarchical (h) likelihood approaches. In addition, we consider a novel model modification that yields estimators akin to the residual maximum likelihood estimator for linear mixed models. The proposed methods are compared by simulation, and 2 real datasets are used for illustration. Simulations show that penalized quasi-likelihood/pseudo-likelihood and h-likelihood reduce bias and yield satisfactory coverage rates. Sum-to-zero restriction and baseline contrasts for random trial-by-treatment interaction effects, as well as a residual ML-like adjustment, also reduce bias compared with an unconstrained model when ML is used, but coverage rates are not quite as good. Penalized quasi-likelihood/pseudo-likelihood and h-likelihood are therefore recommended. Copyright © 2018 John Wiley & Sons, Ltd.

  14. An efficient genome-wide association test for mixed binary and continuous phenotypes with applications to substance abuse research.

    PubMed

    Buu, Anne; Williams, L Keoki; Yang, James J

    2018-03-01

    We propose a new genome-wide association test for mixed binary and continuous phenotypes that uses an efficient numerical method to estimate the empirical distribution of the Fisher's combination statistic under the null hypothesis. Our simulation study shows that the proposed method controls the type I error rate and also maintains its power at the level of the permutation method. More importantly, the computational efficiency of the proposed method is much higher than that of the permutation method. The simulation results also indicate that the power of the test increases when the genetic effect increases, the minor allele frequency increases, and the correlation between responses decreases. The statistical analysis on the database of the Study of Addiction: Genetics and Environment demonstrates that the proposed method combining multiple phenotypes can increase the power of identifying markers that might not otherwise be chosen using marginal tests.

  15. The arbitrary order mixed mimetic finite difference method for the diffusion equation

    DOE PAGES

    Gyrya, Vitaliy; Lipnikov, Konstantin; Manzini, Gianmarco

    2016-05-01

    Here, we propose an arbitrary-order accurate mimetic finite difference (MFD) method for the approximation of diffusion problems in mixed form on unstructured polygonal and polyhedral meshes. As usual in the mimetic numerical technology, the method satisfies local consistency and stability conditions, which determine the accuracy and the well-posedness of the resulting approximation. The method also requires the definition of a high-order discrete divergence operator that is the discrete analog of the divergence operator and acts on the degrees of freedom. The new family of mimetic methods is proved theoretically to be convergent, and optimal error estimates for the flux and scalar variable are derived from the convergence analysis. A numerical experiment confirms the high-order accuracy of the method in solving diffusion problems with variable diffusion tensor. It is worth mentioning that the approximation of the scalar variable presents a superconvergence effect.

  16. Multiple resolution chirp reflectometry for fault localization and diagnosis in a high voltage cable in automotive electronics

    NASA Astrophysics Data System (ADS)

    Chang, Seung Jin; Lee, Chun Ku; Shin, Yong-June; Park, Jin Bae

    2016-12-01

    A multiple chirp reflectometry system with a fault estimation process is proposed to obtain multiple resolution and to measure the degree of fault in a target cable. The multiple resolution algorithm can localize faults regardless of fault location. The time delay information, which is derived from the normalized cross-correlation between the incident signal and bandpass-filtered reflected signals, is converted to a fault location and cable length. The in-phase and quadrature components are obtained by lowpass filtering the mixed signal of the incident and reflected signals. Based on these components, the reflection coefficient is estimated by the proposed fault estimation process, including the mixing and filtering procedure. The measurement uncertainty for this experiment is analyzed according to the Guide to the Expression of Uncertainty in Measurement. To verify the performance of the proposed method, we conduct comparative experiments to detect and measure faults under different conditions. The target cable length and fault positions were designed to reflect the installation environment of a high voltage cable in an actual vehicle. To simulate different degrees of fault, a variety of termination impedances (10 Ω, 30 Ω, 50 Ω, and 1 kΩ) are used and estimated by the proposed method. The proposed method demonstrates multiple resolution that overcomes the blind spot problem and can assess the state of the fault.
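
    The time-delay step can be illustrated with a plain cross-correlation between an incident chirp and a noisy, inverted reflection; the sample rate, delay, and reflection coefficient below are invented, and the multi-band filtering of the proposed system is omitted.

      import numpy as np

      rng = np.random.default_rng(9)
      fs = 1e6                                   # sample rate (Hz)
      t = np.arange(0, 1e-3, 1 / fs)
      chirp = np.sin(2 * np.pi * (1e4 + 5e7 * t) * t) * (t < 2e-4)  # incident pulse

      delay = 123                                # true round-trip delay (samples)
      echo = np.zeros_like(chirp)
      echo[delay:] = -0.4 * chirp[:-delay]       # inverted, attenuated reflection
      echo += rng.normal(0, 0.02, echo.size)

      xc = np.correlate(echo, chirp, mode="full")
      lag = np.argmax(np.abs(xc)) - (len(chirp) - 1)
      print(lag)                                 # recovers the 123-sample delay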

  17. Radial mixing in turbomachines

    NASA Astrophysics Data System (ADS)

    Segaert, P.; Hirsch, Ch.; Deruyck, J.

    1991-03-01

    A method for computing the effects of radial mixing in a turbomachinery blade row has been developed. The method fits in the framework of a quasi-3D flow computation and hence is applied in a corrective fashion to through-flow distributions. The method takes into account both secondary flows and turbulent diffusion as possible sources of mixing. Secondary flow velocities determine the magnitude of the convection terms in the energy redistribution equation, while a turbulent diffusion coefficient determines the magnitude of the diffusion terms. Secondary flows are computed by solving a Poisson equation for a secondary streamfunction on a transversal S3-plane, whereby the right-hand-side axial vorticity is composed of different contributions, each associated with a particular flow region: inviscid core flow, end-wall boundary layers, profile boundary layers and wakes. The turbulent mixing coefficient is estimated by a semi-empirical correlation. Secondary flow theory is applied to the VUB cascade test case and comparisons are made between the computational results and the extensive experimental data available for this test case. This comparison shows that the secondary flow computations yield reliable predictions of the secondary flow pattern, both qualitatively and quantitatively, taking into account the limitations of the model. However, the computations show that the use of a uniform mixing coefficient has to be replaced by a more sophisticated approach.

  19. An attempt to estimate isotropic and anisotropic lateral structure of the Earth by spectral inversion incorporating mixed coupling

    NASA Astrophysics Data System (ADS)

    Oda, Hitoshi

    2005-02-01

    We present a way to calculate free oscillation spectra for an aspherical earth model, constructed by adding isotropic and anisotropic velocity perturbations to the seismic velocity parameters of a reference earth model, and examine the effect of the velocity perturbations on the free oscillation spectrum. Lateral variations of the velocity perturbations are parametrized as an expansion in generalized spherical harmonics. We assume weak hexagonal anisotropy for the seismic wave anisotropy in the upper mantle, where the hexagonal symmetry axes are horizontally distributed. The synthetic spectra show that the velocity perturbations cause not only strong self-coupling among singlets of a multiplet but also mixed coupling between toroidal and spheroidal multiplets. Both couplings give rise to an amplitude anomaly on the vertical-component spectrum. In this study, we identify the amplitude anomaly resulting from the mixed coupling as a quasi-toroidal mode. Excitation of the quasi-toroidal mode by a vertical strike-slip fault is largest on nodal lines of the Rayleigh wave, decreases with increasing azimuth angle, and becomes smallest on loop lines. This azimuthal dependence of the spectral amplitude is quite similar to the Love wave radiation pattern. In addition, the amplitude spectrum of the quasi-toroidal mode is more sensitive to the anisotropic velocity perturbation than to the isotropic velocity perturbation. This means that the mode spectrum allowing for the mixed-coupling effect may provide constraints on the anisotropic lateral structure as well as the isotropic lateral structure. An inversion method, called mixed-coupling spectral inversion, is devised to retrieve the isotropic and anisotropic velocity perturbations from free oscillation spectra incorporating the quasi-toroidal mode. We confirm that the spectral inversion method correctly recovers the isotropic and anisotropic lateral structure. Moreover, introducing the mixed-coupling effect in the spectral inversion makes it possible to estimate the odd-order lateral structure, which cannot be determined by conventional spectral inversion that takes no account of the mixed coupling. Higher-order structure is biased by the mixed coupling when conventional spectral inversion is applied to amplitude spectra incorporating the mixed coupling.

  20. Prediction of Therapy Tumor-Absorbed Dose Estimates in I-131 Radioimmunotherapy Using Tracer Data Via a Mixed-Model Fit to Time Activity

    PubMed Central

    Koral, Kenneth F.; Avram, Anca M.; Kaminski, Mark S.; Dewaraja, Yuni K.

    2012-01-01

    Background For individualized treatment planning in radioimmunotherapy (RIT), correlations must be established between tracer-predicted and therapy-delivered absorbed doses. The focus of this work was to investigate this correlation for tumors. Methods The study analyzed 57 tumors in 19 follicular lymphoma patients treated with I-131 tositumomab and imaged with SPECT/CT multiple times after tracer and therapy administrations. Instead of the typical least-squares fit to a single tumor's measured time-activity data, estimation was accomplished via a biexponential mixed model in which the curves from multiple subjects were jointly estimated. The tumor-absorbed dose estimates were determined by patient-specific Monte Carlo calculation. Results The mixed model gave realistic tumor time-activity fits that showed the expected uptake and clearance phases even with noisy data or missing time points. Correlation between tracer and therapy tumor-residence times (r=0.98; p<0.0001) and correlation between tracer-predicted and therapy-delivered mean tumor-absorbed doses (r=0.86; p<0.0001) were very high. The predicted and delivered absorbed doses were within ±25% (or within ±75 cGy) for 80% of tumors. Conclusions The mixed-model approach is feasible for fitting tumor time-activity data in RIT treatment planning when individual least-squares fitting is not possible due to inadequate sampling points. The good correlation between predicted and delivered tumor doses demonstrates the potential of using a pretherapy tracer study for tumor dosimetry-based treatment planning in RIT. PMID:22947086
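
    A minimal sketch of the kind of biexponential time-activity model with random effects that the abstract describes; the exact parameterization is not given, so the notation below is illustrative only:

        \[ A_{ij} = (\alpha_1 + a_{1i})\,e^{-(\lambda_1 + l_{1i})\,t_{ij}} - (\alpha_2 + a_{2i})\,e^{-(\lambda_2 + l_{2i})\,t_{ij}} + \varepsilon_{ij}, \]

    where the \(\alpha\) and \(\lambda\) terms are fixed effects shared across tumors, the \(a_i\) and \(l_i\) are tumor-specific random effects, and the difference of exponentials produces the uptake-then-clearance shape noted in the results.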

  1. Formation of parametric images using mixed-effects models: a feasibility study.

    PubMed

    Huang, Husan-Ming; Shih, Yi-Yu; Lin, Chieh

    2016-03-01

    Mixed-effects models have been widely used in the analysis of longitudinal data. By presenting the parameters as a combination of fixed effects and random effects, mixed-effects models incorporating both within- and between-subject variations are capable of improving parameter estimation. In this work, we demonstrate the feasibility of using a non-linear mixed-effects (NLME) approach for generating parametric images from medical imaging data of a single study. By assuming that all voxels in the image are independent, we used simulation and animal data to evaluate whether NLME can improve the voxel-wise parameter estimation. For testing purposes, intravoxel incoherent motion (IVIM) diffusion parameters including perfusion fraction, pseudo-diffusion coefficient and true diffusion coefficient were estimated using diffusion-weighted MR images and NLME through fitting the IVIM model. The conventional method of non-linear least squares (NLLS) was used as the standard approach for comparison of the resulting parametric images. In the simulated data, NLME provides more accurate and precise estimates of diffusion parameters compared with NLLS. Similarly, we found that NLME has the ability to improve the signal-to-noise ratio of parametric images obtained from rat brain data. These data have shown that it is feasible to apply NLME in parametric image generation, and that the parametric image quality can accordingly be improved with the use of NLME. With the flexibility to be adapted to other models or modalities, NLME may become a useful tool to improve parametric image quality in the future. Copyright © 2015 John Wiley & Sons, Ltd.
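
    The IVIM model named in the abstract has a standard form, S(b)/S0 = f·exp(−b·D*) + (1 − f)·exp(−b·D). A conventional per-voxel NLLS fit (the comparison baseline, not the NLME estimator itself) might look like the following, with b-values, noise level, and starting guesses chosen for illustration:

        # Per-voxel NLLS fit of the standard IVIM signal model.
        import numpy as np
        from scipy.optimize import curve_fit

        def ivim(b, f, Dstar, D):
            return f * np.exp(-b * Dstar) + (1.0 - f) * np.exp(-b * D)

        b = np.array([0, 10, 20, 50, 100, 200, 400, 800.0])   # s/mm^2 (illustrative)
        rng = np.random.default_rng(0)
        signal = ivim(b, 0.10, 0.02, 0.0008) + rng.normal(0, 0.01, b.size)

        popt, _ = curve_fit(ivim, b, signal, p0=(0.1, 0.01, 0.001),
                            bounds=([0, 0, 0], [1, 0.5, 0.01]))
        print("f=%.3f  D*=%.4f  D=%.5f" % tuple(popt))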

  2. Parameter sensitivity analysis of the mixed Green-Ampt/Curve-Number method for rainfall excess estimation in small ungauged catchments

    NASA Astrophysics Data System (ADS)

    Romano, N.; Petroselli, A.; Grimaldi, S.

    2012-04-01

    With the aim of combining the practical advantages of the Soil Conservation Service - Curve Number (SCS-CN) method and the Green-Ampt (GA) infiltration model, we have developed a mixed procedure referred to as CN4GA (Curve Number for Green-Ampt). The basic concept is that, for a given storm, the total net rainfall amount computed by the SCS-CN method is used to calibrate the soil hydraulic conductivity parameter of the Green-Ampt model, so as to distribute in time the information provided by the SCS-CN method. In a previous contribution, the proposed mixed procedure was evaluated on 100 observed events with encouraging results. In this study, a sensitivity analysis is carried out to further explore the feasibility of applying the CN4GA tool in small ungauged catchments. The proposed mixed procedure constrains the GA model with boundary and initial conditions, so the GA soil hydraulic parameters are expected to be insensitive to the net hyetograph peak. To verify and evaluate this behaviour, synthetic design hyetographs and synthetic rainfall time series are selected and used in a Monte Carlo analysis. The results are encouraging and confirm that the limited parameter sensitivity makes the proposed method an appropriate tool for hydrologic predictions in ungauged catchments. Keywords: SCS-CN method, Green-Ampt method, rainfall excess, ungauged basins, design hydrograph, rainfall-runoff modelling.
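
    The SCS-CN step rests on the standard curve-number runoff equation; a minimal sketch in metric units, assuming the common initial abstraction ratio Ia = 0.2S:

        # Standard SCS-CN rainfall-excess computation (metric units); this is
        # the total net rainfall that CN4GA uses to calibrate the Green-Ampt
        # hydraulic conductivity.
        def scs_cn_runoff(P_mm, CN, ia_ratio=0.2):
            """Runoff depth (mm) for a storm of depth P_mm under curve number CN."""
            S = 25400.0 / CN - 254.0        # potential maximum retention [mm]
            Ia = ia_ratio * S               # initial abstraction [mm]
            if P_mm <= Ia:
                return 0.0
            return (P_mm - Ia) ** 2 / (P_mm - Ia + S)

        print(scs_cn_runoff(P_mm=60.0, CN=75))   # about 14.5 mm of rainfall excess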

  3. A method for estimating cost savings for population health management programs.

    PubMed

    Murphy, Shannon M E; McGready, John; Griswold, Michael E; Sylvia, Martha L

    2013-04-01

    To develop a quasi-experimental method for estimating Population Health Management (PHM) program savings that mitigates common sources of confounding, supports regular updates for continued program monitoring, and estimates model precision. Administrative, program, and claims records from January 2005 through June 2009. Data are aggregated by member and month. Study participants include chronically ill adult commercial health plan members. The intervention group consists of members currently enrolled in PHM, stratified by intensity level. Comparison groups include (1) members never enrolled, and (2) PHM participants not currently enrolled. Mixed model smoothing is employed to regress monthly medical costs on time (in months), a history of PHM enrollment, and monthly program enrollment by intensity level. Comparison group trends are used to estimate expected costs for intervention members. Savings are realized when PHM participants' costs are lower than expected. This method mitigates many of the limitations faced using traditional pre-post models for estimating PHM savings in an observational setting, supports replication for ongoing monitoring, and performs basic statistical inference. This method provides payers with a confident basis for making investment decisions. © Health Research and Educational Trust.
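
    A hedged sketch of the kind of mixed-model regression described (monthly cost on time, enrollment history, and current enrollment by intensity level, with member-level random effects); the file name and column names are hypothetical, not from the study:

        # Illustrative member-month mixed model; columns are hypothetical.
        import pandas as pd
        import statsmodels.formula.api as smf

        df = pd.read_csv("member_months.csv")    # hypothetical data file
        model = smf.mixedlm(
            "cost ~ month + ever_phm + enrolled_low + enrolled_high",
            data=df,
            groups=df["member_id"],    # random effects per member
            re_formula="~month",       # member-specific trend over time
        )
        print(model.fit().summary())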

  4. Binomial outcomes in dataset with some clusters of size two: can the dependence of twins be accounted for? A simulation study comparing the reliability of statistical methods based on a dataset of preterm infants.

    PubMed

    Sauzet, Odile; Peacock, Janet L

    2017-07-20

    The analysis of perinatal outcomes often involves datasets with some multiple births. These are datasets mostly formed of independent observations plus a limited number of clusters of size two (twins) and perhaps of size three or more. This non-independence needs to be accounted for in the statistical analysis. Using simulated data based on a dataset of preterm infants, we have previously investigated the performance of several approaches to the analysis of continuous outcomes in the presence of some clusters of size two. Mixed models have been developed for binomial outcomes, but very little is known about their reliability when only a limited number of small clusters are present. Using simulated data based on a dataset of preterm infants, we investigated the performance of several approaches to the analysis of binomial outcomes in the presence of some clusters of size two. Logistic models, several methods of estimation for logistic random intercept models, and generalised estimating equations were compared. The presence of even a small percentage of twins means that a logistic regression model will underestimate all parameters, while a logistic random intercept model fails to estimate the correlation between siblings if the percentage of twins is too small and then provides estimates similar to logistic regression. The method that seems to provide the best balance between estimation of the standard error and of the parameter, for any percentage of twins, is generalised estimating equations. This study has shown that the number of covariates and the level-two variance do not necessarily affect the performance of the various methods used to analyse datasets containing twins, but when the percentage of small clusters is too small, mixed models cannot capture the dependence between siblings.
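
    A minimal sketch of the generalised estimating equations approach that the simulations favoured, using an exchangeable working correlation so that twins share a cluster; the file and column names are hypothetical:

        # GEE logistic model with exchangeable correlation within sibling clusters.
        import pandas as pd
        import statsmodels.api as sm
        import statsmodels.formula.api as smf

        df = pd.read_csv("preterm_infants.csv")      # hypothetical data file
        model = smf.gee(
            "outcome ~ gestational_age + birthweight",
            groups="cluster_id",                     # same id for both twins
            data=df,
            family=sm.families.Binomial(),
            cov_struct=sm.cov_struct.Exchangeable(),
        )
        print(model.fit().summary())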

  5. Modelling mixing within the dead space of the lung improves predictions of functional residual capacity.

    PubMed

    Harrison, Chris D; Phan, Phi Anh; Zhang, Cathy; Geer, Daniel; Farmery, Andrew D; Payne, Stephen J

    2017-08-01

    Routine estimation of functional residual capacity (FRC) in ventilated patients has been a long held goal, with many methods previously proposed, but none have been used in routine clinical practice. This paper proposes three models for determining FRC using the nitrous oxide concentration from the entire expired breath in order to improve the precision of the estimate. Of the three models proposed, a dead space with two mixing compartments provided the best results, reducing the mean limits of agreement with the FRC measured by whole body plethysmography by up to 41%. This moves away from traditional lung models, which do not account for mixing within the dead space. Compared to literature values for FRC, the results are similar to those obtained using helium dilution and better than the LUFU device (Dräger Medical, Lubeck, Germany), with significantly better limits of agreement compared to plethysmography. Copyright © 2017 Elsevier B.V. All rights reserved.

  6. Estimates of advection and diffusion in the Potomac estuary

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Elliott, A.J.

    1976-01-01

    A two-layered dispersion model, suitable for application to partially-mixed estuaries, has been developed to provide hydrological interpretation of the results of biological sampling. The model includes horizontal and vertical advection plus both horizontal and vertical diffusion. A pseudo-geostrophic method, which includes a damping factor to account for internal eddy friction, is used to estimate the horizontal advective fluxes and the results are compared with field observations. A salt balance model is then used to estimate the effective diffusivities in the Potomac estuary during the Spring of 1974.

  7. Mixing Efficiency in the Ocean.

    PubMed

    Gregg, M C; D'Asaro, E A; Riley, J J; Kunze, E

    2018-01-03

    Mixing efficiency is the ratio of the net change in potential energy to the energy expended in producing the mixing. Parameterizations of efficiency and of related mixing coefficients are needed to estimate diapycnal diffusivity from measurements of the turbulent dissipation rate. Comparing diffusivities from microstructure profiling with those inferred from the thickening rate of four simultaneous tracer releases has verified, within observational accuracy, 0.2 as the mixing coefficient over a 30-fold range of diapycnal diffusivities. Although some mixing coefficients can be estimated from pycnocline measurements, at present mixing efficiency must be obtained from channel flows, laboratory experiments, and numerical simulations. Reviewing the different approaches demonstrates that estimates and parameterizations for mixing efficiency and coefficients are not converging beyond the at-sea comparisons with tracer releases, leading to recommendations for a community approach to address this important issue.
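
    The inference chain summarized here, from dissipation rate to diapycnal diffusivity via a mixing coefficient, is conventionally written as the Osborn relation:

        \[ K_\rho = \Gamma\,\frac{\varepsilon}{N^2}, \qquad \Gamma \approx 0.2, \]

    where \(\varepsilon\) is the turbulent dissipation rate, \(N\) the buoyancy frequency, and \(\Gamma\) the mixing coefficient whose value of 0.2 the tracer-release comparisons verified.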

  8. Mixing Efficiency in the Ocean

    NASA Astrophysics Data System (ADS)

    Gregg, M. C.; D'Asaro, E. A.; Riley, J. J.; Kunze, E.

    2018-01-01

    Mixing efficiency is the ratio of the net change in potential energy to the energy expended in producing the mixing. Parameterizations of efficiency and of related mixing coefficients are needed to estimate diapycnal diffusivity from measurements of the turbulent dissipation rate. Comparing diffusivities from microstructure profiling with those inferred from the thickening rate of four simultaneous tracer releases has verified, within observational accuracy, 0.2 as the mixing coefficient over a 30-fold range of diapycnal diffusivities. Although some mixing coefficients can be estimated from pycnocline measurements, at present mixing efficiency must be obtained from channel flows, laboratory experiments, and numerical simulations. Reviewing the different approaches demonstrates that estimates and parameterizations for mixing efficiency and coefficients are not converging beyond the at-sea comparisons with tracer releases, leading to recommendations for a community approach to address this important issue.

  9. Cross-shift changes in FEV1 in relation to wood dust exposure: the implications of different exposure assessment methods

    PubMed Central

    Schlunssen, V; Sigsgaard, T; Schaumburg, I; Kromhout, H

    2004-01-01

    Background: Exposure-response analyses in occupational studies rely on the ability to distinguish workers with regard to exposures of interest. Aims: To evaluate different estimates of current average exposure in an exposure-response analysis on dust exposure and cross-shift decline in FEV1 among woodworkers. Methods: Personal dust samples (n = 2181) as well as data on lung function parameters were available for 1560 woodworkers from 54 furniture industries. The exposure to wood dust for each worker was calculated in eight different ways using individual measurements, group based exposure estimates, a weighted estimate of individual and group based exposure estimates, and predicted values from mixed models. Exposure-response relations on cross-shift changes in FEV1 and exposure estimates were explored. Results: A positive exposure-response relation between average dust exposure and cross-shift FEV1 was shown for non-smokers only and appeared to be most pronounced among pine workers. In general, the highest slope and standard error (SE) was revealed for grouping by a combination of task and factory size, the lowest slope and SE was revealed for estimates based on individual measurements, with the weighted estimate and the predicted values in between. Grouping by quintiles of average exposure for task and factory combinations revealed low slopes and high SE, despite a high contrast. Conclusion: For non-smokers, average dust exposure and cross-shift FEV1 were associated in an exposure dependent manner, especially among pine workers. This study confirms the consequences of using different exposure assessment strategies studying exposure-response relations. It is possible to optimise exposure assessment combining information from individual and group based exposure estimates, for instance by applying predicted values from mixed effects models. PMID:15377768

  10. On the performance of surface renewal analysis to estimate sensible heat flux over two growing rice fields under the influence of regional advection

    NASA Astrophysics Data System (ADS)

    Castellví, F.; Snyder, R. L.

    2009-09-01

    High-frequency temperature data were recorded at one height and used in surface renewal (SR) analysis to estimate sensible heat flux during the full growing season of two rice fields located north-northeast of Colusa, CA (in the Sacramento Valley). One of the fields was seeded into a flooded paddy and the other was drill seeded before flooding. To minimize fetch requirements, the measurement height was selected to be close to the maximum expected canopy height. The roughness sub-layer depth was estimated to discriminate whether the temperature data came from the inertial or roughness sub-layer. The equation to estimate the roughness sub-layer depth was derived by combining simple mixing-length theory, the mixing-layer analogy, equations accounting for stable atmospheric surface layer conditions, and semi-empirical canopy-architecture relationships. The potential for SR analysis as a method that operates in the full surface boundary layer was tested using data collected over growing vegetation at a site influenced by regional advection of sensible heat flux. The inputs used to estimate the sensible heat fluxes included air temperature sampled at 10 Hz, the mean and variance of the horizontal wind speed, the canopy height, and the plant area index for a given intermediate height of the canopy. Regardless of the stability conditions and measurement height above the canopy, sensible heat flux estimates from SR analysis were similar to those measured with the eddy covariance method. Under unstable conditions, the performance was shown to be sensitive to the estimation of the roughness sub-layer depth, and an expression was provided to select the crucial scale required for its estimation.

  11. Altitude training and haemoglobin mass from the optimised carbon monoxide rebreathing method determined by a meta-analysis

    PubMed Central

    Gore, Christopher J; Sharpe, Ken; Garvican-Lewis, Laura A; Saunders, Philo U; Humberstone, Clare E; Robertson, Eileen Y; Wachsmuth, Nadine B; Clark, Sally A; McLean, Blake D; Friedmann-Bette, Birgit; Neya, Mitsuo; Pottgiesser, Torben; Schumacher, Yorck O; Schmidt, Walter F

    2013-01-01

    Objective To characterise the time course of changes in haemoglobin mass (Hbmass) in response to altitude exposure. Methods This meta-analysis uses raw data from 17 studies that used carbon monoxide rebreathing to determine Hbmass prealtitude, during altitude and postaltitude. Seven studies were classic altitude training, eight were live high train low (LHTL) and two mixed classic and LHTL. Separate linear-mixed models were fitted to the data from the 17 studies and the resultant estimates of the effects of altitude used in a random effects meta-analysis to obtain an overall estimate of the effect of altitude, with separate analyses during altitude and postaltitude. In addition, within-subject differences from the prealtitude phase for altitude participant and all the data on control participants were used to estimate the analytical SD. The ‘true’ between-subject response to altitude was estimated from the within-subject differences on altitude participants, between the prealtitude and during-altitude phases, together with the estimated analytical SD. Results During-altitude Hbmass was estimated to increase by ∼1.1%/100 h for LHTL and classic altitude. Postaltitude Hbmass was estimated to be 3.3% higher than prealtitude values for up to 20 days. The within-subject SD was constant at ∼2% for up to 7 days between observations, indicative of analytical error. A 95% prediction interval for the ‘true’ response of an athlete exposed to 300 h of altitude was estimated to be 1.1–6%. Conclusions Camps as short as 2 weeks of classic and LHTL altitude will quite likely increase Hbmass and most athletes can expect benefit. PMID:24282204
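
    As a consistency check on the reported rates, the during-altitude increase of ∼1.1%/100 h implies, for the ∼300 h exposure used for the prediction interval,

        \[ 300\ \mathrm{h} \times \frac{1.1\%}{100\ \mathrm{h}} \approx 3.3\%, \]

    which matches the ∼3.3% postaltitude elevation in Hbmass reported above.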

  12. Estimating cropland NPP using national crop inventory and MODIS derived crop specific parameters

    NASA Astrophysics Data System (ADS)

    Bandaru, V.; West, T. O.; Ricciuto, D. M.

    2011-12-01

    Estimates of cropland net primary production (NPP) are needed as input for estimates of carbon flux and carbon stock changes. Cropland NPP is currently estimated using terrestrial ecosystem models, satellite remote sensing, or inventory data. All three of these methods have benefits and problems. Terrestrial ecosystem models are often better suited for prognostic estimates rather than diagnostic estimates. Satellite-based NPP estimates often underestimate productivity on intensively managed croplands and are also limited to a few broad crop categories. Inventory-based estimates are consistent with nationally collected data on crop yields, but they lack sub-county spatial resolution. Integrating these methods allows for spatial resolution consistent with current land cover and land use, while maintaining the total biomass quantities recorded in national inventory data. The main objective of this study was to improve cropland NPP estimates by using a modification of the CASA NPP model with individual crop biophysical parameters partly derived from inventory data and the MODIS 8-day 250-m EVI product. The study was conducted for corn and soybean crops in Iowa and Illinois for the years 2006 and 2007. We modeled fPAR as a linear function of EVI and used crop land cover data (56-m spatial resolution) to extract individual crop EVI pixels. First, we separated mixed pixels of corn and soybean, which occur when a MODIS 250-m pixel contains more than one crop. Second, we substituted mixed EVI pixels with the nearest pure-pixel values of the same crop within a 1-km radius. To obtain more accurate photosynthetically active radiation (PAR), we applied the Mountain Climate Simulator (MTCLIM) algorithm with temperature and precipitation data from the North American Land Data Assimilation System (NLDAS-2) to generate shortwave radiation data. Finally, county-specific light use efficiency (LUE) values of each crop for the years 2006 and 2007 were determined by applying mean county inventory NPP and EVI-derived APAR to the Monteith equation. Results indicate spatial variability in LUE values across Iowa and Illinois. Northern regions of both Iowa and Illinois have higher LUE values than southern regions, and this trend is reflected in the NPP estimates. Results also show that corn has higher LUE values than soybean, resulting in higher NPP for corn. Current NPP estimates were compared with NPP estimates from the MOD17A3 product and with county inventory-based NPP estimates. The current NPP estimates closely agree with inventory-based estimates and are higher than those of the MOD17A3 product. It was also found that when mixed pixels were substituted with the nearest pure pixels, the revised NPP estimates showed better agreement with inventory-based estimates.
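
    The final step described, recovering county-specific LUE from inventory NPP and EVI-derived APAR, is an inversion of the Monteith light-use-efficiency equation:

        \[ \mathrm{NPP} = \varepsilon \cdot \mathrm{APAR} = \varepsilon \cdot f_{\mathrm{PAR}} \cdot \mathrm{PAR} \quad\Longrightarrow\quad \varepsilon = \frac{\mathrm{NPP}_{\mathrm{county}}}{\mathrm{APAR}_{\mathrm{county}}}, \]

    with fPAR modeled as a linear function of EVI as stated above.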

  13. Flight effects on exhaust noise for turbojet and turbofan engines: Comparison of experimental data with prediction

    NASA Technical Reports Server (NTRS)

    Stone, J. R.

    1976-01-01

    It was demonstrated that static and in-flight jet engine exhaust noise can be predicted with reasonable accuracy when the multiple-source nature of the problem is taken into account. Jet mixing noise was predicted from the interim prediction method. Provisional methods of estimating internally generated noise and shock noise flight effects were used, based partly on existing prediction methods and partly on recently reported engine data.

  14. Comparing estimates of genetic variance across different relationship models.

    PubMed

    Legarra, Andres

    2016-02-01

    Use of relationships between individuals to estimate genetic variances and heritabilities via mixed models is standard practice in human, plant and livestock genetics. Different models or information for relationships may give different estimates of genetic variances. However, comparing these estimates across different relationship models is not straightforward as the implied base populations differ between relationship models. In this work, I present a method to compare estimates of variance components across different relationship models. I suggest referring genetic variances obtained using different relationship models to the same reference population, usually a set of individuals in the population. Expected genetic variance of this population is the estimated variance component from the mixed model times a statistic, Dk, which is the average self-relationship minus the average (self- and across-) relationship. For most typical models of relationships, Dk is close to 1. However, this is not true for very deep pedigrees, for identity-by-state relationships, or for non-parametric kernels, which tend to overestimate the genetic variance and the heritability. Using mice data, I show that heritabilities from identity-by-state and kernel-based relationships are overestimated. Weighting these estimates by Dk scales them to a base comparable to genomic or pedigree relationships, avoiding wrong comparisons, for instance, "missing heritabilities". Copyright © 2015 Elsevier Inc. All rights reserved.
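
    A minimal numeric sketch of the Dk statistic as defined above (average self-relationship minus the average of all relationships), applied to an arbitrary relationship matrix; the matrix and variance value are made up for illustration:

        # Dk = mean(diag(K)) - mean(K); the expected genetic variance in the
        # reference population is then the estimated variance component times Dk.
        import numpy as np

        def dk(K):
            return float(np.mean(np.diag(K)) - np.mean(K))

        K = np.array([[1.00, 0.50, 0.25],     # toy relationship matrix
                      [0.50, 1.00, 0.25],
                      [0.25, 0.25, 1.00]])
        sigma2_hat = 10.0                     # estimated variance component (made up)
        print(dk(K), sigma2_hat * dk(K))      # Dk nears 1 as average relationships shrink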

  15. Statistical correlations and risk analyses techniques for a diving dual phase bubble model and data bank using massively parallel supercomputers.

    PubMed

    Wienke, B R; O'Leary, T R

    2008-05-01

    Linking model and data, we detail the LANL diving reduced gradient bubble model (RGBM), its dynamical principles, and its correlation with data in the LANL Data Bank. Table, profile, and meter risks are obtained from likelihood analysis and quoted for air, nitrox, and helitrox no-decompression time limits, repetitive dive tables, and selected mixed gas and repetitive profiles. Application analyses include the EXPLORER decompression meter algorithm, NAUI tables, University of Wisconsin Seafood Diver tables, comparative NAUI, PADI, and Oceanic NDLs and repetitive dives, comparative nitrogen and helium mixed gas risks, the USS Perry deep rebreather (RB) exploration dive, a world record open circuit (OC) dive, and Woodville Karst Plain Project (WKPP) extreme cave exploration profiles. The algorithm has seen extensive and utilitarian application in mixed gas diving, in both recreational and technical sectors, and forms the basis for released tables and decompression meters used by scientific, commercial, and research divers. The LANL Data Bank is described, and the methods used to deduce risk are detailed. Risk functions for dissolved gas and bubbles are summarized. Parameters that can be used to estimate profile risk are tallied. To fit data, a modified Levenberg-Marquardt routine is employed with an L2 error norm. Appendices sketch the numerical methods and list reports from field testing for (real) mixed gas diving. A Monte Carlo-like sampling scheme for fast numerical analysis of the data is also detailed, as a coupled variance-reduction technique and an additional check on the canonical approach to estimating diving risk; the method suggests alternatives to the canonical approach. This work represents a first correlation effort linking a dynamical bubble model with deep stop data. Supercomputing resources are requisite to connect model and data in application.

  16. [Development of an Excel spreadsheet for meta-analysis of indirect and mixed treatment comparisons].

    PubMed

    Tobías, Aurelio; Catalá-López, Ferrán; Roqué, Marta

    2014-01-01

    Meta-analyses in clinical research usually aim to evaluate treatment efficacy and safety in direct comparison with a single comparator. Indirect comparisons, using Bucher's method, can summarize primary data when information from direct comparisons is limited or nonexistent. Mixed comparisons allow combining estimates from direct and indirect comparisons, increasing statistical power. There is a need for simple applications for meta-analysis of indirect and mixed comparisons, which can easily be conducted using a Microsoft Office Excel spreadsheet. We developed a user-friendly spreadsheet for indirect and mixed comparisons aimed at clinical researchers who are interested in systematic reviews but not familiar with more advanced statistical packages. The proposed Excel spreadsheet for indirect and mixed comparisons can be of great use in clinical epidemiology, extending the knowledge provided by traditional meta-analysis when evidence from direct comparisons is limited or nonexistent.
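
    Bucher's adjusted indirect comparison, which the spreadsheet implements, reduces to a difference of the two direct estimates against the common comparator, with their variances adding; a minimal sketch with illustrative numbers on a log odds ratio scale:

        # Bucher indirect comparison of A vs C through common comparator B:
        #   d_AC = d_AB - d_CB,  se_AC = sqrt(se_AB^2 + se_CB^2)
        import math

        def bucher(d_ab, se_ab, d_cb, se_cb, z=1.96):
            d = d_ab - d_cb
            se = math.sqrt(se_ab ** 2 + se_cb ** 2)
            return d, (d - z * se, d + z * se)

        d, ci = bucher(d_ab=-0.35, se_ab=0.12, d_cb=-0.10, se_cb=0.15)
        print("indirect log-OR A vs C: %.2f, 95%% CI (%.2f, %.2f)" % (d, *ci))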

  17. Comparison of Two Static in Vitro Digestion Methods for Screening the Bioaccessibility of Carotenoids in Fruits, Vegetables, and Animal Products.

    PubMed

    Rodrigues, Daniele B; Chitchumroonchokchai, Chureeporn; Mariutti, Lilian R B; Mercadante, Adriana Z; Failla, Mark L

    2017-12-27

    In vitro digestion methods are routinely used to assess the bioaccessibility of carotenoids and other dietary lipophilic compounds. Here, we compared the recovery of carotenoids and their efficiency of micellarization in digested fruits, vegetables, egg yolk, and salmon, and also in mixed-vegetable salads with and without either egg yolk or salmon, using the static INFOGEST method and the procedure of Failla et al. Carotenoid stability during the simulated digestion was ≥70%. The efficiencies of the partitioning of carotenoids into mixed micelles were similar when individual plant foods and salad meals were digested using the two static methods. Furthermore, the addition of cooked egg or salmon to vegetable salads increased the bioaccessibility of some carotenoids. Our findings showed that the two methods of in vitro digestion generated similar estimates of carotenoid retention and bioaccessibility for diverse foods.

  18. Internal trip capture estimator for mixed-use developments.

    DOT National Transportation Integrated Search

    2007-12-01

    This report describes a spreadsheet tool for estimating trip generation for mixed-use developments, accounting for internal trip capture. Internal trip capture is the portion of trips generated by a mixed-use development that both begin and end w...

  19. 3D modelling of the flow of self-compacting concrete with or without steel fibres. Part II: L-box test and the assessment of fibre reorientation during the flow

    NASA Astrophysics Data System (ADS)

    Deeb, R.; Kulasegaram, S.; Karihaloo, B. L.

    2014-12-01

    The three-dimensional Lagrangian particle-based smooth particle hydrodynamics method described in Part I of this two-part paper is used to simulate the flow of self-compacting concrete (SCC) with and without steel fibres in the L-box configuration. As in Part I, the simulation of the SCC mixes without fibres emphasises the distribution of large aggregate particles of different sizes throughout the flow, whereas the simulation of high strength SCC mixes which contain steel fibres is focused on the distribution of fibres and their orientation during the flow. The capabilities of this methodology are validated by comparing the simulation results with the L-box test carried out in the laboratory. A simple method is developed to assess the reorientation and distribution of short steel fibres in self-compacting concrete mixes during the flow. The reorientation of the fibres during the flow is used to estimate the fibre orientation factor (FOF) in a cross section perpendicular to the principal direction of flow. This estimation procedure involves the number of fibres cut by the section and their inclination to the cutting plane. This is useful to determine the FOF in practical image analysis on cut sections.

  20. Adjusted peak-flow frequency estimates for selected streamflow-gaging stations in or near Montana based on data through water year 2011: Chapter D in Montana StreamStats

    USGS Publications Warehouse

    Sando, Steven K.; Sando, Roy; McCarthy, Peter M.; Dutton, DeAnn M.

    2016-04-05

    The climatic conditions of the specific time period during which peak-flow data were collected at a given streamflow-gaging station (hereinafter referred to as gaging station) can substantially affect how well the peak-flow frequency (hereinafter referred to as frequency) results represent long-term hydrologic conditions. Differences in the timing of the periods of record can result in substantial inconsistencies in frequency estimates for hydrologically similar gaging stations. Potential for inconsistency increases with decreasing peak-flow record length. The representativeness of the frequency estimates for a short-term gaging station can be adjusted by various methods, including weighting the at-site results in association with frequency estimates from regional regression equations (RREs) by using the Weighted Independent Estimates (WIE) program. Also, for gaging stations that cannot be adjusted by using the WIE program because of regulation or drainage areas too large for application of RREs, frequency estimates might be improved by using record extension procedures, including a mixed-station analysis using the maintenance of variance type I (MOVE.1) procedure. The U.S. Geological Survey, in cooperation with the Montana Department of Transportation and the Montana Department of Natural Resources and Conservation, completed a study to provide adjusted frequency estimates for selected gaging stations through water year 2011. The purpose of Chapter D of this Scientific Investigations Report is to present adjusted frequency estimates for 504 selected streamflow-gaging stations in or near Montana based on data through water year 2011. Estimates of peak-flow magnitudes for the 66.7-, 50-, 42.9-, 20-, 10-, 4-, 2-, 1-, 0.5-, and 0.2-percent annual exceedance probabilities are reported. These annual exceedance probabilities correspond to the 1.5-, 2-, 2.33-, 5-, 10-, 25-, 50-, 100-, 200-, and 500-year recurrence intervals, respectively. The at-site frequency estimates were adjusted by weighting with frequency estimates from RREs using the WIE program for 438 selected gaging stations in Montana. These 438 selected gaging stations (1) had periods of record less than or equal to 40 years, (2) represented unregulated or minor regulation conditions, and (3) had drainage areas less than about 2,750 square miles. The weighted-average frequency estimates obtained by weighting with RREs generally are considered to provide improved frequency estimates. In some cases, there are substantial differences among the at-site frequency estimates, the regression-equation frequency estimates, and the weighted-average frequency estimates. In these cases, thoughtful consideration should be applied when selecting the appropriate frequency estimate. Some factors that might be considered when selecting the appropriate frequency estimate include (1) whether the specific gaging station has peak-flow characteristics that distinguish it from most other gaging stations used in developing the RREs for the hydrologic region; and (2) the length of the peak-flow record and the general climatic characteristics during the period when the peak-flow data were collected.
For critical structure-design applications, a conservative approach would be to select the higher of the at-site frequency estimate and the weighted-average frequency estimate. The mixed-station MOVE.1 procedure generally was applied in cases where three or more gaging stations were located on the same large river and some of the gaging stations could not be adjusted using the weighted-average method because of regulation or drainage areas too large for application of RREs. The mixed-station MOVE.1 procedure was applied to 66 selected gaging stations on 19 large rivers. The general approach for using mixed-station record extension procedures to adjust at-site frequencies involved (1) determining appropriate base periods for the gaging stations on the large rivers, (2) synthesizing peak-flow data for the gaging stations with incomplete peak-flow records during the base periods by using the mixed-station MOVE.1 procedure, and (3) conducting frequency analysis on the combined recorded and synthesized peak-flow data for each gaging station. Frequency estimates for the combined recorded and synthesized datasets for 66 gaging stations with incomplete peak-flow records during the base periods are presented. The uncertainties in the mixed-station record extension results are difficult to directly quantify; thus, it is important to understand the intended use of the estimated frequencies based on analysis of the combined recorded and synthesized datasets. The estimated frequencies are considered general estimates of frequency relations among gaging stations on the same stream channel that might be expected if the gaging stations had been gaged during the same long-term base period. However, because the mixed-station record extension procedures involve secondary statistical analysis with accompanying errors, the uncertainty of the frequency estimates is larger than would be obtained by collecting systematic records for the same number of years in the base period.
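
    The weighting of at-site and regression estimates described above is, in essence, inverse-variance weighting of two independent estimates (commonly carried out in log space for peak flows); a schematic version, not the exact WIE program computation:

        # Inverse-variance weighting of an at-site estimate with a regional
        # regression estimate; schematic, not the exact WIE implementation.
        def weighted_estimate(x_site, var_site, x_reg, var_reg):
            w_site, w_reg = 1.0 / var_site, 1.0 / var_reg
            x_w = (w_site * x_site + w_reg * x_reg) / (w_site + w_reg)
            return x_w, 1.0 / (w_site + w_reg)

        # e.g. log10 peak flow for one AEP from each source (illustrative values)
        print(weighted_estimate(x_site=3.42, var_site=0.012, x_reg=3.55, var_reg=0.020))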

  1. Efficient Research Design: Using Value-of-Information Analysis to Estimate the Optimal Mix of Top-down and Bottom-up Costing Approaches in an Economic Evaluation alongside a Clinical Trial.

    PubMed

    Wilson, Edward C F; Mugford, Miranda; Barton, Garry; Shepstone, Lee

    2016-04-01

    In designing economic evaluations alongside clinical trials, analysts are frequently faced with alternative methods of collecting the same data, the extremes being top-down ("gross costing") and bottom-up ("micro-costing") approaches. A priori, bottom-up approaches may be considered superior to top-down approaches but are also more expensive to collect and analyze. In this article, we use value-of-information analysis to estimate the efficient mix of observations on each method in a proposed clinical trial. By assigning a prior bivariate distribution to the 2 data collection processes, the predicted posterior (i.e., preposterior) mean and variance of the superior process can be calculated from proposed samples using either process. This is then used to calculate the preposterior mean and variance of incremental net benefit and hence the expected net gain of sampling. We apply this method to a previously collected data set to estimate the value of conducting a further trial and identifying the optimal mix of observations on drug costs at 2 levels: by individual item (process A) and by drug class (process B). We find that substituting a number of observations on process A for process B leads to a modest £35,000 increase in expected net gain of sampling. Drivers of the results are the correlation between the 2 processes and their relative cost. This method has potential use following a pilot study to inform efficient data collection approaches for a subsequent full-scale trial. It provides a formal quantitative approach to inform trialists whether it is efficient to collect resource use data on all patients in a trial or on a subset of patients only or to collect limited data on most and detailed data on a subset. © The Author(s) 2016.
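
    The decision criterion behind the "expected net gain of sampling" is the usual value-of-information balance, maximized over the mix of observations on the two costing processes:

        \[ \mathrm{ENGS}(n_A, n_B) = \mathrm{EVSI}(n_A, n_B) - \mathrm{Cost}(n_A, n_B), \]

    where EVSI is the expected value of the sample information from \(n_A\) observations on process A and \(n_B\) on process B, and Cost is the corresponding cost of collection and analysis.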

  2. 40 CFR 227.29 - Initial mixing.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... estimated by one of these methods, in order of preference: (1) When field data on the proposed dumping are... conjunction with an appropriate mathematical model acceptable to EPA or the District Engineer, as appropriate... proposed for discharge are available, these shall be used in conjunction with an appropriate mathematical...

  3. Tunnel Cost-Estimating Methods.

    DTIC Science & Technology

    1981-10-01

    wears down the bit very quickly; stratified rock or mixed face results in uneven thrust and excessive bearing wear and can cause large rocks to jam ...of Reclamation (USBUREC) Strawberry Aqueduct System, is approximately 3-1/4 miles long and 13 ft in diameter. It was moled through hard sandstone and

  4. Stand density guides for predicting growth of forest trees of southwest Idaho

    Treesearch

    Douglas D. Basford; John Sloan; Joy Roberts

    2010-01-01

    This paper presents a method for estimating stand growth from stand density and average diameter in stands of pure and mixed species in Southwest Idaho. The methods are adapted from a model developed for Douglas-fir, ponderosa pine, and lodgepole pine on the Salmon National Forest. Growth data were derived from ponderosa pine increment cores taken from sample plots on...

  5. Average stand age from forest inventory plots does not describe historical fire regimes in ponderosa pine and mixed-conifer forests of western North America

    Treesearch

    Jens T. Stevens; Hugh D. Safford; Malcolm P. North; Jeremy S. Fried; Andrew N. Gray; Peter M. Brown; Christopher R. Dolanc; Solomon Z. Dobrowski; Donald A. Falk; Calvin A. Farris; Jerry F. Franklin; Peter Z. Fulé; R. Keala Hagmann; Eric E. Knapp; Jay D. Miller; Douglas F. Smith; Thomas W. Swetnam; Alan H. Taylor; Julia A. Jones

    2016-01-01

    Quantifying historical fire regimes provides important information for managing contemporary forests. Historical fire frequency and severity can be estimated using several methods; each method has strengths and weaknesses and presents challenges for interpretation and verification. Recent efforts to quantify the timing of historical high-severity fire events in forests...

  6. Robust, Adaptive Functional Regression in Functional Mixed Model Framework.

    PubMed

    Zhu, Hongxiao; Brown, Philip J; Morris, Jeffrey S

    2011-09-01

    Functional data are increasingly encountered in scientific studies, and their high dimensionality and complexity lead to many analytical challenges. Various methods for functional data analysis have been developed, including functional response regression methods that involve regression of a functional response on univariate/multivariate predictors with nonparametrically represented functional coefficients. In existing methods, however, the functional regression can be sensitive to outlying curves and outlying regions of curves, so is not robust. In this paper, we introduce a new Bayesian method, robust functional mixed models (R-FMM), for performing robust functional regression within the general functional mixed model framework, which includes multiple continuous or categorical predictors and random effect functions accommodating potential between-function correlation induced by the experimental design. The underlying model involves a hierarchical scale mixture model for the fixed effects, random effect and residual error functions. These modeling assumptions across curves result in robust nonparametric estimators of the fixed and random effect functions which down-weight outlying curves and regions of curves, and produce statistics that can be used to flag global and local outliers. These assumptions also lead to distributions across wavelet coefficients that have outstanding sparsity and adaptive shrinkage properties, with great flexibility for the data to determine the sparsity and the heaviness of the tails. Together with the down-weighting of outliers, these within-curve properties lead to fixed and random effect function estimates that appear in our simulations to be remarkably adaptive in their ability to remove spurious features yet retain true features of the functions. We have developed general code to implement this fully Bayesian method that is automatic, requiring the user to only provide the functional data and design matrices. It is efficient enough to handle large data sets, and yields posterior samples of all model parameters that can be used to perform desired Bayesian estimation and inference. Although we present details for a specific implementation of the R-FMM using specific distributional choices in the hierarchical model, 1D functions, and wavelet transforms, the method can be applied more generally using other heavy-tailed distributions, higher dimensional functions (e.g. images), and using other invertible transformations as alternatives to wavelets.
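
    In generic notation (not necessarily the authors' exact formulation), the functional mixed model underlying R-FMM relates each observed curve to fixed-effect and random-effect functions:

        \[ Y_i(t) = \sum_{j=1}^{p} X_{ij}\,B_j(t) + \sum_{k=1}^{m} Z_{ik}\,U_k(t) + E_i(t), \]

    where the X and Z entries are fixed- and random-effect design values, \(B_j(t)\) and \(U_k(t)\) are the corresponding coefficient functions, and \(E_i(t)\) is the residual error function; the robust variant places heavy-tailed scale-mixture priors on these components.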

  7. Robust, Adaptive Functional Regression in Functional Mixed Model Framework

    PubMed Central

    Zhu, Hongxiao; Brown, Philip J.; Morris, Jeffrey S.

    2012-01-01

    Functional data are increasingly encountered in scientific studies, and their high dimensionality and complexity lead to many analytical challenges. Various methods for functional data analysis have been developed, including functional response regression methods that involve regression of a functional response on univariate/multivariate predictors with nonparametrically represented functional coefficients. In existing methods, however, the functional regression can be sensitive to outlying curves and outlying regions of curves, so is not robust. In this paper, we introduce a new Bayesian method, robust functional mixed models (R-FMM), for performing robust functional regression within the general functional mixed model framework, which includes multiple continuous or categorical predictors and random effect functions accommodating potential between-function correlation induced by the experimental design. The underlying model involves a hierarchical scale mixture model for the fixed effects, random effect and residual error functions. These modeling assumptions across curves result in robust nonparametric estimators of the fixed and random effect functions which down-weight outlying curves and regions of curves, and produce statistics that can be used to flag global and local outliers. These assumptions also lead to distributions across wavelet coefficients that have outstanding sparsity and adaptive shrinkage properties, with great flexibility for the data to determine the sparsity and the heaviness of the tails. Together with the down-weighting of outliers, these within-curve properties lead to fixed and random effect function estimates that appear in our simulations to be remarkably adaptive in their ability to remove spurious features yet retain true features of the functions. We have developed general code to implement this fully Bayesian method that is automatic, requiring the user to only provide the functional data and design matrices. It is efficient enough to handle large data sets, and yields posterior samples of all model parameters that can be used to perform desired Bayesian estimation and inference. Although we present details for a specific implementation of the R-FMM using specific distributional choices in the hierarchical model, 1D functions, and wavelet transforms, the method can be applied more generally using other heavy-tailed distributions, higher dimensional functions (e.g. images), and using other invertible transformations as alternatives to wavelets. PMID:22308015

  8. Mixed-layered bismuth-oxygen-iodine materials for capture and waste disposal of radioactive iodine

    DOEpatents

    Krumhansl, James L; Nenoff, Tina M

    2013-02-26

    Materials and methods of synthesizing mixed-layered bismuth oxy-iodine materials, which can be synthesized in the presence of aqueous radioactive iodine species found in caustic solutions (e.g., NaOH or KOH). This technology provides a one-step process for both iodine sequestration and storage from nuclear fuel cycles. It results in materials that will be durable under repository conditions much like those found at the Waste Isolation Pilot Plant (WIPP) and estimated for Yucca Mountain (YMP). By controlling reactant concentrations, optimized compositions of these mixed-layered bismuth oxy-iodine inorganic materials are produced that have both a high iodine weight percentage and low solubility in groundwater environments.

  9. Using Satellite Observations to Evaluate the AeroCOM Volcanic Emissions Inventory and the Dispersal of Volcanic SO2 Clouds in MERRA

    NASA Technical Reports Server (NTRS)

    Hughes, Eric J.; Krotkov, Nickolay; da Silva, Arlindo; Colarco, Peter

    2015-01-01

    Simulation of volcanic emissions in climate models requires information that describes the eruption of the emissions into the atmosphere. While the total amount of gases and aerosols released from a volcanic eruption can be readily estimated from satellite observations, information about the source parameters, like injection altitude, eruption time, and duration, is often not directly known. The AeroCOM volcanic emissions inventory provides estimates of eruption source parameters and has been used to initialize volcanic emissions in reanalysis projects like MERRA. The AeroCOM inventory provides an eruption's daily SO2 flux and plume-top altitude, yet an eruption can be very short lived, lasting only a few hours, and can emit clouds at multiple altitudes. Case studies comparing the satellite-observed dispersal of volcanic SO2 clouds to simulations in MERRA have shown mixed results. Some cases, such as Okmok (2008), show good agreement with observations, while for others, such as Sierra Negra (2005), the observed initial SO2 mass is half of that in the simulations. In other cases, such as Soufriere Hills (2006), the initial SO2 amount agrees with the observations but the dispersal rates differ markedly. In the aviation hazards community, deriving accurate source terms is crucial for monitoring and short-term (24-h) forecasting of volcanic clouds. Back trajectory methods have been developed that use satellite observations and transport models to estimate the injection altitude, eruption time, and eruption duration of observed volcanic clouds. These methods can provide eruption timing estimates at 2-hour temporal resolution and estimate the altitude and depth of a volcanic cloud. To better understand the differences between MERRA simulations and volcanic SO2 observations, back trajectory methods are used to estimate the source term parameters for a few volcanic eruptions, and these are compared to the corresponding entries in the AeroCOM volcanic emission inventory. The nature of the mixed results is discussed with respect to the source term estimates.

  10. Generation of realistic scene using illuminant estimation and mixed chromatic adaptation

    NASA Astrophysics Data System (ADS)

    Kim, Jae-Chul; Hong, Sang-Gi; Kim, Dong-Ho; Park, Jong-Hyun

    2003-12-01

    An algorithm for combining a real image with a virtual model is proposed to increase the realism of synthesized images. Current approaches to synthesizing a real image with a virtual model rely on a surface reflection model and various geometric techniques, but they do not sufficiently consider the characteristics of the various illuminants in the real image. In addition, although chromatic adaptation plays a vital role in accommodating different illuminants across the two media viewing conditions, it is not taken into account in existing methods, so high-quality synthesized images are hard to obtain. In this paper, we propose a two-phase image synthesis algorithm. First, the surface reflectance of the maximum high-light region (MHR) is estimated using the three eigenvectors obtained from principal component analysis (PCA) applied to the surface reflectances of 1269 Munsell samples. The combined spectral value of MHR, i.e., the product of surface reflectance and the spectral power distribution (SPD) of an illuminant, is then estimated using the three eigenvectors obtained from PCA applied to the products of the surface reflectances of the 1269 Munsell samples and the SPDs of four CIE Standard Illuminants (A, C, D50, D65). By dividing the average combined spectral values of MHR by the average surface reflectances of MHR, the illuminant of a real image can be estimated. Second, mixed chromatic adaptation (S-LMS), using the estimated and an external illuminant, is applied to the virtual-model image. For evaluation, experiments with synthetic and real scenes were performed, showing that the proposed method is effective in synthesizing real and virtual scenes under various illuminants.

  11. Significance of the model considering mixed grain-size for inverse analysis of turbidites

    NASA Astrophysics Data System (ADS)

    Nakao, K.; Naruse, H.; Tokuhashi, S., Sr.

    2016-12-01

    A method for inverse analysis of turbidity currents is proposed for application to field observations. Estimating the initial conditions of catastrophic events from field observations is an important problem in sedimentological research. For instance, inverse analyses have been used to estimate hydraulic conditions from topographic observations of pyroclastic flows (Rossano et al., 1996), real-time monitored debris-flow events (Fraccarollo and Papa, 2000), tsunami deposits (Jaffe and Gelfenbaum, 2007), and ancient turbidites (Falcini et al., 2009). Such inverse analyses require forward models, and most turbidity current models employ uniform grain-size particles. Turbidity currents, however, are best characterized by the variation of their grain-size distribution. Although numerical models with mixed grain-size particles exist, their computational cost makes application to natural examples difficult (Lesshaft et al., 2011). Here we extend a turbidity current model based on the non-steady 1D shallow-water equations to mixed grain-size particles at low calculation cost and apply the model to inverse analysis. In this study, we compared two forward models considering uniform and mixed grain-size particles, respectively. We adopted an inverse analysis based on the Simplex method with multi-point starts, which optimizes the initial conditions (thickness, depth-averaged velocity, and depth-averaged volumetric concentration of a turbidity current), and employed the result of the forward model [h: 2.0 m, U: 5.0 m/s, C: 0.01%] as reference data. The results show that inverse analysis using the mixed grain-size model recovers the known initial condition of the reference data even when optimization starts far from the true solution, whereas inverse analysis using the uniform grain-size model requires starting parameters within a quite narrow range near the solution. The uniform grain-size model often converges to a local optimum that differs significantly from the true solution. In conclusion, we propose an optimization method based on a model considering mixed grain-size particles, and show its application to examples of turbidites in the Kiyosumi Formation, Boso Peninsula, Japan.
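
    A schematic of the optimization loop described (multi-start Simplex search over initial thickness, velocity, and concentration, minimizing misfit to reference data); the forward model below is a toy placeholder, since the 1D shallow-water solver itself is not reproduced here:

        # Multi-start Nelder-Mead (Simplex) inversion of (h, U, C);
        # `forward_model` is a toy stand-in for the shallow-water solver.
        import numpy as np
        from scipy.optimize import minimize

        def forward_model(h, U, C):
            x = np.linspace(0.0, 1.0, 50)
            return C * h * U * np.exp(-5.0 * x / (U + 1e-9))   # placeholder only

        reference = forward_model(2.0, 5.0, 0.0001)            # "known" event

        def misfit(p):
            h, U, C = p
            if h <= 0 or U <= 0 or C <= 0:
                return 1e12
            return float(np.sum((forward_model(h, U, C) - reference) ** 2))

        best = None
        for x0 in [(1.0, 2.0, 0.001), (4.0, 8.0, 0.00005), (0.5, 1.0, 0.0002)]:
            res = minimize(misfit, x0, method="Nelder-Mead")
            best = res if best is None or res.fun < best.fun else best
        print(best.x)   # approaches (2.0, 5.0, 1e-4) up to parameter trade-offs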

  12. Comparison of Sexual Mixing Patterns for Syphilis in Endemic and Outbreak Settings

    PubMed Central

    Doherty, Irene A; Adimora, Adaora A; Muth, Stephen Q; Serre, Marc L; Leone, Peter A; Miller, William C

    2015-01-01

    Background In a largely rural region of North Carolina during 1998–2002, outbreaks of heterosexually transmitted syphilis occurred, tied to crack cocaine use and the exchange of sex for drugs and money. Sexual partnership mixing patterns are an important characteristic of sexual networks that relates to the transmission dynamics of STIs. Methods Using contact-tracing data collected by Disease Intervention Specialists, we estimated Newman assortativity coefficients and compared values in counties experiencing syphilis outbreaks to non-outbreak counties, with respect to race/ethnicity, race/ethnicity and age, the cases' number of social/sexual contacts, infected contacts, sex partners, and infected sex partners, and syphilis disease stage (primary, secondary, early latent). Results Individuals in the outbreak counties had more contacts, and mixing by the number of sex partners was disassortative in outbreak counties but assortative in non-outbreak counties. Whereas mixing by syphilis disease stage was minimally assortative in outbreak counties, it was disassortative in non-outbreak areas. Partnerships were relatively discordant by age, especially among older White men, who often chose considerably younger female partners. Conclusions Whether assortative mixing exacerbates or attenuates the reach of STIs into different populations depends on the characteristic/attribute and epidemiologic phase. Examination of sexual partnership characteristics and mixing patterns offers insights into the growth of STI outbreaks that complement other research methods. PMID:21217418
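
    For readers unfamiliar with assortativity coefficients, the sketch below computes Newman's categorical and degree assortativity on a toy contact network with networkx; the graph and attribute values are invented and unrelated to the study's contact-tracing data.

      import networkx as nx

      G = nx.Graph()
      G.add_nodes_from([1, 2, 3, 4], group="A")   # e.g. one demographic group
      G.add_nodes_from([5, 6, 7, 8], group="B")
      G.add_edges_from([(1, 2), (3, 4), (5, 6), (7, 8), (2, 5)])

      # Categorical assortativity by node attribute (near +1: within-group
      # mixing; negative: disassortative mixing across groups).
      print(nx.attribute_assortativity_coefficient(G, "group"))

      # Scalar assortativity by number of contacts (degree).
      print(nx.degree_assortativity_coefficient(G))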

  13. Plasma-Based Mixing Actuation in Airflow, Quantitated by Probe Breakdown Fluorescence

    NASA Astrophysics Data System (ADS)

    Leonov, Sergey; Firsov, Alexander; Shurupov, Michail; Yarantsev, Dmitry; Ohio State University Team; JIHT RAS Team

    2013-09-01

    Effective mixing of fuel and oxidizer in an air-breathing engine under compressible conditions is an essential problem of high-speed combustion, owing to the short residence time of the gas mixture in a combustor of limited length. The effect of mixing actuation by plasma is observed because of the gasdynamic instability that arises after a long filamentary discharge of submicrosecond duration is generated along the contact zone of two co-flowing gases. The work focuses on detailed consideration of the mechanism of the plasma-promoted gas instability, on the effect of the specific localization of the discharge, and on the development of diagnostics for qualitative and quantitative estimation of the mixing efficiency. The dynamics of the relative concentration of gas components is examined quantitatively by means of Probe Breakdown Fluorescence (PBF). In this method, optical emission spectra of a weak filamentary high-voltage nanosecond probe discharge are collected from a local zone of interest in the airflow. The first measurements of the mixing efficiency in the vicinity of a wall-injected secondary gas are presented. It is shown that the PBF method can deliver experimental data on the state of the two-component medium with time and spatial resolutions of <1 μs and <5 mm, respectively. Funded by AFOSR under the supervision of Dr. Chiping Li.

  14. An investigation on the population structure of mixed infections of Mycobacterium tuberculosis in Inner Mongolia, China.

    PubMed

    Wang, Xiaoying; Liu, Haican; Wei, Jianhao; Wu, Xiaocui; Yu, Qin; Zhao, Xiuqin; Lyu, Jianxin; Lou, Yongliang; Wan, Kanglin

    2015-12-01

    Mixed infections of Mycobacterium tuberculosis strains have attracted increasing attention due to their growing frequency worldwide, especially in areas of high tuberculosis (TB) prevalence. In this study, we assessed the rates of mixed infections in a setting with high TB prevalence in the Inner Mongolia Autonomous Region of China. A total of 384 M. tuberculosis isolates from the local TB hospital were subjected to the mycobacterial interspersed repetitive unit-variable number tandem repeat (MIRU-VNTR) typing method. Single clones of the strains with mixed infections were separated by subculturing them on Löwenstein-Jensen medium. Of these 384 isolates, twelve strains (3.13%) were identified as mixed infections by MIRU-VNTR. Statistical analysis indicated that demographic characteristics and drug susceptibility profiles showed no statistically significant association with mixed infection. We further subcultured the mixed-infection strains and selected 30 clones from the subculture of each mixed infection. Genotyping data revealed that eight (8/12, 66.7%) strains with mixed infections had converted into single infections through subculture. A higher growth rate was associated with an increasing proportion of the variant subpopulation through subculture. In conclusion, using the MIRU-VNTR method, we demonstrate that the prevalence of mixed infections in Inner Mongolia is low. Additionally, our findings reveal that subculture changes the population structure of mixed infections: subpopulations with higher growth rates show better fitness, which is associated with a higher proportion of the population structure after subculture. This study highlights that the use of clinical specimens, rather than subcultured isolates, is preferred for estimating the prevalence of mixed infections in specific regions. Copyright © 2015 Elsevier Ltd. All rights reserved.

  15. Dynamic properties of composite cemented clay.

    PubMed

    Cai, Yuan-Qiang; Liang, Xu

    2004-03-01

    In this work, the dynamic properties of composite cemented clay over a wide range of strains were studied, considering the effects of different mixing ratios and changes in confining pressure, through dynamic triaxial tests. A simple and practical method to estimate the dynamic elastic modulus and damping ratio is proposed, and a related empirical normalized formula is also presented. The results provide useful guidelines for preliminary estimation of the cement content required to improve the dynamic properties of clays.

  16. Number Needed to Benefit From Information (NNBI): Proposal From a Mixed Methods Research Study With Practicing Family Physicians

    PubMed Central

    Pluye, Pierre; Grad, Roland M.; Johnson-Lafleur, Janique; Granikov, Vera; Shulha, Michael; Marlow, Bernard; Ricarte, Ivan Luiz Marques

    2013-01-01

    PURPOSE We wanted to describe family physicians' use of information from an electronic knowledge resource for answering clinical questions, and their perception of subsequent patient health outcomes; and to estimate the number needed to benefit from information (NNBI), defined as the number of patients for whom clinical information was retrieved for 1 to benefit. METHODS We undertook a mixed methods research study, combining quantitative longitudinal and qualitative research. Participants were 41 family physicians from primary care clinics across Canada. Physicians were given access to 1 electronic knowledge resource on a handheld computer in 2008–2009. For the outcome assessment, participants rated their searches using a validated method. Rated searches were examined during interviews guided by log reports that included ratings. Cases were defined as clearly described searches in which clinical information was used for a specific patient. For each case, interviewees described information-related patient health outcomes. For the mixed methods data analysis, quantitative and qualitative data were merged into clinical vignettes (each vignette describing a case). We then estimated the NNBI. RESULTS In 715 of 1,193 searches for information conducted during an average of 86 days, the search objective was directly linked to a patient. Of those searches, 188 were considered to be cases. In 53 cases, participants associated the use of information with at least 1 patient health benefit. This finding suggested an NNBI of 14 (715/53). CONCLUSION The NNBI may be used in further experimental research to compare electronic knowledge resources. A low NNBI can encourage clinicians to search for information more frequently. If all searches had benefits, the NNBI would be 1. In addition to patient benefits, learning and knowledge reinforcement outcomes were frequently reported. PMID:24218380

  17. Comparison of multi-subject ICA methods for analysis of fMRI data

    PubMed Central

    Erhardt, Erik Barry; Rachakonda, Srinivas; Bedrick, Edward; Allen, Elena; Adali, Tülay; Calhoun, Vince D.

    2010-01-01

    Spatial independent component analysis (ICA) applied to functional magnetic resonance imaging (fMRI) data identifies functionally connected networks by estimating spatially independent patterns from their linearly mixed fMRI signals. Several multi-subject ICA approaches estimating subject-specific time courses (TCs) and spatial maps (SMs) have been developed; however, there has not yet been a full comparison of the implications of their use. Here, we provide extensive comparisons of four multi-subject ICA approaches in combination with data reduction methods for simulated and fMRI task data. For multi-subject ICA, the data first undergo reduction at the subject and group levels using principal component analysis (PCA). Comparisons of subject-specific, spatial concatenation, and group data mean subject-level reduction strategies using PCA and probabilistic PCA (PPCA) show that computationally intensive PPCA is equivalent to PCA, and that subject-specific and group data mean subject-level PCA are preferred because of well-estimated TCs and SMs. Second, aggregate independent components are estimated using either noise-free ICA or probabilistic ICA (PICA). Third, subject-specific SMs and TCs are estimated using back-reconstruction. We compare several direct group ICA (GICA) back-reconstruction approaches (GICA1-GICA3) and an indirect back-reconstruction approach, spatio-temporal regression (STR, or dual regression). Results show the earlier group ICA (GICA1) approximates STR; however, STR has contradictory assumptions and may show mixed-component artifacts in estimated SMs. Our evidence-based recommendation is to use GICA3, introduced here, with subject-specific PCA and noise-free ICA, providing the most robust and accurate estimated SMs and TCs in addition to offering an intuitive interpretation. PMID:21162045
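
    The indirect back-reconstruction step (STR, or dual regression) reduces to two least-squares problems. The Python sketch below shows only the mechanics on synthetic arrays; dimensions and data are placeholders, not fMRI results.

      import numpy as np

      rng = np.random.default_rng(1)
      T, V, K = 200, 5000, 10                   # time points, voxels, components
      group_sm = rng.standard_normal((K, V))    # aggregate spatial maps
      data = rng.standard_normal((T, V))        # one subject's data

      # Step 1: regress group maps onto the data -> subject time courses (T x K).
      tc = np.linalg.lstsq(group_sm.T, data.T, rcond=None)[0].T

      # Step 2: regress those time courses onto the data -> subject maps (K x V).
      sm = np.linalg.lstsq(tc, data, rcond=None)[0]
      print(tc.shape, sm.shape)   # (200, 10) (10, 5000)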

  18. Parameterizing correlations between hydrometeor species in mixed-phase Arctic clouds

    NASA Astrophysics Data System (ADS)

    Larson, Vincent E.; Nielsen, Brandon J.; Fan, Jiwen; Ovchinnikov, Mikhail

    2011-01-01

    Mixed-phase Arctic clouds, like other clouds, contain small-scale variability in hydrometeor fields, such as cloud water or snow mixing ratio. This variability may be worth parameterizing in coarse-resolution numerical models. In particular, for modeling multispecies processes such as accretion and aggregation, it would be useful to parameterize subgrid correlations among hydrometeor species. However, one difficulty is that there exist many hydrometeor species and many microphysical processes, leading to complexity and computational expense. Existing lower and upper bounds on linear correlation coefficients are too loose to serve directly as a method to predict subgrid correlations. Therefore, this paper proposes an alternative method that begins with the spherical parameterization framework of Pinheiro and Bates (1996), which expresses the correlation matrix in terms of its Cholesky factorization. The values of the elements of the Cholesky matrix are populated here using a "cSigma" parameterization that we introduce based on the aforementioned bounds on correlations. The method has three advantages: (1) the computational expense is tolerable; (2) the correlations are, by construction, guaranteed to be consistent with each other; and (3) the methodology is fairly general and hence may be applicable to other problems. The method is tested noninteractively using simulations of three Arctic mixed-phase cloud cases from two field experiments: the Indirect and Semi-Direct Aerosol Campaign and the Mixed-Phase Arctic Cloud Experiment. Benchmark simulations are performed using a large-eddy simulation (LES) model that includes a bin microphysical scheme. The correlations estimated by the new method satisfactorily approximate the correlations produced by the LES.
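
    The spherical (Cholesky) construction guarantees a valid correlation matrix for any choice of angles, which is the property the cSigma parameterization exploits. A minimal Python sketch, with arbitrary placeholder angles rather than the paper's parameterized values:

      import numpy as np

      def cholesky_from_angles(theta):
          # theta: list of angle rows; row i has i angles. Each row of L is a
          # point on the unit sphere, so C = L @ L.T has unit diagonal and is
          # positive semidefinite by construction (Pinheiro and Bates, 1996).
          n = len(theta) + 1
          L = np.zeros((n, n))
          L[0, 0] = 1.0
          for i, angles in enumerate(theta, start=1):
              s = 1.0
              for j, a in enumerate(angles):
                  L[i, j] = s * np.cos(a)
                  s *= np.sin(a)
              L[i, len(angles)] = s
          return L

      L = cholesky_from_angles([[0.7], [1.1, 0.4], [2.0, 1.3, 0.6]])  # 4 species
      C = L @ L.T
      print(np.diag(C), np.all(np.linalg.eigvalsh(C) >= 0))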

  19. A Multi-wavenumber Theory for Eddy Diffusivities: Applications to the DIMES Region

    NASA Astrophysics Data System (ADS)

    Chen, R.; Gille, S. T.; McClean, J.; Flierl, G.; Griesel, A.

    2014-12-01

    Climate models are sensitive to the representation of ocean mixing processes. This has motivated recent efforts to collect observations aimed at improving mixing estimates and parameterizations. The US/UK field program Diapycnal and Isopycnal Mixing Experiment in the Southern Ocean (DIMES), begun in 2009, is providing such estimates upstream of and within the Drake Passage. This region is characterized by complex topography and strong zonal jets. In previous studies, mixing length theories, based on the assumption that eddies are dominated by a single wavenumber and phase speed, were formulated to represent the estimated mixing patterns in jets. In spite of its success in some other scenarios, however, the single-wavenumber theory does not effectively predict the vertical structure of observed eddy diffusivities in the DIMES area. Considering that eddy motions encompass a wide range of wavenumbers, all of which contribute to mixing, in this study we formulate a multi-wavenumber theory to predict eddy mixing rates. We test our theory on a domain encompassing the entire Southern Ocean. We estimated eddy diffusivities and mixing lengths from one million numerical floats in a global eddying model. These float-based mixing estimates were compared with the predictions from both the single-wavenumber and the multi-wavenumber theories. Our preliminary results in the DIMES area indicate that, compared to the single-wavenumber theory, the multi-wavenumber theory better predicts the vertical mixing structure in the vast areas where the mean flow is weak; in the intense jet region, however, both theories have similar predictive skill.

  20. Evaluating the Accuracy of Common Runoff Estimation Methods for New Impervious Hot-Mix Asphalt

    EPA Science Inventory

    Accurately predicting runoff volume from impervious surfaces for water quality design events (e.g., 25.4 mm) is important for sizing green infrastructure stormwater control measures to meet water quality and infiltration design targets. The objective of this research was to quan...

  1. Quantifying Evaporation and Evaluating Runoff Estimation Methods in a Permeable Pavement System

    EPA Science Inventory

    The U.S. Environmental Protection Agency constructed a 0.4-ha parking lot in Edison, New Jersey, that incorporated permeable pavement in the parking lanes which were designed to receive run-on from the impervious hot-mix asphalt driving lanes. Twelve lined permeable pavement sec...

  2. An Investigation of Comprehension Processes among Adolescent English Learners with Reading Difficulties

    ERIC Educational Resources Information Center

    Lesaux, Nonie K.; Harris, Julie Russ

    2017-01-01

    This mixed-methods study examines the reading skills and processes of early adolescent Latino English learners demonstrating below-average reading comprehension performance (N = 41, mean age = 13 years). Standardized measures were used to estimate participants' word reading and vocabulary knowledge, and interviews were conducted to examine reading…

  3. Estimates of water source contributions in a dynamic urban water supply system inferred via a Bayesian stable isotope mixing model

    NASA Astrophysics Data System (ADS)

    Jameel, M. Y.; Brewer, S.; Fiorella, R.; Tipple, B. J.; Bowen, G. J.; Terry, S.

    2017-12-01

    Public water supply systems (PWSS) are complex distribution systems and critical infrastructure, making them vulnerable to physical disruption and contamination. Exploring the susceptibility of PWSS to such perturbations requires detailed knowledge of the supply system structure and operation. Although the physical structure of supply systems (i.e., pipeline connections) is usually well documented for developed cities, the actual flow patterns of water in these systems are typically unknown or estimated based on hydrodynamic models with limited observational validation. Here, we present a novel method for mapping the flow structure of water in a large, complex PWSS, building upon recent work highlighting the potential of stable isotopes of water (SIW) to document water management practices within complex PWSS. We sampled a major water distribution system of the Salt Lake Valley, Utah, measuring SIW of water sources, treatment facilities, and numerous sites within the supply system. We then developed a hierarchical Bayesian (HB) isotope mixing model to quantify the proportion of water supplied by different sources at sites within the supply system. Known production volumes and spatial distance effects were used to define the prior probabilities for each source; however, we did not include other physical information about the supply system. Our results were in general agreement with those obtained by hydrodynamic models and provide quantitative estimates of the contributions of different water sources to a given site, along with robust estimates of uncertainty. Secondary properties of the supply system, such as regions of "static" and "dynamic" sourcing (e.g., regions supplied dominantly by one source vs. those experiencing active mixing between multiple sources), can be inferred from the results. The HB isotope mixing model offers a new investigative technique for analyzing PWSS and documenting aspects of supply system structure and operation that are otherwise challenging to observe. The method could allow water managers to document spatiotemporal variation in PWSS flow patterns, critical for interrogating the distribution system to inform operational decision making or disaster response, optimize water supply, and monitor and enforce water rights.
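
    A full hierarchical Bayesian treatment is beyond a short example, but the core mass-balance idea can be sketched as a constrained least-squares problem: find nonnegative source fractions, summing to one, that reproduce a site's isotope values. All numbers below are invented stand-ins for measured SIW values.

      import numpy as np
      from scipy.optimize import nnls

      sources = np.array([[-16.0, -120.0],    # source A: (d18O, d2H)
                          [-12.0,  -90.0],    # source B
                          [-14.0, -105.0]])   # source C
      site = np.array([-13.4, -99.0])         # mixed water at one site

      w = 1e3   # heavy weight enforcing the sum-to-one constraint
      A = np.vstack([sources.T, w * np.ones(3)])
      b = np.concatenate([site, [w]])
      fractions, _ = nnls(A, b)               # nonnegative mixing proportions
      print(fractions, fractions.sum())       # fractions ~sum to 1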

  4. Near surface bulk density estimates of NEAs from radar observations and permittivity measurements of powdered geologic material

    NASA Astrophysics Data System (ADS)

    Hickson, Dylan; Boivin, Alexandre; Daly, Michael G.; Ghent, Rebecca; Nolan, Michael C.; Tait, Kimberly; Cunje, Alister; Tsai, Chun An

    2018-05-01

    The variations in near-surface properties and regolith structure of asteroids are currently not well constrained by remote sensing techniques. Radar is a useful tool for such determinations of Near-Earth Asteroids (NEAs), as the power of the signal reflected from the surface depends on the bulk density, ρbd, and dielectric permittivity. In this study, high-precision complex permittivity measurements of powdered aluminum oxide and dunite samples are used to characterize the change in the real part of the permittivity with the bulk density of the sample. In this work, we use silica aerogel for the first time to increase the void space in the samples (and decrease the bulk density) without significantly altering the electrical properties. We fit various mixing equations to the experimental results. The Looyenga-Landau-Lifshitz mixing formula has the best fit, whereas the Lichtenecker mixing formula, which is typically used to approximate planetary regolith, does not model the results well. We find that the Looyenga-Landau-Lifshitz formula adequately matches lunar regolith permittivity measurements, and we incorporate it into an existing model for obtaining asteroid regolith bulk density from radar returns, which is then used to estimate the bulk density in the near surface of the NEAs (101955) Bennu and (25143) Itokawa. Constraints on the material properties appropriate for either asteroid give average estimates of ρbd = 1.27 ± 0.33 g/cm3 for Bennu and ρbd = 1.68 ± 0.53 g/cm3 for Itokawa. We conclude that our data suggest that the Looyenga-Landau-Lifshitz mixing model, in tandem with an appropriate radar scattering model, is the best method for estimating the bulk densities of regoliths from radar observations of airless bodies.
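
    The Looyenga-Landau-Lifshitz formula is simple to apply and invert. In the two-phase (grain plus vacuum) Python sketch below, the grain permittivity and density are illustrative assumptions, not the paper's measured values:

      import numpy as np

      def lll_permittivity(eps_solid, f, eps_void=1.0):
          # eps_eff**(1/3) = f*eps_solid**(1/3) + (1 - f)*eps_void**(1/3)
          return (f * eps_solid ** (1 / 3) + (1 - f) * eps_void ** (1 / 3)) ** 3

      def solid_fraction(eps_eff, eps_solid, eps_void=1.0):
          # Invert LLL for the solid volume fraction.
          return ((eps_eff ** (1 / 3) - eps_void ** (1 / 3))
                  / (eps_solid ** (1 / 3) - eps_void ** (1 / 3)))

      eps_solid, grain_density = 7.0, 3.3      # assumed dunite-like grains
      f = solid_fraction(eps_eff=3.0, eps_solid=eps_solid)
      print(f, f * grain_density)              # volume fraction, bulk density (g/cm3)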

  5. Use of multispectral satellite remote sensing to assess mixing of suspended sediment downstream of large river confluences

    NASA Astrophysics Data System (ADS)

    Umar, M.; Rhoads, Bruce L.; Greenberg, Jonathan A.

    2018-01-01

    Although past work has noted that contrasts in turbidity often are detectable on remotely sensed images of rivers downstream from confluences, no systematic methodology has been developed for assessing mixing over distance of confluent flows with differing surficial suspended sediment concentrations (SSSC). In contrast to field measurements of mixing below confluences, satellite remote-sensing can provide detailed information on spatial distributions of SSSC over long distances. This paper presents a methodology that uses remote-sensing data to estimate spatial patterns of SSSC downstream of confluences along large rivers and to determine changes in the amount of mixing over distance from confluences. The method develops a calibrated Random Forest (RF) model by relating training SSSC data from river gaging stations to derived spectral indices for the pixels corresponding to gaging-station locations. The calibrated model is then used to predict SSSC values for every river pixel in a remotely sensed image, which provides the basis for mapping of spatial variability in SSSCs along the river. The pixel data are used to estimate average surficial values of SSSC at cross sections spaced uniformly along the river. Based on the cross-section data, a mixing metric is computed for each cross section. The spatial pattern of change in this metric over distance can be used to define rates and length scales of surficial mixing of suspended sediment downstream of a confluence. This type of information is useful for exploring the potential influence of various controlling factors on mixing downstream of confluences, for evaluating how mixing in a river system varies over time and space, and for determining how these variations influence water quality and ecological conditions along the river.
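
    The calibration step pairs gaging-station SSSC values with spectral indices and fits a Random Forest, which is then applied to every river pixel. A minimal sklearn sketch with synthetic placeholder data (not the study's imagery):

      import numpy as np
      from sklearn.ensemble import RandomForestRegressor

      rng = np.random.default_rng(2)
      X_train = rng.uniform(0, 1, (120, 4))     # spectral indices at stations
      sssc_train = 50 + 400 * X_train[:, 0] + 10 * rng.standard_normal(120)

      rf = RandomForestRegressor(n_estimators=300, random_state=0)
      rf.fit(X_train, sssc_train)               # calibrated RF model

      X_pixels = rng.uniform(0, 1, (10_000, 4)) # indices for every river pixel
      sssc_map = rf.predict(X_pixels)           # input to cross-section averages
      print(sssc_map.mean())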

  6. Numerical simulation of a plane turbulent mixing layer, with applications to isothermal, rapid reactions

    NASA Technical Reports Server (NTRS)

    Lin, P.; Pratt, D. T.

    1987-01-01

    A hybrid method has been developed for the numerical prediction of turbulent mixing in a spatially-developing, free shear layer. Most significantly, the computation incorporates the effects of large-scale structures, Schmidt number and Reynolds number on mixing, which have been overlooked in the past. In flow field prediction, large-eddy simulation was conducted by a modified 2-D vortex method with subgrid-scale modeling. The predicted mean velocities, shear layer growth rates, Reynolds stresses, and the RMS of longitudinal velocity fluctuations were found to be in good agreement with experiments, although the lateral velocity fluctuations were overpredicted. In scalar transport, the Monte Carlo method was extended to the simulation of the time-dependent pdf transport equation. For the first time, the mixing frequency in Curl's coalescence/dispersion model was estimated by using Broadwell and Breidenthal's theory of micromixing, which involves Schmidt number, Reynolds number and the local vorticity. Numerical tests were performed for a gaseous case and an aqueous case. Evidence that pure freestream fluids are entrained into the layer by large-scale motions was found in the predicted pdf. Mean concentration profiles were found to be insensitive to Schmidt number, while the unmixedness was higher for higher Schmidt number. Applications were made to mixing layers with isothermal, fast reactions. The predicted difference in product thickness of the two cases was in reasonable quantitative agreement with experimental measurements.
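
    Curl's coalescence/dispersion model itself is compact: at each step, randomly chosen particle pairs replace their scalar values with the pair mean at a prescribed mixing frequency. The sketch below uses a fixed frequency, whereas the paper estimates it from Schmidt number, Reynolds number and the local vorticity.

      import numpy as np

      rng = np.random.default_rng(3)
      n, omega, dt, steps = 10_000, 2.0, 0.01, 500
      phi = rng.choice([0.0, 1.0], size=n)   # bimodal scalar: two freestreams

      for _ in range(steps):
          n_pairs = rng.binomial(n // 2, omega * dt)  # pairs mixed this step
          idx = rng.permutation(n)[: 2 * n_pairs]
          a, b = idx[:n_pairs], idx[n_pairs:]
          mean = 0.5 * (phi[a] + phi[b])
          phi[a] = mean                               # coalesce to pair mean
          phi[b] = mean

      print(phi.mean(), phi.std())   # mean conserved; unmixedness decays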

  7. Bounded influence function based inference in joint modelling of ordinal partial linear model and accelerated failure time model.

    PubMed

    Chakraborty, Arindom

    2016-12-01

    A common objective in longitudinal studies is to characterize the relationship between a longitudinal response process and time-to-event data. The ordinal nature of the response and possible missing information on covariates add complications to the joint model. In such circumstances, influential observations often present in the data may upset the analysis. In this paper, a joint model based on an ordinal partial mixed model and an accelerated failure time model is used to account for the repeated ordered response and the time-to-event data, respectively. We propose an influence function-based robust estimation method. A Monte Carlo expectation-maximization algorithm is used for parameter estimation. A detailed simulation study has been carried out to evaluate the performance of the proposed method. As an application, data on muscular dystrophy among children are used. Robust estimates are then compared with classical maximum likelihood estimates. © The Author(s) 2014.

  8. Absolute parameters for AI Phoenicis using WASP photometry

    NASA Astrophysics Data System (ADS)

    Kirkby-Kent, J. A.; Maxted, P. F. L.; Serenelli, A. M.; Turner, O. D.; Evans, D. F.; Anderson, D. R.; Hellier, C.; West, R. G.

    2016-06-01

    Context. AI Phe is a double-lined, detached eclipsing binary, in which a K-type sub-giant star totally eclipses its main-sequence companion every 24.6 days. This configuration makes AI Phe ideal for testing stellar evolutionary models. Difficulties in obtaining a complete lightcurve mean the precision of existing radii measurements could be improved. Aims: Our aim is to improve the precision of the radius measurements for the stars in AI Phe using high-precision photometry from the Wide Angle Search for Planets (WASP), and use these improved radius measurements together with estimates of the masses, temperatures and composition of the stars to place constraints on the mixing length, helium abundance and age of the system. Methods: A best-fit ebop model is used to obtain lightcurve parameters, with their standard errors calculated using a prayer-bead algorithm. These were combined with previously published spectroscopic orbit results, to obtain masses and radii. A Bayesian method is used to estimate the age of the system for model grids with different mixing lengths and helium abundances. Results: The radii are found to be R1 = 1.835 ± 0.014 R⊙, R2 = 2.912 ± 0.014 R⊙ and the masses M1 = 1.1973 ± 0.0037 M⊙, M2 = 1.2473 ± 0.0039 M⊙. From the best-fit stellar models we infer a mixing length of 1.78, a helium abundance of YAI = 0.26 +0.02-0.01 and an age of 4.39 ± 0.32 Gyr. Times of primary minimum show the period of AI Phe is not constant. Currently, there are insufficient data to determine the cause of this variation. Conclusions: Improved precision in the masses and radii have improved the age estimate, and allowed the mixing length and helium abundance to be constrained. The eccentricity is now the largest source of uncertainty in calculating the masses. Further work is needed to characterise the orbit of AI Phe. Obtaining more binaries with parameters measured to a similar level of precision would allow us to test for relationships between helium abundance and mixing length.
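
    The prayer-bead idea, cyclically shifting the residual string and refitting so that correlated noise propagates into the parameter errors, can be sketched generically; the quadratic model below is an invented stand-in for the ebop lightcurve model.

      import numpy as np

      rng = np.random.default_rng(4)
      t = np.linspace(-1, 1, 200)
      flux = 1.0 - 0.3 * t**2 + 0.01 * rng.standard_normal(t.size)

      coef = np.polyfit(t, flux, 2)              # best-fit model parameters
      model = np.polyval(coef, t)
      residuals = flux - model

      # Shift residuals cyclically, re-add to the model, refit; the spread of
      # the refit parameters gives standard errors that respect red noise.
      fits = np.array([np.polyfit(t, model + np.roll(residuals, s), 2)
                       for s in range(t.size)])
      print(fits.std(axis=0))                    # per-parameter uncertainties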

  9. Parameterizing correlations between hydrometeor species in mixed-phase Arctic clouds

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Larson, Vincent E.; Nielsen, Brandon J.; Fan, Jiwen

    2011-08-16

    Mixed-phase Arctic clouds, like other clouds, contain small-scale variability in hydrometeor fields, such as cloud water or snow mixing ratio. This variability may be worth parameterizing in coarse-resolution numerical models. In particular, for modeling processes such as accretion and aggregation, it would be useful to parameterize subgrid correlations among hydrometeor species. However, one difficulty is that there exist many hydrometeor species and many microphysical processes, leading to complexity and computational expense. Existing lower and upper bounds (inequalities) on linear correlation coefficients provide useful guidance, but these bounds are too loose to serve directly as a method to predict subgrid correlations. Therefore, this paper proposes an alternative method that is based on a blend of theory and empiricism. The method begins with the spherical parameterization framework of Pinheiro and Bates (1996), which expresses the correlation matrix in terms of its Cholesky factorization. The values of the elements of the Cholesky matrix are parameterized here using a cosine row-wise formula that is inspired by the aforementioned bounds on correlations. The method has three advantages: 1) the computational expense is tolerable; 2) the correlations are, by construction, guaranteed to be consistent with each other; and 3) the methodology is fairly general and hence may be applicable to other problems. The method is tested non-interactively using simulations of three Arctic mixed-phase cloud cases from two different field experiments: the Indirect and Semi-Direct Aerosol Campaign (ISDAC) and the Mixed-Phase Arctic Cloud Experiment (M-PACE). Benchmark simulations are performed using a large-eddy simulation (LES) model that includes a bin microphysical scheme. The correlations estimated by the new method satisfactorily approximate the correlations produced by the LES.

  10. Generalizing the Network Scale-Up Method: A New Estimator for the Size of Hidden Populations*

    PubMed Central

    Feehan, Dennis M.; Salganik, Matthew J.

    2018-01-01

    The network scale-up method enables researchers to estimate the size of hidden populations, such as drug injectors and sex workers, using sampled social network data. The basic scale-up estimator offers advantages over other size estimation techniques, but it depends on problematic modeling assumptions. We propose a new generalized scale-up estimator that can be used in settings with non-random social mixing and imperfect awareness about membership in the hidden population. Further, the new estimator can be used when data are collected via complex sample designs and from incomplete sampling frames. However, the generalized scale-up estimator also requires data from two samples: one from the frame population and one from the hidden population. In some situations these data from the hidden population can be collected by adding a small number of questions to already planned studies. For other situations, we develop interpretable adjustment factors that can be applied to the basic scale-up estimator. We conclude with practical recommendations for the design and analysis of future studies. PMID:29375167
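
    The basic scale-up estimator underlying this work is a one-line computation: scale the frame-population size by the ratio of reported hidden-population alters to total personal network size. A toy example with invented numbers:

      import numpy as np

      N_frame = 1_000_000                       # size of the frame population
      y = np.array([0, 1, 0, 2, 0, 0, 1, 0])    # hidden-population alters reported
      d = np.array([250, 300, 180, 410, 220, 150, 330, 270])  # network sizes

      N_hidden = N_frame * y.sum() / d.sum()    # basic scale-up estimate
      print(round(N_hidden))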

  11. Tracing water sources of terrestrial animal populations with stable isotopes: laboratory tests with crickets and spiders.

    PubMed

    McCluney, Kevin E; Sabo, John L

    2010-12-31

    Fluxes of carbon, nitrogen, and water between ecosystem components and organisms have great impacts across levels of biological organization. Although much progress has been made in tracing carbon and nitrogen, difficulty remains in tracing water sources from the ecosystem to animals and among animals (the "water web"). Naturally occurring, non-radioactive isotopes of hydrogen and oxygen in water provide a potential method for tracing water sources. However, using this approach for terrestrial animals is complicated by a change in water isotopes within the body due to differences in activity of heavy and light isotopes during cuticular and transpiratory water losses. Here we present a technique to use stable water isotopes to estimate the mean mix of water sources in a population by sampling a group of sympatric animals over time. Strong correlations between H and O isotopes in the body water of animals collected over time provide linear patterns of enrichment that can be used to predict a mean mix of water sources useful in standard mixing models to determine relative source contribution. Multiple temperature and humidity treatment levels do not greatly alter these relationships, thus having little effect on our ability to estimate this population-level mix of water sources. We show evidence for the validity of using multiple samples of animal body water, collected across time, to estimate the isotopic mix of water sources in a population and more accurately trace water sources. The ability to use isotopes to document patterns of animal water use should be a great asset to biologists globally, especially those studying drylands, droughts, streamside areas, irrigated landscapes, and the effects of climate change.

  12. A continuous-time adaptive particle filter for estimations under measurement time uncertainties with an application to a plasma-leucine mixed effects model

    PubMed Central

    2013-01-01

    Background When mathematical modelling is applied to many different application areas, a common task is the estimation of states and parameters based on measurements. In this kind of inference, uncertainties in the times at which the measurements were taken are often neglected, but especially in applications from the life sciences, errors of this kind can considerably influence the estimation results. As an example, in the context of personalized medicine, the model-based assessment of the effectiveness of drugs is coming to play an important role. Systems biology may help here by providing good pharmacokinetic and pharmacodynamic (PK/PD) models. Inference on these systems based on data gained from clinical studies with several patient groups becomes a major challenge. Particle filters are a promising approach to tackle these difficulties but are not by themselves ready to handle uncertainties in measurement times. Results In this article, we describe a variant of the standard particle filter (PF) algorithm which allows state and parameter estimation with the inclusion of measurement time uncertainties (MTU). The modified particle filter, which we call MTU-PF, also allows an adaptive step-size choice in the time-continuous case to avoid degeneracy problems. The modification is based on the model assumption of uncertain measurement times. While the assumption of randomness in the measurements themselves is common, the corresponding measurement times are generally taken as deterministic and exactly known. Especially in cases where the data are gained from measurements on blood or tissue samples, a relatively high uncertainty in the true measurement time seems to be a natural assumption. Our method is appropriate in cases where relatively few data are used from a relatively large number of groups or individuals, which introduces mixed effects in the model. This is a typical setting of clinical studies. We demonstrate the method on a small artificial example and apply it to a mixed effects model of plasma-leucine kinetics with data from a clinical study which included 34 patients. Conclusions Comparisons of our MTU-PF with the standard PF and with an alternative maximum likelihood estimation method on the small artificial example clearly show that the MTU-PF obtains better estimates. In the application to data from the clinical study, the MTU-PF shows a performance similar to that of the standard particle filter with respect to the quality of the estimated parameters, but the MTU-PF also proves to be less prone to degeneracy than the standard particle filter. PMID:23331521
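
    For orientation, the standard bootstrap particle filter that the MTU-PF extends can be written in a few lines; this sketch tracks a 1D random walk and omits the measurement-time uncertainty and adaptive step size that distinguish the MTU-PF. All model settings are invented.

      import numpy as np

      rng = np.random.default_rng(5)
      T, n = 50, 2000
      sigma_x, sigma_y = 0.3, 0.5

      truth = np.cumsum(sigma_x * rng.standard_normal(T))
      obs = truth + sigma_y * rng.standard_normal(T)

      particles = np.zeros(n)
      estimates = []
      for y in obs:
          particles = particles + sigma_x * rng.standard_normal(n)  # propagate
          w = np.exp(-0.5 * ((y - particles) / sigma_y) ** 2)       # likelihood
          w /= w.sum()
          estimates.append(np.dot(w, particles))                    # posterior mean
          particles = rng.choice(particles, size=n, p=w)            # resample

      print(np.mean((np.array(estimates) - truth) ** 2))            # tracking error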

  13. Estimating the distribution of colored dissolved organic matter during the Southern Ocean Gas Exchange Experiment using four-dimensional variational data assimilation

    NASA Astrophysics Data System (ADS)

    Del Castillo, C. E.; Dwivedi, S.; Haine, T. W. N.; Ho, D. T.

    2017-03-01

    We diagnosed the effect of various physical processes on the distribution of mixed-layer colored dissolved organic matter (CDOM) and a sulfur hexafluoride (SF6) tracer during the Southern Ocean Gas Exchange Experiment (SO GasEx). The biochemical upper ocean state estimate uses in situ and satellite biochemical and physical data in the study region, including CDOM (absorption coefficient and spectral slope), SF6, hydrography, and sea level anomaly. Modules for photobleaching of CDOM and surface transport of SF6 were coupled with an ocean circulation model for this purpose. The observed spatial and temporal variations in CDOM were captured by the state estimate without including any new biological source term for CDOM, assuming it to be negligible over the 26 days of the state estimate. Thermocline entrainment and photobleaching acted to diminish the mixed-layer CDOM with time scales of 18 and 16 days, respectively. Lateral advection of CDOM played a dominant role and increased the mixed-layer CDOM with a time scale of 12 days, whereas lateral diffusion of CDOM was negligible. A Lagrangian view on the CDOM variability was demonstrated by using the SF6 as a weighting function to integrate the CDOM fields. This and similar data assimilation methods can be used to provide reasonable estimates of optical properties, and other physical parameters over the short-term duration of a research cruise, and help in the tracking of tracer releases in large-scale oceanographic experiments, and in oceanographic process studies.

  14. Estimating the Distribution of Colored Dissolved Organic Matter During the Southern Ocean Gas Exchange Experiment Using Four-Dimensional Variational Data Assimilation

    NASA Technical Reports Server (NTRS)

    Del Castillo, C. E.; Dwivedi, S.; Haine, T. W. N.; Ho, D. T.

    2017-01-01

    We diagnosed the effect of various physical processes on the distribution of mixed-layer colored dissolved organic matter (CDOM) and a sulfur hexafluoride (SF6) tracer during the Southern Ocean Gas Exchange Experiment (SO GasEx). The biochemical upper ocean state estimate uses in situ and satellite biochemical and physical data in the study region, including CDOM (absorption coefficient and spectral slope), SF6, hydrography, and sea level anomaly. Modules for photobleaching of CDOM and surface transport of SF6 were coupled with an ocean circulation model for this purpose. The observed spatial and temporal variations in CDOM were captured by the state estimate without including any new biological source term for CDOM, assuming it to be negligible over the 26 days of the state estimate. Thermocline entrainment and photobleaching acted to diminish the mixed-layer CDOM with time scales of 18 and 16 days, respectively. Lateral advection of CDOM played a dominant role and increased the mixed-layer CDOM with a time scale of 12 days, whereas lateral diffusion of CDOM was negligible. A Lagrangian view on the CDOM variability was demonstrated by using the SF6 as a weighting function to integrate the CDOM fields. This and similar data assimilation methods can be used to provide reasonable estimates of optical properties, and other physical parameters over the short-term duration of a research cruise, and help in the tracking of tracer releases in large-scale oceanographic experiments, and in oceanographic process studies.

  15. Estimation and Partitioning of Heritability in Human Populations using Whole Genome Analysis Methods

    PubMed Central

    Vinkhuyzen, Anna AE; Wray, Naomi R; Yang, Jian; Goddard, Michael E; Visscher, Peter M

    2014-01-01

    Understanding genetic variation of complex traits in human populations has moved from the quantification of the resemblance between close relatives to the dissection of genetic variation into the contributions of individual genomic loci. But major questions remain unanswered: how much phenotypic variation is genetic, how much of the genetic variation is additive and what is the joint distribution of effect size and allele frequency at causal variants? We review and compare three whole-genome analysis methods that use mixed linear models (MLM) to estimate genetic variation, using the relationship between close or distant relatives based on pedigree or SNPs. We discuss theory, estimation procedures, bias and precision of each method and review recent advances in the dissection of additive genetic variation of complex traits in human populations that are based upon the application of MLM. Using genome wide data, SNPs account for far more of the genetic variation than the highly significant SNPs associated with a trait, but they do not account for all of the genetic variance estimated by pedigree based methods. We explain possible reasons for this ‘missing’ heritability. PMID:23988118

  16. An empirical inferential method of estimating nitrogen deposition to Mediterranean-type ecosystems: the San Bernardino Mountains case study.

    PubMed

    Bytnerowicz, A; Johnson, R F; Zhang, L; Jenerette, G D; Fenn, M E; Schilling, S L; Gonzalez-Fernandez, I

    2015-08-01

    The empirical inferential method (EIM) allows for spatially and temporally-dense estimates of atmospheric nitrogen (N) deposition to Mediterranean ecosystems. This method, set within a GIS platform, is based on ambient concentrations of NH3, NO, NO2 and HNO3; surface conductance of NH4(+) and NO3(-); stomatal conductance of NH3, NO, NO2 and HNO3; and satellite-derived LAI. Estimated deposition is based on data collected during 2002-2006 in the San Bernardino Mountains (SBM) of southern California. Approximately 2/3 of dry N deposition was to plant surfaces and 1/3 as stomatal uptake. Summer-season N deposition ranged from <3 kg ha(-1) in the eastern SBM to ∼ 60 kg ha(-1) in the western SBM near the Los Angeles Basin and compared well with the throughfall and big-leaf micrometeorological inferential methods. Extrapolating summertime N deposition estimates to annual values showed large areas of the SBM exceeding critical loads for nutrient N in chaparral and mixed conifer forests. Published by Elsevier Ltd.

  17. Corrected confidence bands for functional data using principal components.

    PubMed

    Goldsmith, J; Greven, S; Crainiceanu, C

    2013-03-01

    Functional principal components (FPC) analysis is widely used to decompose and express functional observations. Curve estimates implicitly condition on basis functions and other quantities derived from FPC decompositions; however these objects are unknown in practice. In this article, we propose a method for obtaining correct curve estimates by accounting for uncertainty in FPC decompositions. Additionally, pointwise and simultaneous confidence intervals that account for both model- and decomposition-based variability are constructed. Standard mixed model representations of functional expansions are used to construct curve estimates and variances conditional on a specific decomposition. Iterated expectation and variance formulas combine model-based conditional estimates across the distribution of decompositions. A bootstrap procedure is implemented to understand the uncertainty in principal component decomposition quantities. Our method compares favorably to competing approaches in simulation studies that include both densely and sparsely observed functions. We apply our method to sparse observations of CD4 cell counts and to dense white-matter tract profiles. Code for the analyses and simulations is publicly available, and our method is implemented in the R package refund on CRAN. Copyright © 2013, The International Biometric Society.

  18. Corrected Confidence Bands for Functional Data Using Principal Components

    PubMed Central

    Goldsmith, J.; Greven, S.; Crainiceanu, C.

    2014-01-01

    Functional principal components (FPC) analysis is widely used to decompose and express functional observations. Curve estimates implicitly condition on basis functions and other quantities derived from FPC decompositions; however these objects are unknown in practice. In this article, we propose a method for obtaining correct curve estimates by accounting for uncertainty in FPC decompositions. Additionally, pointwise and simultaneous confidence intervals that account for both model- and decomposition-based variability are constructed. Standard mixed model representations of functional expansions are used to construct curve estimates and variances conditional on a specific decomposition. Iterated expectation and variance formulas combine model-based conditional estimates across the distribution of decompositions. A bootstrap procedure is implemented to understand the uncertainty in principal component decomposition quantities. Our method compares favorably to competing approaches in simulation studies that include both densely and sparsely observed functions. We apply our method to sparse observations of CD4 cell counts and to dense white-matter tract profiles. Code for the analyses and simulations is publicly available, and our method is implemented in the R package refund on CRAN. PMID:23003003

  19. Choosing a DIVA: a comparison of emerging digital imagery vegetation analysis techniques

    USGS Publications Warehouse

    Jorgensen, Christopher F.; Stutzman, Ryan J.; Anderson, Lars C.; Decker, Suzanne E.; Powell, Larkin A.; Schacht, Walter H.; Fontaine, Joseph J.

    2013-01-01

    Question: What is the precision of five methods of measuring vegetation structure using ground-based digital imagery and processing techniques? Location: Lincoln, Nebraska, USA Methods: Vertical herbaceous cover was recorded using digital imagery techniques at two distinct locations in a mixed-grass prairie. The precision of five ground-based digital imagery vegetation analysis (DIVA) methods for measuring vegetation structure was tested using a split-split plot analysis of covariance. Variability within each DIVA technique was estimated using coefficient of variation of mean percentage cover. Results: Vertical herbaceous cover estimates differed among DIVA techniques. Additionally, environmental conditions affected the vertical vegetation obstruction estimates for certain digital imagery methods, while other techniques were more adept at handling various conditions. Overall, percentage vegetation cover values differed among techniques, but the precision of four of the five techniques was consistently high. Conclusions: DIVA procedures are sufficient for measuring various heights and densities of standing herbaceous cover. Moreover, digital imagery techniques can reduce measurement error associated with multiple observers' standing herbaceous cover estimates, allowing greater opportunity to detect patterns associated with vegetation structure.

  20. A quantitative approach to combine sources in stable isotope mixing models

    EPA Science Inventory

    Stable isotope mixing models, used to estimate source contributions to a mixture, typically yield highly uncertain estimates when there are many sources and relatively few isotope elements. Previously, ecologists have either accepted the uncertain contribution estimates for indiv...

  1. Radon Measurements of Atmospheric Mixing (RAMIX) 2006–2014 Final Campaign Summary

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fischer, ML; Biraud, SC

    2015-05-01

    Uncertainty in vertical mixing between the surface layer, boundary layer, and free troposphere leads to large uncertainty in “top-down” estimates of regional land-atmosphere carbon exchange (i.e., estimates based on measurements of atmospheric CO2 mixing ratios). Radon-222 (222Rn) is a valuable tracer for measuring atmospheric mixing because it is emitted from the land surface and has a short enough half-life (3.8 days) to allow characterization of mixing processes based on vertical profile measurements.

  2. Radon Measurements of Atmospheric Mixing (RAMIX) 2006–2014 Final Campaign Summary

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fischer, ML; Biraud, SC; Hirsch, A

    2015-05-01

    Uncertainty in vertical mixing between the surface layer, boundary layer, and free troposphere leads to large uncertainty in “top-down” estimates of regional land-atmosphere carbon exchange (i.e., estimates based on measurements of atmospheric CO2 mixing ratios). The radioisotope radon-222 (222Rn) is a valuable tracer for measuring atmospheric mixing because it is emitted from the land surface and has a short enough half-life (3.8 days) to allow characterization of mixing processes based on vertical profile measurements.

  3. Nuclear mass formula with the shell energies obtained by a new method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Koura, H.; Tachibana, T.; Yamada, M.

    1998-12-21

    Nuclear shapes and masses are estimated by a new method. The main feature of this method lies in estimating the shell energies of deformed nuclei from spherical shell energies by mixing them with appropriate weights. The spherical shell energies are calculated from single-particle potentials, and, so far, two mass formulas have been constructed from two different sets of potential parameters. The standard deviation of the calculated masses from all the experimental masses of the 1995 Mass Evaluation is about 760 keV. Contrary to the mass formula by Tachibana, Uno, Yamada and Yamada in the 1987-1988 Atomic Mass Predictions, the present formulas can give nuclear shapes and make predictions for super-heavy elements.

  4. A partially reflecting random walk on spheres algorithm for electrical impedance tomography

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Maire, Sylvain, E-mail: maire@univ-tln.fr; Simon, Martin, E-mail: simon@math.uni-mainz.de

    2015-12-15

    In this work, we develop a probabilistic estimator for the voltage-to-current map arising in electrical impedance tomography. This novel so-called partially reflecting random walk on spheres estimator enables Monte Carlo methods to compute the voltage-to-current map in an embarrassingly parallel manner, which is an important issue with regard to the corresponding inverse problem. Our method uses the well-known random walk on spheres algorithm inside subdomains where the diffusion coefficient is constant and employs replacement techniques motivated by finite difference discretization to deal with both mixed boundary conditions and interface transmission conditions. We analyze the global bias and the variance of the new estimator both theoretically and experimentally. Subsequently, the variance of the new estimator is considerably reduced via a novel control variate conditional sampling technique which yields a highly efficient hybrid forward solver coupling probabilistic and deterministic algorithms.

  5. Model verification of mixed dynamic systems. [POGO problem in liquid propellant rockets

    NASA Technical Reports Server (NTRS)

    Chrostowski, J. D.; Evensen, D. A.; Hasselman, T. K.

    1978-01-01

    A parameter-estimation method is described for verifying the mathematical model of mixed (combined interactive components from various engineering fields) dynamic systems against pertinent experimental data. The model verification problem is divided into two separate parts: defining a proper model and evaluating the parameters of that model. The main idea is to use differences between measured and predicted behavior (response) to adjust automatically the key parameters of a model so as to minimize response differences. To achieve the goal of modeling flexibility, the method combines the convenience of automated matrix generation with the generality of direct matrix input. The equations of motion are treated in first-order form, allowing for nonsymmetric matrices, modeling of general networks, and complex-mode analysis. The effectiveness of the method is demonstrated for an example problem involving a complex hydraulic-mechanical system.

  6. A UNIFIED FRAMEWORK FOR VARIANCE COMPONENT ESTIMATION WITH SUMMARY STATISTICS IN GENOME-WIDE ASSOCIATION STUDIES.

    PubMed

    Zhou, Xiang

    2017-12-01

    Linear mixed models (LMMs) are among the most commonly used tools for genetic association studies. However, the standard method for estimating variance components in LMMs, the restricted maximum likelihood estimation method (REML), suffers from several important drawbacks: REML requires individual-level genotypes and phenotypes from all samples in the study, is computationally slow, and produces downward-biased estimates in case-control studies. To remedy these drawbacks, we present an alternative framework for variance component estimation, which we refer to as MQS. MQS is based on the method of moments (MoM) and the minimal norm quadratic unbiased estimation (MINQUE) criterion, and brings two seemingly unrelated methods, the renowned Haseman-Elston (HE) regression and the recent LD score regression (LDSC), into the same unified statistical framework. With this new framework, we provide an alternative but mathematically equivalent form of HE that allows for the use of summary statistics. We provide an exact estimation form of LDSC to yield unbiased and statistically more efficient estimates. A key feature of our method is its ability to pair marginal z-scores computed using all samples with SNP correlation information computed using a small random subset of individuals (or individuals from a proper reference panel), while capable of producing estimates that can be almost as accurate as if both quantities were computed using the full data. As a result, our method produces unbiased and statistically efficient estimates, and makes use of summary statistics, while it is computationally efficient for large data sets. Using simulations and applications to 37 phenotypes from 8 real data sets, we illustrate the benefits of our method for estimating and partitioning SNP heritability in population studies as well as for heritability estimation in family studies. Our method is implemented in the GEMMA software package, freely available at www.xzlab.org/software.html.
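
    The Haseman-Elston regression at the heart of this framework can be sketched directly: regress products of centered phenotypes on off-diagonal genetic relatedness. The simulation below is illustrative only (true heritability set to 0.5), not the paper's MQS estimator.

      import numpy as np

      rng = np.random.default_rng(6)
      n, p, h2 = 1000, 2000, 0.5
      geno = rng.binomial(2, 0.3, (n, p)).astype(float)
      Z = (geno - geno.mean(0)) / geno.std(0)    # standardized genotypes
      K = Z @ Z.T / p                            # genetic relationship matrix

      beta = rng.standard_normal(p) * np.sqrt(h2 / p)
      y = Z @ beta + rng.standard_normal(n) * np.sqrt(1 - h2)
      y = (y - y.mean()) / y.std()

      iu = np.triu_indices(n, k=1)               # pairs i < j
      prod = np.outer(y, y)[iu]                  # phenotype products
      k_off = K[iu]                              # relatedness for each pair
      print((k_off @ prod) / (k_off @ k_off))    # HE slope, approximately h2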

  7. Deletion Diagnostics for the Generalised Linear Mixed Model with independent random effects

    PubMed Central

    Ganguli, B.; Roy, S. Sen; Naskar, M.; Malloy, E. J.; Eisen, E. A.

    2015-01-01

    The Generalised Linear Mixed Model (GLMM) is widely used for modelling environmental data. However, such data are prone to influential observations which can distort the estimated exposure-response curve particularly in regions of high exposure. Deletion diagnostics for iterative estimation schemes commonly derive the deleted estimates based on a single iteration of the full system holding certain pivotal quantities such as the information matrix to be constant. In this paper, we present an approximate formula for the deleted estimates and Cook’s distance for the GLMM which does not assume that the estimates of variance parameters are unaffected by deletion. The procedure allows the user to calculate standardised DFBETAs for mean as well as variance parameters. In certain cases, such as when using the GLMM as a device for smoothing, such residuals for the variance parameters are interesting in their own right. In general, the procedure leads to deleted estimates of mean parameters which are corrected for the effect of deletion on variance components as estimation of the two sets of parameters is interdependent. The probabilistic behaviour of these residuals is investigated and a simulation based procedure suggested for their standardisation. The method is used to identify influential individuals in an occupational cohort exposed to silica. The results show that failure to conduct post model fitting diagnostics for variance components can lead to erroneous conclusions about the fitted curve and unstable confidence intervals. PMID:26626135
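
    In the ordinary linear-model special case, case-deletion diagnostics reduce to Cook's distance computed from explicit leave-one-out refits, as sketched below; the paper's contribution is the GLMM analogue in which the deleted estimates also correct the variance components, which this sketch does not attempt.

      import numpy as np

      rng = np.random.default_rng(7)
      n = 100
      x = rng.uniform(0, 10, n)
      y = 1.0 + 0.5 * x + rng.standard_normal(n)
      y[5] += 8.0                                # plant an influential point

      X = np.column_stack([np.ones(n), x])
      beta = np.linalg.lstsq(X, y, rcond=None)[0]
      s2 = np.sum((y - X @ beta) ** 2) / (n - 2)

      cooks = np.empty(n)
      for i in range(n):
          keep = np.arange(n) != i
          beta_i = np.linalg.lstsq(X[keep], y[keep], rcond=None)[0]
          d = beta - beta_i
          cooks[i] = d @ (X.T @ X) @ d / (2 * s2)   # p = 2 parameters
      print(cooks.argmax())                         # flags observation 5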

  8. Expecting the unexpected: A mixed methods study of violence to EMS responders in an urban fire department.

    PubMed

    Taylor, Jennifer A; Barnes, Brittany; Davis, Andrea L; Wright, Jasmine; Widman, Shannon; LeVasseur, Michael

    2016-02-01

    Rates of struck-by injuries were observed to be higher among females than males in an urban fire department. The disparity was investigated while gaining a grounded understanding of EMS responders' experiences of patient-initiated violence. A convergent parallel mixed methods design was employed. Using a linked injury dataset, patient-initiated violence estimates were calculated comparing genders. Semi-structured interviews and a focus group were conducted with injured EMS responders. Paramedics had significantly higher odds of patient-initiated violence injuries than firefighters (OR 14.4, 95%CI: 9.2-22.2, P < 0.001). Females reported increased odds of patient-initiated violence injuries compared with males (OR = 6.25, 95%CI 3.8-10.2), but this relationship was entirely mediated through occupation (AOR = 1.64, 95%CI 0.94-2.85). Qualitative data illuminated the impact of patient-initiated violence and highlighted important organizational opportunities for intervention. Mixed methods greatly enhanced the assessment and prevention of patient-initiated violence against EMS responders. © 2016 The Authors. American Journal of Industrial Medicine Published by Wiley Periodicals, Inc.

  9. Mixed-layered bismuth--oxygen--iodine materials for capture and waste disposal of radioactive iodine

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Krumhansl, James L; Nenoff, Tina M

    2015-01-06

    Materials and methods of synthesizing mixed-layered bismuth oxy-iodine materials, which can be synthesized in the presence of aqueous radioactive iodine species found in caustic solutions (e.g. NaOH or KOH). This technology provides a one-step process for both iodine sequestration and storage from nuclear fuel cycles. It results in materials that will be durable for repository conditions much like those found in the Waste Isolation Pilot Plant (WIPP) and estimated for Yucca Mountain (YMP). By controlling reactant concentrations, optimized compositions of these mixed-layered bismuth oxy-iodine inorganic materials are produced that have both a high iodine weight percentage and a low solubility in groundwater environments.

  10. The covariance of temperature and ozone due to planetary-wave forcing

    NASA Technical Reports Server (NTRS)

    Fraser, G. J.

    1976-01-01

    The cross-spectra of temperature and ozone mass mixing ratio at 42 km and 28 km have been determined for spring (1971) and summer (1971-72) over Christchurch, New Zealand (44 S, 172 E). The sources of data are the SCR and BUV experiments on Nimbus 4. The observed covariances are compared with a model in which the temperature and ozone perturbations are forced by an upward-propagating planetary wave. The agreement between the observations and the model is reasonable. It is suggested that this cross-spectral method permits an estimate of the meridional gradient of ozone mass mixing ratio from measurements of the vertical profile of ozone mass mixing ratio at one location, supported by temperature profiles from at least two locations.

  11. Quantifying inter- and intra-population niche variability using hierarchical bayesian stable isotope mixing models.

    PubMed

    Semmens, Brice X; Ward, Eric J; Moore, Jonathan W; Darimont, Chris T

    2009-07-09

    Variability in resource use defines the width of a trophic niche occupied by a population. Intra-population variability in resource use may occur across hierarchical levels of population structure from individuals to subpopulations. Understanding how levels of population organization contribute to population niche width is critical to ecology and evolution. Here we describe a hierarchical stable isotope mixing model that can simultaneously estimate both the prey composition of a consumer diet and the diet variability among individuals and across levels of population organization. By explicitly estimating variance components for multiple scales, the model can deconstruct the niche width of a consumer population into relevant levels of population structure. We apply this new approach to stable isotope data from a population of gray wolves from coastal British Columbia, and show support for extensive intra-population niche variability among individuals, social groups, and geographically isolated subpopulations. The analytic method we describe improves mixing models by accounting for diet variability, and improves isotope niche width analysis by quantitatively assessing the contribution of levels of organization to the niche width of a population.

  12. Estimation of time-variable fast flow path chemical concentrations for application in tracer-based hydrograph separation analyses

    USGS Publications Warehouse

    Kronholm, Scott C.; Capel, Paul D.

    2016-01-01

    Mixing models are a commonly used method for hydrograph separation, but can be hindered by the subjective choice of the end-member tracer concentrations. This work tests a new variant of mixing model that uses high-frequency measures of two tracers and streamflow to separate total streamflow into water from slowflow and fastflow sources. The ratio between the concentrations of the two tracers is used to create a time-variable estimate of the concentration of each tracer in the fastflow end-member. Multiple synthetic data sets, and data from two hydrologically diverse streams, are used to test the performance and limitations of the new model (two-tracer ratio-based mixing model: TRaMM). When applied to the synthetic streams under many different scenarios, the TRaMM produces results that were reasonable approximations of the actual values of fastflow discharge (±0.1% of maximum fastflow) and fastflow tracer concentrations (±9.5% and ±16% of maximum fastflow nitrate concentration and specific conductance, respectively). With real stream data, the TRaMM produces high-frequency estimates of slowflow and fastflow discharge that align with expectations for each stream based on their respective hydrologic settings. The use of two tracers with the TRaMM provides an innovative and objective approach for estimating high-frequency fastflow concentrations and contributions of fastflow water to the stream. This provides useful information for tracking chemical movement to streams and allows for better selection and implementation of water quality management strategies.
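
    The mass-balance step underlying this family of models is compact enough to show directly. The hedged sketch below performs classic two-component hydrograph separation with fixed end-member concentrations; the paper's innovation, not reproduced here, is using the ratio of two tracers to make the fastflow end-member time-variable. All numbers are illustrative.

```python
import numpy as np

Q = np.array([1.2, 3.5, 8.0, 5.1, 2.0])          # streamflow, m^3/s
C = np.array([410., 320., 210., 260., 380.])     # tracer, e.g. specific conductance
C_slow, C_fast = 430., 150.                      # assumed end-member concentrations

# Mass balance: Q*C = Q_slow*C_slow + Q_fast*C_fast with Q = Q_slow + Q_fast.
Q_fast = Q * (C - C_slow) / (C_fast - C_slow)
Q_fast = np.clip(Q_fast, 0.0, Q)                 # keep the separation physical
print(np.round(Q_fast / Q, 2))                   # fastflow fraction at each step
```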

  13. Mixed layer warming-deepening in the Mediterranean Sea and its effect on the marine environment

    NASA Astrophysics Data System (ADS)

    Rivetti, Irene; Boero, Ferdinando; Fraschetti, Simonetta; Zambianchi, Enrico; Lionello, Piero

    2015-04-01

    This work aims at investigating the evolution of the ocean mixed layer in the Mediterranean Sea and linking it to the occurrence of mass mortalities of benthic invertebrates. The temporal evolution of selected parameters describing the mixed layer and the seasonal thermocline is provided for the whole Mediterranean Sea for spring, summer and autumn for the period 1945-2011. For this analysis, all temperature profiles collected in the basin with bottles, Mechanical Bathy-Thermographs (MBT), eXpendable Bathy-Thermographs (XBT), and Conductivity-Temperature-Depth (CTD) probes have been used (166,990 profiles in total). These data have been extracted from three public sources: the MEDAR-MEDATLAS, the World Ocean Database 2013 and the MFS-VOS program. Five different methods for estimating the mixed layer depth are compared using temperature profiles collected at the DYFAMED station in the Ligurian Sea, and one method, the so-called three-segment method, has been selected for a systematic analysis of the evolution of the uppermost part of the whole Mediterranean Sea. This method approximates the upper water column with three segments representing the mixed layer, the thermocline and the deep layer, and has been shown to be the most suitable method for capturing the mixed layer depth for most shapes of temperature profiles. Mass mortality events of benthic invertebrates have been identified by an extensive search of all databases in ISI Web of Knowledge, considering studies published from 1945 to 2011. Studies reporting the geographical coordinates, the timing of the events, the species involved and the depth at which signs of stress occurred have been considered. Results show a general increase of the thickness and temperature of the mixed layer, and a deepening and cooling of the thermocline base in summer and autumn. Possible impacts of these changes are the mass mortality events of benthic invertebrates that have been documented since 1983, mainly in summer and autumn. It is also shown that most mass mortalities occurred in months with anomalously high mixed layer depth and temperature, leading to the conclusion that warming of the upper Mediterranean Sea has allowed interannual temperature variability to reach environmental conditions beyond the thermal tolerance of some species.
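
    As an illustration of the three-segment idea, the hedged sketch below fits three line segments (mixed layer, thermocline, deep layer) to a synthetic temperature profile by grid search over the two breakpoints and reads the mixed layer depth off the first one. The real method's segment construction and quality controls are certainly more elaborate.

```python
import numpy as np

z = np.arange(0, 200, 5.0)                        # depth (m)
T = np.where(z < 30, 24.0, np.where(z < 80, 24 - 0.24 * (z - 30), 12.0))
T = T + np.random.default_rng(2).normal(0, 0.05, z.size)

def sse(z1, z2):
    """Total squared error of a three-piece piecewise-linear fit."""
    total = 0.0
    for m in (z < z1, (z >= z1) & (z < z2), z >= z2):
        A = np.c_[z[m], np.ones(m.sum())]
        total += np.linalg.lstsq(A, T[m], rcond=None)[1].sum()
    return total

# Candidate breakpoints, keeping at least three points in every segment.
grid = [(z1, z2) for z1 in z[3:-8] for z2 in z[6:-3] if z2 > z1 + 15]
mld, _ = min(grid, key=lambda bp: sse(*bp))
print(f"estimated mixed layer depth: {mld:.0f} m")
```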

  14. Longitudinal Models of Reading Achievement of Students with Learning Disabilities and without Disabilities

    ERIC Educational Resources Information Center

    Sullivan, Amanda L.; Kohli, Nidhi; Farnsworth, Elyse M.; Sadeh, Shanna; Jones, Leila

    2017-01-01

    Objective: Accurate estimation of developmental trajectories can inform instruction and intervention. We compared the fit of linear, quadratic, and piecewise mixed-effects models of reading development among students with learning disabilities relative to their typically developing peers. Method: We drew an analytic sample of 1,990 students from…

  15. A Longitudinal Study of ESL Learners' Fluency and Comprehensibility Development

    ERIC Educational Resources Information Center

    Derwing, Tracey M.; Munro, Murray J.; Thomson, Ron I.

    2008-01-01

    This longitudinal mixed-methods study compared the oral fluency of well-educated adult immigrants from Mandarin and Slavic language backgrounds (16 per group) enrolled in introductory English as a second language (ESL) classes. Speech samples were collected over a 2-year period, together with estimates of weekly English use. We also conducted…

  16. MIXOR: a computer program for mixed-effects ordinal regression analysis.

    PubMed

    Hedeker, D; Gibbons, R D

    1996-03-01

    MIXOR provides maximum marginal likelihood estimates for mixed-effects ordinal probit, logistic, and complementary log-log regression models. These models can be used for analysis of dichotomous and ordinal outcomes from either a clustered or longitudinal design. For clustered data, the mixed-effects model assumes that data within clusters are dependent. The degree of dependency is jointly estimated with the usual model parameters, thus adjusting for dependence resulting from clustering of the data. Similarly, for longitudinal data, the mixed-effects approach can allow for individual-varying intercepts and slopes across time, and can estimate the degree to which these time-related effects vary in the population of individuals. MIXOR uses marginal maximum likelihood estimation, utilizing a Fisher-scoring solution. For the scoring solution, the Cholesky factor of the random-effects variance-covariance matrix is estimated, along with the effects of model covariates. Examples illustrating usage and features of MIXOR are provided.

  17. Warm Homes, Healthy People Fund 2011/12: a mixed methods evaluation.

    PubMed

    Madden, V; Carmichael, C; Petrokofsky, C; Murray, V

    2014-03-01

    To assess how the Warm Homes Healthy People Fund 2011/12 was used by English local authorities and their partners to tackle excess winter mortality. Mixed-methods evaluation. Three sources of data were used: an online survey to local authority leads, document analysis of local evaluation reports and telephone interviews of local leads. These were analysed to provide numerical estimates, key themes and case studies. There was universal approval of the fund, with all survey respondents requesting the fund to continue. An estimated 130,000 to 200,000 people in England (62% of them elderly) received a wide range of interventions, including structural interventions (such as loft insulation), provision of warm goods and income maximization. Raising awareness was another component, with all survey respondents launching a local media campaign. Strong local partnerships helped to facilitate the implementation of projects. The speed of delivery may have resulted in less strategic targeting of the most vulnerable residents. The Fund was popular and achieved much in winter 2011/2012, although its impact on cold-related morbidity and mortality is unknown. Crown Copyright © 2013. Published by Elsevier Ltd. All rights reserved.

  18. Asbestos exposure--quantitative assessment of risk

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hughes, J.M.; Weill, H.

    Methods for deriving quantitative estimates of asbestos-associated health risks are reviewed and their numerous assumptions and uncertainties described. These methods involve extrapolation of risks observed at past, relatively high asbestos concentration levels down to the usually much lower concentration levels of interest today, in some cases orders of magnitude lower. These models are used to calculate estimates of the potential risk to workers manufacturing asbestos products and to students enrolled in schools containing asbestos products. The potential risk to workers exposed for 40 yr to 0.5 fibers per milliliter (f/ml) of mixed asbestos fiber type (a permissible workplace exposure limit under consideration by the Occupational Safety and Health Administration (OSHA)) is estimated as 82 lifetime excess cancers per 10,000 exposed. The risk to students exposed to an average asbestos concentration of 0.001 f/ml of mixed asbestos fiber types for an average enrollment period of 6 school years is estimated as 5 lifetime excess cancers per one million exposed. If the school exposure is to chrysotile asbestos only, then the estimated risk is 1.5 lifetime excess cancers per million. Risks from other causes are presented for comparison; e.g., annual rates (per million) of 10 deaths from high school football, 14 from bicycling (10-14 yr of age), and 5 to 20 for whooping cough vaccination. Decisions concerning asbestos products require participation of all parties involved and should only be made after a scientifically defensible estimate of the associated risk has been obtained. In many cases to date, such decisions have been made without adequate consideration of the level of risk or the cost-effectiveness of attempts to lower the potential risk. 73 references.

  19. Developing Methods for Fraction Cover Estimation Toward Global Mapping of Ecosystem Composition

    NASA Astrophysics Data System (ADS)

    Roberts, D. A.; Thompson, D. R.; Dennison, P. E.; Green, R. O.; Kokaly, R. F.; Pavlick, R.; Schimel, D.; Stavros, E. N.

    2016-12-01

    Terrestrial vegetation seldom covers an entire pixel due to spatial mixing at many scales. Estimating the fractional contributions of photosynthetic green vegetation (GV), non-photosynthetic vegetation (NPV), and substrate (soil, rock, etc.) to mixed spectra can significantly improve quantitative remote measurement of terrestrial ecosystems. Traditional methods for estimating fractional vegetation cover rely on vegetation indices that are sensitive to variable substrate brightness, NPV and sun-sensor geometry. Spectral mixture analysis (SMA) is an alternate framework that provides estimates of fractional cover. However, simple SMA, in which the same set of endmembers is used for an entire image, fails to account for natural spectral variability within a cover class. Multiple Endmember Spectral Mixture Analysis (MESMA) is a variant of SMA that allows the number and types of pure spectra to vary on a per-pixel basis, thereby accounting for endmember variability and generating more accurate cover estimates, but at a higher computational cost. Routine generation and delivery of GV, NPV, and substrate (S) fractions using MESMA is currently in development for large, diverse datasets acquired by the Airborne Visible Infrared Imaging Spectrometer (AVIRIS). We present initial results, including our methodology for ensuring consistency and generalizability of fractional cover estimates across a wide range of regions, seasons, and biomes. We also assess uncertainty and provide a strategy for validation. GV, NPV, and S fractions are an important precursor for deriving consistent measurements of ecosystem parameters such as plant stress and mortality, functional trait assessment, disturbance susceptibility and recovery, and biomass and carbon stock assessment. Copyright 2016 California Institute of Technology. All Rights Reserved. We acknowledge support of the US Government, NASA, the Earth Science Division and Terrestrial Ecology program.
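
    The core SMA computation that MESMA generalizes is a constrained linear unmixing, sketched below with synthetic endmember spectra: nonnegative fractions of GV, NPV, and substrate are recovered with a sum-to-one constraint imposed by a weighted augmentation row. Endmembers, weights, and noise levels are illustrative assumptions, not AVIRIS products.

```python
import numpy as np
from scipy.optimize import nnls

wl = np.linspace(0.4, 2.4, 50)                               # wavelength (um)
E = np.c_[0.30 + 0.40 * np.exp(-((wl - 0.55) / 0.05) ** 2),  # GV (toy)
          0.20 + 0.20 * wl / 2.4,                            # NPV (toy)
          0.10 + 0.30 * wl / 2.4]                            # substrate (toy)
f_true = np.array([0.5, 0.3, 0.2])
pixel = E @ f_true + np.random.default_rng(3).normal(0, 0.002, wl.size)

w = 10.0                                         # weight on the sum-to-one row
A = np.vstack([E, w * np.ones(3)])
b = np.append(pixel, w)
f_hat, _ = nnls(A, b)                            # nonnegative least squares
print(np.round(f_hat, 3))                        # ~ [0.5, 0.3, 0.2]
```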

  20. Nonlinear mixed effects dose response modeling in high throughput drug screens: application to melanoma cell line analysis.

    PubMed

    Ding, Kuan-Fu; Petricoin, Emanuel F; Finlay, Darren; Yin, Hongwei; Hendricks, William P D; Sereduk, Chris; Kiefer, Jeffrey; Sekulic, Aleksandar; LoRusso, Patricia M; Vuori, Kristiina; Trent, Jeffrey M; Schork, Nicholas J

    2018-01-12

    Cancer cell lines are often used in high throughput drug screens (HTS) to explore the relationship between cell line characteristics and responsiveness to different therapies. Many current analysis methods infer relationships by focusing on one aspect of cell line drug-specific dose-response curves (DRCs), the concentration causing 50% inhibition of a phenotypic endpoint (IC50). Such methods may overlook DRC features and do not simultaneously leverage information about drug response patterns across cell lines, potentially increasing false positive and negative rates in drug response associations. We consider the application of two methods, each rooted in nonlinear mixed effects (NLME) models, that test the relationships between estimated cell line DRCs and factors that might influence response. Both methods leverage estimation and testing techniques that consider the simultaneous analysis of different cell lines to draw inferences about any one cell line. One of the methods is designed to provide an omnibus test of the differences between cell line DRCs that is not focused on any one aspect of the DRC (such as the IC50 value). We simulated different settings and compared the different methods on the simulated data. We also compared the proposed methods against traditional IC50-based methods using 40 melanoma cell lines whose transcriptomes, proteomes, and, importantly, BRAF and related mutation profiles were available. Ultimately, we find that the NLME-based methods are more robust, powerful and, for the omnibus test, more flexible than traditional methods. Their application to the melanoma cell lines reveals insights into factors that may be clinically useful.
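
    For context, the per-curve fit that traditional IC50-based analyses start from can be written in a few lines; the hedged sketch below fits a four-parameter log-logistic DRC with scipy on synthetic data. The NLME methods discussed above go further by sharing information across cell lines through random effects, which this isolated fit does not attempt.

```python
import numpy as np
from scipy.optimize import curve_fit

def ll4(conc, bottom, top, log_ic50, hill):
    """Four-parameter log-logistic dose-response curve."""
    return bottom + (top - bottom) / (1 + 10 ** ((np.log10(conc) - log_ic50) * hill))

conc = 10.0 ** np.arange(-3.0, 3.0, 0.5)              # concentrations (uM)
rng = np.random.default_rng(4)
resp = ll4(conc, 5.0, 100.0, -0.5, 1.2) + rng.normal(0, 3.0, conc.size)

p0 = [resp.min(), resp.max(), np.log10(np.median(conc)), 1.0]
popt, _ = curve_fit(ll4, conc, resp, p0=p0, maxfev=10000)
print(f"IC50 ~ {10 ** popt[2]:.3f} uM, Hill ~ {popt[3]:.2f}")
```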

  1. Interference correction by extracting the information of interference dominant regions: Application to near-infrared spectra

    NASA Astrophysics Data System (ADS)

    Bi, Yiming; Tang, Liang; Shan, Peng; Xie, Qiong; Hu, Yong; Peng, Silong; Tan, Jie; Li, Changwen

    2014-08-01

    Interference such as baseline drift and light scattering can degrade model predictability in multivariate analysis of near-infrared (NIR) spectra. Usually such interference can be represented by an additive and a multiplicative factor, and correction parameters need to be estimated from the spectra in order to eliminate it. However, the spectra are often a mixture of physical light-scattering effects and chemical absorbance effects, making parameter estimation difficult. Herein, a novel algorithm is proposed to automatically find a spectral region in which the chemical absorbance of interest and the noise are both low, that is, an interference dominant region (IDR). Based on the definition of the IDR, a two-step method is proposed to find the optimal IDR and the corresponding correction parameters estimated from it. Finally, the correction is applied to the full spectral range using the previously obtained parameters, for the calibration set and the test set, respectively. The method can be applied to multi-target systems, with one IDR suitable for all targeted analytes. Tested on two benchmark data sets of near-infrared spectra, the proposed method provided considerable improvement compared with full-spectrum estimation methods and was comparable with other state-of-the-art methods.

  2. Number needed to benefit from information (NNBI): proposal from a mixed methods research study with practicing family physicians.

    PubMed

    Pluye, Pierre; Grad, Roland M; Johnson-Lafleur, Janique; Granikov, Vera; Shulha, Michael; Marlow, Bernard; Ricarte, Ivan Luiz Marques

    2013-01-01

    We wanted to describe family physicians' use of information from an electronic knowledge resource for answering clinical questions, and their perception of subsequent patient health outcomes; and to estimate the number needed to benefit from information (NNBI), defined as the number of patients for whom clinical information must be retrieved for 1 to benefit. We undertook a mixed methods research study, combining quantitative longitudinal and qualitative research. Participants were 41 family physicians from primary care clinics across Canada. Physicians were given access to 1 electronic knowledge resource on a handheld computer in 2008-2009. For the outcome assessment, participants rated their searches using a validated method. Rated searches were examined during interviews guided by log reports that included the ratings. Cases were defined as clearly described searches where clinical information was used for a specific patient. For each case, interviewees described information-related patient health outcomes. For the mixed methods data analysis, quantitative and qualitative data were merged into clinical vignettes (each vignette describing a case). We then estimated the NNBI. In 715 of 1,193 searches for information conducted during an average of 86 days, the search objective was directly linked to a patient. Of those searches, 188 were considered to be cases. In 53 cases, participants associated the use of information with at least 1 patient health benefit. This finding suggested an NNBI of 14 (715/53). The NNBI may be used in further experimental research to compare electronic knowledge resources. A low NNBI can encourage clinicians to search for information more frequently. If all searches had benefits, the NNBI would be 1. In addition to patient benefits, learning and knowledge reinforcement outcomes were frequently reported.
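
    The arithmetic behind the headline number is worth making explicit:

```latex
% Worked form of the NNBI computation reported above.
\[
\mathrm{NNBI}
  \;=\; \frac{\text{patient-linked searches}}{\text{searches with a reported benefit}}
  \;=\; \frac{715}{53} \;\approx\; 13.5 \;\longrightarrow\; 14
\]
```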

  3. Improved estimation of sediment source contributions by concentration-dependent Bayesian isotopic mixing model

    NASA Astrophysics Data System (ADS)

    Ram Upadhayay, Hari; Bodé, Samuel; Griepentrog, Marco; Bajracharya, Roshan Man; Blake, Will; Cornelis, Wim; Boeckx, Pascal

    2017-04-01

    The implementation of compound-specific stable isotope (CSSI) analyses of biotracers (e.g. fatty acids, FAs) as constraints on sediment-source contributions has become increasingly relevant for understanding the origin of sediments in catchments. CSSI fingerprinting of sediment uses the CSSI signature of a biotracer as input to an isotopic mixing model (IMM) to apportion source soil contributions. So far, source studies have relied on linear mixing assumptions for the CSSI signatures of sources in the sediment, without accounting for potential effects of source biotracer concentration. Here we evaluated the effect of FA concentrations in sources on the accuracy of source contribution estimates in artificial soil mixtures of three well-separated land use sources. Soil samples from the land use sources were mixed to create three groups of artificial mixtures with known source contributions. Sources and artificial mixtures were analysed for δ13C of FAs using gas chromatography-combustion-isotope ratio mass spectrometry. The source contributions to the mixtures were estimated with MixSIAR, a Bayesian isotopic mixing model, in both its concentration-dependent and concentration-independent forms. The concentration-dependent MixSIAR provided the closest estimates to the known artificial mixture source contributions (mean absolute error, MAE = 10.9%, and standard error, SE = 1.4%). In contrast, the concentration-independent MixSIAR, with post-mixing correction of tracer proportions based on the aggregated FA concentrations of the sources, biased the source contributions (MAE = 22.0%, SE = 3.4%). This study highlights the importance of accounting for the potential effect of source FA concentrations on isotopic mixing in sediments, which adds realism to the mixing model and allows more accurate estimates of source contributions to the mixture. The potential influence of FA concentration on the CSSI signature of sediments is an important underlying factor that determines whether the isotopic signature of a given source is observable in the mixture even after equilibrium. Inclusion of the FA concentrations of the sources in the IMM formulation should therefore be standard procedure for accurate estimation of source contributions; the post-model correction approach that dominates CSSI fingerprinting causes bias, especially if the FA concentrations of the sources differ substantially.
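
    The distinction the study tests is captured by the standard concentration-dependent mixing relation; the notation below is assumed for illustration rather than taken from the paper:

```latex
% f_i = proportion of source i, C_i = FA concentration in source i,
% delta13C_i = CSSI signature of source i. Setting all C_i equal recovers
% the concentration-independent linear mixing model.
\[
\delta^{13}\mathrm{C}_{\mathrm{mix}}
  \;=\; \frac{\sum_i f_i \, C_i \, \delta^{13}\mathrm{C}_i}{\sum_i f_i \, C_i},
\qquad \sum_i f_i = 1
\]
```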

  4. A Method for Qualitative Mapping of Thick Oil Spills Using Imaging Spectroscopy

    USGS Publications Warehouse

    Clark, Roger N.; Swayze, Gregg A.; Leifer, Ira; Livo, K. Eric; Lundeen, Sarah; Eastwood, Michael; Green, Robert O.; Kokaly, Raymond F.; Hoefen, Todd; Sarture, Charles; McCubbin, Ian; Roberts, Dar; Steele, Denis; Ryan, Thomas; Dominguez, Roseanne; Pearson, Neil; ,

    2010-01-01

    A method is described to create qualitative images of thick oil in oil spills on water using near-infrared imaging spectroscopy data. The method uses simple 'three-point-band depths' computed for each pixel in an imaging spectrometer image cube using the organic absorption features due to chemical bonds in aliphatic hydrocarbons at 1.2, 1.7, and 2.3 microns. The method is not quantitative because sub-pixel mixing and layering effects are not considered, which are necessary to make a quantitative volume estimate of oil.
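
    A three-point band depth is simple to compute: interpolate a straight-line continuum between two shoulder wavelengths and measure how far the reflectance at the band center falls below it. The sketch below uses the 1.7 micron aliphatic feature with assumed shoulder positions and a toy spectrum; the cited method's exact wavelengths may differ.

```python
import numpy as np

def band_depth(wl, refl, left, center, right):
    """Depth of an absorption feature below a two-point linear continuum."""
    r = np.interp([left, center, right], wl, refl)
    continuum = r[0] + (r[2] - r[0]) * (center - left) / (right - left)
    return 1.0 - r[1] / continuum

wl = np.linspace(1.5, 1.9, 200)                          # wavelength (um)
refl = 0.4 - 0.12 * np.exp(-((wl - 1.72) / 0.03) ** 2)   # toy thick-oil spectrum
print(f"1.7 um band depth: {band_depth(wl, refl, 1.62, 1.72, 1.82):.3f}")
```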

  5. Dual-tracer method to estimate coral reef response to a plume of chemically modified seawater

    NASA Astrophysics Data System (ADS)

    Maclaren, J. K.; Caldeira, K.

    2013-12-01

    We present a new method, based on measurement of seawater samples, to estimate the response of a reef ecosystem to a plume of an additive (for example, a nutrient or other chemical). In the natural environment, where there may be natural variability in concentrations, it can be difficult to distinguish between changes in concentrations that would occur naturally and changes in concentrations that result from a chemical addition. Furthermore, in the unconfined natural environment, chemically modified water can mix with waters that have not been modified, making it difficult to distinguish between effects of dilution and effects of chemical fluxes or transformations. We present a dual-tracer method that extracts signals from observations that may be affected by both natural variability and dilution. In this dual-tracer method, a substance (in our example case, alkalinity) is added to the water in known proportion to a passive conservative tracer (in our example case, Rhodamine WT dye). The resulting plume of seawater is allowed to flow over the study site. Two transects are drawn across the plume at the front and back of the study site. If, in our example, alkalinity is plotted as a function of dye concentration for the front transect, the slope of the resulting mixing line is the ratio of alkalinity to dye in the added fluid. If a similar mixing line is measured and calculated for the back transect, the slope of this mixing line will indicate the amount of added alkalinity that remains in the water flowing out of the study site per unit of added dye. The ratio of the front and back slopes indicates the fraction of added alkalinity that was taken up by the reef. The method is demonstrated in an experiment performed on One Tree Reef (Queensland, Australia) aimed at showing that ocean acidification is already affecting coral reef growth. In an effort to chemically reverse some of the changes to seawater chemistry that have occurred over the past 200 years, we added sodium hydroxide to increase alkalinity in the plume and controlled for dilution with Rhodamine WT dye. Preliminary data will be presented and analyzed using the approach described above.
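
    The slope-ratio calculation at the heart of the method reduces to two linear regressions. The hedged sketch below uses synthetic transect data: the front-transect slope recovers the added alkalinity-to-dye ratio, the back-transect slope shows what survives transit, and one minus their ratio is the fraction taken up by the reef.

```python
import numpy as np

rng = np.random.default_rng(5)
dye_f = rng.uniform(0, 10, 40)                      # front-transect dye (ppb)
alk_f = 2300 + 6.0 * dye_f + rng.normal(0, 2, 40)   # alkalinity (umol/kg)
dye_b = rng.uniform(0, 8, 40)                       # back transect
alk_b = 2300 + 4.5 * dye_b + rng.normal(0, 2, 40)   # some alkalinity consumed

slope_front = np.polyfit(dye_f, alk_f, 1)[0]
slope_back = np.polyfit(dye_b, alk_b, 1)[0]
uptake = 1 - slope_back / slope_front
print(f"fraction of added alkalinity taken up: {uptake:.2f}")
```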

  6. A new method for estimating the usual intake of episodically-consumed foods with application to their distribution

    PubMed Central

    Midthune, Douglas; Dodd, Kevin W.; Freedman, Laurence S.; Krebs-Smith, Susan M.; Subar, Amy F.; Guenther, Patricia M.; Carroll, Raymond J.; Kipnis, Victor

    2007-01-01

    Objective We propose a new statistical method that uses information from two 24-hour recalls (24HRs) to estimate usual intake of episodically-consumed foods. Statistical Analyses Performed The method developed at the National Cancer Institute (NCI) accommodates the large number of non-consumption days that arise with foods by separating the probability of consumption from the consumption-day amount, using a two-part model. Covariates, such as sex, age, race, or information from a food frequency questionnaire (FFQ), may supplement the information from two or more 24HRs using correlated mixed model regression. The model allows for correlation between the probability of consuming a food on a single day and the consumption-day amount. Percentiles of the distribution of usual intake are computed from the estimated model parameters. Results The Eating at America's Table Study (EATS) data are used to illustrate the method to estimate the distribution of usual intake for whole grains and dark green vegetables for men and women and the distribution of usual intakes of whole grains by educational level among men. A simulation study indicates that the NCI method leads to substantial improvement over existing methods for estimating the distribution of usual intake of foods. Applications/Conclusions The NCI method provides distinct advantages over previously proposed methods by accounting for the correlation between probability of consumption and amount consumed and by incorporating covariate information. Researchers interested in estimating the distribution of usual intakes of foods for a population or subpopulation are advised to work with a statistician and incorporate the NCI method in analyses. PMID:17000190

  7. Accurate reconstruction of viral quasispecies spectra through improved estimation of strain richness

    PubMed Central

    2015-01-01

    Background Estimating the number of different species (richness) in a mixed microbial population has been a main focus in metagenomic research. Existing methods of species richness estimation rely on the assumption that the reads in each assembled contig correspond to only one of the microbial genomes in the population. This assumption and the underlying probabilistic formulations of existing methods are not useful for quasispecies populations, where the strains are highly genetically related. The lack of knowledge of the number of different strains in a quasispecies population is observed to hinder the precision of existing Viral Quasispecies Spectrum Reconstruction (QSR) methods due to the uncontrolled reconstruction of a large number of in silico false positives. In this work, we formulated a novel probabilistic method for strain richness estimation specifically targeting viral quasispecies. Using this approach, we improved our recently proposed spectrum reconstruction pipeline ViQuaS to achieve higher levels of precision in reconstructed quasispecies spectra without compromising the recall rates. We also discuss how another popular QSR method, ShoRAH, can be improved using this new approach. Results On benchmark data sets, our estimation method provided accurate richness estimates (< 0.2 median estimation error) and improved the precision of ViQuaS by 2%-13% and its F-score by 1%-9% without compromising the recall rates. We also demonstrate that our estimation method can be used to improve the precision and F-score of ShoRAH by 0%-7% and 0%-5%, respectively. Conclusions The proposed probabilistic estimation method can be used to estimate the richness of viral populations with quasispecies behavior and to improve the accuracy of the quasispecies spectra reconstructed by the existing methods ViQuaS and ShoRAH in the presence of a moderate level of technical sequencing errors. Availability http://sourceforge.net/projects/viquas/ PMID:26678073

  8. Rapid Contraceptive Uptake and Changing Method Mix With High Use of Long-Acting Reversible Contraceptives in Crisis-Affected Populations in Chad and the Democratic Republic of the Congo.

    PubMed

    Rattan, Jesse; Noznesky, Elizabeth; Curry, Dora Ward; Galavotti, Christine; Hwang, Shuyuan; Rodriguez, Mariela

    2016-08-11

    The global health community has recognized that expanding the contraceptive method mix is a programmatic imperative since (1) one-third of unintended pregnancies are due to method failure or discontinuation, and (2) the addition of a new method to the existing mix tends to increase total contraceptive use. Since July 2011, CARE has been implementing the Supporting Access to Family Planning and Post-Abortion Care (SAFPAC) initiative to increase the availability, quality, and use of contraception, with a particular focus on highly effective and long-acting reversible methods-intrauterine devices (IUDs) and implants-in crisis-affected settings in Chad and the Democratic Republic of the Congo (DRC). This initiative supports government health systems at primary and referral levels to provide a wide range of contraceptive services to people affected by conflict and/or displacement. Before the initiative, long-acting reversible methods were either unknown or unavailable in the intervention areas. However, as soon as trained providers were in place, we noted a dramatic and sustained increase in new users of all contraceptive methods, especially implants, with total new clients reaching 82,855, or 32% of the estimated number of women of reproductive age in the respective catchment areas in both countries, at the end of the fourth year. Demand for implants was very strong in the first 6 months after provider training. During this time, implants consistently accounted for more than 50% of the method mix, reaching as high as 89% in Chad and 74% in DRC. To ensure that all clients were getting the contraceptive method of their choice, we conducted a series of discussions and sought feedback from different stakeholders in order to modify program strategies. Key program modifications included more focused communication in mass media, community, and interpersonal channels about the benefits of IUDs while reinforcing the wide range of methods available and refresher training for providers on how to insert IUDs to strengthen their competence and confidence. Over time, we noted a gradual redistribution of the method mix in parallel with vigorous continued family planning uptake. This experience suggests that analyzing method mix can be helpful for designing program strategies and that expanding method choice can accelerate satisfying demand, especially in environments with high unmet need for contraception. © Rattan et al.

  9. Baseline Estimation and Outlier Identification for Halocarbons

    NASA Astrophysics Data System (ADS)

    Wang, D.; Schuck, T.; Engel, A.; Gallman, F.

    2017-12-01

    The aim of this paper is to build a baseline model for halocarbons and to statistically identify outliers under specific conditions. Time series of regional CFC-11 and chloromethane measurements taken over the last 4 years at two locations are discussed: a monitoring station northwest of Frankfurt am Main (Germany) and the Mace Head station (Ireland). In addition to analyzing the time series, a statistical approach to outlier identification is introduced in order to make a better estimation of the baseline. A second-order polynomial plus harmonics is fitted to the CFC-11 and chloromethane mixing-ratio data. Measurements at a large distance from the fitted curve are regarded as outliers and flagged. The routine is applied iteratively, without the flagged measurements, until no additional outliers are found. Both the model fitting and the proposed outlier identification method are implemented in the programming language Python. During the period, CFC-11 shows a gradual downward trend, and there is a slight upward trend in the mixing ratios of chloromethane. The concentration of chloromethane also has a strong seasonal variation, mostly due to the seasonal cycle of OH. The use of this statistical method has a considerable effect on the results: it efficiently identifies a series of outliers according to the standard-deviation requirements, and after removing the outliers, the fitted curves and trend estimates are more reliable.
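
    Since the abstract names the approach (a polynomial-plus-harmonics fit with iterative outlier rejection, implemented in Python), a hedged reconstruction is straightforward; the synthetic data, harmonic choices, and 2.5-sigma threshold below are illustrative assumptions, not the authors' settings.

```python
import numpy as np

rng = np.random.default_rng(6)
t = np.linspace(0.0, 4.0, 400)                         # time (years)
y = 230 - 0.8 * t + 1.5 * np.sin(2 * np.pi * t) + rng.normal(0, 0.3, t.size)
y[::40] += rng.uniform(3, 8, y[::40].size)             # synthetic pollution events

def design(t):
    """Second-order polynomial plus annual and semiannual harmonics."""
    return np.c_[np.ones_like(t), t, t**2,
                 np.sin(2 * np.pi * t), np.cos(2 * np.pi * t),
                 np.sin(4 * np.pi * t), np.cos(4 * np.pi * t)]

keep = np.ones(t.size, dtype=bool)
for _ in range(10):                                    # iterate until stable
    coef, *_ = np.linalg.lstsq(design(t[keep]), y[keep], rcond=None)
    resid = y - design(t) @ coef
    new_keep = np.abs(resid) < 2.5 * resid[keep].std()
    if (new_keep == keep).all():
        break
    keep = new_keep
print(f"flagged {np.count_nonzero(~keep)} outliers; trend = {coef[1]:+.2f}/yr")
```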

  10. Mixed effects versus fixed effects modelling of binary data with inter-subject variability.

    PubMed

    Murphy, Valda; Dunne, Adrian

    2005-04-01

    The question of whether or not a mixed effects model is required when modelling binary data with inter-subject variability and within-subject correlation was addressed in this journal by Yano et al. (J. Pharmacokin. Pharmacodyn. 28:389-412 [2001]). That report used simulation experiments to demonstrate that, under certain circumstances, the use of a fixed effects model produced more accurate estimates of the fixed effect parameters than those produced by a mixed effects model. The Laplace approximation to the likelihood was used when fitting the mixed effects model. This paper repeats one of those simulation experiments, with two binary observations recorded for every subject, and uses both the Laplace and the adaptive Gaussian quadrature approximations to the likelihood when fitting the mixed effects model. The results show that the estimates produced using the Laplace approximation include a small number of extreme outliers. This was not the case when using the adaptive Gaussian quadrature approximation. Further examination of these outliers shows that they arise in situations in which the Laplace approximation seriously overestimates the likelihood in an extreme region of the parameter space. It is also demonstrated that when the number of observations per subject is increased from two to three, the estimates based on the Laplace approximation no longer include any extreme outliers. The root mean squared error is a combination of the bias and the variability of the estimates. Increasing the sample size is known to reduce the variability of an estimator, with a consequent reduction in its root mean squared error. The estimates based on the fixed effects model are inherently biased, and this bias acts as a lower bound for the root mean squared error of these estimates. Consequently, it might be expected that for data sets with a greater number of subjects the estimates based on the mixed effects model would be more accurate than those based on the fixed effects model. This is borne out by the results of a further simulation experiment with an increased number of subjects in each set of data. The difference in the interpretation of the parameters of the fixed and mixed effects models is discussed. It is demonstrated that the mixed effects model and parameter estimates can be used to estimate the parameters of the fixed effects model, but not vice versa.

  11. Effects of phylogenetic reconstruction method on the robustness of species delimitation using single-locus data

    PubMed Central

    Tang, Cuong Q; Humphreys, Aelys M; Fontaneto, Diego; Barraclough, Timothy G; Paradis, Emmanuel

    2014-01-01

    Coalescent-based species delimitation methods combine population genetic and phylogenetic theory to provide an objective means for delineating evolutionarily significant units of diversity. The generalised mixed Yule coalescent (GMYC) and the Poisson tree process (PTP) are methods that use ultrametric (GMYC or PTP) or non-ultrametric (PTP) gene trees as input, intended for use mostly with single-locus data such as DNA barcodes. Here, we assess how robust the GMYC and PTP are to different phylogenetic reconstruction and branch smoothing methods. We reconstruct over 400 ultrametric trees using up to 30 different combinations of phylogenetic and smoothing methods and perform over 2000 separate species delimitation analyses across 16 empirical data sets. We then assess how variable diversity estimates are, in terms of richness and identity, with respect to species delimitation, phylogenetic and smoothing methods. The PTP method generally generates diversity estimates that are more robust to different phylogenetic methods. The GMYC is more sensitive, but provides consistent estimates for BEAST trees. The lower consistency of GMYC estimates is likely a result of differences among gene trees introduced by the smoothing step. Unresolved nodes (real anomalies or methodological artefacts) affect both GMYC and PTP estimates, but have a greater effect on GMYC estimates. Branch smoothing is a difficult step and perhaps an underappreciated source of bias that may be widespread among studies of diversity and diversification. Nevertheless, careful choice of phylogenetic method does produce equivalent PTP and GMYC diversity estimates. We recommend simultaneous use of the PTP model with any model-based gene tree (e.g. RAxML) and GMYC approaches with BEAST trees for obtaining species hypotheses. PMID:25821577

  12. Enhancing the chemiluminescence intensity of a KMnO4 formaldehyde system for estimating the total phenolic content in honey samples using a novel nanodroplet mixing approach in a microfluidics platform.

    PubMed

    Al Lawati, Haider A J; Al Mughairy, Baqia; Al Lawati, Iman; Suliman, FakhrEldin O

    2018-04-30

    A novel mixing approach was utilized with a highly sensitive chemiluminescence (CL) method to determine the total phenolic content (TPC) in honey samples using an acidic potassium permanganate-formaldehyde system. The mixing approach was based on exploiting the mixing efficiency of nanodroplets generated in a microfluidic platform. Careful optimization of the instrument setup and various experimental conditions was employed to obtain excellent sensitivity. The mixing efficiency of the droplets was compared with the CL signal intensity obtained using the common serpentine chip design, with both approaches using a total flow rate of 15 μl min-1; the results showed that the nanodroplets provided 600% higher CL signal intensity at this low flow rate. Using the optimum conditions, calibration equations, limits of detection (LOD) and limits of quantification (LOQ) for gallic acid (GA), caffeic acid (CA), kaempferol (KAM), quercetin (QRC) and catechin (CAT) were obtained. The LOD ranged from 6.2 ppb for CA to 11.0 ppb for QRC. Finally, the method was applied to the determination of TPC in several local and commercial honey samples. Copyright © 2018 John Wiley & Sons, Ltd.

  13. Estimation of proportions in mixed pixels through their region characterization

    NASA Technical Reports Server (NTRS)

    Chittineni, C. B. (Principal Investigator)

    1981-01-01

    A region of mixed pixels can be characterized through the probability density function of the proportions of classes in the pixels. Using information from the spectral vectors of a given set of pixels from the mixed pixel region, expressions are developed for obtaining the maximum likelihood estimates of the parameters of the probability density functions of proportions. The proportions of classes in the mixed pixels can then be estimated. If the mixed pixels contain objects of two classes, the computation can be reduced by transforming the spectral vectors using a transformation matrix that simultaneously diagonalizes the covariance matrices of the two classes. If the proportions of the classes of a set of mixed pixels from the region are given, then expressions are developed for obtaining the estimates of the parameters of the probability density function of the proportions of mixed pixels. Development of these expressions is based on the criterion of the minimum sum of squares of errors. Experimental results from the processing of remotely sensed agricultural multispectral imagery data are presented.

  14. Experimental methods in aquatic respirometry: the importance of mixing devices and accounting for background respiration.

    PubMed

    Rodgers, G G; Tenzing, P; Clark, T D

    2016-01-01

    In light of an increasing trend in fish biology towards using static respirometry techniques without the inclusion of a mixing mechanism and without accurately accounting for the influence of microbial (background) respiration, this paper quantifies the effect of these approaches on the oxygen consumption rates (ṀO2 ) measured from juvenile barramundi Lates calcarifer (mean ± s.e. mass = 20·31 ± 0·81 g) and adult spiny chromis damselfish Acanthochromis polyacanthus (22·03 ± 2·53 g). Background respiration changed consistently and in a sigmoidal manner over time in the treatment with a mixing device (inline recirculation pump), whereas attempts to measure background respiration in the non-mixed treatment yielded highly variable estimates of ṀO2 that were probably artefacts due to the lack of water movement over the oxygen sensor during measurement periods. This had clear consequences when accounting for background respiration in the calculations of fish ṀO2 . Exclusion of a mixing device caused a significantly lower estimate of ṀO2 in both species and reduced the capacity to detect differences between individuals as well as differences within an individual over time. There was evidence to suggest that the magnitude of these effects was dependent on the spontaneous activity levels of the fish, as the difference between mixed and non-mixed treatments was more pronounced for L. calcarifer (sedentary) than for A. polyacanthus (more spontaneously active). It is clear that respirometry set-ups for sedentary species must contain a mixing device to prevent oxygen stratification inside the respirometer. While more active species may provide a higher level of water mixing during respirometry measurements and theoretically reduce the need for a mixing device, the level of mixing cannot be quantified and may change with diurnal cycles in activity. To ensure consistency across studies without relying on fish activity levels, and to enable accurate assessments of background respiration, it is recommended that all respirometry systems should include an appropriate mixing device. © 2016 The Fisheries Society of the British Isles.

  15. Kurtosis Approach for Nonlinear Blind Source Separation

    NASA Technical Reports Server (NTRS)

    Duong, Vu A.; Stubberud, Allen R.

    2005-01-01

    In this paper, we introduce a new algorithm for blind source signal separation for post-nonlinear mixtures. The mixtures are assumed to be linearly mixed from unknown sources first and then distorted by memoryless nonlinear functions. The nonlinear functions are assumed to be smooth and can be approximated by polynomials. Both the coefficients of the unknown mixing matrix and the coefficients of the approximated polynomials are estimated by the gradient descent method conditional on the higher order statistical requirements. The results of simulation experiments presented in this paper demonstrate the validity and usefulness of our approach for nonlinear blind source signal separation.
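
    The linear, kurtosis-driven half of the problem can be demonstrated with an off-the-shelf tool: sklearn's FastICA with the 'cube' contrast is a kurtosis-type separator. The paper's post-nonlinear setting additionally estimates polynomial compensators by gradient descent, which the hedged sketch below (with made-up sources and mixing matrix) does not attempt.

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(8)
t = np.linspace(0, 1, 2000)
S = np.c_[np.sign(np.sin(7 * 2 * np.pi * t)),    # sub-Gaussian square wave
          rng.laplace(size=t.size)]              # super-Gaussian source
A = np.array([[1.0, 0.6],
              [0.4, 1.0]])                       # unknown mixing matrix
X = S @ A.T                                      # observed linear mixtures

ica = FastICA(n_components=2, fun="cube", random_state=0)
S_hat = ica.fit_transform(X)                     # recovered up to order/scale/sign
print(np.round(np.corrcoef(S.T, S_hat.T)[:2, 2:], 2))
```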

  16. Skew-t partially linear mixed-effects models for AIDS clinical studies.

    PubMed

    Lu, Tao

    2016-01-01

    We propose partially linear mixed-effects models with asymmetry and missingness to investigate the relationship between two biomarkers in clinical studies. The proposed models take into account irregular time effects commonly observed in clinical studies under a semiparametric model framework. In addition, the commonly assumed symmetric distributions for model errors are replaced by an asymmetric distribution to account for skewness. Further, an informative missing data mechanism is accounted for. A Bayesian approach is developed to perform parameter estimation simultaneously. The proposed model and method are applied to an AIDS dataset, and comparisons with alternative models are performed.

  17. Partially linear mixed-effects joint models for skewed and missing longitudinal competing risks outcomes.

    PubMed

    Lu, Tao; Lu, Minggen; Wang, Min; Zhang, Jun; Dong, Guang-Hui; Xu, Yong

    2017-12-18

    Longitudinal competing risks data frequently arise in clinical studies. Skewness and missingness are commonly observed for these data in practice. However, most joint models do not account for these data features. In this article, we propose partially linear mixed-effects joint models to analyze skewed longitudinal competing risks data with missingness. In particular, to account for skewness, we replace the commonly assumed symmetric distributions with an asymmetric distribution for model errors. To deal with missingness, we employ an informative missing data model. The joint models couple the partially linear mixed-effects model for the longitudinal process, the cause-specific proportional hazards model for the competing risks process, and the missing data process. To estimate the parameters in the joint models, we propose a fully Bayesian approach based on the joint likelihood. To illustrate the proposed model and method, we apply them to an AIDS clinical study. Some interesting findings are reported. We also conduct simulation studies to validate the proposed method.

  18. Tutorial on Biostatistics: Linear Regression Analysis of Continuous Correlated Eye Data.

    PubMed

    Ying, Gui-Shuang; Maguire, Maureen G; Glynn, Robert; Rosner, Bernard

    2017-04-01

    To describe and demonstrate appropriate linear regression methods for analyzing correlated continuous eye data. We describe several approaches to regression analysis involving both eyes, including mixed effects and marginal models under various covariance structures to account for inter-eye correlation. We demonstrate, with SAS statistical software, applications in a study comparing baseline refractive error between one eye with choroidal neovascularization (CNV) and the unaffected fellow eye, and in a study determining factors associated with visual field in the elderly. When refractive error from both eyes was analyzed with standard linear regression without accounting for inter-eye correlation (adjusting for demographic and ocular covariates), the difference between eyes with CNV and fellow eyes was 0.15 diopters (D; 95% confidence interval, CI -0.03 to 0.32D, p = 0.10). Using a mixed effects model or a marginal model, the estimated difference was the same but with a narrower 95% CI (0.01 to 0.28D, p = 0.03). Standard regression for visual field data from both eyes provided biased estimates of standard error (generally underestimated) and smaller p-values, while analysis of the worse eye provided larger p-values than mixed effects models and marginal models. In research involving both eyes, ignoring inter-eye correlation can lead to invalid inferences. Analysis using only right or left eyes is valid, but decreases power. Worse-eye analysis can provide less power and biased estimates of effect. Mixed effects or marginal models using the eye as the unit of analysis should be used to appropriately account for inter-eye correlation and maximize power and precision.
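
    Although the tutorial uses SAS, both recommended analyses are also available in Python's statsmodels, as the hedged sketch below shows on simulated two-eye data: a random intercept per patient (mixed effects) and a GEE marginal model with exchangeable within-patient correlation. Variable names and effect sizes are illustrative, not from the cited studies.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)
n = 200                                          # patients, two eyes each
pid = np.repeat(np.arange(n), 2)
cnv = np.tile([1, 0], n)                         # one affected eye per patient
u = rng.normal(0, 0.8, n)                        # shared between-eye effect
df = pd.DataFrame({
    "pid": pid,
    "cnv": cnv,
    "refract": 0.15 * cnv + u[pid] + rng.normal(0, 0.5, 2 * n),
})

# Mixed effects: random intercept per patient.
mixed = smf.mixedlm("refract ~ cnv", df, groups="pid").fit()
# Marginal model: GEE with exchangeable within-patient correlation.
gee = smf.gee("refract ~ cnv", groups="pid", data=df,
              cov_struct=sm.cov_struct.Exchangeable()).fit()
print(mixed.params["cnv"], gee.params["cnv"])    # similar point estimates
```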

  19. Estimation of social value of statistical life using willingness-to-pay method in Nanjing, China.

    PubMed

    Yang, Zhao; Liu, Pan; Xu, Xin

    2016-10-01

    Rational decision making regarding safety-related investment programs depends greatly on the economic valuation of traffic crashes. The primary objective of this study was to estimate the social value of statistical life in the city of Nanjing in China. A stated preference survey was conducted to investigate travelers' willingness to pay for traffic risk reduction. Face-to-face interviews were conducted at stations, shopping centers, schools, and parks in different districts in the urban area of Nanjing. The respondents were categorized into two groups: motorists and non-motorists. Both a binary logit model and a mixed logit model were developed for the two groups. The results revealed that the mixed logit model is superior to the fixed-coefficient binary logit model. The factors that significantly affect people's willingness to pay for risk reduction include income, education, gender, age, driving age (for motorists), occupation, whether the charged fees were used to improve private vehicle equipment (for motorists), reduction in fatality rate, and change in travel cost. The Monte Carlo simulation method was used to generate the distribution of the value of statistical life (VSL). Based on the mixed logit model, the VSL had a mean value of 3,729,493 RMB ($586,610) with a standard deviation of 2,181,592 RMB ($343,142) for motorists, and a mean of 3,281,283 RMB ($505,318) with a standard deviation of 2,376,975 RMB ($366,054) for non-motorists. Using the tax system to illustrate the contribution of different income groups to social funds, the social value of statistical life was estimated. The average social value of statistical life was found to be 7,184,406 RMB ($1,130,032). Copyright © 2016 Elsevier Ltd. All rights reserved.

  20. Retrieval of average CO2 fluxes by combining in situ CO2 measurements and backscatter lidar information

    NASA Astrophysics Data System (ADS)

    Gibert, Fabien; Schmidt, Martina; Cuesta, Juan; Ciais, Philippe; Ramonet, Michel; Xueref, Irène; Larmanou, Eric; Flamant, Pierre Henri

    2007-05-01

    The present paper deals with a boundary layer budgeting method which makes use of observations from various in situ and remote sensing instruments to infer the regional average net ecosystem exchange (NEE) of CO2. Measurements of CO2 within and above the atmospheric boundary layer (ABL) by in situ sensors, in conjunction with precise knowledge of the change in ABL height from lidar and radiosoundings, make it possible to infer diurnal and seasonal NEE variations. Near-ground in situ CO measurements are used to discriminate between the natural and anthropogenic contributions to CO2 diurnal variations in the ABL. The method yields a mean NEE that amounts to 5 μmol m-2 s-1 during the night and -20 μmol m-2 s-1 in the middle of the day between May and July. Good agreement is found with the expected NEE, accounting for a mixed wheat field and forest area during the winter season, representative of the mesoscale ecosystems in the Paris area according to the trajectory of an air column crossing the landscape. Daytime NEE is seen to follow the vegetation growth and the change in the ratio of diffuse to direct radiation. The CO2 vertical mixing flux during the rise of the atmospheric boundary layer is also estimated and appears to be the main cause of the large decrease of the CO2 mixing ratio in the morning. The resulting CO2 flux estimates are compared to eddy-covariance measurements on a barley field. The importance of various sources of error and uncertainty in the retrieval is discussed. These errors are estimated to be less than 15%, with the main error resulting from anthropogenic emissions.
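
    A simplified form of the boundary-layer budget behind such estimates, given here as a hedged reading rather than the paper's exact formulation, balances the surface flux against storage and entrainment of free-troposphere air as the layer deepens:

```latex
% C_ABL = ABL-mean CO2, C_FT = free-troposphere CO2, h = ABL height,
% F_NEE = surface flux; mixing ratios must be converted to molar units
% via the air density before quoting F_NEE in umol m^-2 s^-1.
\[
h \, \frac{\mathrm{d}C_{\mathrm{ABL}}}{\mathrm{d}t}
  \;=\; F_{\mathrm{NEE}}
  \;+\; \bigl(C_{\mathrm{FT}} - C_{\mathrm{ABL}}\bigr)\,\frac{\mathrm{d}h}{\mathrm{d}t}
\]
```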

  1. Quantification of proportions of different water sources in a mining operation.

    PubMed

    Scheiber, Laura; Ayora, Carlos; Vázquez-Suñé, Enric

    2018-04-01

    The water drained in mining operations (galleries, shafts, open pits) usually comes from different sources. Evaluating the contribution of these sources is very often necessary for water management. To determine mixing ratios, a conventional mass balance is often used. However, the presence of more than two sources creates uncertainties in mass balance applications. Moreover, the composition of the end-members is not commonly known with certainty and/or can vary in space and time. In this paper, we propose a powerful tool, based on multivariate statistical analysis, for solving such problems and managing groundwater at mining sites. This approach was applied to the Cobre Las Cruces mining complex, the largest copper mine in Europe. There, the open pit water is a mixture of three end-members: runoff (RO), basal Miocene (Mb) and Paleozoic (PZ) groundwater. The volume of water drained from the Miocene base aquifer must be determined and compensated via artificial recharge to comply with current regulations. Through multivariate statistical analysis of samples from a regional field campaign, the compositions of the PZ and Mb end-members were first estimated and then used for mixing calculations at the open pit scale. The runoff end-member was directly determined from samples collected in interception trenches inside the open pit. The application of multivariate statistical methods allowed the estimation of mixing ratios for the hydrological years 2014-2015 and 2015-2016. Open pit water proportions changed from 15% to 7%, 41% to 36%, and 44% to 57% for the runoff, Mb and PZ end-members, respectively. An independent estimation of runoff based on the curve method yielded comparable results. Copyright © 2017 Elsevier B.V. All rights reserved.

  2. A tutorial on Bayesian bivariate meta-analysis of mixed binary-continuous outcomes with missing treatment effects.

    PubMed

    Gajic-Veljanoski, Olga; Cheung, Angela M; Bayoumi, Ahmed M; Tomlinson, George

    2016-05-30

    Bivariate random-effects meta-analysis (BVMA) is a method of data synthesis that accounts for treatment effects measured on two outcomes. BVMA gives more precise estimates of the population mean and predicted values than two univariate random-effects meta-analyses (UVMAs). BVMA also addresses bias from incomplete reporting of outcomes. A few tutorials have covered technical details of BVMA of categorical or continuous outcomes. Limited guidance is available on how to analyze datasets that include trials with mixed continuous-binary outcomes where treatment effects on one outcome or the other are not reported. Given the advantages of Bayesian BVMA for handling missing outcomes, we present a tutorial for Bayesian BVMA of incompletely reported treatment effects on mixed bivariate outcomes. This step-by-step approach can serve as a model for our intended audience, the methodologist familiar with Bayesian meta-analysis who is looking for practical advice on fitting bivariate models. To facilitate application of the proposed methods, we include our WinBUGS code. As an example, we use aggregate-level data from published trials to demonstrate the estimation of the effects of vitamin K and bisphosphonates on two correlated bone outcomes, fracture and bone mineral density. We present datasets where reporting of the pairs of treatment effects on both outcomes was 'partially' complete (i.e., pairs completely reported in some trials), and we outline steps for modeling the incompletely reported data. To assess what is gained from the additional work required by BVMA, we compare the resulting estimates to those from separate UVMAs. We discuss methodological findings and make four recommendations. Copyright © 2015 John Wiley & Sons, Ltd.
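
    As a complete-case sketch only (the paper's WinBUGS model additionally handles the missing outcome pairs), a bivariate random-effects likelihood can be maximized directly; all effect sizes and within-trial variances below are hypothetical.

    ```python
    import numpy as np
    from scipy.optimize import minimize
    from scipy.stats import multivariate_normal

    # Hypothetical trials: effects on two outcomes with known within-trial
    # covariance matrices (here diagonal for simplicity).
    y = np.array([[0.30, 0.10], [0.55, 0.22], [0.41, 0.08],
                  [0.25, 0.18], [0.60, 0.30]])
    S = [np.diag(v) for v in ([0.04, 0.010], [0.06, 0.015], [0.03, 0.008],
                              [0.05, 0.012], [0.08, 0.020])]

    def nll(theta):
        mu, (t1, t2, z) = theta[:2], theta[2:]
        rho, tau = np.tanh(z), np.exp([t1, t2])   # keep rho in (-1,1), tau > 0
        Sigma = np.array([[tau[0]**2, rho*tau[0]*tau[1]],
                          [rho*tau[0]*tau[1], tau[1]**2]])
        return -sum(multivariate_normal.logpdf(yi, mu, Si + Sigma)
                    for yi, Si in zip(y, S))

    fit = minimize(nll, x0=[0.4, 0.15, np.log(0.1), np.log(0.05), 0.0],
                   method="Nelder-Mead")
    print("pooled means:", fit.x[:2].round(3))
    ```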

  3. Compatible estimators of the components of change for a rotating panel forest inventory design

    Treesearch

    Francis A. Roesch

    2007-01-01

    This article presents two approaches for estimating the components of forest change utilizing data from a rotating panel sample design. One approach uses a variant of the exponentially weighted moving average estimator and the other approach uses mixed estimation. Three general transition models were each combined with a single compatibility model for the mixed...
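
    The record is truncated here, but the first estimator it names is easy to illustrate: a plain exponentially weighted moving average over successive panel estimates. The smoothing weight below is arbitrary, and the paper's variant with its compatibility constraints is not reproduced.

    ```python
    def ewma(panel_estimates, lam=0.3):
        # lam weights the newest panel estimate; (1 - lam) carries the history.
        est = panel_estimates[0]
        for x in panel_estimates[1:]:
            est = lam * x + (1 - lam) * est
        return est

    # Hypothetical yearly panel estimates of, say, growing-stock volume:
    print(ewma([102.0, 98.5, 101.2, 99.8]))
    ```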

  4. Estimating the Diets of Animals Using Stable Isotopes and a Comprehensive Bayesian Mixing Model

    PubMed Central

    Hopkins, John B.; Ferguson, Jake M.

    2012-01-01

    Using stable isotope mixing models (SIMMs) as a tool to investigate the foraging ecology of animals is gaining popularity among researchers. As a result, statistical methods are rapidly evolving and numerous models have been produced to estimate the diets of animals, each with its own benefits and limitations. Deciding which SIMM to use is contingent on factors such as the consumer of interest, its food sources, sample size, the familiarity a user has with a particular framework for statistical analysis, or the level of inference the researcher desires to make (e.g., population- or individual-level). In this paper, we provide a review of commonly used SIMMs and describe a comprehensive SIMM, IsotopeR, that includes all features commonly used in SIMM analysis and two new features. We used data collected in Yosemite National Park to demonstrate IsotopeR's ability to estimate dietary parameters. We then examined the importance of each feature in the model and compared our results to inferences from commonly used SIMMs. IsotopeR's user interface (in R) will provide researchers a user-friendly tool for SIMM analysis. The model is also applicable for use in paleontology, archaeology, and forensic studies, as well as for estimating pollution inputs. PMID:22235246
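
    Stripped of IsotopeR's Bayesian machinery (error structure, individual effects), the core of any SIMM is a linear mixing system. A minimal deterministic sketch with two isotopes and three hypothetical sources, assuming the signatures are already corrected for trophic discrimination:

    ```python
    import numpy as np

    # Hypothetical source signatures (columns: plants, animal matter, human food).
    d13C = [-26.0, -20.0, -12.0]
    d15N = [  2.0,   8.0,   6.0]
    consumer = [-19.4, 5.6]                  # hypothetical consumer tissue values

    # Two mixing equations plus the sum-to-one constraint give a square system.
    A = np.array([d13C, d15N, [1.0, 1.0, 1.0]])
    b = np.array(consumer + [1.0])
    p = np.linalg.solve(A, b)
    print(p.round(3))                        # diet proportions, here (0.3, 0.4, 0.3)
    ```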

  5. Reliability and Validity in Hospital Case-Mix Measurement

    PubMed Central

    Pettengill, Julian; Vertrees, James

    1982-01-01

    There is widespread interest in the development of a measure of hospital output. This paper describes the problem of measuring the expected cost of the mix of inpatient cases treated in a hospital (hospital case-mix) and a general approach to its solution. The solution is based on a set of homogeneous groups of patients, defined by a patient classification system, and a set of estimated relative cost weights corresponding to the patient categories. This approach is applied to develop a summary measure of the expected relative costliness of the mix of Medicare patients treated in 5,576 participating hospitals. The Medicare case-mix index is evaluated by estimating a hospital average cost function. This provides a direct test of the hypothesis that the relationship between Medicare case-mix and Medicare cost per case is proportional. The cost function analysis also provides a means of simulating the effects of classification error on our estimate of this relationship. Our results indicate that this general approach to measuring hospital case-mix provides a valid and robust measure of the expected cost of a hospital's case-mix. PMID:10309909
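
    The index construction itself is a weighted average, sketched below with hypothetical patient groups, relative cost weights, and case counts (the paper's Medicare index is built the same way over its patient classification categories):

    ```python
    # Hypothetical relative cost weights and one hospital's case counts.
    weights = {"A": 0.80, "B": 1.00, "C": 2.40}
    cases   = {"A": 300,  "B": 500,  "C": 200}

    total_cases = sum(cases.values())
    case_mix_index = sum(weights[g] * cases[g] for g in cases) / total_cases
    print(round(case_mix_index, 3))   # expected relative costliness of the mix
    ```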

  6. Improving the Accuracy of Mapping Urban Vegetation Carbon Density by Combining Shadow Remove, Spectral Unmixing Analysis and Spatial Modeling

    NASA Astrophysics Data System (ADS)

    Qie, G.; Wang, G.; Wang, M.

    2016-12-01

    Mixed pixels and shadows cast by buildings impede accurate estimation and mapping of vegetation carbon density in urban areas. Most previous studies ignore these factors, which results in underestimation of city vegetation carbon density. In this study we present an integrated methodology to improve the accuracy of mapping city vegetation carbon density. Firstly, we applied a linear shadow removal analysis (LSRA) to remotely sensed Landsat 8 images to reduce the shadow effects on carbon estimation. Secondly, we integrated a linear spectral unmixing analysis (LSUA) with a linear stepwise regression (LSR), a logistic model-based stepwise regression (LMSR) and k-Nearest Neighbors (kNN), and applied and compared the integrated models on shadow-removed images to map vegetation carbon density. The methodology was examined in Shenzhen City, Southeast China. A data set from a total of 175 sample plots measured in 2013 and 2014 was used to train the models. The independent variables that statistically significantly contributed to improving the fit of the models and reducing the sum of squared errors were selected from a total of 608 variables derived from different image band combinations and transformations. The vegetation fraction from LSUA was then added into the models as an important independent variable. The resulting estimates were evaluated using a cross-validation method. Our results showed that the integrated models achieved higher accuracies than traditional methods that ignore the effects of mixed pixels and shadows. This study indicates that the integrated method has great potential for improving the accuracy of urban vegetation carbon density estimation. Key words: urban vegetation carbon, shadow, spectral unmixing, spatial modeling, Landsat 8 images
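
    The LSUA step can be sketched as constrained least squares on endmember spectra; the reflectance values below are hypothetical, and the paper's shadow removal and regression models are not reproduced.

    ```python
    import numpy as np
    from scipy.optimize import nnls

    # Hypothetical endmember spectra (rows: bands; columns: vegetation, soil,
    # impervious surface) and one shadow-removed mixed-pixel spectrum.
    E = np.array([[0.05, 0.20, 0.30],
                  [0.08, 0.25, 0.28],
                  [0.45, 0.30, 0.25],
                  [0.30, 0.35, 0.22]])
    pixel = E @ np.array([0.5, 0.3, 0.2])    # synthetic mixture for the demo

    f, _ = nnls(E, pixel)                    # non-negative fractions
    f = f / f.sum()                          # normalize to sum to one
    print("vegetation fraction:", round(float(f[0]), 3))
    ```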

  7. Rapid Contraceptive Uptake and Changing Method Mix With High Use of Long-Acting Reversible Contraceptives in Crisis-Affected Populations in Chad and the Democratic Republic of the Congo

    PubMed Central

    Rattan, Jesse; Noznesky, Elizabeth; Curry, Dora Ward; Galavotti, Christine; Hwang, Shuyuan; Rodriguez, Mariela

    2016-01-01

    The global health community has recognized that expanding the contraceptive method mix is a programmatic imperative since (1) one-third of unintended pregnancies are due to method failure or discontinuation, and (2) the addition of a new method to the existing mix tends to increase total contraceptive use. Since July 2011, CARE has been implementing the Supporting Access to Family Planning and Post-Abortion Care (SAFPAC) initiative to increase the availability, quality, and use of contraception, with a particular focus on highly effective and long-acting reversible methods—intrauterine devices (IUDs) and implants—in crisis-affected settings in Chad and the Democratic Republic of the Congo (DRC). This initiative supports government health systems at primary and referral levels to provide a wide range of contraceptive services to people affected by conflict and/or displacement. Before the initiative, long-acting reversible methods were either unknown or unavailable in the intervention areas. However, as soon as trained providers were in place, we noted a dramatic and sustained increase in new users of all contraceptive methods, especially implants, with total new clients reaching 82,855, or 32% of the estimated number of women of reproductive age in the respective catchment areas in both countries, at the end of the fourth year. Demand for implants was very strong in the first 6 months after provider training. During this time, implants consistently accounted for more than 50% of the method mix, reaching as high as 89% in Chad and 74% in DRC. To ensure that all clients were getting the contraceptive method of their choice, we conducted a series of discussions and sought feedback from different stakeholders in order to modify program strategies. Key program modifications included more focused communication in mass media, community, and interpersonal channels about the benefits of IUDs while reinforcing the wide range of methods available and refresher training for providers on how to insert IUDs to strengthen their competence and confidence. Over time, we noted a gradual redistribution of the method mix in parallel with vigorous continued family planning uptake. This experience suggests that analyzing method mix can be helpful for designing program strategies and that expanding method choice can accelerate satisfying demand, especially in environments with high unmet need for contraception. PMID:27540125

  8. Estimating the Effective Sample Size of Tree Topologies from Bayesian Phylogenetic Analyses

    PubMed Central

    Lanfear, Robert; Hua, Xia; Warren, Dan L.

    2016-01-01

    Bayesian phylogenetic analyses estimate posterior distributions of phylogenetic tree topologies and other parameters using Markov chain Monte Carlo (MCMC) methods. Before making inferences from these distributions, it is important to assess their adequacy. To this end, the effective sample size (ESS) estimates how many truly independent samples of a given parameter the output of the MCMC represents. The ESS of a parameter is frequently much lower than the number of samples taken from the MCMC because sequential samples from the chain can be non-independent due to autocorrelation. Typically, phylogeneticists use a rule of thumb that the ESS of all parameters should be greater than 200. However, we have no method to calculate an ESS of tree topology samples, despite the fact that the tree topology is often the parameter of primary interest and is almost always central to the estimation of other parameters. That is, we lack a method to determine whether we have adequately sampled one of the most important parameters in our analyses. In this study, we address this problem by developing methods to estimate the ESS for tree topologies. We combine these methods with two new diagnostic plots for assessing posterior samples of tree topologies, and compare their performance on simulated and empirical data sets. Combined, the methods we present provide new ways to assess the mixing and convergence of phylogenetic tree topologies in Bayesian MCMC analyses. PMID:27435794
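
    For a scalar parameter, the autocorrelation-based ESS that the abstract builds on can be sketched as below; this is the generic initial-positive-sequence estimator, not the authors' tree-topology extension.

    ```python
    import numpy as np

    def ess(trace, max_lag=None):
        # Effective sample size of a scalar MCMC trace from its autocorrelation.
        x = np.asarray(trace, float) - np.mean(trace)
        n = len(x)
        acf = np.correlate(x, x, "full")[n - 1:] / (np.arange(n, 0, -1) * x.var())
        s = 0.0
        for rho in acf[1:(max_lag or n // 2)]:
            if rho <= 0:          # truncate at the first non-positive lag
                break
            s += rho
        return n / (1.0 + 2.0 * s)

    print(ess(np.random.randn(5000)))   # ~5000 for independent draws
    ```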

  9. Flood Extent Mapping Using Dual-Polarimetric SENTINEL-1 Synthetic Aperture Radar Imagery

    NASA Astrophysics Data System (ADS)

    Jo, M.-J.; Osmanoglu, B.; Zhang, B.; Wdowinski, S.

    2018-04-01

    Rapid generation of synthetic aperture radar (SAR) based flood extent maps provides valuable data in disaster response efforts, thanks to the cloud-penetrating ability of microwaves. We present a method using dual-polarimetric SAR imagery acquired by the Sentinel-1a/b satellites. A false-colour map is generated using pre- and post-disaster imagery, allowing operators to distinguish between pre-existing standing water and recently flooded areas. The method works best in areas of standing water and gives mixed results in urban areas. A flood depth map is also estimated using an external DEM. We will present the methodology and its estimated accuracy, as well as investigations into improving the response in urban areas.

  10. Multiple Flux Footprints, Flux Divergences and Boundary Layer Mixing Ratios: Studies of Ecosystem-Atmosphere CO2 Exchange Using the WLEF Tall Tower.

    NASA Astrophysics Data System (ADS)

    Davis, K. J.; Bakwin, P. S.; Yi, C.; Cook, B. D.; Wang, W.; Denning, A. S.; Teclaw, R.; Isebrands, J. G.

    2001-05-01

    Long-term, tower-based measurements using the eddy-covariance method have revealed a wealth of detail about the temporal dynamics of net ecosystem-atmosphere exchange (NEE) of CO2. The data also provide a measure of the annual net CO2 exchange. The area represented by these flux measurements, however, is limited, and doubts remain about possible systematic errors that may bias the annual net exchange measurements. Flux and mixing ratio measurements conducted at the WLEF tall tower as part of the Chequamegon Ecosystem-Atmosphere Study (ChEAS) allow for a unique assessment of the uncertainties in NEE of CO2. The synergy between flux and mixing ratio observations shows the potential for comparing inverse and eddy-covariance methods of estimating NEE of CO2. Such comparisons may strengthen confidence in both results and begin to bridge the huge gap in spatial scales (at least 3 orders of magnitude) between continental or hemispheric scale inverse studies and kilometer-scale eddy covariance flux measurements. Data from WLEF and Willow Creek, another ChEAS tower, are used to estimate random and systematic errors in NEE of CO2. Random uncertainty in seasonal exchange rates and the annual integrated NEE, including both turbulent sampling errors and variability in environmental conditions, is small. Systematic errors are identified by examining changes in flux as a function of atmospheric stability and wind direction, and by comparing the multiple level flux measurements on the WLEF tower. Nighttime drainage is modest but evident. Systematic horizontal advection occurs during the morning turbulence transition. The potential total systematic error appears to be larger than random uncertainty, but still modest. The total systematic error, however, is difficult to assess. It appears that the WLEF region ecosystems were a small net sink of CO2 in 1997. It is clear that the summer uptake rate at WLEF is much smaller than that at most deciduous forest sites, including the nearby Willow Creek site. The WLEF tower also allows us to study the potential for monitoring continental CO2 mixing ratios from tower sites. Despite concerns about the proximity to ecosystem sources and sinks, it is clear that boundary layer CO2 mixing ratios can be monitored using typical surface layer towers. Seasonal and annual land-ocean mixing ratio gradients are readily detectable, providing the motivation for a flux-tower based mixing ratio observation network that could greatly improve the accuracy of inversion-based estimates of NEE of CO2, and enable inversions to be applied on smaller temporal and spatial scales. Results from the WLEF tower illustrate the degree to which local flux measurements represent interannual, seasonal and synoptic CO2 mixing ratio trends. This coherence between fluxes and mixing ratios serves to "regionalize" the eddy-covariance based local NEE observations.

  11. Functional Nonlinear Mixed Effects Models For Longitudinal Image Data

    PubMed Central

    Luo, Xinchao; Zhu, Lixing; Kong, Linglong; Zhu, Hongtu

    2015-01-01

    Motivated by studying large-scale longitudinal image data, we propose a novel functional nonlinear mixed effects modeling (FNMEM) framework to model the nonlinear spatial-temporal growth patterns of brain structure and function and their association with covariates of interest (e.g., time or diagnostic status). Our FNMEM explicitly quantifies a random nonlinear association map of individual trajectories. We develop an efficient estimation method to estimate the nonlinear growth function and the covariance operator of the spatial-temporal process. We propose a global test and a simultaneous confidence band for some specific growth patterns. We conduct Monte Carlo simulation to examine the finite-sample performance of the proposed procedures. We apply FNMEM to investigate the spatial-temporal dynamics of white-matter fiber skeletons in a national database for autism research. Our FNMEM may provide a valuable tool for charting the developmental trajectories of various neuropsychiatric and neurodegenerative disorders. PMID:26213453

  12. Selection of latent variables for multiple mixed-outcome models

    PubMed Central

    ZHOU, LING; LIN, HUAZHEN; SONG, XINYUAN; LI, YI

    2014-01-01

    Latent variable models have been widely used for modeling the dependence structure of multiple outcomes data. However, the formulation of a latent variable model is often unknown a priori, and misspecification will distort the dependence structure and lead to unreliable model inference. Moreover, multiple outcomes with varying types present enormous analytical challenges. In this paper, we present a class of general latent variable models that can accommodate mixed types of outcomes. We propose a novel selection approach that simultaneously selects latent variables and estimates parameters. We show that the proposed estimator is consistent, asymptotically normal and has the oracle property. The practical utility of the methods is confirmed via simulations as well as an application to the analysis of the World Values Survey, a global research project that explores people's values and beliefs and the social and personal characteristics that might influence them. PMID:27642219

  13. Thermal diffusivity and adiabatic limit temperature characterization of consolidate granular expanded perlite using the flash method

    NASA Astrophysics Data System (ADS)

    Raefat, Saad; Garoum, Mohammed; Laaroussi, Najma; Thiam, Macodou; Amarray, Khaoula

    2017-07-01

    In this work, an experimental investigation of the apparent thermal diffusivity and adiabatic limit temperature of expanded granular perlite mixes has been made using the flash technique. Perlite granulates were sieved to produce three characteristic grain sizes. The consolidated samples were manufactured by mixing controlled proportions of plaster and water. The effect of particle size on the diffusivity was examined. The inverse estimation of the diffusivity, the adiabatic limit temperature at the rear face, and the heat loss coefficients was performed using several numerical global minimization procedures. The function to be minimized is the quadratic distance between the experimental temperature rise at the rear face and the analytical model derived from one-dimensional heat conduction. It is shown that, for all granulometries tested, the estimated parameters lead to good agreement between the mathematical model and the experimental data.
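
    For orientation, the classical adiabatic flash-method estimate (Parker et al., 1961) relates diffusivity to the rear-face half-rise time; the paper's inverse procedure additionally fits heat-loss coefficients, which this sketch ignores. Sample values are hypothetical.

    ```python
    import numpy as np

    def parker_diffusivity(thickness_m, t_half_s):
        # alpha = 1.38 * L^2 / (pi^2 * t_1/2), valid only without heat losses.
        return 1.38 * thickness_m**2 / (np.pi**2 * t_half_s)

    # Hypothetical perlite-plaster sample: 10 mm thick, 12 s half-rise time.
    print(parker_diffusivity(0.010, 12.0), "m^2/s")
    ```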

  14. A hybridized formulation for the weak Galerkin mixed finite element method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mu, Lin; Wang, Junping; Ye, Xiu

    This paper presents a hybridized formulation for the weak Galerkin mixed finite element method (WG-MFEM) which was introduced and analyzed in Wang and Ye (2014) for second order elliptic equations. The WG-MFEM method was designed by using discontinuous piecewise polynomials on finite element partitions consisting of polygonal or polyhedral elements of arbitrary shape. The key to WG-MFEM is the use of a discrete weak divergence operator which is defined and computed by solving inexpensive problems locally on each element. The hybridized formulation of this paper leads to a significantly reduced system of linear equations involving only the unknowns arising from the Lagrange multiplier in hybridization. Optimal-order error estimates are derived for the hybridized WG-MFEM approximations. In conclusion, some numerical results are reported to confirm the theory and a superconvergence for the Lagrange multiplier.

  15. A hybridized formulation for the weak Galerkin mixed finite element method

    DOE PAGES

    Mu, Lin; Wang, Junping; Ye, Xiu

    2016-01-14

    This paper presents a hybridized formulation for the weak Galerkin mixed finite element method (WG-MFEM) which was introduced and analyzed in Wang and Ye (2014) for second order elliptic equations. The WG-MFEM method was designed by using discontinuous piecewise polynomials on finite element partitions consisting of polygonal or polyhedral elements of arbitrary shape. The key to WG-MFEM is the use of a discrete weak divergence operator which is defined and computed by solving inexpensive problems locally on each element. The hybridized formulation of this paper leads to a significantly reduced system of linear equations involving only the unknowns arising from the Lagrange multiplier in hybridization. Optimal-order error estimates are derived for the hybridized WG-MFEM approximations. In conclusion, some numerical results are reported to confirm the theory and a superconvergence for the Lagrange multiplier.

  16. Spatial generalised linear mixed models based on distances.

    PubMed

    Melo, Oscar O; Mateu, Jorge; Melo, Carlos E

    2016-10-01

    Risk models derived from environmental data have been widely shown to be effective in delineating geographical areas of risk because they are intuitively easy to understand. We present a new method based on distances, which allows the modelling of continuous and non-continuous random variables through distance-based spatial generalised linear mixed models. The parameters are estimated using Markov chain Monte Carlo maximum likelihood, which is a feasible and useful technique. The proposed method depends on a detrending step built from continuous or categorical explanatory variables, or a mixture of them, by using an appropriate Euclidean distance. The method is illustrated through the analysis of the variation in the prevalence of Loa loa among a sample of village residents in Cameroon, where the explanatory variables included elevation, together with the maximum normalised-difference vegetation index and the standard deviation of the normalised-difference vegetation index calculated from repeated satellite scans over time. © The Author(s) 2013.

  17. Generalized Full-Information Item Bifactor Analysis

    PubMed Central

    Cai, Li; Yang, Ji Seung; Hansen, Mark

    2011-01-01

    Full-information item bifactor analysis is an important statistical method in psychological and educational measurement. Current methods are limited to single group analysis and inflexible in the types of item response models supported. We propose a flexible multiple-group item bifactor analysis framework that supports a variety of multidimensional item response theory models for an arbitrary mixing of dichotomous, ordinal, and nominal items. The extended item bifactor model also enables the estimation of latent variable means and variances when data from more than one group are present. Generalized user-defined parameter restrictions are permitted within or across groups. We derive an efficient full-information maximum marginal likelihood estimator. Our estimation method achieves substantial computational savings by extending Gibbons and Hedeker’s (1992) bifactor dimension reduction method so that the optimization of the marginal log-likelihood only requires two-dimensional integration regardless of the dimensionality of the latent variables. We use simulation studies to demonstrate the flexibility and accuracy of the proposed methods. We apply the model to study cross-country differences, including differential item functioning, using data from a large international education survey on mathematics literacy. PMID:21534682

  18. Small area estimation for estimating the number of infant mortality in West Java, Indonesia

    NASA Astrophysics Data System (ADS)

    Anggreyani, Arie; Indahwati; Kurnia, Anang

    2016-02-01

    Demographic and Health Survey Indonesia (DHSI) is a nationally designed survey that provides information regarding birth rates, mortality rates, family planning and health. DHSI was conducted by BPS in cooperation with the National Population and Family Planning Institution (BKKBN), the Indonesia Ministry of Health (KEMENKES) and USAID. Based on the DHSI 2012 publication, the infant mortality rate for the five-year period before the survey was 32 per 1000 live births. In this paper, Small Area Estimation (SAE) is used to estimate the number of infant deaths in districts of West Java. SAE is a special case of the Generalized Linear Mixed Model (GLMM). Here, the incidence of infant mortality is modelled with a Poisson distribution, which carries an equidispersion assumption. The methods used to handle overdispersion are the negative binomial and quasi-likelihood models. Based on the results of the analysis, the quasi-likelihood model is the best model for overcoming the overdispersion problem. The small area estimation uses the basic area-level model. Mean squared error (MSE) based on a resampling method is used to measure the accuracy of the small area estimates.
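
    A minimal sketch of the quasi-likelihood fix for overdispersed counts, using a Poisson mean structure with a freely estimated dispersion in statsmodels; the data are synthetic, and the paper's area-level random-effects model is more involved.

    ```python
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(1)
    n = 27                                     # e.g. number of districts
    X = sm.add_constant(rng.normal(size=(n, 2)))
    mu = np.exp(X @ np.array([2.0, 0.3, -0.2]))
    y = rng.poisson(mu * rng.gamma(2.0, 0.5, n))   # extra-Poisson variation

    # Quasi-Poisson: scale estimated from the Pearson chi-square statistic.
    fit = sm.GLM(y, X, family=sm.families.Poisson()).fit(scale="X2")
    print(fit.params.round(3), "dispersion:", round(fit.scale, 2))
    ```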

  19. Description of the atomic disorder (local order) in crystals by the mixed-symmetry method

    NASA Astrophysics Data System (ADS)

    Dudka, A. P.; Novikova, N. E.

    2017-11-01

    An approach to the description of local atomic disorder (short-range order) in single crystals by the mixed-symmetry method based on Bragg scattering data is proposed, and the corresponding software is developed. In defect-containing crystals, each atom in the unit cell can be described by its own symmetry space group. The expression for the calculated structural factor includes summation over different sets of symmetry operations for different atoms. To facilitate the search for new symmetry elements, an "atomic disorder expert" was developed, which estimates the significance of tested models. It is shown that the symmetry lowering for some atoms correlates with the existence of phase transitions (in langasite family crystals) and the anisotropy of physical properties (in rare-earth dodecaborides RB12).

  20. Simultaneous chromatic dispersion monitoring and optical modulation format identification utilizing four wave mixing

    NASA Astrophysics Data System (ADS)

    Cui, Sheng; Qiu, Chen; Ke, Changjian; He, Sheng; Liu, Deming

    2015-11-01

    This paper presents a method that can simultaneously monitor the chromatic dispersion (CD) and identify the modulation format (MF) of optical signals. The method utilizes features of the output curve of a highly sensitive all-optical CD monitor based on four wave mixing (FWM). From the symmetric center of the curve, CD can be estimated blindly and independently, while from the profile and convergence region of the curve, ten commonly used modulation formats can be recognized with a simple algorithm based on a maximum-correlation classifier. This technique does not need any high-speed optoelectronics and has no limitation on signal rate. Furthermore, it can tolerate large CD distortions and is robust to polarization mode dispersion (PMD) and amplified spontaneous emission (ASE) noise.

  1. Assembly and analysis of fragmentation data for liquid propellant vessels

    NASA Technical Reports Server (NTRS)

    Baker, W. E.; Parr, V. B.; Bessey, R. L.; Cox, P. A.

    1974-01-01

    Fragmentation data were assembled and analyzed for exploding liquid propellant vessels. These data were retrieved from reports of tests and accidents, including measurements or estimates of blast yield. A significant amount of data was retrieved from a series of tests conducted to measure the blast and fireball effects of liquid propellant explosions (Project PYRO), a few well-documented accident reports, and a series of tests to determine the auto-ignition properties of mixing liquid propellants. The data were reduced and fitted to various statistical functions. Comparisons were made with prediction methods for blast yield, initial fragment velocities, and fragment range, and reasonably good correlation was achieved. Methods presented in the report allow prediction of fragment patterns, given the type and quantity of propellant, the type of accident, and the time of propellant mixing.

  2. Do case-mix adjusted nursing home reimbursements actually reflect costs? Minnesota's experience.

    PubMed

    Nyman, J A; Connor, R A

    1994-07-01

    Some states have adopted Medicaid reimbursement systems that pay nursing homes according to patient type. These case-mix adjusted reimbursements are intended in part to eliminate the incentive in prospective systems to exclude less profitable patients. This study estimates the marginal costs of different patient types under Minnesota's case-mix system and compares them to their corresponding reimbursements. We find that estimated costs do not match reimbursement rates, again making some patient types less profitable than others. Further, in confirmation of our estimates, we find that the percentage change in patient days between 1986 and 1990 is explained by our profitability estimates.

  3. Empirical Bayes Gaussian likelihood estimation of exposure distributions from pooled samples in human biomonitoring.

    PubMed

    Li, Xiang; Kuk, Anthony Y C; Xu, Jinfeng

    2014-12-10

    Human biomonitoring of exposure to environmental chemicals is important. Individual monitoring is not viable because of low individual exposure level or insufficient volume of materials and the prohibitive cost of taking measurements from many subjects. Pooling of samples is an efficient and cost-effective way to collect data. Estimation is, however, complicated as individual values within each pool are not observed but are only known up to their average or weighted average. The distribution of such averages is intractable when the individual measurements are lognormally distributed, which is a common assumption. We propose to replace the intractable distribution of the pool averages by a Gaussian likelihood to obtain parameter estimates. If the pool size is large, this method produces statistically efficient estimates, but regardless of pool size, the method yields consistent estimates as the number of pools increases. An empirical Bayes (EB) Gaussian likelihood approach, as well as its Bayesian analog, is developed to pool information from various demographic groups by using a mixed-effect formulation. We also discuss methods to estimate the underlying mean-variance relationship and to select a good model for the means, which can be incorporated into the proposed EB or Bayes framework. By borrowing strength across groups, the EB estimator is more efficient than the individual group-specific estimator. Simulation results show that the EB Gaussian likelihood estimates outperform a previous method proposed for the National Health and Nutrition Examination Surveys with much smaller bias and better coverage in interval estimation, especially after correction of bias. Copyright © 2014 John Wiley & Sons, Ltd.

  4. Environmental effects of interstate power trading on electricity consumption mixes.

    PubMed

    Marriott, Joe; Matthews, H Scott

    2005-11-15

    Although many studies of electricity generation use national or state average generation mix assumptions, in reality a great deal of electricity is transferred between states with very different mixes of fossil and renewable fuels, and using the average numbers could result in incorrect conclusions in these studies. We create electricity consumption profiles for each state and for key industry sectors in the U.S. based on existing state generation profiles, net state power imports, industry presence by state, and an optimization model to estimate interstate electricity trading. Using these "consumption mixes" can provide a more accurate assessment of electricity use in life-cycle analyses. We conclude that the published generation mixes for states that import power are misleading, since the power consumed in-state has a different makeup than the power that was generated. And, while most industry sectors have consumption mixes similar to the U.S. average, some of the most critical sectors of the economy--such as resource extraction and material processing sectors--are very different. This result does validate the average mix assumption made in many environmental assessments, but it is important to accurately quantify the generation methods for electricity used when doing life-cycle analyses.

  5. Spatial regression methods capture prediction uncertainty in species distribution model projections through time

    Treesearch

    Alan K. Swanson; Solomon Z. Dobrowski; Andrew O. Finley; James H. Thorne; Michael K. Schwartz

    2013-01-01

    The uncertainty associated with species distribution model (SDM) projections is poorly characterized, despite its potential value to decision makers. Error estimates from most modelling techniques have been shown to be biased due to their failure to account for spatial autocorrelation (SAC) of residual error. Generalized linear mixed models (GLMM) have the ability to...

  6. Sample selection in foreign similarity regions for multicrop experiments

    NASA Technical Reports Server (NTRS)

    Malin, J. T. (Principal Investigator)

    1981-01-01

    The selection of sample segments in the U.S. foreign similarity regions for development of proportion estimation procedures and error modeling for Argentina, Australia, Brazil, and USSR in AgRISTARS is described. Each sample was chosen to be similar in crop mix to the corresponding indicator region sample. Data sets, methods of selection, and resulting samples are discussed.

  7. Estimating Daily Evapotranspiration Based on A Model of Evapotranspiration Fraction (EF) for Mixed Pixels

    NASA Astrophysics Data System (ADS)

    Xin, X.; Li, F.; Peng, Z.; Qinhuo, L.

    2017-12-01

    Land surface heterogeneities significantly affect the reliability and accuracy of remotely sensed evapotranspiration (ET), and the problem worsens at lower resolutions. At the same time, temporal-scale extrapolation of the instantaneous latent heat flux (LE) at satellite overpass time to daily ET is crucial for applications of such remote sensing products. The purpose of this paper is to propose a simple but efficient model for estimating daytime evapotranspiration that accounts for the heterogeneity of mixed pixels. To do so, an equation to calculate the evapotranspiration fraction (EF) of mixed pixels was derived based on two key assumptions. Assumption 1: the available energy (AE) of each sub-pixel is approximately equal to that of any other sub-pixel in the same mixed pixel, within an acceptable margin of bias, and equal to the AE of the mixed pixel; this is only a simplification of the equation, and the resulting uncertainties and errors in estimated ET are very small. Assumption 2: the EF of each sub-pixel equals the EF of the nearest pure pixel(s) of the same land cover type. This equation is intended to correct the spatial-scale error of mixed-pixel EF and can be used to calculate daily ET from daily AE data. The model was applied to an artificial oasis in the midstream of the Heihe River. HJ-1B satellite data were used to estimate the lumped fluxes at a scale of 300 m, after resampling the 30-m resolution datasets to 300 m resolution, which was used to carry out the key step of the model. The results before and after correction were compared with each other and validated using site data from eddy-correlation systems. The results indicate that the new model improves the accuracy of daily ET estimation relative to the lumped method. Validation at 12 eddy-correlation sites over 9 days of HJ-1B overpasses showed that the R² increased from 0.62 to 0.82, the RMSE decreased from 2.47 MJ/m² to 1.60 MJ/m², and the MBE decreased from 1.92 MJ/m² to 1.18 MJ/m², a significant enhancement. The model is easy to apply, and the module for inhomogeneous surfaces is independent and easily embedded in traditional remote sensing algorithms for heat fluxes to obtain daily ET, as these algorithms were mainly designed to calculate LE or ET under unsaturated conditions and do not consider land surface heterogeneities.
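
    A toy illustration of the two assumptions, with hypothetical fractions, EFs and available energy (under assumption 1, the abstract's equation reduces to an area-weighted EF):

    ```python
    import numpy as np

    # Hypothetical 300 m mixed pixel: sub-pixel land-cover fractions and the EF
    # of the nearest pure pixel of each type (assumption 2).
    fractions = np.array([0.6, 0.3, 0.1])        # cropland, trees, bare soil
    ef_pure   = np.array([0.65, 0.75, 0.20])     # evaporative fractions

    # Assumption 1: available energy is uniform across sub-pixels, so the
    # mixed-pixel EF is simply the area-weighted mean of pure-pixel EFs.
    ef_mixed = float(fractions @ ef_pure)

    daily_ae = 12.0                              # MJ/m2/day, hypothetical
    daily_et = ef_mixed * daily_ae               # MJ/m2/day
    print(round(ef_mixed, 3), round(daily_et, 2))
    ```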

  8. Semiparametric mixed-effects analysis of PK/PD models using differential equations.

    PubMed

    Wang, Yi; Eskridge, Kent M; Zhang, Shunpu

    2008-08-01

    Motivated by the use of semiparametric nonlinear mixed-effects modeling on longitudinal data, we develop a new semiparametric modeling approach to address potential structural model misspecification for population pharmacokinetic/pharmacodynamic (PK/PD) analysis. Specifically, we use a set of ordinary differential equations (ODEs) with form dx/dt = A(t)x + B(t) where B(t) is a nonparametric function that is estimated using penalized splines. The inclusion of a nonparametric function in the ODEs makes identification of structural model misspecification feasible by quantifying the model uncertainty and provides flexibility for accommodating possible structural model deficiencies. The resulting model will be implemented in a nonlinear mixed-effects modeling setup for population analysis. We illustrate the method with an application to cefamandole data and evaluate its performance through simulations.

  9. Bayesian estimation of multicomponent relaxation parameters in magnetic resonance fingerprinting.

    PubMed

    McGivney, Debra; Deshmane, Anagha; Jiang, Yun; Ma, Dan; Badve, Chaitra; Sloan, Andrew; Gulani, Vikas; Griswold, Mark

    2018-07-01

    To estimate multiple components within a single voxel in magnetic resonance fingerprinting when the number and types of tissues comprising the voxel are not known a priori. Multiple tissue components within a single voxel are potentially separable with magnetic resonance fingerprinting as a result of differences in signal evolutions of each component. The Bayesian framework for inverse problems provides a natural and flexible setting for solving this problem when the tissue composition per voxel is unknown. Assuming that only a few entries from the dictionary contribute to a mixed signal, sparsity-promoting priors can be placed upon the solution. An iterative algorithm is applied to compute the maximum a posteriori estimator of the posterior probability density to determine the magnetic resonance fingerprinting dictionary entries that contribute most significantly to mixed or pure voxels. Simulation results show that the algorithm is robust in finding the component tissues of mixed voxels. Preliminary in vivo data confirm this result, and show good agreement in voxels containing pure tissue. The Bayesian framework and algorithm shown provide accurate solutions for the partial-volume problem in magnetic resonance fingerprinting. The flexibility of the method will allow further study into different priors and hyperpriors that can be applied in the model. Magn Reson Med 80:159-170, 2018. © 2017 International Society for Magnetic Resonance in Medicine.
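
    As a generic stand-in for the sparsity-promoting MAP step (the paper's specific priors and iterative algorithm are not reproduced), a non-negative iterative soft-thresholding sketch that picks out the few dictionary entries explaining a mixed signal:

    ```python
    import numpy as np

    def sparse_map(D, y, lam=0.05, iters=500):
        # min 0.5*||D @ x - y||^2 + lam*||x||_1  subject to  x >= 0
        L = np.linalg.norm(D, 2) ** 2            # Lipschitz constant of gradient
        x = np.zeros(D.shape[1])
        for _ in range(iters):
            grad = D.T @ (D @ x - y)
            x = np.maximum(x - (grad + lam) / L, 0.0)   # step + soft threshold
        return x

    # Hypothetical dictionary of 5 fingerprint evolutions; the "voxel" mixes
    # entries 1 and 3 with weights 0.6 and 0.4.
    rng = np.random.default_rng(0)
    D = rng.normal(size=(200, 5))
    D /= np.linalg.norm(D, axis=0)
    y = 0.6 * D[:, 1] + 0.4 * D[:, 3]
    print(sparse_map(D, y).round(2))
    ```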

  10. Mixed group validation: a method to address the limitations of criterion group validation in research on malingering detection.

    PubMed

    Frederick, R I

    2000-01-01

    Mixed group validation (MGV) is offered as an alternative to criterion group validation (CGV) to estimate the true positive and false positive rates of tests and other diagnostic signs. CGV requires perfect confidence about each research participant's status with respect to the presence or absence of pathology. MGV determines diagnostic efficiencies based on group data; knowing an individual's status with respect to pathology is not required. MGV can use relatively weak indicators to validate better diagnostic signs, whereas CGV requires perfect diagnostic signs to avoid error in computing true positive and false positive rates. The process of MGV is explained, and a computer simulation demonstrates the soundness of the procedure. MGV of the Rey 15-Item Memory Test (Rey, 1958) for 723 pre-trial criminal defendants resulted in higher estimates of true positive rates and lower estimates of false positive rates as compared with prior research conducted with CGV. The author demonstrates how MGV addresses all the criticisms Rogers (1997b) outlined for differential prevalence designs in malingering detection research. Copyright 2000 John Wiley & Sons, Ltd.
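
    The arithmetic at the heart of MGV can be sketched with two mixed groups: each group's observed positive rate is a prevalence-weighted blend of the true and false positive rates, so two groups give two equations in two unknowns. All rates below are hypothetical.

    ```python
    import numpy as np

    prev = np.array([0.7, 0.2])    # assumed malingering base rate in each group
    pos  = np.array([0.53, 0.33])  # observed positive rate of the sign per group

    # pos_j = prev_j * TPR + (1 - prev_j) * FPR
    A = np.column_stack([prev, 1.0 - prev])
    tpr, fpr = np.linalg.solve(A, pos)
    print(f"TPR ~ {tpr:.2f}, FPR ~ {fpr:.2f}")   # here 0.65 and 0.25
    ```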

  11. Estimation of Rank Correlation for Clustered Data

    PubMed Central

    Rosner, Bernard; Glynn, Robert

    2017-01-01

    It is well known that the sample correlation coefficient (Rxy) is the maximum likelihood estimator (MLE) of the Pearson correlation (ρxy) for i.i.d. bivariate normal data. However, this is not true for ophthalmologic data where X (e.g., visual acuity) and Y (e.g., visual field) are available for each eye and there is positive intraclass correlation for both X and Y in fellow eyes. In this paper, we provide a regression-based approach for obtaining the MLE of ρxy for clustered data, which can be implemented using standard mixed effects model software. This method is also extended to allow for estimation of partial correlation by controlling both X and Y for a vector U of other covariates. In addition, these methods can be extended to allow for estimation of rank correlation for clustered data by (a) converting ranks of both X and Y to the probit scale, (b) estimating the Pearson correlation between probit scores for X and Y, and (c) using the relationship between Pearson and rank correlation for bivariate normally distributed data. The validity of the methods in finite-sized samples is supported by simulation studies. Finally, two examples from ophthalmology and analgesic abuse are used to illustrate the methods. PMID:28399615
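
    Ignoring the clustering step for brevity (the paper handles that with a mixed effects model), steps (a)-(c) can be sketched directly on simulated bivariate normal data:

    ```python
    import numpy as np
    from scipy.stats import norm, rankdata

    rng = np.random.default_rng(0)
    x, y = rng.multivariate_normal([0, 0], [[1, 0.6], [0.6, 1]], 500).T

    # (a) ranks -> probit scores, (b) Pearson correlation of the scores,
    # (c) Pearson -> rank correlation for bivariate normal data.
    px = norm.ppf(rankdata(x) / (len(x) + 1))
    py = norm.ppf(rankdata(y) / (len(y) + 1))
    r = np.corrcoef(px, py)[0, 1]
    rho_s = (6 / np.pi) * np.arcsin(r / 2)
    print(round(rho_s, 3))
    ```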

  12. The coefficient of determination R2 and intra-class correlation coefficient from generalized linear mixed-effects models revisited and expanded.

    PubMed

    Nakagawa, Shinichi; Johnson, Paul C D; Schielzeth, Holger

    2017-09-01

    The coefficient of determination R2 quantifies the proportion of variance explained by a statistical model and is an important summary statistic of biological interest. However, estimating R2 for generalized linear mixed models (GLMMs) remains challenging. We have previously introduced a version of R2 that we called R2GLMM for Poisson and binomial GLMMs, but not for other distributional families. Similarly, we earlier discussed how to estimate intra-class correlation coefficients (ICCs) using Poisson and binomial GLMMs. In this paper, we generalize our methods to all other non-Gaussian distributions, in particular to negative binomial and gamma distributions that are commonly used for modelling biological data. While expanding our approach, we highlight two useful concepts for biologists, Jensen's inequality and the delta method, both of which help us in understanding the properties of GLMMs. Jensen's inequality has important implications for biologically meaningful interpretation of GLMMs, whereas the delta method allows a general derivation of variance associated with non-Gaussian distributions. We also discuss some special considerations for binomial GLMMs with binary or proportion data. We illustrate the implementation of our extension by worked examples from the field of ecology and evolution in the R environment. However, our method can be used across disciplines and regardless of statistical environments. © 2017 The Author(s).
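
    For reference, the marginal and conditional forms from the earlier Nakagawa-Schielzeth framework that this paper generalizes can be written as

    $$R^2_{\mathrm{GLMM}(m)} = \frac{\sigma_f^2}{\sigma_f^2 + \sum_{l}\sigma_l^2 + \sigma_e^2 + \sigma_d^2}, \qquad R^2_{\mathrm{GLMM}(c)} = \frac{\sigma_f^2 + \sum_{l}\sigma_l^2}{\sigma_f^2 + \sum_{l}\sigma_l^2 + \sigma_e^2 + \sigma_d^2},$$

    where σf² is the variance explained by the fixed effects, the σl² are the random-effect variance components, σe² is the additive overdispersion variance, and σd² is the distribution-specific variance whose derivation (e.g., via the delta method) is the subject of the extension.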

  13. Application of Mixed Effects Limits of Agreement in the Presence of Multiple Sources of Variability: Exemplar from the Comparison of Several Devices to Measure Respiratory Rate in COPD Patients

    PubMed Central

    Weir, Christopher J.; Rubio, Noah; Rabinovich, Roberto; Pinnock, Hilary; Hanley, Janet; McCloughan, Lucy; Drost, Ellen M.; Mantoani, Leandro C.; MacNee, William; McKinstry, Brian

    2016-01-01

    Introduction The Bland-Altman limits of agreement method is widely used to assess how well the measurements produced by two raters, devices or systems agree with each other. However, mixed effects versions of the method which take into account multiple sources of variability are less well described in the literature. We address the practical challenges of applying mixed effects limits of agreement to the comparison of several devices to measure respiratory rate in patients with chronic obstructive pulmonary disease (COPD). Methods Respiratory rate was measured in 21 people with a range of severity of COPD. Participants were asked to perform eleven different activities representative of daily life during a laboratory-based standardised protocol of 57 minutes. A mixed effects limits of agreement method was used to assess the agreement of five commercially available monitors (Camera, Photoplethysmography (PPG), Impedance, Accelerometer, and Chest-band) with the current gold standard device for measuring respiratory rate. Results Results produced using mixed effects limits of agreement were compared to results from a fixed effects method based on analysis of variance (ANOVA) and were found to be similar. The Accelerometer and Chest-band devices produced the narrowest limits of agreement (-8.63 to 4.27 and -9.99 to 6.80 respectively) with mean bias -2.18 and -1.60 breaths per minute. These devices also had the lowest within-participant and overall standard deviations (3.23 and 3.29 for Accelerometer and 4.17 and 4.28 for Chest-band respectively). Conclusions The mixed effects limits of agreement analysis enabled us to answer the question of which devices showed the strongest agreement with the gold standard device with respect to measuring respiratory rates. In particular, the estimated within-participant and overall standard deviations of the differences, which are easily obtainable from the mixed effects model results, gave a clear indication that the Accelerometer and Chest-band devices performed best. PMID:27973556
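
    A minimal sketch of the variance-component arithmetic behind mixed effects limits of agreement, using a random-intercept model on simulated device-minus-reference differences (all values hypothetical; statsmodels' MixedLM stands in for the paper's mixed model):

    ```python
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(2)
    subj = np.repeat(np.arange(21), 11)           # 21 patients, 11 activities
    bias, sd_between, sd_within = -2.0, 1.5, 3.0  # hypothetical truth
    diff = (bias + rng.normal(0, sd_between, 21)[subj]
            + rng.normal(0, sd_within, subj.size))
    data = pd.DataFrame({"diff": diff, "subject": subj})

    fit = smf.mixedlm("diff ~ 1", data, groups="subject").fit()
    total_sd = np.sqrt(fit.cov_re.iloc[0, 0] + fit.scale)  # between + within
    mean_bias = fit.params["Intercept"]
    print("LoA:", round(mean_bias - 1.96 * total_sd, 2),
          "to", round(mean_bias + 1.96 * total_sd, 2))
    ```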

  14. Estimation of rank correlation for clustered data.

    PubMed

    Rosner, Bernard; Glynn, Robert J

    2017-06-30

    It is well known that the sample correlation coefficient (Rxy) is the maximum likelihood estimator of the Pearson correlation (ρxy) for independent and identically distributed (i.i.d.) bivariate normal data. However, this is not true for ophthalmologic data where X (e.g., visual acuity) and Y (e.g., visual field) are available for each eye and there is positive intraclass correlation for both X and Y in fellow eyes. In this paper, we provide a regression-based approach for obtaining the maximum likelihood estimator of ρxy for clustered data, which can be implemented using standard mixed effects model software. This method is also extended to allow for estimation of partial correlation by controlling both X and Y for a vector U of other covariates. In addition, these methods can be extended to allow for estimation of rank correlation for clustered data by (i) converting ranks of both X and Y to the probit scale, (ii) estimating the Pearson correlation between probit scores for X and Y, and (iii) using the relationship between Pearson and rank correlation for bivariate normally distributed data. The validity of the methods in finite-sized samples is supported by simulation studies. Finally, two examples from ophthalmology and analgesic abuse are used to illustrate the methods. Copyright © 2017 John Wiley & Sons, Ltd.

  15. A Calibration of the MeteoSwiss RAman Lidar for Meteorological Observations (RALMO) Water Vapour Mixing Ratio Measurements using a Radiosonde Trajectory Method

    NASA Astrophysics Data System (ADS)

    Hicks-Jalali, Shannon; Sica, R. J.; Haefele, Alexander; Martucci, Giovanni

    2018-04-01

    With only 50% downtime from 2007-2016, the RALMO lidar in Payerne, Switzerland, has one of the largest continuous lidar data sets available. These measurements will be used to produce an extensive lidar water vapour climatology using the Optimal Estimation Method introduced by Sica and Haefele (2016). We will compare our improved technique for external calibration using radiosonde trajectories with the standard external methods, and present the evolution of the lidar constant from 2007 to 2016.

  16. Cobble cam: Grain-size measurements of sand to boulder from digital photographs and autocorrelation analyses

    USGS Publications Warehouse

    Warrick, J.A.; Rubin, D.M.; Ruggiero, P.; Harney, J.N.; Draut, A.E.; Buscombe, D.

    2009-01-01

    A new application of the autocorrelation grain size analysis technique for mixed to coarse sediment settings has been investigated. Photographs of sand- to boulder-sized sediment along the Elwha River delta beach were taken from approximately 1-2 m above the ground surface, and detailed grain size measurements were made from 32 of these sites for calibration and validation. Digital photographs were found to provide accurate estimates of the long and intermediate axes of the surface sediment (r2 > 0.98), but poor estimates of the short axes (r2 = 0.68), suggesting that these short axes were naturally oriented in the vertical dimension. The autocorrelation method was successfully applied, resulting in a total irreducible error of 14% over a range of mean grain sizes of 1 to 200 mm. Compared with reported edge and object-detection results, it is noted that the autocorrelation method presented here has lower error and can be applied to a much broader range of mean grain sizes without altering the physical set-up of the camera (~200-fold versus ~6-fold). The approach is considerably less sensitive to lighting conditions than object-detection methods, although autocorrelation estimates do improve when measures are taken to shade sediments from direct sunlight. The effects of wet and dry conditions are also evaluated and discussed. The technique provides an estimate of grain size sorting from the easily calculated autocorrelation standard error, which is correlated with the graphical standard deviation at an r2 of 0.69. The technique is transferable to other sites when calibrated with linear corrections based on photo-based measurements, as shown by excellent grain-size analysis results (r2 = 0.97, irreducible error = 16%) from samples from the mixed grain size beaches of Kachemak Bay, Alaska. Thus, a method has been developed to measure mean grain size and sorting properties of coarse sediments. © 2009 John Wiley & Sons, Ltd.
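
    The autocorrelation technique itself can be sketched as follows: compute the image autocorrelation as a function of pixel lag, then map a summary of the curve (e.g., the lag at which correlation falls to 0.5) to grain size through calibration photographs. This is a schematic only, not the calibrated USGS implementation.

    ```python
    import numpy as np

    def autocorrelation_curve(img, max_lag=50):
        # Mean of row- and column-wise autocorrelation at each pixel lag,
        # computed via FFT (wrap-around effects ignored for this sketch).
        img = img - img.mean()
        f = np.fft.fft2(img)
        ac = np.fft.ifft2(f * np.conj(f)).real
        ac /= ac[0, 0]
        return np.array([(ac[k, 0] + ac[0, k]) / 2 for k in range(max_lag)])

    # Coarser sediment decorrelates more slowly, so e.g. the first lag at which
    # the curve drops below 0.5 increases with mean grain size (the calibration
    # step). White noise, as below, decorrelates at lag 1.
    rng = np.random.default_rng(3)
    demo = rng.normal(size=(256, 256))
    print(autocorrelation_curve(demo, 5).round(2))
    ```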

  17. Technical note: Use of a digital and an optical Brix refractometer to estimate total solids in milk replacer solutions for calves.

    PubMed

    Floren, H K; Sischo, W M; Crudo, C; Moore, D A

    2016-09-01

    The Brix refractometer is used on dairy farms and calf ranches for colostrum quality (estimation of IgG concentration), estimation of serum IgG concentration in neonatal calves, and nonsalable milk evaluation of total solids for calf nutrition. Another potential use is to estimate the total solids concentrations of milk replacer mixes as an aid in monitoring feeding consistency. The purpose of this study was to evaluate the use of Brix refractometers to estimate total solids in milk replacer solutions and evaluate different replacer mixes for osmolality. Five different milk replacer powders (2 milk replacers with 28% crude protein and 25% fat and 3 with 22% crude protein and 20% fat) were mixed to achieve total solids concentrations from approximately 5.5 to 18%, for a total of 90 different solutions. Readings from both digital and optical Brix refractometers were compared with total solids. The 2 types of refractometers' readings correlated well with one another. The digital and optical Brix readings were highly correlated with the total solids percentage. A value of 1.08 to 1.47 would need to be added to the Brix reading to estimate the total solids in the milk replacer mixes with the optical and digital refractometers, respectively. Osmolality was correlated with total solids percentage of the mixes, but the relationship was different depending on the type of milk replacer. The Brix refractometer can be beneficial in estimating total solids concentration in milk replacer mixes to help monitor milk replacer feeding consistency. Copyright © 2016 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
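
    The reported correction is a simple additive offset, so the conversion can be sketched directly (offsets 1.08 and 1.47 are the values given in the abstract; the reading is hypothetical):

    ```python
    def total_solids_from_brix(brix_reading, device="optical"):
        # Offsets reported in the abstract for milk replacer solutions.
        offset = {"optical": 1.08, "digital": 1.47}[device]
        return brix_reading + offset

    print(total_solids_from_brix(11.5, "digital"))   # ~12.97% total solids
    ```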

  18. Phytoplankton production and taxon-specific growth rates in the Costa Rica Dome

    PubMed Central

    Selph, Karen E.; Landry, Michael R.; Taylor, Andrew G.; Gutiérrez-Rodríguez, Andrés; Stukel, Michael R.; Wokuluk, John; Pasulka, Alexis

    2016-01-01

    During summer 2010, we investigated phytoplankton production and growth rates at 19 stations in the eastern tropical Pacific, where winds and strong opposing currents generate the Costa Rica Dome (CRD), an open-ocean upwelling feature. Primary production (14C-incorporation) and group-specific growth and net growth rates (two-treatment seawater dilution method) were estimated from samples incubated in situ at eight depths. Our cruise coincided with a mild El Niño event, and only weak upwelling was observed in the CRD. Nevertheless, the highest phytoplankton abundances were found near the dome center. However, mixed-layer growth rates were lowest in the dome center (∼0.5–0.9 day−1), but higher on the edge of the dome (∼0.9–1.0 day−1) and in adjacent coastal waters (0.9–1.3 day−1). We found good agreement between independent methods to estimate growth rates. Mixed-layer growth rates of Prochlorococcus and Synechococcus were largely balanced by mortality, whereas eukaryotic phytoplankton showed positive net growth (∼0.5–0.6 day−1), that is, growth available to support larger (mesozooplankton) consumer biomass. These are the first group-specific phytoplankton rate estimates in this region, and they demonstrate that integrated primary production is high, exceeding 1 g C m−2 day−1 on average, even during a period of reduced upwelling. PMID:27275025
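
    A sketch of the two-treatment dilution calculation named in the abstract: apparent growth in each bottle follows k = μ − mD, where D is the fraction of whole seawater, so the whole and diluted treatments give two equations in growth μ and grazing mortality m. Rates and dilution level below are hypothetical.

    ```python
    def dilution_rates(k_whole, k_dilute, D=0.2):
        # k_whole at D = 1, k_dilute at dilution level D (fraction whole water).
        m = (k_dilute - k_whole) / (1.0 - D)   # grazing mortality (per day)
        mu = k_whole + m                       # phytoplankton growth (per day)
        return mu, m

    print(dilution_rates(k_whole=0.45, k_dilute=0.85))   # -> (0.95, 0.5)
    ```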

  19. Phytoplankton production and taxon-specific growth rates in the Costa Rica Dome.

    PubMed

    Selph, Karen E; Landry, Michael R; Taylor, Andrew G; Gutiérrez-Rodríguez, Andrés; Stukel, Michael R; Wokuluk, John; Pasulka, Alexis

    2016-03-01

    During summer 2010, we investigated phytoplankton production and growth rates at 19 stations in the eastern tropical Pacific, where winds and strong opposing currents generate the Costa Rica Dome (CRD), an open-ocean upwelling feature. Primary production (14C-incorporation) and group-specific growth and net growth rates (two-treatment seawater dilution method) were estimated from samples incubated in situ at eight depths. Our cruise coincided with a mild El Niño event, and only weak upwelling was observed in the CRD. Nevertheless, the highest phytoplankton abundances were found near the dome center. However, mixed-layer growth rates were lowest in the dome center (∼0.5-0.9 day-1), but higher on the edge of the dome (∼0.9-1.0 day-1) and in adjacent coastal waters (0.9-1.3 day-1). We found good agreement between independent methods to estimate growth rates. Mixed-layer growth rates of Prochlorococcus and Synechococcus were largely balanced by mortality, whereas eukaryotic phytoplankton showed positive net growth (∼0.5-0.6 day-1), that is, growth available to support larger (mesozooplankton) consumer biomass. These are the first group-specific phytoplankton rate estimates in this region, and they demonstrate that integrated primary production is high, exceeding 1 g C m-2 day-1 on average, even during a period of reduced upwelling.

  20. Estimates of Social Contact in a Middle School Based on Self-Report and Wireless Sensor Data.

    PubMed

    Leecaster, Molly; Toth, Damon J A; Pettey, Warren B P; Rainey, Jeanette J; Gao, Hongjiang; Uzicanin, Amra; Samore, Matthew

    2016-01-01

    Estimates of contact among children, used for infectious disease transmission models and understanding social patterns, historically rely on self-report logs. Recently, wireless sensor technology has enabled objective measurement of proximal contact and comparison of data from the two methods. These are mostly small-scale studies, and knowledge gaps remain in understanding contact and mixing patterns and also in the advantages and disadvantages of data collection methods. We collected contact data from a middle school, with 7th and 8th grades, for one day using self-report contact logs and wireless sensors. The data were linked for students with unique initials, gender, and grade within the school. This paper presents the results of a comparison of two approaches to characterize school contact networks, wireless proximity sensors and self-report logs. Accounting for incomplete capture and lack of participation, we estimate that "sensor-detectable", proximal contacts longer than 20 seconds during lunch and class-time occurred at 2 fold higher frequency than "self-reportable" talk/touch contacts. Overall, 55% of estimated talk-touch contacts were also sensor-detectable whereas only 15% of estimated sensor-detectable contacts were also talk-touch. Contacts detected by sensors and also in self-report logs had longer mean duration than contacts detected only by sensors (6.3 vs 2.4 minutes). During both lunch and class-time, sensor-detectable contacts demonstrated substantially less gender and grade assortativity than talk-touch contacts. Hallway contacts, which were ascertainable only by proximity sensors, were characterized by extremely high degree and short duration. We conclude that the use of wireless sensors and self-report logs provide complementary insight on in-school mixing patterns and contact frequency.

  1. Estimates of Social Contact in a Middle School Based on Self-Report and Wireless Sensor Data

    PubMed Central

    Leecaster, Molly; Toth, Damon J. A.; Pettey, Warren B. P.; Rainey, Jeanette J.; Gao, Hongjiang; Uzicanin, Amra; Samore, Matthew

    2016-01-01

    Estimates of contact among children, used for infectious disease transmission models and understanding social patterns, historically rely on self-report logs. Recently, wireless sensor technology has enabled objective measurement of proximal contact and comparison of data from the two methods. These are mostly small-scale studies, and knowledge gaps remain in understanding contact and mixing patterns and also in the advantages and disadvantages of data collection methods. We collected contact data from a middle school, with 7th and 8th grades, for one day using self-report contact logs and wireless sensors. The data were linked for students with unique initials, gender, and grade within the school. This paper presents the results of a comparison of two approaches to characterize school contact networks, wireless proximity sensors and self-report logs. Accounting for incomplete capture and lack of participation, we estimate that “sensor-detectable” proximal contacts longer than 20 seconds during lunch and class-time occurred at a 2-fold higher frequency than “self-reportable” talk/touch contacts. Overall, 55% of estimated talk-touch contacts were also sensor-detectable whereas only 15% of estimated sensor-detectable contacts were also talk-touch. Contacts detected by sensors and also in self-report logs had longer mean duration than contacts detected only by sensors (6.3 vs 2.4 minutes). During both lunch and class-time, sensor-detectable contacts demonstrated substantially less gender and grade assortativity than talk-touch contacts. Hallway contacts, which were ascertainable only by proximity sensors, were characterized by extremely high degree and short duration. We conclude that the use of wireless sensors and self-report logs provides complementary insight on in-school mixing patterns and contact frequency. PMID:27100090

  2. Revised (Mixed-Effects) Estimation for Forest Burning Emissions of Gases and Smoke, Fire/Emission Factor Typologies, and Potential Remote Sensing Classification of Types for Use in Ozone and Absorbing-Carbon Simulation

    NASA Astrophysics Data System (ADS)

    Chatfield, R. B.; Segal-Rosenhaimer, M.

    2014-12-01

    We summarize recent progress (a) in correcting biomass burning emissions factors deduced from airborne sampling of forest fire plumes, (b) in understanding the variability in reactivity of the fresh plumes sampled in ARCTAS (2008), DC3 (2012), and SEAC4RS (2013) airborne missions, and (c) in a consequent search for remotely sensed quantities that help classify forest-fire plumes. Particle properties, chemical speciation, and smoke radiative properties are related and mutually informative, as pictures below suggest (slopes of lines of same color are similar). (a) Mixed-effects (random-effects) statistical modeling provides estimates of both emission factors and a reasonable description of carbon-burned simultaneously. Different fire plumes will have very different contributions to volatile organic carbon reactivity; this may help explain differences of free NOx (both gas- and particle-phase), and also of ozone production, that have been noted for forest-fire plumes in California. Our evaluations check or correct emission factors based on sequential measurements (e.g., the Normalized Ratio Enhancement and similar methods). We stress the dangers of methods relying on emission-ratios to CO. (b) This work confirms and extends many reports of great situational variability in emissions factors. VOCs vary in OH reactivity and NOx-binding. Reasons for variability are not only fuel composition, fuel condition, etc., but are confused somewhat by rapid transformation and mixing of emissions. We use "unmixing" (distinct from mixed-effects) statistics and compare briefly to approaches like neural nets. We focus on one particularly intense fire, the notorious Yosemite Rim Fire of 2013. In some samples, NOx activity was not so suppressed by binding into nitrates as in other fires. While our fire-typing is evolving and subject to debate, the carbon-burned Δ(CO2+CO) estimates that arise from mixed-effects models, free of confusion by background-CO2 variation, should provide a solid base for discussion. (c) We report progress using promising links we find between emissions-related "fire types" and promising features deducible from remote observations of plumes, e.g., single scatter albedo, Ångström exponent of scattering, Ångström exponent of absorption, (CO column density)/(aerosol optical depth).

  3. Revised (Mixed-Effects) Estimation for Forest Burning Emissions of Gases and Smoke, Fire/Emission Factor Typology, and Potential Remote Sensing Classification of Types for Ozone and Black-Carbon Simulation

    NASA Technical Reports Server (NTRS)

    Chatfield, Robert B.; Segal Rozenhaimer, M.

    2014-01-01

    We summarize recent progress (a) in correcting biomass burning emissions factors deduced from airborne sampling of forest fire plumes, (b) in understanding the variability in reactivity of the fresh plumes sampled in ARCTAS (2008), DC3 (2012), and SEAC4RS (2013) airborne missions, and (c) in a consequent search for remotely sensed quantities that help classify forest-fire plumes. Particle properties, chemical speciation, and smoke radiative properties are related and mutually informative, as pictures below suggest (slopes of lines of same color are similar). (a) Mixed-effects (random-effects) statistical modeling provides estimates of both emission factors and a reasonable description of carbon-burned simultaneously. Different fire plumes will have very different contributions to volatile organic carbon reactivity; this may help explain differences of free NOx (both gas- and particle-phase), and also of ozone production, that have been noted for forest-fire plumes in California. Our evaluations check or correct emission factors based on sequential measurements (e.g., the Normalized Ratio Enhancement and similar methods). We stress the dangers of methods relying on emission-ratios to CO. (b) This work confirms and extends many reports of great situational variability in emissions factors. VOCs vary in OH reactivity and NOx-binding. Reasons for variability are not only fuel composition, fuel condition, etc., but are confused somewhat by rapid transformation and mixing of emissions. We use "unmixing" (distinct from mixed-effects) statistics and compare briefly to approaches like neural nets. We focus on one particularly intense fire, the notorious Yosemite Rim Fire of 2013. In some samples, NOx activity was not so suppressed by binding into nitrates as in other fires. While our fire-typing is evolving and subject to debate, the carbon-burned delta(CO2+CO) estimates that arise from mixed-effects models, free of confusion by background-CO2 variation, should provide a solid base for discussion. (c) We report progress using promising links we find between emissions-related "fire types" and promising features deducible from remote observations of plumes, e.g., single scatter albedo, Angstrom exponent of scattering, Angstrom exponent of absorption, (CO column density)/(aerosol optical depth).

  4. Vegetables and Mixed Dishes Are Top Contributors to Phylloquinone Intake in US Adults: Data from the 2011-2012 NHANES.

    PubMed

    Harshman, Stephanie G; Finnan, Emily G; Barger, Kathryn J; Bailey, Regan L; Haytowitz, David B; Gilhooly, Cheryl H; Booth, Sarah L

    2017-07-01

    Background: Phylloquinone is the most abundant form of vitamin K in US diets. Green vegetables are considered the predominant dietary source of phylloquinone. As our food supply diversifies and expands, the food groups that contribute to phylloquinone intake are also changing, which may change absolute intakes. Thus, it is important to identify the contributors to dietary vitamin K estimates to guide recommendations on intakes and food sources. Objective: The purpose of this study was 1) to estimate the amount of phylloquinone consumed in the diet of US adults, 2) to estimate the contribution of different food groups to phylloquinone intake in individuals with a high or low vegetable intake (≥2 or <2 cups vegetables/d), and 3) to characterize the contribution of different mixed dishes to phylloquinone intake. Methods: Usual phylloquinone intake was determined from NHANES 2011-2012 (≥20 y old; 2092 men and 2214 women) and the National Cancer Institute method by utilizing a complex, stratified, multistage probability-cluster sampling design. Results: On average, 43.0% of men and 62.5% of women met the adequate intake (120 and 90 μg/d, respectively) for phylloquinone, with the lowest self-reported intakes noted among men, especially in the older age groups (51-70 and ≥71 y). Vegetables were the highest contributor to phylloquinone intake, contributing 60.0% in the high-vegetable-intake group and 36.1% in the low-vegetable-intake group. Mixed dishes were the second-highest contributor to phylloquinone intake, contributing 16.0% in the high-vegetable-intake group and 28.0% in the low-vegetable-intake group. Conclusion: Self-reported phylloquinone intakes from updated food composition data applied to NHANES 2011-2012 reveal that fewer men than women are meeting the current adequate intake. Application of current food composition data confirms that vegetables continue to be the primary dietary source of phylloquinone in the US diet. However, mixed dishes and convenience foods have emerged as previously unrecognized but important contributors to phylloquinone intake in the United States, which challenges the assumption that phylloquinone intake is a marker of a healthy diet. These findings emphasize the need for the expansion of food composition databases that consider how mixed dishes are compiled and defined. © 2017 American Society for Nutrition.

  5. Minimum number of clusters and comparison of analysis methods for cross sectional stepped wedge cluster randomised trials with binary outcomes: A simulation study.

    PubMed

    Barker, Daniel; D'Este, Catherine; Campbell, Michael J; McElduff, Patrick

    2017-03-09

    Stepped wedge cluster randomised trials frequently involve a relatively small number of clusters. The most common frameworks used to analyse data from these types of trials are generalised estimating equations and generalised linear mixed models. A topic of much research into these methods has been their application to cluster randomised trial data and, in particular, the number of clusters required to make reasonable inferences about the intervention effect. However, for stepped wedge trials, which have been claimed by many researchers to have a statistical power advantage over the parallel cluster randomised trial, the minimum number of clusters required has not been investigated. We conducted a simulation study where we considered the most commonly used methods suggested in the literature to analyse cross-sectional stepped wedge cluster randomised trial data. We compared the per cent bias, the type I error rate and power of these methods in a stepped wedge trial setting with a binary outcome, where there are few clusters available and when the appropriate adjustment for a time trend is made, which by design may be confounding the intervention effect. We found that the generalised linear mixed modelling approach is the most consistent when few clusters are available. We also found that none of the common analysis methods for stepped wedge trials were both unbiased and maintained a 5% type I error rate when there were only three clusters. Of the commonly used analysis approaches, we recommend the generalised linear mixed model for small stepped wedge trials with binary outcomes. We also suggest that in a stepped wedge design with three steps, at least two clusters be randomised at each step, to ensure that the intervention effect estimator maintains the nominal 5% significance level and is also reasonably unbiased.

  6. Determining the impact of cell mixing on signaling during development.

    PubMed

    Uriu, Koichiro; Morelli, Luis G

    2017-06-01

    Cell movement and intercellular signaling occur simultaneously to organize morphogenesis during embryonic development. Cell movement can cause relative positional changes between neighboring cells. When intercellular signals are local, such cell mixing may affect signaling, changing the flow of information in developing tissues. Little is known about the effect of cell mixing on intercellular signaling in collective cellular behaviors, and methods to quantify its impact are lacking. Here we discuss how to determine the impact of cell mixing on cell signaling, drawing an example from vertebrate embryogenesis: the segmentation clock, a collective rhythm of interacting genetic oscillators. We argue that comparing cell mixing and signaling timescales is key to determining the influence of mixing. A signaling timescale can be estimated by combining theoretical models with cell signaling perturbation experiments. A mixing timescale can be obtained by analysis of cell trajectories from live imaging. After comparing cell movement analyses in different experimental settings, we highlight challenges in quantifying cell mixing from embryonic timelapse experiments, especially a reference-frame problem due to embryonic motions and shape changes. We propose statistical observables characterizing cell mixing that do not depend on the choice of reference frames. Finally, we consider situations in which both cell mixing and signaling involve multiple timescales, precluding a direct comparison between single characteristic timescales. In such situations, physical models based on observables of cell mixing and signaling can simulate the flow of information in tissues and reveal the impact of observed cell mixing on signaling. © 2017 Japanese Society of Developmental Biologists.

  7. Identifying pleiotropic genes in genome-wide association studies from related subjects using the linear mixed model and Fisher combination function.

    PubMed

    Yang, James J; Williams, L Keoki; Buu, Anne

    2017-08-24

    A multivariate genome-wide association test is proposed for analyzing data on multivariate quantitative phenotypes collected from related subjects. The proposed method is a two-step approach. The first step models the association between the genotype and marginal phenotype using a linear mixed model. The second step uses the correlation between residuals of the linear mixed model to estimate the null distribution of the Fisher combination test statistic. The simulation results show that the proposed method controls the type I error rate and is more powerful than the marginal tests across different population structures (admixed or non-admixed) and relatedness (related or independent). The statistical analysis on the database of the Study of Addiction: Genetics and Environment (SAGE) demonstrates that applying the multivariate association test may facilitate identification of the pleiotropic genes contributing to the risk for alcohol dependence commonly expressed by four correlated phenotypes. This study proposes a multivariate method for identifying pleiotropic genes while adjusting for cryptic relatedness and population structure between subjects. The two-step approach is not only powerful but also computationally efficient even when the number of subjects and the number of phenotypes are both very large.
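    The second step, calibrating Fisher's combination statistic against correlated residuals, can be approximated with a scaled chi-square null in the spirit of Brown's method. The sketch below is a stand-in under stated assumptions, not the paper's exact procedure; the pairwise covariance polynomial is the published Kost-McDermott approximation:

    ```python
    import numpy as np
    from scipy import stats

    def brown_combination(pvals, resid_corr):
        """Fisher's T = -2 * sum(log p) with a scaled chi-square null, T/c ~ chi2(f),
        matching moments from the residual correlation matrix (Brown 1975; the
        pairwise covariance polynomial follows Kost and McDermott 2002)."""
        k = len(pvals)
        T = -2.0 * np.sum(np.log(pvals))
        mean, var = 2.0 * k, 4.0 * k
        for i in range(k):
            for j in range(i + 1, k):
                r = resid_corr[i, j]
                var += 2.0 * (3.263 * r + 0.710 * r**2 + 0.027 * r**3)
        f = 2.0 * mean**2 / var          # effective degrees of freedom
        c = var / (2.0 * mean)           # scale factor
        return stats.chi2.sf(T / c, f)

    pvals = np.array([0.01, 0.04, 0.20, 0.50])
    corr = np.full((4, 4), 0.3)
    np.fill_diagonal(corr, 1.0)
    print(brown_combination(pvals, corr))
    ```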

  8. A review of vitamin A equivalency of β-carotene in various food matrices for human consumption.

    PubMed

    Van Loo-Bouwman, Carolien A; Naber, Ton H J; Schaafsma, Gertjan

    2014-06-28

    Vitamin A equivalency of β-carotene (VEB) is defined as the amount of ingested β-carotene in μg that is absorbed and converted into 1 μg retinol (vitamin A) in the human body. The objective of the present review was to discuss the different estimates for VEB in various types of dietary food matrices. Different methods are discussed such as mass balance, dose-response and isotopic labelling. The VEB is currently estimated by the US Institute of Medicine (IOM) as 12:1 in a mixed diet and 2:1 in oil. For humans consuming β-carotene dissolved in oil, a VEB between 2:1 and 4:1 is feasible. A VEB of approximately 4:1 is applicable for biofortified cassava, yellow maize and Golden Rice, which are specially bred for human consumption in developing countries. We propose a range of 9:1-16:1 for VEB in a mixed diet that encompasses the IOM VEB of 12:1 and is realistic for a Western diet under Western conditions. For a 'prudent' (i.e. non-Western) diet including a variety of commonly consumed vegetables, a VEB could range from 9:1 to 28:1 in a mixed diet.

  9. Monitoring Saturn's Upper Atmosphere Density Variations Using Helium 584 Airglow

    NASA Astrophysics Data System (ADS)

    Parkinson, Chris

    2017-10-01

    The study of He 584 Å brightnesses is interesting because EUV (extreme ultraviolet) planetary airglow has the potential to yield useful information about mixing and other important parameters in Saturn's thermosphere. Resonance scattering of sunlight by He atoms is the principal source of the planetary emission at He 584 Å. The principal parameter involved in determining the He 584 Å albedo is the He volume mixing ratio, f_He, well below the homopause. Our main science objective is to estimate the helium mixing ratio in the lower atmosphere. Specifically, He emissions come from above the homopause, where the optical depth tau = 1 in H2, and therefore the interpretation depends mainly on two parameters: the He mixing ratio of the lower atmosphere and K_z. The occultations of Koskinen et al. (2015) give K_z with an accuracy that has never been possible before, and the combination of occultations and airglow therefore provides estimates of the mixing ratio in the lower atmosphere. We make these estimates at several locations that can be reasonably studied with both occultations and airglow and then average the results. Our results lead to a greatly improved estimate of the mixing ratio of He in the upper atmosphere and below. The second objective is to constrain the dynamics in the atmosphere by using the estimate of the He mixing ratio from the main objective. Once we have an estimate of the He mixing ratio in the lower atmosphere that agrees with both occultations and airglow, helium becomes an effective tracer species, as any variations in the Cassini UVIS helium data are a direct indicator of changes in K_z, i.e., dynamics. Our third objective is to connect this work to our Cassini UVIS He 584 Å airglow analyses, as they both cover the time span of the observations and allow us to monitor changes in the airglow observations that may correlate with changes in the state of the atmosphere as revealed by the occultations of Saturn's upper thermosphere. This work helps to determine the mixing ratio of He and constrain dynamics in the upper atmosphere, both of which are high-level science objectives of the Cassini mission.

  10. Aerosol lidar observations of atmospheric mixing in Los Angeles: Climatology and implications for greenhouse gas observations.

    PubMed

    Ware, John; Kort, Eric A; DeCola, Phil; Duren, Riley

    2016-08-27

    Atmospheric observations of greenhouse gases provide essential information on sources and sinks of these key atmospheric constituents. To quantify fluxes from atmospheric observations, representation of transport-especially vertical mixing-is a necessity and often a source of error. We report on remotely sensed profiles of vertical aerosol distribution taken over a 2 year period in Pasadena, California. Using an automated analysis system, we estimate daytime mixing layer depth, achieving high confidence in the afternoon maximum on 51% of days with profiles from a Sigma Space Mini Micropulse LiDAR (MiniMPL) and on 36% of days with a Vaisala CL51 ceilometer. We note that considering ceilometer data on a logarithmic scale, a standard method, introduces an offset in mixing height retrievals. The mean afternoon maximum mixing height is 770 m above ground level in summer and 670 m in winter, with significant day-to-day variance (within-season σ = 220 m, ≈30%). Taking advantage of the MiniMPL's portability, we demonstrate the feasibility of measuring the detailed horizontal structure of the mixing layer by automobile. We compare our observations to planetary boundary layer (PBL) heights from sonde launches, the North American Regional Reanalysis (NARR), and a custom Weather Research and Forecasting (WRF) model developed for greenhouse gas (GHG) monitoring in Los Angeles. NARR and WRF PBL heights at Pasadena are both systematically higher than measured, NARR by 2.5 times; these biases will cause proportional errors in GHG flux estimates using modeled transport. We discuss how sustained lidar observations can be used to reduce flux inversion error by selecting suitable analysis periods, calibrating models, or characterizing bias for correction in post processing.

  11. Maximum likelihood estimates, from censored data, for mixed-Weibull distributions

    NASA Astrophysics Data System (ADS)

    Jiang, Siyuan; Kececioglu, Dimitri

    1992-06-01

    A new algorithm for estimating the parameters of mixed-Weibull distributions from censored data is presented. The algorithm follows the principle of maximum likelihood estimation (MLE) through the expectation-maximization (EM) algorithm, and it is derived for both postmortem and nonpostmortem time-to-failure data. It is concluded that the concept of the EM algorithm is easy to understand and apply (only elementary statistics and calculus are required). The log-likelihood function cannot decrease after an EM sequence; this important feature was observed in all of the numerical calculations. The MLEs of the nonpostmortem data were obtained successfully for mixed-Weibull distributions with up to 14 parameters in a 5-subpopulation mixed-Weibull distribution. Numerical examples indicate that some of the log-likelihood functions of the mixed-Weibull distributions have multiple local maxima; therefore, the algorithm should start at several initial guesses of the parameter set.
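    For uncensored (postmortem) failure times, the EM iteration described here is short to write down. A minimal two-subpopulation sketch under those simplifying assumptions (synthetic data, no censoring handling, so this is far less general than the paper's algorithm):

    ```python
    import numpy as np
    from scipy.optimize import minimize
    from scipy.stats import weibull_min

    # Synthetic postmortem failure times from two Weibull subpopulations
    t = np.concatenate([weibull_min.rvs(1.5, scale=100, size=300, random_state=1),
                        weibull_min.rvs(3.0, scale=500, size=200, random_state=2)])

    def weighted_nll(params, t, w):
        shape, scale = params
        return -np.sum(w * weibull_min.logpdf(t, shape, scale=scale))

    pi = 0.5                                                  # mixing weight guess
    comp = [np.array([1.0, 200.0]), np.array([2.0, 400.0])]   # (shape, scale) guesses

    for _ in range(50):
        # E-step: responsibility of subpopulation 1 for each failure time
        f1 = pi * weibull_min.pdf(t, comp[0][0], scale=comp[0][1])
        f2 = (1 - pi) * weibull_min.pdf(t, comp[1][0], scale=comp[1][1])
        r = f1 / (f1 + f2)
        # M-step: update the weight and refit each Weibull by weighted MLE
        pi = r.mean()
        comp[0] = minimize(weighted_nll, comp[0], args=(t, r),
                           bounds=[(0.05, 20), (1e-6, None)]).x
        comp[1] = minimize(weighted_nll, comp[1], args=(t, 1 - r),
                           bounds=[(0.05, 20), (1e-6, None)]).x

    print(f"weight = {pi:.2f}, components = {comp}")
    ```

    Consistent with the abstract's caution about multiple local maxima, restarting this loop from several initial guesses and keeping the highest-likelihood solution is the safer practice.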

  12. Stochastic estimation of transmissivity fields conditioned to flow connectivity data

    NASA Astrophysics Data System (ADS)

    Freixas, Genis; Fernàndez-Garcia, Daniel; Sanchez-vila, Xavier

    2017-04-01

    Most methods for hydraulic parameter interpretation rely on a number of simplifications regarding the homogeneity of the underlying porous media. This way, the actual heterogeneity of any natural parameter, such as transmissivity, is transferred to the estimates in a manner heavily dependent on the interpretation method used. An example is a pumping test, in most cases interpreted by means of the Cooper-Jacob method, which implicitly assumes a homogeneous isotropic confined aquifer. It was shown that the estimates obtained from this method when applied to a real site are not local values, but still have a physical meaning; the estimated transmissivity is equal to the effective transmissivity characteristic of the regional scale, while the log-ratio of the estimated storage coefficient with respect to the actual value (assumed constant) is an indicator of flow connectivity, representative of the scale given by the distance between the pumping and the observation wells. In this work we propose a methodology to use these connectivity indicators together with actual measurements of the log transmissivity at selected points to obtain a map of the best local transmissivity estimates using cokriging. Since the interpolation involves two variables measured at different support scales, a critical point is the estimation of the covariance and cross-covariance matrices, involving some quadratures that are obtained using a simplified approach. The method was applied to a synthetic field displaying statistical anisotropy, showing that the use of connectivity indicators mixed with the local values provides a better representation of the local transmissivity map, in particular regarding the enhanced representation of the continuity of structures corresponding to either high or low values.

  13. Kurtosis Approach Nonlinear Blind Source Separation

    NASA Technical Reports Server (NTRS)

    Duong, Vu A.; Stubberud, Allen R.

    2005-01-01

    In this paper, we introduce a new algorithm for blind source signal separation for post-nonlinear mixtures. The mixtures are assumed to be linearly mixed from unknown sources first and then distorted by memoryless nonlinear functions. The nonlinear functions are assumed to be smooth and can be approximated by polynomials. Both the coefficients of the unknown mixing matrix and the coefficients of the approximated polynomials are estimated by the gradient descent method conditional on the higher-order statistical requirements. The results of simulation experiments presented in this paper demonstrate the validity and usefulness of our approach for nonlinear blind source signal separation. Keywords: Independent Component Analysis, Kurtosis, Higher order statistics.

  14. CHClF2 (F-22) in the Earth's atmosphere

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rasmussen, R.A.; Khalil, M.A.K.; Penkett, S.A.

    1980-10-01

    Recent global measurements of CHClF2 (F-22) are reported. Originally, GC/MS techniques were used to obtain these data. Since then, significant advances using an O2-doped electron capture detector have been made in the analytical techniques, so that F-22 can be measured by EC/GC methods at ambient concentrations. The atmospheric burden of F-22 calculated from these measurements (average mixing ratio, mid-1979, approx. 45 pptv) is considerably greater than that expected from the estimates of direct industrial emissions (average mixing ratio, mid-1979, approx. 30 pptv). This difference is probably due to underestimates of F-22 emissions.

  15. CHClF2 (F-22) in the Earth's atmosphere

    NASA Astrophysics Data System (ADS)

    Rasmussen, R. A.; Khalil, M. A. K.; Penkett, S. A.; Prosser, N. J. D.

    1980-10-01

    Recent global measurements of CHClF2 (F-22) are reported. Originally, GC/MS techniques were used to obtain these data. Since then, significant advances using an O2-doped electron capture detector have been made in the analytical techniques, so that F-22 can be measured by EC/GC methods at ambient concentrations. The atmospheric burden of F-22 calculated from these measurements (average mixing ratio, mid-1979, ˜45 pptv) is considerably greater than that expected from the estimates of direct industrial emissions (average mixing ratio, mid-1979, ˜30 pptv). This difference is probably due to underestimates of F-22 emissions.

  16. CHClF2 /F-22/ in the earth's atmosphere

    NASA Technical Reports Server (NTRS)

    Rasmussen, R. A.; Khalil, M. A. K.; Penkett, S. A.; Prosser, N. J. D.

    1980-01-01

    Recent global measurements of CHClF2 (F-22) are reported. Originally, GC/MS techniques were used to obtain these data. Since then, significant advances using an O2-doped electron capture detector have been made in the analytical techniques, so that F-22 can be measured by EC/GC methods at ambient concentrations. The atmospheric burden of F-22 calculated from these measurements (average mixing ratio, mid-1979, approximately 45 pptv) is considerably greater than that expected from the estimates of direct industrial emissions (average mixing ratio, mid-1979, approximately 30 pptv). This difference is probably due to underestimates of F-22 emissions.

  17. Prediction of forest fires occurrences with area-level Poisson mixed models.

    PubMed

    Boubeta, Miguel; Lombardía, María José; Marey-Pérez, Manuel Francisco; Morales, Domingo

    2015-05-01

    The number of fires in forest areas of Galicia (north-west of Spain) during the summer period is quite high. Local authorities are interested in analyzing the factors that explain this phenomenon. Poisson regression models are good tools for describing and predicting the number of fires per forest area. This work employs area-level Poisson mixed models for treating real data about fires in forest areas. A parametric bootstrap method is applied for estimating the mean squared errors of the fire predictors. The developed methodology and software are applied to a real data set of fires in forest areas of Galicia. Copyright © 2015 Elsevier Ltd. All rights reserved.

  18. Revealed and stated preference valuation and transfer: A within-sample comparison of water quality improvement values

    NASA Astrophysics Data System (ADS)

    Ferrini, Silvia; Schaafsma, Marije; Bateman, Ian

    2014-06-01

    Benefit transfer (BT) methods are becoming increasingly important for environmental policy, but the empirical findings regarding transfer validity are mixed. A novel valuation survey was designed to obtain both stated preference (SP) and revealed preference (RP) data concerning river water quality values from a large sample of households. Both dichotomous choice and payment card contingent valuation (CV) and travel cost (TC) data were collected. Resulting valuations were directly compared and used for BT analyses using both unit value and function transfer approaches. WTP estimates are found to pass the convergence validity test. BT results show that the CV data produce lower transfer errors, below 20% for both unit value and function transfer, than TC data especially when using function transfer. Further, comparison of WTP estimates suggests that in all cases, differences between methods are larger than differences between study areas. Results show that when multiple studies are available, using welfare estimates from the same area but based on a different method consistently results in larger errors than transfers across space keeping the method constant.

  19. Feasibility of using LANDSAT images of vegetation cover to estimate effective hydraulic properties of soils

    NASA Technical Reports Server (NTRS)

    Eagleson, P. S.

    1985-01-01

    Research activities conducted from February 1, 1985 to July 31, 1985 and preliminary conclusions regarding research objectives are summarized. The objective is to determine the feasibility of using LANDSAT data to estimate effective hydraulic properties of soils. The general approach is to apply the climatic-climax hypothesis (Eagleson, 1982) to natural water-limited vegetation systems using canopy cover estimated from LANDSAT data. Natural water-limited systems typically consist of inhomogeneous vegetation canopies interspersed with bare soils. The ground resolution associated with one pixel from LANDSAT MSS (or TM) data is generally greater than the scale of the plant canopy or canopy clusters. Thus a method for resolving percent canopy cover at a subpixel level must be established before the Eagleson hypothesis can be tested. Two formulations are proposed which extend existing methods of analyzing mixed pixels to naturally vegetated landscapes. The first method involves use of the normalized vegetation index. The second approach is a physical model based on radiative transfer principles. Both methods are to be analyzed for their feasibility on selected sites.

  20. Estimates of genetic and environmental (co)variances for first lactation on milk yield, survival, and calving interval.

    PubMed

    Dong, M C; van Vleck, L D

    1989-03-01

    Variance and covariance components for milk yield, survival to second freshening, and calving interval in first lactation were estimated by REML with the expectation-maximization algorithm for an animal model which included herd-year-season effects. Cows without calving interval but with milk yield were included. Each of the four data sets of 15 herds included about 3000 Holstein cows. Relationships across herds were ignored to enable inversion of the coefficient matrix of the mixed model equations. Quadratics and their expectations were accumulated herd by herd. Heritability of milk yield (.32) agrees with reports using the same methods. Heritabilities of survival (.11) and calving interval (.15) are slightly larger, and genetic correlations smaller, than results from different methods of estimation. The genetic correlation between milk yield and calving interval (.09) indicates that genetic ability to produce more milk is slightly associated with decreased fertility.

  1. Minimum Distance Estimation of Mixture Proportions.

    DTIC Science & Technology

    1986-12-01

    [Scanned report; only fragments of the text are legible.] Others extended this research to the mixed Weibull distribution: Falls (14) and Rider (35) used the method of moments, and Kao (26) used a graphical method, for example. The remainder of the record consists of table-of-contents and Fortran listing fragments that are not recoverable.

  2. Spatiotemporal modeling of ozone levels in Quebec (Canada): a comparison of kriging, land-use regression (LUR), and combined Bayesian maximum entropy-LUR approaches.

    PubMed

    Adam-Poupart, Ariane; Brand, Allan; Fournier, Michel; Jerrett, Michael; Smargiassi, Audrey

    2014-09-01

    Ambient air ozone (O3) is a pulmonary irritant that has been associated with respiratory health effects including increased lung inflammation and permeability, airway hyperreactivity, respiratory symptoms, and decreased lung function. Estimation of O3 exposure is a complex task because the pollutant exhibits complex spatiotemporal patterns. To refine the quality of exposure estimation, various spatiotemporal methods have been developed worldwide. We sought to compare the accuracy of three spatiotemporal models to predict summer ground-level O3 in Quebec, Canada. We developed a land-use mixed-effects regression (LUR) model based on readily available data (air quality and meteorological monitoring data, road network information, latitude), a Bayesian maximum entropy (BME) model incorporating both O3 monitoring station data and the land-use mixed model outputs (BME-LUR), and a kriging method model based only on available O3 monitoring station data (BME kriging). We performed leave-one-station-out cross-validation and visually assessed the predictive capability of each model by examining the mean temporal and spatial distributions of the average estimated errors. The BME-LUR was the best predictive model (R2 = 0.653) with the lowest root mean-square error (RMSE = 7.06 ppb), followed by the LUR model (R2 = 0.466, RMSE = 8.747 ppb) and the BME kriging model (R2 = 0.414, RMSE = 9.164 ppb). Our findings suggest that errors of estimation in the interpolation of O3 concentrations with BME can be greatly reduced by incorporating outputs from a LUR model developed with readily available data.
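    Leave-one-station-out cross-validation generalizes ordinary K-fold by holding out all records from one station at a time, so a model is never scored on a station it has seen. A generic sketch (synthetic predictors stand in for the land-use covariates; not the study's actual pipeline):

    ```python
    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.metrics import mean_squared_error, r2_score
    from sklearn.model_selection import LeaveOneGroupOut

    rng = np.random.default_rng(42)
    n = 200
    stations = rng.integers(0, 20, n)      # 20 hypothetical monitoring stations
    X = rng.normal(size=(n, 3))            # e.g. road density, temperature, latitude
    y = X @ np.array([2.0, -1.0, 0.5]) + rng.normal(0, 1, n)

    pred = np.empty(n)
    for train, test in LeaveOneGroupOut().split(X, y, groups=stations):
        model = LinearRegression().fit(X[train], y[train])
        pred[test] = model.predict(X[test])

    print(f"R2 = {r2_score(y, pred):.3f}, RMSE = {mean_squared_error(y, pred) ** 0.5:.3f}")
    ```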

  3. The Mixed Instrumental Controller: Using Value of Information to Combine Habitual Choice and Mental Simulation

    PubMed Central

    Pezzulo, Giovanni; Rigoli, Francesco; Chersi, Fabian

    2013-01-01

    Instrumental behavior depends on both goal-directed and habitual mechanisms of choice. Normative views cast these mechanisms in terms of model-free and model-based methods of reinforcement learning, respectively. An influential proposal hypothesizes that model-free and model-based mechanisms coexist and compete in the brain according to their relative uncertainty. In this paper we propose a novel view in which a single Mixed Instrumental Controller produces both goal-directed and habitual behavior by flexibly balancing and combining model-based and model-free computations. The Mixed Instrumental Controller performs a cost-benefit analysis to decide whether to choose an action immediately based on the available “cached” value of actions (linked to model-free mechanisms) or to improve value estimation by mentally simulating the expected outcome values (linked to model-based mechanisms). Since mental simulation entails cognitive effort and increases the reward delay, it is activated only when the associated “Value of Information” exceeds its costs. The model proposes a method to compute the Value of Information, based on the uncertainty of action values and on the distance of alternative cached action values. Overall, the model by default chooses on the basis of lighter model-free estimates, and integrates them with costly model-based predictions only when useful. Mental simulation uses a sampling method to produce reward expectancies, which are used to update the cached value of one or more actions; in turn, this updated value is used for the choice. The key predictions of the model are tested in different settings of a double T-maze scenario. Results are discussed in relation with neurobiological evidence on the hippocampus – ventral striatum circuit in rodents, which has been linked to goal-directed spatial navigation. PMID:23459512

  4. The mixed instrumental controller: using value of information to combine habitual choice and mental simulation.

    PubMed

    Pezzulo, Giovanni; Rigoli, Francesco; Chersi, Fabian

    2013-01-01

    Instrumental behavior depends on both goal-directed and habitual mechanisms of choice. Normative views cast these mechanisms in terms of model-free and model-based methods of reinforcement learning, respectively. An influential proposal hypothesizes that model-free and model-based mechanisms coexist and compete in the brain according to their relative uncertainty. In this paper we propose a novel view in which a single Mixed Instrumental Controller produces both goal-directed and habitual behavior by flexibly balancing and combining model-based and model-free computations. The Mixed Instrumental Controller performs a cost-benefit analysis to decide whether to choose an action immediately based on the available "cached" value of actions (linked to model-free mechanisms) or to improve value estimation by mentally simulating the expected outcome values (linked to model-based mechanisms). Since mental simulation entails cognitive effort and increases the reward delay, it is activated only when the associated "Value of Information" exceeds its costs. The model proposes a method to compute the Value of Information, based on the uncertainty of action values and on the distance of alternative cached action values. Overall, the model by default chooses on the basis of lighter model-free estimates, and integrates them with costly model-based predictions only when useful. Mental simulation uses a sampling method to produce reward expectancies, which are used to update the cached value of one or more actions; in turn, this updated value is used for the choice. The key predictions of the model are tested in different settings of a double T-maze scenario. Results are discussed in relation with neurobiological evidence on the hippocampus - ventral striatum circuit in rodents, which has been linked to goal-directed spatial navigation.
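    The decision rule can be sketched as a gate that triggers mental simulation only when a rough value-of-information estimate exceeds the simulation cost. The proxy below (uncertainty of the two leading cached values relative to their gap) is an invented stand-in for the paper's actual Value of Information formula, built only from the two ingredients the abstract names:

    ```python
    import numpy as np

    def choose(cached_mean, cached_std, simulate, sim_cost):
        """Act from cached (model-free) values, or refine them by mental
        simulation (model-based) when a rough VoI estimate exceeds its cost."""
        order = np.argsort(cached_mean)[::-1]
        gap = cached_mean[order[0]] - cached_mean[order[1]]
        # Invented VoI proxy: uncertainty of the two leading actions vs. their gap
        voi = cached_std[order[0]] + cached_std[order[1]] - gap
        if voi > sim_cost:
            # Sample outcomes to update the cached estimates before choosing
            cached_mean = np.array([np.mean(simulate(a, 20))
                                    for a in range(len(cached_mean))])
        return int(np.argmax(cached_mean))

    rng = np.random.default_rng(0)
    true_value = np.array([1.0, 1.2, 0.4])
    action = choose(cached_mean=np.array([1.1, 1.0, 0.4]),
                    cached_std=np.array([0.5, 0.5, 0.1]),
                    simulate=lambda a, n: rng.normal(true_value[a], 0.2, n),
                    sim_cost=0.3)
    print("chosen action:", action)
    ```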

  5. The Impact of Patient Safety Training on Oral and Maxillofacial Surgery Residents' Attitudes and Knowledge: A Mixed Method Case Study

    ERIC Educational Resources Information Center

    Buhrow, Suzanne

    2013-01-01

    It is estimated that in the United States, more than 40,000 patients are injured each day because of preventable medical errors. Patient safety experts and graduate medical education accreditation leaders recognize that medical education reform must include the integration of safety training focused on error causation, system engineering, and…

  6. Tamarack and black spruce adventitious root patterns are similar in their ability to estimate organic layer depths in northern temperate forests

    Treesearch

    Timothy J. Veverica; Evan S. Kane; Eric S. Kasischke

    2012-01-01

    Organic layer consumption during forest fires is hard to quantify. These data suggest that the adventitious root methods developed for reconstructing organic layer depths following wildfires in boreal black spruce forests can also be applied to mixed tamarack forests growing in temperate regions with glacially transported soils.

  7. Effects of sample size on KERNEL home range estimates

    USGS Publications Warehouse

    Seaman, D.E.; Millspaugh, J.J.; Kernohan, Brian J.; Brundige, Gary C.; Raedeke, Kenneth J.; Gitzen, Robert A.

    1999-01-01

    Kernel methods for estimating home range are being used increasingly in wildlife research, but the effect of sample size on their accuracy is not known. We used computer simulations of 10-200 points/home range and compared accuracy of home range estimates produced by fixed and adaptive kernels with the reference (REF) and least-squares cross-validation (LSCV) methods for determining the amount of smoothing. Simulated home ranges varied from simple to complex shapes created by mixing bivariate normal distributions. We used the size of the 95% home range area and the relative mean squared error of the surface fit to assess the accuracy of the kernel home range estimates. For both measures, the bias and variance approached an asymptote at about 50 observations/home range. The fixed kernel with smoothing selected by LSCV provided the least-biased estimates of the 95% home range area. All kernel methods produced similar surface fit for most simulations, but the fixed kernel with LSCV had the lowest frequency and magnitude of very poor estimates. We reviewed 101 papers published in The Journal of Wildlife Management (JWM) between 1980 and 1997 that estimated animal home ranges. A minority of these papers used nonparametric utilization distribution (UD) estimators, and most did not adequately report sample sizes. We recommend that home range studies using kernel estimates use LSCV to determine the amount of smoothing, obtain a minimum of 30 observations per animal (but preferably ≥50), and report sample sizes in published results.
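    A fixed-kernel home range with cross-validated smoothing can be assembled from standard tools. The sketch below uses likelihood cross-validation from scikit-learn as a stand-in for LSCV, and computes the 95% utilization-distribution area on a grid (all data and grid choices are invented):

    ```python
    import numpy as np
    from sklearn.model_selection import GridSearchCV
    from sklearn.neighbors import KernelDensity

    rng = np.random.default_rng(1)
    # 60 simulated relocations: a mixture of two bivariate normal activity centers
    pts = np.vstack([rng.normal([0, 0], 1.0, (30, 2)),
                     rng.normal([4, 3], 1.5, (30, 2))])

    # Cross-validated bandwidth selection (likelihood CV as a stand-in for LSCV)
    grid = GridSearchCV(KernelDensity(kernel="gaussian"),
                        {"bandwidth": np.linspace(0.2, 2.0, 19)}, cv=10)
    kde = grid.fit(pts).best_estimator_

    # 95% home range: smallest set of grid cells holding 95% of the utilization density
    xx, yy = np.meshgrid(np.linspace(-5, 10, 200), np.linspace(-5, 10, 200))
    cell = (xx[0, 1] - xx[0, 0]) * (yy[1, 0] - yy[0, 0])
    dens = np.exp(kde.score_samples(np.c_[xx.ravel(), yy.ravel()]))
    order = np.argsort(dens)[::-1]
    cum = np.cumsum(dens[order]) * cell
    level = dens[order][min(np.searchsorted(cum, 0.95), dens.size - 1)]
    print(f"bandwidth = {kde.bandwidth:.2f}, 95% area = {cell * np.sum(dens >= level):.1f}")
    ```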

  8. Foliar and woody materials discriminated using terrestrial LiDAR in a mixed natural forest

    NASA Astrophysics Data System (ADS)

    Zhu, Xi; Skidmore, Andrew K.; Darvishzadeh, Roshanak; Niemann, K. Olaf; Liu, Jing; Shi, Yifang; Wang, Tiejun

    2018-02-01

    Separation of foliar and woody materials using remotely sensed data is crucial for the accurate estimation of leaf area index (LAI) and woody biomass across forest stands. In this paper, we present a new method to accurately separate foliar and woody materials using terrestrial LiDAR point clouds obtained from ten test sites in a mixed forest in Bavarian Forest National Park, Germany. Firstly, we applied and compared an adaptive radius near-neighbor search algorithm with a fixed radius near-neighbor search method in order to obtain both radiometric and geometric features derived from terrestrial LiDAR point clouds. Secondly, we used a random forest machine learning algorithm to classify foliar and woody materials and examined the impact of understory and slope on the classification accuracy. An average overall accuracy of 84.4% (Kappa = 0.75) was achieved across all experimental plots. The adaptive radius near-neighbor search method outperformed the fixed radius near-neighbor search method. The classification accuracy was significantly higher when the combination of both radiometric and geometric features was utilized. The analysis showed that increasing slope and understory coverage had a significant negative effect on the overall classification accuracy. Our results suggest that the utilization of the adaptive radius near-neighbor search method coupling both radiometric and geometric features has the potential to accurately discriminate foliar and woody materials from terrestrial LiDAR data in a mixed natural forest.
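    Once per-point radiometric and geometric features have been extracted from the neighborhood search, the classification step is a conventional supervised problem. A minimal sketch with invented features (not the authors' feature set):

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(7)
    n = 2000
    # Invented per-point features: return intensity (radiometric) plus local
    # geometric descriptors (e.g. linearity, planarity, scatter) from the
    # adaptive-radius neighborhood search
    X = rng.normal(size=(n, 4))
    y = (X[:, 0] + 0.5 * X[:, 2] + rng.normal(0, 1, n) > 0).astype(int)  # 1 = wood

    clf = RandomForestClassifier(n_estimators=300, random_state=0)
    print("5-fold CV accuracy:", cross_val_score(clf, X, y, cv=5).mean())
    ```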

  9. Statistical strategies for averaging EC50 from multiple dose-response experiments.

    PubMed

    Jiang, Xiaoqi; Kopp-Schneider, Annette

    2015-11-01

    In most dose-response studies, repeated experiments are conducted to determine the EC50 value for a chemical, requiring averaging of EC50 estimates from a series of experiments. Two statistical strategies, the mixed-effects modeling and the meta-analysis approach, can be applied to estimate the average behavior of EC50 values over all experiments by considering the variabilities within and among experiments. We investigated these two strategies in two common cases of multiple dose-response experiments, in which complete and explicit dose-response relationships are observed (a) in all experiments or (b) only in a subset of experiments. In case (a), the meta-analysis strategy is a simple and robust method to average EC50 estimates. In case (b), all experimental data sets can first be screened using the dose-response screening plot, which allows visualization and comparison of multiple dose-response experimental results. As long as more than three experiments provide information about complete dose-response relationships, the experiments that cover incomplete relationships can be excluded from the meta-analysis strategy of averaging EC50 estimates. If there are only two experiments containing complete dose-response information, the mixed-effects model approach is suggested. We subsequently provided a web application for non-statisticians to implement the proposed meta-analysis strategy of averaging EC50 estimates from multiple dose-response experiments.
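    The meta-analysis strategy amounts to an inverse-variance weighted average, usually computed on the log-EC50 scale. A minimal fixed-effect sketch with invented per-experiment estimates (a hedged illustration of the weighting, not the paper's web application):

    ```python
    import numpy as np

    # Invented per-experiment log10(EC50) estimates and their standard errors
    log_ec50 = np.array([1.82, 1.95, 1.78, 2.01])
    se = np.array([0.10, 0.08, 0.15, 0.12])

    w = 1.0 / se**2                              # inverse-variance weights
    pooled = np.sum(w * log_ec50) / np.sum(w)    # fixed-effect pooled estimate
    pooled_se = np.sqrt(1.0 / np.sum(w))
    lo, hi = pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se
    print(f"pooled EC50 = {10**pooled:.1f} (95% CI {10**lo:.1f} to {10**hi:.1f})")
    ```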

  10. Robustifying blind image deblurring methods by simple filters

    NASA Astrophysics Data System (ADS)

    Liu, Yan; Zeng, Xiangrong; Huangpeng, Qizi; Fan, Jun; Zhou, Jinglun; Feng, Jing

    2016-07-01

    The state-of-the-art blind image deblurring (BID) methods are sensitive to noise, and most of them can deal with only small levels of Gaussian noise. In this paper, we use simple filters to present a robust BID framework that is able to robustify existing BID methods against high-level Gaussian noise and/or non-Gaussian noise. Experiments on images in the presence of Gaussian noise, impulse noise (salt-and-pepper noise and random-valued noise) and mixed Gaussian-impulse noise, and on a real-world blurry and noisy image, show that the proposed method can estimate sharper kernels and better images, faster than other methods.

  11. Ground-water flow simulation and chemical and isotopic mixing equation analysis to determine source contributions to the Missouri River alluvial aquifer in the vicinity of the Independence, Missouri, well field

    USGS Publications Warehouse

    Kelly, Brian P.

    2002-01-01

    The city of Independence, Missouri, operates a well field in the Missouri River alluvial aquifer. Steady-state ground-water flow simulation, particle tracking, and the use of chemical and isotopic composition of river water, ground water, and well-field pumpage in a two-component mixing equation were used to determine the source contributions of induced inflow from the Missouri River and recharge to ground water from precipitation in well-field pumpage. Steady-state flow-budget analysis for the simulation-defined zone of contribution to the Independence well field indicates that 86.7 percent of well-field pumpage is from induced inflow from the river, and 6.7 percent is from ground-water recharge from precipitation. The 6.6 percent of flow from outside the simulation-defined zone of contribution is a measure of the uncertainty of the estimation, and occurs because model cells are too large to uniquely define the actual zone of contribution. Flow-budget calculations indicate that the largest source of water to most wells is the Missouri River. Particle-tracking techniques indicate that the Missouri River supplies 82.3 percent of the water to the Independence well field, ground-water recharge from precipitation supplies 9.7 percent, and flow from outside defined zones of contribution supplies 8.0 percent. Particle tracking was used to determine the relative amounts of source water to total well-field pumpage as a function of traveltime from the source. Well-field pumpage that traveled 1 year or less from the source was 8.8 percent, with 0.6 percent from the Missouri River, none from precipitation, and 8.2 percent between starting cells. Well-field pumpage that traveled 2 years or less from the source was 10.3 percent, with 1.8 percent from the Missouri River, 0.2 percent from precipitation, and 8.3 percent between starting cells. Well-field pumpage that traveled 5 years or less from the source was 36.5 percent, with 27.1 percent from the Missouri River, 1.1 percent from precipitation, and 8.3 percent between starting cells. Well-field pumpage that traveled 10 years or less from the source was 42.7 percent, with 32.6 percent from the Missouri River, 1.8 percent from precipitation, and 8.3 percent between starting cells. Well-field pumpage that traveled 25 years or less from the source was 71.9 percent, with 58.9 percent from the Missouri River, 4.7 percent from precipitation, and 8.3 percent between starting cells. Results of chemical (calcium, sodium, iron, and fluoride) and isotopic (oxygen and hydrogen) analyses of water samples collected from the Missouri River, selected monitoring wells around the Independence well field, and combined well-field pumpage were used in a two component mixing equation to estimate the relative amount of Missouri River water in total well-field pumpage. The relative amounts of induced inflow from the Missouri River in well-field pumpage ranged from 49 percent for sodium to 80 percent for calcium, and sensitivities ranged from 0 percent for iron to plus or minus 35 percent for naturally occurring stable isotope (18O). The average of all mixing equation results indicated that 61 percent of well-field pumpage was from induced inflow from the Missouri River. All methods used in the study indicate that more than one-half of the water in well-field pumpage was inflow from the Missouri River. 
River inflow estimates from ground-water simulation methods are larger, and their error values smaller, than those using chemical and isotopic data in the mixing equation, although substantial uncertainties exist for both estimation methods. Because of the complex hydrology of the aquifer near the Independence well field, the source estimates using particle tracking probably are the most reliable of the ground-water simulation methods. Mixing equation results are less reliable than those of the ground-water simulation for this study. However, more reliable results can be obtained from the mixing equation
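    The two-component mixing equation used here is a one-line computation: for a given tracer, the river-water fraction is the difference between the mix and the ground-water end member divided by the difference between the river and ground-water end members. A sketch with invented calcium concentrations:

    ```python
    def river_fraction(c_mix, c_river, c_gw):
        """Two-component mixing equation: fraction of induced river inflow in
        well-field pumpage, from tracer concentrations in the mix and end members."""
        return (c_mix - c_gw) / (c_river - c_gw)

    # Hypothetical calcium concentrations (mg/L) in pumpage, river, and ground water
    print(f"{river_fraction(c_mix=72.0, c_river=65.0, c_gw=95.0):.0%} river water")
    ```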

  12. Estimating past diameters of mixed conifer species in the central Sierra Nevada

    Treesearch

    K. Leroy Dolph

    1981-01-01

    Tree diameter outside bark at an earlier period of growth can be estimated from the linear relationship of present inside bark and outside bark diameters at breast height. This note presents equations for estimating inside bark diameters, outside bark diameters, and past outside bark diameters for each of the mixed-conifer species in the central Sierra Nevada.

  13. Estimating the reliability of repeatedly measured endpoints based on linear mixed-effects models. A tutorial.

    PubMed

    Van der Elst, Wim; Molenberghs, Geert; Hilgers, Ralf-Dieter; Verbeke, Geert; Heussen, Nicole

    2016-11-01

    There are various settings in which researchers are interested in the assessment of the correlation between repeated measurements that are taken within the same subject (i.e., reliability). For example, the same rating scale may be used to assess the symptom severity of the same patients by multiple physicians, or the same outcome may be measured repeatedly over time in the same patients. Reliability can be estimated in various ways, for example, using the classical Pearson correlation or the intra-class correlation in clustered data. However, contemporary data often have a complex structure that goes well beyond the restrictive assumptions that are needed with the more conventional methods to estimate reliability. In the current paper, we propose a general and flexible modeling approach that allows for the derivation of reliability estimates, standard errors, and confidence intervals - appropriately taking hierarchies and covariates in the data into account. Our methodology is developed for continuous outcomes together with covariates of an arbitrary type. The methodology is illustrated in a case study, and a Web Appendix is provided which details the computations using the R package CorrMixed and the SAS software. Copyright © 2016 John Wiley & Sons, Ltd.
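    The paper's approach is implemented in the R package CorrMixed; for the simplest case (a random-intercept model with no covariates), the same reliability quantity can be sketched in Python with statsmodels as the ratio of between-subject variance to total variance. A hedged sketch with simulated data:

    ```python
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(3)
    n_subj, n_rep = 60, 4
    subj = np.repeat(np.arange(n_subj), n_rep)
    u = rng.normal(0, 2.0, n_subj)                   # between-subject effects
    y = 10 + u[subj] + rng.normal(0, 1.0, n_subj * n_rep)
    df = pd.DataFrame({"y": y, "subject": subj})

    m = smf.mixedlm("y ~ 1", df, groups=df["subject"]).fit()
    var_between = float(m.cov_re.iloc[0, 0])         # random-intercept variance
    var_resid = m.scale                              # residual variance
    print("reliability (ICC):", var_between / (var_between + var_resid))
    ```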

  14. Constrained inference in mixed-effects models for longitudinal data with application to hearing loss.

    PubMed

    Davidov, Ori; Rosen, Sophia

    2011-04-01

    In medical studies, endpoints are often measured for each patient longitudinally. The mixed-effects model has been a useful tool for the analysis of such data. There are situations in which the parameters of the model are subject to some restrictions or constraints. For example, in hearing loss studies, we expect hearing to deteriorate with time. This means that hearing thresholds which reflect hearing acuity will, on average, increase over time. Therefore, the regression coefficients associated with the mean effect of time on hearing ability will be constrained. Such constraints should be accounted for in the analysis. We propose maximum likelihood estimation procedures, based on the expectation-conditional maximization either algorithm, to estimate the parameters of the model while accounting for the constraints on them. The proposed methods improve, in terms of mean square error, on the unconstrained estimators. In some settings, the improvement may be substantial. Hypotheses testing procedures that incorporate the constraints are developed. Specifically, likelihood ratio, Wald, and score tests are proposed and investigated. Their empirical significance levels and power are studied using simulations. It is shown that incorporating the constraints improves the mean squared error of the estimates and the power of the tests. These improvements may be substantial. The methodology is used to analyze a hearing loss study.

  15. An Efficient Test for Gene-Environment Interaction in Generalized Linear Mixed Models with Family Data.

    PubMed

    Mazo Lopera, Mauricio A; Coombes, Brandon J; de Andrade, Mariza

    2017-09-27

    Gene-environment (GE) interaction has important implications in the etiology of complex diseases that are caused by a combination of genetic factors and environmental variables. Several authors have developed GE analysis in the context of independent subjects or longitudinal data using a gene-set. In this paper, we propose to analyze GE interaction for discrete and continuous phenotypes in family studies by incorporating the relatedness among the relatives for each family into a generalized linear mixed model (GLMM) and by using a gene-based variance component test. In addition, we deal with collinearity problems arising from linkage disequilibrium among single nucleotide polymorphisms (SNPs) by considering their coefficients as random effects under the null model estimation. We show that the best linear unbiased predictor (BLUP) of such random effects in the GLMM is equivalent to the ridge regression estimator. This equivalence provides a simple method to estimate the ridge penalty parameter in comparison to other computationally demanding estimation approaches based on cross-validation schemes. We evaluated the proposed test using simulation studies and applied it to real data from the Baependi Heart Study consisting of 76 families. Using our approach, we identified an interaction between BMI and the Peroxisome Proliferator-Activated Receptor Gamma (PPARG) gene associated with diabetes.
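    The BLUP-ridge equivalence is easy to verify numerically: with penalty λ = σ²e/σ²u, the ridge solution equals the BLUP of the random SNP effects. A sketch for the simple linear case (simulated design matrix, known variance components):

    ```python
    import numpy as np

    rng = np.random.default_rng(5)
    n, p = 100, 10
    Z = rng.normal(size=(n, p))                  # centered SNP design matrix
    sigma2_u, sigma2_e = 0.5, 1.0
    y = Z @ rng.normal(0, np.sqrt(sigma2_u), p) + rng.normal(0, np.sqrt(sigma2_e), n)

    # BLUP of the random SNP effects: u = s2u * Z' (s2u Z Z' + s2e I)^-1 y
    V = sigma2_u * Z @ Z.T + sigma2_e * np.eye(n)
    blup = sigma2_u * Z.T @ np.linalg.solve(V, y)

    # Ridge regression with penalty lambda = s2e / s2u
    lam = sigma2_e / sigma2_u
    ridge = np.linalg.solve(Z.T @ Z + lam * np.eye(p), Z.T @ y)

    print(np.allclose(blup, ridge))              # True: the estimators coincide
    ```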

  16. Influence of vegetation structure on lidar-derived canopy height and fractional cover in forested riparian buffers during leaf-off and leaf-on conditions.

    PubMed

    Wasser, Leah; Day, Rick; Chasmer, Laura; Taylor, Alan

    2013-01-01

    Estimates of canopy height (H) and fractional canopy cover (FC) derived from lidar data collected during leaf-on and leaf-off conditions are compared with field measurements from 80 forested riparian buffer plots. The purpose is to determine if existing lidar data flown in leaf-off conditions for applications such as terrain mapping can effectively estimate forested riparian buffer H and FC within a range of riparian vegetation types. Results illustrate that: 1) leaf-off and leaf-on lidar percentile estimates are similar to measured heights in all plots except those dominated by deciduous compound-leaved trees, where lidar underestimates H during leaf-off periods; 2) canopy height models (CHMs) underestimate H by a larger margin compared to percentile methods and are influenced by vegetation type (conifer needle, deciduous simple leaf or deciduous compound leaf) and canopy height variability; 3) lidar estimates of FC are within 10% of plot measurements during leaf-on periods, but are underestimated during leaf-off periods except in mixed and conifer plots; and 4) depth of laser pulse penetration lower in the canopy is more variable compared to top-of-canopy penetration, which may influence within-canopy vegetation structure estimates. This study demonstrates that leaf-off lidar data can be used to estimate forested riparian buffer canopy height within diverse vegetation conditions and fractional canopy cover within mixed and conifer forests when leaf-on lidar data are not available.

  18. Preparation of balanced trichromatic white phosphors for solid-state white lighting.

    PubMed

    Al-Waisawy, Sara; George, Anthony F; Jadwisienczak, Wojciech M; Rahman, Faiz

    2017-08-01

    High quality white light-emitting diodes (LEDs) employ multi-component phosphor mixtures to generate light of a high color rendering index (CRI). The number of distinct components in a typical phosphor mix usually ranges from two to four. Here we describe a systematic experimental technique for starting with phosphors of known chromatic properties and arriving at their respective proportions for creating a blended phosphor to produce light of the desired chromaticity. This method is applicable to both LED pumped and laser diode (LD) pumped white light sources. In this approach, the radiometric power in the down-converted luminescence of each phosphor is determined and that information is used to estimate the CIE chromaticity coordinate of light generated from the mixed phosphor. A suitable method for mixing multi-component phosphors is also described. This paper also examines the effect of light scattering particles in phosphors and their use for altering the spectral characteristics of LD- and LED-generated light. This is the only approach available for making high efficiency phosphor-converted single-color LEDs that emit light of wide spectral width. Copyright © 2016 John Wiley & Sons, Ltd.
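
    The colorimetric bookkeeping behind such a procedure can be sketched generically: each component's chromaticity (x, y) and luminance are converted to tristimulus values, which add linearly for a mixture of lights. The component values below are hypothetical, and this is a generic colorimetry sketch rather than the authors' exact procedure:

    ```python
    def xy_to_XYZ(x, y, Y):
        """Tristimulus values from chromaticity (x, y) and luminance Y."""
        X = x * Y / y
        Z = (1.0 - x - y) * Y / y
        return X, Y, Z

    def mix_chromaticity(components):
        """components: (x, y, Y) per emitter; tristimulus values add linearly."""
        Xs, Ys, Zs = zip(*(xy_to_XYZ(x, y, Y) for x, y, Y in components))
        X, Y, Z = sum(Xs), sum(Ys), sum(Zs)
        s = X + Y + Z
        return X / s, Y / s   # chromaticity of the blend

    # Hypothetical blue-pumped yellow + red phosphor blend (illustrative numbers).
    blend = mix_chromaticity([(0.15, 0.06, 8.0),    # blue leak-through
                              (0.43, 0.53, 60.0),   # yellow phosphor
                              (0.64, 0.35, 12.0)])  # red phosphor
    print(f"mixed chromaticity: x = {blend[0]:.3f}, y = {blend[1]:.3f}")
    ```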

  19. Environmental effects of interstate power trading on electricity consumption mixes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Joe Marriott; H. Scott Matthews

    2005-11-15

    Although many studies of electricity generation use national or state average generation-mix assumptions, in reality a great deal of electricity is transferred between states with very different mixes of fossil and renewable fuels, and using the average numbers could lead to incorrect conclusions. The authors create electricity consumption profiles for each state and for key industry sectors in the U.S. based on existing state generation profiles, net state power imports, industry presence by state, and an optimization model to estimate interstate electricity trading. Using these 'consumption mixes' can provide a more accurate assessment of electricity use in life-cycle analyses. It is concluded that the published generation mixes for states that import power are misleading, since the power consumed in-state has a different makeup than the power that was generated. And while most industry sectors have consumption mixes similar to the U.S. average, some of the most critical sectors of the economy - such as resource extraction and material processing - are very different. This result partly validates the average-mix assumption made in many environmental assessments, but it remains important to accurately quantify the generation methods for the electricity used when doing life-cycle analyses. 16 refs., 7 figs., 2 tabs.

  20. Enthalpy generation from mixing in hohlraum-driven targets

    NASA Astrophysics Data System (ADS)

    Amendt, Peter; Milovich, Jose

    2016-10-01

    The increase in enthalpy from the physical mixing of two initially separated materials is analytically estimated and applied to ICF implosions and gas-filled hohlraums. Pressure and temperature gradients across a classical interface are shown to be the origin of enthalpy generation from mixing. The amount of enthalpy generation is estimated to be on the order of 100 Joules for a 10-micron-scale annular mixing layer between the solid deuterium-tritium fuel and the undoped high-density carbon ablator of a NIF-scale implosion. A potential resonance is found between the mixing layer thickness and the gravitational (c_s^2/g) and temperature-gradient scale lengths, leading to elevated enthalpy generation. These results suggest that if mixing occurs in current capsule designs for the National Ignition Facility, the ignition margin may be appreciably eroded by the associated enthalpy of mixing. The degree of enthalpy generation from mixing of high-Z hohlraum wall material and low-Z gas fills is estimated to be on the order of 100 kJ or more for recent NIF-scale hohlraum experiments, which is consistent with the inferred missing energy based on observed delays in capsule implosion times. Work performed under the auspices of Lawrence Livermore National Security, LLC (LLNS) under Contract No. DE-AC52-07NA27344.

  1. Ice Cloud Optical Thickness and Extinction Estimates from Radar Measurements.

    NASA Astrophysics Data System (ADS)

    Matrosov, Sergey Y.; Shupe, Matthew D.; Heymsfield, Andrew J.; Zuidema, Paquita

    2003-11-01

    A remote sensing method is proposed to derive vertical profiles of the visible extinction coefficients in ice clouds from measurements of the radar reflectivity and Doppler velocity taken by a vertically pointing 35-GHz cloud radar. The extinction coefficient and its vertical integral, optical thickness τ, are among the fundamental cloud optical parameters that, to a large extent, determine the radiative impact of clouds. The results obtained with this method could be used as input for different climate and radiation models and for comparisons with parameterizations that relate cloud microphysical parameters and optical properties. An important advantage of the proposed method is its potential applicability to multicloud situations and mixed-phase conditions. In the latter case, it might be able to provide information on the ice component of mixed-phase clouds if the radar moments are dominated by this component. The uncertainties of radar-based retrievals of cloud visible optical thickness are estimated by comparing retrieval results with optical thicknesses obtained independently from radiometric measurements during the yearlong Surface Heat Budget of the Arctic Ocean (SHEBA) field experiment. The radiometric measurements provide a robust way to estimate τ but are applicable only to optically thin ice clouds without intervening liquid layers. The comparisons of cloud optical thicknesses retrieved from radar and from radiometer measurements indicate an uncertainty of about 77% and a bias of about -14% in the radar estimates of τ relative to radiometric retrievals. One possible explanation of the negative bias is the inherently low sensitivity of radar measurements to smaller cloud particles that still contribute noticeably to the cloud extinction. This estimate of the uncertainty is in line with simple theoretical considerations, and the associated retrieval accuracy should be considered good for a nonoptical instrument such as radar. This paper also presents relations between radar-derived characteristic cloud particle sizes and the effective sizes used in models. An average relation among τ, cloud ice water path, and the layer mean value of cloud particle characteristic size is also given. This relation is found to be in good agreement with in situ measurements. Despite the high uncertainty of radar estimates of extinction, this method is useful for many clouds where optical measurements are not available because of cloud multilayering or opaqueness.
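
    The retrieved quantity is simply the vertical integral of the extinction profile, τ = ∫ α(z) dz, so given a radar-derived profile the final step is a trapezoidal integration. A sketch with a synthetic profile:

    ```python
    import numpy as np

    z = np.arange(4000.0, 8000.0, 250.0)                 # in-cloud heights (m)
    alpha = 2e-5 * np.exp(-((z - 6000.0) / 800.0) ** 2)  # extinction (m^-1), synthetic

    # Optical thickness tau = integral of alpha dz (trapezoidal rule).
    tau = np.sum(0.5 * (alpha[1:] + alpha[:-1]) * np.diff(z))
    print(f"optical thickness tau = {tau:.3f}")
    ```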

  2. Factors influencing suspended solids concentrations in activated sludge settling tanks.

    PubMed

    Kim, Y; Pipes, W O

    1999-05-31

    A significant fraction of the total mass of sludge in an activated sludge process may be in the settling tanks if the sludge has a high sludge volume index (SVI) or when a hydraulic overload occurs during a rainstorm. Under those conditions, an accurate estimate of the amount of sludge in the settling tanks is needed in order to calculate the mean cell residence time or to determine the capacity of the settling tanks to store sludge. Determination of the amount of sludge in the settling tanks requires estimation of the average concentration of suspended solids in the layer of sludge (X_SB) in the bottom of the settling tanks. A widely used reference recommends averaging the concentrations of suspended solids in the mixed liquor (X) and in the underflow (X_u) from the settling tanks (X_SB = 0.5[X + X_u]). This method does not take into consideration other pertinent information available to an operator. This is a report of a field study whose objective was to develop a more accurate method for estimating X_SB in the bottom of the settling tanks. By correlation analysis, it was found that only 44% of the variation in the measured X_SB is related to the sum of X and X_u. X_SB is also influenced by the SVI, the zone settling velocity at X, and the overflow and underflow rates of the settling tanks. The method of averaging X and X_u tends to overestimate X_SB. A new empirical estimation technique for X_SB was developed. The technique uses dimensionless ratios: the ratio of X_SB to X_u, the ratio of the overflow rate to the sum of the underflow rate and the initial settling velocity of the mixed liquor, and sludge compaction expressed as a ratio (dimensionless SVI). The empirical model is compared with the method of averaging X and X_u for the entire range of sludge depths in the settling tanks and for SVI values between 100 and 300 ml/g. Since the empirical model uses dimensionless ratios, the regression parameters are also dimensionless and the model can be readily adopted for other activated sludge processes. A simplified version of the empirical model provides an estimate of X_SB as a function of X, X_u, and SVI and can be used by an operator when flow conditions are normal. Copyright 1999 Elsevier Science B.V.

  3. Mixed model approaches for diallel analysis based on a bio-model.

    PubMed

    Zhu, J; Weir, B S

    1996-12-01

    A MINQUE(1) procedure, which is the minimum norm quadratic unbiased estimation (MINQUE) method with 1 for all prior values, is suggested for estimating variance and covariance components in a bio-model for diallel crosses. Unbiasedness and efficiency of estimation were compared for MINQUE(1), restricted maximum likelihood (REML), and MINQUE(θ), which uses the parameter values as the prior values. MINQUE(1) is almost as efficient as MINQUE(θ) for unbiased estimation of genetic variance and covariance components. The bio-model is efficient and robust for estimating variance and covariance components for maternal and paternal effects as well as for nuclear effects. A procedure of adjusted unbiased prediction (AUP) is proposed for predicting random genetic effects in the bio-model. The jackknife procedure is suggested for estimating the sampling variances of the estimated variance and covariance components and of the predicted genetic effects. Worked examples are given for estimation of variance and covariance components and for prediction of genetic merits.

  4. Absolute calibration of the Jenoptik CHM15k-x ceilometer and its applicability for quantitative aerosol monitoring

    NASA Astrophysics Data System (ADS)

    Geiß, Alexander; Wiegner, Matthias

    2014-05-01

    The knowledge of the spatiotemporal distribution of atmospheric aerosols and their optical characterization is essential for understanding the radiation budget, air quality, and climate. For this purpose, lidar is an excellent technique, as it is an active remote sensing method. As multi-wavelength research lidars with depolarization channels are complex and costly, increasing attention is paid to so-called ceilometers: simple one-wavelength backscatter lidars with low pulse energy for eye-safe operation. Because maintenance costs are low and continuous, unattended measurements can be performed, they are suitable for long-term aerosol monitoring in a network. However, the signal-to-noise ratio is low and the signals are not calibrated. The only optical property that can be derived from a ceilometer is the particle backscatter coefficient, and even this quantity requires a calibration of the signals. Using four years of measurements from a Jenoptik ceilometer CHM15k-x, we developed two methods for an absolute calibration of this system. The advantage of our approach is that only a few days with favorable meteorological conditions are required, during which Rayleigh calibration and comparison with our research lidar are possible to estimate the lidar constant. This method enables us to derive the particle backscatter coefficient at 1064 nm, and we retrieved for the first time profiles in near real time within an accuracy of 10%. If an appropriate lidar ratio is assumed, the aerosol optical depth of, e.g., the mixing layer can be determined with an accuracy depending on the accuracy of the lidar-ratio estimate. Even for 'simple' applications, e.g. assessment of the mixing layer height, cloud detection, or detection of elevated aerosol layers, the particle backscatter coefficient has significant advantages over the measured (uncalibrated) attenuated backscatter. The possibility of continuous operation under nearly any meteorological condition, with a temporal resolution on the order of 30 seconds, also makes it possible to apply time-height tracking methods for detecting mixing layer heights. The combination of edge-detection methods (e.g. wavelet covariance transform, gradient method, variance method) and edge-tracking techniques is used to increase the reliability of the layer detection and attribution. Thus, a feature mask of aerosols and clouds can be derived. Four years of measurements constitute an excellent basis for a climatology, including a homogeneous time series of mixing layer heights, aerosol layers, and cloud-base heights in the troposphere. With the low overlap region of 180 m of the Jenoptik CHM15k-x, even very narrow mixing layers, typical of winter conditions, can be detected.
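
    The wavelet covariance transform mentioned among the edge-detection methods convolves the backscatter profile with a Haar function of dilation a; the height where the transform peaks marks the strongest backscatter decrease, i.e. the aerosol-layer top. A minimal sketch on a synthetic profile:

    ```python
    import numpy as np

    def haar_wct(z, profile, a):
        """Wavelet covariance transform W(b) = (1/a) * sum_z f(z) h((z-b)/a) dz,
        with a Haar function of dilation a; peaks mark sharp backscatter drops."""
        dz = z[1] - z[0]
        w = np.zeros_like(profile)
        for i, b in enumerate(z):
            h = np.where(np.abs(z - b) <= a / 2, np.where(z <= b, 1.0, -1.0), 0.0)
            w[i] = np.sum(profile * h) * dz / a
        return w

    z = np.arange(0.0, 3000.0, 15.0)
    # Synthetic attenuated backscatter: aerosol-laden mixed layer below ~1200 m.
    rng = np.random.default_rng(2)
    profile = 1.0 / (1.0 + np.exp((z - 1200.0) / 60.0)) + 0.02 * rng.normal(size=z.size)

    wct = haar_wct(z, profile, a=300.0)
    print(f"estimated mixing layer height: {z[np.argmax(wct)]:.0f} m")
    ```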

  5. Comparison of two approaches to quantify anthropogenic CO2 in the ocean: Results from the northern Indian Ocean

    NASA Astrophysics Data System (ADS)

    Coatanoan, C.; Goyet, C.; Gruber, N.; Sabine, C. L.; Warner, M.

    2001-03-01

    This study compares two recent estimates of anthropogenic CO2 in the northern Indian Ocean along the World Ocean Circulation Experiment cruise II [Goyet et al., 1999; Sabine et al., 1999]. These two studies employed two different approaches to separate the anthropogenic CO2 signal from the large natural background variability. Sabine et al. [1999] used the ΔC* approach first described by Gruber et al. [1996], whereas Goyet et al. [1999] used an optimum multiparameter mixing analysis referred to as the MIX approach. Both approaches make use of similar assumptions in order to remove variations due to remineralization of organic matter and the dissolution of calcium carbonates (biological pumps). However, the two approaches use very different hypotheses in order to account for variations due to physical processes including mixing and the CO2 solubility pump. Consequently, substantial differences exist in the upper thermocline approximately between 200 and 600 m. Anthropogenic CO2 concentrations estimated using the ΔC* approach average 12 ± 4 μmol kg-1 higher in this depth range than concentrations estimated using the MIX approach. Below ˜800 m, the MIX approach estimates slightly higher anthropogenic CO2 concentrations and a deeper vertical penetration. Despite this compensatory effect, water column inventories estimated in the 0-3000 m depth range by the ΔC* approach are generally ˜20% higher than those estimated by the MIX approach, with this difference being statistically significant beyond the 0.001 level. We examine possible causes for these differences and identify a number of critical additional measurements that will make it possible to discriminate better between the two approaches.

  6. Intercomparison and validation of the mixed layer depth fields of global ocean syntheses

    NASA Astrophysics Data System (ADS)

    Toyoda, Takahiro; Fujii, Yosuke; Kuragano, Tsurane; Kamachi, Masafumi; Ishikawa, Yoichi; Masuda, Shuhei; Sato, Kanako; Awaji, Toshiyuki; Hernandez, Fabrice; Ferry, Nicolas; Guinehut, Stéphanie; Martin, Matthew J.; Peterson, K. Andrew; Good, Simon A.; Valdivieso, Maria; Haines, Keith; Storto, Andrea; Masina, Simona; Köhl, Armin; Zuo, Hao; Balmaseda, Magdalena; Yin, Yonghong; Shi, Li; Alves, Oscar; Smith, Gregory; Chang, You-Soon; Vernieres, Guillaume; Wang, Xiaochun; Forget, Gael; Heimbach, Patrick; Wang, Ou; Fukumori, Ichiro; Lee, Tong

    2017-08-01

    Intercomparison and evaluation of the global ocean surface mixed layer depth (MLD) fields estimated from a suite of major ocean syntheses are conducted. Compared with the reference MLDs calculated from individual profiles, MLDs calculated from monthly mean and gridded profiles show negative biases of 10-20 m in early spring related to the re-stratification process of relatively deep mixed layers. Vertical resolution of profiles also influences the MLD estimation. MLDs are underestimated by approximately 5-7 (14-16) m with the vertical resolution of 25 (50) m when the criterion of potential density exceeding the 10-m value by 0.03 kg m-3 is used for the MLD estimation. Using the larger criterion (0.125 kg m-3) generally reduces the underestimations. In addition, positive biases greater than 100 m are found in wintertime subpolar regions when MLD criteria based on temperature are used. Biases of the reanalyses are due to both model errors and errors related to differences between the assimilation methods. The result shows that these errors are partially cancelled out through the ensemble averaging. Moreover, the bias in the ensemble mean field of the reanalyses is smaller than in the observation-only analyses. This is largely attributed to comparably higher resolutions of the reanalyses. The robust reproduction of both the seasonal cycle and interannual variability by the ensemble mean of the reanalyses indicates a great potential of the ensemble mean MLD field for investigating and monitoring upper ocean processes.
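
    Applying the density-threshold criterion to a single profile is straightforward: the MLD is the shallowest depth at which potential density exceeds its 10-m value by the criterion (0.03 kg m^-3 below). A sketch on a synthetic profile:

    ```python
    import numpy as np

    def mld_density_threshold(depth, sigma, threshold=0.03, ref_depth=10.0):
        """Shallowest depth where potential density exceeds the 10-m value
        by `threshold` (kg m^-3)."""
        sigma_ref = np.interp(ref_depth, depth, sigma)
        exceeds = (depth > ref_depth) & (sigma > sigma_ref + threshold)
        return depth[exceeds][0] if exceeds.any() else depth[-1]

    depth = np.arange(0.0, 500.0, 5.0)
    # Synthetic profile: uniform density to 80 m, then a linear pycnocline.
    sigma = 25.0 + 0.01 * np.maximum(depth - 80.0, 0.0)
    print(f"MLD = {mld_density_threshold(depth, sigma):.0f} m")
    ```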

  7. Lake Number, a quantitative indicator of mixing used to estimate changes in dissolved oxygen

    USGS Publications Warehouse

    Robertson, Dale M.; Imberger, Jorg

    1994-01-01

    Lake Number, LN, values are shown to be quantitative indicators of deep mixing in lakes and reservoirs that can be used to estimate changes in deep water dissolved oxygen (DO) concentrations. LN is a dimensionless parameter defined as the ratio of the moments about the center of volume of the water body, of the stabilizing force of gravity associated with density stratification to the destabilizing forces supplied by wind, cooling, inflow, outflow, and other artificial mixing devices. To demonstrate the universality of this parameter, LN values are used to describe the extent of deep mixing and are compared with changes in DO concentrations in three reservoirs in Australia and four lakes in the U.S.A., which vary in productivity and mixing regimes. A simple model is developed which relates changes in LN values, i.e., the extent of mixing, to changes in near bottom DO concentrations. After calibrating the model for a specific system, it is possible to use real-time LN values, calculated using water temperature profiles and surface wind velocities, to estimate changes in DO concentrations (assuming unchanged trophic conditions).

  8. Turbulent mixing and removal of ozone within an Amazon rainforest canopy

    NASA Astrophysics Data System (ADS)

    Freire, L. S.; Gerken, T.; Ruiz-Plancarte, J.; Wei, D.; Fuentes, J. D.; Katul, G. G.; Dias, N. L.; Acevedo, O. C.; Chamecki, M.

    2017-03-01

    Simultaneous profiles of turbulence statistics and mean ozone mixing ratio are used to establish a relation between eddy diffusivity and ozone mixing within the Amazon forest. A one-dimensional diffusion model is proposed and used to infer mixing time scales from the eddy diffusivity profiles. Data and model results indicate that during daytime conditions, the upper (lower) half of the canopy is well (partially) mixed most of the time and that most of the vertical extent of the forest can be mixed in less than an hour. During nighttime, most of the canopy is predominantly poorly mixed, except for periods with bursts of intermittent turbulence. Even though turbulence is faster than chemistry during daytime, both processes have comparable time scales in the lower canopy layers during nighttime conditions. Nonchemical loss time scales (associated with stomatal uptake and dry deposition) for the entire forest are comparable to turbulent mixing time scale in the lower canopy during the day and in the entire canopy during the night, indicating a tight coupling between turbulent transport and dry deposition and stomatal uptake processes. Because of the significant time of day and height variability of the turbulent mixing time scale inside the canopy, it is important to take it into account when studying chemical and biophysical processes happening in the forest environment. The method proposed here to estimate turbulent mixing time scales is a reliable alternative to currently used models, especially for situations in which the vertical distribution of the time scale is relevant.
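
    The inferred time scales follow the usual diffusive scaling: a layer of depth L with eddy diffusivity K homogenizes on a time of order L²/K. A back-of-envelope sketch (the diffusivities below are assumed for illustration, not the paper's profiles):

    ```python
    # Diffusive mixing time scale tau ~ L^2 / K for a canopy layer.
    def mixing_time_minutes(L, K):
        """L: layer depth (m); K: eddy diffusivity (m^2/s)."""
        return L**2 / K / 60.0

    # Illustrative values: 35 m canopy, daytime vs. nighttime diffusivities.
    for label, K in [("daytime", 1.0), ("nighttime", 0.05)]:
        print(f"{label}: ~{mixing_time_minutes(35.0, K):.0f} min to mix the canopy")
    ```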

  9. Optimal estimation of two-qubit pure-state entanglement

    NASA Astrophysics Data System (ADS)

    Acín, Antonio; Tarrach, Rolf; Vidal, Guifré

    2000-06-01

    We present optimal measuring strategies for an estimation of the entanglement of unknown two-qubit pure states and of the degree of mixing of unknown single-qubit mixed states, of which N identical copies are available. The most general measuring strategies are considered in both situations, to conclude in the first case that a local, although collective, measurement suffices to estimate entanglement, a nonlocal property, optimally.

  10. Lithium and age of pre-main sequence stars: the case of Parenago 1802

    NASA Astrophysics Data System (ADS)

    Giarrusso, M.; Tognelli, E.; Catanzaro, G.; Degl'Innocenti, S.; Dell'Omodarme, M.; Lamia, L.; Leone, F.; Pizzone, R. G.; Prada Moroni, P. G.; Romano, S.; Spitaleri, C.

    2016-04-01

    With the aim of testing the present capability of the stellar surface lithium abundance to provide an age estimate for PMS stars, we analyze the case of the detached, double-lined, eclipsing binary system PAR 1802. For this system, the lithium age has been compared with the theoretical one, as estimated by applying a Bayesian analysis method to a large grid of stellar evolutionary models. The models have been computed for several values of the chemical composition and mixing length by means of the FRANEC code, updated with the Trojan Horse reaction rates involving lithium burning.

  11. Estimates of evapotranspiration from the Ruby Lake National Wildlife Refuge area, Ruby Valley, northeastern Nevada, May 1999-October 2000

    USGS Publications Warehouse

    Berger, David L.; Johnson, Michael J.; Tumbusch, Mary L.; Mackay, Jeffrey

    2001-01-01

    The Ruby Lake National Wildlife Refuge in Ruby Valley, Nevada, contains the largest area of perennial wetlands in northeastern Nevada and provides habitat to a large number of migratory and nesting waterfowl. The long-term preservation of the refuge depends on the availability of sufficient water to maintain optimal habitat conditions. In the Ruby Valley water budget, evapotranspiration (ET) from the refuge is one of the largest components of natural outflow. To help determine the amount of inflow needed to maintain wetland habitat, estimates of ET for May 1999 through October 2000 were made at major habitats throughout the refuge. The Bowen-ratio method was used to estimate daily ET at four sites: over open water, in a moderate-to-dense cover of bulrush marsh, in a moderate cover of mixed phreatophytic shrubs, and in a desert-shrub upland. The eddy-correlation method was used to estimate daily ET for periods of 2 to 12 weeks at a meadow site and at four sites in a sparse-to-moderate cover of phreatophytic shrubs. Daily ET rates ranged from less than 0.010 inch per day at all of the sites to a maximum of 0.464 inch per day at the open-water site. Average daily ET rates estimated for open water and a bulrush marsh were about four to five times greater than in areas of mixed phreatophytic shrubs, where the depth to ground water is less than 5 feet. Based on the seasonal distribution of major habitats in the refuge and on winter and summer ET rates, an estimated total of about 89,000 acre-feet of water was consumed by ET during October 1999-September 2000 (2000 water year). Of this total, about 49,800 acre-feet was consumed by ET in areas of open water and bulrush marsh.
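
    The Bowen-ratio energy-balance partitioning underlying such daily ET estimates is compact enough to sketch: with the Bowen ratio β = H/λE derived from vertical temperature and vapor-pressure gradients, the latent heat flux is λE = (R_n − G)/(1 + β). The numbers below are illustrative, not values from the study:

    ```python
    def bowen_ratio_et(Rn, G, beta, lambda_v=2.45e6):
        """Latent heat flux (W m^-2) and evaporation rate (mm/day) from the
        Bowen-ratio energy balance: LE = (Rn - G) / (1 + beta)."""
        LE = (Rn - G) / (1.0 + beta)
        mm_per_day = LE / lambda_v * 86400.0   # kg m^-2 s^-1 equals mm/s of water
        return LE, mm_per_day

    # Illustrative midday values for an open-water site (assumed, not measured).
    LE, et = bowen_ratio_et(Rn=450.0, G=50.0, beta=0.3)
    print(f"LE = {LE:.0f} W/m^2, ET ~ {et:.1f} mm/day")
    ```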

  12. surrosurv: An R package for the evaluation of failure time surrogate endpoints in individual patient data meta-analyses of randomized clinical trials.

    PubMed

    Rotolo, Federico; Paoletti, Xavier; Michiels, Stefan

    2018-03-01

    Surrogate endpoints are attractive for use in clinical trials instead of well-established endpoints because of practical convenience. To validate a surrogate endpoint, two important measures can be estimated in a meta-analytic context when individual patient data are available: the R²_indiv or Kendall's τ at the individual level, and the R²_trial at the trial level. We aimed to provide an R implementation of classical and well-established as well as more recent statistical methods for surrogacy assessment with failure time endpoints. We also intended to incorporate utilities for model checking and visualization, and the data-generating methods described in the literature to date. In the case of failure time endpoints, the classical approach is based on two steps. First, Kendall's τ is estimated as a measure of individual-level surrogacy using a copula model. Then, R²_trial is computed via a linear regression of the estimated treatment effects; at this second step, the estimation uncertainty can be accounted for via a measurement-error model or via weights. In addition to the classical approach, we recently developed an approach based on bivariate auxiliary Poisson models, with individual random effects to measure Kendall's τ and treatment-by-trial interactions to measure R²_trial. The most common data simulation models described in the literature are based on copula models, mixed proportional hazard models, and mixtures of half-normal and exponential random variables. The R package surrosurv implements the classical two-step method with Clayton, Plackett, and Hougaard copulas. It also allows the second-step linear regression to be optionally adjusted for measurement error. The mixed Poisson approach is implemented with several reduced models in addition to the full model. We present the package functions for estimating the surrogacy models, checking their convergence, performing leave-one-trial-out cross-validation, and plotting the results. We illustrate their use in practice on individual patient data from a meta-analysis of 4069 patients with advanced gastric cancer from 20 trials of chemotherapy. The surrosurv package provides an R implementation of classical and recent statistical methods for surrogacy assessment of failure time endpoints. Flexible simulation functions are available to generate data according to the methods described in the literature. Copyright © 2017 Elsevier B.V. All rights reserved.

  13. Efficient Moment-Based Inference of Admixture Parameters and Sources of Gene Flow

    PubMed Central

    Levin, Alex; Reich, David; Patterson, Nick; Berger, Bonnie

    2013-01-01

    The recent explosion in available genetic data has led to significant advances in understanding the demographic histories of and relationships among human populations. It is still a challenge, however, to infer reliable parameter values for complicated models involving many populations. Here, we present MixMapper, an efficient, interactive method for constructing phylogenetic trees including admixture events using single nucleotide polymorphism (SNP) genotype data. MixMapper implements a novel two-phase approach to admixture inference using moment statistics, first building an unadmixed scaffold tree and then adding admixed populations by solving systems of equations that express allele frequency divergences in terms of mixture parameters. Importantly, all features of the model, including topology, sources of gene flow, branch lengths, and mixture proportions, are optimized automatically from the data and include estimates of statistical uncertainty. MixMapper also uses a new method to express branch lengths in easily interpretable drift units. We apply MixMapper to recently published data for Human Genome Diversity Cell Line Panel individuals genotyped on a SNP array designed especially for use in population genetics studies, obtaining confident results for 30 populations, 20 of them admixed. Notably, we confirm a signal of ancient admixture in European populations—including previously undetected admixture in Sardinians and Basques—involving a proportion of 20–40% ancient northern Eurasian ancestry. PMID:23709261
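
    In its simplest form, the second phase — solving for mixture proportions from allele-frequency relationships — reduces to least squares: if an admixed population C derives from sources A and B, then p_C ≈ α·p_A + (1 − α)·p_B across SNPs. The sketch below is a drastic simplification of MixMapper's moment-based machinery, using synthetic frequencies:

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    n_snps = 5000
    pA = rng.uniform(0.05, 0.95, n_snps)    # source population allele frequencies
    pB = rng.uniform(0.05, 0.95, n_snps)
    alpha_true = 0.3
    pC = alpha_true * pA + (1 - alpha_true) * pB + rng.normal(0, 0.01, n_snps)

    # pC - pB = alpha * (pA - pB): regress through the origin for alpha.
    x, y = pA - pB, pC - pB
    alpha_hat = (x @ y) / (x @ x)
    print(f"estimated mixture proportion: {alpha_hat:.3f}")
    ```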

  14. Estimating the numerical diapycnal mixing in the GO5.0 ocean model

    NASA Astrophysics Data System (ADS)

    Megann, Alex; Nurser, George

    2014-05-01

    Constant-depth (or "z-coordinate") ocean models such as MOM and NEMO have become the de facto workhorse in climate applications and have reached a mature, well-understood stage of development. A generic shortcoming of this model type, however, is a tendency for the advection scheme to produce unphysical numerical diapycnal mixing, which in some cases may exceed the explicitly parameterised mixing based on observed physical processes (e.g. Hofmann and Maqueda, 2006); this is likely to affect the long-timescale evolution of the simulated climate system. Despite this, few quantitative estimates have been made of the typical magnitude of the effective diapycnal diffusivity due to numerical mixing in these models. GO5.0 is the latest ocean model configuration developed jointly by the UK Met Office and the National Oceanography Centre (Megann et al., 2013). It uses version 3.4 of the NEMO model on the ORCA025 global tripolar grid. Two approaches to quantifying the numerical diapycnal mixing in this model are described: the first is based on the isopycnal watermass analysis of Lee et al. (2002), while the second uses a passive tracer to diagnose mixing across density surfaces. Results from these two methods are compared and contrasted. Hofmann, M. and Maqueda, M. A. M., 2006. Performance of a second-order moments advection scheme in an ocean general circulation model. JGR-Oceans, 111(C5). Lee, M.-M., Coward, A. C., Nurser, A. G., 2002. Spurious diapycnal mixing of deep waters in an eddy-permitting global ocean model. JPO 32, 1522-1535. Megann, A., Storkey, D., Aksenov, Y., Alderson, S., Calvert, D., Graham, T., Hyder, P., Siddorn, J., and Sinha, B., 2013. GO5.0: The joint NERC-Met Office NEMO global ocean model for use in coupled and forced applications, Geosci. Model Dev. Discuss., 6, 5747-5799.
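
    The phenomenon being diagnosed is easy to reproduce in one dimension: a first-order upwind advection scheme smears a sharp tracer front with an effective diffusivity K_num ≈ u·Δx·(1 − c)/2, where c is the Courant number, even though no explicit mixing is prescribed. A toy sketch, unrelated to NEMO's actual advection scheme:

    ```python
    import numpy as np

    nx, dx, u, dt = 200, 1.0, 1.0, 0.4      # grid size, spacing, velocity, time step
    c = u * dt / dx                         # Courant number (stable for c <= 1)
    tracer = np.where(np.arange(nx) < nx // 2, 1.0, 0.0)   # sharp front

    for _ in range(200):                    # first-order upwind advection
        tracer[1:] = tracer[1:] - c * (tracer[1:] - tracer[:-1])

    k_num = 0.5 * u * dx * (1.0 - c)        # implied numerical diffusivity
    front_width = np.sum((tracer > 0.05) & (tracer < 0.95)) * dx
    print(f"K_num = {k_num:.2f} m^2/s; front smeared over ~{front_width:.0f} m")
    ```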

  15. The Box-Cox power transformation on nursing sensitive indicators: Does it matter if structural effects are omitted during the estimation of the transformation parameter?

    PubMed Central

    2011-01-01

    Background Many nursing and health related research studies have continuous outcome measures that are inherently non-normal in distribution. The Box-Cox transformation provides a powerful tool for developing a parsimonious model for data representation and interpretation when the distribution of the dependent variable, or outcome measure of interest, deviates from the normal distribution. The objective of this study was to contrast the effect of obtaining the Box-Cox power transformation parameter and the subsequent analysis of variance with or without a priori knowledge of predictor variables under the classic linear or linear mixed model settings. Methods Simulated data from a 3 × 4 factorial treatment design, along with the Patient Falls and Patient Injury Falls measures from the National Database of Nursing Quality Indicators (NDNQI®) for the 3rd quarter of 2007 from a convenience sample of over one thousand US hospitals, were analyzed. The effect of the nonlinear monotonic transformation was contrasted in two ways: a) estimating the transformation parameter along with the factors with potential structural effects, and b) estimating the transformation parameter first and then conducting analysis of variance for the structural effects. Results Linear model ANOVA with Monte Carlo simulation, and mixed models with correlated error terms applied to the NDNQI examples, showed no substantial differences in the statistical tests for structural effects when the factors with structural effects were omitted during estimation of the transformation parameter. Conclusions The Box-Cox power transformation can still be an effective tool for validating statistical inferences in large observational, cross-sectional, and hierarchical or repeated measures studies under the linear or mixed model settings without prior knowledge of all the factors with potential structural effects. PMID:21854614
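
    For a single response vector, the transformation parameter is typically estimated by profile maximum likelihood, which SciPy exposes directly; this generic sketch omits the mixed-model structural effects studied in the paper:

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(4)
    y = rng.lognormal(mean=1.0, sigma=0.6, size=500)   # skewed, strictly positive

    y_bc, lam = stats.boxcox(y)                        # MLE of the Box-Cox lambda
    print(f"estimated lambda = {lam:.2f}")
    print(f"skewness before: {stats.skew(y):.2f}, after: {stats.skew(y_bc):.2f}")
    ```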

  16. Modified fast frequency acquisition via adaptive least squares algorithm

    NASA Technical Reports Server (NTRS)

    Kumar, Rajendra (Inventor)

    1992-01-01

    A method and the associated apparatus for estimating the amplitude, frequency, and phase of a signal of interest are presented. The method comprises the following steps: (1) inputting the signal of interest; (2) generating a reference signal with adjustable amplitude, frequency and phase at an output thereof; (3) mixing the signal of interest with the reference signal and a signal 90 deg out of phase with the reference signal to provide a pair of quadrature sample signals comprising respectively a difference between the signal of interest and the reference signal and a difference between the signal of interest and the signal 90 deg out of phase with the reference signal; (4) using the pair of quadrature sample signals to compute estimates of the amplitude, frequency, and phase of an error signal comprising the difference between the signal of interest and the reference signal employing a least squares estimation; (5) adjusting the amplitude, frequency, and phase of the reference signal from the numerically controlled oscillator in a manner which drives the error signal towards zero; and (6) outputting the estimates of the amplitude, frequency, and phase of the error signal in combination with the reference signal to produce a best estimate of the amplitude, frequency, and phase of the signal of interest. The preferred method includes the step of providing the error signal as a real time confidence measure as to the accuracy of the estimates wherein the closer the error signal is to zero, the higher the probability that the estimates are accurate. A matrix in the estimation algorithm provides an estimate of the variance of the estimation error.

  17. Assessing the feasibility of community health insurance in Uganda: A mixed-methods exploratory analysis.

    PubMed

    Biggeri, M; Nannini, M; Putoto, G

    2018-03-01

    Community health insurance (CHI) aims to provide financial protection and facilitate health care access among poor rural populations. Given common operational challenges that hamper the full development of such schemes, there is a need for systematic feasibility studies; these are scarce in the literature and usually do not provide a comprehensive analysis of the local context. The present research adopts a mixed-methods approach to assess ex ante the feasibility of CHI. In particular, eight preconditions are proposed to inform the viability of introducing the microinsurance. A case study located in rural northern Uganda is presented to test the effectiveness of the mixed-methods procedure for the feasibility purpose. A household survey covering 180 households, 8 structured focus group discussions, and 40 key informant interviews were performed between October and December 2016 in order to provide a complete and integrated analysis of the feasibility preconditions. Through the data collected at the household level, the population's health-seeking behaviours and the potential insurance design were examined; econometric analyses were carried out to investigate the perception of health as a priority need and the willingness to pay for the scheme. The latter component, in particular, was analysed through a contingent valuation method. The results validated the relevant feasibility preconditions. Econometric estimates demonstrated that awareness of catastrophic health expenditures and the distance to the hospital have a critical influence on household priorities and willingness to pay. Willingness is also significantly affected by socio-economic status and basic knowledge of insurance principles. Overall, the mixed-methods investigation showed that a comprehensive feasibility analysis can shape a viable CHI model to be implemented in the local context. Copyright © 2018 Elsevier Ltd. All rights reserved.

  18. Dakota, a multilevel parallel object-oriented framework for design optimization, parameter estimation, uncertainty quantification, and sensitivity analysis :

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Adams, Brian M.; Ebeida, Mohamed Salah; Eldred, Michael S.

    The Dakota (Design Analysis Kit for Optimization and Terascale Applications) toolkit provides a flexible and extensible interface between simulation codes and iterative analysis methods. Dakota contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, reliability, and stochastic expansion methods; parameter estimation with nonlinear least squares methods; and sensitivity/variance analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the Dakota toolkit provides a flexible and extensible problem-solving environment for design and performance analysis of computational models on high performance computers. This report serves as a user's manual for the Dakota software and provides capability overviews and procedures for software execution, as well as a variety of example studies.

  19. Weighted analysis of paired microarray experiments.

    PubMed

    Kristiansson, Erik; Sjögren, Anders; Rudemo, Mats; Nerman, Olle

    2005-01-01

    In microarray experiments, quality often varies, for example between samples and between arrays, so the need for quality control is strong. A statistical model and a corresponding analysis method are suggested for experiments with pairing, including designs with individuals observed before and after treatment and many experiments with two-colour spotted arrays. The model is of mixed type, with some parameters estimated by an empirical Bayes method. Differences in quality are modelled by individual variances and by correlations between repetitions. The method is applied to three real and several simulated datasets. Two of the real datasets are of Affymetrix type, with patients profiled before and after treatment, and the third dataset is of two-colour spotted cDNA type. In all cases, the patients or arrays had different estimated variances, leading to distinctly unequal weights in the analysis. We also suggest plots which illustrate the variances and correlations that affect the weights computed by our analysis method. For simulated data, the improvement relative to previously published methods without weighting is shown to be substantial.
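
    The practical effect of the estimated per-patient variances is essentially inverse-variance weighting: noisier arrays contribute less to the combined effect. A stripped-down sketch of that weighting step (not the paper's empirical Bayes model), with made-up numbers:

    ```python
    import numpy as np

    # Log-ratios for one gene across 5 patients, with per-patient variance estimates.
    d = np.array([0.8, 1.1, 0.2, 0.9, 3.0])      # observed before/after differences
    var = np.array([0.1, 0.1, 0.1, 0.1, 2.5])    # patient 5 is low quality

    w = 1.0 / var                                # inverse-variance weights
    est = np.sum(w * d) / np.sum(w)
    se = np.sqrt(1.0 / np.sum(w))
    print(f"weighted effect = {est:.2f} +/- {se:.2f} "
          f"(unweighted mean = {d.mean():.2f})")
    ```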

  20. Observing and Simulating Diapycnal Mixing in the Canadian Arctic Archipelago

    NASA Astrophysics Data System (ADS)

    Hughes, K.; Klymak, J. M.; Hu, X.; Myers, P. G.; Williams, W. J.; Melling, H.

    2016-12-01

    High-spatial-resolution observations in the central Canadian Arctic Archipelago are analysed in conjunction with process-oriented modelling to estimate the flow pathways among the constricted waterways, understand the nature of the hydraulic control(s), and assess the influence of smaller scale (metres to kilometres) phenomena such as internal waves and topographically induced eddies. The observations repeatedly display isopycnal displacements of 50 m as dense water plunges over a sill. Depth-averaged turbulent dissipation rates near the sill estimated from these observations are typically 10-6-10-5 W kg-1, a range that is three orders of magnitude larger than that for the open ocean. These and other estimates are compared against a 1/12° basin-scale model from which we estimate diapycnal mixing rates using a volume-integrated advection-diffusion equation. Much of the mixing in this simulation is concentrated near constrictions within Barrow Strait and Queens Channel, the latter being our observational site. This suggests the model is capable of capturing topographically induced mixing. However, such mixing is expected to be enhanced in the presence of tides, a process not included in our basin scale simulation or other similar models. Quantifying this enhancement is another objective of our process-oriented modelling.

  1. Bayesian data fusion for spatial prediction of categorical variables in environmental sciences

    NASA Astrophysics Data System (ADS)

    Gengler, Sarah; Bogaert, Patrick

    2014-12-01

    First developed to predict continuous variables, Bayesian Maximum Entropy (BME) has become a complete framework in the context of space-time prediction, since it has been extended to predict categorical variables and mixed random fields. This method proposes solutions for combining several sources of data whatever the nature of the information. However, the various attempts made to adapt the BME methodology to categorical variables and mixed random fields faced some limitations, such as a high computational burden. The main objective of this paper is to overcome this limitation by generalizing the Bayesian Data Fusion (BDF) theoretical framework to categorical variables, which is in effect a simplification of the BME method through the convenient conditional independence hypothesis. The BDF methodology for categorical variables is first described and then applied to a practical case study: the estimation of soil drainage classes using a soil map and point observations in the sandy area of Flanders around the city of Mechelen (Belgium). The BDF approach is compared to BME along with more classical approaches, such as indicator cokriging (ICK) and logistic regression. Estimators are compared using various indicators, namely the Percentage of Correctly Classified locations (PCC) and the Average Highest Probability (AHP). Although the BDF methodology for categorical variables is a simplification of the BME approach, both methods lead to similar results and have strong advantages compared to ICK and logistic regression.
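
    For categorical prediction, the 'convenient conditional independence hypothesis' that distinguishes BDF from full BME amounts to a naive-Bayes-style product of per-source likelihoods. A schematic sketch with hypothetical drainage classes and likelihoods, not the paper's spatial model:

    ```python
    import numpy as np

    classes = ["well", "moderate", "poor"]       # hypothetical drainage classes
    prior = np.array([0.5, 0.3, 0.2])            # e.g. from the soil map

    # Per-source likelihoods p(data_i | class), assumed conditionally independent.
    lik_soilmap = np.array([0.7, 0.2, 0.1])
    lik_points = np.array([0.4, 0.4, 0.2])

    post = prior * lik_soilmap * lik_points      # fusion under cond. independence
    post /= post.sum()
    for c, p in zip(classes, post):
        print(f"P({c} | all sources) = {p:.3f}")
    ```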

  2. Spatial scan statistics for detection of multiple clusters with arbitrary shapes.

    PubMed

    Lin, Pei-Sheng; Kung, Yi-Hung; Clayton, Murray

    2016-12-01

    In applying scan statistics for public health research, it would be valuable to develop a detection method for multiple clusters that accommodates spatial correlation and covariate effects in an integrated model. In this article, we connect the concepts of the likelihood ratio (LR) scan statistic and the quasi-likelihood (QL) scan statistic to provide a series of detection procedures sufficiently flexible to apply to clusters of arbitrary shape. First, we use an independent scan model for detection of clusters and then a variogram tool to examine the existence of spatial correlation and regional variation based on residuals of the independent scan model. When the estimate of regional variation is significantly different from zero, a mixed QL estimating equation is developed to estimate coefficients of geographic clusters and covariates. We use the Benjamini-Hochberg procedure (1995) to find a threshold for p-values to address the multiple testing problem. A quasi-deviance criterion is used to regroup the estimated clusters to find geographic clusters with arbitrary shapes. We conduct simulations to compare the performance of the proposed method with other scan statistics. For illustration, the method is applied to enterovirus data from Taiwan. © 2016, The International Biometric Society.
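
    The Benjamini-Hochberg step used to threshold the cluster p-values is standard: sort the m p-values and reject up to the largest k with p_(k) ≤ (k/m)·q. A minimal sketch:

    ```python
    import numpy as np

    def bh_threshold(pvals, q=0.05):
        """P-value cutoff of the Benjamini-Hochberg procedure at FDR level q."""
        p = np.sort(np.asarray(pvals))
        m = p.size
        below = p <= q * np.arange(1, m + 1) / m
        return p[below].max() if below.any() else 0.0

    pvals = [0.001, 0.008, 0.039, 0.041, 0.042, 0.06, 0.074, 0.205, 0.212, 0.9]
    cut = bh_threshold(pvals, q=0.05)
    print(f"reject clusters with p <= {cut}")
    ```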

  3. Hypogeous ectomycorrhizal fungal species on roots and in small mammal diet in a mixed-conifer forest

    Treesearch

    Antonio D. Izzo; Marc Meyer; James M. Trappe; Malcolm North; Thomas D. Bruns

    2005-01-01

    The purpose of this study was to estimate the portion of an ectomycorrhizal (ECM) fungi root community with a hypogeous fruiting habit. We used molecular methods (DNA sequence analysis of the internally transcribed spacer [ITS] region of rDNA) to compare three viewpoints: ECM fungi on the roots in a southern Sierra Nevada Abies-dominated old-growth...

  4. Longitudinal models of reading achievement of students with learning disabilities and without disabilities.

    PubMed

    Sullivan, Amanda L; Kohli, Nidhi; Farnsworth, Elyse M; Sadeh, Shanna; Jones, Leila

    2017-09-01

    Accurate estimation of developmental trajectories can inform instruction and intervention. We compared the fit of linear, quadratic, and piecewise mixed-effects models of reading development among students with learning disabilities relative to their typically developing peers. We drew an analytic sample of 1,990 students from the nationally representative Early Childhood Longitudinal Study-Kindergarten Cohort of 1998, using reading achievement scores from kindergarten through eighth grade to estimate three models of students' reading growth. The piecewise mixed-effects models provided the best functional form of the students' reading trajectories as indicated by model fit indices. Results showed slightly different trajectories between students with learning disabilities and without disabilities, with varying but divergent rates of growth throughout elementary grades, as well as an increasing gap over time. These results highlight the need for additional research on appropriate methods for modeling reading trajectories and the implications for students' response to instruction. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  5. Measuring the individual benefit of a medical or behavioral treatment using generalized linear mixed-effects models.

    PubMed

    Diaz, Francisco J

    2016-10-15

    We propose statistical definitions of the individual benefit of a medical or behavioral treatment and of the severity of a chronic illness. These definitions are used to develop a graphical method that can be used by statisticians and clinicians in the data analysis of clinical trials from the perspective of personalized medicine. The method focuses on assessing and comparing individual effects of treatments rather than average effects and can be used with continuous and discrete responses, including dichotomous and count responses. The method is based on new developments in generalized linear mixed-effects models, which are introduced in this article. To illustrate, analyses of data from the Sequenced Treatment Alternatives to Relieve Depression clinical trial of sequences of treatments for depression and data from a clinical trial of respiratory treatments are presented. The estimation of individual benefits is also explained. Copyright © 2016 John Wiley & Sons, Ltd.

  6. Regression Analysis of Mixed Panel Count Data with Dependent Terminal Events

    PubMed Central

    Yu, Guanglei; Zhu, Liang; Li, Yang; Sun, Jianguo; Robison, Leslie L.

    2017-01-01

    Event history studies are commonly conducted in many fields and a great deal of literature has been established for the analysis of the two types of data commonly arising from these studies: recurrent event data and panel count data. The former arises if all study subjects are followed continuously, while the latter means that each study subject is observed only at discrete time points. In reality, a third type of data, a mixture of the two types of the data above, may occur and furthermore, as with the first two types of the data, there may exist a dependent terminal event, which may preclude the occurrences of recurrent events of interest. This paper discusses regression analysis of mixed recurrent event and panel count data in the presence of a terminal event and an estimating equation-based approach is proposed for estimation of regression parameters of interest. In addition, the asymptotic properties of the proposed estimator are established and a simulation study conducted to assess the finite-sample performance of the proposed method suggests that it works well in practical situations. Finally the methodology is applied to a childhood cancer study that motivated this study. PMID:28098397

  7. Log-gamma linear-mixed effects models for multiple outcomes with application to a longitudinal glaucoma study

    PubMed Central

    Zhang, Peng; Luo, Dandan; Li, Pengfei; Sharpsten, Lucie; Medeiros, Felipe A.

    2015-01-01

    Glaucoma is a progressive disease due to damage in the optic nerve with associated functional losses. Although the relationship between structural and functional progression in glaucoma is well established, there is disagreement on how this association evolves over time. In addressing this issue, we propose a new class of non-Gaussian linear-mixed models to estimate the correlations among subject-specific effects in multivariate longitudinal studies with a skewed distribution of random effects, to be used in a study of glaucoma. This class provides an efficient estimation of subject-specific effects by modeling the skewed random effects through the log-gamma distribution. It also provides more reliable estimates of the correlations between the random effects. To validate the log-gamma assumption against the usual normality assumption of the random effects, we propose a lack-of-fit test using the profile likelihood function of the shape parameter. We apply this method to data from a prospective observation study, the Diagnostic Innovations in Glaucoma Study, to present a statistically significant association between structural and functional change rates that leads to a better understanding of the progression of glaucoma over time. PMID:26075565

  8. An alternative covariance estimator to investigate genetic heterogeneity in populations

    USDA-ARS?s Scientific Manuscript database

    Genomic predictions and GWAS have used mixed models for identification of associations and trait predictions. In both cases, the covariance between individuals for performance is estimated using molecular markers. Mixed model properties indicate that the use of the data for prediction is optimal if ...
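
    A common baseline for that marker-based covariance is the genomic relationship matrix of VanRaden (2008), G = ZZ'/(2·Σ p_j(1−p_j)), with Z the column-centered genotype matrix; the manuscript's alternative estimator is not reproduced here. A minimal sketch with simulated genotypes:

    ```python
    import numpy as np

    rng = np.random.default_rng(5)
    n, m = 20, 500                                  # individuals, markers
    geno = rng.binomial(2, 0.3, size=(n, m)).astype(float)  # 0/1/2 genotype codes

    p = geno.mean(axis=0) / 2.0                     # allele frequencies
    Z = geno - 2.0 * p                              # center each marker by 2p
    G = Z @ Z.T / (2.0 * np.sum(p * (1.0 - p)))     # VanRaden relationship matrix
    print(G.shape, G.diagonal().mean())             # diagonal ~1 for unrelated individuals
    ```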

  9. Detection of Mixed Infection from Bacterial Whole Genome Sequence Data Allows Assessment of Its Role in Clostridium difficile Transmission

    PubMed Central

    Eyre, David W.; Cule, Madeleine L.; Griffiths, David; Crook, Derrick W.; Peto, Tim E. A.

    2013-01-01

    Bacterial whole genome sequencing offers the prospect of rapid and high precision investigation of infectious disease outbreaks. Close genetic relationships between microorganisms isolated from different infected cases suggest transmission is a strong possibility, whereas transmission between cases with genetically distinct bacterial isolates can be excluded. However, undetected mixed infections—infection with ≥2 unrelated strains of the same species where only one is sequenced—potentially impairs exclusion of transmission with certainty, and may therefore limit the utility of this technique. We investigated the problem by developing a computationally efficient method for detecting mixed infection without the need for resource-intensive independent sequencing of multiple bacterial colonies. Given the relatively low density of single nucleotide polymorphisms within bacterial sequence data, direct reconstruction of mixed infection haplotypes from current short-read sequence data is not consistently possible. We therefore use a two-step maximum likelihood-based approach, assuming each sample contains up to two infecting strains. We jointly estimate the proportion of the infection arising from the dominant and minor strains, and the sequence divergence between these strains. In cases where mixed infection is confirmed, the dominant and minor haplotypes are then matched to a database of previously sequenced local isolates. We demonstrate the performance of our algorithm with in silico and in vitro mixed infection experiments, and apply it to transmission of an important healthcare-associated pathogen, Clostridium difficile. Using hospital ward movement data in a previously described stochastic transmission model, 15 pairs of cases enriched for likely transmission events associated with mixed infection were selected. Our method identified four previously undetected mixed infections, and a previously undetected transmission event, but no direct transmission between the pairs of cases under investigation. These results demonstrate that mixed infections can be detected without additional sequencing effort, and this will be important in assessing the extent of cryptic transmission in our hospitals. PMID:23658511

  10. Spatiotemporal Variability in Observations of Urban Mixed-Layer Heights from Surface-based Lidar Systems during DISCOVER-AQ 2011

    NASA Astrophysics Data System (ADS)

    Lewis, J. R.; Banks, R. F.; Berkoff, T.; Welton, E. J.; Joseph, E.; Thompson, A. M.; Decola, P.; Hegarty, J. D.

    2015-12-01

    Accurate characterization of the planetary boundary layer height is crucial for numerical weather prediction, estimating pollution emissions, and modeling air quality. Moreover, given the increasing trend in global urban populations, there is a growing need to improve our understanding of urban boundary layer structure and development. The Deriving Information on Surface conditions from COlumn and VERtically resolved observations relevant to Air Quality (DISCOVER-AQ) 2011 field campaign, which took place in the Baltimore-Washington DC region, offered a unique opportunity to study boundary layer processes in an urban area using a geographically dense collection of surface-based lidar systems. Lidars use aerosols as tracers for atmospheric boundary layer dynamics with high vertical and temporal resolution. In this study, we use data from two permanent Micropulse Lidar Network (MPLNET) sites and five field-deployed micropulse lidar (MPL) systems to observe spatiotemporal variations in the daytime mixed layer height. We present and compare lidar-derived retrievals of the mixed layer height using two different methods. The first uses the wavelet covariance transform and a "fuzzy logic" attribution scheme to determine the mixed layer height. The second uses an objective approach based on a time-adaptive extended Kalman filter. Independent measurements of the boundary layer height are obtained from ozonesonde launches at the Beltsville and Edgewood sites for comparison with the lidar observations.

  11. South Polar Ar Enhancement as a Tracer for Southern Winter Horizontal Meridional Mixing

    NASA Technical Reports Server (NTRS)

    Sprague, A. L.; Boynton, W. V.; Kim, K.; Reedy, R.; Kerry, K.; Janes, D.

    2004-01-01

    Measurements made by the Gamma Ray Spectrometer (GRS) on Mars Odyssey during 2002 and 2003 show an obvious increase in the gamma flux of 1294 keV gamma rays resulting from the decay of (41)Ar. (41)Ar is made by the capture of thermal neutrons by atmospheric (40)Ar. The increase measured above the southern polar region has permitted calculation of the increase in mixing ratio of Ar from L(sub s) 8 to 100 between latitudes 75 S and 90 S. The peak in Ar enhancement occurs about 200 Earth days after CO2 freeze-out has begun, indicating that up to this time equatorward meridional mixing is rapid enough to move enhanced Ar from the polar regions northward. Although the CO2 frost depth continues to increase from L(sub s) 110 deg to 190 deg, the Ar enhancement steadily decreases to its baseline value reached at about L(sub s) 200 deg. Our data permit an estimate of the horizontal eddy mixing coefficient useful for constraining equatorward meridional mixing during southern winter and a characteristic mixing time for the polar southern winter atmosphere. Also, using the drop in excess Ar measured by the GRS from L(sub s) 110 deg to 200 deg, we estimate an eddy coefficient appropriate for meridional mixing of the entire Ar excess back to the baseline value. The horizontal eddy mixing coefficients are derived using Ar as a tracer much as the vertical eddy mixing coefficient for the Earth's troposphere is derived using CH4 as a minor constituent tracer. The estimation of meridional mixing for high latitudes at Mars is important for constraining parameters used in atmospheric modeling and predicting seasonal and daily behavior. The calculations are order-of-magnitude estimates that should improve as the data set becomes more robust and our models are refined.

  12. Estimating life expectancies for US small areas: a regression framework

    NASA Astrophysics Data System (ADS)

    Congdon, Peter

    2014-01-01

    Analysis of area mortality variations and estimation of area life tables raise methodological questions relevant to assessing spatial clustering and socioeconomic inequalities in mortality. Existing small area analyses of US life expectancy variation generally adopt ad hoc amalgamations of counties to alleviate potential instability of mortality rates involved in deriving life tables, and use conventional life table analysis which takes no account of correlated mortality for adjacent areas or ages. The alternative strategy here uses structured random effects methods that recognize correlations between adjacent ages and areas, and allows retention of the original county boundaries. This strategy generalizes to include effects of area category (e.g. poverty status, ethnic mix), allowing estimation of life tables according to area category, and providing additional stabilization of estimated life table functions. This approach is used here to estimate stabilized mortality rates, derive life expectancies in US counties, and assess trends in clustering and in inequality according to county poverty category.

  13. Mixing in High Schmidt Number Turbulent Jets

    DTIC Science & Technology

    1991-01-01

    the higher Sc jet is less well mixed. The difference is less pronounced at higher Re. Flame length estimates imply either an increase in entrainment...

  14. Devolatilization Analysis in a Twin Screw Extruder by using the Flow Analysis Network (FAN) Method

    NASA Astrophysics Data System (ADS)

    Tomiyama, Hideki; Takamoto, Seiji; Shintani, Hiroaki; Inoue, Shigeki

    We derived theoretical formulas for three mechanisms of devolatilization in a twin screw extruder: flash, surface refreshment, and forced expansion. The method for flash devolatilization is based on the equation of equilibrium concentration, which shows that volatiles separate from the polymer when it is released from high-pressure conditions. For surface refreshment devolatilization, we applied Latinen's model to allow estimation of polymer behavior in the unfilled screw conveying condition. Forced expansion devolatilization is based on expansion theory, in which foams are generated under reduced pressure and volatiles diffuse through the exposed surface layer after mixing with the injected devolatilization agent. Based on these models, we developed twin-screw extrusion simulation software using the FAN method; it allows us to quantitatively estimate volatile concentration and polymer temperature with high accuracy in an actual multi-vent extrusion process for LDPE + n-hexane.

  15. An update on modeling dose-response relationships: Accounting for correlated data structure and heterogeneous error variance in linear and nonlinear mixed models.

    PubMed

    Gonçalves, M A D; Bello, N M; Dritz, S S; Tokach, M D; DeRouchey, J M; Woodworth, J C; Goodband, R D

    2016-05-01

    Advanced methods for dose-response assessments are used to estimate the minimum concentrations of a nutrient that maximizes a given outcome of interest, thereby determining nutritional requirements for optimal performance. Contrary to standard modeling assumptions, experimental data often present a design structure that includes correlations between observations (i.e., blocking, nesting, etc.) as well as heterogeneity of error variances; either can mislead inference if disregarded. Our objective is to demonstrate practical implementation of linear and nonlinear mixed models for dose-response relationships accounting for correlated data structure and heterogeneous error variances. To illustrate, we modeled data from a randomized complete block design study to evaluate the standardized ileal digestible (SID) Trp:Lys ratio dose-response on G:F of nursery pigs. A base linear mixed model was fitted to explore the functional form of G:F relative to Trp:Lys ratios and assess model assumptions. Next, we fitted 3 competing dose-response mixed models to G:F, namely a quadratic polynomial (QP) model, a broken-line linear (BLL) ascending model, and a broken-line quadratic (BLQ) ascending model, all of which included heteroskedastic specifications, as dictated by the base model. The GLIMMIX procedure of SAS (version 9.4) was used to fit the base and QP models and the NLMIXED procedure was used to fit the BLL and BLQ models. We further illustrated the use of a grid search of initial parameter values to facilitate convergence and parameter estimation in nonlinear mixed models. Fit between competing dose-response models was compared using a maximum likelihood-based Bayesian information criterion (BIC). The QP, BLL, and BLQ models fitted on G:F of nursery pigs yielded BIC values of 353.7, 343.4, and 345.2, respectively, thus indicating a better fit of the BLL model. The BLL breakpoint estimate of the SID Trp:Lys ratio was 16.5% (95% confidence interval [16.1, 17.0]). Problems with the estimation process rendered results from the BLQ model questionable. Importantly, accounting for heterogeneous variance enhanced inferential precision as the breadth of the confidence interval for the mean breakpoint decreased by approximately 44%. In summary, the article illustrates the use of linear and nonlinear mixed models for dose-response relationships accounting for heterogeneous residual variances, discusses important diagnostics and their implications for inference, and provides practical recommendations for computational troubleshooting.
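
    To make the broken-line linear (BLL) functional form concrete, here is a fixed-effects-only sketch on hypothetical data; the study itself fits random block effects and heterogeneous residual variances in NLMIXED, which this omits.

        import numpy as np
        from scipy.optimize import curve_fit

        def bll(x, plateau, slope, bkpt):
            """Ascending broken-line linear: rises with `slope` up to the
            breakpoint `bkpt`, then stays flat at `plateau`."""
            return np.where(x < bkpt, plateau - slope * (bkpt - x), plateau)

        # Hypothetical SID Trp:Lys ratios (%) and G:F responses
        x = np.array([14.5, 15.0, 15.5, 16.0, 16.5, 17.0, 17.5, 18.0])
        y = np.array([0.60, 0.62, 0.65, 0.67, 0.70, 0.70, 0.71, 0.70])
        (plateau, slope, bkpt), cov = curve_fit(bll, x, y, p0=[0.7, 0.03, 16.0])
        print(f"breakpoint estimate: {bkpt:.2f}%")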

  16. Evidence of a major gene from Bayesian segregation analyses of liability to osteochondral diseases in pigs.

    PubMed

    Kadarmideen, Haja N; Janss, Luc L G

    2005-11-01

    Bayesian segregation analyses were used to investigate the mode of inheritance of osteochondral lesions (osteochondrosis, OC) in pigs. Data consisted of 1163 animals with OC, and their pedigrees included 2891 animals. Mixed-inheritance threshold models (MITM) and several variants of MITM, in conjunction with Markov chain Monte Carlo methods, were developed for the analysis of these (categorical) data. Results showed major genes with significant and substantially higher variances (range 1.384-37.81) compared to the polygenic variance (σu2). Consequently, heritabilities for a mixed inheritance (range 0.65-0.90) were much higher than the heritabilities from the polygenes. Disease allele frequencies ranged from 0.38 to 0.88. Additional analyses estimating the transmission probabilities of the major gene showed clear evidence for Mendelian segregation of a major gene affecting osteochondrosis. The MITM variant with an informative prior on σu2 showed significant improvements in marginal distributions and in the accuracy of parameter estimates. MITM with a "reduced polygenic model" for parameterization of polygenic effects avoided the convergence problems and poor mixing encountered in an "individual polygenic model." In all cases, "shrinkage estimators" for fixed effects avoided unidentifiability for these parameters. The mixed-inheritance linear model (MILM) was also applied to all OC lesions and compared with the MITM. This is the first study to report evidence of major genes for osteochondral lesions in pigs; these results may also form a basis for underpinning the genetic inheritance of this disease in other animals as well as in humans.

  17. Mixed-effects location and scale Tobit joint models for heterogeneous longitudinal data with skewness, detection limits, and measurement errors.

    PubMed

    Lu, Tao

    2017-01-01

    The joint modeling of mean and variance for longitudinal data is an active research area. This type of model has the advantage of accounting for heteroscedasticity commonly observed in between- and within-subject variations. Most research focuses on improving estimation efficiency but ignores many data features frequently encountered in practice. In this article, we develop a mixed-effects location scale joint model that concurrently accounts for longitudinal data with multiple features. Specifically, our joint model handles heterogeneity, skewness, limits of detection, and measurement errors in covariates, which are typically observed in the collection of longitudinal data from many studies. We employ a Bayesian approach for making inference on the joint model. The proposed model and method are applied to an AIDS study. Simulation studies are performed to assess the performance of the proposed method. Alternative models under different conditions are compared.

  18. Spatiotemporal Modeling of Ozone Levels in Quebec (Canada): A Comparison of Kriging, Land-Use Regression (LUR), and Combined Bayesian Maximum Entropy–LUR Approaches

    PubMed Central

    Adam-Poupart, Ariane; Brand, Allan; Fournier, Michel; Jerrett, Michael

    2014-01-01

    Background: Ambient air ozone (O3) is a pulmonary irritant that has been associated with respiratory health effects including increased lung inflammation and permeability, airway hyperreactivity, respiratory symptoms, and decreased lung function. Estimation of O3 exposure is a complex task because the pollutant exhibits complex spatiotemporal patterns. To refine the quality of exposure estimation, various spatiotemporal methods have been developed worldwide. Objectives: We sought to compare the accuracy of three spatiotemporal models to predict summer ground-level O3 in Quebec, Canada. Methods: We developed a land-use mixed-effects regression (LUR) model based on readily available data (air quality and meteorological monitoring data, road networks information, latitude), a Bayesian maximum entropy (BME) model incorporating both O3 monitoring station data and the land-use mixed model outputs (BME-LUR), and a kriging method model based only on available O3 monitoring station data (BME kriging). We performed leave-one-station-out cross-validation and visually assessed the predictive capability of each model by examining the mean temporal and spatial distributions of the average estimated errors. Results: The BME-LUR was the best predictive model (R2 = 0.653) with the lowest root mean-square error (RMSE = 7.06 ppb), followed by the LUR model (R2 = 0.466, RMSE = 8.747 ppb) and the BME kriging model (R2 = 0.414, RMSE = 9.164 ppb). Conclusions: Our findings suggest that errors of estimation in the interpolation of O3 concentrations with BME can be greatly reduced by incorporating outputs from a LUR model developed with readily available data. Citation: Adam-Poupart A, Brand A, Fournier M, Jerrett M, Smargiassi A. 2014. Spatiotemporal modeling of ozone levels in Quebec (Canada): a comparison of kriging, land-use regression (LUR), and combined Bayesian maximum entropy–LUR approaches. Environ Health Perspect 122:970–976; http://dx.doi.org/10.1289/ehp.1306566 PMID:24879650

  19. Methane and carbon dioxide fluxes over a lake: comparison between eddy covariance, floating chambers and boundary layer method

    NASA Astrophysics Data System (ADS)

    Erkkilä, Kukka-Maaria; Ojala, Anne; Bastviken, David; Biermann, Tobias; Heiskanen, Jouni J.; Lindroth, Anders; Peltola, Olli; Rantakari, Miitta; Vesala, Timo; Mammarella, Ivan

    2018-01-01

    Freshwaters make a notable contribution to the global carbon budget by emitting both carbon dioxide (CO2) and methane (CH4) to the atmosphere. Global estimates of freshwater emissions traditionally use a wind-speed-based gas transfer velocity, kCC (introduced by Cole and Caraco, 1998), for calculating diffusive flux with the boundary layer method (BLM). We compared CH4 and CO2 fluxes from BLM with kCC and two other gas transfer velocities (kTE and kHE), which include the effects of water-side cooling on gas transfer in addition to shear-induced turbulence, with simultaneous eddy covariance (EC) and floating chamber (FC) fluxes during a 16-day measurement campaign in September 2014 at Lake Kuivajärvi in Finland. The measurements included both lake stratification and water column mixing periods. Results show that BLM fluxes were mainly lower than EC fluxes, with the more recent model kTE giving the best fit with EC fluxes, whereas FC measurements resulted in higher fluxes than simultaneous EC measurements. We highly recommend using up-to-date gas transfer models, instead of kCC, for better flux estimates. BLM CO2 flux measurements had clear differences between daytime and night-time fluxes with all gas transfer models during both stratified and mixing periods, whereas EC measurements did not show a diurnal behaviour in CO2 flux. CH4 flux had higher values in daytime than night-time during the lake mixing period according to EC measurements, with the highest fluxes detected just before sunset. In addition, we found clear differences in the daytime and night-time concentration difference between the air and surface water for both CH4 and CO2. This might lead to biased flux estimates, if only daytime values are used in BLM upscaling and flux measurements in general. FC measurements did not detect spatial variation in either CH4 or CO2 flux over Lake Kuivajärvi. EC measurements, on the other hand, did not show any spatial variation in CH4 fluxes but did show a clear difference between CO2 fluxes from shallower and deeper areas. We highlight that while all flux measurement methods have their pros and cons, it is important to carefully think about the chosen method and measurement interval, as well as their effects on the resulting flux.
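
    The boundary layer method itself reduces to one line once a gas transfer velocity is chosen; the sketch below uses the Cole and Caraco (1998) wind-speed parameterization kCC cited above, with an assumed Schmidt-number exponent of -0.5 and hypothetical concentrations.

        def k_cc(u10):
            """Cole & Caraco (1998) gas transfer velocity k600 (cm/h)
            from wind speed at 10 m height (m/s)."""
            return 2.07 + 0.215 * u10 ** 1.7

        def blm_flux(u10, schmidt, c_water, c_eq):
            """Diffusive flux F = k * (Cw - Ceq) by the boundary layer
            method; concentrations in mmol/m^3, flux in mmol m^-2 d^-1.
            The -0.5 Schmidt exponent assumes a wavy surface."""
            k = k_cc(u10) * (schmidt / 600.0) ** -0.5   # cm/h
            return k * 0.01 * 24.0 * (c_water - c_eq)   # convert k to m/d

        # Hypothetical supersaturated lake, U10 = 4 m/s, Sc(CH4) ~ 600
        print(blm_flux(4.0, 600.0, c_water=0.5, c_eq=0.003))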

  20. Analysis of data collected from right and left limbs: Accounting for dependence and improving statistical efficiency in musculoskeletal research.

    PubMed

    Stewart, Sarah; Pearson, Janet; Rome, Keith; Dalbeth, Nicola; Vandal, Alain C

    2018-01-01

    Statistical techniques currently used in musculoskeletal research often inefficiently account for paired-limb measurements or the relationship between measurements taken from multiple regions within limbs. This study compared three commonly used analysis methods with a mixed-models approach that appropriately accounted for the association between limbs, regions, and trials and that utilised all information available from repeated trials. Four analyses were applied to an existing data set containing plantar pressure data, which was collected for seven masked regions on right and left feet, over three trials, across three participant groups. Methods 1-3 averaged data over trials and analysed right foot data (Method 1), data from a randomly selected foot (Method 2), and averaged right and left foot data (Method 3). Method 4 used all available data in a mixed-effects regression that accounted for repeated measures taken for each foot, foot region and trial. Confidence interval widths for the mean differences between groups for each foot region were used as a criterion for comparison of statistical efficiency. Mean differences in pressure between groups were similar across methods for each foot region, while the confidence interval widths were consistently smaller for Method 4. Method 4 also revealed significant between-group differences that were not detected by Methods 1-3. A mixed effects linear model approach generates improved efficiency and power by producing more precise estimates compared to alternative approaches that discard information in the process of accounting for paired-limb measurements. This approach is recommended for generating more clinically sound and statistically efficient research outputs. Copyright © 2017 Elsevier B.V. All rights reserved.
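
    A sketch of the Method 4 idea in statsmodels, on synthetic data: a random intercept per subject plus a variance component for foot within subject, with all trials entering the likelihood directly; the variable names and data-generating values are invented for illustration.

        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        rng = np.random.default_rng(1)
        rows = []
        for subj in range(30):
            grp = subj % 2                     # two participant groups
            u_subj = rng.normal(0, 1.0)        # subject random effect
            for foot in ("L", "R"):
                u_foot = rng.normal(0, 0.5)    # foot within subject
                for region in range(3):
                    for trial in range(3):
                        rows.append(dict(
                            subject=subj, group=grp, foot=foot, region=region,
                            pressure=100 + 5 * grp + 2 * region
                                     + u_subj + u_foot + rng.normal(0, 2)))
        df = pd.DataFrame(rows)

        m = smf.mixedlm("pressure ~ C(group) * C(region)", df,
                        groups="subject", vc_formula={"foot": "0 + C(foot)"})
        print(m.fit().summary())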

  1. Numerical Analysis of an H 1-Galerkin Mixed Finite Element Method for Time Fractional Telegraph Equation

    PubMed Central

    Wang, Jinfeng; Zhao, Meng; Zhang, Min; Liu, Yang; Li, Hong

    2014-01-01

    We discuss and analyze an H 1-Galerkin mixed finite element (H 1-GMFE) method to seek the numerical solution of the time fractional telegraph equation. We introduce an auxiliary variable to reduce the original equation into lower-order coupled equations and then formulate an H 1-GMFE scheme with two important variables. We discretize the Caputo time fractional derivatives using finite difference methods and approximate the spatial direction by applying the H 1-GMFE method. Based on the theoretical error analysis in L 2-norm for the scalar unknown and its gradient in the one-dimensional case, we obtain the optimal order of convergence in the space-time direction. Further, we also derive the optimal error results for the scalar unknown in H 1-norm. Moreover, we derive and analyze the stability of the H 1-GMFE scheme and give the results of a priori error estimates in two- or three-dimensional cases. To verify our theoretical analysis, we present numerical results computed with Matlab. PMID:25184148
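
    The Caputo time discretization mentioned above is commonly done with the L1 finite-difference formula; here is a sketch for order alpha in (0,1), evaluated at the final time point (function and variable names are ours, not the paper's).

        import numpy as np
        from math import gamma

        def caputo_l1(u, dt, alpha):
            """L1 approximation of the Caputo derivative of order
            alpha in (0,1) at the final time, from samples u[0..n]."""
            n = len(u) - 1
            k = np.arange(n)
            b = (k + 1) ** (1 - alpha) - k ** (1 - alpha)
            diffs = (u[1:] - u[:-1])[::-1]   # u_{n-k} - u_{n-k-1}
            return dt ** (-alpha) / gamma(2 - alpha) * np.dot(b, diffs)

        # Check against the exact result D^alpha t = t^(1-alpha)/Gamma(2-alpha)
        t = np.linspace(0.0, 1.0, 201)
        print(caputo_l1(t, t[1] - t[0], 0.5), 1.0 / gamma(1.5))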

  2. A method for environmental acoustic analysis improvement based on individual evaluation of common sources in urban areas.

    PubMed

    López-Pacheco, María G; Sánchez-Fernández, Luis P; Molina-Lozano, Herón

    2014-01-15

    Noise levels of common sources such as vehicles, whistles, sirens, car horns and crowd sounds are mixed in urban soundscapes. Nowadays, environmental acoustic analysis is performed based on mixture signals recorded by monitoring systems. These mixed signals hinder individual analysis, which is useful for taking actions to reduce and control environmental noise. This paper aims to separate individual noise sources from recorded mixtures in order to evaluate the noise level of each estimated source. A method based on blind deconvolution and blind source separation in the wavelet domain is proposed. This approach provides a basis to improve results obtained in monitoring and analysis of common noise sources in urban areas. The method is validated through experiments based on knowledge of the predominant noise sources in urban soundscapes. Actual recordings of common noise sources are used to acquire mixture signals using a microphone array in semi-controlled environments. The developed method has demonstrated great performance improvements in identification, analysis and evaluation of common urban sources. © 2013 Elsevier B.V. All rights reserved.

  3. Blind source separation and localization using microphone arrays

    NASA Astrophysics Data System (ADS)

    Sun, Longji

    The blind source separation and localization problem for audio signals is studied using microphone arrays. Pure delay mixtures of source signals typically encountered in outdoor environments are considered. Our proposed approach utilizes subspace methods, including the multiple signal classification (MUSIC) and estimation of signal parameters via rotational invariance techniques (ESPRIT) algorithms, to estimate the directions of arrival (DOAs) of the sources from the collected mixtures. Since audio signals are generally considered broadband, the DOA estimates at frequencies with the largest sum of squared amplitude values are combined to obtain the final DOA estimates. Using the estimated DOAs, the corresponding mixing and demixing matrices are computed, and the source signals are recovered using the inverse short time Fourier transform. Subspace methods take advantage of the spatial covariance matrix of the collected mixtures to achieve robustness to noise. While the subspace methods have been studied for localizing radio frequency signals, audio signals have special properties: they are nonstationary, naturally broadband, and analog, all of which make separation and localization more challenging. Moreover, our algorithm is essentially equivalent to the beamforming technique, which suppresses the signals in unwanted directions and only recovers the signals in the estimated DOAs. Several crucial issues related to our algorithm and their solutions have been discussed, including source number estimation, spatial aliasing, artifact filtering, different ways of mixture generation, and source coordinate estimation using multiple arrays. Additionally, comprehensive simulations and experiments have been conducted to examine various aspects of the algorithm. Unlike the existing blind source separation and localization methods, which are generally time consuming, our algorithm needs signal mixtures of only a short duration and therefore supports real-time implementation.
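
    A narrowband MUSIC sketch for a uniform linear array; the work described above combines such estimates across high-energy frequency bins for broadband audio, and the half-wavelength spacing here is an assumption.

        import numpy as np

        def music_spectrum(X, n_sources, d=0.5, angles=np.linspace(-90, 90, 361)):
            """MUSIC pseudospectrum. X: complex snapshots (M mics, T frames)
            at one narrowband frequency; d: mic spacing in wavelengths."""
            M, T = X.shape
            R = X @ X.conj().T / T                # spatial covariance
            w, V = np.linalg.eigh(R)              # eigenvalues ascending
            En = V[:, :M - n_sources]             # noise subspace
            m = np.arange(M)
            spec = np.empty(len(angles))
            for i, th in enumerate(np.deg2rad(angles)):
                a = np.exp(-2j * np.pi * d * m * np.sin(th))   # steering vector
                spec[i] = 1.0 / np.real(a.conj() @ En @ En.conj().T @ a)
            return angles, spec

        # The DOA estimates are the n_sources largest peaks of the spectrum.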

  4. A novel fluorescence microscopy approach to estimate quality loss of stored fruit fillings as a result of browning.

    PubMed

    Cropotova, Janna; Tylewicz, Urszula; Cocci, Emiliano; Romani, Santina; Dalla Rosa, Marco

    2016-03-01

    The aim of the present study was to estimate the quality deterioration of apple fillings during storage. Moreover, the potential of a novel time-saving and non-invasive method based on fluorescence microscopy for prompt detection of non-enzymatic browning initiation in fruit fillings was investigated. Apple filling samples were obtained by mixing different quantities of fruit and stabilizing agents (inulin, pectin and gellan gum), thermally processed, and stored for 6 months. The preservation of antioxidant capacity (determined by the DPPH method) in apple fillings was indirectly correlated with the decrease in total polyphenol content, which varied from 34±22 to 56±17%, and the concomitant accumulation of 5-hydroxymethylfurfural (HMF), ranging from 3.4±0.1 to 8±1 mg/kg in comparison to initial apple puree values. The mean intensity of the fluorescence emission spectra of apple filling samples and initial apple puree was highly correlated (R(2)>0.95) with the HMF content, showing good potential of the fluorescence microscopy method to estimate non-enzymatic browning. Copyright © 2015 Elsevier Ltd. All rights reserved.

  5. Comparison of three methods to derive canopy-scale flux measurements above a mixed oak and hornbeam forest in Northern Italy

    NASA Astrophysics Data System (ADS)

    Acton, William; Schallhart, Simon; Langford, Ben; Valach, Amy; Rantala, Pekka; Fares, Silvano; Carriero, Giulia; Mentel, Thomas; Tomlinson, Sam; Dragosits, Ulrike; Hewitt, Nicholas; Nemitz, Eiko

    2015-04-01

    Plants emit a wide range of Biogenic Volatile Organic Compounds (BVOCs) into the atmosphere. These BVOCs are a major source of reactive carbon into the troposphere and play an important role in atmospheric chemistry by, for example, acting as an OH sink and contributing to the formation of secondary organic aerosol. While the emission rates of some of these compounds are relatively well understood, large uncertainties are still associated with the emission estimates of many compounds. Here the fluxes and mixing ratios of BVOCs recorded during June/July 2012 over the Bosco Fontana forest reserve in northern Italy are reported and discussed, together with a comparison of three methods of flux calculation. This work was carried out as part of the EC FP7 project ECLAIRE (Effects of Climate Change on Air Pollution and Response Strategies for European Ecosystems). The Bosco Fontana reserve is a semi-natural deciduous forest dominated by Carpinus betulus (hornbeam), Quercus robur (pedunculate oak) and Quercus rubra (northern red oak). Virtual disjunct eddy covariance measurements made using Proton Transfer Reaction-Mass Spectrometry (PTR-MS) and Proton Transfer Reaction-Time of Flight-Mass Spectrometry (PTR-ToF-MS) were used to calculate fluxes and mixing ratios of BVOCs above the forest canopy at Bosco Fontana. BVOC mixing ratios were dominated by methanol with acetaldehyde, acetone, acetic acid, isoprene, the sum of methyl vinyl ketone and methacrolein, methyl ethyl ketone and monoterpenes also recorded. A large flux of isoprene was observed as well as significant fluxes of monoterpenes, methanol, acetaldehyde and methyl vinyl ketone / methacrolein. The fluxes recorded using the PTR-MS and PTR-ToF-MS showed good agreement. Comparison of the isoprene fluxes calculated using these instruments also agreed well with fluxes modelled using the MEGAN algorithms (Guenther et al. 2006). The detailed tree distribution maps for the forest at Bosco Fontana compiled by Dalponte et al. 2007 enable the estimation of flux from leaf-level emission data. This 'bottom up' estimate will be compared with the fluxes recorded using PTR-MS and PTR-ToF-MS. References Dalponte M., Gianelle D. and Bruzzone L.: Use of hyperspectral and LIDAR data for classification of complex forest areas. Canopy Analysis and Dynamics of a Floodplain Forest: 25-37, 2007 Guenther A., Karl T., Harley P., Wiedinmyer C., Palmer P.I. and Geron C.: Estimates of global terrestrial isoprene emissions using MEGAN (Model of Emissions of Gases and Aerosols from Nature). Atmospheric Chemistry and Physics, 6, 3180-3210, 2006

  6. Mixed effects modelling for glass category estimation from glass refractive indices.

    PubMed

    Lucy, David; Zadora, Grzegorz

    2011-10-10

    520 glass fragments were taken from 105 glass items. Each item was either a container, a window, or glass from an automobile. Each of these three classes of use is defined as a glass category. Refractive indices were measured both before and after a programme of re-annealing. Because the refractive index of each fragment could not in itself be observed before and after re-annealing, a model-based approach was used to estimate the change in refractive index for each glass category. It was found that less complex estimation methods would be equivalent to the full model, and were subsequently used. The change in refractive index was then used to calculate a measure of the evidential value for each item belonging to each glass category. The distributions of refractive index change were considered for each glass category, and it was found that, possibly due to small samples, members of the normal family would not adequately model the refractive index changes within two of the use types considered here. Two alternative approaches to modelling the change in refractive index were used: one employed the more established kernel density estimates, the other a newer approach called log-concave estimation. Either method, when applied to the change in refractive index, was found to give good estimates of glass category; however, on all performance metrics kernel density estimates were found to be slightly better than log-concave estimates, although the estimates from log-concave estimation possessed properties with some qualitative appeal not encapsulated in the selected measures of performance. These results and the implications of these two methods of estimating probability densities for glass refractive indices are discussed. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.
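
    A sketch of the kernel-density side of that comparison: score a new refractive-index change against per-category KDEs fitted to training data (all values hypothetical; log-concave estimation requires a specialized package and is omitted).

        import numpy as np
        from scipy.stats import gaussian_kde

        # Hypothetical refractive-index changes (dRI) by glass category
        train = {
            "container": np.array([1.2e-4, 1.5e-4, 1.1e-4, 1.4e-4, 1.3e-4]),
            "window":    np.array([0.6e-4, 0.7e-4, 0.5e-4, 0.8e-4, 0.6e-4]),
            "car":       np.array([0.20e-4, 0.30e-4, 0.10e-4, 0.25e-4, 0.20e-4]),
        }
        kdes = {cat: gaussian_kde(vals) for cat, vals in train.items()}

        def category_support(dri):
            """Normalized density of each category's KDE at the observed dRI."""
            dens = {cat: float(k(dri)) for cat, k in kdes.items()}
            total = sum(dens.values())
            return {cat: d / total for cat, d in dens.items()}

        print(category_support(0.65e-4))   # highest support: "window"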

  7. Effect of patient selection method on provider group performance estimates.

    PubMed

    Thorpe, Carolyn T; Flood, Grace E; Kraft, Sally A; Everett, Christine M; Smith, Maureen A

    2011-08-01

    Performance measurement at the provider group level is increasingly advocated, but different methods for selecting patients when calculating provider group performance have received little evaluation. We compared 2 currently used methods according to characteristics of the patients selected and impact on performance estimates. We analyzed Medicare claims data for fee-for-service beneficiaries with diabetes ever seen at an academic multispeciality physician group in 2003 to 2004. We examined sample size, sociodemographics, clinical characteristics, and receipt of recommended diabetes monitoring in 2004 for the groups of patients selected using 2 methods implemented in large-scale performance initiatives: the Plurality Provider Algorithm and the Diabetes Care Home method. We examined differences among discordantly assigned patients to determine evidence for differential selection regarding these measures. Fewer patients were selected under the Diabetes Care Home method (n=3558) than the Plurality Provider Algorithm (n=4859). Compared with the Plurality Provider Algorithm, the Diabetes Care Home method preferentially selected patients who were female, not entitled because of disability, older, more likely to have hypertension, and less likely to have kidney disease and peripheral vascular disease, and had lower levels of predicted utilization. Diabetes performance was higher under Diabetes Care Home method, with 67% versus 58% receiving >1 A1c tests, 70% versus 65% receiving ≥1 low-density lipoprotein (LDL) test, and 38% versus 37% receiving an eye examination. The method used to select patients when calculating provider group performance may affect patient case mix and estimated performance levels, and warrants careful consideration when comparing performance estimates.

  8. Adaptive finite element method for turbulent flow near a propeller

    NASA Astrophysics Data System (ADS)

    Pelletier, Dominique; Ilinca, Florin; Hetu, Jean-Francois

    1994-11-01

    This paper presents an adaptive finite element method based on remeshing to solve incompressible turbulent free shear flow near a propeller. Solutions are obtained in primitive variables using a highly accurate finite element approximation on unstructured grids. Turbulence is modeled by a mixing length formulation. Two general purpose error estimators, which take into account swirl and the variation of the eddy viscosity, are presented and applied to the turbulent wake of a propeller. Predictions compare well with experimental measurements. The proposed adaptive scheme is robust, reliable and cost effective.

  9. Preparation of AgInSe2 thin films grown by vacuum evaporation method

    NASA Astrophysics Data System (ADS)

    Matsuo, H.; Yoshino, K.; Ikari, T.

    2006-09-01

    Polycrystalline AgInSe2 thin films were successfully grown on glass substrates by an evaporation method. The starting materials were stoichiometrically mixed Ag2Se and In2Se3 powders. X-ray diffraction revealed that the sample annealed at 600 °C consisted of AgInSe2 single phase, with (112) orientation and a large grain size. The lattice constant (a axis) was close to JCPDS values. From optical transmittance and reflectance measurements, the bandgap energy was estimated to be 1.17 eV.

  10. Aerosol lidar observations of atmospheric mixing in Los Angeles: Climatology and implications for greenhouse gas observations

    NASA Astrophysics Data System (ADS)

    Ware, John; Kort, Eric A.; DeCola, Phil; Duren, Riley

    2016-08-01

    Atmospheric observations of greenhouse gases provide essential information on sources and sinks of these key atmospheric constituents. To quantify fluxes from atmospheric observations, representation of transport—especially vertical mixing—is a necessity and often a source of error. We report on remotely sensed profiles of vertical aerosol distribution taken over a 2 year period in Pasadena, California. Using an automated analysis system, we estimate daytime mixing layer depth, achieving high confidence in the afternoon maximum on 51% of days with profiles from a Sigma Space Mini Micropulse LiDAR (MiniMPL) and on 36% of days with a Vaisala CL51 ceilometer. We note that considering ceilometer data on a logarithmic scale, a standard method, introduces an offset in mixing height retrievals. The mean afternoon maximum mixing height is 770 m above ground level in summer and 670 m in winter, with significant day-to-day variance (within-season σ = 220 m, ≈30%). Taking advantage of the MiniMPL's portability, we demonstrate the feasibility of measuring the detailed horizontal structure of the mixing layer by automobile. We compare our observations to planetary boundary layer (PBL) heights from sonde launches, North American regional reanalysis (NARR), and a custom Weather Research and Forecasting (WRF) model developed for greenhouse gas (GHG) monitoring in Los Angeles. NARR and WRF PBL heights at Pasadena are both systematically higher than measured, NARR by 2.5 times; these biases will cause proportional errors in GHG flux estimates using modeled transport. We discuss how sustained lidar observations can be used to reduce flux inversion error by selecting suitable analysis periods, calibrating models, or characterizing bias for correction in post-processing.

  11. A unified procedure for meta-analytic evaluation of surrogate end points in randomized clinical trials

    PubMed Central

    Dai, James Y.; Hughes, James P.

    2012-01-01

    The meta-analytic approach to evaluating surrogate end points assesses the predictiveness of treatment effect on the surrogate toward treatment effect on the clinical end point based on multiple clinical trials. Definition and estimation of the correlation of treatment effects were developed in linear mixed models and later extended to binary or failure time outcomes on a case-by-case basis. In a general regression setting that covers nonnormal outcomes, we discuss in this paper several metrics that are useful in the meta-analytic evaluation of surrogacy. We propose a unified 3-step procedure to assess these metrics in settings with binary end points, time-to-event outcomes, or repeated measures. First, the joint distribution of estimated treatment effects is ascertained by an estimating equation approach; second, the restricted maximum likelihood method is used to estimate the means and the variance components of the random treatment effects; finally, confidence intervals are constructed by a parametric bootstrap procedure. The proposed method is evaluated by simulations and applications to 2 clinical trials. PMID:22394448

  12. Modeling Fetal Weight for Gestational Age: A Comparison of a Flexible Multi-level Spline-based Model with Other Approaches

    PubMed Central

    Villandré, Luc; Hutcheon, Jennifer A; Perez Trejo, Maria Esther; Abenhaim, Haim; Jacobsen, Geir; Platt, Robert W

    2011-01-01

    We present a model for longitudinal measures of fetal weight as a function of gestational age. We use a linear mixed model, with a Box-Cox transformation of fetal weight values, and restricted cubic splines, in order to flexibly but parsimoniously model median fetal weight. We systematically compare our model to other proposed approaches. All proposed methods are shown to yield similar median estimates, as evidenced by overlapping pointwise confidence bands, except after 40 completed weeks, where our method seems to produce estimates more consistent with observed data. Sex-based stratification affects the estimates of the random effects variance-covariance structure, without significantly changing sex-specific fitted median values. We illustrate the benefits of including sex-gestational age interaction terms in the model over stratification. The comparison leads to the conclusion that the selection of a model for fetal weight for gestational age can be based on the specific goals and configuration of a given study without affecting the precision or value of median estimates for most gestational ages of interest. PMID:21931571

  13. Improving Video Based Heart Rate Monitoring.

    PubMed

    Lin, Jian; Rozado, David; Duenser, Andreas

    2015-01-01

    Non-contact measurements of cardiac pulse can provide robust measurement of heart rate (HR) without the annoyance of attaching electrodes to the body. In this paper we explore a novel and reliable method to carry out video-based HR estimation and propose various performance improvements over existing approaches. The investigated method uses Independent Component Analysis (ICA) to detect the underlying HR signal from a mixed source signal present in the RGB channels of the image. The original ICA algorithm was implemented and several modifications were explored in order to determine which one could be optimal for accurate HR estimation. Using statistical analysis, we compared the cardiac pulse rate estimates from the different methods on the recorded videos against readings from a commercially available oximeter. We found that some of these methods are quite effective and efficient in terms of improving accuracy and latency of the system. We have made the code of our algorithms openly available to the scientific community so that other researchers can explore how to integrate video-based HR monitoring in novel health technology applications. We conclude by noting that recent advances in video-based HR monitoring permit computers to be aware of a user's psychophysiological status in real time.
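
    A minimal sketch of the ICA pipeline described, under several assumptions: the face region has already been tracked, rgb_traces holds per-frame channel means, and the heart rate is taken as the strongest spectral peak in the 45-240 bpm band of any recovered component.

        import numpy as np
        from sklearn.decomposition import FastICA

        def estimate_hr_bpm(rgb_traces, fps):
            """rgb_traces: (n_frames, 3) mean R, G, B of the face ROI."""
            x = rgb_traces - rgb_traces.mean(axis=0)        # remove channel means
            sources = FastICA(n_components=3, random_state=0).fit_transform(x)
            freqs = np.fft.rfftfreq(len(sources), d=1.0 / fps)
            band = (freqs >= 0.75) & (freqs <= 4.0)         # 45-240 bpm
            best_bpm, best_power = None, -np.inf
            for s in sources.T:
                spec = np.abs(np.fft.rfft(s * np.hanning(len(s)))) ** 2
                i = int(np.argmax(spec[band]))
                if spec[band][i] > best_power:
                    best_power, best_bpm = spec[band][i], 60.0 * freqs[band][i]
            return best_bpm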

  14. Population genetics of autopolyploids under a mixed mating model and the estimation of selfing rate.

    PubMed

    Hardy, Olivier J

    2016-01-01

    Nowadays, the population genetics analysis of autopolyploid species faces many difficulties due to (i) limited development of population genetics tools under polysomic inheritance, (ii) difficulties to assess allelic dosage when genotyping individuals and (iii) a form of inbreeding resulting from the mechanism of 'double reduction'. Consequently, few data analysis computer programs are applicable to autopolyploids. To contribute bridging this gap, this article first derives theoretical expectations for the inbreeding and identity disequilibrium coefficients under polysomic inheritance in a mixed mating model. Moment estimators of these coefficients are proposed when exact genotypes or just markers phenotypes (i.e. allelic dosage unknown) are available. This led to the development of estimators of the selfing rate based on adult genotypes or phenotypes and applicable to any even-ploidy level. Their statistical performances and robustness were assessed by numerical simulations. Contrary to inbreeding-based estimators, the identity disequilibrium-based estimator using phenotypes is robust (absolute bias generally < 0.05), even in the presence of double reduction, null alleles or biparental inbreeding due to isolation by distance. A fairly good precision of the selfing rate estimates (root mean squared error < 0.1) is already achievable using a sample of 30-50 individuals phenotyped at 10 loci bearing 5-10 alleles each, conditions reachable using microsatellite markers. Diallelic markers (e.g. SNP) can also perform satisfactorily in diploids and tetraploids but more polymorphic markers are necessary for higher ploidy levels. The method is implemented in the software SPAGeDi and should contribute to reduce the lack of population genetics tools applicable to autopolyploids. © 2015 John Wiley & Sons Ltd.

  15. Groundwater flow, quality (2007-10), and mixing in the Wind Cave National Park area, South Dakota

    USGS Publications Warehouse

    Long, Andrew J.; Ohms, Marc J.; McKaskey, Jonathan D.R.G.

    2012-01-01

    A study of groundwater flow, quality, and mixing in relation to Wind Cave National Park in western South Dakota was conducted during 2007-11 by the U.S. Geological Survey in cooperation with the National Park Service because of water-quality concerns and to determine possible sources of groundwater contamination in the Wind Cave National Park area. A large area surrounding Wind Cave National Park was included in this study because understanding groundwater in the park requires a general understanding of groundwater in the surrounding southern Black Hills. Three aquifers are of particular importance for this purpose: the Minnelusa, Madison, and Precambrian aquifers. Multivariate methods, consisting of principal component analysis (PCA), cluster analysis, and an end-member mixing model, were applied to hydrochemical data to characterize groundwater flow and mixing. This provided a way to assess characteristics important for groundwater quality, including the differentiation of hydrogeologic domains within the study area, sources of groundwater to these domains, and groundwater mixing within these domains. Groundwater and surface-water samples collected for this study were analyzed for common ions (calcium, magnesium, sodium, bicarbonate, chloride, silica, and sulfate), arsenic, stable isotopes of oxygen and hydrogen, specific conductance, and pH. These 12 variables were used in all multivariate methods. A total of 100 samples were collected from 60 sites from 2007 to 2010 and included stream sinks, cave drip, cave water bodies, springs, and wells. In previous approaches that combined PCA with end-member mixing, extreme-value samples identified by PCA typically were assumed to represent end members. In this study, end members were not assumed to have been sampled but rather were estimated and constrained by prior hydrologic knowledge. Also, the end-member mixing model was quantified in relation to hydrogeologic domains, which focuses model results on major hydrologic processes. Finally, conservative tracers were weighted preferentially in model calibration, which distributed model errors of optimized values, or residuals, more appropriately than would otherwise be the case. The latter item also provides an estimate of the relative effect of geochemical evolution along flow paths in comparison to mixing. The end-member mixing model estimated that Wind Cave sites received 38 percent of their groundwater inflow from local surface recharge, 34 percent from the upgradient Precambrian aquifer, 26 percent from surface recharge to the west, and 2 percent from regional flow. Artesian springs primarily received water from end members assumed to represent regional groundwater flow. Groundwater samples were collected and analyzed for chlorofluorocarbons, dissolved gases (argon, carbon dioxide, methane, nitrogen, and oxygen), and tritium at selected sites and used to estimate groundwater age. Apparent ages, or model ages, for the Madison aquifer in the study area indicate that groundwater closest to surface recharge areas is youngest, with increasing age in a downgradient direction toward deeper parts of the aquifer. Arsenic concentrations in samples collected for this study ranged from 0.28 to 37.1 micrograms per liter (μg/L) with a median value of 6.4 μg/L, and 32 percent of these exceeded 10 μg/L.
The highest arsenic concentrations in and near the study area are approximately coincident with the outcrop of the Minnelusa Formation and likely originated from arsenic in shale layers in this formation. Sample concentrations of nitrate plus nitrite were less than 2 milligrams per liter for 92 percent of samples collected, which is not a concern for drinking-water quality. Water samples were collected in the park and analyzed for five trace metals (chromium, copper, lithium, vanadium, and zinc), the concentrations of which did not correlate with arsenic. Dye tracing indicated hydraulic connection between three water bodies in Wind Cave.

  16. Comparison of sexual mixing patterns for syphilis in endemic and outbreak settings.

    PubMed

    Doherty, Irene A; Adimora, Adaora A; Muth, Stephen Q; Serre, Marc L; Leone, Peter A; Miller, William C

    2011-05-01

    In a largely rural region of North Carolina during 1998-2002, outbreaks of heterosexually transmitted syphilis occurred, tied to crack cocaine use and exchange of sex for drugs and money. Sexual partnership mixing patterns are an important characteristic of sexual networks that relate to transmission dynamics of sexually transmitted infections (STIs). Using contact tracing data collected by disease intervention specialists, we estimated Newman assortativity coefficients and compared values in counties experiencing syphilis outbreaks to nonoutbreak counties, with respect to race/ethnicity, race/ethnicity and age, and the cases' number of social/sexual contacts, infected contacts, sex partners, and infected sex partners, and syphilis disease stage (primary, secondary, early latent). Individuals in the outbreak counties had more contacts, and mixing by number of sex partners was disassortative in outbreak counties and assortative in nonoutbreak counties. Although mixing by syphilis disease stage was minimally assortative in outbreak counties, it was disassortative in nonoutbreak areas. Partnerships were relatively discordant by age, especially among older white men, who often chose considerably younger female partners. Whether assortative mixing exacerbates or attenuates the reach of STIs into different populations depends on the characteristic/attribute and epidemiologic phase. Examination of sexual partnership characteristics and mixing patterns offers insights into the growth of STI outbreaks that complement other research methods.
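
    The Newman assortativity coefficient used above is a direct computation on the mixing matrix of partnerships; a sketch with a hypothetical two-group matrix follows.

        import numpy as np

        def assortativity(e):
            """Newman's r from a mixing matrix e, where e[i, j] counts
            partnerships between groups i and j:
            r = (tr(e) - sum_i a_i b_i) / (1 - sum_i a_i b_i)."""
            e = np.asarray(e, dtype=float)
            e = e / e.sum()                      # normalize to fractions
            a, b = e.sum(axis=1), e.sum(axis=0)  # marginal distributions
            ab = float(a @ b)
            return (np.trace(e) - ab) / (1.0 - ab)

        # Hypothetical partnerships stratified into two groups
        print(assortativity([[30, 10], [10, 50]]))   # positive: assortative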

  17. Heat balances of the surface mixed layer in the equatorial Atlantic and Indian Ocean during FGGE

    NASA Technical Reports Server (NTRS)

    Molinari, R. L.

    1985-01-01

    Surface meteorological and surface and subsurface oceanographic data collected during FGGE in the equatorial Atlantic and Indian Oceans are used to estimate the terms in a heat balance relation for the mixed layer. The first balance tested is between changes in mixed layer temperature (MLT) and surface energy fluxes. Away from regions of low variance in MLT time series and equatorial and coastal upwelling, surface fluxes can account for 75 percent of the variance in the observed time series. Differences between observed and estimated MLTs indicate that on average, maximum errors in surface flux are of the order of 20 to 30 W/sq m. In the Atlantic, the addition of zonal advection does not significantly improve the estimates. However, in regions of equatorial upwelling in the eastern Atlantic, vertical mixing and meridional advection can play an important role in the evolution of MLTs.

  18. Assessment of PDF Micromixing Models Using DNS Data for a Two-Step Reaction

    NASA Astrophysics Data System (ADS)

    Tsai, Kuochen; Chakrabarti, Mitali; Fox, Rodney O.; Hill, James C.

    1996-11-01

    Although the probability density function (PDF) method is known to treat the chemical reaction terms exactly, its application to turbulent reacting flows has been hampered by the difficulty of modeling the molecular mixing terms satisfactorily. In this study, two PDF molecular mixing models, the linear-mean-square-estimation (LMSE or IEM) model and the generalized interaction-by-exchange-with-the-mean (GIEM) model, are compared with the DNS data in decaying turbulence with a two-step parallel-consecutive reaction and two segregated initial conditions: "slabs" and "blobs". Since the molecular mixing model is expected to have a strong effect on the mean values of chemical species under such initial conditions, the model evaluation is intended to answer the following questions: (1) Can the PDF models predict the mean values of chemical species correctly with completely segregated initial conditions? (2) Is a single molecular mixing timescale sufficient for the PDF models to predict the mean values with different initial conditions? (3) Will the chemical reactions change the molecular mixing timescales of the reacting species enough to affect the accuracy of the model's prediction for the mean values of chemical species?
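
    The LMSE/IEM model under assessment is simple enough to state in a few lines: each notional particle's scalar relaxes linearly toward the ensemble mean. A sketch with the customary mixing-model constant C_phi = 2 and a fully segregated initial condition:

        import numpy as np

        def iem_step(phi, dt, tau, c_phi=2.0):
            """One explicit IEM/LMSE step:
            d(phi)/dt = -(c_phi / (2 tau)) * (phi - <phi>)."""
            return phi - dt * (0.5 * c_phi / tau) * (phi - phi.mean())

        # "Blobs"-like segregated initial condition: half 0, half 1
        rng = np.random.default_rng(0)
        phi = (rng.random(10000) < 0.5).astype(float)
        for _ in range(200):
            phi = iem_step(phi, dt=0.01, tau=1.0)
        print(phi.mean(), phi.var())   # mean is preserved, variance decays

    Note that IEM relaxes every particle toward the mean at the same rate, so it preserves the shape of the initial PDF rather than evolving it toward a Gaussian, which is one reason DNS comparisons of this kind are informative.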

  19. HMA runoff data

    EPA Pesticide Factsheets

    Excel workbook. The first sheet is a data dictionary; the second sheet contains the data representing abstraction for events with a short antecedent dry period (less than 24 hours). This dataset is associated with the following publication: Brown, R., and M. Borst. Evaluating the Accuracy of Common Runoff Estimation Methods for New Impervious Hot-Mix Asphalt. Journal of Sustainable Water in the Built Environment. American Society of Civil Engineers (ASCE), New York, NY, USA, online, (2015).

  20. Mixed methods research in mental health nursing.

    PubMed

    Kettles, A M; Creswell, J W; Zhang, W

    2011-08-01

    Mixed methods research is becoming more widely used in order to answer research questions and to investigate research problems in mental health and psychiatric nursing. However, two separate literature searches, one in Scotland and one in the USA, revealed that few mental health nursing studies identified mixed methods research in their titles. Many studies used the term 'embedded', but few studies identified in the literature were mixed methods embedded studies. The history, philosophical underpinnings, definition, types of mixed methods research and associated pragmatism are discussed, as well as the need for mixed methods research. Examples of mental health nursing mixed methods research are used to illustrate the different types of mixed methods: convergent parallel, embedded, explanatory and exploratory in their sequential and concurrent combinations. Implementing mixed methods research is also discussed briefly, and the problem of identifying mixed methods research in mental health and psychiatric nursing is discussed, with some possible solutions proposed. © 2011 Blackwell Publishing.

  1. Mixed models for selection of Jatropha progenies with high adaptability and yield stability in Brazilian regions.

    PubMed

    Teodoro, P E; Bhering, L L; Costa, R D; Rocha, R B; Laviola, B G

    2016-08-19

    The aim of this study was to estimate genetic parameters via mixed models and simultaneously to select Jatropha progenies grown in three regions of Brazil that combine high adaptability and stability. From a previous phenotypic selection, three progeny tests were installed in 2008 in the municipalities of Planaltina-DF (Midwest), Nova Porteirinha-MG (Southeast), and Pelotas-RS (South). We evaluated 18 half-sib families in a randomized block design with three replications. Genetic parameters were estimated using restricted maximum likelihood/best linear unbiased prediction. Selection was based on the harmonic mean of the relative performance of genetic values method in three strategies considering: 1) performance in each environment (with interaction effect); 2) performance in the mean environment (without interaction effect); and 3) simultaneous selection for grain yield, stability, and adaptability. The accuracy obtained (91%) reveals excellent experimental quality and, consequently, reliability in the selection of superior progenies for grain yield. The gain with the selection of the best five progenies was more than 20%, regardless of the selection strategy. Thus, based on the three selection strategies used in this study, the progenies 4, 11, and 3 (selected in all environments and the mean environment and by adaptability and phenotypic stability methods) are the most suitable for growing in the three regions evaluated.

  2. Comparison of two methods for estimating discharge and nutrient loads from Tidally affected reaches of the Myakka and Peace Rivers, West-Central Florida

    USGS Publications Warehouse

    Levesque, V.A.; Hammett, K.M.

    1997-01-01

    The Myakka and Peace River Basins constitute more than 60 percent of the total inflow area and contribute more than half the total tributary inflow to the Charlotte Harbor estuarine system. Water discharge and nutrient enrichment have been identified as significant concerns in the estuary, and consequently, it is important to accurately estimate the magnitude of discharges and nutrient loads transported by inflows from both rivers. Two methods for estimating discharge and nutrient loads from tidally affected reaches of the Myakka and Peace Rivers were compared. The first method was a tidal-estimation method, in which discharge and nutrient loads were estimated based on stage, water-velocity, discharge, and water-quality data collected near the mouths of the rivers. The second method was a traditional basin-ratio method in which discharge and nutrient loads at the mouths were estimated from discharge and loads measured at upstream stations. Stage and water-velocity data were collected near the river mouths by submersible instruments, deployed in situ, and discharge measurements were made with an acoustic Doppler current profiler. The data collected near the mouths of the Myakka River and Peace River were filtered, using a low-pass filter, to remove daily mixed-tide effects with periods less than about 2 days. The filtered data from near the river mouths were used to calculate daily mean discharge and nutrient loads. These tidal-estimation-method values were then compared to the basin-ratio-method values. Four separate 30-day periods of differing streamflow conditions were chosen for monitoring and comparison. Discharge and nutrient load estimates computed from the tidal-estimation and basin-ratio methods were most similar during high-flow periods. However, during high flow, the values computed from the tidal-estimation method for the Myakka and Peace Rivers were consistently lower than the values computed from the basin-ratio method. There were substantial differences between discharges and nutrient loads computed from the tidal-estimation and basin-ratio methods during low-flow periods. Furthermore, the differences between the methods were not consistent. Discharges and nutrient loads computed from the tidal-estimation method for the Myakka River were higher than those computed from the basin-ratio method, whereas discharges and nutrients loads computed by the tidal-estimation method for the Peace River were not only lower than those computed from the basin-ratio method, but they actually reflected a negative, or upstream, net movement. Short-term tidal measurement results should be used with caution, because antecedent conditions can influence the discharge and nutrient loads. Continuous tidal data collected over a 1- or 2-year period would be necessary to more accurately estimate the tidally affected discharge and nutrient loads for the Myakka and Peace River Basins.
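
    The filtering step of the tidal-estimation method can be sketched with a zero-phase low-pass filter; the Butterworth design below is an illustrative stand-in (the study's exact filter is not specified here), using the ~2-day cutoff from the text.

        import numpy as np
        from scipy.signal import butter, filtfilt

        def detide(series, dt_hours=1.0, cutoff_hours=48.0):
            """Zero-phase Butterworth low-pass suppressing tidal
            variability with periods shorter than ~2 days."""
            nyquist = 0.5 / dt_hours                    # cycles per hour
            b, a = butter(4, (1.0 / cutoff_hours) / nyquist)
            return filtfilt(b, a, series)

        # Hypothetical hourly stage record: subtidal signal + semidiurnal tide
        t = np.arange(24 * 30)                          # 30 days, hourly
        stage = 0.1 * np.sin(2 * np.pi * t / (24 * 15)) \
                + 0.5 * np.sin(2 * np.pi * t / 12.42)
        filtered = detide(stage)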

  3. Accuracy of noninvasive breath methane measurements using Fourier transform infrared methods on individual cows.

    PubMed

    Lassen, J; Løvendahl, P; Madsen, J

    2012-02-01

    Individual methane (CH(4)) production was recorded repeatedly on 93 dairy cows during milking in an automatic milking system (AMS), with the aim of estimating individual cow differences in CH(4) production. Methane and CO(2) were measured with a portable air sampler and analyzer unit based on Fourier transform infrared (FTIR) detection. The cows were 50 Holsteins and 43 Jerseys from mixed parities and at all stages of lactation (mean=156 d in milk). Breath was captured by the FTIR unit inlet nozzle, which was placed in front of the cow's head in each of the 2 AMS as an admixture to normal barn air. The FTIR unit was running continuously for 3 d in each of 2 AMS units, 1 with Holstein and another with Jersey cows. Air was analyzed every 20 s. From each visit of a cow to the AMS, CH(4) and CO(2) records were summarized into the mean, median, 75, and 90% quantiles. Furthermore, the ratio between CH(4) and CO(2) was used as a derived measure with the idea of using CO(2) in breath as a tracer gas to quantify the production of methane. Methane production records were analyzed with a mixed model, containing cow as random effect. Fixed effects of milk yield and daily intake of the total mixed ration and concentrates were also estimated. The repeatability of the CH(4)-to-CO(2) ratio was 0.39 for Holsteins and 0.34 for Jerseys. Both concentrate intake and total mixed ration intake were positively related to CH(4) production, whereas milk production level was not correlated with CH(4) production. In conclusion, the results from this study suggest that the CH(4)-to-CO(2) ratio measured using the noninvasive method is an asset of the individual cow and may be useful in both management and genetic evaluations. Copyright © 2012 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.

  4. A Comparison of Two-Stage Approaches for Fitting Nonlinear Ordinary Differential Equation (ODE) Models with Mixed Effects

    PubMed Central

    Chow, Sy-Miin; Bendezú, Jason J.; Cole, Pamela M.; Ram, Nilam

    2016-01-01

    Several approaches currently exist for estimating the derivatives of observed data for model exploration purposes, including functional data analysis (FDA), generalized local linear approximation (GLLA), and generalized orthogonal local derivative approximation (GOLD). These derivative estimation procedures can be used in a two-stage process to fit mixed effects ordinary differential equation (ODE) models. While the performance and utility of these routines for estimating linear ODEs have been established, they have not yet been evaluated in the context of nonlinear ODEs with mixed effects. We compared properties of the GLLA and GOLD to an FDA-based two-stage approach denoted herein as functional ordinary differential equation with mixed effects (FODEmixed) in a Monte Carlo study using a nonlinear coupled oscillators model with mixed effects. Simulation results showed that overall, the FODEmixed outperformed both the GLLA and GOLD across all the embedding dimensions considered, but a novel use of a fourth-order GLLA approach combined with very high embedding dimensions yielded estimation results that almost paralleled those from the FODEmixed. We discuss the strengths and limitations of each approach and demonstrate how output from each stage of FODEmixed may be used to inform empirical modeling of young children’s self-regulation. PMID:27391255

  5. A Comparison of Two-Stage Approaches for Fitting Nonlinear Ordinary Differential Equation Models with Mixed Effects.

    PubMed

    Chow, Sy-Miin; Bendezú, Jason J; Cole, Pamela M; Ram, Nilam

    2016-01-01

    Several approaches exist for estimating the derivatives of observed data for model exploration purposes, including functional data analysis (FDA; Ramsay & Silverman, 2005), generalized local linear approximation (GLLA; Boker, Deboeck, Edler, & Peel, 2010), and generalized orthogonal local derivative approximation (GOLD; Deboeck, 2010). These derivative estimation procedures can be used in a two-stage process to fit mixed effects ordinary differential equation (ODE) models. While the performance and utility of these routines for estimating linear ODEs have been established, they have not yet been evaluated in the context of nonlinear ODEs with mixed effects. We compared properties of the GLLA and GOLD to an FDA-based two-stage approach denoted herein as functional ordinary differential equation with mixed effects (FODEmixed) in a Monte Carlo (MC) study using a nonlinear coupled oscillators model with mixed effects. Simulation results showed that overall, the FODEmixed outperformed both the GLLA and GOLD across all the embedding dimensions considered, but a novel use of a fourth-order GLLA approach combined with very high embedding dimensions yielded estimation results that almost paralleled those from the FODEmixed. We discuss the strengths and limitations of each approach and demonstrate how output from each stage of FODEmixed may be used to inform empirical modeling of young children's self-regulation.
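    The first stage of these two-stage approaches estimates derivatives directly from the data. The sketch below shows a GLLA-style estimator in the spirit of Boker et al. (2010): a time-delay embedding is built and a fixed polynomial weight matrix maps each embedded row to derivative estimates. The embedding dimension, lag, step size, and test signal are illustrative choices, not values from the study.

```python
# Minimal GLLA-style derivative estimation: D = X W (W'W)^{-1}, where X is
# the time-delay embedding and W holds polynomial loadings (a sketch under
# assumed settings, not the authors' implementation).
import numpy as np
from math import factorial

def glla(x, embed=5, tau=1, dt=0.1, order=2):
    """Estimate derivatives 0..order of series x via a local polynomial
    fit over a time-delay embedding."""
    n = len(x) - (embed - 1) * tau
    X = np.column_stack([x[i * tau:i * tau + n] for i in range(embed)])
    offsets = (np.arange(embed) - (embed - 1) / 2) * tau * dt
    W = np.column_stack([offsets ** k / factorial(k)
                         for k in range(order + 1)])
    return X @ W @ np.linalg.inv(W.T @ W)  # columns: x, dx/dt, d2x/dt2

t = np.arange(0, 20, 0.1)
D = glla(np.sin(t))
# dx/dt of sin(t) should approximate cos(t) at the window centers
print(np.round(D[:3, 1], 2))
```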

  6. Neither fixed nor random: weighted least squares meta-regression.

    PubMed

    Stanley, T D; Doucouliagos, Hristos

    2017-03-01

    Our study revisits and challenges two core conventional meta-regression estimators: the prevalent use of 'mixed-effects' or random-effects meta-regression analysis and the correction of standard errors that defines fixed-effects meta-regression analysis (FE-MRA). We show how and explain why an unrestricted weighted least squares MRA (WLS-MRA) estimator is superior to conventional random-effects (or mixed-effects) meta-regression when there is publication (or small-sample) bias, is as good as FE-MRA in all cases, and is better than fixed effects in most practical applications. Simulations and statistical theory show that WLS-MRA provides satisfactory estimates of meta-regression coefficients that are practically equivalent to mixed effects or random effects when there is no publication bias. When there is publication selection bias, WLS-MRA always has smaller bias than mixed effects or random effects. In practical applications, an unrestricted WLS meta-regression is likely to give practically equivalent or superior estimates to fixed-effects, random-effects, and mixed-effects meta-regression approaches. However, random-effects meta-regression remains viable and perhaps somewhat preferable if selection for statistical significance (publication bias) can be ruled out and when random, additive normal heterogeneity is known to directly affect the 'true' regression coefficient. Copyright © 2016 John Wiley & Sons, Ltd.
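    As a rough illustration: an unrestricted WLS meta-regression uses inverse-variance weights but lets a multiplicative dispersion term be estimated from the data, which is what standard WLS routines do by default (fixed-effects MRA pins that dispersion at 1; random effects adds an additive tau-squared instead). The sketch below is in the spirit of WLS-MRA only; the simulated effects, standard errors, and moderator are invented.

```python
# Hedged sketch of an unrestricted WLS meta-regression: inverse-variance
# weights with multiplicative dispersion estimated from the data.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
k = 40                                    # number of studies
se = rng.uniform(0.05, 0.3, k)            # reported standard errors
x = rng.normal(0, 1, k)                   # a study-level moderator
effect = 0.2 + 0.1 * x + rng.normal(0, se)

X = sm.add_constant(x)
wls = sm.WLS(effect, X, weights=1.0 / se**2).fit()
print(wls.params)   # meta-regression coefficients
print(wls.bse)      # SEs scaled by the estimated dispersion, not fixed at 1
```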

  7. A generalized mixed effects model of abundance for mark-resight data when sampling is without replacement

    USGS Publications Warehouse

    McClintock, B.T.; White, Gary C.; Burnham, K.P.; Pryde, M.A.; Thomson, David L.; Cooch, Evan G.; Conroy, Michael J.

    2009-01-01

    In recent years, the mark-resight method for estimating abundance when the number of marked individuals is known has become increasingly popular. By using field-readable bands that may be resighted from a distance, these techniques can be applied to many species, and are particularly useful for relatively small, closed populations. However, due to the different assumptions and general rigidity of the available estimators, researchers must often commit to a particular model without rigorous quantitative justification for model selection based on the data. Here we introduce a nonlinear logit-normal mixed effects model addressing this need for a more generalized framework. Similar to models available for mark-recapture studies, the estimator allows a wide variety of sampling conditions to be parameterized efficiently under a robust sampling design. Resighting rates may be modeled simply or with more complexity by including fixed temporal and random individual heterogeneity effects. Using information theory, the model(s) best supported by the data may be selected from the candidate models proposed. Under this generalized framework, we hope the uncertainty associated with mark-resight model selection will be reduced substantially. We compare our model to other mark-resight abundance estimators when applied to mainland New Zealand robin (Petroica australis) data recently collected in Eglinton Valley, Fiordland National Park and summarize its performance in simulation experiments.

  8. Considerations Underlying the Use of Mixed Group Validation

    ERIC Educational Resources Information Center

    Jewsbury, Paul A.; Bowden, Stephen C.

    2013-01-01

    Mixed Group Validation (MGV) is an approach for estimating the diagnostic accuracy of tests. MGV is a promising alternative to the more commonly used Known Groups Validation (KGV) approach for estimating diagnostic accuracy. The advantage of MGV lies in the fact that the approach does not require a perfect external validity criterion or gold…

  9. Estimation of Complex Generalized Linear Mixed Models for Measurement and Growth

    ERIC Educational Resources Information Center

    Jeon, Minjeong

    2012-01-01

    Maximum likelihood (ML) estimation of generalized linear mixed models (GLMMs) is technically challenging because of the intractable likelihoods that involve high dimensional integrations over random effects. The problem is magnified when the random effects have a crossed design and thus the data cannot be reduced to small independent clusters. A…

  10. Combining the power of stories and the power of numbers: mixed methods research and mixed studies reviews.

    PubMed

    Pluye, Pierre; Hong, Quan Nha

    2014-01-01

    This article provides an overview of mixed methods research and mixed studies reviews. These two approaches are used to combine the strengths of quantitative and qualitative methods and to compensate for their respective limitations. This article is structured in three main parts. First, the epistemological background for mixed methods will be presented. Afterward, we present the main types of mixed methods research designs and techniques as well as guidance for planning, conducting, and appraising mixed methods research. In the last part, we describe the main types of mixed studies reviews and provide a tool kit and examples. Future research needs to offer guidance for assessing mixed methods research and reporting mixed studies reviews, among other challenges.

  11. Using mixed methods research designs in health psychology: an illustrated discussion from a pragmatist perspective.

    PubMed

    Bishop, Felicity L

    2015-02-01

    To outline some of the challenges of mixed methods research and illustrate how they can be addressed in health psychology research. This study critically reflects on the author's previously published mixed methods research and discusses the philosophical and technical challenges of mixed methods, grounding the discussion in a brief review of methodological literature. Mixed methods research is characterized as having philosophical and technical challenges; the former can be addressed by drawing on pragmatism, the latter by considering formal mixed methods research designs proposed in a number of design typologies. There are important differences among the design typologies, which provide diverse examples of designs that health psychologists can adapt for their own mixed methods research. There are also similarities; in particular, many typologies explicitly orient to the technical challenges of deciding on the respective timing of qualitative and quantitative methods and the relative emphasis placed on each method. Characteristics, strengths, and limitations of different sequential and concurrent designs are identified by reviewing five mixed methods projects, each conducted for a different purpose. Adapting formal mixed methods designs can help health psychologists address the technical challenges of mixed methods research and identify the approach that best fits the research questions and purpose. This does not obviate the need to address philosophical challenges of mixing qualitative and quantitative methods. Statement of contribution What is already known on this subject? Mixed methods research poses philosophical and technical challenges. Pragmatism is a popular approach to the philosophical challenges, while diverse typologies of mixed methods designs can help address the technical challenges. Examples of mixed methods research can be hard to locate when component studies from mixed methods projects are published separately. What does this study add? Critical reflections on the author's previously published mixed methods research illustrate how a range of different mixed methods designs can be adapted and applied to address health psychology research questions. The philosophical and technical challenges of mixed methods research should be considered together and in relation to the broader purpose of the research. © 2014 The British Psychological Society.

  12. The Cost of Universal Health Care in India: A Model Based Estimate

    PubMed Central

    Prinja, Shankar; Bahuguna, Pankaj; Pinto, Andrew D.; Sharma, Atul; Bharaj, Gursimer; Kumar, Vishal; Tripathy, Jaya Prasad; Kaur, Manmeet; Kumar, Rajesh

    2012-01-01

    Introduction As high out-of-pocket healthcare expenses pose a heavy financial burden on families, the Government of India is considering a variety of financing and delivery options to universalize health care services. Hence, an estimate of the cost of delivering universal health care services is needed. Methods We developed a model to estimate recurrent and annual costs for providing health services through a mix of public and private providers in Chandigarh located in northern India. Necessary health services required to deliver good quality care were defined by the Indian Public Health Standards. National Sample Survey data was utilized to estimate disease burden. In addition, morbidity and treatment data was collected from two secondary and two tertiary care hospitals. The unit cost of treatment was estimated from the published literature. For diseases where data on treatment cost was not available, we collected data on standard treatment protocols and cost of care from local health providers. Results We estimate that the cost of universal health care delivery through the existing mix of public and private health institutions would be INR 1713 (USD 38, 95%CI USD 18–73) per person per annum in India. This cost would be 24% higher, if branded drugs are used. Extrapolation of these costs to the entire country indicates that the Indian government would need to spend 3.8% (2.1%–6.8%) of the GDP to universalize health care services. Conclusion The cost of universal health care delivered through a combination of public and private providers is estimated to be INR 1713 per capita per year in India. Important issues such as delivery strategy for ensuring quality, reducing inequities in access, and managing the growth of health care demand need to be explored. PMID:22299038

  13. Regional Distribution of Forest Height and Biomass from Multisensor Data Fusion

    NASA Technical Reports Server (NTRS)

    Yu, Yifan; Saatchi, Sassan; Heath, Linda S.; LaPoint, Elizabeth; Myneni, Ranga; Knyazikhin, Yuri

    2010-01-01

    Elevation data acquired from radar interferometry at C-band from SRTM are used in data fusion techniques to estimate regional scale forest height and aboveground live biomass (AGLB) over the state of Maine. Two fusion techniques have been developed to perform post-processing and parameter estimations from four data sets: 1 arc sec National Elevation Data (NED), SRTM derived elevation (30 m), Landsat Enhanced Thematic Mapper (ETM) bands (30 m), derived vegetation index (VI) and NLCD2001 land cover map. The first fusion algorithm corrects for missing or erroneous NED data using an iterative interpolation approach and produces distribution of scattering phase centers from SRTM-NED in three dominant forest types of evergreen conifers, deciduous, and mixed stands. The second fusion technique integrates the USDA Forest Service, Forest Inventory and Analysis (FIA) ground-based plot data to develop an algorithm to transform the scattering phase centers into mean forest height and aboveground biomass. Height estimates over evergreen (R2 = 0.86, P < 0.001; RMSE = 1.1 m) and mixed forests (R2 = 0.93, P < 0.001, RMSE = 0.8 m) produced the best results. Estimates over deciduous forests were less accurate because of the winter acquisition of SRTM data and loss of scattering phase center from tree-surface interaction. We used two methods to estimate AGLB; algorithms based on direct estimation from the scattering phase center produced higher precision (R2 = 0.79, RMSE = 25 Mg/ha) than those estimated from forest height (R2 = 0.25, RMSE = 66 Mg/ha). We discuss sources of uncertainty and implications of the results in the context of mapping regional and continental scale forest biomass distribution.

  14. Influence of an urban canopy model and PBL schemes on vertical mixing for air quality modeling over Greater Paris

    NASA Astrophysics Data System (ADS)

    Kim, Youngseob; Sartelet, Karine; Raut, Jean-Christophe; Chazette, Patrick

    2015-04-01

    Impacts of meteorological modeling in the planetary boundary layer (PBL) and urban canopy model (UCM) on the vertical mixing of pollutants are studied. Concentrations of gaseous chemical species, including ozone (O3) and nitrogen dioxide (NO2), and particulate matter over Paris and the near suburbs are simulated using the 3-dimensional chemistry-transport model Polair3D of the Polyphemus platform. Simulated concentrations of O3, NO2 and PM10/PM2.5 (particulate matter of aerodynamic diameter lower than 10 μm/2.5 μm, respectively) are first evaluated using ground measurements. Higher surface concentrations are obtained for PM10, PM2.5 and NO2 with the MYNN PBL scheme than the YSU PBL scheme because of lower PBL heights in the MYNN scheme. Differences between simulations using different PBL schemes are lower than differences between simulations with and without the UCM and the Corine land-use over urban areas. Regarding the root mean square error, the simulations using the UCM and the Corine land-use tend to perform better than the simulations without it. At urban stations, the PM10 and PM2.5 concentrations are over-estimated and the over-estimation is reduced using the UCM and the Corine land-use. The ability of the model to reproduce vertical mixing is evaluated using NO2 measurement data at the upper air observation station of the Eiffel Tower, and measurement data at a ground station near the Eiffel Tower. Although NO2 is under-estimated in all simulations, vertical mixing is greatly improved when using the UCM and the Corine land-use. Comparisons of the modeled PM10 vertical distributions to distributions deduced from surface and mobile lidar measurements are performed. The use of the UCM and the Corine land-use is crucial to accurately model PM10 concentrations during nighttime in the center of Paris. In the nocturnal stable boundary layer, PM10 is relatively well modeled, although it is over-estimated on 24 May and under-estimated on 25 May. However, PM10 is under-estimated on both days in the residual layer, and over-estimated on both days over the residual layer. The under-estimations in the residual layer are partly due to difficulties in estimating the PBL height, to an over-estimation of vertical mixing during nighttime at high altitudes, and to uncertainties in PM10 emissions. The PBL schemes and the UCM influence the PM vertical distributions not only because they influence vertical mixing (PBL height and eddy-diffusion coefficient), but also because they influence horizontal wind fields and humidity. However, for the UCM, it is the influence on vertical mixing that most strongly affects the PM10 vertical distribution below 1.5 km.

  15. Predicting high risk births with contraceptive prevalence and contraceptive method-mix in an ecologic analysis.

    PubMed

    Perin, Jamie; Amouzou, Agbessi; Walker, Neff

    2017-11-07

    Increased contraceptive use has been associated with a decrease in high parity births, births that occur close together in time, and births to very young or to older women. These types of births are also associated with high risk of under-five mortality. Previous studies have looked at the change in the level of contraception use and the average change in these types of high-risk births. We aim to predict the distribution of births in a specific country when there is a change in the level and method of modern contraception. We used data from full birth histories and modern contraceptive use from 207 nationally representative Demographic and Health Surveys covering 71 countries to describe the distribution of births in each survey based on birth order, preceding birth space, and mother's age at birth. We estimated the ecologic associations between the prevalence and method-mix of modern contraceptives and the proportion of births in each category. Hierarchical modelling was applied to these aggregated cross-sectional proportions, so that random effects were estimated for countries with multiple surveys. We use these results to predict the change in type of births associated with scaling up modern contraception in three different scenarios. We observed marked differences between regions, in the absolute rates of contraception, the types of contraceptives in use, and in the distribution of type of birth. Contraceptive method-mix was a significant determinant of the proportion of high-risk births, especially for birth spacing, but also for mother's age and parity. Increased use of modern contraceptives is especially predictive of reduced parity and more births with longer preceding space. However, increased contraception alone is not associated with fewer births to women younger than 18 years or a decrease in short-spaced births. Both the level and the type of contraception are important factors in determining the effects of family planning on changes in the distribution of high-risk births. Modeling how birth risk changes with increased modern contraception and with different contraceptive methods allows for more nuanced, country-specific predictions and can aid better planning for the scaling up of modern contraception.

  16. Cost-of-illness studies of atrial fibrillation: methodological considerations.

    PubMed

    Becker, Christian

    2014-10-01

    Atrial fibrillation (AF) is the most common heart rhythm arrhythmia, which has considerable economic consequences. This study aims to identify the current cost-of-illness estimates of AF; a focus was put on describing the studies' methodology. A literature review was conducted. Twenty-eight cost-of-illness studies were identified. Cost-of-illness estimates exist for health insurance members, hospital and primary care populations. In addition, the cost of stroke in AF patients and the costs of post-operative AF were calculated. The methods used were heterogeneous, mostly studies calculated excess costs. The identified annual excess costs varied, even among studies from the USA (∼US$1900 to ∼US$19,000). While pointing toward considerable costs, the cost-of-illness studies' relevance could be improved by focusing on subpopulations and treatment mixes. As possible starting points for subsequent economic studies, the methodology of cost-of-illness studies should be taken into account using methods, allowing stakeholders to find suitable studies and validate estimates.

  17. Description of the Process Model for the Technoeconomic Evaluation of MEA versus Mixed Amines for Carbon Dioxide Removal from Stack Gas

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jones, Dale A.

    This model description is supplemental to the Lawrence Livermore National Laboratory (LLNL) report LLNL-TR-642494, Technoeconomic Evaluation of MEA versus Mixed Amines for CO2 Removal at Near-Commercial Scale at Duke Energy Gibson 3 Plant. We describe the assumptions and methodology used in the Laboratory's simulation of its understanding of Huaneng's novel amine solvent for CO2 capture with 35% mixed amine. The results of that simulation have been described in LLNL-TR-642494. The simulation was performed using ASPEN 7.0. The composition of Huaneng's novel amine solvent was estimated based on information gleaned from Huaneng patents. The chemistry of the process was described using nine equations, representing reactions within the absorber and stripper columns using the ELECTNRTL property method. As a rate-based ASPEN simulation model was not available to Lawrence Livermore at the time of writing, the height of a theoretical plate was estimated using open literature for similar processes. Composition of the flue gas was estimated based on information supplied by Duke Energy for Unit 3 of the Gibson plant. The simulation was scaled at one million short tons of CO2 absorbed per year. To aid stability of the model, convergence of the main solvent recycle loop was implemented manually, as described in the Blocks section below. Automatic convergence of this loop led to instability during the model iterations. Manual convergence of the loop enabled accurate representation and maintenance of model stability.

  18. Adjusted adaptive Lasso for covariate model-building in nonlinear mixed-effect pharmacokinetic models.

    PubMed

    Haem, Elham; Harling, Kajsa; Ayatollahi, Seyyed Mohammad Taghi; Zare, Najaf; Karlsson, Mats O

    2017-02-01

    One important aim in population pharmacokinetics (PK) and pharmacodynamics is identification and quantification of the relationships between the parameters and covariates. Lasso has been suggested as a technique for simultaneous estimation and covariate selection. In linear regression, it has been shown that Lasso does not possess oracle properties; an estimator with oracle properties asymptotically performs as though the true underlying model were given in advance. Adaptive Lasso (ALasso) with appropriate initial weights is claimed to possess oracle properties; however, it can lead to poor predictive performance when there is multicollinearity between covariates. This simulation study implemented a new version of ALasso, called adjusted ALasso (AALasso), to take into account the ratio of the standard error of the maximum likelihood (ML) estimator to the ML coefficient as the initial weight in ALasso to deal with multicollinearity in non-linear mixed-effect models. The performance of AALasso was compared with that of ALasso and Lasso. PK data were simulated in four set-ups from a one-compartment bolus input model. Covariates were created by sampling from a multivariate standard normal distribution with no, low (0.2), moderate (0.5), or high (0.7) correlation. The true covariates influenced only clearance at different magnitudes. AALasso, ALasso and Lasso were compared in terms of mean absolute prediction error and error of the estimated covariate coefficient. The results show that AALasso performed better in small data sets, even in those in which a high correlation existed between covariates. This makes AALasso a promising method for covariate selection in nonlinear mixed-effect models.
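    For intuition, adaptive-Lasso-style weighting can be implemented with the usual column-rescaling trick. The sketch below uses the adjusted initial weight described above (SE of the ML estimate relative to the ML coefficient) in a deliberately simplified linear setting rather than a PK model; all data, dimensions, and the use of OLS as the "ML fit" are assumptions for illustration only.

```python
# Illustrative adjusted-adaptive-Lasso-style selection via column rescaling
# (a sketch on a linear toy problem, not the paper's NONMEM-based method).
import numpy as np
import statsmodels.api as sm
from sklearn.linear_model import LassoCV

rng = np.random.default_rng(2)
n, p = 200, 6
X = rng.normal(size=(n, p))
X[:, 1] = 0.7 * X[:, 0] + 0.3 * X[:, 1]          # induce multicollinearity
y = 1.0 + 0.8 * X[:, 0] + rng.normal(0, 1, n)    # only covariate 0 matters

ml = sm.OLS(y, sm.add_constant(X)).fit()         # stand-in for the ML fit
w = ml.bse[1:] / np.abs(ml.params[1:])           # adjusted initial weights
X_scaled = X / w                                 # penalize by rescaling

lasso = LassoCV(cv=5).fit(X_scaled, y)
beta = lasso.coef_ / w                           # back-transform coefficients
print(np.round(beta, 3))                         # noise covariates shrink to 0
```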

  19. A Robust and Multi-Weighted Approach to Estimating Topographically Correlated Tropospheric Delays in Radar Interferograms

    PubMed Central

    Zhu, Bangyan; Li, Jiancheng; Chu, Zhengwei; Tang, Wei; Wang, Bin; Li, Dawei

    2016-01-01

    Spatial and temporal variations in the vertical stratification of the troposphere introduce significant propagation delays in interferometric synthetic aperture radar (InSAR) observations. Observations of small amplitude surface deformations and regional subsidence rates are plagued by tropospheric delays, and strongly correlated with topographic height variations. Phase-based tropospheric correction techniques assuming a linear relationship between interferometric phase and topography have been exploited and developed, with mixed success. Producing robust estimates of tropospheric phase delay, however, plays a critical role in increasing the accuracy of InSAR measurements. Meanwhile, few phase-based correction methods account for the spatially variable tropospheric delay over larger study regions. Here, we present a robust and multi-weighted approach to estimate the correlation between phase and topography that is relatively insensitive to confounding processes such as regional subsidence over larger regions as well as under varying tropospheric conditions. An expanded form of robust least squares is introduced to estimate the spatially variable correlation between phase and topography by splitting the interferograms into multiple blocks. Within each block, correlation is robustly estimated from the band-filtered phase and topography. Phase-elevation ratios are multiply weighted and extrapolated to each persistent scatterer (PS) pixel. We applied the proposed method to Envisat ASAR images over the Southern California area, USA, and found that our method mitigated the atmospheric noise better than the conventional phase-based method. The corrected ground surface deformation agreed better with those measured from GPS. PMID:27420066

  20. A Robust and Multi-Weighted Approach to Estimating Topographically Correlated Tropospheric Delays in Radar Interferograms.

    PubMed

    Zhu, Bangyan; Li, Jiancheng; Chu, Zhengwei; Tang, Wei; Wang, Bin; Li, Dawei

    2016-07-12

    Spatial and temporal variations in the vertical stratification of the troposphere introduce significant propagation delays in interferometric synthetic aperture radar (InSAR) observations. Observations of small amplitude surface deformations and regional subsidence rates are plagued by tropospheric delays, and strongly correlated with topographic height variations. Phase-based tropospheric correction techniques assuming a linear relationship between interferometric phase and topography have been exploited and developed, with mixed success. Producing robust estimates of tropospheric phase delay, however, plays a critical role in increasing the accuracy of InSAR measurements. Meanwhile, few phase-based correction methods account for the spatially variable tropospheric delay over larger study regions. Here, we present a robust and multi-weighted approach to estimate the correlation between phase and topography that is relatively insensitive to confounding processes such as regional subsidence over larger regions as well as under varying tropospheric conditions. An expanded form of robust least squares is introduced to estimate the spatially variable correlation between phase and topography by splitting the interferograms into multiple blocks. Within each block, correlation is robustly estimated from the band-filtered phase and topography. Phase-elevation ratios are multiply weighted and extrapolated to each persistent scatterer (PS) pixel. We applied the proposed method to Envisat ASAR images over the Southern California area, USA, and found that our method mitigated the atmospheric noise better than the conventional phase-based method. The corrected ground surface deformation agreed better with those measured from GPS.
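    A stripped-down sketch of the block-wise robust step is shown below: split a grid into blocks and robustly regress phase on height within each block to get a spatially variable phase-elevation ratio. The grids, block layout, noise model, and the choice of a Huber M-estimator are all assumptions; the paper's band-filtering and multi-weighted extrapolation to PS pixels are omitted.

```python
# Conceptual sketch: per-block robust regression of phase on elevation
# (synthetic grids; not the authors' full multi-weighted pipeline).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
ny, nx, nb = 120, 120, 3                  # grid size, 3x3 blocks of 40 px
z = rng.uniform(0, 2000, (ny, nx))        # elevation (m)
k_true = 1e-3 * (1 + 0.3 * rng.random((nb, nb)))   # varying true ratio
phase = np.empty((ny, nx))
for i in range(nb):
    for j in range(nb):
        sl = (slice(i * 40, (i + 1) * 40), slice(j * 40, (j + 1) * 40))
        phase[sl] = k_true[i, j] * z[sl] + rng.normal(0, 0.2, (40, 40))

k_hat = np.empty((nb, nb))
for i in range(nb):
    for j in range(nb):
        sl = (slice(i * 40, (i + 1) * 40), slice(j * 40, (j + 1) * 40))
        Xb = sm.add_constant(z[sl].ravel())
        rlm = sm.RLM(phase[sl].ravel(), Xb,
                     M=sm.robust.norms.HuberT()).fit()
        k_hat[i, j] = rlm.params[1]       # phase-elevation ratio per block
print(np.round(k_hat / k_true, 2))        # should all be close to 1
```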

  1. Determination of variability in leaf biomass densities of conifers and mixed conifers under different environmental conditions in the San Joaquin Valley air basin. Final report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Temple, P.J.; Mutters, R.J.; Adams, C.

    1995-06-01

    Biomass sampling plots were established at 29 locations within the dominant vegetation zones of the study area. Estimates of foliar biomass were made for each plot by three independent methods: regression analysis on the basis of tree diameter, calculation of the amount of light intercepted by the leaf canopy, and extrapolation from branch leaf area. Multivariate regression analysis was used to relate these foliar biomass estimates for oak plots and conifer plots to several independent predictor variables, including elevation, slope, aspect, temperature, precipitation, and soil chemical characteristics.
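    The first method listed above, regression on tree diameter, is typically a log-log allometric fit. The snippet below is a minimal sketch of that idea only; the diameters, leaf masses, and fitted coefficients are entirely hypothetical and are not taken from the report.

```python
# Hypothetical allometric regression of foliar biomass on tree diameter:
# ln(mass) = a + b * ln(dbh). Sample data are invented for illustration.
import numpy as np

dbh = np.array([12.0, 18.5, 25.0, 31.2, 40.8, 55.0])      # diameter (cm)
leaf_mass = np.array([4.1, 9.8, 18.2, 28.5, 51.0, 95.3])  # foliage (kg)

b, a = np.polyfit(np.log(dbh), np.log(leaf_mass), 1)      # slope, intercept

def predict(d):
    """Predicted foliar biomass (kg) for diameter d (cm)."""
    return np.exp(a) * d ** b

print(f"mass ~= {np.exp(a):.3f} * dbh^{b:.2f}")
print(predict(30.0))
```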

  2. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chang, Chong

    The electrical potential difference has been estimated across the mixing region of two plasmas with different degrees of ionization. The estimation has been carried out in two different contexts of a charge neutral mixing region and a charge non-neutral sheath. Ion energy gained due to the potential difference has also been estimated. In both analyses, ion energy gain is proportional to the degree of ionization, and a fairly large ionization appears to be needed for overcoming the potential energy barrier of strongly coupled plasmas.

  3. Clustering of longitudinal data by using an extended baseline: A new method for treatment efficacy clustering in longitudinal data.

    PubMed

    Schramm, Catherine; Vial, Céline; Bachoud-Lévi, Anne-Catherine; Katsahian, Sandrine

    2018-01-01

    Heterogeneity in treatment efficacy is a major concern in clinical trials. Clustering may help to identify the treatment responders and the non-responders. In the context of longitudinal cluster analyses, sample size and variability of the times of measurements are the main issues with the current methods. Here, we propose a new two-step method for the Clustering of Longitudinal data by using an Extended Baseline. The first step relies on a piecewise linear mixed model for repeated measurements with a treatment-time interaction. The second step clusters the random predictions and considers several parametric (model-based) and non-parametric (partitioning, ascendant hierarchical clustering) algorithms. A simulation study compares all options of the clustering of longitudinal data by using an extended baseline method with the latent-class mixed model. The clustering of longitudinal data by using an extended baseline method with the two model-based algorithms was the most robust option. The clustering of longitudinal data by using an extended baseline method with all the non-parametric algorithms failed when there were unequal variances of treatment effect between clusters or when the subgroups had unbalanced sample sizes. The latent-class mixed model failed when the between-patients slope variability was high. Two real data sets on neurodegenerative disease and on obesity illustrate the clustering of longitudinal data by using an extended baseline method and show how clustering may help to identify the marker(s) of the treatment response. The application of the clustering of longitudinal data by using an extended baseline method in exploratory analysis as the first stage before setting up stratified designs can provide a better estimation of treatment effect in future clinical trials.
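    A conceptual two-step sketch follows, simplified to a plain linear (rather than piecewise) mixed model: fit random slopes per patient, then cluster the predicted random effects to separate responders from non-responders. All data, column names, the two-cluster assumption, and the Gaussian-mixture choice are hypothetical stand-ins for the method's options.

```python
# Two-step sketch: (1) mixed model with random slopes, (2) model-based
# clustering of the predicted random effects (simplified, synthetic data).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(4)
n_id, n_t = 60, 6
ids = np.repeat(np.arange(n_id), n_t)
time = np.tile(np.arange(n_t), n_id)
resp = rng.choice([0.0, -1.0], n_id)[ids]    # two latent response groups
y = 10 + (0.2 + resp) * time + rng.normal(0, 0.5, n_id * n_t)
df = pd.DataFrame({"id": ids, "time": time, "y": y})

# Step 1: linear mixed model with a random slope for time
fit = smf.mixedlm("y ~ time", df, groups=df["id"],
                  re_formula="~time").fit()
slopes = np.array([ranef["time"] for ranef in fit.random_effects.values()])

# Step 2: cluster the predicted random slopes
labels = GaussianMixture(n_components=2, random_state=0).fit_predict(
    slopes.reshape(-1, 1))
print(np.bincount(labels))                   # cluster sizes
```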

  4. Health-Related Quality-of-Life Findings for the Prostate Cancer Prevention Trial

    PubMed Central

    2012-01-01

    Background The Prostate Cancer Prevention Trial (PCPT)—a randomized placebo-controlled study of the efficacy of finasteride in preventing prostate cancer—offered the opportunity to prospectively study effects of finasteride and other covariates on the health-related quality of life of participants in a multiyear trial. Methods We assessed three health-related quality-of-life domains (measured with the Health Survey Short Form–36: Physical Functioning, Mental Health, and Vitality scales) via questionnaires completed by PCPT participants at enrollment (3 months before randomization), at 6 months after randomization, and annually for 7 years. Covariate data obtained at enrollment from patient-completed questionnaires were included in our model. Mixed-effects model analyses and a cross-sectional presentation at three time points began at 6 months after randomization. All statistical tests were two-sided. Results For the physical function outcome (n = 16 077), neither the finasteride main effect nor the finasteride interaction with time were statistically significant. The effects of finasteride on physical function were minor and accounted for less than a 1-point difference over time in Physical Functioning scores (mixed-effect estimate = 0.07, 95% confidence interval [CI] = −0.28 to 0.42, P = .71). Comorbidities such as congestive heart failure (estimate = −5.64, 95% CI = −7.96 to −3.32, P < .001), leg pain (estimate = −2.57, 95% CI = −3.04 to −2.10, P < .001), and diabetes (estimate = −1.31, 95% CI = −2.04 to −0.57, P < .001) had statistically significant negative effects on physical function, as did current smoking (estimate = −2.34, 95% CI = −2.97 to −1.71, P < .001) and time on study (estimate = −1.20, 95% CI = −1.36 to −1.03, P < .001). Finasteride did not have a statistically significant effect on the other two dependent variables, mental health and vitality, either in the mixed-effects analyses or in the cross-sectional analysis at any of the three time points. Conclusion Finasteride did not negatively affect SF–36 Physical Functioning, Mental Health, or Vitality scores. PMID:22972968

  5. Favre-Averaged Turbulence Statistics in Variable Density Mixing of Buoyant Jets

    NASA Astrophysics Data System (ADS)

    Charonko, John; Prestridge, Kathy

    2014-11-01

    Variable density mixing of a heavy fluid jet with lower density ambient fluid in a subsonic wind tunnel was experimentally studied using Particle Image Velocimetry and Planar Laser Induced Fluorescence to simultaneously measure velocity and density. Flows involving the mixing of fluids with large density ratios are important in a range of physical problems including atmospheric and oceanic flows, industrial processes, and inertial confinement fusion. Here we focus on buoyant jets with coflow. Results from two different Atwood numbers, 0.1 (Boussinesq limit) and 0.6 (non-Boussinesq case), reveal that buoyancy is important for most of the turbulent quantities measured. Statistical characteristics of the mixing important for modeling these flows such as the PDFs of density and density gradients, turbulent kinetic energy, Favre averaged Reynolds stress, turbulent mass flux velocity, density-specific volume correlation, and density power spectra were also examined and compared with previous direct numerical simulations. Additionally, a method for directly estimating Reynolds-averaged velocity statistics on a per-pixel basis is extended to Favre-averages, yielding improved accuracy and spatial resolution as compared to traditional post-processing of velocity and density fields.
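    The Favre (density-weighted) average underlying these statistics is simple to state: for each pixel, the Favre mean is the ensemble mean of rho*u divided by the ensemble mean of rho, and Favre fluctuations are taken about that mean. The sketch below illustrates this on synthetic fields; the array shapes, frame count, and signal values are arbitrary assumptions.

```python
# Per-pixel Favre averaging sketch: tilde{u} = <rho u>/<rho>, with the
# Favre fluctuation u'' = u - tilde{u}. Fields here are synthetic.
import numpy as np

rng = np.random.default_rng(5)
n_frames, ny, nx = 500, 64, 64
rho = 1.0 + 0.5 * rng.random((n_frames, ny, nx))   # instantaneous density
u = 2.0 + rng.normal(0, 0.3, (n_frames, ny, nx))   # instantaneous velocity

u_reynolds = u.mean(axis=0)                        # plain ensemble mean
u_favre = (rho * u).mean(axis=0) / rho.mean(axis=0)

u_pp = u - u_favre                                 # Favre fluctuation u''
favre_stress = (rho * u_pp * u_pp).mean(axis=0) / rho.mean(axis=0)
print(u_reynolds.mean(), u_favre.mean(), favre_stress.mean())
```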

  6. Gradients estimation from random points with volumetric tensor in turbulence

    NASA Astrophysics Data System (ADS)

    Watanabe, Tomoaki; Nagata, Koji

    2017-12-01

    We present an estimation method of fully-resolved/coarse-grained gradients from randomly distributed points in turbulence. The method is based on a linear approximation of spatial gradients expressed with the volumetric tensor, which is a 3 × 3 matrix determined by a geometric distribution of the points. The coarse-grained gradient can be considered as a low-pass filtered gradient, whose cutoff is estimated with the eigenvalues of the volumetric tensor. The present method, the volumetric tensor approximation, is tested for velocity and passive scalar gradients in incompressible planar jet and mixing layer. Comparison with a finite difference approximation on a Cartesian grid shows that the volumetric tensor approximation computes the coarse-grained gradients fairly well at a moderate computational cost under various conditions of spatial distributions of points. We also show that imposing the solenoidal condition improves the accuracy of the present method for solenoidal vectors, such as a velocity vector in incompressible flows, especially when the number of the points is not large. The volumetric tensor approximation with 4 points poorly estimates the gradient because of the anisotropic distribution of the points. Increasing the number of points from 4 significantly improves the accuracy. Although the coarse-grained gradient changes with the cutoff length, the volumetric tensor approximation yields the coarse-grained gradient whose magnitude is close to the one obtained by the finite difference. We also show that the velocity gradient estimated with the present method well captures the turbulence characteristics such as local flow topology, amplification of enstrophy and strain, and energy transfer across scales.
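    The core linear-approximation step can be sketched as a small least-squares problem: collect displacements from a reference point, form the 3 × 3 moment matrix of those displacements, and invert it against the function differences. This is a plain least-squares construction in the same spirit as the paper; whether it matches the paper's exact volumetric-tensor definition is not claimed, and the test function and points are made up.

```python
# Least-squares gradient from scattered points: solve (dx'dx) g = dx'df,
# where dx'dx is a 3x3 moment (volumetric-type) tensor. Illustrative only.
import numpy as np

def gradient_from_points(x0, f0, pts, fvals):
    """Least-squares estimate of grad f at x0 from scattered samples."""
    dx = pts - x0                      # displacement vectors, shape (n, 3)
    df = fvals - f0                    # function differences, shape (n,)
    A = dx.T @ dx                      # 3x3 moment tensor of displacements
    return np.linalg.solve(A, dx.T @ df)

rng = np.random.default_rng(6)
f = lambda p: 2 * p[..., 0] - p[..., 1] + 0.5 * p[..., 2]  # grad = (2,-1,0.5)
x0 = np.zeros(3)
pts = x0 + rng.normal(0, 0.05, (8, 3))  # 8 nearby random points
g = gradient_from_points(x0, f(x0), pts, f(pts))
print(np.round(g, 3))                   # ~ [ 2.  -1.   0.5]
```

    Note that with only 4 points the moment matrix is often poorly conditioned, which mirrors the anisotropy issue reported above; adding points regularizes it.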

  7. A revised and unified pressure-clamp/relaxation theory for studying plant cell water relations with pressure probes: in-situ determination of cell volume for calculation of volumetric elastic modulus and hydraulic conductivity.

    PubMed

    Knipfer, T; Fei, J; Gambetta, G A; Shackel, K A; Matthews, M A

    2014-10-21

    The cell-pressure-probe is a unique tool to study plant water relations in-situ. Inaccuracy in the estimation of cell volume (νo) is the major source of error in the calculation of both cell volumetric elastic modulus (ε) and cell hydraulic conductivity (Lp). Estimates of νo and Lp can be obtained with the pressure-clamp (PC) and pressure-relaxation (PR) methods. In theory, both methods should result in comparable νo and Lp estimates, but this has not been the case. In this study, the existing νo-theories for PC and PR methods were reviewed and clarified. A revised νo-theory was developed that is equally valid for the PC and PR methods. The revised theory was used to determine νo for two extreme scenarios of solute mixing between the experimental cell and sap in the pressure probe microcapillary. Using a fully automated cell-pressure-probe (ACPP) on leaf epidermal cells of Tradescantia virginiana, the validity of the revised theory was tested with experimental data. Calculated νo values from both methods were in the range of optically determined νo (1.1 to 5.0 nL) for T. virginiana. However, the PC method produced a systematically lower (21%) calculated νo compared to the PR method. Effects of solute mixing could only explain a potential error in calculated νo of <3%. For both methods, this discrepancy in νo was almost identical to the discrepancy in the measured ratio of ΔV/ΔP (total change in microcapillary sap volume versus corresponding change in cell turgor) of 19%, which is a fundamental parameter in calculating νo. It followed from the revised theory that the ratio of ΔV/ΔP was inversely related to the solute reflection coefficient. This highlighted that treating the experimental cell as an ideal osmometer in both methods is potentially not correct. Effects of non-ideal osmotic behavior by transmembrane solute movement may be minimized in the PR as compared to the PC method. Copyright © 2014 Elsevier Ltd. All rights reserved.

  8. Reliance on condoms for contraceptive protection among HIV care and treatment clients: a mixed methods study on contraceptive choice and motivation within a generalised epidemic

    PubMed Central

    Church, Kathryn; Wringe, Alison; Fakudze, Phelele; Kikuvi, Joshua; Nhlabatsi, Zelda; Masuku, Rachel; Initiative, Integra; Mayhew, Susannah H

    2014-01-01

    Objectives To (i) describe the contraceptive practices of HIV care and treatment (HCTx) clients in Manzini, Swaziland, including their unmet needs for family planning (FP), and compare these with population-level estimates; and (ii) qualitatively explore the causal factors influencing contraceptive choice and use. Methods Mixed quantitative and qualitative methods were used. A cross-sectional survey conducted among HCTx clients (N=611) investigated FP and condom use patterns. Using descriptive statistics, findings were compared with population-level estimates derived from Swaziland Demographic and Health Survey data, weighted for clustering. In-depth interviews were conducted with HCTx providers (n=16) and clients (n=22) and analysed thematically. Results 64% of HCTx clients reported current contraceptive use; most relied on condoms alone, few practiced dual method use. Rates of condom use for FP among female HCTx clients (77%, 95% CI 71% to 82%) were higher than population-level estimates in the study region (50% HIV-positive, 95% CI 43% to 57%; 37% HIV-negative, 95% CI 31% to 43%); rates of unmet FP needs were similar when condom use consistency was accounted for (32% HCTx, 95% CI 26% to 37%; vs 35% HIV-positive, 95% CI 28% to 43%; 29% HIV-negative, 95% CI 24% to 35%). Qualitative analysis identified motivational factors influencing FP choice: fears of reinfection; a programmatic focus on condoms for people living with HIV; changing sexual behaviours before and after antiretroviral therapy (ART) initiation; failure to disclose to partners; and contraceptive side effect fears. Conclusions Fears of reinfection prevailed over consideration of pregnancy risk. Given current evidence on reinfection, HCTx services must move beyond a narrow focus on condom promotion, particularly for those in seroconcordant relationships, and consider diverse strategies to meet reproductive needs. PMID:24695990

  9. Methods of estimating or accounting for neighborhood associations with health using complex survey data.

    PubMed

    Brumback, Babette A; Cai, Zhuangyu; Dailey, Amy B

    2014-05-15

    Reasons for health disparities may include neighborhood-level factors, such as availability of health services, social norms, and environmental determinants, as well as individual-level factors. Investigating health inequalities using nationally or locally representative data often requires an approach that can accommodate a complex sampling design, in which individuals have unequal probabilities of selection into the study. The goal of the present article is to review and compare methods of estimating or accounting for neighborhood influences with complex survey data. We considered 3 types of methods, each generalized for use with complex survey data: ordinary regression, conditional likelihood regression, and generalized linear mixed-model regression. The relative strengths and weaknesses of each method differ from one study to another; we provide an overview of the advantages and disadvantages of each method theoretically, in terms of the nature of the estimable associations and the plausibility of the assumptions required for validity, and also practically, via a simulation study and 2 epidemiologic data analyses. The first analysis addresses determinants of repeat mammography screening use using data from the 2005 National Health Interview Survey. The second analysis addresses disparities in preventive oral health care using data from the 2008 Florida Behavioral Risk Factor Surveillance System Survey.

  10. Lipid-Based Immuno-Magnetic Separation of Archaea from a Mixed Community

    NASA Astrophysics Data System (ADS)

    Frickle, C. M.; Bailey, J.; Lloyd, K. G.; Shumaker, A.; Flood, B.

    2014-12-01

    Despite advancing techniques in microbiology, an estimated 98% of all microbial species on Earth have yet to be isolated in pure culture. Natural samples, once transferred to the lab, are commonly overgrown by "weed" species whose metabolic advantages enable them to monopolize available resources. Developing new methods for the isolation of thus-far uncultivable microorganisms would allow us to better understand their ecology, physiology and genetic potential. Physically separating target organisms from a mixed community is one approach that may allow enrichment and growth of the desired strain. Here we report on a novel method that uses known physiological variations between taxa, in this case membrane lipids, to segregate the desired organisms while keeping them alive and viable for reproduction. Magnetic antibodies bound to the molecule squalene, which is found in the cell membranes of certain archaea, but not bacteria, enable separation of archaea from bacteria in mixed samples. Viability of cells was tested by growing the separated fractions in batch culture. Efficacy and optimization of the antibody separation technique are being evaluated using qPCR and cell counts. Future work will apply this new separation technique to natural samples.

  11. The Prediction and Analysis of Jet Flows and Scattered Turbulent Mixing Noise about Flight Vehicle Airframes

    NASA Technical Reports Server (NTRS)

    Miller, Steven A. E.

    2014-01-01

    Jet flows interacting with nearby surfaces exhibit a complex behavior in which acoustic and aerodynamic characteristics are altered. The physical understanding and prediction of these characteristics are essential to designing future low noise aircraft. A new approach is created for predicting scattered jet mixing noise that utilizes an acoustic analogy and steady Reynolds-averaged Navier-Stokes solutions. A tailored Green's function accounts for the propagation of mixing noise about the airframe and is calculated numerically using a newly developed ray tracing method. The steady aerodynamic statistics, associated unsteady sound source, and acoustic intensity are examined as jet conditions are varied about a large flat plate. A non-dimensional number is proposed to estimate the effect of the aerodynamic noise source relative to jet operating condition and airframe position.The steady Reynolds-averaged Navier-Stokes solutions, acoustic analogy, tailored Green's function, non-dimensional number, and predicted noise are validated with a wide variety of measurements. The combination of the developed theory, ray tracing method, and careful implementation in a stand-alone computer program result in an approach that is more first principles oriented than alternatives, computationally efficient, and captures the relevant physics of fluid-structure interaction.

  12. The Prediction and Analysis of Jet Flows and Scattered Turbulent Mixing Noise About Flight Vehicle Airframes

    NASA Technical Reports Server (NTRS)

    Miller, Steven A.

    2014-01-01

    Jet flows interacting with nearby surfaces exhibit a complex behavior in which acoustic and aerodynamic characteristics are altered. The physical understanding and prediction of these characteristics are essential to designing future low noise aircraft. A new approach is created for predicting scattered jet mixing noise that utilizes an acoustic analogy and steady Reynolds-averaged Navier-Stokes solutions. A tailored Green's function accounts for the propagation of mixing noise about the airframe and is calculated numerically using a newly developed ray tracing method. The steady aerodynamic statistics, associated unsteady sound source, and acoustic intensity are examined as jet conditions are varied about a large flat plate. A non-dimensional number is proposed to estimate the effect of the aerodynamic noise source relative to jet operating condition and airframe position. The steady Reynolds-averaged Navier-Stokes solutions, acoustic analogy, tailored Green's function, non-dimensional number, and predicted noise are validated with a wide variety of measurements. The combination of the developed theory, ray tracing method, and careful implementation in a stand-alone computer program result in an approach that is more first principles oriented than alternatives, computationally efficient, and captures the relevant physics of fluid-structure interaction.

  13. The application of mixed methods designs to trauma research.

    PubMed

    Creswell, John W; Zhang, Wanqing

    2009-12-01

    Despite the use of quantitative and qualitative data in trauma research and therapy, mixed methods studies in this field have not been analyzed to help researchers designing investigations. This discussion begins by reviewing four core characteristics of mixed methods research in the social and human sciences. Combining these characteristics, the authors focus on four select mixed methods designs that are applicable in trauma research. These designs are defined and their essential elements noted. Applying these designs to trauma research, a search was conducted to locate mixed methods trauma studies. From this search, one sample study was selected, and its characteristics of mixed methods procedures noted. Finally, drawing on other mixed methods designs available, several follow-up mixed methods studies were described for this sample study, enabling trauma researchers to view design options for applying mixed methods research in trauma investigations.

  14. Managing Pacific salmon escapements: The gaps between theory and reality

    USGS Publications Warehouse

    Knudsen, E. Eric; Knudsen, E. Eric; Steward, Cleveland R.; MacDonald, Donald D.; Williams, Jack E.; Reiser, Dudley W.

    1999-01-01

    There are myriad challenges to estimating intrinsic production capacity for Pacific salmon populations that are heavily exploited and/or suffering from habitat alteration. Likewise, it is difficult to determine whether perceived decreases in production are due to harvest, habitat, or hatchery influences, natural variation, or some combination of all four. There are dramatic gaps between the true nature of the salmon spawner/recruit relationship and the theoretical basis for describing and understanding the relationship. Importantly, there are also extensive practical difficulties associated with gathering and interpreting accurate escapement and run-size information and applying it to population management. Paradoxically, certain aspects of salmon management may well be contributing to losses in abundance and biodiversity, including harvesting salmon in mixed population fisheries, grouping populations into management units subject to a common harvest rate, and fully exploiting all available hatchery fish at the expense of wild fish escapements. Information on U.S. Pacific salmon escapement goal-setting methods, escapement data collection methods and estimation types, and the degree to which stocks are subjected to mixed stock fisheries was summarized and categorized for 1,025 known management units consisting of 9,430 known populations. Using criteria developed in this study, only 1% of U.S. escapement goals are set by methods rated as excellent. Escapement goals for 16% of management units were rated as good. Over 60% of escapement goals have been set by methods rated as either fair or poor and 22% of management units have no escapement goals at all. Of the 9,430 populations for which any information was available, 6,614 (70%) had sufficient information to categorize the method by which escapement data are collected. Of those, data collection methods were rated as excellent for 1%, good for 1%, fair for 2%, and poor for 52%. Escapement estimates are not made for 44% of populations. Escapement estimation type (quality of the data resulting from survey methods) was rated as excellent for <1%, good for 30%, fair for 3%, poor for 22%, and nonexistent for 45%. Numerous recommendations for improvements in escapement management are made in this chapter. In general, improvements are needed on theoretical escapement management techniques, escapement goal setting methods, and escapement and run size data quality. There is also a need to change managers' and harvesters' expectations to coincide with the natural variation and uncertainty in the abundance of salmon populations. All the recommendations are aimed at optimizing the number of spawners: healthy escapements ensure salmon sustainability by providing eggs for future production, nutrients to the system, and genetic diversity.

  15. Sample size considerations for paired experimental design with incomplete observations of continuous outcomes.

    PubMed

    Zhu, Hong; Xu, Xiaohan; Ahn, Chul

    2017-01-01

    Paired experimental design is widely used in clinical and health behavioral studies, where each study unit contributes a pair of observations. Investigators often encounter incomplete observations of paired outcomes in the data collected. Some study units contribute complete pairs of observations, while the others contribute either pre- or post-intervention observations. Statistical inference for paired experimental design with incomplete observations of continuous outcomes has been extensively studied in the literature. However, sample size methods for such study designs are sparsely available. We derive a closed-form sample size formula based on the generalized estimating equation approach by treating the incomplete observations as missing data in a linear model. The proposed method properly accounts for the impact of the mixed structure of observed data: a combination of paired and unpaired outcomes. The sample size formula is flexible to accommodate different missing patterns, magnitude of missingness, and correlation parameter values. We demonstrate that under complete observations, the proposed generalized estimating equation sample size estimate is the same as that based on the paired t-test. In the presence of missing data, the proposed method would lead to a more accurate sample size estimate compared with the crude adjustment. Simulation studies are conducted to evaluate the finite-sample performance of the generalized estimating equation sample size formula. A real application example is presented for illustration.
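    The abstract notes that with complete observations the GEE-based sample size reduces to the paired t-test sample size. The snippet below sketches only that complete-data special case, using a normal approximation rather than t quantiles; the effect size, standard deviations, and correlation are hypothetical inputs, and the paper's missing-data adjustment is not reproduced here.

```python
# Complete-data special case: pairs needed to detect a mean difference
# delta (normal approximation to the paired t-test sample size).
from math import ceil
from scipy.stats import norm

def paired_n(delta, sd1, sd2, rho, alpha=0.05, power=0.80):
    """Number of pairs for a two-sided test of mean difference delta."""
    var_d = sd1**2 + sd2**2 - 2 * rho * sd1 * sd2  # Var of paired differences
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return ceil(z**2 * var_d / delta**2)

print(paired_n(delta=0.5, sd1=1.0, sd2=1.0, rho=0.5))  # 32 pairs
```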

  16. Application of age estimation methods based on teeth eruption: how easy is Olze method to use?

    PubMed

    De Angelis, D; Gibelli, D; Merelli, V; Botto, M; Ventura, F; Cattaneo, C

    2014-09-01

    The development of new methods for age estimation has become an urgent issue over time because of increasing immigration and the need to accurately estimate the age of subjects who lack valid identity documents. Methods of age estimation are divided into skeletal and dental ones, and among the latter, Olze's method is one of the most recent, since it was introduced in 2010 with the aim to identify the legal age of 18 and 21 years by evaluating the different stages of development of the periodontal ligament of the third molars with closed root apices. The present study aims at verifying the applicability of the method to the daily forensic practice, with special focus on the interobserver repeatability. Olze's method was applied by three different observers (two physicians and one dentist without a specific training in Olze's method) to 61 orthopantomograms from subjects of mixed ethnicity aged between 16 and 51 years. The analysis took into consideration the lower third molars. The results provided by the different observers were then compared in order to verify the interobserver error. Results showed that interobserver error varies between 43% and 57% for the right lower third molar (M48) and between 23% and 49% for the left lower third molar (M38). The chi-square test did not show significant differences according to the side of teeth and type of professional figure. The results prove that Olze's method is not easy to apply when used by personnel without adequate training, because of an intrinsic interobserver error. Since it is however a crucial method in age determination, it should be used only by experienced observers after an intensive and specific training.

  17. Estimating contact rates at a mass gathering by using video analysis: a proof-of-concept project

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rainey, Jeanette J.; Cheriyadat, Anil; Radke, Richard J.

    2014-10-24

    Current approaches for estimating social mixing patterns and infectious disease transmission at mass gatherings have been limited by various constraints, including low participation rates for volunteer-based research projects and challenges in quantifying spatially and temporally accurate person-to-person interactions. We developed a proof-of-concept project to assess the use of automated video analysis for estimating contact rates of attendees of the GameFest 2013 event at Rensselaer Polytechnic Institute (RPI) in Troy, New York. Video tracking and analysis algorithms were used to estimate the number and duration of contacts for 5 attendees during a 3-minute clip from the RPI video. Attendees were considered to have a contact event if the distance between them and another person was ≤1 meter. Contact duration was estimated in seconds. We also simulated 50 attendees assuming random mixing, using a geospatially accurate representation of the same GameFest location. The 5 attendees had an overall median of 2 contact events during the 3-minute video clip (range: 0-6). Contact events varied from less than 5 seconds to the full duration of the 3-minute clip. The random mixing simulation was visualized and presented as a contrasting example. We were able to estimate the number and duration of contacts for five GameFest attendees from a 3-minute video clip, which can be compared with a random mixing simulation model at the same location. In conclusion, the next phase will involve scaling the system for simultaneous analysis of mixing patterns from hours-long videos and comparing our results with other approaches for collecting contact data from mass-gathering attendees.
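
    A minimal sketch of the stated contact rule (≤1 m, duration in seconds), assuming tracked ground-plane positions at a known frame rate; the authors' video tracking pipeline itself is not reproduced here.

    ```python
    # Hedged sketch: <= 1 m counts as contact; a contact "event" is a maximal
    # run of consecutive in-contact frames. Track format and the 30 fps frame
    # rate are assumptions, not the authors' pipeline.
    import numpy as np

    def contacts(track_a, track_b, fps=30.0, radius_m=1.0):
        """track_a, track_b: (n_frames, 2) ground-plane positions in metres."""
        dist = np.linalg.norm(track_a - track_b, axis=1)
        in_contact = dist <= radius_m
        n_events = np.count_nonzero(np.diff(in_contact.astype(int)) == 1)
        n_events += int(in_contact[0])            # count a contact at frame 0
        return n_events, in_contact.sum() / fps   # events, total seconds

    rng = np.random.default_rng(0)
    a = np.cumsum(rng.normal(0.0, 0.02, (5400, 2)), axis=0)  # 3 min at 30 fps
    b = a + rng.normal(0.0, 1.0, (5400, 2))                  # nearby attendee
    print(contacts(a, b))
    ```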

  19. Ensemble of trees approaches to risk adjustment for evaluating a hospital's performance.

    PubMed

    Liu, Yang; Traskin, Mikhail; Lorch, Scott A; George, Edward I; Small, Dylan

    2015-03-01

    A commonly used method for evaluating a hospital's performance on an outcome is to compare the hospital's observed outcome rate to its expected outcome rate given its patient (case) mix and service. The process of calculating the hospital's expected outcome rate given its patient mix and service is called risk adjustment (Iezzoni 1997). Risk adjustment is critical for accurately evaluating and comparing hospitals' performance, since we would not want to unfairly penalize a hospital just because it treats sicker patients. The key to risk adjustment is accurately estimating the probability of an outcome given patient characteristics. For cases with binary outcomes, the method commonly used in risk adjustment is logistic regression. In this paper, we consider ensemble-of-trees methods as alternatives for risk adjustment, including random forests and Bayesian additive regression trees (BART). Both random forests and BART are modern machine learning methods that have recently been shown to have excellent predictive performance in many settings. We apply these methods to carry out risk adjustment for the performance of neonatal intensive care units (NICUs). We show that these ensemble-of-trees methods outperform logistic regression in predicting mortality among babies treated in NICUs, and provide a superior method of risk adjustment compared to logistic regression.
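
    A hedged sketch of the observed-to-expected comparison with interchangeable risk models; the DataFrame `df` and the column names `hospital` and `outcome` are invented for illustration, and in practice the expected probabilities would be estimated out-of-sample.

    ```python
    # Hedged sketch of risk adjustment as an O/E comparison; data names are
    # assumptions, and cross-fitting is omitted for brevity.
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.linear_model import LogisticRegression

    def observed_to_expected(df, features, model):
        model.fit(df[features], df["outcome"])
        df = df.assign(expected=model.predict_proba(df[features])[:, 1])
        per_hosp = df.groupby("hospital").agg(observed=("outcome", "mean"),
                                              expected=("expected", "mean"))
        return per_hosp["observed"] / per_hosp["expected"]  # O/E per hospital

    # Either risk model plugs in; the paper argues tree ensembles predict better:
    # oe_rf = observed_to_expected(df, cols, RandomForestClassifier(n_estimators=500))
    # oe_lr = observed_to_expected(df, cols, LogisticRegression(max_iter=1000))
    ```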

  20. Canopy-scale flux measurements and bottom-up emission estimates of volatile organic compounds from a mixed oak and hornbeam forest in northern Italy

    NASA Astrophysics Data System (ADS)

    Acton, W. Joe F.; Schallhart, Simon; Langford, Ben; Valach, Amy; Rantala, Pekka; Fares, Silvano; Carriero, Giulia; Tillmann, Ralf; Tomlinson, Sam J.; Dragosits, Ulrike; Gianelle, Damiano; Hewitt, C. Nicholas; Nemitz, Eiko

    2016-06-01

    This paper reports the fluxes and mixing ratios of biogenically emitted volatile organic compounds (BVOCs) 4 m above a mixed oak and hornbeam forest in northern Italy. Fluxes of methanol, acetaldehyde, isoprene, methyl vinyl ketone + methacrolein, methyl ethyl ketone and monoterpenes were obtained using both a proton-transfer-reaction mass spectrometer (PTR-MS) and a proton-transfer-reaction time-of-flight mass spectrometer (PTR-ToF-MS) together with the methods of virtual disjunct eddy covariance (using PTR-MS) and eddy covariance (using PTR-ToF-MS). Isoprene was the dominant emitted compound with a mean daytime flux of 1.9 mg m-2 h-1. Mixing ratios, recorded 4 m above the canopy, were dominated by methanol with a mean value of 6.2 ppbv over the 28-day measurement period. Comparison of isoprene fluxes calculated using the PTR-MS and PTR-ToF-MS showed very good agreement, while comparison of the monoterpene fluxes suggested a slight overestimation of the flux by the PTR-MS. A basal isoprene emission rate for the forest of 1.7 mg m-2 h-1 was calculated using the Model of Emissions of Gases and Aerosols from Nature (MEGAN) isoprene emission algorithms (Guenther et al., 2006). A detailed tree-species distribution map for the site enabled the leaf-level emission of isoprene and monoterpenes recorded using gas-chromatography mass spectrometry (GC-MS) to be scaled up to produce a bottom-up canopy-scale flux. This was compared with the top-down canopy-scale flux obtained by measurements. For monoterpenes, the two estimates were closely correlated and this correlation improved when the plant-species composition in the individual flux footprint was taken into account. However, the bottom-up approach significantly underestimated the isoprene flux, compared with the top-down measurements, suggesting that the leaf-level measurements were not representative of actual emission rates.
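
    A sketch of the Guenther-style light and temperature activity factors by which a basal isoprene emission rate such as the 1.7 mg m-2 h-1 above is scaled to ambient conditions; the constants follow the widely published formulation, but this is a simplification of the full MEGAN canopy model, not the authors' exact setup.

    ```python
    # Hedged sketch: Guenther-style light (gamma_light) and temperature
    # (gamma_temp) activity factors; constants from the standard formulation.
    import numpy as np

    R = 8.314                     # J mol-1 K-1
    ALPHA, C_L1 = 0.0027, 1.066   # light response constants
    C_T1, C_T2 = 95000.0, 230000.0
    T_S, T_M = 303.0, 314.0       # standard and optimum temperatures (K)

    def gamma_light(par):
        """par: photosynthetically active radiation (umol m-2 s-1)."""
        return ALPHA * C_L1 * par / np.sqrt(1.0 + ALPHA**2 * par**2)

    def gamma_temp(t_leaf_k):
        x = np.exp(C_T1 * (t_leaf_k - T_S) / (R * T_S * t_leaf_k))
        return x / (1.0 + np.exp(C_T2 * (t_leaf_k - T_M) / (R * T_S * t_leaf_k)))

    # Scale the basal rate reported above (1.7 mg m-2 h-1) to ambient conditions:
    print(1.7 * gamma_light(1000.0) * gamma_temp(303.0))  # ~1.6 mg m-2 h-1
    ```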

  1. Canopy-scale flux measurements and bottom-up emission estimates of volatile organic compounds from a mixed oak and hornbeam forest in northern Italy

    NASA Astrophysics Data System (ADS)

    Acton, W. J. F.; Schallhart, S.; Langford, B.; Valach, A.; Rantala, P.; Fares, S.; Carriero, G.; Tillmann, R.; Tomlinson, S. J.; Dragosits, U.; Gianelle, D.; Hewitt, C. N.; Nemitz, E.

    2015-10-01

    This paper reports the fluxes and mixing ratios of biogenically emitted volatile organic compounds (BVOCs) 4 m above a mixed oak and hornbeam forest in northern Italy. Fluxes of methanol, acetaldehyde, isoprene, methyl vinyl ketone + methacrolein, methyl ethyl ketone and monoterpenes were obtained using both a proton-transfer-reaction mass spectrometer (PTR-MS) and a proton-transfer-reaction time-of-flight mass spectrometer (PTR-ToF-MS) together with the methods of virtual disjunct eddy covariance (PTR-MS) and eddy covariance (PTR-ToF-MS). Isoprene was the dominant emitted compound with a mean daytime flux of 1.9 mg m-2 h-1. Mixing ratios, recorded 4 m above the canopy, were dominated by methanol with a mean value of 6.2 ppbv over the 28-day measurement period. Comparison of isoprene fluxes calculated using the PTR-MS and PTR-ToF-MS showed very good agreement, while comparison of the monoterpene fluxes suggested a slight overestimation of the flux by the PTR-MS. A basal isoprene emission rate for the forest of 1.7 mg m-2 h-1 was calculated using the MEGAN isoprene emission algorithms (Guenther et al., 2006). A detailed tree species distribution map for the site enabled the leaf-level emissions of isoprene and monoterpenes recorded using GC-MS to be scaled up to produce a "bottom-up" canopy-scale flux. This was compared with the "top-down" canopy-scale flux obtained by measurements. For monoterpenes, the two estimates were closely correlated and this correlation improved when the plant species composition in the individual flux footprint was taken into account. However, the bottom-up approach significantly underestimated the isoprene flux, compared with the top-down measurements, suggesting that the leaf-level measurements were not representative of actual emission rates.

  2. Estimating Mixing Heights Using Microwave Temperature Profiler

    NASA Technical Reports Server (NTRS)

    Nielson-Gammon, John; Powell, Christina; Mahoney, Michael; Angevine, Wayne

    2008-01-01

    A paper describes the use of the Microwave Temperature Profiler (MTP) to measure the planetary boundary layer thermal structure needed for air quality forecasting, since the mixing layer (ML) height determines the volume in which daytime pollution is primarily concentrated. This is the first time that an airborne temperature profiler has been used to measure the mixing layer height; normally this is done using a radar wind profiler, which is both noisy and large. The MTP was deployed during the Texas 2000 Air Quality Study (TexAQS-2000). An objective technique was developed and tested for estimating the ML height from the MTP vertical temperature profiles. In order to calibrate the technique and evaluate the usefulness of this approach, estimates from a variety of measurements during TexAQS-2000 were compared: ML height estimates from radiosondes, radar wind profilers, an aerosol backscatter lidar, and in-situ aircraft measurements, in addition to those from the MTP.
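
    One common objective way to get an ML height from a temperature profile is the parcel method sketched below: find the height where potential temperature first exceeds its near-surface value. This is a generic illustration, not necessarily the exact algorithm developed for the MTP.

    ```python
    # Hedged sketch of the parcel method on a synthetic sounding; the 0.5 K
    # excess threshold and the profile itself are illustrative assumptions.
    import numpy as np

    def potential_temperature(t_k, p_hpa):
        return t_k * (1000.0 / p_hpa) ** 0.286

    def mixing_height_parcel(z_m, t_k, p_hpa, excess_k=0.5):
        theta = potential_temperature(t_k, p_hpa)
        above = np.nonzero(theta > theta[0] + excess_k)[0]
        return z_m[above[0]] if above.size else float("nan")

    # Synthetic sounding: well mixed up to ~1.2 km, stable above.
    z = np.arange(0.0, 3000.0, 50.0)
    p = 1000.0 * np.exp(-z / 8000.0)
    theta = 300.0 + np.where(z < 1200.0, 0.0, 0.005 * (z - 1200.0))
    t = theta * (p / 1000.0) ** 0.286
    print(mixing_height_parcel(z, t, p))  # ~1350 m with the 0.5 K excess
    ```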

  3. Understanding the Essential Meaning of Measured Changes in Weight and Body Composition Among Women During and After Adjuvant Treatment for Breast Cancer: A Mixed-Methods Study.

    PubMed

    Pedersen, Birgith; Groenkjaer, Mette; Falkmer, Ursula; Delmar, Charlotte

    Changes in weight and body composition among women during and after adjuvant antineoplastic treatment for breast cancer may influence long-term survival and quality of life. Research on factual weight changes is diverse and contrasting, and their influence on women's perception of body and self seems to be insufficiently explored. The aim of this study was to expand the understanding of the association between changes in weight and body composition and the women's perception of body and self. A mixed-methods research design was used. Data consisted of weight and body composition measures from 95 women with breast cancer during the 18 months after surgery. Twelve women from this cohort were interviewed individually at 12 months. A linear mixed model and logistic regression were used to estimate changes in the repeated measures and odds ratios. Interviews were analyzed guided by existential phenomenology. Joint displays and integrative mixed-methods interpretation demonstrated that even small weight gains, an extended waist, and weight loss were associated with fear of recurrence of breast cancer. Perceiving an ambiguous, transforming body, the women moved between a unified body subject and the body as an object dissociated into "I" and "it" while fighting against or accepting the body changes. The integrated findings demonstrated that factual weight changes do not correspond with the perceived changes and may trigger existential threats. Transition to a new habitual body demands that health practitioners engage in joint narrative work to reveal how the changes impact the women's body and self-perception, independent of how they are displayed quantitatively.
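
    A minimal sketch of the quantitative arm: a linear mixed model for repeated weight measures with a random intercept per woman. The synthetic data, column names, and effect sizes are invented for illustration.

    ```python
    # Hedged sketch: random-intercept model for repeated weight measurements
    # using statsmodels MixedLM; all numbers below are synthetic.
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(1)
    times = [0, 6, 12, 18]  # months past surgery
    rows = [(i, t, 70.0 + b + 0.08 * t + rng.normal(0.0, 1.0))
            for i, b in enumerate(rng.normal(0.0, 5.0, 95)) for t in times]
    df = pd.DataFrame(rows, columns=["id", "month", "weight_kg"])

    fit = smf.mixedlm("weight_kg ~ month", df, groups=df["id"]).fit()
    print(fit.params["month"])  # average weight change (kg) per month, ~0.08
    ```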

  4. Synthesis of oxidized guar gum by dry method and its application in reactive dye printing.

    PubMed

    Gong, Honghong; Liu, Mingzhu; Zhang, Bing; Cui, Dapeng; Gao, Chunmei; Ni, Boli; Chen, Jiucun

    2011-12-01

    The aim of this study was to prepare oxidized guar gum by a simple dry method, based on guar gum, hydrogen peroxide, and a small amount of solvent. To obtain a product with a viscosity suitable for reactive dye printing, the effects of various factors, such as the amounts of oxidant and solvent and the reaction temperature and time, were studied with respect to the viscosity of the reaction products. The product was characterized by Fourier transform infrared spectroscopy, size exclusion chromatography, scanning electron microscopy, and differential scanning calorimetry. The hydration rates of guar gum and oxidized guar gum were estimated by measuring the time required for their solutions (1%, w/v) to reach maximum viscosity. The effects of salt concentration and pH on the viscosity of the resultant product were studied. A mixed paste containing oxidized guar gum and carboxymethyl starch was prepared and its viscosity determined with a viscometer. The rheological property of the mixed paste was appraised by the printing viscosity index. In addition, the applied effect of the mixed paste in reactive dye printing was examined by assessing fabric stiffness, color yield, and edge sharpness of the printed image in comparison with sodium alginate. The results indicated that the mixed paste could partially replace sodium alginate as a thickener in reactive dye printing. The study also showed that the method was low cost and eco-friendly and that the product should find extensive application in reactive dye printing.

  5. Digital pulse shape discrimination.

    PubMed

    Miller, L F; Preston, J; Pozzi, S; Flaska, M; Neal, J

    2007-01-01

    Pulse-shape discrimination (PSD) has been utilised for about 40 years as a method to obtain dose estimates in mixed neutron and photon fields. Digitizers that operate close to GHz sampling rates are currently available at reasonable cost, and they can be used to sample signals from photomultiplier tubes directly. This permits digital PSD rather than the traditional, well-established analogue techniques. One issue that complicates PSD for neutrons in mixed fields is that the light output characteristics of typical scintillators available for PSD, such as BC501A, vary as a function of the energy deposited in the detector. This behaviour is more easily accommodated with digital signal processing than with analogue signal processing. Results illustrate the effectiveness of digital PSD.
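
    A hedged sketch of charge-comparison PSD, the standard digital technique for data like these: the fraction of the pulse integral falling in the tail separates neutrons from photons. Window lengths and the synthetic two-component pulses are illustrative, not taken from the paper.

    ```python
    # Hedged sketch: tail-to-total integral ratio on baseline-subtracted pulses.
    import numpy as np

    def psd_ratio(pulse, tail_start=20, window=120):
        """pulse: baseline-subtracted samples from a fast digitizer."""
        peak = int(np.argmax(pulse))
        total = pulse[max(peak - 10, 0) : peak + window].sum()
        tail = pulse[peak + tail_start : peak + window].sum()
        return tail / total   # larger for neutrons (slower decay component)

    t = np.arange(200.0)
    photon = np.exp(-t / 5.0) + 0.02 * np.exp(-t / 60.0)   # weak slow component
    neutron = np.exp(-t / 5.0) + 0.10 * np.exp(-t / 60.0)  # strong slow component
    print(psd_ratio(photon), psd_ratio(neutron))           # neutron ratio is larger
    ```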

  6. Mixtures of Berkson and classical covariate measurement error in the linear mixed model: Bias analysis and application to a study on ultrafine particles.

    PubMed

    Deffner, Veronika; Küchenhoff, Helmut; Breitner, Susanne; Schneider, Alexandra; Cyrys, Josef; Peters, Annette

    2018-05-01

    The ultrafine particle measurements in the Augsburger Umweltstudie, a panel study conducted in Augsburg, Germany, exhibit measurement error from various sources. Measurements from mobile devices show classical, possibly individual-specific measurement error; Berkson-type error, which may also vary individually, occurs if measurements from fixed monitoring stations are used. The combination of fixed-site and individual exposure measurements results in a mixture of the two error types. We extended existing bias analysis approaches to linear mixed models with a complex error structure including individual-specific error components, autocorrelated errors, and a mixture of classical and Berkson error. Theoretical considerations and simulation results show that autocorrelation may severely change the attenuation of the effect estimates. Furthermore, unbalanced designs and the inclusion of confounding variables influence the degree of attenuation. Bias correction with the method of moments using data with mixture measurement error partially yielded better results than using incomplete data with classical error. Confidence intervals based on the delta method achieved better coverage probabilities than those based on bootstrap samples. Moreover, we present an application of these new methods to heart rate measurements within the Augsburger Umweltstudie: the corrected effect estimates were slightly higher than their naive equivalents. The substantial measurement error of the ultrafine particle measurements has little impact on the results. The developed methodology is generally applicable to longitudinal data with measurement error.
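
    A small simulation of the textbook contrast the bias analysis builds on, under simple normal assumptions: classical error attenuates a regression slope by roughly var(x)/(var(x)+var(u)), while pure Berkson error leaves it approximately unbiased.

    ```python
    # Hedged sketch: attenuation under classical vs Berkson measurement error.
    import numpy as np

    rng = np.random.default_rng(2)
    n, beta = 100_000, 1.0

    def ols_slope(w, y):
        return np.cov(w, y)[0, 1] / np.var(w)

    # Classical error: regress y (generated from the truth x) on a noisy measure.
    x = rng.normal(0, 1, n)
    y = beta * x + rng.normal(0, 1, n)
    print(ols_slope(x + rng.normal(0, 1, n), y))   # ~0.5: attenuation 1/(1+1)

    # Berkson error: the truth scatters around the assigned (fixed-site) value w.
    w = rng.normal(0, 1, n)
    x_true = w + rng.normal(0, 1, n)
    y_b = beta * x_true + rng.normal(0, 1, n)
    print(ols_slope(w, y_b))                        # ~1.0: approximately unbiased
    ```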

  7. Effects of complex interventions in ‘skin cancer prevention and treatment’: protocol for a mixed-method systematic review with qualitative comparative analysis

    PubMed Central

    Breitbart, Eckhard; Köberlein-Neu, Juliane

    2017-01-01

    Introduction: Owing to ultraviolet radiation combined with declining ozone levels, uncritical sun exposure, and the use of tanning beds, an increasing number of people are affected by different types of skin cancer. However, preventive interventions such as skin cancer screening still lack evidence of effectiveness and are therefore criticised. Fundamental to an appropriate course of action is a critical appraisal of the parameters that are defined as measures of effectiveness. This research seeks to establish, through the available literature, the effects and conditions that prove the effectiveness of prevention strategies in skin cancer. Methods and analysis: A mixed-methods approach is employed to combine quantitative and qualitative methods and answer which effects can demonstrate effectiveness considering time horizon, perspective, and organisational level, and which conditions are essential and sufficient to prove the effectiveness and cost-effectiveness of skin cancer prevention strategies. A systematic review will be performed to identify studies of any design and assess the data quantitatively and qualitatively. Included studies for each key question will be summarised by characteristics such as population, intervention, comparison, outcomes, study design, endpoints, and effect estimator. Besides the statistical synthesis of the systematic review, a qualitative comparative analysis (QCA) will be performed. The expected outcomes of this review and QCA are the identification of effects, and of the absence of effects, that are appropriate for use in effectiveness assessments and in further cost-effectiveness assessments. Ethics and dissemination: Formal ethical approval is not required as primary data will not be collected. Trial registration number: International Prospective Register of Systematic Reviews number CRD42017053859. PMID:28877950

  8. An internal reference model-based PRF temperature mapping method with Cramer-Rao lower bound noise performance analysis.

    PubMed

    Li, Cheng; Pan, Xinyi; Ying, Kui; Zhang, Qiang; An, Jing; Weng, Dehe; Qin, Wen; Li, Kuncheng

    2009-11-01

    The conventional phase difference method for MR thermometry suffers from disturbances caused by the presence of lipid protons, motion-induced error, and field drift. A signal model is presented for a multi-echo gradient echo (GRE) sequence that uses the fat signal as an internal reference to overcome these problems. The internal reference signal model is fitted to the water and fat signals by the extended Prony algorithm and the Levenberg-Marquardt algorithm to estimate the chemical shifts between water and fat, which contain the temperature information. A noise analysis of the signal model was conducted using the Cramer-Rao lower bound to evaluate the noise performance of various algorithms, the effects of imaging parameters, and the influence of the water:fat signal ratio in a sample on the temperature estimate. Comparison of the calculated temperature map with thermocouple measurements shows that the maximum temperature estimation error is 0.614 °C, with a standard deviation of 0.06 °C, confirming the feasibility of this model-based temperature mapping method. The influence of the sample water:fat signal ratio on the accuracy of the temperature estimate was evaluated in a water-fat mixed phantom experiment, with an optimal ratio of approximately 0.66:1.
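
    A sketch of the final conversion step only, assuming the water-fat chemical-shift difference has already been fitted: the standard PRF coefficient of roughly -0.01 ppm/°C for water (fat being nearly temperature-insensitive, which is what makes it usable as an internal reference) turns a shift change into a temperature change.

    ```python
    # Hedged sketch: chemical-shift change -> temperature change via the
    # standard water PRF coefficient; the fitting step itself is omitted.
    PRF_PPM_PER_C = -0.01  # approximate water proton resonance frequency shift

    def delta_temperature(shift_ppm, shift_ref_ppm):
        """shift_*: fitted water-minus-fat chemical shift (ppm), now vs reference."""
        return (shift_ppm - shift_ref_ppm) / PRF_PPM_PER_C

    print(delta_temperature(-3.45, -3.40))  # a 0.05 ppm decrease ~ +5 °C heating
    ```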

  9. Monochloramine Disinfection Kinetics of Nitrosomonas europaea by Propidium Monoazide Quantitative PCR and Live/Dead BacLight Methods

    PubMed Central

    Wahman, David G.; Wulfeck-Kleier, Karen A.; Pressman, Jonathan G.

    2009-01-01

    Monochloramine disinfection kinetics were determined for the pure-culture ammonia-oxidizing bacterium Nitrosomonas europaea (ATCC 19718) by two culture-independent methods, namely, Live/Dead BacLight (LD) and propidium monoazide quantitative PCR (PMA-qPCR). Both methods were first verified with mixtures of heat-killed (nonviable) and non-heat-killed (viable) cells before a series of batch disinfection experiments with stationary-phase cultures (batch grown for 7 days) at pH 8.0, 25°C, and 5, 10, and 20 mg Cl2/liter monochloramine. Two data sets were generated based on the viability method used, either (i) LD or (ii) PMA-qPCR. These two data sets were used to estimate kinetic parameters for the delayed Chick-Watson disinfection model through a Bayesian analysis implemented in WinBUGS. This analysis provided parameter estimates of 490 mg Cl2-min/liter for the lag coefficient (b) and 1.6 × 10−3 to 4.0 × 10−3 liter/mg Cl2-min for the Chick-Watson disinfection rate constant (k). While estimates of b were similar for both data sets, the LD data set resulted in a greater k estimate than that obtained with the PMA-qPCR data set, implying that the PMA-qPCR viability measure was more conservative than LD. For N. europaea, the lag phase was not previously reported for culture-independent methods and may have implications for nitrification in drinking water distribution systems. This is the first published application of a PMA-qPCR method for disinfection kinetic model parameter estimation as well as its application to N. europaea or monochloramine. Ultimately, this PMA-qPCR method will allow evaluation of monochloramine disinfection kinetics for mixed-culture bacteria in drinking water distribution systems. PMID:19561179
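
    A sketch of the delayed Chick-Watson model described above, fitted here by ordinary least squares on synthetic data rather than the paper's Bayesian WinBUGS analysis; the parameter values echo the reported estimates.

    ```python
    # Hedged sketch: no inactivation until the CT exposure passes the lag b,
    # then log-linear decay at rate k; least-squares fit on synthetic data.
    import numpy as np
    from scipy.optimize import curve_fit

    def delayed_chick_watson(ct, b, k):
        """ln(N/N0) vs CT exposure (mg Cl2*min/L)."""
        return -k * np.clip(ct - b, 0.0, None)

    ct = np.linspace(0.0, 2000.0, 30)
    rng = np.random.default_rng(3)
    ln_surv = delayed_chick_watson(ct, 490.0, 2.5e-3) + rng.normal(0.0, 0.05, ct.size)
    (b_hat, k_hat), _ = curve_fit(delayed_chick_watson, ct, ln_surv, p0=[300.0, 1e-3])
    print(b_hat, k_hat)   # recovers ~490 and ~2.5e-3
    ```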

  10. Towards ultrasensitive malaria diagnosis using surface enhanced Raman spectroscopy

    NASA Astrophysics Data System (ADS)

    Chen, Keren; Yuen, Clement; Aniweh, Yaw; Preiser, Peter; Liu, Quan

    2016-02-01

    We report two methods of surface-enhanced Raman spectroscopy (SERS) for hemozoin detection in malaria-infected human blood. In the first method, silver nanoparticles were synthesized separately and then mixed with lysed blood, while in the second method, silver nanoparticles were synthesized directly inside the parasites of Plasmodium falciparum. The first method yields a smaller variation in SERS measurements and a stronger correlation between the estimated contribution of hemozoin and the parasitemia level, which is preferred for quantification of the parasitemia level. In contrast, the second method yields a higher sensitivity to low parasitemia levels and thus could be more effective in early malaria diagnosis, where the aim is to determine whether a given blood sample is positive.

  11. Research misconduct oversight: defining case costs.

    PubMed

    Gammon, Elizabeth; Franzini, Luisa

    2013-01-01

    This study uses a sequential mixed-methods study design to define the cost elements of research misconduct among faculty at academic medical centers. Using time-driven activity-based costing, the model estimates a per-case cost for 17 cases of research misconduct reported by the Office of Research Integrity for the period 2000-2005. The per-case cost of research misconduct was found to range from $116,160 to $2,192,620. Research misconduct cost drivers are identified.
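
    A minimal sketch of the time-driven activity-based costing arithmetic: a case's cost is the sum, over activities, of time spent multiplied by a capacity cost rate. The activities, hours, and rates below are invented, not taken from the study.

    ```python
    # Hedged sketch of TDABC for one hypothetical misconduct case.
    hours = {"inquiry": 40.0, "investigation": 320.0, "reporting": 40.0}
    rate_per_hour = {"inquiry": 150.0, "investigation": 175.0, "reporting": 120.0}

    case_cost = sum(h * rate_per_hour[a] for a, h in hours.items())
    print(f"${case_cost:,.0f}")  # $66,800 for this invented case
    ```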

  12. Assessment of spatial discordance of primary and effective seed dispersal of European beech (Fagus sylvatica L.) by ecological and genetic methods.

    PubMed

    Millerón, M; López de Heredia, U; Lorenzo, Z; Alonso, J; Dounavi, A; Gil, L; Nanos, N

    2013-03-01

    Spatial discordance between primary and effective dispersal in plant populations indicates that postdispersal processes erase the seed rain signal in recruitment patterns. Five different models were used to test the spatial concordance of the primary and effective dispersal patterns in a European beech (Fagus sylvatica) population from central Spain. An ecological method was based on classical inverse modelling (SSS), using the numbers of seeds/seedlings as input data. Genetic models were based either on direct kernel fitting of mother-to-offspring distances estimated by a parentage analysis or on spatially explicit models using the genotype frequencies of offspring (the competing sources model and the Moran-Clark model). A fully integrated mixed model was based on inverse modelling but used the number of genotypes as input data (the gene shadow model). The potential sources of error and limitations of each seed dispersal estimation method are discussed. The mean dispersal distances for seeds and saplings estimated with these five methods were higher than previous estimates for European beech forests. All the methods show strong discordance between primary and effective dispersal kernel parameters, and in dispersal directionality. While the seed rain fell mostly under the canopy, saplings established far from the mother trees. This discordant pattern may result from secondary dispersal by animals or from density-dependent effects, that is, the Janzen-Connell effect.
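
    A hedged sketch of direct kernel fitting from parentage-assigned mother-to-offspring distances, assuming a 2D exponential kernel whose distance density is p(r) = (r/a^2) exp(-r/a) with mean dispersal distance 2a; the models used in the paper are richer than this.

    ```python
    # Hedged sketch: maximum likelihood for the scale a of a 2D exponential
    # dispersal kernel; distances are synthetic (the distance density is a
    # Gamma(shape=2, scale=a), so the generator below matches the model).
    import numpy as np
    from scipy.optimize import minimize_scalar

    def neg_log_lik(a, r):
        return -np.sum(np.log(r) - 2.0 * np.log(a) - r / a)

    r_obs = np.random.default_rng(4).gamma(shape=2.0, scale=25.0, size=200)
    a_hat = minimize_scalar(neg_log_lik, bounds=(1.0, 500.0), args=(r_obs,),
                            method="bounded").x
    print(a_hat, 2 * a_hat)  # kernel scale (~25 m) and mean distance (2a)
    ```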

  13. Error estimates for (semi-)empirical dispersion terms and large biomacromolecules.

    PubMed

    Korth, Martin

    2013-10-14

    The first-principles modeling of biomaterials has made tremendous advances over the last few years with the ongoing growth of computing power and impressive developments in the application of density functional theory (DFT) codes to large systems. One important step forward was the development of dispersion corrections for DFT methods, which account for the otherwise neglected dispersive van der Waals (vdW) interactions. Approaches at different levels of theory exist, with the most often used (semi-)empirical ones based on pairwise interatomic C6R^-6 terms. Similar terms are now also used in connection with semiempirical QM (SQM) methods and self-consistent-charge density functional tight binding (SCC-DFTB) methods. Their basic structure equals the attractive term in Lennard-Jones potentials, common to most force field approaches, but they usually use some type of cutoff function to make possible the mixing of the (long-range) dispersion term with the (short-range) dispersion and exchange-repulsion effects already present in the electronic structure method. All these dispersion approximations were found to perform accurately for smaller systems, but error estimates for larger systems are very rare and completely missing for very large biomolecules. We derive such estimates for the dispersion terms of DFT, SQM, and MM methods using error statistics for smaller systems and dispersion contribution estimates for the PDBbind database of protein-ligand interactions. We find that dispersion terms will usually not be a limiting factor for reaching chemical accuracy, though some force fields and large ligand sizes are problematic.
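
    A sketch of the pairwise dispersion term described above in a D2-style form: attractive C6/R^6 contributions switched off at short range by a sigmoidal damping (cutoff) function. The pair tables `c6` and `r_vdw` are assumed inputs, and the parameters are illustrative rather than any specific published parameterisation.

    ```python
    # Hedged sketch: damped pairwise -C6/R^6 dispersion energy.
    import numpy as np

    def dispersion_energy(coords, c6, r_vdw, s6=1.0, d=20.0):
        """coords: (n, 3) positions; c6[i, j], r_vdw[i, j]: pair parameters."""
        e = 0.0
        n = len(coords)
        for i in range(n):
            for j in range(i + 1, n):
                r = np.linalg.norm(coords[i] - coords[j])
                # Sigmoidal damping: ~0 at short range, ~1 beyond the vdW radius.
                f_damp = 1.0 / (1.0 + np.exp(-d * (r / r_vdw[i, j] - 1.0)))
                e -= s6 * f_damp * c6[i, j] / r**6
        return e
    ```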

  14. Mixed methods in gerontological research: Do the qualitative and quantitative data “touch”?

    PubMed Central

    Happ, Mary Beth

    2010-01-01

    This paper distinguishes between parallel and integrated mixed methods research approaches. Barriers to integrated mixed methods approaches in gerontological research are discussed and critiqued. The author presents examples of mixed methods gerontological research to illustrate approaches to data integration at the levels of data analysis, interpretation, and research reporting. As a summary of the methodological literature, four basic levels of mixed methods data combination are proposed. Opportunities for mixing qualitative and quantitative data are explored using contemporary examples from published studies. Data transformation and visual display, judiciously applied, are proposed as pathways to fuller mixed methods data integration and analysis. Finally, practical strategies for mixing qualitative and quantitative data types are explicated as gerontological research moves beyond parallel mixed methods approaches to achieve data integration. PMID:20077973

  15. Detecting single-trial EEG evoked potential using a wavelet domain linear mixed model: application to error potentials classification.

    PubMed

    Spinnato, J; Roubaud, M-C; Burle, B; Torrésani, B

    2015-06-01

    The main goal of this work is to develop a model for multisensor signals, such as magnetoencephalography or electroencephalography (EEG) signals, that accounts for inter-trial variability and is suitable for the corresponding binary classification problems. An important constraint is that the model be simple enough to handle small and unbalanced datasets, as often encountered in BCI-type experiments. The method involves a linear mixed-effects statistical model, the wavelet transform, and spatial filtering, and aims at characterizing localized discriminant features in multisensor signals. After the discrete wavelet transform and spatial filtering, a projection onto the relevant wavelet and spatial-channel subspaces is used for dimension reduction. The projected signals are then decomposed as the sum of a signal of interest (i.e., discriminant) and background noise, using a very simple Gaussian linear mixed model. Thanks to the simplicity of the model, the corresponding parameter estimation problem is simplified: robust estimates of class covariance matrices are obtained from small sample sizes, and an effective Bayes plug-in classifier is derived. The approach is applied to the detection of error potentials in multichannel EEG data in a very unbalanced situation (detection of rare events). Classification results prove the relevance of the proposed approach in such a context. The combination of a linear mixed model, the wavelet transform, and spatial filtering for EEG classification is, to the best of our knowledge, an original approach, which is shown to be effective. This paper improves upon earlier results on similar problems, and the three main ingredients all play an important role.
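
    A sketch in the spirit of the pipeline: wavelet-transform each trial, keep the most discriminant coefficients, and apply a plug-in Gaussian classifier. sklearn's LDA stands in here for the paper's mixed-model covariance estimates, the (n_trials, n_samples) epoch format is assumed, and PyWavelets is required.

    ```python
    # Hedged sketch: DWT features -> coefficient selection -> plug-in classifier.
    import numpy as np
    import pywt
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

    def wavelet_features(trials, wavelet="db4", level=4):
        """trials: (n_trials, n_samples) single-channel epochs."""
        return np.array([np.concatenate(pywt.wavedec(x, wavelet, level=level))
                         for x in trials])

    def top_k_discriminant(X, y, k=20):
        m1, m0 = X[y == 1].mean(axis=0), X[y == 0].mean(axis=0)
        return np.argsort(-np.abs(m1 - m0) / (X.std(axis=0) + 1e-12))[:k]

    # feats = wavelet_features(epochs); idx = top_k_discriminant(feats, labels)
    # clf = LinearDiscriminantAnalysis().fit(feats[:, idx], labels)
    ```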

  16. The Ground Flash Fraction Retrieval Algorithm Employing Differential Evolution: Simulations and Applications

    NASA Technical Reports Server (NTRS)

    Koshak, William; Solakiewicz, Richard

    2012-01-01

    The ability to estimate the fraction of ground flashes in a set of flashes observed by a satellite lightning imager, such as the future GOES-R Geostationary Lightning Mapper (GLM), would likely improve operational and scientific applications (e.g., severe weather warnings, lightning nitrogen oxides studies, and global electric circuit analyses). A Bayesian inversion method, called the Ground Flash Fraction Retrieval Algorithm (GoFFRA), was recently developed for estimating the ground flash fraction. The method uses a constrained mixed exponential distribution model to describe a particular lightning optical measurement called the Maximum Group Area (MGA). To obtain the optimum model parameters (one of which is the desired ground flash fraction), a scalar function must be minimized. This minimization is difficult because of two problems: (1) label switching (LS), and (2) parameter identity theft (PIT). The LS problem is well known in the literature on mixed exponential distributions, and the PIT problem was discovered in this study. Each problem occurs when the numerical minimizer is allowed to roam freely through the parameter search space; this allows certain solution parameters to interchange roles, which leads to fundamental ambiguities and solution error. A major accomplishment of this study is the use of a state-of-the-art genetic global optimization algorithm called Differential Evolution (DE) that constrains the parameter search in such a way as to remove both the LS and PIT problems. To test the performance of the GoFFRA when DE is employed, we applied it to simulated MGA datasets generated from known mixed exponential distributions. Moreover, we evaluated the GoFFRA/DE method by applying it to actual MGAs derived from low-Earth-orbiting lightning imaging sensor data; the actual MGA data were classified as either ground or cloud flash MGAs using National Lightning Detection Network™ (NLDN) data. Solution error plots are provided for both the simulations and the actual data analyses.
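
    A hedged sketch of the core estimation problem: fitting a two-component mixed exponential to MGA data by minimising the negative log-likelihood with SciPy's differential evolution. The ordered, non-overlapping bounds below stand in for the paper's constraints that prevent the label switching and parameter identity theft it describes; all numbers are synthetic.

    ```python
    # Hedged sketch: mixed exponential fit via scipy differential_evolution.
    import numpy as np
    from scipy.optimize import differential_evolution

    def neg_log_lik(params, mga):
        alpha, mu_g, mu_c = params   # ground fraction, component means
        pdf = (alpha * np.exp(-mga / mu_g) / mu_g
               + (1.0 - alpha) * np.exp(-mga / mu_c) / mu_c)
        return -np.sum(np.log(pdf + 1e-300))

    rng = np.random.default_rng(5)
    mga = np.concatenate([rng.exponential(80.0, 300),     # "ground" MGAs
                          rng.exponential(300.0, 700)])   # "cloud" MGAs

    # Disjoint bounds on the two means keep the components from swapping roles.
    res = differential_evolution(neg_log_lik, args=(mga,),
                                 bounds=[(0.0, 1.0), (10.0, 150.0), (150.0, 1000.0)])
    print(res.x)  # ground flash fraction ~0.3, means ~80 and ~300
    ```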

  17. Methodological quality and reporting of generalized linear mixed models in clinical medicine (2000-2012): a systematic review.

    PubMed

    Casals, Martí; Girabent-Farrés, Montserrat; Carrasco, Josep L

    2014-01-01

    Modeling count and binary data collected in hierarchical designs has increased the use of generalized linear mixed models (GLMMs) in medicine. This article presents a systematic review of the application of GLMMs in clinical medicine and of the quality of the results and information reported. A search of the Web of Science database was performed for original articles published in medical journals from 2000 to 2012. The search strategy included the topics "generalized linear mixed models", "hierarchical generalized linear models", and "multilevel generalized linear model", refined to the science and technology research domain. Papers reporting methodological considerations without application, and those not involving clinical medicine or not written in English, were excluded. A total of 443 articles were detected, with an increase over time in the number of articles. In total, 108 articles fit the inclusion criteria. Of these, 54.6% were declared to be longitudinal studies, whereas 58.3% and 26.9% were defined as repeated measurements and multilevel designs, respectively. Twenty-two articles belonged to environmental and occupational public health, 10 articles to clinical neurology, 8 to oncology, and 7 to infectious diseases and pediatrics. The distribution of the response variable was reported in 88% of the articles, predominantly binomial (n = 64) or Poisson (n = 22). Most of the useful information about the GLMMs was nevertheless not reported: variance estimates of the random effects were described in only 8 articles (9.2%), and model validation, the method of covariate selection, and the method of assessing goodness of fit were reported in only 8.0%, 36.8%, and 14.9% of the articles, respectively. During recent years, the use of GLMMs in the medical literature has increased to take into account the correlation of data when modeling qualitative data or counts. According to current recommendations, the quality of reporting has room for improvement regarding the characteristics of the analysis, estimation method, validation, and selection of the model.

  18. Comparison of High-Order and Low-Order Methods for Large-Eddy Simulation of a Compressible Shear Layer

    NASA Technical Reports Server (NTRS)

    Mankbadi, Mina R.; Georgiadis, Nicholas J.; DeBonis, James R.

    2015-01-01

    The objective of this work is to compare a high-order solver with a low-order solver for performing Large-Eddy Simulations (LES) of a compressible mixing layer. The high-order method is the Wave-Resolving LES (WRLES) solver employing a Dispersion Relation Preserving (DRP) scheme. The low-order solver is the Wind-US code, which employs the second-order Roe Physical scheme. Both solvers are used to perform LES of the turbulent mixing between two supersonic streams at a convective Mach number of 0.46. The high-order and low-order methods are evaluated at two different levels of grid resolution. For a fine grid resolution, the low-order method produces a very similar solution to the high-order method. At this fine resolution the effects of numerical scheme, subgrid scale modeling, and filtering were found to be negligible. Both methods predict turbulent stresses that are in reasonable agreement with experimental data. However, when the grid resolution is coarsened, the difference between the two solvers becomes apparent. The low-order method deviates from experimental results when the resolution is no longer adequate. The high-order DRP solution shows minimal grid dependence. The effects of subgrid scale modeling and spatial filtering were found to be negligible at both resolutions. For the high-order solver on the fine mesh, a parametric study of the spanwise width was conducted to determine its effect on solution accuracy. An insufficient spanwise width was found to impose an artificial spanwise mode and limit the resolved spanwise modes. We estimate that the spanwise depth needs to be 2.5 times larger than the largest coherent structures to capture the largest spanwise mode and accurately predict turbulent mixing.

  20. Application of zero-inflated poisson mixed models in prognostic factors of hepatitis C.

    PubMed

    Akbarzadeh Baghban, Alireza; Pourhoseingholi, Asma; Zayeri, Farid; Jafari, Ali Akbar; Alavian, Seyed Moayed

    2013-01-01

    In recent years, hepatitis C virus (HCV) infection has represented a major public health problem. Evaluation of risk factors is one of the ways to help protect people from infection. This study employs zero-inflated Poisson mixed models to evaluate prognostic factors of hepatitis C. The data were collected from a longitudinal study during 2005-2010. First, a mixed Poisson regression (PR) model was fitted to the data. Then, a mixed zero-inflated Poisson model was fitted with compound Poisson random effects. For evaluating the performance of the proposed mixed model, the standard errors of the estimators were compared. The results obtained from the mixed PR model showed that genotype 3 and treatment protocol were statistically significant. Results of the zero-inflated Poisson mixed model showed that age, sex, genotypes 2 and 3, treatment protocol, and having risk factors had significant effects on the viral load of HCV patients. Of these two models, the estimators of the zero-inflated Poisson mixed model had the smaller standard errors, indicating that the mixed zero-inflated Poisson model gave nearly the best fit. The proposed model can capture serial dependence, additional overdispersion, and excess zeros in longitudinal count data.
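
    A sketch of the zero-inflated Poisson likelihood underlying the model, with the random effects omitted for brevity: a point mass at zero mixed with a Poisson count component.

    ```python
    # Hedged sketch: ZIP log-likelihood, fixed-effects-only version.
    import numpy as np
    from scipy.special import gammaln

    def zip_log_lik(y, lam, pi):
        """y: counts; lam: Poisson mean; pi: probability of an extra zero."""
        y = np.asarray(y)
        ll_pois = -lam + y * np.log(lam) - gammaln(y + 1)       # Poisson log-pmf
        ll = np.where(y == 0,
                      np.log(pi + (1.0 - pi) * np.exp(-lam)),   # structural or Poisson zero
                      np.log(1.0 - pi) + ll_pois)               # Poisson count
        return ll.sum()

    print(zip_log_lik([0, 0, 1, 3], lam=1.2, pi=0.4))
    ```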
