Sample records for obtaining consistent estimates

  1. Consistency of extreme flood estimation approaches

    NASA Astrophysics Data System (ADS)

    Felder, Guido; Paquet, Emmanuel; Penot, David; Zischg, Andreas; Weingartner, Rolf

    2017-04-01

    Estimations of low-probability flood events are frequently used for the planning of infrastructure as well as for determining the dimensions of flood protection measures. There are several well-established methodical procedures to estimate low-probability floods. However, a global assessment of the consistency of these methods is difficult to achieve because the "true value" of an extreme flood is not observable. Nevertheless, a detailed comparison performed on a given case study brings useful information about the statistical and hydrological processes involved in the different methods. In this study, the following three approaches for estimating low-probability floods are compared: a purely statistical approach (ordinary extreme value statistics), a statistical approach based on stochastic rainfall-runoff simulation (SCHADEX method), and a deterministic approach (physically based PMF estimation). These methods are tested for two different Swiss catchments. The results and some intermediate variables are used for assessing potential strengths and weaknesses of each method, as well as for evaluating the consistency of these methods.

  2. Posterior consistency in conditional distribution estimation

    PubMed Central

    Pati, Debdeep; Dunson, David B.; Tokdar, Surya T.

    2014-01-01

    A wide variety of priors have been proposed for nonparametric Bayesian estimation of conditional distributions, and there is a clear need for theorems providing conditions on the prior for large support, as well as posterior consistency. Estimation of an uncountable collection of conditional distributions across different regions of the predictor space is a challenging problem, which differs in some important ways from density and mean regression estimation problems. Defining various topologies on the space of conditional distributions, we provide sufficient conditions for posterior consistency focusing on a broad class of priors formulated as predictor-dependent mixtures of Gaussian kernels. This theory is illustrated by showing that the conditions are satisfied for a class of generalized stick-breaking process mixtures in which the stick-breaking lengths are monotone, differentiable functions of a continuous stochastic process. We also provide a set of sufficient conditions for the case where stick-breaking lengths are predictor independent, such as those arising from a fixed Dirichlet process prior. PMID:25067858
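
    For orientation, the class of priors discussed above can be written generically as a predictor-dependent stick-breaking mixture of Gaussian kernels. The display below is a sketch of the model class, not the paper's exact specification:

    ```latex
    f(y \mid x) = \sum_{h=1}^{\infty} \pi_h(x)\, \mathcal{N}\!\left(y;\ \mu_h(x),\ \sigma_h^2\right),
    \qquad
    \pi_h(x) = V_h(x) \prod_{l < h} \left(1 - V_l(x)\right)
    ```

    where, in the illustrated case, the stick-breaking lengths V_h(x) are monotone, differentiable functions of a continuous stochastic process; the predictor-independent choice V_h(x) ≡ V_h ~ Beta(1, α) recovers a fixed Dirichlet process prior.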

  3. Robust versus consistent variance estimators in marginal structural Cox models.

    PubMed

    Enders, Dirk; Engel, Susanne; Linder, Roland; Pigeot, Iris

    2018-06-11

    In survival analyses, inverse-probability-of-treatment (IPT) and inverse-probability-of-censoring (IPC) weighted estimators of parameters in marginal structural Cox models are often used to estimate treatment effects in the presence of time-dependent confounding and censoring. In most applications, a robust variance estimator of the IPT and IPC weighted estimator is calculated leading to conservative confidence intervals. This estimator assumes that the weights are known rather than estimated from the data. Although a consistent estimator of the asymptotic variance of the IPT and IPC weighted estimator is generally available, applications and thus information on the performance of the consistent estimator are lacking. Reasons might be a cumbersome implementation in statistical software, which is further complicated by missing details on the variance formula. In this paper, we therefore provide a detailed derivation of the variance of the asymptotic distribution of the IPT and IPC weighted estimator and explicitly state the necessary terms to calculate a consistent estimator of this variance. We compare the performance of the robust and consistent variance estimators in an application based on routine health care data and in a simulation study. The simulation reveals no substantial differences between the 2 estimators in medium and large data sets with no unmeasured confounding, but the consistent variance estimator performs poorly in small samples or under unmeasured confounding, if the number of confounders is large. We thus conclude that the robust estimator is more appropriate for all practical purposes. Copyright © 2018 John Wiley & Sons, Ltd.

  4. Brain Tissue Compartment Density Estimated Using Diffusion-Weighted MRI Yields Tissue Parameters Consistent With Histology

    PubMed Central

    Sepehrband, Farshid; Clark, Kristi A.; Ullmann, Jeremy F.P.; Kurniawan, Nyoman D.; Leanage, Gayeshika; Reutens, David C.; Yang, Zhengyi

    2015-01-01

    We examined whether quantitative density measures of cerebral tissue consistent with histology can be obtained from diffusion magnetic resonance imaging (MRI). By incorporating prior knowledge of myelin and cell membrane densities, absolute tissue density values were estimated from relative intra-cellular and intra-neurite density values obtained from diffusion MRI. The NODDI (neurite orientation dispersion and density imaging) technique, which can be applied clinically, was used. Myelin density estimates were compared with the results of electron and light microscopy in ex vivo mouse brain and with published density estimates in a healthy human brain. In ex vivo mouse brain, estimated myelin densities in different sub-regions of the mouse corpus callosum were almost identical to values obtained from electron microscopy (Diffusion MRI: 42±6%, 36±4% and 43±5%; electron microscopy: 41±10%, 36±8% and 44±12% in genu, body and splenium, respectively). In the human brain, good agreement was observed between estimated fiber density measurements and previously reported values based on electron microscopy. Estimated density values were unaffected by crossing fibers. PMID:26096639

  5. Probability machines: consistent probability estimation using nonparametric learning machines.

    PubMed

    Malley, J D; Kruppa, J; Dasgupta, A; Malley, K G; Ziegler, A

    2012-01-01

    Most machine learning approaches only provide a classification for binary responses. However, probabilities are required for risk estimation using individual patient characteristics. It has been shown recently that every statistical learning machine known to be consistent for a nonparametric regression problem is a probability machine that is provably consistent for this estimation problem. The aim of this paper is to show how random forests and nearest neighbors can be used for consistent estimation of individual probabilities. Two random forest algorithms and two nearest neighbor algorithms are described in detail for estimation of individual probabilities. We discuss the consistency of random forests, nearest neighbors and other learning machines in detail. We conduct a simulation study to illustrate the validity of the methods. We exemplify the algorithms by analyzing two well-known data sets on the diagnosis of appendicitis and the diagnosis of diabetes in Pima Indians. Simulations demonstrate the validity of the method. With the real data application, we show the accuracy and practicality of this approach. We provide sample code from R packages in which the probability estimation is already available. This means that all calculations can be performed using existing software. Random forest algorithms as well as nearest neighbor approaches are valid machine learning methods for estimating individual probabilities for binary responses. Freely available implementations are available in R and may be used for applications.
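
    The core idea above translates directly into code: train any consistent nonparametric regression learner on the 0/1 response and read its prediction as P(Y = 1 | X = x). A minimal sketch using scikit-learn (rather than the R packages the authors reference), on simulated data:

    ```python
    # Minimal "probability machine": a consistent nonparametric regression
    # learner trained on a 0/1 response estimates p(x) = P(Y = 1 | X = x).
    # Uses scikit-learn in place of the R packages referenced by the authors.
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.neighbors import KNeighborsRegressor

    rng = np.random.default_rng(0)
    n = 2000
    X = rng.uniform(-2, 2, size=(n, 2))
    p_true = 1 / (1 + np.exp(-(X[:, 0] + X[:, 1])))  # true P(Y = 1 | X)
    y = rng.binomial(1, p_true)                      # observed binary response

    # Regression (not classification) on the binary labels yields probabilities.
    rf = RandomForestRegressor(n_estimators=300, random_state=0).fit(X, y)
    knn = KNeighborsRegressor(n_neighbors=50).fit(X, y)

    x_new = np.array([[0.5, -0.2]])
    print("random forest:", rf.predict(x_new)[0])   # approx P(Y = 1 | x_new)
    print("k-NN         :", knn.predict(x_new)[0])
    print("truth        :", 1 / (1 + np.exp(-0.3)))
    ```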

  6. A Nonparametric Approach to Estimate Classification Accuracy and Consistency

    ERIC Educational Resources Information Center

    Lathrop, Quinn N.; Cheng, Ying

    2014-01-01

    When cut scores for classifications occur on the total score scale, popular methods for estimating classification accuracy (CA) and classification consistency (CC) require assumptions about a parametric form of the test scores or about a parametric response model, such as item response theory (IRT). This article develops an approach to estimate CA…

  7. Targeted estimation of nuisance parameters to obtain valid statistical inference.

    PubMed

    van der Laan, Mark J

    2014-01-01

    In order to obtain concrete results, we focus on estimation of the treatment specific mean, controlling for all measured baseline covariates, based on observing independent and identically distributed copies of a random variable consisting of baseline covariates, a subsequently assigned binary treatment, and a final outcome. The statistical model only assumes possible restrictions on the conditional distribution of treatment, given the covariates, the so-called propensity score. Estimators of the treatment specific mean involve estimation of the propensity score and/or estimation of the conditional mean of the outcome, given the treatment and covariates. In order to make these estimators asymptotically unbiased at any data distribution in the statistical model, it is essential to use data-adaptive estimators of these nuisance parameters such as ensemble learning, and specifically super-learning. Because such estimators involve optimal trade-off of bias and variance w.r.t. the infinite dimensional nuisance parameter itself, they result in a sub-optimal bias/variance trade-off for the resulting real-valued estimator of the estimand. We demonstrate that additional targeting of the estimators of these nuisance parameters guarantees that this bias for the estimand is second order and thereby allows us to prove theorems that establish asymptotic linearity of the estimator of the treatment specific mean under regularity conditions. These insights result in novel targeted minimum loss-based estimators (TMLEs) that use ensemble learning with additional targeted bias reduction to construct estimators of the nuisance parameters. In particular, we construct collaborative TMLEs (C-TMLEs) with known influence curve allowing for statistical inference, even though these C-TMLEs involve variable selection for the propensity score based on a criterion that measures how effective the resulting fit of the propensity score is in removing bias for the estimand. As a particular special
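
    For readers unfamiliar with the basic template this paper builds on, the following is a bare-bones sketch of a plain (non-collaborative) TMLE of the treatment-specific mean on simulated data. Simple logistic regressions stand in for the super-learning the paper recommends, and the targeted nuisance estimation and C-TMLE variable selection it develops are not shown:

    ```python
    # Bare-bones TMLE of the treatment-specific mean E[E(Y | A = 1, W)] on
    # simulated data.  Logistic regressions stand in for super-learning; the
    # paper's targeted nuisance estimation / C-TMLE selection is not shown.
    import numpy as np
    import statsmodels.api as sm
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(1)
    n = 5000
    W = rng.normal(size=(n, 2))                    # baseline covariates
    g_true = 1 / (1 + np.exp(-0.4 * W[:, 0]))      # true propensity score
    A = rng.binomial(1, g_true)                    # binary treatment
    Y = rng.binomial(1, 1 / (1 + np.exp(-(A + W[:, 0] - W[:, 1]))))

    # 1) Initial nuisance estimates: propensity g(W) and outcome Qbar(A, W).
    g_hat = np.clip(LogisticRegression().fit(W, A).predict_proba(W)[:, 1],
                    0.01, 0.99)
    Q_fit = LogisticRegression().fit(np.column_stack([A, W]), Y)
    Q1 = Q_fit.predict_proba(np.column_stack([np.ones(n), W]))[:, 1]  # Qbar(1, W)
    QA = Q_fit.predict_proba(np.column_stack([A, W]))[:, 1]           # Qbar(A, W)

    # 2) Targeting: logistic fluctuation with clever covariate H = A / g_hat,
    #    fitted with the initial outcome fit as an offset.
    logit = lambda p: np.log(p / (1 - p))
    H = (A / g_hat)[:, None]
    eps = sm.GLM(Y, H, family=sm.families.Binomial(),
                 offset=logit(QA)).fit().params[0]
    Q1_star = 1 / (1 + np.exp(-(logit(Q1) + eps / g_hat)))  # H(1, W) = 1/g_hat

    # 3) Plug-in (substitution) estimator of the estimand.
    print("TMLE estimate:", Q1_star.mean())
    ```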

  8. Dictionary-based fiber orientation estimation with improved spatial consistency.

    PubMed

    Ye, Chuyang; Prince, Jerry L

    2018-02-01

    Diffusion magnetic resonance imaging (dMRI) has enabled in vivo investigation of white matter tracts. Fiber orientation (FO) estimation is a key step in tract reconstruction and has been a popular research topic in dMRI analysis. In particular, the sparsity assumption has been used in conjunction with a dictionary-based framework to achieve reliable FO estimation with a reduced number of gradient directions. Because image noise can have a deleterious effect on the accuracy of FO estimation, previous works have incorporated spatial consistency of FOs in the dictionary-based framework to improve the estimation. However, because FOs are only indirectly determined from the mixture fractions of dictionary atoms and not modeled as variables in the objective function, these methods do not incorporate FO smoothness directly, and their ability to produce smooth FOs could be limited. In this work, we propose an improvement to Fiber Orientation Reconstruction using Neighborhood Information (FORNI), which we call FORNI+; this method estimates FOs in a dictionary-based framework where FO smoothness is better enforced than in FORNI alone. We describe an objective function that explicitly models the actual FOs and the mixture fractions of dictionary atoms. Specifically, it consists of data fidelity between the observed signals and the signals represented by the dictionary, pairwise FO dissimilarity that encourages FO smoothness, and weighted ℓ1-norm terms that ensure the consistency between the actual FOs and the FO configuration suggested by the dictionary representation. The FOs and mixture fractions are then jointly estimated by minimizing the objective function using an iterative alternating optimization strategy. FORNI+ was evaluated on a simulation phantom, a physical phantom, and real brain dMRI data. In particular, in the real brain dMRI experiment, we have qualitatively and quantitatively evaluated the reproducibility of the proposed method. Results demonstrate that

  9. An iterative procedure for obtaining maximum-likelihood estimates of the parameters for a mixture of normal distributions, Addendum

    NASA Technical Reports Server (NTRS)

    Peters, B. C., Jr.; Walker, H. F.

    1975-01-01

    New results and insights concerning a previously published iterative procedure for obtaining maximum-likelihood estimates of the parameters for a mixture of normal distributions were discussed. It was shown that the procedure converges locally to the consistent maximum likelihood estimate as long as a specified parameter is bounded between two limits. Bound values were given to yield optimal local convergence.

  10. Estimates of the Internal Consistency of a Factorially Complex Composite.

    ERIC Educational Resources Information Center

    Benito, Juana Gomez

    1989-01-01

    This study of 852 subjects in Barcelona (Spain) between 4 and 9 years old estimated the degree of consistency among elements of the Borelli-Oleron Performance Scale by taking into account item clusters and subtest clusters. The internal consistency of the subtests rose when all ages were analyzed jointly. (SLD)

  11. Consistently estimating absolute risk difference when translating evidence to jurisdictions of interest.

    PubMed

    Eckermann, Simon; Coory, Michael; Willan, Andrew R

    2011-02-01

    Economic analysis and assessment of net clinical benefit often requires estimation of absolute risk difference (ARD) for binary outcomes (e.g. survival, response, disease progression) given baseline epidemiological risk in a jurisdiction of interest and trial evidence of treatment effects. Typically, the assumption is made that relative treatment effects are constant across baseline risk, in which case relative risk (RR) or odds ratios (OR) could be applied to estimate ARD. The objective of this article is to establish whether such use of RR or OR allows consistent estimates of ARD. ARD is calculated from alternative framings of effects (e.g. mortality vs survival) applying standard methods for translating evidence with RR and OR. For RR, the RR is applied to baseline risk in the jurisdiction to estimate treatment risk; for OR, the baseline risk is converted to odds, the OR applied and the resulting treatment odds converted back to risk. ARD is shown to be consistently estimated with OR but to change with the framing of effects under RR wherever there is a treatment effect and epidemiological risk differs from trial risk. Additionally, in indirect comparisons, ARD is shown to be consistently estimated with OR, while calculation with RR allows alternative framings of effects to disagree in the direction, let alone the extent, of ARD. OR thus ensures consistent calculation of ARD in translating evidence from trial settings and across trials in direct and indirect comparisons, avoiding the inconsistencies and associated biases that arise with RR under alternative outcome framing. These findings are critical for consistently translating evidence to inform economic analysis and assessment of net clinical benefit, as translation of evidence is proposed precisely where the advantages of OR over RR arise.
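
    The two translation rules compared above are short enough to state in code. A small worked check with hypothetical trial and jurisdiction risks, showing that the OR route returns the same ARD under both outcome framings while the RR route does not:

    ```python
    # Worked check: ARD translated with an odds ratio (OR) is invariant to
    # outcome framing (mortality vs survival); the relative risk (RR) route
    # is not.  Trial and jurisdiction risks below are hypothetical.
    def ard_from_rr(baseline_risk, rr):
        return baseline_risk * rr - baseline_risk

    def ard_from_or(baseline_risk, or_):
        odds = baseline_risk / (1 - baseline_risk)
        treated = odds * or_ / (1 + odds * or_)   # treatment odds back to risk
        return treated - baseline_risk

    # Trial: control risk of death 0.20, treatment risk of death 0.10.
    rr_death = 0.10 / 0.20
    rr_surv = 0.90 / 0.80
    or_death = (0.10 / 0.90) / (0.20 / 0.80)
    or_surv = (0.90 / 0.10) / (0.80 / 0.20)

    b = 0.40  # jurisdiction baseline risk of death (differs from trial risk)
    print("RR: death framing %+.3f, survival framing %+.3f"
          % (ard_from_rr(b, rr_death), -ard_from_rr(1 - b, rr_surv)))
    print("OR: death framing %+.3f, survival framing %+.3f"
          % (ard_from_or(b, or_death), -ard_from_or(1 - b, or_surv)))
    # RR gives -0.200 vs -0.075 (inconsistent); OR gives -0.171 either way.
    ```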

  12. Consistent Estimation of Gibbs Energy Using Component Contributions

    PubMed Central

    Milo, Ron; Fleming, Ronan M. T.

    2013-01-01

    Standard Gibbs energies of reactions are increasingly being used in metabolic modeling for applying thermodynamic constraints on reaction rates, metabolite concentrations and kinetic parameters. The increasing scope and diversity of metabolic models has led scientists to look for genome-scale solutions that can estimate the standard Gibbs energy of all the reactions in metabolism. Group contribution methods greatly increase coverage, albeit at the price of decreased precision. We present here a way to combine the estimations of group contribution with the more accurate reactant contributions by decomposing each reaction into two parts and applying one of the methods on each of them. This method gives priority to the reactant contributions over group contributions while guaranteeing that all estimations will be consistent, i.e. will not violate the first law of thermodynamics. We show that there is a significant increase in the accuracy of our estimations compared to standard group contribution. Specifically, our cross-validation results show an 80% reduction in the median absolute residual for reactions that can be derived by reactant contributions only. We provide the full framework and source code for deriving estimates of standard reaction Gibbs energy, as well as confidence intervals, and believe this will facilitate the wide use of thermodynamic data for a better understanding of metabolism. PMID:23874165
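
    The decomposition described above can be sketched in a few lines of linear algebra. This toy collapses the published framework (which operates on formation energies and a group incidence matrix) into per-compound values; all numbers are made up:

    ```python
    # Schematic of the component-contribution split, with toy numbers: the
    # part of a reaction covered by reactant contributions (RC) uses them;
    # the orthogonal residual falls back on group contributions (GC).
    import numpy as np

    # Compounds x reactions with directly measured Gibbs energies.
    S_obs = np.array([[-1.,  0.],
                      [ 1., -1.],
                      [ 0.,  1.],
                      [ 0.,  0.]])
    dG_f_rc = np.array([-10., -35., -42.,   0.])  # RC values (compound 4 unseen)
    dG_f_gc = np.array([-12., -33., -45., -20.])  # GC values (full coverage)

    P = S_obs @ np.linalg.pinv(S_obs)   # projector onto the span of S_obs

    def delta_g_cc(s):
        """Component-contribution estimate for reaction vector s."""
        s_rc = P @ s          # part explainable by reactant contributions
        s_gc = s - s_rc       # residual handled by group contributions
        return s_rc @ dG_f_rc + s_gc @ dG_f_gc

    print(delta_g_cc(np.array([-1., 0., 1., 0.])))   # fully covered by RC
    print(delta_g_cc(np.array([0., 0., -1., 1.])))   # partly falls back on GC
    ```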

  13. A Bayesian consistent dual ensemble Kalman filter for state-parameter estimation in subsurface hydrology

    NASA Astrophysics Data System (ADS)

    Ait-El-Fquih, Boujemaa; El Gharamti, Mohamad; Hoteit, Ibrahim

    2016-08-01

    Ensemble Kalman filtering (EnKF) is an efficient approach to addressing uncertainties in subsurface groundwater models. The EnKF sequentially integrates field data into simulation models to obtain a better characterization of the model's state and parameters. These are generally estimated following joint and dual filtering strategies, in which, at each assimilation cycle, a forecast step by the model is followed by an update step with incoming observations. The joint EnKF directly updates the augmented state-parameter vector, whereas the dual EnKF empirically employs two separate filters, first estimating the parameters and then estimating the state based on the updated parameters. To develop a Bayesian consistent dual approach and improve the state-parameter estimates and their consistency, we propose in this paper a one-step-ahead (OSA) smoothing formulation of the state-parameter Bayesian filtering problem from which we derive a new dual-type EnKF, the dual EnKF-OSA. Compared with the standard dual EnKF, it imposes a new update step to the state, which is shown to enhance the performance of the dual approach with almost no increase in the computational cost. Numerical experiments are conducted with a two-dimensional (2-D) synthetic groundwater aquifer model to investigate the performance and robustness of the proposed dual EnKF-OSA, and to evaluate its results against those of the joint and dual EnKFs. The proposed scheme is able to successfully recover both the hydraulic head and the aquifer conductivity, providing further reliable estimates of their uncertainties. Furthermore, it is found to be more robust to different assimilation settings, such as the spatial and temporal distribution of the observations, and the level of noise in the data. Based on our experimental setups, it yields up to 25% more accurate state and parameter estimations than the joint and dual approaches.

  14. An Innovative Method for Obtaining Consistent Images and Quantification of Histochemically Stained Specimens

    PubMed Central

    Sedgewick, Gerald J.; Ericson, Marna

    2015-01-01

    Obtaining digital images of color brightfield microscopy is an important aspect of biomedical research and the clinical practice of diagnostic pathology. Although the field of digital pathology has had tremendous advances in whole-slide imaging systems, little effort has been directed toward standardizing color brightfield digital imaging to maintain image-to-image consistency and tonal linearity. Using a single camera and microscope to obtain digital images of three stains, we show that microscope and camera systems inherently produce image-to-image variation. Moreover, we demonstrate that post-processing with a widely used raster graphics editor software program does not completely correct for session-to-session inconsistency. We introduce a reliable method for creating consistent images with a hardware/software solution (ChromaCal™; Datacolor Inc., NJ) along with its features for creating color standardization, preserving linear tonal levels, providing automated white balancing and setting automated brightness to consistent levels. The resulting image consistency using this method will also streamline mean density and morphometry measurements, as images are easily segmented and single thresholds can be used. We suggest that this is a superior method for color brightfield imaging, which can be used for quantification and can be readily incorporated into workflows. PMID:25575568

  15. Use of Internal Consistency Coefficients for Estimating Reliability of Experimental Tasks Scores

    PubMed Central

    Green, Samuel B.; Yang, Yanyun; Alt, Mary; Brinkley, Shara; Gray, Shelley; Hogan, Tiffany; Cowan, Nelson

    2017-01-01

    Reliabilities of scores for experimental tasks are likely to differ from one study to another to the extent that the task stimuli change, the number of trials varies, the type of individuals taking the task changes, the administration conditions are altered, or the focal task variable differs. Given reliabilities vary as a function of the design of these tasks and the characteristics of the individuals taking them, making inferences about the reliability of scores in an ongoing study based on reliability estimates from prior studies is precarious. Thus, it would be advantageous to estimate reliability based on data from the ongoing study. We argue that internal consistency estimates of reliability are underutilized for experimental task data and in many applications could provide this information using a single administration of a task. We discuss different methods for computing internal consistency estimates with a generalized coefficient alpha and the conditions under which these estimates are accurate. We illustrate use of these coefficients using data for three different tasks. PMID:26546100
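
    As a concrete reference point for the coefficients discussed, standard coefficient alpha can be computed from a single administration of a subjects-by-trials score matrix. A minimal numpy sketch on simulated scores (the standard coefficient, not the authors' generalized version):

    ```python
    # Coefficient alpha from one administration of a subjects x trials score
    # matrix.  Simulated data.
    import numpy as np

    rng = np.random.default_rng(2)
    n_subj, n_trials = 60, 20
    ability = rng.normal(size=(n_subj, 1))            # stable person effect
    scores = ability + rng.normal(scale=1.5, size=(n_subj, n_trials))

    def cronbach_alpha(x):
        """x: 2-D array, rows = subjects, columns = trials/items."""
        k = x.shape[1]
        item_vars = x.var(axis=0, ddof=1).sum()
        total_var = x.sum(axis=1).var(ddof=1)
        return k / (k - 1) * (1 - item_vars / total_var)

    print("alpha =", round(cronbach_alpha(scores), 3))
    ```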

  16. Consistent estimate of ocean warming, land ice melt and sea level rise from Observations

    NASA Astrophysics Data System (ADS)

    Blazquez, Alejandro; Meyssignac, Benoît; Lemoine, Jean Michel

    2016-04-01

    Based on the sea level budget closure approach, this study investigates the consistency of observed Global Mean Sea Level (GMSL) estimates from satellite altimetry, observed Ocean Thermal Expansion (OTE) estimates from in-situ hydrographic data (based on Argo above 2000 m depth and on oceanic cruises below), and GRACE observations of land water storage and land ice melt for the period January 2004 to December 2014. The consistency between these datasets is a key issue if we want to constrain missing contributions to sea level rise such as the deep ocean contribution. Numerous previous studies have addressed this question by summing up the different contributions to sea level rise and comparing the total to satellite altimetry observations (see for example Llovel et al. 2015, Dieng et al. 2015). Here we propose a novel approach which consists in correcting GRACE solutions over the ocean (essentially corrections of stripes and of leakage from ice caps) with mass observations deduced from the difference between satellite altimetry GMSL and in-situ hydrographic OTE estimates. We check that the resulting corrected GRACE solutions are consistent with the original GRACE estimates of the geoid spherical harmonic coefficients within error bars, and we compare the resulting GRACE estimates of land water storage and land ice melt with independent results from the literature. This method provides a new mass redistribution from GRACE consistent with observations from altimetry and OTE. We test the sensitivity of this method to the deep ocean contribution and to the GIA models, and propose best estimates.
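
    The budget-closure arithmetic underlying the approach reduces to one subtraction: total rise from altimetry minus the steric part from hydrography implies the ocean-mass part that corrected GRACE solutions should reproduce. The values below are illustrative, not the paper's estimates:

    ```python
    # Sea level budget closure in one line.  Trends in mm/yr, illustrative.
    gmsl_altimetry = 3.3   # total sea level trend from satellite altimetry
    ote_upper = 1.1        # thermal expansion, Argo (above 2000 m depth)
    ote_deep = 0.1         # deep-ocean expansion below 2000 m (cruises)

    mass_component = gmsl_altimetry - (ote_upper + ote_deep)
    print("implied ocean-mass trend: %.1f mm/yr" % mass_component)
    ```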

  17. A precise and accurate acupoint location obtained on the face using consistency matrix pointwise fusion method.

    PubMed

    Yang, Xuming; Ye, Yijun; Xia, Yong; Wei, Xuanzhong; Wang, Zheyu; Ni, Hongmei; Zhu, Ying; Xu, Lingyu

    2015-02-01

    The aim was to develop a more precise and accurate method for locating acupoints on the face, and to identify a procedure for measuring whether an acupoint had been correctly located. Acupoint locations were collected from different acupuncture experts, and the most precise and accurate location values were obtained with a consistency-based information fusion algorithm, using a virtual simulation of the facial orientation coordinate system. Because each expert's original data contain inconsistencies, systematic error affects the general weight calculation. First, each expert's systematic location error was corrected, yielding a rational quantification of the degree of consistent support among the experts' acupoint locations and pointwise variable-precision fusion results, so that every expert's fusion error was refined to pointwise variable precision. Second, the measured characteristics of the different experts' locations were used more effectively, improving the utilization of the measurement information and the precision and accuracy of acupoint location. By applying the consistency matrix pointwise fusion method to the experts' acupoint location values, each expert's location information could be calculated, and the most precise and accurate values of each expert's acupoint location could be obtained.

  18. Estimating ages of white-tailed deer: Age and sex patterns of error using tooth wear-and-replacement and consistency of cementum annuli

    USGS Publications Warehouse

    Samuel, Michael D.; Storm, Daniel J.; Rolley, Robert E.; Beissel, Thomas; Richards, Bryan J.; Van Deelen, Timothy R.

    2014-01-01

    The age structure of harvested animals provides the basis for many demographic analyses. Ages of harvested white-tailed deer (Odocoileus virginianus) and other ungulates often are estimated by evaluating replacement and wear patterns of teeth, which is subjective and error-prone. Few previous studies, however, examined age- and sex-specific error rates. Counting cementum annuli of incisors is an alternative, more accurate method of estimating age, but factors that influence consistency of cementum annuli counts are poorly known. We estimated age of 1,261 adult (≥1.5 yr old) white-tailed deer harvested in Wisconsin and Illinois (USA; 2005–2008) using both wear-and-replacement and cementum annuli. We compared cementum annuli with wear-and-replacement estimates to assess misclassification rates by sex and age. Wear-and-replacement for estimating ages of white-tailed deer resulted in substantial misclassification compared with cementum annuli. Age classes of females were consistently underestimated, while those of males were underestimated for younger age classes but overestimated for older age classes. Misclassification resulted in an impression of a younger age-structure than actually was the case. Additionally, we obtained paired age-estimates from cementum annuli for 295 deer. Consistency of paired cementum annuli age-estimates decreased with age, was lower in females than males, and decreased as age estimates became less certain. Our results indicated that errors in the wear-and-replacement techniques are substantial and could impact demographic analyses that use age-structure information.

  19. Error Consistency Analysis Scheme for Infrared Ultraspectral Sounding Retrieval Error Budget Estimation

    NASA Technical Reports Server (NTRS)

    Zhou, Daniel K.; Larar, Allen M.; Liu, Xu; Smith, William L.; Strow, Larry L.

    2013-01-01

    Great effort has been devoted towards validating geophysical parameters retrieved from ultraspectral infrared radiances obtained from satellite remote sensors. An error consistency analysis scheme (ECAS), utilizing fast radiative transfer model (RTM) forward and inverse calculations, has been developed to estimate the error budget in terms of mean difference and standard deviation of error in both spectral radiance and retrieval domains. The retrieval error is assessed through ECAS without relying on other independent measurements such as radiosonde data. ECAS establishes a link between the accuracies of radiances and retrieved geophysical parameters. ECAS can be applied to measurements from any ultraspectral instrument and any retrieval scheme with its associated RTM. In this manuscript, ECAS is described and demonstrated with measurements from the MetOp-A satellite Infrared Atmospheric Sounding Interferometer (IASI). This scheme can be used together with other validation methodologies to give a more definitive characterization of the error and/or uncertainty of geophysical parameters retrieved from ultraspectral radiances observed from current and future satellite remote sensors such as IASI, the Atmospheric Infrared Sounder (AIRS), and the Cross-track Infrared Sounder (CrIS).

  20. Estimation of mountain slope stability depending on ground consistency and slip-slide resistance changes on impact of dynamic forces

    NASA Astrophysics Data System (ADS)

    Hayroyan, H. S.; Hayroyan, S. H.; Karapetyan, K. A.

    2018-04-01

    In this paper, three types of clayey soils with different consistency and humidity properties and different slip-slide resistance indexes are examined under different cyclic shear stresses. Side-surface deformation charts are constructed from experimental data obtained by testing cylindrical soil samples. It is shown that the fluctuation amplitude depends on time, and that the consistency index depends on the humidity conditions at the soil's internal contacts and on the connectivity coefficients; each experiment is interpreted accordingly. The main result of this research is that corrections are needed in the currently used schemes for estimating the stability of slide-prone slopes, a crucial problem requiring an urgent solution.

  1. An iterative procedure for obtaining maximum-likelihood estimates of the parameters for a mixture of normal distributions

    NASA Technical Reports Server (NTRS)

    Peters, B. C., Jr.; Walker, H. F.

    1978-01-01

    This paper addresses the problem of obtaining numerically maximum-likelihood estimates of the parameters for a mixture of normal distributions. In recent literature, a certain successive-approximations procedure, based on the likelihood equations, was shown empirically to be effective in numerically approximating such maximum-likelihood estimates; however, the reliability of this procedure was not established theoretically. Here, we introduce a general iterative procedure, of the generalized steepest-ascent (deflected-gradient) type, which is just the procedure known in the literature when the step-size is taken to be 1. We show that, with probability 1 as the sample size grows large, this procedure converges locally to the strongly consistent maximum-likelihood estimate whenever the step-size lies between 0 and 2. We also show that the step-size which yields optimal local convergence rates for large samples is determined in a sense by the 'separation' of the component normal densities and is bounded below by a number between 1 and 2.
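
    A runnable sketch of the procedure for a two-component univariate normal mixture, with the step-size generalization analyzed in the paper (step = 1.0 recovers the standard successive-approximations iteration, and local convergence is claimed for step-sizes between 0 and 2); data and starting values are arbitrary:

    ```python
    # Successive-approximations procedure for a two-component univariate
    # normal mixture, with the paper's step-size generalization:
    #   theta <- theta + step * (M(theta) - theta),
    # where M is the usual fixed-point update; step = 1.0 is the standard
    # procedure, and local convergence is claimed for 0 < step < 2.
    import numpy as np

    rng = np.random.default_rng(3)
    x = np.concatenate([rng.normal(-2, 1, 400), rng.normal(2, 1, 600)])

    def m_update(x, p, mu1, mu2, s1, s2):
        """One fixed-point update of (p, mu1, mu2, s1, s2)."""
        # Gaussian kernels; the 1/sqrt(2*pi) factor cancels in the ratio.
        n1 = p * np.exp(-0.5 * ((x - mu1) / s1) ** 2) / s1
        n2 = (1 - p) * np.exp(-0.5 * ((x - mu2) / s2) ** 2) / s2
        w = n1 / (n1 + n2)                   # posterior membership weights
        p_new = w.mean()
        mu1_new = (w * x).sum() / w.sum()
        mu2_new = ((1 - w) * x).sum() / (1 - w).sum()
        s1_new = np.sqrt((w * (x - mu1_new) ** 2).sum() / w.sum())
        s2_new = np.sqrt(((1 - w) * (x - mu2_new) ** 2).sum() / (1 - w).sum())
        return np.array([p_new, mu1_new, mu2_new, s1_new, s2_new])

    theta = np.array([0.5, -1.0, 1.0, 1.0, 1.0])   # arbitrary starting values
    step = 1.2                                     # deflected-gradient step-size
    for _ in range(200):
        theta = theta + step * (m_update(x, *theta) - theta)

    print("p, mu1, mu2, s1, s2 =", np.round(theta, 3))
    ```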

  2. Improving the quality of parameter estimates obtained from slug tests

    USGS Publications Warehouse

    Butler, J.J.; McElwee, C.D.; Liu, W.

    1996-01-01

    The slug test is one of the most commonly used field methods for obtaining in situ estimates of hydraulic conductivity. Despite its prevalence, this method has received criticism from many quarters in the ground-water community. This criticism emphasizes the poor quality of the estimated parameters, a condition that is primarily a product of the somewhat casual approach that is often employed in slug tests. Recently, the Kansas Geological Survey (KGS) has pursued research directed at improving methods for the performance and analysis of slug tests. Based on extensive theoretical and field research, a series of guidelines have been proposed that should enable the quality of parameter estimates to be improved. The most significant of these guidelines are: (1) three or more slug tests should be performed at each well during a given test period; (2) two or more different initial displacements (H0) should be used at each well during a test period; (3) the method used to initiate a test should enable the slug to be introduced in a near-instantaneous manner and should allow a good estimate of H0 to be obtained; (4) data-acquisition equipment that enables a large quantity of high-quality data to be collected should be employed; (5) if an estimate of the storage parameter is needed, an observation well other than the test well should be employed; (6) the method chosen for analysis of the slug-test data should be appropriate for site conditions; (7) use of pre- and post-analysis plots should be an integral component of the analysis procedure; and (8) appropriate well construction parameters should be employed. Data from slug tests performed at a number of KGS field sites demonstrate the importance of these guidelines.

  3. Obtaining Reliable Estimates of Ambulatory Physical Activity in People with Parkinson's Disease.

    PubMed

    Paul, Serene S; Ellis, Terry D; Dibble, Leland E; Earhart, Gammon M; Ford, Matthew P; Foreman, K Bo; Cavanaugh, James T

    2016-05-05

    We determined the number of days required, and whether to include weekdays and/or weekends, to obtain reliable measures of ambulatory physical activity in people with Parkinson's disease (PD). Ninety-two persons with PD wore a step activity monitor for seven days. The number of days required to obtain a reliable estimate of daily activity was determined from the mean intraclass correlation (ICC(2,1)) for all possible combinations of 1-6 consecutive days of monitoring. Two days of monitoring were sufficient to obtain reliable daily activity estimates (ICC(2,1) > 0.9). Amount (p = 0.03) but not intensity (p = 0.13) of ambulatory activity was greater on weekdays than weekends. Activity prescription based on amount rather than intensity may be more appropriate for people with PD.
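
    The reliability criterion used here is the intraclass correlation ICC(2,1). A minimal numpy sketch of that computation on simulated subjects-by-days step counts (the study averaged such ICCs over all combinations of 1-6 consecutive monitoring days):

    ```python
    # ICC(2,1): two-way random effects, single measure, for a subjects x days
    # matrix of daily step counts.  Simulated stand-in data.
    import numpy as np

    rng = np.random.default_rng(4)
    n, k = 30, 6                                    # subjects, monitoring days
    person = rng.normal(8000, 2500, size=(n, 1))    # stable person-level activity
    steps = person + rng.normal(0, 1500, size=(n, k))

    def icc_2_1(x):
        n, k = x.shape
        grand = x.mean()
        msr = k * ((x.mean(axis=1) - grand) ** 2).sum() / (n - 1)  # subjects
        msc = n * ((x.mean(axis=0) - grand) ** 2).sum() / (k - 1)  # days
        resid = x - x.mean(axis=1, keepdims=True) - x.mean(axis=0) + grand
        mse = (resid ** 2).sum() / ((n - 1) * (k - 1))
        return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

    print("ICC(2,1) =", round(icc_2_1(steps), 3))
    ```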

  4. Coupling gross primary production and transpiration for a consistent estimate of canopy water use efficiency

    NASA Astrophysics Data System (ADS)

    Yebra, Marta; van Dijk, Albert

    2015-04-01

    Water use efficiency (WUE, the amount of transpiration or evapotranspiration per unit gross (GPP) or net CO2 uptake) is key in all areas of plant production and forest management applications. Therefore, mutually consistent estimates of GPP and transpiration are needed to analyse WUE without introducing any artefacts that might arise by combining independently derived GPP and ET estimates. GPP and transpiration are physiologically linked at the ecosystem level by the canopy conductance (Gc). Estimates of Gc can be obtained by scaling stomatal conductance (Kelliher et al. 1995) or inferred from ecosystem-level measurements of gas exchange (Baldocchi et al., 2008). To derive large-scale or indeed global estimates of Gc, satellite remote sensing based methods are needed. In a previous study, we used water vapour flux estimates derived from eddy covariance flux tower measurements at 16 Fluxnet sites world-wide to develop a method to estimate Gc using MODIS reflectance observations (Yebra et al. 2013). We combined those estimates with the Penman-Monteith combination equation to derive transpiration (T). The resulting T estimates compared favourably with flux tower estimates (R2=0.82, RMSE=29.8 W m⁻²). Moreover, the method allowed a single parameterisation for all land cover types, which avoids artefacts resulting from land cover classification. In subsequent research (Yebra et al., in preparation) we used the same satellite-derived Gc values within a process-based but simple canopy GPP model to constrain GPP predictions. The developed model uses a 'big-leaf' description of the plant canopy to estimate the mean GPP flux as the lesser of a conductance-limited and a radiation-limited GPP rate. The conductance-limited rate was derived assuming that transport of CO2 from the bulk air to the intercellular leaf space is limited by molecular diffusion through the stomata. The radiation-limited rate was estimated assuming that it is proportional to the absorbed photosynthetically

  5. The Consequences of Teenage Childbearing: Consistent Estimates When Abortion Makes Miscarriage Nonrandom*

    PubMed Central

    Ashcraft, Adam; Fernández-Val, Iván; Lang, Kevin

    2012-01-01

    Miscarriage, even if biologically random, is not socially random. Willingness to abort reduces miscarriage risk. Because abortions are favorably selected among pregnant teens, those miscarrying are less favorably selected than those giving birth or aborting but more favorably selected than those giving birth. Therefore, using miscarriage as an instrument is biased towards a benign view of teen motherhood while OLS on just those giving birth or miscarrying has the opposite bias. We derive a consistent estimator that reduces to a weighted average of OLS and IV when outcomes are independent of abortion timing. Estimated effects are generally adverse but modest. PMID:24443589

  6. Estimating True Short-Term Consistency in Vocational Interests: A Longitudinal SEM Approach

    ERIC Educational Resources Information Center

    Gaudron, Jean-Philippe; Vautier, Stephane

    2007-01-01

    This study aimed at estimating the correlation between true scores (true consistency) of vocational interest over a short time span in a sample of 1089 adults. Participants were administered 54 items assessing vocational, family, and leisure interests twice over a 1-month period. Responses were analyzed with a multitrait (MT) model, which supposes…

  7. Consistency between satellite-derived and modeled estimates of the direct aerosol effect.

    PubMed

    Myhre, Gunnar

    2009-07-10

    In the Intergovernmental Panel on Climate Change Fourth Assessment Report, the direct aerosol effect is reported to have a radiative forcing estimate of -0.5 Watt per square meter (W m⁻²), offsetting the warming from CO2 by almost one-third. The uncertainty, however, ranges from -0.9 to -0.1 W m⁻², which is largely due to differences between estimates from global aerosol models and observation-based estimates, with the latter tending to have stronger (more negative) radiative forcing. This study demonstrates consistency between a global aerosol model and adjustment to an observation-based method, producing a global and annual mean radiative forcing that is weaker than -0.5 W m⁻², with a best estimate of -0.3 W m⁻². The physical explanation for the earlier discrepancy is that the relative increase in anthropogenic black carbon (absorbing aerosols) is much larger than the overall increase in the anthropogenic abundance of aerosols.

  8. An Improved Internal Consistency Reliability Estimate.

    ERIC Educational Resources Information Center

    Cliff, Norman

    1984-01-01

    The proposed coefficient is derived by assuming that the average Goodman-Kruskal gamma between items of identical difficulty would be the same for items of different difficulty. An estimate of covariance between items of identical difficulty leads to an estimate of the correlation between two tests with identical distributions of difficulty.…

  9. An iterative procedure for obtaining maximum-likelihood estimates of the parameters for a mixture of normal distributions, 2

    NASA Technical Reports Server (NTRS)

    Peters, B. C., Jr.; Walker, H. F.

    1976-01-01

    The problem of obtaining numerically maximum likelihood estimates of the parameters for a mixture of normal distributions is addressed. In recent literature, a certain successive approximations procedure, based on the likelihood equations, is shown empirically to be effective in numerically approximating such maximum-likelihood estimates; however, the reliability of this procedure was not established theoretically. Here, a general iterative procedure is introduced, of the generalized steepest-ascent (deflected-gradient) type, which is just the procedure known in the literature when the step-size is taken to be 1. With probability 1 as the sample size grows large, it is shown that this procedure converges locally to the strongly consistent maximum-likelihood estimate whenever the step-size lies between 0 and 2. The step-size which yields optimal local convergence rates for large samples is determined in a sense by the separation of the component normal densities and is bounded below by a number between 1 and 2.

  10. Temporally diffeomorphic cardiac motion estimation from three-dimensional echocardiography by minimization of intensity consistency error.

    PubMed

    Zhang, Zhijun; Ashraf, Muhammad; Sahn, David J; Song, Xubo

    2014-05-01

    Quantitative analysis of cardiac motion is important for evaluation of heart function. Three-dimensional (3D) echocardiography is among the most frequently used imaging modalities for motion estimation because it is convenient, real-time, low-cost, and nonionizing. However, motion estimation from 3D echocardiographic sequences is still a challenging problem due to low image quality and image corruption by noise and artifacts. The authors have developed a temporally diffeomorphic motion estimation approach in which the velocity field instead of the displacement field was optimized. The optimal velocity field optimizes a novel similarity function, which the authors call the intensity consistency error, defined by evolving multiple consecutive frames to each time point and measuring their agreement there. The optimization problem is solved by using the steepest descent method. Experiments with simulated datasets, images of an ex vivo rabbit phantom, images of in vivo open-chest pig hearts, and healthy human images were used to validate the authors' method. Tests on simulated and real cardiac sequences showed that the authors' method is more accurate than other competing temporal diffeomorphic methods. Tests with sonomicrometry showed that the tracked crystal positions have good agreement with ground truth and the authors' method has higher accuracy than the temporal diffeomorphic free-form deformation (TDFFD) method. Validation with an open-access human cardiac dataset showed that the authors' method has smaller feature tracking errors than both TDFFD and frame-to-frame methods. The authors proposed a diffeomorphic motion estimation method with temporal smoothness obtained by constraining the velocity field to have maximum local intensity consistency within multiple consecutive frames. The estimated motion using the authors' method has good temporal consistency and is more accurate than other temporally diffeomorphic motion estimation methods.

  11. NMR permeability estimators in 'chalk' carbonate rocks obtained under different relaxation times and MICP size scalings

    NASA Astrophysics Data System (ADS)

    Rios, Edmilson Helton; Figueiredo, Irineu; Moss, Adam Keith; Pritchard, Timothy Neil; Glassborow, Brent Anthony; Guedes Domingues, Ana Beatriz; Bagueira de Vasconcellos Azeredo, Rodrigo

    2016-07-01

    The effect of the selection of different nuclear magnetic resonance (NMR) relaxation times for permeability estimation is investigated for a set of fully brine-saturated rocks acquired from Cretaceous carbonate reservoirs in the North Sea and Middle East. Estimators that are obtained from the relaxation times based on the Pythagorean means are compared with estimators that are obtained from the relaxation times based on the concept of a cumulative saturation cut-off. Select portions of the longitudinal (T1) and transverse (T2) relaxation-time distributions are systematically evaluated by applying various cut-offs, analogous to the Winland-Pittman approach for mercury injection capillary pressure (MICP) curves. Finally, different approaches to matching the NMR and MICP distributions using different mean-based scaling factors are validated based on the performance of the related size-scaled estimators. The good results that were obtained demonstrate possible alternatives to the commonly adopted logarithmic mean estimator and reinforce the importance of NMR-MICP integration to improving carbonate permeability estimates.

  12. [Estimators of internal consistency in health research: the use of the alpha coefficient].

    PubMed

    da Silva, Franciele Cascaes; Gonçalves, Elizandra; Arancibia, Beatriz Angélica Valdivia; Bento, Gisele Graziele; Castro, Thiago Luis da Silva; Hernandez, Salma Stephany Soleman; da Silva, Rudney

    2015-01-01

    Academic production in the health field has increased, with growing demand for high-quality, high-impact publications. One way to pursue quality is through methods that increase the consistency of data analysis, such as reliability, which, depending on the type of data, can be evaluated with different coefficients, especially the alpha coefficient. On this basis, the present review systematically gathers scientific articles from the last five years that made methodological use of the α coefficient as an estimator of internal consistency and reliability in the construction, adaptation and validation of instruments. Studies were identified systematically in the databases BioMed Central Journals, Web of Science, Wiley Online Library, Medline, SciELO, Scopus, Journals@Ovid, BMJ and Springer, using inclusion and exclusion criteria. Data were analyzed by means of triangulation, content analysis and descriptive analysis. Most studies were conducted in Iran (f=3), Spain (f=2) and Brazil (f=2). These studies aimed to test the psychometric properties of instruments, with eight using the α coefficient to assess reliability and nine to assess internal consistency. When their objectives were analyzed, all studies were classified as methodological research; four were also classified as correlational and one as descriptive-correlational. It can be concluded that although the α coefficient is widely used as one of the main parameters for assessing the internal consistency of questionnaires in the health sciences, its use as an estimator of reliability and internal consistency has drawn criticisms that should be considered.

  13. Practical Issues in Estimating Classification Accuracy and Consistency with R Package cacIRT

    ERIC Educational Resources Information Center

    Lathrop, Quinn N.

    2015-01-01

    There are two main lines of research in estimating classification accuracy (CA) and classification consistency (CC) under Item Response Theory (IRT). The R package cacIRT provides computer implementations of both approaches in an accessible and unified framework. Even with available implementations, there remain decisions a researcher faces when…

  14. Stability of individual loudness functions obtained by magnitude estimation and production

    NASA Technical Reports Server (NTRS)

    Hellman, R. P.

    1981-01-01

    A correlational analysis of individual magnitude estimation and production exponents at the same frequency is performed, as is an analysis of individual exponents produced in different sessions by the same procedure across frequency (250, 1000, and 3000 Hz). Taken as a whole, the results show that individual exponent differences do not decrease by counterbalancing magnitude estimation with magnitude production and that individual exponent differences remain stable over time despite changes in stimulus frequency. Further results show that although individual magnitude estimation and production exponents do not necessarily obey the 0.6 power law, it is possible to predict the slope of an equal-sensation function averaged for a group of listeners from individual magnitude estimation and production data. On the assumption that individual listeners with sensorineural hearing also produce stable and reliable magnitude functions, it is also shown that the slope of the loudness-recruitment function measured by magnitude estimation and production can be predicted for individuals with bilateral losses of long duration. Results obtained in normal and pathological ears thus suggest that individual listeners can produce loudness judgements that reveal, although indirectly, the input-output characteristic of the auditory system.

  15. Internally Consistent MODIS Estimate of Aerosol Clear-Sky Radiative Effect Over the Global Oceans

    NASA Technical Reports Server (NTRS)

    Remer, Lorraine A.; Kaufman, Yoram J.

    2004-01-01

    Modern satellite remote sensing, and in particular the MODerate resolution Imaging Spectroradiometer (MODIS), offers a measurement-based pathway to estimate global aerosol radiative effects and aerosol radiative forcing. Over the oceans, MODIS retrieves the total aerosol optical thickness, but also reports which combination of the 9 different aerosol models was used to obtain the retrieval. Each of the 9 models is characterized by a size distribution and complex refractive index, which through Mie calculations correspond to a unique set of single scattering albedo, asymmetry parameter and spectral extinction for each model. The combination of these sets of optical parameters weighted by the optical thickness attributed to each model in the retrieval produces the best fit to the observed radiances at the top of the atmosphere. Thus the MODIS ocean aerosol retrieval provides us with (1) an observed distribution of global aerosol loading, and (2) an internally consistent, observed distribution of aerosol optical models that when used in combination will best represent the radiances at the top of the atmosphere. We use these two observed global distributions to initialize the column climate model by Chou and Suarez to calculate the aerosol radiative effect at the top of the atmosphere and the radiative efficiency of the aerosols over the global oceans. We apply the analysis to 3 years of MODIS retrievals from the Terra satellite and produce global and regional, seasonally varying, estimates of aerosol radiative effect over the clear-sky oceans.

  16. Estimation of brittleness indices for pay zone determination in a shale-gas reservoir by using elastic properties obtained from micromechanics

    NASA Astrophysics Data System (ADS)

    Lizcano-Hernández, Edgar G.; Nicolás-López, Rubén; Valdiviezo-Mijangos, Oscar C.; Meléndez-Martínez, Jaime

    2018-04-01

    The brittleness indices (BI) of gas-shales are computed by using their effective mechanical properties obtained from micromechanical self-consistent modeling, with the purpose of assisting in the identification of the more-brittle regions in shale-gas reservoirs, i.e., the so-called ‘pay zone’. The obtained BI are plotted in lambda-rho versus mu-rho (λρ–μρ) and Young's modulus versus Poisson's ratio (E–ν) ternary diagrams along with the elastic properties estimated from log data of three productive shale-gas wells where the pay zone is already known. A quantitative comparison between the obtained BI and the well log data allows for the delimitation of regions where BI values could indicate the best reservoir target in regions with the highest shale-gas exploitation potential. Therefore, a range of values for elastic properties and brittleness indices that can be used as a data source to support the well placement procedure is obtained.

  17. Periodic homogenization and consistent estimates of transport parameters through sphere and polyhedron packings in the whole porosity range.

    PubMed

    Boutin, Claude; Geindreau, Christian

    2010-09-01

    This paper presents a study of transport parameters (diffusion, dynamic permeability, thermal permeability, trapping constant) of porous media by combining the homogenization of periodic media (HPM) and the self-consistent scheme (SCM) based on a bicomposite spherical pattern. The link between the HPM and SCM approaches is first established by using a systematic argument independent of the problem under consideration. It is shown that the periodicity condition can be replaced by zero flux and energy through the whole surface of the representative elementary volume. Consequently the SCM solution can be considered as a geometrical approximation of the local problem derived through HPM for materials such that the morphology of the period is "close" to the SCM pattern. These results are then applied to derive the estimates of the effective diffusion, the dynamic permeability, the thermal permeability and the trapping constant of porous media. These SCM estimates are compared with numerical HPM results obtained on periodic arrays of spheres and polyhedrons. It is shown that SCM estimates provide good analytical approximations of the effective parameters for periodic packings of spheres at porosities larger than 0.6, while the agreement is excellent for periodic packings of polyhedrons in the whole range of porosity.

  18. Maximum likelihood estimation for predicting the probability of obtaining variable shortleaf pine regeneration densities

    Treesearch

    Thomas B. Lynch; Jean Nkouka; Michael M. Huebschmann; James M. Guldin

    2003-01-01

    A logistic equation is the basis for a model that predicts the probability of obtaining regeneration at specified densities. The density of regeneration (trees/ha) for which an estimate of probability is desired can be specified by means of independent variables in the model. When estimating parameters, the dependent variable is set to 1 if the regeneration density (...
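
    The model form described above is an ordinary logistic equation in which the regeneration density of interest enters as a predictor. A minimal sketch with hypothetical coefficients and covariates, not the fitted values from the paper:

    ```python
    # Logistic form of the model described: probability of obtaining
    # regeneration at a specified density, given stand covariates.
    # Coefficients and predictors are hypothetical.
    import numpy as np

    def p_regeneration(density, site_index, b0=2.0, b1=-0.004, b2=0.05):
        """P(regeneration at >= `density` trees/ha | covariates)."""
        eta = b0 + b1 * density + b2 * site_index
        return 1 / (1 + np.exp(-eta))

    print(p_regeneration(density=500, site_index=18))  # ~0.71
    ```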

  19. Global Ocean Vertical Velocity From a Dynamically Consistent Ocean State Estimate

    NASA Astrophysics Data System (ADS)

    Liang, Xinfeng; Spall, Michael; Wunsch, Carl

    2017-10-01

    Estimates of the global ocean vertical velocities (Eulerian, eddy-induced, and residual) from a dynamically consistent and data-constrained ocean state estimate are presented and analyzed. Conventional patterns of vertical velocity, Ekman pumping, appear in the upper ocean, with topographic dominance at depth. Intense and vertically coherent upwelling and downwelling occur in the Southern Ocean, which are likely due to the interaction of the Antarctic Circumpolar Current and large-scale topographic features and are generally canceled out in the conventional zonally averaged results. These "elevators" at high latitudes connect the upper to the deep and abyssal oceans and working together with isopycnal mixing are likely a mechanism, in addition to the formation of deep and abyssal waters, for fast responses of the deep and abyssal oceans to the changing climate. Also, Eulerian and parameterized eddy-induced components are of opposite signs in numerous regions around the global ocean, particularly in the ocean interior away from surface and bottom. Nevertheless, residual vertical velocity is primarily determined by the Eulerian component, and related to winds and large-scale topographic features. The current estimates of vertical velocities can serve as a useful reference for investigating the vertical exchange of ocean properties and tracers, and its complex spatial structure ultimately permits regional tests of basic oceanographic concepts such as Sverdrup balance and coastal upwelling/downwelling.

  20. Consistent Parameter and Transfer Function Estimation using Context Free Grammars

    NASA Astrophysics Data System (ADS)

    Klotz, Daniel; Herrnegger, Mathew; Schulz, Karsten

    2017-04-01

    This contribution presents a method for the inference of transfer functions for rainfall-runoff models. Here, transfer functions are defined as parametrized (functional) relationships between a set of spatial predictors (e.g. elevation, slope or soil texture) and model parameters. They are ultimately used for estimation of consistent, spatially distributed model parameters from a limited amount of lumped global parameters. Additionally, they provide a straightforward method for parameter extrapolation from one set of basins to another and can even be used to derive parameterizations for multi-scale models [see: Samaniego et al., 2010]. Yet knowledge of the transfer functions is currently often implicitly assumed, and in most cases these hypothesized transfer functions can rarely be measured and remain unknown. Therefore, this contribution presents a general method for the concurrent estimation of the structure of transfer functions and their respective (global) parameters. Note that, as a consequence, the distributed parameters of the rainfall-runoff model are also estimated. The method combines two steps to achieve this. The first generates different possible transfer functions. The second then estimates the respective global transfer function parameters. The structural estimation of the transfer functions is based on the context-free grammar concept. Chomsky first introduced context-free grammars in linguistics [Chomsky, 1956]. Since then, they have been widely applied in computer science but, to the knowledge of the authors, they have so far not been used in hydrology. Therefore, the contribution gives an introduction to context-free grammars and shows how they can be constructed and used for the structural inference of transfer functions. This is enabled by new methods from evolutionary computation, such as grammatical evolution [O'Neill, 2001], which make it possible to exploit the constructed grammar as a
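
    The structural step can be illustrated with a toy grammar: a context-free grammar over spatial predictors generates candidate transfer functions, which a search procedure (grammatical evolution in the contribution) would then score. Grammar, predictor names and coefficients below are illustrative:

    ```python
    # Toy version of the structural step: randomly derive candidate transfer
    # functions from a context-free grammar over spatial predictors.
    import random

    GRAMMAR = {
        "<expr>": [["<expr>", "<op>", "<expr>"],
                   ["<coef>", "*", "<pred>"],
                   ["<coef>"]],
        "<op>":   [["+"], ["*"]],
        "<pred>": [["slope"], ["elevation"], ["sand_fraction"]],
        "<coef>": [["c0"], ["c1"], ["c2"]],
    }

    def derive(symbol="<expr>", depth=0, rng=random.Random(5)):
        """Randomly expand a nonterminal into one candidate transfer function."""
        if symbol not in GRAMMAR:
            return symbol                # terminal: predictor, coef, operator
        rules = GRAMMAR[symbol]
        if depth > 3:
            rules = rules[1:]            # drop recursive rule to bound depth
        production = rng.choice(rules)
        return " ".join(derive(s, depth + 1, rng) for s in production)

    for _ in range(3):
        print(derive())                  # e.g. "c1 * slope + c0"
    ```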

  1. Sonographic estimation of fetal weight: comparison of bias, precision and consistency using 12 different formulae.

    PubMed

    Anderson, N G; Jolley, I J; Wells, J E

    2007-08-01

    To determine the major sources of error in ultrasonographic assessment of fetal weight and whether they have changed over the last decade. We performed a prospective observational study in 1991 and again in 2000 of a mixed-risk pregnancy population, estimating fetal weight within 7 days of delivery. In 1991, the Rose and McCallum formula was used for 72 deliveries. Inter- and intraobserver agreement was assessed within this group. Bland-Altman measures of agreement from log data were calculated as ratios. We repeated the study in 2000 in 208 consecutive deliveries, comparing predicted and actual weights for 12 published equations using Bland-Altman and percentage error methods. We compared bias (mean percentage error), precision (SD percentage error), and their consistency across the weight ranges. 95% limits of agreement ranged from −4.4% to +3.3% for inter- and intraobserver estimates, but were −18.0% to +24.0% for estimated and actual birth weight. There was no improvement in accuracy between 1991 and 2000. In 2000, only six of the 12 published formulae had overall bias within 7% and precision within 15%. There was greater bias and poorer precision in nearly all equations if the birth weight was < 1,000 g. Observer error is a relatively minor component of the error in estimating fetal weight; error due to the equation is a larger source of error. Improvements in ultrasound technology have not improved the accuracy of estimating fetal weight. Comparison of methods of estimating fetal weight requires statistical methods that can separate out bias, precision and consistency. Estimating fetal weight in the very low birth weight infant is subject to much greater error than it is in larger babies. Copyright (c) 2007 ISUOG. Published by John Wiley & Sons, Ltd.
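
    The bias, precision, and limits-of-agreement quantities compared in the study can be computed as in the sketch below. The weight arrays are hypothetical placeholders, and the log-ratio Bland-Altman form follows the study's stated approach only in outline.

    ```python
    import numpy as np

    estimated = np.array([3125., 2870., 3410., 1050., 2590.])  # sonographic estimates (g)
    actual    = np.array([3050., 2990., 3300.,  980., 2700.])  # actual birth weights (g)

    pct_error = (estimated - actual) / actual * 100.0
    bias = pct_error.mean()             # mean percentage error
    precision = pct_error.std(ddof=1)   # SD of percentage error

    # Bland-Altman agreement on log weights, reported back on the ratio scale
    log_diff = np.log(estimated) - np.log(actual)
    loa = np.exp(log_diff.mean() + np.array([-1.96, 1.96]) * log_diff.std(ddof=1))

    print(f"bias = {bias:.1f}%, precision (SD) = {precision:.1f}%")
    print(f"95% limits of agreement (ratio scale): {loa[0]:.3f} to {loa[1]:.3f}")
    ```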

  2. Consistent latent position estimation and vertex classification for random dot product graphs.

    PubMed

    Sussman, Daniel L; Tang, Minh; Priebe, Carey E

    2014-01-01

    In this work, we show that using the eigen-decomposition of the adjacency matrix, we can consistently estimate latent positions for random dot product graphs provided the latent positions are i.i.d. from some distribution. If class labels are observed for a number of vertices tending to infinity, then we show that the remaining vertices can be classified with error converging to Bayes optimal using the k-nearest-neighbors classification rule. We evaluate the proposed methods on simulated data and a graph derived from Wikipedia.
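
    A minimal numerical sketch of the pipeline: adjacency spectral embedding via the eigendecomposition, followed by k-nearest-neighbors classification on the estimated latent positions. The graph model, dimensions, and labels are synthetic assumptions, not the paper's experiments.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n, d = 300, 2
    X = rng.uniform(0.2, 0.8, size=(n, d)) / np.sqrt(d)   # true latent positions
    P = X @ X.T                                           # edge probabilities
    A = (rng.uniform(size=(n, n)) < P).astype(float)
    A = np.triu(A, 1); A = A + A.T                        # symmetric, hollow adjacency

    # Embed: top-d eigenpairs of A, scaled by sqrt of the eigenvalues
    vals, vecs = np.linalg.eigh(A)
    idx = np.argsort(vals)[-d:]
    Xhat = vecs[:, idx] * np.sqrt(np.maximum(vals[idx], 0))

    # k-NN classification of unlabeled vertices from a labeled subset
    labels = (X.sum(axis=1) > np.median(X.sum(axis=1))).astype(int)
    train, test = np.arange(0, 200), np.arange(200, n)
    k = 5
    dists = np.linalg.norm(Xhat[test, None, :] - Xhat[None, train, :], axis=2)
    nearest = np.argsort(dists, axis=1)[:, :k]
    pred = (labels[train][nearest].mean(axis=1) > 0.5).astype(int)
    print("test error:", (pred != labels[test]).mean())
    ```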

  3. Obtaining Cue Rate Estimates for Some Mysticete Species using Existing Data

    DTIC Science & Technology

    2014-09-30

    primary focus is to obtain cue rates for humpback whales (Megaptera novaeangliae) off the California coast and on the PMRF range. To our knowledge, no... humpback whale cue rates have been calculated for these populations. Once a cue rate is estimated for the populations of humpback whales off the... rates for humpback whales on breeding grounds, in addition to average cue rates for other species of mysticete whales. Cue rates of several other

  4. Reliability of fish size estimates obtained from multibeam imaging sonar

    USGS Publications Warehouse

    Hightower, Joseph E.; Magowan, Kevin J.; Brown, Lori M.; Fox, Dewayne A.

    2013-01-01

    Multibeam imaging sonars have considerable potential for use in fisheries surveys because the video-like images are easy to interpret, and they contain information about fish size, shape, and swimming behavior, as well as characteristics of occupied habitats. We examined images obtained using a dual-frequency identification sonar (DIDSON) multibeam sonar for Atlantic sturgeon Acipenser oxyrinchus oxyrinchus, striped bass Morone saxatilis, white perch M. americana, and channel catfish Ictalurus punctatus of known size (20–141 cm) to determine the reliability of length estimates. For ranges up to 11 m, percent measurement error, (sonar estimate − total length)/total length × 100, varied by species but was not related to the fish's range or aspect angle (orientation relative to the sonar beam). Least-squares mean percent error was significantly different from 0.0 for Atlantic sturgeon (x̄ = −8.34, SE = 2.39) and white perch (x̄ = 14.48, SE = 3.99) but not striped bass (x̄ = 3.71, SE = 2.58) or channel catfish (x̄ = 3.97, SE = 5.16). Underestimating lengths of Atlantic sturgeon may be due to difficulty in detecting the snout or the longer dorsal lobe of the heterocercal tail. White perch was the smallest species tested, and it had the largest percent measurement errors (both positive and negative) and the lowest percentage of images classified as good or acceptable. Automated length estimates for the four species using Echoview software varied with position in the view-field. Estimates tended to be low at more extreme azimuthal angles (fish's angle off-axis within the view-field), but mean and maximum estimates were highly correlated with total length. Software estimates also were biased by fish images partially outside the view-field and when acoustic crosstalk occurred (when a fish perpendicular to the sonar and at relatively close range is detected in the side lobes of adjacent beams). These sources of

  5. A fully redundant double difference algorithm for obtaining minimum variance estimates from GPS observations

    NASA Technical Reports Server (NTRS)

    Melbourne, William G.

    1986-01-01

    In double differencing a regression system obtained from concurrent Global Positioning System (GPS) observation sequences, one either undersamples the system to avoid introducing colored measurement statistics, or one fully samples the system incurring the resulting non-diagonal covariance matrix for the differenced measurement errors. A suboptimal estimation result will be obtained in the undersampling case and will also be obtained in the fully sampled case unless the color noise statistics are taken into account. The latter approach requires a least squares weighting matrix derived from inversion of a non-diagonal covariance matrix for the differenced measurement errors instead of inversion of the customary diagonal one associated with white noise processes. Presented is the so-called fully redundant double differencing algorithm for generating a weighted double differenced regression system that yields equivalent estimation results, but features for certain cases a diagonal weighting matrix even though the differenced measurement error statistics are highly colored.
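
    The estimation issue can be made concrete with a small generalized least squares sketch: differencing white-noise measurements produces correlated ("colored") errors, so the weight matrix must be the inverse of the differenced covariance. The operator D and all dimensions below are illustrative, not the paper's fully redundant double-difference construction.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    m, p = 8, 3
    H = rng.normal(size=(m, p))             # one-way design matrix
    x_true = np.array([1.0, -0.5, 2.0])
    y = H @ x_true + rng.normal(scale=0.01, size=m)   # white measurement noise

    # Consecutive-difference operator as a stand-in for double differencing
    D = np.eye(m - 1, m, k=1) - np.eye(m - 1, m)
    Hd, yd = D @ H, D @ y
    W = np.linalg.inv(D @ D.T)              # inverse of the (non-diagonal) error covariance

    # GLS estimate: (Hd' W Hd)^{-1} Hd' W yd
    xhat = np.linalg.solve(Hd.T @ W @ Hd, Hd.T @ W @ yd)
    print(xhat)   # close to x_true
    ```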

  6. Analytic Intermodel Consistent Modeling of Volumetric Human Lung Dynamics.

    PubMed

    Ilegbusi, Olusegun; Seyfi, Behnaz; Neylon, John; Santhanam, Anand P

    2015-10-01

    The human lung undergoes breathing-induced deformation in the form of inhalation and exhalation. Modeling the dynamics is numerically complicated by the lack of information on lung elastic behavior and on fluid-structure interactions between air and the tissue. A mathematical method is developed to integrate deformation results from deformable image registration (DIR) and physics-based modeling approaches in order to represent consistent volumetric lung dynamics. The computational fluid dynamics (CFD) simulation assumes the lung is a poro-elastic medium with a spatially distributed elastic property. Simulation is performed on a 3D lung geometry reconstructed from a four-dimensional computed tomography (4DCT) dataset of a human subject. The heterogeneous Young's modulus (YM) is estimated from a linear elastic deformation model with the same lung geometry and 4D lung DIR. The deformation obtained from the CFD is then coupled with the displacement obtained from the 4D lung DIR by means of the Tikhonov regularization (TR) algorithm. The numerical results include 4DCT registration, CFD, and optimal displacement data, which collectively provide a consistent estimate of the volumetric lung dynamics. The fusion method is validated by comparing the optimal displacement with the results obtained from the 4DCT registration.
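
    In outline, the Tikhonov step trades off fidelity to the DIR displacement against the CFD displacement. The closed-form pointwise version below is a minimal sketch under an assumed quadratic penalty and weight lam; the paper's actual regularization operator is not reproduced here.

    ```python
    import numpy as np

    u_cfd = np.array([1.2, 0.8, 0.5])   # displacement from physics-based CFD (mm)
    u_dir = np.array([1.0, 1.0, 0.4])   # displacement from 4DCT image registration (mm)
    lam = 2.0                           # assumed regularization weight

    # Minimize ||u - u_dir||^2 + lam * ||u - u_cfd||^2, which has the closed form:
    u_opt = (u_dir + lam * u_cfd) / (1.0 + lam)
    print(u_opt)
    ```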

  7. Use of NMR logging to obtain estimates of hydraulic conductivity in the High Plains aquifer, Nebraska, USA

    USGS Publications Warehouse

    Dlubac, Katherine; Knight, Rosemary; Song, Yi-Qiao; Bachman, Nate; Grau, Ben; Cannia, Jim; Williams, John

    2013-01-01

    Hydraulic conductivity (K) is one of the most important parameters of interest in groundwater applications because it quantifies the ease with which water can flow through an aquifer material. Hydraulic conductivity is typically measured by conducting aquifer tests or wellbore flow (WBF) logging. Of interest in our research is the use of proton nuclear magnetic resonance (NMR) logging to obtain information about water-filled porosity and pore space geometry, the combination of which can be used to estimate K. In this study, we acquired a suite of advanced geophysical logs, aquifer tests, WBF logs, and sidewall cores at a field site in Lexington, Nebraska, which is underlain by the High Plains aquifer. We first used two empirical equations developed for petroleum applications to predict K from NMR logging data: the Schlumberger-Doll Research equation (KSDR) and the Timur-Coates equation (KT-C), with the standard empirical constants determined for consolidated materials. We upscaled our NMR-derived K estimates to the scale of the WBF-logging K (KWBF-logging) estimates for comparison. All the upscaled KT-C estimates were within an order of magnitude of KWBF-logging, and all of the upscaled KSDR estimates were within 2 orders of magnitude of KWBF-logging. We optimized the fit between the upscaled NMR-derived K and KWBF-logging estimates to determine a set of site-specific empirical constants for the unconsolidated materials at our field site. We conclude that reliable estimates of K can be obtained from NMR logging data, thus providing an alternate method for obtaining estimates of K at high levels of vertical resolution.
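
    For reference, the two transforms in their common textbook forms. The constants (C ≈ 4 for SDR, C ≈ 10 for Timur-Coates) are the usual consolidated-material defaults and are assumptions here; these are precisely the constants the study refit for its unconsolidated site.

    ```python
    # phi: porosity (fraction); t2ml_ms: log-mean T2 (ms); ffv/bvi: free-fluid and
    # bound-fluid volume fractions. Constants are assumed defaults, not the
    # site-specific values fitted in the study.
    def k_sdr(phi, t2ml_ms, c=4.0):
        """Schlumberger-Doll Research transform: k ~ C * phi**4 * T2ML**2."""
        return c * phi**4 * t2ml_ms**2

    def k_timur_coates(phi, ffv, bvi, c=10.0):
        """Timur-Coates transform: k ~ (100*phi/C)**4 * (FFV/BVI)**2."""
        return (100.0 * phi / c) ** 4 * (ffv / bvi) ** 2

    print(k_sdr(0.30, 200.0))
    print(k_timur_coates(0.30, 0.22, 0.08))
    ```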

  8. Strong consistency of nonparametric Bayes density estimation on compact metric spaces with applications to specific manifolds

    PubMed Central

    Bhattacharya, Abhishek; Dunson, David B.

    2012-01-01

    This article considers a broad class of kernel mixture density models on compact metric spaces and manifolds. Following a Bayesian approach with a nonparametric prior on the location mixing distribution, sufficient conditions are obtained on the kernel, prior and the underlying space for strong posterior consistency at any continuous density. The prior is also allowed to depend on the sample size n and sufficient conditions are obtained for weak and strong consistency. These conditions are verified on compact Euclidean spaces using multivariate Gaussian kernels, on the hypersphere using a von Mises-Fisher kernel and on the planar shape space using complex Watson kernels. PMID:22984295

  9. Consistency of Rasch Model Parameter Estimation: A Simulation Study.

    ERIC Educational Resources Information Center

    van den Wollenberg, Arnold L.; And Others

    1988-01-01

    The unconditional (simultaneous) maximum likelihood (UML) estimation procedure for the one-parameter logistic model produces biased estimators. The UML method is inconsistent and is not a good alternative to the conditional maximum likelihood method, at least with small numbers of items. The minimum chi-square estimation procedure produces unbiased…

  10. Probabilities and statistics for backscatter estimates obtained by a scatterometer with applications to new scatterometer design data

    NASA Technical Reports Server (NTRS)

    Pierson, Willard J., Jr.

    1989-01-01

    The values of the Normalized Radar Backscattering Cross Section (NRCS), sigma(o), obtained by a scatterometer are random variables whose variance is a known function of the expected value. The probability density function can be obtained from the normal distribution. Models express the expected value as a function of the properties of the waves on the ocean and the winds that generated them. Point estimates of the expected value were found from various statistics, given the parameters that define the probability density function for each value. Random intervals were derived with a preassigned probability of containing that value. A statistical test to determine whether or not successive values of sigma(o) are truly independent was derived. The maximum likelihood estimates for wind speed and direction were found, given a model for backscatter as a function of the properties of the waves on the ocean. These estimates are biased as a result of the terms in the equation that involve natural logarithms; calculations of the point estimates of the maximum likelihood values are used to show that the contributions of the logarithmic terms are negligible and that the terms can be omitted.

  11. Short communication: Genetic variation in choice consistency for cows accessing automatic milking units.

    PubMed

    Løvendahl, Peter; Sørensen, Lars Peter; Bjerring, Martin; Lassen, Jan

    2016-12-01

    Dairy cows milked in automatic milking systems (AMS) with more than one milking box may, as individuals, have a preference for specific milking boxes if allowed free choice. Estimates of quantitative genetic variation in behavioral traits of farmed animals have previously been reported, with estimates of heritability ranging widely. However, for the consistency of choice in dairy cows, almost no published estimates of heritability exist. The hypothesis for this study was that choice consistency is partly under additive genetic control and partly controlled by permanent environmental (animal) effects. The aim of this study was to obtain estimates of genetic and phenotypic parameters for choice consistency in dairy cows milked in AMS herds. Data were obtained from 5 commercial Danish herds (I-V) with 2 AMS milking boxes (A, B). Milking data were included only from milkings where both the present and the previous milking were coded as completed; this filter was used to fulfill a criterion of a free-choice situation (713,772 milkings, 1,231 cows). The lactation was divided into 20 segments covering 15 d each, from 5 to 305 d in milk. Choice consistency scores were obtained as the fraction of milkings without change of box [i.e., 1.0 − mean(box change)] for each segment. Data were analyzed for one part of lactation at a time using a linear mixed model for first-parity cows alone and for all parities jointly. Choice consistency was found to be only weakly heritable (heritability = 0.02 to 0.14) in first as well as in later parities, with intermediate repeatability (repeatability coefficients = 0.27 to 0.56). Heritability was especially low in early and late lactation. These results indicate that consistency, which is itself an indication of repeated similar choices, is also repeatable as a trait observed over longer time periods. However, the genetic background seems to play a smaller role compared with that of the permanent animal effects, indicating that consistency could
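
    A minimal sketch of the consistency score as defined above, the fraction of completed milkings without a box change; the visit sequence is hypothetical.

    ```python
    import numpy as np

    # Visited box per completed milking for one cow within a lactation segment
    boxes = np.array(["A", "A", "B", "B", "B", "A", "A", "A"])
    changes = boxes[1:] != boxes[:-1]          # True where the cow switched boxes
    consistency = 1.0 - changes.mean()         # 1.0 - mean(box change)
    print(f"consistency = {consistency:.2f}")  # 0.71 for this sequence
    ```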

  12. On Consistency Test Method of Expert Opinion in Ecological Security Assessment

    PubMed Central

    Wang, Lihong

    2017-01-01

    Ecological security assessment is of great value for proactive design and early warning in human security management. In a comprehensive evaluation of regional ecological security with the participation of experts, the individual judgment level and ability of each expert, and the consistency of the experts' overall opinion, have a very important influence on the evaluation result. This paper studies consistency measures and consensus measures based on the multiplicative and additive consistency properties of fuzzy preference relations (FPRs). We first propose optimization methods to obtain the optimal multiplicatively consistent and additively consistent FPRs of individual and group judgments, respectively. Then, we put forward a consistency measure computed as the distance between the original individual judgment and the optimal individual estimation, along with a consensus measure computed as the distance between the original collective judgment and the optimal collective estimation. Finally, we present a case study on ecological security for five cities. The results show that the optimal FPRs are helpful in measuring the consistency degree of individual judgment and the consensus degree of collective judgment. PMID:28869570
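
    A small numerical sketch of the multiplicative-consistency idea: fit priority weights to a judgment matrix, build the consistent FPR those weights imply, and measure the distance between the two. The log-odds fit is a simple stand-in for the paper's optimization models, and the judgment matrix is hypothetical.

    ```python
    import numpy as np

    # A multiplicatively consistent FPR satisfies r_ij = w_i / (w_i + w_j)
    # for some priority vector w.
    R = np.array([[0.5, 0.6, 0.8],
                  [0.4, 0.5, 0.7],
                  [0.2, 0.3, 0.5]])      # hypothetical expert judgment

    # log(r_ij / (1 - r_ij)) = log(w_i) - log(w_j), so row means recover log(w)
    L = np.log(R / (1 - R))
    w = np.exp(L.mean(axis=1))
    w /= w.sum()

    # Optimal consistent FPR implied by w, and a distance-based consistency index
    Rstar = w[:, None] / (w[:, None] + w[None, :])
    consistency_distance = np.abs(R - Rstar).mean()
    print(f"weights = {w.round(3)}, distance = {consistency_distance:.4f}")
    ```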

  13. On Consistency Test Method of Expert Opinion in Ecological Security Assessment.

    PubMed

    Gong, Zaiwu; Wang, Lihong

    2017-09-04

    Ecological security assessment is of great value for proactive design and early warning in human security management. In a comprehensive evaluation of regional ecological security with the participation of experts, the individual judgment level and ability of each expert, and the consistency of the experts' overall opinion, have a very important influence on the evaluation result. This paper studies consistency measures and consensus measures based on the multiplicative and additive consistency properties of fuzzy preference relations (FPRs). We first propose optimization methods to obtain the optimal multiplicatively consistent and additively consistent FPRs of individual and group judgments, respectively. Then, we put forward a consistency measure computed as the distance between the original individual judgment and the optimal individual estimation, along with a consensus measure computed as the distance between the original collective judgment and the optimal collective estimation. Finally, we present a case study on ecological security for five cities. The results show that the optimal FPRs are helpful in measuring the consistency degree of individual judgment and the consensus degree of collective judgment.

  14. Modular neuron-based body estimation: maintaining consistency over different limbs, modalities, and frames of reference

    PubMed Central

    Ehrenfeld, Stephan; Herbort, Oliver; Butz, Martin V.

    2013-01-01

    This paper addresses the question of how the brain maintains a probabilistic body state estimate over time from a modeling perspective. The neural Modular Modality Frame (nMMF) model simulates such a body state estimation process by continuously integrating redundant, multimodal body state information sources. The body state estimate itself is distributed over separate, but bidirectionally interacting modules. nMMF compares the incoming sensory and present body state information across the interacting modules and fuses the information sources accordingly. At the same time, nMMF enforces body state estimation consistency across the modules. nMMF is able to detect conflicting sensory information and to consequently decrease the influence of implausible sensor sources on the fly. In contrast to the previously published Modular Modality Frame (MMF) model, nMMF offers a biologically plausible neural implementation based on distributed, probabilistic population codes. Besides its neural plausibility, the neural encoding has the advantage of enabling (a) additional probabilistic information flow across the separate body state estimation modules and (b) the representation of arbitrary probability distributions of a body state. The results show that the neural estimates can detect and decrease the impact of false sensory information, can propagate conflicting information across modules, and can improve overall estimation accuracy due to additional module interactions. Even bodily illusions, such as the rubber hand illusion, can be simulated with nMMF. We conclude with an outlook on the potential of modeling human data and of invoking goal-directed behavioral control. PMID:24191151

  15. Batch Effect Confounding Leads to Strong Bias in Performance Estimates Obtained by Cross-Validation

    PubMed Central

    Delorenzi, Mauro

    2014-01-01

    Background: With the large amount of biological data that is currently publicly available, many investigators combine multiple data sets to increase the sample size and potentially also the power of their analyses. However, technical differences ("batch effects") as well as differences in sample composition between the data sets may significantly affect the ability to draw generalizable conclusions from such studies. Focus: The current study focuses on the construction of classifiers, and the use of cross-validation to estimate their performance. In particular, we investigate the impact of batch effects and differences in sample composition between batches on the accuracy of the classification performance estimate obtained via cross-validation. The focus on estimation bias is a main difference compared to previous studies, which have mostly focused on the predictive performance and how it relates to the presence of batch effects. Data: We work on simulated data sets. To have realistic intensity distributions, we use real gene expression data as the basis for our simulation. Random samples from this expression matrix are selected and assigned to group 1 (e.g., 'control') or group 2 (e.g., 'treated'). We introduce batch effects and select some features to be differentially expressed between the two groups. We consider several scenarios for our study, most importantly different levels of confounding between groups and batch effects. Methods: We focus on well-known classifiers: logistic regression, Support Vector Machines (SVM), k-nearest neighbors (kNN), and Random Forests (RF). Feature selection is performed with the Wilcoxon test or the lasso. Parameter tuning and feature selection, as well as the estimation of the prediction performance of each classifier, are performed within a nested cross-validation scheme. The estimated classification performance is then compared to what is obtained when applying the classifier to independent data. PMID:24967636
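
    A compact sketch of the nested cross-validation scheme described above, with feature selection and parameter tuning confined to the inner loop so the outer performance estimate stays honest. The data are synthetic; a univariate F-test stands in for the study's Wilcoxon/lasso feature selection, and only one of the four classifiers is shown.

    ```python
    import numpy as np
    from sklearn.model_selection import GridSearchCV, cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.feature_selection import SelectKBest, f_classif
    from sklearn.svm import SVC

    rng = np.random.default_rng(3)
    X = rng.normal(size=(120, 500))        # expression-like feature matrix
    y = rng.integers(0, 2, size=120)       # two groups, e.g. control vs. treated

    # Inner loop: feature selection + SVM tuning inside each training fold
    inner = GridSearchCV(
        make_pipeline(SelectKBest(f_classif, k=20), SVC()),
        param_grid={"svc__C": [0.1, 1, 10]},
        cv=3,
    )
    # Outer loop: unbiased estimate of the whole fitting procedure
    outer_scores = cross_val_score(inner, X, y, cv=5)
    print("nested-CV accuracy: %.2f +/- %.2f" % (outer_scores.mean(), outer_scores.std()))
    ```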

  16. Decoding tactile afferent activity to obtain an estimate of instantaneous force and torque applied to the fingerpad

    PubMed Central

    Birznieks, Ingvars; Redmond, Stephen J.

    2015-01-01

    Dexterous manipulation is not possible without sensory information about object properties and manipulative forces. Fundamental neuroscience has been unable to demonstrate how information about multiple stimulus parameters may be continuously extracted, concurrently, from a population of tactile afferents. This is the first study to demonstrate this, using spike trains recorded from tactile afferents innervating the monkey fingerpad. A multiple-regression model, requiring no a priori knowledge of stimulus-onset times or stimulus combination, was developed to obtain continuous estimates of instantaneous force and torque. The stimuli consisted of a normal-force ramp (to a plateau of 1.8, 2.2, or 2.5 N), on top of which −3.5, −2.0, 0, +2.0, or +3.5 mNm torque was applied about the normal to the skin surface. The model inputs were sliding windows of binned spike counts recorded from each afferent. Models were trained and tested by 15-fold cross-validation to estimate instantaneous normal force and torque over the entire stimulation period. With the use of the spike trains from 58 slow-adapting type I and 25 fast-adapting type I afferents, the instantaneous normal force and torque could be estimated with small error. This study demonstrated that instantaneous force and torque parameters could be reliably extracted from a small number of tactile afferent responses in a real-time fashion with stimulus combinations that the model had not been exposed to during training. Analysis of the model weights may reveal how interactions between stimulus parameters could be disentangled for complex population responses and could be used to test neurophysiologically relevant hypotheses about encoding mechanisms. PMID:25948866
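
    A toy version of the decoding step: sliding windows of binned spike counts feed a multiple linear regression that tracks instantaneous force. The afferent count, gains, window length, and train/test split are all illustrative assumptions; the study also decoded torque concurrently and used 15-fold cross-validation.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)
    n_aff, n_bins, window = 40, 600, 5
    t = np.arange(n_bins)
    force = 1.8 * (1.0 - np.exp(-t / 80.0))        # ramp to a ~1.8 N plateau
    gains = rng.uniform(0.2, 1.5, size=n_aff)      # per-afferent sensitivity
    spikes = rng.poisson(np.maximum(0.05, gains[:, None] * force[None, :]))

    # Design matrix: for each time bin, the preceding `window` bins of every afferent
    X = np.asarray([spikes[:, i - window:i].ravel() for i in range(window, n_bins)])
    y = force[window:]

    train, test = slice(0, 400), slice(400, None)
    beta, *_ = np.linalg.lstsq(X[train], y[train], rcond=None)
    rmse = np.sqrt(np.mean((X[test] @ beta - y[test]) ** 2))
    print(f"held-out RMSE: {rmse:.3f} N")
    ```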

  17. Martial arts striking hand peak acceleration, accuracy and consistency.

    PubMed

    Neto, Osmar Pinto; Marzullo, Ana Carolina De Miranda; Bolander, Richard P; Bir, Cynthia A

    2013-01-01

    The goal of this paper was to investigate the possible trade-off between peak hand acceleration and the accuracy and consistency of hand strikes performed by martial artists of different training experiences. Ten male martial artists with training experience ranging from one to nine years volunteered to participate in the experiment. Each participant performed 12 maximum-effort goal-directed strikes. Hand acceleration during the strikes was obtained using a tri-axial accelerometer block. A pressure sensor matrix was used to determine the accuracy and consistency of the strikes. Accuracy was estimated by the radial distance between the centroid of each subject's 12 strikes and the target, whereas consistency was estimated by the square root of the 12 strikes' mean squared distance from their centroid. We found that training experience was significantly correlated with hand peak acceleration prior to impact (r² = 0.456, p = 0.032) and accuracy (r² = 0.621, p = 0.012). These correlations suggest that more experienced participants exhibited higher hand peak accelerations and at the same time were more accurate. Training experience, however, was not correlated with consistency (r² = 0.085, p = 0.413). Overall, our results suggest that martial arts training may lead practitioners to achieve higher striking hand accelerations with better accuracy and no change in striking consistency.
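
    The two definitions translate directly into code; the impact coordinates below are made up. Note that a smaller consistency value means a tighter (more consistent) strike cluster.

    ```python
    import numpy as np

    rng = np.random.default_rng(5)
    target = np.array([0.0, 0.0])
    strikes = rng.normal([0.8, -0.3], 1.5, size=(12, 2))   # 12 strike impacts (cm)

    centroid = strikes.mean(axis=0)
    accuracy = np.linalg.norm(centroid - target)           # centroid-to-target distance
    consistency = np.sqrt(((strikes - centroid) ** 2).sum(axis=1).mean())  # RMS spread
    print(f"accuracy = {accuracy:.2f} cm, consistency = {consistency:.2f} cm")
    ```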

  18. Accuracy of patient-specific organ dose estimates obtained using an automated image segmentation algorithm.

    PubMed

    Schmidt, Taly Gilat; Wang, Adam S; Coradi, Thomas; Haas, Benjamin; Star-Lack, Josh

    2016-10-01

    The overall goal of this work is to develop a rapid, accurate, and automated software tool to estimate patient-specific organ doses from computed tomography (CT) scans using simulations to generate dose maps combined with automated segmentation algorithms. This work quantified the accuracy of organ dose estimates obtained by an automated segmentation algorithm. We hypothesized that the autosegmentation algorithm is sufficiently accurate to provide organ dose estimates, since small errors delineating organ boundaries will have minimal effect when computing mean organ dose. A leave-one-out validation study of the automated algorithm was performed with 20 head-neck CT scans expertly segmented into nine regions. Mean organ doses of the automatically and expertly segmented regions were computed from Monte Carlo-generated dose maps and compared. The automated segmentation algorithm estimated the mean organ dose to be within 10% of the expert segmentation for regions other than the spinal canal, with the median error for each organ region below 2%. In the spinal canal region, the median error was −7%, with a maximum absolute error of 28% for the single-atlas approach and 11% for the multiatlas approach. The results demonstrate that the automated segmentation algorithm can provide accurate organ dose estimates despite some segmentation errors.

  19. Accuracy of patient-specific organ dose estimates obtained using an automated image segmentation algorithm

    PubMed Central

    Schmidt, Taly Gilat; Wang, Adam S.; Coradi, Thomas; Haas, Benjamin; Star-Lack, Josh

    2016-01-01

    The overall goal of this work is to develop a rapid, accurate, and automated software tool to estimate patient-specific organ doses from computed tomography (CT) scans using simulations to generate dose maps combined with automated segmentation algorithms. This work quantified the accuracy of organ dose estimates obtained by an automated segmentation algorithm. We hypothesized that the autosegmentation algorithm is sufficiently accurate to provide organ dose estimates, since small errors delineating organ boundaries will have minimal effect when computing mean organ dose. A leave-one-out validation study of the automated algorithm was performed with 20 head-neck CT scans expertly segmented into nine regions. Mean organ doses of the automatically and expertly segmented regions were computed from Monte Carlo-generated dose maps and compared. The automated segmentation algorithm estimated the mean organ dose to be within 10% of the expert segmentation for regions other than the spinal canal, with the median error for each organ region below 2%. In the spinal canal region, the median error was −7%, with a maximum absolute error of 28% for the single-atlas approach and 11% for the multiatlas approach. The results demonstrate that the automated segmentation algorithm can provide accurate organ dose estimates despite some segmentation errors. PMID:27921070

  20. An iterative procedure for obtaining maximum-likelihood estimates of the parameters for a mixture of normal distributions

    NASA Technical Reports Server (NTRS)

    Peters, B. C., Jr.; Walker, H. F.

    1975-01-01

    A general iterative procedure is given for determining the consistent maximum likelihood estimates of the parameters of a mixture of normal distributions. In addition, convergence to a local maximum of the log-likelihood function is discussed, along with Newton's method, a method of scoring, and modifications of these procedures.
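
    The classic expectation-maximization (EM) iteration below is a minimal stand-in for the kind of iterative maximum-likelihood procedure the record describes, here for a two-component univariate Gaussian mixture; initialization and stopping rules are simplified.

    ```python
    import numpy as np

    rng = np.random.default_rng(6)
    x = np.concatenate([rng.normal(-2, 1, 300), rng.normal(3, 1.5, 200)])

    w, mu, sig = np.array([0.5, 0.5]), np.array([-1.0, 1.0]), np.array([1.0, 1.0])
    for _ in range(200):
        # E-step: posterior responsibility of each component for each point
        pdf = np.exp(-0.5 * ((x[:, None] - mu) / sig) ** 2) / (sig * np.sqrt(2 * np.pi))
        r = w * pdf
        r /= r.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means, and standard deviations
        n_k = r.sum(axis=0)
        w = n_k / len(x)
        mu = (r * x[:, None]).sum(axis=0) / n_k
        sig = np.sqrt((r * (x[:, None] - mu) ** 2).sum(axis=0) / n_k)

    print("weights", w.round(2), "means", mu.round(2), "sds", sig.round(2))
    ```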

  1. Integrating field plots, lidar, and landsat time series to provide temporally consistent annual estimates of biomass from 1990 to present

    Treesearch

    Warren B. Cohen; Hans-Erik Andersen; Sean P. Healey; Gretchen G. Moisen; Todd A. Schroeder; Christopher W. Woodall; Grant M. Domke; Zhiqiang Yang; Robert E. Kennedy; Stephen V. Stehman; Curtis Woodcock; Jim Vogelmann; Zhe Zhu; Chengquan Huang

    2015-01-01

    We are developing a system that provides temporally consistent biomass estimates for national greenhouse gas inventory reporting to the United Nations Framework Convention on Climate Change. Our model-assisted estimation framework relies on remote sensing to scale from plot measurements to lidar strip samples, to Landsat time series-based maps. As a demonstration, new...

  2. Precise attitude rate estimation using star images obtained by mission telescope for satellite missions

    NASA Astrophysics Data System (ADS)

    Inamori, Takaya; Hosonuma, Takayuki; Ikari, Satoshi; Saisutjarit, Phongsatorn; Sako, Nobutada; Nakasuka, Shinichi

    2015-02-01

    Recently, small satellites have been employed in various satellite missions such as astronomical observation and remote sensing. During these missions, the attitudes of small satellites should be stabilized to a higher accuracy to obtain accurate science data and images. To achieve precise attitude stabilization, these small satellites should estimate their attitude rate under strict constraints of mass, space, and cost. This research presents a new method for small satellites to precisely estimate their angular rate from blurred star images by employing a mission telescope, in order to achieve precise attitude stabilization. In this method, the angular velocity is estimated by assessing the quality of a star image, based on how blurred it appears to be. Because the proposed method utilizes existing mission devices, a satellite does not require additional precise rate sensors, which makes it easier to achieve precise stabilization under the strict constraints faced by small satellites. The research studied the relationship between estimation accuracy and the parameters used to achieve an attitude rate estimation with a precision better than 1 × 10^-6 rad/s. The method can be applied to all attitude sensors that use optics systems, such as sun sensors and star trackers (STTs). Finally, the method is applied to the nano astrometry satellite Nano-JASMINE, and we investigate the problems that are expected to arise with real small satellites by performing numerical simulations.

  3. Effect of windowing on lithosphere elastic thickness estimates obtained via the coherence method: Results from northern South America

    NASA Astrophysics Data System (ADS)

    Ojeda, GermáN. Y.; Whitman, Dean

    2002-11-01

    The effective elastic thickness (Te) of the lithosphere is a parameter that describes the flexural strength of a plate. A method routinely used to quantify this parameter is to calculate the coherence between the two-dimensional gravity and topography spectra. Prior to spectra calculation, data grids must be "windowed" in order to avoid edge effects. We investigated the sensitivity of Te estimates obtained via the coherence method to mirroring, Hanning and multitaper windowing techniques on synthetic data as well as on data from northern South America. These analyses suggest that the choice of windowing technique plays an important role in Te estimates and may result in discrepancies of several kilometers depending on the selected windowing method. Te results from mirrored grids tend to be greater than those from Hanning smoothed or multitapered grids. Results obtained from mirrored grids are likely to be over-estimates. This effect may be due to artificial long wavelengths introduced into the data at the time of mirroring. Coherence estimates obtained from three subareas in northern South America indicate that the average effective elastic thickness is in the range of 29-30 km, according to Hanning and multitaper windowed data. Lateral variations across the study area could not be unequivocally determined from this study. We suggest that the resolution of the coherence method does not permit evaluation of small (i.e., ~5 km), local Te variations. However, the efficiency and robustness of the coherence method in rendering continent-scale estimates of elastic thickness has been confirmed.

  4. Multiscale analysis of potential fields by a ridge consistency criterion: the reconstruction of the Bishop basement

    NASA Astrophysics Data System (ADS)

    Fedi, M.; Florio, G.; Cascone, L.

    2012-01-01

    We use a multiscale approach as a semi-automated tool for interpreting potential fields. The depth to the source and the structural index are estimated in two steps: first the depth to the source, as the intersection of the field ridges (lines built by joining the extrema of the field at various altitudes), and second the structural index, by the scale function. We introduce a new criterion, called 'ridge consistency', into this strategy. The criterion is based on the principle that the structural index estimates on all the ridges converging toward the same source should be consistent. If these estimates are significantly different, field differentiation is used to lessen the interference effects from nearby sources or regional fields, to obtain a consistent set of estimates. In our multiscale framework, vertical differentiation is naturally joined to the low-pass filtering properties of upward continuation, and so is a stable process. Before applying our criterion, we carefully studied the errors in upward continuation caused by the finite size of the survey area. To this end, we analysed the complex magnetic synthetic case known as the Bishop model, and evaluated the best extrapolation algorithm and the optimal width of the area extension needed to obtain accurate upward continuation. Afterwards, we applied the method to the depth estimation of the whole Bishop basement bathymetry. The result is a good reconstruction of the complex basement and of the shape properties of the source at the estimated points.

  5. Comparison of estimates of left ventricular ejection fraction obtained from gated blood pool imaging, different software packages and cameras.

    PubMed

    Steyn, Rachelle; Boniaszczuk, John; Geldenhuys, Theodore

    2014-01-01

    To determine how two software packages, supplied by Siemens and Hermes, for processing gated blood pool (GBP) studies should be used in our department, and whether the use of different cameras for the acquisition of raw data influences the results. The study had two components. For the first component, 200 studies were acquired on a General Electric (GE) camera and processed three times by three operators using the Siemens and Hermes software packages. For the second component, 200 studies were acquired on two different cameras (GE and Siemens). The matched pairs of raw data were processed by one operator using the Siemens and Hermes software packages. The Siemens method consistently gave estimates that were 4.3% higher than the Hermes method (p < 0.001). The differences were not associated with any particular level of left ventricular ejection fraction (LVEF). There was no difference in the estimates of LVEF obtained by the three operators (p = 0.1794). The reproducibility of estimates was good. In 95% of patients, using the Siemens method, the SD of the three estimates of LVEF by operator 1 was ≤ 1.7, operator 2 was ≤ 2.1, and operator 3 was ≤ 1.3. The corresponding values for the Hermes method were ≤ 2.5, ≤ 2.0, and ≤ 2.1. There was no difference in the results of matched pairs of data acquired on different cameras (p = 0.4933). CONCLUSION: Software packages for processing GBP studies are not interchangeable. The report should include the name and version of the software package used. Wherever possible, the same package should be used for serial studies. If this is not possible, the report should include the limits of agreement of the different packages. Data acquisition on different cameras did not influence the results.

  6. Accuracy of patient specific organ-dose estimates obtained using an automated image segmentation algorithm

    NASA Astrophysics Data System (ADS)

    Gilat-Schmidt, Taly; Wang, Adam; Coradi, Thomas; Haas, Benjamin; Star-Lack, Josh

    2016-03-01

    The overall goal of this work is to develop a rapid, accurate and fully automated software tool to estimate patient-specific organ doses from computed tomography (CT) scans using a deterministic Boltzmann Transport Equation solver and automated CT segmentation algorithms. This work quantified the accuracy of organ dose estimates obtained by an automated segmentation algorithm. The investigated algorithm uses a combination of feature-based and atlas-based methods. A multiatlas approach was also investigated. We hypothesize that the auto-segmentation algorithm is sufficiently accurate to provide organ dose estimates since random errors at the organ boundaries will average out when computing the total organ dose. To test this hypothesis, twenty head-neck CT scans were expertly segmented into nine regions. A leave-one-out validation study was performed, where every case was automatically segmented with each of the remaining cases used as the expert atlas, resulting in nineteen automated segmentations for each of the twenty datasets. The segmented regions were applied to gold-standard Monte Carlo dose maps to estimate mean and peak organ doses. The results demonstrated that the fully automated segmentation algorithm estimated the mean organ dose to within 10% of the expert segmentation for regions other than the spinal canal, with median error for each organ region below 2%. In the spinal canal region, the median error was 7% across all data sets and atlases, with a maximum error of 20%. The error in peak organ dose was below 10% for all regions, with a median error below 4% for all organ regions. The multiple-case atlas reduced the variation in the dose estimates and additional improvements may be possible with more robust multi-atlas approaches. Overall, the results support potential feasibility of an automated segmentation algorithm to provide accurate organ dose estimates.

  7. Consistency of students' argumentation about fluids

    NASA Astrophysics Data System (ADS)

    Viyanti; Cari; Suparmi; Winarti; Slamet Budiarti, Indah; Handika, Jeffry; Widyastuti, Fatma

    2017-01-01

    Problem solving for physics concepts through consistent argumentation can improve students' thinking skills and is an important part of science. The study aims to assess the consistency of students' argumentation on the topic of fluids. The population of this study consisted of college students at PGRI Madiun, UIN Sunan Kalijaga Yogyakarta, and Lampung University. Using cluster random sampling, a sample of 145 students was obtained. The study used a descriptive survey method. Data were obtained through a multiple-choice test and reasoned interviews. The fluid problems were modified from [9] and [1]. The results show average argumentation consistency of 4.85% for correct consistency, 29.93% for wrong consistency, and 65.23% for inconsistency. These findings reflect a lack of understanding of the fluid material, since fully consistent argumentation ideally accompanies an expanded understanding of the concept. The results serve as a reference for making improvements in future studies aimed at achieving positive change in the consistency of argumentation.

  8. Challenges in Obtaining Estimates of the Risk of Tuberculosis Infection During Overseas Deployment.

    PubMed

    Mancuso, James D; Geurts, Mia

    2015-12-01

    Estimates of the risk of tuberculosis (TB) infection resulting from overseas deployment among U.S. military service members have varied widely and have been plagued by methodological problems. The purpose of this study was to estimate the incidence of TB infection in the U.S. military resulting from deployment. Three populations were examined: 1) a unit of 2,228 soldiers redeploying from Iraq in 2008, 2) a cohort of 1,978 soldiers followed up over 5 years after basic training at Fort Jackson in 2009, and 3) 6,062 participants in the 2011-2012 National Health and Nutrition Examination Survey (NHANES). The risk of TB infection in the deployed population was low, at 0.6% (95% confidence interval [CI]: 0.1-2.3%), and was similar to that in the non-deployed population. The prevalence of latent TB infection (LTBI) in the U.S. population was not significantly different among deployed veterans, non-deployed veterans, and those with no military service. The limitations of these retrospective studies highlight the challenge of obtaining valid estimates of risk using retrospective data and the need for a more definitive study. Similar to civilian long-term travelers, risks for TB infection during deployment are focal in nature, and testing should be targeted to only those at increased risk. © The American Society of Tropical Medicine and Hygiene.

  9. Plasma Diffusion in Self-Consistent Fluctuations

    NASA Technical Reports Server (NTRS)

    Smets, R.; Belmont, G.; Aunai, N.

    2012-01-01

    The problem of particle diffusion in position space, as a consequence of electromagnetic fluctuations, is addressed. Numerical results obtained with a self-consistent hybrid code are presented, and a method to calculate the diffusion coefficient in the direction perpendicular to the mean magnetic field is proposed. The diffusion is estimated for two different types of fluctuations. The first type (resulting from an agyrotropic initial setting) is stationary, wide-band white noise, and associated with a Gaussian probability distribution function for the magnetic fluctuations. The second type (resulting from a Kelvin-Helmholtz instability) is non-stationary, with a power-law spectrum, and a non-Gaussian probability distribution function. The results of the study allow revisiting the question of the loading of particles of solar wind origin into the Earth magnetosphere.

  10. Atmospheric Turbulence Estimates from a Pulsed Lidar

    NASA Technical Reports Server (NTRS)

    Pruis, Matthew J.; Delisi, Donald P.; Ahmad, Nash'at N.; Proctor, Fred H.

    2013-01-01

    Estimates of the eddy dissipation rate (EDR) were obtained from measurements made by a coherent pulsed lidar and compared with estimates from mesoscale model simulations and measurements from an in situ sonic anemometer at the Denver International Airport and with EDR estimates from the last observation time of the trailing vortex pair. The estimates of EDR from the lidar were obtained using two different methodologies. The two methodologies show consistent estimates of the vertical profiles. Comparison of EDR derived from the Weather Research and Forecast (WRF) mesoscale model with the in situ lidar estimates show good agreement during the daytime convective boundary layer, but the WRF simulations tend to overestimate EDR during the nighttime. The EDR estimates from a sonic anemometer located at 7.3 meters above ground level are approximately one order of magnitude greater than both the WRF and lidar estimates - which are from greater heights - during the daytime convective boundary layer and substantially greater during the nighttime stable boundary layer. The consistency of the EDR estimates from different methods suggests a reasonable ability to predict the temporal evolution of a spatially averaged vertical profile of EDR in an airport terminal area using a mesoscale model during the daytime convective boundary layer. In the stable nighttime boundary layer, there may be added value to EDR estimates provided by in situ lidar measurements.

  11. The first step toward genetic selection for host tolerance to infectious pathogens: obtaining the tolerance phenotype through group estimates

    PubMed Central

    Doeschl-Wilson, Andrea B.; Villanueva, Beatriz; Kyriazakis, Ilias

    2012-01-01

    Reliable phenotypes are paramount for meaningful quantification of genetic variation and for estimating individual breeding values on which genetic selection is based. In this paper, we assert that genetic improvement of host tolerance to disease, although desirable, may first of all be handicapped by the ability to obtain unbiased tolerance estimates at the phenotypic level. In contrast to resistance, which can be inferred from appropriate measures of within-host pathogen burden, tolerance is more difficult to quantify, as it refers to the change in performance with respect to changes in pathogen burden. For this reason, tolerance phenotypes have only been specified at the level of a group of individuals, where such phenotypes can be estimated using regression analysis. However, few studies have raised the potential bias in these estimates resulting from confounding effects between resistance and tolerance. Using a simulation approach, we demonstrate (i) how these group tolerance estimates depend on within-group variation and co-variation in resistance, tolerance, and vigor (performance in a pathogen-free environment); and (ii) how tolerance estimates are affected by changes in pathogen virulence over the time course of infection and by the timing of measurements. We found that in order to obtain reliable group tolerance estimates, it is important to account for individual variation in vigor, if present, and that all individuals be at the same stage of infection when measurements are taken. The latter requirement makes estimation of tolerance based on cross-sectional field data challenging, as individuals become infected at different time points and the individual onset of infection is unknown. Repeated individual measurements of within-host pathogen burden and performance would not only be valuable for inferring the infection status of individuals in field conditions, but would also provide tolerance estimates that capture the entire time course of infection.

  12. An Optimal Estimation Method to Obtain Surface Layer Turbulent Fluxes from Profile Measurements

    NASA Astrophysics Data System (ADS)

    Kang, D.

    2015-12-01

    In the absence of direct turbulence measurements, the turbulence characteristics of the atmospheric surface layer are often derived from measurements of the surface layer mean properties based on Monin-Obukhov Similarity Theory (MOST). This approach requires two levels of the ensemble mean wind, temperature, and water vapor, from which the fluxes of momentum, sensible heat, and water vapor can be obtained. When only one measurement level is available, the roughness heights and the assumed properties of the corresponding variables at the respective roughness heights are used. In practice, the temporal mean with a large number of samples is used in place of the ensemble mean. However, in many situations samples are taken at multiple levels, and it is thus desirable to derive the boundary layer flux properties using all measurements. In this study, we used an optimal estimation approach to derive surface layer properties based on all available measurements. This approach assumes that the samples are taken from a population whose ensemble mean profile follows MOST. An optimized estimate is obtained when the results yield a minimum of a cost function defined as a weighted summation of the error variance at each sample altitude, with weights based on the sample data variance and the altitude of the measurements. This method was applied to measurements in the marine atmospheric surface layer from a small boat using a radiosonde on a tethered balloon, where temperature and relative humidity profiles in the lowest 50 m were made repeatedly in about 30 minutes. We will present the resultant fluxes and the derived MOST mean profiles using different sets of measurements. The advantage of this method over 'traditional' methods will be illustrated, and some limitations of this optimization method will also be discussed. Its application to quantifying the effects of the marine surface layer environment on radar and communication signal propagation will be shown as well.
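
    A simplified stand-in for the optimal estimation step: a weighted least-squares fit of the neutral-stability MOST wind profile u(z) = (u*/k) ln(z/z0) to multi-level samples, with inverse-variance weights. The heights, winds, variances, and the omission of the stability correction psi are all illustrative assumptions.

    ```python
    import numpy as np

    kappa = 0.4
    z = np.array([2.0, 5.0, 10.0, 20.0, 40.0])        # measurement heights (m)
    u = np.array([3.1, 3.8, 4.3, 4.9, 5.4])           # mean wind speeds (m/s)
    var = np.array([0.10, 0.08, 0.06, 0.09, 0.12])    # sample error variances

    # Linear in (a, b): u = a * ln(z) + b, with a = u*/kappa and z0 = exp(-b/a)
    A = np.column_stack([np.log(z), np.ones_like(z)])
    W = np.diag(1.0 / var)                            # inverse-variance weights
    a, b = np.linalg.solve(A.T @ W @ A, A.T @ W @ u)
    ustar, z0 = kappa * a, np.exp(-b / a)
    print(f"u* = {ustar:.2f} m/s, z0 = {z0:.3f} m")
    ```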

  13. Estimates of the solar internal angular velocity obtained with the Mt. Wilson 60-foot solar tower

    NASA Technical Reports Server (NTRS)

    Rhodes, Edward J., Jr.; Cacciani, Alessandro; Woodard, Martin; Tomczyk, Steven; Korzennik, Sylvain

    1987-01-01

    Estimates are obtained of the solar internal angular velocity from measurements of the frequency splittings of p-mode oscillations. A 16-day time series of full-disk Dopplergrams obtained during July and August 1984 at the 60-foot tower telescope of the Mt. Wilson Observatory is analyzed. Power spectra were computed for all of the zonal, tesseral, and sectoral p-modes from l = 0 to 89 and for all of the sectoral p-modes from l = 90 to 200. A mean power spectrum was calculated for each degree up to 89. The frequency differences of all of the different nonzonal modes were calculated for these mean power spectra.

  14. Time-of-flight PET time calibration using data consistency

    NASA Astrophysics Data System (ADS)

    Defrise, Michel; Rezaei, Ahmadreza; Nuyts, Johan

    2018-05-01

    This paper presents new data-driven methods for the time-of-flight (TOF) calibration of positron emission tomography (PET) scanners. These methods are derived from the consistency condition for TOF PET; they can be applied to data measured with an arbitrary tracer distribution and are numerically efficient because they do not require a preliminary image reconstruction from the non-TOF data. Two-dimensional simulations are presented for one of the methods, which only involves the first two moments of the data with respect to the TOF variable. The numerical results show that this method estimates the detector timing offsets with errors that are larger than those obtained via an initial non-TOF reconstruction, but that remain smaller than the TOF resolution and thereby have a limited impact on the quantitative accuracy of the activity image estimated with standard maximum likelihood reconstruction algorithms.

  15. The Principle of Energetic Consistency

    NASA Technical Reports Server (NTRS)

    Cohn, Stephen E.

    2009-01-01

    A basic result in estimation theory is that the minimum variance estimate of the dynamical state, given the observations, is the conditional mean estimate. This result holds independently of the specifics of any dynamical or observation nonlinearity or stochasticity, requiring only that the probability density function of the state, conditioned on the observations, has two moments. For nonlinear dynamics that conserve a total energy, this general result implies the principle of energetic consistency: if the dynamical variables are taken to be the natural energy variables, then the sum of the total energy of the conditional mean and the trace of the conditional covariance matrix (the total variance) is constant between observations. Ensemble Kalman filtering methods are designed to approximate the evolution of the conditional mean and covariance matrix. For them the principle of energetic consistency holds independently of ensemble size, even with covariance localization. However, full Kalman filter experiments with advection dynamics have shown that a small amount of numerical dissipation can cause a large, state-dependent loss of total variance, to the detriment of filter performance. The principle of energetic consistency offers a simple way to test whether this spurious loss of variance limits ensemble filter performance in full-blown applications. The classical second-moment closure (third-moment discard) equations also satisfy the principle of energetic consistency, independently of the rank of the conditional covariance matrix. Low-rank approximation of these equations offers an energetically consistent, computationally viable alternative to ensemble filtering. Current formulations of long-window, weak-constraint, four-dimensional variational methods are designed to approximate the conditional mode rather than the conditional mean. Thus they neglect the nonlinear bias term in the second-moment closure equation for the conditional mean. The principle of
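
    For a quadratic total energy in the natural energy variables, the principle follows from the standard second-moment identity (notation assumed here: state x, observations y):

    ```latex
    % Expected total energy decomposes into the energy of the conditional mean
    % plus the total variance; if the dynamics conserve the left-hand side, the
    % right-hand side is constant between observations.
    \mathbb{E}\left[ x^{\mathsf{T}} x \mid y \right]
      = \mathbb{E}[x \mid y]^{\mathsf{T}} \, \mathbb{E}[x \mid y]
      + \operatorname{tr}\!\left( \operatorname{Cov}(x \mid y) \right)
    ```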

  16. Efficient bootstrap estimates for tail statistics

    NASA Astrophysics Data System (ADS)

    Breivik, Øyvind; Aarnes, Ole Johan

    2017-03-01

    Bootstrap resamples can be used to investigate the tail of empirical distributions as well as return value estimates from the extremal behaviour of the sample. Specifically, the confidence intervals on return value estimates or bounds on in-sample tail statistics can be obtained using bootstrap techniques. However, non-parametric bootstrapping from the entire sample is expensive. It is shown here that it suffices to bootstrap from a small subset consisting of the highest entries in the sequence to make estimates that are essentially identical to bootstraps from the entire sample. Similarly, bootstrap estimates of confidence intervals of threshold return estimates are found to be well approximated by using a subset consisting of the highest entries. This has practical consequences in fields such as meteorology, oceanography and hydrology where return values are calculated from very large gridded model integrations spanning decades at high temporal resolution or from large ensembles of independent and identically distributed model fields. In such cases the computational savings are substantial.
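
    A sketch of the subset idea: bootstrap a high-quantile estimate from only the top-k order statistics instead of the full sample. The distribution, k, and quantile level are illustrative; the target quantile is mapped into the retained tail via 1 − (1 − q)·n/k.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)
    sample = rng.gumbel(loc=10.0, scale=2.0, size=100_000)  # stand-in for block maxima
    q = 0.9999
    k = 2_000                                               # keep only the top k values
    tail = np.sort(sample)[-k:]

    # Quantile level of the target within the retained tail subset
    q_tail = 1.0 - (1.0 - q) * len(sample) / k
    boots = np.array([
        np.quantile(rng.choice(tail, size=k, replace=True), q_tail)
        for _ in range(1_000)
    ])
    lo, hi = np.percentile(boots, [2.5, 97.5])
    print(f"point = {np.quantile(sample, q):.2f}, 95% CI = ({lo:.2f}, {hi:.2f})")
    ```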

  17. Estimating Classification Consistency and Accuracy for Cognitive Diagnostic Assessment

    ERIC Educational Resources Information Center

    Cui, Ying; Gierl, Mark J.; Chang, Hua-Hua

    2012-01-01

    This article introduces procedures for the computation and asymptotic statistical inference for classification consistency and accuracy indices specifically designed for cognitive diagnostic assessments. The new classification indices can be used as important indicators of the reliability and validity of classification results produced by…

  18. Comparison of internal dose estimates obtained using organ-level, voxel S value, and Monte Carlo techniques

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Grimes, Joshua, E-mail: grimes.joshua@mayo.edu; Celler, Anna

    2014-09-15

    Purpose: The authors' objective was to compare internal dose estimates obtained using the Organ Level Dose Assessment with Exponential Modeling (OLINDA/EXM) software, the voxel S value technique, and Monte Carlo simulation. Monte Carlo dose estimates were used as the reference standard to assess the impact of patient-specific anatomy on the final dose estimate. Methods: Six patients injected with 99mTc-hydrazinonicotinamide-Tyr3-octreotide were included in this study. A hybrid planar/SPECT imaging protocol was used to estimate 99mTc time-integrated activity coefficients (TIACs) for kidneys, liver, spleen, and tumors. Additionally, TIACs were predicted for 131I, 177Lu, and 90Y assuming the same biological half-lives as the 99mTc-labeled tracer. The TIACs were used as input to OLINDA/EXM for organ-level dose calculation, and voxel-level dosimetry was performed using the voxel S value method and Monte Carlo simulation. Dose estimates for 99mTc, 131I, 177Lu, and 90Y distributions were evaluated by comparing (i) organ-level S values corresponding to each method, (ii) total tumor and organ doses, (iii) differences in right and left kidney doses, and (iv) voxelized dose distributions calculated by Monte Carlo and the voxel S value technique. Results: The S values for all investigated radionuclides used by OLINDA/EXM and the corresponding patient-specific S values calculated by Monte Carlo agreed within 2.3% on average for self-irradiation, and differed by as much as 105% for cross-organ irradiation. Total organ doses calculated by OLINDA/EXM and the voxel S value technique agreed with Monte Carlo results within approximately ±7%. Differences between right and left kidney doses determined by Monte Carlo were as high as 73%. Comparison of the Monte Carlo and voxel S value dose distributions showed that each method produced similar dose volume histograms, with a minimum dose covering 90% of the volume

  19. Classification with asymmetric label noise: Consistency and maximal denoising

    DOE PAGES

    Blanchard, Gilles; Flaska, Marek; Handy, Gregory; ...

    2016-09-20

In many real-world classification problems, the labels of training examples are randomly corrupted. Most previous theoretical work on classification with label noise assumes that the two classes are separable, that the label noise is independent of the true class label, or that the noise proportions for each class are known. In this work, we give conditions that are necessary and sufficient for the true class-conditional distributions to be identifiable. These conditions are weaker than those analyzed previously, and allow for the classes to be nonseparable and the noise levels to be asymmetric and unknown. The conditions essentially state that a majority of the observed labels are correct and that the true class-conditional distributions are “mutually irreducible,” a concept we introduce that limits the similarity of the two distributions. For any label noise problem, there is a unique pair of true class-conditional distributions satisfying the proposed conditions, and we argue that this pair corresponds in a certain sense to maximal denoising of the observed distributions. Our results are facilitated by a connection to “mixture proportion estimation,” which is the problem of estimating the maximal proportion of one distribution that is present in another. We establish a novel rate of convergence result for mixture proportion estimation, and apply this to obtain consistency of a discrimination rule based on surrogate loss minimization. Experimental results on benchmark data and a nuclear particle classification problem demonstrate the efficacy of our approach. MSC 2010 subject classifications: Primary 62H30; secondary 68T10. Keywords and phrases: Classification, label noise, mixture proportion estimation, surrogate loss, consistency.

  20. Classification with asymmetric label noise: Consistency and maximal denoising

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Blanchard, Gilles; Flaska, Marek; Handy, Gregory

In many real-world classification problems, the labels of training examples are randomly corrupted. Most previous theoretical work on classification with label noise assumes that the two classes are separable, that the label noise is independent of the true class label, or that the noise proportions for each class are known. In this work, we give conditions that are necessary and sufficient for the true class-conditional distributions to be identifiable. These conditions are weaker than those analyzed previously, and allow for the classes to be nonseparable and the noise levels to be asymmetric and unknown. The conditions essentially state that a majority of the observed labels are correct and that the true class-conditional distributions are “mutually irreducible,” a concept we introduce that limits the similarity of the two distributions. For any label noise problem, there is a unique pair of true class-conditional distributions satisfying the proposed conditions, and we argue that this pair corresponds in a certain sense to maximal denoising of the observed distributions. Our results are facilitated by a connection to “mixture proportion estimation,” which is the problem of estimating the maximal proportion of one distribution that is present in another. We establish a novel rate of convergence result for mixture proportion estimation, and apply this to obtain consistency of a discrimination rule based on surrogate loss minimization. Experimental results on benchmark data and a nuclear particle classification problem demonstrate the efficacy of our approach. MSC 2010 subject classifications: Primary 62H30; secondary 68T10. Keywords and phrases: Classification, label noise, mixture proportion estimation, surrogate loss, consistency.

  1. LC-MS/MS-based approach for obtaining exposure estimates of metabolites in early clinical trials using radioactive metabolites as reference standards.

    PubMed

    Zhang, Donglu; Raghavan, Nirmala; Chando, Theodore; Gambardella, Janice; Fu, Yunlin; Zhang, Duxi; Unger, Steve E; Humphreys, W Griffith

    2007-12-01

An LC-MS/MS-based approach that employs authentic radioactive metabolites as reference standards was developed to estimate metabolite exposures in early drug development studies. This method is useful for estimating metabolite levels in studies done with non-radiolabeled compounds where metabolite standards are not available to allow standard LC-MS/MS assay development. A metabolite mixture obtained from an in vivo source treated with a radiolabeled compound was partially purified, quantified, and spiked into human plasma to provide metabolite standard curves. Metabolites were analyzed by LC-MS/MS using the specific mass transitions and an internal standard. The metabolite concentrations determined by this approach were found to be comparable to those determined by validated LC-MS/MS assays. This approach does not require synthesis of authentic metabolites or knowledge of the exact structures of metabolites, and therefore should provide a useful method for obtaining early estimates of circulating metabolites in early clinical or toxicological studies.

  2. A time-frequency analysis method to obtain stable estimates of magnetotelluric response function based on Hilbert-Huang transform

    NASA Astrophysics Data System (ADS)

    Cai, Jianhua

    2017-05-01

The time-frequency analysis method represents a signal as a function of time and frequency, and it is considered a powerful tool for handling arbitrary non-stationary time series by using instantaneous frequency and instantaneous amplitude. It also provides a possible alternative for the analysis of the non-stationary magnetotelluric (MT) signal. Based on the Hilbert-Huang transform (HHT), a time-frequency analysis method is proposed to obtain stable estimates of the magnetotelluric response function. In contrast to conventional methods, the response function estimation is performed in the time-frequency domain using instantaneous spectra rather than in the frequency domain, which allows for imaging the response parameter content as a function of time and frequency. The theory of the method is presented, and the mathematical model and calculation procedure, which are used to estimate the response function based on the HHT time-frequency spectrum, are discussed. To evaluate the results, response function estimates are compared with estimates from a standard MT data processing method based on the Fourier transform. All results show that apparent resistivities and phases calculated from the HHT time-frequency method are generally more stable and reliable than those determined from simple Fourier analysis. The proposed method overcomes the drawbacks of the traditional Fourier methods, and the resulting parameter estimates minimise the estimation bias caused by the non-stationary characteristics of the MT data.
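
    For background on the HHT step, the instantaneous amplitude and frequency of a component are read off the analytic signal produced by the Hilbert transform. A minimal Python sketch on a toy non-stationary signal (the full HHT additionally requires empirical mode decomposition, which is omitted):

      import numpy as np
      from scipy.signal import hilbert

      fs = 100.0                                   # sampling frequency, Hz
      t = np.arange(0, 10, 1 / fs)
      x = np.sin(2 * np.pi * (1 + 0.2 * t) * t)    # toy chirp-like signal

      analytic = hilbert(x)
      inst_amp = np.abs(analytic)                  # instantaneous amplitude
      inst_phase = np.unwrap(np.angle(analytic))
      inst_freq = np.diff(inst_phase) * fs / (2 * np.pi)  # instantaneous frequency, Hz

      print("mean instantaneous frequency:", inst_freq.mean())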

  3. Obtaining Parts

    Science.gov Websites

The Cosmic Connection: parts for the Berkeley Detector. Suppliers include Eljen Technology (scintillator). The page lists where to obtain the components needed to build the Berkeley Detector and the companies that have helped previous builders, as of the last update. The maintainer estimates that the cost to build a detector varies from $1500 to $2700.

  4. Obtaining parsimonious hydraulic conductivity fields using head and transport observations: A Bayesian geostatistical parameter estimation approach

    NASA Astrophysics Data System (ADS)

    Fienen, M.; Hunt, R.; Krabbenhoft, D.; Clemo, T.

    2009-08-01

    Flow path delineation is a valuable tool for interpreting the subsurface hydrogeochemical environment. Different types of data, such as groundwater flow and transport, inform different aspects of hydrogeologic parameter values (hydraulic conductivity in this case) which, in turn, determine flow paths. This work combines flow and transport information to estimate a unified set of hydrogeologic parameters using the Bayesian geostatistical inverse approach. Parameter flexibility is allowed by using a highly parameterized approach with the level of complexity informed by the data. Despite the effort to adhere to the ideal of minimal a priori structure imposed on the problem, extreme contrasts in parameters can result in the need to censor correlation across hydrostratigraphic bounding surfaces. These partitions segregate parameters into facies associations. With an iterative approach in which partitions are based on inspection of initial estimates, flow path interpretation is progressively refined through the inclusion of more types of data. Head observations, stable oxygen isotopes (18O/16O ratios), and tritium are all used to progressively refine flow path delineation on an isthmus between two lakes in the Trout Lake watershed, northern Wisconsin, United States. Despite allowing significant parameter freedom by estimating many distributed parameter values, a smooth field is obtained.

  5. Obtaining parsimonious hydraulic conductivity fields using head and transport observations: A Bayesian geostatistical parameter estimation approach

    USGS Publications Warehouse

    Fienen, M.; Hunt, R.; Krabbenhoft, D.; Clemo, T.

    2009-01-01

    Flow path delineation is a valuable tool for interpreting the subsurface hydrogeochemical environment. Different types of data, such as groundwater flow and transport, inform different aspects of hydrogeologic parameter values (hydraulic conductivity in this case) which, in turn, determine flow paths. This work combines flow and transport information to estimate a unified set of hydrogeologic parameters using the Bayesian geostatistical inverse approach. Parameter flexibility is allowed by using a highly parameterized approach with the level of complexity informed by the data. Despite the effort to adhere to the ideal of minimal a priori structure imposed on the problem, extreme contrasts in parameters can result in the need to censor correlation across hydrostratigraphic bounding surfaces. These partitions segregate parameters into facies associations. With an iterative approach in which partitions are based on inspection of initial estimates, flow path interpretation is progressively refined through the inclusion of more types of data. Head observations, stable oxygen isotopes (18O/16O ratios), and tritium are all used to progressively refine flow path delineation on an isthmus between two lakes in the Trout Lake watershed, northern Wisconsin, United States. Despite allowing significant parameter freedom by estimating many distributed parameter values, a smooth field is obtained.

  6. Fourier rebinning and consistency equations for time-of-flight PET planograms

    PubMed Central

    Li, Yusheng; Defrise, Michel; Matej, Samuel; Metzler, Scott D

    2016-01-01

Due to the unique geometry, dual-panel PET scanners have many advantages in dedicated breast imaging and on-board imaging applications, since the compact scanners can be combined with other imaging and treatment modalities. The major challenges of dual-panel PET imaging are the limited-angle problem and data truncation, which can cause artifacts due to incomplete data sampling. The time-of-flight (TOF) information can be a promising solution to reduce these artifacts. The TOF planogram is the native data format for dual-panel TOF PET scanners, and the non-TOF planogram is the 3D extension of the linogram. The TOF planograms are five-dimensional while the objects are three-dimensional, and there are two degrees of redundancy. In this paper, we derive consistency equations and Fourier-based rebinning algorithms to provide a complete understanding of the rich structure of the fully 3D TOF planograms. We first derive two consistency equations and John's equation for 3D TOF planograms. By taking the Fourier transforms, we obtain two Fourier consistency equations and the Fourier-John equation, which are the duals of the consistency equations and John's equation, respectively. We then solve the Fourier consistency equations and Fourier-John equation using the method of characteristics. The two degrees of entangled redundancy of the 3D TOF data can be explicitly elicited and exploited by the solutions along the characteristic curves. As special cases of the general solutions, we obtain Fourier rebinning and consistency equations (FORCEs), and thus obtain a complete scheme to convert among different types of PET planograms: 3D TOF, 3D non-TOF, 2D TOF and 2D non-TOF planograms. The FORCEs can be used as Fourier-based rebinning algorithms for TOF-PET data reduction, inverse rebinnings for designing fast projectors, or consistency conditions for estimating missing data. As a byproduct, we show the two consistency equations are necessary and sufficient for 3D TOF planograms

  7. Fourier rebinning and consistency equations for time-of-flight PET planograms.

    PubMed

    Li, Yusheng; Defrise, Michel; Matej, Samuel; Metzler, Scott D

    2016-01-01

Due to the unique geometry, dual-panel PET scanners have many advantages in dedicated breast imaging and on-board imaging applications, since the compact scanners can be combined with other imaging and treatment modalities. The major challenges of dual-panel PET imaging are the limited-angle problem and data truncation, which can cause artifacts due to incomplete data sampling. The time-of-flight (TOF) information can be a promising solution to reduce these artifacts. The TOF planogram is the native data format for dual-panel TOF PET scanners, and the non-TOF planogram is the 3D extension of the linogram. The TOF planograms are five-dimensional while the objects are three-dimensional, and there are two degrees of redundancy. In this paper, we derive consistency equations and Fourier-based rebinning algorithms to provide a complete understanding of the rich structure of the fully 3D TOF planograms. We first derive two consistency equations and John's equation for 3D TOF planograms. By taking the Fourier transforms, we obtain two Fourier consistency equations and the Fourier-John equation, which are the duals of the consistency equations and John's equation, respectively. We then solve the Fourier consistency equations and Fourier-John equation using the method of characteristics. The two degrees of entangled redundancy of the 3D TOF data can be explicitly elicited and exploited by the solutions along the characteristic curves. As special cases of the general solutions, we obtain Fourier rebinning and consistency equations (FORCEs), and thus obtain a complete scheme to convert among different types of PET planograms: 3D TOF, 3D non-TOF, 2D TOF and 2D non-TOF planograms. The FORCEs can be used as Fourier-based rebinning algorithms for TOF-PET data reduction, inverse rebinnings for designing fast projectors, or consistency conditions for estimating missing data. As a byproduct, we show the two consistency equations are necessary and sufficient for 3D TOF planograms

  8. Consistency of Estimated Global Water Cycle Variations Over the Satellite Era

    NASA Technical Reports Server (NTRS)

    Robertson, F. R.; Bosilovich, M. G.; Roberts, J. B.; Reichle, R. H.; Adler, R.; Ricciardulli, L.; Berg, W.; Huffman, G. J.

    2013-01-01

Motivated by the question of whether recent indications of decadal climate variability and a possible "climate shift" may have affected the global water balance, we examine evaporation minus precipitation (E-P) variability integrated over the global oceans and global land from three points of view: remotely sensed retrievals/objective analyses over the oceans, reanalysis vertically-integrated moisture convergence (MFC) over land, and land surface models forced with observations-based precipitation, radiation and near-surface meteorology. Because monthly variations in area-averaged atmospheric moisture storage are small and the global integral of moisture convergence must approach zero, area-integrated E-P over ocean should essentially equal precipitation minus evapotranspiration (P-ET) over land (after adjusting for ocean and land areas). Our analysis reveals considerable uncertainty in the decadal variations of ocean evaporation when integrated to global scales. This is due to differences among datasets in 10 m wind speed and near-surface atmospheric specific humidity (2 m qa) used in bulk aerodynamic retrievals. Precipitation variations, all relying substantially on passive microwave retrievals over ocean, still have uncertainties in decadal variability, but not to the degree present with ocean evaporation estimates. Reanalysis MFC and P-ET over land from several observationally forced diagnostic and land surface models agree best on interannual variations. However, upward MFC (i.e. P-ET) reanalysis trends are likely related in part to observing system changes affecting atmospheric assimilation models. While some evidence for a low-frequency E-P maximum near 2000 is found, consistent with a recent apparent pause in sea-surface temperature (SST) rise, uncertainties in the datasets used here remain significant. Prospects for further reducing uncertainties are discussed. The results are interpreted in the context of recent climate variability (Pacific Decadal
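
    Stated as a worked balance (a paraphrase of the reasoning above, not a quoted formula): with negligible monthly change in area-averaged atmospheric moisture storage and a global moisture convergence that integrates to zero,

      ∫_ocean (E − P) dA ≈ ∫_land (P − ET) dA,

    so the ocean-integrated E-P series and the land-integrated P-ET series should mirror each other once each side is adjusted for the respective ocean and land areas.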

  9. Thermodynamically self-consistent theory for the Blume-Capel model.

    PubMed

    Grollau, S; Kierlik, E; Rosinberg, M L; Tarjus, G

    2001-04-01

    We use a self-consistent Ornstein-Zernike approximation to study the Blume-Capel ferromagnet on three-dimensional lattices. The correlation functions and the thermodynamics are obtained from the solution of two coupled partial differential equations. The theory provides a comprehensive and accurate description of the phase diagram in all regions, including the wing boundaries in a nonzero magnetic field. In particular, the coordinates of the tricritical point are in very good agreement with the best estimates from simulation or series expansion. Numerical and analytical analysis strongly suggest that the theory predicts a universal Ising-like critical behavior along the lambda line and the wing critical lines, and a tricritical behavior governed by mean-field exponents.

  10. A convenient method of obtaining percentile norms and accompanying interval estimates for self-report mood scales (DASS, DASS-21, HADS, PANAS, and sAD).

    PubMed

    Crawford, John R; Garthwaite, Paul H; Lawrie, Caroline J; Henry, Julie D; MacDonald, Marie A; Sutherland, Jane; Sinha, Priyanka

    2009-06-01

    A series of recent papers have reported normative data from the general adult population for commonly used self-report mood scales. To bring together and supplement these data in order to provide a convenient means of obtaining percentile norms for the mood scales. A computer program was developed that provides point and interval estimates of the percentile rank corresponding to raw scores on the various self-report scales. The program can be used to obtain point and interval estimates of the percentile rank of an individual's raw scores on the DASS, DASS-21, HADS, PANAS, and sAD mood scales, based on normative sample sizes ranging from 758 to 3822. The interval estimates can be obtained using either classical or Bayesian methods as preferred. The computer program (which can be downloaded at www.abdn.ac.uk/~psy086/dept/MoodScore.htm) provides a convenient and reliable means of supplementing existing cut-off scores for self-report mood scales.
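
    A rough Python analogue of the point-and-interval idea (illustrative only, not the authors' software): the percentile rank of a raw score is computed against a normative sample, and a Bayesian credible interval is obtained from a Beta posterior on the underlying proportion under a Jeffreys prior; the simulated normative sample stands in for real norms.

      import numpy as np
      from scipy.stats import beta

      def percentile_rank(score, norms, alpha=0.05):
          """Point and interval estimate of the percentile rank of `score`."""
          norms = np.asarray(norms)
          n = norms.size
          k = np.sum(norms < score) + 0.5 * np.sum(norms == score)  # mid-rank convention
          point = 100.0 * k / n
          # Jeffreys Beta(k + 0.5, n - k + 0.5) posterior for the proportion below score.
          lo, hi = beta.ppf([alpha / 2, 1 - alpha / 2], k + 0.5, n - k + 0.5)
          return point, 100.0 * lo, 100.0 * hi

      rng = np.random.default_rng(0)
      norms = rng.normal(10, 4, size=758)   # stand-in for a normative sample
      print(percentile_rank(14.0, norms))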

  11. Epipolar Consistency in Transmission Imaging.

    PubMed

    Aichert, André; Berger, Martin; Wang, Jian; Maass, Nicole; Doerfler, Arnd; Hornegger, Joachim; Maier, Andreas K

    2015-11-01

    This paper presents the derivation of the Epipolar Consistency Conditions (ECC) between two X-ray images from the Beer-Lambert law of X-ray attenuation and the Epipolar Geometry of two pinhole cameras, using Grangeat's theorem. We motivate the use of Oriented Projective Geometry to express redundant line integrals in projection images and define a consistency metric, which can be used, for instance, to estimate patient motion directly from a set of X-ray images. We describe in detail the mathematical tools to implement an algorithm to compute the Epipolar Consistency Metric and investigate its properties with detailed random studies on both artificial and real FD-CT data. A set of six reference projections of the CT scan of a fish were used to evaluate accuracy and precision of compensating for random disturbances of the ground truth projection matrix using an optimization of the consistency metric. In addition, we use three X-ray images of a pumpkin to prove applicability to real data. We conclude, that the metric might have potential in applications related to the estimation of projection geometry. By expression of redundancy between two arbitrary projection views, we in fact support any device or acquisition trajectory which uses a cone-beam geometry. We discuss certain geometric situations, where the ECC provide the ability to correct 3D motion, without the need for 3D reconstruction.

  12. (In)Consistent estimates of changes in relative precipitation in an European domain over the last 350 years

    NASA Astrophysics Data System (ADS)

    Bothe, Oliver; Wagner, Sebastian; Zorita, Eduardo

    2015-04-01

How did regional precipitation change in past centuries? We have potentially three sources of information to answer this question: There are, especially for Europe, a number of long records of local station precipitation; documentary records and natural archives of past environmental variability serve as proxy records for empirical reconstructions; in addition, simulations with coupled climate models or Earth System Models provide estimates on the spatial structure of precipitation variability. However, instrumental records rarely extend back to the 18th century, reconstructions include large uncertainties, and simulation skill is often still unsatisfactory for precipitation. Thus, we can only seek to answer to what extent the three sources provide a consistent picture of past regional precipitation changes. This presentation describes the (lack of) consistency in describing changes of the distributional properties of seasonal precipitation between the different data sources. We concentrate on England and Wales since there are two recent reconstructions and a long observation-based record available for this domain. The season of interest is an extended spring (March, April, May, June, July, MAMJJ) over the past 350 years. The main simulated data stem from a regional simulation for the European domain with CCLM driven at its lateral boundaries with conditions provided by a MPI-ESM COSMOS simulation for the last millennium using a high-amplitude solar forcing. A number of simulations for the past 1000 years from the Paleoclimate Modelling Intercomparison Project Phase III provide additional information. We fit a Weibull distribution to the available data sets following the approach for calculating standardized precipitation indices. We do so over 51-year moving windows to assess the consistency of changes in the distributional properties. Changes in the percentiles for severe (and extreme) dry or wet conditions and in the Weibull standard deviations of precipitation
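
    The distribution-fitting step can be sketched as follows: fit a two-parameter Weibull distribution to seasonal totals within each 51-year moving window and track the percentiles of interest. A minimal Python sketch with synthetic data standing in for the MAMJJ series (window stride and parameters are illustrative):

      import numpy as np
      from scipy.stats import weibull_min

      rng = np.random.default_rng(1)
      years = np.arange(1660, 2011)
      precip = rng.gamma(shape=4.0, scale=50.0, size=years.size)  # toy MAMJJ totals, mm

      window = 51
      for start in range(0, years.size - window + 1, 50):   # every 50 years for brevity
          chunk = precip[start:start + window]
          c, loc, scale = weibull_min.fit(chunk, floc=0)    # 2-parameter Weibull fit
          p05 = weibull_min.ppf(0.05, c, loc=loc, scale=scale)  # severe-dry percentile
          print(f"{years[start]}-{years[start + window - 1]}: shape={c:.2f}, "
                f"5th percentile={p05:.0f} mm")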

  13. Estimation and model selection of semiparametric multivariate survival functions under general censorship.

    PubMed

    Chen, Xiaohong; Fan, Yanqin; Pouzo, Demian; Ying, Zhiliang

    2010-07-01

We study estimation and model selection of semiparametric models of multivariate survival functions for censored data, which are characterized by possibly misspecified parametric copulas and nonparametric marginal survivals. We obtain the consistency and root-n asymptotic normality of a two-step copula estimator to the pseudo-true copula parameter value according to KLIC, and provide a simple consistent estimator of its asymptotic variance, allowing for a first-step nonparametric estimation of the marginal survivals. We establish the asymptotic distribution of the penalized pseudo-likelihood ratio statistic for comparing multiple semiparametric multivariate survival functions subject to copula misspecification and general censorship. An empirical application is provided.

  14. Estimation and model selection of semiparametric multivariate survival functions under general censorship

    PubMed Central

    Chen, Xiaohong; Fan, Yanqin; Pouzo, Demian; Ying, Zhiliang

    2013-01-01

    We study estimation and model selection of semiparametric models of multivariate survival functions for censored data, which are characterized by possibly misspecified parametric copulas and nonparametric marginal survivals. We obtain the consistency and root-n asymptotic normality of a two-step copula estimator to the pseudo-true copula parameter value according to KLIC, and provide a simple consistent estimator of its asymptotic variance, allowing for a first-step nonparametric estimation of the marginal survivals. We establish the asymptotic distribution of the penalized pseudo-likelihood ratio statistic for comparing multiple semiparametric multivariate survival functions subject to copula misspecification and general censorship. An empirical application is provided. PMID:24790286

  15. Objectivity and validity of EMG method in estimating anaerobic threshold.

    PubMed

    Kang, S-K; Kim, J; Kwon, M; Eom, H

    2014-08-01

The purposes of this study were to verify and compare the performance of anaerobic threshold (AT) point estimates among different filtering intervals (9, 15, 20, 25, 30 s) and to investigate the interrelationships of AT point estimates obtained by ventilatory threshold (VT) and muscle fatigue thresholds using electromyographic (EMG) activity during incremental exercise on a cycle ergometer. 69 untrained male university students who nevertheless exercised regularly volunteered to participate in this study. The incremental exercise protocol was applied with a consistent stepwise increase in power output of 20 watts per minute until exhaustion. The AT point was also estimated in the same manner using the V-slope program with gas exchange parameters. In general, the estimated values of AT point-time computed by the EMG method were more consistent across the 5 filtering intervals and demonstrated higher correlations among themselves when compared with those values obtained by the VT method. The results found in the present study suggest that EMG signals could be used as an alternative or a new option in estimating the AT point. Also, the proposed computing procedure implemented in Matlab for the analysis of EMG signals appeared to be valid and reliable, as it produced nearly identical values and high correlations with VT estimates. © Georg Thieme Verlag KG Stuttgart · New York.

  16. Illicit and pharmaceutical drug consumption estimated via wastewater analysis. Part A: chemical analysis and drug use estimates.

    PubMed

    Baker, David R; Barron, Leon; Kasprzyk-Hordern, Barbara

    2014-07-15

This paper presents, for the first time, community-wide estimation of drug and pharmaceutical consumption in England using wastewater analysis and a large number of compounds. Among the groups of compounds studied were: stimulants, hallucinogens and their metabolites, opioids, morphine derivatives, benzodiazepines, antidepressants and others. The results showed the usefulness of wastewater analysis for providing estimates of local community drug consumption. Where target compounds could be compared to NHS prescription statistics, good agreement was apparent between the two sets of data. These compounds include oxycodone, dihydrocodeine, methadone, tramadol, temazepam and diazepam. In contrast, discrepancies were observed for propoxyphene, codeine, dosulepin and venlafaxine (over-estimation in each case except codeine). Potential reasons for the discrepancies include: sales of drugs sold without prescription and hence not included within NHS data, abuse of a drug trafficked through illegal sources, different consumption patterns in different areas, direct disposal leading to over-estimation when using the parent compound as the drug target residue, and excretion factors not being representative of the local community. Notably, using a metabolite (rather than the parent drug) as a biomarker leads to higher certainty in the obtained estimates. With regard to illicit drugs, consistent and logical results were reported. Monitoring of these compounds over a one-week period highlighted the expected recreational use of many of these drugs (e.g. cocaine and MDMA) and the more consistent use of others (e.g. methadone). Copyright © 2014 Elsevier B.V. All rights reserved.
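
    The back-calculation that underlies such wastewater estimates is simple arithmetic: biomarker load = concentration × flow, scaled by an excretion fraction, a molecular-weight correction, and the population served. A worked sketch with illustrative numbers (not values from this study):

      # Standard wastewater back-calculation (illustrative numbers, not the paper's):
      # biomarker load in influent -> consumed mass per 1000 inhabitants per day.
      concentration_ng_per_L = 800.0        # measured biomarker concentration
      flow_L_per_day = 50_000_000.0         # daily influent flow of the plant
      excretion_fraction = 0.30             # fraction of dose excreted as the biomarker
      mw_parent_over_metabolite = 1.05      # molecular-weight correction factor
      population = 250_000

      load_mg_per_day = concentration_ng_per_L * flow_L_per_day * 1e-6
      consumption_mg_per_day = (load_mg_per_day / excretion_fraction
                                * mw_parent_over_metabolite)
      per_1000 = 1000.0 * consumption_mg_per_day / population
      print(f"estimated consumption: {per_1000:.1f} mg/day/1000 inhabitants")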

  17. Uncertainty Estimates of Psychoacoustic Thresholds Obtained from Group Tests

    NASA Technical Reports Server (NTRS)

    Rathsam, Jonathan; Christian, Andrew

    2016-01-01

Adaptive psychoacoustic test methods, in which the next signal level depends on the response to the previous signal, are the most efficient for determining psychoacoustic thresholds of individual subjects. In many tests conducted in the NASA psychoacoustic labs, the goal is to determine thresholds representative of the general population. To do this economically, non-adaptive testing methods are used in which three or four subjects are tested at the same time with predetermined signal levels. This approach requires us to identify techniques for assessing the uncertainty in resulting group-average psychoacoustic thresholds. In this presentation we examine four techniques: the Delta Method (a frequentist approach), the Generalized Linear Model (GLM), the Nonparametric Bootstrap (also frequentist), and Markov Chain Monte Carlo Posterior Estimation (a Bayesian approach). Each technique is exercised on a manufactured, theoretical dataset and then on datasets from two psychoacoustics facilities at NASA. The Delta Method is the simplest to implement and accurate for the cases studied. The GLM is found to be the least robust, and the Bootstrap takes the longest to calculate. The Bayesian Posterior Estimate is the most versatile technique examined because it allows the inclusion of prior information.

  18. Data consistency criterion for selecting parameters for k-space-based reconstruction in parallel imaging.

    PubMed

    Nana, Roger; Hu, Xiaoping

    2010-01-01

    k-space-based reconstruction in parallel imaging depends on the reconstruction kernel setting, including its support. An optimal choice of the kernel depends on the calibration data, coil geometry and signal-to-noise ratio, as well as the criterion used. In this work, data consistency, imposed by the shift invariance requirement of the kernel, is introduced as a goodness measure of k-space-based reconstruction in parallel imaging and demonstrated. Data consistency error (DCE) is calculated as the sum of squared difference between the acquired signals and their estimates obtained based on the interpolation of the estimated missing data. A resemblance between DCE and the mean square error in the reconstructed image was found, demonstrating DCE's potential as a metric for comparing or choosing reconstructions. When used for selecting the kernel support for generalized autocalibrating partially parallel acquisition (GRAPPA) reconstruction and the set of frames for calibration as well as the kernel support in temporal GRAPPA reconstruction, DCE led to improved images over existing methods. Data consistency error is efficient to evaluate, robust for selecting reconstruction parameters and suitable for characterizing and optimizing k-space-based reconstruction in parallel imaging.
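
    A much-simplified sketch of the data consistency error computation (single coil, fixed 1D kernel, toy data; real GRAPPA uses calibrated multi-coil kernels): re-estimate each acquired k-space line by applying the same interpolation kernel to the estimated missing neighbors, then sum squared differences against the acquired signals.

      import numpy as np

      def dce(kspace_est, acquired_mask, kernel, ky_offsets):
          """Data consistency error: re-estimate the *acquired* k-space lines by
          interpolating from the estimated missing neighbors with the same kernel,
          then sum squared differences against the actually acquired signals."""
          err = 0.0
          ny = kspace_est.shape[0]
          for ky in np.where(acquired_mask)[0]:
              neighbors = [(ky + d) % ny for d in ky_offsets]
              if not acquired_mask[neighbors].any():   # estimate from missing lines only
                  est = sum(w * kspace_est[n] for w, n in zip(kernel, neighbors))
                  err += np.sum(np.abs(kspace_est[ky] - est) ** 2)
          return err

      # Toy data: 64x64 single-coil k-space, every other line acquired.
      rng = np.random.default_rng(3)
      k = rng.normal(size=(64, 64)) + 1j * rng.normal(size=(64, 64))
      mask = np.zeros(64, dtype=bool)
      mask[::2] = True
      print(dce(k, mask, kernel=[0.5, 0.5], ky_offsets=[-1, +1]))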

  19. Using Internet search engines to estimate word frequency.

    PubMed

    Blair, Irene V; Urland, Geoffrey R; Ma, Jennifer E

    2002-05-01

    The present research investigated Internet search engines as a rapid, cost-effective alternative for estimating word frequencies. Frequency estimates for 382 words were obtained and compared across four methods: (1) Internet search engines, (2) the Kucera and Francis (1967) analysis of a traditional linguistic corpus, (3) the CELEX English linguistic database (Baayen, Piepenbrock, & Gulikers, 1995), and (4) participant ratings of familiarity. The results showed that Internet search engines produced frequency estimates that were highly consistent with those reported by Kucera and Francis and those calculated from CELEX, highly consistent across search engines, and very reliable over a 6-month period of time. Additional results suggested that Internet search engines are an excellent option when traditional word frequency analyses do not contain the necessary data (e.g., estimates for forenames and slang). In contrast, participants' familiarity judgments did not correspond well with the more objective estimates of word frequency. Researchers are advised to use search engines with large databases (e.g., AltaVista) to ensure the greatest representativeness of the frequency estimates.

  20. Consistent Partial Least Squares Path Modeling via Regularization

    PubMed Central

    Jung, Sunho; Park, JaeHong

    2018-01-01

    Partial least squares (PLS) path modeling is a component-based structural equation modeling that has been adopted in social and psychological research due to its data-analytic capability and flexibility. A recent methodological advance is consistent PLS (PLSc), designed to produce consistent estimates of path coefficients in structural models involving common factors. In practice, however, PLSc may frequently encounter multicollinearity in part because it takes a strategy of estimating path coefficients based on consistent correlations among independent latent variables. PLSc has yet no remedy for this multicollinearity problem, which can cause loss of statistical power and accuracy in parameter estimation. Thus, a ridge type of regularization is incorporated into PLSc, creating a new technique called regularized PLSc. A comprehensive simulation study is conducted to evaluate the performance of regularized PLSc as compared to its non-regularized counterpart in terms of power and accuracy. The results show that our regularized PLSc is recommended for use when serious multicollinearity is present. PMID:29515491

  1. Consistent Partial Least Squares Path Modeling via Regularization.

    PubMed

    Jung, Sunho; Park, JaeHong

    2018-01-01

    Partial least squares (PLS) path modeling is a component-based structural equation modeling that has been adopted in social and psychological research due to its data-analytic capability and flexibility. A recent methodological advance is consistent PLS (PLSc), designed to produce consistent estimates of path coefficients in structural models involving common factors. In practice, however, PLSc may frequently encounter multicollinearity in part because it takes a strategy of estimating path coefficients based on consistent correlations among independent latent variables. PLSc has yet no remedy for this multicollinearity problem, which can cause loss of statistical power and accuracy in parameter estimation. Thus, a ridge type of regularization is incorporated into PLSc, creating a new technique called regularized PLSc. A comprehensive simulation study is conducted to evaluate the performance of regularized PLSc as compared to its non-regularized counterpart in terms of power and accuracy. The results show that our regularized PLSc is recommended for use when serious multicollinearity is present.
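
    The core remedy is a ridge (Tikhonov) step when solving for path coefficients from the consistent latent-variable correlations. A generic sketch of that step (not the authors' full algorithm; the example correlation values are made up):

      import numpy as np

      def ridge_path_coefficients(R_xx, r_xy, lam=0.1):
          """Ridge estimate b = (R_xx + lam*I)^{-1} r_xy, where R_xx holds the
          (consistent) correlations among predictor latent variables and r_xy
          their correlations with the outcome latent variable."""
          p = R_xx.shape[0]
          return np.linalg.solve(R_xx + lam * np.eye(p), r_xy)

      # Nearly collinear latent predictors: plain estimates blow up, ridge stabilizes.
      R_xx = np.array([[1.00, 0.98],
                       [0.98, 1.00]])
      r_xy = np.array([0.60, 0.55])
      print("lam=0   :", np.linalg.solve(R_xx, r_xy))
      print("lam=0.1 :", ridge_path_coefficients(R_xx, r_xy, lam=0.1))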

  2. Greenhouse gases inventory and carbon balance of two dairy systems obtained from two methane-estimation methods.

    PubMed

    Cunha, C S; Lopes, N L; Veloso, C M; Jacovine, L A G; Tomich, T R; Pereira, L G R; Marcondes, M I

    2016-11-15

The adoption of carbon inventories for dairy farms in tropical countries based on models developed from animals and diets of temperate climates is questionable. Thus, the objectives of this study were to estimate enteric methane (CH4) emissions through the SF6 tracer gas technique and through equations proposed by the Intergovernmental Panel on Climate Change (IPCC) Tier 2, and to calculate the inventory of greenhouse gas (GHG) emissions from two dairy systems. In addition, the carbon balance of these properties was estimated using enteric CH4 emissions obtained using both methodologies. In trial 1, the CH4 emissions were estimated from seven Holstein dairy cattle categories based on the SF6 tracer gas technique and on IPCC equations. The categories used in the study were prepubertal heifers (n=6); pubertal heifers (n=4); pregnant heifers (n=5); high-producing (n=6); medium-producing (n=5); low-producing (n=4) and dry cows (n=5). Enteric methane emission was higher for the category comprising prepubertal heifers when estimated by the IPCC Tier 2 equations. However, higher CH4 emissions were estimated by the SF6 technique in the categories including medium- and high-producing cows and dry cows. Pubertal heifers, pregnant heifers, and low-producing cows had equal CH4 emissions as estimated by both methods. In trial 2, two dairy farms were monitored for one year to identify all activities that contributed in any way to GHG emissions. The total emission from Farm 1 was 3.21t CO2e/animal/yr, of which 1.63t corresponded to enteric CH4. Farm 2 emitted 3.18t CO2e/animal/yr, with 1.70t of enteric CH4. IPCC estimations can underestimate CH4 emissions from some categories while overestimating others. However, considering the whole property, these discrepancies are offset and we would submit that the equations suggested by the IPCC properly estimate the total CH4 emission and carbon balance of the properties. Thus, the IPCC equations should be utilized with
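
    For reference, the IPCC (2006) Tier 2 enteric fermentation equation referred to above converts gross energy intake (GE, MJ/head/day) and a methane conversion factor (Ym, percent of GE) into an annual emission factor: EF = GE × (Ym/100) × 365 / 55.65, in kg CH4/head/yr. A worked example with illustrative inputs:

      def ipcc_tier2_enteric_ch4(ge_mj_per_day, ym_percent):
          """IPCC (2006) Tier 2 enteric CH4 emission factor, kg CH4/head/yr:
          EF = GE * (Ym/100) * 365 / 55.65, with 55.65 MJ per kg CH4."""
          return ge_mj_per_day * (ym_percent / 100.0) * 365.0 / 55.65

      # Illustrative high-producing dairy cow: GE ~ 300 MJ/day, Ym ~ 6.5%.
      ef = ipcc_tier2_enteric_ch4(300.0, 6.5)
      print(f"enteric CH4: {ef:.0f} kg/head/yr")   # ~128 kg/head/yr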

  3. Risk estimation using probability machines

    PubMed Central

    2014-01-01

    Background Logistic regression has been the de facto, and often the only, model used in the description and analysis of relationships between a binary outcome and observed features. It is widely used to obtain the conditional probabilities of the outcome given predictors, as well as predictor effect size estimates using conditional odds ratios. Results We show how statistical learning machines for binary outcomes, provably consistent for the nonparametric regression problem, can be used to provide both consistent conditional probability estimation and conditional effect size estimates. Effect size estimates from learning machines leverage our understanding of counterfactual arguments central to the interpretation of such estimates. We show that, if the data generating model is logistic, we can recover accurate probability predictions and effect size estimates with nearly the same efficiency as a correct logistic model, both for main effects and interactions. We also propose a method using learning machines to scan for possible interaction effects quickly and efficiently. Simulations using random forest probability machines are presented. Conclusions The models we propose make no assumptions about the data structure, and capture the patterns in the data by just specifying the predictors involved and not any particular model structure. So they do not run the same risks of model mis-specification and the resultant estimation biases as a logistic model. This methodology, which we call a “risk machine”, will share properties from the statistical machine that it is derived from. PMID:24581306

  4. Risk estimation using probability machines.

    PubMed

    Dasgupta, Abhijit; Szymczak, Silke; Moore, Jason H; Bailey-Wilson, Joan E; Malley, James D

    2014-03-01

    Logistic regression has been the de facto, and often the only, model used in the description and analysis of relationships between a binary outcome and observed features. It is widely used to obtain the conditional probabilities of the outcome given predictors, as well as predictor effect size estimates using conditional odds ratios. We show how statistical learning machines for binary outcomes, provably consistent for the nonparametric regression problem, can be used to provide both consistent conditional probability estimation and conditional effect size estimates. Effect size estimates from learning machines leverage our understanding of counterfactual arguments central to the interpretation of such estimates. We show that, if the data generating model is logistic, we can recover accurate probability predictions and effect size estimates with nearly the same efficiency as a correct logistic model, both for main effects and interactions. We also propose a method using learning machines to scan for possible interaction effects quickly and efficiently. Simulations using random forest probability machines are presented. The models we propose make no assumptions about the data structure, and capture the patterns in the data by just specifying the predictors involved and not any particular model structure. So they do not run the same risks of model mis-specification and the resultant estimation biases as a logistic model. This methodology, which we call a "risk machine", will share properties from the statistical machine that it is derived from.
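
    A minimal sketch of the probability-machine idea using a random forest regression on the 0/1 outcome (scikit-learn; the simulated logistic data and the counterfactual risk-difference effect size follow the spirit of the abstract, but the hyperparameters are illustrative):

      import numpy as np
      from sklearn.ensemble import RandomForestRegressor

      rng = np.random.default_rng(7)
      n = 5000
      x1 = rng.integers(0, 2, n)            # binary exposure
      x2 = rng.normal(size=n)               # continuous covariate
      p = 1 / (1 + np.exp(-(-1.0 + 1.2 * x1 + 0.8 * x2)))
      y = rng.binomial(1, p)                # binary outcome from a logistic model

      X = np.column_stack([x1, x2])
      rf = RandomForestRegressor(n_estimators=300, min_samples_leaf=25, random_state=0)
      rf.fit(X, y)                          # regression on 0/1 outcome -> P(y=1|x)

      # Counterfactual risk difference: set the exposure to 1 vs 0 for everyone.
      X1, X0 = X.copy(), X.copy()
      X1[:, 0], X0[:, 0] = 1, 0
      risk_diff = rf.predict(X1).mean() - rf.predict(X0).mean()
      print(f"estimated average risk difference: {risk_diff:.3f}")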

  5. Thermodynamically consistent model calibration in chemical kinetics

    PubMed Central

    2011-01-01

    Background The dynamics of biochemical reaction systems are constrained by the fundamental laws of thermodynamics, which impose well-defined relationships among the reaction rate constants characterizing these systems. Constructing biochemical reaction systems from experimental observations often leads to parameter values that do not satisfy the necessary thermodynamic constraints. This can result in models that are not physically realizable and may lead to inaccurate, or even erroneous, descriptions of cellular function. Results We introduce a thermodynamically consistent model calibration (TCMC) method that can be effectively used to provide thermodynamically feasible values for the parameters of an open biochemical reaction system. The proposed method formulates the model calibration problem as a constrained optimization problem that takes thermodynamic constraints (and, if desired, additional non-thermodynamic constraints) into account. By calculating thermodynamically feasible values for the kinetic parameters of a well-known model of the EGF/ERK signaling cascade, we demonstrate the qualitative and quantitative significance of imposing thermodynamic constraints on these parameters and the effectiveness of our method for accomplishing this important task. MATLAB software, using the Systems Biology Toolbox 2.1, can be accessed from http://www.cis.jhu.edu/~goutsias/CSS lab/software.html. An SBML file containing the thermodynamically feasible EGF/ERK signaling cascade model can be found in the BioModels database. Conclusions TCMC is a simple and flexible method for obtaining physically plausible values for the kinetic parameters of open biochemical reaction systems. It can be effectively used to recalculate a thermodynamically consistent set of parameter values for existing thermodynamically infeasible biochemical reaction models of cellular function as well as to estimate thermodynamically feasible values for the parameters of new models. Furthermore, TCMC can

  6. Bayes estimation: A novel approach to derivation of internally consistent thermodynamic data for minerals, their uncertainties, and correlations. Part II: Application

    NASA Astrophysics Data System (ADS)

    Chatterjee, Niranjan D.; Miller, Klaus; Olbricht, Walter

    1994-05-01

Internally consistent thermodynamic data, including their uncertainties and correlations, are reported for 22 phases of the quaternary system CaO-Al2O3-SiO2-H2O. These data have been derived by simultaneous evaluation of the appropriate phase properties (PP) and reaction properties (RP) by the novel technique of Bayes estimation (BE). The thermodynamic model used and the theory of BE were expounded in Part I of this paper. Part II is the follow-up study illustrating an application of BE. The input for BE comprised, among others, the a priori values for the standard enthalpy of formation of the i-th phase, Δf H°(i), and its standard entropy, S°(i), in addition to the reaction reversal constraints for 33 equilibria involving the relevant phases. A total of 269 RP restrictions have been processed, of which 107 turned out to be non-redundant. The refined values for Δf H°(i) and S°(i) obtained by BE, including their 2σ-uncertainties, appear in Table 4; the Appendix reproduces the corresponding correlation matrix. These data permit generation of computed phase diagrams with 2σ-uncertainty envelopes based on conventional error propagation; Fig. 3 depicts such a phase diagram for the system CaO-Al2O3-SiO2. It shows that the refined dataset is capable of yielding phase diagrams with uncertainty envelopes narrow enough to be geologically useful. The results in Table 4 demonstrate that the uncertainties of the prior values for Δf H°(i), given in Table 1, have decreased by up to an order of magnitude, while those for S°(i) improved by a factor of up to two. For comparison, Table 4 also lists the refined Δf H°(i) and S°(i) data obtained by mathematical programming (MAP), minimizing a quadratic objective function used earlier by Berman (1988). Examples of calculated phase diagrams are given to demonstrate the advantages of BE for deriving internally consistent thermodynamic data. Although P-T curves generated from both MAP and BE databases will pass

  7. Estimation and classification by sigmoids based on mutual information

    NASA Technical Reports Server (NTRS)

    Baram, Yoram

    1994-01-01

An estimate of the probability density function of a random vector is obtained by maximizing the mutual information between the input and the output of a feedforward network of sigmoidal units with respect to the input weights. Classification problems can be solved by selecting the class associated with the maximal estimated density. Newton's method, applied to an estimated density, yields a recursive maximum likelihood estimator, consisting of a single internal layer of sigmoids, for a random variable or a random sequence. Applications to diamond classification and to the prediction of a sun-spot process are demonstrated.

  8. Cetacean population density estimation from single fixed sensors using passive acoustics.

    PubMed

    Küsel, Elizabeth T; Mellinger, David K; Thomas, Len; Marques, Tiago A; Moretti, David; Ward, Jessica

    2011-06-01

Passive acoustic methods are increasingly being used to estimate animal population density. Most density estimation methods are based on estimates of the probability of detecting calls as functions of distance. Typically these are obtained using receivers capable of localizing calls or from studies of tagged animals. However, both approaches are expensive to implement. The approach described here uses a Monte Carlo model to estimate the probability of detecting calls from single sensors. The passive sonar equation is used to predict signal-to-noise ratios (SNRs) of received clicks, which are then combined with a detector characterization that predicts probability of detection as a function of SNR. Input distributions for source level, beam pattern, and whale depth are obtained from the literature. Acoustic propagation modeling is used to estimate transmission loss. Other inputs for density estimation are call rate, obtained from the literature, and false positive rate, obtained from manual analysis of a data sample. The method is applied to estimate density of Blainville's beaked whales over a 6-day period around a single hydrophone located in the Tongue of the Ocean, Bahamas. Results are consistent with those from previous analyses, which use additional tag data. © 2011 Acoustical Society of America
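
    The core Monte Carlo loop can be sketched as follows, with spherical spreading standing in for the paper's propagation modeling and a logistic curve standing in for the measured detector characterization; all distributions and constants are illustrative assumptions.

      import numpy as np

      rng = np.random.default_rng(11)
      n = 100_000

      # Randomized inputs (illustrative distributions, not the paper's).
      sl = rng.normal(200.0, 5.0, n)       # source level, dB re 1 uPa @ 1 m
      r = rng.uniform(100.0, 6000.0, n)    # range to the animal, m
      nl = 70.0                            # noise level, dB

      tl = 20.0 * np.log10(r)              # spherical-spreading transmission loss
      snr = sl - tl - nl                   # simplified passive sonar equation

      # Detector characterization: detection probability as a function of SNR
      # (a logistic curve standing in for the measured characterization).
      p_det = 1.0 / (1.0 + np.exp(-(snr - 55.0) / 3.0))

      print("mean per-click detection probability:", p_det.mean())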

  9. Comparison of Species Richness Estimates Obtained Using Nearly Complete Fragments and Simulated Pyrosequencing-Generated Fragments in 16S rRNA Gene-Based Environmental Surveys

    PubMed Central

    Youssef, Noha; Sheik, Cody S.; Krumholz, Lee R.; Najar, Fares Z.; Roe, Bruce A.; Elshahed, Mostafa S.

    2009-01-01

    Pyrosequencing-based 16S rRNA gene surveys are increasingly utilized to study highly diverse bacterial communities, with special emphasis on utilizing the large number of sequences obtained (tens to hundreds of thousands) for species richness estimation. However, it is not yet clear how the number of operational taxonomic units (OTUs) and, hence, species richness estimates determined using shorter fragments at different taxonomic cutoffs correlates with the number of OTUs assigned using longer, nearly complete 16S rRNA gene fragments. We constructed a 16S rRNA clone library from an undisturbed tallgrass prairie soil (1,132 clones) and used it to compare species richness estimates obtained using eight pyrosequencing candidate fragments (99 to 361 bp in length) and the nearly full-length fragment. Fragments encompassing the V1 and V2 (V1+V2) region and the V6 region (generated using primer pairs 8F-338R and 967F-1046R) overestimated species richness; fragments encompassing the V3, V7, and V7+V8 hypervariable regions (generated using primer pairs 338F-530R, 1046F-1220R, and 1046F-1392R) underestimated species richness; and fragments encompassing the V4, V5+V6, and V6+V7 regions (generated using primer pairs 530F-805R, 805F-1046R, and 967F-1220R) provided estimates comparable to those obtained with the nearly full-length fragment. These patterns were observed regardless of the alignment method utilized or the parameter used to gauge comparative levels of species richness (number of OTUs observed, slope of scatter plots of pairwise distance values for short and nearly complete fragments, and nonparametric and parametric species richness estimates). Similar results were obtained when analyzing three other datasets derived from soil, adult Zebrafish gut, and basaltic formations in the East Pacific Rise. Regression analysis indicated that these observed discrepancies in species richness estimates within various regions could readily be explained by the proportions of

  10. Maximum-likelihood estimation of parameterized wavefronts from multifocal data

    PubMed Central

    Sakamoto, Julia A.; Barrett, Harrison H.

    2012-01-01

    A method for determining the pupil phase distribution of an optical system is demonstrated. Coefficients in a wavefront expansion were estimated using likelihood methods, where the data consisted of multiple irradiance patterns near focus. Proof-of-principle results were obtained in both simulation and experiment. Large-aberration wavefronts were handled in the numerical study. Experimentally, we discuss the handling of nuisance parameters. Fisher information matrices, Cramér-Rao bounds, and likelihood surfaces are examined. ML estimates were obtained by simulated annealing to deal with numerous local extrema in the likelihood function. Rapid processing techniques were employed to reduce the computational time. PMID:22772282

  11. New geometric design consistency model based on operating speed profiles for road safety evaluation.

    PubMed

    Camacho-Torregrosa, Francisco J; Pérez-Zuriaga, Ana M; Campoy-Ungría, J Manuel; García-García, Alfredo

    2013-12-01

    To assist in the on-going effort to reduce road fatalities as much as possible, this paper presents a new methodology to evaluate road safety in both the design and redesign stages of two-lane rural highways. This methodology is based on the analysis of road geometric design consistency, a value which will be a surrogate measure of the safety level of the two-lane rural road segment. The consistency model presented in this paper is based on the consideration of continuous operating speed profiles. The models used for their construction were obtained by using an innovative GPS-data collection method that is based on continuous operating speed profiles recorded from individual drivers. This new methodology allowed the researchers to observe the actual behavior of drivers and to develop more accurate operating speed models than was previously possible with spot-speed data collection, thereby enabling a more accurate approximation to the real phenomenon and thus a better consistency measurement. Operating speed profiles were built for 33 Spanish two-lane rural road segments, and several consistency measurements based on the global and local operating speed were checked. The final consistency model takes into account not only the global dispersion of the operating speed, but also some indexes that consider both local speed decelerations and speeds over posted speeds as well. For the development of the consistency model, the crash frequency for each study site was considered, which allowed estimating the number of crashes on a road segment by means of the calculation of its geometric design consistency. Consequently, the presented consistency evaluation method is a promising innovative tool that can be used as a surrogate measure to estimate the safety of a road segment. Copyright © 2012 Elsevier Ltd. All rights reserved.

  12. Intrajudge Consistency Using the Angoff Standard-Setting Method.

    ERIC Educational Resources Information Center

    Plake, Barbara S.; Impara, James C.

    This study investigated the intrajudge consistency of Angoff-based item performance estimates. The examination used was a certification examination in an emergency medicine specialty. Ten expert panelists rated the same 24 items twice during an operational standard setting study. Results indicate that the panelists were highly consistent, in terms…

  13. Bias Correction of MODIS AOD using DragonNET to obtain improved estimation of PM2.5

    NASA Astrophysics Data System (ADS)

    Gross, B.; Malakar, N. K.; Atia, A.; Moshary, F.; Ahmed, S. A.; Oo, M. M.

    2014-12-01

MODIS AOD retrievals using the Dark Target algorithm are strongly affected by the underlying surface reflection properties. In particular, the operational algorithms make use of surface parameterizations trained on global datasets and therefore do not account properly for urban surface differences. This parameterization continues to show an underestimation of the surface reflection, which results in a general over-biasing in AOD retrievals. Recent results using the Dragon-Network datasets as well as high resolution retrievals in the NYC area illustrate that this is even more significant in the newest C006 3 km retrievals. In the past, we used AERONET observations at the City College site to obtain bias-corrected AOD, but the assumption of homogeneity when using only one site for the region is clearly an issue. On the other hand, DragonNET observations provide ample opportunities to better tune the surface corrections while also providing better statistical validation. In this study we present a neural network method to obtain bias correction of the MODIS AOD using multiple factors including surface reflectivity at 2130 nm, sun-view geometrical factors and land-class information. These corrected AODs are then used together with additional WRF meteorological factors to improve estimates of PM2.5. Efforts to explore the portability to other urban areas will be discussed. In addition, annual surface ratio maps will be developed, illustrating that among the land classes, the urban pixels constitute the largest deviations from the operational model.

  14. Obtaining continuous BrAC/BAC estimates in the field: A hybrid system integrating transdermal alcohol biosensor, Intellidrink smartphone app, and BrAC Estimator software tools.

    PubMed

    Luczak, Susan E; Hawkins, Ashley L; Dai, Zheng; Wichmann, Raphael; Wang, Chunming; Rosen, I Gary

    2018-08-01

Biosensors have been developed to measure transdermal alcohol concentration (TAC), but converting TAC into interpretable indices of blood/breath alcohol concentration (BAC/BrAC) is difficult because of variations that occur in TAC across individuals, drinking episodes, and devices. We have developed mathematical models and the BrAC Estimator software for calibrating and inverting TAC into quantifiable BrAC estimates (eBrAC). The calibration protocol to determine the individualized parameters for a specific individual wearing a specific device requires a drinking session in which BrAC and TAC measurements are obtained simultaneously. This calibration protocol was originally conducted in the laboratory with breath analyzers used to produce the BrAC data. Here we develop and test an alternative calibration protocol using drinking diary data collected in the field with the smartphone app Intellidrink to produce the BrAC calibration data. We compared BrAC Estimator software results for 11 drinking episodes collected by an expert user when using Intellidrink versus breath analyzer measurements as BrAC calibration data. Inversion phase results indicated the Intellidrink calibration protocol produced similar eBrAC curves and captured peak eBrAC to within 0.0003%, time of peak eBrAC to within 18 min, and area under the eBrAC curve to within 0.025% alcohol-hours as the breath analyzer calibration protocol. This study provides evidence that drinking diary data can be used in place of breath analyzer data in the BrAC Estimator software calibration procedure, which can reduce participant and researcher burden and expand the potential software user pool beyond researchers studying participants who can drink in the laboratory. Copyright © 2017. Published by Elsevier Ltd.

  15. Validity test and its consistency in the construction of patient loyalty model

    NASA Astrophysics Data System (ADS)

    Yanuar, Ferra

    2016-04-01

    The main objective of this study is to demonstrate the estimation of validity values and their consistency based on a structural equation model. The estimation method was implemented on empirical data for the construction of a patient loyalty model. In the hypothesized model, service quality, patient satisfaction and patient loyalty were determined simultaneously, with each factor measured by several indicator variables. The respondents in this study were patients who had received healthcare at Puskesmas (community health centers) in Padang, West Sumatera. All 394 respondents with complete information were included in the analysis. This study found that each construct (service quality, patient satisfaction and patient loyalty) was valid, meaning that all hypothesized indicator variables were significant in measuring their corresponding latent variable. Service quality was measured most strongly by tangibles, patient satisfaction by satisfaction with service, and patient loyalty by good service quality. In the structural equations, this study found that patient loyalty was affected positively and directly by patient satisfaction, while service quality affected patient loyalty indirectly, with patient satisfaction as a mediator between the two latent variables. Both structural equations were also valid. A simulation study using a bootstrap approach showed that the validity values obtained here were also consistent.

  16. Nonparametric estimation of plant density by the distance method

    USGS Publications Warehouse

    Patil, S.A.; Burnham, K.P.; Kovner, J.L.

    1979-01-01

    A relation between the plant density and the probability density function of the nearest neighbor distance (squared) from a random point is established under fairly broad conditions. Based upon this relationship, a nonparametric estimator for the plant density is developed and presented in terms of order statistics. Consistency and asymptotic normality of the estimator are discussed. An interval estimator for the density is obtained. The modifications of this estimator and its variance are given when the distribution is truncated. Simulation results are presented for regular, random and aggregated populations to illustrate the nonparametric estimator and its variance. A numerical example from field data is given. Merits and deficiencies of the estimator are discussed with regard to its robustness and variance.
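
    For orientation, the sketch below implements the classical Poisson (random-pattern) distance estimate, λ̂ = (n − 1)/(π Σ rᵢ²); the paper's nonparametric order-statistics estimator relaxes exactly this distributional assumption, so this is a simple baseline, not the authors' method.

```python
# A minimal sketch of the classical distance-method density estimate under
# a Poisson (random) pattern: lambda_hat = (n - 1) / (pi * sum(r_i^2)),
# where r_i is the distance from a random point to the nearest plant.
import numpy as np

def density_from_distances(r):
    """Estimate plants per unit area from point-to-nearest-plant distances."""
    r = np.asarray(r, dtype=float)
    n = r.size
    return (n - 1) / (np.pi * np.sum(r ** 2))

# Example: distances (m) from 10 random points to their nearest plant.
r = np.array([0.8, 1.2, 0.5, 2.0, 1.1, 0.9, 1.5, 0.7, 1.3, 1.0])
print(density_from_distances(r))  # plants per square metre
```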

  17. Estimation of two ordered mean residual lifetime functions.

    PubMed

    Ebrahimi, N

    1993-06-01

    In many statistical studies involving failure data, biometric mortality data, and actuarial data, the mean residual lifetime (MRL) function is of prime importance. In this paper we introduce the problem of nonparametric estimation of an MRL function on an interval when this function is bounded from below by another such function (known or unknown) on that interval, and derive two corresponding functional estimators. The first is to be used when there is a known bound, and the second when the bound is another MRL function to be estimated independently. Both estimators are obtained by truncating the empirical estimator discussed by Yang (1978, Annals of Statistics 6, 112-117). In the first case, it is truncated at a known bound; in the second, at a point somewhere between the two empirical estimates. Consistency of both estimators is proved, and a pointwise large-sample distribution theory of the first estimator is derived.
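
    As a rough illustration of the first estimator (truncation at a known bound), here is a minimal sketch for complete, uncensored data; the bound g is a hypothetical example and the details of Yang's empirical estimator are not reproduced.

```python
# A minimal sketch: the empirical mean residual lifetime of a complete
# sample, truncated from below at a known bound g(t). The bound g here is
# a hypothetical example, not one from the paper.
import numpy as np

def empirical_mrl(x, t):
    """Empirical MRL: average remaining life among observations exceeding t."""
    x = np.asarray(x, dtype=float)
    tail = x[x > t]
    return tail.mean() - t if tail.size else 0.0

def bounded_mrl(x, t, g):
    """Truncate the empirical estimate from below by the known bound g(t)."""
    return max(empirical_mrl(x, t), g(t))

lifetimes = np.array([2.1, 3.5, 4.0, 5.2, 6.8, 7.7, 9.3, 12.0])
g = lambda t: 2.0 - 0.1 * t  # assumed known lower-bound MRL function
print(bounded_mrl(lifetimes, 3.0, g))
```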

  18. Probabilities and statistics for backscatter estimates obtained by a scatterometer

    NASA Technical Reports Server (NTRS)

    Pierson, Willard J., Jr.

    1989-01-01

    Methods for the recovery of winds near the surface of the ocean from measurements of the normalized radar backscattering cross section must recognize and make use of the statistics (i.e., the sampling variability) of the backscatter measurements. Radar backscatter values from a scatterometer are random variables with expected values given by a model. A model relates backscatter to properties of the waves on the ocean, which are in turn generated by the winds in the atmospheric marine boundary layer. The effective wind speed and direction at a known height for a neutrally stratified atmosphere are the values to be recovered from the model. The probability density function for the backscatter values is a normal probability distribution with the notable feature that the variance is a known function of the expected value. The sources of signal variability, the effects of this variability on the wind speed estimation, and criteria for the acceptance or rejection of models are discussed. A modified maximum likelihood method for estimating wind vectors is described. Ways to make corrections for the kinds of errors found for the Seasat SASS model function are described, and applications to a new scatterometer are given.

  19. Statistical inference based on the nonparametric maximum likelihood estimator under double-truncation.

    PubMed

    Emura, Takeshi; Konno, Yoshihiko; Michimae, Hirofumi

    2015-07-01

    Doubly truncated data consist of samples whose observed values fall between the right- and left-truncation limits. With such samples, the distribution function of interest is estimated using the nonparametric maximum likelihood estimator (NPMLE), which is obtained through a self-consistency algorithm. Owing to the complicated asymptotic distribution of the NPMLE, the bootstrap method has been suggested for statistical inference. This paper proposes a closed-form estimator for the asymptotic covariance function of the NPMLE, which is a computationally attractive alternative to bootstrapping. Furthermore, we develop various statistical inference procedures, such as confidence intervals, goodness-of-fit tests, and confidence bands, to demonstrate the usefulness of the proposed covariance estimator. Simulations are performed to compare the proposed method with both the bootstrap and jackknife methods. The methods are illustrated using a childhood cancer dataset.
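
    The self-consistency step can be sketched as the classical Efron-Petrosian-type fixed-point iteration below. This is a minimal illustration of the algorithm the abstract refers to; the paper's contribution, the closed-form covariance estimator, is not reproduced.

```python
# A minimal sketch of a self-consistency (Efron-Petrosian-type) iteration
# for the NPMLE under double truncation: x_i is observed only when
# u_i <= x_i <= v_i. Data are synthetic.
import numpy as np

def npmle_double_truncation(x, u, v, tol=1e-10, max_iter=5000):
    x, u, v = map(np.asarray, (x, u, v))
    t = np.unique(x)                      # support points of the NPMLE
    n_j = np.array([(x == tj).sum() for tj in t], dtype=float)
    # J[i, j] = 1 if support point t_j lies in observation window [u_i, v_i].
    J = ((u[:, None] <= t[None, :]) & (t[None, :] <= v[:, None])).astype(float)
    f = np.full(t.size, 1.0 / t.size)     # initial probability masses
    for _ in range(max_iter):
        P = J @ f                         # P_i = sum_k J_ik f_k
        f_new = n_j / (J / P[:, None]).sum(axis=0)
        f_new /= f_new.sum()
        if np.max(np.abs(f_new - f)) < tol:
            break
        f = f_new
    return t, f

x = np.array([2.0, 3.0, 3.0, 4.0, 5.0, 6.0])
u = np.array([1.0, 2.5, 1.0, 2.0, 4.5, 3.0])
v = np.array([4.0, 5.0, 6.0, 7.0, 8.0, 9.0])
t, f = npmle_double_truncation(x, u, v)
print(dict(zip(t, np.round(f, 4))))
```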

  20. Evidence for consistency of the glycation gap in diabetes.

    PubMed

    Nayak, Ananth U; Holland, Martin R; Macdonald, David R; Nevill, Alan; Singh, Baldev M

    2011-08-01

    Discordance between HbA(1c) and fructosamine estimations in the assessment of glycemia is often encountered. A number of mechanisms might explain such discordance, but whether it is consistent is uncertain. This study aims to coanalyze paired glycosylated hemoglobin (HbA(1c))-fructosamine estimations by using fructosamine to determine a predicted HbA(1c), to calculate a glycation gap (G-gap) and to determine whether the G-gap is consistent over time. We included 2,263 individuals with diabetes who had at least two paired HbA(1c)-fructosamine estimations that were separated by 10 ± 8 months. Of these, 1,217 individuals had a third pair. The G-gap was calculated as G-gap = HbA(1c) minus the standardized fructosamine-derived HbA(1c) equivalent (FHbA(1c)). The hypothesis that the G-gap would remain consistent in individuals over time was tested. The G-gaps were similar in the first, second, and third paired samples (0.0 ± 1.2, 0.0 ± 1.3, and 0.0 ± 1.3, respectively). Despite significant changes in the HbA(1c) and fructosamine, the G-gap did not differ in absolute or relative terms and showed no significant within-subject variability. The direction of the G-gap remained consistent. The G-gap appears consistent over time; thus, by inference any key underlying mechanisms are likely to be consistent. G-gap calculation may be a method of exploring and evaluating any such underlying mechanisms.
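
    A minimal sketch of the G-gap computation follows. Here fructosamine is standardized to an HbA1c equivalent (FHbA1c) by a simple cohort regression, which is an assumption for illustration; the paper's exact standardization may differ, and all numbers are synthetic.

```python
# A minimal sketch: G-gap = HbA1c - FHbA1c, where FHbA1c is the
# fructosamine-derived HbA1c equivalent. Standardization via a cohort
# regression is an assumption; the data are synthetic.
import numpy as np

hba1c = np.array([6.5, 7.2, 8.0, 9.1, 10.4, 7.8, 8.8])       # %
fructosamine = np.array([250, 280, 310, 360, 420, 300, 345])  # umol/L

# Regress HbA1c on fructosamine across the cohort to get FHbA1c.
slope, intercept = np.polyfit(fructosamine, hba1c, 1)
fhba1c = intercept + slope * fructosamine

g_gap = hba1c - fhba1c  # positive: glycates more than fructosamine predicts
print(np.round(g_gap, 2))
```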

  1. Nonparametric functional data estimation applied to ozone data: prediction and extreme value analysis.

    PubMed

    Quintela-del-Río, Alejandro; Francisco-Fernández, Mario

    2011-02-01

    The study of extreme values and prediction of ozone data is an important topic of research when dealing with environmental problems. Classical extreme value theory is usually used in air-pollution studies. It consists of fitting a parametric generalised extreme value (GEV) distribution to a data set of extreme values, and using the estimated distribution to compute return levels and other quantities of interest. Here, we propose to estimate these values using nonparametric functional data methods. Functional data analysis is a relatively new statistical methodology that generally deals with data consisting of curves or multi-dimensional variables. In this paper, we use this technique, jointly with nonparametric curve estimation, to provide alternatives to the usual parametric statistical tools. The nonparametric estimators are applied to real samples of maximum ozone values obtained from several monitoring stations belonging to the Automatic Urban and Rural Network (AURN) in the UK. The results show that the nonparametric estimators work satisfactorily, outperforming classical parametric estimators. Functional data analysis is also used to predict stratospheric ozone concentrations. We show an application using the data set of mean monthly ozone concentrations in Arosa, Switzerland, and the results are compared with those obtained by classical time series (ARIMA) analysis. Copyright © 2010 Elsevier Ltd. All rights reserved.
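
    For reference, the classical parametric baseline that the nonparametric estimators are compared against can be sketched in a few lines with scipy: fit a GEV to a sample of maxima and read off return levels as upper quantiles. The data below are synthetic.

```python
# A minimal sketch of the classical approach: fit a GEV distribution to
# annual maxima and compute T-year return levels. Data are synthetic.
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(1)
annual_max_ozone = genextreme.rvs(c=-0.1, loc=80, scale=10,
                                  size=40, random_state=rng)

shape, loc, scale = genextreme.fit(annual_max_ozone)

# T-year return level: the (1 - 1/T) quantile of the fitted GEV.
for T in (10, 50, 100):
    level = genextreme.ppf(1 - 1 / T, shape, loc=loc, scale=scale)
    print(f"{T}-year return level: {level:.1f}")
```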

  2. Estimation and Analysis of Nonlinear Stochastic Systems. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Marcus, S. I.

    1975-01-01

    The algebraic and geometric structures of certain classes of nonlinear stochastic systems were exploited in order to obtain useful stability and estimation results. The class of bilinear stochastic systems (or linear systems with multiplicative noise) was discussed. The stochastic stability of bilinear systems driven by colored noise was considered. Approximate methods for obtaining sufficient conditions for the stochastic stability of bilinear systems evolving on general Lie groups were discussed. Two classes of estimation problems involving bilinear systems were considered. It was proved that, for systems described by certain types of Volterra series expansions or by certain bilinear equations evolving on nilpotent or solvable Lie groups, the optimal conditional mean estimator consists of a finite dimensional nonlinear set of equations. The theory of harmonic analysis was used to derive suboptimal estimators for bilinear systems driven by white noise which evolve on compact Lie groups or homogeneous spaces.
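
    For concreteness, a bilinear stochastic system (a linear system with multiplicative noise) takes the form below: the drift is linear in the state, and the state also multiplies the driving noise processes.

```latex
% Bilinear stochastic system: linear drift plus multiplicative noise.
\mathrm{d}x(t) = A\,x(t)\,\mathrm{d}t + \sum_{i=1}^{m} B_i\,x(t)\,\mathrm{d}w_i(t)
```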

  3. Comparing Parameter Estimation Techniques for an Electrical Power Transformer Oil Temperature Prediction Model

    NASA Technical Reports Server (NTRS)

    Morris, A. Terry

    1999-01-01

    This paper examines various sources of error in MIT's improved top-oil temperature rise over ambient temperature model and estimation process. The sources of error are the current parameter estimation technique, quantization noise, and post-processing of the transformer data. Results from this paper show that an output-error parameter estimation technique should be selected to replace the current least-squares estimation technique. The output-error technique obtained accurate predictions of transformer behavior, revealed the best error covariance, obtained consistent parameter estimates, and provided valid and sensible parameters. This paper also shows that the output-error technique should be used to minimize errors attributed to post-processing (decimation) of the transformer data. Models used in this paper are validated using data from a large transformer in service.
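
    The distinction between the two estimation techniques can be illustrated with a minimal sketch on a generic first-order top-oil model; the model structure, data and noise below are synthetic stand-ins, not MIT's model.

```python
# A minimal sketch contrasting output-error estimation with ordinary least
# squares on a first-order model T[k+1] = a*T[k] + b*u[k], where u is a
# load/ambient forcing. Model and data are synthetic.
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(2)
N = 200
u = rng.uniform(0.0, 1.0, N)
a_true, b_true = 0.95, 2.0
T = np.zeros(N)
for k in range(N - 1):
    T[k + 1] = a_true * T[k] + b_true * u[k]
T_meas = T + 0.5 * rng.standard_normal(N)   # measurement noise only

def simulate(params):
    a, b = params
    Ts = np.zeros(N)
    Ts[0] = T_meas[0]
    for k in range(N - 1):
        Ts[k + 1] = a * Ts[k] + b * u[k]    # propagate the *model*, not data
    return Ts

# Output error: minimize residuals between measured and simulated outputs.
res = least_squares(lambda p: T_meas - simulate(p), x0=[0.5, 1.0])
print("output-error estimates:", res.x)

# Equation-error least squares regresses T[k+1] on (T_meas[k], u[k]); it is
# biased because the noisy measurement appears as a regressor.
A = np.column_stack([T_meas[:-1], u[:-1]])
print("equation-error estimates:", np.linalg.lstsq(A, T_meas[1:], rcond=None)[0])
```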

  4. Image velocimetry for clouds with relaxation labeling based on deformation consistency

    NASA Astrophysics Data System (ADS)

    Horinouchi, Takeshi; Murakami, Shin-ya; Kouyama, Toru; Ogohara, Kazunori; Yamazaki, Atsushi; Yamada, Manabu; Watanabe, Shigeto

    2017-08-01

    Correlation-based cloud tracking has been used extensively to measure atmospheric winds, but difficulties remain. In this study, aimed at developing a cloud tracking system for Akatsuki, an artificial satellite orbiting Venus, a formulation is developed to improve the relaxation labeling technique for selecting appropriate peaks of cross-correlation surfaces, which tend to have multiple peaks. The formulation makes explicit use of a consistency inherent in cross-correlation methods in which template sub-images are slid without deformation: if the resultant motion vectors indicate too large a deformation, they contradict the assumption of the method. The deformation consistency is exploited further to develop two post-processes: one clusters the motion vectors into groups within each of which the consistency is perfect, and the other extends the groups using the original candidate lists. These processes are useful for eliminating erroneous vectors, distinguishing motion vectors at different altitudes, and detecting phase velocities of waves in fluids such as atmospheric gravity waves. As a basis for the relaxation labeling and the post-processes, as well as for uncertainty estimation, the necessity of finding isolated (well-separated) peaks of cross-correlation surfaces is argued, and an algorithm to realize it is presented. All the methods are implemented, and their effectiveness is demonstrated with initial images obtained by the ultraviolet imager onboard Akatsuki. Since the deformation consistency reflects a logical consistency inherent in template matching methods, it should have broad application beyond cloud tracking.
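
    A minimal sketch of the candidate-peak stage follows: template matching yields a cross-correlation surface whose isolated local maxima form the candidate list that relaxation labeling would then prune. Window sizes and data are illustrative, not those of the Akatsuki pipeline.

```python
# A minimal sketch: zero-mean template matching producing a cross-correlation
# surface with possibly multiple candidate peaks; isolated local maxima form
# the candidate list that relaxation labeling refines.
import numpy as np
from scipy.signal import correlate2d
from scipy.ndimage import maximum_filter

def candidate_displacements(template, search, n_peaks=3, sep=5):
    t = template - template.mean()
    s = search - search.mean()
    cc = correlate2d(s, t, mode='valid')
    # Isolated peaks: points that are the maximum within a sep x sep window.
    local_max = (cc == maximum_filter(cc, size=sep))
    ys, xs = np.nonzero(local_max)
    order = np.argsort(cc[ys, xs])[::-1][:n_peaks]
    return [(ys[i], xs[i], cc[ys[i], xs[i]]) for i in order]

rng = np.random.default_rng(3)
search = rng.standard_normal((64, 64))
template = search[20:36, 30:46].copy()      # true displacement (20, 30)
print(candidate_displacements(template, search))
```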

  5. Estimating Evaporative Fraction From Readily Obtainable Variables in Mangrove Forests of the Everglades, U.S.A.

    NASA Technical Reports Server (NTRS)

    Yagci, Ali Levent; Santanello, Joseph A.; Jones, John; Barr, Jordan

    2017-01-01

    A remote-sensing-based model to estimate evaporative fraction (EF), the ratio of latent heat (LE; the energy equivalent of evapotranspiration, ET) to total available energy, from easily obtainable remotely sensed and meteorological parameters is presented. This research specifically addresses the shortcomings of existing ET retrieval methods such as calibration requirements of extensive accurate in situ micro-meteorological and flux tower observations, or of a large set of coarse-resolution or model-derived input datasets. The trapezoid model is capable of generating spatially varying EF maps from standard products such as land surface temperature (Ts), normalized difference vegetation index (NDVI), and daily maximum air temperature (Ta). The 2009 model results were validated at an eddy-covariance tower (Fluxnet ID: US-Skr) in the Everglades using Ts and NDVI products from Landsat as well as the Moderate Resolution Imaging Spectroradiometer (MODIS) sensors. Results indicate that the model accuracy is within the range of instrument uncertainty, and is dependent on the spatial resolution and selection of end-members (i.e. wet/dry edge). The most accurate results were achieved with the Ts from Landsat relative to the Ts from the MODIS flown on the Terra and Aqua platforms, due to the fine spatial resolution of Landsat (30 m). The bias, mean absolute percentage error and root mean square percentage error were as low as 2.9% (3.0%), 9.8% (13.3%), and 12.1% (16.1%) for Landsat-based (MODIS-based) EF estimates, respectively. Overall, this methodology shows promise for bridging the gap between temporally limited ET estimates at Landsat scales and more complex and difficult-to-constrain global ET remote-sensing models.

  6. Estimating evaporative fraction from readily obtainable variables in mangrove forests of the Everglades, U.S.A.

    USGS Publications Warehouse

    Yagci, Ali Levent; Santanello, Joseph A.; Jones, John W.; Barr, Jordan G.

    2017-01-01

    A remote-sensing-based model to estimate evaporative fraction (EF) – the ratio of latent heat (LE; energy equivalent of evapotranspiration –ET–) to total available energy – from easily obtainable remotely-sensed and meteorological parameters is presented. This research specifically addresses the shortcomings of existing ET retrieval methods such as calibration requirements of extensive accurate in situ micrometeorological and flux tower observations or of a large set of coarse-resolution or model-derived input datasets. The trapezoid model is capable of generating spatially varying EF maps from standard products such as land surface temperature (Ts), normalized difference vegetation index (NDVI), and daily maximum air temperature (Ta). The 2009 model results were validated at an eddy-covariance tower (Fluxnet ID: US-Skr) in the Everglades using Ts and NDVI products from Landsat as well as the Moderate Resolution Imaging Spectroradiometer (MODIS) sensors. Results indicate that the model accuracy is within the range of instrument uncertainty, and is dependent on the spatial resolution and selection of end-members (i.e. wet/dry edge). The most accurate results were achieved with the Ts from Landsat relative to the Ts from the MODIS flown on the Terra and Aqua platforms, due to the fine spatial resolution of Landsat (30 m). The bias, mean absolute percentage error and root mean square percentage error were as low as 2.9% (3.0%), 9.8% (13.3%), and 12.1% (16.1%) for Landsat-based (MODIS-based) EF estimates, respectively. Overall, this methodology shows promise for bridging the gap between temporally limited ET estimates at Landsat scales and more complex and difficult to constrain global ET remote-sensing models.
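
    A minimal sketch of a trapezoid-style EF computation follows; the dry- and wet-edge definitions below are simplified assumptions for illustration, not the end-members fitted in the two records above.

```python
# A minimal sketch of a trapezoid-style EF estimate: for each pixel, EF is
# interpolated between NDVI-dependent dry and wet edges in Ts-NDVI space.
# Edge definitions here are simplified assumptions, not the papers' fits.
import numpy as np

def evaporative_fraction(ts, ndvi, ta):
    # Dry edge: warmest expected surface at a given NDVI (falls as NDVI rises).
    t_dry = ta + 15.0 - 10.0 * ndvi
    # Wet edge: a freely evaporating surface near air temperature.
    t_wet = ta
    ef = (t_dry - ts) / np.clip(t_dry - t_wet, 1e-6, None)
    return np.clip(ef, 0.0, 1.0)

ts = np.array([305.0, 300.0, 297.0])    # land surface temperature (K)
ndvi = np.array([0.2, 0.5, 0.8])
ta = 296.0                              # daily maximum air temperature (K)
print(evaporative_fraction(ts, ndvi, ta))
```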

  7. A biphasic parameter estimation method for quantitative analysis of dynamic renal scintigraphic data

    NASA Astrophysics Data System (ADS)

    Koh, T. S.; Zhang, Jeff L.; Ong, C. K.; Shuter, B.

    2006-06-01

    Dynamic renal scintigraphy is an established method in nuclear medicine, commonly used for the assessment of renal function. In this paper, a biphasic model fitting method is proposed for simultaneous estimation of both vascular and parenchymal parameters from renal scintigraphic data. These parameters include the renal plasma flow, vascular and parenchymal mean transit times, and the glomerular extraction rate. Monte Carlo simulation was used to evaluate the stability and confidence of the parameter estimates obtained by the proposed biphasic method, before applying the method on actual patient study cases to compare with the conventional fitting approach and other established renal indices. The various parameter estimates obtained using the proposed method were found to be consistent with the respective pathologies of the study cases. The renal plasma flow and extraction rate estimated by the proposed method were in good agreement with those previously obtained using dynamic computed tomography and magnetic resonance imaging.

  8. Motion compensation for cone-beam CT using Fourier consistency conditions

    NASA Astrophysics Data System (ADS)

    Berger, M.; Xia, Y.; Aichinger, W.; Mentl, K.; Unberath, M.; Aichert, A.; Riess, C.; Hornegger, J.; Fahrig, R.; Maier, A.

    2017-09-01

    In cone-beam CT, involuntary patient motion and inaccurate or irreproducible scanner motion substantially degrade image quality. To avoid artifacts, this motion needs to be estimated and compensated during image reconstruction. In previous work we showed that Fourier consistency conditions (FCC) can be used in fan-beam CT to estimate motion in the sinogram domain. This work extends the FCC to 3D cone-beam CT. We derive an efficient cost function to compensate for 3D motion using 2D detector translations. The extended FCC method has been tested with five translational motion patterns, using a challenging numerical phantom. We evaluated the root-mean-square error and the structural similarity index between motion-corrected and motion-free reconstructions. Additionally, we computed the mean absolute difference (MAD) between the estimated and the ground-truth motion. The practical applicability of the method is demonstrated by application to respiratory motion estimation in rotational angiography and to motion correction for weight-bearing imaging of knees, the latter using a specifically modified FCC version that is robust to axial truncation. The results show a great reduction of motion artifacts. Accurate estimation results were achieved, with a maximum MAD value of 708 μm and 1184 μm for motion along the vertical and horizontal detector directions, respectively. The image quality of reconstructions obtained with the proposed method is close to that of motion-corrected reconstructions based on the ground-truth motion. Simulations using noise-free and noisy data demonstrate that FCC are robust to noise. Even high-frequency motion was accurately estimated, leading to a considerable reduction of streaking artifacts. The method is purely image-based and therefore independent of any auxiliary data.

  9. Estimation of dynamic stability parameters from drop model flight tests

    NASA Technical Reports Server (NTRS)

    Chambers, J. R.; Iliff, K. W.

    1981-01-01

    The overall remotely piloted drop model operation, descriptions, instrumentation, launch and recovery operations, piloting concept, and parameter identification methods are discussed. Static and dynamic stability derivatives were obtained for an angle-of-attack range from -20 deg to 53 deg. The variations of the estimates with angle of attack are consistent for most of the static derivatives, and the effects of configuration modifications to the model were apparent in the static derivative estimates.

  10. An assessment of consistence of exhaust gas emission test results obtained under controlled NEDC conditions

    NASA Astrophysics Data System (ADS)

    Balawender, K.; Jaworski, A.; Kuszewski, H.; Lejda, K.; Ustrzycki, A.

    2016-09-01

    Measurement of pollutants contained in automobile combustion engine exhaust gases is of primary importance in view of their harmful impact on the natural environment. This paper presents results of tests aimed at determining exhaust gas pollutant emissions from a passenger car engine under repeatable conditions on a chassis dynamometer. The test set-up was installed in a controlled climate chamber, allowing temperature conditions to be maintained within the range from -20°C to +30°C. The analysis covered emissions of such components as CO, CO2, NOx, CH4, THC, and NMHC. The purpose of the study was to assess the repeatability of results obtained in a number of tests performed according to the NEDC test plan. The study is an introductory stage of a wider research project concerning the effect of climate conditions and fuel type on the emission of pollutants contained in exhaust gases generated by automotive vehicles.

  11. Consistent Estimates of Very Low HIV Incidence Among People Who Inject Drugs: New York City, 2005–2014

    PubMed Central

    Arasteh, Kamyar; McKnight, Courtney; Feelemyer, Jonathan; Campbell, Aimée N. C.; Tross, Susan; Smith, Lou; Cooper, Hannah L. F.; Hagan, Holly; Perlman, David

    2016-01-01

    Objectives. To compare methods for estimating low HIV incidence among persons who inject drugs. Methods. We examined 4 methods in New York City, 2005 to 2014: (1) HIV seroconversions among repeat participants, (2) increase of HIV prevalence by additional years of injection among new injectors, (3) the New York State and Centers for Disease Control and Prevention stratified extrapolation algorithm, and (4) newly diagnosed HIV cases reported to the New York City Department of Health and Mental Hygiene. Results. The 4 estimates were consistent: (1) repeat participants: 0.37 per 100 person-years (PY; 95% confidence interval [CI] = 0.05/100 PY, 1.33/100 PY); (2) regression of prevalence by years injecting: 0.61 per 100 PY (95% CI = 0.36/100 PY, 0.87/100 PY); (3) stratified extrapolation algorithm: 0.32 per 100 PY (95% CI = 0.18/100 PY, 0.46/100 PY); and (4) newly diagnosed cases of HIV: 0.14 per 100 PY (95% CI = 0.11/100 PY, 0.16/100 PY). Conclusions. All methods appear to capture the same phenomenon of very low and decreasing HIV transmission among persons who inject drugs. Public Health Implications. If resources are available, the use of multiple methods would provide better information for public health purposes. PMID:26794160

  12. Estimation of spectral distribution of sky radiance using a commercial digital camera.

    PubMed

    Saito, Masanori; Iwabuchi, Hironobu; Murata, Isao

    2016-01-10

    Methods for estimating spectral distribution of sky radiance from images captured by a digital camera and for accurately estimating spectral responses of the camera are proposed. Spectral distribution of sky radiance is represented as a polynomial of the wavelength, with coefficients obtained from digital RGB counts by linear transformation. The spectral distribution of radiance as measured is consistent with that obtained by spectrometer and radiative transfer simulation for wavelengths of 430-680 nm, with standard deviation below 1%. Preliminary applications suggest this method is useful for detecting clouds and studying the relation between irradiance at the ground and cloud distribution.
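
    A minimal sketch of the linear mapping is given below: radiance is represented as a polynomial in wavelength whose coefficients are a linear transformation M of the RGB counts, with M fitted to spectrometer reference data. All data and the two-stage fitting route are synthetic assumptions, not the paper's calibration.

```python
# A minimal sketch: polynomial coefficients of the sky radiance spectrum
# obtained from RGB counts by a linear transformation M, with M fitted to
# spectrometer reference data. Everything here is synthetic.
import numpy as np

rng = np.random.default_rng(4)
deg = 3                                  # cubic polynomial in wavelength
wl = np.linspace(430, 680, 26) / 1000.0  # wavelength (um), scaled for conditioning

# Synthetic training set: RGB counts and matching reference spectra.
n = 40
rgb = rng.uniform(0.0, 1.0, (n, 3))
M_true = rng.standard_normal((deg + 1, 3))
coef = rgb @ M_true.T                             # (n, deg+1) coefficients
V = np.vander(wl, deg + 1, increasing=True)       # (n_wl, deg+1) basis
spectra = coef @ V.T                              # reference radiances

# Fit by least squares: first coefficients from spectra, then M from RGB.
coef_fit = np.linalg.lstsq(V, spectra.T, rcond=None)[0].T
M = np.linalg.lstsq(rgb, coef_fit, rcond=None)[0].T

# Estimate a spectrum from a new RGB triplet.
new_rgb = np.array([0.4, 0.5, 0.3])
radiance = V @ (M @ new_rgb)
print(radiance[:5])
```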

  13. The utility of online panel surveys versus computer-assisted interviews in obtaining substance-use prevalence estimates in the Netherlands.

    PubMed

    Spijkerman, Renske; Knibbe, Ronald; Knoops, Kim; Van De Mheen, Dike; Van Den Eijnden, Regina

    2009-10-01

    Rather than using the traditional, costly method of personal interviews in a general population sample, substance-use prevalence rates can be derived more conveniently from data collected among members of an online access panel. To examine the utility of this method, we compared the outcomes of an online survey with those obtained with the computer-assisted personal interviews (CAPI) method in the Netherlands. Data were gathered from a large sample of online panellists and from a two-stage stratified sample of the Dutch population using the CAPI method. The online sample comprised 57 125 Dutch online panellists (15-64 years) of Survey Sampling International LLC (SSI), and the CAPI cohort 7204 respondents (15-64 years). All participants answered identical questions about their use of alcohol, cannabis, ecstasy, cocaine and performance-enhancing drugs. The CAPI respondents were additionally asked about internet access and online panel membership. Both data sets were weighted statistically according to the distribution of demographic characteristics of the general Dutch population. Response rates were 35.5% (n = 20 282) for the online panel cohort and 62.7% (n = 4516) for the CAPI cohort. The data showed almost consistently lower substance-use prevalence rates for the CAPI respondents. Although the observed differences could be due to bias in both data sets, coverage and non-response bias were higher in the online panel survey. Despite its economic advantage, the online panel survey showed stronger non-response and coverage bias than the CAPI survey, leading to less reliable estimates of substance use in the general population. © 2009 The Authors. Journal compilation © 2009 Society for the Study of Addiction.

  14. Consistency of nuclear thermometric measurements at moderate excitation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rana, T. K.; Bhattacharya, C.; Kundu, S.

    2008-08-15

    A comparison of various thermometric techniques used for the estimation of nuclear temperature has been made from the decay of the hot composite ³²S* produced in the reaction ²⁰Ne (145 MeV) + ¹²C. It is shown that the temperatures estimated by different techniques, known to vary significantly in the Fermi energy domain, are consistent with each other within experimental limits for the system studied here.

  15. The application of parameter estimation to flight measurements to obtain lateral-directional stability derivatives of an augmented jet-flap STOL airplane

    NASA Technical Reports Server (NTRS)

    Stephenson, J. D.

    1983-01-01

    Flight experiments with an augmented jet flap STOL aircraft provided data from which the lateral directional stability and control derivatives were calculated by applying a linear regression parameter estimation procedure. The tests, which were conducted with the jet flaps set at a 65 deg deflection, covered a large range of angles of attack and engine power settings. The effect of changing the angle of the jet thrust vector was also investigated. Test results are compared with stability derivatives that had been predicted. The roll damping derived from the tests was significantly larger than had been predicted, whereas the other derivatives were generally in agreement with the predictions. Results obtained using a maximum likelihood estimation procedure are compared with those from the linear regression solutions.

  16. Estimating phonation threshold pressure.

    PubMed

    Fisher, K V; Swank, P R

    1997-10-01

    Phonation threshold pressure (PTP) is the minimum subglottal pressure required to initiate vocal fold oscillation. Although potentially useful clinically, PTP is difficult to estimate noninvasively because of limitations to vocal motor control near the threshold of soft phonation. Previous investigators observed, for example, that trained subjects were unable to produce flat, consistent oral pressure peaks during /pæ/ syllable strings when they attempted to phonate as softly as possible (Verdolini-Marston, Titze, & Druker, 1990). The present study aimed to determine if nasal airflow or vowel context affected phonation threshold pressure as estimated from oral pressure (Smitheran & Hixon, 1981) in 5 untrained female speakers with normal velopharyngeal and voice function. Nasal airflow during /p/ occlusion was observed for 3 of 5 participants when they attempted to phonate near threshold pressure. When the nose was occluded, nasal airflow was reduced or eliminated during /p/; however, individuals then evidenced compensatory changes in glottal adduction and/or respiratory effort that may be expected to alter PTP estimates. Results demonstrate the importance of monitoring nasal flow (or the flow zero point in undivided masks) when obtaining PTP measurements noninvasively. Results also highlight the need to pursue improved methods for noninvasive estimation of PTP.

  17. Average intragranular misorientation trends in polycrystalline materials predicted by a viscoplastic self-consistent approach

    DOE PAGES

    Lebensohn, Ricardo A.; Zecevic, Miroslav; Knezevic, Marko; ...

    2015-12-15

    This work presents estimations of average intragranular fluctuations of lattice rotation rates in polycrystalline materials, obtained by means of the viscoplastic self-consistent (VPSC) model. These fluctuations give a tensorial measure of the trend of misorientation developing inside each single-crystal grain representing a polycrystalline aggregate. We first report details of the algorithm implemented in the VPSC code to estimate these fluctuations, which are then validated by comparison with corresponding full-field calculations. Next, we present predictions of average intragranular fluctuations of lattice rotation rates for cubic aggregates, which are rationalized by comparison with experimental evidence on annealing textures of fcc and bcc polycrystals deformed in tension and compression, respectively, as well as with measured intragranular misorientation distributions in a Cu polycrystal deformed in tension. The orientation-dependent and micromechanically based estimations of intragranular misorientations that can be derived from the present implementation are necessary to formulate sound sub-models for the prediction of quantitatively accurate deformation textures, grain fragmentation, and recrystallization textures using the VPSC approach.

  18. Personalized recommendation based on unbiased consistence

    NASA Astrophysics Data System (ADS)

    Zhu, Xuzhen; Tian, Hui; Zhang, Ping; Hu, Zheng; Zhou, Tao

    2015-08-01

    Recently, in physical dynamics, mass-diffusion-based recommendation algorithms on bipartite networks have provided an efficient solution by automatically pushing possibly relevant items to users according to their past preferences. However, traditional mass-diffusion-based algorithms focus only on unidirectional mass diffusion from objects that have been collected to those that should be recommended, resulting in biased causal similarity estimation and degraded performance. In this letter, we argue that in many cases a user's interests are stable, and thus bidirectional mass diffusion abilities, whether originating from objects that have been collected or from those that should be recommended, should be consistently powerful, showing unbiased consistence. We further propose a consistence-based mass diffusion algorithm via bidirectional diffusion against biased causality, which outperforms state-of-the-art recommendation algorithms on disparate real data sets, including Netflix, MovieLens, Amazon and Rate Your Music.
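
    For orientation, a minimal sketch of standard unidirectional mass diffusion on a user-item bipartite network is given below; the bidirectional, consistence-based variant proposed in the letter is not reproduced.

```python
# A minimal sketch of standard (unidirectional) mass diffusion on a
# user-item bipartite network: resource spreads items -> users -> items.
import numpy as np

def mass_diffusion_scores(A, user):
    """A: (n_users, n_items) 0/1 matrix; returns item scores for `user`."""
    k_user = A.sum(axis=1)            # user degrees
    k_item = A.sum(axis=0)            # item degrees
    resource = A[user].astype(float)  # unit resource on collected items
    # Step 1: each item splits its resource equally among its users.
    to_users = (A / np.maximum(k_item, 1)) @ resource
    # Step 2: each user redistributes equally to all of their items.
    scores = (A / np.maximum(k_user, 1)[:, None]).T @ to_users
    scores[A[user] > 0] = -np.inf     # do not re-recommend collected items
    return scores

A = np.array([[1, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 1, 1]])
print(mass_diffusion_scores(A, user=0))
```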

  19. CONSISTENCY UNDER SAMPLING OF EXPONENTIAL RANDOM GRAPH MODELS.

    PubMed

    Shalizi, Cosma Rohilla; Rinaldo, Alessandro

    2013-04-01

    The growing availability of network data and of scientific interest in distributed systems has led to the rapid development of statistical models of network structure. Typically, however, these are models for the entire network, while the data consists only of a sampled sub-network. Parameters for the whole network, which is what is of interest, are estimated by applying the model to the sub-network. This assumes that the model is consistent under sampling , or, in terms of the theory of stochastic processes, that it defines a projective family. Focusing on the popular class of exponential random graph models (ERGMs), we show that this apparently trivial condition is in fact violated by many popular and scientifically appealing models, and that satisfying it drastically limits ERGM's expressive power. These results are actually special cases of more general results about exponential families of dependent random variables, which we also prove. Using such results, we offer easily checked conditions for the consistency of maximum likelihood estimation in ERGMs, and discuss some possible constructive responses.

  20. CONSISTENCY UNDER SAMPLING OF EXPONENTIAL RANDOM GRAPH MODELS

    PubMed Central

    Shalizi, Cosma Rohilla; Rinaldo, Alessandro

    2015-01-01

    The growing availability of network data and of scientific interest in distributed systems has led to the rapid development of statistical models of network structure. Typically, however, these are models for the entire network, while the data consists only of a sampled sub-network. Parameters for the whole network, which is what is of interest, are estimated by applying the model to the sub-network. This assumes that the model is consistent under sampling, or, in terms of the theory of stochastic processes, that it defines a projective family. Focusing on the popular class of exponential random graph models (ERGMs), we show that this apparently trivial condition is in fact violated by many popular and scientifically appealing models, and that satisfying it drastically limits ERGM’s expressive power. These results are actually special cases of more general results about exponential families of dependent random variables, which we also prove. Using such results, we offer easily checked conditions for the consistency of maximum likelihood estimation in ERGMs, and discuss some possible constructive responses. PMID:26166910

  1. Development of cost estimation tools for total occupational safety and health activities and occupational health services: cost estimation from a corporate perspective.

    PubMed

    Nagata, Tomohisa; Mori, Koji; Aratake, Yutaka; Ide, Hiroshi; Ishida, Hiromi; Nobori, Junichiro; Kojima, Reiko; Odagami, Kiminori; Kato, Anna; Tsutsumi, Akizumi; Matsuda, Shinya

    2014-01-01

    The aim of the present study was to develop standardized cost estimation tools that provide information to employers about occupational safety and health (OSH) activities for effective and efficient decision making in Japanese companies. We interviewed OSH staff members, including full-time professional occupational physicians, to list all OSH activities. Using activity-based costing, cost data were obtained from retrospective analyses of occupational safety and health costs over a 1-year period in three manufacturing workplaces, and of occupational health services costs in four manufacturing workplaces. We additionally verified the tools in four workplaces, including service businesses. We created the OSH and occupational health standardized cost estimation tools. OSH costs consisted of personnel costs, expenses, outsourcing costs and investments across 15 OSH activities. The tools provided accurate, relevant information on OSH activities and occupational health services. The standardized information obtained from our OSH and occupational health cost estimation tools can be used to manage OSH costs, make comparisons of OSH costs between companies and organizations, and help occupational health physicians and employers to determine the best course of action.

  2. Weighted Statistical Binning: Enabling Statistically Consistent Genome-Scale Phylogenetic Analyses

    PubMed Central

    Bayzid, Md Shamsuzzoha; Mirarab, Siavash; Boussau, Bastien; Warnow, Tandy

    2015-01-01

    Because biological processes can result in different loci having different evolutionary histories, species tree estimation requires multiple loci from across multiple genomes. While many processes can result in discord between gene trees and species trees, incomplete lineage sorting (ILS), modeled by the multi-species coalescent, is considered to be a dominant cause for gene tree heterogeneity. Coalescent-based methods have been developed to estimate species trees, many of which operate by combining estimated gene trees, and so are called "summary methods". Because summary methods are generally fast (and much faster than more complicated coalescent-based methods that co-estimate gene trees and species trees), they have become very popular techniques for estimating species trees from multiple loci. However, recent studies have established that summary methods can have reduced accuracy in the presence of gene tree estimation error, and also that many biological datasets have substantial gene tree estimation error, so that summary methods may not be highly accurate in biologically realistic conditions. Mirarab et al. (Science 2014) presented the "statistical binning" technique to improve gene tree estimation in multi-locus analyses, and showed that it improved the accuracy of MP-EST, one of the most popular coalescent-based summary methods. Statistical binning, which uses a simple heuristic to evaluate "combinability" and then uses the larger sets of genes to re-calculate gene trees, has good empirical performance, but using statistical binning within a phylogenomic pipeline does not have the desirable property of being statistically consistent. We show that weighting the re-calculated gene trees by the bin sizes makes statistical binning statistically consistent under the multispecies coalescent, and maintains the good empirical performance. Thus, "weighted statistical binning" enables highly accurate genome-scale species tree estimation, and is also statistically consistent.

  3. Pointwise nonparametric maximum likelihood estimator of stochastically ordered survivor functions

    PubMed Central

    Park, Yongseok; Taylor, Jeremy M. G.; Kalbfleisch, John D.

    2012-01-01

    In this paper, we consider estimation of survivor functions from groups of observations with right-censored data when the groups are subject to a stochastic ordering constraint. Many methods and algorithms have been proposed to estimate distribution functions under such restrictions, but none have completely satisfactory properties when the observations are censored. We propose a pointwise constrained nonparametric maximum likelihood estimator, which is defined at each time t by the estimates of the survivor functions subject to constraints applied at time t only. We also propose an efficient method to obtain the estimator. The estimator of each constrained survivor function is shown to be nonincreasing in t, and its consistency and asymptotic distribution are established. A simulation study suggests better small and large sample properties than for alternative estimators. An example using prostate cancer data illustrates the method. PMID:23843661

  4. [Using neural networks based template matching method to obtain redshifts of normal galaxies].

    PubMed

    Xu, Xin; Luo, A-li; Wu, Fu-chao; Zhao, Yong-heng

    2005-06-01

    Galaxies can be divided into two classes: normal galaxies (NG) and active galaxies (AG). In order to determine NG redshifts, an automatic and effective method is proposed in this paper, which consists of the following three main steps: (1) From a normal-galaxy template, two sets of samples are simulated, one with redshifts of 0.0-0.3 and the other with redshifts of 0.3-0.5; PCA is then used to extract the principal components, and the training samples are projected onto the principal-component subspace to obtain characteristic spectra. (2) The characteristic spectra are used to train a probabilistic neural network to obtain a Bayes classifier. (3) An unknown real NG spectrum is first input to this Bayes classifier to determine the possible redshift range; template matching is then invoked to locate the redshift value within the estimated range. Compared with the traditional template matching technique over an unconstrained range, our proposed method not only halves the computational load, but also increases the estimation accuracy. As a result, the proposed method is particularly useful for the automatic processing of spectra produced by a large-scale sky survey project.
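
    A hypothetical sketch of such a classify-then-match pipeline is given below, with Gaussian naive Bayes standing in for the probabilistic neural network and synthetic spectra throughout; it illustrates the idea of restricting template matching to a classifier-selected redshift range, not the authors' exact implementation.

```python
# A hypothetical sketch: PCA features feed a probabilistic classifier that
# picks a coarse redshift range; chi-square template matching then searches
# only within that range. GaussianNB stands in for the PNN; data are synthetic.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(5)
wl_rest = np.linspace(380.0, 700.0, 400)          # rest-frame grid (nm)
template = 1.0 + 0.3 * np.sin(wl_rest / 15.0)     # toy galaxy template
wl_obs = np.linspace(400.0, 900.0, 400)           # observed-frame grid

def redshifted(z):
    return np.interp(wl_obs, wl_rest * (1 + z), template, left=1.0, right=1.0)

# Simulate training spectra for the two redshift classes.
zs = np.concatenate([rng.uniform(0.0, 0.3, 200), rng.uniform(0.3, 0.5, 200)])
X = np.array([redshifted(z) + 0.05 * rng.standard_normal(400) for z in zs])
y = (zs >= 0.3).astype(int)

pca = PCA(n_components=10).fit(X)
clf = GaussianNB().fit(pca.transform(X), y)

def estimate_redshift(spectrum):
    lo, hi = [(0.0, 0.3), (0.3, 0.5)][clf.predict(pca.transform([spectrum]))[0]]
    grid = np.arange(lo, hi, 0.001)               # search only the chosen range
    chi2 = [np.sum((spectrum - redshifted(z)) ** 2) for z in grid]
    return grid[int(np.argmin(chi2))]

test = redshifted(0.42) + 0.05 * rng.standard_normal(400)
print(estimate_redshift(test))   # should be near 0.42
```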

  5. Estimating forest biomass and volume using airborne laser data

    NASA Technical Reports Server (NTRS)

    Nelson, Ross; Krabill, William; Tonelli, John

    1988-01-01

    An airborne pulsed laser system was used to obtain canopy height data over a southern pine forest in Georgia in order to predict ground-measured forest biomass and timber volume. Although biomass and volume estimates obtained from the laser data were variable when compared with the corresponding ground measurements site by site, the present models are found to predict mean total tree volume within 2.6 percent of the ground value, and mean biomass within 2.0 percent. The results indicate that species stratification did not consistently improve regression relationships for four southern pine species.

  6. Estimating BrAC from transdermal alcohol concentration data using the BrAC estimator software program.

    PubMed

    Luczak, Susan E; Rosen, I Gary

    2014-08-01

    Transdermal alcohol sensor (TAS) devices have the potential to allow researchers and clinicians to unobtrusively collect naturalistic drinking data for weeks at a time, but the transdermal alcohol concentration (TAC) data these devices produce do not consistently correspond with breath alcohol concentration (BrAC) data. We present and test the BrAC Estimator software, a program designed to produce individualized estimates of BrAC from TAC data by fitting mathematical models to a specific person wearing a specific TAS device. Two TAS devices were worn simultaneously by 1 participant for 18 days. The trial began with a laboratory alcohol session to calibrate the model and was followed by a field trial with 10 drinking episodes. Model parameter estimates and fit indices were compared across drinking episodes to examine the calibration phase of the software. Software-generated estimates of peak BrAC, time of peak BrAC, and area under the BrAC curve were compared with breath analyzer data to examine the estimation phase of the software. In this single-subject design with breath analyzer peak BrAC scores ranging from 0.013 to 0.057, the software created consistent models for the 2 TAS devices, despite differences in raw TAC data, and was able to compensate for the attenuation of peak BrAC and the latency of the time of peak BrAC that are typically observed in TAC data. This software program represents an important initial step toward making it possible for non-mathematician researchers and clinicians to obtain estimates of BrAC from TAC data in naturalistic drinking environments. Future research with more participants and greater variation in alcohol consumption levels and patterns, as well as examination of gain-scheduling calibration procedures and nonlinear models of diffusion, will help to determine how precise these software models can become. Copyright © 2014 by the Research Society on Alcoholism.

  7. Rule-Based Flight Software Cost Estimation

    NASA Technical Reports Server (NTRS)

    Stukes, Sherry A.; Spagnuolo, John N. Jr.

    2015-01-01

    This paper discusses the fundamental process for the computation of Flight Software (FSW) cost estimates. This process has been incorporated in a rule-based expert system [1] that can be used for Independent Cost Estimates (ICEs), proposals, and for the validation of Cost Analysis Data Requirements (CADRe) submissions. A high-level directed graph (referred to here as a decision graph) illustrates the steps taken in the production of these estimated costs and serves as a basis of design for the expert system described in this paper. Detailed discussions are subsequently given elaborating upon the methodology, tools, charts, and caveats related to the various nodes of the graph. We present general principles for the estimation of FSW, using SEER-SEM as an illustration of these principles where appropriate. Since Source Lines of Code (SLOC) is a major cost driver, a discussion of various SLOC data sources for the preparation of the estimates is given, together with an explanation of how contractor SLOC estimates compare with the SLOC estimates used by JPL. Approaches for obtaining consistency in code counting are presented, as well as factors used in reconciling SLOC estimates from different code counters. When sufficient data are obtained, a mapping into the JPL Work Breakdown Structure (WBS) from the SEER-SEM output is illustrated. For across-the-board FSW estimates, as was done for the NASA Discovery Mission proposal estimates performed at JPL, a comparative high-level summary sheet for all missions, with the SLOC, data description, brief mission description and the most relevant SEER-SEM parameter values, is given to illustrate an encapsulation of the data used and calculated in the estimates. The rule-based expert system described provides the user with inputs useful or sufficient to run generic cost estimation programs. The system is implemented in the C Language Integrated Production System (CLIPS), as addressed at the end of this paper.

  8. A comparison of low back kinetic estimates obtained through posture matching, rigid link modeling and an EMG-assisted model.

    PubMed

    Parkinson, R J; Bezaire, M; Callaghan, J P

    2011-07-01

    This study examined errors introduced by a posture matching approach (3DMatch) relative to dynamic three-dimensional rigid link and EMG-assisted models. Eighty-eight lifting trials of various combinations of heights (floor, 0.67, 1.2 m), asymmetry (left, right and center) and mass (7.6 and 9.7 kg) were videotaped while spine postures, ground reaction forces, segment orientations and muscle activations were documented and used to estimate joint moments and forces (L5/S1). Posture matching over-predicted peak and cumulative extension moments (p < 0.0001 for all variables). There was no difference between peak compression estimates obtained with the posture matching and EMG-assisted approaches (p = 0.7987). Posture matching also over-predicted cumulative compressive loading (p < 0.0001) due to a bias in standing; however, individualized bias correction eliminated the differences. Therefore, posture matching provides a method to analyze industrial lifting exposures that will predict kinetic values similar to those of more sophisticated models, provided necessary corrections are applied. Copyright © 2010 Elsevier Ltd and The Ergonomics Society. All rights reserved.

  9. Brightness temperature - obtaining the physical properties of a non-equipartition plasma

    NASA Astrophysics Data System (ADS)

    Nokhrina, E. E.

    2017-06-01

    The limit on the intrinsic brightness temperature, attributed to the `Compton catastrophe', has been established as 10^12 K. A somewhat lower limit of the order of 10^11.5 K is implied if we assume that the radiating plasma is in equipartition with the magnetic field - the idea that explained why the observed cores of active galactic nuclei (AGNs) stayed below a limit lower than the `Compton catastrophe'. Recent observations with unprecedentedly high resolution by RadioAstron have revealed a systematic excess in the observed brightness temperatures. We propose means of estimating the degree of the non-equipartition regime in AGN cores. Coupled with core-shift measurements, the method allows us to independently estimate the magnetic field strength and the particle number density at the core. We show that the ratio of magnetic energy to radiating plasma energy is of the order of 10^-5, which means the flow in the core is dominated by the particle energy. We show that the magnetic field obtained from brightness temperature measurements may be underestimated. For relativistic jets with small viewing angles, we propose a non-uniform magnetohydrodynamic model and obtain an expression for the magnetic field amplitude about two orders of magnitude higher than that for the uniform model. These magnetic field amplitudes are consistent with the limiting magnetic field suggested by the `magnetically arrested disc' model.

  10. A GRASS GIS module to obtain an estimation of glacier behavior under climate change: A pilot study on Italian glacier

    NASA Astrophysics Data System (ADS)

    Strigaro, Daniele; Moretti, Massimiliano; Mattavelli, Matteo; Frigerio, Ivan; Amicis, Mattia De; Maggi, Valter

    2016-09-01

    The aim of this work is to integrate the Minimal Glacier Model into a Geographic Information System Python module in order to obtain spatial simulations of glacier retreat and to assess future scenarios with a spatial representation. Minimal Glacier Models are a simple yet effective way of estimating glacier response to climate fluctuations. This module can be useful to the scientific and glaciological community for evaluating glacier behavior driven by climate forcing. The module, called r.glacio.model, is developed in a GRASS GIS (GRASS Development Team, 2016) environment using the Python programming language combined with different libraries such as GDAL, OGR, CSV, math, etc. The module is applied and validated on the Rutor glacier, a glacier in the south-western region of the Italian Alps. This glacier is large and features fairly regular and lively dynamics. The simulation is calibrated by reconstructing the three-dimensional flow line dynamics and analyzing the difference between the simulated flow line length variations and the observed glacier fronts derived from orthophotos and DEMs. These simulations are driven by the past mass balance record. Afterwards, the future assessment is estimated using climatic drivers provided by a set of General Circulation Models participating in the Climate Model Inter-comparison Project 5 effort. The approach devised in r.glacio.model can be applied to most alpine glaciers to obtain a first-order spatial representation of glacier behavior under climate change.

  11. iGLASS: An Improvement to the GLASS Method for Estimating Species Trees from Gene Trees

    PubMed Central

    Rosenberg, Noah A.

    2012-01-01

    Several methods have been designed to infer species trees from gene trees while taking into account gene tree/species tree discordance. Although some of these methods provide consistent species tree topology estimates under a standard model, most either do not estimate branch lengths or are computationally slow. An exception, the GLASS method of Mossel and Roch, is consistent for the species tree topology, estimates branch lengths, and is computationally fast. However, GLASS systematically overestimates divergence times, leading to biased estimates of species tree branch lengths. By assuming a multispecies coalescent model in which multiple lineages are sampled from each of two taxa at L independent loci, we derive the distribution of the waiting time until the first interspecific coalescence occurs between the two taxa, considering all loci and measuring from the divergence time. We then use the mean of this distribution to derive a correction to the GLASS estimator of pairwise divergence times. We show that our improved estimator, which we call iGLASS, consistently estimates the divergence time between a pair of taxa as the number of loci approaches infinity, and that it is an unbiased estimator of divergence times when one lineage is sampled per taxon. We also show that many commonly used clustering methods can be combined with the iGLASS estimator of pairwise divergence times to produce a consistent estimator of the species tree topology. Through simulations, we show that iGLASS can greatly reduce the bias and mean squared error in obtaining estimates of divergence times in a species tree. PMID:22216756

  12. High Resolution, Consistent Online Estimation of Potential Flood Damage in The Netherlands

    NASA Astrophysics Data System (ADS)

    Hoes, O.; Hut, R.; van Leeuwen, E.

    2014-12-01

    In the current age, where water authorities no longer blindly design and maintain all infrastructure just to meet a certain standardized return period, accurate estimation of potential flood damage is important for decision making with regard to flood prevention measures. We identify three issues with current methods of estimating flood damages. Firstly, common practice is to assume that, for a given land use type, damage depends mainly on inundation depth and sometimes flow velocity. We recognize that, depending on the type of land use, inundation depth, velocity, flood duration, season, detour time and recovery time all significantly influence the amount of damage. Secondly, setting stage-damage curves is usually left to the end user and can thus vary between different water authorities within a single country. What is needed at a national level is a common way of calculating flood damages, so that different prevention measures can be fairly compared. Finally, most flood models use relatively large grid cells, usually on the order of 25 m² or coarser. Especially in urban areas this leads to obvious errors: different land uses (shops, housing, parks) are all classified as "urban" and treated equally. To tackle these issues we developed a web-based model which can be accessed via www.waterschadeschatter.nl (water schade schatter is Dutch for water damage estimator). It includes all necessary data sources to calculate the damage of any potential flood in the Netherlands. It uses different damage functions for different land use types, which the user can, but need not, change. It runs on 0.25 m² grid cells. Both the datasets required and the amount of computation needed are more than a desktop computer can handle. In order to start a calculation, a user uploads the relevant flood information to the website. The calculation is divided over several multicore servers, after which the user receives an email with a link to the results of the calculations.

  13. Calculation of the time resolution of the J-PET tomograph using kernel density estimation

    NASA Astrophysics Data System (ADS)

    Raczyński, L.; Wiślicki, W.; Krzemień, W.; Kowalski, P.; Alfs, D.; Bednarski, T.; Białas, P.; Curceanu, C.; Czerwiński, E.; Dulski, K.; Gajos, A.; Głowacz, B.; Gorgol, M.; Hiesmayr, B.; Jasińska, B.; Kamińska, D.; Korcyl, G.; Kozik, T.; Krawczyk, N.; Kubicz, E.; Mohammed, M.; Pawlik-Niedźwiecka, M.; Niedźwiecki, S.; Pałka, M.; Rudy, Z.; Rundel, O.; Sharma, N. G.; Silarski, M.; Smyrski, J.; Strzelecki, A.; Wieczorek, A.; Zgardzińska, B.; Zieliński, M.; Moskal, P.

    2017-06-01

    In this paper we estimate the time resolution of the J-PET scanner built from plastic scintillators. We incorporate a signal-processing method based on the Tikhonov regularization framework and the kernel density estimation method. We obtain simple, closed-form analytical formulae for the time resolution. The proposed method is validated using signals registered by means of a single detection unit of the J-PET tomograph built from a 30 cm long plastic scintillator strip. It is shown that the experimental and theoretical results obtained for the J-PET scanner equipped with vacuum tube photomultipliers are consistent.
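
    The abstract does not reproduce the closed-form expressions, but the Tikhonov-regularized recovery step it builds on can be sketched generically: for a linear measurement model b = Ax + noise, minimize ||Ax - b||² + λ||x||², whose solution is x = (AᵀA + λI)⁻¹Aᵀb. A minimal numpy illustration with hypothetical shapes and values:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n, m = 80, 40
    A = rng.normal(size=(n, m))              # hypothetical measurement operator
    x_true = np.zeros(m)
    x_true[10:15] = 1.0                      # underlying signal to recover
    b = A @ x_true + 0.1 * rng.normal(size=n)

    lam = 0.5                                # regularization strength
    # Tikhonov (ridge) solution of the normal equations:
    x_hat = np.linalg.solve(A.T @ A + lam * np.eye(m), A.T @ b)
    print(np.linalg.norm(x_hat - x_true))    # small residual error
    ```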

  14. Consistency-based rectification of nonrigid registrations

    PubMed Central

    Gass, Tobias; Székely, Gábor; Goksel, Orcun

    2015-01-01

    Abstract. We present a technique to rectify nonrigid registrations by improving their group-wise consistency, which is a widely used unsupervised measure to assess pair-wise registration quality. While pair-wise registration methods cannot guarantee any group-wise consistency, group-wise approaches typically enforce perfect consistency by registering all images to a common reference. However, errors in individual registrations to the reference then propagate, distorting the mean and accumulating in the pair-wise registrations inferred via the reference. Furthermore, the assumption that perfect correspondences exist is not always true, e.g., for interpatient registration. The proposed consistency-based registration rectification (CBRR) method addresses these issues by minimizing the group-wise inconsistency of all pair-wise registrations using a regularized least-squares algorithm. The regularization controls the adherence to the original registration, which is additionally weighted by the local postregistration similarity. This allows CBRR to adaptively improve consistency while locally preserving accurate pair-wise registrations. We show that the resulting registrations are not only more consistent, but also have lower average transformation error when compared to known transformations in simulated data. On clinical data, we show improvements of up to 50% target registration error in breathing motion estimation from four-dimensional MRI and improvements in atlas-based segmentation quality of up to 65% in terms of mean surface distance in three-dimensional (3-D) CT. Such improvement was observed consistently using different registration algorithms, dimensionality (two-dimensional/3-D), and modalities (MRI/CT). PMID:26158083
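
    In heavily simplified form, the rectification can be posed for scalar translations t_ij standing in for full deformation fields: penalize the loop inconsistency t_ij + t_jk - t_ik while keeping each corrected value close to its original estimate, then solve the resulting regularized least-squares problem. The actual CBRR method operates on dense nonrigid registrations with similarity-based local weights; everything below illustrates only the least-squares structure:

    ```python
    import numpy as np
    from itertools import permutations

    n = 4                                    # number of images
    pairs = [(i, j) for i in range(n) for j in range(n) if i != j]
    idx = {p: k for k, p in enumerate(pairs)}

    rng = np.random.default_rng(2)
    x_true = rng.normal(size=n)              # latent image "positions"
    t_obs = np.array([x_true[j] - x_true[i] + 0.2 * rng.normal() for i, j in pairs])

    lam = 1.0                                # adherence to original registrations
    rows, rhs = [], []
    for i, j, k in permutations(range(n), 3):    # consistency: t_ij + t_jk = t_ik
        r = np.zeros(len(pairs))
        r[idx[(i, j)]] += 1.0
        r[idx[(j, k)]] += 1.0
        r[idx[(i, k)]] -= 1.0
        rows.append(r); rhs.append(0.0)
    for p in pairs:                              # regularization: stay near t_obs
        r = np.zeros(len(pairs)); r[idx[p]] = np.sqrt(lam)
        rows.append(r); rhs.append(np.sqrt(lam) * t_obs[idx[p]])

    t_rect, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
    t_ideal = np.array([x_true[j] - x_true[i] for i, j in pairs])
    # Rectified offsets are typically closer to the consistent ideal:
    print(np.abs(t_obs - t_ideal).mean(), np.abs(t_rect - t_ideal).mean())
    ```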

  15. Device-independent point estimation from finite data and its application to device-independent property estimation

    NASA Astrophysics Data System (ADS)

    Lin, Pei-Sheng; Rosset, Denis; Zhang, Yanbao; Bancal, Jean-Daniel; Liang, Yeong-Cherng

    2018-03-01

    The device-independent approach to physics is one where conclusions are drawn directly from the observed correlations between measurement outcomes. In quantum information, this approach allows one to make strong statements about the properties of the underlying systems or devices solely via the observation of Bell-inequality-violating correlations. However, since one can only perform a finite number of experimental trials, statistical fluctuations necessarily accompany any estimation of these correlations. Consequently, an important gap remains between the many theoretical tools developed for the asymptotic scenario and the experimentally obtained raw data. In particular, a physical and concurrently practical way to estimate the underlying quantum distribution has so far remained elusive. Here, we show that the natural analogs of the maximum-likelihood estimation technique and the least-square-error estimation technique in the device-independent context result in point estimates of the true distribution that are physical, unique, computationally tractable, and consistent. They thus serve as sound algorithmic tools allowing one to bridge the aforementioned gap. As an application, we demonstrate how such estimates of the underlying quantum distribution can be used to provide, in certain cases, trustworthy estimates of the amount of entanglement present in the measured system. In stark contrast to existing approaches to device-independent parameter estimations, our estimation does not require the prior knowledge of any Bell inequality tailored for the specific property and the specific distribution of interest.

  16. Finite mixture model: A maximum likelihood estimation approach on time series data

    NASA Astrophysics Data System (ADS)

    Yen, Phoong Seuk; Ismail, Mohd Tahir; Hamzah, Firdaus Mohamad

    2014-09-01

    Recently, statisticians have emphasized fitting finite mixture models by maximum likelihood estimation because of its desirable asymptotic properties. In particular, the maximum likelihood estimator is consistent as the sample size increases to infinity, and hence asymptotically unbiased. Moreover, as the sample size increases, the parameter estimates obtained by maximum likelihood attain the smallest variance among competing statistical methods (asymptotic efficiency). Maximum likelihood estimation is therefore adopted in this paper to fit a two-component mixture model in order to explore the relationship between rubber price and exchange rate for Malaysia, Thailand, the Philippines and Indonesia. The results show a negative relationship between rubber price and exchange rate for all selected countries.
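
    A minimal sketch of fitting a two-component mixture by maximum likelihood, using scikit-learn's EM-based GaussianMixture on hypothetical bivariate (rubber price, exchange rate) observations; the data, regimes, and parameter values are invented for illustration:

    ```python
    import numpy as np
    from sklearn.mixture import GaussianMixture

    rng = np.random.default_rng(3)
    # Hypothetical standardized (rubber price, exchange rate) pairs drawn
    # from two regimes, both with negative cross-covariance.
    a = rng.multivariate_normal([0, 0], [[1.0, -0.6], [-0.6, 1.0]], 300)
    b = rng.multivariate_normal([2, -2], [[1.0, -0.3], [-0.3, 1.0]], 200)
    X = np.vstack([a, b])

    gm = GaussianMixture(n_components=2, random_state=0).fit(X)  # EM = MLE
    print("weights:", gm.weights_)
    print("means:\n", gm.means_)
    print("covariances (note negative cross terms):\n", gm.covariances_)
    ```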

  17. Consistency across Repeated Eyewitness Interviews: Contrasting Police Detectives’ Beliefs with Actual Eyewitness Performance

    PubMed Central

    Krix, Alana C.; Sauerland, Melanie; Lorei, Clemens; Rispens, Imke

    2015-01-01

    In the legal system, inconsistencies in eyewitness accounts are often used to discredit witnesses’ credibility. This is at odds with research findings showing that witnesses frequently report reminiscent details (details previously unrecalled) at an accuracy rate that is nearly as high as for consistently recalled information. The present study sought to put the validity of beliefs about recall consistency to a test by directly comparing them with actual memory performance in two recall attempts. All participants watched a film of a staged theft. Subsequently, the memory group (N = 84) provided one statement immediately after the film (either with the Self-Administered Interview or free recall) and one after a one-week delay. The estimation group (N = 81) consisting of experienced police detectives estimated the recall performance of the memory group. The results showed that actual recall performance was consistently underestimated. Also, a sharp decline of memory performance between recall attempts was assumed by the estimation group whereas actual accuracy remained stable. While reminiscent details were almost as accurate as consistent details, they were estimated to be much less accurate than consistent information and as inaccurate as direct contradictions. The police detectives expressed a great concern that reminiscence was the result of suggestive external influences. In conclusion, it seems that experienced police detectives hold many implicit beliefs about recall consistency that do not correspond with actual recall performance. Recommendations for police trainings are provided. These aim at fostering a differentiated view on eyewitness performance and the inclusion of more comprehensive classes on human memory structure. PMID:25695428

  19. Comparison of Past, Present, and Future Volume Estimation Methods for Tennessee

    Treesearch

    Stanley J. Zarnoch; Alexander Clark; Ray A. Souter

    2003-01-01

    Forest Inventory and Analysis 1999 survey data for Tennessee were used to compare stem-volume estimates obtained using a previous method, the current method, and newly developed taper models that will be used in the future. Compared to the current method, individual tree volumes were consistently underestimated with the previous method, especially for the hardwoods....

  20. Simultaneous Estimation of Overall and Domain Abilities: A Higher-Order IRT Model Approach

    ERIC Educational Resources Information Center

    de la Torre, Jimmy; Song, Hao

    2009-01-01

    Assessments consisting of different domains (e.g., content areas, objectives) are typically multidimensional in nature but are commonly assumed to be unidimensional for estimation purposes. The different domains of these assessments are further treated as multi-unidimensional tests for the purpose of obtaining diagnostic information. However, when…

  1. Influence of sectioning location on age estimates from common carp dorsal spines

    USGS Publications Warehouse

    Watkins, Carson J.; Klein, Zachary B.; Terrazas, Marc M.; Quist, Michael C.

    2015-01-01

    Dorsal spines have been shown to provide precise age estimates for Common Carp (Cyprinus carpio) and are commonly used by management agencies to gain information on Common Carp populations. However, no previous studies have evaluated variation in the precision of age estimates obtained from different sectioning locations along Common Carp dorsal spines. We evaluated the precision, relative readability, and distribution of age estimates obtained from various sectioning locations along Common Carp dorsal spines. Dorsal spines from 192 Common Carp were sectioned at the base (section 1), immediately distal to the basal section (section 2), and at 25% (section 3), 50% (section 4), and 75% (section 5) of the total length of the dorsal spine. The exact agreement and within-1-year agreement among readers were highest, and the coefficient of variation lowest, for section 2. In general, age estimates derived from sections 2 and 3 had similar age distributions and displayed the highest concordance in age estimates with section 1. Our results indicate that sections taken at ≤ 25% of the total length of the dorsal spine can be easily interpreted and provide precise estimates of Common Carp age. The greater consistency in age estimates obtained from section 2 indicates that by using a standard sectioning location, fisheries scientists can expect age-based estimates of population metrics to be more comparable and thus more useful for understanding Common Carp population dynamics.
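
    The precision metrics relied on here (exact agreement, within-1-year agreement, and the coefficient of variation between readers) are straightforward to compute; a minimal sketch with hypothetical paired age reads, using the usual per-fish CV formulation for two readers:

    ```python
    import numpy as np

    r1 = np.array([4, 6, 5, 9, 7, 3, 8])   # hypothetical ages, reader 1
    r2 = np.array([4, 7, 5, 8, 7, 3, 9])   # same fish, reader 2

    exact = np.mean(r1 == r2) * 100                 # % exact agreement
    within1 = np.mean(np.abs(r1 - r2) <= 1) * 100   # % agreement within 1 year
    pair_mean = (r1 + r2) / 2
    pair_sd = np.abs(r1 - r2) / np.sqrt(2)          # SD of two reads per fish
    cv = np.mean(pair_sd / pair_mean) * 100         # mean CV (%) across fish

    print(f"exact: {exact:.0f}%, within 1 yr: {within1:.0f}%, CV: {cv:.1f}%")
    ```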

  2. On Obtaining Estimates of the Fraction of Missing Information from Full Information Maximum Likelihood

    ERIC Educational Resources Information Center

    Savalei, Victoria; Rhemtulla, Mijke

    2012-01-01

    Fraction of missing information λj is a useful measure of the impact of missing data on the quality of estimation of a particular parameter. This measure can be computed for all parameters in the model, and it communicates the relative loss of efficiency in the estimation of a particular parameter due to missing data. It has…

  3. Estimation of Supercapacitor Energy Storage Based on Fractional Differential Equations.

    PubMed

    Kopka, Ryszard

    2017-12-22

    In this paper, new results on using only voltage measurements on supercapacitor terminals for estimation of accumulated energy are presented. For this purpose, a study based on application of fractional-order models of supercapacitor charging/discharging circuits is undertaken. Parameter estimates of the models are then used to assess the amount of energy accumulated in the supercapacitor. The obtained results are compared with the energy determined experimentally by measuring voltage and current on the supercapacitor terminals. All the tests are repeated for various input signal shapes and parameters. The very high consistency between estimated and experimental results fully confirms the suitability of the proposed approach and thus the applicability of fractional calculus to the modelling of supercapacitor energy storage.
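
    As an illustration of the modelling idea, a constant-phase-element capacitor obeying i(t) = C·dᵅv/dtᵅ has, under constant-current charging, the voltage response v(t) = I·tᵅ/(C·Γ(α+1)). The sketch below fits (C, α) from synthetic terminal-voltage data alone and then applies the usual ½CV² approximation for stored energy; the paper's energy assessment and data are more elaborate than this:

    ```python
    import numpy as np
    from scipy.optimize import curve_fit
    from scipy.special import gamma

    I = 0.5                               # charging current (A), assumed known
    t = np.linspace(0.1, 60.0, 200)       # time since charging began (s)

    def v_model(t, C, alpha):
        """Fractional-order capacitor under constant current:
        v(t) = I * t**alpha / (C * Gamma(alpha + 1))."""
        return I * t**alpha / (C * gamma(alpha + 1.0))

    rng = np.random.default_rng(4)
    v_meas = v_model(t, 25.0, 0.92) + 0.005 * rng.normal(size=t.size)  # synthetic

    (C_hat, a_hat), _ = curve_fit(v_model, t, v_meas, p0=(10.0, 1.0))
    E_hat = 0.5 * C_hat * v_meas[-1] ** 2  # crude 1/2*C*V^2 energy estimate
    print(f"C = {C_hat:.2f}, alpha = {a_hat:.3f}, E ~ {E_hat:.3f} J")
    ```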

  4. An Algorithm for Obtaining the Distribution of 1-Meter Lightning Channel Segment Altitudes for Application in Lightning NOx Production Estimation

    NASA Technical Reports Server (NTRS)

    Peterson, Harold; Koshak, William J.

    2009-01-01

    An algorithm has been developed to estimate the altitude distribution of one-meter lightning channel segments. The algorithm is required as part of a broader objective that involves improving the lightning NOx emission inventories of both regional air quality and global chemistry/climate models. The algorithm was tested and applied to VHF signals detected by the North Alabama Lightning Mapping Array (NALMA). The accuracy of the algorithm was characterized by comparing algorithm output to the plots of individual discharges whose lengths were computed by hand; VHF source amplitude thresholding and smoothing were applied to optimize results. Several thousands of lightning flashes within 120 km of the NALMA network centroid were gathered from all four seasons, and were analyzed by the algorithm. The mean, standard deviation, and median statistics were obtained for all the flashes, the ground flashes, and the cloud flashes. One-meter channel segment altitude distributions were also obtained for the different seasons.
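
    The core bookkeeping of such an algorithm can be sketched: given a mapped channel as a polyline through VHF source locations, resample it into one-meter segments along arc length and histogram the segment altitudes. The thresholding and smoothing steps of the real algorithm are omitted, and the channel below is synthetic:

    ```python
    import numpy as np

    def segment_altitudes(xyz, seg_len=1.0):
        """Resample a channel polyline (N x 3, metres) into seg_len-long
        segments and return the altitude of each segment midpoint."""
        d = np.linalg.norm(np.diff(xyz, axis=0), axis=1)
        s = np.concatenate([[0.0], np.cumsum(d)])        # arc length
        mids = np.arange(seg_len / 2, s[-1], seg_len)    # segment midpoints
        return np.interp(mids, s, xyz[:, 2])             # altitudes there

    # Toy channel: rises from 5 km to 9 km with horizontal meanders.
    t = np.linspace(0, 1, 50)
    channel = np.c_[400 * np.sin(6 * t), 300 * t, 5000 + 4000 * t]
    alts = segment_altitudes(channel)
    hist, edges = np.histogram(alts, bins=np.arange(5000, 9501, 500))
    print(dict(zip(edges[:-1], hist)))                   # altitude distribution
    ```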

  5. Incident CTS in a large pooled cohort study: associations obtained by a Job Exposure Matrix versus associations obtained from observed exposures.

    PubMed

    Dale, Ann Marie; Ekenga, Christine C; Buckner-Petty, Skye; Merlino, Linda; Thiese, Matthew S; Bao, Stephen; Meyers, Alysha Rose; Harris-Adamson, Carisa; Kapellusch, Jay; Eisen, Ellen A; Gerr, Fred; Hegmann, Kurt T; Silverstein, Barbara; Garg, Arun; Rempel, David; Zeringue, Angelique; Evanoff, Bradley A

    2018-03-29

    There is growing use of a job exposure matrix (JEM) to provide exposure estimates in studies of work-related musculoskeletal disorders; few studies have examined the validity of such estimates or compared associations obtained with a JEM with those obtained using other exposure measures. This study estimated upper extremity exposures using a JEM derived from a publicly available data set (Occupational Information Network, O*NET) and compared exposure-disease associations for incident carpal tunnel syndrome (CTS) with those obtained using observed physical exposure measures in a large prospective study. 2393 workers from several industries were followed for up to 2.8 years (5.5 person-years). Standard Occupational Classification (SOC) codes were assigned to the job at enrolment. SOC codes linked to physical exposures for forceful hand exertion and repetitive activities were extracted from O*NET. We used multivariable Cox proportional hazards regression models to describe exposure-disease associations for incident CTS for individually observed physical exposures and JEM exposures from O*NET. Both exposure methods found associations between incident CTS and exposures of force and repetition, with evidence of dose-response. Observed associations were similar across the two methods, with somewhat wider CIs for HRs calculated using the JEM method. Exposures estimated using a JEM provided similar exposure-disease associations for CTS when compared with associations obtained using the 'gold standard' method of individual observation. While JEMs have a number of limitations, in some studies they can provide useful exposure estimates in the absence of individual-level observed exposures. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2018. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
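
    A sketch of the kind of model described, using the lifelines package with hypothetical column names and synthetic data; in the study's design, the same model would be fitted once with JEM-derived exposures and once with individually observed ones, and the hazard ratios compared:

    ```python
    import numpy as np
    import pandas as pd
    from lifelines import CoxPHFitter

    rng = np.random.default_rng(5)
    n = 500
    df = pd.DataFrame({
        "force": rng.uniform(0, 10, n),       # JEM or observed forceful exertion
        "repetition": rng.uniform(0, 10, n),  # repetitive activity score
        "age": rng.normal(45, 10, n),
    })
    # Synthetic follow-up: hazard increases with force and repetition,
    # administratively censored at 2.8 years.
    risk = 0.02 * np.exp(0.12 * df["force"] + 0.08 * df["repetition"])
    T = rng.exponential(1 / risk)
    df["time"] = np.minimum(T, 2.8)
    df["cts"] = (T < 2.8).astype(int)          # incident CTS indicator

    cph = CoxPHFitter().fit(df, duration_col="time", event_col="cts")
    print(cph.summary[["exp(coef)", "exp(coef) lower 95%", "exp(coef) upper 95%"]])
    ```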

  6. One Decade of Induced Seismicity in Basel, Switzerland: A Consistent High-Resolution Catalog Obtained by Template Matching

    NASA Astrophysics Data System (ADS)

    Herrmann, M.; Kraft, T.; Tormann, T.; Scarabello, L.; Wiemer, S.

    2017-12-01

    Induced seismicity at the site of the Basel Enhanced Geothermal System (EGS) decayed continuously for six years after injection was stopped in December 2006. Starting in May 2012, the Swiss Seismological Service detected a renewed increase of induced seismicity in the EGS reservoir to levels last seen in 2007, reaching magnitudes up to ML2.0. Seismic monitoring at this EGS site has been running for more than ten years now, but the details of the long-term behavior of its induced seismicity remained unexplored because a seismic event catalog that is consistent in detection sensitivity and magnitude estimation did not exist. We have created such a catalog by applying our matched filter detector to the 11-year-long seismic recordings of a borehole station at 2.7 km depth. Based on 3'600 located earthquakes of the operator's borehole-network catalog, we selected about 2'500 reasonably dissimilar templates using waveform clustering. This large template set ensures adequate coverage of the diversity of event waveforms, which arises from the reservoir's highly complex fault system and the close observation distance. To cope with the increased computational demand of scanning 11 years of data with 2'500 templates, we parallelized our detector to run on a high-performance computer of the Swiss National Supercomputing Centre. We detect more than 200'000 events down to ML-2.5 during the six-day-long stimulation in December 2006 alone. Previously, only 13'000 detections found by an amplitude-threshold-based detector were known for this period. The high temporal and spatial resolution of this new catalog allows us to analyze the statistics of the induced Basel earthquakes in great detail. We resolve spatio-temporal variations of the seismicity parameters (a- and b-value) that had not been identified before and derive the first high-resolution temporal evolution of the seismic hazard for the Basel EGS reservoir. In summer 2017, our detector monitored the 10-week pressure
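
    At the heart of such a matched-filter detector is normalized cross-correlation of each template against the continuous record; a minimal single-channel numpy sketch (a production system like the one described adds thousands of templates, network stacking, parallelism, and magnitude estimation):

    ```python
    import numpy as np

    def ncc_detect(trace, template, threshold=0.7):
        """Slide a template over a continuous trace and return the sample
        offsets where normalized cross-correlation exceeds the threshold."""
        m = len(template)
        tpl = (template - template.mean()) / template.std()
        hits = []
        for k in range(len(trace) - m + 1):
            w = trace[k:k + m]
            s = w.std()
            if s == 0:
                continue
            cc = np.dot((w - w.mean()) / s, tpl) / m
            if cc >= threshold:
                hits.append((k, round(cc, 2)))
        return hits

    rng = np.random.default_rng(6)
    template = np.sin(2 * np.pi * np.arange(100) / 20) * np.hanning(100)
    trace = 0.3 * rng.normal(size=5000)
    trace[1200:1300] += template          # buried event
    trace[3500:3600] += 0.5 * template    # weaker repeat of the same source
    print(ncc_detect(trace, template))    # detections near offsets 1200, 3500
    ```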

  7. Personality Consistency in Dogs: A Meta-Analysis

    PubMed Central

    Fratkin, Jamie L.; Sinn, David L.; Patall, Erika A.; Gosling, Samuel D.

    2013-01-01

    Personality, or consistent individual differences in behavior, is well established in studies of dogs. Such consistency implies predictability of behavior, but some recent research suggests that predictability cannot be assumed. In addition, anecdotally, many dog experts believe that ‘puppy tests’ measuring behavior during the first year of a dog's life are not accurate indicators of subsequent adult behavior. Personality consistency in dogs is an important aspect of human-dog relationships (e.g., when selecting dogs suitable for substance-detection work or placement in a family). Here we perform the first comprehensive meta-analysis of studies reporting estimates of temporal consistency of dog personality. A thorough literature search identified 31 studies suitable for inclusion in our meta-analysis. Overall, we found evidence to suggest substantial consistency (r = 0.43). Furthermore, personality consistency was higher in older dogs, when behavioral assessment intervals were shorter, and when the measurement tool was exactly the same in both assessments. In puppies, aggression and submissiveness were the most consistent dimensions, while responsiveness to training, fearfulness, and sociability were the least consistent dimensions. In adult dogs, there were no dimension-based differences in consistency. There was no difference in personality consistency in dogs tested first as puppies and later as adults (e.g., ‘puppy tests’) versus dogs tested first as puppies and later again as puppies. Finally, there were no differences in consistency between working versus non-working dogs, between behavioral codings versus behavioral ratings, and between aggregate versus single measures. Implications for theory, practice, and future research are discussed. PMID:23372787

  8. A self-consistent high- and low-frequency scattering model for cirrus

    NASA Astrophysics Data System (ADS)

    Baran, Anthony J.; Cotton, Richard; Havemann, Stephan; C.-Labonnote, Laurent; Marenco, Franco

    2013-05-01

    This paper demonstrates that an ensemble model of cirrus ice crystals that follows observed mass-dimensional power laws can predict the scattering properties of cirrus across the electromagnetic spectrum, without the need for tailor made scattering models for particular regions of the spectrum. The ensemble model predicts a mass-dimensional power law of the following form, mass ∝ D² (where D is the maximum dimension of the ice crystal). This same mass-dimensional power law is applied across the spectrum to predict the particle size distribution (PSD) using a moment estimation parameterization of the PSD. The PSD parameterization predicts the original PSD, using in-situ estimates (bulk measurements) of the ice water content (IWC) and measurements of the in-cloud temperature; the measurements were obtained from a number of mid-latitude cirrus cases, which occurred over the U.K. during the winter and spring of 2010. It is demonstrated that the ensemble model predicts lidar backscatter estimates, at 0.355 μm, of the volume extinction coefficient and total solar optical depth to within current experimental uncertainties, hyperspectral brightness temperature measurements of the terrestrial region (800 cm⁻¹ to 1200 cm⁻¹) to generally well within ±1 K in the window regions, and the 35 GHz radar reflectivity to within ±2 dBZ. Therefore, for simulation of satellite radiances within general circulation models, and retrieval of cirrus properties, scattering models, which are demonstrated to be physically consistent across the electromagnetic spectrum, should be preferred.

  9. Associations between tongue movement pattern consistency and formant movement pattern consistency in response to speech behavioral modifications

    PubMed Central

    Mefferd, Antje S.

    2016-01-01

    The degree of speech movement pattern consistency can provide information about speech motor control. Although tongue motor control is particularly important because of the tongue's primary contribution to the speech acoustic signal, capturing tongue movements during speech remains difficult and costly. This study sought to determine if formant movements could be used to estimate tongue movement pattern consistency indirectly. Two age groups (seven young adults and seven older adults) and six speech conditions (typical, slow, loud, clear, fast, bite block speech) were selected to elicit an age- and task-dependent performance range in tongue movement pattern consistency. Kinematic and acoustic spatiotemporal indexes (STI) were calculated based on sentence-length tongue movement and formant movement signals, respectively. Kinematic and acoustic STI values showed strong associations across talkers and moderate to strong associations for each talker across speech tasks; although, in cases where task-related tongue motor performance changes were relatively small, the acoustic STI values were poorly associated with kinematic STI values. These findings suggest that, depending on the sensitivity needs, formant movement pattern consistency could be used in lieu of direct kinematic analysis to indirectly examine speech motor control. PMID:27908069
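
    The spatiotemporal index (STI) used here can be sketched as follows: linearly time-normalize each trial's trajectory to a fixed number of points, amplitude-normalize it, then sum the across-trial standard deviations at each relative time point. The 50-point resampling and z-score normalization below are common choices, assumed rather than taken from the paper:

    ```python
    import numpy as np

    def sti(trials, n_points=50):
        """Spatiotemporal index: time- and amplitude-normalize each trial,
        then sum the standard deviations across trials at each time point."""
        norm = []
        for y in trials:
            x_old = np.linspace(0, 1, len(y))
            y_rs = np.interp(np.linspace(0, 1, n_points), x_old, y)
            norm.append((y_rs - y_rs.mean()) / y_rs.std())  # z-score amplitude
        return np.std(np.array(norm), axis=0, ddof=1).sum()

    rng = np.random.default_rng(7)
    base = np.sin(np.linspace(0, 2 * np.pi, 200))
    consistent = [base + 0.05 * rng.normal(size=200) for _ in range(10)]
    variable = [np.roll(base, rng.integers(-20, 20)) + 0.2 * rng.normal(size=200)
                for _ in range(10)]
    print(sti(consistent), sti(variable))  # lower STI = more consistent patterns
    ```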

  10. Anharmonic frequencies of CX2Y2 (X, Y = O, N, F, H, D) isomers and related systems obtained from vibrational multiconfiguration self-consistent field theory.

    PubMed

    Pfeiffer, Florian; Rauhut, Guntram

    2011-10-13

    Accurate anharmonic frequencies are provided for molecules of current research, i.e., diazirines, diazomethane, the corresponding fluorinated and deuterated compounds, their dioxygen analogs, and others. Vibrational-state energies were obtained from state-specific vibrational multiconfiguration self-consistent field theory (VMCSCF) based on multilevel potential energy surfaces (PES) generated from explicitly correlated coupled cluster, CCSD(T)-F12a, and double-hybrid density functional calculations, B2PLYP. To accelerate the vibrational structure calculations, a configuration selection scheme as well as a polynomial representation of the PES have been exploited. Because experimental data are scarce for these systems, many calculated frequencies of this study are predictions and may guide experiments to come.

  11. Estimating maneuvers for precise relative orbit determination using GPS

    NASA Astrophysics Data System (ADS)

    Allende-Alba, Gerardo; Montenbruck, Oliver; Ardaens, Jean-Sébastien; Wermuth, Martin; Hugentobler, Urs

    2017-01-01

    Precise relative orbit determination is an essential element for the generation of science products from distributed instrumentation of formation flying satellites in low Earth orbit. According to the mission profile, the required formation is typically maintained and/or controlled by executing maneuvers. To generate consistent and precise orbit products, a strategy for maneuver handling is mandatory in order to avoid discontinuities or precision degradation before, during, and after maneuver execution. Precise orbit determination offers the possibility of maneuver estimation in an adjustment of single-satellite trajectories using GPS measurements. However, a consistent formulation of a precise relative orbit determination scheme requires the implementation of a maneuver estimation strategy which can be used, in addition, to improve the precision of maneuver estimates by drawing upon the use of differential GPS measurements. The present study introduces a method for precise relative orbit determination based on a reduced-dynamic batch processing of differential GPS pseudorange and carrier phase measurements, which includes maneuver estimation as part of the relative orbit adjustment. The proposed method has been validated using flight data from space missions with different rates of maneuvering activity, including the GRACE, TanDEM-X and PRISMA missions. The results show the feasibility of obtaining precise relative orbits without degradation in the vicinity of maneuvers, as well as improved maneuver estimates that can be used for better maneuver planning in flight dynamics operations.

  12. Model for Increasing the Power Obtained from a Thermoelectric Generator Module

    NASA Astrophysics Data System (ADS)

    Huang, Gia-Yeh; Hsu, Cheng-Ting; Yao, Da-Jeng

    2014-06-01

    We have developed a model for finding the most efficient way of increasing the power obtained from a thermoelectric generator (TEG) module with a variety of operating conditions and limitations. The model is based on both thermoelectric principles and thermal resistance circuits, because a TEG converts heat into electricity consistent with these two theories. It is essential to take into account thermal contact resistance when estimating power generation. Thermal contact resistance causes overestimation of the measured temperature difference between the hot and cold sides of a TEG in calculation of the theoretical power generated, i.e. the theoretical power is larger than the experimental power. The ratio of the experimental open-loop voltage to the measured temperature difference, the effective Seebeck coefficient, can be used to estimate the thermal contact resistance in the model. The ratio of the effective Seebeck coefficient to the theoretical Seebeck coefficient, the Seebeck coefficient ratio, represents the contact conditions. From this ratio, a relationship between performance and different variables can be developed. The measured power generated by a TEG module (TMH400302055; Wise Life Technology, Taiwan) is consistent with the result obtained by use of the model; the relative deviation is 10%. Use of this model to evaluate the most efficient means of increasing the generated power reveals that the TEG module generates 0.14 W when the temperature difference is 25°C and the Seebeck coefficient ratio is 0.4. Several methods can be used to triple the amount of power generated. For example, increasing the temperature difference to 43°C generates 0.41 W power; improving the Seebeck coefficient ratio to 0.65 increases the power to 0.39 W; simultaneously increasing the temperature difference to 34°C and improving the Seebeck coefficient ratio to 0.5 increases the power to 0.41 W. Choice of the appropriate method depends on the limitations of the system, the cost, and
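
    On the electrical side, such a model reduces to a Seebeck voltage source behind an internal resistance. A sketch follows; the module parameters (S_module, R_int, R_load) are hypothetical values chosen only so the outputs land near the magnitudes quoted above, not the paper's actual data:

    ```python
    def teg_power(delta_T, S_module=0.106, ratio=0.4, R_int=2.0, R_load=2.0):
        """Power delivered to the load: P = V_oc^2 * R_L / (R_int + R_L)^2,
        with V_oc = ratio * S_module * delta_T; the Seebeck coefficient
        ratio models thermal-contact degradation. Parameters hypothetical."""
        V_oc = ratio * S_module * delta_T      # open-loop voltage (V)
        return V_oc**2 * R_load / (R_int + R_load) ** 2

    print(teg_power(25))                       # ~0.14 W baseline
    print(teg_power(43))                       # raise the temperature difference
    print(teg_power(25, ratio=0.65))           # improve the contact quality
    print(teg_power(34, ratio=0.5))            # combine both measures
    ```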

  13. Learned perceptual associations influence visuomotor programming under limited conditions: kinematic consistency.

    PubMed

    Haffenden, Angela M; Goodale, Melvyn A

    2002-12-01

    Previous findings have suggested that visuomotor programming can make use of learned size information in experimental paradigms where movement kinematics are quite consistent from trial to trial. The present experiment was designed to test whether or not this conclusion could be generalized to a different manipulation of kinematic variability. As in previous work, an association was established between the size and colour of square blocks (e.g. red = large; yellow = small, or vice versa). Associating size and colour in this fashion has been shown to reliably alter the perceived size of two test blocks halfway in size between the large and small blocks: estimations of the test block matched in colour to the group of large blocks are smaller than estimations of the test block matched to the group of small blocks. Subjects grasped the blocks, and on other trials estimated the size of the blocks. These changes in perceived block size were incorporated into grip scaling only when movement kinematics were highly consistent from trial to trial; that is, when the blocks were presented in the same location on each trial. When the blocks were presented in different locations grip scaling remained true to the metrics of the test blocks despite the changes in perceptual estimates of block size. These results support previous findings suggesting that kinematic consistency facilitates the incorporation of learned perceptual information into grip scaling.

  14. Comparison of Sun-Induced Chlorophyll Fluorescence Estimates Obtained from Four Portable Field Spectroradiometers

    NASA Technical Reports Server (NTRS)

    Julitta, Tommaso; Corp, Lawrence A.; Rossini, Micol; Burkart, Andreas; Cogliati, Sergio; Davies, Neville; Hom, Milton; Mac Arthur, Alasdair; Middleton, Elizabeth M.; Rascher, Uwe

    2016-01-01

    Remote sensing of Sun-Induced Chlorophyll Fluorescence (SIF) is a research field of growing interest because it offers the potential to quantify actual photosynthesis and to monitor plant status. New satellite missions from the European Space Agency, such as the Earth Explorer 8 FLuorescence EXplorer (FLEX) mission (scheduled to launch in 2022 and aiming at SIF mapping), and from the National Aeronautics and Space Administration (NASA), such as the Orbiting Carbon Observatory-2 (OCO-2) sampling mission launched in July 2014, provide the capability to estimate SIF from space. The detection of the SIF signal from airborne and satellite platforms is difficult, and reliable ground-level data are needed for calibration/validation. Several commercially available spectroradiometers are currently used to retrieve SIF in the field. This study presents a comparison exercise for evaluating the capability of four spectroradiometers to retrieve SIF. The results show that accurate far-red SIF estimation can be achieved using spectroradiometers with an ultrafine resolution (less than 1 nm), while red SIF estimation requires even higher spectral resolution (less than 0.5 nm). Moreover, it is shown that the Signal to Noise Ratio (SNR) plays a significant role in the precision of the far-red SIF measurements.
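
    Such ground retrievals commonly exploit Fraunhofer or atmospheric absorption lines; the standard Fraunhofer Line Discrimination (sFLD) estimator, which uses irradiance E and radiance L just outside and inside an absorption band, is sketched below. This is a generic illustration, not the specific algorithm of any instrument compared in the study, and the readings are hypothetical:

    ```python
    def sfld_sif(E_out, L_out, E_in, L_in):
        """Standard FLD: assuming reflectance and fluorescence are constant
        across the line, SIF = (E_out*L_in - E_in*L_out) / (E_out - E_in)."""
        return (E_out * L_in - E_in * L_out) / (E_out - E_in)

    # Hypothetical readings around the O2-A band (~760 nm), arbitrary units:
    print(sfld_sif(E_out=100.0, L_out=30.5, E_in=20.0, L_in=7.3))  # -> 1.5
    ```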

  15. [Reproducibility, internal consistency, and construct validity of KIDSCREEN-27 in Brazilian adolescents].

    PubMed

    Farias, José Cazuza de; Loch, Mathias Roberto; Lima, Antônio José de; Sales, Joana Marcela; Ferreira, Flávia Emília Leite de Lima

    2017-09-28

    The objective of this two-part study was to estimate the reproducibility, internal consistency, and construct validity of KIDSCREEN-27, a questionnaire to measure health-related quality of life, in Brazilian adolescents. One study component estimated reproducibility (176 adolescents, 59.7% females, 64.7% 10 to 12 years of age), and another estimated internal consistency and validity (1,321 adolescents, 53.7% females, 56.9% 10 to 12 years of age). The studies were conducted with adolescents of both sexes in public schools in the municipality of João Pessoa, Paraíba State, Brazil. KIDSCREEN-27 consists of 27 items distributed across five domains (physical well-being, 5 items; psychological well-being, 7 items; parents and social support, 7 items; autonomy and relationship with parents, 4 items; school environment, 4 items). Reproducibility was estimated by the intra-class correlation coefficient (ICC). Confirmatory factor analysis was used to assess construct validity, and the composite reliability index (CRI) was used to verify the questionnaire's internal consistency. ICCs were greater than or equal to 0.70 (0.70 to 0.96). Factor loadings were greater than 0.40, except for five items (0.28 to 0.39). The model's goodness-of-fit indices were adequate (χ²/df = 2.79; RMR = 0.035; RMSEA = 0.037; GFI = 0.951; AGFI = 0.941; CFI = 0.908; TLI = 0.901). The CRI varied from 0.65 to 0.70 in the domains and was 0.90 for the whole questionnaire. KIDSCREEN-27 reached satisfactory levels of reproducibility, internal consistency, and construct validity and can be used to assess health-related quality of life in Brazilian adolescents 10 to 15 years of age.

  16. Potential application of the consistency approach for vaccine potency testing.

    PubMed

    Arciniega, J; Sirota, L A

    2012-01-01

    The Consistency Approach offers the possibility of reducing the number of animals used for a potency test. However, it is critical to assess the effect that such reduction may have on assay performance. Consistency of production, sometimes referred to as consistency of manufacture or manufacturing, is an old concept implicit in regulation, which aims to ensure the uninterrupted release of safe and effective products. Consistency of manufacture can be described in terms of process capability, or the ability of a process to produce output within specification limits. For example, the standard method for potency testing of inactivated rabies vaccines is a multiple-dilution vaccination challenge test in mice that gives a quantitative, although highly variable estimate. On the other hand, a single-dilution test that does not give a quantitative estimate, but rather shows if the vaccine meets the specification has been proposed. This simplified test can lead to a considerable reduction in the number of animals used. However, traditional indices of process capability assume that the output population (potency values) is normally distributed, which clearly is not the case for the simplified approach. Appropriate computation of capability indices for the latter case will require special statistical considerations.
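
    Process capability as discussed here is conventionally summarized by indices such as Cp and Cpk, which, as the abstract notes, assume normally distributed output; a minimal sketch with hypothetical potency values and specification limits:

    ```python
    import numpy as np

    def cp_cpk(x, lsl, usl):
        """Classical capability indices; both assume x is normally
        distributed, the assumption that fails for pass/fail potency data."""
        mu, sigma = np.mean(x), np.std(x, ddof=1)
        cp = (usl - lsl) / (6 * sigma)
        cpk = min(usl - mu, mu - lsl) / (3 * sigma)
        return cp, cpk

    rng = np.random.default_rng(8)
    potency = rng.normal(5.0, 0.4, size=40)   # hypothetical log-potency values
    print(cp_cpk(potency, lsl=4.0, usl=6.5))
    ```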

  17. An empirical approach for estimating stress-coupling lengths for marine-terminating glaciers

    USGS Publications Warehouse

    Enderlin, Ellyn; Hamilton, Gordon S.; O'Neel, Shad; Bartholomaus, Timothy C.; Morlighem, Mathieu; Holt, John W.

    2016-01-01

    Here we present a new empirical method to estimate the stress-coupling length (SCL) for marine-terminating glaciers using high-resolution observations. We use the empirically determined periodicity in resistive stress oscillations as a proxy for the SCL. Application of our empirical method to two well-studied tidewater glaciers (Helheim Glacier, SE Greenland, and Columbia Glacier, Alaska, USA) demonstrates that SCL estimates obtained using this approach are consistent with theory (i.e., they can be parameterized as a function of the ice thickness) and with prior, independent SCL estimates. In order to accurately resolve stress variations, we suggest that similar empirical stress-coupling parameterizations be employed in future analyses of glacier dynamics.
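
    One simple way to extract a dominant oscillation period from an along-flow resistive-stress profile, as a stand-in for the kind of periodicity analysis described, is the FFT of the detrended profile; the synthetic data and processing choices below are illustrative, not the study's actual pipeline:

    ```python
    import numpy as np

    def dominant_period(stress, dx):
        """Return the spatial period (m) of the strongest nonzero-frequency
        Fourier component of a detrended along-flow stress profile."""
        i = np.arange(stress.size)
        s = stress - np.polyval(np.polyfit(i, stress, 1), i)  # remove trend
        spec = np.abs(np.fft.rfft(s))
        freqs = np.fft.rfftfreq(s.size, d=dx)
        k = 1 + np.argmax(spec[1:])                           # skip DC bin
        return 1.0 / freqs[k]

    x = np.arange(0, 20000, 100.0)        # 20 km profile, 100 m spacing
    stress = (50 * np.sin(2 * np.pi * x / 3000)
              + 5 * np.random.default_rng(9).normal(size=x.size))
    print(dominant_period(stress, dx=100.0))   # ~3000 m -> SCL proxy
    ```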

  18. Inverse consistent non-rigid image registration based on robust point set matching

    PubMed Central

    2014-01-01

    Background Robust point matching (RPM) has been extensively used in non-rigid registration of images to robustly register two sets of image points. However, except at the control points, RPM cannot estimate a consistent correspondence between two images because RPM is a unidirectional image matching approach. It is therefore an important issue to improve image registration based on RPM. Methods In our work, a consistent image registration approach based on point set matching is proposed to incorporate the property of inverse consistency and improve registration accuracy. Instead of only estimating the forward transformation between the source point set and the target point set, as in state-of-the-art RPM algorithms, the forward and backward transformations between two point sets are estimated concurrently in our algorithm. The inverse consistency constraints are introduced into the cost function of RPM, and the fuzzy correspondences between two point sets are estimated based on both the forward and backward transformations simultaneously. A modified consistent landmark thin-plate spline registration is discussed in detail to find the forward and backward transformations during the optimization of RPM. The similarity of image content is also incorporated into point matching in order to improve image matching. Results Synthetic data sets and medical images are employed to demonstrate and validate the performance of our approach. The inverse consistent errors of our algorithm are smaller than those of RPM. In particular, the topology of the transformations is well preserved by our algorithm for large deformations between point sets. Moreover, the distance errors of our algorithm are similar to those of RPM and maintain a downward trend as a whole, which demonstrates the convergence of our algorithm. The registration errors for image registrations are also evaluated. Again, our algorithm achieves lower registration errors in the same iteration number

  19. Comparing geophysical measurements to theoretical estimates for soil mixtures at low pressures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wildenschild, D; Berge, P A; Berryman, K G

    1999-01-15

    The authors obtained good estimates of measured velocities of sand-peat samples at low pressures by using a theoretical method, the self-consistent theory of Berryman (1980), using sand and porous peat to represent the microstructure of the mixture. They were unable to obtain useful estimates with several other theoretical approaches, because the properties of the quartz, air, and peat components of the samples vary over several orders of magnitude. Methods that are useful for consolidated rock cannot be applied directly to unconsolidated materials. Instead, careful consideration of microstructure is necessary to adapt the methods successfully. Future work includes comparison of the measured velocity values to additional theoretical estimates, investigation of Vp/Vs ratios and wave amplitudes, as well as modeling of dry and saturated sand-clay mixtures (e.g., Bonner et al., 1997, 1998). The results suggest that field data can be interpreted by comparing laboratory measurements of soil velocities to theoretical estimates of velocities in order to establish a systematic method for predicting velocities for a full range of sand-organic material mixtures at various pressures. Once the theoretical relationship is obtained, it can be used to estimate the soil composition at various depths from field measurements of seismic velocities. Additional refining of the method for relating velocities to soil characteristics is useful for developing inversion algorithms.
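
    For the special case of spherical inclusions, Berryman's self-consistent moduli can be computed by fixed-point iteration; in the sketch below the quartz values are standard, while the peat and air properties and the volume fractions are hypothetical stand-ins, and the paper's microstructure treatment is more careful than this:

    ```python
    import numpy as np

    # (K [GPa], mu [GPa], volume fraction); peat values are illustrative.
    phases = [(37.0, 44.0, 0.6),      # quartz
              (0.05, 0.02, 0.2),      # porous peat (hypothetical)
              (1.0e-4, 0.0, 0.2)]     # air

    K, mu = 1.0, 1.0                  # initial guess for effective moduli
    for _ in range(200):              # fixed-point iteration (spheres)
        zeta = (mu / 6.0) * (9 * K + 8 * mu) / (K + 2 * mu)
        P = [(K + 4 * mu / 3) / (Ki + 4 * mu / 3) for Ki, _, _ in phases]
        Q = [(mu + zeta) / (mi + zeta) for _, mi, _ in phases]
        K = (sum(f * Ki * p for (Ki, _, f), p in zip(phases, P))
             / sum(f * p for (_, _, f), p in zip(phases, P)))
        mu = (sum(f * mi * q for (_, mi, f), q in zip(phases, Q))
              / sum(f * q for (_, _, f), q in zip(phases, Q)))

    rho = 0.6 * 2650 + 0.2 * 1100 + 0.2 * 1.2   # bulk density (kg/m^3)
    Vp = np.sqrt((K + 4 * mu / 3) * 1e9 / rho)
    print(f"K = {K:.3f} GPa, mu = {mu:.3f} GPa, Vp = {Vp:.0f} m/s")
    ```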

  20. Doubly robust matching estimators for high dimensional confounding adjustment.

    PubMed

    Antonelli, Joseph; Cefalu, Matthew; Palmer, Nathan; Agniel, Denis

    2018-05-11

    Valid estimation of treatment effects from observational data requires proper control of confounding. If the number of covariates is large relative to the number of observations, then controlling for all available covariates is infeasible. In cases where a sparsity condition holds, variable selection or penalization can reduce the dimension of the covariate space in a manner that allows for valid estimation of treatment effects. In this article, we propose matching on both the estimated propensity score and the estimated prognostic scores when the number of covariates is large relative to the number of observations. We derive asymptotic results for the matching estimator and show that it is doubly robust in the sense that only one of the two score models need be correct to obtain a consistent estimator. We show via simulation its effectiveness in controlling for confounding and highlight its potential to address nonlinear confounding. Finally, we apply the proposed procedure to analyze the effect of gender on prescription opioid use using insurance claims data. © 2018, The International Biometric Society.
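
    A miniature sketch of the proposed idea: estimate a propensity score and a prognostic score, then match each treated unit to its nearest control in the two-dimensional score space. The article works with penalized high-dimensional fits and formal asymptotics; the plain fits and synthetic data below are a simplified illustration:

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression, LinearRegression
    from sklearn.neighbors import NearestNeighbors

    rng = np.random.default_rng(10)
    n, p = 1000, 10
    X = rng.normal(size=(n, p))
    treat = rng.binomial(1, 1 / (1 + np.exp(-X[:, 0])))   # confounded treatment
    y = 2.0 * treat + X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=n)

    ps = LogisticRegression().fit(X, treat).predict_proba(X)[:, 1]   # propensity
    prog = LinearRegression().fit(X[treat == 0], y[treat == 0]).predict(X)

    scores = np.c_[ps, prog]                     # match on both scores
    nn = NearestNeighbors(n_neighbors=1).fit(scores[treat == 0])
    _, j = nn.kneighbors(scores[treat == 1])
    att = np.mean(y[treat == 1] - y[treat == 0][j.ravel()])
    print(f"matched ATT estimate: {att:.2f} (true effect 2.0)")
    ```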

  1. The effect of tracking network configuration on GPS baseline estimates for the CASA Uno experiment

    NASA Technical Reports Server (NTRS)

    Wolf, S. Kornreich; Dixon, T. H.; Freymueller, J. T.

    1990-01-01

    The effect of the tracking network on long (greater than 100 km) GPS baseline estimates was assessed using various subsets of the global tracking network initiated for the first Central and South America (CASA Uno) experiment. It was found that the best results could be obtained with a global tracking network consisting of three U.S. stations, two sites in the southwestern Pacific, and two sites in Europe. In comparison with smaller subsets, this global network improved the baseline repeatability, the resolution of carrier phase cycle ambiguities, and the formal errors of the orbit estimates.

  2. Estimation of Enthalpy of Formation of Liquid Transition Metal Alloys: A Modified Prescription Based on Macroscopic Atom Model of Cohesion

    NASA Astrophysics Data System (ADS)

    Raju, Subramanian; Saibaba, Saroja

    2016-09-01

    The enthalpy of formation ΔH°f is an important thermodynamic quantity, which sheds significant light on fundamental cohesive and structural characteristics of an alloy. However, being difficult to determine accurately through experiments, simple estimation procedures are often desirable. In the present study, a modified prescription for estimating the enthalpy of formation of liquid transition metal alloys, ΔH°f(L), is outlined, based on the Macroscopic Atom Model of cohesion. This prescription relies on self-consistent estimation of liquid-specific model parameters, namely the electronegativity φ(L) and the bonding electron density n_b(L). Such unique identification is made through the use of well-established relationships connecting the surface tension, compressibility, and molar volume of a metallic liquid with the bonding charge density. The electronegativity is obtained through a consistent linear scaling procedure. The preliminary set of values for φ(L) and n_b(L), together with other auxiliary model parameters, is subsequently optimized to obtain good numerical agreement between calculated and experimental values of ΔH°f(L) for sixty liquid transition metal alloys. It is found that, with few exceptions, the use of liquid-specific model parameters in the Macroscopic Atom Model yields a physically consistent methodology for reliable estimation of the mixing enthalpies of liquid alloys.

  3. Consistency of ARESE II Cloud Absorption Estimates and Sampling Issues

    NASA Technical Reports Server (NTRS)

    Oreopoulos, L.; Marshak, A.; Cahalan, R. F.; Lau, William K. M. (Technical Monitor)

    2002-01-01

    Data from three cloudy days (March 3, 21, 29, 2000) of the ARM Enhanced Shortwave Experiment II (ARESE II) were analyzed. Grand averages of broadband absorptance among three sets of instruments were compared. Fractional solar absorptances were approx. 0.21-0.22 with the exception of March 3 when two sets of instruments gave values smaller by approx. 0.03-0.04. The robustness of these values was investigated by looking into possible sampling problems with the aid of 500 nm spectral fluxes. Grand averages of 500 nm apparent absorptance cover a wide range of values for these three days, namely from a large positive (approx. 0.011) average for March 3, to a small negative (approximately -0.03) for March 21, to near zero (approx. 0.01) for March 29. We present evidence suggesting that a large part of the discrepancies among the three days is due to the different nature of clouds and their non-uniform sampling. Hence, corrections to the grand average broadband absorptance values may be necessary. However, application of the known correction techniques may be precarious due to the sparsity of collocated flux measurements above and below the clouds. Our analysis leads to the conclusion that only March 29 fulfills all requirements for reliable estimates of cloud absorption, that is, the presence of thick, overcast, homogeneous clouds.

  4. Precise regional baseline estimation using a priori orbital information

    NASA Technical Reports Server (NTRS)

    Lindqwister, Ulf J.; Lichten, Stephen M.; Blewitt, Geoffrey

    1990-01-01

    A solution using GPS measurements acquired during the CASA Uno campaign has resulted in 3-4 mm horizontal daily baseline repeatability and 13 mm vertical repeatability for a 729 km baseline located in North America. The agreement with VLBI is at the level of 10-20 mm for all components. The results were obtained with the GIPSY orbit determination and baseline estimation software and are based on five single-day data arcs spanning the 20, 21, 25, 26, and 27 of January, 1988. The estimation strategy included resolving the carrier phase integer ambiguities, utilizing an optimal set of fixed reference stations, and constraining GPS orbit parameters by applying a priori information. A multiday GPS orbit and baseline solution has yielded similar 2-4 mm horizontal daily repeatabilities for the same baseline, consistent with the constrained single-day arc solutions. The application of weak constraints to the orbital state for single-day data arcs produces solutions which approach the precise orbits obtained with unconstrained multiday arc solutions.

  5. Transition probabilities of Ce I obtained from Boltzmann analysis of visible and near-infrared emission spectra

    NASA Astrophysics Data System (ADS)

    Nitz, D. E.; Curry, J. J.; Buuck, M.; DeMann, A.; Mitchell, N.; Shull, W.

    2018-02-01

    We report radiative transition probabilities for 5029 emission lines of neutral cerium within the wavelength range 417-1110 nm. Transition probabilities for only 4% of these lines have been previously measured. These results are obtained from a Boltzmann analysis of two high resolution Fourier transform emission spectra used in previous studies of cerium, obtained from the digital archives of the National Solar Observatory at Kitt Peak. The set of transition probabilities used for the Boltzmann analysis are those published by Lawler et al (2010 J. Phys. B: At. Mol. Opt. Phys. 43 085701). Comparisons of branching ratios and transition probabilities for lines common to the two spectra provide important self-consistency checks and test for the presence of self-absorption effects. Estimated 1σ uncertainties for our transition probability results range from 10% to 18%.
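
    The Boltzmann step behind such an analysis can be made concrete: if the upper levels follow a Boltzmann population at temperature T, a line's transition probability scales as I·λ·exp(E_u/kT)/g_u, so unknown A-values can be placed on an absolute scale using reference lines with known A-values. A toy illustration with entirely hypothetical line data (energies in cm⁻¹):

    ```python
    import numpy as np

    KB_CM = 0.695                       # Boltzmann constant, cm^-1 per kelvin

    def relative_A(intensity, wavelength_nm, E_upper_cm, g_upper, T):
        """Relative transition probability from emission intensity under a
        Boltzmann population of upper levels: A ~ I*lambda*exp(E_u/kT)/g_u."""
        return intensity * wavelength_nm * np.exp(E_upper_cm / (KB_CM * T)) / g_upper

    T = 4000.0                          # assumed excitation temperature (K)
    # Hypothetical lines: (intensity, lambda_nm, E_upper_cm, g_upper)
    ref = (1000.0, 520.0, 21000.0, 9)   # reference line with known A_ref
    A_ref = 2.0e6                       # s^-1, known from prior work
    new = (430.0, 610.0, 24500.0, 7)    # line with unknown A

    scale = A_ref / relative_A(*ref, T)
    print(f"A_new ~ {scale * relative_A(*new, T):.2e} s^-1")
    ```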

  6. Resting State Network Estimation in Individual Subjects

    PubMed Central

    Hacker, Carl D.; Laumann, Timothy O.; Szrama, Nicholas P.; Baldassarre, Antonello; Snyder, Abraham Z.

    2014-01-01

    Resting-state functional magnetic resonance imaging (fMRI) has been used to study brain networks associated with both normal and pathological cognitive function. The objective of this work is to reliably compute resting state network (RSN) topography in single participants. We trained a supervised classifier (multi-layer perceptron; MLP) to associate blood oxygen level dependent (BOLD) correlation maps corresponding to pre-defined seeds with specific RSN identities. Hard classification of maps obtained from a priori seeds was highly reliable across new participants. Interestingly, continuous estimates of RSN membership retained substantial residual error. This result is consistent with the view that RSNs are hierarchically organized, and therefore not fully separable into spatially independent components. After training on a priori seed-based maps, we propagated voxel-wise correlation maps through the MLP to produce estimates of RSN membership throughout the brain. The MLP generated RSN topography estimates in individuals consistent with previous studies, even in brain regions not represented in the training data. This method could be used in future studies to relate RSN topography to other measures of functional brain organization (e.g., task-evoked responses, stimulation mapping, and deficits associated with lesions) in individuals. The multi-layer perceptron was directly compared to two alternative voxel classification procedures, specifically, dual regression and linear discriminant analysis; the perceptron generated more spatially specific RSN maps than either alternative. PMID:23735260
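
    The supervised strategy can be sketched with a scikit-learn MLP standing in for the authors' classifier: train on seed-based correlation maps labeled by RSN identity, then propagate any voxel's correlation map through the network for soft membership estimates. All shapes and data below are hypothetical:

    ```python
    import numpy as np
    from sklearn.neural_network import MLPClassifier

    rng = np.random.default_rng(11)
    n_voxels, n_rsn, n_train = 200, 4, 400

    # Hypothetical training set: correlation maps from a priori seeds,
    # each labeled with the RSN identity of its seed.
    prototypes = rng.normal(size=(n_rsn, n_voxels))
    labels = rng.integers(0, n_rsn, n_train)
    maps = prototypes[labels] + 0.8 * rng.normal(size=(n_train, n_voxels))

    mlp = MLPClassifier(hidden_layer_sizes=(50,), max_iter=2000,
                        random_state=0).fit(maps, labels)

    # Propagate a new voxel-wise correlation map for soft RSN membership.
    new_map = prototypes[2] + 0.8 * rng.normal(size=n_voxels)
    print(mlp.predict_proba(new_map.reshape(1, -1)).round(2))
    ```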

  7. Measurement Consistency from Magnetic Resonance Images

    PubMed Central

    Chung, Dongjun; Chung, Moo K.; Durtschi, Reid B.; Lindell, R. Gentry; Vorperian, Houri K.

    2010-01-01

    Rationale and Objectives In quantifying medical images, length-based measurements are still obtained manually. Due to possible human error, a measurement protocol is required to guarantee the consistency of measurements. In this paper, we review various statistical techniques that can be used to determine measurement consistency. The focus is on detecting possible measurement bias and determining the robustness of the procedures to outliers. Materials and Methods We review correlation analysis, linear regression, the Bland-Altman method, the paired t-test, and analysis of variance (ANOVA). These techniques were applied to measurements, obtained by two raters, of head and neck structures from magnetic resonance images (MRI). Results Correlation analysis and linear regression were shown to be insufficient for detecting measurement inconsistency. They are also very sensitive to outliers. The widely used Bland-Altman method is a visualization technique, so it lacks numerical quantification. The paired t-test tends to be sensitive to small measurement bias. On the other hand, ANOVA performs well even under small measurement bias. Conclusion In almost all cases, using only one method is insufficient, and it is recommended to use several methods simultaneously. In general, ANOVA performs best. PMID:18790405
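
    For reference, the Bland-Altman quantities themselves are easy to compute even though the method is primarily graphical; a minimal sketch for two raters' length measurements (values hypothetical):

    ```python
    import numpy as np

    def bland_altman(a, b):
        """Mean bias and 95% limits of agreement for paired measurements."""
        diff = a - b
        bias = diff.mean()
        loa = 1.96 * diff.std(ddof=1)
        return bias, bias - loa, bias + loa

    rater1 = np.array([10.2, 11.5, 9.8, 12.1, 10.9])   # lengths (mm)
    rater2 = np.array([10.5, 11.2, 10.1, 12.4, 11.0])
    print(bland_altman(rater1, rater2))
    ```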

  8. Optimal back-extrapolation method for estimating plasma volume in humans using the indocyanine green dilution method.

    PubMed

    Polidori, David; Rowley, Clarence

    2014-07-22

    The indocyanine green dilution method is one of the methods available to estimate plasma volume, although some researchers have questioned the accuracy of this method. We developed a new, physiologically based mathematical model of indocyanine green kinetics that more accurately represents indocyanine green kinetics during the first few minutes postinjection than what is assumed when using the traditional mono-exponential back-extrapolation method. The mathematical model is used to develop an optimal back-extrapolation method for estimating plasma volume based on simulated indocyanine green kinetics obtained from the physiological model. Results from a clinical study using the indocyanine green dilution method in 36 subjects with type 2 diabetes indicate that the estimated plasma volumes are considerably lower when using the traditional back-extrapolation method than when using the proposed back-extrapolation method (mean (standard deviation) plasma volume = 26.8 (5.4) mL/kg for the traditional method vs 35.1 (7.0) mL/kg for the proposed method). The results obtained using the proposed method are more consistent with previously reported plasma volume values. Based on the more physiological representation of indocyanine green kinetics and greater consistency with previously reported plasma volume values, the new back-extrapolation method is proposed for use when estimating plasma volume using the indocyanine green dilution method.
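
    The traditional mono-exponential step the paper improves upon is simple to state: fit log-concentration against time over the early samples, extrapolate to the injection time to obtain C(0), and take plasma volume = dose / C(0). A sketch with hypothetical values; the proposed method replaces the mono-exponential assumption with a physiologically based kinetic model:

    ```python
    import numpy as np

    def plasma_volume_backextrap(t_min, conc, dose_mg):
        """Mono-exponential back-extrapolation: log-linear fit of the decay,
        extrapolated to t = 0; plasma volume = dose / C(0)."""
        slope, intercept = np.polyfit(t_min, np.log(conc), 1)
        c0 = np.exp(intercept)            # extrapolated concentration at t=0
        return dose_mg / c0               # mg / (mg/L) = litres

    t = np.array([2.0, 3.0, 4.0, 5.0, 6.0])   # minutes post-injection
    c = 8.0 * np.exp(-0.12 * t)                # hypothetical ICG conc. (mg/L)
    print(f"PV ~ {plasma_volume_backextrap(t, c, dose_mg=25.0):.2f} L")
    ```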

  9. Alternative Methods To Estimate the Number of Homeless Children and Youth. Final Report. Research Paper.

    ERIC Educational Resources Information Center

    Burt, Martha R.

    This report presents the results of a federally mandated study done to determine the best means of identifying, locating, and counting homeless children and youth, for the purpose of facilitating their successful participation in school and other educational activities. Several alternative approaches to obtaining consistent national estimates of…

  10. Estimating sediment discharge: Appendix D

    USGS Publications Warehouse

    Gray, John R.; Simões, Francisco J. M.

    2008-01-01

    Sediment-discharge measurements usually are available on a discrete or periodic basis. However, estimates of sediment transport often are needed for unmeasured periods, such as when daily or annual sediment-discharge values are sought, or when estimates of transport rates for unmeasured or hypothetical flows are required. Selected methods for estimating suspended-sediment, bed-load, bed-material-load, and total-load discharges have been presented in some detail elsewhere in this volume. The purposes of this contribution are to present some limitations and potential pitfalls associated with obtaining and using the requisite data and equations to estimate sediment discharges and to provide guidance for selecting appropriate estimating equations. Records of sediment discharge are derived from data collected with sufficient frequency to obtain reliable estimates for the computational interval and period. Most sediment-discharge records are computed at daily or annual intervals based on periodically collected data, although some partial records represent discrete or seasonal intervals such as those for flood periods. The method used to calculate sediment-discharge records is dependent on the types and frequency of available data. Records for suspended-sediment discharge computed by methods described by Porterfield (1972) are most prevalent, in part because measurement protocols and computational techniques are well established and because suspended sediment composes the bulk of sediment discharges for many rivers. Discharge records for bed load, total load, or in some cases bed-material load plus wash load are less common. Reliable estimation of sediment discharges presupposes that the data on which the estimates are based are comparable and reliable. Unfortunately, data describing a selected characteristic of sediment were not necessarily derived—collected, processed, analyzed, or interpreted—in a consistent manner. For example, bed-load data collected with

  11. Consistent Small-Sample Variances for Six Gamma-Family Measures of Ordinal Association

    ERIC Educational Resources Information Center

    Woods, Carol M.

    2009-01-01

    Gamma-family measures are bivariate ordinal correlation measures that form a family because they all reduce to Goodman and Kruskal's gamma in the absence of ties (1954). For several gamma-family indices, more than one variance estimator has been introduced. In previous research, the "consistent" variance estimator described by Cliff and…

  12. Internal Consistency, Retest Reliability, and their Implications For Personality Scale Validity

    PubMed Central

    McCrae, Robert R.; Kurtz, John E.; Yamagata, Shinji; Terracciano, Antonio

    2010-01-01

    We examined data (N = 34,108) on the differential reliability and validity of facet scales from the NEO Inventories. We evaluated the extent to which (a) psychometric properties of facet scales are generalizable across ages, cultures, and methods of measurement; and (b) validity criteria are associated with different forms of reliability. Composite estimates of facet scale stability, heritability, and cross-observer validity were broadly generalizable. Two estimates of retest reliability were independent predictors of the three validity criteria; none of three estimates of internal consistency was. Available evidence suggests the same pattern of results for other personality inventories. Internal consistency of scales can be useful as a check on data quality, but appears to be of limited utility for evaluating the potential validity of developed scales, and it should not be used as a substitute for retest reliability. Further research on the nature and determinants of retest reliability is needed. PMID:20435807

  13. Novel multireceiver communication systems configurations based on optimal estimation theory

    NASA Technical Reports Server (NTRS)

    Kumar, Rajendra

    1992-01-01

    A novel multireceiver configuration for carrier arraying and/or signal arraying is presented. The proposed configuration is obtained by formulating the carrier and/or signal arraying problem as an optimal estimation problem, and it consists of two stages. The first stage optimally estimates the various phase processes received at different receivers with coupled phase-locked loops, wherein the individual loops acquire and track their respective receivers' phase processes but are aided by each other in an optimal manner via loop-filter error signals. The proposed configuration results in the minimization of the effective radio loss at the combiner output, and thus maximizes the ratio of energy per bit to noise power spectral density. A novel adaptive algorithm for estimating the signal model parameters when these are not known a priori is also presented.

  14. Post-decision biases reveal a self-consistency principle in perceptual inference.

    PubMed

    Luu, Long; Stocker, Alan A

    2018-05-15

    Making a categorical judgment can systematically bias our subsequent perception of the world. We show that these biases are well explained by a self-consistent Bayesian observer whose perceptual inference process is causally conditioned on the preceding choice. We quantitatively validated the model and its key assumptions with a targeted set of three psychophysical experiments, focusing on a task sequence where subjects first had to make a categorical orientation judgment before estimating the actual orientation of a visual stimulus. Subjects exhibited a high degree of consistency between categorical judgment and estimate, which is difficult to reconcile with alternative models in the face of late, memory related noise. The observed bias patterns resemble the well-known changes in subjective preferences associated with cognitive dissonance, which suggests that the brain's inference processes may be governed by a universal self-consistency constraint that avoids entertaining 'dissonant' interpretations of the evidence. © 2018, Luu et al.

  15. Local dark matter and dark energy as estimated on a scale of ~1 Mpc in a self-consistent way

    NASA Astrophysics Data System (ADS)

    Chernin, A. D.; Teerikorpi, P.; Valtonen, M. J.; Dolgachev, V. P.; Domozhilova, L. M.; Byrd, G. G.

    2009-12-01

    Context: Dark energy was first detected from large distances on gigaparsec scales. If it is vacuum energy (or Einstein's Λ), it should also exist in very local space. Here we discuss its measurement on megaparsec scales of the Local Group. Aims: We combine the modified Kahn-Woltjer method for the Milky Way-M 31 binary and the HST observations of the expansion flow around the Local Group in order to study in a self-consistent way and simultaneously the local density of dark energy and the dark matter mass contained within the Local Group. Methods: A theoretical model is used that accounts for the dynamical effects of dark energy on a scale of ~1 Mpc. Results: The local dark energy density is put into the range 0.8-3.7 ρv (ρv is the globally measured density), and the Local Group mass lies within 3.1-5.8×10¹² M⊙. The lower limit of the local dark energy density, about 4/5 of the global value, is determined by the natural binding condition for the group binary and the maximal zero-gravity radius. The near coincidence of two values measured with independent methods on scales differing by ~1000 times is remarkable. The mass ~4×10¹² M⊙ and the local dark energy density ~ρv are also consistent with the expansion flow close to the Local Group, within the standard cosmological model. Conclusions: One should take into account the dark energy in dynamical mass estimation methods for galaxy groups, including the virial theorem. Our analysis gives new strong evidence in favor of Einstein's idea of the universal antigravity described by the cosmological constant.
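
    As a worked detail behind the binding condition: for a point mass M embedded in vacuum-like dark energy of density ρv, gravity and antigravity balance at the zero-gravity radius, a standard result in this line of work (the derivation below is ours, not quoted from the abstract):

```latex
a(R) = -\frac{GM}{R^{2}} + \frac{8\pi G}{3}\,\rho_v R ,
\qquad
a(R_{\mathrm{zg}}) = 0
\;\Longrightarrow\;
R_{\mathrm{zg}} = \left(\frac{3M}{8\pi\rho_v}\right)^{1/3}.
```

    A bound structure such as the Local Group binary must fit inside R_zg, which is what ties the group mass estimate to an upper limit on the local dark energy density.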

  16. A test and re-estimation of Taylor's empirical capacity-reserve relationship

    USGS Publications Warehouse

    Long, K.R.

    2009-01-01

    In 1977, Taylor proposed a constant elasticity model relating capacity choice in mines to reserves. A test of this model using a very large (n = 1,195) dataset confirms its validity but obtains significantly different estimated values for the model coefficients. Capacity is somewhat inelastic with respect to reserves, with an elasticity of 0.65 estimated for open-pit plus block-cave underground mines and 0.56 for all other underground mines. These new estimates should be useful for capacity determinations in scoping studies and as a starting point for feasibility studies. The results are robust over a wide range of deposit types, deposit sizes, and time, consistent with physical constraints on mine capacity that are largely independent of technology. © 2009 International Association for Mathematical Geology.
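
    Taylor's rule has the constant-elasticity form capacity = k · reserves^e. A small illustration with the re-estimated open-pit elasticity follows; the constant k is a placeholder, since the abstract quotes only the elasticities:

```python
# Constant-elasticity capacity-reserve model: capacity = k * reserves**e.
# e = 0.65 is the re-estimated open-pit/block-cave elasticity from the
# abstract; k is an illustrative placeholder, not a fitted value.
def mine_capacity(reserves_t, e=0.65, k=0.12):
    return k * reserves_t ** e

# Inelasticity in action: doubling reserves scales capacity by only
# 2**0.65 ~= 1.57, i.e. less than proportionally.
print(mine_capacity(2e8) / mine_capacity(1e8))  # ~1.57
```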

  17. Consistent transport coefficients in astrophysics

    NASA Technical Reports Server (NTRS)

    Fontenla, Juan M.; Rovira, M.; Ferrofontan, C.

    1986-01-01

    A consistent theory for dealing with transport phenomena in stellar atmospheres, starting with the kinetic equations and introducing three cases (LTE, partial LTE, and non-LTE), was developed. The consistent hydrodynamical equations were presented for partial-LTE, the transport coefficients defined, and a method shown to calculate them. The method is based on the numerical solution of kinetic equations considering Landau, Boltzmann, and Fokker-Planck collision terms. Finally, a set of results for the transport coefficients derived for a partially ionized hydrogen gas with radiation was shown, considering ionization and recombination as well as elastic collisions. The results obtained imply major changes in some types of theoretical model calculations and can resolve some important current problems concerning energy and mass balance in the solar atmosphere. It is shown that energy balance in the lower solar transition region can be fully explained by means of radiation losses and conductive flux.

  18. Full self-consistency versus quasiparticle self-consistency in diagrammatic approaches: Exactly solvable two-site Hubbard model

    DOE PAGES

    Kutepov, A. L.

    2015-07-22

    Self-consistent solutions of Hedin's equations (HE) for the two-site Hubbard model (HM) have been studied. They have been found for three-point vertices of increasing complexity (Γ = 1 (GW approximation), Γ₁ from first-order perturbation theory, and the exact vertex Γ(E)). Comparison is made between the cases when an additional quasiparticle (QP) approximation for Green's functions is applied during the self-consistent iterative solving of HE and when the QP approximation is not applied. Results obtained with the exact vertex are directly related to the present open question: which approximation is more advantageous for future implementations, GW + DMFT or QPGW + DMFT. It is shown that in a regime of strong correlations only the originally proposed GW + DMFT scheme is able to provide reliable results. Vertex corrections based on perturbation theory (PT) systematically improve the GW results when full self-consistency is applied. The application of QP self-consistency combined with PT vertex corrections shows problems similar to the case when the exact vertex is combined with QP self-consistency. An analysis of Ward identity violation is performed for all approximations studied in this work, and its relation to the general accuracy of the schemes used is provided.

  19. Full self-consistency versus quasiparticle self-consistency in diagrammatic approaches: exactly solvable two-site Hubbard model.

    PubMed

    Kutepov, A L

    2015-08-12

    Self-consistent solutions of Hedin's equations (HE) for the two-site Hubbard model (HM) have been studied. They have been found for three-point vertices of increasing complexity (Γ = 1 (GW approximation), Γ1 from first-order perturbation theory, and the exact vertex Γ(E)). Comparison is made between the cases when an additional quasiparticle (QP) approximation for Green's functions is applied during the self-consistent iterative solving of HE and when the QP approximation is not applied. The results obtained with the exact vertex are directly related to the present open question: which approximation is more advantageous for future implementations, GW + DMFT or QPGW + DMFT. It is shown that in a regime of strong correlations only the originally proposed GW + DMFT scheme is able to provide reliable results. Vertex corrections based on perturbation theory (PT) systematically improve the GW results when full self-consistency is applied. The application of QP self-consistency combined with PT vertex corrections shows problems similar to the case when the exact vertex is combined with QP self-consistency. An analysis of Ward identity violation is performed for all approximations studied in this work, and its relation to the general accuracy of the schemes used is provided.

  20. Performance and Self-Consistency of the Generalized Dielectric Dependent Hybrid Functional

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brawand, Nicholas P.; Govoni, Marco; Vörös, Márton

    Here, we analyze the performance of the recently proposed screened exchange constant functional (SX) on the GW100 test set, and we discuss results obtained at different levels of self-consistency. The SX functional is a generalization of dielectric-dependent hybrid functionals to finite systems; it is nonempirical and depends on the average screening of the exchange interaction. We compare results for ionization potentials obtained with SX to those of CCSD(T) calculations and experiments, and we find excellent agreement, on par with recent state-of-the-art methods based on many-body perturbation theory. Applying SX perturbatively to correct PBE eigenvalues yields improved results in most cases, except for ionic molecules, for which wave function self-consistency is instead crucial. Calculations where wave functions and the screened exchange constant (αSX) are determined self-consistently, and those where αSX is fixed to the value determined within PBE, yield results of comparable accuracy. Perturbative G0W0 corrections of eigenvalues obtained with self-consistent αSX are small on average, for all molecules in the GW100 test set.

  1. Performance and Self-Consistency of the Generalized Dielectric Dependent Hybrid Functional

    DOE PAGES

    Brawand, Nicholas P.; Govoni, Marco; Vörös, Márton; ...

    2017-05-24

    Here, we analyze the performance of the recently proposed screened exchange constant functional (SX) on the GW100 test set, and we discuss results obtained at different levels of self-consistency. The SX functional is a generalization of dielectric-dependent hybrid functionals to finite systems; it is nonempirical and depends on the average screening of the exchange interaction. We compare results for ionization potentials obtained with SX to those of CCSD(T) calculations and experiments, and we find excellent agreement, on par with recent state-of-the-art methods based on many-body perturbation theory. Applying SX perturbatively to correct PBE eigenvalues yields improved results in most cases, except for ionic molecules, for which wave function self-consistency is instead crucial. Calculations where wave functions and the screened exchange constant (αSX) are determined self-consistently, and those where αSX is fixed to the value determined within PBE, yield results of comparable accuracy. Perturbative G0W0 corrections of eigenvalues obtained with self-consistent αSX are small on average, for all molecules in the GW100 test set.

  2. On a self-consistent representation of earth models, with an application to the computing of internal flattening

    NASA Astrophysics Data System (ADS)

    Denis, C.; Ibrahim, A.

    Self-consistent parametric earth models are discussed in terms of a flexible numerical code. The density profile of each layer is represented as a polynomial, and figures of gravity, mass, mean density, hydrostatic pressure, and moment of inertia are derived. The polynomial representation also allows computation of the first-order flattening of the internal strata of some models, using a Gauss-Legendre quadrature with a rapidly converging iteration technique. Agreement with measured geophysical data is obtained, and an algorithm is developed for estimating the geometric flattening of any equidense surface identified by its fractional radius. The program can also be applied in studies of planetary and stellar models.
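
    The core computation in such a code is the integration of each layer's polynomial density profile. A minimal sketch under assumed two-layer coefficients (illustrative values, not a published model):

```python
import numpy as np

def layer_mass(coeffs, r0, r1, n=4000):
    """Mass of a spherical shell with polynomial density
    rho(r) = sum_k coeffs[k] * r**k (SI units):
    M = 4*pi * integral_{r0}^{r1} rho(r) * r^2 dr (trapezoidal rule)."""
    r = np.linspace(r0, r1, n)
    f = np.polynomial.polynomial.polyval(r, coeffs) * r**2
    integral = np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(r))
    return 4.0 * np.pi * integral

# Hypothetical two-layer toy model (kg/m^3 and metres):
layers = [([12000.0, -5.0e-4], 0.0, 3.48e6),     # "core"
          ([4500.0, -1.0e-4], 3.48e6, 6.371e6)]  # "mantle"
M = sum(layer_mass(c, a, b) for c, a, b in layers)
print(f"total mass ~ {M:.2e} kg")  # same order as Earth's 5.97e24 kg
```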

  3. Single point estimation of phenytoin dosing: a reappraisal.

    PubMed

    Koup, J R; Gibaldi, M; Godolphin, W

    1981-11-01

    A previously proposed method for estimating the phenytoin dosing requirement from a single serum sample obtained 24 hours after an intravenous loading dose (18 mg/kg) has been re-evaluated. Using more realistic values for the volume of distribution of phenytoin (0.4 to 1.2 L/kg), simulations indicate that the proposed method will fail to consistently predict dosage requirements. Additional simulations indicate that two samples obtained during the 24-hour interval following the IV loading dose could be used to predict the phenytoin dose requirement more reliably. Because of the nonlinear relationship between the phenytoin dose administration rate (R0) and the mean steady-state serum concentration (Css), small errors in the prediction of the required R0 result in much larger errors in Css.
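
    The nonlinearity referred to is the Michaelis-Menten relationship R0 = Vmax·Css/(Km + Css), i.e. Css = Km·R0/(Vmax − R0). A toy illustration with typical adult parameter values (illustrative figures, not from the study):

```python
# Michaelis-Menten elimination makes Css hypersensitive to the dosing
# rate R0 as R0 approaches Vmax. Parameter values are typical adult
# figures used for illustration only.
Vmax = 500.0   # mg/day (elimination capacity, ~7 mg/kg/day at 70 kg)
Km = 4.0       # mg/L

def css(r0_mg_per_day):
    return Km * r0_mg_per_day / (Vmax - r0_mg_per_day)

print(css(400.0))  # 16.0 mg/L
print(css(420.0))  # 21.0 mg/L: a 5% error in R0 -> ~31% error in Css
```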

  4. An eye model for uncalibrated eye gaze estimation under variable head pose

    NASA Astrophysics Data System (ADS)

    Hnatow, Justin; Savakis, Andreas

    2007-04-01

    Gaze estimation is an important component of computer vision systems that monitor human activity for surveillance, human-computer interaction, and various other applications including iris recognition. Gaze estimation methods are particularly valuable when they are non-intrusive, do not require calibration, and generalize well across users. This paper presents a novel eye model that is employed for efficiently performing uncalibrated eye gaze estimation. The proposed eye model was constructed from a geometric simplification of the eye and anthropometric data about eye feature sizes in order to circumvent the requirement of calibration procedures for each individual user. The positions of the two eye corners and the midpupil, the distance between the two eye corners, and the radius of the eye sphere are required for gaze angle calculation. The locations of the eye corners and midpupil are estimated via processing following eye detection, and the remaining parameters are obtained from anthropometric data. This eye model is easily extended to estimating eye gaze under variable head pose. The eye model was tested on still images of subjects at frontal pose (0°) and side pose (34°). An upper bound of the model's performance was obtained by manually selecting the eye feature locations. The resulting average absolute error was 2.98° for frontal pose and 2.87° for side pose. The error was consistent across subjects, which indicates that good generalization was obtained. This level of performance compares well with other gaze estimation systems that utilize a calibration procedure to measure eye features.
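
    A toy rendering of the geometric idea, simplified relative to the published model and with hypothetical pixel values: the eye-sphere centre is taken midway between the detected corners, and the midpupil's offset from it, normalised by the anthropometric eye radius, yields the gaze angle.

```python
import math

def gaze_angle_deg(corner_l_x, corner_r_x, midpupil_x, eye_radius_px):
    """Horizontal gaze angle from eye-corner and midpupil positions.

    Assumes the eye-sphere centre lies midway between the corners;
    eye_radius_px would come from anthropometric data scaled by the
    measured corner-to-corner distance.
    """
    centre_x = 0.5 * (corner_l_x + corner_r_x)
    return math.degrees(math.asin((midpupil_x - centre_x) / eye_radius_px))

print(gaze_angle_deg(100.0, 140.0, 125.0, 13.0))  # ~22.6 deg to the right
```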

  5. Multitaper scan-free spectrum estimation using a rotational shear interferometer.

    PubMed

    Lepage, Kyle; Thomson, David J; Kraut, Shawn; Brady, David J

    2006-05-01

    Multitaper methods for a scan-free spectrum estimation that uses a rotational shear interferometer are investigated. Before source spectra can be estimated the sources must be detected. A source detection algorithm based upon the multitaper F-test is proposed. The algorithm is simulated, with additive, white Gaussian detector noise. A source with a signal-to-noise ratio (SNR) of 0.71 is detected 2.9 degrees from a source with an SNR of 70.1, with a significance level of 10⁻⁴, approximately 4 orders of magnitude more significant than the source detection obtained with a standard detection algorithm. Interpolation and the use of prewhitening filters are investigated in the context of rotational shear interferometer (RSI) source spectra estimation. Finally, a multitaper spectrum estimator is proposed, simulated, and compared with untapered estimates. The multitaper estimate is found via simulation to distinguish a spectral feature with an SNR of 1.6 near a large spectral feature. The spectral feature with an SNR of 1.6 is not distinguished by the untapered spectrum estimate. The findings are consistent with the strong capability of the multitaper estimate to reduce out-of-band spectral leakage.
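
    As background for the estimator discussed here: a basic multitaper spectrum estimate averages eigenspectra computed with orthogonal Slepian (DPSS) tapers, and the harmonic F-test for line components is built from the same tapered Fourier coefficients. A minimal sketch with hypothetical data:

```python
import numpy as np
from scipy.signal.windows import dpss

def multitaper_psd(x, nw=4.0, k=7):
    """Average of k eigenspectra using DPSS (Slepian) tapers."""
    tapers = dpss(len(x), nw, Kmax=k)                  # shape (k, n)
    eigspecs = np.abs(np.fft.rfft(tapers * x, axis=1)) ** 2
    return eigspecs.mean(axis=0)

rng = np.random.default_rng(0)
n = 1024
t = np.arange(n)
x = np.sin(2 * np.pi * 0.2 * t) + rng.normal(size=n)   # tone + white noise
psd = multitaper_psd(x)
print(psd.argmax() / n)  # ~0.2, recovering the embedded tone frequency
```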

  6. Multitaper scan-free spectrum estimation using a rotational shear interferometer

    NASA Astrophysics Data System (ADS)

    Lepage, Kyle; Thomson, David J.; Kraut, Shawn; Brady, David J.

    2006-05-01

    Multitaper methods for a scan-free spectrum estimation that uses a rotational shear interferometer are investigated. Before source spectra can be estimated the sources must be detected. A source detection algorithm based upon the multitaper F-test is proposed. The algorithm is simulated, with additive, white Gaussian detector noise. A source with a signal-to-noise ratio (SNR) of 0.71 is detected 2.9° from a source with an SNR of 70.1, with a significance level of 10⁻⁴, ~4 orders of magnitude more significant than the source detection obtained with a standard detection algorithm. Interpolation and the use of prewhitening filters are investigated in the context of rotational shear interferometer (RSI) source spectra estimation. Finally, a multitaper spectrum estimator is proposed, simulated, and compared with untapered estimates. The multitaper estimate is found via simulation to distinguish a spectral feature with an SNR of 1.6 near a large spectral feature. The spectral feature with an SNR of 1.6 is not distinguished by the untapered spectrum estimate. The findings are consistent with the strong capability of the multitaper estimate to reduce out-of-band spectral leakage.

  7. The influence of random element displacement on DOA estimates obtained with (Khatri-Rao-)root-MUSIC.

    PubMed

    Inghelbrecht, Veronique; Verhaevert, Jo; van Hecke, Tanja; Rogier, Hendrik

    2014-11-11

    Although a wide range of direction of arrival (DOA) estimation algorithms has been described for a diverse range of array configurations, no specific stochastic analysis framework has been established to assess the probability density function of the error on DOA estimates due to random errors in the array geometry. Therefore, we propose a stochastic collocation method that relies on a generalized polynomial chaos expansion to connect the statistical distribution of random position errors to the resulting distribution of the DOA estimates. We apply this technique to the conventional root-MUSIC and the Khatri-Rao-root-MUSIC methods. According to Monte-Carlo simulations, this novel approach yields a speedup by a factor of more than 100 in terms of CPU-time for a one-dimensional case and by a factor of 56 for a two-dimensional case.
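
    The mechanics of the speedup: instead of thousands of Monte Carlo draws, the DOA estimator is evaluated only at a handful of quadrature nodes of the position-error distribution, and the statistics follow from the quadrature weights of the polynomial chaos expansion. A schematic sketch in which the DOA-error function is a linearised placeholder, not an actual root-MUSIC run:

```python
import numpy as np

def doa_error_deg(dx_wavelengths):
    """Placeholder for 're-run (KR-)root-MUSIC with an element displaced
    by dx'; the 40 deg/wavelength sensitivity is purely illustrative."""
    return 40.0 * dx_wavelengths

sigma = 0.01  # std of the Gaussian element-position error (wavelengths)
nodes, weights = np.polynomial.hermite_e.hermegauss(7)  # 7 solver runs
errs = np.array([doa_error_deg(sigma * z) for z in nodes])
norm = np.sqrt(2.0 * np.pi)           # weights integrate against exp(-z^2/2)
mean = np.sum(weights * errs) / norm
var = np.sum(weights * errs**2) / norm - mean**2
print(mean, np.sqrt(var))             # DOA-error statistics from 7 runs
```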

  8. Consistent cosmic bubble embeddings

    NASA Astrophysics Data System (ADS)

    Haque, S. Shajidul; Underwood, Bret

    2017-05-01

    The Raychaudhuri equation for null rays is a powerful tool for finding consistent embeddings of cosmological bubbles in a background spacetime in a way that is largely independent of the matter content. We find that spatially flat or positively curved thin wall bubbles surrounded by a cosmological background must have a Hubble expansion that is either contracting or expanding slower than the background, which is a more stringent constraint than those obtained by the usual Israel thin-wall formalism. Similarly, a cosmological bubble surrounded by Schwarzschild space, occasionally used as a simple "Swiss cheese" model of inhomogeneities in an expanding universe, must be contracting (for spatially flat and positively curved bubbles) and bounded in size by the apparent horizon.

  9. Internal consistency of the Spanish health literacy test (TOFHLA-SPR) for Puerto Rico.

    PubMed

    Rivero-Méndez, Marta; Suárez, Erick; Solís-Báez, Solymar S; Hernández, Gloryvee; Cordero, Wanda; Vázquez, Irma; Medina, Zullettevy; Padilla, Raisa; Flores, Aida; Bonilla, José Luis; Holzemer, William L

    2010-03-01

    Low functional health literacy has been related to poor viral control and lower levels of ART adherence in people living with HIV/AIDS. Research in functional health literacy among people living with HIV/AIDS in Puerto Rico (PR) is an unexplored area. The purpose of this paper is to describe how the full-length Spanish Version of the Test of Functional Health Literacy in Adults (TOFHLA-S) scale was adapted to PR. Thirty participants (women = 16, men = 14) completed a basic demographic questionnaire and the TOFHLA-S and participated in an interview. Analyses were performed to examine the information provided by participants and the internal consistency of the TOFHLA-S. The mean age was 47.7 years (range 34-77). Thirty-seven percent had less than 12 years of formal schooling and 43% reported having education above high school. Changes suggested by participants included increasing the font size from 14 to 16 points for better readability and changes/simplification of several words in order to make them colloquial and comprehensible for the PR context. The reliability coefficient obtained for this scale was strong (estimated alpha = 0.95); however, differences were observed by subtype: numeracy (estimated alpha = 0.819) vs. comprehension (estimated alpha = 0.953). Based on this process, we have adapted the original version of the TOFHLA-S; the new full-length version, the TOFHLA-SPR, is now valid for further research and for testing levels of functional health literacy in a larger sample in PR.
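
    The internal-consistency figures quoted are Cronbach's alpha. For reference, a minimal implementation applied to a hypothetical item-score matrix:

```python
import numpy as np

def cronbach_alpha(items):
    """items: (n_subjects, n_items) array of item scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_var_sum = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1.0 - item_var_sum / total_var)

# Hypothetical scores for 5 subjects on 3 items:
scores = np.array([[1, 1, 1], [1, 0, 1], [0, 0, 1], [1, 1, 0], [0, 0, 0]])
print(cronbach_alpha(scores))  # ~0.46 on this toy data
```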

  10. Survival analysis for the missing censoring indicator model using kernel density estimation techniques

    PubMed Central

    Subramanian, Sundarraman

    2008-01-01

    This article concerns asymptotic theory for a new estimator of a survival function in the missing censoring indicator model of random censorship. Specifically, the large sample results for an inverse probability-of-non-missingness weighted estimator of the cumulative hazard function, so far not available, are derived, including an almost sure representation with rate for a remainder term, and uniform strong consistency with rate of convergence. The estimator is based on a kernel estimate for the conditional probability of non-missingness of the censoring indicator. Expressions for its bias and variance, in turn leading to an expression for the mean squared error as a function of the bandwidth, are also obtained. The corresponding estimator of the survival function, whose weak convergence is derived, is asymptotically efficient. A numerical study, comparing the performances of the proposed and two other currently existing efficient estimators, is presented. PMID:18953423
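
    A compressed sketch of the estimator family described, simplified in that the kernel conditions the non-missingness probability on the observed time alone, and with hypothetical data:

```python
import numpy as np

def ipw_nelson_aalen(T, delta, xi, bandwidth=0.5):
    """Inverse-probability-of-non-missingness weighted cumulative hazard.

    T     : observed times
    delta : censoring indicator (1 = event), meaningful only where xi = 1
    xi    : 1 if the censoring indicator is non-missing, else 0
    The non-missingness probability pi(t) is estimated with a
    Nadaraya-Watson (Gaussian kernel) smoother.
    """
    T, xi = np.asarray(T, float), np.asarray(xi, float)

    def pi_hat(t):
        w = np.exp(-0.5 * ((T - t) / bandwidth) ** 2)
        return np.sum(w * xi) / np.sum(w)

    H, out = 0.0, []
    for i in np.argsort(T):
        at_risk = np.sum(T >= T[i])
        if xi[i] == 1 and delta[i] == 1:
            H += 1.0 / (pi_hat(T[i]) * at_risk)  # weighted NA increment
        out.append((T[i], H))
    return out

T = [1.2, 0.7, 2.5, 1.9, 3.1]
delta = [1, 0, 1, 0, 1]   # the xi = 0 entry stands in for a missing value
xi = [1, 1, 0, 1, 1]
print(ipw_nelson_aalen(T, delta, xi)[-1])
```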

  11. Survival analysis for the missing censoring indicator model using kernel density estimation techniques.

    PubMed

    Subramanian, Sundarraman

    2006-01-01

    This article concerns asymptotic theory for a new estimator of a survival function in the missing censoring indicator model of random censorship. Specifically, the large sample results for an inverse probability-of-non-missingness weighted estimator of the cumulative hazard function, so far not available, are derived, including an almost sure representation with rate for a remainder term, and uniform strong consistency with rate of convergence. The estimator is based on a kernel estimate for the conditional probability of non-missingness of the censoring indicator. Expressions for its bias and variance, in turn leading to an expression for the mean squared error as a function of the bandwidth, are also obtained. The corresponding estimator of the survival function, whose weak convergence is derived, is asymptotically efficient. A numerical study, comparing the performances of the proposed and two other currently existing efficient estimators, is presented.

  12. Bayesian phylogenetic estimation of fossil ages.

    PubMed

    Drummond, Alexei J; Stadler, Tanja

    2016-07-19

    Recent advances have allowed for both morphological fossil evidence and molecular sequences to be integrated into a single combined inference of divergence dates under the rule of Bayesian probability. In particular, the fossilized birth-death tree prior and the Lewis-Mk model of discrete morphological evolution allow for the estimation of both divergence times and phylogenetic relationships between fossil and extant taxa. We exploit this statistical framework to investigate the internal consistency of these models by producing phylogenetic estimates of the age of each fossil in turn, within two rich and well-characterized datasets of fossil and extant species (penguins and canids). We find that the estimation accuracy of fossil ages is generally high with credible intervals seldom excluding the true age and median relative error in the two datasets of 5.7% and 13.2%, respectively. The median relative standard deviation (RSD) was 9.2% and 7.2%, respectively, suggesting good precision, although with some outliers. In fact, in the two datasets we analyse, the phylogenetic estimate of fossil age is on average less than 2 Myr from the mid-point age of the geological strata from which it was excavated. The high level of internal consistency found in our analyses suggests that the Bayesian statistical model employed is an adequate fit for both the geological and morphological data, and provides evidence from real data that the framework used can accurately model the evolution of discrete morphological traits coded from fossil and extant taxa. We anticipate that this approach will have diverse applications beyond divergence time dating, including dating fossils that are temporally unconstrained, testing of the 'morphological clock', and for uncovering potential model misspecification and/or data errors when controversial phylogenetic hypotheses are obtained based on combined divergence dating analyses. This article is part of the themed issue 'Dating species divergences using

  13. Bayesian phylogenetic estimation of fossil ages

    PubMed Central

    Drummond, Alexei J.; Stadler, Tanja

    2016-01-01

    Recent advances have allowed for both morphological fossil evidence and molecular sequences to be integrated into a single combined inference of divergence dates under the rule of Bayesian probability. In particular, the fossilized birth–death tree prior and the Lewis-Mk model of discrete morphological evolution allow for the estimation of both divergence times and phylogenetic relationships between fossil and extant taxa. We exploit this statistical framework to investigate the internal consistency of these models by producing phylogenetic estimates of the age of each fossil in turn, within two rich and well-characterized datasets of fossil and extant species (penguins and canids). We find that the estimation accuracy of fossil ages is generally high with credible intervals seldom excluding the true age and median relative error in the two datasets of 5.7% and 13.2%, respectively. The median relative standard deviation (RSD) was 9.2% and 7.2%, respectively, suggesting good precision, although with some outliers. In fact, in the two datasets we analyse, the phylogenetic estimate of fossil age is on average less than 2 Myr from the mid-point age of the geological strata from which it was excavated. The high level of internal consistency found in our analyses suggests that the Bayesian statistical model employed is an adequate fit for both the geological and morphological data, and provides evidence from real data that the framework used can accurately model the evolution of discrete morphological traits coded from fossil and extant taxa. We anticipate that this approach will have diverse applications beyond divergence time dating, including dating fossils that are temporally unconstrained, testing of the ‘morphological clock’, and for uncovering potential model misspecification and/or data errors when controversial phylogenetic hypotheses are obtained based on combined divergence dating analyses. This article is part of the themed issue ‘Dating species divergences

  14. Optimal back-extrapolation method for estimating plasma volume in humans using the indocyanine green dilution method

    PubMed Central

    2014-01-01

    Background The indocyanine green dilution method is one of the methods available to estimate plasma volume, although some researchers have questioned the accuracy of this method. Methods We developed a new, physiologically based mathematical model of indocyanine green kinetics that more accurately represents indocyanine green kinetics during the first few minutes postinjection than what is assumed when using the traditional mono-exponential back-extrapolation method. The mathematical model is used to develop an optimal back-extrapolation method for estimating plasma volume based on simulated indocyanine green kinetics obtained from the physiological model. Results Results from a clinical study using the indocyanine green dilution method in 36 subjects with type 2 diabetes indicate that the estimated plasma volumes are considerably lower when using the traditional back-extrapolation method than when using the proposed back-extrapolation method (mean (standard deviation) plasma volume = 26.8 (5.4) mL/kg for the traditional method vs 35.1 (7.0) mL/kg for the proposed method). The results obtained using the proposed method are more consistent with previously reported plasma volume values. Conclusions Based on the more physiological representation of indocyanine green kinetics and greater consistency with previously reported plasma volume values, the new back-extrapolation method is proposed for use when estimating plasma volume using the indocyanine green dilution method. PMID:25052018

  15. Attitude Estimation or Quaternion Estimation?

    NASA Technical Reports Server (NTRS)

    Markley, F. Landis

    2003-01-01

    The attitude of spacecraft is represented by a 3x3 orthogonal matrix with unity determinant, which belongs to the three-dimensional special orthogonal group SO(3). The fact that all three-parameter representations of SO(3) are singular or discontinuous for certain attitudes has led to the use of higher-dimensional nonsingular parameterizations, especially the four-component quaternion. In attitude estimation, we are faced with the alternatives of using an attitude representation that is either singular or redundant. Estimation procedures fall into three broad classes. The first estimates a three-dimensional representation of attitude deviations from a reference attitude parameterized by a higher-dimensional nonsingular parameterization. The deviations from the reference are assumed to be small enough to avoid any singularity or discontinuity of the three-dimensional parameterization. The second class, which estimates a higher-dimensional representation subject to enough constraints to leave only three degrees of freedom, is difficult to formulate and apply consistently. The third class estimates a representation of SO(3) with more than three dimensions, treating the parameters as independent. We refer to the most common member of this class as quaternion estimation, to contrast it with attitude estimation. We analyze the first and third of these approaches in the context of an extended Kalman filter with simplified kinematics and measurement models.
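
    To make the first class of estimators concrete: the filter carries a reference quaternion and estimates a three-component attitude deviation, which is then folded back into the reference multiplicatively, and the quaternion is renormalised. A minimal sketch; the conventions and helper names are illustrative, not from the paper:

```python
import numpy as np

def quat_mult(q, p):
    """Hamilton product, scalar-last convention [x, y, z, w]."""
    x1, y1, z1, w1 = q
    x2, y2, z2, w2 = p
    return np.array([
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
    ])

def apply_small_rotation(q_ref, dtheta):
    """Fold a small estimated attitude deviation dtheta (rad, 3-vector)
    back into the reference quaternion and restore unit norm."""
    dq = np.concatenate([0.5 * dtheta, [1.0]])  # small-angle quaternion
    q = quat_mult(q_ref, dq)
    return q / np.linalg.norm(q)

q = np.array([0.0, 0.0, 0.0, 1.0])              # identity attitude
print(apply_small_rotation(q, np.array([0.01, -0.02, 0.005])))
```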

  16. Possibilities for Estimating Horizontal Electrical Currents in Active Regions on the Sun

    NASA Astrophysics Data System (ADS)

    Fursyak, Yu. A.; Abramenko, V. I.

    2017-12-01

    Part of the "free" magnetic energy associated with electrical current systems in the active region (AR) is released during solar flares. This proposition is widely accepted and it has stimulated interest in detecting electrical currents in active regions. The vertical component of an electric current in the photosphere can be found by observing the transverse magnetic field. At present, however, there are no direct methods for calculating transverse electric currents based on these observations. These calculations require information on the field vector measured simultaneously at several levels in the photosphere, which has not yet been done with solar instrumentation. In this paper we examine an approach to calculating the structure of the square of the density of a transverse electrical current based on a magnetogram of the vertical component of the magnetic field in the AR. Data obtained with the Helioseismic and Magnetic Imager (HMI) on board the Solar Dynamics Observatory (SDO) for the AR of NOAA AR 11283 are used. It is shown that (1) the observed variations in the magnetic field of a sunspot and the proposed estimate of the density of an annular horizontal current around the spot are consistent with Faraday's law and (2) the resulting estimates of the magnitude of the square of the density of the horizontal current, j⊥² = (0.002–0.004) A²/m⁴, are consistent with previously obtained values of the density of a vertical current in the photosphere. Thus, the proposed estimate is physically significant and this method can be used to estimate the density and structure of transverse electrical currents in the photosphere.

  17. Obtaining and properties of an asymmetric ion transport membrane for separating oxygen from air

    NASA Astrophysics Data System (ADS)

    Solovieva, A. A.; Kulbakin, I. V.

    2018-04-01

    The bilayer oxygen-permeable membrane, consisting of a thin-film dense composite based on Co3O4 - 36 wt. % Bi2O3 and a porous ceramic substrate of Co2SiO4, was synthesized and characterized. A route for obtaining porous ceramic based on cobalt silicate was found, and the microstructure and mechanical properties of the porous ceramic were studied. Layered casting with post-pressing was used to cover the surface of the porous Co2SiO4 support with the Co3O4 - 36 wt. % Bi2O3 film. Transport properties of the asymmetric membrane were studied, the kinetic features of oxygen transport were established, and the characteristic thickness of the membrane was estimated. Methods to prevent high-temperature creep of ion transport membranes based on solid/molten oxides, which are promising for obtaining pure oxygen from air, are proposed and discussed.

  18. Bias of health estimates obtained from chronic disease and risk factor surveillance systems using telephone population surveys in Australia: results from a representative face-to-face survey in Australia from 2010 to 2013.

    PubMed

    Dal Grande, Eleonora; Chittleborough, Catherine R; Campostrini, Stefano; Taylor, Anne W

    2016-04-18

    Emerging communication technologies have had an impact on population-based telephone surveys worldwide. Our objective was to examine the potential biases of health estimates in South Australia, a state of Australia, obtained via current landline telephone survey methodologies, and to report on the impact of mobile-only households on household surveys. Data from an annual multi-stage, systematic, clustered-area, face-to-face population survey, the Health Omnibus Survey (approximately 3000 interviews annually), included questions about telephone ownership to assess the population that was non-contactable by current telephone sampling methods (2006 to 2013). Univariable analyses (2010 to 2013) and trend analyses were conducted for sociodemographic and health indicator variables in relation to telephone status. An assessment of the relative coverage bias (RCB) of two hypothetical telephone samples was undertaken by examining the prevalence estimates of health status and health risk behaviours (2010 to 2013): directory-listed numbers, consisting mainly of landline telephone numbers and a small proportion of mobile telephone numbers; and a random digit dialling (RDD) sample of landline telephone numbers, which excludes mobile-only households. Telephone (landline and mobile) coverage in South Australia is very high (97%). Mobile telephone ownership increased slightly (7.4%), rising from 89.7% in 2006 to 96.3% in 2013; mobile-only households increased by 431% over the eight-year period, from 5.2% in 2006 to 27.6% in 2013. Only half of the households have either a mobile or landline number listed in the telephone directory. There were small differences in the prevalence estimates for current asthma, arthritis, diabetes and obesity between the hypothetical telephone samples and the overall sample. However, the prevalence estimate for diabetes was slightly underestimated (RCB value of -0.077) in 2013. Mixed RCB results were found for having a mental health condition for both telephone samples. Current
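
    The relative coverage bias used in such analyses compares a prevalence estimated from the covered subpopulation with the full-sample value, RCB = (p_covered − p_total)/p_total. An illustration reproducing the order of magnitude of the diabetes figure; the prevalences themselves are made up:

```python
def rcb(p_covered, p_total):
    """Relative coverage bias of an estimate confined to the covered
    (e.g. landline-reachable) subpopulation."""
    return (p_covered - p_total) / p_total

# Illustrative prevalences only: a 9.0% full-sample diabetes prevalence
# estimated as 8.3% from a landline frame gives RCB ~= -0.077, matching
# the size of underestimate reported for diabetes in 2013.
print(rcb(0.083, 0.090))  # -0.078
```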

  19. Precipitation estimation in mountainous terrain using multivariate geostatistics. Part I: structural analysis

    USGS Publications Warehouse

    Hevesi, Joseph A.; Istok, Jonathan D.; Flint, Alan L.

    1992-01-01

    Values of average annual precipitation (AAP) are desired for hydrologic studies within a watershed containing Yucca Mountain, Nevada, a potential site for a high-level nuclear-waste repository. Reliable values of AAP are not yet available for most areas within this watershed because of a sparsity of precipitation measurements and the need to obtain measurements over a sufficient length of time. To estimate AAP over the entire watershed, historical precipitation data and station elevations were obtained from a network of 62 stations in southern Nevada and southeastern California. Multivariate geostatistics (cokriging) was selected as an estimation method because of a significant (p = 0.05) correlation of r = 0.75 between the natural log of AAP and station elevation. A sample direct variogram for the transformed variable, TAAP = ln[(AAP) × 1000], was fitted with an isotropic, spherical model defined by a small nugget value of 5000, a range of 190 000 ft, and a sill value equal to the sample variance of 163 151. Elevations for 1531 additional locations were obtained from topographic maps to improve the accuracy of cokriged estimates. A sample direct variogram for elevation was fitted with an isotropic model consisting of a nugget value of 5500 and three nested transition structures: a Gaussian structure with a range of 61 000 ft, a spherical structure with a range of 70 000 ft, and a quasi-stationary, linear structure. The use of an isotropic, stationary model for elevation was considered valid within a sliding-neighborhood radius of 120 000 ft. The problem of fitting a positive-definite, nonlinear model of coregionalization to an inconsistent sample cross variogram for TAAP and elevation was solved by a modified use of the Cauchy-Schwarz inequality. A selected cross-variogram model consisted of two nested structures: a Gaussian structure with a range of 61 000 ft and a spherical structure with a range of 190 000 ft. Cross validation was used for model selection and for
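
    For reference, the spherical model quoted for TAAP has the standard form below; a short sketch evaluates it with the abstract's nugget, sill, and range:

```python
import numpy as np

def spherical_variogram(h, nugget, sill, a):
    """gamma(h) = nugget + (sill - nugget)*(1.5*(h/a) - 0.5*(h/a)**3)
    for 0 < h <= a, levelling off at the sill for h > a.
    (Strictly gamma(0) = 0; the nugget is the limit from above.)"""
    s = np.minimum(np.asarray(h, float) / a, 1.0)
    return nugget + (sill - nugget) * (1.5 * s - 0.5 * s**3)

# Direct-variogram model for TAAP from the abstract (distances in feet):
h = np.array([10_000.0, 95_000.0, 190_000.0, 250_000.0])
print(spherical_variogram(h, nugget=5000.0, sill=163151.0, a=190_000.0))
```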

  20. Short communication: Development of an equation for estimating methane emissions of dairy cows from milk Fourier transform mid-infrared spectra by using reference data obtained exclusively from respiration chambers.

    PubMed

    Vanlierde, A; Soyeurt, H; Gengler, N; Colinet, F G; Froidmont, E; Kreuzer, M; Grandl, F; Bell, M; Lund, P; Olijhoek, D W; Eugène, M; Martin, C; Kuhla, B; Dehareng, F

    2018-05-09

    Evaluation and mitigation of enteric methane (CH4) emissions from ruminant livestock, in particular from dairy cows, have acquired global importance for sustainable, climate-smart cattle production. Based on CH4 reference measurements obtained with the SF6 tracer technique to determine ruminal CH4 production, a current equation permits evaluation of individual daily CH4 emissions of dairy cows based on milk Fourier transform mid-infrared (FT-MIR) spectra. However, the respiration chamber (RC) technique is considered to be more accurate than SF6 to measure CH4 production from cattle. This study aimed to develop an equation that allows estimating CH4 emissions of lactating cows recorded in an RC from corresponding milk FT-MIR spectra and to challenge its robustness and relevance through validation processes and its application on a milk spectral database. This would permit confirming the conclusions drawn with the existing equation based on SF6 reference measurements regarding the potential to estimate daily CH4 emissions of dairy cows from milk FT-MIR spectra. A total of 584 RC reference CH4 measurements (mean ± standard deviation of 400 ± 72 g of CH4/d) and corresponding standardized milk mid-infrared spectra were obtained from 148 individual lactating cows between 7 and 321 d in milk in 5 European countries (Germany, Switzerland, Denmark, France, and Northern Ireland). The developed equation based on RC measurements showed calibration and cross-validation coefficients of determination of 0.65 and 0.57, respectively, which are lower than those obtained earlier by the equation based on 532 SF6 measurements (0.74 and 0.70, respectively). This means that the RC-based model is unable to explain the variability observed in the corresponding reference data as well as the SF6-based model. The standard errors of calibration and cross-validation were lower for the RC model (43 and 47 g/d vs. 66 and 70 g/d for the SF6 version, respectively), indicating

  1. Thermodynamic criteria for estimating the kinetic parameters of catalytic reactions

    NASA Astrophysics Data System (ADS)

    Mitrichev, I. I.; Zhensa, A. V.; Kol'tsova, E. M.

    2017-01-01

    Kinetic parameters are estimated using two criteria in addition to the traditional criterion that considers the consistency between experimental and modeled conversion data: thermodynamic consistency and the consistency with entropy production (i.e., the absolute rate of the change in entropy due to exchange with the environment is consistent with the rate of entropy production in the steady state). A special procedure is developed and executed on a computer to achieve the thermodynamic consistency of a set of kinetic parameters with respect to both the standard entropy of a reaction and the standard enthalpy of a reaction. A problem of multi-criterion optimization, reduced to a single-criterion problem by summing weighted values of the three criteria listed above, is solved. Using the reaction of NO reduction with CO on a platinum catalyst as an example, it is shown that the set of parameters proposed by D.B. Mantri and P. Aghalayam gives much worse agreement with experimental values than the set obtained on the basis of three criteria: the sum of the squares of deviations for conversion, the thermodynamic consistency, and the consistency with entropy production.
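
    Schematically, the scalarised objective has the weighted-sum form below, where the second and third terms penalise violations of the thermodynamic and entropy-production criteria; the notation is ours, not the paper's:

```latex
J(\theta) = w_1 \sum_i \left( X_i^{\mathrm{exp}} - X_i^{\mathrm{mod}}(\theta) \right)^2
          + w_2 \, \Phi_{\mathrm{thermo}}(\theta)
          + w_3 \, \Phi_{\mathrm{entropy}}(\theta).
```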

  2. Surface Estimation, Variable Selection, and the Nonparametric Oracle Property.

    PubMed

    Storlie, Curtis B; Bondell, Howard D; Reich, Brian J; Zhang, Hao Helen

    2011-04-01

    Variable selection for multivariate nonparametric regression is an important, yet challenging, problem due, in part, to the infinite dimensionality of the function space. An ideal selection procedure should be automatic, stable, easy to use, and have desirable asymptotic properties. In particular, we define a selection procedure to be nonparametric oracle (np-oracle) if it consistently selects the correct subset of predictors and at the same time estimates the smooth surface at the optimal nonparametric rate, as the sample size goes to infinity. In this paper, we propose a model selection procedure for nonparametric models, and explore the conditions under which the new method enjoys the aforementioned properties. Developed in the framework of smoothing spline ANOVA, our estimator is obtained via solving a regularization problem with a novel adaptive penalty on the sum of functional component norms. Theoretical properties of the new estimator are established. Additionally, numerous simulated and real examples further demonstrate that the new approach substantially outperforms other existing methods in the finite sample setting.
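
    In schematic form, the estimator solves a penalised least-squares problem over the smoothing spline ANOVA decomposition f = b + Σ_j f_j, with an adaptive penalty on the component norms (notation simplified from the paper):

```latex
\hat{f} = \arg\min_{f} \; \frac{1}{n} \sum_{i=1}^{n} \left( y_i - f(x_i) \right)^2
          + \lambda \sum_{j=1}^{p} w_j \, \lVert f_j \rVert ,
```

    where larger weights w_j (for example, inverse norms of a pilot estimate) shrink irrelevant components exactly to zero, which is what drives the selection consistency.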

  3. Analysis of area level and unit level models for small area estimation in forest inventories assisted with LiDAR auxiliary information.

    PubMed

    Mauro, Francisco; Monleon, Vicente J; Temesgen, Hailemariam; Ford, Kevin R

    2017-01-01

    Forest inventories require estimates and measures of uncertainty for subpopulations such as management units. These units often hold a small sample size, so they should be regarded as small areas. When auxiliary information is available, different small area estimation methods have been proposed to obtain reliable estimates for small areas. Unit level empirical best linear unbiased predictors (EBLUP) based on plot or grid unit level models have been studied more thoroughly than area level EBLUPs, where the modelling occurs at the management unit scale. Area level EBLUPs do not require a precise plot positioning and allow the use of variable radius plots, thus reducing fieldwork costs. However, their performance has not been examined thoroughly. We compared unit level and area level EBLUPs, using LiDAR auxiliary information collected for inventorying a 98,104 ha coastal coniferous forest. Unit level models were consistently more accurate than area level EBLUPs, and area level EBLUPs were consistently more accurate than field estimates except for large management units that held a large sample. For stand density, volume, basal area, quadratic mean diameter, mean height and Lorey's height, root mean squared errors (rmses) of estimates obtained using area level EBLUPs were, on average, 1.43, 2.83, 2.09, 1.40, 1.32 and 1.64 times larger than those based on unit level estimates, respectively. Similarly, direct field estimates had rmses that were, on average, 1.37, 1.45, 1.17, 1.17, 1.26, and 1.38 times larger than rmses of area level EBLUPs. Therefore, area level models can lead to substantial gains in accuracy compared to direct estimates, and unit level models lead to very important gains in accuracy compared to area level models, potentially justifying the additional costs of obtaining accurate field plot coordinates.
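
    For orientation, the area-level EBLUP discussed is typically of the Fay-Herriot form, shrinking each management unit's direct estimate toward a regression on the LiDAR auxiliaries (standard formulation, not quoted from the paper):

```latex
\hat{\theta}_i = \hat{\gamma}_i \, \bar{y}_i + (1 - \hat{\gamma}_i) \, x_i^{\top} \hat{\beta},
\qquad
\hat{\gamma}_i = \frac{\hat{\sigma}_v^2}{\hat{\sigma}_v^2 + \psi_i},
```

    so units with large sampling variance ψ_i borrow more strength from the model, which is why direct estimates lose out mainly in small units.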

  4. Analysis of area level and unit level models for small area estimation in forest inventories assisted with LiDAR auxiliary information

    PubMed Central

    Monleon, Vicente J.; Temesgen, Hailemariam; Ford, Kevin R.

    2017-01-01

    Forest inventories require estimates and measures of uncertainty for subpopulations such as management units. These units often hold a small sample size, so they should be regarded as small areas. When auxiliary information is available, different small area estimation methods have been proposed to obtain reliable estimates for small areas. Unit level empirical best linear unbiased predictors (EBLUP) based on plot or grid unit level models have been studied more thoroughly than area level EBLUPs, where the modelling occurs at the management unit scale. Area level EBLUPs do not require a precise plot positioning and allow the use of variable radius plots, thus reducing fieldwork costs. However, their performance has not been examined thoroughly. We compared unit level and area level EBLUPs, using LiDAR auxiliary information collected for inventorying a 98,104 ha coastal coniferous forest. Unit level models were consistently more accurate than area level EBLUPs, and area level EBLUPs were consistently more accurate than field estimates except for large management units that held a large sample. For stand density, volume, basal area, quadratic mean diameter, mean height and Lorey’s height, root mean squared errors (rmses) of estimates obtained using area level EBLUPs were, on average, 1.43, 2.83, 2.09, 1.40, 1.32 and 1.64 times larger than those based on unit level estimates, respectively. Similarly, direct field estimates had rmses that were, on average, 1.37, 1.45, 1.17, 1.17, 1.26, and 1.38 times larger than rmses of area level EBLUPs. Therefore, area level models can lead to substantial gains in accuracy compared to direct estimates, and unit level models lead to very important gains in accuracy compared to area level models, potentially justifying the additional costs of obtaining accurate field plot coordinates. PMID:29216290

  5. On the role of dimensionality and sample size for unstructured and structured covariance matrix estimation

    NASA Technical Reports Server (NTRS)

    Morgera, S. D.; Cooper, D. B.

    1976-01-01

    The experimental observation that a surprisingly small sample size vis-a-vis dimension is needed to achieve good signal-to-interference ratio (SIR) performance with an adaptive predetection filter is explained. The adaptive filter requires estimates, obtained by a recursive stochastic algorithm, of the inverse of the filter input data covariance matrix. The SIR performance as a function of sample size is compared for the situations where the covariance matrix estimates are of unstructured (generalized) form and of structured (finite Toeplitz) form; the latter case is consistent with weak stationarity of the input data stochastic process.
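
    A minimal sketch of the structured alternative: under weak stationarity the covariance is Toeplitz, so the sample covariance can be averaged along its diagonals, cutting the number of free parameters from d(d+1)/2 to d and easing the sample-size requirement. Illustrative code, not the paper's algorithm:

```python
import numpy as np
from scipy.linalg import toeplitz

def toeplitz_covariance(X):
    """Average the sample covariance along its diagonals, as appropriate
    for weakly stationary data. X has shape (n_samples, dim)."""
    S = np.cov(X, rowvar=False)
    r = np.array([np.diag(S, k).mean() for k in range(S.shape[0])])
    return toeplitz(r)

rng = np.random.default_rng(1)
X = rng.normal(size=(30, 8))         # small sample relative to dimension
print(toeplitz_covariance(X).shape)  # (8, 8), only 8 free parameters
```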

  6. Consistency.

    PubMed

    Levin, Roger

    2005-09-01

    Consistency is a reflection of having the right model, the right systems and the right implementation. As Vince Lombardi, the legendary coach of the Green Bay Packers, once said, "You don't do things right once in a while. You do them right all the time." To provide the ultimate level of patient care, reduce stress for the dentist and staff members and ensure high practice profitability, consistency is key.

  7. Nonparametric Discrete Survival Function Estimation with Uncertain Endpoints Using an Internal Validation Subsample

    PubMed Central

    Zee, Jarcy; Xie, Sharon X.

    2015-01-01

    Summary When a true survival endpoint cannot be assessed for some subjects, an alternative endpoint that measures the true endpoint with error may be collected, which often occurs when obtaining the true endpoint is too invasive or costly. We develop an estimated likelihood function for the situation where we have both uncertain endpoints for all participants and true endpoints for only a subset of participants. We propose a nonparametric maximum estimated likelihood estimator of the discrete survival function of time to the true endpoint. We show that the proposed estimator is consistent and asymptotically normal. We demonstrate through extensive simulations that the proposed estimator has little bias compared to the naïve Kaplan-Meier survival function estimator, which uses only uncertain endpoints, and is more efficient under moderate missingness than the complete-case Kaplan-Meier survival function estimator, which uses only available true endpoints. Finally, we apply the proposed method to a dataset for estimating the risk of developing Alzheimer's disease from the Alzheimer's Disease Neuroimaging Initiative. PMID:25916510

  8. Bayesian Threshold Estimation

    ERIC Educational Resources Information Center

    Gustafson, S. C.; Costello, C. S.; Like, E. C.; Pierce, S. J.; Shenoy, K. N.

    2009-01-01

    Bayesian estimation of a threshold time (hereafter simply threshold) for the receipt of impulse signals is accomplished given the following: 1) data, consisting of the number of impulses received in a time interval from zero to one and the largest impulse time; 2) a model, consisting of a uniform probability density of impulse time…

  9. Obtaining appropriate interval estimates for age when multiple indicators are used: evaluation of an ad-hoc procedure.

    PubMed

    Fieuws, Steffen; Willems, Guy; Larsen-Tangmose, Sara; Lynnerup, Niels; Boldsen, Jesper; Thevissen, Patrick

    2016-03-01

    When an estimate of age is needed, typically multiple indicators are present, as found in skeletal or dental information. There exists a vast literature on approaches to estimate age from such multivariate data. Application of Bayes' rule has been proposed to overcome drawbacks of classical regression models but becomes less trivial as soon as the number of indicators increases. Each of the age indicators can lead to a different point estimate ("the most plausible value for age") and a prediction interval ("the range of possible values"). The major challenge in the combination of multiple indicators is not the calculation of a combined point estimate for age but the construction of an appropriate prediction interval. Ignoring the correlation between the age indicators results in intervals being too small. Boldsen et al. (2002) presented an ad-hoc procedure to construct an approximate confidence interval without the need to model the multivariate correlation structure between the indicators. The aim of the present paper is to bring this pragmatic approach to attention and to evaluate its performance in a practical setting. This is all the more needed since recent publications ignore the need for interval estimation. To illustrate and evaluate the method, third molar scores from Köhler et al. (1995) are used to estimate age in a dataset of 3200 male subjects in the juvenile age range.

  10. Internal Consistency of the Spanish Health Literacy Test (TOFHLA-SPR) for Puerto Rico

    PubMed Central

    Rivero-Méndez, Marta; Suárez, Erick; Solís-Báez, Solymar S.; Hernández, Gloryvee; Cordero, Wanda; Vázquez, Irma; Medina, Zullettevy; Padilla, Raisa; Flores, Aida; Bonilla, José Luis; Holzemer, William L.

    2010-01-01

    Background Low functional health literacy has been related to poor viral control, and lower levels of ART adherence in people living with HIV/AIDS. Research in functional health literacy among people living with HIV/AIDS in Puerto Rico (PR) is an unexplored area. The purpose of this paper is to describe how the full-length Spanish Version of the Test of Functional Health Literacy in Adults (TOFHLA-S) scale was adapted to PR. Methods Thirty participants (women = 16, men = 14) completed a basic demographic questionnaire and the TOFHLA-S and participated in an interview. Analyses were performed to examine the information provided by participants and the internal consistency of the TOFHLA-S. Results The mean age was 47.7 years (range 34-77). Thirty-seven percent had less than 12 years of formal schooling and 43% reported having education above high school. Changes suggested by participants included increasing the font size from 14 to 16 points for better readability and changes/simplification of several words in order to make them colloquial and comprehensible for the PR context. The reliability coefficient obtained for this scale was strong (estimated alpha = 0.95); however, differences were observed by subtype: numeracy (estimated alpha = 0.819) vs. comprehension (estimated alpha = 0.953). Conclusions Based on this process, we have adapted the original version of the TOFHLA-S; the new full-length version, the TOFHLA-SPR, is now valid for further research and for testing levels of functional health literacy in a larger sample in PR. PMID:20222334

  11. Self-consistent linear response for the spin-orbit interaction related properties

    NASA Astrophysics Data System (ADS)

    Solovyev, I. V.

    2014-07-01

    In many cases, the relativistic spin-orbit (SO) interaction can be regarded as a small perturbation to the electronic structure of solids and treated using regular perturbation theory. The major obstacle on this route comes from the fact that the SO interaction can also polarize the electron system and produce some additional contributions to the perturbation theory expansion, which arise from the electron-electron interactions in the same order of the SO coupling. In electronic structure calculations, it may even lead to the necessity of abandoning the perturbation theory and returning to the original self-consistent solution of Kohn-Sham-like equations with the effective potential v̂, incorporating simultaneously the effects of the electron-electron interactions and the SO coupling, even though the latter is small. In this work, we present the theory of self-consistent linear response (SCLR), which allows us to get rid of numerical self-consistency and formulate the last step fully analytically in the first order of the SO coupling. This strategy is applied to the unrestricted Hartree-Fock solution of an effective Hubbard-type model, derived from the first-principles electronic structure calculations in the basis of Wannier functions for the magnetically active states. We show that by using v̂, obtained in SCLR, one can successfully reproduce results of ordinary self-consistent calculations for the orbital magnetization and other properties, which emerge in the first order of the SO coupling. Particularly, SCLR appears to be an extremely useful approach for calculations of antisymmetric Dzyaloshinskii-Moriya (DM) interactions based on the magnetic force theorem, where only by using the total perturbation one can make a reliable estimate for the DM parameters. Furthermore, due to the powerful 2n+1 theorem, the SCLR theory allows us to obtain the total energy change up to the third order of the SO coupling, which can be used in calculations of magnetic anisotropy

  12. Reinforcing flood-risk estimation.

    PubMed

    Reed, Duncan W

    2002-07-15

    Flood-frequency estimation is inherently uncertain. The practitioner applies a combination of gauged data, scientific method and hydrological judgement to derive a flood-frequency curve for a particular site. The resulting estimate can be thought fully satisfactory only if it is broadly consistent with all that is reliably known about the flood-frequency behaviour of the river. The paper takes as its main theme the search for information to strengthen a flood-risk estimate made from peak flows alone. Extra information comes in many forms, including documentary and monumental records of historical floods, and palaeological markers. Meteorological information is also useful, although rainfall rarity is difficult to assess objectively and can be a notoriously unreliable indicator of flood rarity. On highly permeable catchments, groundwater levels present additional data. Other types of information are relevant to judging hydrological similarity when the flood-frequency estimate derives from data pooled across several catchments. After highlighting information sources, the paper explores a second theme: that of consistency in flood-risk estimates. Following publication of the Flood estimation handbook, studies of flood risk are now using digital catchment data. Automated calculation methods allow estimates by standard methods to be mapped basin-wide, revealing anomalies at special sites such as river confluences. Such mapping presents collateral information of a new character. Can this be used to achieve flood-risk estimates that are coherent throughout a river basin?

  13. Estimation of kinetic parameters from list-mode data using an indirect approach

    NASA Astrophysics Data System (ADS)

    Ortiz, Joseph Christian

    This dissertation explores the possibility of using an imaging approach to model classical pharmacokinetic (PK) problems. The kinetic parameters, which describe the uptake rates of a drug within a biological system, are the parameters of interest. Knowledge of the drug uptake in a system is useful in expediting the drug development process, as well as in providing a dosage regimen for patients. Traditionally, the uptake rate of a drug in a system is obtained by sampling the concentration of the drug in a central compartment, usually the blood, and fitting the data to a curve. In a system consisting of multiple compartments, the number of kinetic parameters is proportional to the number of compartments, and in classical PK experiments, the number of identifiable parameters is less than the total number of parameters. Using an imaging approach to model classical PK problems, the support region of each compartment within the system is exactly known, and all the kinetic parameters are uniquely identifiable. To solve for the kinetic parameters, an indirect approach, which is a two-part process, was used. First the compartmental activity was obtained from the data, and next the kinetic parameters were estimated. The novel aspect of the research is using list-mode data to obtain the activity curves from a system, as opposed to a traditional binned approach. Using techniques from information-theoretic learning, particularly kernel density estimation, a non-parametric probability density function for the voltage outputs on each photo-multiplier tube, for each event, was generated on the fly, which was used in a least-squares optimization routine to estimate the compartmental activity. The estimability of the activity curves for varying noise levels as well as time sample densities was explored. Once an estimate for the activity was obtained, the kinetic parameters were obtained using multiple cost functions and then compared to each other using the mean squared error as the figure of merit.
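
    The kernel-density step described above can be illustrated with a minimal sketch. The sample values and parameters are hypothetical, and scipy's gaussian_kde stands in for whatever estimator the dissertation actually implements.

      import numpy as np
      from scipy.stats import gaussian_kde

      # Hypothetical per-event PMT voltage samples (one detector channel)
      rng = np.random.default_rng(0)
      voltages = np.concatenate([rng.normal(1.0, 0.1, 500),
                                 rng.normal(1.6, 0.2, 200)])

      # Non-parametric density estimate built "on the fly" from the samples
      pdf = gaussian_kde(voltages)          # Gaussian kernels, Scott bandwidth
      grid = np.linspace(0.5, 2.5, 200)
      density = pdf(grid)                   # evaluate the estimated PDF
      print(density.max())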

  14. An efficient quantum algorithm for spectral estimation

    NASA Astrophysics Data System (ADS)

    Steffens, Adrian; Rebentrost, Patrick; Marvian, Iman; Eisert, Jens; Lloyd, Seth

    2017-03-01

    We develop an efficient quantum implementation of an important signal processing algorithm for line spectral estimation: the matrix pencil method, which determines the frequencies and damping factors of signals consisting of finite sums of exponentially damped sinusoids. Our algorithm provides a quantum speedup in a natural regime where the sampling rate is much higher than the number of sinusoid components. Along the way, we develop techniques that are expected to be useful for other quantum algorithms as well—consecutive phase estimations to efficiently make products of asymmetric low rank matrices classically accessible and an alternative method to efficiently exponentiate non-Hermitian matrices. Our algorithm features an efficient quantum-classical division of labor: the time-critical steps are implemented in quantum superposition, while an interjacent step, requiring much fewer parameters, can operate classically. We show that frequencies and damping factors can be obtained in time logarithmic in the number of sampling points, exponentially faster than known classical algorithms.
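
    For orientation, the classical (non-quantum) matrix pencil method that the algorithm accelerates can be sketched in a few lines. The pencil parameter L, component count K, and the toy signal below are hypothetical, and the sketch assumes noiseless data; it illustrates the pencil structure, not the quantum implementation.

      import numpy as np

      def matrix_pencil(y, L, K):
          """Classical matrix pencil: recover K poles z_k from samples
          y[n] = sum_k a_k * z_k**n, with pencil parameter K <= L <= N - K."""
          N = len(y)
          # Hankel data matrix, split into the shifted pair (Y0, Y1)
          Y = np.array([y[i:i + L + 1] for i in range(N - L)])
          Y0, Y1 = Y[:, :-1], Y[:, 1:]
          # Generalized eigenvalues of the pencil (Y1, Y0) estimate the poles
          z = np.linalg.eigvals(np.linalg.pinv(Y0) @ Y1)
          z = z[np.argsort(-np.abs(z))][:K]       # keep the K dominant ones
          freq = np.angle(z) / (2 * np.pi)        # cycles per sample
          damping = np.log(np.abs(z))             # per-sample damping factor
          return freq, damping

      # One damped sinusoid: z = 0.98 * exp(2j*pi*0.12)
      n = np.arange(64)
      y = (0.98 ** n) * np.exp(2j * np.pi * 0.12 * n)
      print(matrix_pencil(y, L=20, K=1))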

  15. Converging ligand-binding free energies obtained with free-energy perturbations at the quantum mechanical level.

    PubMed

    Olsson, Martin A; Söderhjelm, Pär; Ryde, Ulf

    2016-06-30

    In this article, the convergence of quantum mechanical (QM) free-energy simulations based on molecular dynamics simulations at the molecular mechanics (MM) level has been investigated. We have estimated relative free energies for the binding of nine cyclic carboxylate ligands to the octa-acid deep-cavity host, including the host, the ligand, and all water molecules within 4.5 Å of the ligand in the QM calculations (158-224 atoms). We use single-step exponential averaging (ssEA) and the non-Boltzmann Bennett acceptance ratio (NBB) methods to estimate QM/MM free energy with the semi-empirical PM6-DH2X method, both based on interaction energies. We show that ssEA with cumulant expansion gives a better convergence and uses half as many QM calculations as NBB, although the two methods give consistent results. With 720,000 QM calculations per transformation, QM/MM free-energy estimates with a precision of 1 kJ/mol can be obtained for all eight relative energies with ssEA, showing that this approach can be used to calculate converged QM/MM binding free energies for realistic systems and large QM partitions. © 2016 The Authors. Journal of Computational Chemistry Published by Wiley Periodicals, Inc.

  16. Converging ligand‐binding free energies obtained with free‐energy perturbations at the quantum mechanical level

    PubMed Central

    Olsson, Martin A.; Söderhjelm, Pär

    2016-01-01

    In this article, the convergence of quantum mechanical (QM) free‐energy simulations based on molecular dynamics simulations at the molecular mechanics (MM) level has been investigated. We have estimated relative free energies for the binding of nine cyclic carboxylate ligands to the octa‐acid deep‐cavity host, including the host, the ligand, and all water molecules within 4.5 Å of the ligand in the QM calculations (158–224 atoms). We use single‐step exponential averaging (ssEA) and the non‐Boltzmann Bennett acceptance ratio (NBB) methods to estimate QM/MM free energy with the semi‐empirical PM6‐DH2X method, both based on interaction energies. We show that ssEA with cumulant expansion gives a better convergence and uses half as many QM calculations as NBB, although the two methods give consistent results. With 720,000 QM calculations per transformation, QM/MM free‐energy estimates with a precision of 1 kJ/mol can be obtained for all eight relative energies with ssEA, showing that this approach can be used to calculate converged QM/MM binding free energies for realistic systems and large QM partitions. © 2016 The Authors. Journal of Computational Chemistry Published by Wiley Periodicals, Inc. PMID:27117350
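
    Single-step exponential averaging is, at its core, the Zwanzig free-energy-perturbation formula applied in one step from the MM to the QM/MM energy surface. A minimal sketch with synthetic Gaussian energy gaps and an assumed temperature is shown below; the variable names and values are hypothetical, not taken from the study.

      import numpy as np

      kT = 2.494  # kJ/mol at ~300 K

      def exponential_averaging(dU):
          """Zwanzig/ssEA estimate: dA = -kT * ln < exp(-dU/kT) >_MM,
          where dU are QM-MM energy differences sampled at the MM level."""
          dU = np.asarray(dU, dtype=float)
          # log-sum-exp for numerical stability of the exponential average
          m = (-dU / kT).max()
          return -kT * (m + np.log(np.mean(np.exp(-dU / kT - m))))

      # Hypothetical energy gaps (kJ/mol) from an MM trajectory
      rng = np.random.default_rng(1)
      dU = rng.normal(5.0, 2.0, 10_000)
      print(exponential_averaging(dU))   # ~ mean - var/(2 kT) for Gaussian dU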

  17. Irradiation of diets fed to captive exotic felids: microbial destruction, consumption, and fecal consistency.

    PubMed

    Crissey, S D; Slifka, K A; Jacobsen, K L; Shumway, P J; Mathews, R; Harper, J

    2001-09-01

    Two frozen, raw horse meat-based diets fed to captive exotic felids at Brookfield Zoo were irradiated to determine the extent of microbial destruction and whether radiation treatment would affect consumption and/or fecal consistency in exotic cats. Fifteen cats, two African lions (Panthera leo), two Amur tigers (Panthera tigris altaica), one Amur leopard (Panthera pardus orientalis), two clouded leopards (Neofelis nebulosa), two caracals (Felis caracal), one bobcat (Felis rufus), and five fishing cats (Felis viverrinus), housed at Brookfield Zoo were fed nonirradiated and irradiated raw diets containing horse meat with cereal products and fortified with nutrients: Nebraska Brand Feline and/or Canine Diet (Animal Spectrum, North Platte, Nebraska 69103, USA). Baseline data were obtained during a 2-wk control period (nonirradiated diets), which was followed by a 4-wk period of feeding comparable irradiated diets. Feed intake and fecal consistency data were collected. An estimated radiation dose range of 0.5-3.9 kilograys reduced most microbial populations, depending on specific diet and microbe type. Irradiation had no overall effect on either feed consumption or fecal consistency in captive exotic cats, regardless of species, age, sex, or body mass. Data indicate that irradiation of frozen horse meat-based diets (packaged in 2.2-kg portions) results in microbial destruction in these products but that product storage time between irradiation and sampling may also affect microbial reduction. However, irradiation would be an appropriate method for reducing potentially pathogenic bacteria in raw meat fed to exotic cats.

  18. Parameter Estimation of Multiple Frequency-Hopping Signals with Two Sensors

    PubMed Central

    Pan, Jin; Ma, Boyuan

    2018-01-01

    This paper focuses on parameter estimation of multiple wideband emitting sources with time-varying frequencies, such as two-dimensional (2-D) direction of arrival (DOA) and signal sorting, with a low-cost circular synthetic array (CSA) consisting of only two rotating sensors. Our basic idea is to decompose the received data, which is a superimposition of phase measurements from multiple sources, into separated groups and to separately estimate the DOA associated with each source. Motivated by joint parameter estimation, we propose to adopt the expectation maximization (EM) algorithm in this paper; our method involves two steps, namely, the expectation step (E-step) and the maximization step (M-step). In the E-step, the correspondence of each signal with its emitting source is found. Then, in the M-step, the maximum-likelihood (ML) estimates of the DOA parameters are obtained. These two steps are iteratively and alternatively executed to jointly determine the DOAs and sort multiple signals. Closed-form DOA estimation formulae are developed by ML estimation based on phase data, which also realizes an optimal estimation. Directional ambiguity is also addressed by another ML estimation method based on received complex responses. The Cramér-Rao lower bound is derived for understanding the estimation accuracy and performance comparison. The verification of the proposed method is demonstrated with simulations. PMID:29617323
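
    The E-step/M-step alternation described above can be illustrated on a generic mixture problem. The sketch below fits a two-component one-dimensional Gaussian mixture by EM; it is only an analogy for the structure of the algorithm (assign observations to sources, then re-estimate source parameters by ML), not the paper's DOA model, and all values are synthetic.

      import numpy as np

      rng = np.random.default_rng(3)
      x = np.r_[rng.normal(-2, 0.5, 300), rng.normal(1.5, 0.8, 200)]

      mu = np.array([-1.0, 1.0]); sigma = np.array([1.0, 1.0]); w = np.array([0.5, 0.5])
      for _ in range(50):
          # E-step: posterior probability that each sample came from each source
          pdf = np.exp(-0.5 * ((x[:, None] - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
          resp = w * pdf
          resp /= resp.sum(axis=1, keepdims=True)
          # M-step: ML updates of the per-source parameters
          nk = resp.sum(axis=0)
          mu = (resp * x[:, None]).sum(axis=0) / nk
          sigma = np.sqrt((resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk)
          w = nk / len(x)
      print(mu, sigma, w)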

  19. Methods matter: considering locomotory mode and respirometry technique when estimating metabolic rates of fishes

    PubMed Central

    Rummer, Jodie L.; Binning, Sandra A.; Roche, Dominique G.; Johansen, Jacob L.

    2016-01-01

    Respirometry is frequently used to estimate metabolic rates and examine organismal responses to environmental change. Although a range of methodologies exists, it remains unclear whether differences in chamber design and exercise (type and duration) produce comparable results within individuals and whether the most appropriate method differs across taxa. We used a repeated-measures design to compare estimates of maximal and standard metabolic rates (MMR and SMR) in four coral reef fish species using the following three methods: (i) prolonged swimming in a traditional swimming respirometer; (ii) short-duration exhaustive chase with air exposure followed by resting respirometry; and (iii) short-duration exhaustive swimming in a circular chamber. We chose species that are steady/prolonged swimmers, using either a body–caudal fin or a median–paired fin swimming mode during routine swimming. Individual MMR estimates differed significantly depending on the method used. Swimming respirometry consistently provided the best (i.e. highest) estimate of MMR in all four species irrespective of swimming mode. Both short-duration protocols (exhaustive chase and swimming in a circular chamber) produced similar MMR estimates, which were up to 38% lower than those obtained during prolonged swimming. Furthermore, underestimates were not consistent across swimming modes or species, indicating that a general correction factor cannot be used. However, SMR estimates (upon recovery from both of the exhausting swimming methods) were consistent across both short-duration methods. Given the increasing use of metabolic data to assess organismal responses to environmental stressors, we recommend carefully considering respirometry protocols before experimentation. Specifically, results should not readily be compared across methods; discrepancies could result in misinterpretation of MMR and aerobic scope. PMID:27382471

  20. keV-Scale sterile neutrino sensitivity estimation with time-of-flight spectroscopy in KATRIN using self-consistent approximate Monte Carlo

    NASA Astrophysics Data System (ADS)

    Steinbrink, Nicholas M. N.; Behrens, Jan D.; Mertens, Susanne; Ranitzsch, Philipp C.-O.; Weinheimer, Christian

    2018-03-01

    We investigate the sensitivity of the Karlsruhe Tritium Neutrino Experiment (KATRIN) to keV-scale sterile neutrinos, which are promising dark matter candidates. Since the active-sterile mixing would lead to a second component in the tritium β-spectrum with a weak relative intensity of order sin²θ ≲ 10⁻⁶, additional experimental strategies are required to extract this small signature and to eliminate systematics. A possible strategy is to run the experiment in an alternative time-of-flight (TOF) mode, yielding differential TOF spectra in contrast to the integrating standard mode. In order to estimate the sensitivity from a reduced sample size, a new analysis method, called self-consistent approximate Monte Carlo (SCAMC), has been developed. The simulations show that an ideal TOF mode would be able to achieve a statistical sensitivity of sin²θ ≈ 5 × 10⁻⁹ at one σ, improving on the standard mode by approximately a factor of two. This relative benefit grows significantly if additional exemplary systematics are considered. A possible implementation of the TOF mode with existing hardware, called gated filtering, is investigated, which, however, comes at the price of a reduced average signal rate.

  1. Timescales alter the inferred strength and temporal consistency of intraspecific diet specialization

    USGS Publications Warehouse

    Novak, Mark; Tinker, M. Tim

    2015-01-01

    Many populations consist of individuals that differ substantially in their diets. Quantification of the magnitude and temporal consistency of such intraspecific diet variation is needed to understand its importance, but the extent to which different approaches for doing so reflect instantaneous vs. time-aggregated measures of individual diets may bias inferences. We used direct observations of sea otter individuals (Enhydra lutris nereis) to assess how: (1) the timescale of sampling, (2) under-sampling, and (3) the incidence- vs. frequency-based consideration of prey species affect the inferred strength and consistency of intraspecific diet variation. Analyses of feeding observations aggregated over hourly to annual intervals revealed a substantial bias associated with time aggregation that decreases the inferred magnitude of specialization and increases the inferred consistency of individuals’ diets. Time aggregation also made estimates of specialization more sensitive to the consideration of prey frequency, which decreased estimates relative to the use of prey incidence; time aggregation did not affect the extent to which under-sampling contributed to its overestimation. Our analyses demonstrate the importance of studying intraspecific diet variation with an explicit consideration of time and thereby suggest guidelines for future empirical efforts. Failure to consider time will likely produce inconsistent predictions regarding the effects of intraspecific variation on predator–prey interactions.

  2. Classification Consistency and Accuracy for Complex Assessments Using Item Response Theory

    ERIC Educational Resources Information Center

    Lee, Won-Chan

    2010-01-01

    In this article, procedures are described for estimating single-administration classification consistency and accuracy indices for complex assessments using item response theory (IRT). This IRT approach was applied to real test data comprising dichotomous and polytomous items. Several different IRT model combinations were considered. Comparisons…

  3. Local Estimators for Spacecraft Formation Flying

    NASA Technical Reports Server (NTRS)

    Fathpour, Nanaz; Hadaegh, Fred Y.; Mesbahi, Mehran; Nabi, Marzieh

    2011-01-01

    A formation estimation architecture for formation flying builds upon the local information exchange among multiple local estimators. Spacecraft formation flying involves the coordination of states among multiple spacecraft through relative sensing, inter-spacecraft communication, and control. Most existing formation flying estimation algorithms can only be supported via highly centralized, all-to-all, static relative sensing. New algorithms are needed that are scalable, modular, and robust to variations in the topology and link characteristics of the formation exchange network. These distributed algorithms should rely on a local information-exchange network, relaxing the assumptions on existing algorithms. In this research, it was shown that only local observability is required to design a formation estimator and control law. The approach relies on breaking up the overall information-exchange network into a sequence of local subnetworks and invoking an agreement-type filter to reach consensus among local estimators within each local network. State estimates were obtained from a set of local measurements that were passed through a set of communicating Kalman filters to reach an overall state estimate for the formation. An optimization approach was also presented by means of which diffused estimates over the network can be incorporated in the local estimates obtained by each estimator via local measurements. This approach compares favorably with that obtained by a centralized Kalman filter, which requires complete knowledge of the raw measurements available to each estimator.

  4. Usher syndrome: definition and estimate of prevalence from two high-risk populations.

    PubMed

    Boughman, J A; Vernon, M; Shaver, K A

    1983-01-01

    The Usher Syndrome (US) refers to the combined neurosensory deficits of profound hearing impairment and retinitis pigmentosa. We have obtained information on 600 cases of deaf-blindness from the registry of the Helen Keller National Center for Deaf-Blind Youths and Adults (HKNC). Of these, 54% met the diagnostic criteria of US, although only 23.8% were so diagnosed. More extensive analysis of 189 Usher clients from HKNC showed an excess of males, some variability in audiograms, and wide ophthalmologic variation. Genetic analysis of 113 sibships showed a segregation ratio consistent with recessive inheritance. The Acadian population of Louisiana has a high frequency of US which contributes significantly to the deaf population of the state. Among 48 cases from the Louisiana School for the Deaf, there was an excess of males, more variability in audiograms than expected, and an increased segregation ratio in the 26 informative sibships. Estimates of prevalence obtained using registry data and statistics from Louisiana clearly suggest that the previous estimate of 2.4 per 100,000 is too low for the United States. Recognizing problems with ascertainment, our prevalence estimate of 4.4 per 100,000 is still considered quite conservative.

  5. Estimation of dynamic stability parameters from drop model flight tests

    NASA Technical Reports Server (NTRS)

    Chambers, J. R.; Iliff, K. W.

    1981-01-01

    A recent NASA application of a remotely-piloted drop model to studies of the high angle-of-attack and spinning characteristics of a fighter configuration has provided an opportunity to evaluate and develop parameter estimation methods for the complex aerodynamic environment associated with high angles of attack. The paper discusses the overall drop model operation including descriptions of the model, instrumentation, launch and recovery operations, piloting concept, and parameter identification methods used. Static and dynamic stability derivatives were obtained for an angle-of-attack range from -20 deg to 53 deg. The results of the study indicated that the variations of the estimates with angle of attack were consistent for most of the static derivatives, and the effects of configuration modifications to the model (such as nose strakes) were apparent in the static derivative estimates. The dynamic derivatives exhibited greater uncertainty levels than the static derivatives, possibly due to nonlinear aerodynamics, model response characteristics, or additional derivatives.

  6. Shape encoding consistency across colors in primate V4

    PubMed Central

    Bushnell, Brittany N.

    2012-01-01

    Neurons in primate cortical area V4 are sensitive to the form and color of visual stimuli. To determine whether form selectivity remains consistent across colors, we studied the responses of single V4 neurons in awake monkeys to a set of two-dimensional shapes presented in two different colors. For each neuron, we chose two colors that were visually distinct and that evoked reliable and different responses. Across neurons, the correlation coefficient between responses in the two colors ranged from −0.03 to 0.93 (median 0.54). Neurons with highly consistent shape responses, i.e., high correlation coefficients, showed greater dispersion in their responses to the different shapes, i.e., greater shape selectivity, and also tended to have less eccentric receptive field locations; among shape-selective neurons, shape consistency ranged from 0.16 to 0.93 (median 0.63). Consistency of shape responses was independent of the physical difference between the stimulus colors used and the strength of neuronal color tuning. Finally, we found that our measurement of shape response consistency was strongly influenced by the number of stimulus repeats: consistency estimates based on fewer than 10 repeats were substantially underestimated. In conclusion, our results suggest that neurons that are likely to contribute to shape perception and discrimination exhibit shape responses that are largely consistent across colors, facilitating the use of simpler algorithms for decoding shape information from V4 neuronal populations. PMID:22673324

  7. 21 CFR 1315.34 - Obtaining an import quota.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 21 Food and Drugs 9 2010-04-01 2010-04-01 false Obtaining an import quota. 1315.34 Section 1315.34 Food and Drugs DRUG ENFORCEMENT ADMINISTRATION, DEPARTMENT OF JUSTICE IMPORTATION AND PRODUCTION QUOTAS... imports, the estimated medical, scientific, and industrial needs of the United States, the establishment...

  8. Estimating discharge measurement uncertainty using the interpolated variance estimator

    USGS Publications Warehouse

    Cohn, T.; Kiang, J.; Mason, R.

    2012-01-01

    Methods for quantifying the uncertainty in discharge measurements typically identify various sources of uncertainty and then estimate the uncertainty from each of these sources by applying the results of empirical or laboratory studies. If actual measurement conditions are not consistent with those encountered in the empirical or laboratory studies, these methods may give poor estimates of discharge uncertainty. This paper presents an alternative method for estimating discharge measurement uncertainty that uses statistical techniques and at-site observations. This Interpolated Variance Estimator (IVE) estimates uncertainty based on the data collected during the streamflow measurement and therefore reflects the conditions encountered at the site. The IVE has the additional advantage of capturing all sources of random uncertainty in the velocity and depth measurements. It can be applied to velocity-area discharge measurements that use a velocity meter to measure point velocities at multiple vertical sections in a channel cross section.

  9. Toward a consistent modeling framework to assess multi-sectoral climate impacts.

    PubMed

    Monier, Erwan; Paltsev, Sergey; Sokolov, Andrei; Chen, Y-H Henry; Gao, Xiang; Ejaz, Qudsia; Couzo, Evan; Schlosser, C Adam; Dutkiewicz, Stephanie; Fant, Charles; Scott, Jeffery; Kicklighter, David; Morris, Jennifer; Jacoby, Henry; Prinn, Ronald; Haigh, Martin

    2018-02-13

    Efforts to estimate the physical and economic impacts of future climate change face substantial challenges. To enrich the currently popular approaches to impact analysis, which involve evaluation of a damage function or multi-model comparisons based on a limited number of standardized scenarios, we propose integrating a geospatially resolved physical representation of impacts into a coupled human-Earth system modeling framework. Large internationally coordinated exercises cannot easily respond to new policy targets, and the implementation of standard scenarios across models, institutions and research communities can yield inconsistent estimates. Here, we argue for a shift toward the use of a self-consistent integrated modeling framework to assess climate impacts, and discuss ways the integrated assessment modeling community can move in this direction. We then demonstrate the capabilities of such a modeling framework by conducting a multi-sectoral assessment of climate impacts under a range of consistent and integrated economic and climate scenarios that are responsive to new policies and business expectations.

  10. New estimates of the CMB angular power spectra from the WMAP 5 year low-resolution data

    NASA Astrophysics Data System (ADS)

    Gruppuso, A.; de Rosa, A.; Cabella, P.; Paci, F.; Finelli, F.; Natoli, P.; de Gasperis, G.; Mandolesi, N.

    2009-11-01

    A quadratic maximum likelihood (QML) estimator is applied to the Wilkinson Microwave Anisotropy Probe (WMAP) 5 year low-resolution maps to compute the cosmic microwave background angular power spectra (APS) at large scales for both temperature and polarization. Estimates and error bars for the six APS are provided up to l = 32 and compared, when possible, to those obtained by the WMAP team, without finding any inconsistency. The conditional likelihood slices are also computed for the C_l of all six power spectra from l = 2 to 10 through a pixel-based likelihood code. Both codes treat the covariance for (T, Q, U) in a single matrix without employing any approximation. The inputs of both codes (foreground-reduced maps, related covariances and masks) are provided by the WMAP team. The peaks of the likelihood slices are always consistent with the QML estimates within the error bars; however, an excellent agreement occurs when the QML estimates are used as a fiducial power spectrum instead of the best-fitting theoretical power spectrum. By the full computation of the conditional likelihood on the estimated spectra, the value of the temperature quadrupole C_{l=2}^{TT} is found to be less than 2σ away from the WMAP 5 year Λ cold dark matter best-fitting value. The BB spectrum is found to be well consistent with zero, and upper limits on the B modes are provided. The parity-odd signals TB and EB are found to be consistent with zero.

  11. Estimation of surface water storage in the Congo Basin

    NASA Astrophysics Data System (ADS)

    O'Loughlin, F.; Neal, J. C.; Schumann, G.; Beighley, E.; Bates, P. D.

    2015-12-01

    For many large river basins, especially in Africa, the lack of access to in-situ measurements and the large areas involved make modelling of water storage and runoff difficult. However, remote sensing datasets are useful alternative sources of information that overcome these issues. In this study, we focus on the Congo Basin and, in particular, the cuvette centrale. Despite being the second largest river basin on Earth and containing a large percentage of the world's tropical wetlands and forest, little is known about this basin's hydrology. Combining discharge estimates from in-situ measurements and outputs from a hydrological model, we build the first large-scale hydrodynamic model for this region to estimate the volume of water stored in the corresponding floodplains and to investigate how important these floodplains are to the behaviour of the overall system. This hydrodynamic model covers an area of over 1.6 million square kilometres and 13 thousand kilometres of rivers and is calibrated to water surface heights at 33 virtual gauging stations obtained from ESA's Envisat satellite. Our results show that the use of different sources of discharge estimates and calibration via Envisat observations can produce accurate water levels and downstream discharges. Our model produced unbiased (bias = -0.08 m), sub-metre (root mean square error, RMSE = 0.862 m) water-level errors, with a Nash-Sutcliffe efficiency greater than 80% (NSE = 0.81). The spatial-temporal variations in our simulated inundated areas are consistent with the pattern obtained from satellites. Overall, we find a high correlation coefficient (R = 0.88) between our modelled inundated areas and those estimated from satellites.
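
    The reported skill scores follow standard definitions, sketched below for reference; the elevation values are hypothetical, not data from the study.

      import numpy as np

      def bias(sim, obs):
          return np.mean(np.asarray(sim) - np.asarray(obs))

      def rmse(sim, obs):
          return np.sqrt(np.mean((np.asarray(sim) - np.asarray(obs)) ** 2))

      def nse(sim, obs):
          """Nash-Sutcliffe efficiency: 1 is perfect, 0 matches the obs mean."""
          sim, obs = np.asarray(sim), np.asarray(obs)
          return 1 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

      # Hypothetical water-surface elevations (m) at one virtual station
      obs = np.array([301.2, 301.9, 302.4, 302.0, 301.5])
      sim = np.array([301.1, 302.1, 302.3, 301.8, 301.6])
      print(bias(sim, obs), rmse(sim, obs), nse(sim, obs))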

  12. Uncertainties in obtaining high reliability from stress-strength models

    NASA Technical Reports Server (NTRS)

    Neal, Donald M.; Matthews, William T.; Vangel, Mark G.

    1992-01-01

    There has been a recent interest in determining high statistical reliability in risk assessment of aircraft components. The potential consequences of incorrectly assuming a particular statistical distribution for stress or strength data used in obtaining high reliability values are identified. The reliability is defined as the probability of the strength being greater than the stress over the range of stress values; this method is often referred to as the stress-strength model. A sensitivity analysis was performed involving a comparison of reliability results in order to evaluate the effects of assuming specific statistical distributions. Both known population distributions, and those that differed slightly from the known, were considered. Results showed substantial differences in reliability estimates even for almost nondetectable differences in the assumed distributions. These differences represent a potential problem in using the stress-strength model for high reliability computations, since in practice it is impossible to ever know the exact (population) distribution. An alternative reliability computation procedure is examined involving determination of a lower bound on the reliability values using extreme value distributions. This procedure reduces the possibility of obtaining nonconservative reliability estimates. Results indicated the method can provide conservative bounds when computing high reliability.
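
    The stress-strength reliability R = P(strength > stress), and its sensitivity to the assumed distributions, can be illustrated with a Monte Carlo sketch. All distribution parameters below are hypothetical; the point is only that a barely detectable change of distributional shape can move a small failure probability substantially.

      import numpy as np

      rng = np.random.default_rng(42)
      n = 1_000_000

      # Assumed (hypothetical) populations: both normal ...
      stress   = rng.normal(400.0, 30.0, n)
      strength = rng.normal(550.0, 35.0, n)
      r_normal = np.mean(strength > stress)

      # ... versus a slightly heavier-tailed strength (Student t, same scale)
      strength_t = 550.0 + 35.0 * rng.standard_t(5, n)
      r_heavy = np.mean(strength_t > stress)

      # Small distributional changes move the failure probability noticeably
      print(1 - r_normal, 1 - r_heavy)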

  13. The ACCE method: an approach for obtaining quantitative or qualitative estimates of residual confounding that includes unmeasured confounding

    PubMed Central

    Smith, Eric G.

    2015-01-01

    Background:  Nonrandomized studies typically cannot account for confounding from unmeasured factors.  Method:  A method is presented that exploits the recently-identified phenomenon of  “confounding amplification” to produce, in principle, a quantitative estimate of total residual confounding resulting from both measured and unmeasured factors.  Two nested propensity score models are constructed that differ only in the deliberate introduction of an additional variable(s) that substantially predicts treatment exposure.  Residual confounding is then estimated by dividing the change in treatment effect estimate between models by the degree of confounding amplification estimated to occur, adjusting for any association between the additional variable(s) and outcome. Results:  Several hypothetical examples are provided to illustrate how the method produces a quantitative estimate of residual confounding if the method’s requirements and assumptions are met.  Previously published data is used to illustrate that, whether or not the method routinely provides precise quantitative estimates of residual confounding, the method appears to produce a valuable qualitative estimate of the likely direction and general size of residual confounding. Limitations:  Uncertainties exist, including identifying the best approaches for: 1) predicting the amount of confounding amplification, 2) minimizing changes between the nested models unrelated to confounding amplification, 3) adjusting for the association of the introduced variable(s) with outcome, and 4) deriving confidence intervals for the method’s estimates (although bootstrapping is one plausible approach). Conclusions:  To this author’s knowledge, it has not been previously suggested that the phenomenon of confounding amplification, if such amplification is as predictable as suggested by a recent simulation, provides a logical basis for estimating total residual confounding. The method's basic approach is

  14. Applying linear programming to estimate fluxes in ecosystems or food webs: An example from the herpetological assemblage of the freshwater Everglades

    USGS Publications Warehouse

    Diffendorfer, James E.; Richards, Paul M.; Dalrymple, George H.; DeAngelis, Donald L.

    2001-01-01

    We present the application of Linear Programming for estimating biomass fluxes in ecosystem and food web models. We use the herpetological assemblage of the Everglades as an example. We developed food web structures for three common Everglades freshwater habitat types: marsh, prairie, and upland. We obtained a first estimate of the fluxes using field data, literature estimates, and professional judgment. Linear programming was used to obtain a consistent and better estimate of the set of fluxes, while maintaining mass balance and minimizing deviations from point estimates. The results support the view that the Everglades is a spatially heterogeneous system, with changing patterns of energy flux, species composition, and biomasses across the habitat types. We show that a food web/ecosystem perspective, combined with Linear Programming, is a robust method for describing food webs and ecosystems that requires minimal data, produces useful post-solution analyses, and generates hypotheses regarding the structure of energy flow in the system.
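
    The balancing step can be sketched as a small linear program: minimize the total (L1) deviation of the fluxes from their point estimates subject to mass-balance constraints. The three-flux toy web below is hypothetical and far simpler than the Everglades webs in the paper; it only illustrates the LP formulation.

      import numpy as np
      from scipy.optimize import linprog

      # Toy web: fluxes f1 (prey -> predator), f2 (predator respiration),
      # f3 (predator -> detritus).  Mass balance: f1 - f2 - f3 = 0.
      f0 = np.array([10.0, 4.0, 5.0])        # field/literature point estimates

      # Variables x = [f1, f2, f3, d1, d2, d3]; minimize sum of deviations d_i
      # with d_i >= |f_i - f0_i| expressed as two linear inequalities.
      c = np.r_[np.zeros(3), np.ones(3)]
      A_ub = np.block([[ np.eye(3), -np.eye(3)],     #  f - d <= f0
                       [-np.eye(3), -np.eye(3)]])    # -f - d <= -f0
      b_ub = np.r_[f0, -f0]
      A_eq = np.array([[1.0, -1.0, -1.0, 0.0, 0.0, 0.0]])  # mass balance
      b_eq = np.array([0.0])

      res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                    bounds=[(0, None)] * 6, method="highs")
      print(res.x[:3])   # balanced fluxes closest (L1) to the point estimates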

  15. Consistency of forest presence and biomass predictions modeled across overlapping spatial and temporal extents

    Treesearch

    Mark D. Nelson; Sean Healey; W. Keith Moser; J.G. Masek; Warren Cohen

    2011-01-01

    We assessed the consistency across space and time of spatially explicit models of forest presence and biomass in southern Missouri, USA, for adjacent, partially overlapping satellite image Path/Rows, and for coincident satellite images from the same Path/Row acquired in different years. Such consistency in satellite image-based classification and estimation is critical...

  16. Method for using global optimization to the estimation of surface-consistent residual statics

    DOEpatents

    Reister, David B.; Barhen, Jacob; Oblow, Edward M.

    2001-01-01

    An efficient method for generating residual statics corrections to compensate for surface-consistent static time shifts in stacked seismic traces. The method includes a step of framing the residual static corrections as a global optimization problem in a parameter space. The method also includes decoupling the global optimization problem involving all seismic traces into several one-dimensional problems. The method further utilizes a Stochastic Pijavskij Tunneling search to eliminate regions in the parameter space where a global minimum is unlikely to exist so that the global minimum may be quickly discovered. The method finds the residual statics corrections by maximizing the total stack power. The stack power is a measure of seismic energy transferred from energy sources to receivers.

  17. Validation of temporal and spatial consistency of facility- and speed-specific vehicle-specific power distributions for emission estimation: A case study in Beijing, China.

    PubMed

    Zhai, Zhiqiang; Song, Guohua; Lu, Hongyu; He, Weinan; Yu, Lei

    2017-09-01

    Vehicle-specific power (VSP) has been found to be highly correlated with vehicle emissions. It is used in many studies on emission modeling, such as the MOVES (Motor Vehicle Emissions Simulator) model. The existing studies develop specific VSP distributions (or OpMode distributions in MOVES) for different road types and various average speeds to represent the vehicle operating modes on road. However, it is still not clear whether the facility- and speed-specific VSP distributions are consistent temporally and spatially. For instance, is it necessary to update the database of VSP distributions in the emission model periodically? Are the VSP distributions developed in the city central business district (CBD) area applicable to its suburb area? In this context, this study examined the temporal and spatial consistency of the facility- and speed-specific VSP distributions in Beijing. The VSP distributions in different years and in different areas are developed based on real-world vehicle activity data. The root mean square error (RMSE) is employed to quantify the difference between the VSP distributions. The maximum differences of the VSP distributions between different years and between different areas are approximately 20% of that between different road types. The analysis of the carbon dioxide (CO2) emission factor indicates that the temporal and spatial differences of the VSP distributions have no significant impact on vehicle emission estimation, with relative error of less than 3%. The temporal and spatial differences have no significant impact on the development of the facility- and speed-specific VSP distributions for vehicle emission estimation. The database of the specific VSP distributions in VSP-based emission models can therefore be maintained over time: it is unnecessary to update the database regularly, and it is reliable to use historical vehicle activity data to forecast emissions in the future. In one city, the areas with less data can still
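
    For reference, a VSP distribution of the kind discussed above can be sketched from a second-by-second speed trace. The sketch below uses the widely cited light-duty VSP approximation attributed to Jimenez-Palacios (1999); the speed trace and bin layout are hypothetical, and the coefficients may differ from those used in the study.

      import numpy as np

      def vsp(v, a, grade=0.0):
          """Vehicle-specific power (kW/tonne), common light-duty coefficients;
          v in m/s, a in m/s^2, grade as a fraction (assumed form)."""
          return v * (1.1 * a + 9.81 * grade + 0.132) + 0.000302 * v ** 3

      # Hypothetical second-by-second speeds (m/s) from one road link
      v = np.array([8.0, 8.5, 9.5, 9.0, 8.0, 7.0, 7.5])
      a = np.gradient(v)                     # per-second acceleration
      bins = np.arange(-20, 21, 1.0)         # 1 kW/t VSP bins
      dist, _ = np.histogram(vsp(v, a), bins=bins, density=True)
      print(dist.sum())   # with unit-width bins, densities are bin fractions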

  18. Ranging through Gabor logons: a consistent, hierarchical approach.

    PubMed

    Chang, C; Chatterjee, S

    1993-01-01

    In this work, the correspondence problem in stereo vision is handled by matching two sets of dense feature vectors. Inspired by biological evidence, these feature vectors are generated by a correlation between a bank of Gabor sensors and the intensity image. The sensors consist of two-dimensional Gabor filters at various scales (spatial frequencies) and orientations, which bear close resemblance to the receptive field profiles of simple V1 cells in visual cortex. A hierarchical, stochastic relaxation method is then used to obtain the dense stereo disparities. Unlike traditional hierarchical methods for stereo, feature based hierarchical processing yields consistent disparities. To avoid false matchings due to static occlusion, a dual matching, based on the imaging geometry, is used.
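
    A Gabor sensor bank of the kind described can be sketched directly from the standard 2-D Gabor formula: a Gaussian envelope modulating an oriented sinusoidal carrier. The parameter values below are hypothetical, chosen only to show the scale/orientation structure of such a bank.

      import numpy as np

      def gabor_kernel(size, wavelength, theta, sigma, gamma=0.5, psi=0.0):
          """2-D Gabor filter: Gaussian envelope times an oriented sinusoid."""
          half = size // 2
          y, x = np.mgrid[-half:half + 1, -half:half + 1].astype(float)
          xr =  x * np.cos(theta) + y * np.sin(theta)   # rotate coordinates
          yr = -x * np.sin(theta) + y * np.cos(theta)
          envelope = np.exp(-(xr**2 + (gamma * yr)**2) / (2 * sigma**2))
          carrier = np.cos(2 * np.pi * xr / wavelength + psi)
          return envelope * carrier

      # A small bank over 4 orientations and 2 scales (spatial frequencies)
      bank = [gabor_kernel(15, wl, th, sigma=wl / 2)
              for wl in (4.0, 8.0)
              for th in np.linspace(0, np.pi, 4, endpoint=False)]
      print(len(bank), bank[0].shape)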

  19. Consistency assessment of rating curve data in various locations using Bidirectional Reach (BReach)

    NASA Astrophysics Data System (ADS)

    Van Eerdenbrugh, Katrien; Van Hoey, Stijn; Coxon, Gemma; Freer, Jim; Verhoest, Niko E. C.

    2017-10-01

    When estimating discharges through rating curves, temporal data consistency is a critical issue. In this research, consistency in stage-discharge data is investigated using a methodology called Bidirectional Reach (BReach), which departs from a definition of consistency commonly used in operational hydrology. A period is considered to be consistent if no consecutive and systematic deviations from a current situation occur that exceed observational uncertainty. Therefore, the capability of a rating curve model to describe a subset of the (chronologically sorted) data is assessed in each observation by indicating the outermost data points for which the rating curve model behaves satisfactorily. These points are called the maximum left or right reach, depending on the direction of the investigation. This temporal reach should not be confused with a spatial reach (indicating a part of a river). Changes in these reaches throughout the data series indicate possible changes in data consistency and, if not resolved, could introduce additional errors and biases. In this research, various measurement stations in the UK, New Zealand and Belgium are selected based on their significant historical ratings information and their specific characteristics related to data consistency. For each country, regional information is maximally used to estimate observational uncertainty. Based on this uncertainty, a BReach analysis is performed and, subsequently, results are validated against available knowledge about the history and behavior of the site. For all investigated cases, the methodology provides results that appear to be consistent with this knowledge of historical changes and thus facilitates a reliable assessment of (in)consistent periods in stage-discharge measurements. This assessment is not only useful for the analysis and determination of discharge time series, but also to enhance applications based on these data (e.g., by informing hydrological and hydraulic model

  20. LNOx Estimates Directly from LIS Data

    NASA Astrophysics Data System (ADS)

    Koshak, W. J.; Vant-hull, B.; McCaul, E.

    2014-12-01

    Nitrogen oxides (NOx = NO + NO2) are known to indirectly influence climate since they affect the concentration of both atmospheric ozone (O3) and hydroxyl radicals (OH). In addition, lightning NOx (LNOx) is the most important source of NOx in the upper troposphere (particularly in the tropics). It is difficult to estimate LNOx because it is not easy to make measurements near the lightning channel, and the various NOx-producing mechanisms within a lightning flash are not fully understood. A variety of methods have been used to estimate LNOx production [e.g., in-situ observations, combined ground-based VHF lightning mapping and VLF/LF lightning locating observations, indirect retrievals using satellite Ozone Monitoring Instrument (OMI) observations, theoretical considerations, laboratory spark measurements, and rocket-triggered lightning measurements]. The present study introduces a new approach for estimating LNOx that employs Lightning Imaging Sensor (LIS) data. LIS optical measurements are used to directly estimate the total energy of a flash; the total flash energy is then converted to LNOx production (in moles) by multiplying by a thermochemical yield. Hence, LNOx estimates on a flash-by-flash basis are obtained. A Lightning NOx Indicator (LNI) is computed by summing up the LIS-derived LNOx contributions from a region over a particular analysis period. Larger flash optical areas are consistent with longer channel length and/or more energetic channels, and hence more NOx production. Brighter flashes are consistent with more energetic channels, and hence more NOx production. The location of the flash within the thundercloud and the optical scattering characteristics of the thundercloud are complicating factors. LIS data for the years 2003-2013 were analyzed, and geographical plots of the time evolution of the LNI over the southern tier states (i.e., up to 38° N) of CONUS were determined. Overall, the LNI trends downward over the 11-year analysis period. The LNI has
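
    The flash-by-flash conversion is simple arithmetic once the total flash energy and the thermochemical yield are in hand. The magnitudes below are hypothetical, chosen only to be of plausible order (the LNOx literature often quotes yields around 10^17 NO molecules per joule, i.e. of order 10^-7 mol/J).

      # Hypothetical flash: LIS-derived optical energy scaled to total flash
      # energy E (J), times an assumed thermochemical yield (mol NOx per J)
      E = 6.7e8                    # J, total flash energy (hypothetical)
      yield_mol_per_J = 1.0e-7     # assumed thermochemical yield
      print(E * yield_mol_per_J)   # ~67 mol NOx for this flash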

  1. MIMO channel estimation and evaluation for airborne traffic surveillance in cellular networks

    NASA Astrophysics Data System (ADS)

    Vahidi, Vahid; Saberinia, Ebrahim

    2018-01-01

    A channel estimation (CE) procedure based on compressed sensing is proposed to estimate the multiple-input multiple-output sparse channel for traffic data transmission from drones to ground stations. The proposed procedure consists of an offline phase and a real-time phase. In the offline phase, a pilot arrangement method, which considers the interblock and block mutual coherence simultaneously, is proposed. The real-time phase contains three steps. In the first step, it obtains an a priori estimate of the channel by block orthogonal matching pursuit; afterward, it utilizes that estimated channel to calculate the linear minimum mean square error of the received pilots. Finally, the block compressive sampling matching pursuit utilizes the enhanced received pilots to estimate the channel more accurately. The performance of the CE procedure is evaluated by simulating the transmission of traffic data through the communication channel and evaluating its fidelity for car detection after demodulation. Simulation results indicate that the proposed CE technique considerably enhances the performance of car detection in a traffic image.

  2. Electrofishing distance needed to estimate consistent Index of Biotic Integrity (IBI) scores in raftable Oregon rivers

    EPA Science Inventory

    An important issue surrounding assessment of riverine fish assemblages is the minimum amount of sampling distance needed to adequately determine biotic condition. Determining adequate sampling distance is important because sampling distance affects estimates of fish assemblage c...

  3. Effect of survey design and catch rate estimation on total catch estimates in Chinook salmon fisheries

    USGS Publications Warehouse

    McCormick, Joshua L.; Quist, Michael C.; Schill, Daniel J.

    2012-01-01

    Roving–roving and roving–access creel surveys are the primary techniques used to obtain information on harvest of Chinook salmon Oncorhynchus tshawytscha in Idaho sport fisheries. Once interviews are conducted using roving–roving or roving–access survey designs, mean catch rate can be estimated with the ratio-of-means (ROM) estimator, the mean-of-ratios (MOR) estimator, or the MOR estimator with exclusion of short-duration (≤0.5 h) trips. Our objective was to examine the relative bias and precision of total catch estimates obtained from use of the two survey designs and three catch rate estimators for Idaho Chinook salmon fisheries. Information on angling populations was obtained by direct visual observation of portions of Chinook salmon fisheries in three Idaho river systems over an 18-d period. Based on data from the angling populations, Monte Carlo simulations were performed to evaluate the properties of the catch rate estimators and survey designs. Among the three estimators, the ROM estimator provided the most accurate and precise estimates of mean catch rate and total catch for both roving–roving and roving–access surveys. On average, the root mean square error of simulated total catch estimates was 1.42 times greater and relative bias was 160.13 times greater for roving–roving surveys than for roving–access surveys. Length-of-stay bias and nonstationary catch rates in roving–roving surveys both appeared to affect catch rate and total catch estimates. Our results suggest that use of the ROM estimator in combination with an estimate of angler effort provided the least biased and most precise estimates of total catch for both survey designs. However, roving–access surveys were more accurate than roving–roving surveys for Chinook salmon fisheries in Idaho.
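
    The three catch-rate estimators compared above differ only in how trips are weighted, which a tiny sketch makes concrete; the interview data below are hypothetical.

      import numpy as np

      # Hypothetical completed-trip interviews: hours fished and fish caught
      hours = np.array([0.5, 1.0, 2.0, 4.0, 6.0])
      catch = np.array([1,   0,   1,   3,   4  ])

      rom = catch.sum() / hours.sum()        # ratio of means: effort-weighted
      mor = np.mean(catch / hours)           # mean of ratios: trip-weighted
      mor_trim = np.mean((catch / hours)[hours > 0.5])   # excluding short trips

      print(rom, mor, mor_trim)   # short trips can inflate the MOR estimate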

  4. Surface Estimation, Variable Selection, and the Nonparametric Oracle Property

    PubMed Central

    Storlie, Curtis B.; Bondell, Howard D.; Reich, Brian J.; Zhang, Hao Helen

    2010-01-01

    Variable selection for multivariate nonparametric regression is an important, yet challenging, problem due, in part, to the infinite dimensionality of the function space. An ideal selection procedure should be automatic, stable, easy to use, and have desirable asymptotic properties. In particular, we define a selection procedure to be nonparametric oracle (np-oracle) if it consistently selects the correct subset of predictors and at the same time estimates the smooth surface at the optimal nonparametric rate, as the sample size goes to infinity. In this paper, we propose a model selection procedure for nonparametric models, and explore the conditions under which the new method enjoys the aforementioned properties. Developed in the framework of smoothing spline ANOVA, our estimator is obtained via solving a regularization problem with a novel adaptive penalty on the sum of functional component norms. Theoretical properties of the new estimator are established. Additionally, numerous simulated and real examples further demonstrate that the new approach substantially outperforms other existing methods in the finite sample setting. PMID:21603586

  5. Balancing Score Adjusted Targeted Minimum Loss-based Estimation

    PubMed Central

    Lendle, Samuel David; Fireman, Bruce; van der Laan, Mark J.

    2015-01-01

    Adjusting for a balancing score is sufficient for bias reduction when estimating causal effects including the average treatment effect and effect among the treated. Estimators that adjust for the propensity score in a nonparametric way, such as matching on an estimate of the propensity score, can be consistent when the estimated propensity score is not consistent for the true propensity score but converges to some other balancing score. We call this property the balancing score property, and discuss a class of estimators that have this property. We introduce a targeted minimum loss-based estimator (TMLE) for a treatment-specific mean with the balancing score property that is additionally locally efficient and doubly robust. We investigate the new estimator’s performance relative to other estimators, including another TMLE, a propensity score matching estimator, an inverse probability of treatment weighted estimator, and a regression-based estimator in simulation studies. PMID:26561539

  6. Simulating the Surface Relief of Nanoaerosols Obtained via the Rapid Cooling of Droplets

    NASA Astrophysics Data System (ADS)

    Tovbin, Yu. K.; Zaitseva, E. S.; Rabinovich, A. B.

    2018-03-01

    An approach is formulated that theoretically describes the structure of a rough surface of small aerosol particles obtained from a liquid droplet upon its rapid cooling. The problem consists of two stages. In the first stage, a concentration profile of the droplet-vapor transition region is calculated. In the second stage, local fractions of vacant sites and their pairs are found on the basis of this profile, and the rough structure of a frozen droplet surface transitioning to the solid state is calculated. Model parameters are the temperature of the initial droplet and those of the lateral interaction between droplet atoms. Information on vacant sites inside the region of transition allows us to identify adsorption centers and estimate the monolayer capacity, compared to that of the total space of the region of transition. The approach is oriented toward calculating adsorption isotherms on real surfaces.

  7. Nonparametric estimation of the multivariate survivor function: the multivariate Kaplan-Meier estimator.

    PubMed

    Prentice, Ross L; Zhao, Shanshan

    2018-01-01

    The Dabrowska (Ann Stat 16:1475-1489, 1988) product integral representation of the multivariate survivor function is extended, leading to a nonparametric survivor function estimator for an arbitrary number of failure time variates that has a simple recursive formula for its calculation. Empirical process methods are used to sketch proofs for this estimator's strong consistency and weak convergence properties. Summary measures of pairwise and higher-order dependencies are also defined and nonparametrically estimated. Simulation evaluation is given for the special case of three failure time variates.
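
    The univariate product-limit estimator that the Dabrowska representation generalizes has a simple recursive form, S(t) = product over event times t_i <= t of (1 - d_i/n_i). A minimal sketch with hypothetical data follows; the multivariate estimator of the paper is well beyond this building block.

      import numpy as np

      def kaplan_meier(times, events):
          """Univariate product-limit estimator; events=1 for failure,
          0 for right-censoring.  Returns S(t) at each distinct event time."""
          times, events = np.asarray(times), np.asarray(events)
          surv = {}
          s = 1.0
          for t in np.unique(times[events == 1]):
              n_at_risk = np.sum(times >= t)
              d = np.sum((times == t) & (events == 1))
              s *= 1 - d / n_at_risk
              surv[t] = s
          return surv

      print(kaplan_meier([2, 3, 3, 5, 8, 9], [1, 1, 0, 1, 0, 1]))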

  8. Estimating the value of life and injury for pedestrians using a stated preference framework.

    PubMed

    Niroomand, Naghmeh; Jenkins, Glenn P

    2017-09-01

    The incidence of pedestrian death over the period 2010 to 2014 per 1,000,000 in North Cyprus is about 2.5 times that of the EU, with 10.5 times more pedestrian road injuries than deaths. With the prospect of North Cyprus entering the EU, many investments need to be undertaken to improve road safety in order to reach EU benchmarks. We conducted a stated choice experiment to identify the preferences and tradeoffs of pedestrians in North Cyprus for improved walking times, pedestrian costs, and safety. The choice of route was examined using mixed logit models to obtain the marginal utilities associated with each attribute of the routes that consumers chose. These were used to estimate the individuals' willingness to pay (WTP) to save walking time and to avoid pedestrian fatalities and injuries. We then used the results to obtain community-wide estimates of the value of a statistical life (VSL) saved, the value of an injury (VI) prevented, and the value per hour of walking time saved. The estimate of the VSL was €699,434 and the estimate of VI was €20,077. These values are consistent, after adjusting for differences in incomes, with the median results of similar studies done for EU countries. The estimated value of time to pedestrians is €7.20 per person-hour. The ratio of deaths to injuries is much higher for pedestrians than for road accidents generally, and this is completely consistent with the higher estimated WTP to avoid a pedestrian accident than to avoid a car accident. The value of time of €7.20 is quite high relative to the wages earned. Findings provide a set of information on the value of risk reduction (VRR) for fatalities and injuries and the value of pedestrian time that is critical for conducting ex ante appraisals of investments to improve pedestrian safety. Copyright © 2017 National Safety Council and Elsevier Ltd. All rights reserved.
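
    In such mixed logit studies, WTP measures are marginal rates of substitution: the ratio of an attribute's marginal utility to the cost coefficient. The coefficients below are hypothetical, chosen only so that the time value reproduces the €7.20 per hour figure reported above.

      # Hypothetical mixed-logit marginal utilities (signs as usually estimated)
      beta_cost = -0.45      # per euro of route cost
      beta_time = -0.054     # per minute of walking time
      beta_risk = -5.2       # per unit of route fatality risk (hypothetical scale)

      # Willingness to pay = marginal rate of substitution against cost
      value_of_time = beta_time / beta_cost * 60      # euro per hour
      value_per_unit_risk = beta_risk / beta_cost     # euro per unit risk change

      print(round(value_of_time, 2), round(value_per_unit_risk, 2))   # 7.2 ...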

  9. A Height Estimation Approach for Terrain Following Flights from Monocular Vision.

    PubMed

    Campos, Igor S G; Nascimento, Erickson R; Freitas, Gustavo M; Chaimowicz, Luiz

    2016-12-06

In this paper, we present a monocular vision-based height estimation algorithm for terrain following flights. The impressive growth of Unmanned Aerial Vehicle (UAV) usage, notably in mapping applications, will soon require new technologies that enable these systems to better perceive their surroundings. Specifically, we chose to tackle the terrain following problem, as it remains unresolved for consumer-available systems. Virtually every mapping aircraft carries a camera; therefore, we chose to exploit this in order to use presently available hardware to extract the height information needed to perform terrain following flights. The proposed methodology consists of using optical flow to track features from videos obtained by the UAV, together with its motion information, to estimate the flying height. To determine whether the height estimation is reliable, we trained a decision tree that takes the optical flow information as input and classifies whether the output is trustworthy or not. The classifier achieved accuracies of 80% for positives and 90% for negatives, while the height estimation algorithm presented good accuracy.

  10. A Height Estimation Approach for Terrain Following Flights from Monocular Vision

    PubMed Central

    Campos, Igor S. G.; Nascimento, Erickson R.; Freitas, Gustavo M.; Chaimowicz, Luiz

    2016-01-01

In this paper, we present a monocular vision-based height estimation algorithm for terrain following flights. The impressive growth of Unmanned Aerial Vehicle (UAV) usage, notably in mapping applications, will soon require new technologies that enable these systems to better perceive their surroundings. Specifically, we chose to tackle the terrain following problem, as it remains unresolved for consumer-available systems. Virtually every mapping aircraft carries a camera; therefore, we chose to exploit this in order to use presently available hardware to extract the height information needed to perform terrain following flights. The proposed methodology consists of using optical flow to track features from videos obtained by the UAV, together with its motion information, to estimate the flying height. To determine whether the height estimation is reliable, we trained a decision tree that takes the optical flow information as input and classifies whether the output is trustworthy or not. The classifier achieved accuracies of 80% for positives and 90% for negatives, while the height estimation algorithm presented good accuracy. PMID:27929424

  11. Estimation of base temperatures for nine weed species.

    PubMed

    Steinmaus, S J; Prather, T S; Holt, J S

    2000-02-01

    Experiments were conducted to test several methods for estimating low temperature thresholds for seed germination. Temperature responses of nine weeds common in annual agroecosystems were assessed in temperature gradient experiments. Species included summer annuals (Amaranthus albus, A. palmeri, Digitaria sanguinalis, Echinochloa crus-galli, Portulaca oleracea, and Setaria glauca), winter annuals (Hirschfeldia incana and Sonchus oleraceus), and Conyza canadensis, which is classified as a summer or winter annual. The temperature below which development ceases (Tbase) was estimated as the x-intercept of four conventional germination rate indices regressed on temperature, by repeated probit analysis, and by a mathematical approach. An overall Tbase estimate for each species was the average across indices weighted by the reciprocal of the variance associated with the estimate. Germination rates increased linearly with temperature between 15 degrees C and 30 degrees C for all species. Consistent estimates of Tbase were obtained for most species using several indices. The most statistically robust and biologically relevant method was the reciprocal time to median germination, which can also be used to estimate other biologically meaningful parameters. The mean Tbase for summer annuals (13.8 degrees C) was higher than that for winter annuals (8.3 degrees C). The two germination response characteristics, Tbase and slope (rate), influence a species' germination behaviour in the field since the germination inhibiting effects of a high Tbase may be offset by the germination promoting effects of a rapid germination response to temperature. Estimates of Tbase may be incorporated into predictive thermal time models to assist weed control practitioners in making management decisions.
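
    The x-intercept method amounts to a linear fit of germination rate on temperature and extrapolation to zero rate. A small sketch with hypothetical data, using the reciprocal time to median germination favored by the authors as the rate index:

    ```python
    import numpy as np

    # Hypothetical germination-rate data along a temperature gradient
    temp = np.array([15.0, 18.0, 21.0, 24.0, 27.0, 30.0])  # degrees C
    rate = np.array([0.02, 0.05, 0.08, 0.11, 0.14, 0.17])  # 1 / days to median germination

    slope, intercept = np.polyfit(temp, rate, 1)  # linear fit: rate = slope*T + intercept
    t_base = -intercept / slope                   # x-intercept: rate extrapolates to zero
    print(f"Tbase ~ {t_base:.1f} degrees C")
    ```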

  12. Guided color consistency optimization for image mosaicking

    NASA Astrophysics Data System (ADS)

    Xie, Renping; Xia, Menghan; Yao, Jian; Li, Li

    2018-01-01

This paper studies the problem of color consistency correction for sequential images with diverse color characteristics. Existing algorithms adjust all images to minimize color differences under a unified energy framework; however, the results are prone to a consistent but unnatural appearance when the color differences between images are large and diverse. In our approach, this problem is addressed by providing a guided initial solution for the global consistency optimization, which avoids convergence to a meaningless integrated solution. First, to obtain reliable intensity correspondences in the overlapping regions between image pairs, we propose a histogram extreme-point matching algorithm that is robust to some degree of geometric misalignment between images. In the absence of external reference information, the guided initial solution is learned from the major tone of the original images by selecting an image subset as the reference, whose color characteristics are transferred to the other images along paths obtained from a graph analysis. The final results of the global adjustment thus take on a consistent color similar to the appearance of the reference image subset. Several groups of convincing experiments on both a synthetic dataset and challenging real ones demonstrate that the proposed approach achieves results as good as or better than state-of-the-art approaches.

  13. On-line, adaptive state estimator for active noise control

    NASA Technical Reports Server (NTRS)

    Lim, Tae W.

    1994-01-01

Dynamic characteristics of airframe structures are expected to vary as aircraft flight conditions change. Accurate knowledge of the changing dynamic characteristics is crucial to enhancing the performance of an active noise control system using feedback control. This research investigates the development of an adaptive, on-line state estimator using a neural network concept to conduct active noise control. An algorithm has been developed that estimates displacement and velocity responses at any location on the structure from a limited number of acceleration measurements and input force information. The algorithm employs band-pass filters to extract from the measurement signal the frequency content corresponding to a desired mode. The filtered signal is then used to train a neural network consisting of a linear neuron with three weights. The structure of the neural network is kept as simple as possible so that the sampling frequency can be made as high as possible. The weights obtained through training are used to construct the z-domain transfer function of a mode and to identify the modal properties of each mode. By using the identified transfer function and interpolating the mode shapes obtained at sensor locations, the displacement and velocity responses are estimated with reasonable accuracy at any location on the structure. The accuracy of the response estimates depends on the number of modes incorporated in the estimates and the number of sensors employed for mode shape interpolation. Computer simulation demonstrates that the algorithm is capable of adapting to varying structural dynamic characteristics. Experimental implementation of the algorithm on a DSP (digital signal processing) board for a plate structure is underway. The algorithm is expected to reach the sampling frequency range of about 10 kHz to 20 kHz, which needs to be maintained for a typical active noise control system.
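
    As a rough illustration of the idea (my construction, not the paper's code), a three-weight linear neuron fitted by a normalized-LMS rule can identify a second-order z-domain model of one band-passed mode, y[n] ≈ w1·y[n-1] + w2·y[n-2] + w3·u[n]:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n = 5000
    u = rng.standard_normal(n)      # input force (white noise)
    y = np.zeros(n)
    for k in range(2, n):           # synthetic, lightly damped mode
        y[k] = 1.8 * y[k - 1] - 0.97 * y[k - 2] + 0.1 * u[k]

    w = np.zeros(3)                 # the three weights of the linear neuron
    for k in range(2, n):
        x = np.array([y[k - 1], y[k - 2], u[k]])  # regressor
        e = y[k] - w @ x                          # prediction error
        w += 0.5 * e * x / (x @ x + 1e-8)         # normalized-LMS update

    print("identified (a1, a2, b0):", np.round(w, 3))  # approaches (1.8, -0.97, 0.1)
    ```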

  14. Consistency analysis and correction of ground-based radar observations using space-borne radar

    NASA Astrophysics Data System (ADS)

    Zhang, Shuai; Zhu, Yiqing; Wang, Zhenhui; Wang, Yadong

    2018-04-01

The lack of an accurate determination of the radar constant can introduce biases into ground-based radar (GR) reflectivity factor data and lead to poor consistency of radar observations. The geometry-matching method was applied to spatially match radar data from the Precipitation Radar (PR) on board the Tropical Rainfall Measuring Mission (TRMM) satellite to observations from a GR deployed at Nanjing, China, in their effective sampling volume, with 250 match-up cases obtained from January 2008 to October 2013. The consistency of the GR was evaluated with reference to the TRMM PR, whose stability is established. The results show that the below-bright-band-height data of the Nanjing radar can be split into three periods: Period I from January 2008 to March 2010, Period II from March 2010 to May 2013, and Period III from May 2013 to October 2013. There are distinct differences in overall reflectivity factor between the three periods; the overall reflectivity factor in Period II is more than 3 dB lower than in Periods I and III, although the overall reflectivity within each period remains relatively stable. Further investigation shows that in Period II the difference between the GR and PR observations changed with echo intensity. A best-fit relation between the two radar reflectivity factors provides a linear correction that is applied to the reflectivity of the Nanjing radar and is effective in improving its consistency. Rain-gauge data were used to verify the correction, and the estimated precipitation based on the corrected GR reflectivity data was closer to the rain-gauge observations than that without correction.
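
    The correction step itself is a simple linear map estimated from the matched samples. A sketch with invented reflectivity pairs standing in for the geometry-matched data:

    ```python
    import numpy as np

    gr = np.array([18.0, 22.5, 25.1, 28.4, 31.0, 34.6])  # GR dBZ (invented)
    pr = np.array([21.3, 25.4, 27.6, 30.9, 33.2, 36.5])  # matched PR dBZ (invented)

    a, b = np.polyfit(gr, pr, 1)   # best-fit relation: pr ~ a*gr + b
    gr_corrected = a * gr + b      # map the GR data onto the PR reference
    print(f"correction: Z_corr = {a:.2f} * Z_gr + {b:.2f}")
    ```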

  15. Compound estimation procedures in reliability

    NASA Technical Reports Server (NTRS)

    Barnes, Ron

    1990-01-01

    At NASA, components and subsystems of components in the Space Shuttle and Space Station generally go through a number of redesign stages. While data on failures for various design stages are sometimes available, the classical procedures for evaluating reliability only utilize the failure data on the present design stage of the component or subsystem. Often, few or no failures have been recorded on the present design stage. Previously, Bayesian estimators for the reliability of a single component, conditioned on the failure data for the present design, were developed. These new estimators permit NASA to evaluate the reliability, even when few or no failures have been recorded. Point estimates for the latter evaluation were not possible with the classical procedures. Since different design stages of a component (or subsystem) generally have a good deal in common, the development of new statistical procedures for evaluating the reliability, which consider the entire failure record for all design stages, has great intuitive appeal. A typical subsystem consists of a number of different components and each component has evolved through a number of redesign stages. The present investigations considered compound estimation procedures and related models. Such models permit the statistical consideration of all design stages of each component and thus incorporate all the available failure data to obtain estimates for the reliability of the present version of the component (or subsystem). A number of models were considered to estimate the reliability of a component conditioned on its total failure history from two design stages. It was determined that reliability estimators for the present design stage, conditioned on the complete failure history for two design stages have lower risk than the corresponding estimators conditioned only on the most recent design failure data. Several models were explored and preliminary models involving bivariate Poisson distribution and the

  16. A consistent NPMLE of the joint distribution function with competing risks data under the dependent masking and right-censoring model.

    PubMed

    Li, Jiahui; Yu, Qiqing

    2016-01-01

Dinse (Biometrics, 38:417-431, 1982) provides a special type of right-censored and masked competing risks data and proposes a non-parametric maximum likelihood estimator (NPMLE) and a pseudo MLE of the joint distribution function [Formula: see text] with such data. However, their asymptotic properties have not been studied so far. Under the extension of either the conditional masking probability (CMP) model or the random partition masking (RPM) model (Yu and Li, J Nonparametr Stat 24:753-764, 2012), we show that (1) Dinse's estimators are consistent if [Formula: see text] takes on finitely many values and each point in the support set of [Formula: see text] can be observed; (2) if the failure time is continuous, the NPMLE is not uniquely determined, and the standard approach (which puts weights only on one element in each observed set) leads to an inconsistent NPMLE; (3) in general, Dinse's estimators are not consistent even under the discrete assumption; (4) we construct a consistent NPMLE. The consistency is given under a new model called the dependent masking and right-censoring model. The CMP model and the RPM model are indeed special cases of the new model. We compare our estimator to Dinse's estimators through simulation and real data. The simulation study indicates that the consistent NPMLE is a good approximation to the underlying distribution for moderate sample sizes.

  17. Optimal Bandwidth for Multitaper Spectrum Estimation

    DOE PAGES

    Haley, Charlotte L.; Anitescu, Mihai

    2017-07-04

A systematic method for bandwidth parameter selection is desired for Thomson multitaper spectrum estimation. We give a method for determining the optimal bandwidth based on a mean squared error (MSE) criterion. When the true spectrum has a second-order Taylor series expansion, one can express the quadratic local bias as a function of the curvature of the spectrum, which can be estimated by using a simple spline approximation. This is combined with a variance estimate, obtained by jackknifing over individual spectrum estimates, to produce an estimated MSE of the log spectrum estimate for each choice of time-bandwidth product. The bandwidth that minimizes the estimated MSE then gives the desired spectrum estimate. Additionally, the bandwidth obtained using our method is also optimal for cepstrum estimates. We give an example of a damped oscillatory (Lorentzian) process in which the approximate optimal bandwidth can be written as a function of the damping parameter. The true optimal bandwidth agrees well with that given by minimizing the estimated MSE in these examples.
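
    A sketch of two of the building blocks, assuming SciPy's DPSS tapers: eigenspectra are averaged into the multitaper estimate, and a leave-one-taper-out jackknife supplies the variance term. The squared-bias term (from the spline fit to the spectrum's curvature) is omitted here, and the bandwidth search is indicated only in a comment.

    ```python
    import numpy as np
    from scipy.signal.windows import dpss

    def multitaper_psd(x, nw, fs=1.0):
        """Multitaper PSD with K = 2*NW - 1 tapers and a jackknife
        variance of the log spectrum (a sketch; no bias term)."""
        n = len(x)
        k = int(2 * nw) - 1
        tapers = dpss(n, nw, Kmax=k)                       # shape (k, n)
        eig = np.abs(np.fft.rfft(tapers * x, axis=1))**2 / fs
        psd = eig.mean(axis=0)
        loo = np.array([np.delete(eig, i, axis=0).mean(axis=0)
                        for i in range(k)])                # leave-one-out means
        var_log = (k - 1) / k * ((np.log(loo) - np.log(loo).mean(0))**2).sum(0)
        return np.fft.rfftfreq(n, d=1/fs), psd, var_log

    # Bandwidth selection would loop over nw, add an estimated squared
    # bias to var_log, and keep the nw minimizing the estimated MSE.
    x = np.random.default_rng(1).standard_normal(1024)
    freqs, psd, var_log = multitaper_psd(x, nw=4.0)
    ```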

  18. Testing a Nursing-Specific Model of Electronic Patient Record documentation with regard to information completeness, comprehensiveness and consistency.

    PubMed

    von Krogh, Gunn; Nåden, Dagfinn; Aasland, Olaf Gjerløw

    2012-10-01

To present the results from the test-site application of the documentation model KPO (quality assurance, problem solving and caring), designed to improve the quality of nursing information in the electronic patient record (EPR). The KPO model was developed by means of a consensus group and clinical testing. Four documentation arenas and eight content categories, nursing terminologies and a decision-support system were designed to improve the completeness, comprehensiveness and consistency of nursing information. The testing was performed in a pre-test/post-test time series design, three times at one-year intervals. Content analysis of the nursing documentation was accomplished through the identification, interpretation and coding of information units. Data from the pre-test and post-test 2 were subjected to statistical analyses. To estimate the differences, paired t-tests were used. At post-test 2, the information is found to be more complete, comprehensive and consistent than at pre-test. The findings indicate that documentation arenas combining work flow and content categories deduced from theories of nursing practice can influence the quality of nursing information. The KPO model can be used as a guide when shifting from paper-based to electronic-based nursing documentation with the aim of obtaining complete, comprehensive and consistent nursing information. © 2012 Blackwell Publishing Ltd.

  19. Estimation and Selection via Absolute Penalized Convex Minimization And Its Multistage Adaptive Applications

    PubMed Central

    Huang, Jian; Zhang, Cun-Hui

    2013-01-01

    The ℓ1-penalized method, or the Lasso, has emerged as an important tool for the analysis of large data sets. Many important results have been obtained for the Lasso in linear regression which have led to a deeper understanding of high-dimensional statistical problems. In this article, we consider a class of weighted ℓ1-penalized estimators for convex loss functions of a general form, including the generalized linear models. We study the estimation, prediction, selection and sparsity properties of the weighted ℓ1-penalized estimator in sparse, high-dimensional settings where the number of predictors p can be much larger than the sample size n. Adaptive Lasso is considered as a special case. A multistage method is developed to approximate concave regularized estimation by applying an adaptive Lasso recursively. We provide prediction and estimation oracle inequalities for single- and multi-stage estimators, a general selection consistency theorem, and an upper bound for the dimension of the Lasso estimator. Important models including the linear regression, logistic regression and log-linear models are used throughout to illustrate the applications of the general results. PMID:24348100
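
    One common way to implement the weighted ℓ1 penalty is to rescale the design columns by the adaptive weights and run an ordinary Lasso; a minimal two-stage sketch (data and tuning values are illustrative):

    ```python
    import numpy as np
    from sklearn.linear_model import Lasso, LinearRegression

    rng = np.random.default_rng(0)
    n, p = 100, 20
    X = rng.standard_normal((n, p))
    beta = np.zeros(p)
    beta[:3] = [3.0, -2.0, 1.5]                   # sparse truth
    y = X @ beta + rng.standard_normal(n)

    # Stage 1: initial estimate (here OLS; ridge or Lasso also appear)
    b_init = LinearRegression().fit(X, y).coef_
    w = 1.0 / (np.abs(b_init) + 1e-6)             # adaptive weights

    # Stage 2: weighted l1 penalty == plain Lasso on rescaled columns
    fit = Lasso(alpha=0.1).fit(X / w, y)
    b_adaptive = fit.coef_ / w                    # back to the original scale
    print(np.round(b_adaptive, 2))
    ```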

  20. Oil and gas reserves estimates

    USGS Publications Warehouse

    Harrell, R.; Gajdica, R.; Elliot, D.; Ahlbrandt, T.S.; Khurana, S.

    2005-01-01

    This article is a summary of a panel session at the 2005 Offshore Technology Conference. Oil and gas reserves estimates are further complicated with the expanding importance of the worldwide deepwater arena. These deepwater reserves can be analyzed, interpreted, and conveyed in a consistent, reliable way to investors and other stakeholders. Continually improving technologies can lead to improved estimates of production and reserves, but the estimates are not necessarily recognized by regulatory authorities as an indicator of "reasonable certainty," a term used since 1964 to describe proved reserves in several venues. Solutions are being debated in the industry to arrive at a reporting mechanism that generates consistency and at the same time leads to useful parameters in assessing a company's value without compromising confidentiality. Copyright 2005 Offshore Technology Conference.

  1. Porous materials based on foaming solutions obtained from industrial waste

    NASA Astrophysics Data System (ADS)

    Starostina, I. V.; Antipova, A. N.; Ovcharova, I. V.; Starostina, Yu L.

    2018-03-01

This study analyzes foam concrete production efficiency. Research has shown the possibility of using a newly designed protein-based foaming agent to produce porous materials with gypsum and cement binders. The protein foaming agent is obtained by alkaline hydrolysis of a raw mixture of industrial waste in an electromagnetic field; the mixture consists of spent biomass of the Aspergillus niger fungus and dust from the burning furnaces used in cement production. Varying the content of the foaming agent allows obtaining gypsum-binder-based foam concretes with densities of 200-500 kg/m3 and compressive strengths of 0.1-1.0 MPa, which can be used for the thermal and sound insulation of building interiors. Cement binders were used to obtain structural and thermal insulation materials with densities of 300-950 kg/m3 and compressive strengths of 0.9-9.0 MPa. The maximum operating temperature of cement-based foam concretes is 500°C, at which shrinkage remains below 2%.

  2. Reducing uncertainty and increasing consistency: technical improvements to forest carbon pool estimation using the national forest inventory of the US

    Treesearch

    C.W. Woodall; G.M. Domke; J. Coulston; M.B. Russell; J.A. Smith; C.H. Perry; S.M. Ogle; S. Healey; A. Gray

    2015-01-01

    The FIA program does not directly measure forest C stocks. Instead, a combination of empirically derived C estimates (e.g., standing live and dead trees) and models (e.g., understory C stocks related to stand age and forest type) are used to estimate forest C stocks. A series of recent refinements in FIA estimation procedures have sought to reduce the uncertainty...

  3. Comparison of Brownian-dynamics-based estimates of polymer tension with direct force measurements.

    PubMed

    Arsenault, Mark E; Purohit, Prashant K; Goldman, Yale E; Shuman, Henry; Bau, Haim H

    2010-11-01

With the aid of Brownian dynamics models, it is possible to estimate polymer tension by monitoring polymers' transverse thermal fluctuations. To assess the precision of the approach, Brownian-dynamics-based tension estimates were compared with the force applied to rhodamine-phalloidin-labeled actin filaments bound to polymer beads and suspended between two optical traps. The transverse thermal fluctuations of each filament were monitored with a CCD camera, and the images were analyzed to obtain the filament's transverse displacement variance as a function of position along the filament, the filament's tension, and the camera's exposure time. A linear Brownian dynamics model was used to estimate the filament's tension. The estimated force agreed with the applied trap force to within 30% (when the tension < 0.1 pN) and within 70% (when the tension < 1 pN). In addition, the paper presents concise asymptotic expressions for the mechanical compliance of a system consisting of a filament attached tangentially to bead handles (a dumbbell system). The techniques described here can be used for noncontact estimates of the tension of polymers and fibers.
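
    In the ideal-string limit (no bending stiffness), equipartition gives <y^2(x)> = kB*T*x*(L - x)/(F*L) for a filament of length L pinned at both ends under tension F, so the tension can be read off the measured variance profile. A self-contained sketch with hypothetical values, not the paper's full Brownian dynamics model:

    ```python
    import numpy as np

    kB_T = 4.11e-21                   # J, thermal energy near room temperature
    L = 10e-6                         # filament length in m (hypothetical)
    F_true = 0.5e-12                  # 0.5 pN, chosen for illustration

    x = np.linspace(1e-6, 9e-6, 9)    # stations along the filament
    var_y = kB_T * x * (L - x) / (F_true * L)   # stand-in "measured" profile

    F_est = kB_T * x * (L - x) / (L * var_y)    # invert the relation at each station
    print(f"estimated tension ~ {F_est.mean() / 1e-12:.2f} pN")
    ```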

  4. A self-consistency approach to improve microwave rainfall rate estimation from space

    NASA Technical Reports Server (NTRS)

    Kummerow, Christian; Mack, Robert A.; Hakkarinen, Ida M.

    1989-01-01

    A multichannel statistical approach is used to retrieve rainfall rates from the brightness temperature T(B) observed by passive microwave radiometers flown on a high-altitude NASA aircraft. T(B) statistics are based upon data generated by a cloud radiative model. This model simulates variabilities in the underlying geophysical parameters of interest, and computes their associated T(B) in each of the available channels. By further imposing the requirement that the observed T(B) agree with the T(B) values corresponding to the retrieved parameters through the cloud radiative transfer model, the results can be made to agree quite well with coincident radar-derived rainfall rates. Some information regarding the cloud vertical structure is also obtained by such an added requirement. The applicability of this technique to satellite retrievals is also investigated. Data which might be observed by satellite-borne radiometers, including the effects of nonuniformly filled footprints, are simulated by the cloud radiative model for this purpose.

  5. The reliability and internal consistency of one-shot and flicker change detection for measuring individual differences in visual working memory capacity.

    PubMed

    Pailian, Hrag; Halberda, Justin

    2015-04-01

We investigated the psychometric properties of the one-shot change detection task for estimating visual working memory (VWM) storage capacity, and also introduced and tested an alternative flicker change detection task for estimating these limits. In three experiments, we found that the one-shot whole-display task returns estimates of VWM storage capacity (K) that are unreliable across set sizes, suggesting that the whole-display task measures different things at different set sizes. In two additional experiments, we found that the one-shot single-probe variant shows improvements in the reliability and consistency of K estimates. In another experiment, we found that a one-shot whole-display-with-click task (requiring target localization) also showed improvements in reliability and consistency. The latter results suggest that the one-shot task can return reliable and consistent estimates of VWM storage capacity (K), and they highlight the possibility that the requirement to localize the changed target is what engenders this enhancement. In a final series of four experiments, we introduced and tested an alternative flicker change detection method that also requires the observer to localize the changing target and that generates, from response times, an estimate of VWM storage capacity (K). We found that estimates of K from the flicker task correlated with estimates from the traditional one-shot task and also had high reliability and consistency. We highlight the flicker method's ability to estimate executive functions as well as VWM storage capacity, and discuss the potential for measuring multiple abilities with the one-shot and flicker tasks.
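
    Capacity estimates from one-shot tasks are conventionally computed with Pashler's formula for whole-display designs and Cowan's formula for single-probe designs; a small sketch with invented hit and false-alarm rates:

    ```python
    def cowan_k(hit_rate, fa_rate, set_size):
        """Cowan's K, the usual estimator for single-probe displays."""
        return set_size * (hit_rate - fa_rate)

    def pashler_k(hit_rate, fa_rate, set_size):
        """Pashler's K, the usual estimator for whole-display designs."""
        return set_size * (hit_rate - fa_rate) / (1.0 - fa_rate)

    print(cowan_k(0.80, 0.15, 4))    # ~2.6 items
    print(pashler_k(0.80, 0.15, 4))  # ~3.1 items
    ```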

  6. Contrasting academic and tobacco industry estimates of illicit cigarette trade: evidence from Warsaw, Poland.

    PubMed

    Stoklosa, Michal; Ross, Hana

    2014-05-01

    To compare two different methods for estimating the size of the illicit cigarette market with each other and to contrast the estimates obtained by these two methods with the results of an industry-commissioned study. We used two observational methods: collection of data from packs in smokers' personal possession, and collection of data from packs discarded on streets. The data were obtained in Warsaw, Poland in September 2011 and October 2011. We used tests of independence to compare the results based on the two methods, and to contrast those with the estimate from the industry-commissioned discarded pack collection conducted in September 2011. We found that the proportions of cigarette packs classified as not intended for the Polish market estimated by our two methods were not statistically different. These estimates were 14.6% (95% CI 10.8% to 19.4%) using the survey data (N=400) and 15.6% (95% CI 13.2% to 18.4%) using the discarded pack data (N=754). The industry estimate (22.9%) was higher by nearly a half compared with our estimates, and this difference is statistically significant. Our findings are consistent with previous evidence of the tobacco industry exaggerating the scope of illicit trade and with the general pattern of the industry manipulating evidence to mislead the debate on tobacco control policy in many countries. Collaboration between governments and the tobacco industry to estimate tobacco tax avoidance and evasion is likely to produce upward-biased estimates of illicit cigarette trade. If governments are presented with industry estimates, they should strictly require a disclosure of all methodological details and data used in generating these estimates, and should seek advice from independent experts. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://group.bmj.com/group/rights-licensing/permissions.
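
    The comparison of the two observational estimates amounts to a test of two proportions. A sketch assuming statsmodels, with counts backed out from the reported percentages (14.6% of 400 survey packs vs. 15.6% of 754 discarded packs):

    ```python
    from statsmodels.stats.proportion import proportions_ztest

    count = [58, 118]   # packs classified as not intended for the Polish market
    nobs = [400, 754]   # packs examined by each method
    stat, pval = proportions_ztest(count, nobs)
    print(f"z = {stat:.2f}, p = {pval:.3f}")  # large p: no significant difference
    ```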

  7. On-Board Event-Based State Estimation for Trajectory Approaching and Tracking of a Vehicle

    PubMed Central

    Martínez-Rey, Miguel; Espinosa, Felipe; Gardel, Alfredo; Santos, Carlos

    2015-01-01

    For the problem of pose estimation of an autonomous vehicle using networked external sensors, the processing capacity and battery consumption of these sensors, as well as the communication channel load should be optimized. Here, we report an event-based state estimator (EBSE) consisting of an unscented Kalman filter that uses a triggering mechanism based on the estimation error covariance matrix to request measurements from the external sensors. This EBSE generates the events of the estimator module on-board the vehicle and, thus, allows the sensors to remain in stand-by mode until an event is generated. The proposed algorithm requests a measurement every time the estimation distance root mean squared error (DRMS) value, obtained from the estimator's covariance matrix, exceeds a threshold value. This triggering threshold can be adapted to the vehicle's working conditions rendering the estimator even more efficient. An example of the use of the proposed EBSE is given, where the autonomous vehicle must approach and follow a reference trajectory. By making the threshold a function of the distance to the reference location, the estimator can halve the use of the sensors with a negligible deterioration in the performance of the approaching maneuver. PMID:26102489
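
    The triggering rule reduces to a threshold test on the position variances of the estimator's covariance matrix. A minimal sketch (the state ordering and numbers are assumptions, not the paper's):

    ```python
    import numpy as np

    def needs_measurement(P, threshold_m):
        """Request an external measurement when the position DRMS computed
        from covariance P exceeds the threshold; assumes P[0,0] and P[1,1]
        are the x and y position variances."""
        drms = np.sqrt(P[0, 0] + P[1, 1])
        return drms > threshold_m

    # The adaptive variant would set threshold_m as a function of the
    # distance to the reference location.
    P = np.diag([0.04, 0.09, 0.01, 0.01])          # illustrative covariance
    print(needs_measurement(P, threshold_m=0.3))   # DRMS ~ 0.36 -> True
    ```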

  8. Precision and accuracy of age estimates obtained from anal fin spines, dorsal fin spines, and sagittal otoliths for known-age largemouth bass

    USGS Publications Warehouse

    Klein, Zachary B.; Bonvechio, Timothy F.; Bowen, Bryant R.; Quist, Michael C.

    2017-01-01

    Sagittal otoliths are the preferred aging structure for Micropterus spp. (black basses) in North America because of the accurate and precise results produced. Typically, fisheries managers are hesitant to use lethal aging techniques (e.g., otoliths) to age rare species, trophy-size fish, or when sampling in small impoundments where populations are small. Therefore, we sought to evaluate the precision and accuracy of 2 non-lethal aging structures (i.e., anal fin spines, dorsal fin spines) in comparison to that of sagittal otoliths from known-age Micropterus salmoides (Largemouth Bass; n = 87) collected from the Ocmulgee Public Fishing Area, GA. Sagittal otoliths exhibited the highest concordance with true ages of all structures evaluated (coefficient of variation = 1.2; percent agreement = 91.9). Similarly, the low coefficient of variation (0.0) and high between-reader agreement (100%) indicate that age estimates obtained from sagittal otoliths were the most precise. Relatively high agreement between readers for anal fin spines (84%) and dorsal fin spines (81%) suggested the structures were relatively precise. However, age estimates from anal fin spines and dorsal fin spines exhibited low concordance with true ages. Although use of sagittal otoliths is a lethal technique, this method will likely remain the standard for aging Largemouth Bass and other similar black bass species.

  9. Believers' estimates of God's beliefs are more egocentric than estimates of other people's beliefs

    PubMed Central

    Epley, Nicholas; Converse, Benjamin A.; Delbosc, Alexa; Monteleone, George A.; Cacioppo, John T.

    2009-01-01

    People often reason egocentrically about others' beliefs, using their own beliefs as an inductive guide. Correlational, experimental, and neuroimaging evidence suggests that people may be even more egocentric when reasoning about a religious agent's beliefs (e.g., God). In both nationally representative and more local samples, people's own beliefs on important social and ethical issues were consistently correlated more strongly with estimates of God's beliefs than with estimates of other people's beliefs (Studies 1–4). Manipulating people's beliefs similarly influenced estimates of God's beliefs but did not as consistently influence estimates of other people's beliefs (Studies 5 and 6). A final neuroimaging study demonstrated a clear convergence in neural activity when reasoning about one's own beliefs and God's beliefs, but clear divergences when reasoning about another person's beliefs (Study 7). In particular, reasoning about God's beliefs activated areas associated with self-referential thinking more so than did reasoning about another person's beliefs. Believers commonly use inferences about God's beliefs as a moral compass, but that compass appears especially dependent on one's own existing beliefs. PMID:19955414

  10. Bayesian-MCMC-based parameter estimation of stealth aircraft RCS models

    NASA Astrophysics Data System (ADS)

    Xia, Wei; Dai, Xiao-Xia; Feng, Yuan

    2015-12-01

When modeling a stealth aircraft with low RCS (Radar Cross Section), conventional parameter estimation methods may cause a deviation from the actual distribution, owing to the fact that the characteristic parameters are estimated by directly calculating the statistics of the RCS. The Bayesian-Markov Chain Monte Carlo (Bayesian-MCMC) method is introduced herein to estimate the parameters and thereby improve the fitting accuracy of fluctuation models. The parameter estimations of the lognormal and the Legendre polynomial models are reformulated in the Bayesian framework. The MCMC algorithm is then adopted to calculate the parameter estimates. Numerical results show that the distribution curves obtained by the proposed method exhibit improved consistency with the actual ones, compared with those fitted by the conventional method. The fitting accuracy could be improved by no less than 25% for both fluctuation models, which implies that the Bayesian-MCMC method is a promising candidate among parameter estimation methods for stealth aircraft RCS models. Project supported by the National Natural Science Foundation of China (Grant No. 61101173), the National Basic Research Program of China (Grant No. 613206), the National High Technology Research and Development Program of China (Grant No. 2012AA01A308), the State Scholarship Fund of the China Scholarship Council (CSC), the Oversea Academic Training Funds, and the University of Electronic Science and Technology of China (UESTC).
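
    A generic random-walk Metropolis sampler for the two lognormal parameters conveys the flavor of the approach; this is a simplified sketch with flat priors and synthetic RCS data, not the paper's algorithm:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    rcs = rng.lognormal(mean=-2.0, sigma=0.7, size=500)  # synthetic RCS samples

    def log_post(mu, sigma):
        if sigma <= 0:
            return -np.inf
        z = (np.log(rcs) - mu) / sigma
        return -len(rcs) * np.log(sigma) - 0.5 * np.sum(z**2)

    theta = np.array([0.0, 1.0])                       # initial (mu, sigma)
    lp = log_post(*theta)
    chain = []
    for _ in range(20000):
        prop = theta + 0.05 * rng.standard_normal(2)   # random-walk proposal
        lp_prop = log_post(*prop)
        if np.log(rng.uniform()) < lp_prop - lp:       # Metropolis accept/reject
            theta, lp = prop, lp_prop
        chain.append(theta)

    post = np.array(chain)[5000:]                      # discard burn-in
    print("posterior mean (mu, sigma):", post.mean(axis=0).round(3))
    ```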

  11. Self-consistent phonon theory of the crystallization and elasticity of attractive hard spheres.

    PubMed

    Shin, Homin; Schweizer, Kenneth S

    2013-02-28

    We propose an Einstein-solid, self-consistent phonon theory for the crystal phase of hard spheres that interact via short-range attractions. The approach is first tested against the known behavior of hard spheres, and then applied to homogeneous particles that interact via short-range square well attractions and the Baxter adhesive hard sphere model. Given the crystal symmetry, packing fraction, and strength and range of attractive interactions, an effective harmonic potential experienced by a particle confined to its Wigner-Seitz cell and corresponding mean square vibrational amplitude are self-consistently calculated. The crystal free energy is then computed and, using separate information about the fluid phase free energy, phase diagrams constructed, including a first-order solid-solid phase transition and its associated critical point. The simple theory qualitatively captures all the many distinctive features of the phase diagram (critical and triple point, crystal-fluid re-entrancy, low-density coexistence curve) as a function of attraction range, and overall is in good semi-quantitative agreement with simulation. Knowledge of the particle localization length allows the crystal shear modulus to be estimated based on elementary ideas. Excellent predictions are obtained for the hard sphere crystal. Expanded and condensed face-centered cubic crystals are found to have qualitatively different elastic responses to varying attraction strength or temperature. As temperature increases, the expanded entropic solid stiffens, while the energy-controlled, fully-bonded dense solid softens.

  12. Self-consistent modeling of laminar electrohydrodynamic plumes from ultra-sharp needles in cyclohexane

    NASA Astrophysics Data System (ADS)

    Becerra, Marley; Frid, Henrik; Vázquez, Pedro A.

    2017-12-01

    This paper presents a self-consistent model of electrohydrodynamic (EHD) laminar plumes produced by electron injection from ultra-sharp needle tips in cyclohexane. Since the density of electrons injected into the liquid is well described by the Fowler-Nordheim field emission theory, the injection law is not assumed. Furthermore, the generation of electrons in cyclohexane and their conversion into negative ions is included in the analysis. Detailed steady-state characteristics of EHD plumes under weak injection and space-charge limited injection are studied. It is found that the plume characteristics far from both electrodes and under weak injection can be accurately described with an asymptotic simplified solution proposed by Vazquez et al. ["Dynamics of electrohydrodynamic laminar plumes: Scaling analysis and integral model," Phys. Fluids 12, 2809 (2000)] when the correct longitudinal electric field distribution and liquid velocity radial profile are used as input. However, this asymptotic solution deviates from the self-consistently calculated plume parameters under space-charge limited injection since it neglects the radial variations of the electric field produced by a high-density charged core. In addition, no significant differences in the model estimates of the plume are found when the simulations are obtained either with the finite element method or with a diffusion-free particle method. It is shown that the model also enables the calculation of the current-voltage characteristic of EHD laminar plumes produced by electron field emission, with good agreement with measured values reported in the literature.

  13. The Estimate of Atmospheric Boundary Layer Height Above a Coniferous Forest During BEARPEX 2007 and 2009

    NASA Astrophysics Data System (ADS)

    Choi, W.; McKay, M.; Weber, R.; Goldstein, A. H.; Baker, B. M.; Faloona, I. C.

    2009-12-01

The atmospheric boundary layer (ABL) height (zi) is an extremely important parameter for interpreting field observations of reactive trace gases and for understanding air quality at the local or regional scale. Despite its importance, zi is often crudely estimated in atmospheric chemistry and air pollution studies because of limited resources and the difficulty of measuring it. In this study, zi over complex terrain (a coniferous forest in the California Sierra Nevada) is estimated from the power spectra and the integral length scale of horizontal winds obtained from a three-axis sonic anemometer during BEARPEX (Biosphere Effects on Aerosol and Photochemistry Experiment) 2007 and 2009. The estimated zi shows very good agreement with observations obtained from tethered balloon sonde (2007) and radiosonde (2009) measurements under unstable conditions (z/L < 0). The behavior of zi under stable conditions (z/L > 0), including the evolution and breakdown of the nocturnal boundary layer over the forest, is also presented. Finally, significant directional wind shear was consistently observed during 2009, with winds backing from the prevailing surface west-southwesterlies (anabatic cross-valley circulation) to consistent southerlies just above the ABL. We show that this is the result of a thermal wind driven by the potential temperature gradient aligned upslope. The resultant wind flow pattern can modify the conventional model of transport along the Sacramento urban plume and has implications for the flushing characteristics of California's Central Valley basin.

  14. Heterogeneous Face Attribute Estimation: A Deep Multi-Task Learning Approach.

    PubMed

Han, Hu; Jain, Anil K.; Shan, Shiguang; Chen, Xilin

    2017-08-10

Face attribute estimation has many potential applications in video surveillance, face retrieval, and social media. While a number of methods have been proposed for face attribute estimation, most of them do not explicitly consider attribute correlation and heterogeneity (e.g., ordinal vs. nominal and holistic vs. local) during feature representation learning. In this paper, we present a Deep Multi-Task Learning (DMTL) approach to jointly estimate multiple heterogeneous attributes from a single face image. In DMTL, we tackle attribute correlation and heterogeneity with convolutional neural networks (CNNs) consisting of shared feature learning for all the attributes and category-specific feature learning for heterogeneous attributes. We also introduce an unconstrained face database (LFW+), an extension of the public-domain LFW, with heterogeneous demographic attributes (age, gender, and race) obtained via crowdsourcing. Experimental results on benchmarks with multiple face attributes (MORPH II, LFW+, CelebA, LFWA, and FotW) show that the proposed approach has superior performance compared with the state of the art. Finally, evaluations on a public-domain face database (LAP) with a single attribute show that the proposed approach has excellent generalization ability.

  15. Paired comparison estimates of willingness to accept versus contingent valuation estimates of willingness to pay

    Treesearch

    John B. Loomis; George Peterson; Patricia A. Champ; Thomas C. Brown; Beatrice Lucero

    1998-01-01

Estimating empirical measures of an individual's willingness to accept that are consistent with conventional economic theory has proven difficult. The method of paired comparison offers a promising approach to estimating willingness to accept. This method involves having individuals make binary choices between receiving a particular good or a sum of money....

  16. Adaptive Video Streaming Using Bandwidth Estimation for 3.5G Mobile Network

    NASA Astrophysics Data System (ADS)

    Nam, Hyeong-Min; Park, Chun-Su; Jung, Seung-Won; Ko, Sung-Jea

Currently deployed mobile networks, including High Speed Downlink Packet Access (HSDPA), offer only best-effort Quality of Service (QoS). In wireless best-effort networks, bandwidth variation is a critical problem, especially for mobile devices with small buffers, because it leads to packet losses caused by buffer overflow as well as picture freezing due to high transmission delay or buffer underflow. In this paper, in order to provide seamless video streaming over HSDPA, we propose an efficient real-time video streaming method that consists of available bandwidth (AB) estimation for the HSDPA network and transmission rate control to prevent buffer overflows and underflows. In the proposed method, the client estimates the AB, and the estimate is fed back to the server through real-time transport control protocol (RTCP) packets. The server then adaptively adjusts the transmission rate according to the estimated AB and the buffer state obtained from the RTCP feedback information. Experimental results show that the proposed method achieves seamless video streaming over the HSDPA network, providing higher video quality and lower transmission delay.
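
    A toy sketch of the two ingredients (my construction, not the paper's algorithm): an exponentially weighted estimate of the available bandwidth, and a send-rate rule that steers the client buffer away from underflow and overflow:

    ```python
    def ewma_bandwidth(prev_est_bps, sample_bps, alpha=0.125):
        """Smooth per-RTCP-interval throughput samples into an AB estimate."""
        return (1 - alpha) * prev_est_bps + alpha * sample_bps

    def next_send_rate(est_ab_bps, buffer_s, low_s=1.0, high_s=4.0):
        """Pick a server transmission rate from the AB estimate and the
        client buffer level fed back via RTCP (thresholds are invented)."""
        if buffer_s < low_s:           # refill: send at (almost) the full AB
            return 1.0 * est_ab_bps
        if buffer_s > high_s:          # near overflow: back off
            return 0.7 * est_ab_bps
        return 0.95 * est_ab_bps       # steady state: track the AB

    ab = ewma_bandwidth(1.2e6, 0.9e6)
    print(f"send at {next_send_rate(ab, buffer_s=0.8) / 1e6:.2f} Mbps")
    ```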

  17. Estimated burden of fungal infections in Germany.

    PubMed

    Ruhnke, Markus; Groll, Andreas H; Mayser, Peter; Ullmann, Andrew J; Mendling, Werner; Hof, Herbert; Denning, David W

    2015-10-01

In the late 1980s, the incidence of invasive fungal diseases (IFDs) in Germany was estimated at 36,000 IFDs per year. The current number of fungal infections (FI) occurring each year in Germany is still not known. In the present analysis, data on the incidence of fungal infections in various patient groups at risk for FI were calculated or estimated from various (mostly national) sources. Given the very heterogeneous data sources, robust statistics could not be obtained, but preliminary estimates could be made and compared with data from other areas of the world using a deterministic model that has been applied consistently in many countries by the LIFE program (www.LIFE-worldwide.org). In 2012, of the 80.52 million population (adults 64.47 million; 41.14 million female, 39.38 million male), 20% were children (0-14 years) and 16% were ≥65 years old. Using local data and literature estimates of the incidence or prevalence of fungal infections, about 9.6 million (12%) people in Germany suffer from a fungal infection each year. These figures are dominated (95%) by fungal skin disease and recurrent vulvo-vaginal candidosis. In general, considerable uncertainty surrounds the totals because IFDs are not on the list of reportable infectious diseases in Germany and most patients were hospitalised for a distinct underlying disease rather than for the IFD itself. © 2015 Blackwell Verlag GmbH.

  18. Consistent and efficient processing of ADCP streamflow measurements

    USGS Publications Warehouse

    Mueller, David S.; Constantinescu, George; Garcia, Marcelo H.; Hanes, Dan

    2016-01-01

The use of Acoustic Doppler Current Profilers (ADCPs) from a moving boat is a commonly used method for measuring streamflow. Currently, the algorithms used to compute the average depth, compute edge discharge, identify invalid data, and estimate velocity and discharge for invalid data vary among manufacturers. These differences could result in different discharges being computed from identical data. A consistent computational algorithm, automated filtering, and quality assessment of ADCP streamflow measurements, all independent of the ADCP manufacturer, are being developed in a software program that can process ADCP moving-boat discharge measurements regardless of the ADCP used to collect the data.

  19. Statistics of Sxy estimates

    NASA Technical Reports Server (NTRS)

    Freilich, M. H.; Pawka, S. S.

    1987-01-01

    The statistics of Sxy estimates derived from orthogonal-component measurements are examined. Based on results of Goodman (1957), the probability density function (pdf) for Sxy(f) estimates is derived, and a closed-form solution for arbitrary moments of the distribution is obtained. Characteristic functions are used to derive the exact pdf of Sxy(tot). In practice, a simple Gaussian approximation is found to be highly accurate even for relatively few degrees of freedom. Implications for experiment design are discussed, and a maximum-likelihood estimator for a posterior estimation is outlined.

  20. Estimation of suspended-sediment rating curves and mean suspended-sediment loads

    USGS Publications Warehouse

    Crawford, Charles G.

    1991-01-01

A simulation study was done to evaluate: (1) the accuracy and precision of parameter estimates for the bias-corrected, transformed-linear and non-linear models obtained by the method of least squares; and (2) the accuracy of mean suspended-sediment loads calculated by the flow-duration, rating-curve method using model parameters obtained by the alternative methods. Parameter estimates obtained by least squares for the bias-corrected, transformed-linear model were considerably more precise than those obtained for the non-linear or weighted non-linear model. The accuracy of parameter estimates obtained for the bias-corrected, transformed-linear and weighted non-linear models was similar and was much greater than the accuracy obtained by non-linear least squares. The improved parameter estimates obtained by the bias-corrected, transformed-linear or weighted non-linear model yield estimates of mean suspended-sediment load, calculated by the flow-duration, rating-curve method, that are more accurate and precise than those obtained for the non-linear model.
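
    The bias-corrected, transformed-linear model amounts to a least-squares fit in log space followed by a correction when back-transforming. A sketch with synthetic data, using the lognormal exp(s^2/2) correction (one common variant; smearing-type corrections are an alternative):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    Q = rng.lognormal(3.0, 1.0, 200)                 # discharge (synthetic)
    C = 0.5 * Q**1.4 * rng.lognormal(0.0, 0.3, 200)  # suspended-sediment conc.

    b, a = np.polyfit(np.log(Q), np.log(C), 1)       # fit ln C = a + b ln Q
    resid = np.log(C) - (a + b * np.log(Q))
    s2 = resid.var(ddof=2)                           # residual variance

    def rating(q):
        return np.exp(a + b * np.log(q)) * np.exp(s2 / 2)  # bias-corrected curve

    print(f"C(Q=100) ~ {rating(100.0):.1f}")
    ```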

  1. The relative impact of baryons and cluster shape on weak lensing mass estimates of galaxy clusters

    NASA Astrophysics Data System (ADS)

    Lee, B. E.; Le Brun, A. M. C.; Haq, M. E.; Deering, N. J.; King, L. J.; Applegate, D.; McCarthy, I. G.

    2018-05-01

Weak gravitational lensing depends on the integrated mass along the line of sight. Baryons contribute to the mass distribution of galaxy clusters and hence to the mass estimates obtained from lensing analyses. We use the cosmo-OWLS suite of hydrodynamic simulations to investigate the impact of baryonic processes on the bias and scatter of weak lensing mass estimates of clusters. These estimates are obtained by fitting NFW profiles to mock data using MCMC techniques. In particular, we examine the difference in estimates between dark-matter-only runs and those including various prescriptions for baryonic physics. We find no significant difference in the mass bias when baryonic physics is included, though the overall mass estimates are suppressed when feedback from AGN is included. For the lowest-mass systems for which a reliable mass can be obtained (M200 ≈ 2 × 10^14 M⊙), we find a bias of ≈ -10 per cent. The magnitude of the bias tends to decrease for higher-mass clusters, consistent with no bias for the most massive clusters, which have masses comparable to those found in the CLASH and HFF samples. For the lowest-mass clusters, the mass bias is particularly sensitive to the fit radii and the limits placed on the concentration prior, rendering reliable mass estimates difficult. The scatter in mass estimates between the dark-matter-only and the various baryonic runs is smaller than that between different projections of individual clusters, highlighting the importance of triaxiality.

  2. A weighted belief-propagation algorithm for estimating volume-related properties of random polytopes

    NASA Astrophysics Data System (ADS)

    Font-Clos, Francesc; Massucci, Francesco Alessandro; Pérez Castillo, Isaac

    2012-11-01

In this work we introduce a novel weighted message-passing algorithm based on the cavity method for estimating volume-related properties of random polytopes, properties which are relevant in research fields ranging from metabolic networks to neural networks to compressed sensing. Rather than adopting the usual approach of approximating the real-valued cavity marginal distributions by a few parameters, we propose an algorithm that faithfully represents the entire marginal distribution. We describe several alternatives for implementing the algorithm and benchmark the theoretical findings with concrete applications to random polytopes. The results obtained with our approach are in very good agreement with the estimates produced by the Hit-and-Run algorithm, which is known to produce uniform sampling.

  3. Gastropod shell size and architecture influence the applicability of methods used to estimate internal volume.

    PubMed

    Ragagnin, Marilia Nagata; Gorman, Daniel; McCarthy, Ian Donald; Sant'Anna, Bruno Sampaio; de Castro, Cláudio Campi; Turra, Alexander

    2018-01-11

Obtaining accurate and reproducible estimates of internal shell volume is a vital requirement for studies into the ecology of a range of shell-occupying organisms, including hermit crabs. Shell internal volume is usually estimated by filling the shell cavity with water or sand; however, there has been no systematic assessment of the reliability of these methods, nor any comparison with modern alternatives such as computed tomography (CT). This study undertakes the first assessment of the measurement reproducibility of three contrasting approaches across a spectrum of shell architectures and sizes. While our results suggest a certain level of variability inherent in all methods, we conclude that a single measurement using sand or water is likely to be sufficient for the majority of studies. However, care must be taken, as precision may decline with increasing shell size and structural complexity. CT provided less variation between repeat measures, but its volume estimates were consistently lower than those from sand or water, and the method will need improvement before it can serve as an alternative. CT also indicated that the sand/water methods themselves may underestimate volume, owing to air spaces visible in filled shells scanned by CT. Lastly, we encourage authors to describe clearly how volume estimates were obtained.

  4. MRI-Based Intelligence Quotient (IQ) Estimation with Sparse Learning

    PubMed Central

    Wang, Liye; Wee, Chong-Yaw; Suk, Heung-Il; Tang, Xiaoying; Shen, Dinggang

    2015-01-01

In this paper, we propose a novel framework for IQ estimation using Magnetic Resonance Imaging (MRI) data. In particular, we devise a new feature selection method based on an extended dirty model that jointly considers both element-wise sparsity and group-wise sparsity. Meanwhile, owing to the absence of a large dataset with consistent scanning protocols for IQ estimation, we integrate multiple datasets scanned at different sites with different scanning parameters and protocols, which introduces large variability across the datasets. To address this issue, we design a two-step procedure that 1) first identifies the possible scanning site for each testing subject and 2) then estimates the testing subject's IQ by using a specific estimator designed for that scanning site. We perform two experiments to test the performance of our method using MRI data collected from 164 typically developing children between 6 and 15 years old. In the first experiment, we use a multi-kernel Support Vector Regression (SVR) for estimating IQ values, and obtain an average correlation coefficient of 0.718 and an average root mean square error of 8.695 between the true and estimated IQs. In the second experiment, we use a single-kernel SVR for IQ estimation, and achieve an average correlation coefficient of 0.684 and an average root mean square error of 9.166. These results show the effectiveness of using imaging data for IQ prediction, which has rarely been done in the field to our knowledge. PMID:25822851

  5. MRI-based intelligence quotient (IQ) estimation with sparse learning.

    PubMed

    Wang, Liye; Wee, Chong-Yaw; Suk, Heung-Il; Tang, Xiaoying; Shen, Dinggang

    2015-01-01

In this paper, we propose a novel framework for IQ estimation using Magnetic Resonance Imaging (MRI) data. In particular, we devise a new feature selection method based on an extended dirty model that jointly considers both element-wise sparsity and group-wise sparsity. Meanwhile, owing to the absence of a large dataset with consistent scanning protocols for IQ estimation, we integrate multiple datasets scanned at different sites with different scanning parameters and protocols, which introduces large variability across the datasets. To address this issue, we design a two-step procedure that 1) first identifies the possible scanning site for each testing subject and 2) then estimates the testing subject's IQ by using a specific estimator designed for that scanning site. We perform two experiments to test the performance of our method using MRI data collected from 164 typically developing children between 6 and 15 years old. In the first experiment, we use a multi-kernel Support Vector Regression (SVR) for estimating IQ values, and obtain an average correlation coefficient of 0.718 and an average root mean square error of 8.695 between the true and estimated IQs. In the second experiment, we use a single-kernel SVR for IQ estimation, and achieve an average correlation coefficient of 0.684 and an average root mean square error of 9.166. These results show the effectiveness of using imaging data for IQ prediction, which has rarely been done in the field to our knowledge.
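
    The second experiment reduces to a standard kernel SVR evaluated with a cross-validated correlation coefficient and RMSE. A minimal sketch in which synthetic features stand in for the MRI-derived ones:

    ```python
    import numpy as np
    from sklearn.model_selection import cross_val_predict
    from sklearn.svm import SVR

    rng = np.random.default_rng(0)
    X = rng.standard_normal((164, 50))    # 164 subjects, 50 stand-in features
    iq = 100 + 10 * X[:, 0] + 5 * rng.standard_normal(164)

    pred = cross_val_predict(SVR(kernel="rbf", C=10.0), X, iq, cv=10)
    r = np.corrcoef(iq, pred)[0, 1]
    rmse = np.sqrt(np.mean((iq - pred) ** 2))
    print(f"r = {r:.3f}, RMSE = {rmse:.3f}")
    ```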

  6. A law of order estimation and leading-order terms for a family of averaged quantities on a multibaker chain system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ishida, Hideshi, E-mail: ishida@me.es.osaka-u.ac.jp

    2014-06-15

In this study, a family of local quantities defined on each partition, and its averaging over a macroscopic small region (a site), are defined on a multibaker chain system. For these averaged quantities, a law of order estimation in the bulk system is proved, making it possible to estimate the order of the quantities with respect to the representative partition scale parameter Δ. Moreover, the form of the leading-order terms of the averaged quantities is obtained; this form enables us to obtain the macroscopic quantity in the continuum limit, as Δ → 0, and to confirm its independence of the partitioning. These deliverables fully explain the numerical results obtained by Ishida, consistent with irreversible thermodynamics.

  7. FRAGS: estimation of coding sequence substitution rates from fragmentary data

    PubMed Central

    Swart, Estienne C; Hide, Winston A; Seoighe, Cathal

    2004-01-01

    Background: Rates of substitution in protein-coding sequences can provide important insights into evolutionary processes that are of biomedical and theoretical interest. Increased availability of coding sequence data has enabled researchers to estimate more accurately the coding sequence divergence of pairs of organisms. However, the use of different data sources, alignment protocols and methods to estimate substitution rates leads to widely varying estimates of key parameters that define the coding sequence divergence of orthologous genes. Although complete genome sequence data are not available for all organisms, fragmentary sequence data can provide accurate estimates of substitution rates, provided that an appropriate and consistent methodology is used and that differences in the estimates obtainable from different data sources are taken into account. Results: We have developed FRAGS, an application framework that uses existing, freely available software components to construct in-frame alignments and estimate coding substitution rates from fragmentary sequence data. Coding sequence substitution estimates for human and chimpanzee sequences, generated by FRAGS, reveal that methodological differences can give rise to significantly different estimates of important substitution parameters. The estimated substitution rates were also used to infer upper bounds on the amount of sequencing error in the datasets that we analysed. Conclusion: We have developed a system that performs robust estimation of substitution rates for orthologous sequences from a pair of organisms. Our system can be used when fragmentary genomic or transcript data are available from one of the organisms and the other is a completely sequenced genome within the Ensembl database. As well as estimating substitution statistics, our system enables the user to manage and query alignment and substitution data. PMID:15005802

  8. Outbursts and Gradualism: Megaflood erosion consistent with long-term landscape evolution

    NASA Astrophysics Data System (ADS)

    Garcia-Castellanos, Daniel; O'Connor, Jim

    2017-04-01

    Existing models for the development of topography and relief over geological timescales are fundamentally based on semi-empirical laws of the erosion and sediment transport performed by rivers. The predictive power of these laws is hindered by limitations in measuring river incision and by the scant knowledge of past hydrological conditions, specifically average water flow and its variability. Consequently, models adopt 'gradualistic' (time-averaged) assumptions, and the erodability values derived from modelling long-term erosion rates in rivers remain ambiguously tied not only to the lithology and nature of the bedrock but also to uncertainties in the quantification of past climate. This prevents the use of those erodabilities to predict landscape evolution in different scenarios. Here, we apply the fundamentals of river erosion models to outburst floods triggered by overtopping lakes, for which the hydrograph is intrinsically known from the geomorphological record or from direct measurements. We obtain the outlet erodability from the peak water discharge and lake area observed in 86 floods that span 16 orders of magnitude in water volume. The obtained erodability-lithology correlation is consistent with that seen in 22 previous long-term river incision quantifications, showing that outburst floods can be used to estimate erodability values that remain valid for a wide range of hydrological regimes and for erosion timescales spanning from hours-long outburst floods to million-year-scale landscape evolution. The results constrain the conditions leading to the runaway erosion responsible for outburst floods triggered by overtopping lakes. They also call for the explicit incorporation of climate episodicity into landscape evolution models. [Funded by CGL2014-59516].

  9. Excitations for Rapidly Estimating Flight-Control Parameters

    NASA Technical Reports Server (NTRS)

    Moes, Tim; Smith, Mark; Morelli, Gene

    2006-01-01

    A flight test on an F-15 airplane was performed to evaluate the utility of prescribed simultaneous independent surface excitations (PreSISE) for real-time estimation of flight-control parameters, including stability and control derivatives. The ability to extract these derivatives in nearly real time is needed to support flight demonstration of intelligent flight-control system (IFCS) concepts under development at NASA, in academia, and in industry. Traditionally, flight maneuvers have been designed and executed to obtain estimates of stability and control derivatives by use of a post-flight analysis technique. An IFCS, however, must be able to modify control laws in real time for an aircraft that has been damaged in flight (by combat, weather, or a system failure). The flight test included PreSISE maneuvers, during which all desired control surfaces are excited simultaneously, but at different frequencies, resulting in aircraft motions about all coordinate axes. The objectives of the test were to obtain data for post-flight analysis and to perform the analysis to determine: 1) the accuracy of derivatives estimated by use of PreSISE, 2) the required durations of PreSISE inputs, and 3) the minimum required magnitudes of PreSISE inputs. The PreSISE inputs in the flight test consisted of stacked sine-wave excitations at various frequencies, including symmetric and differential excitations of canard and stabilator control surfaces and excitations of aileron and rudder control surfaces of a highly modified F-15 airplane. Small, medium, and large excitations were tested in 15-second maneuvers at subsonic, transonic, and supersonic speeds. Flight-test data were analyzed by use of pEst, an industry-standard output-error technique developed by Dryden Flight Research Center, and by use of Fourier-transform regression (FTR), which was developed for onboard, real-time estimation of the stability and control derivatives.
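
    The stacked sine-wave construction is straightforward to sketch; the frequency sets and amplitude below are hypothetical placeholders chosen only to illustrate the idea of exciting each surface at its own, non-overlapping frequency lines during a 15-second maneuver so the responses remain separable in the frequency domain.

      import numpy as np

      fs, T = 200.0, 15.0                    # sample rate [Hz], maneuver length [s]
      t = np.arange(0.0, T, 1.0 / fs)
      freqs = {                              # hypothetical per-surface frequency sets [Hz]
          'symmetric_canard':    [0.4, 1.1, 1.9],
          'differential_canard': [0.5, 1.3, 2.1],
          'aileron':             [0.6, 1.5, 2.3],
          'rudder':              [0.7, 1.7, 2.5],
      }
      amplitude_deg = 1.0                    # "small" excitation; scale up for medium/large
      excitations = {
          name: amplitude_deg * sum(np.sin(2.0 * np.pi * f * t) for f in fl)
          for name, fl in freqs.items()
      }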

  10. Comparing potential recharge estimates from three Land Surface Models across the Western US

    PubMed Central

    NIRAULA, REWATI; MEIXNER, THOMAS; AJAMI, HOORI; RODELL, MATTHEW; GOCHIS, DAVID; CASTRO, CHRISTOPHER L.

    2018-01-01

    Groundwater is a major source of water in the western US. However, recharge estimates are limited in this region due to the complexity of recharge processes and the challenge of direct observations. Land Surface Models (LSMs) could be a valuable tool for estimating current recharge and projecting changes due to future climate change. In this study, simulations of three LSMs (Noah, Mosaic and VIC) obtained from the North American Land Data Assimilation System (NLDAS-2) are used to estimate potential recharge in the western US. Modeled recharge was compared with published recharge estimates for several aquifers in the region. Annual recharge-to-precipitation ratios across the study basins varied from 0.01-15% for Mosaic, 3.2-42% for Noah, and 6.7-31.8% for VIC simulations. Mosaic consistently underestimates recharge across all basins. Noah captures recharge reasonably well in wetter basins, but overestimates it in drier basins. VIC slightly overestimates recharge in drier basins and slightly underestimates it in wetter basins. While the average annual recharge values vary among the models, the models were consistent in identifying high and low recharge areas in the region. The models also agree that recharge occurs predominantly during spring across the region. Overall, our results highlight that LSMs have the potential to capture the spatial and temporal patterns as well as the seasonality of recharge at large scales. Therefore, LSMs (specifically VIC and Noah) can be used as a tool for estimating future recharge rates in data-limited regions. PMID:29618845

  11. Benchmarking passive seismic methods of estimating the depth of velocity interfaces down to ~300 m

    NASA Astrophysics Data System (ADS)

    Czarnota, Karol; Gorbatov, Alexei

    2016-04-01

    In shallow passive seismology it is generally accepted that the spatial autocorrelation (SPAC) method is more robust than the horizontal-over-vertical spectral ratio (HVSR) method at resolving the depth to surface-wave velocity (Vs) interfaces. Here we present results of a field test of these two methods over ten drill sites in western Victoria, Australia. The target interface is the base of Cenozoic unconsolidated to semi-consolidated clastic and/or carbonate sediments of the Murray Basin, which overlie Paleozoic crystalline rocks. Depths of this interface intersected in drill holes are between ~27 m and ~300 m. Seismometers were deployed in a three-arm spiral array, with a radius of 250 m, consisting of 13 Trillium Compact 120 s broadband instruments. Data were acquired at each site for 7-21 hours. The Vs architecture beneath each site was determined through nonlinear inversion of HVSR and SPAC data using the neighbourhood algorithm, implemented in the geopsy modelling package (Wathelet, 2005, GRL v35). The HVSR technique yielded depth estimates of the target interface (Vs > 1000 m/s) generally within ±20% error. Successful estimates were even obtained at a site with an inverted velocity profile, where Quaternary basalts overlie Neogene sediments which in turn overlie the target basement. Half of the SPAC estimates showed significantly higher errors than were obtained using HVSR. Joint inversion provided the most reliable estimates but was unstable at three sites. We attribute the surprising success of HVSR over SPAC to a low content of transient signals within the seismic record caused by low levels of anthropogenic noise at the benchmark sites. At a few sites SPAC waveform curves showed clear overtones suggesting that more reliable SPAC estimates may be obtained utilizing a multi-modal inversion. Nevertheless, our study indicates that reliable basin thickness estimates in the Australian conditions tested can be obtained utilizing HVSR data from a single station.
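
    For reference, a minimal HVSR computation might look like the sketch below, given three-component ambient-noise records; the quadratic-mean combination of the horizontal components is one common convention and is an assumption here, not necessarily the choice made in the study.

      import numpy as np
      from scipy.signal import welch

      def hvsr(north, east, vertical, fs, nperseg=4096):
          """H/V spectral ratio from three-component ambient noise."""
          f, pnn = welch(north, fs, nperseg=nperseg)
          _, pee = welch(east, fs, nperseg=nperseg)
          _, pzz = welch(vertical, fs, nperseg=nperseg)
          h = np.sqrt((pnn + pee) / 2.0)   # quadratic-mean horizontal amplitude
          v = np.sqrt(pzz)                 # vertical amplitude
          return f, h / v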

  12. DEKFIS user's guide: Discrete Extended Kalman Filter/Smoother program for aircraft and rotorcraft data consistency

    NASA Technical Reports Server (NTRS)

    1979-01-01

    The computer program DEKFIS (discrete extended Kalman filter/smoother), formulated for aircraft and helicopter state estimation and data consistency, is described. DEKFIS is set up to pre-process raw test data by removing biases, correcting scale factor errors and providing consistency with the aircraft inertial kinematic equations. The program implements an extended Kalman filter/smoother using the Friedland-Duffy formulation.
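
    The core of such a filter is the discrete extended Kalman predict/update cycle; a generic sketch is given below (the Friedland-Duffy formulation used by DEKFIS and the backward smoother pass are not reproduced here).

      import numpy as np

      def ekf_step(x, P, u, z, f, F_jac, h, H_jac, Q, R):
          """One predict/update cycle of a discrete extended Kalman filter.
          f, h: nonlinear state-transition and measurement functions;
          F_jac, H_jac: their Jacobians evaluated along the current estimate."""
          # Predict: propagate the state through the (kinematic) model.
          x_pred = f(x, u)
          F = F_jac(x, u)
          P_pred = F @ P @ F.T + Q
          # Update: blend in the measurement.
          H = H_jac(x_pred)
          S = H @ P_pred @ H.T + R
          K = np.linalg.solve(S, H @ P_pred).T    # K = P_pred H^T S^-1 (S symmetric)
          x_new = x_pred + K @ (z - h(x_pred))
          P_new = (np.eye(len(x)) - K @ H) @ P_pred
          return x_new, P_new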

  13. Estimation of the full-field dynamic response of a floating bridge using Kalman-type filtering algorithms

    NASA Astrophysics Data System (ADS)

    Petersen, Ø. W.; Øiseth, O.; Nord, T. S.; Lourens, E.

    2018-07-01

    Numerical predictions of the dynamic response of complex structures are often uncertain due to uncertainties inherited from the assumed load effects. Inverse methods can estimate the true dynamic response of a structure through system inversion, combining measured acceleration data with a system model. This article presents a case study of full-field dynamic response estimation of a long-span floating bridge: the Bergøysund Bridge in Norway. This bridge is instrumented with a network of 14 triaxial accelerometers. The system model consists of 27 vibration modes with natural frequencies below 2 Hz, obtained from a tuned finite element model that takes the fluid-structure interaction with the surrounding water into account. Two methods, a joint input-state estimation algorithm and a dual Kalman filter, are applied to estimate the full-field response of the bridge. The results demonstrate that the displacements and the accelerations can be estimated at unmeasured locations with reasonable accuracy when the wave loads are the dominant source of excitation.

  14. Consistent searches for SMEFT effects in non-resonant dijet events

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Alte, Stefan; Konig, Matthias; Shepherd, William

    Here, we investigate the bounds which can be placed on generic new-physics contributions to dijet production at the LHC using the framework of the Standard Model Effective Field Theory, deriving the first consistently-treated EFT bounds from non-resonant high-energy data. We recast an analysis searching for quark compositeness, equivalent to treating the SM with one higher-dimensional operator as a complete UV model. In order to reach consistent, model-independent EFT conclusions, it is necessary to truncate the EFT effects consistently at order $1/\Lambda^2$ and to include the possibility of multiple operators simultaneously contributing to the observables, neither of which has been done in previous searches of this nature. Furthermore, it is important to give consistent error estimates for the theoretical predictions of the signal model, particularly in the region of phase space where the probed energy is approaching the cutoff scale of the EFT. There are two linear combinations of operators which contribute to dijet production in the SMEFT with distinct angular behavior; we identify those linear combinations and determine the ability of LHC searches to constrain them simultaneously. Consistently treating the EFT generically leads to weakened bounds on new-physics parameters. These constraints will be a useful input to future global analyses in the SMEFT framework, and the techniques used here to consistently search for EFT effects are directly applicable to other off-resonance signals.

  15. Consistent searches for SMEFT effects in non-resonant dijet events

    DOE PAGES

    Alte, Stefan; Konig, Matthias; Shepherd, William

    2018-01-19

    Here, we investigate the bounds which can be placed on generic new-physics contributions to dijet production at the LHC using the framework of the Standard Model Effective Field Theory, deriving the first consistently-treated EFT bounds from non-resonant high-energy data. We recast an analysis searching for quark compositeness, equivalent to treating the SM with one higher-dimensional operator as a complete UV model. In order to reach consistent, model-independent EFT conclusions, it is necessary to truncate the EFT effects consistently at order $1/\Lambda^2$ and to include the possibility of multiple operators simultaneously contributing to the observables, neither of which has been done in previous searches of this nature. Furthermore, it is important to give consistent error estimates for the theoretical predictions of the signal model, particularly in the region of phase space where the probed energy is approaching the cutoff scale of the EFT. There are two linear combinations of operators which contribute to dijet production in the SMEFT with distinct angular behavior; we identify those linear combinations and determine the ability of LHC searches to constrain them simultaneously. Consistently treating the EFT generically leads to weakened bounds on new-physics parameters. These constraints will be a useful input to future global analyses in the SMEFT framework, and the techniques used here to consistently search for EFT effects are directly applicable to other off-resonance signals.
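
    Schematically, the consistent truncation described above keeps only the interference of the dimension-6 operators with the Standard Model amplitude,

      $$ \sigma \;=\; \sigma_{\mathrm{SM}} \;+\; \sum_i \frac{c_i}{\Lambda^2}\,\sigma_i^{\mathrm{int}} \;+\; \mathcal{O}\!\left(\frac{1}{\Lambda^4}\right), $$

    since the squared dimension-6 terms are of the same order in $1/\Lambda$ as the neglected dimension-8 interference, and retaining them would not be model-independent.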

  16. Thermal states of neutron stars with a consistent model of interior

    NASA Astrophysics Data System (ADS)

    Fortin, M.; Taranto, G.; Burgio, G. F.; Haensel, P.; Schulze, H.-J.; Zdunik, J. L.

    2018-04-01

    We model the thermal states of both isolated neutron stars and accreting neutron stars in X-ray transients in quiescence and confront them with observations. We use an equation of state calculated using realistic two-body and three-body nucleon interactions, and superfluid nucleon gaps obtained using the same microscopic approach in the BCS approximation. Consistency with low-luminosity accreting neutron stars is obtained, as the direct Urca process is operating in neutron stars with mass larger than 1.1 M⊙ for the employed equation of state. In addition, proton superfluidity and sufficiently weak neutron superfluidity, obtained using a scaling factor for the gaps, are necessary to explain the cooling of middle-aged neutron stars and to obtain a realistic distribution of neutron star masses.

  17. 'Constraint consistency' at all orders in cosmological perturbation theory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nandi, Debottam; Shankaranarayanan, S., E-mail: debottam@iisertvm.ac.in, E-mail: shanki@iisertvm.ac.in

    2015-08-01

    We study the equivalence of two approaches to cosmological perturbation theory at all orders, order-by-order Einstein's equations and the reduced action, for different models of inflation. We point out a crucial consistency check, which we refer to as the 'constraint consistency' condition, that needs to be satisfied in order for the two approaches to lead to identical single-variable equations of motion. The method we propose here is a quick and efficient way to check this consistency for any model, including modified gravity models. Our analysis points out an important feature which is crucial for inflationary model building: all 'constraint'-inconsistent models have higher-order Ostrogradsky instabilities, but the reverse is not true, i.e., a model whose lapse function and shift vector remain constraints may still have Ostrogradsky instabilities. We also obtain the single-variable equation for a non-canonical scalar field in the limit of power-law inflation for the second-order perturbed variables.

  18. Calibration of the inertial consistency index to assess road safety on horizontal curves of two-lane rural roads.

    PubMed

    Llopis-Castelló, David; Camacho-Torregrosa, Francisco Javier; García, Alfredo

    2018-05-26

    One of every four road fatalities occurs on horizontal curves of two-lane rural roads. In this regard, many studies have been undertaken to analyze the crash risk on this road element. Most of them were based on the concept of geometric design consistency, which can be defined as the relationship between drivers' expectancies and road behavior. However, none of these studies included a variable which represents and estimates drivers' expectancies. This research presents a new local consistency model based on the Inertial Consistency Index (ICI). This consistency parameter is defined as the difference between the inertial operating speed, which represents drivers' expectations, and the operating speed, which represents road behavior. The inertial operating speed was defined as the weighted average operating speed of the preceding road section. In this way, different lengths, periods of time, and weighting distributions were studied to identify how the inertial operating speed should be calculated. As a result, drivers' expectancies should be estimated considering 15 s along the segment and a linear weighting distribution. This is consistent with the process by which drivers acquire expectancies, which is closely related to short-term memory. A Safety Performance Function was proposed to predict the number of crashes on a horizontal curve, and consistency thresholds were defined based on the ICI; the crash rate increased as the ICI increased. Finally, the proposed consistency model was compared with previous models. As a conclusion, the new Inertial Consistency Index allowed a more accurate estimation of the number of crashes and a better assessment of the consistency level on horizontal curves. Therefore, highway engineers have a new tool to identify where road crashes are more likely to occur during the design stage of both new two-lane rural roads and improvements of existing highways. Copyright © 2018 Elsevier Ltd. All rights reserved.
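
    A sketch of the ICI computation implied by the abstract is given below; the spatial sampling scheme and the direction of the linear weighting (more recent road sections weighted more heavily) are assumptions made for illustration.

      import numpy as np

      def inertial_consistency_index(v85, ds, window_s=15.0):
          """ICI profile along an alignment: inertial operating speed (linearly
          weighted average of the operating speed over the preceding ~15 s of
          travel) minus the local operating speed.
          v85: operating-speed profile [m/s] sampled every ds metres."""
          ici = np.full(len(v85), np.nan)
          for i in range(1, len(v85)):
              # Walk backwards until ~window_s seconds of travel are accumulated.
              j, t = i, 0.0
              while j > 0 and t < window_s:
                  j -= 1
                  t += ds / v85[j]
              w = np.arange(1, i - j + 1, dtype=float)   # linear weights, recent largest
              inertial = np.average(v85[j:i], weights=w)
              ici[i] = inertial - v85[i]
          return ici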

  19. Structural stability as a consistent predictor of phenological events.

    PubMed

    Song, Chuliang; Saavedra, Serguei

    2018-06-13

    The timing of the first and last seasonal appearance of a species in a community typically follows a pattern that is governed by temporal factors. While it has been shown that changes in the environment are linked to phenological changes, the direction of this link appears elusive and context-dependent. Thus, finding consistent predictors of phenological events is of central importance for a better assessment of expected changes in the temporal dynamics of ecological communities. Here we introduce a measure of structural stability derived from species interaction networks as an estimator of the expected range of environmental conditions compatible with the existence of a community. We test this measure as a predictor of changes in species richness recorded on a daily basis in a high-arctic plant-pollinator community during two spring seasons. We find that our measure of structural stability is the only consistent predictor of changes in species richness among different ecological and environmental variables. Our findings suggest that measures based on the notion of structural stability can synthesize the expected variation of environmental conditions tolerated by a community, and explain more consistently the phenological changes observed in ecological communities. © 2018 The Author(s).

  20. Distortion-product otoacoustic emission reflection-component delays and cochlear tuning: estimates from across the human lifespan.

    PubMed

    Abdala, Carolina; Guérit, François; Luo, Ping; Shera, Christopher A

    2014-04-01

    A consistent relationship between reflection-emission delay and cochlear tuning has been demonstrated in a variety of mammalian species, as predicted by filter theory and models of otoacoustic emission (OAE) generation. As a step toward the goal of studying cochlear tuning throughout the human lifespan, this paper exploits the relationship and explores two strategies for estimating delay trends (energy weighting and peak picking), both of which emphasize data at the peaks of the magnitude fine structure. Distortion product otoacoustic emissions (DPOAEs) at 2f1-f2 were recorded, and their reflection components were extracted in 184 subjects ranging in age from prematurely born neonates to elderly adults. DPOAEs were measured from 0.5-4 kHz in all age groups and extended to 8 kHz in young adults. Delay trends were effectively estimated using either energy weighting or peak picking, with the former method yielding slightly shorter delays and the latter somewhat smaller confidence intervals. Delay and tuning estimates from young adults roughly match those obtained from SFOAEs. Although the match is imperfect, reflection-component delays showed the expected bend (apical-basal transition) near 1 kHz, consistent with a break in cochlear scaling. Consistent with other measures of tuning, the term newborn group showed the longest delays and sharpest tuning over much of the frequency range.

  1. STRONG ORACLE OPTIMALITY OF FOLDED CONCAVE PENALIZED ESTIMATION.

    PubMed

    Fan, Jianqing; Xue, Lingzhou; Zou, Hui

    2014-06-01

    Folded concave penalization methods have been shown to enjoy the strong oracle property for high-dimensional sparse estimation. However, a folded concave penalization problem usually has multiple local solutions and the oracle property is established only for one of the unknown local solutions. A challenging fundamental issue remains: it is not clear whether the local optimum computed by a given optimization algorithm possesses those nice theoretical properties. To close this important theoretical gap, open for over a decade, we provide a unified theory showing explicitly how to obtain the oracle solution via the local linear approximation algorithm. For a folded concave penalized estimation problem, we show that as long as the problem is localizable and the oracle estimator is well behaved, we can obtain the oracle estimator by using the one-step local linear approximation. In addition, once the oracle estimator is obtained, the local linear approximation algorithm converges, namely it produces the same estimator in the next iteration. The general theory is demonstrated by using four classical sparse estimation problems, i.e., sparse linear regression, sparse logistic regression, sparse precision matrix estimation and sparse quantile regression.

  2. STRONG ORACLE OPTIMALITY OF FOLDED CONCAVE PENALIZED ESTIMATION

    PubMed Central

    Fan, Jianqing; Xue, Lingzhou; Zou, Hui

    2014-01-01

    Folded concave penalization methods have been shown to enjoy the strong oracle property for high-dimensional sparse estimation. However, a folded concave penalization problem usually has multiple local solutions and the oracle property is established only for one of the unknown local solutions. A challenging fundamental issue remains: it is not clear whether the local optimum computed by a given optimization algorithm possesses those nice theoretical properties. To close this important theoretical gap, open for over a decade, we provide a unified theory showing explicitly how to obtain the oracle solution via the local linear approximation algorithm. For a folded concave penalized estimation problem, we show that as long as the problem is localizable and the oracle estimator is well behaved, we can obtain the oracle estimator by using the one-step local linear approximation. In addition, once the oracle estimator is obtained, the local linear approximation algorithm converges, namely it produces the same estimator in the next iteration. The general theory is demonstrated by using four classical sparse estimation problems, i.e., sparse linear regression, sparse logistic regression, sparse precision matrix estimation and sparse quantile regression. PMID:25598560
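
    A minimal sketch of the one-step local linear approximation for the SCAD penalty: the folded concave problem is replaced by a weighted l1 problem with weights p'_lambda(|beta_init|), which a plain lasso solver can handle after rescaling columns. The use of scikit-learn's Lasso as the inner solver is an assumption of this sketch, not part of the paper.

      import numpy as np
      from sklearn.linear_model import Lasso

      def scad_deriv(t, lam, a=3.7):
          """Derivative of the SCAD penalty (Fan & Li), elementwise on |t|."""
          t = np.abs(t)
          return np.where(t <= lam, lam, np.maximum(a * lam - t, 0.0) / (a - 1.0))

      def one_step_lla(X, y, beta_init, lam):
          """One-step LLA: solve min (1/2n)||y - X b||^2 + sum_j w_j |b_j|
          with w_j = p'_lam(|beta_init_j|), via the substitution b_j = g_j / w_j."""
          w = np.maximum(scad_deriv(beta_init, lam), 1e-8)  # guard near-zero weights
          # sklearn's Lasso minimises (1/2n)||y - X g||^2 + alpha ||g||_1; after
          # scaling column j by 1/w_j, alpha = 1 reproduces the weighted penalty.
          fit = Lasso(alpha=1.0, fit_intercept=False, max_iter=10000).fit(X / w, y)
          return fit.coef_ / w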

  3. Dynamically consistent hydrography and absolute velocity in the eastern North Atlantic Ocean

    NASA Technical Reports Server (NTRS)

    Wunsch, Carl

    1994-01-01

    The problem of mapping a dynamically consistent hydrographic field and associated absolute geostrophic flow in the eastern North Atlantic between 24 deg and 36 deg N is related directly to the solution of the so-called thermocline equations. A nonlinear optimization problem involving Needler's P equation is solved to find the hydrography and resulting flow that minimizes the vertical mixing above about 1500 m in the ocean and is simultaneously consistent with the observations. A sharp minimum (at least in some dimensions) is found, apparently corresponding to a solution nearly conserving potential vorticity and with vertical eddy coefficient less than about 10(exp -5) sq m/s. Estimates of 'residual' quantities such as eddy coefficients are extremely sensitive to slight modifications to the observed fields. Boundary conditions, vertical velocities, etc., are a product of the optimization and produce estimates differing quantitatively from prior ones relying directly upon observed hydrography. The results are generally insensitive to particular elements of the solution methodology, but many questions remain concerning the extent to which different synoptic sections can be asserted to represent the same ocean. The method can be regarded as a practical generalization of the beta spiral and geostrophic balance inverses for the estimate of absolute geostrophic flows. Numerous improvements to the methodology used in this preliminary attempt are possible.

  4. Internal consistency of the self-reporting questionnaire-20 in occupational groups

    PubMed Central

    Santos, Kionna Oliveira Bernardes; Carvalho, Fernando Martins; de Araújo, Tânia Maria

    2016-01-01

    OBJECTIVE: To assess the internal consistency of the measurements of the Self-Reporting Questionnaire (SRQ-20) in different occupational groups. METHODS: A validation study was conducted with data from four surveys with groups of workers, using similar methods. A total of 9,959 workers were studied. In all surveys, the common mental disorders were assessed via the SRQ-20. The internal consistency analysis considered the items belonging to dimensions extracted by tetrachoric factor analysis for each study. Item homogeneity assessment compared estimates of Cronbach's alpha (KR-20), the alpha applied to a tetrachoric correlation matrix, and stratified Cronbach's alpha. RESULTS: The SRQ-20 dimensions showed adequate values, considering the reference parameters. The internal consistency of the instrument items, assessed by stratified Cronbach's alpha, was high (> 0.80) in the four studies. CONCLUSIONS: The SRQ-20 showed good internal consistency in the professional categories evaluated. However, there is still a need for studies using alternative methods and additional information able to refine the accuracy of latent variable measurement instruments, as in the case of common mental disorders. PMID:27007682
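
    For a single dimension, stratified Cronbach's alpha reduces to the classical formula sketched below; for dichotomous items such as the SRQ-20's, this coincides with KR-20.

      import numpy as np

      def cronbach_alpha(items):
          """items: (n_subjects, k_items) array of 0/1 responses."""
          items = np.asarray(items, dtype=float)
          k = items.shape[1]
          item_var = items.var(axis=0, ddof=1).sum()     # sum of item variances
          total_var = items.sum(axis=1).var(ddof=1)      # variance of total score
          return (k / (k - 1.0)) * (1.0 - item_var / total_var)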

  5. Optimal estimation for discrete time jump processes

    NASA Technical Reports Server (NTRS)

    Vaca, M. V.; Tretter, S. A.

    1977-01-01

    Optimum estimates of nonobservable random variables or random processes which influence the rate functions of a discrete time jump process (DTJP) are obtained. The approach is based on the a posteriori probability of a nonobservable event expressed in terms of the a priori probability of that event and of the sample function probability of the DTJP. A general representation for optimum estimates and recursive equations for minimum mean squared error (MMSE) estimates are obtained. MMSE estimates are nonlinear functions of the observations. The problem of estimating the rate of a DTJP is then considered for the case in which the rate is a random variable with a probability density function of the form c x^k (1-x)^m, and it is shown that the MMSE estimates are linear in this case. This class of density functions explains why there are insignificant differences between optimum unconstrained and linear MMSE estimates in a variety of problems.
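
    A hedged reconstruction of why linearity holds: reading the density c x^k (1-x)^m as a Beta(k+1, m+1) prior on the jump rate and assuming one Bernoulli jump opportunity per discrete time step, conjugacy gives a posterior mean that is linear in the observed jump count s,

      $$ p(x) \propto x^{k}(1-x)^{m} \;\Longrightarrow\; \hat{x}_{\mathrm{MMSE}} = \mathbb{E}\left[x \mid s \text{ jumps in } n \text{ steps}\right] = \frac{k+1+s}{k+m+2+n}. $$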

  6. Window and Overlap Processing Effects on Power Estimates from Spectra

    NASA Astrophysics Data System (ADS)

    Trethewey, M. W.

    2000-03-01

    Fast Fourier transform (FFT) spectral processing is based on the assumption of stationary ergodic data. In engineering practice, the assumption is often violated and non-stationary data are processed. Data windows are commonly used to reduce leakage by decreasing the signal amplitudes near the boundaries of the discrete samples. With certain combinations of non-stationary signals and windows, the temporal weighting may attenuate important signal characteristics and adversely affect any subsequent processing. In other words, the window artificially reduces a significant section of the time signal. Consequently, spectra and overall power estimated from the affected samples are unreliable. FFT processing can be particularly problematic when the signal consists of randomly occurring transients superimposed on a more continuous signal. Overlap processing is commonly used in this situation to improve the estimates. However, the results again depend on the temporal character of the signal in relation to the window weighting. A worst-case scenario, a short-duration half sine pulse, is used to illustrate the relationship between overlap percentage and the resulting power estimates. The power estimates are shown to depend on the temporal behaviour of the square of the overlapped window segments. An analysis shows that power estimates may be obtained to within 0.27 dB for the following window and overlap combinations: rectangular (0% overlap), Hanning (62.5% overlap), Hamming (60.35% overlap) and flat-top (82.25% overlap).
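
    The quoted window/overlap pairs are easy to reproduce with a standard Welch estimator; the sketch below uses a white-noise stand-in for the measured record and integrates the PSD to obtain overall power.

      import numpy as np
      from scipy.signal import welch

      fs, nperseg = 1024.0, 1024
      x = np.random.randn(8 * int(fs))         # stand-in for a measured signal
      combos = {                               # window name: fractional overlap
          'boxcar': 0.0, 'hann': 0.625, 'hamming': 0.6035, 'flattop': 0.8225,
      }
      for name, ov in combos.items():
          f, pxx = welch(x, fs, window=name, nperseg=nperseg,
                         noverlap=int(round(ov * nperseg)))
          print(f'{name:8s} overlap={ov:7.2%}  power={np.trapz(pxx, f):.3f}')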

  7. Completely automated estimation of prostate volume for 3-D side-fire transrectal ultrasound using shape prior approach

    NASA Astrophysics Data System (ADS)

    Li, Lu; Narayanan, Ramakrishnan; Miller, Steve; Shen, Feimo; Barqawi, Al B.; Crawford, E. David; Suri, Jasjit S.

    2008-02-01

    Real-time knowledge of the capsule volume of an organ provides a valuable clinical tool for 3D biopsy applications. It is challenging to estimate this capsule volume in real time due to the presence of speckle, shadow artifacts, partial volume effects and patient motion during image scans, all of which are inherent in medical ultrasound imaging. The volumetric ultrasound prostate images are sliced in a rotational manner every three degrees. The automated segmentation method employs a shape model, obtained from training data, to delineate the middle slices of the volumetric prostate images. A "DDC" algorithm is then applied to the rest of the images, initialized with the contour obtained. The volume of the prostate is estimated from the segmentation results. Our database consists of 36 prostate volumes acquired with a Philips ultrasound machine using a side-fire transrectal ultrasound (TRUS) probe. We compare our automated method with a semi-automated approach. The mean volumes using the semi-automated and completely automated techniques were 35.16 cc and 34.86 cc, with errors of 7.3% and 7.6% relative to the volume obtained from the human-estimated boundary (ideal boundary), respectively. The overall system, developed in Microsoft Visual C++, is real-time and accurate.

  8. Ultra-broadband ptychography with self-consistent coherence estimation from a high harmonic source

    NASA Astrophysics Data System (ADS)

    Odstrčil, M.; Baksh, P.; Kim, H.; Boden, S. A.; Brocklesby, W. S.; Frey, J. G.

    2015-09-01

    With the aim of improving imaging using table-top extreme ultraviolet sources, we demonstrate coherent diffraction imaging (CDI) with a relative bandwidth of 20%. The coherence properties of the illumination probe are identified using the same imaging setup. The presented method allows the use of fewer monochromating optics, obtaining higher flux at the sample and thus reaching higher resolution or shorter exposure times. This is important in the case of ptychography, where a large number of diffraction patterns must be collected. Our microscopy setup was tested on the reconstruction of an extended sample to show the quality of the reconstruction. We show that a tabletop EUV microscope based on high harmonic generation can reconstruct samples with a large field of view and high resolution without additional prior knowledge about the sample or illumination.

  9. Chronic disease prevalence from Italian administrative databases in the VALORE project: a validation through comparison of population estimates with general practice databases and national survey

    PubMed Central

    2013-01-01

    Background: Administrative databases are widely available and have been extensively used to provide estimates of chronic disease prevalence for the purpose of surveillance of both geographical and temporal trends. There are, however, other sources of data available, such as medical records from primary care and national surveys. In this paper we compare disease prevalence estimates obtained from these three different data sources. Methods: Data from general practitioners (GP) and administrative transactions for health services were collected from five Italian regions (Veneto, Emilia Romagna, Tuscany, Marche and Sicily) belonging to all three macroareas of the country (North, Center, South). Crude prevalence estimates were calculated by data source and region for diabetes, ischaemic heart disease, heart failure and chronic obstructive pulmonary disease (COPD). For diabetes and COPD, prevalence estimates were also obtained from a national health survey. When necessary, estimates were adjusted for completeness of data ascertainment. Results: Crude prevalence estimates of diabetes in administrative databases (range: 4.8% to 7.1%) were lower than the corresponding GP (6.2%-8.5%) and survey-based estimates (5.1%-7.5%). Geographical trends were similar in the three sources, and estimates based on treatment were the same, while estimates adjusted for completeness of ascertainment (6.1%-8.8%) were slightly higher. For ischaemic heart disease, administrative and GP data sources were fairly consistent, with prevalence ranging from 3.7% to 4.7% and from 3.3% to 4.9%, respectively. In the case of heart failure, administrative estimates were consistently higher than GPs' estimates in all five regions, the largest difference being 1.4% vs 1.1%. For COPD the estimates from administrative data, ranging from 3.1% to 5.2%, fell into the confidence interval of the survey estimates in four regions, but failed to detect the higher prevalence in the most Southern region (4.0% in

  10. An artificial network model for estimating the network structure underlying partially observed neuronal signals.

    PubMed

    Komatsu, Misako; Namikawa, Jun; Chao, Zenas C; Nagasaka, Yasuo; Fujii, Naotaka; Nakamura, Kiyohiko; Tani, Jun

    2014-01-01

    Many previous studies have proposed methods for quantifying neuronal interactions. However, these methods evaluated the interactions between recorded signals in an isolated network. In this study, we present a novel approach for estimating interactions between observed neuronal signals by theorizing that those signals are observed from only a part of the network that also includes unobserved structures. We propose a variant of the recurrent network model that consists of both observable and unobservable units. The observable units represent recorded neuronal activity, and the unobservable units are introduced to represent activity from unobserved structures in the network. The network structures are characterized by connective weights, i.e., the interaction intensities between individual units, which are estimated from recorded signals. We applied this model to multi-channel brain signals recorded from monkeys, and obtained robust network structures with physiological relevance. Furthermore, the network exhibited common features that portrayed cortical dynamics as inversely correlated interactions between excitatory and inhibitory populations of neurons, which are consistent with the previous view of cortical local circuits. Our results suggest that the novel concept of incorporating an unobserved structure into network estimations has theoretical advantages and could provide insights into brain dynamics beyond what can be directly observed. Copyright © 2014 Elsevier Ireland Ltd and the Japan Neuroscience Society. All rights reserved.

  11. An Evaluation of Residual Feed Intake Estimates Obtained with Computer Models Versus Empirical Regression

    USDA-ARS?s Scientific Manuscript database

    Data on individual daily feed intake, bi-weekly BW, and carcass composition were obtained on 1,212 crossbred steers, in Cycle VII of the Germplasm Evaluation Project at the U.S. Meat Animal Research Center. Within animal regressions of cumulative feed intake and BW on linear and quadratic days on fe...

  12. The finite body triangulation: algorithms, subgraphs, homogeneity estimation and application.

    PubMed

    Carson, Cantwell G; Levine, Jonathan S

    2016-09-01

    The concept of a finite body Dirichlet tessellation has been extended to that of a finite body Delaunay 'triangulation' to provide a more meaningful description of the spatial distribution of nonspherical secondary phase bodies in 2- and 3-dimensional images. A finite body triangulation (FBT) consists of a network of minimum edge-to-edge distances between adjacent objects in a microstructure. From this is also obtained the characteristic object chords formed by the intersection of the object boundary with the finite body tessellation. These two sets of distances form the basis of a parsimonious homogeneity estimation. The characteristics of the spatial distribution are then evaluated with respect to the distances between objects and the distances within them. Quantitative analysis shows that more physically representative distributions can be obtained by selecting subgraphs, such as the relative neighbourhood graph and the minimum spanning tree, from the finite body tessellation. To demonstrate their potential, we apply these methods to 3-dimensional X-ray computed tomographic images of foamed cement and their 2-dimensional cross sections. The Python computer code used to estimate the FBT is made available. Other applications for the algorithm - such as porous media transport and crack-tip propagation - are also discussed. © 2016 The Authors Journal of Microscopy © 2016 Royal Microscopical Society.
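
    A compact sketch of the distance computation at the heart of the FBT, plus extraction of the minimum spanning tree subgraph, using SciPy; explicit boundary extraction is skipped because the nearest pixel pair between two bodies necessarily lies on their boundaries.

      import numpy as np
      from scipy import ndimage
      from scipy.spatial.distance import cdist
      from scipy.sparse.csgraph import minimum_spanning_tree

      def min_edge_distances(binary_image):
          """Minimum edge-to-edge distances between labelled bodies and
          the MST subgraph built from them."""
          labels, n = ndimage.label(binary_image)
          coords = [np.argwhere(labels == i + 1) for i in range(n)]
          D = np.zeros((n, n))
          for i in range(n):
              for j in range(i + 1, n):
                  # Nearest pixel pair between body i and body j.
                  D[i, j] = D[j, i] = cdist(coords[i], coords[j]).min()
          return D, minimum_spanning_tree(D)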

  13. Comparison of variance estimators for meta-analysis of instrumental variable estimates

    PubMed Central

    Schmidt, AF; Hingorani, AD; Jefferis, BJ; White, J; Groenwold, RHH; Dudbridge, F

    2016-01-01

    Background: Mendelian randomization studies perform instrumental variable (IV) analysis using genetic IVs. Results of individual Mendelian randomization studies can be pooled through meta-analysis. We explored how different variance estimators influence the meta-analysed IV estimate. Methods: Two versions of the delta method (IV before or after pooling), four bootstrap estimators, a jack-knife estimator and a heteroscedasticity-consistent (HC) variance estimator were compared using simulation. Two types of meta-analyses were compared, a two-stage meta-analysis pooling results, and a one-stage meta-analysis pooling datasets. Results: Using a two-stage meta-analysis, coverage of the point estimate using bootstrapped estimators deviated from nominal levels at weak instrument settings and/or outcome probabilities ≤ 0.10. The jack-knife estimator was the least biased resampling method, the HC estimator often failed at outcome probabilities ≤ 0.50 and overall the delta method estimators were the least biased. In the presence of between-study heterogeneity, the delta method before meta-analysis performed best. Using a one-stage meta-analysis all methods performed equally well and better than two-stage meta-analysis of greater or equal size. Conclusions: In the presence of between-study heterogeneity, two-stage meta-analyses should preferentially use the delta method before meta-analysis. Weak instrument bias can be reduced by performing a one-stage meta-analysis. PMID:27591262
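
    For the Wald ratio $\hat\beta = \hat\Gamma/\hat\gamma$ (instrument-outcome over instrument-exposure association; notation assumed here, not taken from the paper), the first-order delta-method variance is

      $$ \widehat{\operatorname{Var}}(\hat\beta) \;\approx\; \frac{\operatorname{Var}(\hat\Gamma)}{\hat\gamma^{2}} \;+\; \frac{\hat\Gamma^{2}\,\operatorname{Var}(\hat\gamma)}{\hat\gamma^{4}} \;-\; \frac{2\,\hat\Gamma\,\operatorname{Cov}(\hat\Gamma,\hat\gamma)}{\hat\gamma^{3}}, $$

    where the covariance term is commonly dropped when the two associations come from non-overlapping samples.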

  14. Constrained maximum consistency multi-path mitigation

    NASA Astrophysics Data System (ADS)

    Smith, George B.

    2003-10-01

    Blind deconvolution algorithms can be useful as pre-processors for signal classification algorithms in shallow water. These algorithms remove the distortion of the signal caused by multipath propagation when no knowledge of the environment is available. A framework has been presented [Smith, J. Acoust. Soc. Am. 107 (2000)] in which filters produce signal estimates from each data channel that are as consistent with each other as possible in a least-squares sense. This framework provides a solution to the blind deconvolution problem. One implementation of this framework yields the cross-relation on which EVAM [Gurelli and Nikias, IEEE Trans. Signal Process. 43 (1995)] and Rietsch [Rietsch, Geophysics 62(6) (1997)] processing are based. In this presentation, partially blind implementations that have good noise stability properties are compared using Classification Operating Characteristics (CLOC) analysis. [Work supported by ONR under Program Element 62747N and NRL, Stennis Space Center, MS.]
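
    A minimal sketch of the cross-relation idea for two channels: since x1 = h1*s and x2 = h2*s imply x1*h2 - x2*h1 = 0, the stacked filter vector is the least-squares null vector of a convolution-matrix system. The channel length L is assumed known, and the scale of the solution is inherently ambiguous.

      import numpy as np
      from scipy.linalg import toeplitz

      def conv_matrix(x, L):
          """(len(x)+L-1) x L Toeplitz matrix C with C @ h == np.convolve(x, h)."""
          col = np.concatenate([x, np.zeros(L - 1)])
          row = np.zeros(L)
          row[0] = x[0]
          return toeplitz(col, row)

      def cross_relation(x1, x2, L):
          """Estimate both channel impulse responses up to a common scale."""
          A = np.hstack([conv_matrix(x1, L), -conv_matrix(x2, L)])
          _, _, Vt = np.linalg.svd(A)
          v = Vt[-1]                  # singular vector of the smallest singular value
          return v[L:], v[:L]         # (h1 estimate, h2 estimate)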

  15. Evaluating IRT- and CTT-Based Methods of Estimating Classification Consistency and Accuracy Indices from Single Administrations

    ERIC Educational Resources Information Center

    Deng, Nina

    2011-01-01

    Three decision consistency and accuracy (DC/DA) methods, the Livingston and Lewis (LL) method, the LEE method, and the Hambleton and Han (HH) method, were evaluated. The purposes of the study were: (1) to evaluate the accuracy and robustness of these methods, especially when their assumptions were not well satisfied, (2) to investigate the "true"…

  16. Highway traffic estimation of improved precision using the derivative-free nonlinear Kalman Filter

    NASA Astrophysics Data System (ADS)

    Rigatos, Gerasimos; Siano, Pierluigi; Zervos, Nikolaos; Melkikh, Alexey

    2015-12-01

    The paper proves that the PDE dynamic model of highway traffic is differentially flat, and by applying spatial discretization it shows that the model can be transformed into an equivalent linear canonical state-space form. For the latter representation of the traffic dynamics, state estimation is performed with the Derivative-free nonlinear Kalman Filter. The proposed filter consists of the Kalman Filter recursion applied to the transformed state-space model of the highway traffic. Moreover, it makes use of an inverse transformation, based again on differential flatness theory, which enables estimates of the state variables of the initial nonlinear PDE model to be obtained. By avoiding approximate linearizations and the truncation of nonlinear terms from the PDE model of the traffic dynamics, the proposed filtering method outperforms, in terms of accuracy, other nonlinear estimators such as the Extended Kalman Filter. The article's theoretical findings are confirmed through simulation experiments.

  17. Mean winds at the cloud top of Venus obtained from two-wavelength UV imaging by Akatsuki

    NASA Astrophysics Data System (ADS)

    Horinouchi, Takeshi; Kouyama, Toru; Lee, Yeon Joo; Murakami, Shin-ya; Ogohara, Kazunori; Takagi, Masahiro; Imamura, Takeshi; Nakajima, Kensuke; Peralta, Javier; Yamazaki, Atsushi; Yamada, Manabu; Watanabe, Shigeto

    2018-01-01

    Venus is covered with thick clouds. Ultraviolet (UV) images at 0.3-0.4 microns show detailed cloud features at the cloud-top level at about 70 km, which are created by an unknown UV-absorbing substance. Images acquired in this wavelength range have traditionally been used to measure winds at the cloud top. In this study, we report low-latitude winds obtained from the images taken by the UV imager, UVI, onboard the Akatsuki orbiter from December 2015 to March 2017. UVI provides images with two filters centered at 365 and 283 nm. While the 365-nm images enable continuation of traditional Venus observations, the 283-nm images visualize cloud features at an SO2 absorption band, which is novel. We used a sophisticated automated cloud-tracking method and thorough quality control to estimate winds with high precision. Horizontal winds obtained from the 283-nm images are generally similar to those from the 365-nm images, but in many cases westward winds from the former are faster than those from the latter by a few m/s. From previous studies, one can argue that the 283-nm images likely reflect cloud features at a higher altitude than the 365-nm images. If this is the case, the superrotation of the Venusian atmosphere generally increases with height at the cloud-top level, where it has been thought to roughly peak. The mean winds obtained from the 365-nm images exhibit a local time dependence consistent with known tidal features. Mean zonal winds exhibit asymmetry with respect to the equator in the latter half of the analysis period, significantly at 365 nm and weakly at 283 nm. This contrast indicates that the relative altitude may vary with time and latitude, and so may the observed altitudes. In contrast, mean meridional winds do not exhibit much long-term variability. A previous study suggested that the geographic distribution of temporal mean zonal winds obtained from UV images from the Venus Express orbiter during 2006-2012 can be interpreted as forced by topographically induced

  18. Analysis of long term trends of precipitation estimates acquired using radar network in Turkey

    NASA Astrophysics Data System (ADS)

    Tugrul Yilmaz, M.; Yucel, Ismail; Kamil Yilmaz, Koray

    2016-04-01

    Precipitation estimates, a vital input in many hydrological and agricultural studies, can be obtained from many different platforms (ground station-, radar-, model-, and satellite-based). Satellite- and model-based estimates are spatially continuous datasets, but they lack the high-resolution information many applications require. Station-based values are actual precipitation observations, but by their nature they are point data. These datasets may be interpolated, but such end-products may have large errors over remote locations whose climate or topography differs from the areas where the stations are installed. Radars have the particular advantage of providing high-spatial-resolution information over land, even though the accuracy of radar-based precipitation estimates depends on the Z-R relationship, mountain blockage, target distance from the radar, spurious echoes resulting from anomalous propagation of the radar beam, bright band contamination and ground clutter. A viable method to obtain spatially and temporally high-resolution, consistent precipitation information is to merge radar and station data, taking advantage of each retrieval platform. An optimally merged product is particularly important in Turkey, where complex topography exerts strong controls on the precipitation regime and in turn hampers observation efforts. There are currently 10 weather radars over Turkey (7 more are planned), which have been obtaining precipitation information since 2007. This study aims to optimally merge radar precipitation data with station-based observations to introduce a station-radar blended precipitation product. This study was supported by TUBITAK fund # 114Y676.

  19. Information, Consistent Estimation and Dynamic System Identification.

    DTIC Science & Technology

    1976-11-01

    [Report documentation page OCR debris removed; only fragments of the abstract survive.] "…representative model from a given model set, applicable to infinite and even non-compact model sets." "…ergodicity. For a thorough development of ergodic theory the reader is referred to, e.g., Doob [1953], Halmos [1956] and Chacon and Ornstein [1959]."

  20. Statistically optimal estimation of Greenland Ice Sheet mass variations from GRACE monthly solutions using an improved mascon approach

    NASA Astrophysics Data System (ADS)

    Ran, J.; Ditmar, P.; Klees, R.; Farahani, H. H.

    2018-03-01

    We present an improved mascon approach to transform monthly spherical harmonic solutions based on GRACE satellite data into mass anomaly estimates in Greenland. The GRACE-based spherical harmonic coefficients are used to synthesize gravity anomalies at satellite altitude, which are then inverted into mass anomalies per mascon. The limited spectral content of the gravity anomalies is properly accounted for by applying a low-pass filter as part of the inversion procedure to make the functional model spectrally consistent with the data. The full error covariance matrices of the monthly GRACE solutions are properly propagated using the law of covariance propagation. Using numerical experiments, we demonstrate the importance of a proper data weighting and of the spectral consistency between functional model and data. The developed methodology is applied to process real GRACE level-2 data (CSR RL05). The obtained mass anomaly estimates are integrated over five drainage systems, as well as over entire Greenland. We find that the statistically optimal data weighting reduces random noise by 35-69%, depending on the drainage system. The obtained mass anomaly time-series are de-trended to eliminate the contribution of ice discharge and are compared with de-trended surface mass balance (SMB) time-series computed with the Regional Atmospheric Climate Model (RACMO 2.3). We show that when using a statistically optimal data weighting in GRACE data processing, the discrepancies between GRACE-based estimates of SMB and modelled SMB are reduced by 24-47%.
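
    The statistically optimal step is weighted least squares with the full data covariance, followed by covariance propagation; a sketch under generic notation (design matrix A mapping mascon anomalies to gravity anomalies, data y with covariance C_y):

      import numpy as np

      def mascon_wls(A, y, C_y):
          """x_hat = (A^T C_y^-1 A)^-1 A^T C_y^-1 y, with propagated covariance."""
          Ci_A = np.linalg.solve(C_y, A)            # C_y^{-1} A
          N = A.T @ Ci_A                            # normal matrix
          x_hat = np.linalg.solve(N, Ci_A.T @ y)
          C_x = np.linalg.inv(N)                    # covariance of x_hat
          return x_hat, C_x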

  1. Estimating cell populations

    NASA Technical Reports Server (NTRS)

    White, B. S.; Castleman, K. R.

    1981-01-01

    An important step in the diagnosis of a cervical cytology specimen is estimating the proportions of the various cell types present. This is usually done with a cell classifier, the error rates of which can be expressed as a confusion matrix. We show how to use the confusion matrix to obtain an unbiased estimate of the desired proportions. We show that the mean square error of this estimate depends on a 'befuddlement matrix' derived from the confusion matrix, and how this, in turn, leads to a figure of merit for cell classifiers. Finally, we work out the two-class problem in detail and present examples to illustrate the theory.
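
    A minimal sketch of the correction described, assuming a row-stochastic confusion matrix C with C[i, j] = P(classified as j | true class i): the observed proportions satisfy p_obs = C^T p_true, so inverting C^T debiases the classifier's raw tallies.

      import numpy as np

      def unbiased_proportions(counts, confusion):
          """counts: classifier tallies per class; returns debiased proportions."""
          p_obs = counts / counts.sum()
          return np.linalg.solve(confusion.T, p_obs)

      # Hypothetical two-class example: 10% of class 0 misread as 1, 20% vice versa.
      C = np.array([[0.9, 0.1],
                    [0.2, 0.8]])
      print(unbiased_proportions(np.array([600.0, 400.0]), C))  # -> [0.571 0.429]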

  2. Are prescription drug insurance choices consistent with expected utility theory?

    PubMed

    Bundorf, M Kate; Mata, Rui; Schoenbaum, Michael; Bhattacharya, Jay

    2013-09-01

    To determine the extent to which people make choices inconsistent with expected utility theory when choosing among prescription drug insurance plans and whether tabular or graphical presentation format influences the consistency of their choices. Members of an Internet-enabled panel chose between two Medicare prescription drug plans. The "low variance" plan required higher out-of-pocket payments for the drugs respondents usually took but lower out-of-pocket payments for the drugs they might need if they developed a new health condition than the "high variance" plan. The probability of a change in health varied within subjects and the presentation format (text vs. graphical) and the affective salience of the clinical condition (abstract vs. risk related to specific clinical condition) varied between subjects. Respondents were classified based on whether they consistently chose either the low or high variance plan. Logistic regression models were estimated to examine the relationship between decision outcomes and task characteristics. The majority of respondents consistently chose either the low or high variance plan, consistent with expected utility theory. Half of respondents consistently chose the low variance plan. Respondents were less likely to make discrepant choices when information was presented in graphical format. Many people, although not all, make choices consistent with expected utility theory when they have information on differences among plans in the variance of out-of-pocket spending. Medicare beneficiaries would benefit from information on the extent to which prescription drug plans provide risk protection. PsycINFO Database Record (c) 2013 APA, all rights reserved.

  3. Spring Small Grains Area Estimation

    NASA Technical Reports Server (NTRS)

    Palmer, W. F.; Mohler, R. J.

    1986-01-01

    SSG3 automatically estimates acreage of spring small grains from Landsat data. The report describes development and testing of a computerized technique for using Landsat multispectral scanner (MSS) data to estimate acreage of spring small grains (wheat, barley, and oats). Application of the technique to analysis of four years of data from the United States and Canada yielded estimates of accuracy comparable to those obtained through procedures that rely on trained analysts.

  4. Information content of slug tests for estimating hydraulic properties in realistic, high-conductivity aquifer scenarios

    NASA Astrophysics Data System (ADS)

    Cardiff, Michael; Barrash, Warren; Thoma, Michael; Malama, Bwalya

    2011-06-01

    Summary: A recently developed unified model for partially-penetrating slug tests in unconfined aquifers (Malama et al., in press) provides a semi-analytical solution for aquifer response at the wellbore in the presence of inertial effects and wellbore skin, and is able to model the full range of responses from overdamped/monotonic to underdamped/oscillatory. While the model provides a unifying framework for realistically analyzing slug tests in aquifers (with the ultimate goal of determining aquifer properties such as hydraulic conductivity K and specific storage Ss), it is currently unclear whether parameters of this model can be well-identified without significant prior information and, thus, what degree of information content can be expected from such slug tests. In this paper, we examine the information content of slug tests in realistic field scenarios with respect to estimating aquifer properties, through analysis of both numerical experiments and field datasets. First, through numerical experiments using Markov Chain Monte Carlo methods for gauging parameter uncertainty and identifiability, we find that: (1) as noted by previous researchers, estimation of aquifer storage parameters using slug test data is highly unreliable and subject to significant uncertainty; (2) joint estimation of aquifer and skin parameters contributes to significant uncertainty in both unless prior knowledge is available; and (3) similarly, without prior information joint estimation of both aquifer radial and vertical conductivity may be unreliable. These results have significant implications for the types of information that must be collected prior to slug test analysis in order to obtain reliable aquifer parameter estimates. For example, plausible estimates of aquifer anisotropy ratios and bounds on wellbore skin K should be obtained, if possible, a priori. Secondly, through analysis of field data - consisting of over 2500 records from partially-penetrating slug tests in a

  5. The numerical evaluation of maximum-likelihood estimates of the parameters for a mixture of normal distributions from partially identified samples

    NASA Technical Reports Server (NTRS)

    Walker, H. F.

    1976-01-01

    Likelihood equations determined by the two types of samples which are necessary conditions for a maximum-likelihood estimate are considered. These equations suggest certain successive-approximations iterative procedures for obtaining maximum-likelihood estimates. These are generalized steepest ascent (deflected gradient) procedures. It is shown that, with probability 1 as $N_0$ approaches infinity (regardless of the relative sizes of $N_0$ and $N_i$, $i = 1, ..., m$), these procedures converge locally to the strongly consistent maximum-likelihood estimates whenever the step size is between 0 and 2. Furthermore, the value of the step size which yields optimal local convergence rates is bounded from below by a number which always lies between 1 and 2.
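
    As a concrete illustration of such a successive-approximations iteration (a minimal EM-style sketch, not the paper's deflected-gradient procedure; the data and starting values are invented for the example), the fixed-point updates for a two-component normal mixture can be written as:

```python
import numpy as np

def fit_two_normal_mixture(x, n_iter=200):
    """Successive-approximations (EM-style) iteration for the
    maximum-likelihood equations of a two-component normal mixture."""
    w, mu1, mu2 = 0.5, x.min(), x.max()   # crude starting values
    s1 = s2 = x.std()
    for _ in range(n_iter):
        # posterior membership probabilities given current parameters
        p1 = w * np.exp(-0.5 * ((x - mu1) / s1) ** 2) / s1
        p2 = (1 - w) * np.exp(-0.5 * ((x - mu2) / s2) ** 2) / s2
        r = p1 / (p1 + p2)
        # re-estimate the parameters from the weighted sample
        w = r.mean()
        mu1 = np.sum(r * x) / np.sum(r)
        mu2 = np.sum((1 - r) * x) / np.sum(1 - r)
        s1 = np.sqrt(np.sum(r * (x - mu1) ** 2) / np.sum(r))
        s2 = np.sqrt(np.sum((1 - r) * (x - mu2) ** 2) / np.sum(1 - r))
    return w, (mu1, s1), (mu2, s2)

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(0.0, 1.0, 500), rng.normal(4.0, 1.5, 300)])
print(fit_two_normal_mixture(x))
```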

  6. An estimating equation approach to dimension reduction for longitudinal data

    PubMed Central

    Xu, Kelin; Guo, Wensheng; Xiong, Momiao; Zhu, Liping; Jin, Li

    2016-01-01

    Sufficient dimension reduction has been extensively explored in the context of independent and identically distributed data. In this article we generalize sufficient dimension reduction to longitudinal data and propose an estimating equation approach to estimating the central mean subspace. The proposed method accounts for the covariance structure within each subject and improves estimation efficiency when the covariance structure is correctly specified. Even if the covariance structure is misspecified, our estimator remains consistent. In addition, our method relaxes distributional assumptions on the covariates and is doubly robust. To determine the structural dimension of the central mean subspace, we propose a Bayesian-type information criterion. We show that the estimated structural dimension is consistent and that the estimated basis directions are root-$n$ consistent, asymptotically normal and locally efficient. Simulations and an analysis of the Framingham Heart Study data confirm the effectiveness of our approach. PMID:27017956

  7. Assessment of dietary intake of flavouring substances within the procedure for their safety evaluation: advantages and limitations of estimates obtained by means of a per capita method.

    PubMed

    Arcella, D; Leclercq, C

    2005-01-01

    The procedure for the safety evaluation of flavourings adopted by the European Commission in order to establish a positive list of these substances is a stepwise approach which was developed by the Joint FAO/WHO Expert Committee on Food Additives (JECFA) and amended by the Scientific Committee on Food. Within this procedure, a per capita amount, based on industrial poundage data of flavourings, is calculated to estimate the dietary intake by means of the maximised survey-derived daily intake (MSDI) method. This paper reviews the MSDI method in order to check whether it can provide conservative intake estimates as needed at the first steps of a stepwise procedure. Scientific papers and opinions dealing with the MSDI method were reviewed. Concentration levels reported by the industry were compared with estimates obtained with the MSDI method. It appeared that, in some cases, these estimates could be orders of magnitude (up to 5) lower than those calculated considering concentration levels provided by the industry and regular consumption of flavoured foods and beverages. A critical review of two studies which had been used to support the statement that MSDI is a conservative method for assessing exposure to flavourings among high consumers was performed. Special attention was given to the factors that affect exposure at high percentiles, such as brand loyalty and portion sizes. It is concluded that these studies may not be suitable to validate the MSDI method used to assess intakes of flavours by European consumers due to shortcomings in the assumptions made and in the data used. Exposure assessment is an essential component of risk assessment. The present paper suggests that the MSDI method is not sufficiently conservative. There is therefore a clear need either for using an alternative method to estimate exposure to flavourings in the procedure or for limiting intakes to the levels at which the safety was assessed.

  8. Estimates of plasma, packed cell and total blood volume in tissues of the rainbow trout (Salmo gairdneri)

    USGS Publications Warehouse

    Gingerich, W.H.; Pityer, R.A.; Rach, J.J.

    1987-01-01

    1. Total blood volume and relative blood volumes in selected tissues were determined in non-anesthetized, confined rainbow trout by using 51Cr-labelled trout erythrocytes as a vascular space marker. 2. Mean total blood volume was estimated to be 4.09 ± 0.55 ml/100 g, or about 75% of that estimated with the commonly used plasma space marker Evans blue dye. 3. Relative tissue blood volumes were greatest in highly perfused tissues such as kidney, gills, brain and liver and least in mosaic muscle. 4. Estimates of tissue vascular spaces, made using radiolabelled erythrocytes, were only 25–50% of those based on plasma space markers. 5. The consistently smaller vascular volumes obtained with labelled erythrocytes could be explained by assuming that commonly used plasma space markers diffuse from the vascular compartment.

  9. Estimation of fecundability from survey data.

    PubMed

    Goldman, N; Westoff, C F; Paul, L E

    1985-01-01

    The estimation of fecundability from survey data is plagued by methodological problems such as misreporting of dates of birth and marriage and the occurrence of premarital exposure to the risk of conception. Nevertheless, estimates of fecundability from World Fertility Survey data for women married in recent years appear to be plausible for most of the surveys analyzed here and are quite consistent with estimates reported in earlier studies. The estimates presented in this article are all derived from the first interval, the interval between marriage or consensual union and the first live birth conception.

  10. Iron insertion and hematite segregation on Fe-doped TiO2 nanoparticles obtained from sol-gel and hydrothermal methods.

    PubMed

    Santos, Reginaldo da S; Faria, Guilherme A; Giles, Carlos; Leite, Carlos A P; Barbosa, Herbert de S; Arruda, Marco A Z; Longo, Claudia

    2012-10-24

    Iron-doped TiO(2) (Fe:TiO(2)) nanoparticles were synthesized by the sol-gel method (with Fe/Ti molar ratios corresponding to 1, 3, and 5%), followed by hydrothermal treatment, drying, and annealing. A similar methodology was used to synthesize TiO(2) and α-Fe(2)O(3) nanoparticles. For comparison, a hematite/titania mixture with Fe/Ti = 4% was also investigated. Characterization of the samples using Rietveld refinement of X-ray diffraction data revealed that TiO(2) consisted of 82% anatase and 18% brookite; for Fe:TiO(2), brookite increased to 30% and hematite was also identified (0.5, 1.0, and 1.2 wt % for samples prepared with 1, 3, and 5% of Fe/Ti). For the hematite/titania mixture, Fe/Ti was estimated as 4.4%, confirming the reliability of the Rietveld method for estimating phase composition. Because the band gap energy, estimated as 3.2 eV for TiO(2), gradually decreased from 3.0 to 2.7 eV with increasing Fe content in Fe:TiO(2), it can be assumed that a fraction of the Fe was also inserted as a dopant in the TiO(2) lattice. Extended X-ray absorption fine structure spectra obtained at the Ti K-edge and Fe K-edge indicated that absorbing Fe occupied a Ti site in the TiO(2) lattice, but hematite features were not observed. Hematite particles also could not be identified in the images obtained by transmission electron microscopy, in spite of iron identification by elemental mapping, suggesting that hematite is segregated at the grain boundaries of Fe:TiO(2).

  11. Teacher Code Switching Consistency and Precision in a Multilingual Mathematics Classroom

    ERIC Educational Resources Information Center

    Chikiwa, Clemence; Schäfer, Marc

    2016-01-01

    This paper reports on a study that investigated teacher code switching consistency and precision in multilingual secondary school mathematics classrooms in South Africa. Data was obtained through interviewing and observing five lessons of each of three mathematics teachers purposively selected from three township schools in the Eastern Cape…

  12. Blind estimation of reverberation time

    NASA Astrophysics Data System (ADS)

    Ratnam, Rama; Jones, Douglas L.; Wheeler, Bruce C.; O'Brien, William D.; Lansing, Charissa R.; Feng, Albert S.

    2003-11-01

    The reverberation time (RT) is an important parameter for characterizing the quality of an auditory space. Sounds in reverberant environments are subject to coloration, which affects speech intelligibility and sound localization. Many state-of-the-art audio signal processing algorithms, for example in hearing aids and telephony, are expected to have the ability to characterize the listening environment and switch on an appropriate processing strategy accordingly. Thus, a method for characterizing room RT based on passively received microphone signals represents an important enabling technology. Current RT estimators, such as Schroeder's method, depend on a controlled sound source, and thus cannot produce an online, blind RT estimate. Here, a method for estimating RT without prior knowledge of sound sources or room geometry is presented. The diffusive tail of reverberation was modeled as an exponentially damped Gaussian white noise process. The time constant of the decay, which provided a measure of the RT, was estimated using a maximum-likelihood procedure. The estimates were obtained continuously, and an order-statistics filter was used to extract the most likely RT from the accumulated estimates. The procedure was illustrated for connected speech. Results obtained for simulated and real room data are in good agreement with the real RT values.
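
    A minimal sketch of the core idea, assuming the model y[n] = a^n w[n] with w[n] Gaussian white noise: for each signal frame the decay rate is found by profile maximum likelihood, and an order statistic over the accumulated frame estimates picks out the most likely RT. The grid of candidate rates, frame length and synthetic decay are all illustrative, not the authors' implementation:

```python
import numpy as np

def blind_rt60(frame, fs):
    """Profile maximum-likelihood decay-rate estimate for the model
    y[n] = a**n * w[n], w[n] ~ N(0, sigma^2); returns the 60 dB decay time."""
    n = np.arange(len(frame))
    best_a, best_ll = None, -np.inf
    for a in np.linspace(0.990, 0.99999, 400):       # candidate decay rates
        sigma2 = np.mean(frame**2 * a**(-2.0 * n))   # MLE of sigma^2 given a
        ll = -0.5 * len(frame) * np.log(sigma2) - np.log(a) * n.sum()
        if ll > best_ll:
            best_a, best_ll = a, ll
    return -3.0 / np.log10(best_a) / fs              # time for a 60 dB decay

fs = 8000
rng = np.random.default_rng(0)
decay = 0.9995 ** np.arange(4 * fs) * rng.normal(size=4 * fs)
frames = decay.reshape(-1, fs // 2)
estimates = [blind_rt60(f, fs) for f in frames]
# order-statistics step: free-decay frames yield the most likely (low) RT
print(np.percentile(estimates, 25))
```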

  13. Estimation of Static Longitudinal Stability of Aircraft Configurations at High Mach Numbers and at Angles of Attack Between 0 deg and +/-180 deg

    NASA Technical Reports Server (NTRS)

    Dugan, Duane W.

    1959-01-01

    The possibility of obtaining useful estimates of the static longitudinal stability of aircraft flying at high supersonic Mach numbers at angles of attack between 0 and +/-180 deg is explored. Existing theories, empirical formulas, and graphical procedures are employed to estimate the normal-force and pitching-moment characteristics of an example airplane configuration consisting of an ogive-cylinder body, trapezoidal wing, and cruciform trapezoidal tail. Existing wind-tunnel data for this configuration at a Mach number of 6.86 provide an evaluation of the estimates up to an angle of attack of 35 deg. Evaluation at higher angles of attack is afforded by data obtained from wind-tunnel tests made with the same configuration at angles of attack between 30 and 150 deg at five Mach numbers between 2.5 and 3.55. Over the ranges of Mach numbers and angles of attack investigated, predictions of normal force and center-of-pressure locations for the configuration considered agree well with those obtained experimentally, particularly at the higher Mach numbers.

  14. Causal inference with measurement error in outcomes: Bias analysis and estimation methods.

    PubMed

    Shu, Di; Yi, Grace Y

    2017-01-01

    Inverse probability weighting estimation has been popularly used to consistently estimate the average treatment effect. Its validity, however, is challenged by the presence of error-prone variables. In this paper, we explore inverse probability weighting estimation with mismeasured outcome variables. We study the impact of measurement error for both continuous and discrete outcome variables and reveal interesting consequences of the naive analysis which ignores measurement error. When a continuous outcome variable is mismeasured under an additive measurement error model, the naive analysis may still yield a consistent estimator; when the outcome is binary, we derive the asymptotic bias in closed form. Furthermore, we develop consistent estimation procedures for practical scenarios where either validation data or replicates are available. With validation data, we propose an efficient method for estimation of the average treatment effect; the efficiency gain is substantial relative to usual methods of using validation data. To provide protection against model misspecification, we further propose a doubly robust estimator which is consistent even when either the treatment model or the outcome model is misspecified. Simulation studies are reported to assess the performance of the proposed methods. An application to a smoking cessation dataset is presented.
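
    For reference, a minimal inverse-probability-weighting estimator of the average treatment effect with a logistic propensity model (without the paper's measurement-error corrections; data and names are illustrative):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def ipw_ate(X, treat, y):
    """Inverse-probability-of-treatment weighted estimate of the
    average treatment effect E[Y(1)] - E[Y(0)]."""
    ps = LogisticRegression().fit(X, treat).predict_proba(X)[:, 1]
    w1 = treat / ps                  # weights for the treated
    w0 = (1 - treat) / (1 - ps)      # weights for the controls
    return np.sum(w1 * y) / np.sum(w1) - np.sum(w0 * y) / np.sum(w0)

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 2))
treat = rng.binomial(1, 1 / (1 + np.exp(-X[:, 0])))   # confounded treatment
y = 2.0 * treat + X @ [1.0, -0.5] + rng.normal(size=2000)  # true effect = 2
print(ipw_ate(X, treat, y))
```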

  15. Long-term consistency in spatial patterns of primate seed dispersal.

    PubMed

    Heymann, Eckhard W; Culot, Laurence; Knogge, Christoph; Noriega Piña, Tony Enrique; Tirado Herrera, Emérita R; Klapproth, Matthias; Zinner, Dietmar

    2017-03-01

    Seed dispersal is a key ecological process in tropical forests, with effects on various levels ranging from plant reproductive success to the carbon storage potential of tropical rainforests. On a local and landscape scale, spatial patterns of seed dispersal create the template for the recruitment process and thus influence the population dynamics of plant species. The strength of this influence will depend on the long-term consistency of spatial patterns of seed dispersal. We examined the long-term consistency of spatial patterns of seed dispersal with spatially explicit data on seed dispersal by two neotropical primate species, Leontocebus nigrifrons and Saguinus mystax (Callitrichidae), collected during four independent studies between 1994 and 2013. Using distributions of dispersal probability over distances independent of plant species, cumulative dispersal distances, and kernel density estimates, we show that spatial patterns of seed dispersal are highly consistent over time. For a specific plant species, the legume Parkia panurensis, the convergence of cumulative distributions at a distance of 300 m, and the high probability of dispersal within 100 m from source trees coincide with the dimension of the spatial-genetic structure at the embryo/juvenile (300 m) and adult (100 m) stages, respectively, of this plant species. Our results are the first demonstration of long-term consistency of spatial patterns of seed dispersal created by tropical frugivores. Such consistency may translate into idiosyncratic patterns of regeneration.

  16. Consistent realization of Celestial and Terrestrial Reference Frames

    NASA Astrophysics Data System (ADS)

    Kwak, Younghee; Bloßfeld, Mathis; Schmid, Ralf; Angermann, Detlef; Gerstl, Michael; Seitz, Manuela

    2018-03-01

    The Celestial Reference System (CRS) is currently realized only by Very Long Baseline Interferometry (VLBI) because it is the space geodetic technique that enables observations in that frame. In contrast, the Terrestrial Reference System (TRS) is realized by means of the combination of four space geodetic techniques: Global Navigation Satellite System (GNSS), VLBI, Satellite Laser Ranging (SLR), and Doppler Orbitography and Radiopositioning Integrated by Satellite. The Earth orientation parameters (EOP) are the link between the two types of systems, CRS and TRS. The EOP series of the International Earth Rotation and Reference Systems Service were combined from specifically selected series of various analysis centers. Other EOP series were generated by a simultaneous estimation together with the TRF while the CRF was fixed. Those computation approaches entail inherent inconsistencies between TRF, EOP, and CRF, also because the input data sets are different. A combined normal equation (NEQ) system, which consists of all the parameters, i.e., TRF, EOP, and CRF, would overcome such an inconsistency. In this paper, we simultaneously estimate TRF, EOP, and CRF from an inter-technique combined NEQ using the latest GNSS, VLBI, and SLR data (2005-2015). The results show that the selection of local ties is most critical to the TRF. The combination of pole coordinates is beneficial for the CRF, whereas the combination of ΔUT1 results in clear rotations of the estimated CRF. However, the standard deviations of the EOP and the CRF improve by the inter-technique combination, which indicates the benefits of a common estimation of all parameters. It became evident that the common determination of TRF, EOP, and CRF systematically influences future ICRF computations at the level of several μas. Moreover, the CRF is influenced by up to 50 μas if the station coordinates and EOP are dominated by the satellite techniques.

  17. State estimation improves prospects for ocean research

    NASA Astrophysics Data System (ADS)

    Stammer, Detlef; Wunsch, C.; Fukumori, I.; Marshall, J.

    Rigorous global ocean state estimation methods can now be used to produce dynamically consistent time-varying model/data syntheses, the results of which are being used to study a variety of important scientific problems. Figure 1 shows a schematic of a complete ocean observing and synthesis system that includes global observations and state-of-the-art ocean general circulation models (OGCM) run on modern computer platforms. A global observing system is described in detail in Smith and Koblinsky [2001], and the present status of ocean modeling and anticipated improvements are addressed by Griffies et al. [2001]. Here, the focus is on the third component of state estimation: the synthesis of the observations and a model into a unified, dynamically consistent estimate.

  18. Resilient Distributed Estimation Through Adversary Detection

    NASA Astrophysics Data System (ADS)

    Chen, Yuan; Kar, Soummya; Moura, Jose M. F.

    2018-05-01

    This paper studies resilient multi-agent distributed estimation of an unknown vector parameter when a subset of the agents is adversarial. We present and analyze a Flag Raising Distributed Estimator ($\\mathcal{FRDE}$) that allows the agents under attack to perform accurate parameter estimation and detect the adversarial agents. The $\\mathcal{FRDE}$ algorithm is a consensus+innovations estimator in which agents combine estimates of neighboring agents (consensus) with local sensing information (innovations). We establish that, under $\\mathcal{FRDE}$, either the uncompromised agents' estimates are almost surely consistent or the uncompromised agents detect compromised agents if and only if the network of uncompromised agents is connected and globally observable. Numerical examples illustrate the performance of $\\mathcal{FRDE}$.
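
    A minimal consensus+innovations sketch for a scalar parameter on a small ring network (the flag-raising adversary detection of FRDE is omitted; the gains, network and noise level are illustrative assumptions):

```python
import numpy as np

def consensus_innovations(A, H, theta, steps=500, noise=0.1):
    """Each agent mixes its neighbors' estimates (consensus) with its own
    noisy local observation (innovations) to estimate the parameter theta."""
    rng = np.random.default_rng(0)
    n = A.shape[0]
    x = np.zeros(n)                      # agents' current estimates
    for k in range(steps):
        alpha = 1.0 / (k + 1)            # decaying innovation gain
        obs = H * theta + noise * rng.normal(size=n)
        x = x + 0.5 * (A @ x - x) + alpha * H * (obs - H * x)
    return x

# ring network of 5 agents, each observing the scalar parameter directly
A = 0.5 * np.roll(np.eye(5), 1, axis=1) + 0.5 * np.roll(np.eye(5), -1, axis=1)
print(consensus_innovations(A, H=1.0, theta=3.0))
```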

  19. Estimating surface hardening profile of blank for obtaining high drawing ratio in deep drawing process using FE analysis

    NASA Astrophysics Data System (ADS)

    Tan, C. J.; Aslian, A.; Honarvar, B.; Puborlaksono, J.; Yau, Y. H.; Chong, W. T.

    2015-12-01

    We constructed an FE axisymmetric model to simulate the effect of partially hardened blanks on increasing the limiting drawing ratio (LDR) of cylindrical cups. We partitioned an arc-shaped hard layer into the cross section of a DP590 blank. We assumed the mechanical property of the layer to be equivalent to either DP980 or DP780. We verified the accuracy of the model by comparing the calculated LDR for DP590 with the one reported in the literature. The LDR for the partially hardened blank increased from 2.11 to 2.50 with a 1 mm deep DP980 ring-shaped hard layer on the top surface of the blank. The position of the layer changed with the drawing ratio. We proposed equations for estimating the inner and outer diameters of the layer and tested their accuracy in the simulation. Although the outer diameters fit well with the estimated line, the inner diameters were slightly smaller than the estimated ones.

  20. Kinetic parameters of the GUINEVERE reference configuration in VENUS-F reactor obtained from a pile noise experiment using Rossi and Feynman methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Geslot, Benoit; Pepino, Alexandra; Blaise, Patrick

    A pile noise measurement campaign has been conducted by the CEA in the VENUS-F reactor (SCK-CEN, Mol, Belgium) in April 2011 in the reference critical configuration of the GUINEVERE experimental program. The experimental setup made it possible to estimate the core kinetic parameters: the prompt neutron decay constant, the delayed neutron fraction and the generation time. A precise assessment of these constants is of prime importance. In particular, the effective delayed neutron fraction is used to normalize and compare calculated reactivities of different subcritical configurations, obtained by modifying either the core layout or the control rods position, with experimental ones deduced from the analysis of measurements. This paper presents results obtained with a CEA-developed time stamping acquisition system. Data were analyzed using Rossi-α and Feynman-α methods. Results were normalized to reactor power using a calibrated fission chamber with a deposit of Np-237. Calculated factors were necessary to the analysis: the Diven factor was computed by the ENEA (Italy) and the power calibration factor by the CNRS/IN2P3/LPC Caen. Results deduced with both methods are consistent with respect to calculated quantities. Recommended values are given by the Rossi-α estimator, which was found to be the most robust. The neutron generation time was found equal to 0.438 ± 0.009 μs and the effective delayed neutron fraction is 765 ± 8 pcm. Discrepancies with the calculated value (722 pcm, calculation from ENEA) are satisfactory: -5.6% for the Rossi-α estimate and -2.7% for the Feynman-α estimate. (authors)
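
    For illustration, the Rossi-α method fits a decaying exponential plus a flat accidental-coincidence background, C(τ) = A + B·exp(-ατ), to the histogram of time lags between detection events; a hedged sketch with synthetic counts (all numbers invented, not the campaign's data):

```python
import numpy as np
from scipy.optimize import curve_fit

def rossi_alpha(tau, A, B, alpha):
    """Correlated fission chains decay as exp(-alpha*tau) above a flat
    accidental-coincidence background A."""
    return A + B * np.exp(-alpha * tau)

# synthetic Rossi-alpha histogram: counts of pair time lags (microseconds)
tau = np.linspace(0.5, 200.0, 100)
rng = np.random.default_rng(0)
counts = rng.poisson(50 + 400 * np.exp(-0.05 * tau)).astype(float)

popt, pcov = curve_fit(rossi_alpha, tau, counts, p0=(40.0, 300.0, 0.02))
print("prompt decay constant alpha =", popt[2], "1/us")
```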

  1. 20 CFR 404.810 - How to obtain a statement of earnings and a benefit estimate statement.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... records at the time of the request. If you have a social security number and have wages or net earnings... prescribed form, giving us your name, social security number, date of birth, and sex. You, your authorized... benefit estimate statement. 404.810 Section 404.810 Employees' Benefits SOCIAL SECURITY ADMINISTRATION...

  2. Estimation of time-dependent Hurst exponents with variational smoothing and application to forecasting foreign exchange rates

    NASA Astrophysics Data System (ADS)

    Garcin, Matthieu

    2017-10-01

    Hurst exponents depict the long memory of a time series. For human-dependent phenomena, as in finance, this feature may vary over time. It justifies modelling dynamics by multifractional Brownian motions, which are consistent with time-dependent Hurst exponents. We improve the existing literature on estimating time-dependent Hurst exponents by proposing a smooth estimate obtained by variational calculus. This method is very general and not restricted to the sole Hurst framework. It is globally more accurate and easier to apply than other existing non-parametric estimation techniques. Besides, in the field of Hurst exponents, it makes it possible to make forecasts based on the estimated multifractional Brownian motion. The application to high-frequency foreign exchange markets (GBP, CHF, SEK, USD, CAD, AUD, JPY, CNY and SGD, all against EUR) shows significantly good forecasts. When the Hurst exponent is higher than 0.5, which indicates long memory, the accuracy is higher.
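
    A rough sliding-window sketch of a time-dependent Hurst estimate (not the paper's variational-calculus smoothing), using the scaling of mean absolute increments, E|X(t+τ) - X(t)| ∝ τ^H; the window length and lags are illustrative choices:

```python
import numpy as np

def local_hurst(x, window=256, lags=(1, 2, 4, 8, 16)):
    """Sliding-window Hurst exponent from the scaling of mean absolute
    increments, E|X(t+lag) - X(t)| ~ lag**H (log-log regression)."""
    lags = np.asarray(lags)
    H = np.full(len(x), np.nan)
    for t in range(window, len(x)):
        seg = x[t - window:t]
        m = [np.mean(np.abs(seg[lag:] - seg[:-lag])) for lag in lags]
        H[t] = np.polyfit(np.log(lags), np.log(m), 1)[0]  # slope = H
    return H

# Brownian-like test signal: cumulative sum of white noise (H ~ 0.5)
x = np.cumsum(np.random.default_rng(0).normal(size=4000))
print(np.nanmean(local_hurst(x)))
```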

  3. Comparing basal area growth models, consistency of parameters, and accuracy of prediction

    Treesearch

    J.J. Colbert; Michael Schuckers; Desta Fekedulegn

    2002-01-01

    We fit alternative sigmoid growth models to sample tree basal area historical data derived from increment cores and disks taken at breast height. We examine and compare the estimated parameters for these models across a range of sample sites. Models are rated on consistency of parameters and on their ability to fit growth data from four sites that are located across a...

  4. Monte Carlo Estimation of Absorbed Dose Distributions Obtained from Heterogeneous 106Ru Eye Plaques.

    PubMed

    Zaragoza, Francisco J; Eichmann, Marion; Flühs, Dirk; Sauerwein, Wolfgang; Brualla, Lorenzo

    2017-09-01

    The distribution of the emitter substance in 106Ru eye plaques is usually assumed to be homogeneous for treatment planning purposes. However, this distribution is never homogeneous, and it widely differs from plaque to plaque due to manufacturing factors. By Monte Carlo simulation of radiation transport, we study the absorbed dose distribution obtained from the specific CCA1364 and CCB1256 106Ru plaques, whose actual emitter distributions were measured. The idealized, homogeneous CCA and CCB plaques are also simulated. The largest discrepancy in depth dose distribution observed between the heterogeneous and the homogeneous plaques was 7.9 and 23.7% for the CCA and CCB plaques, respectively. In terms of isodose lines, the line referring to 100% of the reference dose penetrates 0.2 and 1.8 mm deeper in the case of the heterogeneous CCA and CCB plaques, respectively, with respect to the homogeneous counterpart. The observed differences in absorbed dose distributions obtained from heterogeneous and homogeneous plaques are clinically irrelevant if the plaques are used with a lateral safety margin of at least 2 mm. However, these differences may be relevant if the plaques are used in eccentric positioning.

  5. Combining nutation and surface gravity observations to estimate the Earth's core and inner core resonant frequencies

    NASA Astrophysics Data System (ADS)

    Ziegler, Yann; Lambert, Sébastien; Rosat, Séverine; Nurul Huda, Ibnu; Bizouard, Christian

    2017-04-01

    Nutation time series derived from very long baseline interferometry (VLBI) and time-varying surface gravity data recorded by superconducting gravimeters (SG) have long been used separately to assess the Earth's interior via the estimation of the free core and inner core resonance effects on nutation or tidal gravity. The results obtained from these two techniques have recently been shown to be consistent, making relevant the combination of VLBI and SG observables and the estimation of Earth's interior parameters in a single inversion. We present here the intermediate results of the ongoing project of combining nutation and surface gravity time series to improve estimates of the Earth's core and inner core resonant frequencies. We use VLBI nutation time series spanning 1984-2016 derived by the International VLBI Service for geodesy and astrometry (IVS) as the result of a combination of inputs from various IVS analysis centers, and surface gravity data from about 15 SG stations. We address here the resonance model used for describing the Earth's interior response to tidal excitation; the data preparation, consisting of error recalibration and amplitude fitting for the nutation data, and the processing of the SG time-varying gravity to remove gaps, spikes, steps and other disturbances, followed by tidal analysis with the ETERNA 3.4 software package; the preliminary estimates of the resonant periods; and the correlations between parameters.

  6. Tropical forest plantation biomass estimation using RADARSAT-SAR and TM data of south china

    NASA Astrophysics Data System (ADS)

    Wang, Chenli; Niu, Zheng; Gu, Xiaoping; Guo, Zhixing; Cong, Pifu

    2005-10-01

    Forest biomass is one of the most important parameters for global carbon stock models, yet it can only be estimated with great uncertainty. Remote sensing, especially SAR data, offers the possibility of providing relatively accurate forest biomass estimates at a lower cost than inventory when studying tropical forests. The goal of this research was to compare the sensitivity of forest biomass to Landsat TM and RADARSAT-SAR data and to assess the efficiency of NDVI, EVI and other vegetation indices in estimating forest biomass, based on field survey data and GIS in south China. Based on vegetation indices and factor analysis, multiple regression and neural networks were developed for biomass estimation for each species of the plantation. For each species, the best relationship between predicted biomass and that measured in the field survey was obtained with a neural network developed for that species. The relationship between predicted and measured biomass derived from vegetation indices differed between species. This study concludes that single bands and many vegetation indices are weakly correlated with forest biomass. The RADARSAT-SAR backscatter coefficient has a relatively good logarithmic correlation with forest biomass, but neither TM spectral bands nor vegetation indices alone are sufficient to establish an efficient model for biomass estimation due to the saturation of bands and vegetation indices; multiple regression models that combine spectral and environmental variables improve biomass estimation performance. Compared with TM, relatively good estimation results can be achieved with RADARSAT-SAR, but both had limitations in tropical forest biomass estimation. The estimation results obtained are not accurate enough for forest management purposes at the forest stand level. However, the approximate volume estimates derived by the method can be useful in areas where no other forest information is available. Therefore, this paper provides a better…

  7. Subtitle-Based Word Frequencies as the Best Estimate of Reading Behavior: The Case of Greek

    PubMed Central

    Dimitropoulou, Maria; Duñabeitia, Jon Andoni; Avilés, Alberto; Corral, José; Carreiras, Manuel

    2010-01-01

    Previous evidence has shown that word frequencies calculated from corpora based on film and television subtitles can readily account for reading performance, since the language used in subtitles greatly approximates everyday language. The present study examines this issue in a society with increased exposure to subtitle reading. We compiled SUBTLEX-GR, a subtitle-based corpus consisting of more than 27 million Modern Greek words, and tested to what extent subtitle-based frequency estimates and those taken from a written corpus of Modern Greek account for the lexical decision performance of young Greek adults who are exposed to subtitle reading on a daily basis. Results showed that SUBTLEX-GR frequency estimates effectively accounted for participants' reading performance in two different visual word recognition experiments. More importantly, different analyses showed that frequencies estimated from a subtitle corpus explained the obtained results significantly better than traditional frequencies derived from written corpora. PMID:21833273

  8. Automatic estimation of retinal nerve fiber bundle orientation in SD-OCT images using a structure-oriented smoothing filter

    NASA Astrophysics Data System (ADS)

    Ghafaryasl, Babak; Baart, Robert; de Boer, Johannes F.; Vermeer, Koenraad A.; van Vliet, Lucas J.

    2017-02-01

    Optical coherence tomography (OCT) yields high-resolution, three-dimensional images of the retina. A better understanding of retinal nerve fiber bundle (RNFB) trajectories in combination with visual field data may be used for future diagnosis and monitoring of glaucoma. However, manual tracing of these bundles is a tedious task. In this work, we present an automatic technique to estimate the orientation of RNFBs from volumetric OCT scans. Our method consists of several steps, starting from automatic segmentation of the retinal nerve fiber layer (RNFL). Then, a stack of en face images around the posterior nerve fiber layer interface was extracted. The image showing the best visibility of RNFB trajectories was selected for further processing. After denoising the selected en face image, a semblance structure-oriented filter was applied to probe the strength of local linear structure in a discrete set of orientations, creating an orientation space. Gaussian filtering along the orientation axis in this space is used to find the dominant orientation. Next, a confidence map was created to supplement the estimated orientation. This confidence map was used as pixel weight in normalized convolution to regularize the semblance filter response, after which a new orientation estimate can be obtained. Finally, after several iterations an orientation field corresponding to the strongest local orientation was obtained. The RNFB orientations of six macular scans from three subjects were estimated. For all scans, visual inspection shows a good agreement between the estimated orientation fields and the RNFB trajectories in the en face images. Additionally, a good correlation between the orientation fields of two scans of the same subject was observed. Our method was also applied to a larger field of view around the macula. Manual tracing of the RNFB trajectories shows a good agreement with the streamlines automatically obtained by fiber tracking.

  9. Estimating the atmospheric boundary layer height over sloped, forested terrain from surface spectral analysis during BEARPEX

    NASA Astrophysics Data System (ADS)

    Choi, W.; Faloona, I. C.; McKay, M.; Goldstein, A. H.; Baker, B.

    2010-11-01

    In this study the atmospheric boundary layer (ABL) height (zi) over complex, forested terrain is estimated based on the power spectra and the integral length scale of horizontal winds obtained from a three-axis sonic anemometer during the BEARPEX (Biosphere Effects on Aerosol and Photochemistry) Experiment. The zi values estimated with this technique showed very good agreement with observations obtained from balloon tether sonde (2007) and rawinsonde (2009) measurements under unstable conditions (z/L < 0) at the coniferous forest in the California Sierra Nevada. The behavior of the nocturnal boundary layer height (h) and power spectra of lateral winds and temperature under stable conditions (z/L > 0) is also presented. The nocturnal boundary layer height is found to be fairly well predicted by a recent interpolation formula proposed by Zilitinkevich et al. (2007), although it was observed to only vary from 60-80 m during the experiment. Finally, significant directional wind shear was observed during both day and night with winds backing from the prevailing west-southwesterlies in the ABL (anabatic cross-valley circulation) to consistent southerlies in a layer ~1 km thick just above the ABL before veering to the prevailing westerlies further aloft. We show that this is consistent with the forcing of a thermal wind driven by the regional temperature gradient directed due east in the lower troposphere.

  10. Utilization of electrical impedance imaging for estimation of in-vivo tissue resistivities

    NASA Astrophysics Data System (ADS)

    Eyuboglu, B. Murat; Pilkington, Theo C.

    1993-08-01

    In order to determine the in vivo resistivity of tissues in the thorax, the possibility of combining electrical impedance imaging (EII) techniques with (1) anatomical data extracted from high-resolution images, (2) a priori knowledge of tissue resistivities, and (3) a priori noise information was assessed in this study. A Least Square Error Estimator (LSEE) and a statistically constrained Minimum Mean Square Error Estimator (MiMSEE) were implemented to estimate regional electrical resistivities from potential measurements made on the body surface. A two-dimensional boundary element model of the human thorax, which consists of four different conductivity regions (the skeletal muscle, the heart, the right lung, and the left lung), was adopted to simulate the measured EII torso potentials. The calculated potentials were then perturbed by simulated instrumentation noise. The signal information used to form the statistical constraint for the MiMSEE was obtained from a priori knowledge of the physiological range of tissue resistivities. The noise constraint was determined from a priori knowledge of errors due to linearization of the forward problem and to the instrumentation noise.

  11. Multiscale spatial and temporal estimation of the b-value

    NASA Astrophysics Data System (ADS)

    García-Hernández, R.; D'Auria, L.; Barrancos, J.; Padilla, G.

    2017-12-01

    The estimation of the spatial and temporal variations of the Gutenberg-Richter b-value is of great importance in different seismological applications. One of the problems affecting its estimation is the heterogeneous distribution of the seismicity, which makes the estimate strongly dependent upon the selected spatial and/or temporal scale. This is especially important in volcanoes, where dense clusters of earthquakes often overlap the background seismicity. Proposed solutions for estimating temporal variations of the b-value include considering equally spaced time intervals or variable intervals having an equal number of earthquakes. Similar approaches have been proposed to image the spatial variations of this parameter as well. We propose a novel multiscale approach, based on the method of Ogata and Katsura (1993), allowing a consistent estimation of the b-value regardless of the considered spatial and/or temporal scales. Our method, named MUST-B (MUltiscale Spatial and Temporal characterization of the B-value), basically consists of computing estimates of the b-value at multiple temporal and spatial scales, extracting for a given spatio-temporal point a statistical estimator of the value, as well as an indication of the characteristic spatio-temporal scale. This approach also includes a consistent estimation of the completeness magnitude (Mc) and of the uncertainties on both b and Mc. We applied this method to example datasets for volcanic (Tenerife, El Hierro) and tectonic areas (Central Italy), as well as an example application at global scale.
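
    The single-scale building block is the classical Aki maximum-likelihood b-value estimate; a toy sketch evaluating it over nested sample sizes hints at the multiscale idea, though it does not reproduce the MUST-B statistics (synthetic catalog, illustrative parameters):

```python
import numpy as np

def aki_b_value(mags, mc):
    """Aki (1965) maximum-likelihood b-value for magnitudes >= mc,
    with the standard b/sqrt(n) uncertainty."""
    m = mags[mags >= mc]
    b = np.log10(np.e) / (m.mean() - mc)
    return b, b / np.sqrt(len(m))

# synthetic Gutenberg-Richter catalog with true b = 1
rng = np.random.default_rng(0)
mags = 1.0 + rng.exponential(scale=np.log10(np.e), size=5000)
# toy "multiscale" loop: watch the estimate stabilize as the window grows
for n in (100, 500, 2000, 5000):
    b, sb = aki_b_value(mags[:n], mc=1.0)
    print(n, round(b, 2), "+/-", round(sb, 2))
```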

  12. Estimation of the biserial correlation and its sampling variance for use in meta-analysis.

    PubMed

    Jacobs, Perke; Viechtbauer, Wolfgang

    2017-06-01

    Meta-analyses are often used to synthesize the findings of studies examining the correlational relationship between two continuous variables. When only dichotomous measurements are available for one of the two variables, the biserial correlation coefficient can be used to estimate the product-moment correlation between the two underlying continuous variables. Unlike the point-biserial correlation coefficient, biserial correlation coefficients can therefore be integrated with product-moment correlation coefficients in the same meta-analysis. The present article describes the estimation of the biserial correlation coefficient for meta-analytic purposes and reports simulation results comparing different methods for estimating the coefficient's sampling variance. The findings indicate that commonly employed methods yield inconsistent estimates of the sampling variance across a broad range of research situations. In contrast, consistent estimates can be obtained using two methods that appear to be unknown in the meta-analytic literature. A variance-stabilizing transformation for the biserial correlation coefficient is described that allows for the construction of confidence intervals for individual coefficients with close to nominal coverage probabilities in most of the examined conditions. Copyright © 2016 John Wiley & Sons, Ltd.
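
    For reference, the biserial coefficient can be computed from the point-biserial correlation via r_b = r_pb·sqrt(p·q)/φ(Φ⁻¹(p)); a minimal sketch with synthetic bivariate normal data (names illustrative):

```python
import numpy as np
from scipy.stats import norm

def biserial(y, d):
    """Biserial correlation between a continuous variable y and an
    artificially dichotomized variable d (0/1), via the point-biserial."""
    p = d.mean()
    q = 1 - p
    r_pb = np.corrcoef(y, d)[0, 1]
    ordinate = norm.pdf(norm.ppf(p))   # normal density at the cut point
    return r_pb * np.sqrt(p * q) / ordinate

# dichotomize one of two correlated normals and recover rho ~ 0.6
rng = np.random.default_rng(0)
u = rng.normal(size=10000)
v = 0.6 * u + np.sqrt(1 - 0.36) * rng.normal(size=10000)
print(biserial(y=v, d=(u > 0.5).astype(float)))
```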

  13. Evaluation and interpretation of Thematic Mapper ratios in equations for estimating corn growth parameters

    NASA Technical Reports Server (NTRS)

    Dardner, B. R.; Blad, B. L.; Thompson, D. R.; Henderson, K. E.

    1985-01-01

    Reflectance and agronomic Thematic Mapper (TM) data were analyzed to determine possible data transformations for evaluating several plant parameters of corn. Three transformation forms were used: the ratio of two TM bands, logarithms of two-band ratios, and normalized differences of two bands. Normalized differences and logarithms of two-band ratios responded similarly in the equations for estimating the plant growth parameters evaluated in this study. Two-term equations were required to obtain the maximum predictability of percent ground cover, canopy moisture content, and total wet phytomass. Standard error of estimate values were 15-26 percent lower for two-term estimates of these parameters than for one-term estimates. The terms log(TM4/TM2) and (TM4/TM5) produced the maximum predictability for leaf area and dry green leaf weight, respectively. The middle infrared bands TM5 and TM7 are essential for maximizing predictability for all measured plant parameters except leaf area index. The estimating models were evaluated over bare soil to discriminate between equations which are statistically similar. Qualitative interpretations of the resulting prediction equations are consistent with general agronomic and remote sensing theory.

  14. Consistent model driven architecture

    NASA Astrophysics Data System (ADS)

    Niepostyn, Stanisław J.

    2015-09-01

    The goal of the MDA is to produce software systems from abstract models in a way where human interaction is restricted to a minimum. These abstract models are based on the UML language. However, the semantics of UML models is defined in a natural language. Consequently, verification of the consistency of these diagrams is needed in order to identify errors in requirements at an early stage of the development process. The verification of consistency is difficult due to the semi-formal nature of UML diagrams. We propose automatic verification of the consistency of a series of UML diagrams originating from abstract models, implemented with our consistency rules. This Consistent Model Driven Architecture approach enables us to automatically generate complete workflow applications from consistent and complete models developed from abstract models (e.g. a Business Context Diagram). Therefore, our method can be used to check the practicability (feasibility) of software architecture models.

  15. Equivalent linearization for fatigue life estimates of a nonlinear structure

    NASA Technical Reports Server (NTRS)

    Miles, R. N.

    1989-01-01

    An analysis is presented of the suitability of the method of equivalent linearization for estimating the fatigue life of a nonlinear structure. Comparisons are made of the fatigue life of a nonlinear plate as predicted using conventional equivalent linearization and three other more accurate methods. The excitation of the plate is assumed to be Gaussian white noise, and the plate response is modeled using a single resonant mode. The methods used for comparison consist of numerical simulation, a probabilistic formulation, and a modification of equivalent linearization which avoids the usual assumption that the response process is Gaussian. Remarkably close agreement is obtained between all four methods, even for cases where the response is significantly nonlinear.

  16. Multiscale estimation of excess mass from gravity data

    NASA Astrophysics Data System (ADS)

    Castaldo, Raffaele; Fedi, Maurizio; Florio, Giovanni

    2014-06-01

    We describe a multiscale method to estimate the excess mass of gravity anomaly sources, based on the theory of source moments. Using a multipole expansion of the potential field and considering only the data along the vertical direction, a system of linear equations is obtained. The choice of inverting data along a vertical profile can help us to reduce the interference effects due to nearby anomalies and will allow a local estimate of the source parameters. A criterion is established allowing the selection of the optimal highest altitude of the vertical profile data and truncation order of the series expansion. The inversion provides an estimate of the total anomalous mass and of the depth to the centre of mass. The method has several advantages with respect to classical methods, such as the Gauss method: (i) we need just a 1-D inversion to obtain our estimates, the inverted data being sampled along a single vertical profile; (ii) the resolution may be straightforwardly enhanced by using vertical derivatives; (iii) the centre of mass is also estimated, besides the excess mass; (iv) the method is very robust versus noise; (v) the profile may be chosen in such a way as to minimize the effects from interfering anomalies or from side effects due to a limited area extension. The multiscale estimation of excess mass method can be successfully used in various fields of application. Here, we analyse the gravity anomaly generated by a sulphide body in the Skelleftea ore district, North Sweden, obtaining source mass and volume estimates in agreement with the known information. We show also that these estimates are substantially improved with respect to those obtained with the classical approach.

  17. Simulation of fMRI signals to validate dynamic causal modeling estimation

    NASA Astrophysics Data System (ADS)

    Anandwala, Mobin; Siadat, Mohamad-Reza; Hadi, Shamil M.

    2012-03-01

    Through cognitive tasks certain brain areas are activated and also receive increased blood flow. This is modeled through a state system consisting of two separate parts: one that deals with the neural node stimulation and the other with the blood response during that stimulation. The rationale behind using this state system is to validate existing analysis methods such as DCM to see what levels of noise they can handle. Using the forward Euler method, this system was approximated by a series of difference equations. What was obtained was the hemodynamic response for each brain area, which was used to test an analysis tool that estimates functional connectivity between brain areas under a given amount of noise. The importance of modeling this system is not only to have a model for the neural response but also to compare it to actual data obtained through functional imaging scans.
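
    A minimal forward-Euler sketch of such a two-part state system (a generic linear neural-plus-blood-response toy model; the time constants, gain and stimulus are invented, not the authors' equations):

```python
import numpy as np

def simulate(stim, dt=0.1, tau_n=1.0, tau_b=3.0, gain=1.5):
    """Forward-Euler difference equations for a toy two-part state system:
    neural state n driven by the stimulus, blood response b driven by n."""
    n = b = 0.0
    out = []
    for u in stim:
        n = n + dt * (-n / tau_n + u)          # neural node dynamics
        b = b + dt * (-b / tau_b + gain * n)   # hemodynamic response
        out.append(b)
    return np.array(out)

stim = np.zeros(600)
stim[50:100] = 1.0                             # 5 s block stimulus (dt = 0.1 s)
bold = simulate(stim)
print(bold.max())
```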

  18. Estimating black bear population density and genetic diversity at Tensas River, Louisiana using microsatellite DNA markers

    USGS Publications Warehouse

    Boersen, Mark R.; Clark, Joseph D.; King, Tim L.

    2003-01-01

    The Recovery Plan for the federally threatened Louisiana black bear (Ursus americanus luteolus) mandates that remnant populations be estimated and monitored. In 1999 we obtained genetic material with barbed-wire hair traps to estimate bear population size and genetic diversity at the 329-km2 Tensas River Tract, Louisiana. We constructed and monitored 122 hair traps, which produced 1,939 hair samples. Of those, we randomly selected 116 subsamples for genetic analysis and used up to 12 microsatellite DNA markers to obtain multilocus genotypes for 58 individuals. We used Program CAPTURE to compute estimates of population size using multiple mark-recapture models. The area of study was almost entirely circumscribed by agricultural land; thus, the population was geographically closed. Also, study-area boundaries were biologically discrete, enabling us to accurately estimate population density. Using model Chao Mh to account for possible effects of individual heterogeneity in capture probabilities, we estimated the population size to be 119 (SE=29.4) bears, or 0.36 bears/km2. We were forced to examine a substantial number of loci to differentiate between some individuals because of low genetic variation. Despite the probable introduction of genes from Minnesota bears in the 1960s, the isolated population at Tensas exhibited characteristics consistent with inbreeding and genetic drift. Consequently, the effective population size at Tensas may be as few as 32, which warrants continued monitoring or possibly genetic augmentation.
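
    A common closed-form version of Chao's heterogeneity estimator uses the capture-frequency counts; the sketch below is illustrative only (the counts are invented and do not reproduce the study's estimate of 119):

```python
import numpy as np

def chao_mh(capture_counts):
    """Chao's (1987) lower-bound population estimator under individual
    heterogeneity: N = S + f1**2 / (2*f2), with f_k the number of
    individuals captured exactly k times and S the number observed."""
    c = np.asarray(capture_counts)
    S = (c > 0).sum()          # distinct individuals observed
    f1 = (c == 1).sum()        # captured exactly once
    f2 = (c == 2).sum()        # captured exactly twice
    return S + f1 ** 2 / (2.0 * f2)

# illustrative hair-trap detection counts per genotyped individual
counts = np.array([1] * 25 + [2] * 15 + [3] * 10 + [4] * 8)
print(chao_mh(counts))         # ~78.8 for these invented counts
```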

  19. Investigation of Properties of Nanocomposite Polyimide Samples Obtained by Fused Deposition Modeling

    NASA Astrophysics Data System (ADS)

    Polyakov, I. V.; Vaganov, G. V.; Yudin, V. E.; Ivan'kova, E. M.; Popova, E. N.; Elokhovskii, V. Yu.

    2018-03-01

    Nanomodified polyimide samples were obtained by fused deposition modeling (FDM) using an experimental setup for 3D printing of highly heat-resistant plastics. The mechanical properties and structure of these samples were studied by viscosimetry, differential scanning calorimetry, and scanning electron microscopy. A comparative estimation of the mechanical properties of laboratory samples obtained from a nanocomposite based on heat-resistant polyetherimide by FDM and injection molding is presented.

  20. Simultaneous head tissue conductivity and EEG source location estimation.

    PubMed

    Akalin Acar, Zeynep; Acar, Can E; Makeig, Scott

    2016-01-01

    Accurate electroencephalographic (EEG) source localization requires an electrical head model incorporating accurate geometries and conductivity values for the major head tissues. While consistent conductivity values have been reported for scalp, brain, and cerebrospinal fluid, measured brain-to-skull conductivity ratio (BSCR) estimates have varied between 8 and 80, likely reflecting both inter-subject and measurement method differences. In simulations, mis-estimation of skull conductivity can produce source localization errors as large as 3 cm. Here, we describe an iterative gradient-based approach to Simultaneous tissue Conductivity And source Location Estimation (SCALE). The scalp projection maps used by SCALE are obtained from near-dipolar effective EEG sources found by adequate independent component analysis (ICA) decomposition of sufficient high-density EEG data. We applied SCALE to simulated scalp projections of 15 cm2-scale cortical patch sources in an MR image-based electrical head model with simulated BSCR of 30. Initialized either with a BSCR of 80 or 20, SCALE estimated BSCR as 32.6. In Adaptive Mixture ICA (AMICA) decompositions of (45-min, 128-channel) EEG data from two young adults we identified sets of 13 independent components having near-dipolar scalp maps compatible with a single cortical source patch. Again initialized with either BSCR 80 or 25, SCALE gave BSCR estimates of 34 and 54 for the two subjects respectively. The ability to accurately estimate skull conductivity non-invasively from any well-recorded EEG data in combination with a stable and non-invasively acquired MR imaging-derived electrical head model could remove a critical barrier to using EEG as a sub-cm2-scale accurate 3-D functional cortical imaging modality. Copyright © 2015 Elsevier Inc. All rights reserved.

  2. Space Shuttle propulsion parameter estimation using optimal estimation techniques, volume 1

    NASA Technical Reports Server (NTRS)

    1983-01-01

    The mathematical developments and their computer program implementation for the Space Shuttle propulsion parameter estimation project are summarized. The estimation approach chosen is extended Kalman filtering with a modified Bryson-Frazier smoother. Its use here is motivated by the objective of obtaining better estimates than those available from filtering and of eliminating the lag associated with filtering. The estimation technique uses as the dynamical process the six-degree-of-freedom equations of motion, resulting in twelve state vector elements. In addition to these are mass and solid propellant burn depth as the "system" state elements. The "parameter" state elements can include aerodynamic coefficient, inertia, center-of-gravity, atmospheric wind, etc. deviations from referenced values. Propulsion parameter state elements have been included not as the options just discussed but as the main parameter states to be estimated. The mathematical developments were completed for all these parameters. Since the system dynamics and measurement processes are nonlinear functions of the states, the mathematical developments are taken up almost entirely by the linearization of these equations as required by the estimation algorithms.

  3. Design of Low-Cost Vehicle Roll Angle Estimator Based on Kalman Filters and an Iot Architecture.

    PubMed

    Garcia Guzman, Javier; Prieto Gonzalez, Lisardo; Pajares Redondo, Jonatan; Sanz Sanchez, Susana; Boada, Beatriz L

    2018-06-03

    In recent years, there have been many advances in vehicle technologies based on the efficient use of real-time data provided by embedded sensors. Some of these technologies, such as Roll Stability Control (RSC) systems for commercial vehicles, can help avoid a crash or reduce its severity. In RSC, several critical variables such as sideslip or roll angle can only be directly measured using expensive equipment. These kinds of devices would increase the price of commercial vehicles. Nevertheless, sideslip or roll angle values can be estimated using MEMS sensors in combination with data fusion algorithms. The objectives stated for this research work consist of integrating roll angle estimators based on Linear and Unscented Kalman filters, evaluating the precision of the results obtained, and determining whether the hard real-time processing constraints are fulfilled so that this kind of estimator can be embedded in IoT architectures based on low-cost equipment able to be deployed in commercial vehicles. An experimental testbed composed of a van with two sets of low-cost kits was set up, the first one including a Raspberry Pi 3 Model B, and the other having an Intel Edison System on Chip. This experimental environment was tested under different conditions for comparison. The results obtained from the low-cost experimental kits, based on IoT architectures and including estimators based on Kalman filters, provide accurate roll angle estimation. Also, these results show that the processing time to acquire the data and execute the estimations based on Kalman filters fulfills hard real-time constraints.
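
    A minimal one-state linear Kalman filter sketch for roll angle, fusing an integrated gyro rate (prediction) with an accelerometer-derived angle (update); this is a generic textbook filter under invented noise parameters, not the authors' LKF/UKF design:

```python
import numpy as np

def roll_kf(gyro_rate, acc_roll, dt=0.01, q=1e-4, r=1e-2):
    """One-state linear Kalman filter: integrate the gyro roll rate in the
    prediction step, correct with the accelerometer-derived roll angle."""
    x, P = 0.0, 1.0
    est = []
    for w, z in zip(gyro_rate, acc_roll):
        x, P = x + dt * w, P + q                 # predict
        K = P / (P + r)                          # Kalman gain
        x, P = x + K * (z - x), (1 - K) * P      # update
        est.append(x)
    return np.array(est)

rng = np.random.default_rng(0)
t = np.arange(0, 10, 0.01)
true_roll = 0.2 * np.sin(0.5 * t)
gyro = np.gradient(true_roll, 0.01) + 0.01 * rng.normal(size=t.size)
acc = true_roll + 0.1 * rng.normal(size=t.size)
print(np.abs(roll_kf(gyro, acc) - true_roll).mean())  # mean estimation error
```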

  4. VO2 estimation using 6-axis motion sensor with sports activity classification.

    PubMed

    Nagata, Takashi; Nakamura, Naoteru; Miyatake, Masato; Yuuki, Akira; Yomo, Hiroyuki; Kawabata, Takashi; Hara, Shinsuke

    2016-08-01

    In this paper, we focus on oxygen consumption (VO2) estimation using a 6-axis motion sensor (3-axis accelerometer and 3-axis gyroscope) for people playing sports with diverse intensities. The VO2 estimated with a small motion sensor can be used to calculate energy expenditure; however, its accuracy depends on the intensities of the various types of activities. In order to achieve high accuracy over a wide range of intensities, we employ an estimation framework that first classifies activities with a simple machine-learning based classification algorithm. We prepare different coefficients of a linear regression model for different types of activities, which are determined with training data obtained by experiments. The best-suited model is used for each type of activity when VO2 is estimated. The accuracy of the employed framework depends on the trade-off between the degradation due to classification errors and the improvement brought by applying a separate, optimum model to VO2 estimation. Taking this trade-off into account, we evaluate the accuracy of the employed estimation framework by using a set of experimental data consisting of VO2 and motion data of people over a wide range of exercise intensities, which were measured by a VO2 meter and motion sensor, respectively. Our numerical results show that the employed framework can improve the estimation accuracy in comparison to a reference method that uses a common regression model for all types of activities.
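
    A hedged sketch of the classify-then-regress framework (a generic classifier plus one linear model per activity class; the features, classes and data are synthetic placeholders, not the paper's sensor pipeline):

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.linear_model import LinearRegression

def fit_per_activity(features, activity, vo2):
    """Train an activity classifier on motion features, then one linear
    VO2 regression model per activity class."""
    clf = DecisionTreeClassifier(max_depth=3).fit(features, activity)
    models = {a: LinearRegression().fit(features[activity == a],
                                        vo2[activity == a])
              for a in np.unique(activity)}
    return clf, models

def predict(clf, models, features):
    """Classify each sample, then apply the matching regression model."""
    labels = clf.predict(features)
    return np.array([models[a].predict(f[None, :])[0]
                     for a, f in zip(labels, features)])

# synthetic data: 3 activity classes with different intensity/VO2 slopes
rng = np.random.default_rng(0)
activity = rng.integers(0, 3, 600)
features = rng.normal(size=(600, 6)) + activity[:, None]
vo2 = (5 + 4 * activity) + 0.3 * features.sum(axis=1) + rng.normal(size=600)
clf, models = fit_per_activity(features, activity, vo2)
print(np.abs(predict(clf, models, features) - vo2).mean())
```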

  5. Energy and maximum norm estimates for nonlinear conservation laws

    NASA Technical Reports Server (NTRS)

    Olsson, Pelle; Oliger, Joseph

    1994-01-01

    We have devised a technique that makes it possible to obtain energy estimates for initial-boundary value problems for nonlinear conservation laws. The two major tools to achieve the energy estimates are a certain splitting of the flux vector derivative f(u)_x, and a structural hypothesis, referred to as a cone condition, on the flux vector f(u). These hypotheses are fulfilled for many equations that occur in practice, such as the Euler equations of gas dynamics. It should be noted that the energy estimates are obtained without any assumptions on the gradient of the solution u. The results extend to weak solutions that are obtained as pointwise limits of vanishing viscosity solutions. As a byproduct we obtain explicit expressions for the entropy function and the entropy flux of symmetrizable systems of conservation laws. Under certain circumstances the proposed technique can be applied repeatedly so as to yield estimates in the maximum norm.

  6. Temporal variability patterns in solar radiation estimations

    NASA Astrophysics Data System (ADS)

    Vindel, José M.; Navarro, Ana A.; Valenzuela, Rita X.; Zarzalejo, Luis F.

    2016-06-01

    In this work, solar radiation estimations obtained from a satellite and from a numerical weather prediction model over mainland Spain have been compared. Similar comparisons have been carried out before, but in this case the methodology is different: the temporal variability of both estimation sources has been compared with the annual evolution of the radiation associated with the different climate zones studied. The methodology is based on obtaining behavior patterns, using a Principal Component Analysis, that follow the annual evolution of the solar radiation estimations. Indeed, the degree of adjustment to these patterns at each point (assessed from correlation maps) may be associated with the annual radiation variation (assessed from the interquartile range), which is associated, in turn, with the different climate zones. In addition, the goodness of each estimation source has been assessed by comparing it with ground radiation measurements from pyranometers. For the study, radiation data from the Satellite Application Facilities and data from the reanalysis carried out by the European Centre for Medium-Range Weather Forecasts have been used.

  7. Immediate estimation of correlation energy for molecular systems from the partial charges on atoms in the molecule

    NASA Astrophysics Data System (ADS)

    Kristyán, Sándor

    1997-11-01

    In the author's previous work (Chem. Phys. Lett. 247 (1995) 101 and Chem. Phys. Lett. 256 (1996) 229) a simple quasi-linear relationship was introduced between the number of electrons, N, participating in any molecular system and the correlation energy: -0.035(N - 1) > Ecorr [hartree] > -0.045(N - 1). This relationship was developed to estimate the correlation energy more accurately and immediately in ab initio calculations by using the partial charges of atoms in the molecule, which are easily obtained after Hartree-Fock self-consistent field (HF-SCF) calculations. The method is compared to the well-known B3LYP, MP2, CCSD and G2M methods. Correlation energy estimations for negatively (-1) charged atomic ions are also reported.
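
    The quasi-linear bound itself is trivially computable; a sketch of the bracket it places on the correlation energy (the refinement via partial charges from the HF-SCF step is not reproduced here):

        def ecorr_bounds(n_electrons):
            """Bracket on the correlation energy (hartree) from the cited
            quasi-linear relationship -0.035(N-1) > Ecorr > -0.045(N-1)."""
            upper = -0.035 * (n_electrons - 1)   # less negative bound
            lower = -0.045 * (n_electrons - 1)   # more negative bound
            return lower, upper

        # Example: a 10-electron molecule gives (-0.405, -0.315) hartree.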

  8. Three-dimensional modeling, estimation, and fault diagnosis of spacecraft air contaminants.

    PubMed

    Narayan, A P; Ramirez, W F

    1998-01-01

    A description is given of the design and implementation of a method to track the presence of air contaminants aboard a spacecraft using an accurate physical model and of a procedure that would raise alarms when certain tolerance levels are exceeded. Because our objective is to monitor the contaminants in real time, we make use of a state estimation procedure that filters measurements from a sensor system and arrives at an optimal estimate of the state of the system. The model essentially consists of a convection-diffusion equation in three dimensions, solved implicitly using the principle of operator splitting, and uses a flowfield obtained by the solution of the Navier-Stokes equations for the cabin geometry, assuming steady-state conditions. A novel implicit Kalman filter has been used for fault detection, a procedure that is an efficient way to track the state of the system and that uses the sparse nature of the state transition matrices.

  9. Covariance Matrix Estimation for Massive MIMO

    NASA Astrophysics Data System (ADS)

    Upadhya, Karthik; Vorobyov, Sergiy A.

    2018-04-01

    We propose a novel pilot structure for covariance matrix estimation in massive multiple-input multiple-output (MIMO) systems in which each user transmits two pilot sequences, with the second pilot sequence multiplied by a random phase-shift. The covariance matrix of a particular user is obtained by computing the sample cross-correlation of the channel estimates obtained from the two pilot sequences. This approach relaxes the requirement that all the users transmit their uplink pilots over the same set of symbols. We derive expressions for the achievable rate and the mean-squared error of the covariance matrix estimate when the proposed method is used with staggered pilots. The performance of the proposed method is compared with existing methods through simulations.
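
    The core cross-correlation step can be sketched in numpy. The array shapes, and the assumption that the random phase shift has already been compensated in the second set of channel estimates, are illustrative simplifications of the paper's scheme.

        import numpy as np

        def covariance_from_dual_pilots(H1, H2):
            """H1, H2: (snapshots, M) channel estimates for one user from
            the two pilot sequences. Because the second pilot carries an
            independent random phase, contamination terms decorrelate
            between the two estimates, and the sample cross-correlation
            approaches the true M x M channel covariance."""
            n = H1.shape[0]
            R = H1.conj().T @ H2 / n
            return 0.5 * (R + R.conj().T)   # enforce Hermitian symmetry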

  10. Consistent prediction of GO protein localization.

    PubMed

    Spetale, Flavio E; Arce, Debora; Krsticevic, Flavia; Bulacio, Pilar; Tapia, Elizabeth

    2018-05-17

    The GO-Cellular Component (GO-CC) ontology provides a controlled vocabulary for the consistent description of the subcellular compartments or macromolecular complexes where proteins may act. Current machine learning-based methods used for the automated GO-CC annotation of proteins suffer from the inconsistency of individual GO-CC term predictions. Here, we present FGGA-CC+, a class of hierarchical graph-based classifiers for the consistent GO-CC annotation of protein-coding genes at the subcellular compartment or macromolecular complex levels. Aiming to boost the accuracy of GO-CC predictions, we make use of the protein localization knowledge encoded in GO-Biological Process (GO-BP) annotations. As a result, FGGA-CC+ classifiers are built from annotation data in both the GO-CC and GO-BP ontologies. Due to their graph-based design, FGGA-CC+ classifiers are fully interpretable, and their predictions are amenable to expert analysis. Promising results were obtained on protein annotation data from five model organisms. Additionally, successful validation results were accomplished in the annotation of a challenging subset of tandem duplicated genes in the non-model organism tomato. Overall, these results suggest that FGGA-CC+ classifiers can indeed be useful for satisfying the huge demand for GO-CC annotation arising from ubiquitous high-throughput sequencing and proteomic projects.

  11. Reconciling medical expenditure estimates from the MEPS and NHEA, 2007.

    PubMed

    Bernard, Didem; Cowan, Cathy; Selden, Thomas; Cai, Liming; Catlin, Aaron; Heffler, Stephen

    2012-01-01

    Provide a comparison of health care expenditure estimates for 2007 from the Medical Expenditure Panel Survey (MEPS) and the National Health Expenditure Accounts (NHEA). Reconciling these estimates serves two important purposes. First, it is an important quality assurance exercise for improving and ensuring the integrity of each source's estimates. Second, the reconciliation provides a consistent baseline of health expenditure data for policy simulations. Our results assist researchers in adjusting MEPS to be consistent with the NHEA so that the projected costs as well as the budgetary and tax implications of any policy change are consistent with national health spending estimates. The data sources are the Medical Expenditure Panel Survey, produced by the Agency for Healthcare Research and Quality and the National Center for Health Statistics, and the National Health Expenditure Accounts, produced by the Centers for Medicare & Medicaid Services' Office of the Actuary. In this study, we focus on the personal health care (PHC) sector, which includes the goods and services rendered to treat or prevent a specific disease or condition in an individual. The official 2007 NHEA estimate for PHC spending is $1,915 billion and the MEPS estimate is $1,126 billion. Adjusting the NHEA estimates for differences in underlying populations, covered services, and other measurement concepts reduces the NHEA estimate for 2007 to $1,366 billion. As a result, MEPS is $240 billion, or 17.6 percent, less than the adjusted NHEA total.

  12. Novel angle estimation for bistatic MIMO radar using an improved MUSIC

    NASA Astrophysics Data System (ADS)

    Li, Jianfeng; Zhang, Xiaofei; Chen, Han

    2014-09-01

    In this article, we study the problem of angle estimation for bistatic multiple-input multiple-output (MIMO) radar and propose an improved multiple signal classification (MUSIC) algorithm for joint direction of departure (DOD) and direction of arrival (DOA) estimation. The proposed algorithm obtains initial angle estimates from the signal subspace and uses local one-dimensional peak searches to achieve the joint estimation of DOD and DOA. The angle estimation performance of the proposed algorithm is better than that of the estimation of signal parameters via rotational invariance techniques (ESPRIT) algorithm, and is almost the same as that of two-dimensional MUSIC. Furthermore, the proposed algorithm is suitable for irregular array geometries, obtains automatically paired DOD and DOA estimates, and avoids a two-dimensional peak search. The simulation results verify the effectiveness and improvement of the algorithm.
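
    For reference, the standard one-dimensional MUSIC pseudo-spectrum that underlies such search-based estimators can be sketched as follows. A uniform half-wavelength linear array is assumed; the paper's improvement lies in how the initial estimates and local searches are organized, which is not reproduced here.

        import numpy as np

        def music_spectrum(X, n_sources, scan_deg=np.linspace(-90, 90, 361)):
            """X: (M, snapshots) array snapshots. Returns the MUSIC
            pseudo-spectrum over the scan angles for an M-element ULA."""
            M, n = X.shape
            R = X @ X.conj().T / n                      # sample covariance
            _, V = np.linalg.eigh(R)                    # ascending eigenvalues
            En = V[:, :M - n_sources]                   # noise subspace
            th = np.deg2rad(scan_deg)
            A = np.exp(-1j * np.pi * np.outer(np.arange(M), np.sin(th)))
            denom = np.sum(np.abs(En.conj().T @ A) ** 2, axis=0)
            return scan_deg, 1.0 / denom                # peaks at source angles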

  13. Consistency of the tachyon warm inflationary universe models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Xiao-Min; Zhu, Jian-Yang, E-mail: zhangxm@mail.bnu.edu.cn, E-mail: zhujy@bnu.edu.cn

    2014-02-01

    This study concerns the consistency of the tachyon warm inflationary models. A linear stability analysis is performed to find the slow-roll conditions, characterized by the potential slow-roll (PSR) parameters, for the existence of a tachyon warm inflationary attractor in the system. The PSR parameters in the tachyon warm inflationary models are redefined. Two cases, an exponential potential and an inverse power-law potential, are studied, with the dissipative coefficient Γ = Γ_0 and Γ = Γ(φ), respectively. A crucial condition is obtained for a tachyon warm inflationary model characterized by the Hubble slow-roll (HSR) parameter ε_H, and the condition is extendable to some other inflationary models as well. A proper number of e-folds is obtained in both cases of the tachyon warm inflation, in contrast to existing works. It is also found that a constant dissipative coefficient (Γ = Γ_0) is usually not a suitable assumption for a warm inflationary model.

  14. Calculating weighted estimates of peak streamflow statistics

    USGS Publications Warehouse

    Cohn, Timothy A.; Berenbrock, Charles; Kiang, Julie E.; Mason, Jr., Robert R.

    2012-01-01

    According to the Federal guidelines for flood-frequency estimation, the uncertainty of peak streamflow statistics, such as the 1-percent annual exceedance probability (AEP) flow at a streamgage, can be reduced by combining the at-site estimate with the regional regression estimate to obtain a weighted estimate of the flow statistic. The procedure assumes the estimates are independent, which is reasonable in most practical situations. The purpose of this publication is to describe and make available a method for calculating a weighted estimate from the uncertainty or variance of the two independent estimates.
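
    The weighting itself is the standard inverse-variance combination of two independent estimates; a minimal sketch follows (variable names are illustrative, and the publication should be consulted for how the two variances are computed):

        def weighted_flow_estimate(x_site, var_site, x_reg, var_reg):
            """Variance-weighted average of independent at-site and
            regional-regression estimates of a flow statistic."""
            w = var_reg / (var_site + var_reg)          # weight on at-site value
            x_weighted = w * x_site + (1.0 - w) * x_reg
            var_weighted = var_site * var_reg / (var_site + var_reg)
            return x_weighted, var_weighted

    The combined variance is always smaller than either input variance, which is the uncertainty reduction the guidelines exploit.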

  15. Probability Distribution Extraction from TEC Estimates based on Kernel Density Estimation

    NASA Astrophysics Data System (ADS)

    Demir, Uygar; Toker, Cenk; Çenet, Duygu

    2016-07-01

    Statistical analysis of the ionosphere, specifically the Total Electron Content (TEC), may reveal important information about its temporal and spatial characteristics. One of the core metrics that express the statistical properties of a stochastic process is its Probability Density Function (pdf). Furthermore, statistical parameters such as mean, variance and kurtosis, which can be derived from the pdf, may provide information about the spatial uniformity or clustering of the electron content. For example, the variance differentiates between a quiet ionosphere and a disturbed one, whereas kurtosis differentiates between a geomagnetic storm and an earthquake. Therefore, valuable information about the state of the ionosphere (and the natural phenomena that cause the disturbance) can be obtained by looking at the statistical parameters. In the literature, there are publications which try to fit the histogram of TEC estimates to some well-known pdfs such as the Gaussian, exponential, etc. However, constraining a histogram to fit a function with a fixed shape will increase the estimation error, and all the information extracted from such a pdf will continue to contain this error. With such techniques, it is highly likely that artificial characteristics not present in the original data will appear in the estimated pdf. In the present study, we use the Kernel Density Estimation (KDE) technique to estimate the pdf of the TEC. KDE is a non-parametric approach which does not impose a specific form on the TEC. As a result, better pdf estimates that almost perfectly fit the observed TEC values can be obtained compared to the techniques mentioned above. KDE is particularly good at representing tail probabilities and outliers. We also calculate the mean, variance and kurtosis of the measured TEC values. The technique is applied to the ionosphere over Turkey, where the TEC values are estimated from GNSS measurements from the TNPGN-Active (Turkish National Permanent GNSS Network).
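
    A minimal version of the KDE step is straightforward with scipy (the bandwidth rule and grid resolution are assumptions; the study's exact configuration may differ):

        import numpy as np
        from scipy.stats import gaussian_kde, kurtosis

        def tec_pdf(tec, n_grid=512):
            """Non-parametric pdf of TEC estimates, plus the moments used
            to characterize the ionospheric state."""
            kde = gaussian_kde(tec)                     # Scott's-rule bandwidth
            grid = np.linspace(tec.min(), tec.max(), n_grid)
            return grid, kde(grid), (tec.mean(), tec.var(), kurtosis(tec))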

  16. Material Total Mass Loss in Vacuum Obtained From Various Outgassing Systems

    NASA Technical Reports Server (NTRS)

    Scialdone, John; Isaac, Peggy; Clatterbuck, Carroll; Hunkeler, Ronald

    2000-01-01

    Several instruments including the Cahn Microbalance, the Knudsen Cell, the micro-CVCM, and the vacuum Thermogravimetric Analyzer (TGA) were used in the testing of a graphite/epoxy (GR/EP) composite that is proposed for use as a rigidizing element of an inflatable deployment system. This GR/EP will be cured in situ. The purpose of this testing is to estimate the gaseous production resulting from the curing of the GR/EP composite, to predict the resulting pressure, and to calculate the required venting. Every test was conducted under vacuum at 125 degrees C for 24 hours. Upon comparison of the results, the ASTM E-595 was noted to have given readings that were consistently lower than those obtained using the other instruments, which otherwise provided similar results. The GR/EP was tested using several different geometric arrangements. This paper describes the analysis evaluating the molecular and continuum flow of the outgassing products issuing from the exit port of the ASTM E-595 system. The effective flow conductance provided by the physical dimensions of the vent passage of the ASTM E-595 system and that of the material sample among other factors were investigated to explain the reduced amount of outgassing released during the 24-hour test period.

  17. SURE Estimates for a Heteroscedastic Hierarchical Model

    PubMed Central

    Xie, Xianchao; Kou, S. C.; Brown, Lawrence D.

    2014-01-01

    Hierarchical models are extensively studied and widely used in statistics and many other scientific areas. They provide an effective tool for combining information from similar resources and achieving partial pooling of inference. Since the seminal work by James and Stein (1961) and Stein (1962), shrinkage estimation has become one major focus for hierarchical models. For the homoscedastic normal model, it is well known that shrinkage estimators, especially the James-Stein estimator, have good risk properties. The heteroscedastic model, though more appropriate for practical applications, is less well studied, and it is unclear what types of shrinkage estimators are superior in terms of the risk. We propose in this paper a class of shrinkage estimators based on Stein’s unbiased estimate of risk (SURE). We study asymptotic properties of various common estimators as the number of means to be estimated grows (p → ∞). We establish the asymptotic optimality property for the SURE estimators. We then extend our construction to create a class of semi-parametric shrinkage estimators and establish corresponding asymptotic optimality results. We emphasize that though the form of our SURE estimators is partially obtained through a normal model at the sampling level, their optimality properties do not heavily depend on such distributional assumptions. We apply the methods to two real data sets and obtain encouraging results. PMID:25301976
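
    A simplified instance of the SURE idea for the heteroscedastic model X_i ~ N(theta_i, A_i) with known variances: shrink toward the grand mean, with a common hyperparameter chosen by minimizing the unbiased risk estimate over a grid. This is a stripped-down sketch, not the full class of estimators studied in the paper (which also optimizes the shrinkage location and covers semi-parametric variants).

        import numpy as np

        def sure_shrink(x, a):
            """x: observed means, a: known variances A_i. Returns shrinkage
            estimates theta_hat_i = mu + lam/(lam + A_i) * (x_i - mu), with
            lam minimizing SURE(lam) over a grid."""
            mu = x.mean()
            def sure(lam):
                return np.mean((a / (lam + a)) ** 2 * (x - mu) ** 2
                               + a - 2.0 * a ** 2 / (lam + a))
            lam = min(np.logspace(-4, 4, 400), key=sure)
            return mu + lam / (lam + a) * (x - mu)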

  18. A novel SURE-based criterion for parametric PSF estimation.

    PubMed

    Xue, Feng; Blu, Thierry

    2015-02-01

    We propose an unbiased estimate of a filtered version of the mean squared error--the blur-SURE (Stein's unbiased risk estimate)--as a novel criterion for estimating an unknown point spread function (PSF) from the degraded image only. The PSF is obtained by minimizing this new objective functional over a family of Wiener processings. Based on this estimated blur kernel, we then perform nonblind deconvolution using our recently developed algorithm. The SURE-based framework is exemplified with a number of parametric PSFs, involving a scaling factor that controls the blur size. A typical example of such a parametrization is the Gaussian kernel. The experimental results demonstrate that minimizing the blur-SURE yields highly accurate estimates of the PSF parameters, which also result in a restoration quality very similar to the one obtained with the exact PSF when plugged into our recent multi-Wiener SURE-LET deconvolution algorithm. The highly competitive results obtained highlight the great potential of developing more powerful blind deconvolution algorithms based on SURE-like estimates.

  19. Optimization of Photospheric Electric Field Estimates for Accurate Retrieval of Total Magnetic Energy Injection

    NASA Astrophysics Data System (ADS)

    Lumme, E.; Pomoell, J.; Kilpua, E. K. J.

    2017-12-01

    Estimates of the photospheric magnetic, electric, and plasma velocity fields are essential for studying the dynamics of the solar atmosphere, for example through the derivative quantities of Poynting and relative helicity flux and using the fields to obtain the lower boundary condition for data-driven coronal simulations. In this paper we study the performance of a data processing and electric field inversion approach that requires only high-resolution and high-cadence line-of-sight or vector magnetograms, which we obtain from the Helioseismic and Magnetic Imager (HMI) onboard Solar Dynamics Observatory (SDO). The approach does not require any photospheric velocity estimates, and the lacking velocity information is compensated for using ad hoc assumptions. We show that the free parameters of these assumptions can be optimized to reproduce the time evolution of the total magnetic energy injection through the photosphere in NOAA AR 11158, when compared to recent state-of-the-art estimates for this active region. However, we find that the relative magnetic helicity injection is reproduced poorly, reaching at best a modest underestimation. We also discuss the effect of some of the data processing details on the results, including the masking of the noise-dominated pixels and the tracking method of the active region, neither of which has received much attention in the literature so far. In most cases the effect of these details is small, but when the optimization of the free parameters of the ad hoc assumptions is considered, a consistent use of the noise mask is required. The results found in this paper imply that the data processing and electric field inversion approach that uses only the photospheric magnetic field information offers a flexible and straightforward way to obtain photospheric magnetic and electric field estimates suitable for practical applications such as coronal modeling studies.

  20. Estimating corresponding locations in ipsilateral breast tomosynthesis views

    NASA Astrophysics Data System (ADS)

    van Schie, Guido; Tanner, Christine; Karssemeijer, Nico

    2011-03-01

    To improve cancer detection in mammography, breast exams usually consist of two views per breast. To combine information from both views, radiologists and multiview computer-aided detection (CAD) systems need to match corresponding regions in the two views. In digital breast tomosynthesis (DBT), finding corresponding regions in ipsilateral volumes may be a difficult and time-consuming task for radiologists, because many slices have to be inspected individually. In this study we developed a method to quickly estimate corresponding locations in ipsilateral tomosynthesis views by applying a mathematical transformation. First a compressed breast model is matched to the tomosynthesis view containing a point of interest. Then we decompress, rotate and compress again to estimate the location of the corresponding point in the ipsilateral view. In this study we use a simple elastically deformable sphere model to obtain an analytical solution for the transformation in a given DBT case. The model is matched to the volume by using automatic segmentation of the pectoral muscle, breast tissue and nipple. For validation we annotated 181 landmarks in both views and applied our method to each location. Results show a median 3D distance between the actual location and estimated location of 1.5 cm; a good starting point for a feature based local search method to link lesions for a multiview CAD system. Half of the estimated locations were at most 1 slice away from the actual location, making our method useful as a tool in mammographic workstations to interactively find corresponding locations in ipsilateral tomosynthesis views.

  1. Self-consistent electrostatic potential due to trapped plasma in the magnetosphere

    NASA Technical Reports Server (NTRS)

    Miller, Ronald H.; Khazanov, George V.

    1993-01-01

    A steady state solution for the self-consistent electrostatic potential due to a plasma confined in a magnetic flux tube is considered. A steady state distribution function is constructed for the trapped particles from the constants of the motion, in the absence of waves and collisions. Using Liouville's theorem, the particle density along the geomagnetic field is determined and found to depend on the local magnetic field, self-consistent electric potential, and the equatorial plasma distribution function. A hot anisotropic magnetospheric plasma in steady state is modeled by a bi-Maxwellian at the equator. The self-consistent electric potential along the magnetic field is calculated assuming quasineutrality, and the potential drop is found to be approximately equal to the average kinetic energy of the equatorially trapped plasma. The potential is compared with that obtained by Alfven and Faelthammar (1963).

  2. Estimating chronic hepatitis C prognosis using transient elastography-based liver stiffness: A systematic review and meta-analysis.

    PubMed

    Erman, A; Sathya, A; Nam, A; Bielecki, J M; Feld, J J; Thein, H-H; Wong, W W L; Grootendorst, P; Krahn, M D

    2018-05-01

    Chronic hepatitis C (CHC) is a leading cause of hepatic fibrosis and cirrhosis. The level of fibrosis is traditionally established by histology, and prognosis is estimated using fibrosis progression rates (FPRs; the annual probability of progressing across histological stages). However, newer noninvasive alternatives are quickly replacing biopsy. One alternative, transient elastography (TE), quantifies fibrosis by measuring liver stiffness (LSM). Given these developments, the purpose of this study was (i) to estimate prognosis in treatment-naïve CHC patients using TE-based liver stiffness progression rates (LSPRs) as an alternative to FPRs and (ii) to compare consistency between LSPRs and FPRs. A systematic literature search was performed using multiple databases (January 1990 to February 2016). LSPRs were calculated using either a direct method (given the difference in serial LSMs and the time elapsed) or an indirect method (given a single LSM and the estimated duration of infection) and pooled using random-effects meta-analyses. For validation purposes, FPRs were also estimated. Heterogeneity was explored by random-effects meta-regression. Twenty-seven studies reporting on 39 groups of patients (N = 5874) were identified, with 35 groups allowing for indirect and 8 for direct estimation of the LSPR. The majority (~58%) of patients were HIV/HCV-coinfected. The estimated time-to-cirrhosis based on TE vs biopsy was 39 and 38 years, respectively. In univariate meta-regressions, male sex and HIV were positively, and age at assessment negatively, associated with LSPRs. Noninvasive prognosis of HCV is consistent with FPRs in predicting time-to-cirrhosis, but more longitudinal studies of liver stiffness are needed to obtain refined estimates. © 2017 John Wiley & Sons Ltd.
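
    The two LSPR constructions reduce to simple rates; a sketch with an assumed healthy-liver baseline stiffness (the 5 kPa figure below is a placeholder, not a value taken from the paper):

        def lspr_direct(lsm_t1, lsm_t2, years_between):
            """Direct LSPR (kPa/year) from two serial liver stiffness
            measurements on the same patient."""
            return (lsm_t2 - lsm_t1) / years_between

        def lspr_indirect(lsm, years_infected, baseline=5.0):
            """Indirect LSPR from a single LSM and the estimated duration
            of infection, relative to an assumed healthy baseline."""
            return (lsm - baseline) / years_infected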

  3. A channel estimation scheme for MIMO-OFDM systems

    NASA Astrophysics Data System (ADS)

    He, Chunlong; Tian, Chu; Li, Xingquan; Zhang, Ce; Zhang, Shiqi; Liu, Chaowen

    2017-08-01

    In view of the trade-off between the performance of time-domain least squares (LS) channel estimation and its practical implementation complexity, a reduced-complexity pilot-based channel estimation method for multiple input multiple output-orthogonal frequency division multiplexing (MIMO-OFDM) is obtained. This approach transforms the MIMO-OFDM channel estimation problem into a simple single input single output-orthogonal frequency division multiplexing (SISO-OFDM) channel estimation problem, so no large matrix pseudo-inverse is needed, which greatly reduces the complexity of the algorithms. Simulation results show that the bit error rate (BER) performance of the obtained method with time-orthogonal training sequences and the linear minimum mean square error (LMMSE) criterion is better than that of the time-domain LS estimator and achieves nearly optimal performance.
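
    With time-orthogonal training sequences, each transmit antenna's pilot occupies its own slot, so the per-subcarrier estimate decouples into scalar divisions; a sketch of that decoupled LS step (the array shapes are assumptions):

        import numpy as np

        def ls_channel_estimate(Y, X):
            """Y: (n_rx, n_tx, n_sc) pilot observations, where slot t holds
            only antenna t's pilot; X: (n_tx, n_sc) known pilot symbols.
            Returns H_hat[r, t, k] = Y[r, t, k] / X[t, k] -- per-subcarrier
            SISO LS estimates with no matrix pseudo-inverse."""
            return Y / X[np.newaxis, :, :]

    A smoother based on the LMMSE criterion can then be applied per receive-transmit pair across subcarriers.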

  4. Evaluating the consistency of gene sets used in the analysis of bacterial gene expression data.

    PubMed

    Tintle, Nathan L; Sitarik, Alexandra; Boerema, Benjamin; Young, Kylie; Best, Aaron A; Dejongh, Matthew

    2012-08-08

    Statistical analyses of whole genome expression data require functional information about genes in order to yield meaningful biological conclusions. The Gene Ontology (GO) and Kyoto Encyclopedia of Genes and Genomes (KEGG) are common sources of functionally grouped gene sets. For bacteria, the SEED and MicrobesOnline provide alternative, complementary sources of gene sets. To date, no comprehensive evaluation of the data obtained from these resources has been performed. We define a series of gene set consistency metrics directly related to the most common classes of statistical analyses for gene expression data, and then perform a comprehensive analysis of 3581 Affymetrix® gene expression arrays across 17 diverse bacteria. We find that gene sets obtained from GO and KEGG demonstrate lower consistency than those obtained from the SEED and MicrobesOnline, regardless of gene set size. Despite the widespread use of GO and KEGG gene sets in bacterial gene expression data analysis, the SEED and MicrobesOnline provide more consistent sets for a wide variety of statistical analyses. Increased use of the SEED and MicrobesOnline gene sets in the analysis of bacterial gene expression data may improve statistical power and utility of expression data.

  5. Estimation and mapping of uranium content of geological units in France.

    PubMed

    Ielsch, G; Cuney, M; Buscail, F; Rossi, F; Leon, A; Cushing, M E

    2017-01-01

    In France, natural radiation accounts for most of the population's exposure to ionizing radiation. The Institute for Radiological Protection and Nuclear Safety (IRSN) carries out studies to evaluate the variability of natural radioactivity over the French territory. In this framework, the present study evaluated uranium concentrations in bedrocks. The objective was to provide an estimate of the uranium content of each geological unit defined in the geological map of France (1:1,000,000). The methodology was based on the interpretation of existing geochemical data (results of whole-rock sample analyses) and knowledge of the petrology and lithology of the geological units, which allowed a first estimate of the uranium content of the rocks to be obtained. This first estimate was then improved using additional information. For example, some particular or regional sedimentary rocks that could present uranium contents higher than those generally observed for these lithologies were identified. Moreover, mining databases provided information on the location of uranium and coal/lignite mines and thus indicated the location of particular uranium-rich rocks. The geological units, defined by their boundaries extracted from the geological map of France (1:1,000,000), were finally classified into 5 categories based on their mean uranium content. The map obtained provides useful data for establishing the geogenic radon map of France, but also for mapping countrywide exposure to terrestrial radiation and for evaluating the background levels of natural radioactivity used in impact assessments of anthropogenic activities. Copyright © 2016 Elsevier Ltd. All rights reserved.

  6. Status and Opportunities for Improving the Consistency of Technical Reference Manuals

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jayaweera, Tina; Velonis, Aquila; Haeri, Hossein

    Across the United States, energy-efficiency program administrators rely on Technical Reference Manuals (TRMs) as sources for calculations and deemed savings values for specific, well-defined efficiency measures. TRMs play an important part in energy efficiency program planning by providing a common and consistent source for calculation of ex ante and often ex post savings. They thus help reduce energy-efficiency resource acquisition costs by obviating the need for extensive measurement and verification, and lower performance risk for program administrators and implementation contractors. This paper considers the benefits of establishing region-wide or national TRMs and the challenges of such an undertaking due to the difficulties in comparing energy savings across jurisdictions. We argue that greater consistency across TRMs in the approaches used to determine deemed savings values, with more transparency about assumptions, would allow better comparisons of savings estimates across jurisdictions as well as improve confidence in reported efficiency measure savings. To support this thesis, we review approaches for the calculation of savings for select measures in TRMs currently in use in 17 jurisdictions. The review reveals differences in the savings methodologies, technical assumptions, and input variables used for estimating deemed savings values. These differences are described and their implications summarized, using four common energy-efficiency measures as examples. Recommendations are then offered for establishing a uniform approach for determining deemed savings values.

  7. Evaluation of Bayesian Sequential Proportion Estimation Using Analyst Labels

    NASA Technical Reports Server (NTRS)

    Lennington, R. K.; Abotteen, K. M. (Principal Investigator)

    1980-01-01

    The author has identified the following significant results. A total of ten Large Area Crop Inventory Experiment Phase 3 blind sites and analyst-interpreter labels were used in a study to compare proportional estimates obtained by the Bayes sequential procedure with estimates obtained from simple random sampling and from Procedure 1. The analyst error rate using the Bayes technique was shown to be no greater than that for the simple random sampling. Also, the segment proportion estimates produced using this technique had smaller bias and mean squared errors than the estimates produced using either simple random sampling or Procedure 1.

  8. Accurate and consistent automatic seismocardiogram annotation without concurrent ECG.

    PubMed

    Laurin, A; Khosrow-Khavar, F; Blaber, A P; Tavakolian, Kouhyar

    2016-09-01

    Seismocardiography (SCG) is the measurement of vibrations in the sternum caused by the beating of the heart. Precise cardiac mechanical timings that are easily obtained from SCG are critically dependent on accurate identification of fiducial points. So far, SCG annotation has relied on concurrent ECG measurements. An algorithm capable of annotating SCG without the use of any other concurrent measurement was designed. We subjected 18 participants to graded lower body negative pressure. We collected ECG and SCG, obtained R peaks from the former, and annotated the latter by hand using these identified peaks. We also annotated the SCG automatically. We compared the isovolumic moment timings obtained by hand to those obtained using our algorithm. Mean ± confidence interval of the percentage of accurately annotated cardiac cycles were [Formula: see text], [Formula: see text], [Formula: see text], [Formula: see text], and [Formula: see text] for levels of negative pressure 0, -20, -30, -40, and -50 mmHg. LF/HF ratios, the relative power of low-frequency variations to high-frequency variations in heart beat intervals, obtained from isovolumic moments were also compared to those obtained from R peaks. The mean differences ± confidence interval were [Formula: see text], [Formula: see text], [Formula: see text], [Formula: see text], and [Formula: see text] for increasing levels of negative pressure. The accuracy and consistency of the algorithm enable the use of SCG as a stand-alone heart monitoring tool in healthy individuals at rest, and could serve as a basis for an eventual application in pathological cases.

  9. Robust fundamental frequency estimation in sustained vowels: Detailed algorithmic comparisons and information fusion with adaptive Kalman filtering

    PubMed Central

    Tsanas, Athanasios; Zañartu, Matías; Little, Max A.; Fox, Cynthia; Ramig, Lorraine O.; Clifford, Gari D.

    2014-01-01

    There has been consistent interest among speech signal processing researchers in the accurate estimation of the fundamental frequency (F0) of speech signals. This study examines ten F0 estimation algorithms (some well-established and some proposed more recently) to determine which of these algorithms is, on average, better able to estimate F0 in the sustained vowel /a/. Moreover, a robust method for adaptively weighting the estimates of individual F0 estimation algorithms based on quality and performance measures is proposed, using an adaptive Kalman filter (KF) framework. The accuracy of the algorithms is validated using (a) a database of 117 synthetic realistic phonations obtained using a sophisticated physiological model of speech production and (b) a database of 65 recordings of human phonations where the glottal cycles are calculated from electroglottograph signals. On average, the sawtooth waveform inspired pitch estimator and the nearly defect-free algorithms provided the best individual F0 estimates, and the proposed KF approach resulted in a ∼16% improvement in accuracy over the best single F0 estimation algorithm. These findings may be useful in speech signal processing applications where sustained vowels are used to assess vocal quality, when very accurate F0 estimation is required. PMID:24815269

  10. Learning to Obtain Reward, but Not Avoid Punishment, Is Affected by Presence of PTSD Symptoms in Male Veterans: Empirical Data and Computational Model

    PubMed Central

    Myers, Catherine E.; Moustafa, Ahmed A.; Sheynin, Jony; VanMeenen, Kirsten M.; Gilbertson, Mark W.; Orr, Scott P.; Beck, Kevin D.; Pang, Kevin C. H.; Servatius, Richard J.

    2013-01-01

    Post-traumatic stress disorder (PTSD) symptoms include behavioral avoidance which is acquired and tends to increase with time. This avoidance may represent a general learning bias; indeed, individuals with PTSD are often faster than controls on acquiring conditioned responses based on physiologically-aversive feedback. However, it is not clear whether this learning bias extends to cognitive feedback, or to learning from both reward and punishment. Here, male veterans with self-reported current, severe PTSD symptoms (PTSS group) or with few or no PTSD symptoms (control group) completed a probabilistic classification task that included both reward-based and punishment-based trials, where feedback could take the form of reward, punishment, or an ambiguous “no-feedback” outcome that could signal either successful avoidance of punishment or failure to obtain reward. The PTSS group outperformed the control group in total points obtained; the PTSS group specifically performed better than the control group on reward-based trials, with no difference on punishment-based trials. To better understand possible mechanisms underlying observed performance, we used a reinforcement learning model of the task, and applied maximum likelihood estimation techniques to derive estimated parameters describing individual participants’ behavior. Estimations of the reinforcement value of the no-feedback outcome were significantly greater in the control group than the PTSS group, suggesting that the control group was more likely to value this outcome as positively reinforcing (i.e., signaling successful avoidance of punishment). This is consistent with the control group’s generally poorer performance on reward trials, where reward feedback was to be obtained in preference to the no-feedback outcome. Differences in the interpretation of ambiguous feedback may contribute to the facilitated reinforcement learning often observed in PTSD patients, and may in turn provide new insight into

  11. Overcoming bias in estimating the volume-outcome relationship.

    PubMed

    Tsai, Alexander C; Votruba, Mark; Bridges, John F P; Cebul, Randall D

    2006-02-01

    To examine the effect of hospital volume on 30-day mortality for patients with congestive heart failure (CHF) using administrative and clinical data in conventional regression and instrumental variables (IV) estimation models. The primary data consisted of longitudinal information on comorbid conditions, vital signs, clinical status, and laboratory test results for 21,555 Medicare-insured patients aged 65 years and older hospitalized for CHF in northeast Ohio in 1991-1997. The patient was the primary unit of analysis. We fit a linear probability model to the data to assess the effects of hospital volume on patient mortality within 30 days of admission. Both administrative and clinical data elements were included for risk adjustment. Linear distances between patients and hospitals were used to construct the instrument, which was then used to assess the endogeneity of hospital volume. When only administrative data elements were included in the risk adjustment model, the estimated volume-outcome effect was statistically significant (p=.029) but small in magnitude. The estimate was markedly attenuated in magnitude and statistical significance when clinical data were added to the model as risk adjusters (p=.39). IV estimation shifted the estimate in a direction consistent with selective referral, but we were unable to reject the consistency of the linear probability estimates. Use of only administrative data for volume-outcomes research may generate spurious findings. The IV analysis further suggests that conventional estimates of the volume-outcome relationship may be contaminated by selective referral effects. Taken together, our results suggest that efforts to concentrate hospital-based CHF care in high-volume hospitals may not reduce mortality among elderly patients.

  12. Parallel consistent labeling algorithms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Samal, A.; Henderson, T.

    Mackworth and Freuder have analyzed the time complexity of several constraint satisfaction algorithms. Mohr and Henderson have given new algorithms, AC-4 and PC-3, for arc and path consistency, respectively, and have shown that the arc consistency algorithm is optimal in time complexity and of the same order space complexity as the earlier algorithms. In this paper, they give parallel algorithms for solving node and arc consistency. They show that any parallel algorithm for enforcing arc consistency in the worst case must have O(na) sequential steps, where n is the number of nodes and a is the number of labels per node. They give several parallel algorithms to perform arc consistency, and it is shown that they all have optimal time complexity. The results of running the parallel algorithms on a BBN Butterfly multiprocessor are also presented.

  13. Estimate of precession and polar motion errors from planetary encounter station location solutions

    NASA Technical Reports Server (NTRS)

    Pease, G. E.

    1978-01-01

    Jet Propulsion Laboratory Deep Space Station (DSS) location solutions based on two JPL planetary ephemerides, DE 84 and DE 96, at eight planetary encounters were used to obtain weighted least squares estimates of precession and polar motion errors. The solution for the precession error in right ascension yields a value of 0.3 × 10⁻⁵ ± 0.8 × 10⁻⁶ deg/year. This maps to a right ascension error of 1.3 × 10⁻⁵ ± 0.4 × 10⁻⁵ deg at the first Voyager 1979 Jupiter encounter if the current JPL DSS location set is used. Solutions for precession and polar motion using station locations based on DE 84 agree well with the solution using station locations referenced to DE 96. The precession solution removes the apparent drift in station longitude and spin axis distance estimates, while the encounter polar motion solutions consistently decrease the scatter in station spin axis distance estimates.

  14. Estimating the hydraulic parameters of a confined aquifer based on the response of groundwater levels to seismic Rayleigh waves

    NASA Astrophysics Data System (ADS)

    Sun, Xiaolong; Xiang, Yang; Shi, Zheming

    2018-05-01

    Groundwater flow models implemented to manage regional water resources require aquifer hydraulic parameters. Traditional methods for obtaining these parameters include laboratory experiments, field tests and model inversions, each potentially hindered by its own limitations. Here, we propose a methodology for estimating hydraulic conductivity and storage coefficients using the spectral characteristics of coseismic groundwater-level oscillations and seismic Rayleigh waves. The results from Well X10, derived from the variations and spectral characteristics of the water-level oscillations and seismic waves, yield an estimated hydraulic conductivity of approximately 1 × 10⁻³ m s⁻¹ and a storativity of 15 × 10⁻⁶. The proposed methodology for estimating hydraulic parameters in confined aquifers is a practical and novel approach for groundwater management and seismic precursor anomaly analyses.

  15. Robust estimation for partially linear models with large-dimensional covariates.

    PubMed

    Zhu, LiPing; Li, RunZe; Cui, HengJian

    2013-10-01

    We are concerned with robust estimation procedures to estimate the parameters in partially linear models with large-dimensional covariates. To enhance interpretability, we suggest implementing a nonconcave regularization method in the robust estimation procedure to select important covariates from the linear component. We establish the consistency for both the linear and the nonlinear components when the covariate dimension diverges at the rate of [Formula: see text], where n is the sample size. We show that the robust estimate of the linear component performs asymptotically as well as its oracle counterpart, which assumes the baseline function and the unimportant covariates to be known a priori. With a consistent estimator of the linear component, we estimate the nonparametric component by a robust local linear regression. It is proved that the robust estimate of the nonlinear component performs asymptotically as well as if the linear component were known in advance. Comprehensive simulation studies are carried out, and an application is presented to examine the finite-sample performance of the proposed procedures.

  16. Set-base dynamical parameter estimation and model invalidation for biochemical reaction networks.

    PubMed

    Rumschinski, Philipp; Borchers, Steffen; Bosio, Sandro; Weismantel, Robert; Findeisen, Rolf

    2010-05-25

    Mathematical modeling and analysis have become, for the study of biological and cellular processes, an important complement to experimental research. However, the structural and quantitative knowledge available for such processes is frequently limited, and measurements are often subject to inherent and possibly large uncertainties. This results in competing model hypotheses, whose kinetic parameters may not be experimentally determinable. Discriminating among these alternatives and estimating their kinetic parameters is crucial to improve the understanding of the considered process, and to benefit from the analytical tools at hand. In this work we present a set-based framework that allows discrimination between competing model hypotheses and provides guaranteed outer estimates on the model parameters that are consistent with the (possibly sparse and uncertain) experimental measurements. This is obtained by means of exact proofs of model invalidity that exploit the polynomial/rational structure of biochemical reaction networks, and by making use of an efficient strategy to balance solution accuracy and computational effort. The practicability of our approach is illustrated with two case studies. The first study shows that our approach allows wrong model hypotheses to be conclusively ruled out. The second study focuses on parameter estimation, and shows that the proposed method allows to evaluate the global influence of measurement sparsity, uncertainty, and prior knowledge on the parameter estimates. This can help in designing further experiments leading to improved parameter estimates.

  17. Fast instantaneous center of rotation estimation algorithm for a skid-steered robot

    NASA Astrophysics Data System (ADS)

    Kniaz, V. V.

    2015-05-01

    Skid-steered robots are widely used as mobile platforms for machine vision systems. However, it is hard to achieve stable motion of such robots along a desired trajectory due to unpredictable wheel slip. It is possible to compensate for the unpredictable wheel slip and stabilize the motion of the robot using visual odometry. This paper presents a fast optical-flow-based algorithm for estimating the instantaneous center of rotation and the angular and longitudinal speed of the robot. The proposed algorithm is based on the Horn-Schunck variational optical flow estimation method. The instantaneous center of rotation and the motion of the robot are estimated by back-projecting the optical flow field onto the ground surface. The developed algorithm was tested using a skid-steered mobile robot. The robot is based on a mobile platform that includes two pairs of differentially driven motors and a motor controller. A monocular visual odometry system consisting of a single-board computer and a low-cost webcam is mounted on the mobile platform. A state-space model of the robot was derived using standard black-box system identification. The input (commands) and the output (motion) were recorded using a dedicated external motion capture system. The obtained model was used to control the robot without visual odometry data. The paper concludes with an assessment of the algorithm's quality, comparing the trajectories estimated by the algorithm with the data from the motion capture system.

  18. A hierarchical estimator development for estimation of tire-road friction coefficient.

    PubMed

    Zhang, Xudong; Göhlich, Dietmar

    2017-01-01

    The effect of vehicle active safety systems is subject to the friction force arising from the contact of tires and the road surface. Therefore, an adequate knowledge of the tire-road friction coefficient is of great importance to achieve a good performance of these control systems. This paper presents a tire-road friction coefficient estimation method for an advanced vehicle configuration, four-motorized-wheel electric vehicles, in which the longitudinal tire force is easily obtained. A hierarchical structure is adopted for the proposed estimation design. An upper estimator is developed based on unscented Kalman filter to estimate vehicle state information, while a hybrid estimation method is applied as the lower estimator to identify the tire-road friction coefficient using general regression neural network (GRNN) and Bayes' theorem. GRNN aims at detecting road friction coefficient under small excitations, which are the most common situations in daily driving. GRNN is able to accurately create a mapping from input parameters to the friction coefficient, avoiding storing an entire complex tire model. As for large excitations, the estimation algorithm is based on Bayes' theorem and a simplified "magic formula" tire model. The integrated estimation method is established by the combination of the above-mentioned estimators. Finally, the simulations based on a high-fidelity CarSim vehicle model are carried out on different road surfaces and driving maneuvers to verify the effectiveness of the proposed estimation method.
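
    The GRNN used for the small-excitation branch is essentially Nadaraya-Watson kernel regression; a sketch follows (the feature choice and bandwidth are assumptions):

        import numpy as np

        def grnn_predict(X_train, mu_train, x_query, sigma=0.1):
            """Smooth mapping from excitation features (e.g., slip and
            normalized tire force) to the friction coefficient, without
            storing an explicit tire model."""
            d2 = np.sum((X_train - x_query) ** 2, axis=1)
            w = np.exp(-d2 / (2.0 * sigma ** 2))
            return w @ mu_train / (w.sum() + 1e-12)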

  20. Testing the consistency of wildlife data types before combining them: the case of camera traps and telemetry.

    PubMed

    Popescu, Viorel D; Valpine, Perry; Sweitzer, Rick A

    2014-04-01

    Wildlife data gathered by different monitoring techniques are often combined to estimate animal density. However, methods to check whether different types of data provide consistent information (i.e., can information from one data type be used to predict responses in the other?) before combining them are lacking. We used generalized linear models and generalized linear mixed-effects models to relate camera trap probabilities for marked animals to independent space use from telemetry relocations using 2 years of data for fishers (Pekania pennanti) as a case study. We evaluated (1) camera trap efficacy by estimating how camera detection probabilities are related to nearby telemetry relocations and (2) whether home range utilization density estimated from telemetry data adequately predicts camera detection probabilities, which would indicate consistency of the two data types. The number of telemetry relocations within 250 and 500 m from camera traps predicted detection probability well. For the same number of relocations, females were more likely to be detected during the first year. During the second year, all fishers were more likely to be detected during the fall/winter season. Models predicting camera detection probability and photo counts solely from telemetry utilization density had the best or nearly best Akaike Information Criterion (AIC), suggesting that telemetry and camera traps provide consistent information on space use. Given the same utilization density, males were more likely to be photo-captured due to larger home ranges and higher movement rates. Although methods that combine data types (spatially explicit capture-recapture) make simple assumptions about home range shapes, it is reasonable to conclude that in our case, camera trap data do reflect space use in a manner consistent with telemetry data. However, differences between the 2 years of data suggest that camera efficacy is not fully consistent across ecological conditions and make the case

  1. Convergence Rate Analysis of Distributed Gossip (Linear Parameter) Estimation: Fundamental Limits and Tradeoffs

    NASA Astrophysics Data System (ADS)

    Kar, Soummya; Moura, José M. F.

    2011-08-01

    The paper considers gossip distributed estimation of a (static) distributed random field (i.e., a large-scale unknown parameter vector) observed by sparsely interconnected sensors, each of which only observes a small fraction of the field. We consider linear distributed estimators whose structure combines the information flow among sensors (the consensus term resulting from the local gossiping exchange among sensors when they are able to communicate) and the information gathering measured by the sensors (the sensing or innovations term). This leads to mixed time-scale algorithms: one time scale is associated with the consensus and the other with the innovations. The paper establishes a distributed observability condition (global observability plus mean connectedness) under which the distributed estimates are consistent and asymptotically normal. We introduce the distributed notion equivalent to the (centralized) Fisher information rate, which is a bound on the mean square error reduction rate of any distributed estimator; we show that under the appropriate modeling and structural network communication conditions (gossip protocol) the distributed gossip estimator attains this distributed Fisher information rate, asymptotically achieving the performance of the optimal centralized estimator. Finally, we study the behavior of the distributed gossip estimator when the measurements fade (the noise variance grows) with time; in particular, we determine the maximum rate at which the noise variance can grow while the distributed estimator remains consistent, showing that as long as the centralized estimator is consistent, the distributed estimator is consistent as well.
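
    The estimator structure described above is a consensus-plus-innovations recursion; one update step might be sketched as follows (the weight sequences and their decay exponents are illustrative choices consistent with mixed-time-scale analyses, not the paper's exact conditions):

        import numpy as np

        def consensus_innovations_step(x, z, H, adj, t, a0=1.0, b0=1.0):
            """x: (N, p) local estimates of the common parameter; z[i],
            H[i]: sensor i's measurement and observation matrix; adj:
            (N, N) 0/1 adjacency of the gossip graph at time t."""
            beta = b0 / (t + 1) ** 0.55      # consensus (information flow) weight
            alpha = a0 / (t + 1)             # innovations (sensing) weight
            x_new = np.empty_like(x)
            for i in range(x.shape[0]):
                nbrs = np.nonzero(adj[i])[0]
                consensus = (x[nbrs] - x[i]).sum(axis=0)
                innovation = H[i].T @ (z[i] - H[i] @ x[i])
                x_new[i] = x[i] + beta * consensus + alpha * innovation
            return x_new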

  2. Plasma parameters of the cathode spot explosive electron emission cell obtained from the model of liquid-metal jet tearing and electrical explosion

    NASA Astrophysics Data System (ADS)

    Tsventoukh, M. M.

    2018-05-01

    A model has been developed for the explosive electron emission cell pulse of a vacuum discharge cathode spot that describes the ignition and extinction of the explosive pulse. The pulse is initiated due to hydrodynamic tearing of a liquid-metal jet which propagates from the preceding cell crater boundary and draws the ion current from the plasma produced by the preceding explosion. Once the jet neck has been resistively heated to a critical temperature (˜1 eV), the plasma starts expanding and decreasing in density, which corresponds to the extinction phase. Numerical and analytical solutions have been obtained that describe both the time behavior of the pulse plasma parameters and their average values. For the cell plasma, the momentum per transferred charge has been estimated to be some tens of g cm/(s C), which is consistent with the known measurements of ion velocity, ion erosion rate, and specific recoil force. This supports the model of the pressure-gradient-driven plasma acceleration mechanism for the explosive cathode spot cells. The ohmic electric field within the explosive current-carrying plasma has been estimated to be some tens of kV/cm, which is consistent with the known experimental data on cathode potential fall and explosive cell plasma size. This supports the model that assumes the ohmic nature of the cathode potential fall in a vacuum discharge.

  3. Estimating Canopy Dark Respiration for Crop Models

    NASA Technical Reports Server (NTRS)

    Monje Mejia, Oscar Alberto

    2014-01-01

    Crop production is obtained from accurate estimates of daily carbon gain. Canopy gross photosynthesis (Pgross) can be estimated from biochemical models of photosynthesis using sun and shaded leaf portions and the amount of intercepted photosynthetically active radiation (PAR). In turn, canopy daily net carbon gain can be estimated from canopy daily gross photosynthesis when canopy dark respiration (Rd) is known.
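
    The bookkeeping implied here is a single subtraction once the two daily integrals are available; the magnitudes below are hypothetical, not from the report:

    ```python
    # Daily canopy carbon balance: net gain = gross photosynthesis - dark respiration
    p_gross_daily = 1.9   # canopy gross photosynthesis [mol CO2 m^-2 d^-1], assumed
    r_dark_daily = 0.6    # canopy dark respiration over the same day, assumed
    print(p_gross_daily - r_dark_daily)   # 1.3 mol CO2 m^-2 d^-1 net carbon gain
    ```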

  4. Calibrated Tully-fisher Relations For Improved Photometric Estimates Of Disk Rotation Velocities

    NASA Astrophysics Data System (ADS)

    Reyes, Reinabelle; Mandelbaum, R.; Gunn, J. E.; Pizagno, J.

    2011-01-01

    We present calibrated scaling relations (also referred to as Tully-Fisher relations or TFRs) between rotation velocity and photometric quantities -- absolute magnitude, stellar mass, and synthetic magnitude (a linear combination of absolute magnitude and color) -- of disk galaxies at z ∼ 0.1. First, we selected a parent disk sample of 170,000 galaxies from SDSS DR7, with redshifts between 0.02 and 0.10 and r-band absolute magnitudes between -18.0 and -22.5. Then, we constructed a child disk sample of 189 galaxies that span the parameter space -- in absolute magnitude, color, and disk size -- covered by the parent sample, and for which we have obtained kinematic data. Long-slit spectroscopy was obtained from the Dual Imaging Spectrograph (DIS) at the Apache Point Observatory 3.5 m telescope for 99 galaxies, and from Pizagno et al. (2007) for 95 galaxies (five have repeat observations). We find the best photometric estimator of disk rotation velocity to be a synthetic magnitude with a color correction that is consistent with the Bell et al. (2003) color-based stellar mass ratio. The improved rotation velocity estimates have a wide range of scientific applications, and in particular, in combination with weak lensing measurements, they enable us to constrain the ratio of optical-to-virial velocity in disk galaxies.

  5. Application of a tri-axial accelerometer to estimate jump frequency in volleyball.

    PubMed

    Jarning, Jon M; Mok, Kam-Ming; Hansen, Bjørge H; Bahr, Roald

    2015-03-01

    Patellar tendinopathy is prevalent among athletes, and most likely associated with a high jumping load. If methods for estimating jump frequency were available, this could potentially assist in understanding and preventing this condition. The objective of this study was to explore the possibility of using peak vertical acceleration (PVA) or peak resultant acceleration (PRA) measured by an accelerometer to estimate jump frequency. Twelve male elite volleyball players (22.5 ± 1.6 yrs) performed a training protocol consisting of seven typical motion patterns, including jumping and non-jumping movements. Accelerometer data from the trial were obtained using a tri-axial accelerometer. In addition, we collected video data from the trial. Jump-float serving and spike jumping could not be distinguished from non-jumping movements using differences in PVA or PRA. Furthermore, there were substantial inter-participant differences in both the PVA and the PRA within and across movement types (p < 0.05). These findings suggest that neither PVA nor PRA measured by a tri-axial accelerometer is an applicable method for estimating jump frequency in volleyball. A method for acquiring real-time estimates of jump frequency remains to be verified. However, there are several alternative approaches, and further investigations are needed.

  6. Variance Difference between Maximum Likelihood Estimation Method and Expected A Posteriori Estimation Method Viewed from Number of Test Items

    ERIC Educational Resources Information Center

    Mahmud, Jumailiyah; Sutikno, Muzayanah; Naga, Dali S.

    2016-01-01

    The aim of this study is to determine the variance difference between the maximum likelihood and expected a posteriori estimation methods viewed from the number of test items of an aptitude test. The variance reflects the accuracy achieved by both the maximum likelihood and Bayesian estimation methods. The test consists of three subtests, each with 40 multiple-choice…

  7. Performance Analysis of Blind Subspace-Based Signature Estimation Algorithms for DS-CDMA Systems with Unknown Correlated Noise

    NASA Astrophysics Data System (ADS)

    Zarifi, Keyvan; Gershman, Alex B.

    2006-12-01

    We analyze the performance of two popular blind subspace-based signature waveform estimation techniques proposed by Wang and Poor and by Buzzi and Poor for direct-sequence code division multiple-access (DS-CDMA) systems with unknown correlated noise. Using first-order perturbation theory, analytical expressions for the mean-square error (MSE) of these algorithms are derived. We also obtain simple high-SNR approximations of the MSE expressions which explicitly clarify how the performance of these techniques depends on the environmental parameters and how it is related to that of the conventional techniques that are based on the standard white noise assumption. Numerical examples further verify the consistency of the obtained analytical results with simulation results.

  8. Online estimation of room reverberation time

    NASA Astrophysics Data System (ADS)

    Ratnam, Rama; Jones, Douglas L.; Wheeler, Bruce C.; Feng, Albert S.

    2003-04-01

    The reverberation time (RT) is an important parameter for characterizing the quality of an auditory space. Sounds in reverberant environments are subject to coloration. This affects speech intelligibility and sound localization. State-of-the-art signal processing algorithms for hearing aids are expected to have the ability to evaluate the characteristics of the listening environment and turn on an appropriate processing strategy accordingly. Thus, a method for the characterization of room RT based on passively received microphone signals represents an important enabling technology. Current RT estimators, such as Schroeder's method or regression, depend on a controlled sound source, and thus cannot produce an online, blind RT estimate. Here, we describe a method for estimating RT without prior knowledge of sound sources or room geometry. The diffusive tail of reverberation was modeled as an exponentially damped Gaussian white noise process. The time constant of the decay, which provided a measure of the RT, was estimated using a maximum-likelihood procedure. The estimates were obtained continuously, and an order-statistics filter was used to extract the most likely RT from the accumulated estimates. The procedure was illustrated for connected speech. Results obtained for simulated and real room data are in good agreement with the real RT values.
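
    A compact way to see the estimator is to model a free-decay segment as y(t) = e^(-t/τ) w(t), with w(t) Gaussian white noise, profile out the noise power, and maximize the likelihood over the decay constant. The sketch below is a simplified offline version of that idea; the sampling rate, segment length, and search grid are assumptions:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    fs = 8000.0
    t = np.arange(4000) / fs                      # 0.5 s free-decay segment
    tau_true = 0.12                               # amplitude time constant [s]
    y = np.exp(-t / tau_true) * rng.standard_normal(t.size)

    def profile_loglik(tau):
        # ML noise power for fixed tau, then the profiled log-likelihood
        s2 = np.mean(y**2 * np.exp(2 * t / tau))
        return -0.5 * (t.size * (np.log(2 * np.pi * s2) + 1) - (2 / tau) * t.sum())

    taus = np.linspace(0.02, 0.5, 500)
    tau_hat = taus[np.argmax([profile_loglik(tau) for tau in taus])]
    rt60 = 3 * np.log(10) * tau_hat               # 60 dB decay = 10^-3 in amplitude
    print(f"tau = {tau_hat:.3f} s, RT60 = {rt60:.2f} s")
    ```

    In the online setting of the paper, such estimates are formed continuously and an order-statistics filter picks the most likely RT from the accumulated values.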

  9. Hybrid active contour model for inhomogeneous image segmentation with background estimation

    NASA Astrophysics Data System (ADS)

    Sun, Kaiqiong; Li, Yaqin; Zeng, Shan; Wang, Jun

    2018-03-01

    This paper proposes a hybrid active contour model for inhomogeneous image segmentation. The data term of the energy function in the active contour consists of a global region fitting term in a difference image and a local region fitting term in the original image. The difference image is obtained by subtracting the background from the original image. The background image is dynamically estimated from a linear filtered result of the original image on the basis of the varying curve locations during the active contour evolution process. As in existing local models, fitting the image to local region information makes the proposed model robust against an inhomogeneous background and maintains the accuracy of the segmentation result. Furthermore, fitting the difference image to the global region information makes the proposed model robust against the initial contour location, unlike existing local models. Experimental results show that the proposed model can obtain improved segmentation results compared with related methods in terms of both segmentation accuracy and initial contour sensitivity.

  10. Full-data Results of Hubble Frontier Fields: UV Luminosity Functions at z ∼ 6–10 and a Consistent Picture of Cosmic Reionization

    NASA Astrophysics Data System (ADS)

    Ishigaki, Masafumi; Kawamata, Ryota; Ouchi, Masami; Oguri, Masamune; Shimasaku, Kazuhiro; Ono, Yoshiaki

    2018-02-01

    We present UV luminosity functions of dropout galaxies at z ∼ 6-10 with the complete Hubble Frontier Fields data. We obtain a catalog of ∼450 dropout-galaxy candidates (350, 66, and 40 at z ∼ 6-7, 8, and 9, respectively), with UV absolute magnitudes that reach ∼ -14 mag, ∼2 mag deeper than the Hubble Ultra Deep Field detection limits. We carefully evaluate number densities of the dropout galaxies by Monte Carlo simulations, including all lensing effects such as magnification, distortion, and multiplication of images as well as detection completeness and contamination effects in a self-consistent manner. We find that UV luminosity functions at z ∼ 6-8 have steep faint-end slopes, α ∼ -2, and likely steeper slopes, α ≲ -2, at z ∼ 9-10. We also find that the evolution of UV luminosity densities shows a non-accelerated decline beyond z ∼ 8 in the case of M_trunc = -15, but an accelerated one in the case of M_trunc = -17. We examine whether our results are consistent with the Thomson scattering optical depth from the Planck satellite and the ionized hydrogen fraction Q_HII at z ≲ 7 based on the standard analytic reionization model. We find that reionization scenarios exist that consistently explain all of the observational measurements with the allowed parameters of f_esc = 0.17 (+0.07/-0.03) and M_trunc > -14.0 for log(ξ_ion / [erg^-1 Hz]) = 25.34, where f_esc is the escape fraction, M_trunc is the faint limit of the UV luminosity function, and ξ_ion is the conversion factor of the UV luminosity to the ionizing photon emission rate. The length of the reionization period is estimated to be Δz = 3.9 (+2.0/-1.6) (for 0.1 < Q_HII < 0.99), consistent with the recent estimate from Planck.

  11. Towards a sampling strategy for the assessment of forest condition at European level: combining country estimates.

    PubMed

    Travaglini, Davide; Fattorini, Lorenzo; Barbati, Anna; Bottalico, Francesca; Corona, Piermaria; Ferretti, Marco; Chirici, Gherardo

    2013-04-01

    A correct characterization of the status and trend of forest condition is essential to support reporting processes at national and international level. An international forest condition monitoring has been implemented in Europe since 1987 under the auspices of the International Co-operative Programme on Assessment and Monitoring of Air Pollution Effects on Forests (ICP Forests). The monitoring is based on harmonized methodologies, with individual countries being responsible for its implementation. Due to inconsistencies and problems in sampling design, however, the ICP Forests network is not able to produce reliable quantitative estimates of forest condition at European and sometimes at country level. This paper proposes (1) a set of requirements for status and change assessment and (2) a harmonized sampling strategy able to provide unbiased and consistent estimators of forest condition parameters and of their changes at both country and European level. Under the assumption that a common definition of forest holds among European countries, monitoring objectives, parameters of concern and accuracy indexes are stated. On the basis of fixed-area plot sampling performed independently in each country, an unbiased and consistent estimator of forest defoliation indexes is obtained at both country and European level, together with conservative estimators of their sampling variance and power in the detection of changes. The strategy adopts a probabilistic sampling scheme based on fixed-area plots selected by means of systematic or stratified schemes. Operative guidelines for its application are provided.
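
    The final combination step is straightforward once each country supplies an unbiased estimate and a conservative variance estimate. A toy sketch with invented numbers (the paper's estimators, built on fixed-area plots under systematic or stratified selection, are more elaborate):

    ```python
    import numpy as np

    # Hypothetical country-level defoliation estimates
    est  = np.array([22.1, 30.4, 18.7])    # mean defoliation index [%], per country
    var  = np.array([1.2, 2.5, 0.8])       # estimated sampling variances
    area = np.array([11.4, 3.1, 9.9])      # forest areas used as weights [Mha]

    w = area / area.sum()
    eu_est = np.sum(w * est)               # unbiased if country estimates are
    eu_var = np.sum(w**2 * var)            # ... unbiased and independent
    print(f"European estimate: {eu_est:.1f} % (SE {np.sqrt(eu_var):.2f})")
    ```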

  12. The estimation of material and patch parameters in a PDE-based circular plate model

    NASA Technical Reports Server (NTRS)

    Banks, H. T.; Smith, Ralph C.; Brown, D. E.; Metcalf, Vern L.; Silcox, R. J.

    1995-01-01

    The estimation of material and patch parameters for a system involving a circular plate, to which piezoceramic patches are bonded, is considered. A partial differential equation (PDE) model for the thin circular plate is used with the passive and active contributions from the patches included in the internal and external bending moments. This model contains piecewise constant parameters describing the density, flexural rigidity, Poisson ratio, and Kelvin-Voigt damping for the system as well as patch constants and a coefficient for viscous air damping. Examples demonstrating the estimation of these parameters with experimental acceleration data and a variety of inputs to the experimental plate are presented. By using a physically derived PDE model to describe the system, parameter sets consistent across experiments are obtained, even when phenomena such as damping due to electric circuits affect the system dynamics.

  13. An adjoint-based simultaneous estimation method of the asthenosphere's viscosity and afterslip using a fast and scalable finite-element adjoint solver

    NASA Astrophysics Data System (ADS)

    Agata, Ryoichiro; Ichimura, Tsuyoshi; Hori, Takane; Hirahara, Kazuro; Hashimoto, Chihiro; Hori, Muneo

    2018-04-01

    The simultaneous estimation of the asthenosphere's viscosity and coseismic slip/afterslip is expected to substantially improve the consistency of estimation results with crustal deformation data collected at widely spread observation points, compared to estimating slips only. Such an estimate can be formulated as a non-linear inverse problem for the viscosity and an input force equivalent to fault slips, based on large-scale finite-element (FE) modeling of crustal deformation, in which the number of degrees of freedom is on the order of 10^9. We formulated and developed a computationally efficient adjoint-based estimation method for this inverse problem, together with a fast and scalable FE solver for the associated forward and adjoint problems. In a numerical experiment that imitates the 2011 Tohoku-Oki earthquake, the advantage of the proposed method is confirmed by comparing the estimated results with those obtained using simplified estimation methods. The computational cost required for the optimization shows that the proposed method enables the targeted estimation to be completed with a moderate amount of computational resources.

  14. Quantifying the Accuracy of Digital Hemispherical Photography for Leaf Area Index Estimates on Broad-Leaved Tree Species.

    PubMed

    Gilardelli, Carlo; Orlando, Francesca; Movedi, Ermes; Confalonieri, Roberto

    2018-03-29

    Digital hemispherical photography (DHP) has been widely used to estimate leaf area index (LAI) in forestry. Despite the advancement in the processing of hemispherical images with dedicated tools, several steps are still manual and thus easily affected by the user's experience and judgment. The purpose of this study was to quantify the impact of user subjectivity on DHP LAI estimates for broad-leaved woody canopies using the software Can-Eye. Following the ISO 5725 protocol, we quantified the repeatability and reproducibility of the method, thus defining its precision for a wide range of broad-leaved canopies markedly differing in structure. To get a complete evaluation of the method's accuracy, we also quantified its trueness using artificial canopy images with known canopy cover. Moreover, the effect of the segmentation method was analysed. The best results for precision (restrained limits of repeatability and reproducibility) were obtained for high LAI values (>5), with limits corresponding to a variation of 22% in the estimated LAI values. Poorer results were obtained for medium and low LAI values, with a variation of the estimated LAI values that exceeded 40%. Regardless of the LAI range explored, satisfactory results were achieved for trees in row-structured plantations (limits almost equal to 30% of the estimated LAI). Satisfactory results were achieved for trueness, regardless of the canopy structure. The paired t-test revealed that the effect of the segmentation method on LAI estimates was significant. Despite a non-negligible user effect, the accuracy metrics for DHP are consistent with those determined for other indirect methods for LAI estimation, confirming the overall reliability of DHP in broad-leaved woody canopies.
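
    The ISO 5725 precision metrics invoked here can be computed from an operator-by-replicate table. A minimal sketch with hypothetical LAI readings of a single image set (the study's design spans many canopies and users):

    ```python
    import numpy as np

    # rows: operators, columns: repeated LAI estimates of the same image set
    m = np.array([[4.8, 5.0, 4.9],
                  [5.3, 5.4, 5.2],
                  [4.6, 4.7, 4.8]])

    n = m.shape[1]
    s_r2 = m.var(axis=1, ddof=1).mean()                     # repeatability variance
    s_L2 = max(m.mean(axis=1).var(ddof=1) - s_r2 / n, 0.0)  # between-operator part
    s_R2 = s_r2 + s_L2                                      # reproducibility variance
    # 2.8 ~ 1.96 * sqrt(2): 95% limit for the difference of two single results
    print(f"repeatability limit  r = {2.8 * np.sqrt(s_r2):.2f}")
    print(f"reproducibility limit R = {2.8 * np.sqrt(s_R2):.2f}")
    ```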

  15. Bootstrap Estimates of Standard Errors in Generalizability Theory

    ERIC Educational Resources Information Center

    Tong, Ye; Brennan, Robert L.

    2007-01-01

    Estimating standard errors of estimated variance components has long been a challenging task in generalizability theory. Researchers have speculated about the potential applicability of the bootstrap for obtaining such estimates, but they have identified problems (especially bias) in using the bootstrap. Using Brennan's bias-correcting procedures…

  16. No correlation between ultrasound placental grading at 31-34 weeks of gestation and a surrogate estimate of organ function at term obtained by stereological analysis.

    PubMed

    Yin, T T; Loughna, P; Ong, S S; Padfield, J; Mayhew, T M

    2009-08-01

    We test the experimental hypothesis that early changes in the ultrasound appearance of the placenta reflect poor or reduced placental function. The sonographic (Grannum) grade of placental maturity was compared to placental function as expressed by the morphometric oxygen diffusive conductance of the villous membrane. Ultrasonography was used to assess the Grannum grade of 32 placentas at 31-34 weeks of gestation. Indications for the scans included a history of previous fetal abnormalities, previous fetal growth problems or suspicion of IUGR. Placentas were classified from grade 0 (most immature) to grade III (most mature). We did not exclude smokers or complicated pregnancies as we aimed to correlate the early appearance of mature placentas with placental function. After delivery, microscopical fields on formalin-fixed, trichrome-stained histological sections of each placenta were obtained by multistage systematic uniform random sampling. Using design-based stereological methods, the exchange surface areas of peripheral (terminal and intermediate) villi and their fetal capillaries and the arithmetic and harmonic mean thicknesses of the villous membrane (maternal surface of villous trophoblast to adluminal surface of vascular endothelium) were estimated. An index of the variability in thickness of this membrane, and an estimate of its oxygen diffusive conductance, were derived secondarily as were estimates of the mean diameters and total lengths of villi and fetal capillaries. Group comparisons were drawn using analysis of variance. We found no significant differences in placental volume or composition or in the dimensions or diffusive conductances of the villous membrane. Subsequent exclusion of smokers did not alter these main findings. Grannum grades at 31-34 weeks of gestation appear not to provide reliable predictors of the functional capacity of the term placenta as expressed by the surrogate measure, morphometric diffusive conductance.

  17. [A method for obtaining redshifts of quasars based on wavelet multi-scaling feature matching].

    PubMed

    Liu, Zhong-Tian; Li, Xiang-Ru; Wu, Fu-Chao; Zhao, Yong-Heng

    2006-09-01

    The LAMOST project, the world's largest sky survey project being implemented in China, is expected to obtain 10^5 quasar spectra. The main objective of the present article is to explore methods that can be used to estimate the redshifts of quasar spectra from LAMOST. Firstly, the features of the broad emission lines are extracted from the quasar spectra to overcome the disadvantage of a low signal-to-noise ratio. Then the redshifts of quasar spectra can be estimated by using multi-scaling feature matching. An experiment with the 15,715 quasars from the SDSS DR2 shows that the method estimates redshifts correctly for 95.13% of the spectra within an error range of 0.02. This method was designed to obtain the redshifts of quasar spectra with relative flux and a low signal-to-noise ratio; it is applicable to the LAMOST data and helps in studying quasars, the large-scale structure of the universe, etc.

  18. Patient-specific parameter estimation in single-ventricle lumped circulation models under uncertainty

    PubMed Central

    Schiavazzi, Daniele E.; Baretta, Alessia; Pennati, Giancarlo; Hsia, Tain-Yen; Marsden, Alison L.

    2017-01-01

    Computational models of cardiovascular physiology can inform clinical decision-making, providing a physically consistent framework to assess vascular pressures and flow distributions, and aiding in treatment planning. In particular, lumped parameter network (LPN) models that make an analogy to electrical circuits offer a fast and surprisingly realistic method to reproduce the circulatory physiology. The complexity of LPN models can vary significantly to account, for example, for cardiac and valve function, respiration, autoregulation, and time-dependent hemodynamics. More complex models provide insight into detailed physiological mechanisms, but their utility is maximized if one can quickly identify patient-specific parameters. The clinical utility of LPN models with many parameters will be greatly enhanced by automated parameter identification, particularly if parameter tuning can match non-invasively obtained clinical data. We present a framework for automated tuning of 0D lumped model parameters to match clinical data. We demonstrate the utility of this framework through application to single ventricle pediatric patients with Norwood physiology. Through a combination of local identifiability, Bayesian estimation and maximum a posteriori simplex optimization, we show the ability to automatically determine physiologically consistent point estimates of the parameters and to quantify uncertainty induced by errors and assumptions in the collected clinical data. We show that multi-level estimation, that is, updating the parameter prior information through sub-model analysis, can lead to a significant reduction in the parameter marginal posterior variance. We first consider virtual patient conditions, with clinical targets generated through model solutions, and second, application to a cohort of four single-ventricle patients with Norwood physiology. PMID:27155892
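
    As a deliberately reduced illustration of the tuning loop, not the authors' multi-level Bayesian framework, one can match a two-element Windkessel lumped model to two pressure targets with a derivative-free simplex search; all parameter values and targets below are invented:

    ```python
    import numpy as np
    from scipy.optimize import minimize

    targets = np.array([65.0, 30.0])    # hypothetical mean / pulse pressure [mmHg]

    def windkessel(params, Q=65.0, HR=120.0):
        """Two-element Windkessel (R [mmHg s/mL], C [mL/mmHg]) driven by a
        half-sine flow wave; returns (mean pressure, pulse pressure)."""
        R, C = params
        T = 60.0 / HR
        dt = T / 500
        t = np.arange(0.0, 10 * T, dt)            # run to periodic steady state
        q = np.where((t % T) < T / 2,
                     Q * np.pi * np.sin(np.pi * (t % T) / (T / 2)), 0.0)
        p = np.empty_like(t)
        p[0] = 60.0
        for k in range(t.size - 1):               # explicit Euler: C dp/dt = q - p/R
            p[k + 1] = p[k] + dt * (q[k] - p[k] / R) / C
        tail = p[t > 9 * T]
        return np.array([tail.mean(), tail.max() - tail.min()])

    def cost(params):
        if np.any(np.asarray(params) <= 0):
            return 1e9                            # keep R and C physical
        return np.sum((windkessel(params) - targets) ** 2)

    fit = minimize(cost, x0=[1.0, 1.0], method="Nelder-Mead")
    print(fit.x)                                  # tuned R and C
    ```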

  19. Parameter Estimation and Model Selection in Computational Biology

    PubMed Central

    Lillacci, Gabriele; Khammash, Mustafa

    2010-01-01

    A central challenge in computational modeling of biological systems is the determination of the model parameters. Typically, only a fraction of the parameters (such as kinetic rate constants) are experimentally measured, while the rest are often fitted. The fitting process is usually based on experimental time course measurements of observables, which are used to assign parameter values that minimize some measure of the error between these measurements and the corresponding model prediction. The measurements, which can come from immunoblotting assays, fluorescent markers, etc., tend to be very noisy and taken at a limited number of time points. In this work we present a new approach to the problem of parameter selection of biological models. We show how one can use a dynamic recursive estimator, known as the extended Kalman filter, to arrive at estimates of the model parameters. The proposed method proceeds as follows. First, we use a variation of the Kalman filter that is particularly well suited to biological applications to obtain a first guess for the unknown parameters. Second, we employ an a posteriori identifiability test to check the reliability of the estimates. Finally, we solve an optimization problem to refine the first guess if it is not accurate enough. The final estimates are guaranteed to be statistically consistent with the measurements. Furthermore, we show how the same tools can be used to discriminate among alternate models of the same biological process. We demonstrate these ideas by applying our methods to two examples, namely a model of the heat shock response in E. coli, and a model of a synthetic gene regulation system. The methods presented are quite general and may be applied to a wide class of biological systems where noisy measurements are used for parameter estimation or model selection. PMID:20221262
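
    The state-augmentation trick at the heart of the approach fits in a few lines. Below is a generic sketch, not the paper's heat shock or gene regulation models: a scalar decay process with an unknown rate constant is estimated by an extended Kalman filter on the augmented state [x, k_deg]; all noise levels are assumptions:

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    # Truth: x_{k+1} = x_k - k_deg * x_k * dt + noise, observed as y_k = x_k + noise
    dt, k_true = 0.1, 0.8
    ys, x = [], 5.0
    for _ in range(200):
        x = x - k_true * x * dt + 0.01 * rng.standard_normal()
        ys.append(x + 0.05 * rng.standard_normal())

    z = np.array([5.0, 0.3])            # augmented estimate [x, k_deg]
    P = np.diag([1.0, 1.0])
    Q = np.diag([1e-4, 1e-6])           # tiny random walk keeps k_deg adaptable
    R = 0.05 ** 2
    Hm = np.array([[1.0, 0.0]])         # only x is measured

    for y in ys:
        xp = np.array([z[0] - z[1] * z[0] * dt, z[1]])     # predict
        F = np.array([[1 - z[1] * dt, -z[0] * dt],
                      [0.0, 1.0]])                          # transition Jacobian
        P = F @ P @ F.T + Q
        S = Hm @ P @ Hm.T + R                               # update
        K = P @ Hm.T / S
        z = xp + (K * (y - xp[0])).ravel()
        P = (np.eye(2) - K @ Hm) @ P

    print(z)                            # second component approaches k_true = 0.8
    ```

    The a posteriori identifiability test and refinement step of the paper would then check and, if needed, polish this first guess.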

  1. CREATING A DECISION CONTEXT FOR COMPARATIVE ANALYSIS AND CONSISTENT APPLICATION OF INHALATION DOSIMETRY MODELS IN CHILDREN'S RISK ASSESSMENT

    EPA Science Inventory

    Estimation of risks to children from exposure to airborne pollutants is often complicated by the lack of reliable epidemiological data specific to this age group. As a result, risks are generally estimated from extrapolations based on data obtained in other human age groups (e.g....

  2. Assessing Interval Estimation Methods for Hill Model ...

    EPA Pesticide Factsheets

    The Hill model of concentration-response is ubiquitous in toxicology, perhaps because its parameters directly relate to biologically significant metrics of toxicity such as efficacy and potency. Point estimates of these parameters obtained through least squares regression or maximum likelihood are commonly used in high-throughput risk assessment, but such estimates typically fail to include reliable information concerning confidence in (or precision of) the estimates. To address this issue, we examined methods for assessing uncertainty in Hill model parameter estimates derived from concentration-response data. In particular, using a sample of ToxCast concentration-response data sets, we applied four methods for obtaining interval estimates that are based on asymptotic theory, bootstrapping (two varieties), and Bayesian parameter estimation, and then compared the results. These interval estimation methods generally did not agree, so we devised a simulation study to assess their relative performance. We generated simulated data by constructing four statistical error models capable of producing concentration-response data sets comparable to those observed in ToxCast. We then applied the four interval estimation methods to the simulated data and compared the actual coverage of the interval estimates to the nominal coverage (e.g., 95%) in order to quantify performance of each of the methods in a variety of cases (i.e., different values of the true Hill model parameters).
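
    For concreteness, one of the interval methods, bootstrapping residuals around a least-squares Hill fit, can be sketched as follows; the simulated concentrations and error level are assumptions, not ToxCast data:

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    rng = np.random.default_rng(3)

    def hill(c, top, ac50, n):
        """Hill concentration-response curve."""
        return top * c**n / (ac50**n + c**n)

    conc = np.logspace(-2, 2, 9)                        # test concentrations
    resp = hill(conc, 100.0, 1.5, 1.2) + 5.0 * rng.standard_normal(conc.size)

    bounds = ([0, 1e-3, 0.1], [200, 1e3, 10])
    phat, _ = curve_fit(hill, conc, resp, p0=[80, 1.0, 1.0], bounds=bounds)

    fitted = hill(conc, *phat)
    res = resp - fitted
    boot = []
    for _ in range(500):                                # residual bootstrap refits
        y_b = fitted + rng.choice(res, size=res.size, replace=True)
        try:
            pb, _ = curve_fit(hill, conc, y_b, p0=phat, bounds=bounds)
            boot.append(pb)
        except RuntimeError:
            continue                                    # skip non-converged refits
    lo, hi = np.percentile(np.array(boot), [2.5, 97.5], axis=0)
    print(phat, lo, hi)                                 # estimates and 95% intervals
    ```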

  3. Wide baseline stereo matching based on double topological relationship consistency

    NASA Astrophysics Data System (ADS)

    Zou, Xiaohong; Liu, Bin; Song, Xiaoxue; Liu, Yang

    2009-07-01

    Stereo matching is one of the most important branches of computer vision. In this paper, an algorithm is proposed for wide-baseline stereo matching. A novel scheme is presented, called double topological relationship consistency (DCTR). The combination of double topological configurations includes the consistency of the first topological relationship (CFTR) and the consistency of the second topological relationship (CSTR). It not only sets up a more advanced matching model but also discards mismatches by iteratively computing the fitness of the feature matches, overcoming many problems of traditional methods that depend on invariance to changes in scale, rotation, or illumination across large view changes and even occlusions. Experimental examples are shown where the two cameras are located in very different orientations. Epipolar geometry can also be recovered using RANSAC, by far the most widely adopted method. With this method, we can obtain correspondences with high precision on wide-baseline matching problems. Finally, the effectiveness and reliability of the method are demonstrated in wide-baseline experiments on the image pairs.

  4. Diagnosing a Strong-Fault Model by Conflict and Consistency

    PubMed Central

    Zhou, Gan; Feng, Wenquan

    2018-01-01

    The diagnosis method for a weak-fault model, with only normal behaviors of each component, has evolved over decades. However, many systems now demand strong-fault models, whose fault modes have specific behaviors as well. It is difficult to diagnose a strong-fault model due to its non-monotonicity. Currently, diagnosis methods usually employ conflicts to isolate possible faults, and the process can be expedited when some observed output is consistent with the model's prediction, where the consistency indicates probably normal components. This paper solves the problem of efficiently diagnosing a strong-fault model by proposing a novel Logic-based Truth Maintenance System (LTMS) with two search approaches based on conflict and consistency. At the beginning, the original strong-fault model is encoded by Boolean variables and converted into Conjunctive Normal Form (CNF). Then the proposed LTMS is employed to reason over the CNF and find multiple minimal conflicts and maximal consistencies when a fault exists. The search approaches offer the best candidate efficiently based on the reasoning result until the diagnosis results are obtained. The completeness, coverage, correctness and complexity of the proposals are analyzed theoretically to show their strengths and weaknesses. Finally, the proposed approaches are demonstrated by applying them to a real-world domain, the heat control unit of a spacecraft, where the proposed methods are significantly better than best-first and conflict-directed A* search methods. PMID:29596302

  5. Analysis of short pulse laser altimetry data obtained over horizontal path

    NASA Technical Reports Server (NTRS)

    Im, K. E.; Tsai, B. M.; Gardner, C. S.

    1983-01-01

    Recent pulsed measurements of atmospheric delay obtained by ranging to the more realistic targets including a simulated ocean target and an extended plate target are discussed. These measurements are used to estimate the expected timing accuracy of a correlation receiver system. The experimental work was conducted using a pulsed two color laser altimeter.

  6. Effects of control inputs on the estimation of stability and control parameters of a light airplane

    NASA Technical Reports Server (NTRS)

    Cannaday, R. L.; Suit, W. T.

    1977-01-01

    The maximum likelihood parameter estimation technique was used to determine the values of stability and control derivatives from flight test data for a low-wing, single-engine, light airplane. Several input forms were used during the tests to investigate the consistency of parameter estimates as it relates to inputs. These consistencies were compared by using the ensemble variance and estimated Cramer-Rao lower bound. In addition, the relationship between inputs and parameter correlations was investigated. Results from the stabilator inputs are inconclusive but the sequence of rudder input followed by aileron input or aileron followed by rudder gave more consistent estimates than did rudder or ailerons individually. Also, square-wave inputs appeared to provide slightly improved consistency in the parameter estimates when compared to sine-wave inputs.

  7. Estimating discharge in rivers using remotely sensed hydraulic information

    USGS Publications Warehouse

    Bjerklie, D.M.; Moller, D.; Smith, L.C.; Dingman, S.L.

    2005-01-01

    A methodology to estimate in-bank river discharge exclusively from remotely sensed hydraulic data is developed. Water-surface width and maximum channel width measured from 26 aerial and digital orthophotos of 17 single-channel rivers and 41 SAR images of three braided rivers were coupled with channel slope data obtained from topographic maps to estimate the discharge. The standard error of the discharge estimates was within a factor of 1.5-2 (50-100%) of the observed values, with the mean estimate accuracy within 10%. This level of accuracy was achieved using calibration functions developed from observed discharge. The calibration functions use reach-specific geomorphic variables, the maximum channel width and the channel slope, to predict a correction factor. The calibration functions are related to channel type. Surface velocity and width information, obtained from a single C-band image acquired by the Jet Propulsion Laboratory's (JPL's) AirSAR, was also used to estimate discharge for a reach of the Missouri River. Without using a calibration function, the estimate accuracy was ±72% of the observed discharge, which is within the expected range of uncertainty for the method. However, using the observed velocity to calibrate the initial estimate improved the estimate accuracy to within ±10% of the observed value. Remotely sensed discharge estimates with accuracies reported in this paper could be useful for regional or continental scale hydrologic studies, or in regions where ground-based data are lacking. © 2004 Elsevier B.V. All rights reserved.
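
    The core of such a method is a width-slope rating plus a calibration factor from gauged reaches. A schematic sketch; the functional form follows the paper's general approach, but the exponents, coefficients, and data below are illustrative placeholders, not the fitted values:

    ```python
    import numpy as np

    def discharge_estimate(width_m, slope, c=0.2, a=1.7, b=0.35):
        """Generic power-law rating Q = c * W^a * S^b (coefficients illustrative)."""
        return c * width_m**a * slope**b

    # Calibration: scale raw estimates by the mean observed/estimated ratio
    W_obs = np.array([85.0, 140.0, 60.0])       # water-surface widths [m]
    S_obs = np.array([2e-4, 1e-4, 5e-4])        # channel slopes from topo maps
    Q_obs = np.array([310.0, 700.0, 150.0])     # gauged discharges [m3/s]

    correction = np.mean(Q_obs / discharge_estimate(W_obs, S_obs))
    print(correction * discharge_estimate(120.0, 3e-4))   # corrected estimate
    ```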

  8. Subsonic stability and control derivatives for an unpowered, remotely piloted 3/8-scale F-15 airplane model obtained from flight test

    NASA Technical Reports Server (NTRS)

    Iliff, K. W.; Maine, R. E.; Shafer, M. F.

    1976-01-01

    In response to the interest in airplane configuration characteristics at high angles of attack, an unpowered remotely piloted 3/8-scale F-15 airplane model was flight tested. The subsonic stability and control characteristics of this airplane model over an angle of attack range of -20 to 53 deg are documented. The remotely piloted technique for obtaining flight test data was found to provide adequate stability and control derivatives. The remotely piloted technique provided an opportunity to test the aircraft mathematical model in an angle of attack regime not previously examined in flight test. The variation of most of the derivative estimates with angle of attack was found to be consistent, particularly when the data were supplemented by uncertainty levels.

  9. Estimating secular velocities from GPS data contaminated by postseismic motion at sites with limited pre-earthquake data

    NASA Astrophysics Data System (ADS)

    Murray, J. R.; Svarc, J. L.

    2016-12-01

    Constant secular velocities estimated from Global Positioning System (GPS)-derived position time series are a central input for modeling interseismic deformation in seismically active regions. Both postseismic motion and temporally correlated noise produce long-period signals that are difficult to separate from secular motion and can bias velocity estimates. For GPS sites installed post-earthquake it is especially challenging to uniquely estimate velocities and postseismic signals and to determine when the postseismic transient has decayed sufficiently to enable use of subsequent data for estimating secular rates. Within 60 km of the 2003 M6.5 San Simeon and 2004 M6 Parkfield earthquakes in California, 16 continuous GPS sites (group 1) were established prior to mid-2001, and 52 stations (group 2) were installed following the events. We use group 1 data to investigate how early in the post-earthquake time period one may reliably begin using group 2 data to estimate velocities. For each group 1 time series, we obtain eight velocity estimates using observation time windows with successively later start dates (2006 - 2013) and a parameterization that includes constant velocity, annual, and semi-annual terms but no postseismic decay. We compare these to velocities estimated using only pre-San Simeon data to find when the pre- and post-earthquake velocities match within uncertainties. To obtain realistic velocity uncertainties, for each time series we optimize a temporally correlated noise model consisting of white, flicker, random walk, and, in some cases, band-pass filtered noise contributions. Preliminary results suggest velocities can be reliably estimated using data from 2011 to the present. Ongoing work will assess velocity bias as a function of epicentral distance and length of post-earthquake time series as well as explore spatio-temporal filtering of detrended group 1 time series to provide empirical corrections for postseismic motion in group 2 time series.
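
    The deterministic part of the parameterization described here, a constant velocity plus annual and semi-annual terms, is an ordinary linear least-squares problem. A minimal sketch on synthetic daily positions (amplitudes and noise are assumptions; the colored-noise modeling the abstract emphasizes is the hard part):

    ```python
    import numpy as np

    rng = np.random.default_rng(4)
    t = np.arange(0, 8, 1 / 365.25)                  # 8 years of daily epochs [yr]
    pos = (12.0 * t + 2.0 * np.sin(2 * np.pi * t + 0.3)
           + 0.8 * np.sin(4 * np.pi * t - 1.0) + 1.5 * rng.standard_normal(t.size))

    # Columns: intercept, velocity, annual and semi-annual sine/cosine terms
    A = np.column_stack([np.ones_like(t), t,
                         np.sin(2 * np.pi * t), np.cos(2 * np.pi * t),
                         np.sin(4 * np.pi * t), np.cos(4 * np.pi * t)])
    coef, *_ = np.linalg.lstsq(A, pos, rcond=None)
    print(f"velocity = {coef[1]:.2f} mm/yr")         # close to the simulated 12
    ```

    With flicker or random-walk noise present, the formal white-noise uncertainties of such a fit are far too optimistic, which is why the authors optimize dedicated noise models per time series.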

  10. Results From F-18B Stability and Control Parameter Estimation Flight Tests at High Dynamic Pressures

    NASA Technical Reports Server (NTRS)

    Moes, Timothy R.; Noffz, Gregory K.; Iliff, Kenneth W.

    2000-01-01

    A maximum-likelihood output-error parameter estimation technique has been used to obtain stability and control derivatives for the NASA F-18B Systems Research Aircraft. This work has been performed to support flight testing of the active aeroelastic wing (AAW) F-18A project. The goal of this research is to obtain baseline F-18 stability and control derivatives that will form the foundation of the aerodynamic model for the AAW aircraft configuration. Flight data have been obtained at Mach numbers between 0.85 and 1.30 and at dynamic pressures ranging between 600 and 1500 lbf/sq ft. At each test condition, longitudinal and lateral-directional doublets have been performed using an automated onboard excitation system. The doublet maneuver consists of a series of single-surface inputs so that individual control-surface motions cannot be correlated with other control-surface motions. Flight test results have shown that several stability and control derivatives are significantly different than prescribed by the F-18B aerodynamic model. This report defines the parameter estimation technique used, presents stability and control derivative results, compares the results with predictions based on the current F-18B aerodynamic model, and shows improvements to the nonlinear simulation using updated derivatives from this research.

  11. Contour-based object orientation estimation

    NASA Astrophysics Data System (ADS)

    Alpatov, Boris; Babayan, Pavel

    2016-04-01

    Real-time object orientation estimation is a current problem in computer vision. In this paper we propose an approach to estimate the orientation of objects lacking axial symmetry. The proposed algorithm is intended to estimate the orientation of a specific known 3D object, so a 3D model is required for learning. The proposed orientation estimation algorithm consists of two stages: learning and estimation. The learning stage explores the studied object. Using the 3D model, we gather a set of training images by capturing the 3D model from viewpoints evenly distributed on a sphere. The viewpoints are distributed according to the geosphere principle, which minimizes the training image set. The gathered training image set is used for calculating descriptors, which are used in the estimation stage of the algorithm. The estimation stage focuses on the matching process between an observed image descriptor and the training image descriptors. The experimental research was performed using a set of images of an Airbus A380. The proposed orientation estimation algorithm showed good accuracy (mean error below 6°) in all case studies. The real-time performance of the algorithm was also demonstrated.

  12. Genetic parameter estimation for long endurance trials in the Uruguayan Criollo horse.

    PubMed

    López-Correa, R D; Peñagaricano, F; Rovere, G; Urioste, J I

    2018-06-01

    The aim of this study was to estimate the genetic parameters of performance in a 750-km, 15-day ride in Criollo horses. Heritability (h²) and maternal lineage effects (mt²) were obtained for rank, a relative placing measure of performance. Additive genetic and maternal lineage (r_mt) correlations among five medium-to-high intensity phase ranks (pRK) and final rank (RK) were also estimated. Individual records from 1,236 Criollo horses from 1979 to 2012 were used. A multivariate threshold animal model was applied to the pRK and RK. Heritability was moderate to low (0.156-0.275). Estimates of mt² were consistently low (0.04-0.06). Additive genetic correlations between individual pRK and RK were high (0.801-0.924), and the genetic correlations between individual pRKs ranged from 0.763 to 0.847. The pRK heritabilities revealed that some phases were explained by a greater additive component, whereas others showed stronger genetic relationships with RK. Thus, not all pRK may be considered as similar measures of performance in competition. © 2018 Blackwell Verlag GmbH.

  13. Estimation of the water retention curve from the soil hydraulic conductivity and sorptivity in an upward infiltration process

    NASA Astrophysics Data System (ADS)

    Moret-Fernández, David; Angulo, Marta; Latorre, Borja; González-Cebollada, César; López, María Victoria

    2017-04-01

    Determination of the saturated hydraulic conductivity, Ks, and the α and n parameters of the van Genuchten (1980) water retention curve, θ(h), is fundamental to fully understand and predict soil water distribution. This work presents a new procedure to estimate the soil hydraulic properties from inverse analysis of a single cumulative upward infiltration curve followed by an overpressure step at the end of the wetting process. First, Ks is calculated by Darcy's law from the overpressure step. The soil sorptivity (S) is then estimated using the Haverkamp et al. (1994) equation. Next, a relationship between α and n, f(α,n), is calculated from the estimated S and Ks. The α and n values are finally obtained by inverse analysis of the experimental data after applying the f(α,n) relationship to the HYDRUS-1D model. The method was validated on theoretical synthetic curves for three different soils (sand, loam and clay), and subsequently tested on experimental sieved soils (sand, loam, clay loam and clay) of known hydraulic properties. A robust relationship was observed between the theoretical α and n values (R² > 0.99) of the different synthetic soils and those estimated from inverse analysis of the upward infiltration curve. Consistent results were also obtained for the experimental soils (R² > 0.85). These results demonstrate that the technique allows accurate estimates of the soil hydraulic properties for a wide range of textures, including clay soils.
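
    For reference, the van Genuchten (1980) retention function and a direct least-squares recovery of α and n on synthetic points look as follows; the paper instead infers the parameters from S and Ks through inverse analysis of the upward infiltration curve in HYDRUS-1D:

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def van_genuchten(h, alpha, n, theta_r=0.05, theta_s=0.45):
        """van Genuchten (1980) retention curve; h is suction head [cm, positive]."""
        m = 1 - 1 / n
        return theta_r + (theta_s - theta_r) / (1 + (alpha * h) ** n) ** m

    h_obs = np.array([1, 10, 30, 100, 300, 1000, 5000], dtype=float)
    theta_obs = van_genuchten(h_obs, alpha=0.036, n=1.56) \
                + 0.005 * np.random.default_rng(5).standard_normal(h_obs.size)

    (alpha_hat, n_hat), _ = curve_fit(van_genuchten, h_obs, theta_obs,
                                      p0=[0.02, 1.3],
                                      bounds=([1e-4, 1.01], [1.0, 10.0]))
    print(alpha_hat, n_hat)              # recovers ~0.036 and ~1.56
    ```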

  14. Partitioning the Uncertainty in Estimates of Mean Basal Area Obtained from 10-year Diameter Growth Model Predictions

    Treesearch

    Ronald E. McRoberts

    2005-01-01

    Uncertainty in model-based predictions of individual tree diameter growth is attributed to three sources: measurement error for predictor variables, residual variability around model predictions, and uncertainty in model parameter estimates. Monte Carlo simulations are used to propagate the uncertainty from the three sources through a set of diameter growth models to...

  15. Can lagrangian models reproduce the migration time of European eel obtained from otolith analysis?

    NASA Astrophysics Data System (ADS)

    Rodríguez-Díaz, L.; Gómez-Gesteira, M.

    2017-12-01

    European eel can be found in the Bay of Biscay after a long migration across the Atlantic. The duration of migration, which takes place at the larval stage, is of primary importance for understanding eel ecology and, hence, its survival. This duration is still controversial, since estimates range from 7 months to more than 4 years depending on the estimation method. The minimum migration duration estimated from our lagrangian model is similar to the duration obtained from the microstructure of eel otoliths, which is typically on the order of 7-9 months. The lagrangian model proved sensitive to different conditions such as spatial and temporal resolution, release depth, release area, and initial distribution. In general, migration was faster when decreasing the depth and increasing the resolution of the model. On average, the fastest migration was obtained when only advective horizontal movement was considered; however, in some cases even faster migration was obtained when locally oriented random migration was taken into account.

  16. Evaluating MODIS satellite versus terrestrial data driven productivity estimates in Austria

    NASA Astrophysics Data System (ADS)

    Petritsch, R.; Boisvenue, C.; Pietsch, S. A.; Hasenauer, H.; Running, S. W.

    2009-04-01

    Sensors, such as the Moderate Resolution Imaging Spectroradiometer (MODIS) on NASA's Terra satellite, are developed for monitoring global and/or regional ecosystem fluxes like net primary production (NPP). Although these systems should allow us to assess carbon sequestration issues, forest management impacts, etc., relatively little is known about the consistency and accuracy of the resulting satellite-driven estimates versus production estimates driven by ground data. In this study we compare the following NPP estimation methods: (i) NPP estimates as derived from MODIS and available on the internet; (ii) estimates resulting from the off-line version of the MODIS algorithm; (iii) estimates using regional meteorological data within the offline algorithm; (iv) NPP estimates from a species-specific biogeochemical ecosystem model adapted for Alpine conditions; and (v) NPP estimates calculated from individual tree measurements. Single-tree measurements were available from 624 forested sites across Austria, but only the data from 165 sample plots included all the necessary information for performing the comparison at plot level. To ensure independence of satellite-driven and ground-based predictions, only latitude and longitude of each site were used to obtain MODIS estimates. Along with the comparison of the different methods, we discuss problems such as the differing dates of field campaigns (<1999) and acquisition of satellite images (2000-2005) or incompatible productivity definitions within the methods, and come up with a framework for combining terrestrial and satellite data based productivity estimates. On average, MODIS estimates agreed well with the output of the model's self-initialization (spin-up), and biomass increment calculated from tree measurements is not significantly different from model results; however, the correlation between satellite-derived and terrestrial estimates is relatively poor. Considering the different scales as they are 9 km² from MODIS and

  17. Estimation in SEM: A Concrete Example

    ERIC Educational Resources Information Center

    Ferron, John M.; Hess, Melinda R.

    2007-01-01

    A concrete example is used to illustrate maximum likelihood estimation of a structural equation model with two unknown parameters. The fitting function is found for the example, as are the vector of first-order partial derivatives, the matrix of second-order partial derivatives, and the estimates obtained from each iteration of the Newton-Raphson…
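
    The same machinery, a gradient of the fitting function, a Hessian, and Newton-Raphson updates, can be shown on any two-parameter likelihood. A generic sketch for a normal model, not the article's SEM example:

    ```python
    import numpy as np

    rng = np.random.default_rng(6)
    x = rng.normal(3.0, 2.0, size=200)   # ML solution known in closed form
    n = x.size

    mu, v = 0.0, 1.0                     # starting values; v is the variance
    for _ in range(100):
        r = x - mu
        grad = np.array([r.sum() / v,
                         -n / (2 * v) + (r @ r) / (2 * v**2)])
        hess = np.array([[-n / v,          -r.sum() / v**2],
                         [-r.sum() / v**2,  n / (2 * v**2) - (r @ r) / v**3]])
        step = np.linalg.solve(hess, grad)
        while v - step[1] <= 0:          # step-halving keeps the variance positive
            step = step / 2
        mu, v = mu - step[0], v - step[1]
        if np.abs(step).max() < 1e-10:
            break

    print(mu, v)                         # matches x.mean() and x.var() (ML, ddof=0)
    ```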

  18. Ambulatory estimation of foot placement during walking using inertial sensors.

    PubMed

    Martin Schepers, H; van Asseldonk, Edwin H F; Baten, Chris T M; Veltink, Peter H

    2010-12-01

    This study proposes a method to assess foot placement during walking using an ambulatory measurement system consisting of orthopaedic sandals equipped with force/moment sensors and inertial sensors (accelerometers and gyroscopes). Two parameters, lateral foot placement (LFP) and stride length (SL), were estimated for each foot separately during walking with eyes open (EO), and with eyes closed (EC) to analyze if the ambulatory system was able to discriminate between different walking conditions. For validation, the ambulatory measurement system was compared to a reference optical position measurement system (Optotrak). LFP and SL were obtained by integration of inertial sensor signals. To reduce the drift caused by integration, LFP and SL were defined with respect to an average walking path using a predefined number of strides. By varying this number of strides, it was shown that LFP and SL could be best estimated using three consecutive strides. LFP and SL estimated from the instrumented shoe signals and with the reference system showed good correspondence as indicated by the RMS difference between both measurement systems being 6.5 ± 1.0 mm (mean ± standard deviation) for LFP, and 34.1 ± 2.7 mm for SL. Additionally, a statistical analysis revealed that the ambulatory system was able to discriminate between the EO and EC condition, like the reference system. It is concluded that the ambulatory measurement system was able to reliably estimate foot placement during walking. Copyright © 2010 Elsevier Ltd. All rights reserved.
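
    The drift handling described above is commonly built on instants of known zero velocity. A much-simplified single-axis sketch; the paper fuses full 3D inertial and force data, so the signal shape, rate, and bias below are assumptions:

    ```python
    import numpy as np

    def stride_length(acc_forward, fs):
        """Stride length from forward acceleration of one stride, assuming zero
        velocity at both ends of the stride (foot flat)."""
        dt = 1.0 / fs
        v = np.cumsum(acc_forward) * dt
        v -= np.linspace(0.0, v[-1], v.size)    # remove linear integration drift
        return np.sum(v) * dt                   # integrate velocity -> displacement

    fs = 100
    t = np.arange(0, 1.0, 1 / fs)               # one synthetic 1 s stride
    acc = 4.0 * np.sin(2 * np.pi * t) + 0.05    # small bias mimics sensor drift
    print(f"stride length = {stride_length(acc, fs):.2f} m")
    ```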

  19. Consistency relation and non-Gaussianity in a Galileon inflation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Asadi, Kosar; Nozari, Kourosh, E-mail: k.asadi@stu.umz.ac.ir, E-mail: knozari@umz.ac.ir

    2016-12-01

    We study a particular Galileon inflation in the light of Planck2015 observational data in order to constrain the model parameter space. We study the spectrum of the primordial modes of the density perturbations by expanding the action up to second order in perturbations. We then expand the action up to third order and compute the three-point correlation functions to find the amplitude of the non-Gaussianity of the primordial perturbations in this setup. We study the amplitude of the non-Gaussianity in both the equilateral and orthogonal configurations and test the model with recent observational data. Our analysis shows that for some ranges of the non-minimal coupling parameter, the model is consistent with observation, and it is also possible to have large non-Gaussianity, which would be observable with future improvements in experiments. Moreover, we obtain the tilt of the tensor power spectrum and test the standard inflationary consistency relation (r = -8 n_T) against the latest bounds from the Planck2015 dataset. We find a slight deviation from the standard consistency relation in this setup. Nevertheless, such a deviation seems not to be sufficiently remarkable to be detected confidently.

  20. An overview of coefficient alpha and a reliability matrix for estimating adequacy of internal consistency coefficients with psychological research measures.

    PubMed

    Ponterotto, Joseph G; Ruckdeschel, Daniel E

    2007-12-01

    The present article addresses issues in reliability assessment that are often neglected in psychological research, such as acceptable levels of internal consistency for research purposes, factors affecting the magnitude of coefficient alpha (α), and considerations for interpreting α within the research context. A new reliability matrix anchored in classical test theory is introduced to help researchers judge the adequacy of internal consistency coefficients of research measures. Guidelines and cautions in applying the matrix are provided.
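
    Coefficient alpha itself is a short computation over an item-score matrix, alpha = k/(k-1) * (1 - sum of item variances / variance of total scores). A small sketch on simulated respondents; the data are synthetic, and the article's contribution is the interpretive matrix, not the formula:

    ```python
    import numpy as np

    def cronbach_alpha(scores):
        """Coefficient alpha for an (n_respondents, k_items) score matrix."""
        k = scores.shape[1]
        item_vars = scores.var(axis=0, ddof=1).sum()
        total_var = scores.sum(axis=1).var(ddof=1)
        return k / (k - 1) * (1 - item_vars / total_var)

    rng = np.random.default_rng(7)
    trait = rng.standard_normal((300, 1))                 # common latent trait
    items = trait + 0.8 * rng.standard_normal((300, 6))   # six noisy indicators
    print(f"alpha = {cronbach_alpha(items):.2f}")
    ```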

  1. LACIE large area acreage estimation. [United States of America

    NASA Technical Reports Server (NTRS)

    Chhikara, R. S.; Feiveson, A. H. (Principal Investigator)

    1979-01-01

    A sample wheat acreage for a large area is obtained by multiplying its small grains acreage estimate as computed by the classification and mensuration subsystem by the best available ratio of wheat to small grains acreages obtained from historical data. In the United States, as in other countries with detailed historical data, an additional level of aggregation was required because sample allocation was made at the substratum level. The essential features of the estimation procedure for LACIE countries are included along with procedures for estimating wheat acreage in the United States.
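
    The aggregation rule amounts to a stratified ratio estimate. A toy illustration with invented stratum values:

    ```python
    # (small-grains acreage estimate [kha], historical wheat/small-grains ratio)
    strata = [
        (1200.0, 0.85),
        (800.0, 0.60),
        (450.0, 0.92),
    ]
    wheat_total = sum(sg * ratio for sg, ratio in strata)
    print(f"aggregated wheat acreage: {wheat_total:.0f} kha")
    ```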

  2. Estimating the Value of Life, Injury, and Travel Time Saved Using a Stated Preference Framework.

    PubMed

    Niroomand, Naghmeh; Jenkins, Glenn P

    2016-06-01

    The incidence of fatality over the period 2010-2014 from automobile accidents in North Cyprus is 2.75 times greater than the average for the EU. With the prospect of North Cyprus entering the EU, many investments will need to be undertaken to improve road safety in order to reach EU benchmarks. The objective of this study is to provide local estimates of the value of a statistical life and injury along with the value of time savings. These are among the parameter values needed for the evaluation of the change in the expected incidence of automotive accidents and time savings brought about by such projects. In this study we conducted a stated choice experiment to identify the preferences and tradeoffs of automobile drivers in North Cyprus for improved travel times, travel costs, and safety. The choice of route was examined using mixed logit models to obtain the marginal utilities associated with each attribute of the routes that consumers choose. These estimates were used to assess the individuals' willingness to pay (WTP) to avoid fatalities and injuries and to save travel time. We then used the results to obtain community-wide estimates of the value of a statistical life (VSL) saved, the value of injury (VI) prevented, and the value per hour of travel time saved. The estimates for the VSL range from €315,293 to €1,117,856 and the estimates of VI from €5,603 to €28,186. These values are consistent, after adjusting for differences in incomes, with the median results of similar studies done for EU countries. Copyright © 2016 Elsevier Ltd. All rights reserved.

  3. Future emissions pathways consistent with limiting warming to 1.5°C

    NASA Astrophysics Data System (ADS)

    Millar, R.; Fuglestvedt, J. S.; Grubb, M.; Rogelj, J.; Skeie, R. B.; Friedlingstein, P.; Forster, P.; Frame, D. J.; Pierrehumbert, R.; Allen, M. R.

    2016-12-01

    The stated aim of the 2015 UNFCCC Paris Agreement is 'holding the increase in global average temperature to well below 2°C above pre-industrial levels and to pursue efforts to limit temperature increases to 1.5°C'. We show that emissions reductions proportional to those achieved in an ambitious mitigation scenario, RCP2.6, but beginning in 2017, give a median estimated peak warming of 1.5°C, with a likely (66% probability) range of uncertainty of 1.2-2.0°C. Such a scenario would be approximately consistent with the most ambitious interpretation of the 2030 emissions pledges, but requires reduction rates exceeding 0.3 GtC/yr/yr after 2030. A steady reduction at less than half this rate would achieve the same temperature outcome if initiated in 2020. Limiting total CO2 emissions after 2015 to 200 GtC would limit future warming to likely less than 0.6°C above present, consistent with 1.5°C above pre-industrial, based on the distribution of responses of the CMIP5 Earth System, but the CMIP5 simulations do not correspond to scenarios that aim to limit warming to such low levels. If future CO2 emissions are successfully adapted to the emerging climate response so as to limit warming in 2100 to 0.6°C above present, and non-CO2 emissions follow the ambitious RCP2.6 scenario, then we estimate that resulting CO2 emissions will unlikely be restricted to less than 250 GtC given current uncertainties in climate system response, although still-poorly-modelled carbon cycle feedbacks, such as release from permafrost, may encroach on this budget. Even under a perfectly successful adaptive mitigation regime, emissions consistent with limiting warming to 0.6°C above present are unlikely to be greater than 500 GtC. These estimates suggest the 1.5°C goal may not yet be geophysically insurmountable but will nevertheless require, at minimum, the full implementation of the most ambitious interpretation of the Paris pledges followed by accelerated and more fundamental changes in our

  4. The challenge of obtaining information necessary for multi-criteria decision analysis implementation: the case of physiotherapy services in Canada

    PubMed Central

    2013-01-01

    Background As fiscal constraints dominate health policy discussions across Canada and globally, priority-setting exercises are becoming more common to guide the difficult choices that must be made. In this context, it becomes highly desirable to have accurate estimates of the value of specific health care interventions. Economic evaluation is a well-accepted method to estimate the value of health care interventions. However, economic evaluation has significant limitations, which have led to an increase in the use of Multi-Criteria Decision Analysis (MCDA). One key concern with MCDA is the availability of the information necessary for implementation. In the fall of 2011, the Canadian Physiotherapy Association embarked on a project aimed at providing a valuation of physiotherapy services that is both evidence-based and relevant to resource allocation decisions. The framework selected for this project was MCDA. We report on how we addressed the challenge of obtaining some of the information necessary for MCDA implementation. Methods MCDA criteria were selected and areas of physiotherapy practice were identified. Building the necessary information base was a three-step process. First, a literature review was conducted for each practice area, on each criterion. Next, interviews were conducted with experts in each of the practice areas to critique the results of the literature review and to fill gaps where the literature was absent or insufficient. Finally, the results of the individual interviews were validated by a national committee to ensure consistency across all practice areas and that a national-level perspective was applied. Results Despite a lack of research evidence on many of the considerations relevant to the estimation of the value of physiotherapy services (the criteria), sufficient information was obtained to facilitate MCDA implementation at the local level. Conclusions The results of this research project serve two purposes: 1) a method to

  5. The Improved Estimation of Ratio of Two Population Proportions

    ERIC Educational Resources Information Center

    Solanki, Ramkrishna S.; Singh, Housila P.

    2016-01-01

    In this article, we first obtain the correct mean square error expression of Gupta and Shabbir's linear weighted estimator of the ratio of two population proportions. We then suggest a general class of ratio estimators of two population proportions. The usual ratio estimator, the Wynn-type estimator, the Singh, Singh, and Kaur difference-type…

  6. New Generation VLBI: Intraday UT1 Estimations

    NASA Astrophysics Data System (ADS)

    Ipatov, Alexander; Ivanov, Dmitriy; Ilin, Gennadiy; Smolentsev, Sergei; Gayazov, Iskander; Mardyshkin, Vyacheslav; Fedotov, Leonid; Stempkovski, Victor; Vytnov, Alexander; Salnikov, Alexander; Surkis, Igor; Mikhailov, Andrey; Marshalov, Dmitriy; Bezrukov, Ilya; Melnikov, Alexey; Ken, Voytsekh; Kurdubov, Sergei

    2016-12-01

    The IAA has completed work on a new-generation radio interferometer with two VGOS antennas co-located at Badary and Zelenchukskaya. Forty-eight single-baseline, one-hour VLBI sessions (up to four sessions per day) were performed from 04 Nov to 18 Nov 2015. Observations were carried out using wideband S/X receivers, with three X-band and one S-band 512 MHz channels at one or two circular polarizations. Sessions consisted of about 60 scans with a 22-second minimum scan duration. The stations' broadband acquisition systems generated 1.5-3 TB of data per session, which were transferred via the Internet to the IAA FX correlator. The accuracy of the group delay in a single channel was 10-20 ps, which allows every single channel's observations to be used for geodetic analysis without bandwidth synthesis. In total, 156 single-channel NGS cards were obtained. The RMS of the differences between the UT1-UTC estimates and IERS finals values is 19 μs.

  7. Ultraspectral sounding retrieval error budget and estimation

    NASA Astrophysics Data System (ADS)

    Zhou, Daniel K.; Larar, Allen M.; Liu, Xu; Smith, William L.; Strow, Larrabee L.; Yang, Ping

    2011-11-01

    The ultraspectral infrared radiances obtained from satellite observations provide atmospheric, surface, and/or cloud information. These measurements of the thermodynamic state are intended to initialize weather and climate models. Great effort has been given to retrieving and validating these atmospheric, surface, and/or cloud properties. The Error Consistency Analysis Scheme (ECAS) has been developed to estimate the error budget, through fast radiative transfer model (RTM) forward and inverse calculations, in terms of the absolute values and standard deviations of differences in both the spectral radiance and retrieved geophysical parameter domains. The retrieval error is assessed through ECAS without the assistance of other independent measurements such as radiosonde data. ECAS re-evaluates instrument random noise and establishes the link between radiometric accuracy and retrieved geophysical parameter accuracy. ECAS can be applied to measurements of any ultraspectral instrument and any retrieval scheme with an associated RTM. In this paper, ECAS is described and demonstrated with measurements from the Infrared Atmospheric Sounding Interferometer (IASI) on the METOP-A satellite.
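
    At its core, such an error budget is bookkeeping over many simulated retrievals: difference the retrieved values against the truth used to generate the radiances, then report the systematic (bias) and random (standard deviation) components in each domain. A schematic stand-in for that bookkeeping, not the ECAS code itself:

```python
import numpy as np

rng = np.random.default_rng(0)

# Schematic stand-in for an error-budget evaluation: over many simulated
# cases, difference the retrieved values against the truth used to
# generate them, in either domain (radiance or geophysical parameter).
truth = rng.normal(280.0, 10.0, size=1000)            # e.g., temperature, K
retrieved = truth + rng.normal(0.2, 1.5, size=1000)   # retrieval: bias + noise

diff = retrieved - truth
print(f"bias (systematic): {diff.mean():+.2f} K")
print(f"std dev (random):  {diff.std(ddof=1):.2f} K")
```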

  8. Ultraspectral Sounding Retrieval Error Budget and Estimation

    NASA Technical Reports Server (NTRS)

    Zhou, Daniel K.; Larar, Allen M.; Liu, Xu; Smith, William L.; Strow, L. Larrabee; Yang, Ping

    2011-01-01

    The ultraspectral infrared radiances obtained from satellite observations provide atmospheric, surface, and/or cloud information. These measurements of the thermodynamic state are intended to initialize weather and climate models. Great effort has been given to retrieving and validating these atmospheric, surface, and/or cloud properties. The Error Consistency Analysis Scheme (ECAS) has been developed to estimate the error budget, through fast radiative transfer model (RTM) forward and inverse calculations, in terms of the absolute values and standard deviations of differences in both the spectral radiance and retrieved geophysical parameter domains. The retrieval error is assessed through ECAS without the assistance of other independent measurements such as radiosonde data. ECAS re-evaluates instrument random noise and establishes the link between radiometric accuracy and retrieved geophysical parameter accuracy. ECAS can be applied to measurements of any ultraspectral instrument and any retrieval scheme with an associated RTM. In this paper, ECAS is described and demonstrated with measurements from the Infrared Atmospheric Sounding Interferometer (IASI) on the METOP-A satellite.

  9. Lifetime prediction and reliability estimation methodology for Stirling-type pulse tube refrigerators by gaseous contamination accelerated degradation testing

    NASA Astrophysics Data System (ADS)

    Wan, Fubin; Tan, Yuanyuan; Jiang, Zhenhua; Chen, Xun; Wu, Yinong; Zhao, Peng

    2017-12-01

    Lifetime and reliability are the two performance parameters of premium importance for modern space Stirling-type pulse tube refrigerators (SPTRs), which are required to operate in excess of 10 years. Demonstrating these parameters provides a significant challenge. This paper proposes a lifetime prediction and reliability estimation method that utilizes accelerated degradation testing (ADT) for SPTRs related to gaseous contamination failure. The method was experimentally validated via three groups of gaseous contamination ADT. First, a performance degradation model based on the mechanism of contamination failure and the material outgassing characteristics of SPTRs was established. Next, a preliminary test was performed to determine whether the mechanism of contamination failure of the SPTRs during ADT is consistent with that under normal life testing. Subsequently, the experimental program of ADT was designed for SPTRs. Then, three groups of gaseous contamination ADT were performed at elevated ambient temperatures of 40 °C, 50 °C, and 60 °C, respectively, and the estimated lifetimes of the SPTRs under normal conditions were obtained through an acceleration model (the Arrhenius model). The results show a good fit of the degradation model to the experimental data. Finally, we obtained the reliability estimate of the SPTRs using the Weibull distribution. The proposed methodology enables the reliability of SPTRs designed for more than 10 years of operation to be estimated in less than one year of testing.
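
    Mapping a test at elevated temperature back to use conditions rests on the Arrhenius acceleration factor AF = exp[(Ea/k)(1/T_use - 1/T_stress)]. A minimal sketch, with an assumed activation energy and use temperature (illustrative values, not the paper's fitted parameters):

```python
import math

K_BOLTZMANN_EV = 8.617e-5   # Boltzmann constant, eV/K

def acceleration_factor(ea_ev, t_use_c, t_stress_c):
    """Arrhenius acceleration factor between use and stress temperatures."""
    t_use = t_use_c + 273.15
    t_stress = t_stress_c + 273.15
    return math.exp((ea_ev / K_BOLTZMANN_EV) * (1.0 / t_use - 1.0 / t_stress))

# Assumed values for illustration (not the paper's fitted parameters):
ea = 0.7        # activation energy, eV
t_use = 20.0    # normal ambient temperature, deg C

for t_stress in (40.0, 50.0, 60.0):
    af = acceleration_factor(ea, t_use, t_stress)
    print(f"{t_stress:.0f} C: AF = {af:5.1f} -> 1 test year ~ {af:5.1f} years of use")
```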

  10. Robust estimation for partially linear models with large-dimensional covariates

    PubMed Central

    Zhu, LiPing; Li, RunZe; Cui, HengJian

    2014-01-01

    We are concerned with robust estimation procedures to estimate the parameters in partially linear models with large-dimensional covariates. To enhance interpretability, we suggest implementing a nonconcave regularization method in the robust estimation procedure to select important covariates from the linear component. We establish consistency for both the linear and the nonlinear components when the covariate dimension diverges at the rate of o(n), where n is the sample size. We show that the robust estimate of the linear component performs asymptotically as well as its oracle counterpart, which assumes the baseline function and the unimportant covariates to be known a priori. With a consistent estimator of the linear component, we estimate the nonparametric component by a robust local linear regression. It is proved that the robust estimate of the nonlinear component performs asymptotically as well as if the linear component were known in advance. Comprehensive simulation studies are carried out and an application is presented to examine the finite-sample performance of the proposed procedures. PMID:24955087

  11. Dosimetric Consistency of Co-60 Teletherapy Unit- a ten years Study

    PubMed Central

    Baba, Misba H; Mohib-ul-Haq, M.; Khan, Aijaz A.

    2013-01-01

    Objective The goal of radiation standards and dosimetry is to ensure that the output of a teletherapy unit is within ±2% of the stated value and that the outputs of the treatment dose calculation methods are within ±5%. In the present paper, we studied the dosimetry of the Cobalt-60 (Co-60) teletherapy unit at Sher-I-Kashmir Institute of Medical Sciences (SKIMS) over the last 10 years. Radioactivity is the phenomenon of disintegration of unstable nuclides called radionuclides. Among these radionuclides, Cobalt-60, incorporated in the telecobalt unit, is commonly used in the therapeutic treatment of cancer. Cobalt-60, being unstable, decays continuously into Ni-60 with a half-life of 5.27 years, resulting in a decrease in its activity and hence its dose rate (output). It is, therefore, mandatory to measure the dose rate of the Cobalt-60 source regularly so that the patient receives the same dose every time, as prescribed by the radiation oncologist. Underdosage may lead to unsatisfactory treatment of the cancer, and overdosage may cause radiation hazards. Our study emphasizes the consistency between the actual output and the output obtained using the decay method. Methodology The present study calculates the actual dose rate of the Co-60 teletherapy unit by the two techniques used for external beam radiotherapy of various cancers, i.e., source-to-surface distance (SSD) and source-to-axis distance (SAD), using the standard methods. A year-wise comparison has then been made between the average actual dosimetric output (dose rate) and the average expected output values (obtained using the decay method for Co-60). Results The present study shows that there is consistency between the average output (dose rate) obtained by actual dosimetry and the expected output values obtained using the decay method. The values obtained by actual dosimetry are within ±2% of the expected values. Conclusion The results thus obtained in a year-wise comparison of average output by
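
    The decay-method expected output is plain exponential decay from a reference calibration, D(t) = D0 * (1/2)^(t / 5.27 yr) for Co-60. A minimal sketch (the reference dose rate is an assumed illustrative number):

```python
# Expected Co-60 output by the decay method: exponential decay from a
# reference calibration. The reference dose rate below is illustrative.

HALF_LIFE_YEARS = 5.27   # Co-60 half-life

def expected_output(d0, years_elapsed):
    """Dose rate after `years_elapsed` years, given calibration value d0."""
    return d0 * 0.5 ** (years_elapsed / HALF_LIFE_YEARS)

d0 = 150.0  # cGy/min at calibration (assumed for illustration)
for year in range(0, 11):
    print(f"year {year:2d}: {expected_output(d0, year):7.2f} cGy/min")
```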

  12. Dosimetric Consistency of Co-60 Teletherapy Unit- a ten years Study.

    PubMed

    Baba, Misba H; Mohib-Ul-Haq, M; Khan, Aijaz A

    2013-01-01

    The goal of radiation standards and dosimetry is to ensure that the output of a teletherapy unit is within ±2% of the stated value and that the outputs of the treatment dose calculation methods are within ±5%. In the present paper, we studied the dosimetry of the Cobalt-60 (Co-60) teletherapy unit at Sher-I-Kashmir Institute of Medical Sciences (SKIMS) over the last 10 years. Radioactivity is the phenomenon of disintegration of unstable nuclides called radionuclides. Among these radionuclides, Cobalt-60, incorporated in the telecobalt unit, is commonly used in the therapeutic treatment of cancer. Cobalt-60, being unstable, decays continuously into Ni-60 with a half-life of 5.27 years, resulting in a decrease in its activity and hence its dose rate (output). It is, therefore, mandatory to measure the dose rate of the Cobalt-60 source regularly so that the patient receives the same dose every time, as prescribed by the radiation oncologist. Underdosage may lead to unsatisfactory treatment of the cancer, and overdosage may cause radiation hazards. Our study emphasizes the consistency between the actual output and the output obtained using the decay method. The methodology involved calculating the actual dose rate of the Co-60 teletherapy unit by the two techniques used for external beam radiotherapy of various cancers, i.e., source-to-surface distance (SSD) and source-to-axis distance (SAD), using the standard methods. A year-wise comparison has then been made between the average actual dosimetric output (dose rate) and the average expected output values (obtained using the decay method for Co-60). The present study shows that there is consistency between the average output (dose rate) obtained by actual dosimetry and the expected output values obtained using the decay method. The values obtained by actual dosimetry are within ±2% of the expected values. The results thus obtained in a year-wise comparison of average output by actual dosimetry done regularly as a part of

  13. Outgassing Total Mass Loss Obtained with Micro-CVCM and Other Vacuum Systems

    NASA Technical Reports Server (NTRS)

    Scialdone, John; Isaac, Peggy; Clatterbuck, Carroll; Hunkeler, Ronald; Powers, Edward I. (Technical Monitor)

    2000-01-01

    Several instruments, including the Cahn microbalance, the Knudsen cell, the micro-CVCM, and the vacuum thermogravimetric analyzer (TGA), were used in testing a graphite epoxy (GR/EP) composite that is proposed for use as a rigidizing element of an inflatable deployment system. This GR/EP will be cured in situ. The purpose of this testing is to estimate the gaseous production resulting from the curing of the GR/EP composite, to predict the resulting pressure, and to calculate the required venting. Every test was conducted under vacuum at 125 °C for 24 hours. Upon comparison of the results, the ASTM E-595 was noted to have given readings consistently lower than those obtained using the other instruments, which otherwise provided similar results. The GR/EP was tested using several different geometric arrangements. This paper describes the analysis evaluating the molecular and continuum flow of the outgassing products issuing from the exit port of the ASTM E-595 system. The effective flow conductance provided by the physical dimensions of the vent passage of the ASTM E-595 system and that of the material sample, among other factors, were investigated to explain the reduced amount of outgassing released during the 24-hour test period.

  14. Outgassing Total Mass Loss Obtained with Micro-CVCM and other Vacuum Systems

    NASA Technical Reports Server (NTRS)

    Scialdone, John J.; Isaac, Peggy A.; Clatterbuck, Carroll H.; Hunkeler, Ronald E.; Powers, Edward I. (Technical Monitor)

    2000-01-01

    Several instruments, including the Cahn microbalance, the Knudsen cell, the micro-CVCM, and the vacuum thermogravimetric analyzer (TGA), were used in testing a graphite epoxy (GR/EP) composite that is proposed for use as a rigidizing element of an inflatable deployment system. This GR/EP will be cured in situ. The purpose of this testing is to estimate the gaseous production resulting from the curing of the GR/EP composite, to predict the resulting pressure, and to calculate the required venting. Every test was conducted under vacuum at 125 °C for 24 hours. Upon comparison of the results, the ASTM E-595 was noted to have given readings consistently lower than those obtained using the other instruments, which otherwise provided similar results. The GR/EP was tested using several different geometric arrangements. This paper describes the analysis evaluating the molecular and continuum flow of the outgassing products issuing from the exit port of the ASTM E-595 system. The effective flow conductance provided by the physical dimensions of the vent passage of the ASTM E-595 system and that of the material sample, among other factors, were investigated to explain the reduced amount of outgassing released during the 24-hour test period.

  15. Branch-Based Model for the Diameters of the Pulmonary Airways: Accounting for Departures From Self-Consistency and Registration Errors

    PubMed Central

    Neradilek, Moni B.; Polissar, Nayak L.; Einstein, Daniel R.; Glenny, Robb W.; Minard, Kevin R.; Carson, James P.; Jiao, Xiangmin; Jacob, Richard E.; Cox, Timothy C.; Postlethwait, Edward M.; Corley, Richard A.

    2017-01-01

    We examine a previously published branch-based approach for modeling airway diameters that is predicated on the assumption of self-consistency across all levels of the tree. We mathematically formulate this assumption, propose a method to test it, and develop a more general model to be used when the assumption is violated. We discuss the effect of measurement error on the estimated models and propose methods that take account of this error. The methods are illustrated on data from MRI and CT images of silicone casts of two rats, two normal monkeys, and one ozone-exposed monkey. Our results showed substantial departures from self-consistency in all five subjects. When departures from self-consistency exist, we do not recommend using the self-consistency model, even as an approximation, as we have shown that it is likely to lead to an incorrect representation of the diameter geometry. The new variance model can be used instead. Measurement error has an important impact on the estimated morphometry models and needs to be addressed in the analysis. PMID:22528468

  16. Ring profiler: a new method for estimating tree-ring density for improved estimates of carbon storage

    Treesearch

    David W. Vahey; C. Tim Scott; J.Y. Zhu; Kenneth E. Skog

    2012-01-01

    Methods for estimating present and future carbon storage in trees and forests rely on measurements or estimates of tree volume or volume growth multiplied by specific gravity. Wood density can vary by tree ring and height in a tree. If data on density by tree ring could be obtained and linked to tree size and stand characteristics, it would be possible to more...

  17. Estimating healthcare resource use associated with the treatment of metastatic melanoma in eight countries.

    PubMed

    McKendrick, Jan; Gijsen, Merel; Quinn, Casey; Barber, Beth; Zhao, Zhongyun

    2016-06-01

    Objectives Studies reporting healthcare resource use (HRU) for melanoma, one of the most costly cancers to treat, are limited. Using consistent, robust methodology, this study estimated HRU associated with the treatment of metastatic melanoma in eight countries. Methods Using published literature and clinician input, treatment phases were identified: active systemic treatment (pre-progression); disease progression; best supportive care (BSC)/palliative care; and terminal care. HRU elements were identified for each phase, and estimates of the magnitude and frequency of use in clinical practice were obtained through country-specific Delphi panels comprising healthcare professionals with experience in oncology (n = 8). Results Medical oncologists are the key care providers for patients with metastatic melanoma, although in Germany dermato-oncologists also lead care. During the active systemic treatment phase, each patient was estimated to require 0.83-2 consultations with a medical oncologist per month across countries; the median number of such assessments in 3 months was highest in Canada (range = 3.5-5) and lowest in France, the Netherlands and Spain (1). Resource use during the disease progression phase was intensive and similar across countries: all patients were estimated to consult with medical oncologists and 10-40% with a radiation oncologist; up to 40% were estimated to require a brain MRI scan. During the BSC/palliative care phase, all patients were estimated to consult with medical oncologists, and most to consult with a primary care physician (40-100%). Limitations Panelists were from centers of excellence, thus results may not reflect care within smaller hospitals; data obtained from experts may be less variable than data from broader clinical practice. Treatments for metastatic melanoma are continually emerging, thus some elements of our work could be superseded. Conclusions HRU estimates were substantial and varied across countries for some

  18. A shock-capturing SPH scheme based on adaptive kernel estimation

    NASA Astrophysics Data System (ADS)

    Sigalotti, Leonardo Di G.; López, Hender; Donoso, Arnaldo; Sira, Eloy; Klapp, Jaime

    2006-02-01

    Here we report a method that converts standard smoothed particle hydrodynamics (SPH) into a working shock-capturing scheme without relying on solutions to the Riemann problem. Unlike existing adaptive SPH simulations, the present scheme is based on an adaptive kernel estimation of the density, which combines intrinsic features of both the kernel and nearest neighbor approaches in a way that the amount of smoothing required in low-density regions is effectively controlled. Symmetrized SPH representations of the gas dynamic equations along with the usual kernel summation for the density are used to guarantee variational consistency. Implementation of the adaptive kernel estimation involves a very simple procedure and allows for a unique scheme that handles strong shocks and rarefactions the same way. Since it represents a general improvement of the integral interpolation on scattered data, it is also applicable to other fluid-dynamic models. When the method is applied to supersonic compressible flows with sharp discontinuities, as in the classical one-dimensional shock-tube problem and its variants, the accuracy of the results is comparable, and in most cases superior, to that obtained from high quality Godunov-type methods and SPH formulations based on Riemann solutions. The extension of the method to two- and three-space dimensions is straightforward. In particular, for the two-dimensional cylindrical Noh's shock implosion and Sedov point explosion problems the present scheme produces much better results than those obtained with conventional SPH codes.
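
    The density estimate at the heart of any such scheme is a kernel summation with a per-particle smoothing length; letting that length track the local particle spacing is the simplest form of the adaptivity described above. A minimal 1D sketch, illustrative rather than the authors' scheme:

```python
import numpy as np

def w_cubic_1d(q):
    """1D cubic spline kernel with support 2h (normalization 2/3 per unit h)."""
    w = np.zeros_like(q)
    near, far = q < 1.0, (q >= 1.0) & (q < 2.0)
    w[near] = 1.0 - 1.5 * q[near] ** 2 + 0.75 * q[near] ** 3
    w[far] = 0.25 * (2.0 - q[far]) ** 3
    return (2.0 / 3.0) * w

def sph_density(x, m, h):
    """Kernel-summation density rho_i = sum_j m_j W(|x_i - x_j|, h_i)."""
    rho = np.empty_like(x)
    for i in range(x.size):
        q = np.abs(x - x[i]) / h[i]
        rho[i] = np.sum(m * w_cubic_1d(q)) / h[i]
    return rho

# Particles crowded on the left, sparse on the right.
x = np.sort(np.concatenate([np.linspace(0.0, 1.0, 80, endpoint=False),
                            np.linspace(1.0, 4.0, 20)]))
m = np.full(x.shape, 1.0 / x.size)
h = 1.3 * np.gradient(x)      # adaptive choice: h_i tracks local spacing
rho = sph_density(x, m, h)
print(rho.min(), rho.max())   # higher density where particles crowd
```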

  19. Reexamination of optimal quantum state estimation of pure states

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hayashi, A.; Hashimoto, T.; Horibe, M.

    2005-09-15

    A direct derivation is given for the optimal mean fidelity of quantum state estimation of a d-dimensional unknown pure state with its N copies given as input, which was first obtained by Hayashi in terms of an infinite set of covariant positive operator valued measures (POVMs) and by Bruss and Macchiavello establishing a connection to optimal quantum cloning. An explicit condition on POVM measurement operators for optimal estimators is obtained, by which we construct optimal estimators with finite POVMs using exact quadratures on a hypersphere. These finite optimal estimators are not generally universal, where universality means the fidelity is independent of input states. However, any optimal estimator with finite POVM for M(>N) copies is universal if it is used for N copies as input.
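
    For reference, the optimal mean fidelity in this setting has the well-known closed form F = (N + 1)/(N + d) for N copies of a pure state in dimension d; a quick check of a few values:

```python
from fractions import Fraction

def optimal_fidelity(n_copies, dim):
    """Known optimal mean fidelity (N+1)/(N+d) for pure-state estimation."""
    return Fraction(n_copies + 1, n_copies + dim)

# Qubit case (d=2): 2/3 for a single copy, approaching 1 as N grows.
for n in (1, 2, 10, 100):
    f = optimal_fidelity(n, 2)
    print(f"N={n:3d}: F = {f} = {float(f):.4f}")
```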

  20. A Kinematically Consistent Two-Point Correlation Function

    NASA Technical Reports Server (NTRS)

    Ristorcelli, J. R.

    1998-01-01

    A simple kinematically consistent expression for the longitudinal two-point correlation function related to both the integral length scale and the Taylor microscale is obtained. On the inner scale, in a region of width inversely proportional to the turbulent Reynolds number, the function has the appropriate curvature at the origin. The expression for the two-point correlation is related to the nonlinear cascade rate, or dissipation epsilon, a quantity that is carried as part of a typical single-point turbulence closure simulation. Constructing an expression for the two-point correlation whose curvature at the origin is the Taylor microscale incorporates one of the fundamental quantities characterizing turbulence, epsilon, into a model for the two-point correlation function. The integral of the function also gives, as is required, an outer integral length scale of the turbulence that is independent of viscosity. The proposed expression is obtained by kinematic arguments; the intention is to produce a practically applicable expression in terms of simple elementary functions that allow an analytical evaluation, by asymptotic methods, of diverse functionals relevant to single-point turbulence closures. Using the expression devised, an example is given of the asymptotic method by which functionals of the two-point correlation can be evaluated.

  1. Self consistent field theory of virus assembly

    NASA Astrophysics Data System (ADS)

    Li, Siyu; Orland, Henri; Zandi, Roya

    2018-04-01

    The ground state dominance approximation (GSDA) has been extensively used to study the assembly of viral shells. In this work we employ self-consistent field theory (SCFT) to investigate the adsorption of RNA onto positively charged spherical viral shells and examine the conditions under which GSDA does not apply and SCFT has to be used to obtain a reliable solution. We find that there are two regimes in which GSDA does work: first, when the genomic RNA length is long enough compared to the capsid radius, and second, when the interaction between the genome and capsid is so strong that the genome is essentially localized next to the wall. We find that when the RNA is distributed more or less uniformly in the shell, regardless of the length of the RNA, GSDA is not a good approximation. We observe that as the polymer-shell interaction becomes stronger, the energy gap between the ground state and the first excited state increases, and thus GSDA becomes a better approximation. We also present results for the genome persistence length obtained through the tangent-tangent correlation length and show that it is zero in the case of GSDA but equal to the inverse of the energy gap when using SCFT.

  2. Self-consistent theory of nanodomain formation on non-polar surfaces of ferroelectrics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Morozovska, Anna N.; Obukhovskii, Vyacheslav; Fomichov, Evhen

    2016-04-28

    We propose a self-consistent theoretical approach capable of describing the features of the anisotropic nanodomain formation induced by the strongly inhomogeneous electric field of a charged scanning probe microscopy tip on non-polar cuts of ferroelectrics. We find that the threshold field, previously regarded as an isotropic parameter, is an anisotropic function that is specified by the polar properties and lattice pinning anisotropy of a given ferroelectric in a self-consistent way. The proposed method for calculating the anisotropic threshold field is not material specific; thus the field should be anisotropic in all ferroelectrics with spontaneous polarization anisotropy along the main crystallographic directions. The most evident examples are uniaxial ferroelectrics, layered ferroelectric perovskites, and low-symmetry incommensurate ferroelectrics. The obtained results quantitatively describe the severalfold differences in nanodomain length experimentally observed on the X and Y cuts of LiNbO3 and can give insight into the anisotropic dynamics of nanoscale polarization reversal in strongly inhomogeneous electric fields.

  3. Empirical Bayes Gaussian likelihood estimation of exposure distributions from pooled samples in human biomonitoring.

    PubMed

    Li, Xiang; Kuk, Anthony Y C; Xu, Jinfeng

    2014-12-10

    Human biomonitoring of exposure to environmental chemicals is important. Individual monitoring is not viable because of low individual exposure level or insufficient volume of materials and the prohibitive cost of taking measurements from many subjects. Pooling of samples is an efficient and cost-effective way to collect data. Estimation is, however, complicated as individual values within each pool are not observed but are only known up to their average or weighted average. The distribution of such averages is intractable when the individual measurements are lognormally distributed, which is a common assumption. We propose to replace the intractable distribution of the pool averages by a Gaussian likelihood to obtain parameter estimates. If the pool size is large, this method produces statistically efficient estimates, but regardless of pool size, the method yields consistent estimates as the number of pools increases. An empirical Bayes (EB) Gaussian likelihood approach, as well as its Bayesian analog, is developed to pool information from various demographic groups by using a mixed-effect formulation. We also discuss methods to estimate the underlying mean-variance relationship and to select a good model for the means, which can be incorporated into the proposed EB or Bayes framework. By borrowing strength across groups, the EB estimator is more efficient than the individual group-specific estimator. Simulation results show that the EB Gaussian likelihood estimates outperform a previous method proposed for the National Health and Nutrition Examination Surveys with much smaller bias and better coverage in interval estimation, especially after correction of bias. Copyright © 2014 John Wiley & Sons, Ltd.
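
    The computational core is to treat each observed pool mean as approximately Gaussian, with mean and variance implied by the lognormal model, and to maximize that likelihood. A minimal sketch under simplifying assumptions (equal pool sizes, a single group, no covariates; not the authors' full empirical Bayes machinery):

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)

# Simulated pooled biomonitoring data: individual exposures are lognormal,
# but only the mean of each pool of k individuals is observed.
mu_true, sigma_true, k, n_pools = 1.0, 0.5, 8, 200
pools = rng.lognormal(mu_true, sigma_true, size=(n_pools, k)).mean(axis=1)

def neg_gauss_loglik(params):
    """Gaussian likelihood for pool means, moments from the lognormal model."""
    mu, log_sigma = params
    sigma = np.exp(log_sigma)
    m = np.exp(mu + 0.5 * sigma ** 2)          # individual-level mean
    v = (np.exp(sigma ** 2) - 1.0) * m ** 2    # individual-level variance
    pool_var = v / k                           # variance of a pool mean
    return 0.5 * np.sum(np.log(2.0 * np.pi * pool_var)
                        + (pools - m) ** 2 / pool_var)

fit = minimize(neg_gauss_loglik, x0=np.array([0.0, 0.0]))
print("mu_hat =", fit.x[0], "sigma_hat =", np.exp(fit.x[1]))
```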

  4. Estimation of Temporal Gait Parameters Using a Human Body Electrostatic Sensing-Based Method.

    PubMed

    Li, Mengxuan; Li, Pengfei; Tian, Shanshan; Tang, Kai; Chen, Xi

    2018-05-28

    Accurate estimation of gait parameters is essential for obtaining quantitative information on motor deficits in Parkinson's disease and other neurodegenerative diseases, which helps determine disease progression and therapeutic interventions. Due to the demand for high accuracy, unobtrusive measurement methods such as optical motion capture systems, foot pressure plates, and other systems have been commonly used in clinical environments. However, the high cost of existing lab-based methods greatly hinders their wider usage, especially in developing countries. In this study, we present a low-cost, noncontact, and accurate method for estimating temporal gait parameters by sensing and analyzing the electrostatic field generated by human foot stepping. The proposed method achieved an average 97% accuracy on gait phase detection and was further validated by comparison to a foot pressure system in 10 healthy subjects. The two sets of results were compared using the Pearson coefficient r and showed excellent consistency (r = 0.99, p < 0.05). The repeatability of the proposed method was assessed between days using intraclass correlation coefficients (ICC), and showed good test-retest reliability (ICC = 0.87, p < 0.01). The proposed method could be an affordable and accurate tool to measure temporal gait parameters in hospital laboratories and in patients' home environments.

  5. From Nothing to Something II: Nonlinear Systems via Consistent Correlated Bang

    NASA Astrophysics Data System (ADS)

    Lou, Sen-Yue

    2017-06-01

    The Chinese ancient sage Laozi said that everything comes from "nothing". In the first letter (Chin. Phys. Lett. 30 (2013) 080202), infinitely many discrete integrable systems were obtained from "nothing" via simple principles (Dao). In this second letter, a new idea, the consistent correlated bang, is introduced to obtain nonlinear dynamic systems, including some integrable ones such as the continuous nonlinear Schrödinger equation (NLS), the (potential) Korteweg-de Vries (KdV) equation, the (potential) Kadomtsev-Petviashvili (KP) equation and the sine-Gordon (sG) equation. These nonlinear systems are derived from nothing via suitable "Dao": the shifted parity, the charge conjugate, the delayed time reversal, the shifted exchange, the shifted-parity-rotation and so on.

  6. Is Active Tectonics on Madagascar Consistent with Somalian Plate Kinematics?

    NASA Astrophysics Data System (ADS)

    Stamps, D. S.; Kreemer, C.; Rajaonarison, T. A.

    2017-12-01

    The East African Rift System (EARS) actively breaks apart the Nubian and Somalian tectonic plates. Madagascar finds itself at the easternmost boundary of the EARS, between the Rovuma block, Lwandle plate, and the Somalian plate. Earthquake focal mechanisms and N-S oriented fault structures on the continental island suggest that Madagascar is experiencing east-west oriented extension. However, some previous plate kinematic studies indicate minor compressional strains across Madagascar. This inconsistency may be due to uncertainties in Somalian plate rotation. Past estimates of the rotation of the Somalian plate suffered from a poor coverage of GPS stations, but some important new stations are now available for a re-evaluation. In this work, we revise the kinematics of the Somalian plate. We first calculate a new GPS velocity solution and perform block kinematic modeling to evaluate the Somalian plate rotation. We then estimate new Somalia-Rovuma and Somalia-Lwandle relative motions across Madagascar and evaluate whether they are consistent with GPS measurements made on the island itself, as well as with other kinematic indicators.

  7. Fiber Orientation Estimation Guided by a Deep Network.

    PubMed

    Ye, Chuyang; Prince, Jerry L

    2017-09-01

    Diffusion magnetic resonance imaging (dMRI) is currently the only tool for noninvasively imaging the brain's white matter tracts. The fiber orientation (FO) is a key feature computed from dMRI for tract reconstruction. Because the number of FOs in a voxel is usually small, dictionary-based sparse reconstruction has been used to estimate FOs. However, accurate estimation of complex FO configurations in the presence of noise can still be challenging. In this work we explore the use of a deep network for FO estimation in a dictionary-based framework and propose an algorithm named Fiber Orientation Reconstruction guided by a Deep Network (FORDN). FORDN consists of two steps. First, we use a smaller dictionary encoding coarse basis FOs to represent diffusion signals. To estimate the mixture fractions of the dictionary atoms, a deep network is designed to solve the sparse reconstruction problem. Second, the coarse FOs inform the final FO estimation, where a larger dictionary encoding a dense basis of FOs is used and a weighted ℓ1-norm regularized least squares problem is solved to encourage FOs that are consistent with the network output. FORDN was evaluated and compared with state-of-the-art algorithms that estimate FOs using sparse reconstruction on simulated and typical clinical dMRI data. The results demonstrate the benefit of using a deep network for FO estimation.
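
    The second step of such a pipeline solves a weighted ℓ1-regularized least squares problem; a generic way to do so is iterative soft thresholding (ISTA), sketched below on random stand-in data (a generic solver sketch, not the FORDN implementation):

```python
import numpy as np

def weighted_l1_ls(D, y, w, lam=0.1, n_iter=500):
    """Minimize 0.5*||D f - y||^2 + lam * sum_i w_i*|f_i| via ISTA."""
    step = 1.0 / np.linalg.norm(D, 2) ** 2   # 1/L, L = Lipschitz constant
    f = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = D.T @ (D @ f - y)
        z = f - step * grad
        thresh = lam * step * w              # per-coefficient soft threshold
        f = np.sign(z) * np.maximum(np.abs(z) - thresh, 0.0)
    return f

rng = np.random.default_rng(0)
D = rng.normal(size=(64, 200))      # stand-in dictionary of dense basis FOs
f_true = np.zeros(200)
f_true[[10, 50]] = [0.7, 0.3]       # two "fiber orientations" in the voxel
y = D @ f_true + 0.01 * rng.normal(size=64)
w = np.ones(200)                    # weights; FORDN would shrink these near
                                    # the coarse network-estimated FOs
f_hat = weighted_l1_ls(D, y, w)
print(np.flatnonzero(f_hat > 0.05)) # expect indices near 10 and 50
```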

  8. Estimating pregnancy-related mortality from census data: experience in Latin America

    PubMed Central

    Queiroz, Bernardo L; Wong, Laura; Plata, Jorge; Del Popolo, Fabiana; Rosales, Jimmy; Stanton, Cynthia

    2009-01-01

    Abstract Objective To assess the feasibility of measuring maternal mortality in countries lacking accurate birth and death registration through national population censuses by a detailed evaluation of such data for three Latin American countries. Methods We used established demographic techniques, including the general growth balance method, to evaluate the completeness and coverage of the household death data obtained through population censuses. We also compared parity to cumulative fertility data to evaluate the coverage of recent household births. After evaluating the data and adjusting it as necessary, we calculated pregnancy-related mortality ratios (PRMRs) per 100 000 live births and used them to estimate maternal mortality. Findings The PRMRs for Honduras (2001), Nicaragua (2005) and Paraguay (2002) were 168, 95 and 178 per 100 000 live births, respectively. Surprisingly, evaluation of the data for Nicaragua and Paraguay showed overreporting of adult deaths, so a downward adjustment of 20% to 30% was required. In Honduras, the number of adult female deaths required substantial upward adjustment. The number of live births needed minimal adjustment. The adjusted PRMR estimates are broadly consistent with existing estimates of maternal mortality from various data sources, though the comparison varies by source. Conclusion Census data can be used to measure pregnancy-related mortality as a proxy for maternal mortality in countries with poor death registration. However, because our data were obtained from countries with reasonably good statistical systems and literate populations, we cannot be certain the methods employed in the study will be equally useful in more challenging environments. Our data evaluation and adjustment methods worked, but with considerable uncertainty. Ways of quantifying this uncertainty are needed. PMID:19551237
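
    Once the adjusted counts are in hand, the PRMR itself is simple arithmetic: pregnancy-related deaths divided by live births, per 100 000. A toy illustration of the adjust-and-divide step (all counts invented):

```python
# Toy illustration of the PRMR arithmetic; all counts are invented.
reported_pr_deaths = 480      # pregnancy-related deaths from the census
death_adjustment = 0.80       # e.g., scale down 20% for over-reporting
live_births = 200_000         # census live births (assumed complete here)

prmr = (reported_pr_deaths * death_adjustment) / live_births * 100_000
print(f"PRMR: {prmr:.0f} per 100,000 live births")   # -> 192
```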

  9. Regional and seasonal estimates of fractional storm coverage based on station precipitation observations

    NASA Technical Reports Server (NTRS)

    Gong, Gavin; Entekhabi, Dara; Salvucci, Guido D.

    1994-01-01

    Simulated climates using numerical atmospheric general circulation models (GCMs) have been shown to be highly sensitive to the fraction of GCM grid area assumed to be wetted during rain events. The model hydrologic cycle and land-surface water and energy balance are influenced by the parameter κ̄, the dimensionless fractional wetted area of a GCM grid cell. Hourly precipitation records for over 1700 precipitation stations within the contiguous United States are used to obtain observation-based estimates of fractional wetting that exhibit regional and seasonal variations. The spatial parameter κ̄ is estimated from the temporal raingauge data using conditional probability relations. Monthly κ̄ values are estimated for rectangular grid areas over the contiguous United States as defined by the Goddard Institute for Space Studies 4° × 5° GCM. A bias in the estimates is evident due to the unavoidably sparse raingauge network density, which causes some storms to go undetected by the network. This bias is corrected by deriving the probability of a storm escaping detection by the network. A Monte Carlo simulation study is also conducted that consists of synthetically generated storm arrivals over an artificial grid area. It is used to confirm the κ̄ estimation procedure and to test the nature of the bias and its correction. These monthly fractional wetting estimates, based on the analysis of station precipitation data, provide an observational basis for assigning the influential parameter κ̄ in GCM land-surface hydrology parameterizations.

  10. Assessment of the Maximal Split-Half Coefficient to Estimate Reliability

    ERIC Educational Resources Information Center

    Thompson, Barry L.; Green, Samuel B.; Yang, Yanyun

    2010-01-01

    The maximal split-half coefficient is computed by calculating all possible split-half reliability estimates for a scale and then choosing the maximal value as the reliability estimate. Osburn compared the maximal split-half coefficient with 10 other internal consistency estimates of reliability and concluded that it yielded the most consistently…
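
    For a short scale, the coefficient can be computed directly by enumerating all equal splits, correlating the half scores, applying the Spearman-Brown step-up, and keeping the maximum. A small sketch on simulated item data (assuming an even number of items):

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(0)

# Simulated responses: 200 respondents, 6 items sharing a common factor.
factor = rng.normal(size=(200, 1))
items = factor + rng.normal(size=(200, 6))

def max_split_half(x):
    """Maximal Spearman-Brown-corrected split-half coefficient."""
    n = x.shape[1]
    best = -1.0
    for half in combinations(range(n), n // 2):
        a = x[:, list(half)].sum(axis=1)
        b = x[:, [j for j in range(n) if j not in half]].sum(axis=1)
        r = np.corrcoef(a, b)[0, 1]
        best = max(best, 2.0 * r / (1.0 + r))   # Spearman-Brown step-up
    return best

print(f"Maximal split-half reliability: {max_split_half(items):.3f}")
```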

  11. Reliability Estimation of Parameters of Helical Wind Turbine with Vertical Axis

    PubMed Central

    Dumitrascu, Adela-Eliza; Lepadatescu, Badea; Dumitrascu, Dorin-Ion; Nedelcu, Anisor; Ciobanu, Doina Valentina

    2015-01-01

    Due to the prolonged use of wind turbines, they must be characterized by high reliability. This can be achieved through rigorous design, appropriate simulation and testing, and proper construction. The reliability prediction and analysis of these systems leads to identifying the critical components, increasing the operating time, minimizing the failure rate, and minimizing maintenance costs. To estimate the energy produced by the wind turbine, an evaluation approach based on a Monte Carlo simulation model is developed, which enables us to estimate the probabilities associated with the minimum and maximum parameter values. In our simulation process we used triangular distributions. The analysis of the simulation results focuses on the interpretation of the relative frequency histograms and the cumulative distribution curve (ogive diagram), which indicates the probability of obtaining a given daily or annual energy output depending on wind speed. The experimental research consists of estimating the reliability and unreliability functions and the hazard rate of the helical vertical-axis wind turbine designed and patented for the climatic conditions of Romanian regions. Also, the variation of power produced for different wind speeds, the Weibull distribution of wind probability, and the power generated were determined. The analysis of the experimental results indicates that this type of wind turbine is efficient at low wind speeds. PMID:26167524
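
    The Monte Carlo step amounts to drawing the uncertain inputs from triangular distributions, pushing them through a power model, and reading probabilities off the empirical cumulative distribution. A minimal sketch, with placeholder triangular parameters and a toy power curve (not the turbine's measured values):

```python
import numpy as np

rng = np.random.default_rng(0)

# Placeholder values for illustration (not the turbine's measured data):
# daily mean wind speed drawn from a triangular(min, mode, max) law, m/s.
wind = rng.triangular(2.0, 4.5, 9.0, size=100_000)

def daily_energy_kwh(v):
    """Toy power curve: cubic growth in wind speed, capped at rated power."""
    power_kw = np.minimum(0.02 * v ** 3, 3.0)   # 3 kW rated power
    return power_kw * 24.0                      # kWh over one day

energy = daily_energy_kwh(wind)

# Read probabilities off the empirical distribution (the "ogive"):
for q in (0.05, 0.50, 0.95):
    print(f"P{int(q * 100):02d}: {np.quantile(energy, q):6.1f} kWh/day")
```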

  12. Reliability Estimation of Parameters of Helical Wind Turbine with Vertical Axis.

    PubMed

    Dumitrascu, Adela-Eliza; Lepadatescu, Badea; Dumitrascu, Dorin-Ion; Nedelcu, Anisor; Ciobanu, Doina Valentina

    2015-01-01

    Due to the prolonged use of wind turbines, they must be characterized by high reliability. This can be achieved through rigorous design, appropriate simulation and testing, and proper construction. The reliability prediction and analysis of these systems leads to identifying the critical components, increasing the operating time, minimizing the failure rate, and minimizing maintenance costs. To estimate the energy produced by the wind turbine, an evaluation approach based on a Monte Carlo simulation model is developed, which enables us to estimate the probabilities associated with the minimum and maximum parameter values. In our simulation process we used triangular distributions. The analysis of the simulation results focuses on the interpretation of the relative frequency histograms and the cumulative distribution curve (ogive diagram), which indicates the probability of obtaining a given daily or annual energy output depending on wind speed. The experimental research consists of estimating the reliability and unreliability functions and the hazard rate of the helical vertical-axis wind turbine designed and patented for the climatic conditions of Romanian regions. Also, the variation of power produced for different wind speeds, the Weibull distribution of wind probability, and the power generated were determined. The analysis of the experimental results indicates that this type of wind turbine is efficient at low wind speeds.

  13. FIESTA—An R estimation tool for FIA analysts

    Treesearch

    Tracey S. Frescino; Paul L. Patterson; Gretchen G. Moisen; Elizabeth A. Freeman

    2015-01-01

    FIESTA (Forest Inventory ESTimation for Analysis) is a user-friendly R package that was originally developed to support the production of estimates consistent with current tools available for the Forest Inventory and Analysis (FIA) National Program, such as FIDO (Forest Inventory Data Online) and EVALIDator. FIESTA provides an alternative data retrieval and reporting...

  14. Validity and feasibility of a satellite imagery-based method for rapid estimation of displaced populations

    PubMed Central

    2013-01-01

    Background Estimating the size of forcibly displaced populations is key to documenting their plight and allocating sufficient resources to their assistance, but is often not done, particularly during the acute phase of displacement, due to methodological challenges and inaccessibility. In this study, we explored the potential use of very high resolution satellite imagery to remotely estimate forcibly displaced populations. Methods Our method consisted of multiplying (i) manual counts of assumed residential structures on a satellite image and (ii) estimates of the mean number of people per structure (structure occupancy) obtained from publicly available reports. We computed population estimates for 11 sites in Bangladesh, Chad, Democratic Republic of Congo, Ethiopia, Haiti, Kenya and Mozambique (six refugee camps, three internally displaced persons’ camps and two urban neighbourhoods with a mixture of residents and displaced) ranging in population from 1,969 to 90,547, and compared these to “gold standard” reference population figures from census or other robust methods. Results Structure counts by independent analysts were reasonably consistent. Between one and 11 occupancy reports were available per site, and most of these reported people per household rather than per structure. The imagery-based method had a precision relative to reference population figures of <10% in four sites and 10–30% in three sites, but severely over-estimated the population in an Ethiopian camp with implausible occupancy data and in two post-earthquake Haiti sites featuring dense and complex residential layouts. For each site, estimates were produced in 2–5 working person-days. Conclusions In settings with clearly distinguishable individual structures, the remote, imagery-based method had reasonable accuracy for the purposes of rapid estimation, was simple and quick to implement, and would likely perform better in more current applications. However, it may have insurmountable
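
    The estimator itself is a product of two quantities, so a population range propagates directly from the occupancy range. A toy illustration with invented numbers:

```python
# Toy illustration of the structure-count method; all numbers are invented.
structure_count = 3_250              # manual count from the satellite image
occ_low, occ_high = 4.2, 5.6         # people per structure, from reports

print(f"population estimate: {structure_count * occ_low:,.0f} "
      f"to {structure_count * occ_high:,.0f}")
```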

  15. Validity and feasibility of a satellite imagery-based method for rapid estimation of displaced populations.

    PubMed

    Checchi, Francesco; Stewart, Barclay T; Palmer, Jennifer J; Grundy, Chris

    2013-01-23

    Estimating the size of forcibly displaced populations is key to documenting their plight and allocating sufficient resources to their assistance, but is often not done, particularly during the acute phase of displacement, due to methodological challenges and inaccessibility. In this study, we explored the potential use of very high resolution satellite imagery to remotely estimate forcibly displaced populations. Our method consisted of multiplying (i) manual counts of assumed residential structures on a satellite image and (ii) estimates of the mean number of people per structure (structure occupancy) obtained from publicly available reports. We computed population estimates for 11 sites in Bangladesh, Chad, Democratic Republic of Congo, Ethiopia, Haiti, Kenya and Mozambique (six refugee camps, three internally displaced persons' camps and two urban neighbourhoods with a mixture of residents and displaced) ranging in population from 1,969 to 90,547, and compared these to "gold standard" reference population figures from census or other robust methods. Structure counts by independent analysts were reasonably consistent. Between one and 11 occupancy reports were available per site, and most of these reported people per household rather than per structure. The imagery-based method had a precision relative to reference population figures of <10% in four sites and 10-30% in three sites, but severely over-estimated the population in an Ethiopian camp with implausible occupancy data and in two post-earthquake Haiti sites featuring dense and complex residential layouts. For each site, estimates were produced in 2-5 working person-days. In settings with clearly distinguishable individual structures, the remote, imagery-based method had reasonable accuracy for the purposes of rapid estimation, was simple and quick to implement, and would likely perform better in more current applications. However, it may have insurmountable limitations in settings featuring connected buildings or

  16. A Complementary Note to 'A Lag-1 Smoother Approach to System-Error Estimation': The Intrinsic Limitations of Residual Diagnostics

    NASA Technical Reports Server (NTRS)

    Todling, Ricardo

    2015-01-01

    Recently, this author studied an approach to the estimation of system error based on combining observation residuals derived from a sequential filter and a fixed lag-1 smoother. While extending the methodology to a variational formulation, experimenting with simple models, and verifying consistency between the sequential and variational formulations, the limitations of the residual-based approach came clearly to the surface. This note uses the sequential assimilation application to simple nonlinear dynamics to highlight the issue. Only when some of the underlying error statistics are assumed known is it possible to estimate the unknown component. In general, when considerable uncertainties exist in the underlying statistics as a whole, attempts to obtain separate estimates of the various error covariances are bound to lead to misrepresentation of errors. The conclusions are particularly relevant to present-day attempts to estimate observation-error correlations from observation residual statistics. A brief illustration of the issue is also provided by comparing estimates of error correlations derived from a quasi-operational assimilation system and a corresponding Observing System Simulation Experiments framework.

  17. Estimation of porphyrin concentration in the kerogen fraction of shales using high-resolution reflectance spectroscopy

    NASA Technical Reports Server (NTRS)

    Holden, Peter N.; Gaffey, Michael J.; Sundararaman, P.

    1991-01-01

    An interpretive model for estimating porphyrin concentration in bitumen and kerogen from spectral reflectance data in the visible and near-ultraviolet region of the spectrum is derived and calibrated. Preliminary results obtained using the model are consistent with concentrations determined from the bitumen extract and suggest that 40 to 60 percent of the total porphyrin concentration remains in the kerogen after extraction of bitumen from thermally immature samples. The reflectance technique will contribute to porphyrin and kerogen studies and can be applied at its present level of development to several areas of geologic and paleo-oceanographic research.

  18. Set-base dynamical parameter estimation and model invalidation for biochemical reaction networks

    PubMed Central

    2010-01-01

    Background Mathematical modeling and analysis have become, for the study of biological and cellular processes, an important complement to experimental research. However, the structural and quantitative knowledge available for such processes is frequently limited, and measurements are often subject to inherent and possibly large uncertainties. This results in competing model hypotheses, whose kinetic parameters may not be experimentally determinable. Discriminating among these alternatives and estimating their kinetic parameters is crucial to improve the understanding of the considered process, and to benefit from the analytical tools at hand. Results In this work we present a set-based framework that allows us to discriminate between competing model hypotheses and to provide guaranteed outer estimates on the model parameters that are consistent with the (possibly sparse and uncertain) experimental measurements. This is obtained by means of exact proofs of model invalidity that exploit the polynomial/rational structure of biochemical reaction networks, and by making use of an efficient strategy to balance solution accuracy and computational effort. Conclusions The practicability of our approach is illustrated with two case studies. The first study shows that our approach allows wrong model hypotheses to be conclusively ruled out. The second study focuses on parameter estimation and shows that the proposed method allows the global influence of measurement sparsity, uncertainty, and prior knowledge on the parameter estimates to be evaluated. This can help in designing further experiments leading to improved parameter estimates. PMID:20500862

  19. Statistically Self-Consistent and Accurate Errors for SuperDARN Data

    NASA Astrophysics Data System (ADS)

    Reimer, A. S.; Hussey, G. C.; McWilliams, K. A.

    2018-01-01

    The Super Dual Auroral Radar Network (SuperDARN) fitted data products (e.g., spectral width and velocity) are produced using weighted least squares fitting. We present a new First-Principles Fitting Methodology (FPFM) that utilizes the first-principles approach of Reimer et al. (2016) to estimate the variance of the real and imaginary components of the mean autocorrelation function (ACF) lags. SuperDARN ACFs fitted by the FPFM do not rely on ad hoc or empirical criteria. Currently, the weighting used to fit the ACF lags is derived from ad hoc estimates of the ACF lag variance, and an overcautious lag-filtering criterion is used that sometimes discards data containing useful information. In low signal-to-noise (SNR) and/or low signal-to-clutter regimes, the ad hoc variance and empirical criterion lead to underestimated errors for the fitted parameters because the relative contributions of signal, noise, and clutter to the ACF variance are not taken into consideration. The FPFM variance expressions include the contributions of signal, noise, and clutter. The clutter is estimated using the maximal power-based self-clutter estimator derived by Reimer and Hussey (2015). The FPFM was successfully implemented and tested using synthetic ACFs generated with the radar data simulator of Ribeiro, Ponomarenko, et al. (2013). The fitted parameters and fitted-parameter errors produced by the FPFM are compared with those of the current SuperDARN fitting software, FITACF. Using self-consistent statistical analysis, the FPFM produces reliable quantitative measures of the errors of the fitted parameters. For an SNR in excess of 3 dB and a velocity error below 100 m/s, the FPFM produces 52% more data points than FITACF.

  20. Psychophysics with children: Investigating the effects of attentional lapses on threshold estimates.

    PubMed

    Manning, Catherine; Jones, Pete R; Dekker, Tessa M; Pellicano, Elizabeth

    2018-03-26

    When assessing the perceptual abilities of children, researchers tend to use psychophysical techniques designed for use with adults. However, children's poorer attentiveness might bias the threshold estimates obtained by these methods. Here, we obtained speed discrimination threshold estimates in 6- to 7-year-old children in UK Key Stage 1 (KS1), 7- to 9-year-old children in Key Stage 2 (KS2), and adults using three psychophysical procedures: QUEST, a 1-up 2-down Levitt staircase, and Method of Constant Stimuli (MCS). We estimated inattentiveness using responses to "easy" catch trials. As expected, children had higher threshold estimates and made more errors on catch trials than adults. Lower threshold estimates were obtained from psychometric functions fit to the data in the QUEST condition than the MCS and Levitt staircases, and the threshold estimates obtained when fitting a psychometric function to the QUEST data were also lower than when using the QUEST mode. This suggests that threshold estimates cannot be compared directly across methods. Differences between the procedures did not vary significantly with age group. Simulations indicated that inattentiveness biased threshold estimates particularly when threshold estimates were computed as the QUEST mode or the average of staircase reversals. In contrast, thresholds estimated by post-hoc psychometric function fitting were less biased by attentional lapses. Our results suggest that some psychophysical methods are more robust to attentiveness, which has important implications for assessing the perception of children and clinical groups.
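
    The biasing effect of lapses can be reproduced by building a lapse rate into the psychometric function, P(correct) = γ + (1 − γ − λ)F(x), and running a simulated staircase. A minimal sketch with invented parameter values:

```python
import numpy as np

rng = np.random.default_rng(0)

def p_correct(x, threshold, slope, guess=0.5, lapse=0.0):
    """Psychometric function: P = guess + (1 - guess - lapse) * logistic."""
    core = 1.0 / (1.0 + np.exp(-slope * (x - threshold)))
    return guess + (1.0 - guess - lapse) * core

def staircase_threshold(threshold, slope, lapse, n_trials=300, step=0.2):
    """1-up 2-down staircase; threshold taken as the mean of late reversals."""
    x, streak, last_dir, reversals = 2.0, 0, 0, []
    for _ in range(n_trials):
        correct = rng.random() < p_correct(x, threshold, slope, lapse=lapse)
        if correct:
            streak += 1
            if streak < 2:
                continue                # one correct answer: no move yet
            direction, streak = -1, 0   # two in a row: step down
        else:
            direction, streak = +1, 0   # any error: step up
        if last_dir != 0 and direction != last_dir:
            reversals.append(x)         # direction change marks a reversal
        last_dir = direction
        x += direction * step
    return np.mean(reversals[-8:])

# Lapses push the estimated threshold upward; values are illustrative.
for lapse in (0.0, 0.05, 0.15):
    est = np.mean([staircase_threshold(1.0, 4.0, lapse) for _ in range(200)])
    print(f"lapse rate {lapse:.2f}: mean threshold estimate {est:.2f}")
```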