Sample records for obtaining independent estimates

  1. Device-independent point estimation from finite data and its application to device-independent property estimation

    NASA Astrophysics Data System (ADS)

    Lin, Pei-Sheng; Rosset, Denis; Zhang, Yanbao; Bancal, Jean-Daniel; Liang, Yeong-Cherng

    2018-03-01

    The device-independent approach to physics is one where conclusions are drawn directly from the observed correlations between measurement outcomes. In quantum information, this approach allows one to make strong statements about the properties of the underlying systems or devices solely via the observation of Bell-inequality-violating correlations. However, since one can only perform a finite number of experimental trials, statistical fluctuations necessarily accompany any estimation of these correlations. Consequently, an important gap remains between the many theoretical tools developed for the asymptotic scenario and the experimentally obtained raw data. In particular, a physical and concurrently practical way to estimate the underlying quantum distribution has so far remained elusive. Here, we show that the natural analogs of the maximum-likelihood estimation technique and the least-square-error estimation technique in the device-independent context result in point estimates of the true distribution that are physical, unique, computationally tractable, and consistent. They thus serve as sound algorithmic tools allowing one to bridge the aforementioned gap. As an application, we demonstrate how such estimates of the underlying quantum distribution can be used to provide, in certain cases, trustworthy estimates of the amount of entanglement present in the measured system. In stark contrast to existing approaches to device-independent parameter estimations, our estimation does not require the prior knowledge of any Bell inequality tailored for the specific property and the specific distribution of interest.

  2. Maximum likelihood estimation for predicting the probability of obtaining variable shortleaf pine regeneration densities

    Treesearch

    Thomas B. Lynch; Jean Nkouka; Michael M. Huebschmann; James M. Guldin

    2003-01-01

    A logistic equation is the basis for a model that predicts the probability of obtaining regeneration at specified densities. The density of regeneration (trees/ha) for which an estimate of probability is desired can be specified by means of independent variables in the model. When estimating parameters, the dependent variable is set to 1 if the regeneration density (...
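
    The logistic formulation described above can be sketched as follows. This is a generic illustration, not the paper's fitted model: the covariates and coefficient values are purely hypothetical stand-ins.

```python
import math

def regen_probability(x, beta):
    """Logistic model for the probability of obtaining regeneration at a
    specified density. beta[0] is the intercept; the remaining coefficients
    pair with the covariates in x. All values here are hypothetical."""
    z = beta[0] + sum(b * xi for b, xi in zip(beta[1:], x))
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical covariates: overstory basal area (m^2/ha), years since harvest.
p = regen_probability([18.0, 5.0], beta=[-1.2, -0.05, 0.4])
```

    In practice the coefficients would be obtained by maximum likelihood, with the dependent variable set to 1 when the observed regeneration density reaches the specified level.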

  3. Device-independent randomness generation from several Bell estimators

    NASA Astrophysics Data System (ADS)

    Nieto-Silleras, Olmo; Bamps, Cédric; Silman, Jonathan; Pironio, Stefano

    2018-02-01

    Device-independent randomness generation and quantum key distribution protocols rely on a fundamental relation between the non-locality of quantum theory and its random character. This relation is usually expressed in terms of a trade-off between the probability of correctly guessing the outcomes of measurements performed on quantum systems and the amount of violation of a given Bell inequality. However, a more accurate assessment of the randomness produced in Bell experiments can be obtained if the values of several Bell expressions are simultaneously taken into account, or if the full set of probabilities characterizing the behavior of the device is considered. We introduce protocols for device-independent randomness generation, secure against classical side information, that rely on the estimation of an arbitrary number of Bell expressions, or even directly on the experimental frequencies of measurement outcomes. Asymptotically, this results in an optimal generation of randomness from experimental data (as measured by the min-entropy), without having to assume beforehand that the devices violate a specific Bell inequality.
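
    As a minimal illustration of the randomness measure mentioned above, the min-entropy converts a bound on the adversary's guessing probability into certified random bits. The numeric bound below is a made-up example, not one derived from an actual Bell estimate.

```python
import math

def min_entropy(p_guess):
    """Min-entropy in bits, given the maximal probability with which an
    adversary can guess the measurement outcome."""
    return -math.log2(p_guess)

# If the observed Bell data bounded the guessing probability by 0.75
# (hypothetical number), each outcome would certify about 0.415 bits.
bits = min_entropy(0.75)
```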

  4. A fuzzy adaptive network approach to parameter estimation in cases where independent variables come from an exponential distribution

    NASA Astrophysics Data System (ADS)

    Dalkilic, Turkan Erbay; Apaydin, Aysen

    2009-11-01

    In a regression analysis, it is assumed that the observations come from a single class in a data cluster and that the simple functional relationship between the dependent and independent variables can be expressed using the general model Y = f(X) + ε. However, a data cluster may consist of a combination of observations that have different distributions derived from different clusters. When a regression model must be estimated for fuzzy inputs that have been derived from different distributions, the model is termed a 'switching regression model'. Here li indicates the class number of each independent variable and p the number of independent variables [J.R. Jang, ANFIS: Adaptive-network-based fuzzy inference system, IEEE Transactions on Systems, Man and Cybernetics 23 (3) (1993) 665-685; M. Michel, Fuzzy clustering and switching regression models using ambiguity and distance rejects, Fuzzy Sets and Systems 122 (2001) 363-399; E.Q. Richard, A new approach to estimating switching regressions, Journal of the American Statistical Association 67 (338) (1972) 306-310]. In this study, adaptive networks have been used to construct a model formed by gathering the obtained models. There are methods that suggest the class numbers of independent variables heuristically; alternatively, a suggested validity criterion for fuzzy clustering is used here to define the optimal class number of independent variables. For the case in which the independent variables have an exponential distribution, an algorithm is suggested for defining the unknown parameters of the switching regression model and for obtaining the estimated values after an optimal membership function suitable for the exponential distribution has been obtained.

  5. Targeted estimation of nuisance parameters to obtain valid statistical inference.

    PubMed

    van der Laan, Mark J

    2014-01-01

    In order to obtain concrete results, we focus on estimation of the treatment specific mean, controlling for all measured baseline covariates, based on observing independent and identically distributed copies of a random variable consisting of baseline covariates, a subsequently assigned binary treatment, and a final outcome. The statistical model only assumes possible restrictions on the conditional distribution of treatment, given the covariates, the so-called propensity score. Estimators of the treatment specific mean involve estimation of the propensity score and/or estimation of the conditional mean of the outcome, given the treatment and covariates. In order to make these estimators asymptotically unbiased at any data distribution in the statistical model, it is essential to use data-adaptive estimators of these nuisance parameters such as ensemble learning, and specifically super-learning. Because such estimators involve optimal trade-off of bias and variance w.r.t. the infinite dimensional nuisance parameter itself, they result in a sub-optimal bias/variance trade-off for the resulting real-valued estimator of the estimand. We demonstrate that additional targeting of the estimators of these nuisance parameters guarantees that this bias for the estimand is second order and thereby allows us to prove theorems that establish asymptotic linearity of the estimator of the treatment specific mean under regularity conditions. These insights result in novel targeted minimum loss-based estimators (TMLEs) that use ensemble learning with additional targeted bias reduction to construct estimators of the nuisance parameters. In particular, we construct collaborative TMLEs (C-TMLEs) with known influence curve allowing for statistical inference, even though these C-TMLEs involve variable selection for the propensity score based on a criterion that measures how effective the resulting fit of the propensity score is in removing bias for the estimand. 
As a particular special

  6. A Model-independent Photometric Redshift Estimator for Type Ia Supernovae

    NASA Astrophysics Data System (ADS)

    Wang, Yun

    2007-01-01

    The use of Type Ia supernovae (SNe Ia) as cosmological standard candles is fundamental in modern observational cosmology. In this Letter, we derive a simple empirical photometric redshift estimator for SNe Ia using a training set of SNe Ia with multiband (griz) light curves and spectroscopic redshifts obtained by the Supernova Legacy Survey (SNLS). This estimator is analytical and model-independent; it does not use spectral templates. We use all the available SNe Ia from SNLS with near-maximum photometry in griz (a total of 40 SNe Ia) to train and test our photometric redshift estimator. The difference between the estimated redshifts zphot and the spectroscopic redshifts zspec, (zphot-zspec)/(1+zspec), has rms dispersions of 0.031 for the 20 SNe Ia used in the training set, and 0.050 for the 20 SNe Ia not used in the training set. The dispersion is of the same order of magnitude as the flux uncertainties at peak brightness for the SNe Ia. There are no outliers. This photometric redshift estimator should significantly enhance the ability of observers to accurately target high-redshift SNe Ia for spectroscopy in ongoing surveys. It will also dramatically boost the cosmological impact of very large future supernova surveys, such as those planned for the Advanced Liquid-mirror Probe for Astrophysics, Cosmology, and Asteroids (ALPACA) and the Large Synoptic Survey Telescope (LSST).

  7. Probabilities and statistics for backscatter estimates obtained by a scatterometer with applications to new scatterometer design data

    NASA Technical Reports Server (NTRS)

    Pierson, Willard J., Jr.

    1989-01-01

    The values of the normalized radar backscattering cross section (NRCS), σ0, obtained by a scatterometer are random variables whose variance is a known function of the expected value. The probability density function can be obtained from the normal distribution. Models express the expected value as a function of the properties of the waves on the ocean and the winds that generated the waves. Point estimates of the expected value were found from various statistics, given the parameters that define the probability density function for each value. Random intervals were derived with a preassigned probability of containing that value. A statistical test was derived to determine whether or not successive values of σ0 are truly independent. The maximum likelihood estimates for wind speed and direction were found, given a model for backscatter as a function of the properties of the waves on the ocean. These estimates are biased as a result of the terms in the equation that involve natural logarithms, and calculations of the point estimates of the maximum likelihood values are used to show that the contributions of the logarithmic terms are negligible and that the terms can be omitted.

  8. CO Component Estimation Based on the Independent Component Analysis

    NASA Astrophysics Data System (ADS)

    Ichiki, Kiyotomo; Kaji, Ryohei; Yamamoto, Hiroaki; Takeuchi, Tsutomu T.; Fukui, Yasuo

    2014-01-01

    Fast Independent Component Analysis (FastICA) is a component separation algorithm based on the levels of non-Gaussianity. Here we apply FastICA to the component separation problem of the microwave background, including carbon monoxide (CO) line emissions that are found to contaminate the PLANCK High Frequency Instrument (HFI) data. Specifically, we prepare 100 GHz, 143 GHz, and 217 GHz mock microwave sky maps, which include galactic thermal dust, NANTEN CO line, and the cosmic microwave background (CMB) emissions, and then estimate the independent components based on the kurtosis. We find that FastICA can successfully estimate the CO component as the first independent component in our deflation algorithm because its distribution has the largest degree of non-Gaussianity among the components. Thus, FastICA can be a promising technique to extract CO-like components without prior assumptions about their distributions and frequency dependences.
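
    A minimal kurtosis-based FastICA with deflation can be sketched in a few lines of NumPy. This is an illustrative toy on two synthetic non-Gaussian sources, not the authors' pipeline or their map data.

```python
import numpy as np

def fastica_deflation(X, n_components, n_iter=200, seed=0):
    """Toy kurtosis-based FastICA with deflation.
    X: (n_signals, n_samples) array of mixed observations."""
    rng = np.random.default_rng(seed)
    X = X - X.mean(axis=1, keepdims=True)
    # Whitening: rotate and rescale so the data have identity covariance.
    d, E = np.linalg.eigh(np.cov(X))
    Xw = (E / np.sqrt(d)).T @ X
    W = np.zeros((n_components, Xw.shape[0]))
    for i in range(n_components):
        w = rng.standard_normal(Xw.shape[0])
        w /= np.linalg.norm(w)
        for _ in range(n_iter):
            # Fixed-point update with the cubic (kurtosis) nonlinearity.
            w_new = (Xw * (w @ Xw) ** 3).mean(axis=1) - 3.0 * w
            # Deflation: project out the directions already found.
            w_new -= W[:i].T @ (W[:i] @ w_new)
            w_new /= np.linalg.norm(w_new)
            converged = abs(abs(w_new @ w) - 1.0) < 1e-10
            w = w_new
            if converged:
                break
        W[i] = w
    return W @ Xw  # estimated independent components

# Demo: unmix a uniform and a Laplace source (both non-Gaussian).
rng = np.random.default_rng(1)
S = np.vstack([rng.uniform(-1, 1, 5000), rng.laplace(0, 1, 5000)])
A = np.array([[1.0, 0.6], [0.4, 1.0]])
S_hat = fastica_deflation(A @ S, n_components=2)
```

    Each recovered row correlates strongly (up to sign and order) with one of the true sources, which is all ICA can promise.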

  9. Physician-scientists in obstetrics and gynecology: predictors of success in obtaining independent research funding.

    PubMed

    Okeigwe, Ijeoma; Wang, Cynthia; Politch, Joseph A; Heffner, Linda J; Kuohung, Wendy

    2017-07-01

    Obstetrics and gynecology departments receive the smallest amount of National Institutes of Health research funding and have significantly lower application success rates compared to pediatric, internal medicine, and surgery departments. The development of mentored early career development training grants (K awards) has been one strategy implemented by the National Institutes of Health to help aspiring physician-scientists establish independent research careers. The purpose of this study is to describe the cohort of obstetrics and gynecology physician-scientists who were K08, K12, and K23 recipients from 1988 through 2015 and to identify predictors of success in obtaining independent federal funding, as defined by acquisition of an R01, R21, R34, U01, U54, P01, or P50 award. We hypothesized that sex, subspecialty, type of K award, and dual MD/PhD would impact success rates. K08, K12, and K23 recipients from 1988 through 2015 were identified from the National Institutes of Health Research Portfolio Online Reporting Tools, the office of the National Institutes of Health Freedom of Information Act, and the website of the Reproductive Scientist Development Program. Data were stratified by sex, educational degree, subspecialty, and type of K award. Data were analyzed using the Pearson χ² and Fisher exact tests. The Kaplan-Meier estimator was used to determine rates of conversion to independent funding over time. A total of 388 K recipients were identified. Women accounted for 66% of K awards while men accounted for 34%. Among K recipients, 82% were MDs, while 18% were MD/PhDs. K12 awards accounted for 82% of all K awards, while K08 and K23 awards accounted for 10% and 8%, respectively. Subspecialists in maternal-fetal medicine and reproductive endocrinology and infertility received the highest proportion of K awards, followed by generalists and gynecologic oncologists. Altogether, the 3 subspecialty groups accounted for 68% of all K awards.
R01 awards made up the bulk

  10. CO component estimation based on the independent component analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ichiki, Kiyotomo; Kaji, Ryohei; Yamamoto, Hiroaki

    2014-01-01

    Fast Independent Component Analysis (FastICA) is a component separation algorithm based on the levels of non-Gaussianity. Here we apply FastICA to the component separation problem of the microwave background, including carbon monoxide (CO) line emissions that are found to contaminate the PLANCK High Frequency Instrument (HFI) data. Specifically, we prepare 100 GHz, 143 GHz, and 217 GHz mock microwave sky maps, which include galactic thermal dust, NANTEN CO line, and the cosmic microwave background (CMB) emissions, and then estimate the independent components based on the kurtosis. We find that FastICA can successfully estimate the CO component as the first independent component in our deflation algorithm because its distribution has the largest degree of non-Gaussianity among the components. Thus, FastICA can be a promising technique to extract CO-like components without prior assumptions about their distributions and frequency dependences.

  11. Batch Effect Confounding Leads to Strong Bias in Performance Estimates Obtained by Cross-Validation

    PubMed Central

    Delorenzi, Mauro

    2014-01-01

    Background With the large amount of biological data that is currently publicly available, many investigators combine multiple data sets to increase the sample size and potentially also the power of their analyses. However, technical differences (“batch effects”) as well as differences in sample composition between the data sets may significantly affect the ability to draw generalizable conclusions from such studies. Focus The current study focuses on the construction of classifiers, and the use of cross-validation to estimate their performance. In particular, we investigate the impact of batch effects and differences in sample composition between batches on the accuracy of the classification performance estimate obtained via cross-validation. The focus on estimation bias is a main difference compared to previous studies, which have mostly focused on the predictive performance and how it relates to the presence of batch effects. Data We work on simulated data sets. To have realistic intensity distributions, we use real gene expression data as the basis for our simulation. Random samples from this expression matrix are selected and assigned to group 1 (e.g., ‘control’) or group 2 (e.g., ‘treated’). We introduce batch effects and select some features to be differentially expressed between the two groups. We consider several scenarios for our study, most importantly different levels of confounding between groups and batch effects. Methods We focus on well-known classifiers: logistic regression, Support Vector Machines (SVM), k-nearest neighbors (kNN) and Random Forests (RF). Feature selection is performed with the Wilcoxon test or the lasso. Parameter tuning and feature selection, as well as the estimation of the prediction performance of each classifier, is performed within a nested cross-validation scheme. The estimated classification performance is then compared to what is obtained when applying the classifier to independent data. PMID:24967636
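
    The bias mechanism described above is easy to reproduce in a toy simulation: when batch and class are fully confounded, a classifier trained on pure-noise features still looks accurate under cross-validation. The simulation below (plain NumPy, a nearest-centroid classifier) is our own illustrative sketch, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 200, 50

def make_data(confounded):
    """Two classes with NO true class signal; a batch offset is added to
    every feature. When confounded, batch == class, so the batch effect
    mimics a class signal."""
    y = np.repeat([0, 1], n // 2)
    X = rng.standard_normal((n, p))
    batch = y if confounded else rng.integers(0, 2, n)
    X += batch[:, None] * 1.5
    return X, y

def cv_accuracy(X, y, folds=5):
    """Nearest-centroid classifier under k-fold cross-validation."""
    idx = rng.permutation(len(y))
    acc = []
    for f in np.array_split(idx, folds):
        train = np.setdiff1d(idx, f)
        c0 = X[train][y[train] == 0].mean(axis=0)
        c1 = X[train][y[train] == 1].mean(axis=0)
        pred = (np.linalg.norm(X[f] - c1, axis=1)
                < np.linalg.norm(X[f] - c0, axis=1)).astype(int)
        acc.append((pred == y[f]).mean())
    return float(np.mean(acc))

acc_confounded = cv_accuracy(*make_data(confounded=True))
acc_random = cv_accuracy(*make_data(confounded=False))
```

    With full confounding the cross-validated accuracy is near perfect even though the features carry no class information; with batch assigned at random it stays near chance.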

  12. Estimation of Longitudinal Force and Sideslip Angle for Intelligent Four-Wheel Independent Drive Electric Vehicles by Observer Iteration and Information Fusion.

    PubMed

    Chen, Te; Chen, Long; Xu, Xing; Cai, Yingfeng; Jiang, Haobin; Sun, Xiaoqiang

    2018-04-20

    Exact estimation of longitudinal force and sideslip angle is important for the lateral stability and path-following control of four-wheel independent drive electric vehicles. This paper presents an effective method for longitudinal force and sideslip angle estimation by observer iteration and information fusion for four-wheel independent drive electric vehicles. The electric driving wheel model is introduced into the vehicle modeling process and used for longitudinal force estimation; the longitudinal force reconstruction equation is obtained via model decoupling; a Luenberger observer and a high-order sliding mode observer are united for the longitudinal force observer design; and a Kalman filter is applied to restrain the influence of noise. Using the estimated longitudinal force, an estimation strategy is then proposed based on observer iteration and information fusion, in which the Luenberger observer is applied to achieve a transcendental estimation using fewer sensor measurements, the extended Kalman filter is used for a posteriori estimation with higher accuracy, and a fuzzy weight controller is used to enhance the adaptive ability of the observer system. Simulations and experiments are carried out, and the effectiveness of the proposed estimation method is verified.
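
    The Luenberger observer that anchors this estimation chain can be illustrated with a tiny linear example: a copy of the plant model corrected by the measured output error. The matrices below are hypothetical toy values, not the paper's vehicle model.

```python
import numpy as np

# Hypothetical 2-state plant x' = A x with measurement y = C x.
A = np.array([[0.0, 1.0], [0.0, -0.5]])
C = np.array([[1.0, 0.0]])
L = np.array([[2.0], [1.0]])   # gain chosen so that A - L C is stable

dt = 0.001
x = np.array([1.0, 0.0])       # true state, unknown to the observer
x_hat = np.zeros(2)            # observer starts from a wrong guess
for _ in range(20000):         # 20 s of simulated time (Euler steps)
    y = C @ x                  # only the measurement is shared
    # Observer: model copy driven by the output error y - C x_hat.
    x_hat = x_hat + dt * (A @ x_hat + L @ (y - C @ x_hat))
    x = x + dt * (A @ x)
err = float(np.linalg.norm(x - x_hat))
```

    Because the error dynamics follow e' = (A - LC)e, choosing L to place the eigenvalues of A - LC in the left half-plane drives the estimate to the true state.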

  13. Estimation of Longitudinal Force and Sideslip Angle for Intelligent Four-Wheel Independent Drive Electric Vehicles by Observer Iteration and Information Fusion

    PubMed Central

    Chen, Long; Xu, Xing; Cai, Yingfeng; Jiang, Haobin; Sun, Xiaoqiang

    2018-01-01

    Exact estimation of longitudinal force and sideslip angle is important for the lateral stability and path-following control of four-wheel independent drive electric vehicles. This paper presents an effective method for longitudinal force and sideslip angle estimation by observer iteration and information fusion for four-wheel independent drive electric vehicles. The electric driving wheel model is introduced into the vehicle modeling process and used for longitudinal force estimation; the longitudinal force reconstruction equation is obtained via model decoupling; a Luenberger observer and a high-order sliding mode observer are united for the longitudinal force observer design; and a Kalman filter is applied to restrain the influence of noise. Using the estimated longitudinal force, an estimation strategy is then proposed based on observer iteration and information fusion, in which the Luenberger observer is applied to achieve a transcendental estimation using fewer sensor measurements, the extended Kalman filter is used for a posteriori estimation with higher accuracy, and a fuzzy weight controller is used to enhance the adaptive ability of the observer system. Simulations and experiments are carried out, and the effectiveness of the proposed estimation method is verified. PMID:29677124

  14. Independent contrasts and PGLS regression estimators are equivalent.

    PubMed

    Blomberg, Simon P; Lefevre, James G; Wells, Jessie A; Waterhouse, Mary

    2012-05-01

    We prove that the slope parameter of the ordinary least squares regression of phylogenetically independent contrasts (PICs) conducted through the origin is identical to the slope parameter of the method of generalized least squares (GLSs) regression under a Brownian motion model of evolution. This equivalence has several implications: 1. Understanding the structure of the linear model for GLS regression provides insight into when and why phylogeny is important in comparative studies. 2. The limitations of the PIC regression analysis are the same as the limitations of the GLS model. In particular, phylogenetic covariance applies only to the response variable in the regression and the explanatory variable should be regarded as fixed. Calculation of PICs for explanatory variables should be treated as a mathematical idiosyncrasy of the PIC regression algorithm. 3. Since the GLS estimator is the best linear unbiased estimator (BLUE), the slope parameter estimated using PICs is also BLUE. 4. If the slope is estimated using different branch lengths for the explanatory and response variables in the PIC algorithm, the estimator is no longer the BLUE, so this is not recommended. Finally, we discuss whether or not and how to accommodate phylogenetic covariance in regression analyses, particularly in relation to the problem of phylogenetic uncertainty. This discussion is from both frequentist and Bayesian perspectives.
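
    The claimed equivalence is easy to check numerically on a toy tree. The sketch below (our own illustration, with made-up trait values on a three-taxon tree) computes the GLS slope under a Brownian-motion covariance and the through-origin regression slope on Felsenstein's contrasts; the two agree to machine precision.

```python
import numpy as np

# Three-taxon tree ((A:1, B:1):1, C:2); under Brownian motion the
# covariance of two tips is their shared branch length from the root.
V = np.array([[2.0, 1.0, 0.0],
              [1.0, 2.0, 0.0],
              [0.0, 0.0, 2.0]])
x = np.array([1.0, 3.0, 6.0])   # hypothetical trait values for A, B, C
y = np.array([2.0, 5.0, 9.0])

# GLS slope (with intercept): beta = (X' V^-1 X)^-1 X' V^-1 y.
X = np.column_stack([np.ones(3), x])
Vi = np.linalg.inv(V)
beta = np.linalg.solve(X.T @ Vi @ X, X.T @ Vi @ y)
gls_slope = beta[1]

# Felsenstein's independent contrasts on the same tree.
def contrast(a, b, la, lb):
    """Standardized contrast, ancestral estimate, extra branch length."""
    c = (a - b) / np.sqrt(la + lb)
    anc = (a * lb + b * la) / (la + lb)
    extra = la * lb / (la + lb)
    return c, anc, extra

cx1, ax, ex = contrast(x[0], x[1], 1.0, 1.0)
cy1, ay, ey = contrast(y[0], y[1], 1.0, 1.0)
cx2, _, _ = contrast(ax, x[2], 1.0 + ex, 2.0)
cy2, _, _ = contrast(ay, y[2], 1.0 + ey, 2.0)

cx, cy = np.array([cx1, cx2]), np.array([cy1, cy2])
pic_slope = (cx @ cy) / (cx @ cx)   # regression through the origin
```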

  15. Estimating biodiversity of fungi in activated sludge communities using culture-independent methods.

    PubMed

    Evans, Tegan N; Seviour, Robert J

    2012-05-01

    Fungal diversity of communities in several activated sludge plants treating different influent wastes was determined by comparative sequence analyses of their 18S rRNA genes. Methods for DNA extraction and choice of primers for PCR amplification were both optimised using denaturing gradient gel electrophoresis profile patterns. Phylogenetic analysis revealed that the levels of fungal biodiversity in some communities, like those treating paper pulp wastes, were low, and most of the fungi detected in all communities examined were novel uncultured representatives of the major fungal subdivisions, in particular, the newly described clade Cryptomycota. The fungal populations in activated sludge revealed by these culture-independent methods were markedly different to those based on culture-dependent data. Members of the genera Penicillium, Cladosporium, Aspergillus and Mucor, which have been commonly identified in mixed liquor, were not identified in any of these plant communities. Non-fungal eukaryotic 18S rRNA genes were also amplified with the primer sets used. This is the first report where culture-independent methods have been applied to flocculated activated sludge biomass samples to estimate fungal community composition and, as expected, the data obtained gave a markedly different view of their population biodiversity compared to that based on culture-dependent methods.

  16. Space Programs: NASA's Independent Cost Estimating Capability Needs Improvement

    DTIC Science & Technology

    1992-11-01

    United States General Accounting Office, Washington, D.C. ... advisory committee's recommendation to strengthen NASA's independent cost estimating capability. Congress and the executive branch need accurate cost estimates in deciding whether to undertake or continue space programs, which often cost millions or even billions of dollars. In December 1990, the

  17. Efficient mental workload estimation using task-independent EEG features.

    PubMed

    Roy, R N; Charbonnier, S; Campagne, A; Bonnet, S

    2016-04-01

    Mental workload is frequently estimated by EEG-based mental state monitoring systems. Usually, these systems use spectral markers and event-related potentials (ERPs). To our knowledge, no study has directly compared their performance for mental workload assessment, nor evaluated the stability over time of these markers and of the performance of the associated mental workload estimators. This study proposes a comparison of two processing chains, one based on the power in five frequency bands, and one based on ERPs, both including a spatial filtering step (respectively CSP and CCA), an FLDA classification and a 10-fold cross-validation. To get closer to a real-life implementation, spectral markers were extracted from a short window (i.e. towards reactive systems) that did not include any motor activity, and the analyzed ERPs were elicited by a task-independent probe that required a reflex-like answer (i.e. close to the ones required by dead man's vigilance devices). The data were acquired from 20 participants who performed a Sternberg memory task for 90 min (i.e. 2/6 digits to memorize) inside which a simple detection task was inserted. The results were compared both when the testing was performed at the beginning and at the end of the session. Both chains performed significantly better than random; however, the one based on the spectral markers had a low performance (60%) and was not stable over time. Conversely, the ERP-based chain gave very high results (91%) and was stable over time. This study demonstrates that an efficient and temporally stable workload estimation can be achieved using task-independent, spatially filtered ERPs elicited in a minimally intrusive manner.

  18. Efficient mental workload estimation using task-independent EEG features

    NASA Astrophysics Data System (ADS)

    Roy, R. N.; Charbonnier, S.; Campagne, A.; Bonnet, S.

    2016-04-01

    Objective. Mental workload is frequently estimated by EEG-based mental state monitoring systems. Usually, these systems use spectral markers and event-related potentials (ERPs). To our knowledge, no study has directly compared their performance for mental workload assessment, nor evaluated the stability over time of these markers and of the performance of the associated mental workload estimators. This study proposes a comparison of two processing chains, one based on the power in five frequency bands, and one based on ERPs, both including a spatial filtering step (respectively CSP and CCA), an FLDA classification and a 10-fold cross-validation. Approach. To get closer to a real-life implementation, spectral markers were extracted from a short window (i.e. towards reactive systems) that did not include any motor activity, and the analyzed ERPs were elicited by a task-independent probe that required a reflex-like answer (i.e. close to the ones required by dead man's vigilance devices). The data were acquired from 20 participants who performed a Sternberg memory task for 90 min (i.e. 2/6 digits to memorize) inside which a simple detection task was inserted. The results were compared both when the testing was performed at the beginning and at the end of the session. Main results. Both chains performed significantly better than random; however, the one based on the spectral markers had a low performance (60%) and was not stable over time. Conversely, the ERP-based chain gave very high results (91%) and was stable over time. Significance. This study demonstrates that an efficient and temporally stable workload estimation can be achieved using task-independent, spatially filtered ERPs elicited in a minimally intrusive manner.

  19. Improving the quality of parameter estimates obtained from slug tests

    USGS Publications Warehouse

    Butler, J.J.; McElwee, C.D.; Liu, W.

    1996-01-01

    The slug test is one of the most commonly used field methods for obtaining in situ estimates of hydraulic conductivity. Despite its prevalence, this method has received criticism from many quarters in the ground-water community. This criticism emphasizes the poor quality of the estimated parameters, a condition that is primarily a product of the somewhat casual approach that is often employed in slug tests. Recently, the Kansas Geological Survey (KGS) has pursued research directed at improving methods for the performance and analysis of slug tests. Based on extensive theoretical and field research, a series of guidelines have been proposed that should enable the quality of parameter estimates to be improved. The most significant of these guidelines are: (1) three or more slug tests should be performed at each well during a given test period; (2) two or more different initial displacements (Ho) should be used at each well during a test period; (3) the method used to initiate a test should enable the slug to be introduced in a near-instantaneous manner and should allow a good estimate of Ho to be obtained; (4) data-acquisition equipment that enables a large quantity of high-quality data to be collected should be employed; (5) if an estimate of the storage parameter is needed, an observation well other than the test well should be employed; (6) the method chosen for analysis of the slug-test data should be appropriate for site conditions; (7) use of pre- and post-analysis plots should be an integral component of the analysis procedure; and (8) appropriate well construction parameters should be employed. Data from slug tests performed at a number of KGS field sites demonstrate the importance of these guidelines.

  20. Obtaining Reliable Estimates of Ambulatory Physical Activity in People with Parkinson's Disease.

    PubMed

    Paul, Serene S; Ellis, Terry D; Dibble, Leland E; Earhart, Gammon M; Ford, Matthew P; Foreman, K Bo; Cavanaugh, James T

    2016-05-05

    We determined the number of days required, and whether to include weekdays and/or weekends, to obtain reliable measures of ambulatory physical activity in people with Parkinson's disease (PD). Ninety-two persons with PD wore a step activity monitor for seven days. The number of days required to obtain a reliable estimate of daily activity was determined from the mean intraclass correlation (ICC2,1) for all possible combinations of 1-6 consecutive days of monitoring. Two days of monitoring were sufficient to obtain reliable daily activity estimates (ICC2,1 > 0.9). Amount (p = 0.03) but not intensity (p = 0.13) of ambulatory activity was greater on weekdays than weekends. Activity prescription based on amount rather than intensity may be more appropriate for people with PD.

  1. NMR permeability estimators in 'chalk' carbonate rocks obtained under different relaxation times and MICP size scalings

    NASA Astrophysics Data System (ADS)

    Rios, Edmilson Helton; Figueiredo, Irineu; Moss, Adam Keith; Pritchard, Timothy Neil; Glassborow, Brent Anthony; Guedes Domingues, Ana Beatriz; Bagueira de Vasconcellos Azeredo, Rodrigo

    2016-07-01

    The effect of the selection of different nuclear magnetic resonance (NMR) relaxation times for permeability estimation is investigated for a set of fully brine-saturated rocks acquired from Cretaceous carbonate reservoirs in the North Sea and Middle East. Estimators that are obtained from the relaxation times based on the Pythagorean means are compared with estimators that are obtained from the relaxation times based on the concept of a cumulative saturation cut-off. Select portions of the longitudinal (T1) and transverse (T2) relaxation-time distributions are systematically evaluated by applying various cut-offs, analogous to the Winland-Pittman approach for mercury injection capillary pressure (MICP) curves. Finally, different approaches to matching the NMR and MICP distributions using different mean-based scaling factors are validated based on the performance of the related size-scaled estimators. The good results that were obtained demonstrate possible alternatives to the commonly adopted logarithmic mean estimator and reinforce the importance of NMR-MICP integration to improving carbonate permeability estimates.

  2. Calculating weighted estimates of peak streamflow statistics

    USGS Publications Warehouse

    Cohn, Timothy A.; Berenbrock, Charles; Kiang, Julie E.; Mason, Jr., Robert R.

    2012-01-01

    According to the Federal guidelines for flood-frequency estimation, the uncertainty of peak streamflow statistics, such as the 1-percent annual exceedance probability (AEP) flow at a streamgage, can be reduced by combining the at-site estimate with the regional regression estimate to obtain a weighted estimate of the flow statistic. The procedure assumes the estimates are independent, which is reasonable in most practical situations. The purpose of this publication is to describe and make available a method for calculating a weighted estimate from the uncertainty or variance of the two independent estimates.
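    The weighting described here is standard inverse-variance combination of two independent estimates: each estimate is weighted by the other's variance, and the combined variance is the product over the sum. A minimal sketch (function and argument names are ours, not from the USGS publication):

```python
def weighted_estimate(x_site, var_site, x_reg, var_reg):
    """Combine an at-site estimate with a regional regression estimate.

    Inverse-variance weighting: the estimate with the smaller variance
    gets the larger weight, and the combined variance is never larger
    than either input. Assumes the two estimates are independent.
    """
    w = var_reg / (var_site + var_reg)            # weight on the at-site estimate
    x_weighted = w * x_site + (1.0 - w) * x_reg
    var_weighted = var_site * var_reg / (var_site + var_reg)
    return x_weighted, var_weighted
```

    With equal variances the result is the simple average and the variance is halved, matching the intuition that two equally reliable independent estimates are worth twice one.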

  3. Estimating the number of motor units using random sums with independently thinned terms.

    PubMed

    Müller, Samuel; Conforto, Adriana Bastos; Z'graggen, Werner J; Kaelin-Lang, Alain

    2006-07-01

    The problem of estimating the number of motor units N in a muscle is embedded in a general stochastic model using the notion of thinning from point process theory. In the paper, a new moment-type estimator for the number of motor units in a muscle is defined, derived using random sums with independently thinned terms. Asymptotic normality of the estimator is shown, and its practical value is demonstrated with bootstrap and approximate confidence intervals for a data set from a 31-year-old healthy, right-handed female volunteer. Moreover, simulation results are presented, and Monte Carlo-based quantiles, means, and variances are calculated for N ∈ {300, 600, 1000}.

  4. Stability of individual loudness functions obtained by magnitude estimation and production

    NASA Technical Reports Server (NTRS)

    Hellman, R. P.

    1981-01-01

    A correlational analysis of individual magnitude estimation and production exponents at the same frequency is performed, as is an analysis of individual exponents produced in different sessions by the same procedure across frequency (250, 1000, and 3000 Hz). Taken as a whole, the results show that individual exponent differences do not decrease by counterbalancing magnitude estimation with magnitude production and that individual exponent differences remain stable over time despite changes in stimulus frequency. Further results show that although individual magnitude estimation and production exponents do not necessarily obey the .6 power law, it is possible to predict the slope of an equal-sensation function averaged for a group of listeners from individual magnitude estimation and production data. On the assumption that individual listeners with sensorineural hearing also produce stable and reliable magnitude functions, it is also shown that the slope of the loudness-recruitment function measured by magnitude estimation and production can be predicted for individuals with bilateral losses of long duration. Results obtained in normal and pathological ears thus suggest that individual listeners can produce loudness judgements that reveal, although indirectly, the input-output characteristic of the auditory system.

  5. A general probabilistic model for group independent component analysis and its estimation methods

    PubMed Central

    Guo, Ying

    2012-01-01

    Independent component analysis (ICA) has become an important tool for analyzing data from functional magnetic resonance imaging (fMRI) studies. ICA has been successfully applied to single-subject fMRI data. The extension of ICA to group inferences in neuroimaging studies, however, is challenging due to the unavailability of a pre-specified group design matrix and the uncertainty in between-subjects variability in fMRI data. We present a general probabilistic ICA (PICA) model that can accommodate varying group structures of multi-subject spatio-temporal processes. An advantage of the proposed model is that it can flexibly model various types of group structures in different underlying neural source signals and under different experimental conditions in fMRI studies. A maximum likelihood method is used for estimating this general group ICA model. We propose two EM algorithms to obtain the ML estimates. The first is an exact EM algorithm which provides an exact E-step and an explicit noniterative M-step. The second is a variational approximation EM algorithm which is computationally more efficient than the exact EM. In simulation studies, we first compare the performance of the proposed general group PICA model and the existing probabilistic group ICA approach. We then compare the two proposed EM algorithms and show that the variational approximation EM achieves comparable accuracy to the exact EM with significantly less computation time. An fMRI data example is used to illustrate application of the proposed methods. PMID:21517789

  6. Redshift-Independent Distances in the NASA/IPAC Extragalactic Database Surpass 166,000 Estimates for 77,000 Galaxies

    NASA Astrophysics Data System (ADS)

    Steer, Ian

    2017-01-01

    Redshift-independent extragalactic distance estimates are used by researchers to establish the extragalactic distance scale, to underpin estimates of the Hubble constant, and to study peculiar velocities induced by gravitational attractions that perturb the motions of galaxies with respect to the “Hubble flow” of universal expansion. In 2006, the NASA/IPAC Extragalactic Database (NED) began providing users with a comprehensive tabulation of the redshift-independent extragalactic distance estimates published in the astronomical literature since 1980. A decade later, this compendium of distances (NED-D) surpassed 100,000 estimates for 28,000 galaxies, as reported in our recent journal article (Steer et al. 2016). Here, we are pleased to report NED-D has surpassed 166,000 distance estimates for 77,000 galaxies. Visualizations of the growth in data and of the statistical distributions of the most used distance indicators will be presented, along with an overview of the new data responsible for the most recent growth. We conclude with an outline of NED’s current plans to facilitate extragalactic research further by making greater use of redshift-independent distances. Additional information about other extensive updates to NED is presented at this meeting by Mazzarella et al. (2017). NED is operated by and this research is funded by the Jet Propulsion Laboratory, California Institute of Technology, under contract with the National Aeronautics and Space Administration.

  7. Estimates of brown bear abundance on Kodiak Island, Alaska

    USGS Publications Warehouse

    Barnes, V.G.; Smith, R.B.

    1998-01-01

    During 1987-94 we used capture-mark-resight (CMR) methodology and rates of observation (bears/hour and bears/100 km2) of unmarked brown bears (Ursus arctos middendorffi) during intensive aerial surveys (IAS) to estimate abundance of brown bears on Kodiak Island and to establish a baseline for monitoring population trends. CMR estimates were obtained on 3 study areas; density ranged from 216-234 bears/1,000 km2 for independent animals and 292-342 bears/1,000 km2 including dependent offspring. Rates of observation during IAS ranged from 1.4-5.4 independent bears/hour and 2.9-18.0 independent bears/100 km2. Density estimates for independent bears on each IAS area were obtained by dividing the mean number of bears observed during replicate surveys by estimated sightability (based on CMR-derived sightability in areas with similar habitat). Brown bear abundance on 21 geographic units of Kodiak Island and 3 nearby islands was estimated by extrapolation from CMR and IAS data using comparisons of habitat characteristics and sport harvest information. Population estimates for independent and total bears were 1,800 and 2,600, respectively. The CMR and IAS procedures offer alternative means, depending on management objectives and available resources, of measuring population trend of brown bears on Kodiak Island.
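    The IAS density calculation described in this record (mean replicate count divided by sightability, scaled to bears per 1,000 km2) can be sketched as follows; the function name and example numbers are illustrative, not data from the study:

```python
def ias_density(mean_count, sightability, area_km2):
    """Bears per 1,000 km2 from intensive aerial survey (IAS) counts.

    mean_count: mean number of independent bears seen over replicate surveys
    sightability: CMR-derived detection probability (0 < sightability <= 1)
    """
    corrected_abundance = mean_count / sightability   # correct for unseen bears
    return 1000.0 * corrected_abundance / area_km2
```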

  8. Obtaining Cue Rate Estimates for Some Mysticete Species using Existing Data

    DTIC Science & Technology

    2014-09-30

    primary focus is to obtain cue rates for humpback whales (Megaptera novaeangliae) off the California coast and on the PMRF range. To our knowledge, no... humpback whale cue rates have been calculated for these populations. Once a cue rate is estimated for the populations of humpback whales off the...rates for humpback whales on breeding grounds, in addition to average cue rates for other species of mysticete whales. Cue rates of several other

  9. Reliability of fish size estimates obtained from multibeam imaging sonar

    USGS Publications Warehouse

    Hightower, Joseph E.; Magowan, Kevin J.; Brown, Lori M.; Fox, Dewayne A.

    2013-01-01

    Multibeam imaging sonars have considerable potential for use in fisheries surveys because the video-like images are easy to interpret, and they contain information about fish size, shape, and swimming behavior, as well as characteristics of occupied habitats. We examined images obtained using a dual-frequency identification sonar (DIDSON) multibeam sonar for Atlantic sturgeon Acipenser oxyrinchus oxyrinchus, striped bass Morone saxatilis, white perch M. americana, and channel catfish Ictalurus punctatus of known size (20–141 cm) to determine the reliability of length estimates. For ranges up to 11 m, percent measurement error ((sonar estimate − total length)/total length × 100) varied by species but was not related to the fish's range or aspect angle (orientation relative to the sonar beam). Least-squares mean percent error was significantly different from 0.0 for Atlantic sturgeon (x̄ = −8.34, SE = 2.39) and white perch (x̄ = 14.48, SE = 3.99) but not striped bass (x̄ = 3.71, SE = 2.58) or channel catfish (x̄ = 3.97, SE = 5.16). Underestimating lengths of Atlantic sturgeon may be due to difficulty in detecting the snout or the longer dorsal lobe of the heterocercal tail. White perch was the smallest species tested, and it had the largest percent measurement errors (both positive and negative) and the lowest percentage of images classified as good or acceptable. Automated length estimates for the four species using Echoview software varied with position in the view-field. Estimates tended to be low at more extreme azimuthal angles (fish's angle off-axis within the view-field), but mean and maximum estimates were highly correlated with total length. Software estimates also were biased by fish images partially outside the view-field and when acoustic crosstalk occurred (when a fish perpendicular to the sonar and at relatively close range is detected in the side lobes of adjacent beams). These sources of
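    The error metric used in this study is simple to state in code; the helper names and example lengths below are ours:

```python
def percent_error(sonar_cm, true_cm):
    """Percent measurement error: (sonar estimate - total length) / total length * 100."""
    return 100.0 * (sonar_cm - true_cm) / true_cm

def mean_percent_error(pairs):
    """Mean percent error over (sonar, true) length pairs for one species."""
    errors = [percent_error(sonar, true) for sonar, true in pairs]
    return sum(errors) / len(errors)
```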

  10. A fully redundant double difference algorithm for obtaining minimum variance estimates from GPS observations

    NASA Technical Reports Server (NTRS)

    Melbourne, William G.

    1986-01-01

    In double differencing a regression system obtained from concurrent Global Positioning System (GPS) observation sequences, one either undersamples the system to avoid introducing colored measurement statistics, or one fully samples the system incurring the resulting non-diagonal covariance matrix for the differenced measurement errors. A suboptimal estimation result will be obtained in the undersampling case and will also be obtained in the fully sampled case unless the color noise statistics are taken into account. The latter approach requires a least squares weighting matrix derived from inversion of a non-diagonal covariance matrix for the differenced measurement errors instead of inversion of the customary diagonal one associated with white noise processes. Presented is the so-called fully redundant double differencing algorithm for generating a weighted double differenced regression system that yields equivalent estimation results, but features for certain cases a diagonal weighting matrix even though the differenced measurement error statistics are highly colored.
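    Under the simplifying assumptions of a single epoch, one receiver pair, satellite 0 as the reference, and white single-difference noise, the colored covariance of fully sampled double differences and the corresponding least-squares weight matrix can be sketched as follows (names are ours, not from the paper):

```python
import numpy as np

def dd_weight_matrix(n_sats, sigma2=1.0):
    """Least-squares weight matrix for fully sampled double differences.

    D maps n_sats single-difference residuals to n_sats - 1 double
    differences, all formed against satellite 0. White single-difference
    noise of variance sigma2 becomes colored after differencing:
    cov = sigma2 * D @ D.T has 2*sigma2 on the diagonal and sigma2 off it,
    so the proper weight matrix is the inverse of that full matrix,
    not the customary diagonal one for white noise.
    """
    D = np.zeros((n_sats - 1, n_sats))
    D[:, 0] = -1.0                                   # reference satellite in every row
    D[np.arange(n_sats - 1), np.arange(1, n_sats)] = 1.0
    cov = sigma2 * D @ D.T
    return np.linalg.inv(cov)
```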

  11. Precipitation and Latent Heating Distributions from Satellite Passive Microwave Radiometry. Part II: Evaluation of Estimates Using Independent Data

    NASA Technical Reports Server (NTRS)

    Yang, Song; Olson, William S.; Wang, Jian-Jian; Bell, Thomas L.; Smith, Eric A.; Kummerow, Christian D.

    2006-01-01

    Rainfall rate estimates from spaceborne microwave radiometers are generally accepted as reliable by a majority of the atmospheric science community. One of the Tropical Rainfall Measuring Mission (TRMM) facility rain-rate algorithms is based upon passive microwave observations from the TRMM Microwave Imager (TMI). In Part I of this series, improvements of the TMI algorithm that are required to introduce latent heating as an additional algorithm product are described. Here, estimates of surface rain rate, convective proportion, and latent heating are evaluated using independent ground-based estimates and satellite products. Instantaneous, 0.5°-resolution estimates of surface rain rate over ocean from the improved TMI algorithm are well correlated with independent radar estimates (r approx. 0.88 over the Tropics), but bias reduction is the most significant improvement over earlier algorithms. The bias reduction is attributed to the greater breadth of cloud-resolving model simulations that support the improved algorithm and the more consistent and specific convective/stratiform rain separation method utilized. The bias of monthly 2.5°-resolution estimates is similarly reduced, with comparable correlations to radar estimates. Although the amount of independent latent heating data is limited, TMI-estimated latent heating profiles compare favorably with instantaneous estimates based upon dual-Doppler radar observations, and time series of surface rain-rate and heating profiles are generally consistent with those derived from rawinsonde analyses. Still, some biases in profile shape are evident, and these may be resolved with (a) additional contextual information brought to the estimation problem and/or (b) physically consistent and representative databases supporting the algorithm. A model of the random error in instantaneous 0.5°-resolution rain-rate estimates appears to be consistent with the levels of error determined from TMI comparisons with collocated

  12. Precipitation and Latent Heating Distributions from Satellite Passive Microwave Radiometry. Part 2; Evaluation of Estimates Using Independent Data

    NASA Technical Reports Server (NTRS)

    Yang, Song; Olson, William S.; Wang, Jian-Jian; Bell, Thomas L.; Smith, Eric A.; Kummerow, Christian D.

    2004-01-01

    Rainfall rate estimates from spaceborne microwave radiometers are generally accepted as reliable by a majority of the atmospheric science community. One of the Tropical Rainfall Measuring Mission (TRMM) facility rain rate algorithms is based upon passive microwave observations from the TRMM Microwave Imager (TMI). Part I of this study describes improvements in the TMI algorithm that are required to introduce cloud latent heating and drying as additional algorithm products. Here, estimates of surface rain rate, convective proportion, and latent heating are evaluated using independent ground-based estimates and satellite products. Instantaneous, 0.5°-resolution estimates of surface rain rate over ocean from the improved TMI algorithm are well correlated with independent radar estimates (r approx. 0.88 over the Tropics), but bias reduction is the most significant improvement over forerunning algorithms. The bias reduction is attributed to the greater breadth of cloud-resolving model simulations that support the improved algorithm, and the more consistent and specific convective/stratiform rain separation method utilized. The bias of monthly, 2.5°-resolution estimates is similarly reduced, with comparable correlations to radar estimates. Although the amount of independent latent heating data is limited, TMI-estimated latent heating profiles compare favorably with instantaneous estimates based upon dual-Doppler radar observations, and time series of surface rain rate and heating profiles are generally consistent with those derived from rawinsonde analyses. Still, some biases in profile shape are evident, and these may be resolved with: (a) additional contextual information brought to the estimation problem, and/or (b) physically consistent and representative databases supporting the algorithm. A model of the random error in instantaneous, 0.5°-resolution rain rate estimates appears to be consistent with the levels of error determined from TMI comparisons to collocated radar

  13. Notification: Preliminary Research: Review of Independent Government Cost Estimates and Indirect Costs for EPA’s Interagency Agreements

    EPA Pesticide Factsheets

    Project #OA-FY14-0130, February 11, 2014. The EPA OIG plans to begin preliminary research of the independent government cost estimates and indirect costs for the EPA's funds-in interagency agreements.

  14. Low-complexity and modulation-format-independent carrier phase estimation scheme using linear approximation for elastic optical networks

    NASA Astrophysics Data System (ADS)

    Yang, Tao; Chen, Xue; Shi, Sheping; Sun, Erkun; Shi, Chen

    2018-03-01

    We propose a low-complexity and modulation-format-independent carrier phase estimation (CPE) scheme based on two-stage modified blind phase search (MBPS) with linear approximation to compensate the phase noise of arbitrary m-ary quadrature amplitude modulation (m-QAM) signals in elastic optical networks (EONs). Comprehensive numerical simulations are carried out in the case that the highest possible modulation format in EONs is 256-QAM. The simulation results not only verify its advantages of higher estimation accuracy and modulation-format independence, i.e., universality, but also demonstrate that the implementation complexity is significantly reduced by at least one-fourth in comparison with the traditional BPS scheme. In addition, the proposed scheme shows similar laser linewidth tolerance with the traditional BPS scheme. The slightly better OSNR performance of the scheme is also experimentally validated for PM-QPSK and PM-16QAM systems, respectively. The coexistent advantages of low-complexity and modulation-format-independence could make the proposed scheme an attractive candidate for flexible receiver-side DSP unit in EONs.
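    A one-stage blind phase search of the kind the MBPS scheme builds on can be sketched for QPSK as follows. This is an illustrative baseline only, not the paper's two-stage linear-approximation algorithm, and the function name and test-phase count are our choices:

```python
import numpy as np

def bps_phase_estimate(symbols, n_test=32):
    """Coarse blind phase search for QPSK over the [0, pi/2) ambiguity range.

    For each candidate phase, de-rotate the symbol block, measure the
    distance of each symbol to its nearest constellation point, and keep
    the candidate with the smallest summed squared distance.
    """
    test_phases = np.arange(n_test) * (np.pi / 2) / n_test
    qpsk = np.exp(1j * (np.pi / 4 + np.pi / 2 * np.arange(4)))
    costs = []
    for phi in test_phases:
        rotated = symbols * np.exp(-1j * phi)
        d = np.min(np.abs(rotated[:, None] - qpsk[None, :]), axis=1)
        costs.append(np.sum(d ** 2))
    return test_phases[int(np.argmin(costs))]
```

    The m-QAM generalization only swaps the constellation array and the ambiguity range, which is the sense in which a BPS-style search is modulation-format independent.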

  15. Use of NMR logging to obtain estimates of hydraulic conductivity in the High Plains aquifer, Nebraska, USA

    USGS Publications Warehouse

    Dlubac, Katherine; Knight, Rosemary; Song, Yi-Qiao; Bachman, Nate; Grau, Ben; Cannia, Jim; Williams, John

    2013-01-01

    Hydraulic conductivity (K) is one of the most important parameters of interest in groundwater applications because it quantifies the ease with which water can flow through an aquifer material. Hydraulic conductivity is typically measured by conducting aquifer tests or wellbore flow (WBF) logging. Of interest in our research is the use of proton nuclear magnetic resonance (NMR) logging to obtain information about water-filled porosity and pore space geometry, the combination of which can be used to estimate K. In this study, we acquired a suite of advanced geophysical logs, aquifer tests, WBF logs, and sidewall cores at the field site in Lexington, Nebraska, which is underlain by the High Plains aquifer. We first used two empirical equations developed for petroleum applications to predict K from NMR logging data: the Schlumberger-Doll Research equation (KSDR) and the Timur-Coates equation (KT-C), with the standard empirical constants determined for consolidated materials. We upscaled our NMR-derived K estimates to the scale of the WBF-logging K (KWBF-logging) estimates for comparison. All the upscaled KT-C estimates were within an order of magnitude of KWBF-logging and all of the upscaled KSDR estimates were within 2 orders of magnitude of KWBF-logging. We optimized the fit between the upscaled NMR-derived K and KWBF-logging estimates to determine a set of site-specific empirical constants for the unconsolidated materials at our field site. We conclude that reliable estimates of K can be obtained from NMR logging data, thus providing an alternate method for obtaining estimates of K at high levels of vertical resolution.
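    The two NMR permeability estimators named in this record have widely used textbook forms, sketched below. The constants shown (a = 4, c = 10) are common consolidated-sandstone defaults and unit conventions vary between vendors; replacing them with site-specific values is exactly what the study does:

```python
def k_sdr(phi, t2lm, a=4.0):
    """Schlumberger-Doll Research (SDR) estimator: k = a * phi**4 * T2LM**2.

    phi is NMR porosity (fraction), t2lm the logarithmic-mean T2;
    a = 4 is the textbook sandstone constant and must be recalibrated
    for unconsolidated or carbonate materials.
    """
    return a * phi ** 4 * t2lm ** 2

def k_timur_coates(phi, ffi, bvi, c=10.0):
    """Timur-Coates estimator: k = (100 * phi / c)**4 * (FFI / BVI)**2,
    where FFI/BVI is the free-fluid to bound-fluid ratio from a T2 cut-off."""
    return (100.0 * phi / c) ** 4 * (ffi / bvi) ** 2
```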

  16. Maximum likelihood estimation for the double-count method with independent observers

    USGS Publications Warehouse

    Manly, Bryan F.J.; McDonald, Lyman L.; Garner, Gerald W.

    1996-01-01

    Data collected under a double-count protocol during line transect surveys were analyzed using new maximum likelihood methods combined with Akaike's information criterion to provide estimates of the abundance of polar bear (Ursus maritimus Phipps) in a pilot study off the coast of Alaska. Visibility biases were corrected by modeling the detection probabilities using logistic regression functions. Independent variables that influenced the detection probabilities included perpendicular distance of bear groups from the flight line and the number of individuals in the groups. A series of models were considered which vary from (1) the simplest, where the probability of detection was the same for both observers and was not affected by either distance from the flight line or group size, to (2) models where probability of detection is different for the two observers and depends on both distance from the transect and group size. Estimation procedures are developed for the case when additional variables may affect detection probabilities. The methods are illustrated using data from the pilot polar bear survey and some recommendations are given for design of a survey over the larger Chukchi Sea between Russia and the United States.

  17. Influence of tire dynamics on slip ratio estimation of independent driving wheel system

    NASA Astrophysics Data System (ADS)

    Li, Jianqiu; Song, Ziyou; Wei, Yintao; Ouyang, Minggao

    2014-11-01

    The independent driving wheel system, which is composed of an in-wheel permanent magnet synchronous motor (I-PMSM) and a tire, makes it convenient to estimate the slip ratio because the rotary speed of the rotor can be accurately measured. However, the ring speed of the tire does not equal the rotor speed once tire deformation is considered. For this reason, a deformable tire and a detailed I-PMSM are modeled using Matlab/Simulink. Moreover, the tire/road contact interface (a slippery road) is accurately described by the non-linear relaxation-length-based model and the Magic Formula pragmatic model. Based on this relatively accurate model, the error of the slip ratio estimated from the rotor rotary speed is analyzed in both the time and frequency domains when a quarter car is started by the I-PMSM with a definite target torque input curve. In addition, the natural frequencies (NFs) of the driving wheel system with variable parameters are illustrated to present the relationship between the slip ratio estimation error and the NF. According to this relationship, a low-pass filter (LPF), whose cut-off frequency corresponds to the NF, is proposed to eliminate the error in the estimated slip ratio. The analysis, concerning the effect of the driving wheel parameters and road conditions on slip ratio estimation, shows that the peak estimation error can be reduced by up to 75% when the LPF is adopted. The robustness and effectiveness of the LPF are therefore validated. This paper builds up the deformable tire model and the detailed I-PMSM model, and analyzes the effect of the driving wheel parameters and road conditions on slip ratio estimation.
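    A discrete first-order low-pass filter of the kind described, with its cut-off placed at the wheel system's natural frequency, can be sketched as follows (the exact filter order and tuning in the paper may differ; names are ours):

```python
import math

def lowpass(signal, f_cut_hz, dt):
    """Discrete first-order low-pass filter.

    Setting f_cut_hz at the driving-wheel natural frequency suppresses the
    oscillatory component of the rotor-speed-based slip ratio estimate
    while passing the slowly varying true slip ratio.
    """
    rc = 1.0 / (2.0 * math.pi * f_cut_hz)   # equivalent RC time constant
    alpha = dt / (dt + rc)                   # smoothing factor in (0, 1)
    out, y = [], signal[0]
    for x in signal:
        y += alpha * (x - y)
        out.append(y)
    return out
```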

  18. MO-E-17A-08: Attenuation-Based Size Adjusted, Scanner-Independent Organ Dose Estimates for Head CT Exams: TG 204 for Head CT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McMillan, K; Bostani, M; Cagnon, C

    Purpose: AAPM Task Group 204 described size specific dose estimates (SSDE) for body scans. The purpose of this work is to use a similar approach to develop patient-specific, scanner-independent organ dose estimates for head CT exams using an attenuation-based size metric. Methods: For eight patient models from the GSF family of voxelized phantoms, dose to brain and lens of the eye was estimated using Monte Carlo simulations of contiguous axial scans for 64-slice MDCT scanners from four major manufacturers. Organ doses were normalized by scanner-specific 16 cm CTDIvol values and averaged across all scanners to obtain scanner-independent CTDIvol-to-organ-dose conversion coefficients for each patient model. Head size was measured at the first slice superior to the eyes; patient perimeter and effective diameter (ED) were measured directly from the GSF data. Because the GSF models use organ identification codes instead of Hounsfield units, water equivalent diameter (WED) was estimated indirectly. Using the image data from 42 patients ranging from 2 weeks old to adult, the perimeter, ED and WED size metrics were obtained and correlations between each metric were established. Applying these correlations to the GSF perimeter and ED measurements, WED was calculated for each model. The relationship between the various patient size metrics and CTDIvol-to-organ-dose conversion coefficients was then described. Results: The analysis of patient images demonstrated the correlation between WED and ED across a wide range of patient sizes. When applied to the GSF patient models, an exponential relationship between CTDIvol-to-organ-dose conversion coefficients and the WED size metric was observed with correlation coefficients of 0.93 and 0.77 for the brain and lens of the eye, respectively. Conclusion: Strong correlation exists between CTDIvol normalized brain dose and WED. For the lens of the eye, a lower correlation is observed, primarily due to surface dose variations
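    The exponential relationship reported here implies a dose model of the following form; the fit constants a and b are purely illustrative placeholders, not values from the abstract:

```python
import math

def organ_dose(ctdi_vol, wed_cm, a, b):
    """Organ dose from CTDIvol and water-equivalent diameter (WED).

    The conversion coefficient decays exponentially with WED, mirroring
    the SSDE formalism of TG 204; a and b are site/organ fit constants
    and the values used in any example are purely illustrative.
    """
    return ctdi_vol * a * math.exp(-b * wed_cm)
```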

  19. Analytical estimation shows low depth-independent water loss due to vapor flux from deep aquifers

    NASA Astrophysics Data System (ADS)

    Selker, John S.

    2017-06-01

    Recent articles have provided estimates of evaporative flux from water tables in deserts that span 5 orders of magnitude. In this paper, we present an analytical calculation that indicates aquifer vapor flux to be limited to 0.01 mm/yr for sites where there is negligible recharge and the water table is well over 20 m below the surface. This value arises from the geothermal gradient, and therefore, is nearly independent of the actual depth of the aquifer. The value is in agreement with several numerical studies, but is 500 times lower than recently reported experimental values, and 100 times larger than an earlier analytical estimate.

  20. Independent-Trajectory Thermodynamic Integration: a practical guide to protein-drug binding free energy calculations using distributed computing.

    PubMed

    Lawrenz, Morgan; Baron, Riccardo; Wang, Yi; McCammon, J Andrew

    2012-01-01

    The Independent-Trajectory Thermodynamic Integration (IT-TI) approach for free energy calculation with distributed computing is described. IT-TI utilizes diverse conformational sampling obtained from multiple, independent simulations to obtain more reliable free energy estimates compared to single TI predictions. The latter may significantly under- or over-estimate the binding free energy due to finite sampling. We exemplify the advantages of the IT-TI approach using two distinct cases of protein-ligand binding. In both cases, IT-TI yields distributions of absolute binding free energy estimates that are remarkably centered on the target experimental values. Alternative protocols for the practical and general application of IT-TI calculations are investigated. We highlight a protocol that maximizes predictive power and computational efficiency.

  1. Accounting for non-independent detection when estimating abundance of organisms with a Bayesian approach

    USGS Publications Warehouse

    Martin, Julien; Royle, J. Andrew; MacKenzie, Darryl I.; Edwards, Holly H.; Kery, Marc; Gardner, Beth

    2011-01-01

    Summary 1. Binomial mixture models use repeated count data to estimate abundance. They are becoming increasingly popular because they provide a simple and cost-effective way to account for imperfect detection. However, these models assume that individuals are detected independently of each other. This assumption may often be violated in the field. For instance, manatees (Trichechus manatus latirostris) may surface in turbid water (i.e. become available for detection during aerial surveys) in a correlated manner (i.e. in groups). However, correlated behaviour, affecting the non-independence of individual detections, may also be relevant in other systems (e.g. correlated patterns of singing in birds and amphibians). 2. We extend binomial mixture models to account for correlated behaviour and therefore to account for non-independent detection of individuals. We simulated correlated behaviour using beta-binomial random variables. Our approach can be used to simultaneously estimate abundance, detection probability and a correlation parameter. 3. Fitting binomial mixture models to data that followed a beta-binomial distribution resulted in an overestimation of abundance even for moderate levels of correlation. In contrast, the beta-binomial mixture model performed considerably better in our simulation scenarios. We also present a goodness-of-fit procedure to evaluate the fit of beta-binomial mixture models. 4. We illustrate our approach by fitting both binomial and beta-binomial mixture models to aerial survey data of manatees in Florida. We found that the binomial mixture model did not fit the data, whereas there was no evidence of lack of fit for the beta-binomial mixture model. This example helps illustrate the importance of using simulations and assessing goodness-of-fit when analysing ecological data with N-mixture models. Indeed, both the simulations and the goodness-of-fit procedure highlighted the limitations of the standard binomial mixture model for aerial
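    The beta-binomial construction described above is easy to simulate: a shared Beta-distributed detection probability makes the individual detections correlated and the counts overdispersed relative to a binomial. A minimal sketch (names are ours):

```python
import random

def beta_binomial(n, a, b, rng=random):
    """One beta-binomial count: the detection probability is itself a
    Beta(a, b) draw shared by all n individuals, which induces the
    correlated (non-independent) detections the model accounts for."""
    p = rng.betavariate(a, b)
    return sum(rng.random() < p for _ in range(n))
```

    The resulting variance exceeds the binomial n·p·(1−p), which is precisely why fitting an ordinary binomial mixture model to such data overestimates abundance.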

  2. Estimation of longitudinal force, lateral vehicle speed and yaw rate for four-wheel independent driven electric vehicles

    NASA Astrophysics Data System (ADS)

    Chen, Te; Xu, Xing; Chen, Long; Jiang, Haobing; Cai, Yingfeng; Li, Yong

    2018-02-01

    Accurate estimation of longitudinal force, lateral vehicle speed and yaw rate is of great significance to torque allocation and stability control for four-wheel independent driven electric vehicle (4WID-EVs). A fusion method is proposed to estimate the longitudinal force, lateral vehicle speed and yaw rate for 4WID-EVs. The electric driving wheel model (EDWM) is introduced into the longitudinal force estimation, the longitudinal force observer (LFO) is designed firstly based on the adaptive high-order sliding mode observer (HSMO), and the convergence of LFO is analyzed and proved. Based on the estimated longitudinal force, an estimation strategy is then presented in which the strong tracking filter (STF) is used to estimate lateral vehicle speed and yaw rate simultaneously. Finally, co-simulation via Carsim and Matlab/Simulink is carried out to demonstrate the effectiveness of the proposed method. The performance of LFO in practice is verified by the experiment on chassis dynamometer bench.

  3. Independent Pixel and Two Dimensional Estimates of LANDSAT-Derived Cloud Field Albedo

    NASA Technical Reports Server (NTRS)

    Chambers, L. H.; Wielicki, Bruce A.; Evans, K. F.

    1996-01-01

    A theoretical study has been conducted on the effects of cloud horizontal inhomogeneity on cloud albedo bias. A two-dimensional (2D) version of the Spherical Harmonic Discrete Ordinate Method (SHDOM) is used to estimate the albedo bias of the plane parallel (PP-IPA) and independent pixel (IPA-2D) approximations for a wide range of 2D cloud fields obtained from LANDSAT. They include single layer trade cumulus, open and closed cell broken stratocumulus, and solid stratocumulus boundary layer cloud fields over ocean. Findings are presented on a variety of averaging scales and are summarized as a function of cloud fraction, mean cloud optical depth, cloud aspect ratio, standard deviation of optical depth, and the gamma function parameter ν (a measure of the width of the optical depth distribution). Biases are found to be small for small cloud fraction or mean optical depth, where the cloud fields under study behave linearly. They are large (up to 0.20 for PP-IPA bias, -0.12 for IPA-2D bias) for large ν. On a scene average basis PP-IPA bias can reach 0.30, while IPA-2D bias reaches its largest magnitude at -0.07. Biases due to horizontal transport (IPA-2D) are much smaller than PP-IPA biases but account for 20% RMS of the bias overall. Limitations of this work include the particular cloud field set used, assumptions of conservative scattering, constant cloud droplet size, no gas absorption or surface reflectance, and restriction to 2D radiative transport. The LANDSAT data used may also be affected by radiative smoothing.
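    The plane-parallel bias arises because albedo is a concave function of optical depth: the albedo of the mean optical depth exceeds the mean of the per-pixel albedos (Jensen's inequality). A toy illustration with an assumed two-stream albedo law and made-up optical depths, not the SHDOM calculation of the study:

```python
import numpy as np

def two_stream_albedo(tau, g=0.85):
    """Toy conservative-scattering two-stream albedo (illustrative only)."""
    t = (1.0 - g) * tau
    return t / (t + 2.0)

tau = np.array([0.5, 2.0, 8.0, 32.0])        # a deliberately inhomogeneous field
pp = float(two_stream_albedo(tau.mean()))     # plane-parallel: albedo of the mean tau
ipa = float(two_stream_albedo(tau).mean())    # independent pixel: mean of the albedos
pp_ipa_bias = pp - ipa                        # positive: PP overestimates albedo
```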

  4. Independent tasks scheduling in cloud computing via improved estimation of distribution algorithm

    NASA Astrophysics Data System (ADS)

    Sun, Haisheng; Xu, Rui; Chen, Huaping

    2018-04-01

    To minimize makespan for scheduling independent tasks in cloud computing, an improved estimation of distribution algorithm (IEDA) is proposed in this paper to tackle the investigated problem. Because the problem is a multi-dimensional discrete one, an improved population-based incremental learning (PBIL) algorithm is applied, in which the parameter for each component is independent of the other components. To improve the performance of PBIL, on the one hand, an integer encoding scheme is used and the probability calculation of PBIL is improved by using the task average processing time; on the other hand, an effective adaptive learning-rate function related to the number of iterations is constructed to trade off the exploration and exploitation of IEDA. In addition, enhanced Max-Min and Min-Min algorithms are introduced to form two initial individuals. In the proposed IEDA, an improved genetic algorithm (IGA) is applied to generate part of the initial population by evolving these two individuals, and the rest of the initial individuals are generated at random. Finally, the sampling process is divided into two parts: sampling by the probabilistic model and by the IGA, respectively. The experimental results show that the proposed IEDA not only obtains better solutions but also converges faster.
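    The PBIL core described above can be sketched in a few lines: each task keeps an independent probability vector over machines, schedules are sampled from the model, and the probabilities are nudged toward the best sampled schedule. The problem sizes, processing times, and the fixed learning rate below are illustrative (the paper uses an adaptive rate and seeded initial individuals):

```python
import numpy as np

# Minimal PBIL sketch for independent-task scheduling: sample assignments from
# per-task categorical distributions, evaluate makespan, and shift the
# probabilities toward the elite sample. Sizes and rates are illustrative.

rng = np.random.default_rng(42)
n_tasks, n_machines = 20, 4
ptime = rng.uniform(1.0, 10.0, size=n_tasks)   # task processing times

def makespan(assign):
    loads = np.zeros(n_machines)
    np.add.at(loads, assign, ptime)            # accumulate load per machine
    return loads.max()

prob = np.full((n_tasks, n_machines), 1.0 / n_machines)
lr, pop = 0.1, 30
best_assign, best_ms = None, np.inf

for _ in range(100):
    # sample a population of schedules from the probabilistic model
    samples = [np.array([rng.choice(n_machines, p=prob[t]) for t in range(n_tasks)])
               for _ in range(pop)]
    ms = [makespan(a) for a in samples]
    elite = samples[int(np.argmin(ms))]
    if min(ms) < best_ms:
        best_ms, best_assign = min(ms), elite
    # shift each task's distribution toward the elite schedule's choice
    onehot = np.eye(n_machines)[elite]
    prob = (1.0 - lr) * prob + lr * onehot
```

    The update keeps every row of `prob` a valid distribution, since it is a convex combination of the old row and a one-hot vector.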

  5. Accuracy of patient-specific organ dose estimates obtained using an automated image segmentation algorithm.

    PubMed

    Schmidt, Taly Gilat; Wang, Adam S; Coradi, Thomas; Haas, Benjamin; Star-Lack, Josh

    2016-10-01

    The overall goal of this work is to develop a rapid, accurate, and automated software tool to estimate patient-specific organ doses from computed tomography (CT) scans using simulations to generate dose maps combined with automated segmentation algorithms. This work quantified the accuracy of organ dose estimates obtained by an automated segmentation algorithm. We hypothesized that the autosegmentation algorithm is sufficiently accurate to provide organ dose estimates, since small errors delineating organ boundaries will have minimal effect when computing mean organ dose. A leave-one-out validation study of the automated algorithm was performed with 20 head-neck CT scans expertly segmented into nine regions. Mean organ doses of the automatically and expertly segmented regions were computed from Monte Carlo-generated dose maps and compared. The automated segmentation algorithm estimated the mean organ dose to be within 10% of the expert segmentation for regions other than the spinal canal, with the median error for each organ region below 2%. In the spinal canal region, the median error was −7%, with a maximum absolute error of 28% for the single-atlas approach and 11% for the multiatlas approach. The results demonstrate that the automated segmentation algorithm can provide accurate organ dose estimates despite some segmentation errors.
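    The hypothesis being tested, that modest boundary errors barely move a mean organ dose, can be illustrated with a synthetic dose map and two slightly different masks (all arrays below are illustrative, not the study's Monte Carlo dose maps):

```python
import numpy as np

# Mean organ dose is the average of a dose map over a segmentation mask, so a
# small shift/erosion of the mask perturbs that mean only slightly. The dose
# map and circular "organ" masks here are synthetic illustrations.

rng = np.random.default_rng(0)
dose_map = rng.uniform(10.0, 12.0, size=(64, 64))   # synthetic dose values (mGy)

yy, xx = np.mgrid[:64, :64]
expert_mask = (yy - 32) ** 2 + (xx - 32) ** 2 < 15 ** 2   # "expert" organ contour
auto_mask = (yy - 33) ** 2 + (xx - 32) ** 2 < 14 ** 2     # shifted/eroded auto contour

mean_expert = dose_map[expert_mask].mean()
mean_auto = dose_map[auto_mask].mean()
rel_error = (mean_auto - mean_expert) / mean_expert   # small despite boundary error
```

    Where the dose map has a steep gradient, as near the spinal canal in the study, the same boundary error moves the mean much more, which matches the larger errors reported for that region.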

  6. Accuracy of patient-specific organ dose estimates obtained using an automated image segmentation algorithm

    PubMed Central

    Schmidt, Taly Gilat; Wang, Adam S.; Coradi, Thomas; Haas, Benjamin; Star-Lack, Josh

    2016-01-01

    Abstract. The overall goal of this work is to develop a rapid, accurate, and automated software tool to estimate patient-specific organ doses from computed tomography (CT) scans using simulations to generate dose maps combined with automated segmentation algorithms. This work quantified the accuracy of organ dose estimates obtained by an automated segmentation algorithm. We hypothesized that the autosegmentation algorithm is sufficiently accurate to provide organ dose estimates, since small errors delineating organ boundaries will have minimal effect when computing mean organ dose. A leave-one-out validation study of the automated algorithm was performed with 20 head-neck CT scans expertly segmented into nine regions. Mean organ doses of the automatically and expertly segmented regions were computed from Monte Carlo-generated dose maps and compared. The automated segmentation algorithm estimated the mean organ dose to be within 10% of the expert segmentation for regions other than the spinal canal, with the median error for each organ region below 2%. In the spinal canal region, the median error was −7%, with a maximum absolute error of 28% for the single-atlas approach and 11% for the multiatlas approach. The results demonstrate that the automated segmentation algorithm can provide accurate organ dose estimates despite some segmentation errors. PMID:27921070

  7. Age Estimation of Infants Through Metric Analysis of Developing Anterior Deciduous Teeth.

    PubMed

    Viciano, Joan; De Luca, Stefano; Irurita, Javier; Alemán, Inmaculada

    2018-01-01

    This study provides regression equations for estimation of age of infants from the dimensions of their developing deciduous teeth. The sample comprises 97 individuals of known sex and age (62 boys, 35 girls), aged between 2 days and 1,081 days. The age-estimation equations were obtained for the sexes combined, as well as for each sex separately, thus including "sex" as an independent variable. The values of the correlations and determination coefficients obtained for each regression equation indicate good fits for most of the equations obtained. The "sex" factor was statistically significant when included as an independent variable in seven of the regression equations. However, the "sex" factor provided an advantage for age estimation in only three of the equations, compared to those that did not include "sex" as a factor. These data suggest that the ages of infants can be accurately estimated from measurements of their developing deciduous teeth. © 2017 American Academy of Forensic Sciences.

  8. Biomass estimators for thinned second-growth ponderosa pine trees.

    Treesearch

    P.H. Cochran; J.W. Jennings; C.T. Youngberg

    1984-01-01

    Usable estimates of the mass of live foliage and limbs of sapling and pole-sized ponderosa pine in managed stands in central Oregon can be obtained with equations using the logarithm of diameter as the only independent variable. These equations produce only slightly higher root mean square deviations than equations that include additional independent variables. A...
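    An equation of the form described, with the logarithm of diameter as the only independent variable, amounts to fitting ln(mass) = a + b·ln(d) by least squares. A sketch on synthetic data (the coefficients below are illustrative, not the published central-Oregon equations):

```python
import numpy as np

# Single-variable allometric biomass equation: ln(mass) = a + b * ln(diameter),
# fit by ordinary least squares. The "true" coefficients and synthetic tree
# data are illustrative assumptions.

rng = np.random.default_rng(3)
a_true, b_true = -2.0, 2.4
dbh = rng.uniform(5.0, 25.0, size=200)                      # diameter, cm
mass = np.exp(a_true + b_true * np.log(dbh)
              + rng.normal(0.0, 0.1, dbh.size))             # foliage + limb mass, kg

b_hat, a_hat = np.polyfit(np.log(dbh), np.log(mass), deg=1)  # slope, intercept
rmse = np.sqrt(np.mean((a_hat + b_hat * np.log(dbh) - np.log(mass)) ** 2))
```

    Adding further independent variables (height, crown ratio, and so on) would lower `rmse` somewhat, which is the trade-off the abstract notes.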

  9. The feasibility of a scanner-independent technique to estimate organ dose from MDCT scans: Using CTDI{sub vol} to account for differences between scanners

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Turner, Adam C.; Zankl, Maria; DeMarco, John J.

    2010-04-15

    Purpose: Monte Carlo radiation transport techniques have made it possible to accurately estimate the radiation dose to radiosensitive organs in patient models from scans performed with modern multidetector row computed tomography (MDCT) scanners. However, there is considerable variation in organ doses across scanners, even when similar acquisition conditions are used. The purpose of this study was to investigate the feasibility of a technique to estimate organ doses that would be scanner independent. This was accomplished by assessing the ability of CTDI{sub vol} measurements to account for differences in MDCT scanners that lead to organ dose differences. Methods: Monte Carlo simulations of 64-slice MDCT scanners from each of the four major manufacturers were performed. An adult female patient model from the GSF family of voxelized phantoms was used in which all ICRP Publication 103 radiosensitive organs were identified. A 120 kVp, full-body helical scan with a pitch of 1 was simulated for each scanner using similar scan protocols across scanners. From each simulated scan, the radiation dose to each organ was obtained on a per mA s basis (mGy/mA s). In addition, CTDI{sub vol} values were obtained from each scanner for the selected scan parameters. Then, to demonstrate the feasibility of generating organ dose estimates from scanner-independent coefficients, the simulated organ dose values resulting from each scanner were normalized by the CTDI{sub vol} value for those acquisition conditions. Results: CTDI{sub vol} values across scanners showed considerable variation as the coefficient of variation (CoV) across scanners was 34.1%. The simulated patient scans also demonstrated considerable differences in organ dose values, which varied by up to a factor of approximately 2 between some of the scanners. The CoV across scanners for the simulated organ doses ranged from 26.7% (for the adrenals) to 37.7% (for the thyroid), with a mean CoV of 31.5% across all organs.
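    The normalization step can be illustrated directly: if organ dose scales with CTDIvol, dividing each scanner's dose by its CTDIvol collapses the cross-scanner spread. The four "scanners" below are synthetic, constructed only to show the effect:

```python
import numpy as np

# Illustration of CTDIvol normalization: raw per-mAs organ doses vary widely
# across scanners, but dividing by each scanner's CTDIvol removes most of the
# scanner dependence. The CTDIvol values, organ coefficient, and noise level
# are illustrative assumptions, not measured values.

rng = np.random.default_rng(7)
ctdi_vol = np.array([0.06, 0.08, 0.10, 0.13])       # mGy/mAs, one per "scanner"
organ_coeff = 0.9                                   # assumed scanner-independent
raw_dose = organ_coeff * ctdi_vol * (1 + rng.normal(0, 0.02, 4))  # mGy/mAs

def cov(x):
    return x.std() / x.mean()       # coefficient of variation

cov_raw = cov(raw_dose)             # large: driven by scanner differences
cov_norm = cov(raw_dose / ctdi_vol) # small: residual scatter only
```

    The normalized values are the scanner-independent coefficients the study proposes; a new scanner's organ dose is then estimated as coefficient × its own CTDIvol.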

  10. Precise attitude rate estimation using star images obtained by mission telescope for satellite missions

    NASA Astrophysics Data System (ADS)

    Inamori, Takaya; Hosonuma, Takayuki; Ikari, Satoshi; Saisutjarit, Phongsatorn; Sako, Nobutada; Nakasuka, Shinichi

    2015-02-01

    Recently, small satellites have been employed in various missions such as astronomical observation and remote sensing. During these missions, the attitudes of small satellites should be stabilized to a high accuracy to obtain accurate science data and images. To achieve precise attitude stabilization, these small satellites should estimate their attitude rate under strict constraints of mass, space, and cost. This research presents a new method for small satellites to precisely estimate the angular rate from blurred star images obtained with a mission telescope, in order to achieve precise attitude stabilization. In this method, the angular velocity is estimated by assessing how blurred a star image appears. Because the proposed method utilizes existing mission devices, a satellite does not require additional precise rate sensors, which makes precise stabilization easier to achieve under the strict constraints of small satellites. The research studied the relationship between estimation accuracy and the parameters of the attitude rate estimation, which achieves a precision better than 1 × 10⁻⁶ rad/s. The method can be applied to all attitude sensors that use optical systems, such as sun sensors and star trackers (STTs). Finally, the method is applied to the nano-astrometry satellite Nano-JASMINE, and we investigate the problems expected to arise with real small satellites by performing numerical simulations.
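    The underlying geometry of the blur-based estimate is simple: under the small-angle approximation, a streak of s pixels during one exposure implies an angular rate of roughly s·p/(f·t), with pixel pitch p, focal length f, and exposure time t. A back-of-envelope sketch with hypothetical optics values (not Nano-JASMINE parameters):

```python
# Back-of-envelope blur-based rate estimate: a star that streaks by `blur_px`
# pixels during one exposure implies an angular rate of roughly
# blur_px * pixel_pitch / (focal_length * exposure) under the small-angle
# approximation. All optics numbers below are hypothetical.

pixel_pitch = 7.4e-6     # m
focal_length = 1.6       # m
exposure = 2.0           # s
blur_px = 0.5            # measured streak length, pixels

omega = blur_px * pixel_pitch / (focal_length * exposure)  # rad/s
```

    With these assumed numbers even a half-pixel streak corresponds to about 1.2 × 10⁻⁶ rad/s, which shows why a long focal length and long exposure make sub-microradian-per-second rates observable at all.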

  11. Analysis of Culture-Dependent versus Culture-Independent Techniques for Identification of Bacteria in Clinically Obtained Bronchoalveolar Lavage Fluid

    PubMed Central

    Dickson, Robert P.; Erb-Downward, John R.; Prescott, Hallie C.; Martinez, Fernando J.; Curtis, Jeffrey L.; Lama, Vibha N.

    2014-01-01

    The diagnosis and management of pneumonia are limited by the use of culture-based techniques of microbial identification, which may fail to identify unculturable, fastidious, and metabolically active viable but unculturable bacteria. Novel high-throughput culture-independent techniques hold promise but have not been systematically compared to conventional culture. We analyzed 46 clinically obtained bronchoalveolar lavage (BAL) fluid specimens from symptomatic and asymptomatic lung transplant recipients both by culture (using a clinical microbiology laboratory protocol) and by bacterial 16S rRNA gene pyrosequencing. Bacteria were identified in 44 of 46 (95.7%) BAL fluid specimens by culture-independent sequencing, significantly more than the number of specimens in which bacteria were detected (37 of 46, 80.4%, P ≤ 0.05) or “pathogen” species reported (18 of 46, 39.1%, P ≤ 0.0001) via culture. Identification of bacteria by culture was positively associated with culture-independent indices of infection (total bacterial DNA burden and low bacterial community diversity) (P ≤ 0.01). In BAL fluid specimens with no culture growth, the amount of bacterial DNA was greater than that in reagent and rinse controls, and communities were markedly dominated by select Gammaproteobacteria, notably Escherichia species and Pseudomonas fluorescens. Culture growth above the threshold of 10⁴ CFU/ml was correlated with increased bacterial DNA burden (P < 0.01), decreased community diversity (P < 0.05), and increased relative abundance of Pseudomonas aeruginosa (P < 0.001). We present two case studies in which culture-independent techniques identified a respiratory pathogen missed by culture and clarified whether a cultured “oral flora” species represented a state of acute infection. In summary, we found that bacterial culture of BAL fluid is largely effective in discriminating acute infection from its absence and identified some specific limitations of BAL fluid culture in

  12. Analysis of culture-dependent versus culture-independent techniques for identification of bacteria in clinically obtained bronchoalveolar lavage fluid.

    PubMed

    Dickson, Robert P; Erb-Downward, John R; Prescott, Hallie C; Martinez, Fernando J; Curtis, Jeffrey L; Lama, Vibha N; Huffnagle, Gary B

    2014-10-01

    The diagnosis and management of pneumonia are limited by the use of culture-based techniques of microbial identification, which may fail to identify unculturable, fastidious, and metabolically active viable but unculturable bacteria. Novel high-throughput culture-independent techniques hold promise but have not been systematically compared to conventional culture. We analyzed 46 clinically obtained bronchoalveolar lavage (BAL) fluid specimens from symptomatic and asymptomatic lung transplant recipients both by culture (using a clinical microbiology laboratory protocol) and by bacterial 16S rRNA gene pyrosequencing. Bacteria were identified in 44 of 46 (95.7%) BAL fluid specimens by culture-independent sequencing, significantly more than the number of specimens in which bacteria were detected (37 of 46, 80.4%, P ≤ 0.05) or "pathogen" species reported (18 of 46, 39.1%, P ≤ 0.0001) via culture. Identification of bacteria by culture was positively associated with culture-independent indices of infection (total bacterial DNA burden and low bacterial community diversity) (P ≤ 0.01). In BAL fluid specimens with no culture growth, the amount of bacterial DNA was greater than that in reagent and rinse controls, and communities were markedly dominated by select Gammaproteobacteria, notably Escherichia species and Pseudomonas fluorescens. Culture growth above the threshold of 10⁴ CFU/ml was correlated with increased bacterial DNA burden (P < 0.01), decreased community diversity (P < 0.05), and increased relative abundance of Pseudomonas aeruginosa (P < 0.001). We present two case studies in which culture-independent techniques identified a respiratory pathogen missed by culture and clarified whether a cultured "oral flora" species represented a state of acute infection. In summary, we found that bacterial culture of BAL fluid is largely effective in discriminating acute infection from its absence and identified some specific limitations of BAL fluid culture in the

  13. Effect of windowing on lithosphere elastic thickness estimates obtained via the coherence method: Results from northern South America

    NASA Astrophysics Data System (ADS)

    Ojeda, Germán Y.; Whitman, Dean

    2002-11-01

    The effective elastic thickness (Te) of the lithosphere is a parameter that describes the flexural strength of a plate. A method routinely used to quantify this parameter is to calculate the coherence between the two-dimensional gravity and topography spectra. Prior to spectra calculation, data grids must be "windowed" in order to avoid edge effects. We investigated the sensitivity of Te estimates obtained via the coherence method to mirroring, Hanning and multitaper windowing techniques on synthetic data as well as on data from northern South America. These analyses suggest that the choice of windowing technique plays an important role in Te estimates and may result in discrepancies of several kilometers depending on the selected windowing method. Te results from mirrored grids tend to be greater than those from Hanning smoothed or multitapered grids. Results obtained from mirrored grids are likely to be over-estimates. This effect may be due to artificial long wavelengths introduced into the data at the time of mirroring. Coherence estimates obtained from three subareas in northern South America indicate that the average effective elastic thickness is in the range of 29-30 km, according to Hanning and multitaper windowed data. Lateral variations across the study area could not be unequivocally determined from this study. We suggest that the resolution of the coherence method does not permit evaluation of small (i.e., ˜5 km), local Te variations. However, the efficiency and robustness of the coherence method in rendering continent-scale estimates of elastic thickness has been confirmed.
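    The windowing step under discussion can be sketched with a separable 2D Hanning taper applied before the FFT; the taper goes smoothly to zero at the grid edges, avoiding the artificial long wavelengths that mirroring can introduce. The grids below are synthetic placeholders for topography and gravity:

```python
import numpy as np

# Pre-FFT windowing for 2D spectra: a separable Hanning taper applied to the
# (demeaned) gravity and topography grids before computing the cross-spectrum,
# one ingredient of the coherence estimate. The random grids are placeholders.

n = 128
w1d = np.hanning(n)
window = np.outer(w1d, w1d)          # separable 2D Hanning taper

rng = np.random.default_rng(0)
topo = rng.normal(size=(n, n))       # stand-in for topography
grav = rng.normal(size=(n, n))       # stand-in for Bouguer gravity

T = np.fft.fft2((topo - topo.mean()) * window)
G = np.fft.fft2((grav - grav.mean()) * window)
cross_power = (G * np.conj(T)).real  # ingredient of the coherence estimate
```

    A multitaper estimate replaces the single Hanning window with a family of orthogonal tapers and averages the resulting spectra, which reduces the variance of the coherence at the cost of some spectral resolution.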

  14. Accuracy of patient specific organ-dose estimates obtained using an automated image segmentation algorithm

    NASA Astrophysics Data System (ADS)

    Gilat-Schmidt, Taly; Wang, Adam; Coradi, Thomas; Haas, Benjamin; Star-Lack, Josh

    2016-03-01

    The overall goal of this work is to develop a rapid, accurate and fully automated software tool to estimate patient-specific organ doses from computed tomography (CT) scans using a deterministic Boltzmann Transport Equation solver and automated CT segmentation algorithms. This work quantified the accuracy of organ dose estimates obtained by an automated segmentation algorithm. The investigated algorithm uses a combination of feature-based and atlas-based methods. A multiatlas approach was also investigated. We hypothesize that the auto-segmentation algorithm is sufficiently accurate to provide organ dose estimates since random errors at the organ boundaries will average out when computing the total organ dose. To test this hypothesis, twenty head-neck CT scans were expertly segmented into nine regions. A leave-one-out validation study was performed, where every case was automatically segmented with each of the remaining cases used as the expert atlas, resulting in nineteen automated segmentations for each of the twenty datasets. The segmented regions were applied to gold-standard Monte Carlo dose maps to estimate mean and peak organ doses. The results demonstrated that the fully automated segmentation algorithm estimated the mean organ dose to within 10% of the expert segmentation for regions other than the spinal canal, with median error for each organ region below 2%. In the spinal canal region, the median error was 7% across all data sets and atlases, with a maximum error of 20%. The error in peak organ dose was below 10% for all regions, with a median error below 4% for all organ regions. The multiple-case atlas reduced the variation in the dose estimates and additional improvements may be possible with more robust multi-atlas approaches. Overall, the results support potential feasibility of an automated segmentation algorithm to provide accurate organ dose estimates.

  15. Proteinuria and Reduced Estimated Glomerular Filtration Rate Independently Predict Risk for Acute Myocardial Infarction: Findings from a Population-Based Study in Keelung, Taiwan.

    PubMed

    Chang, Shu-Hsuan; Tsai, Chia-Ti; Yen, Amy Ming-Fang; Lei, Meng-Huan; Chen, Hsiu-Hsi; Tseng, Chuen-Den

    2015-03-01

    The aim of this study was to evaluate the independent roles of proteinuria and reduced estimated glomerular filtration rate (GFR) in the development of acute myocardial infarction in a northern Taiwanese population. We conducted a community-based prospective cohort study in Keelung, the northernmost county of Taiwan. A total of 63,129 subjects (63% women) ≥ 20 years of age who had no history of coronary heart disease were recruited and followed up. Univariate and multivariate proportional hazards regression analysis was performed to assess the association between proteinuria and estimated GFR and the risk of acute myocardial infarction. There were 305 new cases of acute myocardial infarction (114 women and 191 men) documented during a four-year follow-up period. After adjustment for potential confounding covariates, heavier proteinuria (dipstick urinalysis reading 3+) and estimated GFR of less than 60 ml/min/1.73 m² independently predicted increased risk of developing acute myocardial infarction. The adjusted hazard ratio (aHR) of heavier proteinuria for occurrence of acute myocardial infarction was 1.85 [95% confidence interval (CI), 1.17-2.91, p < 0.01] (vs. the reference group: negative dipstick proteinuria). The aHR of estimated GFR of 30-59 ml/min/1.73 m² for occurrence of acute myocardial infarction was 2.4 (95% CI, 1.31-4.38, p < 0.01) (vs. the reference group: estimated GFR ≥ 90 ml/min/1.73 m²), and that of estimated GFR of 15-29 ml/min/1.73 m² was 5.26 (95% CI, 2.26-12.26, p < 0.01). We demonstrated that both heavier proteinuria and lower estimated GFR are significant independent predictors of future acute myocardial infarction in a northern Taiwanese population. Keywords: Acute myocardial infarction; Estimated glomerular filtration rate; Proteinuria.

  16. An iterative procedure for obtaining maximum-likelihood estimates of the parameters for a mixture of normal distributions, Addendum

    NASA Technical Reports Server (NTRS)

    Peters, B. C., Jr.; Walker, H. F.

    1975-01-01

    New results and insights concerning a previously published iterative procedure for obtaining maximum-likelihood estimates of the parameters for a mixture of normal distributions were discussed. It was shown that the procedure converges locally to the consistent maximum likelihood estimate as long as a specified parameter is bounded between two limits. Bound values were given to yield optimal local convergence.
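    The iterative MLE procedure for a normal mixture is the familiar alternation between computing component responsibilities and updating weighted parameter estimates, i.e. an EM-style iteration of the kind the paper analyzes. A compact two-component, one-dimensional sketch on synthetic data (not the paper's exact formulation):

```python
import numpy as np

# EM-style iteration for a two-component normal mixture MLE:
# E-step computes responsibilities, M-step updates weights, means, variances.
# Data and starting values are illustrative.

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(-2.0, 1.0, 300), rng.normal(3.0, 1.0, 700)])

w, mu, var = np.array([0.5, 0.5]), np.array([-1.0, 1.0]), np.array([1.0, 1.0])

def loglik():
    comp = w * np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
    return np.log(comp.sum(axis=1)).sum()

ll = [loglik()]
for _ in range(50):
    # E-step: posterior responsibility of each component for each point
    comp = w * np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
    r = comp / comp.sum(axis=1, keepdims=True)
    # M-step: weighted maximum-likelihood updates
    nk = r.sum(axis=0)
    w = nk / x.size
    mu = (r * x[:, None]).sum(axis=0) / nk
    var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / nk
    ll.append(loglik())
```

    The log-likelihood is nondecreasing at every step, consistent with the local convergence to the consistent MLE that the paper establishes.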

  17. Challenges in Obtaining Estimates of the Risk of Tuberculosis Infection During Overseas Deployment.

    PubMed

    Mancuso, James D; Geurts, Mia

    2015-12-01

    Estimates of the risk of tuberculosis (TB) infection resulting from overseas deployment among U.S. military service members have varied widely and have been plagued by methodological problems. The purpose of this study was to estimate the incidence of TB infection in the U.S. military resulting from deployment. Three populations were examined: 1) a unit of 2,228 soldiers redeploying from Iraq in 2008, 2) a cohort of 1,978 soldiers followed up over 5 years after basic training at Fort Jackson in 2009, and 3) 6,062 participants in the 2011-2012 National Health and Nutrition Examination Survey (NHANES). The risk of TB infection in the deployed population was low (0.6%; 95% confidence interval [CI]: 0.1-2.3%) and was similar to that in the non-deployed population. The prevalence of latent TB infection (LTBI) in the U.S. population was not significantly different among deployed and non-deployed veterans and those with no military service. The limitations of these retrospective studies highlight the challenge of obtaining valid estimates of risk using retrospective data and the need for a more definitive study. Similar to civilian long-term travelers, risks for TB infection during deployment are focal in nature, and testing should be targeted to only those at increased risk. © The American Society of Tropical Medicine and Hygiene.

  18. Fishery-independent surface abundance and density estimates of swordfish (Xiphias gladius) from aerial surveys in the Central Mediterranean Sea

    NASA Astrophysics Data System (ADS)

    Lauriano, Giancarlo; Pierantonio, Nino; Kell, Laurence; Cañadas, Ana; Donovan, Gregory; Panigada, Simone

    2017-07-01

    Fishery-independent surface density and abundance estimates for the swordfish were obtained through aerial surveys carried out over a large portion of the Central Mediterranean, implementing distance sampling methodologies. Both design- and model-based abundance and density showed an uneven occurrence of the species throughout the study area, with clusters of higher density occurring near converging fronts, strong thermoclines and/or underwater features. The surface abundance was estimated for the Pelagos Sanctuary for Mediterranean Marine Mammals in the summer of 2009 (n=1152; 95%CI=669.0-1981.0; %CV=27.64), the Sea of Sardinia, the Pelagos Sanctuary and the Central Tyrrhenian Sea for the summer of 2010 (n=3401; 95%CI=2067.0-5596.0; %CV=25.51), and for the Southern Tyrrhenian Sea during the winter months of 2010-2011 (n=1228; 95%CI=578-2605; %CV=38.59). The Mediterranean swordfish stock deserves special attention in light of the heavy fishing pressures. Furthermore, the unreliability of fishery-related data has, to date, hampered our ability to effectively inform long-term conservation in the Mediterranean Region. Considering that the European countries have committed to protect the resources and all the marine-related economic and social dynamics upon which they depend, the information presented here constitutes useful data toward the international legal requirements under the Marine Strategy Framework Directive, the Common Fisheries Policy, the Habitats and Species Directive and the Directive on Maritime Spatial Planning, among others.
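    The design-based estimates above rest on the standard line-transect identity: detections within an effective strip half-width on either side of the trackline give a density, which scales to the study area. A toy calculation with hypothetical numbers (not the survey's actual effort, ESW, or counts):

```python
# Line-transect (distance sampling) surface abundance sketch:
# density = n / (2 * ESW * L), abundance = density * area.
# All numbers below are hypothetical, not the survey's.

n_sightings = 40        # animals detected at the surface
effort_km = 5000.0      # total transect length L
esw_km = 0.4            # effective strip half-width from the detection function
area_km2 = 90000.0      # size of the study block

density = n_sightings / (2.0 * esw_km * effort_km)  # animals per km^2
abundance = density * area_km2
```

    In practice the effective strip half-width comes from fitting a detection function to the perpendicular sighting distances, and the %CV values quoted in the abstract propagate the uncertainty of that fit together with encounter-rate variance.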

  19. Model independent constraints on transition redshift

    NASA Astrophysics Data System (ADS)

    Jesus, J. F.; Holanda, R. F. L.; Pereira, S. H.

    2018-05-01

    This paper aims to put constraints on the transition redshift zt, which determines the onset of cosmic acceleration, in cosmological-model-independent frameworks. In order to perform our analyses, we consider a flat universe and assume a parametrization for the comoving distance DC(z) up to third degree in z, a second-degree parametrization for the Hubble parameter H(z) and a linear parametrization for the deceleration parameter q(z). For each case, we show that type Ia supernovae and H(z) data complement each other in the parameter space and tighter constraints for the transition redshift are obtained. By combining the type Ia supernovae observations and Hubble parameter measurements it is possible to constrain the values of zt, for each approach, as 0.806 ± 0.094, 0.870 ± 0.063 and 0.973 ± 0.058 at 1σ c.l., respectively. Thus, such approaches provide cosmological-model-independent estimates for this parameter.
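    For the linear deceleration-parameter case, the transition redshift follows in one line: with q(z) = q0 + q1·z, acceleration sets in where q changes sign, i.e. zt = -q0/q1. A sketch with illustrative fit values chosen to land near the paper's zt ≈ 0.97 for this parametrization (not its actual posteriors):

```python
# Transition redshift for the linear deceleration-parameter parametrization:
# q(z) = q0 + q1 * z vanishes at zt = -q0 / q1. The q0, q1 values are
# illustrative, not the paper's fitted posteriors.

q0, q1 = -0.6, 0.617     # illustrative: q0 < 0 (accelerating today), q1 > 0
zt = -q0 / q1            # transition redshift where q(zt) = 0
```

    The same logic applies to the other two parametrizations: zt is the root of the deceleration parameter implied by the fitted DC(z) or H(z) expansion, which is why each parametrization yields its own zt estimate.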

  20. Human papillomavirus (HPV) vaccination coverage in young Australian women is higher than previously estimated: independent estimates from a nationally representative mobile phone survey.

    PubMed

    Brotherton, Julia M L; Liu, Bette; Donovan, Basil; Kaldor, John M; Saville, Marion

    2014-01-23

    Accurate estimates of coverage are essential for estimating the population effectiveness of human papillomavirus (HPV) vaccination. Australia has a purpose built National HPV Vaccination Program Register for monitoring coverage, however notification of doses administered to young women in the community during the national catch-up program (2007-2009) was not compulsory. In 2011, we undertook a population-based mobile phone survey of young women to independently estimate HPV vaccination coverage. Randomly generated mobile phone numbers were dialed to recruit women aged 22-30 (age eligible for HPV vaccination) to complete a computer assisted telephone interview. Consent was sought to validate self reported HPV vaccination status against the national register. Coverage rates were calculated based on self report and weighted to the age and state of residence structure of the Australian female population. These were compared with coverage estimates from the register using Australian Bureau of Statistics estimated resident populations as the denominator. Among the 1379 participants, the national estimate for self reported HPV vaccination coverage for doses 1/2/3, respectively, weighted for age and state of residence, was 64/59/53%. This compares with coverage of 55/45/32% and 49/40/28% based on register records, using 2007 and 2011 population data as the denominators respectively. Some significant differences in coverage between the states were identified. 20% (223) of women returned a consent form allowing validation of doses against the register and provider records: among these women 85.6% (538) of self reported doses were confirmed. We confirmed that coverage rates for young women vaccinated in the community (at age 18-26 years) are underestimated by the national register and that under-notification is greater for second and third doses. Using 2011 population estimates, rather than estimates contemporaneous with the program rollout, reduces register-based coverage

  1. The first step toward genetic selection for host tolerance to infectious pathogens: obtaining the tolerance phenotype through group estimates

    PubMed Central

    Doeschl-Wilson, Andrea B.; Villanueva, Beatriz; Kyriazakis, Ilias

    2012-01-01

    Reliable phenotypes are paramount for meaningful quantification of genetic variation and for estimating individual breeding values on which genetic selection is based. In this paper, we assert that genetic improvement of host tolerance to disease, although desirable, may be first of all handicapped by the ability to obtain unbiased tolerance estimates at a phenotypic level. In contrast to resistance, which can be inferred by appropriate measures of within-host pathogen burden, tolerance is more difficult to quantify as it refers to change in performance with respect to changes in pathogen burden. For this reason, tolerance phenotypes have only been specified at the level of a group of individuals, where such phenotypes can be estimated using regression analysis. However, few studies have raised the potential bias in these estimates resulting from confounding effects between resistance and tolerance. Using a simulation approach, we demonstrate (i) how these group tolerance estimates depend on within-group variation and co-variation in resistance, tolerance, and vigor (performance in a pathogen-free environment); and (ii) how tolerance estimates are affected by changes in pathogen virulence over the time course of infection and by the timing of measurements. We found that in order to obtain reliable group tolerance estimates, it is important to account for individual variation in vigor, if present, and that all individuals are at the same stage of infection when measurements are taken. The latter requirement makes estimation of tolerance based on cross-sectional field data challenging, as individuals become infected at different time points and the individual onset of infection is unknown. Repeated individual measurements of within-host pathogen burden and performance would not only be valuable for inferring the infection status of individuals in field conditions, but would also provide tolerance estimates that capture the entire time course of infection. PMID

  2. Blood flow estimation in gastroscopic true-color images

    NASA Astrophysics Data System (ADS)

    Jacoby, Raffael S.; Herpers, Rainer; Zwiebel, Franz M.; Englmeier, Karl-Hans

    1995-05-01

    The assessment of blood flow in the gastrointestinal mucosa might be an important factor for the diagnosis and treatment of several diseases such as ulcers, gastritis, colitis, or early cancer. The quantity of blood flow is roughly estimated by computing the spatial hemoglobin distribution in the mucosa. The presented method enables a practical realization by approximately calculating the hemoglobin concentration from a spectrophotometric analysis of endoscopic true-color images, which are recorded during routine examinations. A system model based on the reflectance spectroscopic law of Kubelka-Munk is derived, which enables estimation of the hemoglobin concentration from the color values of the images. Additionally, a transformation of the color values is developed in order to improve luminance independence. Applying this transformation and estimating the hemoglobin concentration for each pixel of interest, the hemoglobin distribution can be computed. The obtained results are largely independent of luminance. An initial validation of the presented method is performed by a quantitative assessment of its reproducibility.
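    The Kubelka-Munk step behind this estimate can be sketched directly: for diffuse reflectance R, the K-M function F(R) = (1 − R)²/(2R) is proportional to the absorption-to-scattering ratio, and absorption scales with the absorber (here hemoglobin) concentration. The reflectance values and the identification of F(R) with relative concentration below are simplifying assumptions, not the paper's full color-value model:

```python
import numpy as np

# Kubelka-Munk function: F(R) = (1 - R)^2 / (2 R) is proportional to K/S,
# the absorption-to-scattering ratio, and K scales with hemoglobin
# concentration. Mapping endoscope RGB values to diffuse reflectance R and
# calibrating the proportionality constant are simplified away here.

def km_function(R):
    R = np.asarray(R, dtype=float)
    return (1.0 - R) ** 2 / (2.0 * R)

# illustrative: higher hemoglobin -> lower reflectance -> larger F(R)
reflectance = np.array([0.8, 0.5, 0.3])
relative_hb = km_function(reflectance)   # proportional to concentration
```

    Because F(R) is monotone decreasing in R on (0, 1), a per-pixel reflectance map converts directly into a relative hemoglobin map, which is the spatial distribution the abstract describes.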

  3. Effects of Assuming Independent Component Failure Times, if They Are Actually Dependent, in a Series System.

    DTIC Science & Technology

    1984-10-26

    Keywords: test for independence; consistency of the product-limit estimator; dependent risks. … If we assume independence of the failure times associated with different failure modes when we really should use a bivariate (or multivariate) distribution, then what is the … dependencies may be present, then what is the magnitude of the estimation error? The third specific aim will attempt to obtain bounds on the

  4. An Optimal Estimation Method to Obtain Surface Layer Turbulent Fluxes from Profile Measurements

    NASA Astrophysics Data System (ADS)

    Kang, D.

    2015-12-01

    In the absence of direct turbulence measurements, the turbulence characteristics of the atmospheric surface layer are often derived from measurements of the surface layer mean properties based on Monin-Obukhov Similarity Theory (MOST). This approach requires two levels of the ensemble-mean wind, temperature, and water vapor, from which the fluxes of momentum, sensible heat, and water vapor can be obtained. When only one measurement level is available, the roughness heights and the assumed properties of the corresponding variables at those roughness heights are used. In practice, the temporal mean of a large number of samples is used in place of the ensemble mean. However, in many situations samples are taken at multiple levels, and it is desirable to derive the boundary layer flux properties using all measurements. In this study, we used an optimal estimation approach to derive surface layer properties based on all available measurements. This approach assumes that the samples are taken from a population whose ensemble-mean profile follows MOST. An optimized estimate is obtained when the results yield a minimum of a cost function defined as a weighted sum of the error variances at each sample altitude. The weights are based on the sample data variance and the altitude of the measurements. This method was applied to measurements in the marine atmospheric surface layer made from a small boat using a radiosonde on a tethered balloon, with which temperature and relative humidity profiles in the lowest 50 m were measured repeatedly, each in about 30 minutes. We will present the resultant fluxes and the derived MOST mean profiles using different sets of measurements. The advantage of this method over the 'traditional' methods will be illustrated. Some limitations of this optimization method will also be discussed. Its application to quantifying the effects of the marine surface layer environment on radar and communication signal propagation will be shown as well.
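
    A toy version of fitting a surface-layer profile to multi-level samples by minimizing a weighted sum of squared errors might look like this (a neutral log wind profile is assumed for simplicity; all values, including the per-level weights, are hypothetical, not the study's data):

```python
import numpy as np

# Synthetic multi-level wind samples following a neutral log profile
# u(z) = (u_star / kappa) * ln(z / z0); values here are assumptions.
kappa, u_star_true, z0 = 0.4, 0.3, 1e-3
rng = np.random.default_rng(0)
z = np.array([2.0, 5.0, 10.0, 20.0, 40.0])       # measurement altitudes (m)
u = (u_star_true / kappa) * np.log(z / z0) + rng.normal(0, 0.05, z.size)

# Weighted least squares: minimize sum_i w_i * (u_i - model(z_i))^2,
# with weights from an assumed per-level sample variance.
w = 1.0 / np.full(z.size, 0.05**2)
X = np.log(z / z0)[:, None]                      # design matrix, slope = u*/kappa
Wsqrt = np.sqrt(w)
slope, *_ = np.linalg.lstsq(Wsqrt[:, None] * X, Wsqrt * u, rcond=None)
u_star_est = kappa * slope[0]                    # recovered friction velocity
```

    Using all available levels in one weighted fit, rather than any single pair of levels, is the essence of the optimal-estimation idea described above.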

  5. Estimates of the solar internal angular velocity obtained with the Mt. Wilson 60-foot solar tower

    NASA Technical Reports Server (NTRS)

    Rhodes, Edward J., Jr.; Cacciani, Alessandro; Woodard, Martin; Tomczyk, Steven; Korzennik, Sylvain

    1987-01-01

    Estimates are obtained of the solar internal angular velocity from measurements of the frequency splittings of p-mode oscillations. A 16-day time series of full-disk Dopplergrams obtained during July and August 1984 at the 60-foot tower telescope of the Mt. Wilson Observatory is analyzed. Power spectra were computed for all of the zonal, tesseral, and sectoral p-modes from l = 0 to 89 and for all of the sectoral p-modes from l = 90 to 200. A mean power spectrum was calculated for each degree up to 89. The frequency differences of all of the different nonzonal modes were calculated for these mean power spectra.

  6. A comparison of U.S. Geological Survey three-dimensional model estimates of groundwater source areas and velocities to independently derived estimates, Idaho National Laboratory and vicinity, Idaho

    USGS Publications Warehouse

    Fisher, Jason C.; Rousseau, Joseph P.; Bartholomay, Roy C.; Rattray, Gordon W.

    2012-01-01

    The U.S. Geological Survey (USGS), in cooperation with the U.S. Department of Energy, evaluated a three-dimensional model of groundwater flow in the fractured basalts and interbedded sediments of the eastern Snake River Plain aquifer at and near the Idaho National Laboratory to determine if model-derived estimates of groundwater movement are consistent with (1) results from previous studies on water chemistry type, (2) the geochemical mixing at an example well, and (3) independently derived estimates of the average linear groundwater velocity. Simulated steady-state flow fields were analyzed using backward particle-tracking simulations that were based on a modified version of the particle tracking program MODPATH. Model results were compared to the 5-microgram-per-liter lithium contour interpreted to represent the transition from a water type that is primarily composed of tributary valley underflow and streamflow-infiltration recharge to a water type primarily composed of regional aquifer water. This comparison indicates several shortcomings in the way the model represents flow in the aquifer. The eastward movement of tributary valley underflow and streamflow-infiltration recharge is overestimated in the north-central part of the model area and underestimated in the central part of the model area. Model inconsistencies can be attributed to large contrasts in hydraulic conductivity between hydrogeologic zones. Sources of water at well NPR-W01 were identified using backward particle tracking, and they were compared to the relative percentages of source water chemistry determined using geochemical mass balance and mixing models. The particle tracking results compare reasonably well with the chemistry results for groundwater derived from surface-water sources (-28 percent error), but overpredict the proportion of groundwater derived from regional aquifer water (108 percent error) and underpredict the proportion of groundwater derived from tributary valley underflow

  7. 30 CFR 45.3 - Identification of independent contractors.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 30 Mineral Resources 1 2010-07-01 2010-07-01 false Identification of independent contractors. 45.3... ADMINISTRATIVE REQUIREMENTS INDEPENDENT CONTRACTORS § 45.3 Identification of independent contractors. (a) Any independent contractor may obtain a permanent MSHA identification number. To obtain an identification number...

  8. An independent determination of the local Hubble constant

    NASA Astrophysics Data System (ADS)

    Fernández Arenas, David; Terlevich, Elena; Terlevich, Roberto; Melnick, Jorge; Chávez, Ricardo; Bresolin, Fabio; Telles, Eduardo; Plionis, Manolis; Basilakos, Spyros

    2018-02-01

    The relationship between the integrated Hβ line luminosity and the velocity dispersion of the ionized gas of H II galaxies and giant H II regions represents an exciting standard candle that presently can be used up to redshifts z ∼ 4. Locally it is used to obtain precise measurements of the Hubble constant by combining the slope of the relation obtained from nearby (z ≤ 0.2) H II galaxies with the zero-point determined from giant H II regions belonging to an `anchor sample' of galaxies for which accurate redshift-independent distance moduli are available. We present new data for 36 giant H II regions in 13 galaxies of the anchor sample that includes the megamaser galaxy NGC 4258. Our data are the result of the first 4 yr of observation of our primary sample of 130 giant H II regions in 73 galaxies with Cepheid-determined distances. Our best estimate of the Hubble parameter is 71.0 ± 2.8 (random) ± 2.1 (systematic) km s⁻¹ Mpc⁻¹. This result is the product of an independent approach and, although at present less precise than the latest SNIa results, it is amenable to substantial improvement.

  9. Maternal versus Professional Estimates of Developmental Status for Young Children with Handicaps: An Ecological Approach.

    ERIC Educational Resources Information Center

    Sexton, David; And Others

    1990-01-01

    The study compared maternal judgments about the development of their young disabled children with independently obtained developmental testing data for 53 children. Results indicated (1) maternal and professional estimates were highly correlated; (2) mothers systematically provided higher estimates across developmental domains; and (3) child IQ…

  10. Demonstration of precise estimation of polar motion parameters with the global positioning system: Initial results

    NASA Technical Reports Server (NTRS)

    Lichten, S. M.

    1991-01-01

    Data from the Global Positioning System (GPS) were used to determine precise polar motion estimates. Conservatively calculated formal errors of the GPS least squares solution are approx. 10 cm. The GPS estimates agree with independently determined polar motion values from very long baseline interferometry (VLBI) at the 5 cm level. The data were obtained from a partial constellation of GPS satellites and from a sparse worldwide distribution of ground stations. The accuracy of the GPS estimates should continue to improve as more satellites and ground receivers become operational, and eventually a near real time GPS capability should be available. Because the GPS data are obtained and processed independently from the large radio antennas at the Deep Space Network (DSN), GPS estimation could provide very precise measurements of Earth orientation for calibration of deep space tracking data and could significantly relieve the ever growing burden on the DSN radio telescopes to provide Earth platform calibrations.

  11. A simple linear model for estimating ozone AOT40 at forest sites from raw passive sampling data.

    PubMed

    Ferretti, Marco; Cristofolini, Fabiana; Cristofori, Antonella; Gerosa, Giacomo; Gottardini, Elena

    2012-08-01

    A rapid, empirical method is described for estimating weekly AOT40 from ozone concentrations measured with passive samplers at forest sites. The method is based on linear regression and was developed after three years of measurements in Trentino (northern Italy). It was tested against an independent set of data from passive sampler sites across Italy. It provides good weekly estimates compared with those measured by conventional monitors (0.85 ≤ R² ≤ 0.970; 97 ≤ RMSE ≤ 302). Estimates obtained using passive sampling at forest sites are comparable to those obtained by another estimation method based on modelling hourly concentrations (R² = 0.94; 131 ≤ RMSE ≤ 351). Regression coefficients of passive sampling are similar to those obtained with conventional monitors at forest sites. Testing against an independent dataset generated by passive sampling provided similar results (0.86 ≤ R² ≤ 0.99; 65 ≤ RMSE ≤ 478). Errors tend to accumulate when weekly AOT40 estimates are summed to obtain the total AOT40 over the May-July period, and the median deviation between the two estimation methods based on passive sampling is 11%. The method proposed does not require any assumptions, complex calculation or modelling technique, and can be useful when other estimation methods are not feasible, either in principle or in practice. However, the method is not useful when estimates of hourly concentrations are of interest.
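
    The regression step can be sketched as follows (all numbers are hypothetical, not the paper's coefficients): fit weekly AOT40 from a co-located conventional monitor against the weekly mean passive-sampler concentration, then apply the fitted line to new passive-sampler data:

```python
import numpy as np

# Hypothetical paired calibration data: weekly mean ozone from passive
# samplers (ppb) and weekly AOT40 (ppb h) from a co-located monitor.
conc = np.array([30.0, 40.0, 50.0, 60.0, 70.0])
aot40 = np.array([150.0, 420.0, 700.0, 980.0, 1240.0])

# Ordinary least squares: AOT40 ≈ a * conc + b
a, b = np.polyfit(conc, aot40, 1)

# Estimate for a new week with a passive-sampler mean of 55 ppb.
predicted = a * 55.0 + b
```

    Weekly predictions like this one would then be summed over May-July to obtain the seasonal AOT40, which is where the abstract notes that errors tend to accumulate.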

  12. Comparison of internal dose estimates obtained using organ-level, voxel S value, and Monte Carlo techniques

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Grimes, Joshua, E-mail: grimes.joshua@mayo.edu; Celler, Anna

    2014-09-15

    Purpose: The authors’ objective was to compare internal dose estimates obtained using the Organ Level Dose Assessment with Exponential Modeling (OLINDA/EXM) software, the voxel S value technique, and Monte Carlo simulation. Monte Carlo dose estimates were used as the reference standard to assess the impact of patient-specific anatomy on the final dose estimate. Methods: Six patients injected with 99mTc-hydrazinonicotinamide-Tyr3-octreotide were included in this study. A hybrid planar/SPECT imaging protocol was used to estimate 99mTc time-integrated activity coefficients (TIACs) for kidneys, liver, spleen, and tumors. Additionally, TIACs were predicted for 131I, 177Lu, and 90Y assuming the same biological half-lives as the 99mTc-labeled tracer. The TIACs were used as input for OLINDA/EXM for organ-level dose calculation, and voxel-level dosimetry was performed using the voxel S value method and Monte Carlo simulation. Dose estimates for 99mTc, 131I, 177Lu, and 90Y distributions were evaluated by comparing (i) organ-level S values corresponding to each method, (ii) total tumor and organ doses, (iii) differences in right and left kidney doses, and (iv) voxelized dose distributions calculated by Monte Carlo and the voxel S value technique. Results: The S values for all investigated radionuclides used by OLINDA/EXM and the corresponding patient-specific S values calculated by Monte Carlo agreed within 2.3% on average for self-irradiation, and differed by as much as 105% for cross-organ irradiation. Total organ doses calculated by OLINDA/EXM and the voxel S value technique agreed with Monte Carlo results within approximately ±7%. Differences between right and left kidney doses determined by Monte Carlo were as high as 73%. Comparison of the Monte Carlo and voxel S value dose distributions showed that each method produced similar dose volume histograms with a minimum dose covering 90% of the volume

  13. An iterative procedure for obtaining maximum-likelihood estimates of the parameters for a mixture of normal distributions

    NASA Technical Reports Server (NTRS)

    Peters, B. C., Jr.; Walker, H. F.

    1978-01-01

    This paper addresses the problem of obtaining numerically maximum-likelihood estimates of the parameters for a mixture of normal distributions. In recent literature, a certain successive-approximations procedure, based on the likelihood equations, was shown empirically to be effective in numerically approximating such maximum-likelihood estimates; however, the reliability of this procedure was not established theoretically. Here, we introduce a general iterative procedure, of the generalized steepest-ascent (deflected-gradient) type, which is just the procedure known in the literature when the step-size is taken to be 1. We show that, with probability 1 as the sample size grows large, this procedure converges locally to the strongly consistent maximum-likelihood estimate whenever the step-size lies between 0 and 2. We also show that the step-size which yields optimal local convergence rates for large samples is determined in a sense by the 'separation' of the component normal densities and is bounded below by a number between 1 and 2.
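
    The step-size-1 case of this iteration is the familiar successive-substitution (EM-style) update for a normal mixture; a minimal two-component sketch on synthetic data:

```python
import numpy as np

def em_two_normals(x, mu, n_iter=200):
    """Successive-approximations updates for a mixture of two normal
    densities; this is the paper's iteration with step size 1."""
    pi, sigma = np.array([0.5, 0.5]), np.array([1.0, 1.0])
    mu = np.array(mu, dtype=float)
    for _ in range(n_iter):
        # E-step: posterior responsibility of each component for each point
        dens = np.array([pi[k] / (sigma[k] * np.sqrt(2 * np.pi))
                         * np.exp(-0.5 * ((x - mu[k]) / sigma[k]) ** 2)
                         for k in range(2)])
        resp = dens / dens.sum(axis=0)
        # M-step: re-estimate weights, means, and standard deviations
        nk = resp.sum(axis=1)
        pi = nk / x.size
        mu = (resp * x).sum(axis=1) / nk
        sigma = np.sqrt((resp * (x - mu[:, None]) ** 2).sum(axis=1) / nk)
    return pi, mu, sigma

rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(-2, 1, 500), rng.normal(3, 1, 500)])
pi, mu, sigma = em_two_normals(x, mu=[-1.0, 1.0])
```

    For the well-separated components used here, the iteration converges quickly; the paper's contribution is analyzing convergence for general step sizes between 0 and 2.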

  14. LC-MS/MS-based approach for obtaining exposure estimates of metabolites in early clinical trials using radioactive metabolites as reference standards.

    PubMed

    Zhang, Donglu; Raghavan, Nirmala; Chando, Theodore; Gambardella, Janice; Fu, Yunlin; Zhang, Duxi; Unger, Steve E; Humphreys, W Griffith

    2007-12-01

    An LC-MS/MS-based approach that employs authentic radioactive metabolites as reference standards was developed to estimate metabolite exposures in early drug development studies. This method is useful to estimate metabolite levels in studies done with non-radiolabeled compounds where metabolite standards are not available to allow standard LC-MS/MS assay development. A metabolite mixture obtained from an in vivo source treated with a radiolabeled compound was partially purified, quantified, and spiked into human plasma to provide metabolite standard curves. Metabolites were analyzed by LC-MS/MS using the specific mass transitions and an internal standard. The metabolite concentrations determined by this approach were found to be comparable to those determined by valid LC-MS/MS assays. This approach does not require synthesis of authentic metabolites or knowledge of the exact structures of metabolites, and therefore should provide a useful method to obtain early estimates of circulating metabolites in early clinical or toxicological studies.

  15. Estimation of two ordered mean residual lifetime functions.

    PubMed

    Ebrahimi, N

    1993-06-01

    In many statistical studies involving failure data, biometric mortality data, and actuarial data, mean residual lifetime (MRL) function is of prime importance. In this paper we introduce the problem of nonparametric estimation of a MRL function on an interval when this function is bounded from below by another such function (known or unknown) on that interval, and derive the corresponding two functional estimators. The first is to be used when there is a known bound, and the second when the bound is another MRL function to be estimated independently. Both estimators are obtained by truncating the empirical estimator discussed by Yang (1978, Annals of Statistics 6, 112-117). In the first case, it is truncated at a known bound; in the second, at a point somewhere between the two empirical estimates. Consistency of both estimators is proved, and a pointwise large-sample distribution theory of the first estimator is derived.
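
    A sketch of the known-bound case: compute Yang's empirical MRL and truncate it at the bound so the order constraint m(t) ≥ b(t) holds (synthetic exponential data, for which the true MRL is constant and equal to the scale):

```python
import numpy as np

def empirical_mrl(x, t):
    """Yang's empirical mean residual life: mean of (X - t) over X > t."""
    tail = x[x > t]
    return tail.mean() - t if tail.size else 0.0

def truncated_mrl(x, t, lower_bound):
    """Estimator truncated at a known lower bound b(t): take
    max(empirical MRL, bound) so the order constraint holds."""
    return max(empirical_mrl(x, t), lower_bound)

rng = np.random.default_rng(2)
x = rng.exponential(scale=2.0, size=5000)   # memoryless: true MRL is 2 at any t
m_hat = truncated_mrl(x, t=1.0, lower_bound=1.5)
```

    In the second case treated by the paper, `lower_bound` would itself be replaced by an independently estimated MRL function, with truncation at a point between the two empirical estimates.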

  16. A time-frequency analysis method to obtain stable estimates of magnetotelluric response function based on Hilbert-Huang transform

    NASA Astrophysics Data System (ADS)

    Cai, Jianhua

    2017-05-01

    The time-frequency analysis method represents signal as a function of time and frequency, and it is considered a powerful tool for handling arbitrary non-stationary time series by using instantaneous frequency and instantaneous amplitude. It also provides a possible alternative to the analysis of the non-stationary magnetotelluric (MT) signal. Based on the Hilbert-Huang transform (HHT), a time-frequency analysis method is proposed to obtain stable estimates of the magnetotelluric response function. In contrast to conventional methods, the response function estimation is performed in the time-frequency domain using instantaneous spectra rather than in the frequency domain, which allows for imaging the response parameter content as a function of time and frequency. The theory of the method is presented and the mathematical model and calculation procedure, which are used to estimate response function based on HHT time-frequency spectrum, are discussed. To evaluate the results, response function estimates are compared with estimates from a standard MT data processing method based on the Fourier transform. All results show that apparent resistivities and phases, which are calculated from the HHT time-frequency method, are generally more stable and reliable than those determined from the simple Fourier analysis. The proposed method overcomes the drawbacks of the traditional Fourier methods, and the resulting parameter minimises the estimation bias caused by the non-stationary characteristics of the MT data.

  17. Obtaining Parts

    Science.gov Websites

    The Cosmic Connection: Parts for the Berkeley Detector. Suppliers: Scintillator: Eljen Technology … obtain the components needed to build the Berkeley Detector. These companies have helped previous … the last update. He estimates that the cost to build a detector varies from $1500 to $2700 depending …

  18. Efficient bootstrap estimates for tail statistics

    NASA Astrophysics Data System (ADS)

    Breivik, Øyvind; Aarnes, Ole Johan

    2017-03-01

    Bootstrap resamples can be used to investigate the tail of empirical distributions as well as return value estimates from the extremal behaviour of the sample. Specifically, the confidence intervals on return value estimates or bounds on in-sample tail statistics can be obtained using bootstrap techniques. However, non-parametric bootstrapping from the entire sample is expensive. It is shown here that it suffices to bootstrap from a small subset consisting of the highest entries in the sequence to make estimates that are essentially identical to bootstraps from the entire sample. Similarly, bootstrap estimates of confidence intervals of threshold return estimates are found to be well approximated by using a subset consisting of the highest entries. This has practical consequences in fields such as meteorology, oceanography and hydrology where return values are calculated from very large gridded model integrations spanning decades at high temporal resolution or from large ensembles of independent and identically distributed model fields. In such cases the computational savings are substantial.
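
    A plain full-sample bootstrap percentile interval for a tail statistic, the baseline that the subset shortcut approximates, can be sketched as follows (the distribution and sample sizes are arbitrary illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(3)
sample = rng.gumbel(loc=0.0, scale=1.0, size=2000)   # extreme-value-type tail

# Bootstrap the 99th percentile: resample with replacement, recompute.
stats = np.array([
    np.percentile(rng.choice(sample, sample.size, replace=True), 99)
    for _ in range(500)
])
point = np.percentile(sample, 99)
low, high = np.percentile(stats, [2.5, 97.5])   # 95% percentile interval
```

    The paper's observation is that because the statistic depends only on the largest order statistics, resampling can be restricted to a small subset of the highest entries, which matters when the "sample" is a decades-long gridded model integration.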

  19. A Method of Estimating Low Turbulence Levels in Near Real Time Using Laser Anemometry

    NASA Technical Reports Server (NTRS)

    Goldman, Louis J.; Seasholtz, Richard G.

    2004-01-01

    Laser anemometry was used to make two independent measurements of the flow velocity by capturing individual Doppler signals with high-speed digitizing boards. The two independent measurements were cross-correlated to reduce the contribution of photo detector shot noise on the frequency determination and subsequently on the turbulence estimate. In addition, criteria were developed to eliminate "bad" Doppler bursts from the data set, which then allowed reasonable low turbulence estimates to be made. The laser anemometer measurements were obtained at the inlet of an annular cascade and at the exit of a flow calibration nozzle and were compared with hot-wire data.

  20. Obtaining parsimonious hydraulic conductivity fields using head and transport observations: A Bayesian geostatistical parameter estimation approach

    NASA Astrophysics Data System (ADS)

    Fienen, M.; Hunt, R.; Krabbenhoft, D.; Clemo, T.

    2009-08-01

    Flow path delineation is a valuable tool for interpreting the subsurface hydrogeochemical environment. Different types of data, such as groundwater flow and transport, inform different aspects of hydrogeologic parameter values (hydraulic conductivity in this case) which, in turn, determine flow paths. This work combines flow and transport information to estimate a unified set of hydrogeologic parameters using the Bayesian geostatistical inverse approach. Parameter flexibility is allowed by using a highly parameterized approach with the level of complexity informed by the data. Despite the effort to adhere to the ideal of minimal a priori structure imposed on the problem, extreme contrasts in parameters can result in the need to censor correlation across hydrostratigraphic bounding surfaces. These partitions segregate parameters into facies associations. With an iterative approach in which partitions are based on inspection of initial estimates, flow path interpretation is progressively refined through the inclusion of more types of data. Head observations, stable oxygen isotopes (18O/16O ratios), and tritium are all used to progressively refine flow path delineation on an isthmus between two lakes in the Trout Lake watershed, northern Wisconsin, United States. Despite allowing significant parameter freedom by estimating many distributed parameter values, a smooth field is obtained.

  1. Obtaining parsimonious hydraulic conductivity fields using head and transport observations: A Bayesian geostatistical parameter estimation approach

    USGS Publications Warehouse

    Fienen, M.; Hunt, R.; Krabbenhoft, D.; Clemo, T.

    2009-01-01

    Flow path delineation is a valuable tool for interpreting the subsurface hydrogeochemical environment. Different types of data, such as groundwater flow and transport, inform different aspects of hydrogeologic parameter values (hydraulic conductivity in this case) which, in turn, determine flow paths. This work combines flow and transport information to estimate a unified set of hydrogeologic parameters using the Bayesian geostatistical inverse approach. Parameter flexibility is allowed by using a highly parameterized approach with the level of complexity informed by the data. Despite the effort to adhere to the ideal of minimal a priori structure imposed on the problem, extreme contrasts in parameters can result in the need to censor correlation across hydrostratigraphic bounding surfaces. These partitions segregate parameters into facies associations. With an iterative approach in which partitions are based on inspection of initial estimates, flow path interpretation is progressively refined through the inclusion of more types of data. Head observations, stable oxygen isotopes (18O/16O ratios), and tritium are all used to progressively refine flow path delineation on an isthmus between two lakes in the Trout Lake watershed, northern Wisconsin, United States. Despite allowing significant parameter freedom by estimating many distributed parameter values, a smooth field is obtained.

  2. Model-independent Constraints on Cosmic Curvature and Opacity

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Guo-Jian; Li, Zheng-Xiang; Xia, Jun-Qing

    2017-09-20

    In this paper, we propose to estimate the spatial curvature of the universe and the cosmic opacity in a model-independent way with expansion rate measurements, H(z), and Type Ia supernovae (SNe Ia). On the one hand, using a nonparametric smoothing method (Gaussian processes), we reconstruct a function H(z) from opacity-free expansion rate measurements. Then, we integrate the H(z) to obtain the distance modulus μ_H, which is dependent on the cosmic curvature. On the other hand, distances of SNe Ia can be determined by their photometric observations and thus are opacity-dependent. In our analysis, by confronting distance moduli μ_H with those obtained from SNe Ia, we achieve estimations for both the spatial curvature and the cosmic opacity without any assumptions for the cosmological model. Here, it should be noted that the light curve fitting parameters, accounting for the distance estimation of SNe Ia, are determined in a global fit together with the cosmic opacity and spatial curvature to get rid of the dependence of these parameters on cosmology. In addition, we also investigate whether the inclusion of different priors for the present expansion rate (H₀: global estimation, 67.74 ± 0.46 km s⁻¹ Mpc⁻¹, and local measurement, 73.24 ± 1.74 km s⁻¹ Mpc⁻¹) exerts influence on the reconstructed H(z) and the subsequent estimations of the spatial curvature and cosmic opacity. Results show that, in general, a spatially flat and transparent universe is preferred by the observations. Moreover, it is suggested that priors for H₀ matter a lot. Finally, we find that there is a strong degeneracy between the curvature and the opacity.
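
    The integration step, turning an expansion history H(z) into a distance modulus μ_H, can be sketched as follows (an analytic flat ΛCDM H(z) stands in for the Gaussian-process reconstruction; parameter values are illustrative, and the flat-universe luminosity distance is assumed for simplicity):

```python
import numpy as np

C_KM_S = 299792.458          # speed of light, km/s

# A smooth (here analytic, for illustration) expansion history H(z);
# in the paper this comes from a Gaussian-process reconstruction.
H0, Om = 70.0, 0.3
z = np.linspace(0.0, 1.5, 301)
H = H0 * np.sqrt(Om * (1 + z) ** 3 + (1 - Om))   # km/s/Mpc

# Comoving distance D_C(z) = c * integral of dz'/H(z'), cumulative trapezoid.
dc = np.concatenate(
    ([0.0], np.cumsum(0.5 * (1 / H[1:] + 1 / H[:-1]) * np.diff(z)))
) * C_KM_S                                        # Mpc

# Flat-universe luminosity distance and distance modulus mu_H.
dl = (1 + z) * dc                                 # Mpc
mu_H = 5 * np.log10(np.maximum(dl, 1e-12)) + 25
```

    In the curvature-dependent case, D_C would be wrapped in the appropriate sin/sinh transformation before forming μ_H, which is what makes the comparison with SNe Ia sensitive to spatial curvature.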

  3. 45 CFR 1309.44 - Independent analysis.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 45 Public Welfare 4 2010-10-01 2010-10-01 false Independent analysis. 1309.44 Section 1309.44... § 1309.44 Independent analysis. (a) The responsible HHS official may direct the grantee applying for funds to acquire or make major renovations to a facility to obtain an independent analysis of the cost...

  4. 45 CFR 1309.44 - Independent analysis.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 45 Public Welfare 4 2011-10-01 2011-10-01 false Independent analysis. 1309.44 Section 1309.44... § 1309.44 Independent analysis. (a) The responsible HHS official may direct the grantee applying for funds to acquire or make major renovations to a facility to obtain an independent analysis of the cost...

  5. An iterative procedure for obtaining maximum-likelihood estimates of the parameters for a mixture of normal distributions, 2

    NASA Technical Reports Server (NTRS)

    Peters, B. C., Jr.; Walker, H. F.

    1976-01-01

    The problem of obtaining numerically maximum likelihood estimates of the parameters for a mixture of normal distributions is addressed. In recent literature, a certain successive approximations procedure, based on the likelihood equations, is shown empirically to be effective in numerically approximating such maximum-likelihood estimates; however, the reliability of this procedure was not established theoretically. Here, a general iterative procedure is introduced, of the generalized steepest-ascent (deflected-gradient) type, which is just the procedure known in the literature when the step-size is taken to be 1. With probability 1 as the sample size grows large, it is shown that this procedure converges locally to the strongly consistent maximum-likelihood estimate whenever the step-size lies between 0 and 2. The step-size which yields optimal local convergence rates for large samples is determined in a sense by the separation of the component normal densities and is bounded below by a number between 1 and 2.

  6. A convenient method of obtaining percentile norms and accompanying interval estimates for self-report mood scales (DASS, DASS-21, HADS, PANAS, and sAD).

    PubMed

    Crawford, John R; Garthwaite, Paul H; Lawrie, Caroline J; Henry, Julie D; MacDonald, Marie A; Sutherland, Jane; Sinha, Priyanka

    2009-06-01

    A series of recent papers has reported normative data from the general adult population for commonly used self-report mood scales. The aim of the present work was to bring together and supplement these data in order to provide a convenient means of obtaining percentile norms for the mood scales. A computer program was developed that provides point and interval estimates of the percentile rank corresponding to raw scores on the various self-report scales. The program can be used to obtain point and interval estimates of the percentile rank of an individual's raw scores on the DASS, DASS-21, HADS, PANAS, and sAD mood scales, based on normative sample sizes ranging from 758 to 3822. The interval estimates can be obtained using either classical or Bayesian methods as preferred. The computer program (which can be downloaded at www.abdn.ac.uk/~psy086/dept/MoodScore.htm) provides a convenient and reliable means of supplementing existing cut-off scores for self-report mood scales.
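
    A sketch of the point-and-interval idea (not the published program's algorithm): the percentile rank is the proportion of the normative sample scoring below the raw score, with a classical binomial interval around it; the Wilson score interval is used here as one such choice:

```python
import math

def percentile_rank(norm_scores, raw, z=1.96):
    """Point and interval estimate of the percentile rank of `raw`
    relative to a normative sample (normal-approximation interval;
    the published program also offers Bayesian intervals)."""
    n = len(norm_scores)
    below = sum(s < raw for s in norm_scores)
    ties = sum(s == raw for s in norm_scores)
    p = (below + 0.5 * ties) / n          # midpoint convention for ties
    # Wilson score interval for a binomial proportion
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return (100 * p,
            100 * max(0.0, centre - half),
            100 * min(1.0, centre + half))

scores = list(range(1, 101))              # hypothetical normative sample
point, low, high = percentile_rank(scores, raw=75)
```

    With larger normative samples (758 to 3822 in the paper), the interval narrows accordingly, which is why the interval estimate is worth reporting alongside the point percentile.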

  7. Motion estimation in the frequency domain using fuzzy c-planes clustering.

    PubMed

    Erdem, C E; Karabulut, G Z; Yanmaz, E; Anarim, E

    2001-01-01

    A recent work explicitly models the discontinuous motion estimation problem in the frequency domain, where the motion parameters are estimated using a harmonic retrieval approach. The vertical and horizontal components of the motion are independently estimated from the locations of the peaks of the respective periodogram analyses, and they are paired to obtain the motion vectors using a previously proposed procedure. In this paper, we present a more efficient method that replaces the motion-component pairing task and hence eliminates the problems of that pairing method. The method described in this paper uses the fuzzy c-planes (FCP) clustering approach to fit planes to three-dimensional (3-D) frequency domain data obtained from the peaks of the periodograms. Experimental results are provided to demonstrate the effectiveness of the proposed method.

  8. Uncertainty Estimates of Psychoacoustic Thresholds Obtained from Group Tests

    NASA Technical Reports Server (NTRS)

    Rathsam, Jonathan; Christian, Andrew

    2016-01-01

    Adaptive psychoacoustic test methods, in which the next signal level depends on the response to the previous signal, are the most efficient for determining psychoacoustic thresholds of individual subjects. In many tests conducted in the NASA psychoacoustic labs, the goal is to determine thresholds representative of the general population. To do this economically, non-adaptive testing methods are used in which three or four subjects are tested at the same time with predetermined signal levels. This approach requires us to identify techniques for assessing the uncertainty in the resulting group-average psychoacoustic thresholds. In this presentation we examine the Delta Method of frequentist statistics, the Generalized Linear Model (GLM), the Nonparametric Bootstrap (a frequentist method), and Markov Chain Monte Carlo Posterior Estimation (a Bayesian approach). Each technique is exercised on a manufactured, theoretical dataset and then on datasets from two psychoacoustics facilities at NASA. The Delta Method is the simplest to implement and accurate for the cases studied. The GLM is found to be the least robust, and the Bootstrap takes the longest to calculate. The Bayesian Posterior Estimate is the most versatile technique examined because it allows the inclusion of prior information.

  9. Independent and combined analyses of sequences from all three genomic compartments converge on the root of flowering plant phylogeny

    PubMed Central

    Barkman, Todd J.; Chenery, Gordon; McNeal, Joel R.; Lyons-Weiler, James; Ellisens, Wayne J.; Moore, Gerry; Wolfe, Andrea D.; dePamphilis, Claude W.

    2000-01-01

    Plant phylogenetic estimates are most likely to be reliable when congruent evidence is obtained independently from the mitochondrial, plastid, and nuclear genomes with all methods of analysis. Here, results are presented from separate and combined genomic analyses of new and previously published data, including six and nine genes (8,911 bp and 12,010 bp, respectively) for different subsets of taxa that suggest Amborella + Nymphaeales (water lilies) are the first-branching angiosperm lineage. Before and after tree-independent noise reduction, most individual genomic compartments and methods of analysis estimated the Amborella + Nymphaeales basal topology with high support. Previous phylogenetic estimates placing Amborella alone as the first extant angiosperm branch may have been misled because of a series of specific problems with paralogy, suboptimal outgroups, long-branch taxa, and method dependence. Ancestral character state reconstructions differ between the two topologies and affect inferences about the features of early angiosperms. PMID:11069280

  10. 32 CFR 169a.16 - Independent review.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 32 National Defense 1 2011-07-01 2011-07-01 false Independent review. 169a.16 Section 169a.16 National Defense Department of Defense OFFICE OF THE SECRETARY OF DEFENSE DEFENSE CONTRACTING COMMERCIAL ACTIVITIES PROGRAM PROCEDURES Procedures § 169a.16 Independent review. (a) The estimates of in-house and...

  11. 32 CFR 169a.16 - Independent review.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 32 National Defense 1 2013-07-01 2013-07-01 false Independent review. 169a.16 Section 169a.16 National Defense Department of Defense OFFICE OF THE SECRETARY OF DEFENSE DEFENSE CONTRACTING COMMERCIAL ACTIVITIES PROGRAM PROCEDURES Procedures § 169a.16 Independent review. (a) The estimates of in-house and...

  12. 32 CFR 169a.16 - Independent review.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 32 National Defense 1 2010-07-01 2010-07-01 false Independent review. 169a.16 Section 169a.16 National Defense Department of Defense OFFICE OF THE SECRETARY OF DEFENSE DEFENSE CONTRACTING COMMERCIAL ACTIVITIES PROGRAM PROCEDURES Procedures § 169a.16 Independent review. (a) The estimates of in-house and...

  13. Flight Investigation of Prescribed Simultaneous Independent Surface Excitations for Real-Time Parameter Identification

    NASA Technical Reports Server (NTRS)

    Moes, Timothy R.; Smith, Mark S.; Morelli, Eugene A.

    2003-01-01

    Near real-time stability and control derivative extraction is required to support flight demonstration of Intelligent Flight Control System (IFCS) concepts being developed by NASA, academia, and industry. Traditionally, flight maneuvers would be designed and flown to obtain stability and control derivative estimates using a postflight analysis technique. The goal of the IFCS concept is to be able to modify the control laws in real time for an aircraft that has been damaged in flight. In some IFCS implementations, real-time parameter identification (PID) of the stability and control derivatives of the damaged aircraft is necessary for successfully reconfiguring the control system. This report investigates the usefulness of Prescribed Simultaneous Independent Surface Excitations (PreSISE) to provide data for rapidly obtaining estimates of the stability and control derivatives. Flight test data were analyzed using both equation-error and output-error PID techniques. The equation-error PID technique is known as Fourier Transform Regression (FTR) and is a frequency-domain real-time implementation. Selected results were compared with a time-domain output-error technique. The real-time equation-error technique combined with the PreSISE maneuvers provided excellent derivative estimation in the longitudinal axis. However, the PreSISE maneuvers as presently defined were not adequate for accurate estimation of the lateral-directional derivatives.

  14. Greenhouse gases inventory and carbon balance of two dairy systems obtained from two methane-estimation methods.

    PubMed

    Cunha, C S; Lopes, N L; Veloso, C M; Jacovine, L A G; Tomich, T R; Pereira, L G R; Marcondes, M I

    2016-11-15

    The adoption of carbon inventories for dairy farms in tropical countries based on models developed from animals and diets of temperate climates is questionable. Thus, the objectives of this study were to estimate enteric methane (CH4) emissions through the SF6 tracer gas technique and through equations proposed by the Intergovernmental Panel on Climate Change (IPCC) Tier 2 and to calculate the inventory of greenhouse gas (GHG) emissions from two dairy systems. In addition, the carbon balance of these properties was estimated using enteric CH4 emissions obtained using both methodologies. In trial 1, the CH4 emissions were estimated from seven Holstein dairy cattle categories based on the SF6 tracer gas technique and on IPCC equations. The categories used in the study were prepubertal heifers (n=6); pubertal heifers (n=4); pregnant heifers (n=5); high-producing (n=6); medium-producing (n=5); low-producing (n=4) and dry cows (n=5). Enteric methane emission was higher for the category comprising prepubertal heifers when estimated by the equations proposed by the IPCC Tier 2. However, higher CH4 emissions were estimated by the SF6 technique in the categories including medium- and high-producing cows and dry cows. Pubertal heifers, pregnant heifers, and low-producing cows had equal CH4 emissions as estimated by both methods. In trial 2, two dairy farms were monitored for one year to identify all activities that contributed in any way to GHG emissions. The total emission from Farm 1 was 3.21t CO2e/animal/yr, of which 1.63t corresponded to enteric CH4. Farm 2 emitted 3.18t CO2e/animal/yr, with 1.70t of enteric CH4. IPCC estimations can underestimate CH4 emissions from some categories while overestimating others. However, considering the whole property, these discrepancies are offset and we would submit that the equations suggested by the IPCC properly estimate the total CH4 emission and carbon balance of the properties. Thus, the IPCC equations should be utilized with

  15. Computationally efficient confidence intervals for cross-validated area under the ROC curve estimates.

    PubMed

    LeDell, Erin; Petersen, Maya; van der Laan, Mark

    In binary classification problems, the area under the ROC curve (AUC) is commonly used to evaluate the performance of a prediction model. Often, it is combined with cross-validation in order to assess how the results will generalize to an independent data set. In order to evaluate the quality of an estimate for cross-validated AUC, we obtain an estimate of its variance. For massive data sets, the process of generating a single performance estimate can be computationally expensive. Additionally, when using a complex prediction method, the process of cross-validating a predictive model on even a relatively small data set can still require a large amount of computation time. Thus, in many practical settings, the bootstrap is a computationally intractable approach to variance estimation. As an alternative to the bootstrap, we demonstrate a computationally efficient influence curve based approach to obtaining a variance estimate for cross-validated AUC.
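The influence-curve idea can be illustrated with the closed-form (DeLong-style) variance for a single AUC computed from placement values; the paper's cross-validated version aggregates fold-level influence functions, which this sketch does not reproduce.

```python
import numpy as np

def auc_with_ic_variance(pos, neg):
    """AUC plus an influence-curve (DeLong placement-value) variance estimate."""
    pos, neg = np.asarray(pos, float), np.asarray(neg, float)
    # placement of each positive among negatives (0.5 credit for ties), and vice versa
    p = np.mean((pos[:, None] > neg) + 0.5 * (pos[:, None] == neg), axis=1)
    q = np.mean((neg[:, None] < pos) + 0.5 * (neg[:, None] == pos), axis=1)
    auc = p.mean()
    var = p.var(ddof=1) / len(pos) + q.var(ddof=1) / len(neg)
    return auc, var
```

Unlike the bootstrap, this requires a single pass over the scores, which is what makes influence-curve variance estimates attractive for massive data sets.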

  16. Computationally efficient confidence intervals for cross-validated area under the ROC curve estimates

    PubMed Central

    Petersen, Maya; van der Laan, Mark

    2015-01-01

    In binary classification problems, the area under the ROC curve (AUC) is commonly used to evaluate the performance of a prediction model. Often, it is combined with cross-validation in order to assess how the results will generalize to an independent data set. In order to evaluate the quality of an estimate for cross-validated AUC, we obtain an estimate of its variance. For massive data sets, the process of generating a single performance estimate can be computationally expensive. Additionally, when using a complex prediction method, the process of cross-validating a predictive model on even a relatively small data set can still require a large amount of computation time. Thus, in many practical settings, the bootstrap is a computationally intractable approach to variance estimation. As an alternative to the bootstrap, we demonstrate a computationally efficient influence curve based approach to obtaining a variance estimate for cross-validated AUC. PMID:26279737

  17. Bias in error estimation when using cross-validation for model selection.

    PubMed

    Varma, Sudhir; Simon, Richard

    2006-02-23

    Cross-validation (CV) is an effective method for estimating the prediction error of a classifier. Some recent articles have proposed methods for optimizing classifiers by choosing classifier parameter values that minimize the CV error estimate. We have evaluated the validity of using the CV error estimate of the optimized classifier as an estimate of the true error expected on independent data. We used CV to optimize the classification parameters for two kinds of classifiers: Shrunken Centroids and Support Vector Machines (SVM). Random training datasets were created, with no difference in the distribution of the features between the two classes. Using these "null" datasets, we selected classifier parameter values that minimized the CV error estimate. 10-fold CV was used for Shrunken Centroids while Leave-One-Out-CV (LOOCV) was used for the SVM. Independent test data were created to estimate the true error. With "null" and "non-null" (with differential expression between the classes) data, we also tested a nested CV procedure, where an inner CV loop is used to perform the tuning of the parameters while an outer CV is used to compute an estimate of the error. The CV error estimate for the classifier with the optimal parameters was found to be a substantially biased estimate of the true error that the classifier would incur on independent data. Even though there is no real difference between the two classes for the "null" datasets, the CV error estimate for the Shrunken Centroid with the optimal parameters was less than 30% on 18.5% of simulated training datasets. For SVM with optimal parameters the estimated error rate was less than 30% on 38% of "null" datasets. Performance of the optimized classifiers on the independent test set was no better than chance. The nested CV procedure reduces the bias considerably and gives an estimate of the error that is very close to that obtained on the independent testing set for both Shrunken Centroids and SVM classifiers for
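A minimal sketch of the nested-CV idea, an inner loop that tunes a parameter and an outer loop that scores the tuned classifier on held-out data, follows. It uses a plain k-nearest-neighbour classifier on "null" data rather than Shrunken Centroids or SVM, so it illustrates the procedure only, not the paper's experiments.

```python
import numpy as np

rng = np.random.default_rng(0)

def knn_error(Xtr, ytr, Xte, yte, k):
    """k-NN (Euclidean) test-set misclassification rate; k odd so votes cannot tie."""
    d = ((Xte[:, None, :] - Xtr[None, :, :]) ** 2).sum(-1)
    nn = np.argsort(d, axis=1)[:, :k]
    pred = (ytr[nn].mean(axis=1) > 0.5).astype(int)
    return (pred != yte).mean()

def nested_cv_error(X, y, ks=(1, 3, 5, 9), outer=5, inner=4):
    n = len(y)
    errs = []
    for f in np.array_split(rng.permutation(n), outer):
        tr = np.setdiff1d(np.arange(n), f)
        # inner CV on the training part only, to pick k
        ifolds = np.array_split(rng.permutation(tr), inner)
        scores = [np.mean([knn_error(X[np.setdiff1d(tr, g)], y[np.setdiff1d(tr, g)],
                                     X[g], y[g], k) for g in ifolds]) for k in ks]
        k_best = ks[int(np.argmin(scores))]
        errs.append(knn_error(X[tr], y[tr], X[f], y[f], k_best))  # outer estimate
    return float(np.mean(errs))

# "null" data: features carry no class information, so true error is 50%
X = rng.normal(size=(120, 10))
y = rng.integers(0, 2, 120)
```

Because the outer folds never participate in tuning, the outer error estimate on null data hovers near chance, which is exactly the behaviour the paper reports for nested CV.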

  18. On non-parametric maximum likelihood estimation of the bivariate survivor function.

    PubMed

    Prentice, R L

    The likelihood function for the bivariate survivor function F, under independent censorship, is maximized to obtain a non-parametric maximum likelihood estimator F̂. F̂ may or may not be unique depending on the configuration of singly- and doubly-censored pairs. The likelihood function can be maximized by placing all mass on the grid formed by the uncensored failure times, or half lines beyond the failure time grid, or in the upper right quadrant beyond the grid. By accumulating the mass along lines (or regions) where the likelihood is flat, one obtains a partially maximized likelihood as a function of parameters that can be uniquely estimated. The score equations corresponding to these point mass parameters are derived, using a Lagrange multiplier technique to ensure unit total mass, and a modified Newton procedure is used to calculate the parameter estimates in some limited simulation studies. Some considerations for the further development of non-parametric bivariate survivor function estimators are briefly described.

  19. Finite-key analysis for measurement-device-independent quantum key distribution.

    PubMed

    Curty, Marcos; Xu, Feihu; Cui, Wei; Lim, Charles Ci Wen; Tamaki, Kiyoshi; Lo, Hoi-Kwong

    2014-04-29

    Quantum key distribution promises unconditionally secure communications. However, as practical devices tend to deviate from their specifications, the security of some practical systems is no longer valid. In particular, an adversary can exploit imperfect detectors to learn a large part of the secret key, even though the security proof claims otherwise. Recently, a practical approach--measurement-device-independent quantum key distribution--has been proposed to solve this problem. However, so far its security has only been fully proven under the assumption that the legitimate users of the system have unlimited resources. Here we fill this gap and provide a rigorous security proof against general attacks in the finite-key regime. This is obtained by applying large deviation theory, specifically the Chernoff bound, to perform parameter estimation. For the first time we demonstrate the feasibility of long-distance implementations of measurement-device-independent quantum key distribution within a reasonable time frame of signal transmission.
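The role of the Chernoff bound in finite-key parameter estimation can be illustrated with the standard multiplicative form for i.i.d. Bernoulli trials, P(mean ≥ (1+δ)μ) ≤ exp(-nμδ²/3) for 0 < δ ≤ 1: for a target failure probability ε, the statistical deviation one must budget for shrinks as 1/√n. This is a textbook bound used as an illustration, not the paper's full security analysis.

```python
import math

def chernoff_deviation(mu, n, eps):
    """Smallest multiplicative deviation delta (valid for 0 < delta <= 1) such that
    P(sample mean >= (1+delta)*mu) <= eps, from exp(-n*mu*delta**2/3) <= eps."""
    return math.sqrt(3.0 * math.log(1.0 / eps) / (n * mu))
```

Inverting the bound this way is what lets a finite-key proof trade signal count against the confidence level ε in closed form.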

  20. On the fluctuations of sums of independent random variables.

    PubMed

    Feller, W

    1969-07-01

    If X(1), X(2), ... are independent random variables with zero expectation and finite variances, the cumulative sums S(n) are, on the average, of the order of magnitude s(n), where s(n)² = E(S(n)²). The occasional maxima of the ratios S(n)/s(n) are surprisingly large and the problem is to estimate the extent of their probable fluctuations. Specifically, let S(n)* = (S(n) - b(n))/a(n), where {a(n)} and {b(n)} are two numerical sequences. For any interval I, denote by p(I) the probability that the event S(n)* ∈ I occurs for infinitely many n. Under mild conditions on {a(n)} and {b(n)}, it is shown that p(I) equals 0 or 1 according as a certain series converges or diverges. To obtain the upper limit of S(n)/a(n), one has to set b(n) = ±ε a(n), but finer results are obtained with smaller b(n). No assumptions concerning the underlying distributions are made; the criteria explain structurally which features of {X(n)} affect the fluctuations, but for concrete results something about P{S(n) > a(n)} must be known. For example, a complete solution is possible when the X(n) are normal, replacing the classical law of the iterated logarithm. Further concrete estimates may be obtained by combining the new criteria with some recently developed limit theorems.
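For the normal case mentioned at the end, the classical law of the iterated logarithm gives limsup |S(n)|/√(2n log log n) = 1 almost surely. A quick simulation (an illustration only, with arbitrary sample size and seed) shows the running maximum of that ratio staying of order one:

```python
import numpy as np

rng = np.random.default_rng(3)
n_max = 200_000
S = np.cumsum(rng.standard_normal(n_max))        # partial sums S_n of i.i.d. N(0,1)
n = np.arange(1, n_max + 1)
tail = n >= 100                                  # log log n needs n > e
a = np.sqrt(2.0 * n[tail] * np.log(np.log(n[tail])))
# finite-n proxy for the limsup, which equals 1 almost surely
peak = np.max(np.abs(S[tail]) / a)
```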

  1. Shock Formation Height in the Solar Corona Estimated from SDO and Radio Observations

    NASA Technical Reports Server (NTRS)

    Gopalswamy, N.; Nitta, N.

    2011-01-01

    Wave transients at EUV wavelengths and type II radio bursts are good indicators of shock formation in the solar corona. We use recent EUV wave observations from SDO and combine them with metric type II radio data to estimate the height in the corona where the shocks form. We compare the results with those obtained from other methods. We also estimate the shock formation heights independently using white-light observations of coronal mass ejections that ultimately drive the shocks.

  2. Comparison of Species Richness Estimates Obtained Using Nearly Complete Fragments and Simulated Pyrosequencing-Generated Fragments in 16S rRNA Gene-Based Environmental Surveys

    PubMed Central

    Youssef, Noha; Sheik, Cody S.; Krumholz, Lee R.; Najar, Fares Z.; Roe, Bruce A.; Elshahed, Mostafa S.

    2009-01-01

    Pyrosequencing-based 16S rRNA gene surveys are increasingly utilized to study highly diverse bacterial communities, with special emphasis on utilizing the large number of sequences obtained (tens to hundreds of thousands) for species richness estimation. However, it is not yet clear how the number of operational taxonomic units (OTUs) and, hence, species richness estimates determined using shorter fragments at different taxonomic cutoffs correlates with the number of OTUs assigned using longer, nearly complete 16S rRNA gene fragments. We constructed a 16S rRNA clone library from an undisturbed tallgrass prairie soil (1,132 clones) and used it to compare species richness estimates obtained using eight pyrosequencing candidate fragments (99 to 361 bp in length) and the nearly full-length fragment. Fragments encompassing the V1 and V2 (V1+V2) region and the V6 region (generated using primer pairs 8F-338R and 967F-1046R) overestimated species richness; fragments encompassing the V3, V7, and V7+V8 hypervariable regions (generated using primer pairs 338F-530R, 1046F-1220R, and 1046F-1392R) underestimated species richness; and fragments encompassing the V4, V5+V6, and V6+V7 regions (generated using primer pairs 530F-805R, 805F-1046R, and 967F-1220R) provided estimates comparable to those obtained with the nearly full-length fragment. These patterns were observed regardless of the alignment method utilized or the parameter used to gauge comparative levels of species richness (number of OTUs observed, slope of scatter plots of pairwise distance values for short and nearly complete fragments, and nonparametric and parametric species richness estimates). Similar results were obtained when analyzing three other datasets derived from soil, adult Zebrafish gut, and basaltic formations in the East Pacific Rise. Regression analysis indicated that these observed discrepancies in species richness estimates within various regions could readily be explained by the proportions of
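One of the nonparametric richness estimators commonly applied in such OTU-based comparisons is Chao1 (the abstract does not name its estimators, so this is an assumed example): S_Chao1 = S_obs + F1²/(2·F2), where F1 and F2 are the numbers of singleton and doubleton OTUs.

```python
from collections import Counter

def chao1(abundances):
    """Chao1 richness estimate from a list of per-OTU abundances."""
    f = Counter(abundances)              # abundance -> number of OTUs with that count
    s_obs = len(abundances)              # observed OTU count
    f1, f2 = f.get(1, 0), f.get(2, 0)
    if f2 == 0:
        return s_obs + f1 * (f1 - 1) / 2.0   # bias-corrected form when no doubletons
    return s_obs + f1 * f1 / (2.0 * f2)
```

Because the correction term depends only on rare OTUs, fragment choices that inflate singleton counts (e.g. error-prone hypervariable regions) directly inflate the richness estimate, consistent with the over- and underestimation patterns described above.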

  3. Brightness temperature - obtaining the physical properties of a non-equipartition plasma

    NASA Astrophysics Data System (ADS)

    Nokhrina, E. E.

    2017-06-01

    The limit on the intrinsic brightness temperature, attributed to the `Compton catastrophe', has been established to be 10^12 K. A somewhat lower limit, of the order of 10^11.5 K, is implied if we assume that the radiating plasma is in equipartition with the magnetic field, the idea that explained why the observed cores of active galactic nuclei (AGNs) sustained a limit lower than the `Compton catastrophe'. Recent observations with unprecedented high resolution by RadioAstron have revealed a systematic excess in the observed brightness temperatures. We propose means of estimating the degree of the non-equipartition regime in AGN cores. Coupled with core-shift measurements, the method allows us to independently estimate the magnetic field strength and the particle number density at the core. We show that the ratio of magnetic energy to radiating plasma energy is of the order of 10^-5, which means the flow in the core is dominated by the particle energy. We show that the magnetic field obtained by the brightness temperature measurements may be underestimated. For relativistic jets with small viewing angles, we propose a non-uniform magnetohydrodynamic model and obtain an expression for the magnetic field amplitude about two orders of magnitude higher than that for the uniform model. These magnetic field amplitudes are consistent with the limiting magnetic field suggested by the `magnetically arrested disc' model.

  4. Estimation of brittleness indices for pay zone determination in a shale-gas reservoir by using elastic properties obtained from micromechanics

    NASA Astrophysics Data System (ADS)

    Lizcano-Hernández, Edgar G.; Nicolás-López, Rubén; Valdiviezo-Mijangos, Oscar C.; Meléndez-Martínez, Jaime

    2018-04-01

    The brittleness indices (BI) of gas-shales are computed by using their effective mechanical properties obtained from micromechanical self-consistent modeling, with the purpose of assisting in the identification of the more-brittle regions in shale-gas reservoirs, i.e., the so-called `pay zone'. The obtained BI are plotted in lambda-rho versus mu-rho (λρ-μρ) and Young's modulus versus Poisson's ratio (E-ν) ternary diagrams along with the elastic properties estimated from log data of three productive shale-gas wells where the pay zone is already known. A quantitative comparison between the obtained BI and the well log data allows for the delimitation of regions where BI values could indicate the best reservoir target in regions with the highest shale-gas exploitation potential. Therefore, a range of values for elastic properties and brittleness indices is obtained that can be used as a data source to support the well placement procedure.

  5. Magnetic nanoparticle temperature estimation.

    PubMed

    Weaver, John B; Rauwerdink, Adam M; Hansen, Eric W

    2009-05-01

    The authors present a method of measuring the temperature of magnetic nanoparticles that can be adapted to provide in vivo temperature maps. Many of the minimally invasive therapies that promise to reduce health care costs and improve patient outcomes heat tissue to very specific temperatures to be effective. Measurements are required because physiological cooling, primarily blood flow, makes the temperature difficult to predict a priori. The ratio of the fifth and third harmonics of the magnetization generated by magnetic nanoparticles in a sinusoidal field is used to generate a calibration curve and to subsequently estimate the temperature. The calibration curve is obtained by varying the amplitude of the sinusoidal field. The temperature can then be estimated from any subsequent measurement of the ratio. The accuracy was 0.3 K between 20 and 50 degrees C using the current apparatus and half-second measurements. The method is independent of nanoparticle concentration and nanoparticle size distribution.
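A toy version of the harmonic-ratio idea: if the nanoparticle magnetization follows the Langevin function L(ξ) with ξ = mB/(k_B·T), the ratio of the 5th to 3rd harmonic of the response to a sinusoidal field depends on ξ, and hence on temperature, but not on concentration, which scales all harmonics equally. The Langevin model and drive parameters here are illustrative assumptions, not the authors' apparatus.

```python
import numpy as np

def harmonic_ratio(xi, n=4096):
    """|5th|/|3rd| harmonic of M(t) = Langevin(xi * sin t) over one drive period."""
    t = np.linspace(0, 2 * np.pi, n, endpoint=False)
    x = xi * np.sin(t)
    small = np.abs(x) < 1e-4
    xs = np.where(small, 1.0, x)                     # placeholder avoids 0/0 at zeros
    L = np.where(small, x / 3.0, 1.0 / np.tanh(xs) - 1.0 / xs)   # Langevin function
    A = np.abs(np.fft.rfft(L))                       # bin k = k-th harmonic amplitude
    return A[5] / A[3]
```

In a real calibration one would sweep the drive amplitude at a known temperature to build the ratio-versus-ξ curve, then invert later ratio measurements to recover temperature.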

  6. Contrasting academic and tobacco industry estimates of illicit cigarette trade: evidence from Warsaw, Poland.

    PubMed

    Stoklosa, Michal; Ross, Hana

    2014-05-01

    To compare two different methods for estimating the size of the illicit cigarette market with each other and to contrast the estimates obtained by these two methods with the results of an industry-commissioned study. We used two observational methods: collection of data from packs in smokers' personal possession, and collection of data from packs discarded on streets. The data were obtained in Warsaw, Poland in September 2011 and October 2011. We used tests of independence to compare the results based on the two methods, and to contrast those with the estimate from the industry-commissioned discarded pack collection conducted in September 2011. We found that the proportions of cigarette packs classified as not intended for the Polish market estimated by our two methods were not statistically different. These estimates were 14.6% (95% CI 10.8% to 19.4%) using the survey data (N=400) and 15.6% (95% CI 13.2% to 18.4%) using the discarded pack data (N=754). The industry estimate (22.9%) was higher by nearly a half compared with our estimates, and this difference is statistically significant. Our findings are consistent with previous evidence of the tobacco industry exaggerating the scope of illicit trade and with the general pattern of the industry manipulating evidence to mislead the debate on tobacco control policy in many countries. Collaboration between governments and the tobacco industry to estimate tobacco tax avoidance and evasion is likely to produce upward-biased estimates of illicit cigarette trade. If governments are presented with industry estimates, they should strictly require a disclosure of all methodological details and data used in generating these estimates, and should seek advice from independent experts. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://group.bmj.com/group/rights-licensing/permissions.
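The "tests of independence" comparison can be reproduced approximately with a two-proportion z-test. The counts below are reconstructed from the reported percentages and sample sizes (58/400 ≈ 14.5% for the survey, 118/754 ≈ 15.6% for the discarded-pack collection, 173/754 ≈ 22.9% for the industry figure) and are therefore illustrative, not the paper's exact data.

```python
import math

def two_prop_z(x1, n1, x2, n2):
    """Two-sided two-proportion z-test using the pooled standard error."""
    p1, p2 = x1 / n1, x2 / n2
    pool = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pool * (1 - pool) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    return z, math.erfc(abs(z) / math.sqrt(2))   # normal-approximation p-value
```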

  7. Bias Correction of MODIS AOD using DragonNET to obtain improved estimation of PM2.5

    NASA Astrophysics Data System (ADS)

    Gross, B.; Malakar, N. K.; Atia, A.; Moshary, F.; Ahmed, S. A.; Oo, M. M.

    2014-12-01

    MODIS AOD retrievals using the Dark Target algorithm are strongly affected by the underlying surface reflection properties. In particular, the operational algorithms make use of surface parameterizations trained on global datasets and therefore do not account properly for urban surface differences. This parameterization continues to show an underestimation of the surface reflection, which results in a general over-biasing in AOD retrievals. Recent results using the Dragon-Network datasets as well as high resolution retrievals in the NYC area illustrate that this is even more significant in the newest C006 3 km retrievals. In the past, we used AERONET observations at the City College site to obtain bias-corrected AOD, but the homogeneity assumption in using only one site for the region is clearly an issue. On the other hand, DragonNET observations provide ample opportunities to better tune the surface corrections while also providing better statistical validation. In this study we present a neural network method to obtain bias correction of the MODIS AOD using multiple factors including surface reflectivity at 2130 nm, sun-view geometrical factors, and land-class information. These corrected AODs are then used together with additional WRF meteorological factors to improve estimates of PM2.5. Efforts to explore the portability to other urban areas will be discussed. In addition, annual surface ratio maps will be developed, illustrating that among the land classes the urban pixels constitute the largest deviations from the operational model.

  8. Estimation of the rain signal in the presence of large surface clutter

    NASA Technical Reports Server (NTRS)

    Ahamad, Atiq; Moore, Richard K.

    1994-01-01

    The principal limitation for the use of a spaceborne imaging SAR as a rain radar is the surface-clutter problem. Signals may be estimated in the presence of noise by averaging large numbers of independent samples. This method was applied to obtain an estimate of the rain echo by averaging a set of N_c samples of the clutter in a separate measurement and subtracting the clutter estimate from the combined estimate. The number of samples required for successful estimation (within 10-20%) for off-vertical angles of incidence appears to be prohibitively large. However, by appropriately degrading the resolution in both range and azimuth, the required number of samples can be obtained. For vertical incidence, the number of samples required for successful estimation is reasonable. In estimating the clutter it was assumed that the surface echo is the same outside the rain volume as it is within the rain volume. This may be true for the forest echo, but for convective storms over the ocean the surface echo outside the rain volume is very different from that within. It is suggested that the experiment be performed with vertical incidence over forest to overcome this limitation.
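The averaging-and-subtraction scheme can be sketched with a small Monte Carlo under an assumed exponential (Rayleigh-power) echo model; the mean powers, sample counts, and repetition count are arbitrary illustration values. The relative error of the recovered rain power scales as 1/√N, which is why clutter-dominated (off-vertical) geometries demand so many samples.

```python
import numpy as np

rng = np.random.default_rng(1)
rain, clutter = 1.0, 5.0          # assumed mean echo powers (clutter-dominated case)

def rain_estimate_stats(N, reps=400):
    """Estimate rain power as mean(combined echo) - mean(clutter-only echo)."""
    combined = rng.exponential(rain + clutter, (reps, N)).mean(axis=1)
    clut = rng.exponential(clutter, (reps, N)).mean(axis=1)
    est = combined - clut
    return est.mean(), est.std() / rain    # (average estimate, relative error)

m_small, e_small = rain_estimate_stats(100)
m_big, e_big = rain_estimate_stats(10_000)
```

The estimator is unbiased at any N; only its spread shrinks with more independent samples, which is the trade the abstract makes by degrading range and azimuth resolution.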

  9. Rule-Based Flight Software Cost Estimation

    NASA Technical Reports Server (NTRS)

    Stukes, Sherry A.; Spagnuolo, John N. Jr.

    2015-01-01

    This paper discusses the fundamental process for the computation of Flight Software (FSW) cost estimates. This process has been incorporated in a rule-based expert system [1] that can be used for Independent Cost Estimates (ICEs), Proposals, and for the validation of Cost Analysis Data Requirements (CADRe) submissions. A high-level directed graph (referred to here as a decision graph) illustrates the steps taken in the production of these estimated costs and serves as a basis of design for the expert system described in this paper. Detailed discussions are subsequently given elaborating upon the methodology, tools, charts, and caveats related to the various nodes of the graph. We present general principles for the estimation of FSW, using SEER-SEM as an illustration of these principles where appropriate. Since Source Lines of Code (SLOC) is a major cost driver, a discussion of various SLOC data sources for the preparation of the estimates is given, together with an explanation of how contractor SLOC estimates compare with the SLOC estimates used by JPL. Methods for obtaining consistency in code counting are presented, as well as factors used in reconciling SLOC estimates from different code counters. When sufficient data is obtained, a mapping into the JPL Work Breakdown Structure (WBS) from the SEER-SEM output is illustrated. For across-the-board FSW estimates, as was done for the NASA Discovery Mission proposal estimates performed at JPL, a comparative high-level summary sheet for all missions, with the SLOC, data description, brief mission description, and the most relevant SEER-SEM parameter values, is given to encapsulate the data used and calculated in the estimates. The rule-based expert system described provides the user with inputs useful or sufficient to run generic cost estimation programs. The system is implemented in the C Language Integrated Production System (CLIPS), as discussed at the end of this paper.

  10. Halo-independent determination of the unmodulated WIMP signal in DAMA: the isotropic case

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gondolo, Paolo; Scopel, Stefano, E-mail: paolo.gondolo@utah.edu, E-mail: scopel@sogang.ac.kr

    2017-09-01

    We present a halo-independent determination of the unmodulated signal corresponding to the DAMA modulation if interpreted as due to dark matter weakly interacting massive particles (WIMPs). First we show how a modulated signal gives information on the WIMP velocity distribution function in the Galactic rest frame, from which the unmodulated signal descends. Then we describe a mathematically-sound profile likelihood analysis in which the likelihood is profiled over a continuum of nuisance parameters (namely, the WIMP velocity distribution). As a first application of the method, which is very general and valid for any class of velocity distributions, we restrict the analysis to velocity distributions that are isotropic in the Galactic frame. In this way we obtain halo-independent maximum-likelihood estimates and confidence intervals for the DAMA unmodulated signal. We find that the estimated unmodulated signal is in line with expectations for a WIMP-induced modulation and is compatible with the DAMA background+signal rate. Specifically, for the isotropic case we find that the modulated amplitude ranges between a few percent and about 25% of the unmodulated amplitude, depending on the WIMP mass.

  11. Obtaining continuous BrAC/BAC estimates in the field: A hybrid system integrating transdermal alcohol biosensor, Intellidrink smartphone app, and BrAC Estimator software tools.

    PubMed

    Luczak, Susan E; Hawkins, Ashley L; Dai, Zheng; Wichmann, Raphael; Wang, Chunming; Rosen, I Gary

    2018-08-01

    Biosensors have been developed to measure transdermal alcohol concentration (TAC), but converting TAC into interpretable indices of blood/breath alcohol concentration (BAC/BrAC) is difficult because of variations that occur in TAC across individuals, drinking episodes, and devices. We have developed mathematical models and the BrAC Estimator software for calibrating and inverting TAC into quantifiable BrAC estimates (eBrAC). The calibration protocol to determine the individualized parameters for a specific individual wearing a specific device requires a drinking session in which BrAC and TAC measurements are obtained simultaneously. This calibration protocol was originally conducted in the laboratory with breath analyzers used to produce the BrAC data. Here we develop and test an alternative calibration protocol using drinking diary data collected in the field with the smartphone app Intellidrink to produce the BrAC calibration data. We compared BrAC Estimator software results for 11 drinking episodes collected by an expert user when using Intellidrink versus breath analyzer measurements as BrAC calibration data. Inversion phase results indicated the Intellidrink calibration protocol produced similar eBrAC curves and captured peak eBrAC to within 0.0003%, time of peak eBrAC to within 18 min, and area under the eBrAC curve to within 0.025% alcohol-hours as the breath analyzer calibration protocol. This study provides evidence that drinking diary data can be used in place of breath analyzer data in the BrAC Estimator software calibration procedure, which can reduce participant and researcher burden and expand the potential software user pool beyond researchers studying participants who can drink in the laboratory. Copyright © 2017. Published by Elsevier Ltd.

  12. An empirical approach for estimating stress-coupling lengths for marine-terminating glaciers

    USGS Publications Warehouse

    Enderlin, Ellyn; Hamilton, Gordon S.; O'Neel, Shad; Bartholomaus, Timothy C.; Morlighem, Mathieu; Holt, John W.

    2016-01-01

    Here we present a new empirical method to estimate the stress-coupling length (SCL) for marine-terminating glaciers using high-resolution observations. We use the empirically-determined periodicity in resistive stress oscillations as a proxy for the SCL. Application of our empirical method to two well-studied tidewater glaciers (Helheim Glacier, SE Greenland, and Columbia Glacier, Alaska, USA) demonstrates that SCL estimates obtained using this approach are consistent with theory (i.e., can be parameterized as a function of the ice thickness) and with prior, independent SCL estimates. In order to accurately resolve stress variations, we suggest that similar empirical stress-coupling parameterizations be employed in future analyses of glacier dynamics.

  13. Probabilities and statistics for backscatter estimates obtained by a scatterometer

    NASA Technical Reports Server (NTRS)

    Pierson, Willard J., Jr.

    1989-01-01

    Methods for the recovery of winds near the surface of the ocean from measurements of the normalized radar backscattering cross section must recognize and make use of the statistics (i.e., the sampling variability) of the backscatter measurements. Radar backscatter values from a scatterometer are random variables with expected values given by a model. A model relates backscatter to properties of the waves on the ocean, which are in turn generated by the winds in the atmospheric marine boundary layer. The effective wind speed and direction at a known height for a neutrally stratified atmosphere are the values to be recovered from the model. The probability density function for the backscatter values is a normal probability distribution with the notable feature that the variance is a known function of the expected value. The sources of signal variability, the effects of this variability on the wind speed estimation, and criteria for the acceptance or rejection of models are discussed. A modified maximum likelihood method for estimating wind vectors is described. Ways to make corrections for the kinds of errors found for the Seasat SASS model function are described, and applications to a new scatterometer are given.
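    The abstract's key statistical feature, a normal likelihood whose variance is a known function of the expected value, can be sketched in a few lines. The proportional mean-to-standard-deviation relation, the parameter values, and the grid-search optimizer below are illustrative assumptions, not the Seasat SASS model function or the paper's estimator.

```python
import math
import random

random.seed(0)

# Assumed (hypothetical) mean-variance relation: sd(sigma0) = kp * mu,
# i.e. the standard deviation is proportional to the expected backscatter.
kp = 0.2        # normalized standard deviation (assumption for illustration)
mu_true = 1.5   # true expected backscatter

samples = [random.gauss(mu_true, kp * mu_true) for _ in range(2000)]

def neg_log_lik(mu, xs):
    """Normal negative log-likelihood with variance tied to the mean."""
    var = (kp * mu) ** 2
    return sum(0.5 * math.log(2 * math.pi * var) + (x - mu) ** 2 / (2 * var)
               for x in xs)

# Crude grid search over candidate means (a stand-in for a real optimizer).
grid = [0.5 + 0.01 * i for i in range(201)]   # 0.5 .. 2.5
mu_hat = min(grid, key=lambda m: neg_log_lik(m, samples))
```

    Because the variance shrinks with the mean, the maximizer is not simply the sample mean, which is the point the abstract makes about needing a modified maximum-likelihood method.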

  14. A two-step super-Gaussian independent component analysis approach for fMRI data.

    PubMed

    Ge, Ruiyang; Yao, Li; Zhang, Hang; Long, Zhiying

    2015-09-01

    Independent component analysis (ICA) has been widely applied to functional magnetic resonance imaging (fMRI) data analysis. Although ICA assumes that the sources underlying data are statistically independent, it usually ignores sources' additional properties, such as sparsity. In this study, we propose a two-step super-Gaussian ICA (2SGICA) method that incorporates the sparse prior of the sources into the ICA model. 2SGICA uses the super-Gaussian ICA (SGICA) algorithm, which is based on a simplified Lewicki-Sejnowski model, to obtain the initial source estimate in the first step. Using a kernel estimator technique, the source density is acquired and fitted to the Laplacian function based on the initial source estimates. The fitted Laplacian prior is used for each source at the second SGICA step. Moreover, the automatic target generation process for initial value generation is used in 2SGICA to guarantee the stability of the algorithm. An adaptive step size selection criterion is also implemented in the proposed algorithm. We performed experimental tests on both simulated data and real fMRI data to investigate the feasibility and robustness of 2SGICA and made a performance comparison between Infomax ICA, FastICA, mean field ICA (MFICA) with a Laplacian prior, sparse online dictionary learning (ODL), SGICA and 2SGICA. Both simulated and real fMRI experiments showed that 2SGICA was the most robust to noise, and had the best spatial detection power and time course estimation among the six methods. Copyright © 2015. Published by Elsevier Inc.
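    The second-step ingredient described above, fitting a Laplacian prior to the initial source estimates, reduces to a simple maximum-likelihood fit: the median gives the location and the mean absolute deviation gives the scale. This is a minimal sketch with a synthetic stand-in for the first-step SGICA output, not the 2SGICA implementation.

```python
import random
import statistics

random.seed(1)

# Hypothetical "initial source estimate": a sparse, Laplacian-like signal
# standing in for the first-step SGICA output described in the abstract.
b_true = 0.7
source = [random.expovariate(1 / b_true) * random.choice([-1, 1])
          for _ in range(5000)]   # symmetric Laplacian draw, location 0

# Maximum-likelihood fit of a Laplacian prior p(s) ~ exp(-|s - mu| / b):
mu_hat = statistics.median(source)                        # location
b_hat = sum(abs(s - mu_hat) for s in source) / len(source)  # scale
```

    The fitted (mu_hat, b_hat) pair is what would be plugged back in as the per-source prior in the second estimation step.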

  15. Estimated aortic stiffness is independently associated with cardiac baroreflex sensitivity in humans: role of ageing and habitual endurance exercise.

    PubMed

    Pierce, G L; Harris, S A; Seals, D R; Casey, D P; Barlow, P B; Stauss, H M

    2016-09-01

    We hypothesised that differences in cardiac baroreflex sensitivity (BRS) would be independently associated with aortic stiffness and augmentation index (AI), clinical biomarkers of cardiovascular disease risk, among young sedentary and middle-aged/older sedentary and endurance-trained adults. A total of 36 healthy middle-aged/older (age 55-76 years, n=22 sedentary and n=14 endurance-trained) and 5 young sedentary (age 18-31 years) adults were included in a cross-sectional study. A subset of the middle-aged/older sedentary adults (n=12) completed an 8-week aerobic exercise intervention. Invasive brachial artery blood pressure waveforms were used to compute spontaneous cardiac BRS (via the sequence technique), estimated aortic pulse wave velocity (PWV) and AI (via a brachial-aortic transfer function and wave separation analysis). In the cross-sectional study, cardiac BRS was 71% lower in older compared with young sedentary adults (P<0.05), but only 40% lower in older adults who performed habitual endurance exercise (P=0.03). In a regression model that included age, sex, resting heart rate, mean arterial pressure (MAP), body mass index and maximal exercise oxygen uptake, estimated aortic PWV (β±s.e.=-5.76±2.01, P=0.01) was the strongest predictor of BRS (model R²=0.59, P<0.001). The 8-week exercise intervention improved BRS by 38% (P=0.04) and this change in BRS was associated with improved aortic PWV (r=-0.65, P=0.044, adjusted for changes in MAP). Age- and endurance-exercise-related differences in cardiac BRS are independently associated with corresponding alterations in aortic PWV among healthy adults, consistent with a mechanistic link between variations in the sensitivity of the baroreflex and aortic stiffness with age and exercise.

  16. Estimated Aortic Stiffness is Independently Associated with Cardiac Baroreflex Sensitivity in Humans: Role of Aging and Habitual Endurance Exercise

    PubMed Central

    Pierce, Gary L.; Harris, Stephen A.; Seals, Douglas R.; Casey, Darren P.; Barlow, Patrick B.; Stauss, Harald M.

    2016-01-01

    We hypothesized that differences in cardiac baroreflex sensitivity (BRS) would be independently associated with aortic stiffness and augmentation index (AI), clinical biomarkers of cardiovascular disease (CVD) risk, among young sedentary and middle-aged/older sedentary and endurance-trained adults. A total of 36 healthy middle-aged/older (age 55-76 years, n=22 sedentary; n=14 endurance-trained) and 5 young sedentary (age 18-31 years) adults were included in a cross-sectional study. A subset of the middle-aged/older sedentary adults (n=12) completed an 8-week aerobic exercise intervention. Invasive brachial artery blood pressure waveforms were used to compute spontaneous cardiac BRS (via the sequence technique) and estimated aortic pulse wave velocity (PWV) and AI (via a brachial-aortic transfer function and wave separation analysis). In the cross-sectional study, cardiac BRS was 71% lower in older compared with young sedentary adults (P<0.05), but only 40% lower in older adults who performed habitual endurance exercise (P=0.03). In a regression model that included age, sex, resting heart rate, mean arterial pressure (MAP), body mass index and maximal exercise oxygen uptake, estimated aortic PWV (β±SE = −5.76 ± 2.01, P=0.01) was the strongest predictor of BRS (model R²=0.59, P<0.001). The 8-week exercise intervention improved BRS by 38% (P=0.04) and this change in BRS was associated with improved aortic PWV (r=−0.65, P=0.044, adjusted for changes in MAP). Age- and endurance-exercise-related differences in cardiac BRS are independently associated with corresponding alterations in aortic PWV among healthy adults, consistent with a mechanistic link between variations in the sensitivity of the baroreflex and aortic stiffness with age and exercise. PMID:26911535

  17. Nonpareil 3: Fast Estimation of Metagenomic Coverage and Sequence Diversity.

    PubMed

    Rodriguez-R, Luis M; Gunturu, Santosh; Tiedje, James M; Cole, James R; Konstantinidis, Konstantinos T

    2018-01-01

    Estimations of microbial community diversity based on metagenomic data sets are affected, often to an unknown degree, by biases derived from insufficient coverage and reference database-dependent estimations of diversity. For instance, the completeness of reference databases cannot be generally estimated since it depends on the extant diversity sampled to date, which, with the exception of a few habitats such as the human gut, remains severely undersampled. Further, estimation of the degree of coverage of a microbial community by a metagenomic data set is prohibitively time-consuming for large data sets, and coverage values may not be directly comparable between data sets obtained with different sequencing technologies. Here, we extend Nonpareil, a database-independent tool for the estimation of coverage in metagenomic data sets, to a high-performance computing implementation that scales up to hundreds of cores and includes, in addition, a k-mer-based estimation as sensitive as the original alignment-based version but about three hundred times as fast. Further, we propose a metric of sequence diversity (Nd) derived directly from Nonpareil curves that correlates well with alpha diversity assessed by traditional metrics. We use this metric in different experiments demonstrating the correlation with the Shannon index estimated on 16S rRNA gene profiles and show that Nd additionally reveals seasonal patterns in marine samples that are not captured by the Shannon index, as well as more precise rankings of the magnitude of diversity of microbial communities in different habitats. Therefore, the new version of Nonpareil, called Nonpareil 3, advances the toolbox for metagenomic analyses of microbiomes. IMPORTANCE Estimation of the coverage provided by a metagenomic data set, i.e., what fraction of the microbial community was sampled by DNA sequencing, represents an essential first step of every culture-independent genomic study that aims to robustly assess the sequence

  18. Estimating Evaporative Fraction From Readily Obtainable Variables in Mangrove Forests of the Everglades, U.S.A.

    NASA Technical Reports Server (NTRS)

    Yagci, Ali Levent; Santanello, Joseph A.; Jones, John; Barr, Jordan

    2017-01-01

    A remote-sensing-based model to estimate evaporative fraction (EF), the ratio of latent heat (LE; the energy equivalent of evapotranspiration, ET) to total available energy, from easily obtainable remotely sensed and meteorological parameters is presented. This research specifically addresses the shortcomings of existing ET retrieval methods, such as the calibration requirements of extensive, accurate in situ micro-meteorological and flux tower observations, or of a large set of coarse-resolution or model-derived input datasets. The trapezoid model is capable of generating spatially varying EF maps from standard products such as land surface temperature (Ts), normalized difference vegetation index (NDVI), and daily maximum air temperature (Ta). The 2009 model results were validated at an eddy-covariance tower (Fluxnet ID: US-Skr) in the Everglades using Ts and NDVI products from Landsat as well as the Moderate Resolution Imaging Spectroradiometer (MODIS) sensors. Results indicate that the model accuracy is within the range of instrument uncertainty, and is dependent on the spatial resolution and selection of end-members (i.e., the wet/dry edges). The most accurate results were achieved with the Ts from Landsat relative to the Ts from the MODIS flown on the Terra and Aqua platforms, due to the fine spatial resolution of Landsat (30 m). The bias, mean absolute percentage error and root mean square percentage error were as low as 2.9% (3.0%), 9.8% (13.3%), and 12.1% (16.1%) for Landsat-based (MODIS-based) EF estimates, respectively. Overall, this methodology shows promise for bridging the gap between temporally limited ET estimates at Landsat scales and more complex and difficult-to-constrain global ET remote-sensing models.

  19. Estimating evaporative fraction from readily obtainable variables in mangrove forests of the Everglades, U.S.A.

    USGS Publications Warehouse

    Yagci, Ali Levent; Santanello, Joseph A.; Jones, John W.; Barr, Jordan G.

    2017-01-01

    A remote-sensing-based model to estimate evaporative fraction (EF) – the ratio of latent heat (LE; energy equivalent of evapotranspiration –ET–) to total available energy – from easily obtainable remotely-sensed and meteorological parameters is presented. This research specifically addresses the shortcomings of existing ET retrieval methods such as calibration requirements of extensive accurate in situ micrometeorological and flux tower observations or of a large set of coarse-resolution or model-derived input datasets. The trapezoid model is capable of generating spatially varying EF maps from standard products such as land surface temperature (Ts) normalized difference vegetation index (NDVI) and daily maximum air temperature (Ta). The 2009 model results were validated at an eddy-covariance tower (Fluxnet ID: US-Skr) in the Everglades using Ts and NDVI products from Landsat as well as the Moderate Resolution Imaging Spectroradiometer (MODIS) sensors. Results indicate that the model accuracy is within the range of instrument uncertainty, and is dependent on the spatial resolution and selection of end-members (i.e. wet/dry edge). The most accurate results were achieved with the Ts from Landsat relative to the Ts from the MODIS flown on the Terra and Aqua platforms due to the fine spatial resolution of Landsat (30 m). The bias, mean absolute percentage error and root mean square percentage error were as low as 2.9% (3.0%), 9.8% (13.3%), and 12.1% (16.1%) for Landsat-based (MODIS-based) EF estimates, respectively. Overall, this methodology shows promise for bridging the gap between temporally limited ET estimates at Landsat scales and more complex and difficult to constrain global ET remote-sensing models.
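    The trapezoid idea in this record and the preceding one can be sketched directly: a pixel's EF is read off from where its surface temperature Ts falls between an NDVI-dependent dry edge (EF = 0) and a wet edge (EF = 1). The edge functions and temperatures below are hypothetical placeholders, not the paper's calibrated end-members.

```python
# Hypothetical trapezoid end-members (illustrative values, in kelvin):
def dry_edge(ndvi):
    """Warmest expected Ts at this NDVI (the EF = 0 edge)."""
    return 320.0 - 15.0 * ndvi

def wet_edge(ndvi):
    """Coolest expected Ts (the EF = 1 edge); constant in this sketch."""
    return 295.0

def evaporative_fraction(ts, ndvi):
    # Linear interpolation of Ts between the dry and wet edges.
    ef = (dry_edge(ndvi) - ts) / (dry_edge(ndvi) - wet_edge(ndvi))
    return min(1.0, max(0.0, ef))   # clamp to the physical range [0, 1]

ef_moist = evaporative_fraction(300.0, 0.8)   # cool, well-vegetated pixel
ef_dry = evaporative_fraction(317.0, 0.1)     # hot, sparsely vegetated pixel
```

    The sensitivity of the result to the choice of `dry_edge` and `wet_edge` is the "selection of end-members" dependence noted in the abstracts.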

  20. GPU Acceleration of Mean Free Path Based Kernel Density Estimators for Monte Carlo Neutronics Simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Burke, Timothy P.; Kiedrowski, Brian C.; Martin, William R.

    Kernel density estimators (KDEs) are a non-parametric density estimation technique that has recently been applied to Monte Carlo radiation transport simulations. Kernel density estimators are an alternative to histogram tallies for obtaining global solutions in Monte Carlo calculations. With KDEs, a single event, either a collision or particle track, can contribute to the score at multiple tally points, with the uncertainty at those points being independent of the desired resolution of the solution. Thus, KDEs show potential for obtaining estimates of a global solution with reduced variance when compared to a histogram. Previously, KDEs have been applied to neutronics for one-group reactor physics problems and fixed-source shielding applications. However, little work was done to obtain reaction rates using KDEs. This paper introduces a new form of the mean free path (MFP) KDE that is capable of handling general geometries. Furthermore, extending the MFP KDE to 2-D problems in continuous energy introduces inaccuracies to the solution. An ad hoc solution to these inaccuracies is introduced that produces errors smaller than 4% at material interfaces.
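    The mechanism described above, one event scoring at many tally points with an uncertainty independent of the output resolution, can be illustrated with a plain fixed-bandwidth Gaussian KDE over synthetic collision sites. This is a generic 1-D sketch, not the paper's mean-free-path kernel.

```python
import math
import random

random.seed(2)

# Hypothetical collision sites along a 1-D slab with uniform flux.
collisions = [random.random() for _ in range(40000)]

h = 0.05   # kernel bandwidth: a free choice, independent of tally resolution
tally_points = [0.1 * i + 0.05 for i in range(10)]

def gauss_kernel(u):
    return math.exp(-0.5 * u * u) / math.sqrt(2 * math.pi)

# Each collision contributes to the score at EVERY tally point,
# weighted by its kernel distance -- unlike a histogram bin.
density = [sum(gauss_kernel((x - c) / h) for c in collisions)
           / (len(collisions) * h)
           for x in tally_points]
```

    For this uniform source the interior estimates sit near the true density of 1.0; the endpoints are biased low because kernel mass leaks outside the slab, the kind of boundary inaccuracy the paper's ad hoc correction targets.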

  1. Continuous-variable measurement-device-independent quantum key distribution: Composable security against coherent attacks

    NASA Astrophysics Data System (ADS)

    Lupo, Cosmo; Ottaviani, Carlo; Papanastasiou, Panagiotis; Pirandola, Stefano

    2018-05-01

    We present a rigorous security analysis of continuous-variable measurement-device-independent quantum key distribution (CV MDI QKD) in a finite-size scenario. The security proof is obtained in two steps: by first assessing the security against collective Gaussian attacks, and then extending to the most general class of coherent attacks via the Gaussian de Finetti reduction. Our result combines recent state-of-the-art security proofs for CV QKD with findings about min-entropy calculus and parameter estimation. In doing so, we improve the finite-size estimate of the secret key rate. Our conclusions confirm that CV MDI protocols allow for high rates on the metropolitan scale, and may achieve a nonzero secret key rate against the most general class of coherent attacks after 107-109 quantum signal transmissions, depending on loss and noise, and on the required level of security.

  2. Testing independence of fragment lengths within VNTR loci

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Geisser, S.; Johnson, W.

    1993-11-01

    Methods that were devised to test independence of the bivariate fragment lengths obtained from VNTR loci are applied to several population databases. It is shown that for many of the probes independence (Hardy-Weinberg equilibrium) cannot be sustained. 3 refs., 3 tabs.

  3. Economic incentives for financial and residential independence.

    PubMed

    Whittington, L A; Peters, H E

    1996-02-01

    In this paper we examine the impact of the resources of children and of their parents on the children's transition to residential and financial independence. Previous studies of this transition focused primarily on the impact of family structure and parent-child relationships on the decision to leave home, but much less is known about the role of economic factors in the transition to independence. Using data from the Panel Study of Income Dynamics (PSID) for the period 1968-1988, we estimate discrete-hazard models of the probability of achieving residential and financial independence. We find that the child's wage opportunities and the parents' income are important determinants of establishing independence. The effect of parental income changes with the child's age. We also find some evidence that federal tax policy influences the decision to become independent, although the magnitude of this effect is quite small.

  4. A test and re-estimation of Taylor's empirical capacity-reserve relationship

    USGS Publications Warehouse

    Long, K.R.

    2009-01-01

    In 1977, Taylor proposed a constant elasticity model relating capacity choice in mines to reserves. A test of this model using a very large (n = 1,195) dataset confirms its validity but obtains significantly different estimated values for the model coefficients. Capacity is somewhat inelastic with respect to reserves, with an elasticity of 0.65 estimated for open-pit plus block-cave underground mines and 0.56 for all other underground mines. These new estimates should be useful for capacity determinations in scoping studies and as a starting point for feasibility studies. The results are robust over a wide range of deposit types, deposit sizes, and time, consistent with physical constraints on mine capacity that are largely independent of technology. © 2009 International Association for Mathematical Geology.
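    A constant-elasticity relation of the form capacity = a * reserves**b is linear in logs, so the elasticity b can be recovered by ordinary least squares on log-transformed data. The coefficients, sample generation and noise level below are illustrative assumptions, not the paper's dataset or estimates.

```python
import math
import random

random.seed(3)

# Simulated mines following a constant-elasticity law (values are made up).
a, b = 0.12, 0.56
reserves = [10 ** random.uniform(5, 9) for _ in range(1195)]   # tonnes
capacity = [a * r ** b * math.exp(random.gauss(0, 0.3)) for r in reserves]

# The elasticity b is the OLS slope of log(capacity) on log(reserves).
xs = [math.log(r) for r in reserves]
ys = [math.log(c) for c in capacity]
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
b_hat = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
```

    An estimated b below 1, as in the paper's 0.56-0.65 range, is what "capacity is somewhat inelastic with respect to reserves" means: doubling reserves less than doubles the chosen capacity.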

  5. An analysis of input errors in precipitation-runoff models using regression with errors in the independent variables

    USGS Publications Warehouse

    Troutman, Brent M.

    1982-01-01

    Errors in runoff prediction caused by input data errors are analyzed by treating precipitation-runoff models as regression (conditional expectation) models. Independent variables of the regression consist of precipitation and other input measurements; the dependent variable is runoff. In models using erroneous input data, prediction errors are inflated and estimates of expected storm runoff for given observed input variables are biased. This bias in expected runoff estimation results in biased parameter estimates if these parameter estimates are obtained by a least squares fit of predicted to observed runoff values. The problems of error inflation and bias are examined in detail for a simple linear regression of runoff on rainfall and for a nonlinear U.S. Geological Survey precipitation-runoff model. Some implications for flood frequency analysis are considered. A case study using a set of data from Turtle Creek near Dallas, Texas, illustrates the problems of model input errors.
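    The bias described above can be reproduced in a few lines: regressing runoff on an error-contaminated rainfall measurement attenuates the least-squares slope by the factor var(x) / (var(x) + var(u)). The linear model and all numbers below are an assumed toy setup, not the USGS model or the Turtle Creek data.

```python
import random

random.seed(4)

beta = 0.8                                            # true linear response
rain = [random.gauss(50, 10) for _ in range(20000)]   # true rainfall input
rain_obs = [x + random.gauss(0, 10) for x in rain]    # measured with error
runoff = [beta * x + random.gauss(0, 2) for x in rain]

def ols_slope(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))

slope_true_input = ols_slope(rain, runoff)       # recovers ~beta
# With equal signal and error variances the slope is halved:
# attenuation factor = 100 / (100 + 100) = 0.5.
slope_noisy_input = ols_slope(rain_obs, runoff)
```

    The attenuated slope is exactly the "biased parameter estimates" the abstract warns about when model parameters are fitted by least squares against erroneous inputs.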

  6. Simultaneous and Continuous Estimation of Shoulder and Elbow Kinematics from Surface EMG Signals

    PubMed Central

    Zhang, Qin; Liu, Runfeng; Chen, Wenbin; Xiong, Caihua

    2017-01-01

    In this paper, we present a simultaneous and continuous kinematics estimation method for multiple DoFs across the shoulder and elbow joints. Although simultaneous and continuous kinematics estimation from surface electromyography (EMG) is a feasible way to achieve natural and intuitive human-machine interaction, few works have investigated multi-DoF estimation across the major joints of the upper limb, the shoulder and elbow. This paper evaluates the feasibility of estimating 4-DoF kinematics at the shoulder and elbow during coordinated arm movements. Considering the potential applications of this method in exoskeletons, prosthetics and other arm rehabilitation techniques, the estimation performance is presented with different muscle activity decomposition and learning strategies. Principal component analysis (PCA) and independent component analysis (ICA) are respectively employed for EMG mode decomposition, with an artificial neural network (ANN) for learning the electromechanical association. Four joint angles across the shoulder and elbow are simultaneously and continuously estimated from EMG in four coordinated arm movements. Using ICA (PCA) and a single ANN, an average estimation accuracy of 91.12% (90.23%) is obtained in 70-s intra-cross validation and 87.00% (86.30%) in 2-min inter-cross validation. This result suggests that it is feasible and effective to use ICA (PCA) with a single ANN for multi-joint kinematics estimation in variant application conditions. PMID:28611573

  7. Maximum Likelihood Estimation of Spectra Information from Multiple Independent Astrophysics Data Sets

    NASA Technical Reports Server (NTRS)

    Howell, Leonard W., Jr.; Six, N. Frank (Technical Monitor)

    2002-01-01

    The Maximum Likelihood (ML) statistical theory required to estimate spectra information from an arbitrary number of astrophysics data sets produced by vastly different science instruments is developed in this paper. This theory and its successful implementation will facilitate the interpretation of spectral information from multiple astrophysics missions and thereby permit the derivation of superior spectral information based on the combination of data sets. The procedure is of significant value to both existing data sets and those to be produced by future astrophysics missions consisting of two or more detectors by allowing instrument developers to optimize each detector's design parameters through simulation studies in order to design and build complementary detectors that will maximize the precision with which the science objectives may be obtained. The benefits of this ML theory and its application are measured in terms of the reduction of the statistical errors (standard deviations) of the spectra information using the multiple data sets in concert as compared to the statistical errors of the spectra information when the data sets are considered separately, as well as any biases resulting from poor statistics in one or more of the individual data sets that might be reduced when the data sets are combined.

  8. Ant-inspired density estimation via random walks

    PubMed Central

    Musco, Cameron; Su, Hsin-Hao

    2017-01-01

    Many ant species use distributed population density estimation in applications ranging from quorum sensing, to task allocation, to appraisal of enemy colony strength. It has been shown that ants estimate local population density by tracking encounter rates: The higher the density, the more often the ants bump into each other. We study distributed density estimation from a theoretical perspective. We prove that a group of anonymous agents randomly walking on a grid are able to estimate their density within a small multiplicative error in few steps by measuring their rates of encounter with other agents. Despite dependencies inherent in the fact that nearby agents may collide repeatedly (and, worse, cannot recognize when this happens), our bound nearly matches what would be required to estimate density by independently sampling grid locations. From a biological perspective, our work helps shed light on how ants and other social insects can obtain relatively accurate density estimates via encounter rates. From a technical perspective, our analysis provides tools for understanding complex dependencies in the collision probabilities of multiple random walks. We bound the strength of these dependencies using local mixing properties of the underlying graph. Our results extend beyond the grid to more general graphs, and we discuss applications to size estimation for social networks, density estimation for robot swarms, and random walk-based sampling for sensor networks. PMID:28928146

  9. Ant-inspired density estimation via random walks.

    PubMed

    Musco, Cameron; Su, Hsin-Hao; Lynch, Nancy A

    2017-10-03

    Many ant species use distributed population density estimation in applications ranging from quorum sensing, to task allocation, to appraisal of enemy colony strength. It has been shown that ants estimate local population density by tracking encounter rates: The higher the density, the more often the ants bump into each other. We study distributed density estimation from a theoretical perspective. We prove that a group of anonymous agents randomly walking on a grid are able to estimate their density within a small multiplicative error in few steps by measuring their rates of encounter with other agents. Despite dependencies inherent in the fact that nearby agents may collide repeatedly (and, worse, cannot recognize when this happens), our bound nearly matches what would be required to estimate density by independently sampling grid locations. From a biological perspective, our work helps shed light on how ants and other social insects can obtain relatively accurate density estimates via encounter rates. From a technical perspective, our analysis provides tools for understanding complex dependencies in the collision probabilities of multiple random walks. We bound the strength of these dependencies using local mixing properties of the underlying graph. Our results extend beyond the grid to more general graphs, and we discuss applications to size estimation for social networks, density estimation for robot swarms, and random walk-based sampling for sensor networks.
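    The encounter-rate mechanism in this record and the preceding one can be sketched directly: anonymous agents randomly walking on a torus grid, with a tracked agent's average per-step encounter count converging to the global density. The grid size, agent count and step budget are arbitrary choices for illustration.

```python
import random

random.seed(5)

n = 50                    # torus side; true density = agents / n**2
agents = 250
d_true = agents / n ** 2  # 0.1

moves = [(0, 1), (0, -1), (1, 0), (-1, 0)]

def step(p):
    """One uniform random step on the n x n torus."""
    dx, dy = random.choice(moves)
    return ((p[0] + dx) % n, (p[1] + dy) % n)

pos = [(random.randrange(n), random.randrange(n)) for _ in range(agents)]
me = (0, 0)

steps, encounters = 4000, 0
for _ in range(steps):
    pos = [step(p) for p in pos]     # every agent walks
    me = step(me)                    # so does the tracked agent
    encounters += sum(p == me for p in pos)

d_hat = encounters / steps           # encounter rate tracks density
```

    The repeated-collision dependencies the paper analyzes show up here as extra variance in `d_hat` relative to independently sampling grid cells, but the estimate still concentrates around the true density.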

  10. Stratifying empiric risk of schizophrenia among first degree relatives using multiple predictors in two independent Indian samples.

    PubMed

    Bhatia, Triptish; Gettig, Elizabeth A; Gottesman, Irving I; Berliner, Jonathan; Mishra, N N; Nimgaonkar, Vishwajit L; Deshpande, Smita N

    2016-12-01

    Schizophrenia (SZ) has an estimated heritability of 64-88%, with the higher values based on twin studies. Conventionally, family history of psychosis is the best individual-level predictor of risk, but reliable risk estimates are unavailable for Indian populations. Genetic, environmental, and epigenetic factors are equally important and should be considered when predicting risk in 'at risk' individuals. Our aim was to estimate risk based on an Indian schizophrenia participant's family history combined with selected demographic factors. To incorporate variables in addition to family history, and to stratify risk, we constructed a regression equation that included demographic variables in addition to family history. The equation was tested in two independent Indian samples: (i) an initial sample of SZ participants (N=128), each with one sibling or offspring; and (ii) a second, independent sample consisting of multiply affected families (N=138 families, with two or more sibs/offspring affected with SZ). The overall estimated risk was 4.31±0.27 (mean±standard deviation). In the initial sample, 19 (14.8%) individuals fell in the high-risk group, 75 (58.6%) in the moderate-risk group, and 34 (26.6%) in the above-average-risk group. In the validation sample, risks were distributed as high (45%), moderate (38%) and above average (17%). Consistent risk estimates were obtained from both samples using the regression equation. Familial risk can be combined with demographic factors to estimate risk for SZ in India. If replicated, the proposed stratification of risk may be easier and more realistic for family members. Copyright © 2016. Published by Elsevier B.V.

  11. Decoding tactile afferent activity to obtain an estimate of instantaneous force and torque applied to the fingerpad

    PubMed Central

    Birznieks, Ingvars; Redmond, Stephen J.

    2015-01-01

    Dexterous manipulation is not possible without sensory information about object properties and manipulative forces. Fundamental neuroscience has been unable to demonstrate how information about multiple stimulus parameters may be continuously extracted, concurrently, from a population of tactile afferents. This is the first study to demonstrate this, using spike trains recorded from tactile afferents innervating the monkey fingerpad. A multiple-regression model, requiring no a priori knowledge of stimulus-onset times or stimulus combination, was developed to obtain continuous estimates of instantaneous force and torque. The stimuli consisted of a normal-force ramp (to a plateau of 1.8, 2.2, or 2.5 N), on top of which −3.5, −2.0, 0, +2.0, or +3.5 mNm torque was applied about the normal to the skin surface. The model inputs were sliding windows of binned spike counts recorded from each afferent. Models were trained and tested by 15-fold cross-validation to estimate instantaneous normal force and torque over the entire stimulation period. With the use of the spike trains from 58 slow-adapting type I and 25 fast-adapting type I afferents, the instantaneous normal force and torque could be estimated with small error. This study demonstrated that instantaneous force and torque parameters could be reliably extracted from a small number of tactile afferent responses in a real-time fashion with stimulus combinations that the model had not been exposed to during training. Analysis of the model weights may reveal how interactions between stimulus parameters could be disentangled for complex population responses and could be used to test neurophysiologically relevant hypotheses about encoding mechanisms. PMID:25948866
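    The model structure described above, a sliding window of binned spike counts from each afferent feeding a multiple regression that outputs instantaneous force, can be sketched with synthetic Poisson afferents. The firing-rate model, bin count, window length and force profile are assumptions for illustration, not the recorded monkey data or the paper's exact estimator.

```python
import numpy as np

rng = np.random.default_rng(6)

# Hypothetical stimulus: a normal-force ramp to a 2.2 N plateau.
t = np.arange(300)                       # time bins
force = np.clip(t / 100.0, 0, 1) * 2.2   # newtons

# Two idealized afferents whose Poisson rates track the force (plus a
# baseline), standing in for the recorded SA-I / FA-I populations.
counts = rng.poisson(np.outer(force, [4.0, 2.5]) + 0.5)   # shape (300, 2)

# Sliding window: the last 3 bins of counts from each afferent form one row
# of the design matrix, plus an intercept column.
lags = 3
rows = [counts[i - lags:i].ravel() for i in range(lags, len(t))]
X = np.column_stack([np.ones(len(rows)), np.array(rows)])
y = force[lags:]

# Multiple regression from windowed spike counts to instantaneous force.
w, *_ = np.linalg.lstsq(X, y, rcond=None)
force_hat = X @ w
rmse = float(np.sqrt(np.mean((force_hat - y) ** 2)))
```

    As in the abstract, no stimulus-onset information is supplied: the regression maps each window of counts to a force estimate continuously over the whole record.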

  12. The application of parameter estimation to flight measurements to obtain lateral-directional stability derivatives of an augmented jet-flap STOL airplane

    NASA Technical Reports Server (NTRS)

    Stephenson, J. D.

    1983-01-01

    Flight experiments with an augmented jet flap STOL aircraft provided data from which the lateral-directional stability and control derivatives were calculated by applying a linear regression parameter estimation procedure. The tests, which were conducted with the jet flaps set at a 65 deg deflection, covered a large range of angles of attack and engine power settings. The effect of changing the angle of the jet thrust vector was also investigated. Test results are compared with stability derivatives that had been predicted. The roll damping derived from the tests was significantly larger than had been predicted, whereas the other derivatives were generally in agreement with the predictions. Results obtained using a maximum likelihood estimation procedure are compared with those from the linear regression solutions.

  13. Cosmic homogeneity: a spectroscopic and model-independent measurement

    NASA Astrophysics Data System (ADS)

    Gonçalves, R. S.; Carvalho, G. C.; Bengaly, C. A. P., Jr.; Carvalho, J. C.; Bernui, A.; Alcaniz, J. S.; Maartens, R.

    2018-03-01

    Cosmology relies on the Cosmological Principle, i.e. the hypothesis that the Universe is homogeneous and isotropic on large scales. This implies in particular that the counts of galaxies should approach a homogeneous scaling with volume at sufficiently large scales. Testing homogeneity is crucial to obtain a correct interpretation of the physical assumptions underlying the current cosmic acceleration and structure formation of the Universe. In this letter, we use the Baryon Oscillation Spectroscopic Survey to make the first spectroscopic and model-independent measurements of the angular homogeneity scale θh. Applying four statistical estimators, we show that the angular distribution of galaxies in the range 0.46 < z < 0.62 is consistent with homogeneity at large scales, and that θh varies with redshift, indicating a smoother Universe in the past. These results are in agreement with the foundations of the standard cosmological paradigm.

  14. Estimating preferences for local public services using migration data.

    PubMed

    Dahlberg, Matz; Eklöf, Matias; Fredriksson, Peter; Jofre-Monseny, Jordi

    2012-01-01

    Using Swedish micro data, the paper examines the impact of local public services on community choice. The choice of community is modelled as a choice between a discrete set of alternatives. It is found that, given taxes, high spending on child care attracts migrants. Less conclusive results are obtained with respect to the role of spending on education and elderly care. High local taxes deter migrants. Relaxing the independence of the irrelevant alternatives assumption, by estimating a mixed logit model, has a significant impact on the results.

  15. Simultaneous head tissue conductivity and EEG source location estimation.

    PubMed

    Akalin Acar, Zeynep; Acar, Can E; Makeig, Scott

    2016-01-01

    Accurate electroencephalographic (EEG) source localization requires an electrical head model incorporating accurate geometries and conductivity values for the major head tissues. While consistent conductivity values have been reported for scalp, brain, and cerebrospinal fluid, measured brain-to-skull conductivity ratio (BSCR) estimates have varied between 8 and 80, likely reflecting both inter-subject and measurement method differences. In simulations, mis-estimation of skull conductivity can produce source localization errors as large as 3 cm. Here, we describe an iterative gradient-based approach to Simultaneous tissue Conductivity And source Location Estimation (SCALE). The scalp projection maps used by SCALE are obtained from near-dipolar effective EEG sources found by adequate independent component analysis (ICA) decomposition of sufficient high-density EEG data. We applied SCALE to simulated scalp projections of 15 cm2-scale cortical patch sources in an MR image-based electrical head model with simulated BSCR of 30. Initialized either with a BSCR of 80 or 20, SCALE estimated BSCR as 32.6. In Adaptive Mixture ICA (AMICA) decompositions of (45-min, 128-channel) EEG data from two young adults we identified sets of 13 independent components having near-dipolar scalp maps compatible with a single cortical source patch. Again initialized with either BSCR 80 or 25, SCALE gave BSCR estimates of 34 and 54 for the two subjects respectively. The ability to accurately estimate skull conductivity non-invasively from any well-recorded EEG data in combination with a stable and non-invasively acquired MR imaging-derived electrical head model could remove a critical barrier to using EEG as a sub-cm2-scale accurate 3-D functional cortical imaging modality. Copyright © 2015 Elsevier Inc. All rights reserved.

  16. Simultaneous head tissue conductivity and EEG source location estimation

    PubMed Central

    Acar, Can E.; Makeig, Scott

    2015-01-01

    Accurate electroencephalographic (EEG) source localization requires an electrical head model incorporating accurate geometries and conductivity values for the major head tissues. While consistent conductivity values have been reported for scalp, brain, and cerebrospinal fluid, measured brain-to-skull conductivity ratio (BSCR) estimates have varied between 8 and 80, likely reflecting both inter-subject and measurement method differences. In simulations, mis-estimation of skull conductivity can produce source localization errors as large as 3 cm. Here, we describe an iterative gradient-based approach to Simultaneous tissue Conductivity And source Location Estimation (SCALE). The scalp projection maps used by SCALE are obtained from near-dipolar effective EEG sources found by adequate independent component analysis (ICA) decomposition of sufficient high-density EEG data. We applied SCALE to simulated scalp projections of 15 cm2-scale cortical patch sources in an MR image-based electrical head model with simulated BSCR of 30. Initialized either with a BSCR of 80 or 20, SCALE estimated BSCR as 32.6. In Adaptive Mixture ICA (AMICA) decompositions of (45-min, 128-channel) EEG data from two young adults we identified sets of 13 independent components having near-dipolar scalp maps compatible with a single cortical source patch. Again initialized with either BSCR 80 or 25, SCALE gave BSCR estimates of 34 and 54 for the two subjects respectively. The ability to accurately estimate skull conductivity non-invasively from any well-recorded EEG data in combination with a stable and non-invasively acquired MR imaging-derived electrical head model could remove a critical barrier to using EEG as a sub-cm2-scale accurate 3-D functional cortical imaging modality. PMID:26302675

  17. Applicant Characteristics Associated With Selection for Ranking at Independent Surgery Residency Programs.

    PubMed

    Dort, Jonathan M; Trickey, Amber W; Kallies, Kara J; Joshi, Amit R T; Sidwell, Richard A; Jarman, Benjamin T

    2015-01-01

    This study evaluated characteristics of applicants selected for interview and ranked by independent general surgery residency programs and assessed independent program application volumes, interview selection, rank list formation, and match success. Demographic and academic information was analyzed for 2014-2015 applicants. Applicant characteristics were compared by ranking status using univariate and multivariable statistical techniques. Characteristics independently associated with whether or not an applicant was ranked were identified using multivariable logistic regression modeling with backward stepwise variable selection and cluster-correlated robust variance estimates to account for correlations among individuals who applied to multiple programs. The Electronic Residency Application Service was used to obtain applicant data and program match outcomes at 33 independent surgery programs. All applicants selected to interview at 33 participating independent general surgery residency programs were included in the study. Applicants were 60% male with median age of 26 years. Birthplace was well distributed. Most applicants (73%) had ≥1 academic publication. Median United States Medical Licensing Exams (USMLE) Step 1 score was 228 (interquartile range: 218-240), and median USMLE Step 2 clinical knowledge score was 241 (interquartile range: 231-250). Residency programs in some regions more often ranked applicants who attended medical school within the same region. On multivariable analysis, significant predictors of ranking by an independent residency program were: USMLE scores, medical school region, and birth region. Independent programs received an average of 764 applications (range: 307-1704). On average, 12% of applicants were selected for interview, and 81% of interviewed applicants were ranked. Most programs (84%) matched at least 1 applicant ranked in their top 10. Participating independent programs attract a large volume of applicants and have high standards in the selection process.

  18. A first application of independent component analysis to extracting structure from stock returns.

    PubMed

    Back, A D; Weigend, A S

    1997-08-01

    This paper explores the application of a signal processing technique known as independent component analysis (ICA) or blind source separation to multivariate financial time series such as a portfolio of stocks. The key idea of ICA is to linearly map the observed multivariate time series into a new space of statistically independent components (ICs). We apply ICA to three years of daily returns of the 28 largest Japanese stocks and compare the results with those obtained using principal component analysis. The results indicate that the estimated ICs fall into two categories, (i) infrequent large shocks (responsible for the major changes in the stock prices), and (ii) frequent smaller fluctuations (contributing little to the overall level of the stocks). We show that the overall stock price can be reconstructed surprisingly well by using a small number of thresholded weighted ICs. In contrast, when using shocks derived from principal components instead of independent components, the reconstructed price is less similar to the original one. ICA is shown to be a potentially powerful method of analyzing and understanding driving mechanisms in financial time series. The application to portfolio optimization is described in Chin and Weigend (1998).
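A minimal version of this analysis, using scikit-learn's FastICA on synthetic return-like series (the two latent sources and the mixing matrix below are invented for illustration), might look like:

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(1)

# Synthetic stand-in for daily returns: two latent sources (rare large
# shocks and frequent small fluctuations) mixed into 4 "stock" series.
n = 750                                   # roughly 3 years of trading days
shocks = rng.standard_t(df=2, size=n)     # heavy-tailed, infrequent jumps
noise = 0.1 * rng.standard_normal(n)      # frequent small fluctuations
S = np.column_stack([shocks, noise])
A = rng.standard_normal((2, 4))           # unknown mixing matrix
returns = S @ A

# Recover statistically independent components from the observed series.
ica = FastICA(n_components=2, random_state=0)
ics = ica.fit_transform(returns)
```

The heavy-tailed component plays the role of the "infrequent large shocks"; on real data, `returns` would be the matrix of daily stock returns.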

  19. Carrying Position Independent User Heading Estimation for Indoor Pedestrian Navigation with Smartphones

    PubMed Central

    Deng, Zhi-An; Wang, Guofeng; Hu, Ying; Cui, Yang

    2016-01-01

    This paper proposes a novel heading estimation approach for indoor pedestrian navigation using the built-in inertial sensors on a smartphone. Unlike previous approaches constraining the carrying position of a smartphone on the user’s body, our approach gives the user greater freedom by implementing automatic recognition of the device carrying position and subsequent selection of an optimal strategy for heading estimation. We first determine the motion state by a decision tree using an accelerometer and a barometer. Then, to enable accurate and computationally lightweight carrying position recognition, we combine a position classifier with a novel position transition detection algorithm, which may also be used to avoid confusion between position transitions and user turns during walking. For a device placed in the trouser pockets or held in a swinging hand, the heading estimation is achieved by deploying a principal component analysis (PCA)-based approach. For a device held in the hand or against the ear during a phone call, user heading is directly estimated by adding the yaw angle of the device to the related heading offset. Experimental results show that our approach can automatically detect carrying positions with high accuracy, and outperforms previous heading estimation approaches in terms of accuracy and applicability. PMID:27187391
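For the swinging-hand case, the PCA step amounts to taking the dominant eigenvector of the horizontal acceleration covariance; a toy sketch with a fabricated 30-degree walking direction:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic horizontal accelerations from a swinging hand: the dominant
# oscillation is along the (unknown) walking direction, here 30 degrees.
heading = np.deg2rad(30.0)
t = np.linspace(0, 10, 500)
along = np.sin(2 * np.pi * 2.0 * t)               # gait oscillation
acc = np.outer(along, [np.cos(heading), np.sin(heading)])
acc += 0.05 * rng.standard_normal(acc.shape)

# PCA: the eigenvector with the largest eigenvalue of the covariance gives
# the heading axis (with a 180-degree ambiguity a real system must resolve).
cov = np.cov(acc, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)
axis = eigvecs[:, np.argmax(eigvals)]
est = np.rad2deg(np.arctan2(axis[1], axis[0])) % 180.0
```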

  20. A comparison of low back kinetic estimates obtained through posture matching, rigid link modeling and an EMG-assisted model.

    PubMed

    Parkinson, R J; Bezaire, M; Callaghan, J P

    2011-07-01

    This study examined errors introduced by a posture matching approach (3DMatch) relative to dynamic three-dimensional rigid link and EMG-assisted models. Eighty-eight lifting trials of various combinations of heights (floor, 0.67, 1.2 m), asymmetry (left, right and center) and mass (7.6 and 9.7 kg) were videotaped while spine postures, ground reaction forces, segment orientations and muscle activations were documented and used to estimate joint moments and forces (L5/S1). Posture matching over predicted peak and cumulative extension moment (p < 0.0001 for all variables). There was no difference between peak compression estimates obtained with posture matching or EMG-assisted approaches (p = 0.7987). Posture matching over predicted cumulative (p < 0.0001) compressive loading due to a bias in standing, however, individualized bias correction eliminated the differences. Therefore, posture matching provides a method to analyze industrial lifting exposures that will predict kinetic values similar to those of more sophisticated models, provided necessary corrections are applied. Copyright © 2010 Elsevier Ltd and The Ergonomics Society. All rights reserved.

  1. A GRASS GIS module to obtain an estimation of glacier behavior under climate change: A pilot study on Italian glacier

    NASA Astrophysics Data System (ADS)

    Strigaro, Daniele; Moretti, Massimiliano; Mattavelli, Matteo; Frigerio, Ivan; Amicis, Mattia De; Maggi, Valter

    2016-09-01

    The aim of this work is to integrate the Minimal Glacier Model in a Geographic Information System Python module in order to obtain spatial simulations of glacier retreat and to assess future scenarios with a spatial representation. Minimal Glacier Models are a simple yet effective way of estimating glacier response to climate fluctuations. This module can be useful to the scientific and glaciological community for evaluating glacier behavior driven by climate forcing. The module, called r.glacio.model, is developed in a GRASS GIS (GRASS Development Team, 2016) environment using the Python programming language combined with different libraries such as GDAL, OGR, CSV, math, etc. The module is applied and validated on the Rutor glacier, a glacier in the south-western region of the Italian Alps. This glacier is large and exhibits rather regular and lively dynamics. The simulation is calibrated by reconstructing the 3-dimensional flow-line dynamics and analyzing the difference between the simulated flow-line length variations and the observed glacier fronts derived from orthophotos and DEMs. These simulations are driven by the past mass balance record. Afterwards, the future assessment is estimated by using climatic drivers provided by a set of General Circulation Models participating in the Climate Model Inter-comparison Project 5 effort. The approach devised in r.glacio.model can be applied to most alpine glaciers to obtain a first-order spatial representation of glacier behavior under climate change.
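Minimal glacier models reduce front dynamics to a low-order response of glacier length to mass balance; a toy relaxation in that spirit (the response time, sensitivity, and lengths below are invented, not the r.glacio.model parameterization):

```python
# Toy flow-line relaxation in the spirit of a minimal glacier model:
# the length L relaxes toward an equilibrium length set by the annual
# mass balance B, with response time tau (all numbers illustrative).
def step_length(L, B, tau=30.0, sensitivity=500.0, L0=8000.0, dt=1.0):
    L_eq = L0 + sensitivity * B        # equilibrium length for balance B
    return L + dt * (L_eq - L) / tau

L = 8000.0
for _ in range(100):
    L = step_length(L, B=-1.5)         # sustained negative mass balance
```

Under a sustained negative balance the modeled front retreats toward the new equilibrium length, which is the qualitative behavior the module maps spatially.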

  2. Meta-analysis of pathway enrichment: combining independent and dependent omics data sets.

    PubMed

    Kaever, Alexander; Landesfeind, Manuel; Feussner, Kirstin; Morgenstern, Burkhard; Feussner, Ivo; Meinicke, Peter

    2014-01-01

    A major challenge in current systems biology is the combination and integrative analysis of large data sets obtained from different high-throughput omics platforms, such as mass spectrometry based Metabolomics and Proteomics or DNA microarray or RNA-seq-based Transcriptomics. Especially in the case of non-targeted Metabolomics experiments, where it is often impossible to unambiguously map ion features from mass spectrometry analysis to metabolites, the integration of more reliable omics technologies is highly desirable. A popular method for the knowledge-based interpretation of single data sets is the (Gene) Set Enrichment Analysis. In order to combine the results from different analyses, we introduce a methodical framework for the meta-analysis of p-values obtained from Pathway Enrichment Analysis (Set Enrichment Analysis based on pathways) of multiple dependent or independent data sets from different omics platforms. For dependent data sets, e.g. obtained from the same biological samples, the framework utilizes a covariance estimation procedure based on the nonsignificant pathways in single data set enrichment analysis. The framework is evaluated and applied in the joint analysis of Metabolomics mass spectrometry and Transcriptomics DNA microarray data in the context of plant wounding. In extensive studies of simulated data set dependence, the introduced correlation could be fully reconstructed by means of the covariance estimation based on pathway enrichment. By restricting the range of p-values of pathways considered in the estimation, the overestimation of correlation, which is introduced by the significant pathways, could be reduced. When applying the proposed methods to the real data sets, the meta-analysis was shown not only to be a powerful tool to investigate the correlation between different data sets and summarize the results of multiple analyses but also to distinguish experiment-specific key pathways.
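The core idea, combining per-data-set enrichment p-values while accounting for their covariance, can be sketched with a Stouffer-style combination (a simplified stand-in: the paper estimates the correlation from nonsignificant pathways rather than assuming it):

```python
import numpy as np
from scipy.stats import norm

def combine_pvalues_dependent(pvals, corr):
    """Stouffer-style combination of one-sided p-values whose underlying
    z-scores have correlation matrix `corr`."""
    z = norm.isf(np.asarray(pvals))                   # p -> z
    var = np.ones_like(z) @ corr @ np.ones_like(z)    # Var(sum of z_i)
    z_comb = z.sum() / np.sqrt(var)
    return norm.sf(z_comb)                            # combined one-sided p

# Two dependent tests with correlation 0.5 yield less evidence gain than
# the same two p-values treated as independent.
corr = np.array([[1.0, 0.5], [0.5, 1.0]])
p_dep = combine_pvalues_dependent([0.01, 0.02], corr)
p_ind = combine_pvalues_dependent([0.01, 0.02], np.eye(2))
```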

  3. Measures of Physical and Mental Independence Among HIV-Positive Individuals: Impact of Substance Use Disorder.

    PubMed

    Christensen, Bianca; Qin, Zijian; Byrd, Desiree A; Yu, Fang; Morgello, Susan; Gelman, Benjamin B; Moore, David J; Grant, Igor; Singer, Elyse J; Fox, Howard S; Baccaglini, Lorena

    2017-10-01

    With the transition of HIV infection from an acute to a chronic disease after the introduction of antiretroviral medications, there has been an increased focus on long-term neurocognitive and other functional outcomes of HIV patients. Thus, we assessed factors, particularly history of a substance use disorder, associated with time to loss of measures of physical or mental independence among HIV-positive individuals. Data were obtained from the National NeuroAIDS Tissue Consortium. Kaplan-Meier and Cox proportional hazards regression analyses were used to estimate the time since HIV diagnosis to loss of independence, and to identify associated risk factors. HIV-positive participants who self-identified as physically (n = 698) or mentally (n = 616) independent on selected activities of daily living at baseline were eligible for analyses. A history of substance use disorder was associated with a higher hazard of loss of both physical and mental independence [adjusted hazard ratio (HR) = 1.71, 95% confidence interval (95% CI): 1.07-2.78; adjusted HR = 1.67, 95% CI: 1.11-2.52, respectively]. After adjusting for substance use disorder and other covariates, older age at diagnosis and female gender were associated with higher hazards of loss of both physical and mental independence, non-white participants had higher hazards of loss of physical independence, whereas participants with an abnormal neurocognitive diagnosis and fewer years of education had higher hazards of loss of mental independence. In summary, history of substance use disorder was associated with loss of measures of both physical and mental independence. The nature of this link and the means to prevent such loss of independence need further investigation.
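The Kaplan-Meier estimator used in the survival analyses above is the product over event times of (1 - d/n); a minimal implementation on invented follow-up data:

```python
import numpy as np

def kaplan_meier(times, events):
    """Product-limit survival estimate. `events` is 1 for an observed
    loss of independence, 0 for a censored follow-up."""
    times, events = np.asarray(times, float), np.asarray(events, int)
    surv, s = [], 1.0
    for t in np.unique(times[events == 1]):           # distinct event times
        at_risk = np.sum(times >= t)                  # still under follow-up
        d = np.sum((times == t) & (events == 1))      # events at time t
        s *= 1.0 - d / at_risk
        surv.append((t, s))
    return surv

# Invented years-since-diagnosis with censoring flags.
km = kaplan_meier([2, 3, 3, 5, 8, 8, 9], [1, 1, 0, 1, 1, 0, 0])
```

A Cox model, as used in the study, then relates covariates such as substance use history to the hazard underlying this curve.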

  4. Estimating Independent Locally Shifted Random Utility Models for Ranking Data

    ERIC Educational Resources Information Center

    Lam, Kar Yin; Koning, Alex J.; Franses, Philip Hans

    2011-01-01

    We consider the estimation of probabilistic ranking models in the context of conjoint experiments. By using approximate rather than exact ranking probabilities, we avoided the computation of high-dimensional integrals. We extended the approximation technique proposed by Henery (1981) in the context of the Thurstone-Mosteller-Daniels model to any…

  5. Attitude Estimation or Quaternion Estimation?

    NASA Technical Reports Server (NTRS)

    Markley, F. Landis

    2003-01-01

    The attitude of spacecraft is represented by a 3x3 orthogonal matrix with unity determinant, which belongs to the three-dimensional special orthogonal group SO(3). The fact that all three-parameter representations of SO(3) are singular or discontinuous for certain attitudes has led to the use of higher-dimensional nonsingular parameterizations, especially the four-component quaternion. In attitude estimation, we are faced with the alternatives of using an attitude representation that is either singular or redundant. Estimation procedures fall into three broad classes. The first estimates a three-dimensional representation of attitude deviations from a reference attitude parameterized by a higher-dimensional nonsingular parameterization. The deviations from the reference are assumed to be small enough to avoid any singularity or discontinuity of the three-dimensional parameterization. The second class, which estimates a higher-dimensional representation subject to enough constraints to leave only three degrees of freedom, is difficult to formulate and apply consistently. The third class estimates a representation of SO(3) with more than three dimensions, treating the parameters as independent. We refer to the most common member of this class as quaternion estimation, to contrast it with attitude estimation. We analyze the first and third of these approaches in the context of an extended Kalman filter with simplified kinematics and measurement models.
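The first class of estimators, a three-component attitude deviation applied multiplicatively to a reference quaternion, involves bookkeeping like the following sketch (scalar-last convention; the small-angle quaternion [δθ/2, 1] is the standard first-order form):

```python
import numpy as np

def quat_multiply(q, r):
    """Hamilton product, scalar-last convention [x, y, z, w]."""
    x1, y1, z1, w1 = q
    x2, y2, z2, w2 = r
    return np.array([
        w1 * x2 + x1 * w2 + y1 * z2 - z1 * y2,
        w1 * y2 - x1 * z2 + y1 * w2 + z1 * x2,
        w1 * z2 + x1 * y2 - y1 * x2 + z1 * w2,
        w1 * w2 - x1 * x2 - y1 * y2 - z1 * z2,
    ])

def apply_small_correction(q_ref, delta_theta):
    """Fold a three-component attitude deviation (radians) into the
    four-component reference quaternion, then renormalize."""
    dq = np.concatenate([0.5 * np.asarray(delta_theta), [1.0]])
    q = quat_multiply(dq, q_ref)
    return q / np.linalg.norm(q)

# A 1 mrad roll correction to the identity attitude.
q = apply_small_correction(np.array([0.0, 0.0, 0.0, 1.0]), [1e-3, 0, 0])
```

The renormalization is what enforces the unit-norm constraint that the redundant four-component representation would otherwise drift away from.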

  6. Equivalent water height extracted from GRACE gravity field model with robust independent component analysis

    NASA Astrophysics Data System (ADS)

    Guo, Jinyun; Mu, Dapeng; Liu, Xin; Yan, Haoming; Dai, Honglei

    2014-08-01

    The Level-2 monthly GRACE gravity field models issued by the Center for Space Research (CSR), GeoForschungsZentrum (GFZ), and Jet Propulsion Laboratory (JPL) are treated as observations used to extract the equivalent water height (EWH) with robust independent component analysis (RICA). Smoothing radii of 300, 400, and 500 km are tested, respectively, in the Gaussian smoothing kernel function to reduce observation noise. Three independent components are obtained by RICA in the spatial domain; the first component matches the geophysical signal, and the other two match the north-south stripe errors and other noise. The first mode is used to estimate the EWHs of CSR, JPL, and GFZ, and is compared with the classical empirical decorrelation method (EDM). The EWH STDs for the 12 months of 2010 extracted by RICA and EDM show obvious fluctuations. The results indicate that the sharp EWH changes in some areas have an important global effect, as in the Amazon, Mekong, and Zambezi basins.

  7. Improved Estimation of Cardiac Function Parameters Using a Combination of Independent Automated Segmentation Results in Cardiovascular Magnetic Resonance Imaging.

    PubMed

    Lebenberg, Jessica; Lalande, Alain; Clarysse, Patrick; Buvat, Irene; Casta, Christopher; Cochet, Alexandre; Constantinidès, Constantin; Cousty, Jean; de Cesare, Alain; Jehan-Besson, Stephanie; Lefort, Muriel; Najman, Laurent; Roullot, Elodie; Sarry, Laurent; Tilmant, Christophe; Frouin, Frederique; Garreau, Mireille

    2015-01-01

    This work aimed at combining different segmentation approaches to produce a robust and accurate segmentation result. Three to five segmentation results of the left ventricle were combined using the STAPLE algorithm and the reliability of the resulting segmentation was evaluated in comparison with the result of each individual segmentation method. This comparison was performed using a supervised approach based on a reference method. Then, we used an unsupervised statistical evaluation, the extended Regression Without Truth (eRWT) that ranks different methods according to their accuracy in estimating a specific biomarker in a population. The segmentation accuracy was evaluated by estimating six cardiac function parameters resulting from the left ventricle contour delineation using a public cardiac cine MRI database. Eight different segmentation methods, including three expert delineations and five automated methods, were considered, and sixteen combinations of the automated methods using STAPLE were investigated. The supervised and unsupervised evaluations demonstrated that in most cases, STAPLE results provided better estimates than individual automated segmentation methods. Overall, combining different automated segmentation methods improved the reliability of the segmentation result compared to that obtained using an individual method and could achieve the accuracy of an expert.
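STAPLE itself is an EM algorithm that weights each input segmentation by estimated sensitivity and specificity; plain per-pixel majority voting, shown below on fabricated 4x4 masks, conveys the simpler label-fusion idea that STAPLE refines:

```python
import numpy as np

# Three hypothetical binary segmentations of the same 4x4 image region
# (1 = left-ventricle pixel); the masks are invented for illustration.
seg_a = np.array([[0, 1, 1, 0], [1, 1, 1, 1], [1, 1, 1, 1], [0, 1, 1, 0]])
seg_b = np.array([[0, 1, 1, 0], [0, 1, 1, 1], [1, 1, 1, 0], [0, 1, 1, 0]])
seg_c = np.array([[0, 0, 1, 0], [1, 1, 1, 1], [1, 1, 1, 1], [0, 1, 0, 0]])

# Keep a pixel if at least 2 of the 3 raters include it.
votes = seg_a + seg_b + seg_c
fused = (votes >= 2).astype(int)
```

STAPLE replaces the fixed threshold with per-rater performance parameters estimated jointly with the consensus segmentation.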

  8. Improved Estimation of Cardiac Function Parameters Using a Combination of Independent Automated Segmentation Results in Cardiovascular Magnetic Resonance Imaging

    PubMed Central

    Lebenberg, Jessica; Lalande, Alain; Clarysse, Patrick; Buvat, Irene; Casta, Christopher; Cochet, Alexandre; Constantinidès, Constantin; Cousty, Jean; de Cesare, Alain; Jehan-Besson, Stephanie; Lefort, Muriel; Najman, Laurent; Roullot, Elodie; Sarry, Laurent; Tilmant, Christophe

    2015-01-01

    This work aimed at combining different segmentation approaches to produce a robust and accurate segmentation result. Three to five segmentation results of the left ventricle were combined using the STAPLE algorithm and the reliability of the resulting segmentation was evaluated in comparison with the result of each individual segmentation method. This comparison was performed using a supervised approach based on a reference method. Then, we used an unsupervised statistical evaluation, the extended Regression Without Truth (eRWT) that ranks different methods according to their accuracy in estimating a specific biomarker in a population. The segmentation accuracy was evaluated by estimating six cardiac function parameters resulting from the left ventricle contour delineation using a public cardiac cine MRI database. Eight different segmentation methods, including three expert delineations and five automated methods, were considered, and sixteen combinations of the automated methods using STAPLE were investigated. The supervised and unsupervised evaluations demonstrated that in most cases, STAPLE results provided better estimates than individual automated segmentation methods. Overall, combining different automated segmentation methods improved the reliability of the segmentation result compared to that obtained using an individual method and could achieve the accuracy of an expert. PMID:26287691

  9. A provisional effective evaluation when errors are present in independent variables

    NASA Technical Reports Server (NTRS)

    Gurin, L. S.

    1983-01-01

    Algorithms are examined for evaluating the parameters of a regression model when there are errors in the independent variables. The algorithms are fast and the estimates they yield are stable with respect to the correlation of errors and measurements of both the dependent variable and the independent variables.
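One classical approach to regression when the independent variables are noisy is total least squares; an SVD-based line fit on synthetic data (true line y = 2x + 1, equal noise on both axes), shown here as a simple point of reference for the errors-in-variables setting:

```python
import numpy as np

rng = np.random.default_rng(3)

# Noisy observations of points on y = 2x + 1, with errors in BOTH x and y.
x_true = np.linspace(0, 10, 200)
x = x_true + 0.2 * rng.standard_normal(200)
y = 2 * x_true + 1 + 0.2 * rng.standard_normal(200)

# Center the data; the right singular vector with the smallest singular
# value is the normal of the total-least-squares line.
X = np.column_stack([x - x.mean(), y - y.mean()])
_, _, Vt = np.linalg.svd(X, full_matrices=False)
a, b = Vt[-1]                      # line: a*(x - xm) + b*(y - ym) = 0
slope = -a / b
intercept = y.mean() - slope * x.mean()
```

Unlike ordinary least squares, this fit minimizes perpendicular distances, so noise in x does not bias the slope toward zero.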

  10. Comparison of estimates of left ventricular ejection fraction obtained from gated blood pool imaging, different software packages and cameras.

    PubMed

    Steyn, Rachelle; Boniaszczuk, John; Geldenhuys, Theodore

    2014-01-01

    To determine how two software packages, supplied by Siemens and Hermes, for processing gated blood pool (GBP) studies should be used in our department and whether the use of different cameras for the acquisition of raw data influences the results. The study had two components. For the first component, 200 studies were acquired on a General Electric (GE) camera and processed three times by three operators using the Siemens and Hermes software packages. For the second part, 200 studies were acquired on two different cameras (GE and Siemens). The matched pairs of raw data were processed by one operator using the Siemens and Hermes software packages. The Siemens method consistently gave estimates that were 4.3% higher than the Hermes method (p < 0.001). The differences were not associated with any particular level of left ventricular ejection fraction (LVEF). There was no difference in the estimates of LVEF obtained by the three operators (p = 0.1794). The reproducibility of estimates was good. In 95% of patients, using the Siemens method, the SD of the three estimates of LVEF by operator 1 was ≤ 1.7, operator 2 was ≤ 2.1 and operator 3 was ≤ 1.3. The corresponding values for the Hermes method were ≤ 2.5, ≤ 2.0 and ≤ 2.1. There was no difference in the results of matched pairs of data acquired on different cameras (p = 0.4933). CONCLUSION: Software packages for processing GBP studies are not interchangeable. The report should include the name and version of the software package used. Wherever possible, the same package should be used for serial studies. If this is not possible, the report should include the limits of agreement of the different packages. Data acquisition on different cameras did not influence the results.

  11. Combining wrist age and third molars in forensic age estimation: how to calculate the joint age estimate and its error rate in age diagnostics.

    PubMed

    Gelbrich, Bianca; Frerking, Carolin; Weiss, Sandra; Schwerdt, Sebastian; Stellzig-Eisenhauer, Angelika; Tausche, Eve; Gelbrich, Götz

    2015-01-01

    Forensic age estimation in living adolescents is based on several methods, e.g. the assessment of skeletal and dental maturation. Combination of several methods is mandatory, since age estimates from a single method are too imprecise due to biological variability. The correlation of the errors of the methods being combined must be known to calculate the precision of combined age estimates. To examine the correlation of the errors of the hand and the third molar method and to demonstrate how to calculate the combined age estimate. Clinical routine radiographs of the hand and dental panoramic images of 383 patients (aged 7.8-19.1 years, 56% female) were assessed. Lack of correlation (r = -0.024, 95% CI = -0.124 to + 0.076, p = 0.64) allows calculating the combined age estimate as the weighted average of the estimates from hand bones and third molars. Combination improved the standard deviations of errors (hand = 0.97, teeth = 1.35 years) to 0.79 years. Uncorrelated errors of the age estimates obtained from both methods allow straightforward determination of the common estimate and its variance. This is also possible when reference data for the hand and the third molar method are established independently from each other, using different samples.
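With uncorrelated errors, the combined estimate is the inverse-variance weighted average, and its variance is the reciprocal of the summed weights; plugging in the reported standard deviations (0.97 and 1.35 years) reproduces the 0.79-year figure (the two input ages below are invented for illustration):

```python
import math

def combine_independent(est1, sd1, est2, sd2):
    """Inverse-variance weighted average of two uncorrelated estimates,
    plus the standard deviation of the combined estimate."""
    w1, w2 = 1.0 / sd1**2, 1.0 / sd2**2
    est = (w1 * est1 + w2 * est2) / (w1 + w2)
    sd = math.sqrt(1.0 / (w1 + w2))
    return est, sd

# SDs from the abstract: 0.97 y (hand bones) and 1.35 y (third molars);
# the ages 17.2 and 16.5 are hypothetical single-method estimates.
age, sd = combine_independent(17.2, 0.97, 16.5, 1.35)
```

The more precise hand-bone estimate receives the larger weight, and the combined SD is smaller than either input SD.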

  12. Are Independent Probes Truly Independent?

    ERIC Educational Resources Information Center

    Camp, Gino; Pecher, Diane; Schmidt, Henk G.; Zeelenberg, Rene

    2009-01-01

    The independent cue technique has been developed to test traditional interference theories against inhibition theories of forgetting. In the present study, the authors tested the critical criterion for the independence of independent cues: Studied cues not presented during test (and unrelated to test cues) should not contribute to the retrieval…

  13. On Obtaining Estimates of the Fraction of Missing Information from Full Information Maximum Likelihood

    ERIC Educational Resources Information Center

    Savalei, Victoria; Rhemtulla, Mijke

    2012-01-01

    Fraction of missing information [lambda][subscript j] is a useful measure of the impact of missing data on the quality of estimation of a particular parameter. This measure can be computed for all parameters in the model, and it communicates the relative loss of efficiency in the estimation of a particular parameter due to missing data. It has…

  14. A K-means multivariate approach for clustering independent components from magnetoencephalographic data.

    PubMed

    Spadone, Sara; de Pasquale, Francesco; Mantini, Dante; Della Penna, Stefania

    2012-09-01

    Independent component analysis (ICA) is typically applied on functional magnetic resonance imaging, electroencephalographic and magnetoencephalographic (MEG) data due to its data-driven nature. In these applications, ICA needs to be extended from single to multi-session and multi-subject studies for interpreting and assigning a statistical significance at the group level. Here a novel strategy for analyzing MEG independent components (ICs) is presented, Multivariate Algorithm for Grouping MEG Independent Components K-means based (MAGMICK). The proposed approach is able to capture spatio-temporal dynamics of brain activity in MEG studies by running ICA at subject level and then clustering the ICs across sessions and subjects. Distinctive features of MAGMICK are: i) the implementation of an efficient set of "MEG fingerprints" designed to summarize properties of MEG ICs as they are built on spatial, temporal and spectral parameters; ii) the implementation of a modified version of the standard K-means procedure to improve its data-driven character. This algorithm groups the obtained ICs automatically estimating the number of clusters through an adaptive weighting of the parameters and a constraint on the ICs independence, i.e. components coming from the same session (at subject level) or subject (at group level) cannot be grouped together. The performances of MAGMICK are illustrated by analyzing two sets of MEG data obtained during a finger tapping task and median nerve stimulation. The results demonstrate that the method can extract consistent patterns of spatial topography and spectral properties across sessions and subjects that are in good agreement with the literature. In addition, these results are compared to those from a modified version of affinity propagation clustering method. The comparison, evaluated in terms of different clustering validity indices, shows that our methodology often outperforms the clustering algorithm. Eventually, these results are
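The grouping step, clustering per-subject IC feature vectors ("fingerprints"), can be illustrated with standard K-means on fabricated features; the session/subject exclusion constraint and adaptive weighting of MAGMICK are omitted here:

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(4)

# Hypothetical fingerprints for ICs pooled across subjects: feature
# vectors scattered around two underlying component types.
centers = np.array([[0.0, 0.0, 1.0], [3.0, 3.0, 0.2]])
features = np.vstack([c + 0.3 * rng.standard_normal((6, 3)) for c in centers])

# Group the 12 ICs into 2 clusters of like components.
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(features)
labels = km.labels_
```

In MAGMICK, real fingerprints combine spatial, temporal, and spectral parameters, and the number of clusters is estimated rather than fixed.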

  15. An Algorithm for Obtaining the Distribution of 1-Meter Lightning Channel Segment Altitudes for Application in Lightning NOx Production Estimation

    NASA Technical Reports Server (NTRS)

    Peterson, Harold; Koshak, William J.

    2009-01-01

    An algorithm has been developed to estimate the altitude distribution of one-meter lightning channel segments. The algorithm is required as part of a broader objective that involves improving the lightning NOx emission inventories of both regional air quality and global chemistry/climate models. The algorithm was tested and applied to VHF signals detected by the North Alabama Lightning Mapping Array (NALMA). The accuracy of the algorithm was characterized by comparing algorithm output to the plots of individual discharges whose lengths were computed by hand; VHF source amplitude thresholding and smoothing were applied to optimize results. Several thousand lightning flashes within 120 km of the NALMA network centroid were gathered from all four seasons and analyzed by the algorithm. The mean, standard deviation, and median statistics were obtained for all the flashes, the ground flashes, and the cloud flashes. One-meter channel segment altitude distributions were also obtained for the different seasons.

  16. Incident CTS in a large pooled cohort study: associations obtained by a Job Exposure Matrix versus associations obtained from observed exposures.

    PubMed

    Dale, Ann Marie; Ekenga, Christine C; Buckner-Petty, Skye; Merlino, Linda; Thiese, Matthew S; Bao, Stephen; Meyers, Alysha Rose; Harris-Adamson, Carisa; Kapellusch, Jay; Eisen, Ellen A; Gerr, Fred; Hegmann, Kurt T; Silverstein, Barbara; Garg, Arun; Rempel, David; Zeringue, Angelique; Evanoff, Bradley A

    2018-03-29

    There is growing use of a job exposure matrix (JEM) to provide exposure estimates in studies of work-related musculoskeletal disorders; few studies have examined the validity of such estimates or compared associations obtained with a JEM with those obtained using other exposure measures. This study estimated upper extremity exposures using a JEM derived from a publicly available data set (Occupational Information Network, O*NET), and compared exposure-disease associations for incident carpal tunnel syndrome (CTS) with those obtained using observed physical exposure measures in a large prospective study. 2393 workers from several industries were followed for up to 2.8 years (5.5 person-years). Standard Occupational Classification (SOC) codes were assigned to the job at enrolment. SOC codes linked to physical exposures for forceful hand exertion and repetitive activities were extracted from O*NET. We used multivariable Cox proportional hazards regression models to describe exposure-disease associations for incident CTS for individually observed physical exposures and JEM exposures from O*NET. Both exposure methods found associations between incident CTS and exposures of force and repetition, with evidence of dose-response. Observed associations were similar across the two methods, with somewhat wider CIs for HRs calculated using the JEM method. Exposures estimated using a JEM provided similar exposure-disease associations for CTS when compared with associations obtained using the 'gold standard' method of individual observation. While JEMs have a number of limitations, in some studies they can provide useful exposure estimates in the absence of individual-level observed exposures. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2018. All rights reserved. No commercial use is permitted unless otherwise expressly granted.

  17. The Independent Evolution Method Is Not a Viable Phylogenetic Comparative Method

    PubMed Central

    2015-01-01

    Phylogenetic comparative methods (PCMs) use data on species traits and phylogenetic relationships to shed light on evolutionary questions. Recently, Smaers and Vinicius suggested a new PCM, Independent Evolution (IE), which purportedly employs a novel model of evolution based on Felsenstein’s Adaptive Peak Model. The authors found that IE improves upon previous PCMs by producing more accurate estimates of ancestral states, as well as separate estimates of evolutionary rates for each branch of a phylogenetic tree. Here, we document substantial theoretical and computational issues with IE. When data are simulated under a simple Brownian motion model of evolution, IE produces severely biased estimates of ancestral states and changes along individual branches. We show that these branch-specific changes are essentially ancestor-descendant or “directional” contrasts, and draw parallels between IE and previous PCMs such as “minimum evolution”. Additionally, while comparisons of branch-specific changes between variables have been interpreted as reflecting the relative strength of selection on those traits, we demonstrate through simulations that regressing IE estimated branch-specific changes against one another gives a biased estimate of the scaling relationship between these variables, and provides no advantages or insights beyond established PCMs such as phylogenetically independent contrasts. In light of our findings, we discuss the results of previous papers that employed IE. We conclude that Independent Evolution is not a viable PCM, and should not be used in comparative analyses. PMID:26683838

  18. Are dialysis adequacy indices independent of solute generation rate?

    PubMed

    Waniewski, Jacek; Debowska, Malgorzata; Lindholm, Bengt

    2014-01-01

    KT/V is by definition independent of solute generation rate. Alternative dialysis adequacy indices (DAIs) such as equivalent renal clearance (EKR), standard KT/V (stdKT/V), and solute removal index (SRI) are estimated as the ratio of solute mass removed to an average solute mass in the body or solute concentration in blood; both the numerator and the denominator in these formulas depend on the solute generation rate. Our objective was to investigate whether and under which conditions the alternative DAIs are independent of solute generation rate. Using general compartment modeling, we show that for the metabolically stable patient (in whom the solute generated during the dialysis cycle, typically 1 week, is equal to the solute removed from the body), DAIs estimated for the dialysis cycle are in general independent of the average solute generation rate (although they may depend on the pattern of oscillations in the generation rate). However, the alternative adequacy parameters (such as EKR, stdKT/V, and SRI) may depend on solute generation rate for metabolically unstable patients.
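The ratio structure described above (solute mass removed over an average concentration) can be put into a short numerical sketch. These are simplified teaching formulas under assumed units, not the exact clinical definitions of EKR or stdKT/V, and all names are hypothetical.

```python
def alternative_indices(mass_removed_mg, avg_conc_mg_per_ml, v_ml, cycle_min):
    """Simplified 'alternative DAI' arithmetic: a clearance-like index is the
    removal rate divided by an average blood concentration, and a Kt/V-like
    index rescales it by cycle length and distribution volume v_ml."""
    ekr_ml_min = (mass_removed_mg / cycle_min) / avg_conc_mg_per_ml  # ml/min
    ktv_like = ekr_ml_min * cycle_min / v_ml                         # dimensionless
    return ekr_ml_min, ktv_like
```

For a one-week cycle (10080 min), 10080 mg removed at a time-averaged concentration of 1 mg/ml, and a 42 l distribution volume, this gives a clearance of 1 ml/min and a weekly Kt/V-like value of 0.24; because the removed mass scales with the generation rate, so does the numerator, which is the dependence the abstract investigates.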

  19. Estimation and enhancement of real-time software reliability through mutation analysis

    NASA Technical Reports Server (NTRS)

    Geist, Robert; Offutt, A. J.; Harris, Frederick C., Jr.

    1992-01-01

    A simulation-based technique for obtaining numerical estimates of the reliability of N-version, real-time software is presented. An extended stochastic Petri net is employed to represent the synchronization structure of N versions of the software, where dependencies among versions are modeled through correlated sampling of module execution times. Test results utilizing specifications for NASA's planetary lander control software indicate that mutation-based testing could hold greater potential for enhancing reliability than the desirable but perhaps unachievable goal of independence among N versions.

  20. CULTURE-INDEPENDENT MOLECULAR METHODS FOR FECAL SOURCE IDENTIFICATION

    EPA Science Inventory

    Fecal contamination is widespread in the waterways of the United States. Both to correct the problem, and to estimate public health risk, it is necessary to identify the source of the contamination. Several culture-independent molecular methods for fecal source identification hav...

  1. Estimation of rank correlation for clustered data.

    PubMed

    Rosner, Bernard; Glynn, Robert J

    2017-06-30

    It is well known that the sample correlation coefficient (R_xy) is the maximum likelihood estimator of the Pearson correlation (ρ_xy) for independent and identically distributed (i.i.d.) bivariate normal data. However, this is not true for ophthalmologic data where X (e.g., visual acuity) and Y (e.g., visual field) are available for each eye and there is positive intraclass correlation for both X and Y in fellow eyes. In this paper, we provide a regression-based approach for obtaining the maximum likelihood estimator of ρ_xy for clustered data, which can be implemented using standard mixed effects model software. This method is also extended to allow for estimation of partial correlation by controlling both X and Y for a vector U of other covariates. In addition, these methods can be extended to allow for estimation of rank correlation for clustered data by (i) converting ranks of both X and Y to the probit scale, (ii) estimating the Pearson correlation between probit scores for X and Y, and (iii) using the relationship between Pearson and rank correlation for bivariate normally distributed data. The validity of the methods in finite-sized samples is supported by simulation studies. Finally, two examples from ophthalmology and analgesic abuse are used to illustrate the methods. Copyright © 2017 John Wiley & Sons, Ltd.
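The three-step recipe in the abstract, ranks to probit scores, Pearson correlation of the scores, then the bivariate-normal mapping between Pearson and rank correlation, can be sketched for plain i.i.d. data (no clustering and no mixed models, which are the paper's actual contribution). The Blom-type plotting position is an assumed choice.

```python
from math import asin, pi, sqrt
from statistics import NormalDist

def rank_probit_correlation(x, y):
    """(i) ranks -> probit scores, (ii) Pearson correlation of the scores,
    (iii) bivariate-normal relation r_s = (6/pi) * asin(rho / 2)."""
    def ranks(v):  # 1-based ranks, assuming no ties
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0] * len(v)
        for rank, i in enumerate(order, start=1):
            r[i] = rank
        return r
    n = len(x)
    inv = NormalDist().inv_cdf
    zx = [inv((r - 0.375) / (n + 0.25)) for r in ranks(x)]  # Blom scores
    zy = [inv((r - 0.375) / (n + 0.25)) for r in ranks(y)]
    mx, my = sum(zx) / n, sum(zy) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(zx, zy))
    scale = sqrt(sum((a - mx) ** 2 for a in zx) * sum((b - my) ** 2 for b in zy))
    return (6 / pi) * asin(cov / scale / 2)
```

For perfectly monotone data the probit scores coincide, the Pearson correlation is 1, and the mapping returns a rank correlation of exactly 1, since (6/π)·asin(1/2) = 1.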

  2. Independent Production Cost Estimate: XM1 Tank Main Armament Evaluation

    DTIC Science & Technology

    1977-11-01

    V. REFERENCES: 1. Ammunition Cost Research Study, Gerald W. Kalal and Patrick Gannon, Jun 76, AD-A-029330. 2. Ammunition Cost Research: Medium Bore Cannon Ammunition, Annexes A-E, Patrick Gannon, Celestino George, Gerald Kalal, Kathleen Keleher, Paul Riedesel, Joseph Robinson, Sep 75, AD-A-016104. 3. Cost Estimating Relationships for Manufacturing Hardware Cost of Gun/Howitzer Cannons, Gerald W. Kalal, Aug 72, AD-75-7163. 4. ARRCOM

  3. Improved estimation of Mars ionosphere total electron content

    NASA Astrophysics Data System (ADS)

    Cartacci, M.; Sánchez-Cano, B.; Orosei, R.; Noschese, R.; Cicchetti, A.; Witasse, O.; Cantini, F.; Rossi, A. P.

    2018-01-01

    We describe an improved method to estimate the Total Electron Content (TEC) of the Mars ionosphere from the echoes recorded by the Mars Advanced Radar for Subsurface and Ionosphere Sounding (MARSIS) (Picardi et al., 2005; Orosei et al., 2015) onboard Mars Express in its subsurface sounding mode. In particular, we demonstrate that this method solves the issue of the former algorithm described in Cartacci et al. (2013), which overestimated TEC on the day side. The MARSIS signal is affected by a phase distortion introduced by the Mars ionosphere that produces a variation of the signal shape and a delay in its travel time. The new TEC estimation is achieved by correlating the parameters obtained through the correction of the aforementioned effects. In detail, the knowledge of the quadratic term of the phase distortion estimated by the Contrast Method (Cartacci et al., 2013), together with the linear term (i.e. the extra time delay) estimated through a radar signal simulator, allows the development of a new algorithm particularly well suited to estimating the TEC for solar zenith angles (SZA) lower than 95°. The new dayside algorithm has been validated against independent data from MARSIS in its Active Ionospheric Sounding (AIS) operational mode, against other previous algorithms based on MARSIS subsurface data, and against ionospheric modeling of the TEC.
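The connection between the extra time delay and TEC can be illustrated with the standard ionospheric group-delay relation Δt = 40.3·TEC/(c·f²). This is textbook dispersion physics, not the MARSIS Contrast Method itself (which also exploits the quadratic phase-distortion term), and the 4 MHz figure below is just an assumed sounding frequency.

```python
def tec_from_delay(extra_delay_s, freq_hz):
    """Invert the standard ionospheric group-delay relation
    dt = 40.3 * TEC / (c * f^2), SI units; returns TEC in electrons/m^2."""
    c = 299792458.0  # speed of light, m/s
    return extra_delay_s * c * freq_hz ** 2 / 40.3
```

A 1 µs extra delay at 4 MHz corresponds to a TEC of about 1.19e14 electrons/m², illustrating why the linear (delay) term is such a direct handle on the column density.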

  4. Comparison of Sun-Induced Chlorophyll Fluorescence Estimates Obtained from Four Portable Field Spectroradiometers

    NASA Technical Reports Server (NTRS)

    Julitta, Tommaso; Corp, Lawrence A.; Rossini, Micol; Burkart, Andreas; Cogliati, Sergio; Davies, Neville; Hom, Milton; Mac Arthur, Alasdair; Middleton, Elizabeth M.; Rascher, Uwe

    2016-01-01

    Remote Sensing of Sun-Induced Chlorophyll Fluorescence (SIF) is a research field of growing interest because it offers the potential to quantify actual photosynthesis and to monitor plant status. New satellite missions, such as the European Space Agency's Earth Explorer 8 FLuorescence EXplorer (FLEX) mission (scheduled to launch in 2022 and aiming at SIF mapping) and the National Aeronautics and Space Administration's (NASA) Orbiting Carbon Observatory-2 (OCO-2) sampling mission launched in July 2014, provide the capability to estimate SIF from space. The detection of the SIF signal from airborne and satellite platforms is difficult, and reliable ground-level data are needed for calibration/validation. Several commercially available spectroradiometers are currently used to retrieve SIF in the field. This study presents a comparison exercise for evaluating the capability of four spectroradiometers to retrieve SIF. The results show that accurate far-red SIF estimation can be achieved using spectroradiometers with an ultrafine resolution (less than 1 nm), while red SIF estimation requires even higher spectral resolution (less than 0.5 nm). Moreover, it is shown that the Signal to Noise Ratio (SNR) plays a significant role in the precision of the far-red SIF measurements.

  5. An independent Cepheid distance scale: Current status

    NASA Technical Reports Server (NTRS)

    Barnes, T. G., III

    1980-01-01

    An independent distance scale for Cepheid variables is discussed. The apparent magnitude and the visual surface brightness, inferred from an appropriate color index, are used to determine the angular diameter variation of the Cepheid. When combined with the linear displacement curve obtained from the integrated radial velocity curve, the distance and linear radius are determined. The attractiveness of the method is its complete independence of all other stellar distance scales, even though a number of practical difficulties currently exist in implementing the technique.
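The geometry behind this method is compact enough to state in code: the integrated radial-velocity curve gives the linear radius change, the surface brightness gives the angular-diameter change, and their ratio gives the distance. This is a schematic sketch that ignores the projection factor and the practical difficulties the abstract mentions; the numbers in the test are purely illustrative.

```python
PARSEC_M = 3.0857e16  # meters per parsec

def cepheid_distance_m(delta_radius_m, delta_angular_diameter_rad):
    """Surface-brightness (Baade-Wesselink-type) distance: the same physical
    radius change Delta R subtends an angular-diameter change Delta theta,
    so d = 2 * Delta R / Delta theta (the factor 2 converts radius to diameter)."""
    return 2.0 * delta_radius_m / delta_angular_diameter_rad

def to_parsec(d_m):
    return d_m / PARSEC_M
```

Because both Δθ (from photometry and the surface-brightness color relation) and ΔR (from spectroscopy) are measured for the same star, no other stellar distance scale enters, which is the independence the abstract emphasizes.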

  6. Through-the-Wall Localization of a Moving Target by Two Independent Ultra Wideband (UWB) Radar Systems

    PubMed Central

    Kocur, Dušan; Švecová, Mária; Rovňáková, Jana

    2013-01-01

    In the case of through-the-wall localization of moving targets by ultra wideband (UWB) radars, there are applications in which handheld sensors equipped only with one transmitting and two receiving antennas are applied. Sometimes, the radar using such a small antenna array is not able to localize the target with the required accuracy. With a view to improving through-the-wall target localization, cooperative positioning based on a fusion of data retrieved from two independent radar systems can be used. In this paper, the novel method of cooperative localization referred to as joining intersections of the ellipses is introduced. This method is based on a geometrical interpretation of target localization where the target position is estimated using a properly created cluster of the ellipse intersections representing potential positions of the target. The performance of the proposed method is compared with the direct calculation method and two alternative methods of cooperative localization using data obtained by measurements with the M-sequence UWB radars. The direct calculation method is applied for the target localization by particular radar systems. As alternative methods of cooperative localization, the arithmetic average of the target coordinates estimated by two single independent UWB radars and the Taylor series method are considered. PMID:24021968

  7. Through-the-wall localization of a moving target by two independent ultra wideband (UWB) radar systems.

    PubMed

    Kocur, Dušan; Svecová, Mária; Rovňáková, Jana

    2013-09-09

    In the case of through-the-wall localization of moving targets by ultra wideband (UWB) radars, there are applications in which handheld sensors equipped only with one transmitting and two receiving antennas are applied. Sometimes, the radar using such a small antenna array is not able to localize the target with the required accuracy. With a view to improving through-the-wall target localization, cooperative positioning based on a fusion of data retrieved from two independent radar systems can be used. In this paper, the novel method of cooperative localization referred to as joining intersections of the ellipses is introduced. This method is based on a geometrical interpretation of target localization where the target position is estimated using a properly created cluster of the ellipse intersections representing potential positions of the target. The performance of the proposed method is compared with the direct calculation method and two alternative methods of cooperative localization using data obtained by measurements with the M-sequence UWB radars. The direct calculation method is applied for the target localization by particular radar systems. As alternative methods of cooperative localization, the arithmetic average of the target coordinates estimated by two single independent UWB radars and the Taylor series method are considered.

  8. Body Parts Dependent Joint Regressors for Human Pose Estimation in Still Images.

    PubMed

    Dantone, Matthias; Gall, Juergen; Leistner, Christian; Van Gool, Luc

    2014-11-01

    In this work, we address the problem of estimating 2d human pose from still images. Articulated body pose estimation is challenging due to the large variation in body poses and appearances of the different body parts. Recent methods that rely on the pictorial structure framework have shown to be very successful in solving this task. They model the body part appearances using discriminatively trained, independent part templates and the spatial relations of the body parts using a tree model. Within such a framework, we address the problem of obtaining better part templates which are able to handle a very high variation in appearance. To this end, we introduce parts dependent body joint regressors which are random forests that operate over two layers. While the first layer acts as an independent body part classifier, the second layer takes the estimated class distributions of the first one into account and is thereby able to predict joint locations by modeling the interdependence and co-occurrence of the parts. This helps to overcome typical ambiguities of tree structures, such as self-similarities of legs and arms. In addition, we introduce a novel data set termed FashionPose that contains over 7,000 images with a challenging variation of body part appearances due to a large variation of dressing styles. In the experiments, we demonstrate that the proposed parts dependent joint regressors outperform independent classifiers or regressors. The method also performs better than or comparably to the state of the art in terms of accuracy, while running at a couple of frames per second.

  9. Application of copulas to improve covariance estimation for partial least squares.

    PubMed

    D'Angelo, Gina M; Weissfeld, Lisa A

    2013-02-20

    Dimension reduction techniques, such as partial least squares, are useful for computing summary measures and examining relationships in complex settings. Partial least squares requires an estimate of the covariance matrix as a first step in the analysis, making this estimate critical to the results. In addition, the covariance matrix also forms the basis for other techniques in multivariate analysis, such as principal component analysis and independent component analysis. This paper has been motivated by an example from an imaging study in Alzheimer's disease where there is complete separation between Alzheimer's and control subjects for one of the imaging modalities. This separation occurs in one block of variables and does not occur with the second block of variables, resulting in inaccurate estimates of the covariance. We propose the use of a copula to obtain estimates of the covariance in this setting, where one set of variables comes from a mixture distribution. Simulation studies show that the proposed estimator is an improvement over the standard estimators of covariance. We illustrate the methods using the motivating example from a study of Alzheimer's disease. Copyright © 2012 John Wiley & Sons, Ltd.

  10. Realizing the measure-device-independent quantum-key-distribution with passive heralded-single photon sources

    PubMed Central

    Wang, Qin; Zhou, Xing-Yu; Guo, Guang-Can

    2016-01-01

    In this paper, we put forward a new approach towards realizing measurement-device-independent quantum key distribution with passive heralded single-photon sources. In this approach, both Alice and Bob prepare the parametric down-conversion source, where the heralding photons are labeled according to different types of clicks from the local detectors, and the heralded ones can correspondingly be marked with different tags at the receiver’s side. Then one can obtain four sets of data through using only one-intensity of pump light by observing different kinds of clicks of local detectors. By employing the newest formulae to do parameter estimation, we could achieve very precise prediction for the two-single-photon pulse contribution. Furthermore, by carrying out corresponding numerical simulations, we compare the new method with other practical schemes of measurement-device-independent quantum key distribution. We demonstrate that our new proposed passive scheme can exhibit remarkable improvement over the conventional three-intensity decoy-state measurement-device-independent quantum key distribution with either heralded single-photon sources or weak coherent sources. Besides, it does not need intensity modulation and can thus diminish source-error defects existing in several other active decoy-state methods. Therefore, if taking intensity modulating errors into account, our new method will show even more brilliant performance. PMID:27759085

  11. National Practice Patterns of Obtaining Informed Consent for Stroke Thrombolysis.

    PubMed

    Mendelson, Scott J; Courtney, D Mark; Gordon, Elisa J; Thomas, Leena F; Holl, Jane L; Prabhakaran, Shyam

    2018-03-01

    No standard approach to obtaining informed consent for stroke thrombolysis with tPA (tissue-type plasminogen activator) currently exists. We aimed to assess current nationwide practice patterns of obtaining informed consent for tPA. An online survey was developed and distributed by e-mail to clinicians involved in acute stroke care. Multivariable logistic regression analyses were performed to determine independent factors contributing to always obtaining informed consent for tPA. Among 268 respondents, 36.7% reported always obtaining informed consent and 51.8% reported the informed consent process caused treatment delays. Being an emergency medicine physician (odds ratio, 5.8; 95% confidence interval, 2.9-11.5) and practicing at a nonacademic medical center (odds ratio, 2.1; 95% confidence interval, 1.0-4.3) were independently associated with always requiring informed consent. The most commonly cited cause of delay was waiting for a patient's family to reach consensus about treatment. Most clinicians always or often require informed consent for stroke thrombolysis. Future research should focus on standardizing content and delivery of tPA information to reduce delays. © 2018 American Heart Association, Inc.

  12. A feasibility study on estimation of tissue mixture contributions in 3D arterial spin labeling sequence

    NASA Astrophysics Data System (ADS)

    Liu, Yang; Pu, Huangsheng; Zhang, Xi; Li, Baojuan; Liang, Zhengrong; Lu, Hongbing

    2017-03-01

    Arterial spin labeling (ASL) provides a noninvasive measurement of cerebral blood flow (CBF). Due to its relatively low spatial resolution, the accuracy of CBF measurement is affected by the partial volume (PV) effect. To obtain accurate CBF estimation, the contribution of each tissue type in the mixture is desirable. In current ASL studies, this is generally obtained by registering the ASL data to a structural image. This approach yields the probability of each tissue type inside each voxel, but it also introduces errors, including registration-algorithm error and errors inherent in the acquisition of the ASL and structural images. Therefore, estimation of the mixture percentages directly from ASL data is greatly needed. Under the assumptions that the ASL signal follows a Gaussian distribution and that each tissue type is independent, a maximum a posteriori expectation-maximization (MAP-EM) approach was formulated to estimate the contribution of each tissue type to the observed perfusion signal at each voxel. Considering the sensitivity of MAP-EM to initialization, an approximately accurate initialization was obtained using the 3D fuzzy c-means method. Our preliminary results demonstrated that the GM and WM pattern across the perfusion image can be sufficiently visualized by the voxel-wise tissue mixtures, which may be promising for the diagnosis of various brain diseases.
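The estimation step can be illustrated with plain EM for a 1-D two-component Gaussian mixture. This is a deliberately stripped-down stand-in for the paper's MAP-EM: no spatial prior, no fuzzy c-means initialization, and hypothetical function names; it shows only the E-step/M-step mechanics of attributing each observation to tissue classes.

```python
import math

def em_two_gaussians(data, mu=(0.0, 5.0), iters=50):
    """EM for a two-component 1-D Gaussian mixture.
    Returns (means, standard deviations, mixing weights)."""
    m = list(mu); s = [1.0, 1.0]; w = [0.5, 0.5]
    for _ in range(iters):
        resp = []  # E-step: responsibility of each component for each point
        for x in data:
            p = [w[k] * math.exp(-0.5 * ((x - m[k]) / s[k]) ** 2) / s[k]
                 for k in range(2)]
            t = p[0] + p[1]
            resp.append([0.5, 0.5] if t == 0.0 else [p[0] / t, p[1] / t])
        for k in range(2):  # M-step: weighted mean, sd, and weight updates
            nk = sum(r[k] for r in resp)
            m[k] = sum(r[k] * x for r, x in zip(resp, data)) / nk
            s[k] = max(1e-6, math.sqrt(
                sum(r[k] * (x - m[k]) ** 2 for r, x in zip(resp, data)) / nk))
            w[k] = nk / len(data)
    return m, s, w
```

With two well-separated groups of samples, the component means converge to the group means and the weights to the group proportions, which is the per-voxel mixture attribution the abstract describes in simplified form.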

  13. The impact of calibration and clock-model choice on molecular estimates of divergence times.

    PubMed

    Duchêne, Sebastián; Lanfear, Robert; Ho, Simon Y W

    2014-09-01

    Phylogenetic estimates of evolutionary timescales can be obtained from nucleotide sequence data using the molecular clock. These estimates are important for our understanding of evolutionary processes across all taxonomic levels. The molecular clock needs to be calibrated with an independent source of information, such as fossil evidence, to allow absolute ages to be inferred. Calibration typically involves fixing or constraining the age of at least one node in the phylogeny, enabling the ages of the remaining nodes to be estimated. We conducted an extensive simulation study to investigate the effects of the position and number of calibrations on the resulting estimate of the timescale. Our analyses focused on Bayesian estimates obtained using relaxed molecular clocks. Our findings suggest that an effective strategy is to include multiple calibrations and to prefer those that are close to the root of the phylogeny. Under these conditions, we found that evolutionary timescales could be estimated accurately even when the relaxed-clock model was misspecified and when the sequence data were relatively uninformative. We tested these findings in a case study of simian foamy virus, where we found that shallow calibrations caused the overall timescale to be underestimated by up to three orders of magnitude. Finally, we provide some recommendations for improving the practice of molecular-clock calibration. Copyright © 2014 Elsevier Inc. All rights reserved.
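The role of a calibration can be seen in a deliberately strict-clock toy: fixing one node's age converts relative node depths (substitutions per site) into absolute ages. The Bayesian relaxed-clock machinery studied in the paper is far richer; the function and its arguments are hypothetical names for illustration only.

```python
def node_ages_from_calibration(root_depth_subs, node_depths_subs, root_age):
    """Strict-clock toy: the calibrated root age fixes the rate,
    rate = root_depth / root_age, and every other node age follows as
    depth / rate. Shows why calibration placement propagates to all nodes."""
    rate = root_depth_subs / root_age  # substitutions per site per time unit
    return [d / rate for d in node_depths_subs]
```

If the calibration itself is wrong by some factor, every inferred age scales by the same factor, which is the mechanism behind the orders-of-magnitude underestimates the abstract reports for shallow calibrations.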

  14. Estimates of the organic aerosol volatility in a boreal forest using two independent methods

    NASA Astrophysics Data System (ADS)

    Hong, Juan; Äijälä, Mikko; Häme, Silja A. K.; Hao, Liqing; Duplissy, Jonathan; Heikkinen, Liine M.; Nie, Wei; Mikkilä, Jyri; Kulmala, Markku; Prisle, Nønne L.; Virtanen, Annele; Ehn, Mikael; Paasonen, Pauli; Worsnop, Douglas R.; Riipinen, Ilona; Petäjä, Tuukka; Kerminen, Veli-Matti

    2017-03-01

    The volatility distribution of secondary organic aerosols that formed and had undergone aging (i.e., the particle mass fractions of semi-volatile, low-volatility and extremely low volatility organic compounds in the particle phase) was characterized in a boreal forest environment of Hyytiälä, southern Finland. This was done by interpreting field measurements using a volatility tandem differential mobility analyzer (VTDMA) with a kinetic evaporation model. The field measurements were performed during April and May 2014. On average, 40 % of the organics in particles were semi-volatile, 34 % were low-volatility organics and 26 % were extremely low volatility organics. The model was, however, very sensitive to the vaporization enthalpies assumed for the organics (ΔHVAP). The best agreement between the observed and modeled temperature dependence of the evaporation was obtained when effective vaporization enthalpy values of 80 kJ mol⁻¹ were assumed. There are several potential reasons for the low effective enthalpy value, including molecular decomposition or dissociation that might occur in the particle phase upon heating, mixture effects and compound-dependent uncertainties in the mass accommodation coefficient. In addition to the VTDMA-based analysis, semi-volatile and low-volatility organic mass fractions were independently determined by applying positive matrix factorization (PMF) to high-resolution aerosol mass spectrometer (HR-AMS) data. The factor separation was based on the oxygenation levels of organics, specifically the relative abundance of mass ions at m/z 43 (f43) and m/z 44 (f44). The mass fractions of these two organic groups were compared against the VTDMA-based results. In general, the best agreement between the VTDMA results and the PMF-derived mass fractions of organics was obtained when ΔHVAP = 80 kJ mol⁻¹ was set for all organic groups in the model, with a linear correlation coefficient of around 0.4. However, this still indicates that only

  15. Optimum data weighting and error calibration for estimation of gravitational parameters

    NASA Technical Reports Server (NTRS)

    Lerch, Francis J.

    1989-01-01

    A new technique was developed for weighting data from satellite tracking systems in order to obtain an optimum least-squares solution and an error calibration for the solution parameters. Data sets from optical, electronic, and laser systems on 17 satellites in the Goddard Earth Model-T1 (GEM-T1) were employed toward application of this technique for gravity field parameters. GEM-T2 (31 satellites) was also recently computed as a direct application of the method and is summarized. The method adjusts the data weights so that subset solutions of the data agree with the complete solution to within their error estimates. With the adjusted weights the process provides an automatic calibration of the error estimates for the solution parameters. The data weights derived are generally much smaller than corresponding weights obtained from nominal values of observation accuracy or residuals. Independent tests show significant improvement for solutions with optimal weighting. The technique is general and may be applied to orbit parameters, station coordinates, or parameters other than the gravity model.
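A minimal instance of the optimal-weighting idea: combining independent estimates of one parameter with weights proportional to 1/σ² minimizes the variance of the combined result. This toy is far simpler than weighting whole tracking data sets in a gravity solution, but the principle (noisier data get smaller weight, and the weights imply a calibrated error for the result) is the same.

```python
def inverse_variance_combine(estimates, variances):
    """Minimum-variance combination of independent estimates of one quantity:
    weight each by 1/variance; the combined variance is 1/(sum of weights)."""
    weights = [1.0 / v for v in variances]
    total = sum(weights)
    mean = sum(w * e for w, e in zip(weights, estimates)) / total
    return mean, 1.0 / total  # combined estimate and its calibrated variance
```

With equal variances this reduces to the plain average; with unequal variances the combination is pulled toward the more precise estimate.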

  16. The Petersen-Lincoln estimator and its extension to estimate the size of a shared population.

    PubMed

    Chao, Anne; Pan, H-Y; Chiang, Shu-Chuan

    2008-12-01

    The Petersen-Lincoln estimator has been used to estimate the size of a population in a single mark release experiment. However, the estimator is not valid when the capture sample and recapture sample are not independent. We provide an intuitive interpretation for "independence" between samples based on 2 x 2 categorical data formed by capture/non-capture in each of the two samples. From the interpretation, we review a general measure of "dependence" and quantify the correlation bias of the Petersen-Lincoln estimator when two types of dependences (local list dependence and heterogeneity of capture probability) exist. An important implication in the census undercount problem is that instead of using a post enumeration sample to assess the undercount of a census, one should conduct a prior enumeration sample to avoid correlation bias. We extend the Petersen-Lincoln method to the case of two populations. This new estimator of the size of the shared population is proposed and its variance is derived. We discuss a special case where the correlation bias of the proposed estimator due to dependence between samples vanishes. The proposed method is applied to a study of the relapse rate of illicit drug use in Taiwan. ((c) 2008 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim).
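The classic single mark-release estimator discussed above is short enough to state directly, together with the well-known Chapman bias-reduced variant. Both assume the independence between capture and recapture samples whose failure the paper analyzes; the extension to a shared population is not sketched here.

```python
def petersen_lincoln(n1, n2, m):
    """Single mark-release population size estimates.
    n1: marked in first sample; n2: second sample size; m: marked recaptures.
    Returns (Petersen-Lincoln estimate n1*n2/m, Chapman bias-reduced variant)."""
    petersen = n1 * n2 / m
    chapman = (n1 + 1) * (n2 + 1) / (m + 1) - 1
    return petersen, chapman
```

When the two samples are positively dependent (e.g., trap-happy animals), m is inflated and both estimates are biased downward; this is the correlation bias the abstract quantifies.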

  17. A New Method for Estimating the Effective Population Size from Allele Frequency Changes

    PubMed Central

    Pollak, Edward

    1983-01-01

    A new procedure is proposed for estimating the effective population size, given that information is available on changes in frequencies of the alleles at one or more independently segregating loci and the population is observed at two or more separate times. Approximate expressions are obtained for the variances of the new statistic, as well as others, also based on allele frequency changes, that have been discussed in the literature. This analysis indicates that the new statistic will generally have a smaller variance than the others. Estimates of effective population sizes and of the standard errors of the estimates are computed for data on two fly populations that have been discussed in earlier papers. In both cases, there is evidence that the effective population size is very much smaller than the minimum census size of the population. PMID:17246147
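    The temporal method that this family of estimators builds on can be sketched in its simplest form: drift over t generations makes the standardized variance of allele-frequency change roughly t/(2*Ne). The sketch below deliberately omits the finite-sample corrections that distinguish Pollak's estimator from this naive version:

```python
# Simplified temporal-method sketch (assumption: pure drift, no
# correction for finite sample sizes, which the paper's estimator
# adds): the standardized variance F of allele-frequency change over
# t generations is about t / (2*Ne), so Ne is estimated as t / (2*F).

def ne_temporal(p_then, p_now, t):
    f_vals = []
    for p1, p2 in zip(p_then, p_now):
        pbar = 0.5 * (p1 + p2)
        f_vals.append((p1 - p2) ** 2 / (pbar * (1.0 - pbar)))
    f_mean = sum(f_vals) / len(f_vals)
    return t / (2.0 * f_mean)

# one locus drifting from 0.5 to 0.6 over 10 generations:
ne = ne_temporal([0.5], [0.6], 10)  # -> 123.75
```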

  18. Estimation of Geodetic and Geodynamical Parameters with VieVS

    NASA Technical Reports Server (NTRS)

    Spicakova, Hana; Bohm, Johannes; Bohm, Sigrid; Nilsson, Tobias; Pany, Andrea; Plank, Lucia; Teke, Kamil; Schuh, Harald

    2010-01-01

    Since 2008 the VLBI group at the Institute of Geodesy and Geophysics at TU Vienna has focused on the development of a new VLBI data analysis software called VieVS (Vienna VLBI Software). One part of the program, currently under development, is a unit for parameter estimation in so-called global solutions, where the connection of the single sessions is done by stacking at the normal equation level. We can determine time-independent geodynamical parameters such as Love and Shida numbers of the solid Earth tides. Apart from the estimation of the constant nominal values of Love and Shida numbers for the second degree of the tidal potential, it is possible to determine frequency-dependent values in the diurnal band together with the resonance frequency of Free Core Nutation. In this paper we show first results obtained from the 24-hour IVS R1 and R4 sessions.
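    Stacking at the normal equation level, as described above, amounts to summing each session's contributions N_i x = b_i for the shared global parameters before solving once. A minimal sketch for two sessions and two parameters (the session data are made up for illustration):

```python
# Minimal sketch of normal-equation stacking: each session i
# contributes a system N_i x = b_i in the shared global parameters;
# the stacked system sums contributions and is solved once.
# (2x2 solve written out by hand to stay dependency-free.)

def solve2(N, b):
    det = N[0][0] * N[1][1] - N[0][1] * N[1][0]
    return [(b[0] * N[1][1] - b[1] * N[0][1]) / det,
            (b[1] * N[0][0] - b[0] * N[1][0]) / det]

def stack(systems):
    N = [[0.0, 0.0], [0.0, 0.0]]
    b = [0.0, 0.0]
    for Ni, bi in systems:
        for r in range(2):
            b[r] += bi[r]
            for c in range(2):
                N[r][c] += Ni[r][c]
    return solve2(N, b)

x = stack([([[2.0, 0.0], [0.0, 1.0]], [2.0, 1.0]),
           ([[1.0, 0.0], [0.0, 1.0]], [1.0, 1.0])])  # -> [1.0, 1.0]
```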

  19. Estimation of infection prevalence and sensitivity in a stratified two-stage sampling design employing highly specific diagnostic tests when there is no gold standard.

    PubMed

    Miller, Ezer; Huppert, Amit; Novikov, Ilya; Warburg, Alon; Hailu, Asrat; Abbasi, Ibrahim; Freedman, Laurence S

    2015-11-10

    In this work, we describe a two-stage sampling design to estimate the infection prevalence in a population. In the first stage, an imperfect diagnostic test was performed on a random sample of the population. In the second stage, a different imperfect test was performed on a stratified random sample of the first sample. To estimate infection prevalence, we assumed conditional independence between the diagnostic tests and developed method-of-moments estimators based on expectations of the proportions of people with positive and negative results on both tests, which are functions of the tests' sensitivity, specificity, and the infection prevalence. A closed-form solution of the estimating equations was obtained assuming a specificity of 100% for both tests. We applied our method to estimate the infection prevalence of visceral leishmaniasis according to two quantitative polymerase chain reaction tests performed on blood samples taken from 4756 patients in northern Ethiopia. The sensitivities of the tests were also estimated, as well as the standard errors of all estimates, using a parametric bootstrap. We also examined the impact of departures from our assumptions of 100% specificity and conditional independence on the estimated prevalence.
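    Under the two stated assumptions (100% specificity for both tests, conditional independence given infection), the moment equations have the closed form P(T1+) = pi*s1, P(T2+) = pi*s2, P(both+) = pi*s1*s2, which invert directly. A sketch of that inversion (variable names are mine, not the paper's notation):

```python
# Closed-form method-of-moments sketch under the abstract's
# assumptions: both tests 100% specific and conditionally independent
# given infection, so P(T1+) = pi*s1, P(T2+) = pi*s2 and
# P(T1+ and T2+) = pi*s1*s2.

def prevalence_two_tests(p1, p2, p12):
    pi = p1 * p2 / p12      # prevalence
    s1 = p12 / p2           # sensitivity of test 1
    s2 = p12 / p1           # sensitivity of test 2
    return pi, s1, s2

# pi = 0.2, s1 = 0.9, s2 = 0.8 gives p1 = 0.18, p2 = 0.16, p12 = 0.144;
# inverting recovers the parameters:
pi, s1, s2 = prevalence_two_tests(0.18, 0.16, 0.144)
```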

  20. Estimating the cost of unclaimed electronic prescriptions at an independent pharmacy.

    PubMed

    Doucette, William R; Connolly, Connie; Al-Jumaili, Ali Azeez

    2016-01-01

    The increasing rate of e-prescribing is associated with a significant number of unclaimed prescriptions. The costs of unclaimed e-prescriptions could create an unwanted burden on community pharmacy practices. The objective of this study was to calculate the rate and costs of filled but unclaimed e-prescriptions at an independent pharmacy. This study was performed at a rural independent pharmacy in a Midwestern state. The rate and costs of the unclaimed e-prescriptions were determined by collecting information about all unclaimed e-prescriptions for a 6-month period from August 2013 to January 2014. The costs of unclaimed prescriptions included those expenses incurred to prepare the prescription, contact the patient, and return the unclaimed prescription to inventory. Two sensitivity analyses were conducted. The total cost of 147 unclaimed e-prescriptions equaled $3,677.70 for the study period. Thus, the monthly cost of unclaimed e-prescriptions was $612.92 and the average cost of each unclaimed prescription was $25.02. The sensitivity analyses showed that using a technician to perform prescription return tasks reduced average costs to $19.33 and that using a state Medicaid cost of dispensing resulted in average costs of $18.54 per prescription. The rate of unclaimed e-prescriptions was 0.82%. The percentage of unclaimed e-prescriptions in this pharmacy was less than 1%. In addition to increased cost, unclaimed e-prescriptions add inefficiency to the work flow of the pharmacy staff, which can limit the time that they are available for performing revenue-generating activities. Adjustments to work flow and insurer policies could help to reduce the burden of unclaimed e-prescriptions.
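    The reported averages follow from simple division of the 6-month total. Dividing reproduces the $25.02 per-prescription figure; the monthly figure comes out at $612.95 rather than the reported $612.92, presumably a rounding difference in the original:

```python
# Checking the abstract's arithmetic: a 6-month total split into a
# monthly figure and a per-prescription average.

total_cost = 3677.70
n_unclaimed = 147
months = 6

monthly = total_cost / months      # ~612.95 (abstract reports 612.92)
per_rx = total_cost / n_unclaimed  # ~25.02, matching the abstract
```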

  1. Comment on atomic independent-particle models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Doda, D.D.; Garvey, R.H.; Green, A.E.S.

    1975-08-01

    The Hartree-Fock-Slater (HFS) independent-particle model in the form developed by Herman and Skillman (HS) and the Green, Sellin, and Zachor (GSZ) analytic independent-particle model are being used for many types of applications of atomic theory to avoid cumbersome, albeit more rigorous, many-body calculations. The single-electron eigenvalues obtained with these models are examined, and it is found that the GSZ model is capable of yielding energy eigenvalues for valence electrons which are substantially closer to experimental values than are the results of HS-HFS calculations. With the aid of an analytic representation of the equivalent HS-HFS screening function, the difficulty with this model is identified as a weakness of the potential in the neighborhood of the valence shell. Accurate representations of valence states are important in most atomic applications of the independent-particle model. (auth)
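    The GSZ analytic potential referred to above has, to my understanding, the form V(r) = -(2/r)[(Z-1)Ω(r) + 1] in Rydberg units, with the screening function Ω(r) = 1/(H(e^{r/d}-1) + 1); H and d are fitted, element-dependent parameters. A sketch with placeholder parameter values (H = d = 1 is not a fit for any element):

```python
# Sketch of the GSZ analytic independent-particle potential,
# V(r) = -(2/r) * [(Z - 1)*Omega(r) + 1] in Rydberg units, with
# screening function Omega(r) = 1 / (H*(exp(r/d) - 1) + 1).
# H and d below are placeholders, not fitted values.
import math

def gsz_potential(r, Z, H, d):
    omega = 1.0 / (H * (math.exp(r / d) - 1.0) + 1.0)
    return -(2.0 / r) * ((Z - 1.0) * omega + 1.0)

# limits: near the nucleus the full charge -2Z/r is seen; far away the
# potential screens down to -2/r (a single net charge).
```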

  2. Multiparameter Estimation in Networked Quantum Sensors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Proctor, Timothy J.; Knott, Paul A.; Dunningham, Jacob A.

    We introduce a general model for a network of quantum sensors, and we use this model to consider the question: When can entanglement between the sensors, and/or global measurements, enhance the precision with which the network can measure a set of unknown parameters? We rigorously answer this question by presenting precise theorems proving that for a broad class of problems there is, at most, a very limited intrinsic advantage to using entangled states or global measurements. Moreover, for many estimation problems separable states and local measurements are optimal, and can achieve the ultimate quantum limit on the estimation uncertainty. This immediately implies that there are broad conditions under which simultaneous estimation of multiple parameters cannot outperform individual, independent estimations. Our results apply to any situation in which spatially localized sensors are unitarily encoded with independent parameters, such as when estimating multiple linear or non-linear optical phase shifts in quantum imaging, or when mapping out the spatial profile of an unknown magnetic field. We conclude by showing that entangling the sensors can enhance the estimation precision when the parameters of interest are global properties of the entire network.

  3. Multiparameter Estimation in Networked Quantum Sensors

    NASA Astrophysics Data System (ADS)

    Proctor, Timothy J.; Knott, Paul A.; Dunningham, Jacob A.

    2018-02-01

    We introduce a general model for a network of quantum sensors, and we use this model to consider the following question: When can entanglement between the sensors, and/or global measurements, enhance the precision with which the network can measure a set of unknown parameters? We rigorously answer this question by presenting precise theorems proving that for a broad class of problems there is, at most, a very limited intrinsic advantage to using entangled states or global measurements. Moreover, for many estimation problems separable states and local measurements are optimal, and can achieve the ultimate quantum limit on the estimation uncertainty. This immediately implies that there are broad conditions under which simultaneous estimation of multiple parameters cannot outperform individual, independent estimations. Our results apply to any situation in which spatially localized sensors are unitarily encoded with independent parameters, such as when estimating multiple linear or nonlinear optical phase shifts in quantum imaging, or when mapping out the spatial profile of an unknown magnetic field. We conclude by showing that entangling the sensors can enhance the estimation precision when the parameters of interest are global properties of the entire network.

  4. Multiparameter Estimation in Networked Quantum Sensors

    DOE PAGES

    Proctor, Timothy J.; Knott, Paul A.; Dunningham, Jacob A.

    2018-02-21

    We introduce a general model for a network of quantum sensors, and we use this model to consider the question: When can entanglement between the sensors, and/or global measurements, enhance the precision with which the network can measure a set of unknown parameters? We rigorously answer this question by presenting precise theorems proving that for a broad class of problems there is, at most, a very limited intrinsic advantage to using entangled states or global measurements. Moreover, for many estimation problems separable states and local measurements are optimal, and can achieve the ultimate quantum limit on the estimation uncertainty. This immediately implies that there are broad conditions under which simultaneous estimation of multiple parameters cannot outperform individual, independent estimations. Our results apply to any situation in which spatially localized sensors are unitarily encoded with independent parameters, such as when estimating multiple linear or non-linear optical phase shifts in quantum imaging, or when mapping out the spatial profile of an unknown magnetic field. We conclude by showing that entangling the sensors can enhance the estimation precision when the parameters of interest are global properties of the entire network.

  5. Demographic estimation methods for plants with unobservable life-states

    USGS Publications Warehouse

    Kery, M.; Gregg, K.B.; Schaub, M.

    2005-01-01

    Demographic estimation of vital parameters in plants with an unobservable dormant state is complicated, because time of death is not known. Conventional methods assume that death occurs at a particular time after a plant has last been seen aboveground, but the consequences of assuming a particular duration of dormancy have never been tested. Capture-recapture methods do not make assumptions about time of death; however, problems with parameter estimability have not yet been resolved. To date, a critical comparative assessment of these methods is lacking. We analysed data from a 10 year study of Cleistes bifaria, a terrestrial orchid with frequent dormancy, and compared demographic estimates obtained by five varieties of the conventional methods and two capture-recapture methods. All conventional methods produced spurious unity survival estimates for some years or for some states, and estimates of demographic rates sensitive to the time of death assumption. In contrast, capture-recapture methods are more parsimonious in terms of assumptions, are based on well-founded theory, and did not produce spurious estimates. In Cleistes, dormant episodes lasted for 1-4 years (mean 1.4, SD 0.74). The capture-recapture models estimated ramet survival rate at 0.86 (SE 0.01), ranging from 0.77-0.94 (SEs ≤ 0.1) in any one year. The average fraction dormant was estimated at 30% (SE 1.5), ranging from 16-47% (SEs ≤ 5.1) in any one year. Multistate capture-recapture models showed that survival rates were positively related to precipitation in the current year, but transition rates were more strongly related to precipitation in the previous than in the current year, with more ramets going dormant following dry years. Not all capture-recapture models of interest have estimable parameters; for instance, without excavating plants in years when they do not appear aboveground, it is not possible to obtain independent time-specific survival estimates for dormant plants. We introduce rigorous

  6. Are independent probes truly independent?

    PubMed

    Camp, Gino; Pecher, Diane; Schmidt, Henk G; Zeelenberg, René

    2009-07-01

    The independent cue technique has been developed to test traditional interference theories against inhibition theories of forgetting. In the present study, the authors tested the critical criterion for the independence of independent cues: Studied cues not presented during test (and unrelated to test cues) should not contribute to the retrieval process. Participants first studied a subset of cues (e.g., rope) that were subsequently studied together with a target in a 2nd study phase (e.g., rope-sailing, sunflower-yellow). In the test phase, an extralist category cue (e.g., sports, color) was presented, and participants were instructed to recall an item from the study list that was a member of the category (e.g., sailing, yellow). The experiments showed that previous study of the paired-associate word (e.g., rope) enhanced category cued recall even though this word was not presented at test. This experimental demonstration of covert cuing has important implications for the effectiveness of the independent cue technique.

  7. Model-independent plot of dynamic PET data facilitates data interpretation and model selection.

    PubMed

    Munk, Ole Lajord

    2012-02-21

    When testing new PET radiotracers or new applications of existing tracers, the blood-tissue exchange and the metabolism need to be examined. However, conventional plots of measured time-activity curves from dynamic PET do not reveal the inherent kinetic information. A novel model-independent volume-influx plot (vi-plot) was developed and validated. The new vi-plot shows the time course of the instantaneous distribution volume and the instantaneous influx rate. The vi-plot visualises physiological information that facilitates model selection, and it reveals when a quasi-steady state is reached, which is a prerequisite for the use of the graphical analyses by Logan and Gjedde-Patlak. Both axes of the vi-plot have direct physiological interpretation, and the plot shows kinetic parameters in close agreement with estimates obtained by non-linear kinetic modelling. The vi-plot is equally useful for analyses of PET data based on a plasma input function or a reference region input function. The vi-plot is a model-independent and informative plot for data exploration that facilitates the selection of an appropriate method for data analysis.
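    As I read the abstract, the two plotted quantities can be approximated from sampled time-activity curves as a ratio and a normalized derivative: an instantaneous distribution volume v(t) = C_tissue(t)/C_plasma(t) and an instantaneous influx rate k(t) = (dC_tissue/dt)/C_plasma(t). The sketch below uses finite differences and my own variable names, not the paper's notation:

```python
# Hedged sketch of two vi-plot-style quantities from sampled curves:
# distribution volume as the tissue/plasma ratio and influx rate as
# the normalized finite-difference derivative of the tissue curve.

def vi_curves(t, c_tissue, c_plasma):
    v, k = [], []
    for i in range(1, len(t)):
        dt = t[i] - t[i - 1]
        dct = c_tissue[i] - c_tissue[i - 1]
        v.append(c_tissue[i] / c_plasma[i])
        k.append((dct / dt) / c_plasma[i])
    return v, k

# irreversible uptake at rate 0.3 with constant plasma input: the
# influx estimate is constant at 0.3 while the volume keeps growing.
v, k = vi_curves([0.0, 1.0, 2.0, 3.0],
                 [0.0, 0.3, 0.6, 0.9],
                 [1.0, 1.0, 1.0, 1.0])
```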

  8. A MODIS-based analysis of the Val d'Agri Oil Center (South of Italy) thermal emission: an independent gas flaring estimation strategy

    NASA Astrophysics Data System (ADS)

    Pergola, Nicola; Faruolo, Mariapia; Coviello, Irina; Filizzola, Carolina; Lacava, Teodosio; Tramutoli, Valerio

    2014-05-01

    Different kinds of atmospheric pollution affect human health and the environment at local and global scale. The petroleum industry represents one of the most important environmental pollution sources, accounting for about 18% of well-to-wheels greenhouse gas (GHG) emissions. The main pollution source is the flaring of gas, one of the most challenging energy and environmental problems facing the world today. The World Bank has estimated that 150 billion cubic meters of natural gas are being flared annually, equivalent to 30% of the European Union's gas consumption. Since 2002, satellite-based methodologies have shown their capability in providing independent and reliable estimation of gas flaring emissions, at both national and global scale. In this paper, for the first time, the potential of satellite data in estimating gas flaring volumes emitted from a single on-shore crude oil pre-treatment plant, i.e. the Ente Nazionale Idrocarburi (ENI) Val d'Agri Oil Center (COVA), located in the Basilicata Region (South of Italy), was assessed. Specifically, thirteen years of night-time Moderate Resolution Imaging Spectroradiometer (MODIS) data acquired in the medium and thermal infrared (MIR and TIR, respectively) bands were processed. The Robust Satellite Techniques (RST) approach was implemented for identifying anomalous values of the signal under investigation (i.e. the MIR-TIR difference), associated with the COVA flares' emergency discharges. Then, the Fire Radiative Power (FRP), computed for the thermal anomalies previously identified, was correlated with the emitted gas flaring volumes, available for the COVA in the period 2003-2009, defining a satellite-based regression model for estimating COVA gas flaring emitted volumes. The strategy used and preliminary results of this analysis are described in detail in this work.

  9. Comparison of Algorithm-based Estimates of Occupational Diesel Exhaust Exposure to Those of Multiple Independent Raters in a Population-based Case–Control Study

    PubMed Central

    Friesen, Melissa C.

    2013-01-01

    Objectives: Algorithm-based exposure assessments based on patterns in questionnaire responses and professional judgment can readily apply transparent exposure decision rules to thousands of jobs quickly. However, we need to better understand how algorithms compare to a one-by-one job review by an exposure assessor. We compared algorithm-based estimates of diesel exhaust exposure to those of three independent raters within the New England Bladder Cancer Study, a population-based case–control study, and identified conditions under which disparities occurred in the assessments of the algorithm and the raters. Methods: Occupational diesel exhaust exposure was assessed previously using an algorithm and a single rater for all 14 983 jobs reported by 2631 study participants during personal interviews conducted from 2001 to 2004. Two additional raters independently assessed a random subset of 324 jobs that were selected based on strata defined by the cross-tabulations of the algorithm and the first rater’s probability assessments for each job, oversampling their disagreements. The algorithm and each rater assessed the probability, intensity and frequency of occupational diesel exhaust exposure, as well as a confidence rating for each metric. Agreement among the raters, their aggregate rating (average of the three raters’ ratings) and the algorithm were evaluated using proportion of agreement, kappa and weighted kappa (κw). Agreement analyses on the subset used inverse probability weighting to extrapolate the subset to estimate agreement for all jobs. Classification and Regression Tree (CART) models were used to identify patterns in questionnaire responses that predicted disparities in exposure status (i.e., unexposed versus exposed) between the first rater and the algorithm-based estimates. Results: For the probability, intensity and frequency exposure metrics, moderate to moderately high agreement was observed among raters (κw = 0.50–0.76) and between the

  10. Estimating Aquifer Properties Using Sinusoidal Pumping Tests

    NASA Astrophysics Data System (ADS)

    Rasmussen, T. C.; Haborak, K. G.; Young, M. H.

    2001-12-01

    We develop the theoretical and applied framework for using sinusoidal pumping tests to estimate aquifer properties for confined, leaky, and partially penetrating conditions. The framework 1) derives analytical solutions for three boundary conditions suitable for many practical applications, 2) validates the analytical solutions against a finite element model, 3) establishes a protocol for conducting sinusoidal pumping tests, and 4) estimates aquifer hydraulic parameters based on the analytical solutions. The analytical solutions to sinusoidal stimuli in radial coordinates are derived for boundary value problems that are analogous to the Theis (1935) confined aquifer solution, the Hantush and Jacob (1955) leaky aquifer solution, and the Hantush (1964) partially penetrated confined aquifer solution. The analytical solutions compare favorably to a finite-element solution of a simulated flow domain, except in the region immediately adjacent to the pumping well where the implicit assumption of zero borehole radius is violated. The procedure is demonstrated in one unconfined and two confined aquifer units near the General Separations Area at the Savannah River Site, a federal nuclear facility located in South Carolina. Aquifer hydraulic parameters estimated using this framework provide independent confirmation of parameters obtained from conventional aquifer tests. The sinusoidal approach also resulted in the elimination of investigation-derived wastes.
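    A core data-processing step in a sinusoidal pumping test is recovering the amplitude and phase of the head response at the known pumping frequency; comparing these against the analytical solutions then yields the aquifer parameters. A sketch of that fitting step only (illustrative, not the paper's analytical solutions):

```python
# Recover amplitude and phase of a sinusoidal head response at a known
# angular frequency w by linear least squares on the model
# h(t) = A*cos(w*t) + B*sin(w*t), via the 2x2 normal equations.
import math

def fit_sinusoid(t, h, w):
    scc = sum(math.cos(w * ti) ** 2 for ti in t)
    sss = sum(math.sin(w * ti) ** 2 for ti in t)
    scs = sum(math.cos(w * ti) * math.sin(w * ti) for ti in t)
    shc = sum(hi * math.cos(w * ti) for ti, hi in zip(t, h))
    shs = sum(hi * math.sin(w * ti) for ti, hi in zip(t, h))
    det = scc * sss - scs ** 2
    A = (shc * sss - shs * scs) / det
    B = (shs * scc - shc * scs) / det
    return math.hypot(A, B), math.atan2(B, A)  # amplitude, phase lag

# synthetic head record: period 10, amplitude 2, phase lag 0.7 rad
w = 2 * math.pi / 10.0
t = [0.25 * i for i in range(80)]
h = [2.0 * math.cos(w * ti - 0.7) for ti in t]
amp, phase = fit_sinusoid(t, h, w)  # -> 2.0, 0.7
```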

  11. A law of order estimation and leading-order terms for a family of averaged quantities on a multibaker chain system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ishida, Hideshi, E-mail: ishida@me.es.osaka-u.ac.jp

    2014-06-15

    In this study, a family of local quantities defined on each partition and its averaging on a macroscopic small region, site, are defined on a multibaker chain system. On its averaged quantities, a law of order estimation in the bulk system is proved, making it possible to estimate the order of the quantities with respect to the representative partition scale parameter Δ. Moreover, the form of the leading-order terms of the averaged quantities is obtained, and the form enables us to have the macroscopic quantity in the continuum limit, as Δ → 0, and to confirm its partitioning independence. These deliverables fully explain the numerical results obtained by Ishida, consistent with irreversible thermodynamics.

  12. Methods for estimating the magnitude and frequency of peak streamflows for unregulated streams in Oklahoma

    USGS Publications Warehouse

    Lewis, Jason M.

    2010-01-01

    Peak-streamflow regression equations were determined for estimating flows with exceedance probabilities from 50 to 0.2 percent for the state of Oklahoma. These regression equations incorporate basin characteristics to estimate peak-streamflow magnitude and frequency throughout the state by use of a generalized least squares regression analysis. The most statistically significant independent variables required to estimate peak-streamflow magnitude and frequency for unregulated streams in Oklahoma are contributing drainage area, mean-annual precipitation, and main-channel slope. The regression equations are applicable for watershed basins with drainage areas less than 2,510 square miles that are not affected by regulation. The resulting regression equations had a standard model error ranging from 31 to 46 percent. Annual-maximum peak flows observed at 231 streamflow-gaging stations through water year 2008 were used for the regression analysis. Gage peak-streamflow estimates were used from previous work unless 2008 gaging-station data were available, in which case new peak-streamflow estimates were calculated. The U.S. Geological Survey StreamStats web application was used to obtain the independent variables required for the peak-streamflow regression equations. Limitations on the use of the regression equations and the reliability of regression estimates for natural unregulated streams are described. Log-Pearson Type III analysis information, basin and climate characteristics, and the peak-streamflow frequency estimates for the 231 gaging stations in and near Oklahoma are listed. Methodologies are presented to estimate peak streamflows at ungaged sites by using estimates from gaging stations on unregulated streams. For ungaged sites on urban streams and streams regulated by small floodwater retarding structures, an adjustment of the statewide regression equations for natural unregulated streams can be used to estimate peak-streamflow magnitude and frequency.
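    Regional peak-flow regression equations of this kind are typically log-linear in the basin characteristics. The sketch below shows only the shape of such an equation; every coefficient is a made-up placeholder, not a value from the report:

```python
# Shape of a log-linear peak-flow regression equation in the three
# basin characteristics the abstract names. All coefficients are
# placeholders for illustration, NOT the report's fitted values.
import math

def peak_flow(area_mi2, precip_in, slope_ft_per_mi,
              b0=2.0, b1=0.6, b2=0.8, b3=0.3):
    log_q = (b0 + b1 * math.log10(area_mi2)
                + b2 * math.log10(precip_in)
                + b3 * math.log10(slope_ft_per_mi))
    return 10.0 ** log_q  # peak streamflow, cubic feet per second

q = peak_flow(100.0, 40.0, 10.0)
```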

  13. Independent Correlates of Reported Gambling Problems amongst Indigenous Australians

    ERIC Educational Resources Information Center

    Stevens, Matthew; Young, Martin

    2010-01-01

    To identify independent correlates of reported gambling problems amongst the Indigenous population of Australia. A cross-sectional design was applied to a nationally representative sample of the Indigenous population. Estimates of reported gambling problems are presented by remoteness and jurisdiction. Multivariable logistic regression was used to…

  14. Implicit Learning of Viewpoint-Independent Spatial Layouts

    PubMed Central

    Tsuchiai, Taiga; Matsumiya, Kazumichi; Kuriki, Ichiro; Shioiri, Satoshi

    2012-01-01

    We usually perceive things in our surroundings as unchanged despite viewpoint changes caused by self-motion. The visual system therefore must have a function to process objects independently of viewpoint. In this study, we examined whether viewpoint-independent spatial layout can be obtained implicitly. For this purpose, we used a contextual cueing effect, a learning effect of spatial layout in visual search displays known to be an implicit effect. We investigated the transfer of the contextual cueing effect to images from a different viewpoint by using visual search displays of 3D objects. For images from a different viewpoint, the contextual cueing effect was maintained with self-motion but disappeared when the display changed without self-motion. This indicates that there is an implicit learning effect in environment-centered coordinates and suggests that the spatial representation of object layouts can be obtained and updated implicitly. We also showed that binocular disparity plays an important role in the layout representations. PMID:22740837

  15. Application of independent component analysis for speech-music separation using an efficient score function estimation

    NASA Astrophysics Data System (ADS)

    Pishravian, Arash; Aghabozorgi Sahaf, Masoud Reza

    2012-12-01

    In this paper, speech-music separation using blind source separation is discussed. The separating algorithm is based on mutual information minimization, where the natural gradient algorithm is used for the minimization. In order to do that, the score function must be estimated from samples of the observation signals (combinations of speech and music). The accuracy and speed of this estimation affect the quality of the separated signals and the processing time of the algorithm. The score function estimation in the presented algorithm is based on a Gaussian-mixture-based kernel density estimation method. Experimental results of the presented algorithm on speech-music separation, compared with a separating algorithm based on the minimum mean square error estimator, indicate better performance and less processing time.
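    The score function the algorithm needs is psi(x) = -f'(x)/f(x) for the (unknown) source density f. A plain Gaussian-kernel density estimate of it from samples can be sketched as follows; the paper's Gaussian-mixture refinement of this idea is not reproduced here:

```python
# Gaussian-kernel estimate of the score function psi(x) = -f'(x)/f(x)
# from samples: estimate f and f' with the same kernel sum and take
# the ratio (normalization constants cancel).
import math

def score_estimate(x, samples, h):
    f = fp = 0.0
    for xi in samples:
        u = (x - xi) / h
        k = math.exp(-0.5 * u * u)  # unnormalized Gaussian kernel
        f += k
        fp += -u / h * k            # derivative of the kernel in x
    return -fp / f

# with a single sample at 0 and h = 1 the estimate reduces to
# psi(x) = x, the score of a standard Gaussian centered on the sample:
val = score_estimate(0.5, [0.0], 1.0)  # -> 0.5
```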

  16. Estimation of 1RM for knee extension based on the maximal isometric muscle strength and body composition.

    PubMed

    Kanada, Yoshikiyo; Sakurai, Hiroaki; Sugiura, Yoshito; Arai, Tomoaki; Koyama, Soichiro; Tanabe, Shigeo

    2017-11-01

    [Purpose] To create a regression formula in order to estimate 1RM for knee extensors, based on the maximal isometric muscle strength measured using a hand-held dynamometer and data regarding the body composition. [Subjects and Methods] Measurement was performed in 21 healthy males in their twenties to thirties. Single regression analysis was performed, with measurement values representing 1RM and the maximal isometric muscle strength as dependent and independent variables, respectively. Furthermore, multiple regression analysis was performed, with data regarding the body composition incorporated as another independent variable, in addition to the maximal isometric muscle strength. [Results] Through single regression analysis with the maximal isometric muscle strength as an independent variable, the following regression formula was created: 1RM (kg)=0.714 + 0.783 × maximal isometric muscle strength (kgf). On multiple regression analysis, only the total muscle mass was extracted. [Conclusion] A highly accurate regression formula to estimate 1RM was created based on both the maximal isometric muscle strength and body composition. Using a hand-held dynamometer and body composition analyzer, it was possible to measure these items in a short time, and obtain clinically useful results.
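    The single-regression formula reported in the abstract is directly computable:

```python
# Evaluating the regression formula reported in the abstract:
# 1RM (kg) = 0.714 + 0.783 * maximal isometric strength (kgf).

def estimate_1rm(isometric_kgf):
    return 0.714 + 0.783 * isometric_kgf

rm = estimate_1rm(30.0)  # -> 24.204 kg
```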

  17. The influence of random element displacement on DOA estimates obtained with (Khatri-Rao-)root-MUSIC.

    PubMed

    Inghelbrecht, Veronique; Verhaevert, Jo; van Hecke, Tanja; Rogier, Hendrik

    2014-11-11

    Although a wide range of direction of arrival (DOA) estimation algorithms has been described for a diverse range of array configurations, no specific stochastic analysis framework has been established to assess the probability density function of the error on DOA estimates due to random errors in the array geometry. Therefore, we propose a stochastic collocation method that relies on a generalized polynomial chaos expansion to connect the statistical distribution of random position errors to the resulting distribution of the DOA estimates. We apply this technique to the conventional root-MUSIC and the Khatri-Rao-root-MUSIC methods. According to Monte-Carlo simulations, this novel approach yields a speedup by a factor of more than 100 in terms of CPU-time for a one-dimensional case and by a factor of 56 for a two-dimensional case.
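    The collocation idea behind the speedup is to evaluate the deterministic response at a few carefully chosen points of the random input instead of at thousands of Monte-Carlo draws. A toy sketch with a 3-point Gauss-Hermite rule (the paper uses a generalized polynomial chaos expansion; this shows the collocation principle only):

```python
# Toy stochastic-collocation sketch: propagate a Gaussian input error
# through a nonlinear response with a 3-point Gauss-Hermite rule.
import math

# probabilists' Gauss-Hermite nodes/weights for N(0, 1); the rule is
# exact for polynomials up to degree 5
NODES = [-math.sqrt(3.0), 0.0, math.sqrt(3.0)]
WEIGHTS = [1.0 / 6.0, 2.0 / 3.0, 1.0 / 6.0]

def collocation_mean(response, mu, sigma):
    return sum(w * response(mu + sigma * n)
               for n, w in zip(NODES, WEIGHTS))

# E[(mu + sigma*Z)^2] = mu^2 + sigma^2 exactly, recovered from only
# three response evaluations:
m = collocation_mean(lambda x: x * x, 1.0, 0.1)  # -> 1.01
```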

  18. Radiation-force-based estimation of acoustic attenuation using harmonic motion imaging (HMI) in phantoms and in vitro livers before and after HIFU ablation.

    PubMed

    Chen, Jiangang; Hou, Gary Y; Marquet, Fabrice; Han, Yang; Camarena, Francisco; Konofagou, Elisa

    2015-10-07

    Acoustic attenuation represents the energy loss of the propagating wave through biological tissues and plays a significant role in both therapeutic and diagnostic ultrasound applications. Estimation of acoustic attenuation remains challenging but critical for tissue characterization. In this study, an attenuation estimation approach was developed using the radiation-force-based method of harmonic motion imaging (HMI). 2D tissue displacement maps were acquired by moving the transducer in a raster-scan format. A linear regression model was applied on the logarithm of the HMI displacements at different depths in order to estimate the acoustic attenuation. Commercially available phantoms with known attenuations (n = 5) and in vitro canine livers (n = 3) were tested, as well as HIFU lesions in in vitro canine livers (n = 5). Results demonstrated that attenuations obtained from the phantoms showed a good correlation (R² = 0.976) with the independently obtained values reported by the manufacturer with an estimation error (compared to the values independently measured) varying within the range of 15-35%. The estimated attenuation in the in vitro canine livers was equal to 0.32 ± 0.03 dB cm(-1) MHz(-1), which is in good agreement with the existing literature. The attenuation in HIFU lesions was found to be higher (0.58 ± 0.06 dB cm(-1) MHz(-1)) than that in normal tissues, also in agreement with the results from previous publications. Future potential applications of the proposed method include estimation of attenuation in pathological tissues before and after thermal ablation.
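    The regression step can be sketched simply: if displacement decays as d(z) = d0*exp(-a*z) with depth, the slope of ln(d) versus depth estimates the attenuation coefficient a (Np/cm), which converts to dB/cm/MHz via 8.686 dB/Np and division by frequency. This is illustrative only and omits any geometry- or method-specific factors the paper may apply:

```python
# Fit the slope of ln(displacement) vs depth by ordinary least squares
# and convert the decay coefficient to dB/cm/MHz (8.686 dB per neper).
# Illustrative sketch, not the paper's full calibration.
import math

def attenuation_db_cm_mhz(depths_cm, displacements, freq_mhz):
    logs = [math.log(d) for d in displacements]
    n = len(depths_cm)
    zbar = sum(depths_cm) / n
    lbar = sum(logs) / n
    slope = (sum((z - zbar) * (l - lbar)
                 for z, l in zip(depths_cm, logs))
             / sum((z - zbar) ** 2 for z in depths_cm))
    return -slope * 8.686 / freq_mhz

# synthetic displacements decaying at a = 0.1 Np/cm, 4.5 MHz push:
z = [1.0, 2.0, 3.0, 4.0]
d = [math.exp(-0.1 * zi) for zi in z]
att = attenuation_db_cm_mhz(z, d, 4.5)  # -> 0.8686/4.5, about 0.193
```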

  20. Estimation of scattering object characteristics for image reconstruction using a nonzero background.

    PubMed

    Jin, Jing; Astheimer, Jeffrey; Waag, Robert

    2010-06-01

    Two methods are described to estimate the boundary of a 2-D penetrable object and the average sound speed in the object. One method is for circular objects centered in the coordinate system of the scattering observation. This method uses an orthogonal function expansion for the scattering. The other method is for noncircular, essentially convex objects. This method uses cross correlation to obtain time differences that determine a family of parabolas whose envelope is the boundary of the object. A curve-fitting method and a phase-based method are described to estimate and correct the offset of an uncentered radial or elliptical object. A method based on the extinction theorem is described to estimate absorption in the object. The methods are applied to calculated scattering from a circular object with an offset and to measured scattering from an offset noncircular object. The results show that the estimated boundaries, sound speeds, and absorption slopes agree very well with independently measured or true values when the assumptions of the methods are reasonably satisfied.

  1. A Laboratory Study on the Reliability Estimations of the Mini-CEX

    ERIC Educational Resources Information Center

    de Lima, Alberto Alves; Conde, Diego; Costabel, Juan; Corso, Juan; Van der Vleuten, Cees

    2013-01-01

    Reliability estimations of workplace-based assessments with the mini-CEX are typically based on real-life data. Estimations are based on the assumption of local independence: the object of the measurement should not be influenced by the measurement itself and samples should be completely independent. This is difficult to achieve. Furthermore, the…

  2. deltaGseg: macrostate estimation via molecular dynamics simulations and multiscale time series analysis.

    PubMed

    Low, Diana H P; Motakis, Efthymios

    2013-10-01

    Binding free energy calculations obtained through molecular dynamics simulations reflect intermolecular interaction states through a series of independent snapshots. Typically, the free energies of multiple simulated series (each with slightly different starting conditions) need to be estimated. Previous approaches carry out this task by moving averages at certain decorrelation times, assuming that the system comes from a single conformation description of binding events. Here, we discuss a more general approach that uses statistical modeling, wavelet denoising and hierarchical clustering to estimate the significance of multiple statistically distinct subpopulations, reflecting potential macrostates of the system. We present the deltaGseg R package that performs macrostate estimation from multiple replicated series and allows molecular biologists/chemists to gain physical insight into the molecular details that are not easily accessible by experimental techniques. deltaGseg is a Bioconductor R package available at http://bioconductor.org/packages/release/bioc/html/deltaGseg.html.

  3. A comparison of analysis methods to estimate contingency strength.

    PubMed

    Lloyd, Blair P; Staubitz, Johanna L; Tapp, Jon T

    2018-05-09

    To date, several data analysis methods have been used to estimate contingency strength, yet few studies have compared these methods directly. To compare the relative precision and sensitivity of four analysis methods (i.e., exhaustive event-based, nonexhaustive event-based, concurrent interval, concurrent+lag interval), we applied all methods to a simulated data set in which several response-dependent and response-independent schedules of reinforcement were programmed. We evaluated the degree to which contingency strength estimates produced from each method (a) corresponded with expected values for response-dependent schedules and (b) showed sensitivity to parametric manipulations of response-independent reinforcement. Results indicated both event-based methods produced contingency strength estimates that aligned with expected values for response-dependent schedules, but differed in sensitivity to response-independent reinforcement. The precision of interval-based methods varied by analysis method (concurrent vs. concurrent+lag) and schedule type (continuous vs. partial), and showed similar sensitivities to response-independent reinforcement. Recommendations and considerations for measuring contingencies are identified. © 2018 Society for the Experimental Analysis of Behavior.

  4. Estimating salinity stress in sugarcane fields with spaceborne hyperspectral vegetation indices

    NASA Astrophysics Data System (ADS)

    Hamzeh, S.; Naseri, A. A.; AlaviPanah, S. K.; Mojaradi, B.; Bartholomeus, H. M.; Clevers, J. G. P. W.; Behzad, M.

    2013-04-01

    The presence of salt in the soil profile negatively affects the growth and development of vegetation. As a result, the spectral reflectance of vegetation canopies varies for different salinity levels. This research was conducted to (1) investigate the capability of satellite-based hyperspectral vegetation indices (VIs) for estimating soil salinity in agricultural fields, (2) evaluate the performance of 21 existing VIs and (3) develop new VIs based on a combination of wavelengths sensitive for multiple stresses and find the best one for estimating soil salinity. For this purpose a Hyperion image of September 2, 2010, and data on soil salinity at 108 locations in sugarcane (Saccharum officinarum L.) fields were used. Results show that soil salinity could well be estimated by some of these VIs. Indices related to chlorophyll absorption bands or based on a combination of chlorophyll and water absorption bands had the highest correlation with soil salinity. In contrast, indices that are only based on water absorption bands had low to medium correlations, while indices that use only visible bands did not perform well. From the investigated indices the optimized soil-adjusted vegetation index (OSAVI) had the strongest relationship (R² = 0.69) with soil salinity for the training data, but it did not perform well in the validation phase. The validation procedure showed that the new salinity and water stress indices (SWSI) implemented in this study (SWSI-1, SWSI-2, SWSI-3) and the Vogelmann red edge index yielded the best results for estimating soil salinity for independent fields with root mean square errors of 1.14, 1.15, 1.17 and 1.15 dS/m, respectively. Our results show that soil salinity could be estimated by satellite-based hyperspectral VIs, but validation of the obtained models on independent data is essential for selecting the best model.
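
    The OSAVI mentioned above has a standard closed form (Rondeaux et al., 1996): a soil-adjusted variant of NDVI with a fixed adjustment factor of 0.16. A minimal sketch, with illustrative reflectance values that are not from the study:

```python
import numpy as np

def osavi(nir, red):
    """Optimized Soil-Adjusted Vegetation Index:
    (1 + L) * (NIR - R) / (NIR + R + L) with L fixed at 0.16."""
    nir = np.asarray(nir, dtype=float)
    red = np.asarray(red, dtype=float)
    return (1.0 + 0.16) * (nir - red) / (nir + red + 0.16)

# Hypothetical NIR and red reflectances for one pixel
print(osavi(0.45, 0.08))
```

    Applied band-wise to a Hyperion scene, the same function yields an OSAVI map that can then be regressed against field-measured salinity.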

  5. NOTE: Entropy-based automated classification of independent components separated from fMCG

    NASA Astrophysics Data System (ADS)

    Comani, S.; Srinivasan, V.; Alleva, G.; Romani, G. L.

    2007-03-01

    Fetal magnetocardiography (fMCG) is a noninvasive technique suitable for the prenatal diagnosis of the fetal heart function. Reliable fetal cardiac signals can be reconstructed from multi-channel fMCG recordings by means of independent component analysis (ICA). However, the identification of the separated components is usually accomplished by visual inspection. This paper discusses a novel automated system based on entropy estimators, namely approximate entropy (ApEn) and sample entropy (SampEn), for the classification of independent components (ICs). The system was validated on 40 fMCG datasets of normal fetuses with the gestational age ranging from 22 to 37 weeks. Both ApEn and SampEn were able to measure the stability and predictability of the physiological signals separated with ICA, and the entropy values of the three categories were significantly different at p < 0.01. The system performances were compared with those of a method based on the analysis of the time and frequency content of the components. The outcomes of this study showed a superior performance of the entropy-based system, in particular for early gestation, with an overall IC detection rate of 98.75% and 97.92% for ApEn and SampEn, respectively, as against a value of 94.50% obtained with the time-frequency-based system.
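
    Sample entropy, one of the two estimators used here, can be computed directly from its definition. A minimal sketch following the Richman-Moorman formulation; the parameter choices (m = 2, tolerance r in signal units) are illustrative, not those of the paper:

```python
import numpy as np

def sample_entropy(x, m=2, r=0.2):
    """SampEn(m, r): negative log of the conditional probability that
    two sequences matching for m points (Chebyshev distance <= r)
    also match for m + 1 points; self-matches are excluded."""
    x = np.asarray(x, dtype=float)
    num = len(x) - m  # same template count at both lengths

    def pairs_within_r(length):
        t = np.array([x[i:i + length] for i in range(num)])
        count = 0
        for i in range(num):
            for j in range(i + 1, num):
                if np.max(np.abs(t[i] - t[j])) <= r:
                    count += 1
        return count

    b = pairs_within_r(m)      # matches at template length m
    a = pairs_within_r(m + 1)  # matches at template length m + 1
    return -np.log(a / b)

# A perfectly regular signal is maximally predictable (SampEn = 0);
# white noise is far less so.
regular = np.tile([0.0, 1.0], 40)
noise = np.random.default_rng(0).normal(size=120)
print(sample_entropy(regular), sample_entropy(noise))
```

    Low entropy thus flags the quasi-periodic cardiac components, while the noise components score high, which is the property the classifier exploits.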

  6. Loss of ability to work and ability to live independently in Parkinson's disease.

    PubMed

    Jasinska-Myga, Barbara; Heckman, Michael G; Wider, Christian; Putzke, John D; Wszolek, Zbigniew K; Uitti, Ryan J

    2012-02-01

    Ability to work and live independently is of particular concern for patients with Parkinson's disease (PD). We studied a series of PD patients able to work or live independently at baseline, and evaluated potential risk factors for two separate outcomes: loss of ability to work and loss of ability to live independently. The series comprised 495 PD patients followed prospectively. Ability to work and ability to live independently were based on clinical interview and examination. Cox regression models adjusted for age and disease duration were used to evaluate associations of baseline characteristics with loss of ability to work and loss of ability to live independently. Higher UPDRS dyskinesia score, UPDRS instability score, UPDRS total score, Hoehn and Yahr stage, and presence of intellectual impairment at baseline were all associated with increased risk of future loss of ability to work and loss of ability to live independently (P ≤ 0.0033). Five years after initial visit, for patients ≤70 years of age with a disease duration ≤4 years at initial visit, 88% were still able to work and 90% to live independently. These estimates worsened as age and disease duration at initial visit increased; for patients >70 years of age with a disease duration >4 years, estimates at 5 years were 43% able to work and 57% able to live independently. These estimates offer useful guidance for PD patients preparing for future changes in their ability to perform activities of daily living. Copyright © 2011 Elsevier Ltd. All rights reserved.

  7. Imaging growth and isocitrate dehydrogenase 1 mutation are independent predictors for diffuse low-grade gliomas

    PubMed Central

    Gozé, Catherine; Blonski, Marie; Le Maistre, Guillaume; Bauchet, Luc; Dezamis, Edouard; Page, Philippe; Varlet, Pascale; Capelle, Laurent; Devaux, Bertrand; Taillandier, Luc; Duffau, Hugues; Pallud, Johan

    2014-01-01

    Background We explored whether spontaneous imaging tumor growth (estimated by the velocity of diametric expansion) and isocitrate dehydrogenase 1 (IDH1) mutation (estimated by IDH1 immunoexpression) were independent predictors of long-term outcomes of diffuse low-grade gliomas in adults. Methods One hundred thirty-one adult patients with newly diagnosed supratentorial diffuse low-grade gliomas were retrospectively studied. Results Isocitrate dehydrogenase 1 mutations were present in 107 patients. The mean spontaneous velocity of diametric expansion was 5.40 ± 5.46 mm/y. During follow-up (mean, 70 ± 54.7 mo), 56 patients presented a malignant transformation and 23 died. The median malignant progression-free survival and the overall survival were significantly longer in cases of slow velocity of diametric expansion (149 and 198 mo, respectively) than in cases of fast velocity of diametric expansion (46 and 82 mo; P < .001 and P < .001, respectively) and in cases with IDH1 mutation (100 and 198 mo, respectively) than in cases without IDH1 mutation (72 mo and not reached; P = .028 and P = .001, respectively). In multivariate analyses, spontaneous velocity of diametric expansion and IDH1 mutation were independent prognostic factors for malignant progression-free survival (P < .001; hazard ratio, 4.23; 95% CI, 1.81–9.40 and P = .019; hazard ratio, 2.39; 95% CI, 1.19–4.66, respectively) and for overall survival (P < .001; hazard ratio, 26.3; 95% CI, 5.42–185.2 and P = .007; hazard ratio, 17.89; 95% CI, 2.15–200.1, respectively). Conclusions The spontaneous velocity of diametric expansion and IDH1 mutation status are 2 independent prognostic values that should be obtained at the beginning of the management of diffuse low-grade gliomas in adults. PMID:24847087

  8. Correction of 3D rigid body motion in fMRI time series by independent estimation of rotational and translational effects in k-space.

    PubMed

    Costagli, Mauro; Waggoner, R Allen; Ueno, Kenichi; Tanaka, Keiji; Cheng, Kang

    2009-04-15

    In functional magnetic resonance imaging (fMRI), even subvoxel motion dramatically corrupts the blood oxygenation level-dependent (BOLD) signal, invalidating the assumption that intensity variation in time is primarily due to neuronal activity. Thus, correction of the subject's head movements is a fundamental step to be performed prior to data analysis. Most motion correction techniques register a series of volumes assuming that rigid body motion, characterized by rotational and translational parameters, occurs. Unlike the most widely used applications for fMRI data processing, which correct motion in the image domain by numerically estimating rotational and translational components simultaneously, the algorithm presented here operates in a three-dimensional k-space, to decouple and correct rotations and translations independently, offering new ways and more flexible procedures to estimate the parameters of interest. We developed an implementation of this method in MATLAB, and tested it on both simulated and experimental data. Its performance was quantified in terms of square differences and center of mass stability across time. Our data show that the algorithm proposed here successfully corrects for rigid-body motion, and its employment in future fMRI studies is feasible and promising.
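
    The property this decoupling exploits is that a rigid translation leaves k-space magnitudes unchanged and appears only as a linear phase ramp. The one-dimensional sketch below illustrates that via phase correlation, a simplified stand-in for the paper's algorithm that handles integer circular shifts only:

```python
import numpy as np

def translation_from_kspace(ref, moved):
    """Estimate an integer circular shift between two 1-D signals.
    A shift by s multiplies the spectrum by exp(-2j*pi*k*s/N), so the
    normalized cross-power spectrum inverts to a peak at index s."""
    R = np.fft.fft(ref)
    M = np.fft.fft(moved)
    cross = np.conj(R) * M
    cross /= np.abs(cross) + 1e-12          # keep phase information only
    corr = np.fft.ifft(cross).real
    return int(np.argmax(corr))

# Hypothetical 1-D "profile" translated by 5 samples
x = np.exp(-0.5 * ((np.arange(64) - 20.0) / 3.0) ** 2)
print(translation_from_kspace(x, np.roll(x, 5)))  # → 5
```

    Subvoxel translations generalize this by fitting the slope of the phase ramp rather than locating a discrete peak, and rotations can be estimated separately from the (translation-invariant) k-space magnitude.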

  9. Estimation of Rainfall Rates from Passive Microwave Remote Sensing.

    NASA Astrophysics Data System (ADS)

    Sharma, Awdhesh Kumar

    Rainfall rates have been estimated using the passive microwave and visible/infrared remote sensing techniques. Data of September 14, 1978 from the Scanning Multichannel Microwave Radiometer (SMMR) on board SEASAT-A and the Visible and Infrared Spin Scan Radiometer (VISSR) on board GOES-W (Geostationary Operational Environmental Satellite - West) were obtained and analyzed for rainfall rate retrieval. Microwave brightness temperatures (MBT) are simulated, using the microwave radiative transfer model (MRTM) and atmospheric scattering models. These MBT were computed as a function of rates of rainfall from precipitating clouds which are in a combined phase of ice and water. Microwave extinction due to ice and liquid water is calculated using Mie theory and gamma drop-size distributions. Microwave absorption due to oxygen and water vapor is based on the schemes given by Rosenkranz, and Barret and Chung. The scattering phase matrix involved in the MRTM is found using Eddington's two stream approximation. The surface effects due to winds and foam are included through the ocean surface emissivity model. Rainfall rates are then inverted from MBT using the optimization technique "Leaps and Bounds" and multiple linear regression, leading to a relationship between the rainfall rates and MBT. This relationship has been used to infer the oceanic rainfall rates from SMMR data. The VISSR data has been inverted for the rainfall rates using Griffith's scheme. This scheme provides an independent means of estimating rainfall rates for cross checking SMMR estimates. The inferred rainfall rates from both techniques have been plotted on a world map for comparison. A reasonably good correlation has been obtained between the two estimates.

  10. Error estimation and adaptive mesh refinement for parallel analysis of shell structures

    NASA Technical Reports Server (NTRS)

    Keating, Scott C.; Felippa, Carlos A.; Park, K. C.

    1994-01-01

    The formulation and application of element-level, element-independent error indicators is investigated. This research culminates in the development of an error indicator formulation which is derived based on the projection of element deformation onto the intrinsic element displacement modes. The qualifier 'element-level' means that no information from adjacent elements is used for error estimation. This property is ideally suited for obtaining error values and driving adaptive mesh refinements on parallel computers where access to neighboring elements residing on different processors may incur significant overhead. In addition such estimators are insensitive to the presence of physical interfaces and junctures. An error indicator qualifies as 'element-independent' when only visible quantities such as element stiffness and nodal displacements are used to quantify error. Error evaluation at the element level and element independence for the error indicator are highly desired properties for computing error in production-level finite element codes. Four element-level error indicators have been constructed. Two of the indicators are based on a variational formulation of the element stiffness and are element-dependent; their derivations are retained for developmental purposes. The second two indicators match and exceed the performance of the first two but require no special formulation of the element stiffness. Parallel substructuring and adaptive mesh refinement are discussed, and the final error indicator is demonstrated on two-dimensional plane-stress and three-dimensional shell problems.

  11. Estimating the global incidence of traumatic spinal cord injury.

    PubMed

    Fitzharris, M; Cripps, R A; Lee, B B

    2014-02-01

    Population modelling--forecasting. To estimate the global incidence of traumatic spinal cord injury (TSCI). An initiative of the International Spinal Cord Society (ISCoS) Prevention Committee. Regression techniques were used to derive regional and global estimates of TSCI incidence. Using the findings of 31 published studies, a regression model was fitted using a known number of TSCI cases as the dependent variable and the population at risk as the single independent variable. In the process of deriving TSCI incidence, an alternative TSCI model was specified in an attempt to arrive at an optimal way of estimating the global incidence of TSCI. The global incidence of TSCI was estimated to be 23 cases per 1,000,000 persons in 2007 (179,312 cases per annum). World Health Organization's regional results are provided. Understanding the incidence of TSCI is important for health service planning and for the determination of injury prevention priorities. In the absence of high-quality epidemiological studies of TSCI in each country, the estimation of TSCI obtained through population modelling can be used to overcome known deficits in global spinal cord injury (SCI) data. The incidence of TSCI is context specific, and an alternative regression model demonstrated how TSCI incidence estimates could be improved with additional data. The results highlight the need for data standardisation and comprehensive reporting of national level TSCI data. A step-wise approach from the collation of conventional epidemiological data through to population modelling is suggested.
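
    The core of the described model, with cases as the dependent variable and population at risk as the single independent variable, reduces in its simplest form to a regression through the origin. A minimal sketch with purely illustrative study-level numbers, not the paper's data:

```python
import numpy as np

# Hypothetical (population at risk, observed TSCI cases) pairs
# drawn from imagined published studies -- illustrative only.
population = np.array([2.1e6, 5.4e6, 0.8e6, 12.0e6, 3.3e6])
cases = np.array([44, 130, 15, 290, 70])

# Through-origin least squares for cases = rate * population:
# slope = sum(pop * cases) / sum(pop^2).
rate = np.sum(population * cases) / np.sum(population ** 2)
incidence_per_million = rate * 1e6
print(f"{incidence_per_million:.1f} cases per million persons per year")
# prints "23.9 cases per million persons per year"
```

    Regional estimates then follow by applying the fitted rate to each region's population at risk; richer models add covariates where the data allow.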

  12. Optimum data weighting and error calibration for estimation of gravitational parameters

    NASA Technical Reports Server (NTRS)

    Lerch, F. J.

    1989-01-01

    A new technique was developed for the weighting of data from satellite tracking systems in order to obtain an optimum least squares solution and an error calibration for the solution parameters. Data sets from optical, electronic, and laser systems on 17 satellites in GEM-T1 (Goddard Earth Model, 36x36 spherical harmonic field) were employed in applying this technique to gravity field parameters. Also, GEM-T2 (31 satellites) was recently computed as a direct application of the method and is summarized here. The method employs subset solutions of the data associated with the complete solution and uses an algorithm to adjust the data weights by requiring the differences of parameters between solutions to agree with their error estimates. With the adjusted weights the process provides for an automatic calibration of the error estimates for the solution parameters. The data weights derived are generally much smaller than corresponding weights obtained from nominal values of observation accuracy or residuals. Independent tests show significant improvement for solutions with optimal weighting as compared to the nominal weighting. The technique is general and may be applied to orbit parameters, station coordinates, or parameters other than the gravity model.

  13. Survival curve estimation with dependent left truncated data using Cox's model.

    PubMed

    Mackenzie, Todd

    2012-10-19

    The Kaplan-Meier and closely related Lynden-Bell estimators are used to provide nonparametric estimation of the distribution of a left-truncated random variable. These estimators assume that the left-truncation variable is independent of the time-to-event. This paper proposes a semiparametric method for estimating the marginal distribution of the time-to-event that does not require independence. It models the conditional distribution of the time-to-event given the truncation variable using Cox's model for left truncated data, and uses inverse probability weighting. We report the results of simulations and illustrate the method using a survival study.

  14. Improving estimates of genetic maps: a meta-analysis-based approach.

    PubMed

    Stewart, William C L

    2007-07-01

    Inaccurate genetic (or linkage) maps can reduce the power to detect linkage, increase type I error, and distort haplotype and relationship inference. To improve the accuracy of existing maps, I propose a meta-analysis-based method that combines independent map estimates into a single estimate of the linkage map. The method uses the variance of each independent map estimate to combine them efficiently, whether the map estimates use the same set of markers or not. As compared with a joint analysis of the pooled genotype data, the proposed method is attractive for three reasons: (1) it has comparable efficiency to the maximum likelihood map estimate when the pooled data are homogeneous; (2) relative to existing map estimation methods, it can have increased efficiency when the pooled data are heterogeneous; and (3) it avoids the practical difficulties of pooling human subjects data. On the basis of simulated data modeled after two real data sets, the proposed method can reduce the sampling variation of linkage maps commonly used in whole-genome linkage scans. Furthermore, when the independent map estimates are also maximum likelihood estimates, the proposed method performs as well as or better than when they are estimated by the program CRIMAP. Since variance estimates of maps may not always be available, I demonstrate the feasibility of three different variance estimators. Overall, the method should prove useful to investigators who need map positions for markers not contained in publicly available maps, and to those who wish to minimize the negative effects of inaccurate maps. Copyright 2007 Wiley-Liss, Inc.
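
    The efficient combination described above is the standard inverse-variance (fixed-effect) rule: weighting each independent estimate by the reciprocal of its variance minimizes the variance of the combined estimate. A minimal sketch with hypothetical map positions:

```python
import numpy as np

def combine_estimates(estimates, variances):
    """Inverse-variance combination of independent estimates:
    weights w_i = 1/var_i give the minimum-variance unbiased linear
    combination, whose variance is 1/sum(w_i)."""
    w = 1.0 / np.asarray(variances, dtype=float)
    est = np.asarray(estimates, dtype=float)
    combined = float(np.sum(w * est) / np.sum(w))
    return combined, float(1.0 / np.sum(w))

# Hypothetical map positions (cM) for one marker from three studies
pos, var = combine_estimates([12.1, 11.5, 12.8], [0.04, 0.09, 0.16])
print(pos, var)
```

    Note the combined variance is always smaller than the smallest input variance, which is why pooling independent maps tightens marker positions even when the individual studies disagree slightly.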

  15. Pragmatic Approach to Device-Independent Color

    NASA Technical Reports Server (NTRS)

    Brandt, R. D.; Capraro, K. S.

    1995-01-01

    JPL has been producing images of planetary bodies for over 30 years. The results of an effort to implement device-independent color on three types of devices are described. The goal is to produce near the same eye-brain response when the observer views the image produced by each device under the correct lighting conditions. The procedure used to calibrate and obtain each device profile is described.

  16. Independent functions and the geometry of Banach spaces

    NASA Astrophysics Data System (ADS)

    Astashkin, Sergey V.; Sukochev, Fedor A.

    2010-12-01

    The main objective of this survey is to present the `state of the art' of those parts of the theory of independent functions which are related to the geometry of function spaces. The `size' of a sum of independent functions is estimated in terms of classical moments and also in terms of general symmetric function norms. The exposition is centred on the Rosenthal inequalities and their various generalizations and sharp conditions under which the latter hold. The crucial tool here is the recently developed construction of the Kruglov operator. The survey also provides a number of applications to the geometry of Banach spaces. In particular, variants of the classical Khintchine-Maurey inequalities, isomorphisms between symmetric spaces on a finite interval and on the semi-axis, and a description of the class of symmetric spaces with any sequence of symmetrically and identically distributed independent random variables spanning a Hilbert subspace are considered. Bibliography: 87 titles.

  17. Device-independent quantum private query

    NASA Astrophysics Data System (ADS)

    Maitra, Arpita; Paul, Goutam; Roy, Sarbani

    2017-04-01

    In quantum private query (QPQ), a client obtains values corresponding to his or her query only, and nothing else from the server, and the server does not get any information about the queries. V. Giovannetti et al. [Phys. Rev. Lett. 100, 230502 (2008)], 10.1103/PhysRevLett.100.230502 gave the first QPQ protocol and since then quite a few variants and extensions have been proposed. However, none of the existing protocols are device independent; i.e., all of them assume implicitly that the entangled states supplied to the client and the server are of a certain form. In this work, we exploit the idea of a local CHSH game and connect it with the scheme of Y. G. Yang et al. [Quantum Info. Process. 13, 805 (2014)], 10.1007/s11128-013-0692-8 to present the concept of a device-independent QPQ protocol.

  18. A method for modeling bias in a person's estimates of likelihoods of events

    NASA Technical Reports Server (NTRS)

    Nygren, Thomas E.; Morera, Osvaldo

    1988-01-01

    It is of practical importance in decision situations involving risk to train individuals to transform uncertainties into subjective probability estimates that are both accurate and unbiased. We have found that in decision situations involving risk, people often introduce subjective bias in their estimation of the likelihoods of events depending on whether the possible outcomes are perceived as being good or bad. Until now, however, the successful measurement of individual differences in the magnitude of such biases has not been attempted. In this paper we illustrate a modification of a procedure originally outlined by Davidson, Suppes, and Siegel (3) to allow for a quantitatively-based methodology for simultaneously estimating an individual's subjective utility and subjective probability functions. The procedure is now an interactive computer-based algorithm, DSS, that allows for the measurement of biases in probability estimation by obtaining independent measures of two subjective probability functions (S+ and S-) for winning (i.e., good outcomes) and for losing (i.e., bad outcomes) respectively for each individual, and for different experimental conditions within individuals. The algorithm and some recent empirical data are described.

  19. Fully device-independent conference key agreement

    NASA Astrophysics Data System (ADS)

    Ribeiro, Jérémy; Murta, Gláucia; Wehner, Stephanie

    2018-02-01

    We present a security analysis of conference key agreement (CKA) in the most adversarial model of device independence (DI). Our protocol can be implemented by any experimental setup that is capable of performing Bell tests [specifically, the Mermin-Ardehali-Belinskii-Klyshko (MABK) inequality], and security can in principle be obtained for any violation of the MABK inequality that detects genuine multipartite entanglement among the N parties involved in the protocol. As our main tool, we derive a direct physical connection between the N-partite MABK inequality and the Clauser-Horne-Shimony-Holt (CHSH) inequality, showing that certain violations of the MABK inequality correspond to a violation of the CHSH inequality between one of the parties and the other N - 1. We compare the asymptotic key rate for device-independent conference key agreement (DICKA) to the case where the parties use N - 1 device-independent quantum key distribution protocols in order to generate a common key. We show that for some regime of noise the DICKA protocol leads to better rates.

  20. Estimation of penetration of forest canopies by Interferometric SAR measurements

    NASA Technical Reports Server (NTRS)

    Rodriguez, Ernesto; Michel, Thierry R.; Harding, David J.

    1995-01-01

    In contrast to traditional Synthetic Aperture Radar (SAR), an Interferometric SAR (INSAR) yields two additional measurements: the phase difference and the correlation between the two interferometric channels. The phase difference has been used to estimate topographic height. For homogeneous surfaces, the correlation depends on the system signal-to-noise ratio (SNR), the interferometer parameters, and the local slope. In the presence of volume scattering, such as that encountered in vegetation canopies, the correlation between the two channels is also dependent on the degree of penetration of the radiation into the scattering medium. In this paper, we propose a method for removing system and slope effects in order to obtain the decorrelation due to penetration alone. The sensitivities and accuracy of the proposed method are determined by Monte Carlo experiments, and we show that the proposed technique has sufficient sensitivity to provide penetration measurements for airborne SAR systems. Next, we provide a theoretical model to estimate the degree of penetration in a way which is independent of the details of the scattering medium. We also present a model for the correlation from non-homogeneous layers. We assess the sensitivity of the proposed inversion technique to these inhomogeneous situations. Finally, we present a comparison of the interferometric results against in situ data obtained by an airborne laser profilometer which provides a direct measurement of tree height and an estimate of the vegetation density profile in the forested areas around Mt. Adams, WA.

  1. [Hyperspectral Remote Sensing Estimation Models for Pasture Quality].

    PubMed

    Ma, Wei-wei; Gong, Cai-lan; Hu, Yong; Wei, Yong-lin; Li, Long; Liu, Feng-yi; Meng, Peng

    2015-10-01

    Crude protein (CP), crude fat (CFA) and crude fiber (CFI) are key indicators for evaluation of the quality and feeding value of pasture. Hence, identification of these biological contents is an essential practice for animal husbandry. As current approaches to pasture quality estimation are time-consuming and costly, and even generate hazardous waste, a real-time and non-destructive method is therefore developed in this study using pasture canopy hyperspectral data. A field campaign was carried out in August 2013 around Qinghai Lake in order to obtain field spectral properties of 19 types of natural pasture using the ASD Field Spec 3, a field spectrometer that works in the optical region (350-2500 nm) of the electromagnetic spectrum. In addition to the spectral data, pasture samples were also collected from the field and analyzed in the laboratory to measure the relative concentration of CP (%), CFA (%) and CFI (%). After spectral denoising and smoothing, the relationship of pasture quality parameters with the reflectance spectrum, the first derivatives of reflectance (FDR), band ratio and the wavelet coefficients (WCs) was analyzed respectively. The concentration of CP, CFA and CFI of pasture was found closely correlated with FDR at wavebands centered at 424, 1668, and 918 nm, as well as with the low-scale (scale = 2, 4) Morlet, Coiflets and Gaussian WCs. Accordingly, linear, exponential, and polynomial equations between each pasture variable and FDR or WCs were developed. Validation of the developed equations indicated that the polynomial model with an independent variable of Coiflets WCs (scale = 4, wavelength = 1209 nm), the polynomial model with an independent variable of FDR, and the exponential model with an independent variable of FDR were the optimal models for prediction of the concentration of CP, CFA and CFI of pasture, respectively. The R2 of the pasture quality estimation models was between 0.646 and 0.762 at the 0.01 significance level. Results suggest…
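The FDR predictor used by these models is simply the derivative of reflectance with respect to wavelength. A minimal sketch on a synthetic spectrum (the smooth curve below is a stand-in for real field data, not a pasture measurement):

```python
import numpy as np

# Hypothetical reflectance spectrum sampled at 1 nm steps over 350-2500 nm.
wavelengths = np.arange(350, 2501, 1.0)
reflectance = 0.3 + 0.1 * np.sin(wavelengths / 200.0)  # stand-in for field data

# First derivative of reflectance (FDR) with respect to wavelength.
fdr = np.gradient(reflectance, wavelengths)

# Band value a model might use, e.g. FDR at the 1668 nm waveband.
band_1668 = fdr[np.searchsorted(wavelengths, 1668)]
```

Regressing CP, CFA, or CFI on such band values is then an ordinary (linear, exponential, or polynomial) curve fit.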

  2. Graviton propagator from background-independent quantum gravity.

    PubMed

    Rovelli, Carlo

    2006-10-13

    We study the graviton propagator in Euclidean loop quantum gravity. We use spin foam, boundary-amplitude, and group-field-theory techniques. We compute a component of the propagator to first order, under some approximations, obtaining the correct large-distance behavior. This indicates a way for deriving conventional spacetime quantities from a background-independent theory.

  3. Age-independent anti-Müllerian hormone (AMH) standard deviation scores to estimate ovarian function.

    PubMed

    Helden, Josef van; Weiskirchen, Ralf

    2017-06-01

    To determine single-year age-specific anti-Müllerian hormone (AMH) standard deviation scores (SDS) for women with normal ovarian function and for different ovarian disorders resulting in sub- or infertility. Determination of single-year median and mean AMH values with standard deviations (SD), and calculation of age-independent cut-off SDS for the discrimination between normal ovarian function and ovarian disorders. Single-year-specific median, mean, and SD values were evaluated for the Beckman Access AMH immunoassay. While the decrease of both median and mean AMH values is strongly correlated with increasing age, the calculated SDS values proved to be age independent, differentiating normal ovarian function (measured as confirmed ovulation with sufficient luteal activity) from hyperandrogenemic cycle disorders or anovulation associated with high AMH values, and from reduced ovarian activity or insufficiency associated with low AMH, respectively. These results will be helpful for the treatment of patients and the evaluation of the different reproductive options. Copyright © 2017 Elsevier B.V. All rights reserved.

  4. R programming for parameters estimation of geographically weighted ordinal logistic regression (GWOLR) model based on Newton Raphson

    NASA Astrophysics Data System (ADS)

    Zuhdi, Shaifudin; Saputro, Dewi Retno Sari

    2017-03-01

    The geographically weighted ordinal logistic regression (GWOLR) model represents the relationship between a dependent variable with ordinal categories and independent variables whose influence varies with the geographical location of the observation site. Maximum likelihood estimation of the GWOLR model parameters yields a system of nonlinear equations that is hard to solve analytically. Solving it amounts to finding the maximum of the likelihood, i.e. an optimization problem, which can be tackled by numerical approximation; one such method is Newton-Raphson. The purpose of this research is to construct a Newton-Raphson iteration algorithm and a program in the R software to estimate the GWOLR model. The research shows that the parameters of the GWOLR model can be estimated in R by writing a syntax program built around the "while" command.
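The Newton-Raphson iteration at the core of such a program is generic: repeatedly solve the Hessian system and step until the update is small. A minimal sketch (in Python rather than R, with a toy one-parameter Poisson likelihood standing in for the GWOLR score and Hessian):

```python
import numpy as np

def newton_raphson(score, hessian, beta0, tol=1e-8, max_iter=100):
    """Generic Newton-Raphson: beta <- beta - H(beta)^-1 * score(beta)."""
    beta = np.asarray(beta0, dtype=float)
    it = 0
    while it < max_iter:                       # the "while" loop of the paper
        step = np.linalg.solve(hessian(beta), score(beta))
        beta = beta - step
        if np.max(np.abs(step)) < tol:
            break
        it += 1
    return beta

# Toy example: Poisson MLE of the rate (closed form is the sample mean),
# solved via the score equation d/dlam log L = sum(x)/lam - n = 0.
x = np.array([2.0, 3.0, 4.0, 7.0])
score = lambda b: np.array([x.sum() / b[0] - len(x)])
hessian = lambda b: np.array([[-x.sum() / b[0] ** 2]])
lam = newton_raphson(score, hessian, [1.0])   # converges to x.mean() = 4.0
```

For GWOLR the same loop runs with the geographically weighted score vector and Hessian matrix in place of the toy functions.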

  5. Radiation-force-based estimation of acoustic attenuation using harmonic motion imaging (HMI) in phantoms and in vitro livers before and after HIFU ablation

    NASA Astrophysics Data System (ADS)

    Chen, Jiangang; Hou, Gary Y.; Marquet, Fabrice; Han, Yang; Camarena, Francisco; Konofagou, Elisa

    2015-10-01

    Acoustic attenuation represents the energy loss of the propagating wave through biological tissues and plays a significant role in both therapeutic and diagnostic ultrasound applications. Estimation of acoustic attenuation remains challenging but critical for tissue characterization. In this study, an attenuation estimation approach was developed using the radiation-force-based method of harmonic motion imaging (HMI). 2D tissue displacement maps were acquired by moving the transducer in a raster-scan format. A linear regression model was applied on the logarithm of the HMI displacements at different depths in order to estimate the acoustic attenuation. Commercially available phantoms with known attenuations (n = 5) and in vitro canine livers (n = 3) were tested, as well as HIFU lesions in in vitro canine livers (n = 5). Results demonstrated that attenuations obtained from the phantoms showed a good correlation (R^2 = 0.976) with the independently obtained values reported by the manufacturer, with an estimation error (compared to the values independently measured) varying within the range of 15-35%. The estimated attenuation in the in vitro canine livers was equal to 0.32 ± 0.03 dB cm^-1 MHz^-1, which is in good agreement with the existing literature. The attenuation in HIFU lesions was found to be higher (0.58 ± 0.06 dB cm^-1 MHz^-1) than that in normal tissues, also in agreement with the results from previous publications. Future potential applications of the proposed method include estimation of attenuation in pathological tissues before and after thermal ablation.
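The regression step described above can be sketched as follows, assuming a simple one-way amplitude-decay model on synthetic data (this is an illustration of the log-linear fit, not the authors' processing chain):

```python
import numpy as np

# Synthetic HMI displacement amplitudes decaying with depth z (cm) as
# u(z) = u0 * 10 ** (-alpha_dB * f_MHz * z / 20)  -- an assumed amplitude model.
alpha_true = 0.5          # dB cm^-1 MHz^-1
f_mhz = 4.5               # assumed excitation frequency
z = np.linspace(1.0, 5.0, 20)
u = 80.0 * 10 ** (-alpha_true * f_mhz * z / 20.0)

# Linear regression of log-amplitude (dB) on depth; the slope gives attenuation.
slope, intercept = np.polyfit(z, 20.0 * np.log10(u), 1)
alpha_est = -slope / f_mhz   # back to dB cm^-1 MHz^-1
```

On noisy data the same fit applies; the scatter of the residuals then drives the 15-35% estimation error quoted above.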

  6. Separate patient serum sodium medians from males and females provide independent information on analytical bias.

    PubMed

    Hansen, Steen Ingemann; Petersen, Per Hyltoft; Lund, Flemming; Fraser, Callum G; Sölétormos, György

    2017-10-26

    During monitoring of monthly medians of results from patients, undertaken to assess analytical stability in routine laboratory performance, the medians for serum sodium for male and female patients were found to be significantly related. Daily, weekly and monthly patient medians of serum sodium for both male and female patients were calculated from results obtained on samples from the population >18 years on three analysers in the hospital laboratory. The half-range of medians was applied as an estimate of the maximum bias. Further, the ratios between the two medians were calculated. The medians of both genders showed dispersion over time, but they were closely connected in like patterns, as confirmed by the half-range of the ratios of medians for males and females, which varied from 0.36% for daily, to 0.14% for weekly, and 0.036% for monthly ratios over all instruments. The tight relationship between the gender medians for serum sodium is only possible when raw laboratory data are used for the calculation. The two patient medians can be used to confirm each other and are useful as independent estimates of analytical bias during constant calibration periods. In contrast to the gender-combined median, the estimate of analytical bias can be further confirmed by calculating the ratios of medians for males and females.
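The half-range and ratio statistics are straightforward to compute. A minimal sketch with hypothetical monthly median series (the numbers below are invented for illustration):

```python
import numpy as np

def half_range(values):
    """Half-range of a series of medians, used as a maximum-bias estimate."""
    return (max(values) - min(values)) / 2.0

# Hypothetical monthly serum sodium medians (mmol/L) for male and female patients.
male = np.array([139.0, 139.2, 139.1, 138.9, 139.3])
female = np.array([138.8, 139.0, 138.9, 138.7, 139.1])

bias_male = half_range(male)                 # maximum-bias estimate, mmol/L
ratios = male / female                       # gender median ratios
ratio_half_range_pct = half_range(ratios) * 100.0   # stability of the ratio, %
```

A drifting calibration moves both gender medians together (ratio stable), whereas a patient-mix change tends to move them differently, which is why the ratio adds independent information.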

  7. Source signature estimation from multimode surface waves via mode-separated virtual real source method

    NASA Astrophysics Data System (ADS)

    Gao, Lingli; Pan, Yudi

    2018-05-01

    The correct estimation of the seismic source signature is crucial to exploration geophysics. Based on seismic interferometry, the virtual real source (VRS) method provides a model-independent way for source signature estimation. However, when encountering multimode surface waves, which are commonly seen in the shallow seismic survey, strong spurious events appear in seismic interferometric results. These spurious events introduce errors in the virtual-source recordings and reduce the accuracy of the source signature estimated by the VRS method. In order to estimate a correct source signature from multimode surface waves, we propose a mode-separated VRS method. In this method, multimode surface waves are mode separated before seismic interferometry. Virtual-source recordings are then obtained by applying seismic interferometry to each mode individually. Therefore, artefacts caused by cross-mode correlation are excluded in the virtual-source recordings and the estimated source signatures. A synthetic example showed that a correct source signature can be estimated with the proposed method, while strong spurious oscillation occurs in the estimated source signature if we do not apply mode separation first. We also applied the proposed method to a field example, which verified its validity and effectiveness in estimating seismic source signature from shallow seismic shot gathers containing multimode surface waves.

  8. Patient-specific lean body mass can be estimated from limited-coverage computed tomography images.

    PubMed

    Devriese, Joke; Beels, Laurence; Maes, Alex; van de Wiele, Christophe; Pottel, Hans

    2018-06-01

    In PET/CT, quantitative evaluation of tumour metabolic activity is possible through standardized uptake values, usually normalized for body weight (BW) or lean body mass (LBM). Patient-specific LBM can be estimated from whole-body (WB) CT images. As most clinical indications only warrant PET/CT examinations covering head to midthigh, the aim of this study was to develop a simple and reliable method to estimate LBM from limited-coverage (LC) CT images and test its validity. Head-to-toe PET/CT examinations were retrospectively retrieved and semiautomatically segmented into tissue types based on thresholding of CT Hounsfield units. LC was obtained by omitting image slices. Image segmentation was validated on the WB CT examinations by comparing CT-estimated BW with actual BW, and LBM estimated from LC images were compared with LBM estimated from WB images. A direct method and an indirect method were developed and validated on an independent data set. Comparing LBM estimated from LC examinations with estimates from WB examinations (LBMWB) showed a significant but limited bias of 1.2 kg (direct method) and nonsignificant bias of 0.05 kg (indirect method). This study demonstrates that LBM can be estimated from LC CT images with no significant difference from LBMWB.

  9. Estimating dead-space fraction for secondary analyses of ARDS clinical trials

    PubMed Central

    Beitler, Jeremy R.; Thompson, B. Taylor; Matthay, Michael A.; Talmor, Daniel; Liu, Kathleen D.; Zhuo, Hanjing; Hayden, Douglas; Spragg, Roger G.; Malhotra, Atul

    2015-01-01

    Objective Pulmonary dead-space fraction is one of few lung-specific independent predictors of mortality from acute respiratory distress syndrome (ARDS). However, it is not measured routinely in clinical trials and thus altogether ignored in secondary analyses that shape future research directions and clinical practice. This study sought to validate an estimate of dead-space fraction for use in secondary analyses of clinical trials. Design Analysis of patient-level data pooled from ARDS clinical trials. Four approaches to estimate dead-space fraction were evaluated: three required estimating metabolic rate; one estimated dead-space fraction directly. Setting U.S. academic teaching hospitals. Patients Data from 210 patients across three clinical trials were used to compare performance of estimating equations with measured dead-space fraction. A second cohort of 3,135 patients from six clinical trials without measured dead-space fraction was used to confirm whether estimates independently predicted mortality. Interventions None. Measurements and Main Results Dead-space fraction estimated using the unadjusted Harris-Benedict equation for energy expenditure was unbiased (mean ± SD Harris-Benedict 0.59 ± 0.13; measured 0.60 ± 0.12). This estimate predicted measured dead-space fraction to within ± 0.10 in 70% of patients and ± 0.20 in 95% of patients. Measured dead-space fraction independently predicted mortality (OR 1.36 per 0.05 increase in dead-space fraction, 95% CI 1.10–1.68; p < .01). The Harris-Benedict estimate closely approximated this association with mortality in the same cohort (OR 1.55, 95% CI 1.21–1.98; p < .01) and remained independently predictive of death in the larger ARDSNet cohort. Other estimates predicted measured dead-space fraction or its association with mortality less well. Conclusions Dead-space fraction should be measured in future ARDS clinical trials to facilitate incorporation into secondary analyses. For analyses where dead
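One common recipe for this kind of estimate, sketched below, combines the unadjusted Harris-Benedict equation with the Weir equation (at an assumed respiratory quotient) to estimate CO2 production, then applies the rearranged alveolar ventilation equation. The exact estimating equations evaluated in the study are given in the paper and may differ in detail; the constants and RQ here are standard textbook values, not study-specific:

```python
def harris_benedict(weight_kg, height_cm, age_yr, male=True):
    """Resting energy expenditure (kcal/day), unadjusted Harris-Benedict."""
    if male:
        return 66.473 + 13.7516 * weight_kg + 5.0033 * height_cm - 6.755 * age_yr
    return 655.0955 + 9.5634 * weight_kg + 1.8496 * height_cm - 4.6756 * age_yr

def estimated_dead_space_fraction(ee_kcal_day, paco2_mmhg, ve_l_min, rq=0.8):
    """VD/VT = 1 - 863 * VCO2 / (PaCO2 * VE), with VCO2 (L/min) estimated from
    energy expenditure via the Weir equation at an assumed RQ."""
    # Weir: EE (kcal/day) = 1.44 * (3.941 * VO2 + 1.106 * VCO2), gases in mL/min.
    vco2_ml_min = ee_kcal_day / (1.44 * (3.941 / rq + 1.106))
    return 1.0 - 863.0 * (vco2_ml_min / 1000.0) / (paco2_mmhg * ve_l_min)

# Illustrative patient: 70 kg, 175 cm, 50-year-old male, PaCO2 40 mmHg, VE 10 L/min.
ee = harris_benedict(70.0, 175.0, 50.0, male=True)
vdvt = estimated_dead_space_fraction(ee, paco2_mmhg=40.0, ve_l_min=10.0)
```

For this example the estimate lands near 0.6, consistent with the cohort means (0.59-0.60) reported above.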

  10. The Robustness of LOGIST and BILOG IRT Estimation Programs to Violations of Local Independence.

    ERIC Educational Resources Information Center

    Ackerman, Terry A.

    One of the important underlying assumptions of all item response theory (IRT) models is that of local independence. This assumption requires that the response to an item on a test not be influenced by the response to any other items. This assumption is often taken for granted, with little or no scrutiny of the response process required to answer…

  11. Accounting for independent nondifferential misclassification does not increase certainty that an observed association is in the correct direction.

    PubMed

    Greenland, Sander; Gustafson, Paul

    2006-07-01

    Researchers sometimes argue that their exposure-measurement errors are independent of other errors and are nondifferential with respect to disease, resulting in estimation bias toward the null. Among well-known problems with such arguments are that independence and nondifferentiality are harder to satisfy than ordinarily appreciated (e.g., because of correlation of errors in questionnaire items, and because of uncontrolled covariate effects on error rates); small violations of independence or nondifferentiality may lead to bias away from the null; and, if exposure is polytomous, the bias produced by independent nondifferential error is not always toward the null. The authors add to this list by showing that, in a 2 x 2 table (for which independent nondifferential error produces bias toward the null), accounting for independent nondifferential error does not reduce the p value even though it increases the point estimate. Thus, such accounting should not increase certainty that an association is present.

  12. Resting State Network Estimation in Individual Subjects

    PubMed Central

    Hacker, Carl D.; Laumann, Timothy O.; Szrama, Nicholas P.; Baldassarre, Antonello; Snyder, Abraham Z.

    2014-01-01

    Resting-state functional magnetic resonance imaging (fMRI) has been used to study brain networks associated with both normal and pathological cognitive function. The objective of this work is to reliably compute resting state network (RSN) topography in single participants. We trained a supervised classifier (multi-layer perceptron; MLP) to associate blood oxygen level dependent (BOLD) correlation maps corresponding to pre-defined seeds with specific RSN identities. Hard classification of maps obtained from a priori seeds was highly reliable across new participants. Interestingly, continuous estimates of RSN membership retained substantial residual error. This result is consistent with the view that RSNs are hierarchically organized, and therefore not fully separable into spatially independent components. After training on a priori seed-based maps, we propagated voxel-wise correlation maps through the MLP to produce estimates of RSN membership throughout the brain. The MLP generated RSN topography estimates in individuals consistent with previous studies, even in brain regions not represented in the training data. This method could be used in future studies to relate RSN topography to other measures of functional brain organization (e.g., task-evoked responses, stimulation mapping, and deficits associated with lesions) in individuals. The multi-layer perceptron was directly compared to two alternative voxel classification procedures, specifically, dual regression and linear discriminant analysis; the perceptron generated more spatially specific RSN maps than either alternative. PMID:23735260

  13. Relative azimuth inversion by way of damped maximum correlation estimates

    USGS Publications Warehouse

    Ringler, A.T.; Edwards, J.D.; Hutt, C.R.; Shelly, F.

    2012-01-01

    Horizontal seismic data are utilized in a large number of Earth studies. Such work depends on the published orientations of the sensitive axes of seismic sensors relative to true North. These orientations can be estimated using a number of different techniques: SensOrLoc (Sensitivity, Orientation and Location), comparison to synthetics (Ekstrom and Busby, 2008), or by way of magnetic compass. Current methods for finding relative station azimuths are unable to do so with arbitrary precision quickly because of limitations in the algorithms (e.g. grid search methods). Furthermore, in order to determine instrument orientations during station visits, it is critical that any analysis software be easily run on a large number of different computer platforms and the results be obtained quickly while on site. We developed a new technique for estimating relative sensor azimuths by inverting for the orientation with the maximum correlation to a reference instrument, using a non-linear parameter estimation routine. By making use of overlapping windows, we are able to make multiple azimuth estimates, which helps to identify the confidence of our azimuth estimate, even when the signal-to-noise ratio (SNR) is low. Finally, our algorithm has been written as a stand-alone, platform independent, Java software package with a graphical user interface for reading and selecting data segments to be analyzed.
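The maximum-correlation idea can be sketched as follows, with iterative grid refinement standing in for the authors' non-linear parameter estimation routine (synthetic, noise-free data; all names are illustrative):

```python
import numpy as np

def derotate(n, e, theta):
    """Rotate recorded horizontal components back through theta (radians)."""
    return n * np.cos(theta) - e * np.sin(theta)

def correlation(a, b):
    a = a - a.mean(); b = b - b.mean()
    return (a @ b) / np.sqrt((a @ a) * (b @ b))

def estimate_azimuth(ref_n, test_n, test_e, iters=6):
    """Relative azimuth maximizing correlation with the reference north
    channel, found by iterative grid refinement."""
    center, width = 0.0, 2 * np.pi
    for _ in range(iters):
        thetas = center + np.linspace(-width / 2, width / 2, 181)
        corrs = [correlation(ref_n, derotate(test_n, test_e, th)) for th in thetas]
        center = thetas[int(np.argmax(corrs))]
        width /= 30.0
    return center % (2 * np.pi)

# Synthetic check: ground motion (gn, ge) seen by a sensor rotated 25 degrees.
t = np.arange(0.0, 10.0, 0.005)
gn = np.sin(2 * np.pi * 0.5 * t)
ge = np.cos(2 * np.pi * 1.3 * t)
az_true = np.deg2rad(25.0)
test_n = gn * np.cos(az_true) + ge * np.sin(az_true)
test_e = -gn * np.sin(az_true) + ge * np.cos(az_true)
az = estimate_azimuth(gn, test_n, test_e)
```

Running the same estimate over overlapping time windows, as the abstract describes, yields a distribution of azimuths whose spread indicates the confidence of the estimate.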

  14. Obtaining appropriate interval estimates for age when multiple indicators are used: evaluation of an ad-hoc procedure.

    PubMed

    Fieuws, Steffen; Willems, Guy; Larsen-Tangmose, Sara; Lynnerup, Niels; Boldsen, Jesper; Thevissen, Patrick

    2016-03-01

    When an estimate of age is needed, typically multiple indicators are present, as found in skeletal or dental information. There exists a vast literature on approaches to estimate age from such multivariate data. Application of Bayes' rule has been proposed to overcome drawbacks of classical regression models, but becomes less trivial as soon as the number of indicators increases. Each of the age indicators can lead to a different point estimate ("the most plausible value for age") and a different prediction interval ("the range of possible values"). The major challenge in the combination of multiple indicators is not the calculation of a combined point estimate for age but the construction of an appropriate prediction interval. Ignoring the correlation between the age indicators results in intervals that are too small. Boldsen et al. (2002) presented an ad-hoc procedure to construct an approximate confidence interval without the need to model the multivariate correlation structure between the indicators. The aim of the present paper is to draw attention to this pragmatic approach and to evaluate its performance in a practical setting. This is all the more needed since recent publications ignore the need for interval estimation. To illustrate and evaluate the method, the third molar scores of Köhler et al. (1995) are used to estimate age in a dataset of 3200 male subjects in the juvenile age range.

  15. Excitations for Rapidly Estimating Flight-Control Parameters

    NASA Technical Reports Server (NTRS)

    Moes, Tim; Smith, Mark; Morelli, Gene

    2006-01-01

    A flight test on an F-15 airplane was performed to evaluate the utility of prescribed simultaneous independent surface excitations (PreSISE) for real-time estimation of flight-control parameters, including stability and control derivatives. The ability to extract these derivatives in nearly real time is needed to support flight demonstration of intelligent flight-control system (IFCS) concepts under development at NASA, in academia, and in industry. Traditionally, flight maneuvers have been designed and executed to obtain estimates of stability and control derivatives by use of a post-flight analysis technique. For an IFCS, it is required to be able to modify control laws in real time for an aircraft that has been damaged in flight (because of combat, weather, or a system failure). The flight test included PreSISE maneuvers, during which all desired control surfaces are excited simultaneously, but at different frequencies, resulting in aircraft motions about all coordinate axes. The objectives of the test were to obtain data for post-flight analysis and to perform the analysis to determine: 1) The accuracy of derivatives estimated by use of PreSISE, 2) The required durations of PreSISE inputs, and 3) The minimum required magnitudes of PreSISE inputs. The PreSISE inputs in the flight test consisted of stacked sine-wave excitations at various frequencies, including symmetric and differential excitations of canard and stabilator control surfaces and excitations of aileron and rudder control surfaces of a highly modified F-15 airplane. Small, medium, and large excitations were tested in 15-second maneuvers at subsonic, transonic, and supersonic speeds. Typical excitations are shown in Figure 1. Flight-test data were analyzed by use of pEst, which is an industry-standard output-error technique developed by Dryden Flight Research Center. Data were also analyzed by use of Fourier-transform regression (FTR), which was developed for onboard, real-time estimation of the

  16. Thermodynamic-ensemble independence of solvation free energy.

    PubMed

    Chong, Song-Ho; Ham, Sihyun

    2015-02-10

    Solvation free energy is the fundamental thermodynamic quantity in solution chemistry. Recently, it has been suggested that the partial molar volume correction is necessary to convert the solvation free energy determined in different thermodynamic ensembles. Here, we demonstrate ensemble-independence of the solvation free energy on general thermodynamic grounds. Theoretical estimates of the solvation free energy based on the canonical or grand-canonical ensemble are pertinent to experiments carried out under constant pressure without any conversion.

  17. Independent Qualification of the CIAU Tool Based on the Uncertainty Estimate in the Prediction of Angra 1 NPP Inadvertent Load Rejection Transient

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Borges, Ronaldo C.; D'Auria, Francesco; Alvim, Antonio Carlos M.

    2002-07-01

    The Code with the capability of Internal Assessment of Uncertainty (CIAU) is a tool proposed by the 'Dipartimento di Ingegneria Meccanica, Nucleare e della Produzione (DIMNP)' of the University of Pisa. Other institutions, including the nuclear regulatory body of Brazil, 'Comissao Nacional de Energia Nuclear', contributed to the development of the tool. The CIAU aims at providing the currently available Relap5/Mod3.2 system code with the integrated capability of performing not only relevant transient calculations but also the related estimates of uncertainty bands. The Uncertainty Methodology based on Accuracy Extrapolation (UMAE) is used to characterize the uncertainty in the prediction of system code calculations for light water reactors and is internally coupled with the above system code. Following an overview of the CIAU development, the present paper deals with the independent qualification of the tool. The qualification test is performed by estimating the uncertainty bands that should envelope the prediction of the Angra 1 NPP transient RES-11.99, originated by an inadvertent complete load rejection that caused the reactor scram when the unit was operating at 99% of nominal power. The current limitation of the 'error' database implemented into the CIAU prevented a final demonstration of the qualification. However, all the steps of the qualification process are demonstrated. (authors)

  18. Conformational states and folding pathways of peptides revealed by principal-independent component analyses.

    PubMed

    Nguyen, Phuong H

    2007-05-15

    Principal component analysis is a powerful method for projecting the multidimensional conformational space of peptides or proteins onto lower dimensional subspaces in which the main conformations are present, making it easier to reveal the structures of molecules from e.g. molecular dynamics simulation trajectories. However, the identification of all conformational states is still difficult if the subspaces consist of more than two dimensions. This is mainly due to the fact that the principal components are not independent of each other, and states in the subspaces cannot be visualized. In this work, we propose a simple and fast scheme that allows one to obtain all conformational states in the subspaces. The basic idea is that instead of directly identifying the states in the subspace spanned by the principal components, we first transform this subspace into another subspace formed by components that are independent of one another. These independent components are obtained from the principal components by employing the independent component analysis method. Because of the independence between components, all states in this new subspace are defined as all possible combinations of the states obtained from each single independent component. This makes the conformational analysis much simpler. We test the performance of the method by analyzing the conformations of the glycine tripeptide and the alanine hexapeptide. The analyses show that our method is simple and quickly reveals all conformational states in the subspaces. The folding pathways between the identified states of the alanine hexapeptide are analyzed and discussed in some detail. 2007 Wiley-Liss, Inc.
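The transform-to-independent-components step can be illustrated on synthetic data. The sketch below implements a minimal deflationary FastICA (one common ICA algorithm; the abstract does not specify which variant the authors use) to unmix two correlated "conformational coordinates":

```python
import numpy as np

rng = np.random.default_rng(0)

def whiten(x):
    """Center and whiten a (components x samples) data matrix."""
    x = x - x.mean(axis=1, keepdims=True)
    vals, vecs = np.linalg.eigh(np.cov(x))
    return (vecs @ np.diag(vals ** -0.5) @ vecs.T) @ x

def fastica(x, n_iter=200):
    """Minimal deflationary FastICA with a tanh nonlinearity."""
    z = whiten(x)
    d, n = z.shape
    W = np.zeros((d, d))
    for i in range(d):
        w = rng.normal(size=d)
        w /= np.linalg.norm(w)
        for _ in range(n_iter):
            g = np.tanh(w @ z)
            w = (z * g).mean(axis=1) - (1.0 - g ** 2).mean() * w
            w -= W[:i].T @ (W[:i] @ w)      # deflate against found components
            w /= np.linalg.norm(w)
        W[i] = w
    return W @ z

# Two independent "coordinates" mixed into correlated observables, as the
# correlated principal components of the abstract would be.
t = np.linspace(0.0, 8.0 * np.pi, 4000)
s = np.vstack([np.sin(t), np.sign(np.cos(3.0 * t))])
mixed = np.array([[1.0, 0.6], [0.4, 1.0]]) @ s
recovered = fastica(mixed)
```

Each recovered component matches one source up to sign and permutation, so states can be read off one independent axis at a time, which is exactly the simplification the paper exploits.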

  19. Temperature and redox path of biotite-bearing intrusives: A method of estimation applied to S- and I-type granites from Australia

    NASA Astrophysics Data System (ADS)

    Burkhard, Dorothee J. M.

    1991-05-01

    Estimations of the oxidation state and of the development of ƒO2 during magmatic evolution are usually not possible because ƒO2 is a function of temperature (and pressure). If two independent equations of the form ƒO2 = ƒO2(T) can be obtained, ƒO2 and the corresponding temperature can be estimated. For biotite-bearing rocks an estimation of ƒO2(bio) can be combined with an estimation of ƒO2(rock). This latter estimation requires an extrapolation of high-temperature data because low-temperature data are not available. The combination of the two equations provides a quadratic equation in T with the common (negative) solution T_int = 1/(4c2) {−k1 − 2c1 − √[(k1 + 2c1)^2 + 8c2k2]}, which permits back-calculation of ƒO2. The usefulness of the method is demonstrated for typical S-type (ilmenite) and I-type (magnetite) granites from Australia. Two distinct oxidation states are obtained. It is suggested that the availability of H2O during granite emplacement largely determines ƒO2 conditions.

  20. A gauge-independent zeroth-order regular approximation to the exact relativistic Hamiltonian—Formulation and applications

    NASA Astrophysics Data System (ADS)

    Filatov, Michael; Cremer, Dieter

    2005-01-01

    A simple modification of the zeroth-order regular approximation (ZORA) in relativistic theory is suggested to suppress its erroneous gauge dependence to a high level of approximation. The method, coined gauge-independent ZORA (ZORA-GI), can be easily installed in any existing nonrelativistic quantum chemical package by programming simple one-electron matrix elements for the quasirelativistic Hamiltonian. Results of benchmark calculations obtained with ZORA-GI at the Hartree-Fock (HF) and second-order Møller-Plesset perturbation theory (MP2) level for dihalogens X2 (X=F,Cl,Br,I,At) are in good agreement with the results of four-component relativistic calculations (HF level) and experimental data (MP2 level). ZORA-GI calculations based on MP2 or coupled-cluster theory with single and double excitations and a perturbative inclusion of triple excitations [CCSD(T)] lead to accurate atomization energies and molecular geometries for the tetroxides of group VIII elements. With ZORA-GI/CCSD(T), an improved estimate for the atomization energy of hassium (Z=108) tetroxide is obtained.

  1. Building unbiased estimators from non-gaussian likelihoods with application to shear estimation

    DOE PAGES

    Madhavacheril, Mathew S.; McDonald, Patrick; Sehgal, Neelima; ...

    2015-01-15

    We develop a general framework for generating estimators of a given quantity which are unbiased to a given order in the difference between the true value of the underlying quantity and the fiducial position in theory space around which we expand the likelihood. We apply this formalism to rederive the optimal quadratic estimator and show how the replacement of the second derivative matrix with the Fisher matrix is a generic way of creating an unbiased estimator (assuming the choice of the fiducial model is independent of data). Next we apply the approach to estimation of shear lensing, closely following the work of Bernstein and Armstrong (2014). Our first order estimator reduces to their estimator in the limit of zero shear, but it also naturally allows for the case of non-constant shear and the easy calculation of correlation functions or power spectra using standard methods. Both our first-order estimator and Bernstein and Armstrong's estimator exhibit a bias which is quadratic in true shear. Our third-order estimator is, at least in the realm of the toy problem of Bernstein and Armstrong, unbiased to 0.1% in relative shear errors Δg/g for shears up to |g| = 0.2.

  2. Building unbiased estimators from non-Gaussian likelihoods with application to shear estimation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Madhavacheril, Mathew S.; Sehgal, Neelima; McDonald, Patrick

    2015-01-01

    We develop a general framework for generating estimators of a given quantity which are unbiased to a given order in the difference between the true value of the underlying quantity and the fiducial position in theory space around which we expand the likelihood. We apply this formalism to rederive the optimal quadratic estimator and show how the replacement of the second derivative matrix with the Fisher matrix is a generic way of creating an unbiased estimator (assuming the choice of the fiducial model is independent of data). Next we apply the approach to estimation of shear lensing, closely following the work of Bernstein and Armstrong (2014). Our first order estimator reduces to their estimator in the limit of zero shear, but it also naturally allows for the case of non-constant shear and the easy calculation of correlation functions or power spectra using standard methods. Both our first-order estimator and Bernstein and Armstrong's estimator exhibit a bias which is quadratic in true shear. Our third-order estimator is, at least in the realm of the toy problem of Bernstein and Armstrong, unbiased to 0.1% in relative shear errors Δg/g for shears up to |g|=0.2.

  3. Using independent component analysis for electrical impedance tomography

    NASA Astrophysics Data System (ADS)

    Yan, Peimin; Mo, Yulong

    2004-05-01

    Independent component analysis (ICA) is a way to resolve signals into independent components based on the statistical characteristics of the signals. It is a method for factoring probability densities of measured signals into a set of densities that are as statistically independent as possible under the assumptions of a linear model. Electrical impedance tomography (EIT) is used to detect variations in the electric conductivity of the human body. Because the conductivity distribution varies throughout the body, EIT produces multi-channel data. To recover the information contained at different tissue locations, it is necessary to image the individual conductivity distributions. In this paper we apply ICA to EIT on the signal subspace (individual conductivity distribution); using ICA, the signal subspace is decomposed into statistically independent components. The individual conductivity distribution is then reconstructed via the sensitivity theorem. Computer simulations show that the full information contained in the multi-conductivity distribution can be obtained by this method.
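    A minimal numpy sketch of the ICA step the abstract relies on: whitening followed by a symmetric fixed-point (FastICA-style) iteration. The two-channel mixture below is synthetic, not real impedance data:

```python
import numpy as np

def whiten(X):
    # Center and decorrelate the mixed channels (rows = channels).
    X = X - X.mean(axis=1, keepdims=True)
    cov = X @ X.T / X.shape[1]
    d, E = np.linalg.eigh(cov)
    return (E @ np.diag(d ** -0.5) @ E.T) @ X

def fastica_2ch(X, n_iter=200):
    # Symmetric FastICA with a tanh nonlinearity, two channels.
    Z = whiten(X)
    W = np.eye(2)
    for _ in range(n_iter):
        WZ = W @ Z
        g, g_prime = np.tanh(WZ), 1 - np.tanh(WZ) ** 2
        W_new = g @ Z.T / Z.shape[1] - np.diag(g_prime.mean(axis=1)) @ W
        # Symmetric decorrelation: W <- (W W^T)^{-1/2} W
        d, E = np.linalg.eigh(W_new @ W_new.T)
        W = E @ np.diag(d ** -0.5) @ E.T @ W_new
    return W @ Z

t = np.linspace(0, 8 * np.pi, 2000)
s = np.vstack([np.sin(t), np.sign(np.cos(3 * t))])  # two independent sources
A = np.array([[1.0, 0.6], [0.4, 1.0]])              # unknown linear mixing
recovered = fastica_2ch(A @ s)
```

    The recovered rows match the sources up to the usual ICA ambiguities of sign, scale, and permutation.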

  4. Quaternion-Based Unscented Kalman Filter for Accurate Indoor Heading Estimation Using Wearable Multi-Sensor System

    PubMed Central

    Yuan, Xuebing; Yu, Shuai; Zhang, Shengzhi; Wang, Guoping; Liu, Sheng

    2015-01-01

    Inertial navigation based on micro-electromechanical system (MEMS) inertial measurement units (IMUs) has attracted numerous researchers due to its high reliability and independence. Heading estimation, one of the most important parts of inertial navigation, has been a research focus in this field. Heading estimation using magnetometers is perturbed by magnetic disturbances, such as indoor concrete structures and electronic equipment. MEMS gyroscopes are also used for heading estimation; however, their accuracy degrades over time due to drift. In this paper, a wearable multi-sensor system is designed to obtain high-accuracy indoor heading estimation, based on a quaternion-based unscented Kalman filter (UKF) algorithm. The proposed multi-sensor system, comprising one three-axis accelerometer, three single-axis gyroscopes, one three-axis magnetometer, and one microprocessor, minimizes size and cost. The system was fixed on the waist of a pedestrian and on a quadrotor unmanned aerial vehicle (UAV) for heading estimation experiments in our college building. The results show mean heading estimation errors of less than 10° for the pedestrian-mounted system and less than 5° for the quadrotor UAV, compared to the reference path. PMID:25961384

  5. Kalman filter to update forest cover estimates

    Treesearch

    Raymond L. Czaplewski

    1990-01-01

    The Kalman filter is a statistical estimator that combines a time-series of independent estimates, using a prediction model that describes expected changes in the state of a system over time. An expensive inventory can be updated using model predictions that are adjusted with more recent, but less expensive and precise, monitoring data. The concepts of the Kalman...
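    The combine-and-update step described above is the scalar Kalman update: a precise but dated inventory is propagated by a change model and then corrected with cheaper, noisier monitoring data. A sketch with hypothetical forest-cover numbers (not from the record):

```python
def kalman_update(x_prior, p_prior, z, r):
    """Combine a model-predicted estimate (x_prior, variance p_prior)
    with an independent measurement (z, variance r)."""
    k = p_prior / (p_prior + r)           # Kalman gain
    x_post = x_prior + k * (z - x_prior)  # updated estimate
    p_post = (1 - k) * p_prior            # updated (smaller) variance
    return x_post, p_post

# Hypothetical: an old inventory of 62% cover, variance 4; a predicted
# change of -1% adds process variance 1 before the update.
x_pred, p_pred = 62.0 - 1.0, 4.0 + 1.0
# Cheap new monitoring estimate: 59% cover, variance 9.
x_new, p_new = kalman_update(x_pred, p_pred, 59.0, 9.0)
```

    The updated variance is smaller than both the prediction variance and the monitoring variance, which is the point of fusing the two independent estimates.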

  6. Selection of independent components based on cortical mapping of electromagnetic activity

    NASA Astrophysics Data System (ADS)

    Chan, Hui-Ling; Chen, Yong-Sheng; Chen, Li-Fen

    2012-10-01

    Independent component analysis (ICA) has been widely used to attenuate interference caused by noise components from the electromagnetic recordings of brain activity. However, the scalp topographies and associated temporal waveforms provided by ICA may be insufficient to distinguish functional components from artifactual ones. In this work, we proposed two component selection methods, both of which first estimate the cortical distribution of the brain activity for each component, and then determine the functional components based on the parcellation of brain activity mapped onto the cortical surface. Among all independent components, the first method can identify the dominant components, which have strong activity in the selected dominant brain regions, whereas the second method can identify those inter-regional associating components, which have similar component spectra between a pair of regions. For a targeted region, its component spectrum enumerates the amplitudes of its parceled brain activity across all components. The selected functional components can be remixed to reconstruct the focused electromagnetic signals for further analysis, such as source estimation. Moreover, the inter-regional associating components can be used to estimate the functional brain network. The accuracy of the cortical activation estimation was evaluated on the data from simulation studies, whereas the usefulness and feasibility of the component selection methods were demonstrated on the magnetoencephalography data recorded from a gender discrimination study.

  7. An investigation of new methods for estimating parameter sensitivities

    NASA Technical Reports Server (NTRS)

    Beltracchi, Todd J.; Gabriele, Gary A.

    1988-01-01

    Parameter sensitivity is defined as the estimation of changes in the modeling functions and the design variables due to small changes in the fixed parameters of the formulation. Current methods for estimating parameter sensitivities either require difficult-to-obtain second-order information or do not return reliable estimates of the derivatives. Additionally, all the methods assume that the set of active constraints does not change in a neighborhood of the estimation point. If the active set does in fact change, then any extrapolations based on these derivatives may be in error. The objective here is to investigate more efficient new methods for estimating parameter sensitivities when the active set changes. The new method is based on the recursive quadratic programming (RQP) method used in conjunction with a differencing formula to produce estimates of the sensitivities. It is compared to existing methods and is shown to be very competitive in terms of the number of function evaluations required. In terms of accuracy, the method is shown to be equivalent to a modified version of the Kuhn-Tucker method, where the Hessian of the Lagrangian is estimated using the BFS update employed by the RQP algorithm. Initial testing on a test set with known sensitivities demonstrates that the method can accurately calculate the parameter sensitivity. To handle changes in the active set, a deflection algorithm is proposed for those cases where the new set of active constraints remains linearly independent. For those cases where dependencies occur, a directional derivative is proposed. A few simple examples are included for the algorithm, but extensive testing has not yet been performed.
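    The abstract does not spell out its differencing formula; a generic central-difference sensitivity estimate, applied to a made-up model response, illustrates the basic ingredient:

```python
def central_diff_sensitivity(f, p, h=1e-5):
    """Estimate df/dp by central differencing around the fixed parameter p."""
    return (f(p + h) - f(p - h)) / (2 * h)

# Hypothetical model response as a function of a fixed parameter p;
# the analytic sensitivity is 2*p + 3, handy for checking.
f = lambda p: p**2 + 3 * p + 1
sens = central_diff_sensitivity(f, 2.0)
```

    Central differencing is second-order accurate, so for this quadratic response the estimate matches the analytic derivative (7.0 at p = 2) up to roundoff.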

  8. Local Estimators for Spacecraft Formation Flying

    NASA Technical Reports Server (NTRS)

    Fathpour, Nanaz; Hadaegh, Fred Y.; Mesbahi, Mehran; Nabi, Marzieh

    2011-01-01

    A formation estimation architecture for formation flying builds upon the local information exchange among multiple local estimators. Spacecraft formation flying involves the coordination of states among multiple spacecraft through relative sensing, inter-spacecraft communication, and control. Most existing formation flying estimation algorithms can only be supported via highly centralized, all-to-all, static relative sensing. New algorithms are needed that are scalable, modular, and robust to variations in the topology and link characteristics of the formation exchange network. These distributed algorithms should rely on a local information-exchange network, relaxing the assumptions of existing algorithms. In this research, it was shown that only local observability is required to design a formation estimator and control law. The approach relies on breaking up the overall information-exchange network into a sequence of local subnetworks, and invoking an agreement-type filter to reach consensus among local estimators within each local network. State estimates were obtained from a set of local measurements that were passed through a set of communicating Kalman filters to reach an overall state estimate for the formation. An optimization approach was also presented by means of which diffused estimates over the network can be incorporated in the local estimates obtained by each estimator via local measurements. This approach compares favorably with that obtained by a centralized Kalman filter, which requires complete knowledge of the raw measurements available to each estimator.

  9. Independence.

    ERIC Educational Resources Information Center

    Stephenson, Margaret E.

    2000-01-01

    Discusses the four planes of development and the periods of creation and crystallization within each plane. Identifies the type of independence that should be achieved by the end of the first two planes of development. Maintains that it is through individual work on the environment that one achieves independence. (KB)

  10. A random sampling approach for robust estimation of tissue-to-plasma ratio from extremely sparse data.

    PubMed

    Chu, Hui-May; Ette, Ene I

    2005-09-02

    This study was performed to develop a new nonparametric approach for the estimation of robust tissue-to-plasma ratio from extremely sparsely sampled paired data (i.e., one sample each from plasma and tissue per subject). Tissue-to-plasma ratio was estimated from paired/unpaired experimental data using the independent time points approach, area under the curve (AUC) values calculated with the naive data averaging approach, and AUC values calculated using sampling-based approaches (e.g., the pseudoprofile-based bootstrap [PpbB] approach and the random sampling approach [our proposed approach]). The random sampling approach involves the use of a 2-phase algorithm. The convergence of the sampling/resampling approaches was investigated, as well as the robustness of the estimates produced by different approaches. To evaluate the latter, new data sets were generated by introducing outlier(s) into the real data set. One to two concentration values were inflated by 10% to 40% from their original values to produce the outliers. Tissue-to-plasma ratios computed using the independent time points approach varied between 0 and 50 across time points. The ratio obtained from AUC values acquired using the naive data averaging approach was not associated with any measure of uncertainty or variability. Calculating the ratio without regard to pairing yielded poorer estimates. The random sampling and pseudoprofile-based bootstrap approaches yielded tissue-to-plasma ratios with uncertainty and variability. However, the random sampling approach, because of the 2-phase nature of its algorithm, yielded more robust estimates and required fewer replications. Therefore, a 2-phase random sampling approach is proposed for the robust estimation of tissue-to-plasma ratio from extremely sparsely sampled data.
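    A hedged sketch of the AUC-based ratio with a generic resampling step attached. The concentrations below are invented, and the loop is a plain perturbation resample for illustration, not the paper's 2-phase algorithm:

```python
import numpy as np

rng = np.random.default_rng(42)
times = np.array([0.5, 1.0, 2.0, 4.0, 8.0])       # h; one paired sample per subject
plasma = np.array([12.0, 10.1, 7.9, 4.2, 1.3])    # hypothetical concentrations
tissue = np.array([30.0, 27.5, 20.0, 11.0, 3.8])

def auc(t, y):
    # Trapezoidal area under the concentration-time curve.
    return float(np.sum((y[1:] + y[:-1]) * np.diff(t)) / 2)

point = auc(times, tissue) / auc(times, plasma)   # tissue-to-plasma ratio

# Perturb each paired concentration and recompute the ratio, attaching
# a measure of variability that naive data averaging cannot provide.
boots = [auc(times, tissue * rng.normal(1.0, 0.1, times.size))
         / auc(times, plasma * rng.normal(1.0, 0.1, times.size))
         for _ in range(500)]
lo_ci, hi_ci = np.percentile(boots, [2.5, 97.5])
```

    The resampled interval (lo_ci, hi_ci) brackets the point estimate, which is the qualitative benefit the study attributes to sampling-based approaches over naive averaging.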

  11. INVESTIGATING DIFFERENCES IN BRAIN FUNCTIONAL NETWORKS USING HIERARCHICAL COVARIATE-ADJUSTED INDEPENDENT COMPONENT ANALYSIS.

    PubMed

    Shi, Ran; Guo, Ying

    2016-12-01

    Human brains perform tasks via complex functional networks consisting of separated brain regions. A popular approach to characterize brain functional networks in fMRI studies is independent component analysis (ICA), which is a powerful method to reconstruct latent source signals from their linear mixtures. In many fMRI studies, an important goal is to investigate how brain functional networks change according to specific clinical and demographic variabilities. Existing ICA methods, however, cannot directly incorporate covariate effects in ICA decomposition. Heuristic post-ICA analysis to address this need can be inaccurate and inefficient. In this paper, we propose a hierarchical covariate-adjusted ICA (hc-ICA) model that provides a formal statistical framework for estimating covariate effects and testing differences between brain functional networks. Our method provides a more reliable and powerful statistical tool for evaluating group differences in brain functional networks while appropriately controlling for potential confounding factors. We present an analytically tractable EM algorithm to obtain maximum likelihood estimates of our model. We also develop a subspace-based approximate EM that runs significantly faster while retaining high accuracy. To test the differences in functional networks, we introduce a voxel-wise approximate inference procedure which eliminates the need of computationally expensive covariance matrix estimation and inversion. We demonstrate the advantages of our methods over the existing method via simulation studies. We apply our method to an fMRI study to investigate differences in brain functional networks associated with post-traumatic stress disorder (PTSD).

  12. 21 CFR 1315.34 - Obtaining an import quota.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 21 Food and Drugs 9 2010-04-01 2010-04-01 false Obtaining an import quota. 1315.34 Section 1315.34 Food and Drugs DRUG ENFORCEMENT ADMINISTRATION, DEPARTMENT OF JUSTICE IMPORTATION AND PRODUCTION QUOTAS... imports, the estimated medical, scientific, and industrial needs of the United States, the establishment...

  13. Estimation of Greenland's Ice Sheet Mass Balance Using ICESat and GRACE Data

    NASA Astrophysics Data System (ADS)

    Slobbe, D.; Ditmar, P.; Lindenbergh, R.

    2007-12-01

    Data of the GRACE gravity mission and the ICESat laser altimetry mission are used to create two independent estimates of Greenland's ice sheet mass balance over the full measurement period. For ICESat data, a processing strategy is developed using the elevation differences of geometrically overlapping footprints of both crossing and repeated tracks. The dataset is cleaned using quality flags defined by the GLAS science team. The cleaned dataset reveals some strong, spatially correlated signals that are shown to be related to physical phenomena. Different processing strategies are used to convert the observed temporal height differences to mass changes for 6 different drainage systems, further divided into a region above and below 2000 meter elevation. The results are compared with other altimetry based mass balance estimates. In general, the obtained results confirm trends discovered by others, but we also show that the choice of processing strategy strongly influences our results, especially for the areas below 2000 meter. Furthermore, GRACE based monthly variations of the Earth's gravity field as processed by CNES, CSR, GFZ and DEOS are used to estimate the mass balance change for North and South Greenland. It is shown that our results are comparable with recently published GRACE estimates (mascon solutions). On the other hand, the estimates based on GRACE data are only partly confirmed by the ICESat estimates. Possible explanations for the obvious differences will be discussed.

  14. Classification of independent components of EEG into multiple artifact classes.

    PubMed

    Frølich, Laura; Andersen, Tobias S; Mørup, Morten

    2015-01-01

    In this study, we aim to automatically identify multiple artifact types in EEG. We used multinomial regression to classify independent components of EEG data, selecting from 65 spatial, spectral, and temporal features of independent components using forward selection. The classifier identified neural and five nonneural types of components. Between subjects within studies, high classification performances were obtained. Between studies, however, classification was more difficult. For neural versus nonneural classifications, performance was on par with previous results obtained by others. We found that automatic separation of multiple artifact classes is possible with a small feature set. Our method can reduce manual workload and allow for the selective removal of artifact classes. Identifying artifacts during EEG recording may be used to instruct subjects to refrain from activity causing them. Copyright © 2014 Society for Psychophysiological Research.

  15. Multi-spectrometer calibration transfer based on independent component analysis.

    PubMed

    Liu, Yan; Xu, Hao; Xia, Zhenzhen; Gong, Zhiyong

    2018-02-26

    Calibration transfer is indispensable for practical applications of near infrared (NIR) spectroscopy due to the need for precise and consistent measurements across different spectrometers. In this work, a method for multi-spectrometer calibration transfer is described based on independent component analysis (ICA). A spectral matrix is first obtained by aligning the spectra measured on different spectrometers. Then, by using independent component analysis, the aligned spectral matrix is decomposed into the mixing matrix and the independent components of different spectrometers. The differing measurements between spectrometers can then be standardized by correcting the coefficients within the independent components. Two NIR datasets of corn and edible oil samples measured with three and four spectrometers, respectively, were used to test the reliability of this method. The results of both datasets reveal that spectra measured on different spectrometers can be transferred simultaneously and that partial least squares (PLS) models built with measurements from one spectrometer can correctly predict spectra transferred from another.

  16. Uncertainties in obtaining high reliability from stress-strength models

    NASA Technical Reports Server (NTRS)

    Neal, Donald M.; Matthews, William T.; Vangel, Mark G.

    1992-01-01

    There has been a recent interest in determining high statistical reliability in risk assessment of aircraft components. The potential consequences are identified of incorrectly assuming a particular statistical distribution for stress or strength data used in obtaining the high reliability values. The reliability is computed as the probability of the strength being greater than the stress over the range of stress values. This method is often referred to as the stress-strength model. A sensitivity analysis was performed involving a comparison of reliability results in order to evaluate the effects of assuming specific statistical distributions. Both known population distributions, and those that differed slightly from the known, were considered. Results showed substantial differences in reliability estimates even for almost nondetectable differences in the assumed distributions. These differences represent a potential problem in using the stress-strength model for high reliability computations, since in practice it is impossible to ever know the exact (population) distribution. An alternative reliability computation procedure is examined involving determination of a lower bound on the reliability values using extreme value distributions. This procedure reduces the possibility of obtaining nonconservative reliability estimates. Results indicated the method can provide conservative bounds when computing high reliability.
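    The stress-strength reliability P(strength > stress), and its sensitivity to the assumed distribution, can be sketched for the normal-normal case (all numbers hypothetical):

```python
import math
import numpy as np

def reliability_mc(mu_s, sd_s, mu_l, sd_l, n=200_000, seed=1):
    """Monte Carlo estimate of P(strength > stress)."""
    rng = np.random.default_rng(seed)
    return float(np.mean(rng.normal(mu_s, sd_s, n) > rng.normal(mu_l, sd_l, n)))

def reliability_exact(mu_s, sd_s, mu_l, sd_l):
    # Closed form when both are normal: Phi(dmu / sqrt(var_s + var_l)).
    z = (mu_s - mu_l) / math.sqrt(sd_s**2 + sd_l**2)
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

r_mc = reliability_mc(100.0, 10.0, 80.0, 10.0)
r_exact = reliability_exact(100.0, 10.0, 80.0, 10.0)
# A modest change in the assumed stress spread shifts the reliability value:
r_wider = reliability_exact(100.0, 10.0, 80.0, 14.0)
```

    Comparing r_exact and r_wider shows the abstract's point in miniature: a small change in the assumed distribution moves the computed reliability noticeably, which matters most in the high-reliability tail.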

  17. Comparison of model estimated and measured direct-normal solar irradiance

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Halthore, R.N.; Schwartz, S.E.; Michalsky, J.J.

    1997-12-01

    Direct-normal solar irradiance (DNSI), the energy in the solar spectrum incident in unit time at the Earth's surface on a unit area perpendicular to the direction to the Sun, depends only on atmospheric extinction of solar energy without regard to the details of the extinction, whether absorption or scattering. Here we report a set of closure experiments performed in north central Oklahoma in April 1996 under cloud-free conditions, wherein measured atmospheric composition and aerosol optical thickness are input to a radiative transfer model, MODTRAN 3, to estimate DNSI, which is then compared with measured values obtained with normal incidence pyrheliometers and absolute cavity radiometers. Uncertainty in aerosol optical thickness (AOT) dominates the uncertainty in the DNSI calculation. AOT measured by an independently calibrated Sun photometer and a rotating shadow-band radiometer agree to within the uncertainties of each measurement. For 36 independent comparisons the agreement between measured and model-estimated values of DNSI falls within the combined uncertainties in the measurement (0.3–0.7%) and model calculation (1.8%), albeit with a slight average model underestimate of (−0.18±0.94)%; for a DNSI of 839 W m−2 this corresponds to −1.5±7.9 W m−2. The agreement is nearly independent of air mass and water-vapor path abundance. These results thus establish the accuracy of the current knowledge of the solar spectrum, its integrated power, and the atmospheric extinction as a function of wavelength as represented in MODTRAN 3. An important consequence is that atmospheric absorption of short-wave energy is accurately parametrized in the model to within the above uncertainties. © 1997 American Geophysical Union

  18. The ACCE method: an approach for obtaining quantitative or qualitative estimates of residual confounding that includes unmeasured confounding

    PubMed Central

    Smith, Eric G.

    2015-01-01

    Background: Nonrandomized studies typically cannot account for confounding from unmeasured factors. Method: A method is presented that exploits the recently-identified phenomenon of "confounding amplification" to produce, in principle, a quantitative estimate of total residual confounding resulting from both measured and unmeasured factors. Two nested propensity score models are constructed that differ only in the deliberate introduction of an additional variable(s) that substantially predicts treatment exposure. Residual confounding is then estimated by dividing the change in treatment effect estimate between models by the degree of confounding amplification estimated to occur, adjusting for any association between the additional variable(s) and outcome. Results: Several hypothetical examples are provided to illustrate how the method produces a quantitative estimate of residual confounding if the method's requirements and assumptions are met. Previously published data is used to illustrate that, whether or not the method routinely provides precise quantitative estimates of residual confounding, the method appears to produce a valuable qualitative estimate of the likely direction and general size of residual confounding. Limitations: Uncertainties exist, including identifying the best approaches for: 1) predicting the amount of confounding amplification, 2) minimizing changes between the nested models unrelated to confounding amplification, 3) adjusting for the association of the introduced variable(s) with outcome, and 4) deriving confidence intervals for the method's estimates (although bootstrapping is one plausible approach). Conclusions: To this author's knowledge, it has not been previously suggested that the phenomenon of confounding amplification, if such amplification is as predictable as suggested by a recent simulation, provides a logical basis for estimating total residual confounding. The method's basic approach is

  19. The independent relationship between triglycerides and coronary heart disease.

    PubMed

    Morrison, Alan; Hokanson, John E

    2009-01-01

    The aim was to review epidemiologic studies to reassess whether serum levels of triglycerides should be considered independently of high-density lipoprotein-cholesterol (HDL-C) as a predictor of coronary heart disease (CHD). We systematically reviewed population-based cohort studies in which baseline serum levels of triglycerides and HDL-C were included as explanatory variables in multivariate analyses with the development of CHD (coronary events or coronary death) as dependent variable. A total of 32 unique reports describing 38 cohorts were included. The independent association between elevated triglycerides and risk of CHD was statistically significant in 16 of 30 populations without pre-existing CHD. Among populations with diabetes mellitus or pre-existing CHD, or the elderly, triglycerides were not significantly independently associated with CHD in any of 8 cohorts. Triglycerides and HDL-C were mutually exclusive predictors of coronary events in 12 of 20 analyses of patients without pre-existing CHD. Epidemiologic studies provide evidence of an association between triglycerides and the development of primary CHD independently of HDL-C. Evidence of an inverse relationship between triglycerides and HDL-C suggests that both should be considered in CHD risk estimation and as targets for intervention.

  20. Independent Research (IR) and Independent Exploratory Development (IED)

    DTIC Science & Technology

    1994-08-01

    in the Workplace. Independent research/independent exploratory development, IR/IED...Exclusion Rate Differences Over a Cut Score Domain, An Examination of Cognitive and Motivational Effects of Employee Interventions, and Cultural Diversity

  1. Detecting coupled collective motions in protein by independent subspace analysis

    NASA Astrophysics Data System (ADS)

    Sakuraba, Shun; Joti, Yasumasa; Kitao, Akio

    2010-11-01

    Protein dynamics evolves in a high-dimensional space, comprising anharmonic, strongly correlated motional modes. Such correlation often plays an important role in analyzing protein function. In order to identify significantly correlated collective motions, here we employ independent subspace analysis based on the subspace joint approximate diagonalization of eigenmatrices algorithm for the analysis of molecular dynamics (MD) simulation trajectories. From the 100 ns MD simulation of T4 lysozyme, we extract several independent subspaces in each of which collective modes are significantly correlated, and identify the other modes as independent. This method successfully detects the modes along which long-tailed non-Gaussian probability distributions are obtained. Based on the time cross-correlation analysis, we identified a series of events among domain motions and more localized motions in the protein, indicating the connection between the functionally relevant phenomena which have been independently revealed by experiments.

  2. Improved variance estimation of classification performance via reduction of bias caused by small sample size.

    PubMed

    Wickenberg-Bolin, Ulrika; Göransson, Hanna; Fryknäs, Mårten; Gustafsson, Mats G; Isaksson, Anders

    2006-03-13

    Supervised learning for classification of cancer employs a set of design examples to learn how to discriminate between tumors. In practice it is crucial to confirm that the classifier is robust with good generalization performance to new examples, or at least that it performs better than random guessing. A suggested alternative is to obtain a confidence interval of the error rate using repeated design and test sets selected from available examples. However, it is known that even in the ideal situation of repeated designs and tests with completely novel samples in each cycle, a small test set size leads to a large bias in the estimate of the true variance between design sets. Therefore different methods for small sample performance estimation such as a recently proposed procedure called Repeated Random Sampling (RSS) is also expected to result in heavily biased estimates, which in turn translates into biased confidence intervals. Here we explore such biases and develop a refined algorithm called Repeated Independent Design and Test (RIDT). Our simulations reveal that repeated designs and tests based on resampling in a fixed bag of samples yield a biased variance estimate. We also demonstrate that it is possible to obtain an improved variance estimate by means of a procedure that explicitly models how this bias depends on the number of samples used for testing. For the special case of repeated designs and tests using new samples for each design and test, we present an exact analytical expression for how the expected value of the bias decreases with the size of the test set. We show that via modeling and subsequent reduction of the small sample bias, it is possible to obtain an improved estimate of the variance of classifier performance between design sets. However, the uncertainty of the variance estimate is large in the simulations performed indicating that the method in its present form cannot be directly applied to small data sets.

  3. Analysis of percent density estimates from digital breast tomosynthesis projection images

    NASA Astrophysics Data System (ADS)

    Bakic, Predrag R.; Kontos, Despina; Zhang, Cuiping; Yaffe, Martin J.; Maidment, Andrew D. A.

    2007-03-01

    Women with dense breasts have an increased risk of breast cancer. Breast density is typically measured as the percent density (PD), the percentage of non-fatty (i.e., dense) tissue in breast images. Mammographic PD estimates vary, in part, due to the projective nature of mammograms. Digital breast tomosynthesis (DBT) is a novel radiographic method in which 3D images of the breast are reconstructed from a small number of projection (source) images, acquired at different positions of the x-ray focus. DBT provides superior visualization of breast tissue and has improved sensitivity and specificity as compared to mammography. Our long-term goal is to test the hypothesis that PD obtained from DBT is superior in estimating cancer risk compared with other modalities. As a first step, we have analyzed the PD estimates from DBT source projections since the results would be independent of the reconstruction method. We estimated PD from MLO mammograms (PD_M) and from individual DBT projections (PD_T). We observed good agreement between PD_M and PD_T from the central projection images of 40 women. This suggests that variations in breast positioning, dose, and scatter between mammography and DBT do not negatively affect PD estimation. The PD_T estimated from individual DBT projections of nine women varied with the angle between the projections. This variation is caused by the 3D arrangement of the breast dense tissue and the acquisition geometry.
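    Percent density as defined above (the percentage of dense tissue in the breast image) reduces to a ratio of segmentation-mask areas; a toy sketch with hypothetical masks:

```python
import numpy as np

def percent_density(dense_mask, breast_mask):
    """PD: percentage of segmented breast pixels labeled dense."""
    dense = np.logical_and(dense_mask, breast_mask).sum()
    return 100.0 * dense / breast_mask.sum()

breast = np.ones((4, 4), dtype=bool)   # toy segmented breast region
dense = np.zeros((4, 4), dtype=bool)
dense[:2, :2] = True                   # 4 of the 16 breast pixels are dense
pd = percent_density(dense, breast)
```

    The same computation applies whether the masks come from a mammogram or a DBT projection; only the segmentation producing them differs.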

  4. Effect of survey design and catch rate estimation on total catch estimates in Chinook salmon fisheries

    USGS Publications Warehouse

    McCormick, Joshua L.; Quist, Michael C.; Schill, Daniel J.

    2012-01-01

    Roving–roving and roving–access creel surveys are the primary techniques used to obtain information on harvest of Chinook salmon Oncorhynchus tshawytscha in Idaho sport fisheries. Once interviews are conducted using roving–roving or roving–access survey designs, mean catch rate can be estimated with the ratio-of-means (ROM) estimator, the mean-of-ratios (MOR) estimator, or the MOR estimator with exclusion of short-duration (≤0.5 h) trips. Our objective was to examine the relative bias and precision of total catch estimates obtained from use of the two survey designs and three catch rate estimators for Idaho Chinook salmon fisheries. Information on angling populations was obtained by direct visual observation of portions of Chinook salmon fisheries in three Idaho river systems over an 18-d period. Based on data from the angling populations, Monte Carlo simulations were performed to evaluate the properties of the catch rate estimators and survey designs. Among the three estimators, the ROM estimator provided the most accurate and precise estimates of mean catch rate and total catch for both roving–roving and roving–access surveys. On average, the root mean square error of simulated total catch estimates was 1.42 times greater and relative bias was 160.13 times greater for roving–roving surveys than for roving–access surveys. Length-of-stay bias and nonstationary catch rates in roving–roving surveys both appeared to affect catch rate and total catch estimates. Our results suggest that use of the ROM estimator in combination with an estimate of angler effort provided the least biased and most precise estimates of total catch for both survey designs. However, roving–access surveys were more accurate than roving–roving surveys for Chinook salmon fisheries in Idaho.
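    The three catch-rate estimators compared in the study can be written down directly; the interview numbers here are invented:

```python
import numpy as np

catch = np.array([0.0, 2.0, 1.0, 4.0])   # fish per interviewed angler
hours = np.array([0.5, 4.0, 2.0, 8.0])   # trip lengths (h)

rom = catch.sum() / hours.sum()                          # ratio-of-means rate
mor = float(np.mean(catch / hours))                      # mean-of-ratios rate
mor_long = float(np.mean((catch / hours)[hours > 0.5]))  # MOR, short trips excluded

total_effort = 1000.0                  # angler-hours from an effort survey
total_catch_rom = rom * total_effort   # expand a rate to a total-catch estimate
```

    The divergence between rom and mor in this toy data comes entirely from the short zero-catch trip, which is the length-of-stay effect the MOR-with-exclusion variant tries to mitigate.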

  5. Comparison of TOPEX/Poseidon orbit determination solutions obtained by the Goddard Space Flight Center Flight Dynamics Division and Precision Orbit Determination Teams

    NASA Technical Reports Server (NTRS)

    Doll, C.; Mistretta, G.; Hart, R.; Oza, D.; Cox, C.; Nemesure, M.; Bolvin, D.; Samii, Mina V.

    1993-01-01

    Orbit determination results are obtained by the Goddard Space Flight Center (GSFC) Flight Dynamics Division (FDD) using the Goddard Trajectory Determination System (GTDS) and a real-time extended Kalman filter estimation system to process Tracking and Data Relay Satellite (TDRS) System (TDRSS) measurements in support of the Ocean Topography Experiment (TOPEX)/Poseidon spacecraft navigation and health and safety operations. GTDS is the operational orbit determination system used by the FDD, and the extended Kalman filter was implemented in an analysis prototype system, the Real-Time Orbit Determination System/Enhanced (RTOD/E). The Precision Orbit Determination (POD) team within the GSFC Space Geodesy Branch generates an independent set of high-accuracy trajectories to support the TOPEX/Poseidon scientific data. These latter solutions use the Geodynamics (GEODYN) orbit determination system with laser ranging tracking data. The TOPEX/Poseidon trajectories were estimated for the October 22 - November 1, 1992, timeframe, for which the latest preliminary POD results were available. Independent assessments were made of the consistencies of solutions produced by the batch and sequential methods. The batch cases were assessed using overlap comparisons, while the sequential cases were assessed with covariances and the first measurement residuals. The batch least-squares and forward-filtered RTOD/E orbit solutions were compared with the definitive POD orbit solutions. The solution differences were generally less than 10 meters (m) for the batch least-squares and less than 18 m for the sequential estimation solutions. The differences among the POD, GTDS, and RTOD/E solutions can be traced to differences in modeling and tracking data types, which are being analyzed in detail.

  6. Accounting for imperfect detection of groups and individuals when estimating abundance.

    PubMed

    Clement, Matthew J; Converse, Sarah J; Royle, J Andrew

    2017-09-01

    If animals are independently detected during surveys, many methods exist for estimating animal abundance despite detection probabilities <1. Common estimators include double-observer models, distance sampling models and combined double-observer and distance sampling models (known as mark-recapture-distance-sampling models; MRDS). When animals reside in groups, however, the assumption of independent detection is violated. In this case, the standard approach is to account for imperfect detection of groups, while assuming that individuals within groups are detected perfectly. However, this assumption is often unsupported. We introduce an abundance estimator for grouped animals when detection of groups is imperfect and group size may be under-counted, but not over-counted. The estimator combines an MRDS model with an N-mixture model to account for imperfect detection of individuals. The new MRDS-Nmix model requires the same data as an MRDS model (independent detection histories, an estimate of distance to transect, and an estimate of group size), plus a second estimate of group size provided by the second observer. We extend the model to situations in which detection of individuals within groups declines with distance. We simulated 12 data sets and used Bayesian methods to compare the performance of the new MRDS-Nmix model to an MRDS model. Abundance estimates generated by the MRDS-Nmix model exhibited minimal bias and nominal coverage levels. In contrast, MRDS abundance estimates were biased low and exhibited poor coverage. Many species of conservation interest reside in groups and could benefit from an estimator that better accounts for imperfect detection. Furthermore, the ability to relax the assumption of perfect detection of individuals within detected groups may allow surveyors to re-allocate resources toward detection of new groups instead of extensive surveys of known groups. We believe the proposed estimator is feasible because the only additional field data

  7. Accounting for imperfect detection of groups and individuals when estimating abundance

    USGS Publications Warehouse

    Clement, Matthew J.; Converse, Sarah J.; Royle, J. Andrew

    2017-01-01

    If animals are independently detected during surveys, many methods exist for estimating animal abundance despite detection probabilities <1. Common estimators include double-observer models, distance sampling models and combined double-observer and distance sampling models (known as mark-recapture-distance-sampling models; MRDS). When animals reside in groups, however, the assumption of independent detection is violated. In this case, the standard approach is to account for imperfect detection of groups, while assuming that individuals within groups are detected perfectly. However, this assumption is often unsupported. We introduce an abundance estimator for grouped animals when detection of groups is imperfect and group size may be under-counted, but not over-counted. The estimator combines an MRDS model with an N-mixture model to account for imperfect detection of individuals. The new MRDS-Nmix model requires the same data as an MRDS model (independent detection histories, an estimate of distance to transect, and an estimate of group size), plus a second estimate of group size provided by the second observer. We extend the model to situations in which detection of individuals within groups declines with distance. We simulated 12 data sets and used Bayesian methods to compare the performance of the new MRDS-Nmix model to an MRDS model. Abundance estimates generated by the MRDS-Nmix model exhibited minimal bias and nominal coverage levels. In contrast, MRDS abundance estimates were biased low and exhibited poor coverage. Many species of conservation interest reside in groups and could benefit from an estimator that better accounts for imperfect detection. Furthermore, the ability to relax the assumption of perfect detection of individuals within detected groups may allow surveyors to re-allocate resources toward detection of new groups instead of extensive surveys of known groups. We believe the proposed estimator is feasible because the only additional field data
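    The under-count idea behind the N-mixture component can be sketched as a binomial observation model. This is an illustration of the general technique, not the authors' MRDS-Nmix code; the detection probability `p` and truncation bound `N_max` are hypothetical:

```python
from math import comb

def count_likelihood(c, N, p):
    """P(observe c of N individuals | each detected independently w.p. p).
    Group size can be under-counted (c < N) but never over-counted (c > N)."""
    if c < 0 or c > N:
        return 0.0
    return comb(N, c) * p**c * (1 - p)**(N - c)

def group_size_posterior(c1, c2, p, N_max=50):
    """Posterior over true group size N given two observers' counts,
    assuming independent observers and a flat prior on 0..N_max."""
    w = [count_likelihood(c1, N, p) * count_likelihood(c2, N, p)
         for N in range(N_max + 1)]
    total = sum(w)
    return [x / total for x in w]
```

    The second observer's count is exactly the extra field datum the abstract mentions: it sharpens the posterior on true group size beyond what one count alone supports.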

  8. Census-independent population mapping in northern Nigeria

    DOE PAGES

    Weber, Eric M.; Seaman, Vincent Y.; Stewart, Robert N.; ...

    2017-10-21

    Although remote sensing has long been used to aid in the estimation of population, it has usually been in the context of spatial disaggregation of national census data, with the census counts serving both as observational data for specifying models and as constraints on model outputs. Here we present a framework for estimating populations from the bottom up, entirely independently of national census data, a critical need in areas without recent and reliable census data. To make observations of population density, we replace national census data with a microcensus, in which we enumerate population for a sample of small areas within the states of Kano and Kaduna in northern Nigeria. Using supervised texture-based classifiers with very high resolution satellite imagery, we produce a binary map of human settlement at 8-meter resolution across the two states and then a more refined classification consisting of 7 residential types and 1 non-residential type. Using the residential types and a model linking them to the population density observations, we produce population estimates across the two states in a gridded raster format, at approximately 90-meter resolution. We also demonstrate a simulation framework for capturing uncertainty and presenting estimates as prediction intervals for any region of interest of any size and composition within the study region. Used in concert with previously published demographic estimates, our population estimates allowed for predictions of the population under 5 in ten administrative wards that agreed closely with reference data collected during polio vaccination campaigns.

  9. Census-independent population mapping in northern Nigeria

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Weber, Eric M.; Seaman, Vincent Y.; Stewart, Robert N.

    Although remote sensing has long been used to aid in the estimation of population, it has usually been in the context of spatial disaggregation of national census data, with the census counts serving both as observational data for specifying models and as constraints on model outputs. Here we present a framework for estimating populations from the bottom up, entirely independently of national census data, a critical need in areas without recent and reliable census data. To make observations of population density, we replace national census data with a microcensus, in which we enumerate population for a sample of small areas within the states of Kano and Kaduna in northern Nigeria. Using supervised texture-based classifiers with very high resolution satellite imagery, we produce a binary map of human settlement at 8-meter resolution across the two states and then a more refined classification consisting of 7 residential types and 1 non-residential type. Using the residential types and a model linking them to the population density observations, we produce population estimates across the two states in a gridded raster format, at approximately 90-meter resolution. We also demonstrate a simulation framework for capturing uncertainty and presenting estimates as prediction intervals for any region of interest of any size and composition within the study region. Used in concert with previously published demographic estimates, our population estimates allowed for predictions of the population under 5 in ten administrative wards that agreed closely with reference data collected during polio vaccination campaigns.

  10. Evaluation of energy balance closure adjustment methods by independent evapotranspiration estimates from lysimeters and hydrological simulations

    DOE PAGES

    Mauder, Matthias; Genzel, Sandra; Fu, Jin; ...

    2017-11-10

    Here, we report that non-closure of the surface energy balance is a frequently observed phenomenon in hydrometeorological field measurements made with the eddy-covariance method, and it can be ascribed to an underestimation of the turbulent fluxes. Several approaches have been proposed in order to adjust the measured fluxes for this apparent systematic error. However, there are uncertainties about partitioning of the energy balance residual between the sensible and latent heat flux and whether such a correction should be applied on 30-minute data or longer time scales. The data for this study originate from two grassland sites in southern Germany, where measurements from weighable lysimeters are available as reference. The adjusted evapotranspiration rates are also compared with joint energy and water balance simulations using a physically-based distributed hydrological model. We evaluate two adjustment methods: the first one preserves the Bowen ratio and the correction factor is determined on a daily basis. The second one attributes a smaller portion of the residual energy to the latent heat flux than to the sensible heat flux for closing the energy balance for every 30-minute flux integration interval. Both methods lead to an improved agreement of the eddy-covariance based fluxes with the independent lysimeter estimates and the physically-based model simulations. The first method results in a better comparability of evapotranspiration rates, and the second method leads to a smaller overall bias. These results are similar between both sites despite considerable differences in terrain complexity and grassland management. Moreover, we found that a daily adjustment factor leads to less scatter than a complete partitioning of the residual for every half-hour time interval. Lastly, the vertical temperature gradient in the surface layer and friction velocity were identified as important predictors for a potential future parameterization of the energy balance

  11. Evaluation of energy balance closure adjustment methods by independent evapotranspiration estimates from lysimeters and hydrological simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mauder, Matthias; Genzel, Sandra; Fu, Jin

    Here, we report that non-closure of the surface energy balance is a frequently observed phenomenon in hydrometeorological field measurements made with the eddy-covariance method, and it can be ascribed to an underestimation of the turbulent fluxes. Several approaches have been proposed in order to adjust the measured fluxes for this apparent systematic error. However, there are uncertainties about partitioning of the energy balance residual between the sensible and latent heat flux and whether such a correction should be applied on 30-minute data or longer time scales. The data for this study originate from two grassland sites in southern Germany, where measurements from weighable lysimeters are available as reference. The adjusted evapotranspiration rates are also compared with joint energy and water balance simulations using a physically-based distributed hydrological model. We evaluate two adjustment methods: the first one preserves the Bowen ratio and the correction factor is determined on a daily basis. The second one attributes a smaller portion of the residual energy to the latent heat flux than to the sensible heat flux for closing the energy balance for every 30-minute flux integration interval. Both methods lead to an improved agreement of the eddy-covariance based fluxes with the independent lysimeter estimates and the physically-based model simulations. The first method results in a better comparability of evapotranspiration rates, and the second method leads to a smaller overall bias. These results are similar between both sites despite considerable differences in terrain complexity and grassland management. Moreover, we found that a daily adjustment factor leads to less scatter than a complete partitioning of the residual for every half-hour time interval. Lastly, the vertical temperature gradient in the surface layer and friction velocity were identified as important predictors for a potential future parameterization of the energy balance

  12. Similar Estimates of Temperature Impacts on Global Wheat Yield by Three Independent Methods

    NASA Technical Reports Server (NTRS)

    Liu, Bing; Asseng, Senthold; Muller, Christoph; Ewert, Frank; Elliott, Joshua; Lobell, David B.; Martre, Pierre; Ruane, Alex C.; Wallach, Daniel; Jones, James W.; et al.

    2016-01-01

    The potential impact of global temperature change on global crop yield has recently been assessed with different methods. Here we show that grid-based and point-based simulations and statistical regressions (from historic records), without deliberate adaptation or CO2 fertilization effects, produce similar estimates of temperature impact on wheat yields at global and national scales. With a 1 °C global temperature increase, global wheat yield is projected to decline between 4.1% and 6.4%. Projected relative temperature impacts from different methods were similar for major wheat-producing countries China, India, USA and France, but less so for Russia. Point-based and grid-based simulations, and to some extent the statistical regressions, were consistent in projecting that warmer regions are likely to suffer more yield loss with increasing temperature than cooler regions. By forming a multi-method ensemble, it was possible to quantify 'method uncertainty' in addition to model uncertainty. This significantly improves confidence in estimates of climate impacts on global food security.

  13. Similar estimates of temperature impacts on global wheat yield by three independent methods

    NASA Astrophysics Data System (ADS)

    Liu, Bing; Asseng, Senthold; Müller, Christoph; Ewert, Frank; Elliott, Joshua; Lobell, David B.; Martre, Pierre; Ruane, Alex C.; Wallach, Daniel; Jones, James W.; Rosenzweig, Cynthia; Aggarwal, Pramod K.; Alderman, Phillip D.; Anothai, Jakarat; Basso, Bruno; Biernath, Christian; Cammarano, Davide; Challinor, Andy; Deryng, Delphine; Sanctis, Giacomo De; Doltra, Jordi; Fereres, Elias; Folberth, Christian; Garcia-Vila, Margarita; Gayler, Sebastian; Hoogenboom, Gerrit; Hunt, Leslie A.; Izaurralde, Roberto C.; Jabloun, Mohamed; Jones, Curtis D.; Kersebaum, Kurt C.; Kimball, Bruce A.; Koehler, Ann-Kristin; Kumar, Soora Naresh; Nendel, Claas; O'Leary, Garry J.; Olesen, Jørgen E.; Ottman, Michael J.; Palosuo, Taru; Prasad, P. V. Vara; Priesack, Eckart; Pugh, Thomas A. M.; Reynolds, Matthew; Rezaei, Ehsan E.; Rötter, Reimund P.; Schmid, Erwin; Semenov, Mikhail A.; Shcherbak, Iurii; Stehfest, Elke; Stöckle, Claudio O.; Stratonovitch, Pierre; Streck, Thilo; Supit, Iwan; Tao, Fulu; Thorburn, Peter; Waha, Katharina; Wall, Gerard W.; Wang, Enli; White, Jeffrey W.; Wolf, Joost; Zhao, Zhigan; Zhu, Yan

    2016-12-01

    The potential impact of global temperature change on global crop yield has recently been assessed with different methods. Here we show that grid-based and point-based simulations and statistical regressions (from historic records), without deliberate adaptation or CO2 fertilization effects, produce similar estimates of temperature impact on wheat yields at global and national scales. With a 1 °C global temperature increase, global wheat yield is projected to decline between 4.1% and 6.4%. Projected relative temperature impacts from different methods were similar for major wheat-producing countries China, India, USA and France, but less so for Russia. Point-based and grid-based simulations, and to some extent the statistical regressions, were consistent in projecting that warmer regions are likely to suffer more yield loss with increasing temperature than cooler regions. By forming a multi-method ensemble, it was possible to quantify `method uncertainty’ in addition to model uncertainty. This significantly improves confidence in estimates of climate impacts on global food security.

  14. Normal forms of dispersive scalar Poisson brackets with two independent variables

    NASA Astrophysics Data System (ADS)

    Carlet, Guido; Casati, Matteo; Shadrin, Sergey

    2018-03-01

    We classify the dispersive Poisson brackets with one dependent variable and two independent variables, with leading order of hydrodynamic type, up to Miura transformations. We show that, in contrast to the case of a single independent variable for which a well-known triviality result exists, the Miura equivalence classes are parametrised by an infinite number of constants, which we call numerical invariants of the brackets. We obtain explicit formulas for the first few numerical invariants.

  15. Buying and Selling Prices of Investments: Configural Weight Model of Interactions Predicts Violations of Joint Independence.

    PubMed

    Birnbaum; Zimmermann

    1998-05-01

    Judges evaluated buying and selling prices of hypothetical investments, based on the previous price of each investment and estimates of the investment's future value given by advisors of varied expertise. Effect of a source's estimate varied in proportion to the source's expertise, and it varied inversely with the number and expertise of other sources. There was also a configural effect in which the effect of a source's estimate was affected by the rank order of that source's estimate, in relation to other estimates of the same investment. These interactions were fit with a configural weight averaging model in which buyers and sellers place different weights on estimates of different ranks. This model implies that one can design a new experiment in which there will be different violations of joint independence in different viewpoints. Experiment 2 confirmed patterns of violations of joint independence predicted from the model fit in Experiment 1. Experiment 2 also showed that preference reversals between viewpoints can be predicted by the model of Experiment 1. Configural weighting provides a better account of buying and selling prices than either of two models of loss aversion or the theory of anchoring and insufficient adjustment. Copyright 1998 Academic Press.
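    The core of a configural weight averaging model is that an estimate's weight depends on its rank among the estimates for the same investment. A minimal sketch under that assumption; the specific weights and prices below are hypothetical, not fitted values from the experiments:

```python
def configural_average(estimates, rank_weights):
    """Configural-weight average: weights are assigned by rank of the
    estimate (lowest first), so the same source's estimate can carry
    different weight depending on how it ranks against the others."""
    ranked = sorted(estimates)
    return sum(w * e for w, e in zip(rank_weights, ranked)) / sum(rank_weights)

# Hypothetical viewpoints: a buyer weights the lower estimate more heavily,
# a seller the higher one, yielding buying prices below selling prices.
buyer_price = configural_average([100, 200], rank_weights=[2, 1])
seller_price = configural_average([100, 200], rank_weights=[1, 2])
```

    Because the weights shift with rank rather than staying fixed per source, such a model can produce the rank-dependent interactions and violations of joint independence described above.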

  16. A review of models and micrometeorological methods used to estimate wetland evapotranspiration

    USGS Publications Warehouse

    Drexler, J.Z.; Snyder, R.L.; Spano, D.; Paw, U.K.T.

    2004-01-01

    Within the past decade or so, the accuracy of evapotranspiration (ET) estimates has improved due to new and increasingly sophisticated methods. Yet despite a plethora of choices concerning methods, estimation of wetland ET remains insufficiently characterized due to the complexity of surface characteristics and the diversity of wetland types. In this review, we present models and micrometeorological methods that have been used to estimate wetland ET and discuss their suitability for particular wetland types. Hydrological, soil monitoring and lysimetric methods to determine ET are not discussed. Our review shows that, due to the variability and complexity of wetlands, there is no single approach that is the best for estimating wetland ET. Furthermore, there is no single foolproof method to obtain an accurate, independent measure of wetland ET. Because all of the methods reviewed, with the exception of eddy covariance and LIDAR, require measurements of net radiation (Rn) and soil heat flux (G), highly accurate measurements of these energy components are key to improving measurements of wetland ET. Many of the major methods used to determine ET can be applied successfully to wetlands of uniform vegetation and adequate fetch; however, certain caveats apply. For example, with accurate Rn and G data and small Bowen ratio (β) values, the Bowen ratio energy balance method can give accurate estimates of wetland ET. However, large errors in latent heat flux density can occur near sunrise and sunset when the Bowen ratio β ≈ −1.0. The eddy covariance method provides a direct measurement of latent heat flux density (λE) and sensible heat flux density (H), yet this method requires considerable expertise and expensive instrumentation to implement. A clear advantage of using the eddy covariance method is that λE can be compared with Rn − G − H, thereby allowing for an independent test of accuracy. The surface renewal method is inexpensive to replicate and, therefore, shows
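    The Bowen ratio energy balance partitioning mentioned above can be written in a few lines. A sketch of the standard formula, not code from the review; the flux values are hypothetical (W m⁻²):

```python
def bowen_latent_heat(Rn, G, beta):
    """Latent heat flux density from the Bowen ratio energy balance:
    LE = (Rn - G) / (1 + beta). The estimate blows up as beta
    approaches -1, e.g. near sunrise and sunset."""
    if abs(1 + beta) < 1e-6:
        raise ValueError("Bowen ratio near -1: energy balance method unreliable")
    return (Rn - G) / (1 + beta)
```

    The eddy covariance cross-check described in the abstract is the converse: measure LE and H directly and compare LE against Rn − G − H.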

  17. Ridge: a computer program for calculating ridge regression estimates

    Treesearch

    Donald E. Hilt; Donald W. Seegrist

    1977-01-01

    Least-squares coefficients for multiple-regression models may be unstable when the independent variables are highly correlated. Ridge regression is a biased estimation procedure that produces stable estimates of the coefficients. Ridge regression is discussed, and a computer program for calculating the ridge coefficients is presented.
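    The ridge estimate itself is a one-line modification of least squares: a sketch of the standard closed form, not the Fortran program described in the record. The data matrix and ridge constant below are hypothetical:

```python
import numpy as np

def ridge_coefficients(X, y, k):
    """Ridge estimate b = (X'X + k I)^(-1) X'y.
    k = 0 recovers ordinary least squares; k > 0 biases the estimates
    toward zero but stabilizes them when columns of X are correlated."""
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + k * np.eye(p), X.T @ y)
```

    In practice one computes the coefficients over a grid of k values (the "ridge trace") and picks the smallest k at which the estimates stabilize.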

  18. Survival estimation and the effects of dependency among animals

    USGS Publications Warehouse

    Schmutz, Joel A.; Ward, David H.; Sedinger, James S.; Rexstad, Eric A.

    1995-01-01

    Survival models assume that fates of individuals are independent, yet the robustness of this assumption has been poorly quantified. We examine how empirically derived estimates of the variance of survival rates are affected by dependency in survival probability among individuals. We used Monte Carlo simulations to generate known amounts of dependency among pairs of individuals and analyzed these data with Kaplan-Meier and Cormack-Jolly-Seber models. Dependency significantly increased these empirical variances as compared to theoretically derived estimates of variance from the same populations. Using resighting data from 168 pairs of black brant, we used a resampling procedure and program RELEASE to estimate empirical and mean theoretical variances. We estimated that the relationship between paired individuals caused the empirical variance of the survival rate to be 155% larger than the empirical variance for unpaired individuals. Monte Carlo simulations and use of this resampling strategy can provide investigators with information on how robust their data are to this common assumption of independent survival probabilities.
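    The Monte Carlo idea, generating known dependency within pairs and comparing empirical variances, can be sketched as follows. This is an illustration of the design, not the authors' simulation code; survival rate, pair counts, and replicate counts are hypothetical:

```python
import random
import statistics

def survival_estimate(n_pairs, s, rho, rng):
    """Fraction surviving when pair members share a fate with probability
    rho; rho = 0 is full independence, rho = 1 is complete dependence."""
    survived = 0
    for _ in range(n_pairs):
        a = rng.random() < s
        b = a if rng.random() < rho else rng.random() < s
        survived += a + b
    return survived / (2 * n_pairs)

def empirical_variance(n_pairs, s, rho, reps, seed=1):
    """Variance of the survival estimate across simulated replicates."""
    rng = random.Random(seed)
    return statistics.variance(
        survival_estimate(n_pairs, s, rho, rng) for _ in range(reps))
```

    Under complete dependence each pair contributes one independent fate instead of two, so the empirical variance roughly doubles relative to the theoretical value for independent individuals, the same direction of inflation the brant data showed.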

  19. Multifrequency InSAR height reconstruction through maximum likelihood estimation of local planes parameters.

    PubMed

    Pascazio, Vito; Schirinzi, Gilda

    2002-01-01

    In this paper, a technique that is able to reconstruct highly sloped and discontinuous terrain height profiles, starting from multifrequency wrapped phase acquired by interferometric synthetic aperture radar (SAR) systems, is presented. We propose an innovative unwrapping method, based on a maximum likelihood estimation technique, which uses multifrequency independent phase data, obtained by filtering the interferometric SAR raw data pair through nonoverlapping band-pass filters, and approximating the unknown surface by means of local planes. Since the method does not exploit the phase gradient, it assures the uniqueness of the solution, even in the case of highly sloped or piecewise continuous elevation patterns with strong discontinuities.

  20. Variation in fluxes estimated from nitrogen isotope discrimination corresponds with independent measures of nitrogen flux in Populus balsamifera L.

    PubMed

    Kalcsits, Lee A; Guy, Robert D

    2016-02-01

    Acquisition of mineral nitrogen by roots from the surrounding environment is often not completely efficient, in which a variable amount of leakage (efflux) relative to gross uptake (influx) occurs. The efflux/influx ratio (E/I) is, therefore, inversely related to the efficiency of nutrient uptake at the root level. Time-integrated estimates of E/I and other nitrogen-use traits may be obtainable from variation in stable isotope ratios or through compartmental analysis of tracer efflux (CATE) using radioactive or stable isotopes. To compare these two methods, Populus balsamifera L. genotypes were selected, a priori, for high or low nitrogen isotope discrimination. Vegetative cuttings were grown hydroponically, and E/I was calculated using an isotope mass balance model (IMB) and compared to E/I calculated using 15N CATE. Both methods indicated that plants grown with ammonium had greater E/I than nitrate-grown plants. Genotypes with high or low E/I using CATE also had similarly high or low estimates of E/I using IMB, respectively. Genotype-specific means were linearly correlated (r = 0.77; P = 0.0065). Discrepancies in E/I between methods may reflect uncertainties in discrimination factors for the assimilatory enzymes, or temporal differences in uptake patterns. By utilizing genotypes with known variation in nitrogen isotope discrimination, a relationship between nitrogen isotope discrimination and bidirectional nitrogen fluxes at the root level was observed. © 2015 John Wiley & Sons Ltd.
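    The quantity the two methods estimate reduces to simple flux bookkeeping: efflux is gross influx minus net uptake. A sketch of that relation with hypothetical flux values (the abstract reports no specific fluxes):

```python
def efflux_influx_ratio(influx, net_uptake):
    """E/I ratio: efflux = influx - net uptake, so E/I = 1 - net/influx.
    Lower values mean more efficient nutrient uptake at the root level."""
    return (influx - net_uptake) / influx
```

    For example, a root with a gross influx of 10 units and a net uptake of 6 leaks 4 back out, giving E/I = 0.4.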

  1. Measurement-device-independent entanglement-based quantum key distribution

    NASA Astrophysics Data System (ADS)

    Yang, Xiuqing; Wei, Kejin; Ma, Haiqiang; Sun, Shihai; Liu, Hongwei; Yin, Zhenqiang; Li, Zuohan; Lian, Shibin; Du, Yungang; Wu, Lingan

    2016-05-01

    We present a quantum key distribution protocol in a model in which the legitimate users gather statistics as in the measurement-device-independent entanglement witness to certify the sources and the measurement devices. We show that the task of measurement-device-independent quantum communication can be accomplished based on monogamy of entanglement, and it is fairly loss-tolerant, even with source and detector flaws. We derive a tight bound for collective attacks on the Holevo information between the authorized parties and the eavesdropper. Then with this bound, the final secret key rate with the source flaws can be obtained. The results show that long-distance quantum cryptography over 144 km can be made secure using only standard threshold detectors.

  2. Phanerozoic marine diversity: rock record modelling provides an independent test of large-scale trends.

    PubMed

    Smith, Andrew B; Lloyd, Graeme T; McGowan, Alistair J

    2012-11-07

    Sampling bias created by a heterogeneous rock record can seriously distort estimates of marine diversity and makes a direct reading of the fossil record unreliable. Here we compare two independent estimates of Phanerozoic marine diversity that explicitly take account of variation in sampling-a subsampling approach that standardizes for differences in fossil collection intensity, and a rock area modelling approach that takes account of differences in rock availability. Using the fossil records of North America and Western Europe, we demonstrate that a modelling approach applied to the combined data produces results that are significantly correlated with those derived from subsampling. This concordance between independent approaches argues strongly for the reality of the large-scale trends in diversity we identify from both approaches.

  3. Economic Independence, Economic Status, and Empty Nest in Midlife Marital Disruption.

    ERIC Educational Resources Information Center

    Hiedemann, Bridget; Suhomlinova, Olga; O'Rand, Angela M.

    1998-01-01

    The risk of separation or divorce late in the marital career is examined from a family development perspective. A hazards framework is used to estimate the effects of women's economic independence, couples' economic status, and family life course factors on the risk of middle-age separation or divorce. (Author/EMK)

  4. Broadband assessment of degree-2 gravitational changes from GRACE and other estimates, 2002-2015

    NASA Astrophysics Data System (ADS)

    Chen, J. L.; Wilson, C. R.; Ries, J. C.

    2016-03-01

    Space geodetic measurements, including the Gravity Recovery and Climate Experiment (GRACE), satellite laser ranging (SLR), and Earth rotation provide independent and increasingly accurate estimates of variations in Earth's gravity field Stokes coefficients ΔC21, ΔS21, and ΔC20. Mass redistribution predicted by climate models provides another independent estimate of air and water contributions to these degree-2 changes. SLR has been a successful technique in measuring these low-degree gravitational changes. Broadband comparisons of independent estimates of ΔC21, ΔS21, and ΔC20 from GRACE, SLR, Earth rotation, and climate models during the GRACE era from April 2002 to April 2015 show that the current GRACE release 5 solutions of ΔC21 and ΔS21 provided by the Center for Space Research (CSR) are greatly improved over earlier solutions and agree remarkably well with other estimates, especially on ΔS21 estimates. GRACE and Earth rotation ΔS21 agreement is exceptionally good across a very broad frequency band from intraseasonal, seasonal, to interannual and decadal periods. SLR ΔC20 estimates remain superior to GRACE and Earth rotation estimates, due to the large uncertainty in GRACE ΔC20 solutions and particularly high sensitivity of Earth rotation ΔC20 estimates to errors in the wind fields. With several estimates of ΔC21, ΔS21, and ΔC20 variations, it is possible to estimate broadband noise variance and noise power spectra in each, given reasonable assumptions about noise independence. The GRACE CSR release 5 solutions clearly outperform other estimates of ΔC21 and ΔS21 variations with the lowest noise levels over a broad band of frequencies.
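    When three or more independent series measure the same signal, pairwise difference variances separate out each series' noise variance (the classic "three-cornered hat" algebra). This sketch shows the general technique, not the authors' exact procedure, and the variances below are made-up numbers:

```python
def three_cornered_hat(v_ab, v_ac, v_bc):
    """Noise variances of series a, b, c from the variances of their
    pairwise differences, assuming the three noise processes are
    mutually independent (the common signal cancels in each difference)."""
    n_a = 0.5 * (v_ab + v_ac - v_bc)
    n_b = 0.5 * (v_ab + v_bc - v_ac)
    n_c = 0.5 * (v_ac + v_bc - v_ab)
    return n_a, n_b, n_c
```

    With GRACE, SLR, and Earth rotation as the three corners, this kind of decomposition is what lets one rank the solutions by noise level over a frequency band.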

  5. Optimal Bandwidth for Multitaper Spectrum Estimation

    DOE PAGES

    Haley, Charlotte L.; Anitescu, Mihai

    2017-07-04

    A systematic method for bandwidth parameter selection is desired for Thomson multitaper spectrum estimation. We give a method for determining the optimal bandwidth based on a mean squared error (MSE) criterion. When the true spectrum has a second-order Taylor series expansion, one can express quadratic local bias as a function of the curvature of the spectrum, which can be estimated by using a simple spline approximation. This is combined with a variance estimate, obtained by jackknifing over individual spectrum estimates, to produce an estimated MSE for the log spectrum estimate for each choice of time-bandwidth product. The bandwidth that minimizes the estimated MSE then gives the desired spectrum estimate. Additionally, the bandwidth obtained using our method is also optimal for cepstrum estimates. We give an example of a damped oscillatory (Lorentzian) process in which the approximate optimal bandwidth can be written as a function of the damping parameter. Furthermore, the true optimal bandwidth agrees well with that given by minimizing the estimated MSE in these examples.
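The selection rule described above can be sketched as follows: for each candidate time-bandwidth product we assume a local bias estimate (from spectrum curvature) and a jackknife variance estimate are already in hand, and pick the candidate minimizing their combination. The numbers are illustrative, not from the paper:

```python
def pick_bandwidth(candidates):
    """candidates: list of (NW, bias, variance) tuples; returns the
    time-bandwidth product NW minimizing estimated MSE = bias^2 + variance."""
    return min(candidates, key=lambda c: c[1] ** 2 + c[2])[0]

# Small NW: low bias, high variance; large NW: the reverse.
cands = [(2.0, 0.05, 0.40), (4.0, 0.15, 0.20), (8.0, 0.40, 0.10)]
best = pick_bandwidth(cands)  # estimated MSEs 0.4025, 0.2225, 0.26 -> NW = 4.0
```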

  6. Global Marine Productivity and Living-Phytoplankton Carbon Biomass Estimated from a Physiological Growth Model

    NASA Astrophysics Data System (ADS)

    Arteaga, L.; Pahlow, M.; Oschlies, A.

    2016-02-01

    Primary production by marine phytoplankton essentially drives the oceanic biological carbon pump. Global productivity estimates are commonly founded on chlorophyll-based primary production models. However, a major drawback of most of these models is that variations in chlorophyll concentration do not necessarily account for changes in phytoplankton biomass resulting from the physiological regulation of the chlorophyll-to-carbon ratio (Chl:C). Here we present phytoplankton production rates and surface phytoplankton C concentrations for the global ocean for 2005-2010, obtained by combining satellite Chl observations with a mechanistic model for the acclimation of phytoplankton stoichiometry to variations in nutrients, light, and temperature. We compare our inferred phytoplankton C concentrations with an independent estimate of surface particulate organic carbon (POC) to identify for the first time the global contribution of living phytoplankton to total POC in the surface ocean. Our annual primary production (46 Pg C yr-1) is in good agreement with other C-based model estimates obtained from satellite observations. We find that most of the oligotrophic surface ocean is dominated by living phytoplankton biomass (between 30 and 70% of total particulate carbon). Lower contributions are found in the tropical Pacific (10-30% phytoplankton) and the Southern Ocean (≈ 10%). Our method provides a novel analytical tool for identifying changes in marine plankton communities and carbon cycling.
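The core conversion underlying the comparison above is from satellite chlorophyll to phytoplankton carbon via the Chl:C ratio. In the paper the ratio comes from a physiological acclimation model; the fixed ratio and POC value below are hypothetical:

```python
def phyto_carbon(chl, chl_to_c):
    """Living-phytoplankton carbon (mg C m^-3) from a chlorophyll
    concentration (mg Chl m^-3) and a Chl:C mass ratio (g Chl per g C)."""
    return chl / chl_to_c

c_phyto = phyto_carbon(0.2, 0.01)  # 20.0 mg C m^-3

# Share of living phytoplankton in total surface POC, as compared in the study:
poc = 50.0                   # hypothetical total POC, mg C m^-3
fraction = c_phyto / poc     # 0.4, i.e. 40% living phytoplankton
```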

  7. The Sensitivity of Derived Estimates to the Measurement Quality Objectives for Independent Variables

    Treesearch

    Francis A. Roesch

    2005-01-01

    The effect of varying the allowed measurement error for individual tree variables upon county estimates of gross cubic-foot volume was examined. Measurement Quality Objectives (MQOs) for three forest tree variables (biological identity, diameter, and height) used in individual tree gross cubic-foot volume equations were varied from the current USDA Forest Service...

  8. Multiple-camera/motion stereoscopy for range estimation in helicopter flight

    NASA Technical Reports Server (NTRS)

    Smith, Phillip N.; Sridhar, Banavar; Suorsa, Raymond E.

    1993-01-01

    Detecting obstacles and planning obstacle-free flight paths during low-altitude helicopter flight is desirable to aid the pilot, improve safety, and reduce pilot workload. Computer vision techniques provide an attractive method of obstacle detection and range estimation for objects within a large field of view ahead of the helicopter. Previous research has had considerable success using an image sequence from a single moving camera to solve this problem. The major limitations of single camera approaches are that no range information can be obtained near the instantaneous direction of motion or in the absence of motion. These limitations can be overcome through the use of multiple cameras. This paper presents a hybrid motion/stereo algorithm which allows range refinement through recursive range estimation while avoiding loss of range information in the direction of travel. A feature-based approach is used to track objects between image frames. An extended Kalman filter combines knowledge of the camera motion and measurements of a feature's image location to recursively estimate the feature's range and to predict its location in future images. Performance of the algorithm will be illustrated using an image sequence, motion information, and independent range measurements from a low-altitude helicopter flight experiment.
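The geometric principle that lets a multi-camera rig recover range even without vehicle motion is textbook stereo triangulation. This sketch shows only that principle; the paper's recursive EKF refinement is not reproduced, and the numbers are illustrative:

```python
def stereo_range(focal_px, baseline_m, disparity_px):
    """Two-camera triangulation: range = f * B / d, with focal length f
    in pixels, camera baseline B in metres, and disparity d in pixels."""
    return focal_px * baseline_m / disparity_px

# A feature seen with 4 px disparity by cameras 0.5 m apart (f = 800 px):
r = stereo_range(800.0, 0.5, 4.0)  # 100.0 m
```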

  9. Estimating the extent of impervious surfaces and turf grass across large regions

    USGS Publications Warehouse

    Claggett, Peter; Irani, Frederick M.; Thompson, Renee L.

    2013-01-01

    The ability of researchers to accurately assess the extent of impervious and pervious developed surfaces, e.g., turf grass, using land-cover data derived from Landsat satellite imagery in the Chesapeake Bay watershed is limited due to the resolution of the data and systematic discrepancies between developed land-cover classes, surface mines, forests, and farmlands. Estimates of impervious surface and turf grass area in the Mid-Atlantic, United States that were based on 2006 Landsat-derived land-cover data were substantially lower than estimates based on more authoritative and independent sources. New estimates of impervious surfaces and turf grass area derived using land-cover data combined with ancillary information on roads, housing units, surface mines, and sampled estimates of road width and residential impervious area were up to 57 and 45% higher than estimates based strictly on land-cover data. These new estimates closely approximate estimates derived from authoritative and independent sources in developed counties.

  10. The independent relationship between triglycerides and coronary heart disease

    PubMed Central

    Morrison, Alan; Hokanson, John E

    2009-01-01

    Aims: The aim was to review epidemiologic studies to reassess whether serum levels of triglycerides should be considered independently of high-density lipoprotein-cholesterol (HDL-C) as a predictor of coronary heart disease (CHD). Methods and results: We systematically reviewed population-based cohort studies in which baseline serum levels of triglycerides and HDL-C were included as explanatory variables in multivariate analyses with the development of CHD (coronary events or coronary death) as dependent variable. A total of 32 unique reports describing 38 cohorts were included. The independent association between elevated triglycerides and risk of CHD was statistically significant in 16 of 30 populations without pre-existing CHD. Among populations with diabetes mellitus or pre-existing CHD, or the elderly, triglycerides were not significantly independently associated with CHD in any of 8 cohorts. Triglycerides and HDL-C were mutually exclusive predictors of coronary events in 12 of 20 analyses of patients without pre-existing CHD. Conclusions: Epidemiologic studies provide evidence of an association between triglycerides and the development of primary CHD independently of HDL-C. Evidence of an inverse relationship between triglycerides and HDL-C suggests that both should be considered in CHD risk estimation and as targets for intervention. PMID:19436658

  11. The sensitivity of derived estimates to the measurement quality objectives for independent variables

    Treesearch

    Francis A. Roesch

    2002-01-01

    The effect of varying the allowed measurement error for individual tree variables upon county estimates of gross cubic-foot volume was examined. Measurement Quality Objectives (MQOs) for three forest tree variables (biological identity, diameter, and height) used in individual tree gross cubic-foot volume equations were varied from the current USDA Forest Service...

  12. Improving The Discipline of Cost Estimation and Analysis

    NASA Technical Reports Server (NTRS)

    Piland, William M.; Pine, David J.; Wilson, Delano M.

    2000-01-01

    The need to improve the quality and accuracy of cost estimates of proposed new aerospace systems has been widely recognized. The industry has done the best job of maintaining related capability with improvements in estimation methods and giving appropriate priority to the hiring and training of qualified analysts. Some parts of Government, and National Aeronautics and Space Administration (NASA) in particular, continue to need major improvements in this area. Recently, NASA recognized that its cost estimation and analysis capabilities had eroded to the point that the ability to provide timely, reliable estimates was impacting the confidence in planning many program activities. As a result, this year the Agency established a lead role for cost estimation and analysis. The Independent Program Assessment Office located at the Langley Research Center was given this responsibility. This paper presents the plans for the newly established role. Described is how the Independent Program Assessment Office, working with all NASA Centers, NASA Headquarters, other Government agencies, and industry, is focused on creating cost estimation and analysis as a professional discipline that will be recognized equally with the technical disciplines needed to design new space and aeronautics activities. Investments in selected, new analysis tools, creating advanced training opportunities for analysts, and developing career paths for future analysts engaged in the discipline are all elements of the plan. Plans also include increasing the human resources available to conduct independent cost analysis of Agency programs during their formulation, to improve near-term capability to conduct economic cost-benefit assessments, to support NASA management's decision process, and to provide cost analysis results emphasizing "full-cost" and "full-life cycle" considerations. The Agency cost analysis improvement plan has been approved for implementation starting this calendar year. 

  13. Effect of non-normality on test statistics for one-way independent groups designs.

    PubMed

    Cribbie, Robert A; Fiksenbaum, Lisa; Keselman, H J; Wilcox, Rand R

    2012-02-01

    The data obtained from one-way independent groups designs is typically non-normal in form and rarely equally variable across treatment populations (i.e., population variances are heterogeneous). Consequently, the classical test statistic that is used to assess statistical significance (i.e., the analysis of variance F test) typically provides invalid results (e.g., too many Type I errors, reduced power). For this reason, there has been considerable interest in finding a test statistic that is appropriate under conditions of non-normality and variance heterogeneity. Previously recommended procedures for analysing such data include the James test, the Welch test applied either to the usual least squares estimators of central tendency and variability, or the Welch test with robust estimators (i.e., trimmed means and Winsorized variances). A new statistic proposed by Krishnamoorthy, Lu, and Mathew, intended to deal with heterogeneous variances, though not non-normality, uses a parametric bootstrap procedure. In their investigation of the parametric bootstrap test, the authors examined its operating characteristics under limited conditions and did not compare it to the Welch test based on robust estimators. Thus, we investigated how the parametric bootstrap procedure and a modified parametric bootstrap procedure based on trimmed means perform relative to previously recommended procedures when data are non-normal and heterogeneous. The results indicated that the tests based on trimmed means offer the best Type I error control and power when variances are unequal and at least some of the distribution shapes are non-normal. © 2011 The British Psychological Society.
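The robust estimators recommended above (trimmed means and Winsorized variances) can be sketched in a few lines. This is a generic implementation of the standard definitions, with made-up data containing one outlier:

```python
def trimmed_mean(x, prop=0.2):
    """Drop the lowest and highest `prop` fraction of observations."""
    x = sorted(x)
    g = int(prop * len(x))
    kept = x[g:len(x) - g]
    return sum(kept) / len(kept)

def winsorized_variance(x, prop=0.2):
    """Replace the trimmed tails by the nearest retained value, then take
    the usual sample variance of the Winsorized scores."""
    x = sorted(x)
    g = int(prop * len(x))
    w = [x[g]] * g + x[g:len(x) - g] + [x[len(x) - g - 1]] * g
    m = sum(w) / len(w)
    return sum((v - m) ** 2 for v in w) / (len(w) - 1)

data = [1, 2, 2, 3, 3, 4, 4, 5, 5, 40]   # one extreme outlier
tm = trimmed_mean(data)                  # (2+3+3+4+4+5)/6 = 3.5, vs raw mean 6.9
wv = winsorized_variance(data)           # variance of [2,2,2,3,3,4,4,5,5,5]
```

The outlier moves the ordinary mean to 6.9 but leaves the 20%-trimmed mean at 3.5, which is why trimmed means paired with Winsorized variances control Type I error under heavy tails.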

  14. Precision and accuracy of age estimates obtained from anal fin spines, dorsal fin spines, and sagittal otoliths for known-age largemouth bass

    USGS Publications Warehouse

    Klein, Zachary B.; Bonvechio, Timothy F.; Bowen, Bryant R.; Quist, Michael C.

    2017-01-01

    Sagittal otoliths are the preferred aging structure for Micropterus spp. (black basses) in North America because of the accurate and precise results produced. Typically, fisheries managers are hesitant to use lethal aging techniques (e.g., otoliths) to age rare species, trophy-size fish, or when sampling in small impoundments where populations are small. Therefore, we sought to evaluate the precision and accuracy of 2 non-lethal aging structures (i.e., anal fin spines, dorsal fin spines) in comparison to that of sagittal otoliths from known-age Micropterus salmoides (Largemouth Bass; n = 87) collected from the Ocmulgee Public Fishing Area, GA. Sagittal otoliths exhibited the highest concordance with true ages of all structures evaluated (coefficient of variation = 1.2; percent agreement = 91.9). Similarly, the low coefficient of variation (0.0) and high between-reader agreement (100%) indicate that age estimates obtained from sagittal otoliths were the most precise. Relatively high agreement between readers for anal fin spines (84%) and dorsal fin spines (81%) suggested the structures were relatively precise. However, age estimates from anal fin spines and dorsal fin spines exhibited low concordance with true ages. Although use of sagittal otoliths is a lethal technique, this method will likely remain the standard for aging Largemouth Bass and other similar black bass species.
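The two precision indices reported above (coefficient of variation and between-reader percent agreement) are simple to compute from paired age estimates. The reader data below are hypothetical:

```python
import statistics

def mean_cv(ages1, ages2):
    """Average per-fish coefficient of variation (%) for two readers."""
    cvs = []
    for a, b in zip(ages1, ages2):
        m = (a + b) / 2
        cvs.append(100 * statistics.stdev([a, b]) / m)
    return sum(cvs) / len(cvs)

def percent_agreement(ages1, ages2):
    """Share of fish (%) on which the two readers agree exactly."""
    hits = sum(a == b for a, b in zip(ages1, ages2))
    return 100 * hits / len(ages1)

r1 = [2, 3, 3, 4, 5]
r2 = [2, 3, 4, 4, 5]       # readers disagree on one fish
pa = percent_agreement(r1, r2)   # 80.0
cv = mean_cv(r1, r2)             # about 4.04
```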

  15. Using ²¹⁰Pb measurements to estimate sedimentation rates on river floodplains.

    PubMed

    Du, P; Walling, D E

    2012-01-01

    Growing interest in the dynamics of floodplain evolution and the important role of overbank sedimentation on river floodplains as a sediment sink has focused attention on the need to document contemporary and recent rates of overbank sedimentation. The potential for using the fallout radionuclides ¹³⁷Cs and excess ²¹⁰Pb to estimate medium-term (10-10² years) sedimentation rates on river floodplains has attracted increasing attention. Most studies that have successfully used fallout radionuclides for this purpose have focused on the use of ¹³⁷Cs. However, the use of excess ²¹⁰Pb potentially offers a number of advantages over ¹³⁷Cs measurements. Most existing investigations that have used excess ²¹⁰Pb measurements to document sedimentation rates have, however, focused on lakes rather than floodplains and the transfer of the approach, and particularly the models used to estimate the sedimentation rate, to river floodplains involves a number of uncertainties, which require further attention. This contribution reports the results of an investigation of overbank sedimentation rates on the floodplains of several UK rivers. Sediment cores were collected from seven floodplain sites representative of different environmental conditions and located in different areas of England and Wales. Measurements of excess ²¹⁰Pb and ¹³⁷Cs were made on these cores. The ²¹⁰Pb measurements have been used to estimate sedimentation rates and the results obtained by using different models have been compared. The ¹³⁷Cs measurements have also been used to provide an essentially independent time marker for validation purposes. In using the ²¹⁰Pb measurements, particular attention was directed to the problem of obtaining reliable estimates of the supported and excess or unsupported components of the total ²¹⁰Pb activity of sediment samples. Although there was a reasonable degree of consistency between the estimates of sedimentation rate provided by
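One of the standard models for turning excess ²¹⁰Pb profiles into sedimentation rates is the Constant Initial Concentration (CIC) model; a minimal sketch follows, with hypothetical activities (the paper compares several such models, not only this one):

```python
import math

LAMBDA_PB210 = math.log(2) / 22.3  # 210Pb decay constant, 1/yr (half-life 22.3 yr)

def sedimentation_rate_cic(c_surface, c_depth, depth_cm):
    """CIC model: excess 210Pb activity declines as
    C(z) = C0 * exp(-lambda * z / s), so a surface activity and one
    buried activity (Bq/kg) give the sedimentation rate s in cm/yr."""
    return LAMBDA_PB210 * depth_cm / math.log(c_surface / c_depth)

# Hypothetical core: 200 Bq/kg excess 210Pb at the surface, 50 Bq/kg at 20 cm.
s = sedimentation_rate_cic(200.0, 50.0, 20.0)  # about 0.45 cm/yr
```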

  16. Ultraspectral sounding retrieval error budget and estimation

    NASA Astrophysics Data System (ADS)

    Zhou, Daniel K.; Larar, Allen M.; Liu, Xu; Smith, William L.; Strow, Larrabee L.; Yang, Ping

    2011-11-01

    The ultraspectral infrared radiances obtained from satellite observations provide atmospheric, surface, and/or cloud information. The intent of the measurement of the thermodynamic state is the initialization of weather and climate models. Great effort has been given to retrieving and validating these atmospheric, surface, and/or cloud properties. Error Consistency Analysis Scheme (ECAS), through fast radiative transfer model (RTM) forward and inverse calculations, has been developed to estimate the error budget in terms of absolute and standard deviation of differences in both spectral radiance and retrieved geophysical parameter domains. The retrieval error is assessed through ECAS without assistance of other independent measurements such as radiosonde data. ECAS re-evaluates instrument random noise, and establishes the link between radiometric accuracy and retrieved geophysical parameter accuracy. ECAS can be applied to measurements of any ultraspectral instrument and any retrieval scheme with associated RTM. In this paper, ECAS is described and demonstration is made with the measurements of the METOP-A satellite Infrared Atmospheric Sounding Interferometer (IASI).

  17. Ultraspectral Sounding Retrieval Error Budget and Estimation

    NASA Technical Reports Server (NTRS)

    Zhou, Daniel K.; Larar, Allen M.; Liu, Xu; Smith, William L.; Strow, L. Larrabee; Yang, Ping

    2011-01-01

    The ultraspectral infrared radiances obtained from satellite observations provide atmospheric, surface, and/or cloud information. The intent of the measurement of the thermodynamic state is the initialization of weather and climate models. Great effort has been given to retrieving and validating these atmospheric, surface, and/or cloud properties. Error Consistency Analysis Scheme (ECAS), through fast radiative transfer model (RTM) forward and inverse calculations, has been developed to estimate the error budget in terms of absolute and standard deviation of differences in both spectral radiance and retrieved geophysical parameter domains. The retrieval error is assessed through ECAS without assistance of other independent measurements such as radiosonde data. ECAS re-evaluates instrument random noise, and establishes the link between radiometric accuracy and retrieved geophysical parameter accuracy. ECAS can be applied to measurements of any ultraspectral instrument and any retrieval scheme with associated RTM. In this paper, ECAS is described and demonstration is made with the measurements of the METOP-A satellite Infrared Atmospheric Sounding Interferometer (IASI).

  18. Independent evaluation of point source fossil fuel CO2 emissions to better than 10%

    PubMed Central

    Turnbull, Jocelyn Christine; Keller, Elizabeth D.; Norris, Margaret W.; Wiltshire, Rachael M.

    2016-01-01

    Independent estimates of fossil fuel CO2 (CO2ff) emissions are key to ensuring that emission reductions and regulations are effective and provide needed transparency and trust. Point source emissions are a key target because a small number of power plants represent a large portion of total global emissions. Currently, emission rates are known only from self-reported data. Atmospheric observations have the potential to meet the need for independent evaluation, but useful results from this method have been elusive, due to challenges in distinguishing CO2ff emissions from the large and varying CO2 background and in relating atmospheric observations to emission flux rates with high accuracy. Here we use time-integrated observations of the radiocarbon content of CO2 (14CO2) to quantify the recently added CO2ff mole fraction at surface sites surrounding a point source. We demonstrate that both fast-growing plant material (grass) and CO2 collected by absorption into sodium hydroxide solution provide excellent time-integrated records of atmospheric 14CO2. These time-integrated samples allow us to evaluate emissions over a period of days to weeks with only a modest number of measurements. Applying the same time integration in an atmospheric transport model eliminates the need to resolve highly variable short-term turbulence. Together these techniques allow us to independently evaluate point source CO2ff emission rates from atmospheric observations with uncertainties of better than 10%. This uncertainty represents an improvement by a factor of 2 over current bottom-up inventory estimates and previous atmospheric observation estimates and allows reliable independent evaluation of emissions. PMID:27573818

  19. Independent evaluation of point source fossil fuel CO2 emissions to better than 10%.

    PubMed

    Turnbull, Jocelyn Christine; Keller, Elizabeth D; Norris, Margaret W; Wiltshire, Rachael M

    2016-09-13

    Independent estimates of fossil fuel CO2 (CO2ff) emissions are key to ensuring that emission reductions and regulations are effective and provide needed transparency and trust. Point source emissions are a key target because a small number of power plants represent a large portion of total global emissions. Currently, emission rates are known only from self-reported data. Atmospheric observations have the potential to meet the need for independent evaluation, but useful results from this method have been elusive, due to challenges in distinguishing CO2ff emissions from the large and varying CO2 background and in relating atmospheric observations to emission flux rates with high accuracy. Here we use time-integrated observations of the radiocarbon content of CO2 ((14)CO2) to quantify the recently added CO2ff mole fraction at surface sites surrounding a point source. We demonstrate that both fast-growing plant material (grass) and CO2 collected by absorption into sodium hydroxide solution provide excellent time-integrated records of atmospheric (14)CO2. These time-integrated samples allow us to evaluate emissions over a period of days to weeks with only a modest number of measurements. Applying the same time integration in an atmospheric transport model eliminates the need to resolve highly variable short-term turbulence. Together these techniques allow us to independently evaluate point source CO2ff emission rates from atmospheric observations with uncertainties of better than 10%. This uncertainty represents an improvement by a factor of 2 over current bottom-up inventory estimates and previous atmospheric observation estimates and allows reliable independent evaluation of emissions.
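The ¹⁴CO₂-based quantification rests on a two-member mass balance: observed CO2 is background plus fossil CO2, and fossil carbon is ¹⁴C-free (Δ¹⁴C = -1000 permil). A minimal sketch of that balance, with hypothetical mole fractions and Δ¹⁴C values:

```python
DELTA_FF = -1000.0  # permil: fossil carbon contains no 14C

def co2ff(co2_obs, delta_obs, delta_bg):
    """Recently added fossil-fuel CO2 (ppm): combining
    co2_obs = co2_bg + co2_ff with the matching Delta14C balance gives
    co2_ff = co2_obs * (delta_obs - delta_bg) / (DELTA_FF - delta_bg)."""
    return co2_obs * (delta_obs - delta_bg) / (DELTA_FF - delta_bg)

# Hypothetical site: 410 ppm observed at 10 permil, 20 permil background.
ff = co2ff(410.0, 10.0, 20.0)  # about 4.02 ppm of fossil CO2
```

A 10 permil depletion relative to background thus corresponds to roughly 4 ppm of recently added fossil CO2, which is the signal the time-integrated samples accumulate.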

  20. Seasonal estimates of riparian evapotranspiration using remote and in situ measurements

    USGS Publications Warehouse

    Goodrich, D.C.; Scott, R.; Qi, J.; Goff, B.; Unkrich, C.L.; Moran, M.S.; Williams, D.; Schaeffer, S.; Snyder, K.; MacNish, R.; Maddock, T.; Pool, D.; Chehbouni, A.; Cooper, D.I.; Eichinger, W.E.; Shuttleworth, W.J.; Kerr, Y.; Marsett, R.; Ni, W.

    2000-01-01

    estimates for the riparian corridor were obtained. To validate these models, a 90-day pre-monsoon water balance over a 10 km section of the river was carried out. All components of the water balance, including riparian ET, were independently estimated. The closure of the water balance was roughly 5% of total inflows. The ET models were then used to provide riparian ET estimates over the entire corridor for the growing season. These estimates were approximately 14% less than those obtained from the most recent groundwater model of the basin for a comparable river reach.

  1. Linking soil type and rainfall characteristics towards estimation of surface evaporative capacitance

    NASA Astrophysics Data System (ADS)

    Or, D.; Bickel, S.; Lehmann, P.

    2017-12-01

    Separation of evapotranspiration (ET) into evaporation (E) and transpiration (T) components for attribution of surface fluxes or for assessment of isotope fractionation in groundwater remains a challenge. Regional estimates of soil evaporation often rely on plant-based (Penman-Monteith) ET estimates where E is obtained as a residual or a fraction of potential evaporation. We propose a novel method for estimating E from soil-specific properties, regional rainfall characteristics and considering concurrent internal drainage that shelters soil water from evaporation. A soil-dependent evaporative characteristic length defines a depth below which soil water cannot be pulled to the surface by capillarity; this depth determines the maximal soil evaporative capacitance (SEC). The SEC is recharged by rainfall and subsequently emptied by competition between drainage and surface evaporation (considering canopy interception evaporation). We show that E is strongly dependent on rainfall characteristics (mean annual, number of storms) and soil textural type, with up to 50% of rainfall lost to evaporation in loamy soil. The SEC concept applied to different soil types and climatic regions offers direct bounds on regional surface evaporation independent of plant-based parameterization or energy balance calculations.
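The recharge-and-competition idea can be caricatured as a bucket whose stored storm water is split between evaporation and drainage in proportion to their rates. This is a toy partition under that assumption, not the paper's model, and the rates are made up:

```python
def storm_evaporated(recharge_mm, e_rate, d_rate):
    """Water from one storm (mm) stored in the soil evaporative capacitance,
    emptied by evaporation (e_rate, mm/day) competing with drainage
    (d_rate, mm/day): the evaporated share is e_rate / (e_rate + d_rate)."""
    return recharge_mm * e_rate / (e_rate + d_rate)

# A 20 mm storm with evaporation at 2 mm/day vs drainage at 3 mm/day:
e = storm_evaporated(20.0, 2.0, 3.0)  # 8.0 mm (40%) of the storm evaporates
```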

  2. In Silico Estimation of Skin Concentration Following the Dermal Exposure to Chemicals.

    PubMed

    Hatanaka, Tomomi; Yoshida, Shun; Kadhum, Wesam R; Todo, Hiroaki; Sugibayashi, Kenji

    2015-12-01

    To develop an in silico method based on Fick's law of diffusion to estimate the skin concentration following dermal exposure to chemicals with a wide range of lipophilicity. Permeation experiments of various chemicals were performed through rat and porcine skin. Permeation parameters, namely, permeability coefficient and partition coefficient, were obtained by the fitting of data to two-layered and one-layered diffusion models for whole and stripped skin. The mean skin concentration of chemicals during steady-state permeation was calculated using the permeation parameters and compared with the observed values. All permeation profiles could be described by the diffusion models. The estimated skin concentrations of chemicals using permeation parameters were close to the observed levels and most data fell within the 95% confidence interval for complete prediction. The permeability coefficient and partition coefficient for stripped skin were almost constant, being independent of the permeant's lipophilicity. Skin concentration following dermal exposure to various chemicals can be accurately estimated based on Fick's law of diffusion. This method should become a useful tool to assess the efficacy of topically applied drugs and cosmetic ingredients, as well as the risk of chemicals likely to cause skin disorders and diseases.
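The Fickian estimate of skin concentration can be illustrated with the simplest one-layer case: at steady state with a perfect sink below, the concentration profile is linear, so the depth-averaged concentration follows directly from the partition coefficient. This is a simplification of the paper's two-layered model, with hypothetical parameter values:

```python
def mean_skin_concentration(k_partition, c_vehicle):
    """One-layer Fickian sketch: at steady state, concentration falls
    linearly from K * Cv at the surface to ~0 at the base (perfect sink),
    so the depth-averaged skin concentration is K * Cv / 2."""
    return k_partition * c_vehicle / 2.0

# Hypothetical chemical: skin/vehicle partition coefficient K = 3,
# vehicle concentration Cv = 1 (arbitrary units).
c_skin = mean_skin_concentration(3.0, 1.0)  # 1.5, same units as Cv
```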

  3. Robust artifactual independent component classification for BCI practitioners.

    PubMed

    Winkler, Irene; Brandl, Stephanie; Horn, Franziska; Waldburger, Eric; Allefeld, Carsten; Tangermann, Michael

    2014-06-01

    EEG artifacts of non-neural origin can be separated from neural signals by independent component analysis (ICA). It is unclear (1) how robustly recently proposed artifact classifiers transfer to novel users, novel paradigms or changed electrode setups, and (2) how artifact cleaning by a machine learning classifier impacts the performance of brain-computer interfaces (BCIs). Addressing (1), the robustness of different strategies with respect to the transfer between paradigms and electrode setups of a recently proposed classifier is investigated on offline data from 35 users and 3 EEG paradigms, which contain 6303 expert-labeled components from two ICA and preprocessing variants. Addressing (2), the effect of artifact removal on single-trial BCI classification is estimated on BCI trials from 101 users and 3 paradigms. We show that (1) the proposed artifact classifier generalizes to completely different EEG paradigms. To obtain similar results under massively reduced electrode setups, a proposed novel strategy improves artifact classification. Addressing (2), ICA artifact cleaning has little influence on average BCI performance when analyzed by state-of-the-art BCI methods. When slow motor-related features are exploited, performance varies strongly between individuals, as artifacts may obstruct relevant neural activity or are inadvertently used for BCI control. Robustness of the proposed strategies can be reproduced by EEG practitioners as the method is made available as an EEGLAB plug-in.
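The cleaning step that follows component classification is a back-projection: once the ICA mixing matrix and sources are known and a component is labeled artifactual, the channels are rebuilt from the remaining components only. The sketch below uses random matrices in place of a real ICA decomposition (which is omitted):

```python
import numpy as np

rng = np.random.default_rng(0)

S = rng.standard_normal((3, 100))   # 3 independent components, 100 samples
A = rng.standard_normal((4, 3))     # mixing matrix into 4 EEG channels
X = A @ S                           # observed (mixed) channel signals

keep = [0, 2]                       # suppose component 1 was classified as artifact
X_clean = A[:, keep] @ S[keep, :]   # back-project only the retained components
```

What is removed is exactly the artifact component's contribution, `A[:, [1]] @ S[[1], :]`, which is why misclassifying a neural component directly deletes usable signal.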

  4. A method for estimating mean and low flows of streams in national forests of Montana

    USGS Publications Warehouse

    Parrett, Charles; Hull, J.A.

    1985-01-01

    Equations were developed for estimating mean annual discharge, 80-percent exceedance discharge, and 95-percent exceedance discharge for streams on national forest lands in Montana. The equations for mean annual discharge used active-channel width, drainage area and mean annual precipitation as independent variables, with active-channel width being most significant. The equations for 80-percent exceedance discharge and 95-percent exceedance discharge used only active-channel width as an independent variable. The standard error of estimate for the best equation for estimating mean annual discharge was 27 percent. The standard errors of estimate for the equations were 67 percent for estimating 80-percent exceedance discharge and 75 percent for estimating 95-percent exceedance discharge. (USGS)
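Regional discharge equations of this kind are typically power laws fitted by least squares in log space. A minimal sketch of that fitting step, using synthetic data generated from an assumed relation (the exponents are illustrative, not the report's coefficients):

```python
import math

def fit_power_law(widths, flows):
    """Least-squares fit of log(Q) = log(a) + b * log(W), the usual form
    for channel-width discharge equations; returns (a, b)."""
    xs = [math.log(w) for w in widths]
    ys = [math.log(q) for q in flows]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    a = math.exp(my - b * mx)
    return a, b

# Synthetic data generated exactly from Q = 0.5 * W^1.6:
widths = [2.0, 5.0, 10.0, 20.0]
flows = [0.5 * w ** 1.6 for w in widths]
a, b = fit_power_law(widths, flows)  # recovers a ~= 0.5, b ~= 1.6
```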

  5. Monte Carlo simulations of product distributions and contained metal estimates

    USGS Publications Warehouse

    Gettings, Mark E.

    2013-01-01

    Estimation of product distributions of two factors was simulated by conventional Monte Carlo techniques using factor distributions that were independent (uncorrelated). Several simulations using uniform distributions of factors show that the product distribution has a central peak approximately centered at the product of the medians of the factor distributions. Factor distributions that are peaked, such as Gaussian (normal) produce an even more peaked product distribution. Piecewise analytic solutions can be obtained for independent factor distributions and yield insight into the properties of the product distribution. As an example, porphyry copper grades and tonnages are now available in at least one public database and their distributions were analyzed. Although both grade and tonnage can be approximated with lognormal distributions, they are not exactly fit by them. The grade shows some nonlinear correlation with tonnage for the published database. Sampling by deposit from available databases of grade, tonnage, and geological details of each deposit specifies both grade and tonnage for that deposit. Any correlation between grade and tonnage is then preserved and the observed distribution of grades and tonnages can be used with no assumption of distribution form.
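The uniform-factor case described above is easy to reproduce: the simulated product distribution peaks near (in fact, for this right-skewed product, somewhat below) the product of the factor medians. The factor ranges are illustrative:

```python
import random

random.seed(1)

# Product of two independent uniform factors, e.g. grade ~ U(1, 3) and
# tonnage ~ U(10, 30) in arbitrary units; product of medians is 2 * 20 = 40.
n = 100_000
products = sorted(random.uniform(1, 3) * random.uniform(10, 30)
                  for _ in range(n))
median_product = products[n // 2]  # near 37: a bit below the mean of 40
```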

  6. On an additive partial correlation operator and nonparametric estimation of graphical models.

    PubMed

    Lee, Kuang-Yao; Li, Bing; Zhao, Hongyu

    2016-09-01

    We introduce an additive partial correlation operator as an extension of partial correlation to the nonlinear setting, and use it to develop a new estimator for nonparametric graphical models. Our graphical models are based on additive conditional independence, a statistical relation that captures the spirit of conditional independence without having to resort to high-dimensional kernels for its estimation. The additive partial correlation operator completely characterizes additive conditional independence, and has the additional advantage of putting marginal variation on appropriate scales when evaluating interdependence, which leads to more accurate statistical inference. We establish the consistency of the proposed estimator. Through simulation experiments and analysis of the DREAM4 Challenge dataset, we demonstrate that our method performs better than existing methods in cases where the Gaussian or copula Gaussian assumption does not hold, and that a more appropriate scaling for our method further enhances its performance.

  7. On an additive partial correlation operator and nonparametric estimation of graphical models

    PubMed Central

    Li, Bing; Zhao, Hongyu

    2016-01-01

    Abstract We introduce an additive partial correlation operator as an extension of partial correlation to the nonlinear setting, and use it to develop a new estimator for nonparametric graphical models. Our graphical models are based on additive conditional independence, a statistical relation that captures the spirit of conditional independence without having to resort to high-dimensional kernels for its estimation. The additive partial correlation operator completely characterizes additive conditional independence, and has the additional advantage of putting marginal variation on appropriate scales when evaluating interdependence, which leads to more accurate statistical inference. We establish the consistency of the proposed estimator. Through simulation experiments and analysis of the DREAM4 Challenge dataset, we demonstrate that our method performs better than existing methods in cases where the Gaussian or copula Gaussian assumption does not hold, and that a more appropriate scaling for our method further enhances its performance. PMID:29422689

  8. Estimation of Blood Flow Rates in Large Microvascular Networks

    PubMed Central

    Fry, Brendan C.; Lee, Jack; Smith, Nicolas P.; Secomb, Timothy W.

    2012-01-01

    Objective: Recent methods for imaging microvascular structures provide geometrical data on networks containing thousands of segments. Prediction of functional properties, such as solute transport, requires information on blood flow rates as well, but experimental measurement of many individual flows is difficult. Here, a method is presented for estimating flow rates in a microvascular network based on incomplete information on the flows in the boundary segments that feed and drain the network. Methods: With incomplete boundary data, the equations governing blood flow form an underdetermined linear system. An algorithm was developed that uses independent information about the distribution of wall shear stresses and pressures in microvessels to resolve this indeterminacy, by minimizing the deviation of pressures and wall shear stresses from target values. Results: The algorithm was tested using previously obtained experimental flow data from four microvascular networks in the rat mesentery. With two or three prescribed boundary conditions, predicted flows showed relatively small errors in most segments and fewer than 10% incorrect flow directions on average. Conclusions: The proposed method can be used to estimate flow rates in microvascular networks based on incomplete boundary data, and provides a basis for deducing functional properties of microvessel networks. PMID:22506980
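
    The flow-estimation idea can be sketched, under heavy simplification, as a regularized least-squares problem: satisfy the conservation equations while pulling the solution toward target values (standing in for the wall shear stress and pressure targets). The one-node network and all numbers below are invented for illustration, not the paper's algorithm in detail.

```python
import numpy as np

def estimate_flows(A, b, q_target, weight=0.1):
    """Minimize ||A q - b||^2 + weight * ||q - q_target||^2 in closed form
    via the normal equations."""
    n = A.shape[1]
    lhs = A.T @ A + weight * np.eye(n)
    rhs = A.T @ b + weight * q_target
    return np.linalg.solve(lhs, rhs)

# Toy network: one inflow bifurcating into two outflows, q0 = q1 + q2.
A = np.array([[1.0, -1.0, -1.0]])     # flow conservation at the node
b = np.array([0.0])
q_target = np.array([2.0, 1.0, 1.0])  # hypothetical target flows
q = estimate_flows(A, b, q_target)    # consistent with conservation and targets
```

Because this target already satisfies conservation, the penalized solution reproduces it exactly; with conflicting data, `weight` trades conservation error against deviation from the targets.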

  9. Tool independence for the Web Accessibility Quantitative Metric.

    PubMed

    Vigo, Markel; Brajnik, Giorgio; Arrue, Myriam; Abascal, Julio

    2009-07-01

    The Web Accessibility Quantitative Metric (WAQM) aims at accurately measuring the accessibility of web pages. One of its main features is that it is evaluation-tool independent in ranking and accessibility-monitoring scenarios. This article proposes a method to attain evaluation-tool independence for all foreseeable scenarios. After demonstrating that homepages have a more similar error profile than any other page in a given web site, 15 homepages were measured with 10,000 different values of the WAQM parameters using EvalAccess and LIFT, two automatic accessibility evaluation tools. A similar procedure was followed with random pages and with several test files, yielding several parameter tuples that minimise the difference between the two tools. A total of 1,449 web pages from 15 web sites were then measured with these tuples, and the values that minimised the difference between the tools were selected. Once the WAQM was tuned, the accessibility of the 15 web sites was measured with two web-site metrics, leading to the conclusion that even though similar values can be produced, expecting identical scores is undesirable, since the evaluation tools behave differently.

  10. Model-independent confirmation of the Z(4430)⁻ state

    DOE PAGES

    Aaij, R.; Adeva, B.; Adinolfi, M.; ...

    2015-12-29

    Here, the decay B⁰ → ψ(2S)K⁺π⁻ is analyzed using 3 fb⁻¹ of pp collision data collected with the LHCb detector. A model-independent description of the ψ(2S)π mass spectrum is obtained, using as input the Kπ mass spectrum and angular distribution derived directly from data, without requiring a theoretical description of resonance shapes or their interference. The hypothesis that the ψ(2S)π mass spectrum can be described in terms of Kπ reflections alone is rejected with more than 8σ significance. This provides confirmation, in a model-independent way, of the need for an additional resonant component in the mass region of the Z(4430)⁻ exotic state.

  11. Two-port connecting-layer-based sandwiched grating by a polarization-independent design.

    PubMed

    Li, Hongtao; Wang, Bo

    2017-05-02

    In this paper, a two-port connecting-layer-based sandwiched beam-splitter grating with polarization-independent behavior is designed and reported. The grating separates the transmitted polarized light into two diffraction orders of equal energy, realizing a nearly 50/50 output with good uniformity. For the given wavelength of 800 nm and period of 780 nm, a simplified modal method yields an optimal duty cycle, from which an estimate of the grating depth can be calculated. To obtain precise grating parameters, rigorous coupled-wave analysis is employed to optimize the grating depth and the thickness of the connecting layer. Based on the optimized design, a high-efficiency two-port output grating with wideband performance is obtained. Importantly, the diffraction efficiencies calculated by the two analytical methods agree well with each other. The grating is therefore significant as a practical photonic element in optical engineering.

  12. Electrical conduction of organic ultrathin films evaluated by an independently driven double-tip scanning tunneling microscope.

    PubMed

    Takami, K; Tsuruta, S; Miyake, Y; Akai-Kasaya, M; Saito, A; Aono, M; Kuwahara, Y

    2011-11-02

    The electrical transport properties of organic thin films on the micrometer scale have been evaluated by a laboratory-built, independently driven double-tip scanning tunneling microscope operating under ambient conditions. The two tips were used as point-contact electrodes, and currents in the range from 0.1 pA to 100 nA flowing between the two tips through the material can be detected. We demonstrated two-dimensional contour mapping of the electrical resistance of a poly(3-octylthiophene) thin film. The obtained contour map clearly imaged the two-dimensional electrical conductance between the two point electrodes on the film. The conductivity of the thin film was estimated to be (1–8) × 10⁻⁶ S cm⁻¹. Future prospects and the desired development of multiprobe STMs are also discussed.

  13. Estimating Contact Exposure in Football Using the Head Impact Exposure Estimate

    PubMed Central

    Littleton, Ashley C.; Cox, Leah M.; DeFreese, J.D.; Varangis, Eleanna; Lynall, Robert C.; Schmidt, Julianne D.; Marshall, Stephen W.; Guskiewicz, Kevin M.

    2015-01-01

    Over the past decade, there has been significant debate regarding the effect of cumulative subconcussive head impacts on short and long-term neurological impairment. This debate remains unresolved, because valid epidemiological estimates of athletes' total contact exposure are lacking. We present a measure to estimate the total hours of contact exposure in football over the majority of an athlete's lifespan. Through a structured oral interview, former football players provided information related to primary position played and participation in games and practice contacts during the pre-season, regular season, and post-season of each year of their high school, college, and professional football careers. Spring football for college was also included. We calculated contact exposure estimates for 64 former football players (n=32 college football only, n=32 professional and college football). The head impact exposure estimate (HIEE) discriminated between individuals who stopped after college football, and individuals who played professional football (p<0.001). The HIEE measure was independent of concussion history (p=0.82). Estimating total hours of contact exposure may allow for the detection of differences between individuals with variation in subconcussive impacts, regardless of concussion history. This measure is valuable for the surveillance of subconcussive impacts and their associated potential negative effects. PMID:25603189

  14. Estimating Contact Exposure in Football Using the Head Impact Exposure Estimate.

    PubMed

    Kerr, Zachary Y; Littleton, Ashley C; Cox, Leah M; DeFreese, J D; Varangis, Eleanna; Lynall, Robert C; Schmidt, Julianne D; Marshall, Stephen W; Guskiewicz, Kevin M

    2015-07-15

    Over the past decade, there has been significant debate regarding the effect of cumulative subconcussive head impacts on short and long-term neurological impairment. This debate remains unresolved, because valid epidemiological estimates of athletes' total contact exposure are lacking. We present a measure to estimate the total hours of contact exposure in football over the majority of an athlete's lifespan. Through a structured oral interview, former football players provided information related to primary position played and participation in games and practice contacts during the pre-season, regular season, and post-season of each year of their high school, college, and professional football careers. Spring football for college was also included. We calculated contact exposure estimates for 64 former football players (n = 32 college football only, n = 32 professional and college football). The head impact exposure estimate (HIEE) discriminated between individuals who stopped after college football, and individuals who played professional football (p < 0.001). The HIEE measure was independent of concussion history (p = 0.82). Estimating total hours of contact exposure may allow for the detection of differences between individuals with variation in subconcussive impacts, regardless of concussion history. This measure is valuable for the surveillance of subconcussive impacts and their associated potential negative effects.
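
    The exposure measure described above is, at heart, a sum of hours over seasons of games and practices. A minimal sketch of that bookkeeping, with an invented record structure and numbers rather than the actual HIEE instrument:

```python
def contact_exposure_hours(seasons):
    """Total hours of contact exposure across a career. Each season is
    (games, hours_per_game, practices, hours_per_practice); this structure
    and the values below are illustrative assumptions, not the HIEE itself."""
    return sum(g * hg + p * hp for g, hg, p, hp in seasons)

career = [
    (10, 2.0, 40, 1.5),  # a high-school season (hypothetical numbers)
    (12, 2.5, 60, 1.0),  # a college season (hypothetical numbers)
]
total_hours = contact_exposure_hours(career)  # 80 + 90 = 170 hours
```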

  15. Inter-rater reliability of motor unit number estimates and quantitative motor unit analysis in the tibialis anterior muscle.

    PubMed

    Boe, S G; Dalton, B H; Harwood, B; Doherty, T J; Rice, C L

    2009-05-01

    To establish the inter-rater reliability of decomposition-based quantitative electromyography (DQEMG) derived motor unit number estimates (MUNEs) and quantitative motor unit (MU) analysis. Using DQEMG, two examiners independently obtained a sample of needle and surface-detected motor unit potentials (MUPs) from the tibialis anterior muscle from 10 subjects. Coupled with a maximal M wave, surface-detected MUPs were used to derive a MUNE for each subject and each examiner. Additionally, size-related parameters of the individual MUs were obtained following quantitative MUP analysis. Test-retest MUNE values were similar with high reliability observed between examiners (ICC=0.87). Additionally, MUNE variability from test-retest as quantified by a 95% confidence interval was relatively low (+/-28 MUs). Lastly, quantitative data pertaining to MU size, complexity and firing rate were similar between examiners. MUNEs and quantitative MU data can be obtained with high reliability by two independent examiners using DQEMG. Establishing the inter-rater reliability of MUNEs and quantitative MU analysis using DQEMG is central to the clinical applicability of the technique. In addition to assessing response to treatments over time, multiple clinicians may be involved in the longitudinal assessment of the MU pool of individuals with disorders of the central or peripheral nervous system.
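
    A MUNE of the kind discussed here is commonly formed by dividing the maximal M-wave size by the mean size of the sampled surface-detected MUPs. The sketch below uses that generic textbook formulation, not the specific DQEMG pipeline, and the amplitudes are invented illustrative values.

```python
def motor_unit_number_estimate(m_wave_amplitude, smup_amplitudes):
    """Generic MUNE: maximal M-wave size / mean surface-detected MUP size.
    Units must match (e.g. both in uV); values below are hypothetical."""
    mean_smup = sum(smup_amplitudes) / len(smup_amplitudes)
    return round(m_wave_amplitude / mean_smup)

# Hypothetical sample: five SMUPs averaging 40 uV under a 6000 uV M wave.
mune = motor_unit_number_estimate(6000.0, [35.0, 45.0, 40.0, 38.0, 42.0])
```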

  16. RAVE—a Detector-independent vertex reconstruction toolkit

    NASA Astrophysics Data System (ADS)

    Waltenberger, Wolfgang; Mitaroff, Winfried; Moser, Fabian

    2007-10-01

    A detector-independent toolkit for vertex reconstruction (RAVE) is being developed, along with a standalone framework (VERTIGO) for testing, analyzing and debugging. The core algorithms represent the state of the art for geometric vertex finding and fitting by both linear (Kalman filter) and robust estimation methods. The main design goals are ease of use, flexibility for embedding into existing software frameworks, extensibility, and openness. The implementation is based on modern object-oriented techniques, is coded in C++ with interfaces for Java and Python, and follows an open-source approach. A beta release is available. VERTIGO stands for "vertex reconstruction toolkit and interface to generic objects".

  17. Statistics of Sxy estimates

    NASA Technical Reports Server (NTRS)

    Freilich, M. H.; Pawka, S. S.

    1987-01-01

    The statistics of Sxy estimates derived from orthogonal-component measurements are examined. Based on results of Goodman (1957), the probability density function (pdf) for Sxy(f) estimates is derived, and a closed-form solution for arbitrary moments of the distribution is obtained. Characteristic functions are used to derive the exact pdf of Sxy(tot). In practice, a simple Gaussian approximation is found to be highly accurate even for relatively few degrees of freedom. Implications for experiment design are discussed, and a maximum-likelihood estimator for a posteriori estimation is outlined.

  18. The hydrogen tunneling splitting in malonaldehyde: A full-dimensional time-independent quantum mechanical method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wu, Feng; Ren, Yinghui; Bian, Wensheng, E-mail: bian@iccas.ac.cn

    The accurate time-independent quantum dynamics calculations on the ground-state tunneling splitting of malonaldehyde in full dimensionality are reported for the first time. This is achieved with an efficient method developed by us. In our method, the basis functions are customized for the hydrogen transfer process, which has the effect of greatly reducing the size of the final Hamiltonian matrix, and the Lanczos method and a parallel strategy are used to further overcome the memory and central-processing-unit time bottlenecks. The obtained ground-state tunneling splitting of 24.5 cm⁻¹ is in excellent agreement with the benchmark value of 23.8 cm⁻¹ computed with the full-dimensional, multi-configurational time-dependent Hartree approach on the same potential energy surface, and we estimate that our reported value has an uncertainty of less than 0.5 cm⁻¹. Moreover, the role of various vibrational modes strongly coupled to the hydrogen transfer process is revealed.

  19. Estimation of Genetic Relationships Between Individuals Across Cohorts and Platforms: Application to Childhood Height.

    PubMed

    Fedko, Iryna O; Hottenga, Jouke-Jan; Medina-Gomez, Carolina; Pappa, Irene; van Beijsterveldt, Catharina E M; Ehli, Erik A; Davies, Gareth E; Rivadeneira, Fernando; Tiemeier, Henning; Swertz, Morris A; Middeldorp, Christel M; Bartels, Meike; Boomsma, Dorret I

    2015-09-01

    Combining genotype data across cohorts increases the power to estimate the heritability due to common single nucleotide polymorphisms (SNPs), based on analyzing a Genetic Relationship Matrix (GRM). However, the combination of SNP data across multiple cohorts may lead to stratification when, for example, different genotyping platforms are used. In the current study, we address issues of combining SNP data from two cohorts, the Netherlands Twin Register (NTR) and the Generation R (GENR) study. Both cohorts include children of Northern European Dutch background (N = 3102 and 2826, respectively) who were genotyped on different platforms. We explore imputation and phasing as a tool and compare three GRM-building strategies, in which data from the two cohorts are (1) simply combined, (2) pre-combined and cross-platform imputed and (3) cross-platform imputed and post-combined. We test these three strategies with data on childhood height for unrelated individuals (N = 3124, average age 6.7 years) to explore their effect on SNP-heritability estimates, and compare the results to those obtained from the independent studies. All combination strategies result in SNP-heritability estimates with a standard error smaller than those of the independent studies. We did not observe a significant difference in estimates of SNP-heritability based on the various cross-platform imputed GRMs. The SNP-heritability of childhood height was on average estimated as 0.50 (SE = 0.10). Introducing cohort as a covariate resulted in an approximately 2 % drop. Principal component (PC) adjustment resulted in SNP-heritability estimates of about 0.39 (SE = 0.11). Strikingly, we did not find a significant difference between cross-platform imputed and simply combined GRMs. All estimates were significant regardless of the use of PC adjustment. Based on these analyses we conclude that imputation with a reference set helps to increase the power to estimate SNP-heritability by combining cohorts of the same ethnicity genotyped on different platforms. However

  20. Comparable Ages for the Independent Origins of Electrogenesis in African and South American Weakly Electric Fishes

    PubMed Central

    Lavoué, Sébastien; Miya, Masaki; Arnegard, Matthew E.; Sullivan, John P.; Hopkins, Carl D.; Nishida, Mutsumi

    2012-01-01

    One of the most remarkable examples of convergent evolution among vertebrates is illustrated by the independent origins of an active electric sense in South American and African weakly electric fishes, the Gymnotiformes and Mormyroidea, respectively. These groups independently evolved similar complex systems for object localization and communication via the generation and reception of weak electric fields. While good estimates of divergence times are critical to understanding the temporal context for the evolution and diversification of these two groups, their respective ages have been difficult to estimate due to the absence of an informative fossil record, the use of strict molecular clock models in previous studies, and/or incomplete taxonomic sampling. Here, we examine the timing of the origins of the Gymnotiformes and the Mormyroidea using complete mitogenome sequences and a parametric Bayesian method for divergence time reconstruction. Under two different fossil-based calibration methods, we estimated similar ages for the independent origins of the Mormyroidea and Gymnotiformes. Our absolute estimates for the origins of these groups either slightly postdate, or just predate, the final separation of Africa and South America by continental drift. The most recent common ancestor of the Mormyroidea and Gymnotiformes was found to be a non-electrogenic basal teleost living more than 85 million years earlier. For both electric fish lineages, we also estimated similar intervals (16–19 or 22–26 million years, depending on calibration method) between the appearance of electroreception and the origin of myogenic electric organs, providing rough upper estimates of the time periods during which these complex electric organs evolved de novo from skeletal muscle precursors. The fact that the Gymnotiformes and Mormyroidea are of similar age enhances the comparative value of the weakly electric fish system for investigating pathways to evolutionary novelty, as well as the

  1. Estimation of suspended-sediment rating curves and mean suspended-sediment loads

    USGS Publications Warehouse

    Crawford, Charles G.

    1991-01-01

    A simulation study was done to evaluate: (1) the accuracy and precision of parameter estimates for the bias-corrected, transformed-linear and non-linear models obtained by the method of least squares; and (2) the accuracy of mean suspended-sediment loads calculated by the flow-duration, rating-curve method using model parameters obtained by the alternative methods. Parameter estimates obtained by least squares for the bias-corrected, transformed-linear model were considerably more precise than those obtained for the non-linear or weighted non-linear model. The accuracy of parameter estimates obtained for the bias-corrected, transformed-linear and weighted non-linear models was similar and was much greater than the accuracy obtained by non-linear least squares. The improved parameter estimates obtained by the bias-corrected, transformed-linear or weighted non-linear model yield estimates of mean suspended-sediment load, calculated by the flow-duration, rating-curve method, that are more accurate and precise than those obtained for the non-linear model.
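
    A transformed-linear rating curve of the kind compared above fits log(load) against log(flow) and then corrects the back-transform bias. The sketch below uses the parametric correction factor exp(s²/2), one common choice, as a generic illustration rather than the report's exact estimators.

```python
import math

def fit_rating_curve(flows, loads):
    """Least-squares fit of log(load) = a + b*log(flow); returns intercept,
    slope, and the parametric bias-correction factor exp(s^2/2) used when
    back-transforming (one common correction, sketched generically)."""
    x = [math.log(q) for q in flows]
    y = [math.log(c) for c in loads]
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    a = my - b * mx
    residuals = [yi - (a + b * xi) for xi, yi in zip(x, y)]
    s2 = sum(r * r for r in residuals) / (n - 2)  # residual variance
    return a, b, math.exp(s2 / 2.0)

def predict_load(flow, a, b, bias_correction):
    """Back-transformed prediction with the bias-correction factor applied."""
    return bias_correction * math.exp(a + b * math.log(flow))
```

Without the correction factor, the naive back-transform exp(a + b·log Q) systematically underestimates the mean load, which is the bias the transformed-linear model must remove.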

  2. Glenn Extreme Environments Rig (GEER) Independent Review

    NASA Technical Reports Server (NTRS)

    Jankovsky, Robert S.; Smiles, Michael D.; George, Mark A.; Ton, Mimi C.; Le, Son K.

    2015-01-01

    The Chief of the Space Science Project Office at Glenn Research Center (GRC) requested support from the NASA Engineering and Safety Center (NESC) to satisfy a request from the Science Mission Directorate (SMD) Associate Administrator and the Planetary Science Division Chief to obtain an independent review of the Glenn Extreme Environments Rig (GEER) and the operational controls in place for mitigating any hazard associated with its operation. This document contains the outcome of the NESC assessment.

  3. A new approach to estimating trends in chlamydia incidence.

    PubMed

    Ali, Hammad; Cameron, Ewan; Drovandi, Christopher C; McCaw, James M; Guy, Rebecca J; Middleton, Melanie; El-Hayek, Carol; Hocking, Jane S; Kaldor, John M; Donovan, Basil; Wilson, David P

    2015-11-01

    Directly measuring disease incidence in a population is difficult and not feasible to do routinely. We describe the development and application of a new method for estimating at a population level the number of incident genital chlamydia infections, and the corresponding incidence rates, by age and sex using routine surveillance data. A Bayesian statistical approach was developed to calibrate the parameters of a decision-pathway tree against national data on numbers of notifications and tests conducted (2001-2013). Independent beta probability density functions were adopted for priors on the time-independent parameters; the shapes of these beta parameters were chosen to match prior estimates sourced from peer-reviewed literature or expert opinion. To best facilitate the calibration, multivariate Gaussian priors on (the logistic transforms of) the time-dependent parameters were adopted, using the Matérn covariance function to favour small changes over consecutive years and across adjacent age cohorts. The model outcomes were validated by comparing them with other independent empirical epidemiological measures, that is, prevalence and incidence as reported by other studies. Model-based estimates suggest that the total number of people acquiring chlamydia per year in Australia has increased by ∼120% over 12 years. Nationally, an estimated 356 000 people acquired chlamydia in 2013, which is 4.3 times the number of reported diagnoses. This corresponded to a chlamydia annual incidence estimate of 1.54% in 2013, increased from 0.81% in 2001 (∼90% increase). We developed a statistical method which uses routine surveillance (notifications and testing) data to produce estimates of the extent and trends in chlamydia incidence.
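
    Encoding a literature-sourced point estimate as a beta prior, as described above, can be done by simple moment matching on the mean and concentration. This is a generic sketch of that step, not the study's actual priors; the numbers are illustrative.

```python
def beta_shapes_from_mean(mean, concentration):
    """Return (alpha, beta) for a Beta prior with the given mean and
    concentration (alpha + beta). The example values below are invented,
    not parameters from the calibration described in the abstract."""
    alpha = mean * concentration
    return alpha, concentration - alpha

# e.g. a hypothetical prior belief that ~30% of infections are diagnosed,
# held with moderate confidence (concentration 20) -> roughly Beta(6, 14).
a, b = beta_shapes_from_mean(0.30, 20.0)
```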

  4. Multiple Component Event-Related Potential (mcERP) Estimation

    NASA Technical Reports Server (NTRS)

    Knuth, K. H.; Clanton, S. T.; Shah, A. S.; Truccolo, W. A.; Ding, M.; Bressler, S. L.; Trejo, L. J.; Schroeder, C. E.; Clancy, Daniel (Technical Monitor)

    2002-01-01

    We show how model-based estimation of the neural sources responsible for transient neuroelectric signals can be improved by the analysis of single-trial data. Previously, we showed that a multiple component event-related potential (mcERP) algorithm can extract the responses of individual sources from recordings of a mixture of multiple, possibly interacting, neural ensembles. The mcERP algorithm also estimated single-trial amplitudes and onset latencies, thus allowing more accurate estimation of ongoing neural activity during an experimental trial. The mcERP algorithm is related to infomax independent component analysis (ICA); however, the underlying signal model is more physiologically realistic in that a component is modeled as a stereotypic waveshape varying both in amplitude and onset latency from trial to trial. The result is a model that reflects quantities of interest to the neuroscientist. Here we demonstrate that the mcERP algorithm provides more accurate results than more traditional methods such as factor analysis and the more recent ICA. Whereas factor analysis assumes the sources are orthogonal and ICA assumes the sources are statistically independent, the mcERP algorithm makes no such assumptions, thus allowing investigators to examine interactions among components by estimating the properties of single-trial responses.

  5. Covariance Matrix Evaluations for Independent Mass Fission Yields

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Terranova, N., E-mail: nicholas.terranova@unibo.it; Serot, O.; Archier, P.

    2015-01-15

    Recent needs for more accurate fission product yields include covariance information to allow improved uncertainty estimations of the parameters used by design codes. The aim of this work is to investigate the possibility of generating more reliable and complete uncertainty information on independent mass fission yields. Mass yield covariances are estimated through a convolution between the multi-Gaussian empirical model based on Brosa's fission modes, which describes the pre-neutron mass yields, and the average prompt neutron multiplicity curve. The covariance generation task has been approached using the Bayesian generalized least-squares method through the CONRAD code. Preliminary results on the mass yield variance-covariance matrix will be presented and discussed on physical grounds for the ²³⁵U(n_th, f) and ²³⁹Pu(n_th, f) reactions.

  6. Estimating the magnitude of peak discharges for selected flood frequencies on small streams in South Carolina (1975)

    USGS Publications Warehouse

    Whetstone, B.H.

    1982-01-01

    A program to collect and analyze flood data from small streams in South Carolina was conducted from 1967-75, as a cooperative research project with the South Carolina Department of Highways and Public Transportation and the Federal Highway Administration. As a result of that program, a technique is presented for estimating the magnitude and frequency of floods on small streams in South Carolina with drainage areas ranging in size from 1 to 500 square miles. Peak-discharge data from 74 stream-gaging stations (25 small streams were synthesized, whereas 49 stations had long-term records) were used in multiple regression procedures to obtain equations for estimating magnitude of floods having recurrence intervals of 10, 25, 50, and 100 years on small natural streams. The significant independent variable was drainage area. Equations were developed for the three physiographic provinces of South Carolina (Coastal Plain, Piedmont, and Blue Ridge) and can be used for estimating floods on small streams. (USGS)
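
    Regression equations of this kind take the power-law form Q_T = a·A^b with drainage area A as the independent variable. The sketch below shows only the functional form; the coefficient and exponent are invented placeholders, not the report's province-specific values.

```python
def peak_discharge(drainage_area_sq_mi, coefficient, exponent):
    """Regional flood-frequency regression of the form Q_T = a * A^b.
    'coefficient' and 'exponent' are placeholders here; the actual values
    differ by physiographic province and recurrence interval and must be
    taken from the publication itself."""
    return coefficient * drainage_area_sq_mi ** exponent

# Hypothetical 100-year equation, for illustration only:
q100 = peak_discharge(50.0, coefficient=250.0, exponent=0.7)
```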

  7. Improved automatic estimation of winds at the cloud top of Venus using superposition of cross-correlation surfaces

    NASA Astrophysics Data System (ADS)

    Ikegawa, Shinichi; Horinouchi, Takeshi

    2016-06-01

    Accurate wind observation is key to the study of atmospheric dynamics. A new automated cloud-tracking method for the dayside of Venus is proposed and evaluated using the ultraviolet images obtained by the Venus Monitoring Camera onboard the Venus Express orbiter. It uses multiple images obtained successively over a few hours. Cross-correlations are computed from pair combinations of the images and are superposed to identify cloud advection. It is shown that the superposition improves the accuracy of velocity estimation and significantly reduces the false pattern matches that cause large errors. Two methods to evaluate the accuracy of each of the obtained cloud motion vectors are proposed. One relies on the confidence bounds of the cross-correlation, with consideration of anisotropic cloud morphology. The other relies on the comparison of two independent estimates obtained by separating the successive images into two groups. The two evaluations can be combined to screen the results. It is shown that the accuracy of the screened vectors is very high equatorward of 30 degrees, while it is relatively low at higher latitudes. Analysis of the screened vectors supports the previously reported existence of day-to-day large-scale variability at the cloud deck of Venus, and further suggests smaller-scale features. The product of this study is expected to advance the study of the dynamics of the venusian atmosphere.
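
    The core idea of superposing cross-correlation surfaces from several image pairs can be sketched in one dimension: individually noisy correlation surfaces are summed so that the true displacement peak reinforces while spurious matches average out. The signals, shift, and noise levels below are arbitrary illustrations, not VMC data.

```python
import numpy as np

def xcorr_surface(a, b, max_lag):
    """Normalized cross-correlation of b against a over integer lags;
    the peak lag estimates the displacement of b relative to a."""
    out = []
    for lag in range(-max_lag, max_lag + 1):
        if lag >= 0:
            x, y = a[:len(a) - lag], b[lag:]
        else:
            x, y = a[-lag:], b[:len(b) + lag]
        x = x - x.mean()
        y = y - y.mean()
        out.append(float((x * y).sum()
                         / (np.sqrt((x * x).sum() * (y * y).sum()) + 1e-12)))
    return np.array(out)

# Six noisy observations of the same pattern, displaced by 3 samples.
rng = np.random.default_rng(0)
pattern = rng.normal(size=200)
pairs = [(pattern + 0.8 * rng.normal(size=200),
          np.roll(pattern, 3) + 0.8 * rng.normal(size=200))
         for _ in range(6)]

# Superpose the per-pair correlation surfaces, then locate the single peak.
superposed = np.sum([xcorr_surface(a, b, 10) for a, b in pairs], axis=0)
estimated_shift = int(np.argmax(superposed)) - 10
```

Any single pair's surface may have secondary peaks of comparable height; the superposed surface makes the true displacement stand out clearly.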

  8. Slant path rain attenuation and path diversity statistics obtained through radar modeling of rain structure

    NASA Technical Reports Server (NTRS)

    Goldhirsh, J.

    1984-01-01

    Single and joint terminal slant-path attenuation statistics at frequencies of 28.56 and 19.04 GHz have been derived, employing a radar data base obtained over a three-year period at Wallops Island, VA. Statistics were independently obtained for path elevation angles of 20, 45, and 90 deg for the purpose of examining how elevation angle influences both single-terminal and joint probability distributions. Both the diversity gains and the dependence of the autocorrelation function on site spacing and elevation angle were determined employing the radar modeling results. Comparisons with other investigators are presented. An independent path-elevation-angle prediction technique was developed and demonstrated to fit well with the radar-derived single and joint terminal cumulative fade distributions at various elevation angles.

  9. Wound Blush Obtainment Is the Most Important Angiographic Endpoint for Wound Healing.

    PubMed

    Utsunomiya, Makoto; Takahara, Mitsuyoshi; Iida, Osamu; Yamauchi, Yasutaka; Kawasaki, Daizo; Yokoi, Yoshiaki; Soga, Yoshimistu; Ohura, Norihiko; Nakamura, Masato

    2017-01-23

    This study aimed to assess the optimal angiographic endpoint of endovascular therapy (EVT) for wound healing. Several reports have demonstrated acceptable patency and limb salvage rates following infrapopliteal interventions for the treatment of critical limb ischemia (CLI). However, the optimal angiographic endpoint of EVT remains unclear. We conducted a subanalysis of the prospective multicenter OLIVE (Endovascular Treatment for Infrainguinal Vessels in Patients with Critical Limb Ischemia) registry investigation assessing patients who received infrainguinal EVT for CLI. We analyzed data from 185 limbs with ischemic ulcerations classified as Rutherford class 5 or 6, managed with EVT alone (i.e., not undergoing bypass surgery). The wound healing rate after EVT was estimated by the Kaplan-Meier method. The association between the final angiographic data and wound healing was assessed employing a Cox proportional hazards model. The overall wound healing rate was 73.5%. The probability of wound healing in patients with wound blush obtainment was significantly higher than in those without wound blush (79.6% vs. 46.5%; p = 0.01). In the multivariate analysis, wound blush obtainment was an independent predictor of wound healing. The presence of wound blush after EVT is significantly associated with wound healing. Wound blush as an angiographic endpoint for EVT may serve as a novel predictor of wound healing in patients with CLI.
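
    The Kaplan-Meier estimate used above is a product over event times; a minimal sketch follows, where the "event" is healing (so 1 − S(t) is the cumulative healing rate) and the follow-up data are invented, not the OLIVE registry's.

```python
def kaplan_meier(times, events):
    """Kaplan-Meier estimate of S(t) as a list of (time, survival) steps.

    times  : follow-up time for each limb (hypothetical data below)
    events : 1 if the wound healed at that time, 0 if censored
    """
    survival = 1.0
    curve = []
    for t in sorted({ti for ti, ei in zip(times, events) if ei == 1}):
        at_risk = sum(1 for ti in times if ti >= t)
        n_events = sum(1 for ti, ei in zip(times, events) if ti == t and ei == 1)
        survival *= 1.0 - n_events / at_risk
        curve.append((t, survival))
    return curve

# Four limbs: healed at week 1, healed at week 2, censored at week 3,
# healed at week 4.
curve = kaplan_meier([1, 2, 3, 4], [1, 1, 0, 1])
```

Note how the censored limb leaves the risk set without triggering a step, which is exactly what distinguishes this estimator from a naive healed/total fraction.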

  10. Effective time-independent analysis for quantum kicked systems.

    PubMed

    Bandyopadhyay, Jayendra N; Guha Sarkar, Tapomoy

    2015-03-01

    We present a mapping of potentially chaotic time-dependent quantum kicked systems to an equivalent approximate effective time-independent scenario, whereby the system is rendered integrable. The time evolution is factorized into an initial kick, followed by an evolution dictated by a time-independent Hamiltonian and a final kick. This method is applied to the kicked top model. The effective time-independent Hamiltonian thus obtained does not suffer from spurious divergences encountered if the traditional Baker-Campbell-Hausdorff treatment is used. The quasienergy spectrum of the Floquet operator is found to be in excellent agreement with the energy levels of the effective Hamiltonian for a wide range of system parameters. The density of states for the effective system exhibits sharp peaklike features, pointing towards quantum criticality. The dynamics in the classical limit of the integrable effective Hamiltonian shows remarkable agreement with the nonintegrable map corresponding to the actual time-dependent system in the nonchaotic regime. This suggests that the effective Hamiltonian serves as a substitute for the actual system in the nonchaotic regime at both the quantum and classical level.

  11. Effective time-independent analysis for quantum kicked systems

    NASA Astrophysics Data System (ADS)

    Bandyopadhyay, Jayendra N.; Guha Sarkar, Tapomoy

    2015-03-01

    We present a mapping of potentially chaotic time-dependent quantum kicked systems to an equivalent approximate effective time-independent scenario, whereby the system is rendered integrable. The time evolution is factorized into an initial kick, followed by an evolution dictated by a time-independent Hamiltonian and a final kick. This method is applied to the kicked top model. The effective time-independent Hamiltonian thus obtained does not suffer from spurious divergences encountered if the traditional Baker-Campbell-Hausdorff treatment is used. The quasienergy spectrum of the Floquet operator is found to be in excellent agreement with the energy levels of the effective Hamiltonian for a wide range of system parameters. The density of states for the effective system exhibits sharp peaklike features, pointing towards quantum criticality. The dynamics in the classical limit of the integrable effective Hamiltonian shows remarkable agreement with the nonintegrable map corresponding to the actual time-dependent system in the nonchaotic regime. This suggests that the effective Hamiltonian serves as a substitute for the actual system in the nonchaotic regime at both the quantum and classical level.

  12. Field dependence-independence as related to young women's participation in sports activity.

    PubMed

    Lambrecht, Jeanne L; Cuevas, Jacqueline L

    2007-06-01

    To estimate the association between field dependence-independence, measured by scores on the Group Embedded Figures Test, and young women's participation in sports activity. Participants were 37 undergraduate college women between the ages of 18 and 25 years (M=21). Participants were categorized into two groups, one high in participation in sports activity and one low. A one-tailed independent samples t test yielded no significant difference. Correlations of .36 and .18 were significant but accounted for little common variance. An ad hoc analysis that excluded participants who reported softball activity but were otherwise highly involved in sports activities was significant.
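    The pooled-variance independent-samples t test used in this study can be sketched as follows; the implementation and the GEFT scores below are hypothetical, for illustration only.

    ```python
    import numpy as np

    def two_sample_t(x, y):
        """Pooled-variance (independent samples) two-sample t statistic.

        Returns the t statistic and degrees of freedom; for a one-tailed test,
        the p-value is the upper-tail probability of a t distribution with
        these degrees of freedom.
        """
        x, y = np.asarray(x, float), np.asarray(y, float)
        nx, ny = len(x), len(y)
        # pooled estimate of the common variance
        sp2 = ((nx - 1) * x.var(ddof=1) + (ny - 1) * y.var(ddof=1)) / (nx + ny - 2)
        t = (x.mean() - y.mean()) / np.sqrt(sp2 * (1.0 / nx + 1.0 / ny))
        return t, nx + ny - 2

    # Hypothetical GEFT scores for high- and low-participation groups
    t, df = two_sample_t([14, 12, 15, 11, 13, 16], [10, 13, 9, 12, 8, 11])
    print(round(t, 2), df)
    ```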

  13. Tau-independent Phase Analysis: A Novel Method for Accurately Determining Phase Shifts.

    PubMed

    Tackenberg, Michael C; Jones, Jeff R; Page, Terry L; Hughey, Jacob J

    2018-06-01

    Estimations of period and phase are essential in circadian biology. While many techniques exist for estimating period, comparatively few methods are available for estimating phase. Current approaches to analyzing phase often vary between studies and are sensitive to coincident changes in period and the stage of the circadian cycle at which the stimulus occurs. Here we propose a new technique, tau-independent phase analysis (TIPA), for quantifying phase shifts in multiple types of circadian time-course data. Through comprehensive simulations, we show that TIPA is both more accurate and more precise than the standard actogram approach. TIPA is computationally simple and therefore will enable accurate and reproducible quantification of phase shifts across multiple subfields of chronobiology.

  14. Comparison of Kalman filter estimates of zenith atmospheric path delays using the global positioning system and very long baseline interferometry

    NASA Technical Reports Server (NTRS)

    Tralli, David M.; Lichten, Stephen M.; Herring, Thomas A.

    1992-01-01

    Kalman filter estimates of zenith nondispersive atmospheric path delays at Westford, Massachusetts, Fort Davis, Texas, and Mojave, California, were obtained from independent analyses of data collected during January and February 1988 using the GPS and VLBI. The apparent accuracy of the path delays is inferred by examining the estimates and covariances from both sets of data. The ability of the geodetic data to resolve zenith path delay fluctuations is determined by further comparing the GPS Kalman filter estimates with corresponding wet path delays derived from water vapor radiometric data available at Mojave over two 8-hour data spans within the comparison period. GPS and VLBI zenith path delay estimates agree well within one standard deviation formal uncertainties (from 10-20 mm for GPS and 3-15 mm for VLBI) in four out of the five possible comparisons, with maximum differences of 5 and 21 mm over 8- to 12-hour data spans.
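    The one-standard-deviation agreement check described above amounts to testing whether two independent estimates differ by less than their combined formal uncertainty. A minimal sketch, with hypothetical numbers:

    ```python
    import math

    def agree_within_k_sigma(est_a, sig_a, est_b, sig_b, k=1.0):
        """True if two independent estimates agree within k combined
        standard deviations, |a - b| <= k * sqrt(sig_a**2 + sig_b**2)."""
        return abs(est_a - est_b) <= k * math.sqrt(sig_a ** 2 + sig_b ** 2)

    # Hypothetical zenith path delays: GPS 120 +/- 15 mm vs. VLBI 112 +/- 10 mm
    print(agree_within_k_sigma(120.0, 15.0, 112.0, 10.0))  # True: 8 <= sqrt(325)
    ```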

  15. Similar negative impacts of temperature on global wheat yield estimated by three independent methods

    USDA-ARS?s Scientific Manuscript database

    The potential impact of global temperature change on global wheat production has recently been assessed with different methods, scaling and aggregation approaches. Here we show that grid-based simulations, point-based simulations, and statistical regressions produce similar estimates of temperature ...

  16. Estimating the impact of grouping misclassification on risk ...

    EPA Pesticide Factsheets

    Environmental health risk assessments of chemical mixtures that rely on component approaches often begin by grouping the chemicals of concern according to toxicological similarity. Approaches that assume dose addition typically are used for groups of similarly-acting chemicals and those that assume response addition are used for groups of independently acting chemicals. Grouping criteria for similarity can include a common adverse outcome pathway (AOP) and similarly shaped dose-response curves, with the latter used in the relative potency factor (RPF) method for estimating mixture response. Independence of toxic action is generally assumed if there is evidence that the chemicals act by different mechanisms. Several questions arise about the potential for misclassification error in the mixture risk prediction. If a common AOP has been established, how much error could there be if the same dose-response curve shape is assumed for all chemicals, when the shapes truly differ and, conversely, what is the error potential if different shapes are assumed when they are not? In particular, how do those concerns impact the choice of index chemical and uncertainty of the RPF-estimated mixture response? What is the quantitative impact if dose additivity is assumed when complete or partial independence actually holds and vice versa? These concepts and implications will be presented with numerical examples in the context of uncertainty of the RPF-estimated mixture response,
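    The relative potency factor step described above can be sketched as follows: each component dose is scaled into an index-chemical-equivalent dose before the index chemical's dose-response curve is applied. The chemicals, doses, and potencies below are hypothetical.

    ```python
    def index_equivalent_dose(doses, rpfs):
        """Relative potency factor (RPF) method: total mixture dose expressed
        in index-chemical equivalents, D_IEQ = sum_i RPF_i * d_i."""
        return sum(r * d for d, r in zip(doses, rpfs))

    # Hypothetical three-chemical mixture; chemical 1 is the index (RPF = 1.0)
    doses = [2.0, 5.0, 1.0]   # component exposure doses (arbitrary units)
    rpfs = [1.0, 0.3, 2.0]    # relative potencies versus the index chemical
    print(index_equivalent_dose(doses, rpfs))  # 2.0 + 1.5 + 2.0 = 5.5
    ```

    Misclassification of similarity, as the abstract notes, enters through both the choice of RPF values and the assumption that one index dose-response shape applies to all components.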

  17. High Dimensional Classification Using Features Annealed Independence Rules.

    PubMed

    Fan, Jianqing; Fan, Yingying

    2008-01-01

    Classification using high-dimensional features arises frequently in many contemporary statistical studies such as tumor classification using microarray or other high-throughput data. The impact of dimensionality on classification is poorly understood. In a seminal paper, Bickel and Levina (2004) show that the Fisher discriminant performs poorly due to diverging spectra, and they propose to use the independence rule to overcome the problem. We first demonstrate that, even for the independence classification rule, classification using all the features can be as bad as random guessing due to noise accumulation in estimating population centroids in high-dimensional feature space. In fact, we demonstrate further that almost all linear discriminants can perform as badly as random guessing. Thus, it is paramount to select a subset of important features for high-dimensional classification, resulting in Features Annealed Independence Rules (FAIR). The conditions under which all the important features can be selected by the two-sample t-statistic are established. The choice of the optimal number of features, or equivalently the threshold value of the test statistic, is proposed based on an upper bound of the classification error. Simulation studies and real data analysis support our theoretical results and demonstrate convincingly the advantage of our new classification procedure.
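    A minimal sketch of the FAIR idea, assuming equal-variance Gaussian classes: rank features by absolute two-sample t-statistic, keep the top m, and classify with the independence (diagonal) rule. The data and the particular implementation details are illustrative, not the authors' code.

    ```python
    import numpy as np

    def fair_classifier(X, y, m):
        """Sketch of FAIR: keep the m features with the largest absolute
        two-sample t-statistics, then classify with the independence rule,
        i.e., variance-weighted distance to each class centroid."""
        X0, X1 = X[y == 0], X[y == 1]
        n0, n1 = len(X0), len(X1)
        # per-feature pooled variance and two-sample t-statistics
        sp2 = ((n0 - 1) * X0.var(0, ddof=1) + (n1 - 1) * X1.var(0, ddof=1)) / (n0 + n1 - 2)
        t = (X1.mean(0) - X0.mean(0)) / np.sqrt(sp2 * (1.0 / n0 + 1.0 / n1))
        keep = np.argsort(-np.abs(t))[:m]          # indices of the top-m features
        mu0, mu1, var = X0.mean(0)[keep], X1.mean(0)[keep], sp2[keep]

        def predict(Xnew):
            d0 = (((Xnew[:, keep] - mu0) ** 2) / var).sum(axis=1)
            d1 = (((Xnew[:, keep] - mu1) ** 2) / var).sum(axis=1)
            return (d1 < d0).astype(int)

        return predict

    # Synthetic data: 40 samples, 200 features, signal only in the first 5
    rng = np.random.default_rng(0)
    X = rng.normal(size=(40, 200))
    y = np.repeat([0, 1], 20)
    X[y == 1, :5] += 2.0

    predict = fair_classifier(X, y, m=10)
    acc = (predict(X) == y).mean()
    print(acc)
    ```

    Using all 200 features instead of the top 10 degrades accuracy, which is the noise-accumulation effect the abstract describes.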

  18. Independent highly sensitive characterization of asparagine deamidation and aspartic acid isomerization by sheathless CZE-ESI-MS/MS.

    PubMed

    Gahoual, Rabah; Beck, Alain; François, Yannis-Nicolas; Leize-Wagner, Emmanuelle

    2016-02-01

    Amino acid residues are commonly subjected to various physicochemical modifications occurring at physiological pH and temperature. Post-translational modifications (PTMs) require comprehensive characterization because of their major influence on protein structure and their involvement in numerous in vivo processes and signaling. Mass spectrometry (MS) has gradually become an analytical tool of choice to characterize PTMs; however, some modifications are still challenging because of faint modification levels in the sample or the difficulty of separating an intact peptide from its modified counterparts before their transfer to the ionization source. Here, we report the implementation of capillary zone electrophoresis coupled to electrospray ionization tandem mass spectrometry (CZE-ESI-MS/MS) through a sheathless interface for independent and highly sensitive characterization of asparagine deamidation (deaN) and aspartic acid isomerization (isoD). CZE selectivity regarding deaN and isoD was studied extensively using different sets of synthetic peptides based on actual tryptic peptides. Results demonstrated the ability of CZE to separate the unmodified peptide from modified homologs exhibiting deaN, isoD, or both independently, with a resolution systematically superior to 1.29. The developed CZE-ESI-MS/MS method was applied to the characterization of monoclonal antibodies and a complex protein mixture. Conserved CZE selectivity could be demonstrated even for complex samples, and foremost the results obtained showed that CZE selectivity is similar regardless of the composition of the peptide. Separation of modified peptides prior to MS analysis allowed characterization and estimation of modification levels of the sample independently for deaN and isoD, even for peptides affected by both modifications, and consequently enables distinguishing the formation of l-aspartic acid or d-aspartic acid generated from deaN. Separation based on peptide modification allowed, as supported by the ESI

  19. Measuring the difference in mean willingness to pay when dichotomous choice contingent valuation responses are not independent

    Treesearch

    Gregory L. Poe; Michael P. Welsh; Patricia A. Champ

    1997-01-01

    Dichotomous choice contingent valuation surveys frequently elicit multiple values in a single questionnaire. If individual responses are correlated across scenarios, the standard approach of estimating willingness to pay (WTP) functions independently for each scenario may result in biased estimates of the significance of the difference in mean WTP values. This paper...

  20. Robust discovery of genetic associations incorporating gene-environment interaction and independence.

    PubMed

    Tchetgen Tchetgen, Eric

    2011-03-01

    This article considers the detection and evaluation of genetic effects incorporating gene-environment interaction and independence. Whereas ordinary logistic regression cannot exploit the assumption of gene-environment independence, the proposed approach makes explicit use of the independence assumption to improve estimation efficiency. This method, which uses both cases and controls, fits a constrained retrospective regression in which the genetic variant plays the role of the response variable, and the disease indicator and the environmental exposure are the independent variables. The regression model constrains the association of the environmental exposure with the genetic variant among the controls to be null, thus explicitly encoding the gene-environment independence assumption, which yields substantial gain in accuracy in the evaluation of genetic effects. The proposed retrospective regression approach has several advantages. It is easy to implement with standard software, and it readily accounts for multiple environmental exposures of a polytomous or of a continuous nature, while easily incorporating extraneous covariates. Unlike the profile likelihood approach of Chatterjee and Carroll (Biometrika. 2005;92:399-418), the proposed method does not require a model for the association of a polytomous or continuous exposure with the disease outcome, and, therefore, it is agnostic to the functional form of such a model and completely robust to its possible misspecification.

  1. Systematic Testing of Belief-Propagation Estimates for Absolute Free Energies in Atomistic Peptides and Proteins.

    PubMed

    Donovan-Maiye, Rory M; Langmead, Christopher J; Zuckerman, Daniel M

    2018-01-09

    Motivated by the extremely high computing costs associated with estimates of free energies for biological systems using molecular simulations, we further the exploration of existing "belief propagation" (BP) algorithms for fixed-backbone peptide and protein systems. The precalculation of pairwise interactions among discretized libraries of side-chain conformations, along with representation of protein side chains as nodes in a graphical model, enables direct application of the BP approach, which requires only ∼1 s of single-processor run time after the precalculation stage. We use a "loopy BP" algorithm, which can be seen as an approximate generalization of the transfer-matrix approach to highly connected (i.e., loopy) graphs, and it has previously been applied to protein calculations. We examine the application of loopy BP to several peptides as well as the binding site of the T4 lysozyme L99A mutant. The present study reports on (i) the comparison of the approximate BP results with estimates from unbiased estimators based on the Amber99SB force field; (ii) investigation of the effects of varying library size on BP predictions; and (iii) a theoretical discussion of the discretization effects that can arise in BP calculations. The data suggest that, despite their approximate nature, BP free-energy estimates are highly accurate; indeed, they never fall outside confidence intervals from unbiased estimators for the systems where independent results could be obtained. Furthermore, we find that libraries of sufficiently fine discretization (which diminish library-size sensitivity) can be obtained with standard computing resources in most cases. Altogether, the extremely low computing times and accurate results suggest the BP approach warrants further study.

  2. Statistical independence of the initial conditions in chaotic mixing.

    PubMed

    García de la Cruz, J M; Vassilicos, J C; Rossi, L

    2017-11-01

    Experimental evidence of the scalar convergence towards a global strange eigenmode independent of the scalar initial condition in chaotic mixing is provided. This convergence, underpinning the independent nature of chaotic mixing in any passive scalar, is demonstrated by scalar fields with different initial conditions casting statistically similar shapes when advected by periodic unsteady flows. As the scalar patterns converge towards a global strange eigenmode, the scalar filaments, locally aligned with the direction of maximum stretching, as described by the Lagrangian stretching theory, stack together in an inhomogeneous pattern at distances smaller than their asymptotic minimum widths. The scalar variance decay then becomes exponential and independent of the scalar diffusivity or initial condition. In this work, mixing is achieved by advecting the scalar using a set of laminar flows with unsteady periodic topology. These flows, which resemble the tendril-whorl map, are obtained by morphing the forcing geometry in an electromagnetic free surface 2D mixing experiment. This forcing generates a velocity field which periodically switches between two concentric hyperbolic and elliptic stagnation points. In agreement with previous literature, the velocity fields obtained produce a chaotic mixer with two regions: a central mixing and an external extensional area. These two regions are interconnected through two pairs of fluid conduits which transfer clean and dyed fluid from the extensional area towards the mixing region and a homogenized mixture from the mixing area towards the extensional region.

  3. iGLASS: An Improvement to the GLASS Method for Estimating Species Trees from Gene Trees

    PubMed Central

    Rosenberg, Noah A.

    2012-01-01

    Several methods have been designed to infer species trees from gene trees while taking into account gene tree/species tree discordance. Although some of these methods provide consistent species tree topology estimates under a standard model, most either do not estimate branch lengths or are computationally slow. An exception, the GLASS method of Mossel and Roch, is consistent for the species tree topology, estimates branch lengths, and is computationally fast. However, GLASS systematically overestimates divergence times, leading to biased estimates of species tree branch lengths. By assuming a multispecies coalescent model in which multiple lineages are sampled from each of two taxa at L independent loci, we derive the distribution of the waiting time until the first interspecific coalescence occurs between the two taxa, considering all loci and measuring from the divergence time. We then use the mean of this distribution to derive a correction to the GLASS estimator of pairwise divergence times. We show that our improved estimator, which we call iGLASS, consistently estimates the divergence time between a pair of taxa as the number of loci approaches infinity, and that it is an unbiased estimator of divergence times when one lineage is sampled per taxon. We also show that many commonly used clustering methods can be combined with the iGLASS estimator of pairwise divergence times to produce a consistent estimator of the species tree topology. Through simulations, we show that iGLASS can greatly reduce the bias and mean squared error in obtaining estimates of divergence times in a species tree. PMID:22216756

  4. Validity and reliability of central blood pressure estimated by upper arm oscillometric cuff pressure.

    PubMed

    Climie, Rachel E D; Schultz, Martin G; Nikolic, Sonja B; Ahuja, Kiran D K; Fell, James W; Sharman, James E

    2012-04-01

    Noninvasive central blood pressure (BP) independently predicts mortality, but current methods are operator-dependent, requiring skill to obtain quality recordings. The aims of this study were first, to determine the validity of an automatic, upper arm oscillometric cuff method for estimating central BP (O(CBP)) by comparison with the noninvasive reference standard of radial tonometry (T(CBP)). Second, we determined the intratest and intertest reliability of O(CBP). To assess validity, central BP was estimated by O(CBP) (Pulsecor R6.5B monitor) and compared with T(CBP) (SphygmoCor) in 47 participants free from cardiovascular disease (aged 57 ± 9 years) in supine, seated, and standing positions. Brachial mean arterial pressure (MAP) and diastolic BP (DBP) from the O(CBP) device were used to calibrate both devices. Duplicate measures were recorded in each position on the same day to assess intratest reliability, and participants returned within 10 ± 7 days for repeat measurements to assess intertest reliability. There was a strong intraclass correlation (ICC = 0.987, P < 0.001) and small mean difference (1.2 ± 2.2 mm Hg) for central systolic BP (SBP) determined by O(CBP) compared with T(CBP). Ninety-six percent of all comparisons (n = 495 acceptable recordings) were within 5 mm Hg. With respect to reliability, there were strong correlations but higher limits of agreement for the intratest (ICC = 0.975, P < 0.001, mean difference 0.6 ± 4.5 mm Hg) and intertest (ICC = 0.895, P < 0.001, mean difference 4.3 ± 8.0 mm Hg) comparisons. Estimation of central SBP using cuff oscillometry is comparable to radial tonometry and has good reproducibility. As a noninvasive, relatively operator-independent method, O(CBP) may be as useful as T(CBP) for estimating central BP in clinical practice.
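    The validity comparison above (mean difference and proportion of readings within 5 mm Hg) can be sketched with a Bland-Altman-style summary; the paired readings below are invented for illustration.

    ```python
    import numpy as np

    def agreement_stats(device_a, device_b, tol=5.0):
        """Bland-Altman-style agreement summary: mean difference, SD of the
        differences, and the fraction of paired readings within `tol` mm Hg."""
        diff = np.asarray(device_a, float) - np.asarray(device_b, float)
        return diff.mean(), diff.std(ddof=1), float(np.mean(np.abs(diff) <= tol))

    # Hypothetical paired central SBP readings (mm Hg) from the two methods
    ocbp = [118, 125, 132, 121, 140, 128]
    tcbp = [117, 124, 134, 120, 138, 127]
    mean_diff, sd_diff, frac_within = agreement_stats(ocbp, tcbp)
    print(round(mean_diff, 2), round(sd_diff, 2), frac_within)
    ```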

  5. Independent data validation of an in vitro method for ...

    EPA Pesticide Factsheets

    In vitro bioaccessibility assays (IVBA) estimate arsenic (As) relative bioavailability (RBA) in contaminated soils to improve the accuracy of site-specific human exposure assessments and risk calculations. For an IVBA assay to gain acceptance for use in risk assessment, it must be shown to reliably predict in vivo RBA that is determined in an established animal model. Previous studies correlating soil As IVBA with RBA have been limited by the use of few soil types as the source of As. Furthermore, the predictive value of As IVBA assays has not been validated using an independent set of As-contaminated soils. Therefore, the current study was undertaken to develop a robust linear model to predict As RBA in mice using an IVBA assay and to independently validate the predictive capability of this assay using a unique set of As-contaminated soils. Thirty-six As-contaminated soils varying in soil type, As contaminant source, and As concentration were included in this study, with 27 soils used for initial model development and nine soils used for independent model validation. The initial model reliably predicted As RBA values in the independent data set, with a mean As RBA prediction error of 5.3% (range 2.4 to 8.4%). Following validation, all 36 soils were used for final model development, resulting in a linear model with the equation: RBA = 0.59 * IVBA + 9.8 and R2 of 0.78. The in vivo-in vitro correlation and independent data validation presented here provide

  6. Evaluating MODIS satellite versus terrestrial data driven productivity estimates in Austria

    NASA Astrophysics Data System (ADS)

    Petritsch, R.; Boisvenue, C.; Pietsch, S. A.; Hasenauer, H.; Running, S. W.

    2009-04-01

    Sensors, such as the Moderate Resolution Imaging Spectroradiometer (MODIS) on NASA's Terra satellite, are developed for monitoring global and/or regional ecosystem fluxes like net primary production (NPP). Although these systems should allow us to assess carbon sequestration issues, forest management impacts, etc., relatively little is known about the consistency and accuracy of the resulting satellite-driven estimates versus production estimates driven by ground data. In this study we compare the following NPP estimation methods: (i) NPP estimates as derived from MODIS and available on the internet; (ii) estimates resulting from the off-line version of the MODIS algorithm; (iii) estimates using regional meteorological data within the offline algorithm; (iv) NPP estimates from a species-specific biogeochemical ecosystem model adopted for Alpine conditions; and (v) NPP estimates calculated from individual tree measurements. Single tree measurements were available from 624 forested sites across Austria, but only the data from 165 sample plots included all the necessary information for performing the comparison on plot level. To ensure independence of satellite-driven and ground-based predictions, only latitude and longitude for each site were used to obtain MODIS estimates. Along with the comparison of the different methods, we discuss problems like the differing dates of field campaigns (<1999) and acquisition of satellite images (2000-2005) or incompatible productivity definitions within the methods, and come up with a framework for combining terrestrial and satellite data based productivity estimates. On average, MODIS estimates agreed well with the output of the model's self-initialization (spin-up), and biomass increment calculated from tree measurements was not significantly different from model results; however, correlations between satellite-derived and terrestrial estimates are relatively poor. 
Considering the different scales as they are 9km² from MODIS and

  7. Estimation of the genome sizes of the chigger mites Leptotrombidium pallidum and Leptotrombidium scutellare based on quantitative PCR and k-mer analysis

    PubMed Central

    2014-01-01

    Background Leptotrombidium pallidum and Leptotrombidium scutellare are the major vector mites for Orientia tsutsugamushi, the causative agent of scrub typhus. Before these organisms can be subjected to whole-genome sequencing, it is necessary to estimate their genome sizes to obtain basic information for establishing the strategies that should be used for genome sequencing and assembly. Method The genome sizes of L. pallidum and L. scutellare were estimated by a method based on quantitative real-time PCR. In addition, a k-mer analysis of the whole-genome sequences obtained through Illumina sequencing was conducted to verify the mutual compatibility and reliability of the results. Results The genome sizes estimated using qPCR were 191 ± 7 Mb for L. pallidum and 262 ± 13 Mb for L. scutellare. The k-mer analysis-based genome lengths were estimated to be 175 Mb for L. pallidum and 286 Mb for L. scutellare. The estimates from these two independent methods were mutually complementary and within a similar range to those of other Acariform mites. Conclusions The estimation method based on qPCR appears to be a useful alternative when the standard methods, such as flow cytometry, are impractical. The relatively small estimated genome sizes should facilitate whole-genome analysis, which could contribute to our understanding of Arachnida genome evolution and provide key information for scrub typhus prevention and mite vector competence. PMID:24947244
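    The k-mer step above can be illustrated with the standard back-of-the-envelope estimate, genome size ≈ total k-mer count divided by the peak (modal) k-mer coverage depth. This toy sketch ignores sequencing error and heterozygosity, which real k-mer analyses must model.

    ```python
    from collections import Counter

    def genome_size_from_kmers(reads, k):
        """Back-of-the-envelope k-mer estimate: genome size ~= total k-mer
        count divided by the peak (modal) k-mer coverage depth."""
        counts = Counter()
        for read in reads:
            for i in range(len(read) - k + 1):
                counts[read[i:i + k]] += 1
        total_kmers = sum(counts.values())
        # histogram of k-mer multiplicities; its mode approximates coverage depth
        depth_hist = Counter(counts.values())
        peak_depth = max(depth_hist, key=depth_hist.get)
        return total_kmers // peak_depth

    # Toy example: four error-free copies of a 12 bp "genome"
    reads = ["ACGTTGCATGGA"] * 4
    print(genome_size_from_kmers(reads, k=5))  # 8, i.e. 12 - k + 1 (edge effect)
    ```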

  8. A multimodal detection model of dolphins to estimate abundance validated by field experiments.

    PubMed

    Akamatsu, Tomonari; Ura, Tamaki; Sugimatsu, Harumi; Bahl, Rajendar; Behera, Sandeep; Panda, Sudarsan; Khan, Muntaz; Kar, S K; Kar, C S; Kimura, Satoko; Sasaki-Yamamoto, Yukiko

    2013-09-01

    Abundance estimation of marine mammals requires matching detections of an animal or a group of animals by two independent means. A multimodal detection model using visual and acoustic cues (surfacing and phonation) that enables abundance estimation of dolphins is proposed. The method does not require a specific time window to match the cues of both means for applying the mark-recapture method. The proposed model was evaluated using data obtained in field observations of Ganges River dolphins and Irrawaddy dolphins, as examples of dispersed and condensed distributions of animals, respectively. The acoustic detection probability was approximately 80%, 20% higher than that of visual detection for both species, regardless of the distribution of the animals in the present study sites. The abundance estimates of Ganges River dolphins and Irrawaddy dolphins agreed fairly well with the numbers reported in previous monitoring studies. The single-animal detection probability was smaller than that of larger clusters, as predicted by the model and confirmed by field data. However, dense groups of Irrawaddy dolphins showed differences in cluster sizes observed by visual and acoustic methods. The lower detection probability of single clusters of this species seemed to be caused by its clumped distribution.
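    The proposed model generalizes beyond a fixed matching window, but the underlying two-sample mark-recapture idea it builds on can be sketched with the classic Chapman (bias-corrected Lincoln-Petersen) estimator, treating visual and acoustic detections as the two samples. The survey counts below are hypothetical.

    ```python
    def chapman_estimate(n_visual, n_acoustic, n_both):
        """Chapman's bias-corrected Lincoln-Petersen estimator, treating visual
        and acoustic detections as the two independent 'capture' samples."""
        return (n_visual + 1) * (n_acoustic + 1) / (n_both + 1) - 1

    # Hypothetical survey: 40 visual detections, 64 acoustic, 32 matched by both
    print(round(chapman_estimate(40, 64, 32), 1))  # 79.8
    ```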

  9. Independent Factors for Prediction of Poor Outcomes in Patients with Febrile Neutropenia

    PubMed Central

    Günalp, Müge; Koyunoğlu, Merve; Gürler, Serdar; Koca, Ayça; Yeşilkaya, İlker; Öner, Emre; Akkaş, Meltem; Aksu, Nalan Metin; Demirkan, Arda; Polat, Onur; Elhan, Atilla Halil

    2014-01-01

    Background Febrile neutropenia (FN) is a life-threatening condition that requires urgent management in the emergency department (ED). Recent progress in the treatment of neutropenic fever has underscored the importance of risk stratification. In this study, we aimed to determine independent factors for prediction of poor outcomes in patients with FN. Material/Methods We retrospectively evaluated 200 chemotherapy-induced febrile neutropenic patients who visited the ED. Upon arrival at the ED, clinical data, including sex, age, vital signs, underlying systemic diseases, laboratory test results, estimated GFR, blood cultures, CRP, radiologic examinations, and Multinational Association of Supportive Care in Cancer (MASCC) score of all febrile neutropenic patients were obtained. Outcomes were categorized as “poor” if serious complications during hospitalization, including death, occurred. Results The platelet count <50 000 cells/mm3 (OR 3.90, 95% CI 1.62–9.43), pulmonary infiltration (OR 3.45, 95% CI 1.48–8.07), hypoproteinemia <6 g/dl (OR 3.30, 95% CI 1.27–8.56), respiratory rate >24/min (OR 8.75, 95% CI 2.18–35.13), and MASCC score <21 (OR 9.20, 95% CI 3.98–21.26) were determined as independent risk factors for the prediction of death. The platelet count <50 000 cells/mm3 (OR 3.93, 95% CI 1.42–10.92), serum CRP >50 mg/dl (OR 3.80, 95% CI 1.68–8.61), hypoproteinemia (OR 7.81, 95% CI 3.43–17.78), eGFR ≤90 ML/min/1.73 m2 (OR 3.06, 95% CI 1.13–8.26), and MASCC score <21 (OR 3.45, 95% CI 1.53–7.79) were determined as independent risk factors for the prediction of poor clinical outcomes of FN patients. Platelet count, protein level, respiratory rate, pulmonary infiltration, CRP, MASCC score, and eGFR were shown to have a significant association with outcome. Conclusions The results of our study may help emergency medicine physicians to prevent serious complications with proper use of simple independent risk factors besides MASCC score. PMID

  10. STRONG ORACLE OPTIMALITY OF FOLDED CONCAVE PENALIZED ESTIMATION.

    PubMed

    Fan, Jianqing; Xue, Lingzhou; Zou, Hui

    2014-06-01

    Folded concave penalization methods have been shown to enjoy the strong oracle property for high-dimensional sparse estimation. However, a folded concave penalization problem usually has multiple local solutions and the oracle property is established only for one of the unknown local solutions. A challenging fundamental issue remains: it is not clear whether the local optimum computed by a given optimization algorithm possesses those nice theoretical properties. To close this important theoretical gap, which has remained open for over a decade, we provide a unified theory to show explicitly how to obtain the oracle solution via the local linear approximation algorithm. For a folded concave penalized estimation problem, we show that as long as the problem is localizable and the oracle estimator is well behaved, we can obtain the oracle estimator by using the one-step local linear approximation. In addition, once the oracle estimator is obtained, the local linear approximation algorithm converges, namely it produces the same estimator in the next iteration. The general theory is demonstrated by using four classical sparse estimation problems, i.e., sparse linear regression, sparse logistic regression, sparse precision matrix estimation and sparse quantile regression.
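    A minimal sketch of the one-step local linear approximation for sparse linear regression, assuming the SCAD penalty and a plain lasso initializer; the coordinate-descent solver and the synthetic data are illustrative, not the authors' implementation.

    ```python
    import numpy as np

    def scad_derivative(t, lam, a=3.7):
        """SCAD penalty derivative p'_lam(t) for t >= 0 (a = 3.7 by convention)."""
        return np.where(t <= lam, lam, np.maximum(a * lam - t, 0.0) / (a - 1))

    def weighted_lasso_cd(X, y, w, n_iter=200):
        """Coordinate descent for (1/2n)||y - Xb||^2 + sum_j w_j |b_j|."""
        n, p = X.shape
        b = np.zeros(p)
        col_sq = (X ** 2).sum(axis=0) / n
        r = y - X @ b
        for _ in range(n_iter):
            for j in range(p):
                r += X[:, j] * b[j]                  # partial residual without j
                rho = X[:, j] @ r / n
                b[j] = np.sign(rho) * max(abs(rho) - w[j], 0.0) / col_sq[j]
                r -= X[:, j] * b[j]
        return b

    def one_step_lla(X, y, lam, b_init):
        """One-step LLA: solve a single weighted lasso whose l1 weights are
        the SCAD derivative evaluated at the initial estimate."""
        return weighted_lasso_cd(X, y, scad_derivative(np.abs(b_init), lam))

    # Synthetic sparse regression: two true nonzero coefficients
    rng = np.random.default_rng(1)
    n, p = 100, 10
    X = rng.normal(size=(n, p))
    beta_true = np.array([3.0, -2.0] + [0.0] * (p - 2))
    y = X @ beta_true + 0.1 * rng.normal(size=n)

    lam = 0.3
    b0 = weighted_lasso_cd(X, y, np.full(p, lam))  # lasso initializer (biased)
    b1 = one_step_lla(X, y, lam, b0)               # one-step LLA (nearly unbiased)
    print(np.round(b1, 2))
    ```

    Because the large initial coefficients exceed the SCAD flat region, their reweighted penalties vanish and the one-step estimate is essentially unbiased on the true support, which is the oracle behavior the paper establishes.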

  11. STRONG ORACLE OPTIMALITY OF FOLDED CONCAVE PENALIZED ESTIMATION

    PubMed Central

    Fan, Jianqing; Xue, Lingzhou; Zou, Hui

    2014-01-01

    Folded concave penalization methods have been shown to enjoy the strong oracle property for high-dimensional sparse estimation. However, a folded concave penalization problem usually has multiple local solutions and the oracle property is established only for one of the unknown local solutions. A challenging fundamental issue still remains that it is not clear whether the local optimum computed by a given optimization algorithm possesses those nice theoretical properties. To close this important theoretical gap in over a decade, we provide a unified theory to show explicitly how to obtain the oracle solution via the local linear approximation algorithm. For a folded concave penalized estimation problem, we show that as long as the problem is localizable and the oracle estimator is well behaved, we can obtain the oracle estimator by using the one-step local linear approximation. In addition, once the oracle estimator is obtained, the local linear approximation algorithm converges, namely it produces the same estimator in the next iteration. The general theory is demonstrated by using four classical sparse estimation problems, i.e., sparse linear regression, sparse logistic regression, sparse precision matrix estimation and sparse quantile regression. PMID:25598560

  12. Statistical inference for remote sensing-based estimates of net deforestation

    Treesearch

    Ronald E. McRoberts; Brian F. Walters

    2012-01-01

    Statistical inference requires expression of an estimate in probabilistic terms, usually in the form of a confidence interval. An approach to constructing confidence intervals for remote sensing-based estimates of net deforestation is illustrated. The approach is based on post-classification methods using two independent forest/non-forest classifications because...

  13. Estimation of cardiac motion in cine-MRI sequences by correlation transform optical flow of monogenic features distance

    NASA Astrophysics Data System (ADS)

    Gao, Bin; Liu, Wanyu; Wang, Liang; Liu, Zhengjun; Croisille, Pierre; Delachartre, Philippe; Clarysse, Patrick

    2016-12-01

    Cine-MRI is widely used for the analysis of cardiac function in clinical routine, because of its high soft tissue contrast and relatively short acquisition time in comparison with other cardiac MRI techniques. The gray level distribution in cardiac cine-MRI is relatively homogenous within the myocardium, and can therefore make motion quantification difficult. To ensure that the motion estimation problem is well posed, more image features have to be considered. This work is inspired by a method previously developed for color image processing. The monogenic signal provides a framework to estimate the local phase, orientation, and amplitude, of an image, three features which locally characterize the 2D intensity profile. The independent monogenic features are combined into a 3D matrix for motion estimation. To improve motion estimation accuracy, we chose the zero-mean normalized cross-correlation as a matching measure, and implemented a bilateral filter for denoising and edge-preservation. The monogenic features distance is used in lieu of the color space distance in the bilateral filter. Results obtained from four realistic simulated sequences outperformed two other state of the art methods even in the presence of noise. The motion estimation errors (end point error) using our proposed method were reduced by about 20% in comparison with those obtained by the other tested methods. The new methodology was evaluated on four clinical sequences from patients presenting with cardiac motion dysfunctions and one healthy volunteer. The derived strain fields were analyzed favorably in their ability to identify myocardial regions with impaired motion.
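
    The matching measure chosen above, zero-mean normalized cross-correlation, is invariant to gain and offset of the compared feature patches, which is what makes it attractive for the low-contrast myocardium. A minimal sketch (function name and patch handling are ours, not the paper's code):

```python
import numpy as np

def zncc(a, b, eps=1e-12):
    """Zero-mean normalized cross-correlation of two equal-size patches.
    Returns a value in [-1, 1]; invariant to affine intensity changes."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum()) + eps
    return float((a * b).sum() / denom)
```

    Any positive affine intensity change (b = 2a + 5, say) scores ~1, while contrast inversion scores ~-1, so a block-matching search over displacements can rank candidate motions independently of local brightness.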

  14. Optimal estimation for discrete time jump processes

    NASA Technical Reports Server (NTRS)

    Vaca, M. V.; Tretter, S. A.

    1977-01-01

    Optimum estimates of nonobservable random variables or random processes which influence the rate functions of a discrete time jump process (DTJP) are obtained. The approach is based on the a posteriori probability of a nonobservable event expressed in terms of the a priori probability of that event and of the sample function probability of the DTJP. A general representation for optimum estimates and recursive equations for minimum mean squared error (MMSE) estimates are obtained. MMSE estimates are nonlinear functions of the observations. The problem of estimating the rate of a DTJP when the rate is a random variable with a probability density function of the form c x^k (1-x)^m is considered, and it is shown that the MMSE estimates are linear in this case. This class of density functions explains why there are insignificant differences between optimum unconstrained and linear MMSE estimates in a variety of problems.
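
    The density c x^k (1-x)^m is a beta density, which is conjugate to Bernoulli-type jump observations, so the posterior mean (the MMSE estimate) comes out linear in the observed jump count. A hedged numerical check of that linearity; the discrete Bernoulli observation model here is our simplification of the DTJP setting:

```python
import numpy as np

def mmse_rate(n_jumps, n_steps, k, m):
    """Posterior mean of the rate x under prior ~ x^k (1-x)^m after
    observing n_jumps Bernoulli(x) jumps in n_steps trials
    (beta-binomial conjugacy). Note it is linear in n_jumps."""
    return (k + 1 + n_jumps) / (k + m + 2 + n_steps)

def mmse_rate_numeric(n_jumps, n_steps, k, m, grid=100001):
    """Brute-force posterior mean by numerical integration, as a check."""
    x = np.linspace(1e-9, 1 - 1e-9, grid)
    post = x**k * (1 - x)**m * x**n_jumps * (1 - x)**(n_steps - n_jumps)
    return float((x * post).sum() / post.sum())
```

    The increment of the estimate per extra observed jump is the constant 1/(k + m + 2 + n_steps), which is exactly the linearity property the abstract highlights.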

  15. Independent technical review and analysis of hydraulic modeling and hydrology under low-flow conditions of the Des Plaines River near Riverside, Illinois

    USGS Publications Warehouse

    Over, Thomas M.; Straub, Timothy D.; Hortness, Jon E.; Murphy, Elizabeth A.

    2012-01-01

    The U.S. Geological Survey (USGS) has operated a streamgage and published daily flows for the Des Plaines River at Riverside since Oct. 1, 1943. A HEC-RAS model has been developed to estimate the effect of the removal of Hofmann Dam near the gage on low-flow elevations in the reach approximately 3 miles upstream from the dam. The Village of Riverside, the Illinois Department of Natural Resources-Office of Water Resources (IDNR-OWR), and the U.S. Army Corps of Engineers-Chicago District (USACE-Chicago) are interested in verifying the performance of the HEC-RAS model for specific low-flow conditions, and obtaining an estimate of selected daily flow quantiles and other low-flow statistics for a selected period of record that best represents current hydrologic conditions. Because the USGS publishes streamflow records for the Des Plaines River system and provides unbiased analyses of flows and stream hydraulic characteristics, the USGS served as an Independent Technical Reviewer (ITR) for this study.

  16. DNA-based culture-independent analysis detects the presence of group a streptococcus in throat samples from healthy adults in Japan.

    PubMed

    Kulkarni, Tejaswini; Aikawa, Chihiro; Nozawa, Takashi; Murase, Kazunori; Maruyama, Fumito; Nakagawa, Ichiro

    2016-10-11

    Group A Streptococcus (GAS; Streptococcus pyogenes) causes a range of mild to severe infections in humans. It can also colonize healthy persons asymptomatically. Therefore, it is important to study GAS carriage in healthy populations, as carriage might lead to subsequent disease manifestation, clonal spread in the community, and/or diversification of the organism. Throat swab culture is the gold standard method for GAS detection. Advanced culture-independent methods provide rapid and efficient detection of microorganisms directly from clinical samples. We investigated the presence of GAS in throat swab samples from healthy adults in Japan using culture-dependent and culture-independent methods. Two throat swab samples were collected from 148 healthy volunteers. One was cultured on selective medium, while total DNA extracted from the other was polymerase chain reaction (PCR) amplified with two GAS-specific primer pairs: one was a newly designed 16S rRNA-specific primer pair, the other a previously described V-Na+-ATPase primer pair. Although only 5 (3.4%) of the 148 samples were GAS-positive by the culture-dependent method, 146 (98.6%) were positive for the presence of GAS DNA by the culture-independent method. To obtain serotype information by emm typing, we performed nested PCR using newly designed emm primers. We detected four different emm types in 25 (16.9%) samples, and these differed from the emm types commonly associated with GAS diseases in Japan. The different emm types detected in the healthy volunteers indicate that the presence of unique emm types might be associated with GAS carriage. Our results suggest that culture-independent methods should be considered for profiling GAS in healthy hosts, with a view to obtaining a better understanding of these organisms. The GAS-specific primers (16S rRNA and V-Na+-ATPase) used in this study can be used to estimate the maximum potential GAS carriage in people.

  17. An Evaluation of Residual Feed Intake Estimates Obtained with Computer Models Versus Empirical Regression

    USDA-ARS?s Scientific Manuscript database

    Data on individual daily feed intake, bi-weekly BW, and carcass composition were obtained on 1,212 crossbred steers, in Cycle VII of the Germplasm Evaluation Project at the U.S. Meat Animal Research Center. Within animal regressions of cumulative feed intake and BW on linear and quadratic days on fe...

  18. REDSHIFT-INDEPENDENT DISTANCES IN THE NASA/IPAC EXTRAGALACTIC DATABASE: METHODOLOGY, CONTENT, AND USE OF NED-D

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Steer, Ian; Madore, Barry F.; Mazzarella, Joseph M.

    Estimates of galaxy distances based on indicators that are independent of cosmological redshift are fundamental to astrophysics. Researchers use them to establish the extragalactic distance scale, to underpin estimates of the Hubble constant, and to study peculiar velocities induced by gravitational attractions that perturb the motions of galaxies with respect to the “Hubble flow” of universal expansion. In 2006 the NASA/IPAC Extragalactic Database (NED) began making available a comprehensive compilation of redshift-independent extragalactic distance estimates. A decade later, this compendium of distances (NED-D) now contains more than 100,000 individual estimates based on primary and secondary indicators, available for more than 28,000 galaxies, and compiled from over 2000 references in the refereed astronomical literature. This paper describes the methodology, content, and use of NED-D, and addresses challenges to be overcome in compiling such distances. Currently, 75 different distance indicators are in use. We include a figure that facilitates comparison of the indicators with significant numbers of estimates in terms of the minimum, 25th percentile, median, 75th percentile, and maximum distances spanned. Brief descriptions of the indicators, including examples of their use in the database, are given in an appendix.

  19. Robust automatic measurement of 3D scanned models for the human body fat estimation.

    PubMed

    Giachetti, Andrea; Lovato, Christian; Piscitelli, Francesco; Milanese, Chiara; Zancanaro, Carlo

    2015-03-01

    In this paper, we present an automatic tool for estimating geometrical parameters from 3-D human scans, independently of pose and robust to topological noise. It is based on an automatic segmentation of body parts exploiting curve skeleton processing and ad hoc heuristics able to remove problems due to different acquisition poses and body types. The software is able to locate body trunk and limbs, detect their directions, and compute parameters like volumes, areas, girths, and lengths. Experimental results demonstrate that measurements provided by our system on 3-D body scans of normal and overweight subjects acquired in different poses are highly correlated with the body fat estimates obtained on the same subjects with dual-energy X-ray absorptiometry (DXA) scanning. In particular, maximal lengths and girths, not requiring precise localization of anatomical landmarks, demonstrate a good correlation (up to 96%) with the body fat and trunk fat. Regression models based on our automatic measurements can be used to predict body fat values reasonably well.

  20. Effects of social organization, trap arrangement and density, sampling scale, and population density on bias in population size estimation using some common mark-recapture estimators.

    PubMed

    Gupta, Manan; Joshi, Amitabh; Vidya, T N C

    2017-01-01

    Mark-recapture estimators are commonly used for population size estimation, and typically yield unbiased estimates for most solitary species with low to moderate home range sizes. However, these methods assume independence of captures among individuals, an assumption that is clearly violated in social species that show fission-fusion dynamics, such as the Asian elephant. In the specific case of Asian elephants, doubts have been raised about the accuracy of population size estimates. More importantly, the potential problem for the use of mark-recapture methods posed by social organization in general has not been systematically addressed. We developed an individual-based simulation framework to systematically examine the potential effects of type of social organization, as well as other factors such as trap density and arrangement, spatial scale of sampling, and population density, on bias in population sizes estimated by POPAN, Robust Design, and Robust Design with detection heterogeneity. In the present study, we ran simulations with biological, demographic and ecological parameters relevant to Asian elephant populations, but the simulation framework is easily extended to address questions relevant to other social species. We collected capture history data from the simulations, and used those data to test for bias in population size estimation. Social organization significantly affected bias in most analyses, but the effect sizes were variable, depending on other factors. Social organization tended to introduce large bias when trap arrangement was uniform and sampling effort was low. POPAN clearly outperformed the two Robust Design models we tested, yielding close to zero bias if traps were arranged at random in the study area, and when population density and trap density were not too low. Social organization did not have a major effect on bias for these parameter combinations at which POPAN gave more or less unbiased population size estimates. Therefore, the
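
    The core problem above, captures that are not independent across individuals, can be illustrated with a far simpler estimator than the POPAN or Robust Design models used in the study. In this sketch (entirely our own construction, not the authors' simulation framework), whole groups are trapped together to mimic fission-fusion sociality, and the spread of Chapman's two-occasion estimate inflates sharply relative to independent captures:

```python
import numpy as np

def chapman(n1, n2, m):
    """Chapman's bias-corrected Lincoln-Petersen population estimate."""
    return (n1 + 1) * (n2 + 1) / (m + 1) - 1

def simulate(N=1000, group_size=1, p=0.2, reps=2000, seed=1):
    """Two-occasion capture-recapture where whole groups are caught
    together; group_size=1 recovers independent captures."""
    rng = np.random.default_rng(seed)
    n_groups = N // group_size
    est = np.empty(reps)
    for r in range(reps):
        c1 = rng.random(n_groups) < p        # groups caught, occasion 1
        c2 = rng.random(n_groups) < p        # groups caught, occasion 2
        n1 = group_size * c1.sum()
        n2 = group_size * c2.sum()
        m = group_size * (c1 & c2).sum()     # marked recaptures
        est[r] = chapman(n1, n2, m)
    return est
```

    Comparing `simulate(group_size=1)` against `simulate(group_size=50)` shows the estimator stays roughly centered in the independent case but becomes far more variable (and upward-skewed) under group capture, the same qualitative failure mode the study quantifies for social species.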

  1. Effects of social organization, trap arrangement and density, sampling scale, and population density on bias in population size estimation using some common mark-recapture estimators

    PubMed Central

    Joshi, Amitabh; Vidya, T. N. C.

    2017-01-01

    Mark-recapture estimators are commonly used for population size estimation, and typically yield unbiased estimates for most solitary species with low to moderate home range sizes. However, these methods assume independence of captures among individuals, an assumption that is clearly violated in social species that show fission-fusion dynamics, such as the Asian elephant. In the specific case of Asian elephants, doubts have been raised about the accuracy of population size estimates. More importantly, the potential problem for the use of mark-recapture methods posed by social organization in general has not been systematically addressed. We developed an individual-based simulation framework to systematically examine the potential effects of type of social organization, as well as other factors such as trap density and arrangement, spatial scale of sampling, and population density, on bias in population sizes estimated by POPAN, Robust Design, and Robust Design with detection heterogeneity. In the present study, we ran simulations with biological, demographic and ecological parameters relevant to Asian elephant populations, but the simulation framework is easily extended to address questions relevant to other social species. We collected capture history data from the simulations, and used those data to test for bias in population size estimation. Social organization significantly affected bias in most analyses, but the effect sizes were variable, depending on other factors. Social organization tended to introduce large bias when trap arrangement was uniform and sampling effort was low. POPAN clearly outperformed the two Robust Design models we tested, yielding close to zero bias if traps were arranged at random in the study area, and when population density and trap density were not too low. Social organization did not have a major effect on bias for these parameter combinations at which POPAN gave more or less unbiased population size estimates. Therefore, the

  2. Estimating bacterial production in marine waters from the simultaneous incorporation of thymidine and leucine.

    PubMed

    Chin-Leo, G; Kirchman, D L

    1988-08-01

    We examined the simultaneous incorporation of [3H]thymidine and [14C]leucine to obtain two independent indices of bacterial production (DNA and protein syntheses) in a single incubation. Incorporation rates of leucine estimated by the dual-label method were generally higher than those obtained by the single-label method, but the differences were small (dual/single = 1.1 +/- 0.2 [mean +/- standard deviation]) and were probably due to the presence of labeled leucyl-tRNA in the cold trichloroacetic acid-insoluble fraction. There were no significant differences in thymidine incorporation between dual- and single-label incubations (dual/single = 1.03 +/- 0.13). Addition of the two substrates in relatively large amounts (25 nM) did not apparently increase bacterial activity during short incubations (<5 h). With the dual-label method we found that thymidine and leucine incorporation rates covaried over depth profiles of the Chesapeake Bay. Estimates of bacterial production based on thymidine and leucine differed by less than 25%. Although the need for appropriate conversion factors has not been eliminated, the dual-label approach can be used to examine the variation in bacterial production while ensuring that the observed variation in incorporation rates is due to real changes in bacterial production rather than changes in conversion factors or introduction of other artifacts.

  3. Limitations in Life Participation and Independence Due to Secondary Conditions

    ERIC Educational Resources Information Center

    Koritsas, Stella; Iacono, Teresa

    2009-01-01

    The effects of secondary conditions across adults with autism, Down syndrome, and cerebral palsy were explored in terms of overall limitation in life participation and independence, changes over time, and the degree and nature of limitation in specific secondary conditions. Information was obtained for 35 adults with autism, 49 with Down syndrome,…

  4. Age-Dependent and Age-Independent Measures of Locus of Control.

    ERIC Educational Resources Information Center

    Sherman, Lawrence W.; Hofmann, Richard

    Using a longitudinal data set obtained from 169 pre-adolescent children between the ages of 8 and 13 years, this study statistically divided locus of control into two independent components. The first component was noted as "age-dependent" (AD) and was determined by predicted values generated by regressing children's ages onto their…

  5. Estimating cell populations

    NASA Technical Reports Server (NTRS)

    White, B. S.; Castleman, K. R.

    1981-01-01

    An important step in the diagnosis of a cervical cytology specimen is estimating the proportions of the various cell types present. This is usually done with a cell classifier, the error rates of which can be expressed as a confusion matrix. We show how to use the confusion matrix to obtain an unbiased estimate of the desired proportions. We show that the mean square error of this estimate depends on a 'befuddlement matrix' derived from the confusion matrix, and how this, in turn, leads to a figure of merit for cell classifiers. Finally, we work out the two-class problem in detail and present examples to illustrate the theory.
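
    The unbiased correction described above amounts to inverting the confusion matrix: if q is the vector of proportions the classifier reports and C[i, j] is the probability that a class-i cell is labelled j, then E[q] = Cᵀp, so p̂ = (Cᵀ)⁻¹q. A minimal sketch (names are ours):

```python
import numpy as np

def correct_proportions(observed, confusion):
    """Unbiased class proportions from classifier output.
    confusion[i, j] = P(classified as j | true class i);
    observed = vector of classifier-output proportions."""
    return np.linalg.solve(confusion.T, observed)
```

    In a two-class example with per-class accuracies 0.9 and 0.8, observed output proportions (0.41, 0.59) decode back to true proportions (0.3, 0.7); the conditioning of C is what drives the mean square error (the paper's "befuddlement matrix" analysis).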

  6. Spring Small Grains Area Estimation

    NASA Technical Reports Server (NTRS)

    Palmer, W. F.; Mohler, R. J.

    1986-01-01

    SSG3 automatically estimates acreage of spring small grains from Landsat data. Report describes development and testing of a computerized technique for using Landsat multispectral scanner (MSS) data to estimate acreage of spring small grains (wheat, barley, and oats). Application of the technique to analysis of four years of data from the United States and Canada yielded estimates of accuracy comparable to those obtained through procedures that rely on trained analysts.

  7. Ice Cloud Optical Thickness and Extinction Estimates from Radar Measurements.

    NASA Astrophysics Data System (ADS)

    Matrosov, Sergey Y.; Shupe, Matthew D.; Heymsfield, Andrew J.; Zuidema, Paquita

    2003-11-01

    A remote sensing method is proposed to derive vertical profiles of the visible extinction coefficients in ice clouds from measurements of the radar reflectivity and Doppler velocity taken by a vertically pointing 35-GHz cloud radar. The extinction coefficient and its vertical integral, optical thickness τ, are among the fundamental cloud optical parameters that, to a large extent, determine the radiative impact of clouds. The results obtained with this method could be used as input for different climate and radiation models and for comparisons with parameterizations that relate cloud microphysical parameters and optical properties. An important advantage of the proposed method is its potential applicability to multicloud situations and mixed-phase conditions. In the latter case, it might be able to provide the information on the ice component of mixed-phase clouds if the radar moments are dominated by this component. The uncertainties of radar-based retrievals of cloud visible optical thickness are estimated by comparing retrieval results with optical thicknesses obtained independently from radiometric measurements during the yearlong Surface Heat Budget of the Arctic Ocean (SHEBA) field experiment. The radiometric measurements provide a robust way to estimate τ but are applicable only to optically thin ice clouds without intervening liquid layers. The comparisons of cloud optical thicknesses retrieved from radar and from radiometer measurements indicate an uncertainty of about 77% and a bias of about -14% in the radar estimates of τ relative to radiometric retrievals. One possible explanation of the negative bias is an inherently low sensitivity of radar measurements to smaller cloud particles that still contribute noticeably to the cloud extinction. This estimate of the uncertainty is in line with simple theoretical considerations, and the associated retrieval accuracy should be considered good for a nonoptical instrument, such as radar. This paper also

  8. Impact imaging of aircraft composite structure based on a model-independent spatial-wavenumber filter.

    PubMed

    Qiu, Lei; Liu, Bin; Yuan, Shenfang; Su, Zhongqing

    2016-01-01

    The spatial-wavenumber filtering technique is an effective approach to distinguish the propagating direction and wave mode of Lamb wave in spatial-wavenumber domain. Therefore, it has been gradually studied for damage evaluation in recent years. But for on-line impact monitoring in practical application, the main problem is how to realize the spatial-wavenumber filtering of impact signal when the wavenumber of high spatial resolution cannot be measured or the accurate wavenumber curve cannot be modeled. In this paper, a new model-independent spatial-wavenumber filter based impact imaging method is proposed. In this method, a 2D cross-shaped array constructed by two linear piezoelectric (PZT) sensor arrays is used to acquire impact signal on-line. The continuous complex Shannon wavelet transform is adopted to extract the frequency narrowband signals from the frequency wideband impact response signals of the PZT sensors. A model-independent spatial-wavenumber filter is designed based on the spatial-wavenumber filtering technique. Based on the designed filter, a wavenumber searching and best match mechanism is proposed to implement the spatial-wavenumber filtering of the frequency narrowband signals without modeling, which can be used to obtain a wavenumber-time image of the impact relative to a linear PZT sensor array. By using the two wavenumber-time images of the 2D cross-shaped array, the impact direction can be estimated without blind angle. The impact distance relative to the 2D cross-shaped array can be calculated by using the difference of time-of-flight between the frequency narrowband signals of two different central frequencies and the corresponding group velocities. The validations performed on a carbon fiber composite laminate plate and an aircraft composite oil tank show a good impact localization accuracy of the model-independent spatial-wavenumber filter based impact imaging method. Copyright © 2015 Elsevier B.V. All rights reserved.

  9. Industrial point source CO2 emission strength estimation with aircraft measurements and dispersion modelling.

    PubMed

    Carotenuto, Federico; Gualtieri, Giovanni; Miglietta, Franco; Riccio, Angelo; Toscano, Piero; Wohlfahrt, Georg; Gioli, Beniamino

    2018-02-22

    CO2 remains the greenhouse gas that contributes most to anthropogenic global warming, and the evaluation of its emissions is of major interest to both research and regulatory purposes. Emission inventories generally provide quite reliable estimates of CO2 emissions. However, because of intrinsic uncertainties associated with these estimates, it is of great importance to validate emission inventories against independent estimates. This paper describes an integrated approach combining aircraft measurements and a puff dispersion modelling framework by considering a CO2 industrial point source, located in Biganos, France. CO2 density measurements were obtained by applying the mass balance method, while CO2 emission estimates were derived by implementing the CALMET/CALPUFF model chain. For the latter, three meteorological initializations were used: (i) WRF-modelled outputs initialized by ECMWF reanalyses; (ii) WRF-modelled outputs initialized by CFSR reanalyses and (iii) local in situ observations. Governmental inventorial data were used as reference for all applications. The strengths and weaknesses of the different approaches and how they affect emission estimation uncertainty were investigated. The mass balance based on aircraft measurements was quite successful in capturing the point source emission strength (at worst with a 16% bias), while the accuracy of the dispersion modelling, markedly when using ECMWF initialization through the WRF model, was only slightly lower (estimation with an 18% bias). The analysis will help in highlighting some methodological best practices that can be used as guidelines for future experiments.
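
    At its heart, the mass-balance method integrates the wind-borne flux of the concentration enhancement through a virtual screen downwind of the source. A heavily simplified sketch (uniform grid, known background, all names ours; the real airborne method must also handle varying winds and incomplete plume sampling):

```python
import numpy as np

def mass_balance_flux(conc, background, wind_speed, dy, dz):
    """Point-source emission rate from a downwind crosswind/vertical
    screen: integrate wind speed times the concentration enhancement.
    conc [kg m^-3] and wind_speed [m s^-1] are 2D arrays on the screen;
    dy, dz [m] are the grid spacings. Returns kg s^-1."""
    enhancement = np.maximum(conc - background, 0.0)
    return float((enhancement * wind_speed).sum() * dy * dz)
```

    For example, a uniform 1e-6 kg m^-3 enhancement over a 1000 m x 500 m screen advected at 5 m s^-1 corresponds to 2.5 kg s^-1.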

  10. Assessment of dietary intake of flavouring substances within the procedure for their safety evaluation: advantages and limitations of estimates obtained by means of a per capita method.

    PubMed

    Arcella, D; Leclercq, C

    2005-01-01

    The procedure for the safety evaluation of flavourings adopted by the European Commission in order to establish a positive list of these substances is a stepwise approach which was developed by the Joint FAO/WHO Expert Committee on Food Additives (JECFA) and amended by the Scientific Committee on Food. Within this procedure, a per capita amount based on industrial poundage data of flavourings is calculated to estimate the dietary intake by means of the maximised survey-derived daily intake (MSDI) method. This paper reviews the MSDI method in order to check whether it can provide conservative intake estimates as needed at the first steps of a stepwise procedure. Scientific papers and opinions dealing with the MSDI method were reviewed. Concentration levels reported by the industry were compared with estimates obtained with the MSDI method. It appeared that, in some cases, these estimates could be orders of magnitude (up to 5) lower than those calculated considering concentration levels provided by the industry and regular consumption of flavoured foods and beverages. A critical review of two studies which had been used to support the statement that MSDI is a conservative method for assessing exposure to flavourings among high consumers was performed. Special attention was given to the factors that affect exposure at high percentiles, such as brand loyalty and portion sizes. It is concluded that these studies may not be suitable to validate the MSDI method used to assess intakes of flavours by European consumers due to shortcomings in the assumptions made and in the data used. Exposure assessment is an essential component of risk assessment. The present paper suggests that the MSDI method is not sufficiently conservative. There is therefore a clear need for either using an alternative method to estimate exposure to flavourings in the procedure or for limiting intakes to the levels at which the safety was assessed.
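
    For orientation, the MSDI calculation itself is simple: the reported annual production volume of a flavouring is spread over the fraction of the population assumed to consume it. A sketch of the commonly cited form (the 60% "eaters only" fraction is the conventional assumption; exact correction factors vary between regulatory schemes, so treat this as illustrative):

```python
def msdi_ug_per_person_day(annual_volume_kg, population, consumer_fraction=0.6):
    """Maximised Survey-Derived Daily Intake (MSDI): spread the industry's
    reported annual production volume of a flavouring over the assumed
    consumer population. Returns micrograms per person per day."""
    micrograms = annual_volume_kg * 1e9          # kg -> micrograms
    consumers = population * consumer_fraction   # 'eaters only' assumption
    return micrograms / (consumers * 365.0)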

  11. Evaluating the predictive performance of empirical estimators of natural mortality rate using information on over 200 fish species

    USGS Publications Warehouse

    Then, Amy Y.; Hoenig, John M; Hall, Norman G.; Hewitt, David A.

    2015-01-01

    Many methods have been developed in the last 70 years to predict the natural mortality rate, M, of a stock based on empirical evidence from comparative life history studies. These indirect or empirical methods are used in most stock assessments to (i) obtain estimates of M in the absence of direct information, (ii) check on the reasonableness of a direct estimate of M, (iii) examine the range of plausible M estimates for the stock under consideration, and (iv) define prior distributions for Bayesian analyses. The two most cited empirical methods have appeared in the literature over 2500 times to date. Despite the importance of these methods, there is no consensus in the literature on how well these methods work in terms of prediction error or how their performance may be ranked. We evaluate estimators based on various combinations of maximum age (tmax), growth parameters, and water temperature by seeing how well they reproduce >200 independent, direct estimates of M. We use tenfold cross-validation to estimate the prediction error of the estimators and to rank their performance. With updated and carefully reviewed data, we conclude that a tmax-based estimator performs the best among all estimators evaluated. The tmax-based estimators in turn perform better than the Alverson–Carney method based on tmax and the von Bertalanffy K coefficient, Pauly’s method based on growth parameters and water temperature and methods based just on K. It is possible to combine two independent methods by computing a weighted mean but the improvement over the tmax-based methods is slight. Based on cross-validation prediction error, model residual patterns, model parsimony, and biological considerations, we recommend the use of a tmax-based estimator (M = 4.899 tmax^(-0.916), prediction error = 0.32) when possible and a growth-based method (M = 4.118 K^(0.73) L∞^(-0.33), prediction error
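
    The two recommended estimators transcribe directly into code. The coefficients are taken from the abstract above; following convention in this literature, M is per year, tmax in years, K per year, and L∞ in cm (the unit conventions are our reading, not stated in the abstract):

```python
def m_from_tmax(tmax):
    """Natural mortality M from maximum observed age (tmax-based form)."""
    return 4.899 * tmax ** -0.916

def m_from_growth(K, L_inf):
    """Natural mortality M from von Bertalanffy growth parameters
    (growth-based form, water temperature dropped)."""
    return 4.118 * K ** 0.73 * L_inf ** -0.33
```

    For example, a stock with tmax = 20 yr gives M ≈ 0.31 yr⁻¹, and K = 0.2 yr⁻¹ with L∞ = 50 cm gives M ≈ 0.35 yr⁻¹; in practice the tmax-based value would be preferred when a reliable maximum age is available.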

  12. Micrometer size polarization independent depletion-type photonic modulator in Silicon On Insulator

    NASA Astrophysics Data System (ADS)

    Gardes, F. Y.; Tsakmakidis, K. L.; Thomson, D.; Reed, G. T.; Mashanovich, G. Z.; Hess, O.; Avitabile, D.

    2007-04-01

    The trend in silicon photonics in the last few years has been to reduce waveguide size to obtain maximum gain in the real estate of devices as well as to increase the performance of active devices. Using different methods for the modulation, optical modulators in silicon have seen their bandwidth increased to reach multi-GHz frequencies. In order to simplify fabrication, one requirement for a waveguide, as well as for a modulator, is to retain polarization independence in any state of operation and to be as small as possible. In this paper we provide a way to obtain polarization independence and improve the efficiency of an optical modulator using a V-shaped pn junction based on the natural etch angle of silicon, 54.7°. This modulator is compared to a flat-junction depletion-type modulator of the same size and doping concentration.

  13. Functional Independent Scaling Relation for ORR/OER Catalysts

    DOE PAGES

    Christensen, Rune; Hansen, Heine A.; Dickens, Colin F.; ...

    2016-10-11

    A widely used adsorption energy scaling relation between OH* and OOH* intermediates in the oxygen reduction reaction (ORR) and oxygen evolution reaction (OER) has previously been determined using density functional theory and shown to dictate a minimum thermodynamic overpotential for both reactions. Here, we show that the oxygen–oxygen bond in the OOH* intermediate is, however, not well described with the previously used class of exchange-correlation functionals. By quantifying and correcting the systematic error, an improved description of gaseous peroxide species versus experimental data and a reduction in calculational uncertainty is obtained. For adsorbates, we find that the systematic error largely cancels the vdW interaction missing in the original determination of the scaling relation. An improved scaling relation, which is fully independent of the applied exchange–correlation functional, is obtained and found to differ by 0.1 eV from the original. Lastly, this largely confirms that, although obtained with a method suffering from systematic errors, the previously obtained scaling relation is applicable for predictions of catalytic activity.

  14. Blind estimation of reverberation time

    NASA Astrophysics Data System (ADS)

    Ratnam, Rama; Jones, Douglas L.; Wheeler, Bruce C.; O'Brien, William D.; Lansing, Charissa R.; Feng, Albert S.

    2003-11-01

    The reverberation time (RT) is an important parameter for characterizing the quality of an auditory space. Sounds in reverberant environments are subject to coloration. This affects speech intelligibility and sound localization. Many state-of-the-art audio signal processing algorithms, for example in hearing-aids and telephony, are expected to have the ability to characterize the listening environment, and turn on an appropriate processing strategy accordingly. Thus, a method for characterization of room RT based on passively received microphone signals represents an important enabling technology. Current RT estimators, such as Schroeder's method, depend on a controlled sound source, and thus cannot produce an online, blind RT estimate. Here, a method for estimating RT without prior knowledge of sound sources or room geometry is presented. The diffusive tail of reverberation was modeled as an exponentially damped Gaussian white noise process. The time-constant of the decay, which provided a measure of the RT, was estimated using a maximum-likelihood procedure. The estimates were obtained continuously, and an order-statistics filter was used to extract the most likely RT from the accumulated estimates. The procedure was illustrated for connected speech. Results obtained for simulated and real room data are in good agreement with the real RT values.
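
    The core of the method, fitting an exponentially damped Gaussian white noise model by maximum likelihood, can be sketched as follows. This is a simplified illustration (the grid search, sampling rate, and synthetic tail are assumptions; the paper's continuous online tracking and order-statistics filtering are omitted):

```python
import numpy as np

def blind_rt_estimate(y, fs, n_grid=200):
    """ML fit of y[n] = exp(-n/(fs*tau)) * w[n], w ~ N(0, sigma^2):
    grid-search tau with the noise variance profiled out."""
    n = np.arange(len(y))
    best_ll, best_tau = -np.inf, None
    for tau in np.linspace(0.01, 2.0, n_grid):
        a = np.exp(-1.0 / (fs * tau))             # per-sample decay factor
        sig2 = np.mean(y ** 2 * a ** (-2.0 * n))  # profiled ML variance
        ll = -0.5 * len(y) * np.log(sig2) - np.log(a) * n.sum()
        if ll > best_ll:
            best_ll, best_tau = ll, tau
    return 3.0 * np.log(10.0) * best_tau          # 60 dB energy-decay time

# Synthetic reverberant tail: tau = 50 ms, i.e. RT60 of about 0.35 s.
rng = np.random.default_rng(0)
fs = 8000
n = np.arange(4000)
y = np.exp(-n / (fs * 0.05)) * rng.normal(size=n.size)
print(round(blind_rt_estimate(y, fs), 2))
```

    In the paper's setting such estimates would be accumulated over time and filtered; here a single damped-noise segment is fitted once.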

  15. Estimating surface hardening profile of blank for obtaining high drawing ratio in deep drawing process using FE analysis

    NASA Astrophysics Data System (ADS)

    Tan, C. J.; Aslian, A.; Honarvar, B.; Puborlaksono, J.; Yau, Y. H.; Chong, W. T.

    2015-12-01

    We constructed an FE axisymmetric model to simulate the effect of partially hardened blanks on increasing the limiting drawing ratio (LDR) of cylindrical cups. We partitioned an arc-shaped hard layer into the cross section of a DP590 blank. We assumed the mechanical property of the layer to be equivalent to either DP980 or DP780. We verified the accuracy of the model by comparing the calculated LDR for DP590 with the one reported in the literature. The LDR for the partially hardened blank increased from 2.11 to 2.50 with a 1 mm deep DP980 ring-shaped hard layer on the top surface of the blank. The position of the layer changed with drawing ratio. We proposed equations for estimating the inner and outer diameters of the layer and tested their accuracy in the simulation. Although the outer diameters fitted the estimated line well, the inner diameters were slightly less than the estimated ones.

  16. 20 CFR 404.810 - How to obtain a statement of earnings and a benefit estimate statement.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... records at the time of the request. If you have a social security number and have wages or net earnings... prescribed form, giving us your name, social security number, date of birth, and sex. You, your authorized... benefit estimate statement. 404.810 Section 404.810 Employees' Benefits SOCIAL SECURITY ADMINISTRATION...

  17. Monte Carlo Estimation of Absorbed Dose Distributions Obtained from Heterogeneous 106Ru Eye Plaques.

    PubMed

    Zaragoza, Francisco J; Eichmann, Marion; Flühs, Dirk; Sauerwein, Wolfgang; Brualla, Lorenzo

    2017-09-01

    The distribution of the emitter substance in 106Ru eye plaques is usually assumed to be homogeneous for treatment planning purposes. However, this distribution is never homogeneous, and it differs widely from plaque to plaque due to manufacturing factors. By Monte Carlo simulation of radiation transport, we study the absorbed dose distribution obtained from the specific CCA1364 and CCB1256 106Ru plaques, whose actual emitter distributions were measured. The idealized, homogeneous CCA and CCB plaques are also simulated. The largest discrepancies in depth dose distribution observed between the heterogeneous and the homogeneous plaques were 7.9% and 23.7% for the CCA and CCB plaques, respectively. In terms of isodose lines, the line referring to 100% of the reference dose penetrates 0.2 and 1.8 mm deeper in the case of the heterogeneous CCA and CCB plaques, respectively, with respect to their homogeneous counterparts. The observed differences in absorbed dose distributions obtained from heterogeneous and homogeneous plaques are clinically irrelevant if the plaques are used with a lateral safety margin of at least 2 mm. However, these differences may be relevant if the plaques are used in eccentric positioning.

  18. Revised techniques for estimating peak discharges from channel width in Montana

    USGS Publications Warehouse

    Parrett, Charles; Hull, J.A.; Omang, R.J.

    1987-01-01

    This study was conducted to develop new estimating equations based on channel width and the updated flood frequency curves of previous investigations. Simple regression equations for estimating peak discharges with recurrence intervals of 2, 5, 10, 25, 50, and 100 years were developed for seven regions in Montana. The standard errors of estimate for the equations that use active channel width as the independent variable ranged from 30% to 87%. The standard errors of estimate for the equations that use bankfull width as the independent variable ranged from 34% to 92%. The smallest standard errors generally occurred in the prediction equations for the 2-yr, 5-yr, and 10-yr floods, and the largest standard errors occurred in the prediction equations for the 100-yr flood. The equations that use active channel width and the equations that use bankfull width were determined to be about equally reliable in five regions. In the West Region, the equations that use bankfull width were slightly more reliable than those based on active channel width, whereas in the East-Central Region the equations that use active channel width were slightly more reliable than those based on bankfull width. Compared with similar equations previously developed, the standard errors of estimate for the new equations are substantially smaller in three regions and substantially larger in two regions. Limitations on the use of the estimating equations include: (1) the equations are based on stable conditions of channel geometry and prevailing water and sediment discharge; (2) the measurement of channel width requires a site visit, preferably by a person with experience in the method, and involves appreciable measurement error; (3) the reliability of results from the equations for channel widths beyond the range of definition is unknown. In spite of the limitations, the estimating equations derived in this study are considered to be as reliable as estimating equations based on basin and
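
    Equations of this kind are typically power laws, Q_T = c·W^b, fitted by ordinary least squares on log-transformed data. A hypothetical sketch (synthetic widths and discharges; the coefficients here are illustrative, not the published Montana values):

```python
import numpy as np

rng = np.random.default_rng(1)
width = rng.uniform(5.0, 60.0, 40)                     # channel widths (ft)
q2 = 8.0 * width ** 1.5 * rng.lognormal(0.0, 0.3, 40)  # synthetic 2-yr peaks

# Fit log(Q2) = log(c) + b*log(W); np.polyfit returns [slope, intercept].
b, log_c = np.polyfit(np.log(width), np.log(q2), 1)
c = np.exp(log_c)
print(f"Q2 ≈ {c:.1f} * W^{b:.2f}")
```

    The standard error of estimate in percent, as reported above, would come from the residual scatter of the same log-space fit.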

  19. Independence of heritable influences on the food intake of free-living humans.

    PubMed

    de Castro, John M

    2002-01-01

    The time of day of meal ingestion, the number of people present at the meal, the subjective state of hunger, and the estimated before-meal contents in the stomach have been established as influences on the amount eaten in a meal and these influences have been shown to be heritable. Because these factors intercorrelate, the calculated heritabilities for some of these variables might result indirectly from their covariation with one of the other heritable variables. The independence of the heritability of the influence of these four factors was investigated with 110 identical and 102 fraternal same-sex and 53 fraternal mixed-sex adult twin pairs who were paid to maintain 7-d food-intake diaries. From the diary reports, the meal sizes were calculated and subjected to multiple regression analysis using the estimated before-meal stomach contents, the reported number of other people present, the subjective hunger ratings, and the time of day of the meal as predictors. Linear structural modeling was applied to the beta-coefficients from the multiple regression to investigate whether the heritability of the influences of these four variables was independent. Significant genetic effects were found for the beta-coefficients for all four variables, indicating that the heritability of their relationship with intake is to some extent independent and heritable. This suggests that influences of multiple factors on intake are influenced by the genes and become part of the total package of genetically determined physiologic, sociocultural, and psychological processes that regulate energy balance.

  20. Peak Measurement for Vancomycin AUC Estimation in Obese Adults Improves Precision and Lowers Bias.

    PubMed

    Pai, Manjunath P; Hong, Joseph; Krop, Lynne

    2017-04-01

    Vancomycin area under the curve (AUC) estimates may be skewed in obese adults due to weight-dependent pharmacokinetic parameters. We demonstrate that peak and trough measurements reduce bias and improve the precision of vancomycin AUC estimates in obese adults (n = 75) and validate this in an independent cohort (n = 31). The precision and mean percent bias of Bayesian vancomycin AUC estimates are comparable between covariate-dependent (R² = 0.774, 3.55%) and covariate-independent (R² = 0.804, 3.28%) models when peaks and troughs are measured but not when measurements are restricted to troughs only (R² = 0.557, 15.5%). Copyright © 2017 American Society for Microbiology.

  1. Using Independent Components Analysis to diminish the response of groundwater in borehole strainmeter

    NASA Astrophysics Data System (ADS)

    Chen, Chih-Yen; Hu, Jyr-Ching

    2017-04-01

    By design, a borehole strainmeter records not only minor signals of tectonic movement but also broad environmental signals such as barometric pressure, rainfall, and groundwater. Among these external factors, groundwater influences the borehole strainmeter observations the most: it can cause a much larger response than the tectonic strain changes of interest. We use a co-sited piezometer to record the pore pressure of groundwater in the rock formation in order to obtain the relationship between strain change and pore pressure. However, some problems remain unsolved. First, due to instrument limitations, we could not set the pore-pressure transducer in the same aquifer as the strainmeter, so the response attributed to pore-pressure change may not be fully correct. Furthermore, although pore-pressure transducers were installed at most observatories, power and connectivity problems cause gaps and losses in the records. It is therefore necessary to find a better and more stable method to diminish the groundwater response in strainmeter data. Strain transducers with different orientations observe the groundwater response at different scales. If we can extract the groundwater signal from each independent strain transducer and estimate its original source, we can significantly raise the signal strength and lower the noise level. This is a kind of blind-signal-separation (BSS) problem. BSS procedures extract or rebuild signals that cannot be observed directly from mixtures of many sources, and Independent Component Analysis (ICA) is a broadly adopted method. ICA finds components of a complex signal that are statistically independent and non-Gaussian. We use the FastICA algorithm to identify the groundwater-response strain in the original strain data and try to diminish it to raise the signal strength. We preprocessed the strain data, then used ICA to separate the data into several independent

  2. Quantitative orientation-independent differential interference contrast (DIC) microscopy

    NASA Astrophysics Data System (ADS)

    Shribak, Michael; LaFountain, James; Biggs, David; Inoué, Shinya

    2007-02-01

    We describe a new DIC technique, which records phase gradients within microscopic specimens independently of their orientation. The proposed system allows the generation of images representing the distribution of dry mass (optical path difference) in the specimen. Unlike other forms of interference microscopes, this approach does not require a narrow illuminating cone. The orientation-independent differential interference contrast (OI-DIC) system can also be combined with orientation-independent polarization (OI-Pol) measurements to yield two complementary images: one showing dry mass distribution (which is proportional to refractive index) and the other showing the distribution of birefringence (due to structural or internal anisotropy). With the model specimen used for this work, living spermatocytes from the crane fly Nephrotoma suturalis, the OI-DIC image clearly reveals the detailed shape of the chromosomes while the polarization image quantitatively depicts the distribution of the birefringent microtubules in the spindle, both without any need for staining or other modifications of the cell. We present examples of a pseudo-color combined image incorporating both orientation-independent DIC and polarization images of a spermatocyte at diakinesis and metaphase of meiosis I. Those images provide clear evidence that the proposed technique can reveal fine architecture and molecular organization in live cells without the perturbation associated with staining or fluorescent labeling. The phase image was obtained using optics with a numerical aperture of 1.4, achieving a level of resolution never before attained with an interference microscope.

  3. Structural, electronic, elastic, and thermal properties of CaNiH3 perovskite obtained from first-principles calculations

    NASA Astrophysics Data System (ADS)

    Benlamari, S.; Bendjeddou, H.; Boulechfar, R.; Amara Korba, S.; Meradji, H.; Ahmed, R.; Ghemid, S.; Khenata, R.; Omran, S. Bin

    2018-03-01

    A theoretical study of the structural, elastic, electronic, mechanical, and thermal properties of the perovskite-type hydride CaNiH3 is presented. This study is carried out via the first-principles full potential (FP) linearized augmented plane wave plus local orbital (LAPW+lo) method designed within the density functional theory (DFT). To treat the exchange–correlation energy/potential for the total energy calculations, the local density approximation (LDA) of Perdew–Wang (PW) and the generalized gradient approximation (GGA) of Perdew–Burke–Ernzerhof (PBE) are used. The three independent elastic constants (C11, C12, and C44) are calculated from the direct computation of the stresses generated by small strains. Besides, we report the variation of the elastic constants as a function of pressure as well. From the calculated elastic constants, the mechanical character of CaNiH3 is predicted. Pertaining to the thermal properties, the Debye temperature is estimated from the average sound velocity. To further comprehend this compound, the quasi-harmonic Debye model is used to analyze the thermal properties. From the calculations, we find that the obtained results of the lattice constant (a0), bulk modulus (B0), and its pressure derivative (B0′) are in good agreement with the available theoretical as well as experimental results. Similarly, the obtained electronic band structure demonstrates the metallic character of this perovskite-type hydride.

  4. Direct and simultaneous estimation of cardiac four chamber volumes by multioutput sparse regression.

    PubMed

    Zhen, Xiantong; Zhang, Heye; Islam, Ali; Bhaduri, Mousumi; Chan, Ian; Li, Shuo

    2017-02-01

    Cardiac four-chamber volume estimation plays a fundamental and crucial role in clinical quantitative analysis of whole heart functions. It is a challenging task due to the huge complexity of the four chambers, including great appearance variations, huge shape deformation and interference between chambers. Direct estimation has recently emerged as an effective and convenient tool for cardiac ventricular volume estimation. However, existing direct estimation methods were specifically developed for one single ventricle, i.e., the left ventricle (LV), or bi-ventricles; they cannot be directly used for four-chamber volume estimation due to the great combinatorial variability and highly complex anatomical interdependency of the four chambers. In this paper, we propose a new, general framework for direct and simultaneous four-chamber volume estimation. We have addressed two key issues, i.e., cardiac image representation and simultaneous four-chamber volume estimation, which enables accurate and efficient four-chamber volume estimation. We generate compact and discriminative image representations by supervised descriptor learning (SDL), which can remove irrelevant information and extract discriminative features. We propose direct and simultaneous four-chamber volume estimation by multioutput sparse latent regression (MSLR), which enables jointly modeling nonlinear input-output relationships and capturing four-chamber interdependence. The proposed method is highly generalized and independent of imaging modalities, providing a general regression framework that can be extensively used for clinical data prediction to achieve automated diagnosis. Experiments on both MR and CT images show that our method achieves high performance, with a correlation coefficient of up to 0.921 with ground truth obtained manually by human experts, which is clinically significant and enables more accurate, convenient and comprehensive assessment of cardiac functions.
Copyright © 2016 Elsevier

  5. Evaluation of sampling methods used to estimate irrigation pumpage in Chase, Dundy, and Perkins counties, Nebraska

    USGS Publications Warehouse

    Heimes, F.J.; Luckey, R.R.; Stephens, D.M.

    1986-01-01

    Combining estimates of applied irrigation water, determined for selected sample sites, with information on irrigated acreage provides one alternative for developing areal estimates of groundwater pumpage for irrigation. The reliability of this approach was evaluated by comparing estimated pumpage with metered pumpage for two years for a three-county area in southwestern Nebraska. Meters on all irrigation wells in the three counties provided a complete data set for evaluation of equipment and comparison with pumpage estimates. Regression analyses were conducted on discharge, time-of-operation, and pumpage data collected at 52 irrigation sites in 1983 and at 57 irrigation sites in 1984, using data from inline flowmeters as the independent variable. The standard error of the estimate for regression analysis of discharge measurements made using a portable flowmeter was 6.8% of the mean discharge metered by inline flowmeters. The standard error of the estimate for regression analysis of time of operation was 8.1% of the mean time of operation for electric meters and 15.1% for engine-hour meters. Sampled pumpage, calculated by multiplying the average discharge obtained from the portable flowmeter by the time of operation obtained from energy or hour meters, was compared with metered pumpage from inline flowmeters at sample sites. The standard error of the estimate for the regression analysis of sampled pumpage was 10.3% of the mean of the metered pumpage for 1983 and 1984 combined. The difference between the mean of the sampled pumpage and the mean of the metered pumpage was only 1.8% for 1983 and 2.3% for 1984. Estimated pumpage, for each county and for the study area, was calculated by multiplying application (sampled pumpage divided by irrigated acreage at sample sites) by irrigated acreage compiled from Landsat (Land satellite) imagery. Estimated pumpage was compared with total metered pumpage for each county and the study area

  6. Ethnicity is an independent risk indicator when estimating diabetes risk with FINDRISC scores: a cross sectional study comparing immigrants from the Middle East and native Swedes.

    PubMed

    Bennet, L; Groop, L; Lindblad, U; Agardh, C D; Franks, P W

    2014-10-01

    This study sought to compare type 2 diabetes (T2D) risk indicators in Iraqi immigrants with those in ethnic Swedes living in southern Sweden. Population-based, cross-sectional cohort study of men and women, aged 30-75 years, born in Iraq or Sweden, conducted in 2010-2012 in Malmö, Sweden. A 75 g oral glucose tolerance test was performed, and sociodemographic and lifestyle data were collected. T2D risk was assessed by the Finnish Diabetes Risk Score (FINDRISC). In Iraqi versus Swedish participants, T2D was twice as prevalent (11.6 vs. 5.8%, p<0.001). A large proportion of the excess T2D risk was attributable to larger waist circumference and first-degree family history of diabetes. However, Iraqi ethnicity was a risk factor for T2D independently of other FINDRISC factors (odds ratio (OR) 2.5, 95% CI 1.6-3.9). The FINDRISC algorithm predicted that more Iraqis than Swedes (16.2 vs. 12.3%, p<0.001) will develop T2D within the next decade. The total annual costs for excess T2D risk in Iraqis are estimated to exceed 2.3 million euros in 2005, not accounting for worse quality of life. Our study suggests that Middle Eastern ethnicity should be considered an independent risk indicator for diabetes. Accordingly, the implementation of culturally tailored prevention programs may be warranted. Copyright © 2014 Primary Care Diabetes Europe. Published by Elsevier Ltd. All rights reserved.

  7. [Estimating heavy metal concentrations in topsoil from vegetation reflectance spectra of Hyperion images: A case study of Yushu County, Qinghai, China.

    PubMed

    Yang, Ling Yu; Gao, Xiao Hong; Zhang, Wei; Shi, Fei Fei; He, Lin Hua; Jia, Wei

    2016-06-01

    In this study, we explored the feasibility of estimating soil heavy metal concentrations using hyperspectral satellite imagery. The concentrations of the As, Pb, Zn and Cd elements in 48 topsoil samples, collected from the field in Yushu County of the Sanjiangyuan region, were measured in the laboratory. We then extracted 176 vegetation spectral reflectance bands for the 48 soil samples, as well as five vegetation indices, from two Hyperion images. Following that, the partial least squares regression (PLSR) method was employed to estimate the soil heavy metal concentrations from the above two independent sets of Hyperion-derived variables: one estimation model was constructed between the 176 vegetation spectral reflectance bands and the soil heavy metal concentrations (the vegetation spectral reflectance-based estimation model), and another between the five vegetation indices and the soil heavy metal concentrations (the synthetic vegetation index-based estimation model). Using the RPD (the ratio of the standard deviation of the measured values of the four heavy metals in the validation samples to the RMSE) as the validation criterion, the RPDs of the As and Pb concentrations from the two models were both less than 1.4, which suggested that both models were incapable of even roughly estimating As and Pb concentrations; whereas the RPDs of Zn and Cd were 1.53, 1.46 and 1.46, 1.42, respectively, which implied that both models were able to roughly estimate Zn and Cd concentrations. Based on those results, the vegetation spectral reflectance-based estimation model was selected to obtain the spatial distribution map of the Zn concentration in combination with the Hyperion image. The estimated Zn map showed that the zones with high Zn concentrations were distributed near provincial road 308, national road 214 and towns, which could be influenced by human activities. 
Our study proved that the spectral reflectance of the Hyperion image was useful in estimating the soil
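
    The PLSR step compresses the many collinear reflectance bands into a few latent components before regressing. A minimal numpy sketch of PLS1 (NIPALS) on synthetic data of the same dimensions (48 samples, 176 bands; the data and component count are illustrative, not the paper's):

```python
import numpy as np

def pls1(X, y, n_components):
    """Minimal PLS1 (NIPALS): returns coefficients B plus the centering
    terms, so that y ≈ (X - xmean) @ B + ymean."""
    Xk = X - X.mean(axis=0)
    yk = y - y.mean()
    W, P, Q = [], [], []
    for _ in range(n_components):
        w = Xk.T @ yk
        w = w / np.linalg.norm(w)       # weight vector
        t = Xk @ w                      # score
        p = Xk.T @ t / (t @ t)          # loading
        q = (yk @ t) / (t @ t)
        Xk = Xk - np.outer(t, p)        # deflate X and y
        yk = yk - q * t
        W.append(w); P.append(p); Q.append(q)
    W, P = np.array(W).T, np.array(P).T
    B = W @ np.linalg.solve(P.T @ W, np.array(Q))
    return B, X.mean(axis=0), y.mean()

rng = np.random.default_rng(2)
X = rng.normal(size=(48, 176))                  # 48 samples, 176 bands
y = X @ rng.normal(size=176) * 0.05 + rng.normal(scale=0.5, size=48)

B, xm, ym = pls1(X, y, 5)
# The paper's RPD criterion: std of measured values over RMSE of predictions.
rpd = y.std() / np.sqrt(np.mean((y - ((X - xm) @ B + ym)) ** 2))
print(round(rpd, 2))
```

    In practice the RPD would be computed on held-out validation samples, as in the abstract, rather than in-sample as here.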

  8. Improving the Discipline of Cost Estimation and Analysis

    NASA Technical Reports Server (NTRS)

    Piland, William M.; Pine, David J.; Wilson, Delano M.

    2000-01-01

    The need to improve the quality and accuracy of cost estimates of proposed new aerospace systems has been widely recognized. Industry has done the best job of maintaining related capability, with improvements in estimation methods and appropriate priority given to the hiring and training of qualified analysts. Some parts of Government, and the National Aeronautics and Space Administration (NASA) in particular, continue to need major improvements in this area. Recently, NASA recognized that its cost estimation and analysis capabilities had eroded to the point that the ability to provide timely, reliable estimates was impacting the confidence in planning many program activities. As a result, this year the Agency established a lead role for cost estimation and analysis. The Independent Program Assessment Office, located at the Langley Research Center, was given this responsibility.

  9. Hierarchical Bayes estimation of species richness and occupancy in spatially replicated surveys

    USGS Publications Warehouse

    Kery, M.; Royle, J. Andrew

    2008-01-01

    1. Species richness is the most widely used biodiversity metric, but cannot be observed directly as, typically, some species are overlooked. Imperfect detectability must therefore be accounted for to obtain unbiased species-richness estimates. When richness is assessed at multiple sites, two approaches can be used to estimate species richness: either estimating for each site separately, or pooling all samples. The first approach produces imprecise estimates, while the second loses site-specific information. 2. In contrast, a hierarchical Bayes (HB) multispecies site-occupancy model benefits from the combination of information across sites without losing site-specific information and also yields occupancy estimates for each species. The heart of the model is an estimate of the incompletely observed presence-absence matrix, a centrepiece of biogeography and monitoring studies. We illustrate the model using Swiss breeding bird survey data, and compare its estimates with the widely used jackknife species-richness estimator and raw species counts. 3. Two independent observers each conducted three surveys in 26 1-km² quadrats, and detected 27-56 (total 103) species. The average estimated proportion of species detected after three surveys was 0.87 under the HB model. Jackknife estimates were less precise (less repeatable between observers) than raw counts, but HB estimates were as repeatable as raw counts. The combination of information in the HB model thus resulted in species-richness estimates presumably at least as unbiased as previous approaches that correct for detectability, but without costs in precision relative to uncorrected, biased species counts. 4. Total species richness in the entire region sampled was estimated at 113.1 (CI 106-123); species detectability ranged from 0.08 to 0.99, illustrating very heterogeneous species detectability; and species occupancy was 0.06-0.96. 
Even after six surveys, absolute bias in observed occupancy was estimated at up to 0

  10. Penalized spline estimation for functional coefficient regression models.

    PubMed

    Cao, Yanrong; Lin, Haiqun; Wu, Tracy Z; Yu, Yan

    2010-04-01

    The functional coefficient regression models assume that the regression coefficients vary with some "threshold" variable, providing appreciable flexibility in capturing the underlying dynamics in data and avoiding the so-called "curse of dimensionality" in multivariate nonparametric estimation. We first investigate the estimation, inference, and forecasting for the functional coefficient regression models with dependent observations via penalized splines. The P-spline approach, as a direct ridge regression shrinkage type global smoothing method, is computationally efficient and stable. With established fixed-knot asymptotics, inference is readily available. Exact inference can be obtained for fixed smoothing parameter λ, which is most appealing for finite samples. Our penalized spline approach gives an explicit model expression, which also enables multi-step-ahead forecasting via simulations. Furthermore, we examine different methods of choosing the important smoothing parameter λ: modified multi-fold cross-validation (MCV), generalized cross-validation (GCV), and an extension of empirical bias bandwidth selection (EBBS) to P-splines. In addition, we implement smoothing parameter selection using mixed model framework through restricted maximum likelihood (REML) for P-spline functional coefficient regression models with independent observations. The P-spline approach also easily allows different smoothness for different functional coefficients, which is enabled by assigning different penalty λ accordingly. We demonstrate the proposed approach by both simulation examples and a real data application.
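
    The shrinkage idea behind the approach can be illustrated compactly. The sketch below uses a truncated-power basis with a ridge penalty on the knot coefficients rather than the B-spline difference penalty of a true P-spline; the knot count and smoothing parameter λ are arbitrary choices here, not selected by MCV, GCV, or EBBS as in the paper:

```python
import numpy as np

def pspline_fit(x, y, n_knots=20, lam=1.0):
    """Penalized-spline smoother: linear truncated-power basis at quantile
    knots, ridge penalty lam on the knot (jump) coefficients only."""
    knots = np.quantile(x, np.linspace(0, 1, n_knots + 2)[1:-1])
    X = np.column_stack([np.ones_like(x), x] +
                        [np.maximum(x - k, 0.0) for k in knots])
    D = np.eye(X.shape[1])
    D[0, 0] = D[1, 1] = 0.0            # leave the polynomial part unpenalized
    coef = np.linalg.solve(X.T @ X + lam * D, X.T @ y)
    return lambda xnew: np.column_stack(
        [np.ones_like(xnew), xnew] +
        [np.maximum(xnew - k, 0.0) for k in knots]) @ coef

rng = np.random.default_rng(3)
x = np.sort(rng.uniform(0, 1, 200))
y = np.sin(2 * np.pi * x) + rng.normal(scale=0.2, size=200)
fhat = pspline_fit(x, y)
print(round(float(np.mean((fhat(x) - np.sin(2 * np.pi * x)) ** 2)), 3))
```

    In the functional coefficient setting, the same penalized basis would be applied to each coefficient function of the threshold variable, with a separate λ per coefficient as the abstract describes.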

  11. Affirming Independence: Exploring Mechanisms Underlying a Values Affirmation Intervention for First-Generation Students

    PubMed Central

    Tibbetts, Yoi; Harackiewicz, Judith M.; Canning, Elizabeth A.; Boston, Jilana S.; Priniski, Stacy J.; Hyde, Janet S.

    2016-01-01

    First-generation college students (students for whom neither parent has a 4-year college degree) earn lower grades and worry more about whether they belong in college, compared to continuing-generation students (who have at least one parent with a 4-year college degree). We conducted a longitudinal follow-up of participants from a study in which a values-affirmation intervention improved performance in a biology course for first-generation college students, and found that the treatment effect on grades persisted three years later. First-generation students in the treatment condition obtained a GPA that was, on average, .18 points higher than first-generation students in the control condition, three years after values affirmation was implemented (Study 1A). We explored mechanisms by testing if the values-affirmation effects were predicated on first-generation students reflecting on interdependent values (thus affirming their values that are consistent with working-class culture) or independent values (thus affirming their values that are consistent with the culture of higher education). We found that when first-generation students wrote about their independence, they obtained higher grades (both in the semester in which values affirmation was implemented and in subsequent semesters) and felt less concerned about their background. In a separate laboratory experiment (Study 2) we manipulated the extent to which participants wrote about independence and found that encouraging first-generation students to write more about their independence improved their performance on a math test. These studies highlight the potential of having FG students focus on their own independence. PMID:27176770

  12. Ranking and averaging independent component analysis by reproducibility (RAICAR).

    PubMed

    Yang, Zhi; LaConte, Stephen; Weng, Xuchu; Hu, Xiaoping

    2008-06-01

    Independent component analysis (ICA) is a data-driven approach that has exhibited great utility for functional magnetic resonance imaging (fMRI). Standard ICA implementations, however, do not provide the number and relative importance of the resulting components. In addition, ICA algorithms utilizing gradient-based optimization give decompositions that are dependent on initialization values, which can lead to dramatically different results. In this work, a new method, RAICAR (Ranking and Averaging Independent Component Analysis by Reproducibility), is introduced to address these issues for spatial ICA applied to fMRI. RAICAR utilizes repeated ICA realizations and relies on the reproducibility between them to rank and select components. Different realizations are aligned based on correlations, leading to aligned components. Each component is ranked and thresholded based on between-realization correlations. Furthermore, different realizations of each aligned component are selectively averaged to generate the final estimate of the given component. Reliability and accuracy of this method are demonstrated with both simulated and experimental fMRI data. Copyright 2007 Wiley-Liss, Inc.
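The align-and-average step at the heart of RAICAR can be conveyed with a short sketch. This is not the authors' implementation: it assumes the component maps from repeated ICA runs are already available as rows of NumPy arrays, and it uses a simple greedy correlation matching rather than the paper's full alignment procedure.

```python
import numpy as np

def align_and_average(realizations):
    """Align components across repeated ICA runs by absolute correlation
    to the first run, fix their signs, and average (RAICAR-style sketch)."""
    ref = realizations[0]
    k = ref.shape[0]
    aligned = [ref]
    for comps in realizations[1:]:
        # cross-correlation between reference and candidate components
        c = np.corrcoef(np.vstack([ref, comps]))[:k, k:]
        order = np.abs(c).argmax(axis=1)            # greedy matching
        signs = np.sign(c[np.arange(k), order])     # resolve sign ambiguity
        aligned.append(comps[order] * signs[:, None])
    return np.mean(aligned, axis=0)
```

In RAICAR the between-realization correlations that drive this matching are also what rank each component; components that reproduce poorly across runs get low reproducibility scores.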

  13. Multiscale estimation of excess mass from gravity data

    NASA Astrophysics Data System (ADS)

    Castaldo, Raffaele; Fedi, Maurizio; Florio, Giovanni

    2014-06-01

    We describe a multiscale method to estimate the excess mass of gravity anomaly sources, based on the theory of source moments. Using a multipole expansion of the potential field and considering only the data along the vertical direction, a system of linear equations is obtained. The choice of inverting data along a vertical profile can help reduce the interference effects due to nearby anomalies and allows a local estimate of the source parameters. A criterion is established for selecting the optimal highest altitude of the vertical profile data and the truncation order of the series expansion. The inversion provides an estimate of the total anomalous mass and of the depth to the centre of mass. The method has several advantages with respect to classical methods, such as Gauss' method: (i) only a 1-D inversion is needed to obtain the estimates, since the inverted data are sampled along a single vertical profile; (ii) the resolution may be straightforwardly enhanced by using vertical derivatives; (iii) the centre of mass is estimated in addition to the excess mass; (iv) the method is very robust with respect to noise; (v) the profile may be chosen in such a way as to minimize the effects of interfering anomalies or side effects due to a limited area extent. The multiscale estimation of excess mass can be successfully used in various fields of application. Here, we analyse the gravity anomaly generated by a sulphide body in the Skellefteå ore district, northern Sweden, obtaining source mass and volume estimates in agreement with the known information. We show also that these estimates are substantially improved with respect to those obtained with the classical approach.
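As a much-simplified cousin of this approach, the sketch below inverts a synthetic vertical gravity profile for a point source's excess mass and depth, grid-searching the depth and solving for the mass by least squares at each trial depth. The source parameters and noise level are invented for illustration; the paper's method uses a full multipole expansion rather than a single point-mass kernel.

```python
import numpy as np

G = 6.674e-11                       # gravitational constant, SI units

def fit_point_mass(z, g, depths):
    """Grid-search depth; for each trial depth the best-fit mass is linear."""
    best = (np.inf, 0.0, 0.0)
    for z0 in depths:
        a = G / (z + z0) ** 2       # kernel: g = M * a along the profile
        m = (a @ g) / (a @ a)       # least-squares mass at this depth
        r = np.sum((g - m * a) ** 2)
        if r < best[0]:
            best = (r, m, z0)
    return best[1], best[2]         # (excess mass, depth to centre of mass)

rng = np.random.default_rng(9)
M_true, z0_true = 5e9, 150.0        # hypothetical body: 5e9 kg at 150 m depth
z = np.linspace(0.0, 500.0, 60)     # measurement altitudes on the profile
g = G * M_true / (z + z0_true) ** 2
g = g * (1 + 1e-3 * rng.standard_normal(z.size))   # 0.1% noise

M_hat, z0_hat = fit_point_mass(z, g, np.linspace(10.0, 1000.0, 500))
```

Because the data live on a single vertical line, the whole inversion is one-dimensional, which is the practical advantage point (i) in the abstract refers to.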

  14. Satellite-derived methane hotspot emission estimates using a fast data-driven method

    NASA Astrophysics Data System (ADS)

    Buchwitz, Michael; Schneising, Oliver; Reuter, Maximilian; Heymann, Jens; Krautwurst, Sven; Bovensmann, Heinrich; Burrows, John P.; Boesch, Hartmut; Parker, Robert J.; Somkuti, Peter; Detmers, Rob G.; Hasekamp, Otto P.; Aben, Ilse; Butz, André; Frankenberg, Christian; Turner, Alexander J.

    2017-05-01

    Methane is an important atmospheric greenhouse gas and an adequate understanding of its emission sources is needed for climate change assessments, predictions, and the development and verification of emission mitigation strategies. Satellite retrievals of near-surface-sensitive column-averaged dry-air mole fractions of atmospheric methane, i.e. XCH4, can be used to quantify methane emissions. Maps of time-averaged satellite-derived XCH4 show regionally elevated methane over several methane source regions. In order to obtain methane emissions of these source regions we use a simple and fast data-driven method to estimate annual methane emissions and corresponding 1σ uncertainties directly from maps of annually averaged satellite XCH4. From theoretical considerations we expect that our method tends to underestimate emissions. When applying our method to high-resolution atmospheric methane simulations, we typically find agreement within the uncertainty range of our method (often 100 %) but also find that our method tends to underestimate emissions by typically about 40 %. To what extent these findings are model dependent needs to be assessed. We apply our method to an ensemble of satellite XCH4 data products consisting of two products from SCIAMACHY/ENVISAT and two products from TANSO-FTS/GOSAT covering the time period 2003-2014. We obtain annual emissions of four source areas: Four Corners in the south-western USA, the southern part of Central Valley, California, Azerbaijan, and Turkmenistan. We find that our estimated emissions are in good agreement with independently derived estimates for Four Corners and Azerbaijan. For the Central Valley and Turkmenistan our estimated annual emissions are higher compared to the EDGAR v4.2 anthropogenic emission inventory. For Turkmenistan we find on average about 50 % higher emissions with our annual emission uncertainty estimates overlapping with the EDGAR emissions. For the region around Bakersfield in the Central Valley we
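Although the paper's method is defined directly on annually averaged XCH4 maps, the flavor of such a data-driven estimate can be conveyed with a crude mass-balance calculation: convert a regional column enhancement into a mass per unit area, then into an outflow through the region boundary. Every number below (enhancement, wind speed, region size) is an illustrative assumption, not a value from the study.

```python
# Toy mass-balance emission estimate from an XCH4 enhancement (illustrative).
G = 9.81                 # gravitational acceleration, m s^-2
P0 = 101325.0            # surface pressure, Pa
M_AIR, M_CH4 = 0.02896, 0.01604     # molar masses, kg mol^-1

delta_xch4 = 30e-9       # assumed 30 ppb annual-mean enhancement
wind = 4.0               # assumed effective ventilation wind speed, m s^-1
side = 200e3             # region treated as a square of 200 km side, m

air_col = P0 / (G * M_AIR)               # mol of air per m^2 of column
mass_col = delta_xch4 * air_col * M_CH4  # CH4 enhancement, kg per m^2
outflow = mass_col * wind * side         # kg s^-1 advected out of the region
annual_tg = outflow * 3.1536e7 / 1e9     # Tg CH4 per year
```

The real method additionally propagates retrieval and wind uncertainties into the 1-sigma error bars quoted in the abstract.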

  15. Estimation of Return Values of Wave Height: Consequences of Missing Observations

    ERIC Educational Resources Information Center

    Ryden, Jesper

    2008-01-01

    Extreme-value statistics is often used to estimate so-called return values (actually related to quantiles) for environmental quantities like wind speed or wave height. A basic method for estimation is the method of block maxima which consists in partitioning observations in blocks, where maxima from each block could be considered independent.…
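The block-maxima method the abstract describes can be sketched with SciPy: fit a generalized extreme value (GEV) distribution to annual maxima and read off a return value from its quantile function. The Gumbel-distributed "daily wave heights" below are invented for the example.

```python
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(42)
# 50 "years" of simulated daily wave heights; take annual block maxima
daily = rng.gumbel(loc=2.0, scale=0.5, size=(50, 365))
maxima = daily.max(axis=1)

# Fit the GEV distribution to the block maxima
shape, loc, scale = genextreme.fit(maxima)

# 100-year return value: the level exceeded once per 100 blocks on average
r100 = genextreme.ppf(1 - 1.0 / 100, shape, loc=loc, scale=scale)
```

Missing observations within a block can only lower the recorded block maximum, which is why the paper's concern is a systematic downward bias in return-value estimates.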

  16. Investigation of Properties of Nanocomposite Polyimide Samples Obtained by Fused Deposition Modeling

    NASA Astrophysics Data System (ADS)

    Polyakov, I. V.; Vaganov, G. V.; Yudin, V. E.; Ivan'kova, E. M.; Popova, E. N.; Elokhovskii, V. Yu.

    2018-03-01

    Nanomodified polyimide samples were obtained by fused deposition modeling (FDM) using an experimental setup for 3D printing of highly heat-resistant plastics. The mechanical properties and structure of these samples were studied by viscosimetry, differential scanning calorimetry, and scanning electron microscopy. A comparative estimation of the mechanical properties of laboratory samples obtained from a nanocomposite based on heat-resistant polyetherimide by FDM and injection molding is presented.

  17. Space Shuttle propulsion parameter estimation using optimal estimation techniques, volume 1

    NASA Technical Reports Server (NTRS)

    1983-01-01

    The mathematical developments and their computer program implementation for the Space Shuttle propulsion parameter estimation project are summarized. The estimation approach chosen is extended Kalman filtering with a modified Bryson-Frazier smoother. Its use here is motivated by the objective of obtaining better estimates than those available from filtering alone and of eliminating the lag associated with filtering. The estimation technique uses as the dynamical process the six-degree-of-freedom equations of motion, resulting in twelve state vector elements. In addition to these, mass and solid propellant burn depth are included as "system" state elements. The "parameter" state elements can include aerodynamic coefficient, inertia, center-of-gravity, and atmospheric wind deviations from referenced values. Propulsion parameter state elements have been included not as options, as just discussed, but as the main parameter states to be estimated. The mathematical developments were completed for all these parameters. Since the system dynamics and measurement processes are nonlinear functions of the states, the mathematical developments are taken up almost entirely by the linearization of these equations as required by the estimation algorithms.
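A minimal scalar analogue of the filtering half of this scheme can be sketched as follows. It implements only an extended Kalman filter (no Bryson-Frazier smoother), with hypothetical drag-like dynamics and noise values chosen for illustration, not the twelve-state Shuttle model.

```python
import numpy as np

def ekf(zs, x0=10.0, P0=1.0, c=0.01, dt=0.1, Q=1e-4, R=0.25):
    """Scalar EKF for x_{k+1} = x_k - dt*c*x_k**2 + w_k,  z_k = x_k + v_k."""
    x, P, est = x0, P0, []
    for z in zs:
        F = 1.0 - 2.0 * dt * c * x      # Jacobian of the dynamics at x
        x = x - dt * c * x ** 2         # predict state
        P = F * P * F + Q               # predict covariance
        K = P / (P + R)                 # Kalman gain for z = x + v
        x = x + K * (z - x)             # measurement update
        P = (1.0 - K) * P
        est.append(x)
    return np.array(est)
```

The linearization step (computing `F`) is exactly the kind of work the abstract says dominates the mathematical development; a smoother would then run backward over the stored filter quantities to remove the filtering lag.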

  18. Modification of the Sandwich Estimator in Generalized Estimating Equations with Correlated Binary Outcomes in Rare Event and Small Sample Settings

    PubMed Central

    Rogers, Paul; Stoner, Julie

    2016-01-01

    Regression models for correlated binary outcomes are commonly fit using a Generalized Estimating Equations (GEE) methodology. GEE uses the Liang and Zeger sandwich estimator to produce unbiased standard error estimators for regression coefficients in large sample settings, even when the covariance structure is misspecified. The sandwich estimator performs optimally in balanced designs when the number of participants is large and there are few repeated measurements. The sandwich estimator is not without drawbacks: its asymptotic properties do not hold in small sample settings, where it is biased downwards and underestimates the variances. In this project, a modified form of the sandwich estimator is proposed to correct this deficiency. The performance of this new sandwich estimator is compared to the traditional Liang and Zeger estimator as well as to the alternative forms proposed by Morel, by Pan, and by Mancl and DeRouen. The performance of each estimator was assessed with 95% coverage probabilities for the regression coefficient estimators, using simulated data under various combinations of sample sizes and outcome prevalence values with Independence (IND), Autoregressive (AR) and Compound Symmetry (CS) correlation structures. This research is motivated by investigations involving rare-event outcomes in aviation data. PMID:26998504
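The sandwich form itself is easy to sketch. The toy below computes a cluster-robust sandwich variance for ordinary least squares, with an optional small-sample correction in the spirit of Mancl and DeRouen (inflating each cluster's residuals by (I - H_g)^-1). It illustrates the bread-meat-bread structure only; it is not the GEE machinery or the modified estimator from the paper.

```python
import numpy as np

def sandwich_var(X, y, clusters, correct=True):
    """Cluster-robust sandwich variance for OLS coefficients, with an
    optional Mancl-DeRouen-style small-sample residual correction."""
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    bread = np.linalg.inv(X.T @ X)
    meat = np.zeros_like(bread)
    for g in np.unique(clusters):
        idx = clusters == g
        Xg, rg = X[idx], y[idx] - X[idx] @ beta
        if correct:  # inflate residuals by (I - H_g)^{-1}
            Hg = Xg @ bread @ Xg.T
            rg = np.linalg.solve(np.eye(len(rg)) - Hg, rg)
        s = Xg.T @ rg
        meat += np.outer(s, s)
    return beta, bread @ meat @ bread
```

In small samples the uncorrected meat term is built from residuals that are systematically too small, which is the downward bias the abstract describes; the correction inflates them back.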

  19. Energy and maximum norm estimates for nonlinear conservation laws

    NASA Technical Reports Server (NTRS)

    Olsson, Pelle; Oliger, Joseph

    1994-01-01

    We have devised a technique that makes it possible to obtain energy estimates for initial-boundary value problems for nonlinear conservation laws. The two major tools to achieve the energy estimates are a certain splitting of the flux vector derivative f(u)(sub x), and a structural hypothesis, referred to as a cone condition, on the flux vector f(u). These hypotheses are fulfilled for many equations that occur in practice, such as the Euler equations of gas dynamics. It should be noted that the energy estimates are obtained without any assumptions on the gradient of the solution u. The results extend to weak solutions that are obtained as point wise limits of vanishing viscosity solutions. As a byproduct we obtain explicit expressions for the entropy function and the entropy flux of symmetrizable systems of conservation laws. Under certain circumstances the proposed technique can be applied repeatedly so as to yield estimates in the maximum norm.

  20. Performance and separation occurrence of binary probit regression estimator using maximum likelihood method and Firth's approach under different sample sizes

    NASA Astrophysics Data System (ADS)

    Lusiana, Evellin Dewi

    2017-12-01

    The parameters of a binary probit regression model are commonly estimated by the maximum likelihood estimation (MLE) method. However, the MLE method has limitations when the binary data contain separation. Separation is the condition in which one or several independent variables exactly separate the categories of the binary response. It causes the MLE estimators to fail to converge, so that they cannot be used in modeling. One way to resolve separation is to use Firth's approach instead. This research has two aims: first, to compare the chance of separation occurring in binary probit regression between the MLE method and Firth's approach; second, to compare the performance of the binary probit regression estimators obtained by the MLE method and Firth's approach using the RMSE criterion. Both are examined by simulation under different sample sizes. The results showed that the chance of separation occurring with the MLE method for small sample sizes is higher than with Firth's approach. For larger sample sizes, the probability decreases and is nearly identical between the MLE method and Firth's approach. Meanwhile, Firth's estimators have smaller RMSE than the MLE's, especially for smaller sample sizes; for larger sample sizes, the RMSEs are not much different. This means that Firth's estimators outperform the MLE estimators.
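Separation and its effect on the likelihood are easy to demonstrate. In the toy probit example below (invented data, not from the study), a single covariate perfectly splits the two response categories, so the log-likelihood keeps increasing as the slope grows and no finite MLE exists.

```python
import numpy as np
from scipy.stats import norm

# Complete separation: every y = 1 has x > 0, every y = 0 has x < 0
x = np.array([-2.0, -1.0, -0.5, 0.5, 1.0, 2.0])
y = np.array([0, 0, 0, 1, 1, 1])

def probit_loglik(beta):
    z = beta * x
    # P(y=1) = Phi(z), P(y=0) = Phi(-z); logcdf avoids underflow
    return norm.logcdf(z[y == 1]).sum() + norm.logcdf(-z[y == 0]).sum()

lls = [probit_loglik(b) for b in (1.0, 5.0, 10.0, 50.0)]
# lls increases monotonically toward 0: the slope estimate diverges
```

Firth's approach penalizes the likelihood (via the Jeffreys prior), which pulls the maximizer back to a finite value even under separation.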

  1. Entropy-based adaptive attitude estimation

    NASA Astrophysics Data System (ADS)

    Kiani, Maryam; Barzegar, Aylin; Pourtakdoust, Seid H.

    2018-03-01

    Gaussian approximation filters have increasingly been developed to enhance the accuracy of attitude estimation in space missions. The effective employment of these algorithms demands accurate knowledge of system dynamics and measurement models, as well as their noise characteristics, which are usually unavailable or unreliable. An innovation-based adaptive filtering approach has been adopted as a solution to this problem; however, it exhibits two major challenges, namely appropriate window size selection and guaranteeing positive definiteness of the estimated noise covariance matrices. The current work presents two novel techniques based on relative entropy and confidence level concepts in order to address these drawbacks. The proposed adaptation techniques are applied to two nonlinear state estimation algorithms, the extended Kalman filter and the cubature Kalman filter, for attitude estimation of a low-Earth-orbit satellite equipped with three-axis magnetometers and Sun sensors. The effectiveness of the proposed adaptation scheme is demonstrated by means of a comprehensive sensitivity analysis on the system and environmental parameters, using extensive independent Monte Carlo simulations.

  2. Temporal variability patterns in solar radiation estimations

    NASA Astrophysics Data System (ADS)

    Vindel, José M.; Navarro, Ana A.; Valenzuela, Rita X.; Zarzalejo, Luis F.

    2016-06-01

    In this work, solar radiation estimations obtained from a satellite and from a numerical weather prediction model over mainland Spain have been compared. Similar comparisons have been carried out before, but the methodology used here is different: the temporal variability of both sources of estimation has been compared with the annual evolution of the radiation associated with the different climate zones under study. The methodology is based on obtaining behavior patterns, using a Principal Component Analysis, that follow the annual evolution of solar radiation estimations. Indeed, the degree of adjustment to these patterns at each point (assessed from maps of correlation) may be associated with the annual radiation variation (assessed from the interquartile range), which is associated, in turn, with different climate zones. In addition, the goodness of each estimation source has been assessed by comparing it with ground measurements from pyranometers. For the study, radiation data from the Satellite Application Facilities and data from the reanalysis carried out by the European Centre for Medium-Range Weather Forecasts have been used.
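The pattern-extraction idea can be sketched in a few lines: run a PCA (here via SVD) on a site-by-month radiation matrix and correlate each site's series with the leading temporal pattern. The data below are synthetic stand-ins, not the satellite or reanalysis products used in the study.

```python
import numpy as np

rng = np.random.default_rng(3)
months = np.arange(12)
seasonal = np.sin(2 * np.pi * (months - 3) / 12)     # annual radiation cycle
# 50 "sites": each follows the cycle with its own amplitude plus noise
amp = rng.uniform(0.5, 2.0, size=(50, 1))
data = amp * seasonal + 0.1 * rng.standard_normal((50, 12))

centered = data - data.mean(axis=0)
U, s, Vt = np.linalg.svd(centered, full_matrices=False)
explained = s ** 2 / np.sum(s ** 2)      # variance explained per component
pattern = Vt[0]                          # leading temporal behavior pattern
# per-site adjustment to the pattern, analogous to the correlation maps
corr_map = np.array([np.corrcoef(row, pattern)[0, 1] for row in data])
```

Sites whose series correlate strongly with the leading pattern play the role of the high-correlation regions in the paper's maps.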

  3. Covariance Matrix Estimation for Massive MIMO

    NASA Astrophysics Data System (ADS)

    Upadhya, Karthik; Vorobyov, Sergiy A.

    2018-04-01

    We propose a novel pilot structure for covariance matrix estimation in massive multiple-input multiple-output (MIMO) systems in which each user transmits two pilot sequences, with the second pilot sequence multiplied by a random phase-shift. The covariance matrix of a particular user is obtained by computing the sample cross-correlation of the channel estimates obtained from the two pilot sequences. This approach relaxes the requirement that all the users transmit their uplink pilots over the same set of symbols. We derive expressions for the achievable rate and the mean-squared error of the covariance matrix estimate when the proposed method is used with staggered pilots. The performance of the proposed method is compared with existing methods through simulations.
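The core trick, estimating the covariance from the cross-correlation of two independently noisy channel estimates so that the noise contribution averages out, can be sketched as follows. The channel and noise statistics are synthetic assumptions, not the paper's staggered-pilot model.

```python
import numpy as np

rng = np.random.default_rng(1)
M, N = 8, 5000                      # antennas, independent channel snapshots
R_true = np.eye(M) + 0.5 * np.ones((M, M))      # assumed channel covariance
L = np.linalg.cholesky(R_true)

def cn(shape):                      # circularly-symmetric complex Gaussian
    return (rng.standard_normal(shape) + 1j * rng.standard_normal(shape)) / np.sqrt(2)

h = L @ cn((M, N))                  # true channels
h1 = h + 0.3 * cn((M, N))           # estimate from pilot sequence 1
h2 = h + 0.3 * cn((M, N))           # estimate from pilot sequence 2

# Sample cross-correlation: the two noise terms are independent and drop out
R_hat = h1 @ h2.conj().T / N
R_hat = (R_hat + R_hat.conj().T) / 2            # enforce Hermitian symmetry
```

A naive estimate built from a single pilot, `h1 @ h1.conj().T / N`, would instead be biased upward by the noise covariance on its diagonal.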

  4. Comparison of two independent systematic reviews of trials of recombinant human bone morphogenetic protein-2 (rhBMP-2): the Yale Open Data Access Medtronic Project.

    PubMed

    Low, Jeffrey; Ross, Joseph S; Ritchie, Jessica D; Gross, Cary P; Lehman, Richard; Lin, Haiqun; Fu, Rongwei; Stewart, Lesley A; Krumholz, Harlan M

    2017-02-15

    It is uncertain whether the replication of systematic reviews, particularly those with the same objectives and resources, would employ similar methods and/or arrive at identical findings. We compared the results and conclusions of two concurrent systematic reviews undertaken by two independent research teams provided with the same objectives, resources, and individual participant-level data. Two centers in the USA and UK were each provided with participant-level data on 17 multi-site clinical trials of recombinant human bone morphogenetic protein-2 (rhBMP-2). The teams were blinded to each other's methods and findings until after publication. We conducted a retrospective structured comparison of the results of the two systematic reviews. The main outcome measures included (1) trial inclusion criteria; (2) statistical methods; (3) summary efficacy and risk estimates; and (4) conclusions. The two research teams' meta-analysis inclusion criteria were broadly similar but differed slightly in trial inclusion and research methodology. They obtained similar results in summary estimates of most clinical outcomes and adverse events. Center A incorporated all trials into summary estimates of efficacy and harms, while Center B concentrated on analyses stratified by surgical approach. Center A found a statistically significant, but small, benefit whereas Center B reported no advantage. In the analysis of harms, neither showed an increased cancer risk at 48 months, although Center B reported a significant increase at 24 months. Conclusions reflected these differences in summary estimates of benefit balanced with a small but potentially important risk of harm. Two independent groups given the same research objectives, data, resources, funding, and time produced broad general agreement but differed in several areas. These differences, the importance of which is debatable, indicate the value of the availability of data to allow for more than a single approach and a single

  5. Novel angle estimation for bistatic MIMO radar using an improved MUSIC

    NASA Astrophysics Data System (ADS)

    Li, Jianfeng; Zhang, Xiaofei; Chen, Han

    2014-09-01

    In this article, we study the problem of angle estimation for bistatic multiple-input multiple-output (MIMO) radar and propose an improved multiple signal classification (MUSIC) algorithm for joint direction of departure (DOD) and direction of arrival (DOA) estimation. The proposed algorithm derives initial angle estimates from the signal subspace and uses local one-dimensional peak searches to achieve the joint estimation of DOD and DOA. The angle estimation performance of the proposed algorithm is better than that of the estimation of signal parameters via rotational invariance techniques (ESPRIT) algorithm, and is almost the same as that of two-dimensional MUSIC. Furthermore, the proposed algorithm is suitable for irregular array geometries, obtains automatically paired DOD and DOA estimates, and avoids two-dimensional peak searching. The simulation results verify the effectiveness and improvement of the algorithm.
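A plain one-dimensional MUSIC pseudospectrum (the baseline the improved algorithm builds on) can be sketched for a half-wavelength uniform linear array. The scenario below is synthetic and single-array, so it illustrates only the DOA half of the joint DOD/DOA problem.

```python
import numpy as np

def music_spectrum(X, n_src, grid_deg):
    """MUSIC pseudospectrum for a half-wavelength ULA (illustrative sketch)."""
    M = X.shape[0]
    R = X @ X.conj().T / X.shape[1]          # sample covariance
    _, V = np.linalg.eigh(R)                  # eigenvalues ascending
    En = V[:, :M - n_src]                     # noise subspace
    a = np.exp(1j * np.pi * np.outer(np.arange(M), np.sin(np.deg2rad(grid_deg))))
    return 1.0 / np.linalg.norm(En.conj().T @ a, axis=0) ** 2

rng = np.random.default_rng(6)
M, T = 8, 200
angles = np.array([-20.0, 30.0])              # two sources (assumed scenario)
A = np.exp(1j * np.pi * np.outer(np.arange(M), np.sin(np.deg2rad(angles))))
S = (rng.standard_normal((2, T)) + 1j * rng.standard_normal((2, T))) / np.sqrt(2)
X = A @ S + 0.1 * (rng.standard_normal((M, T)) + 1j * rng.standard_normal((M, T)))
grid = np.arange(-90.0, 90.0, 0.5)
spec = music_spectrum(X, 2, grid)             # peaks near -20° and 30°
```

The improved algorithm in the paper replaces the exhaustive search over this grid with local one-dimensional searches started from subspace-derived initial estimates, and pairs the DOD/DOA results automatically.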

  6. Deep Independence Network Analysis of Structural Brain Imaging: Application to Schizophrenia

    PubMed Central

    Castro, Eduardo; Hjelm, R. Devon; Plis, Sergey M.; Dinh, Laurent; Turner, Jessica A.; Calhoun, Vince D.

    2016-01-01

    Linear independent component analysis (ICA) is a standard signal processing technique that has been extensively used on neuroimaging data to detect brain networks with coherent brain activity (functional MRI) or covarying structural patterns (structural MRI). However, its formulation assumes that the measured brain signals are generated by a linear mixture of the underlying brain networks, and this assumption limits its ability to detect the inherent nonlinear nature of brain interactions. In this paper, we apply nonlinear independent component estimation (NICE) to structural MRI data to detect abnormal patterns of gray matter concentration in schizophrenia patients. For this biomedical application, we further addressed the issue of model regularization of nonlinear ICA by performing dimensionality reduction prior to NICE, together with an appropriate control of the complexity of the model and the use of a proper approximation of the probability distribution functions of the estimated components. We show that our results are consistent with previous findings in the literature, but we also demonstrate that the incorporation of nonlinear associations in the data enables the detection of spatial patterns that are not identified by linear ICA. Specifically, we show networks including the basal ganglia, cerebellum and thalamus that show significant differences in patients versus controls, some of which exhibit distinct nonlinear patterns. PMID:26891483

  7. Food provisioning and parental status in songbirds: can occupancy models be used to estimate nesting performance?

    PubMed

    Corbani, Aude Catherine; Hachey, Marie-Hélène; Desrochers, André

    2014-01-01

    Indirect methods to estimate parental status, such as the observation of parental provisioning, have been problematic due to potential biases associated with imperfect detection. We developed a method to evaluate parental status based on a novel combination of parental provisioning observations and hierarchical modeling. In the summers of 2009 to 2011, we surveyed 393 sites, each on three to four consecutive days at Forêt Montmorency, Québec, Canada. We assessed parental status of 2331 adult songbirds based on parental food provisioning. To account for imperfect detection of parental status, we applied MacKenzie et al.'s (2002) two-state hierarchical model to obtain unbiased estimates of the proportion of sites with successfully nesting birds, and the proportion of adults with offspring. To obtain an independent evaluation of detection probability, we monitored 16 active nests in 2010 and conducted parental provisioning observations away from them. The probability of detecting food provisioning was 0.31 when using nest monitoring, a value within the 0.11 to 0.38 range that was estimated by two-state models. The proportion of adults or sites with broods approached 0.90 and varied depending on date during the sampling season and year, exemplifying the role of eastern boreal forests as highly productive nesting grounds for songbirds. This study offers a simple and effective sampling design for studying avian reproductive performance that could be implemented in national surveys such as breeding bird atlases.
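The two-state model's likelihood is simple enough to sketch. Under the MacKenzie et al. (2002) single-season model with constant occupancy probability psi and per-visit detection probability p, a site with d detections in K visits contributes psi·p^d·(1-p)^(K-d), plus (1-psi) when nothing was ever detected. The code below fits these two parameters to simulated detection histories; it is a generic illustration, not the authors' analysis.

```python
import numpy as np
from scipy.optimize import minimize

def occupancy_mle(histories):
    """MLE of (psi, p) for a single-season two-state occupancy model."""
    h = np.asarray(histories, dtype=float)    # sites x visits, 0/1 detections
    d = h.sum(axis=1)
    K = h.shape[1]
    def nll(theta):
        psi, p = 1.0 / (1.0 + np.exp(-theta))       # logit scale -> (0, 1)
        lik = psi * p**d * (1 - p)**(K - d) + (d == 0) * (1 - psi)
        return -np.sum(np.log(lik))
    res = minimize(nll, x0=[0.0, 0.0], method="Nelder-Mead")
    return 1.0 / (1.0 + np.exp(-res.x))             # (psi_hat, p_hat)
```

The all-zero histories are what make the model identifiable: they mix truly unoccupied sites with occupied-but-undetected ones, and the repeated visits let the likelihood separate the two.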

  8. Food Provisioning and Parental Status in Songbirds: Can Occupancy Models Be Used to Estimate Nesting Performance?

    PubMed Central

    Corbani, Aude Catherine; Hachey, Marie-Hélène; Desrochers, André

    2014-01-01

    Indirect methods to estimate parental status, such as the observation of parental provisioning, have been problematic due to potential biases associated with imperfect detection. We developed a method to evaluate parental status based on a novel combination of parental provisioning observations and hierarchical modeling. In the summers of 2009 to 2011, we surveyed 393 sites, each on three to four consecutive days at Forêt Montmorency, Québec, Canada. We assessed parental status of 2331 adult songbirds based on parental food provisioning. To account for imperfect detection of parental status, we applied MacKenzie et al.'s (2002) two-state hierarchical model to obtain unbiased estimates of the proportion of sites with successfully nesting birds, and the proportion of adults with offspring. To obtain an independent evaluation of detection probability, we monitored 16 active nests in 2010 and conducted parental provisioning observations away from them. The probability of detecting food provisioning was 0.31 when using nest monitoring, a value within the 0.11 to 0.38 range that was estimated by two-state models. The proportion of adults or sites with broods approached 0.90 and varied depending on date during the sampling season and year, exemplifying the role of eastern boreal forests as highly productive nesting grounds for songbirds. This study offers a simple and effective sampling design for studying avian reproductive performance that could be implemented in national surveys such as breeding bird atlases. PMID:24999969

  9. Using Multitemporal Remote Sensing Imagery and Inundation Measures to Improve Land Change Estimates in Coastal Wetlands

    USGS Publications Warehouse

    Allen, Y.C.; Couvillion, B.R.; Barras, J.A.

    2012-01-01

    Remote sensing imagery can be an invaluable resource to quantify land change in coastal wetlands. Obtaining an accurate measure of land change can, however, be complicated by differences in fluvial and tidal inundation experienced when the imagery is captured. This study classified Landsat imagery from two wetland areas in coastal Louisiana from 1983 to 2010 into categories of land and water. Tide height, river level, and date were used as independent variables in a multiple regression model to predict land area in the Wax Lake Delta (WLD) and to compare those estimates with an adjacent marsh area lacking direct fluvial inputs. Coefficients of determination from regressions using both measures of water level along with date as predictor variables of land extent in the WLD were higher than those obtained using the current methodology, which only uses date to predict land change. Land change trend estimates were also improved when the data were divided by time period. Water-level-corrected land gain in the WLD from 1983 to 2010 was 1 km² year⁻¹, while rates in the adjacent marsh remained roughly constant. This approach of isolating environmental variability due to changing water levels improves estimates of actual land change in a dynamic system, so that other processes that may control delta development, such as hurricanes, floods, and sediment delivery, may be further investigated. © 2011 Coastal and Estuarine Research Federation (outside the USA).
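The water-level-corrected trend idea reduces to a multiple regression of classified land area on date plus the two water-level covariates. The sketch below uses fabricated numbers; the coefficients and noise level are assumptions for illustration, not the study's values.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 120
date = np.linspace(0, 27, n)                 # years since 1983
tide = rng.normal(0.0, 0.3, n)               # tide height anomaly, m
river = rng.normal(0.0, 0.5, n)              # river stage anomaly, m
# land area grows with time but reads low when imagery catches high water
land = 50 + 1.0 * date - 4.0 * tide - 2.0 * river + rng.normal(0.0, 0.8, n)

X = np.column_stack([np.ones(n), date, tide, river])
coef, *_ = np.linalg.lstsq(X, land, rcond=None)
trend_km2_per_yr = coef[1]                   # water-level-corrected land trend
```

Regressing on date alone would fold the inundation scatter into the residuals, inflating the trend uncertainty, which is the improvement in R² the abstract reports.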

  10. Uptake and storage of anthropogenic CO2 in the pacific ocean estimated using two modeling approaches

    NASA Astrophysics Data System (ADS)

    Li, Yangchun; Xu, Yongfu

    2012-07-01

    A basin-wide ocean general circulation model (OGCM) of the Pacific Ocean is employed to estimate the uptake and storage of anthropogenic CO2 using two different simulation approaches. One simulation (named BIO) makes use of a carbon model with biological processes and full thermodynamic equations to calculate the surface water partial pressure of CO2, whereas the other simulation (named PTB) makes use of a perturbation approach to calculate the surface water partial pressure of anthropogenic CO2. The results from the two simulations agree well with observation-based estimates in the most important aspects of the vertical distribution as well as the total inventory of anthropogenic carbon. The storage of anthropogenic carbon from BIO is closer to the observation-based estimate than that from PTB. The Revelle factor in 1994 obtained in BIO is generally larger than that obtained in PTB in the whole Pacific, except for the subtropical South Pacific. This, to a large extent, leads to the difference in the surface anthropogenic CO2 concentration between the two runs. The relative difference in the annual uptake between the two runs is almost constant during the integration after 1850. This is probably not caused by dissolved inorganic carbon (DIC), but rather by a factor independent of time. In both runs, the rate of change in anthropogenic CO2 fluxes with time is consistent with the rate of change in the growth rate of the atmospheric partial pressure of CO2.

  11. Probability Distribution Extraction from TEC Estimates based on Kernel Density Estimation

    NASA Astrophysics Data System (ADS)

    Demir, Uygar; Toker, Cenk; Çenet, Duygu

    2016-07-01

    Statistical analysis of the ionosphere, specifically of the Total Electron Content (TEC), may reveal important information about its temporal and spatial characteristics. One of the core metrics that express the statistical properties of a stochastic process is its Probability Density Function (pdf). Furthermore, statistical parameters such as the mean, variance and kurtosis, which can be derived from the pdf, may provide information about the spatial uniformity or clustering of the electron content. For example, the variance differentiates between a quiet ionosphere and a disturbed one, whereas the kurtosis differentiates between a geomagnetic storm and an earthquake. Therefore, valuable information about the state of the ionosphere (and the natural phenomena that cause the disturbance) can be obtained by looking at the statistical parameters. In the literature, there are publications which try to fit the histogram of TEC estimates to some well-known pdfs such as the Gaussian, the Exponential, etc. However, constraining a histogram to fit a function with a fixed shape will increase the estimation error, and all the information extracted from such a pdf will continue to contain this error. With such techniques, it is highly likely to observe artificial characteristics in the estimated pdf which are not present in the original data. In the present study, we use the Kernel Density Estimation (KDE) technique to estimate the pdf of the TEC. KDE is a non-parametric approach which does not impose a specific form on the TEC. As a result, better pdf estimates that almost perfectly fit the observed TEC values can be obtained as compared to the techniques mentioned above. KDE is particularly good at representing the tail probabilities and outliers. We also calculate the mean, variance and kurtosis of the measured TEC values. The technique is applied to the ionosphere over Turkey, where the TEC values are estimated from the GNSS measurements from the TNPGN-Active (Turkish National Permanent
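The KDE step itself is a one-liner with SciPy. The bimodal sample below is a synthetic stand-in for TEC values, used only to show the non-parametric pdf estimate and the derived moments.

```python
import numpy as np
from scipy.stats import gaussian_kde, kurtosis

rng = np.random.default_rng(7)
# synthetic stand-in for TEC values: a quiet mode plus a disturbed mode
tec = np.concatenate([rng.normal(20.0, 2.0, 800), rng.normal(35.0, 4.0, 200)])

kde = gaussian_kde(tec)                       # non-parametric pdf estimate
grid = np.linspace(tec.min() - 5, tec.max() + 5, 512)
pdf = kde(grid)

mean, var = tec.mean(), tec.var()
kurt = kurtosis(tec)                          # excess kurtosis
```

Unlike a Gaussian or exponential fit, the KDE keeps both modes, which is exactly the kind of structure a fixed-shape histogram fit would smear into artificial features.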

  12. SURE Estimates for a Heteroscedastic Hierarchical Model

    PubMed Central

    Xie, Xianchao; Kou, S. C.; Brown, Lawrence D.

    2014-01-01

    Hierarchical models are extensively studied and widely used in statistics and many other scientific areas. They provide an effective tool for combining information from similar resources and achieving partial pooling of inference. Since the seminal work by James and Stein (1961) and Stein (1962), shrinkage estimation has become one major focus for hierarchical models. For the homoscedastic normal model, it is well known that shrinkage estimators, especially the James-Stein estimator, have good risk properties. The heteroscedastic model, though more appropriate for practical applications, is less well studied, and it is unclear what types of shrinkage estimators are superior in terms of the risk. We propose in this paper a class of shrinkage estimators based on Stein’s unbiased estimate of risk (SURE). We study asymptotic properties of various common estimators as the number of means to be estimated grows (p → ∞). We establish the asymptotic optimality property for the SURE estimators. We then extend our construction to create a class of semi-parametric shrinkage estimators and establish corresponding asymptotic optimality results. We emphasize that though the form of our SURE estimators is partially obtained through a normal model at the sampling level, their optimality properties do not heavily depend on such distributional assumptions. We apply the methods to two real data sets and obtain encouraging results. PMID:25301976
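The flavor of a SURE-tuned shrinkage estimator can be sketched for the heteroscedastic normal means model X_i ~ N(theta_i, A_i) with known variances: shrink each X_i toward a common location by the weight A_i/(lambda + A_i) and pick lambda to minimize Stein's unbiased risk estimate. This is a simplified sketch of the idea with the location fixed at the grand mean, not the paper's full class of estimators.

```python
import numpy as np

def sure_shrink(X, A, mu=None, lam_grid=None):
    """SURE-tuned linear shrinkage for X_i ~ N(theta_i, A_i), known A_i."""
    if mu is None:
        mu = X.mean()                          # common shrinkage target
    if lam_grid is None:
        lam_grid = np.geomspace(1e-3, 1e3, 200)
    best = (np.inf, lam_grid[0])
    for lam in lam_grid:
        w = A / (lam + A)                      # shrinkage weights toward mu
        # Stein's unbiased estimate of the risk of (1-w)X + w*mu
        sure = np.sum(A + w**2 * (X - mu)**2 - 2 * A * w)
        if sure < best[0]:
            best = (sure, lam)
    lam = best[1]
    w = A / (lam + A)
    return (1 - w) * X + w * mu
```

Because SURE is an unbiased estimate of the true risk, minimizing it over lambda mimics an oracle choice as the number of means grows, which is the asymptotic optimality the abstract refers to.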

  13. 12 CFR 611.1250 - Preliminary exit fee estimate.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... independently audited by a qualified public accountant. We may, in our discretion, waive the audit requirement... termination. Related expenses include, but are not limited to, legal services, accounting services, tax... institution and its stockholders. (ii) Subtract the dollar amount of estimated current and deferred tax...

  14. A novel SURE-based criterion for parametric PSF estimation.

    PubMed

    Xue, Feng; Blu, Thierry

    2015-02-01

    We propose an unbiased estimate of a filtered version of the mean squared error, the blur-SURE (Stein's unbiased risk estimate), as a novel criterion for estimating an unknown point spread function (PSF) from the degraded image only. The PSF is obtained by minimizing this new objective functional over a family of Wiener processings. Based on this estimated blur kernel, we then perform nonblind deconvolution using our recently developed algorithm. The SURE-based framework is exemplified with a number of parametric PSFs, involving a scaling factor that controls the blur size. A typical example of such a parametrization is the Gaussian kernel. The experimental results demonstrate that minimizing the blur-SURE yields highly accurate estimates of the PSF parameters, which also result in a restoration quality very similar to that obtained with the exact PSF when plugged into our recent multi-Wiener SURE-LET deconvolution algorithm. The highly competitive results obtained outline the great potential of developing more powerful blind deconvolution algorithms based on SURE-like estimates.

  15. Estimating population diversity with CatchAll

    PubMed Central

    Bunge, John; Woodard, Linda; Böhning, Dankmar; Foster, James A.; Connolly, Sean; Allen, Heather K.

    2012-01-01

    Motivation: The massive data produced by next-generation sequencing require advanced statistical tools. We address estimating the total diversity or species richness in a population. To date, only relatively simple methods have been implemented in available software. There is a need for software employing modern, computationally intensive statistical analyses including error, goodness-of-fit and robustness assessments. Results: We present CatchAll, a fast, easy-to-use, platform-independent program that computes maximum likelihood estimates for finite-mixture models, weighted linear regression-based analyses and coverage-based non-parametric methods, along with outlier diagnostics. Given sample ‘frequency count’ data, CatchAll computes 12 different diversity estimates and applies a model-selection algorithm. CatchAll also derives discounted diversity estimates to adjust for possibly uncertain low-frequency counts. It is accompanied by an Excel-based graphics program. Availability: Free executable downloads for Linux, Windows and Mac OS, with manual and source code, at www.northeastern.edu/catchall. Contact: jab18@cornell.edu PMID:22333246

  16. Time Series Analysis of Remote Sensing Observations for Citrus Crop Growth Stage and Evapotranspiration Estimation

    NASA Astrophysics Data System (ADS)

    Sawant, S. A.; Chakraborty, M.; Suradhaniwar, S.; Adinarayana, J.; Durbha, S. S.

    2016-06-01

    Satellite-based earth observation (EO) platforms have proved capable of spatio-temporally monitoring changes on the earth's surface. Long-term satellite missions have provided a huge repository of optical remote sensing datasets, and the United States Geological Survey (USGS) Landsat program is one of the oldest sources of optical EO datasets. This historical and near-real-time EO archive is a rich source of information for understanding seasonal changes in horticultural crops. Citrus (Mandarin / Nagpur Orange) is one of the major horticultural crops cultivated in central India. Erratic rainfall and dependency on groundwater for irrigation have a wide impact on citrus crop yield. Wide variations in temperature and relative humidity are also reported, causing early fruit onset and an increase in crop water requirement. Therefore, there is a need to study crop growth stages and crop evapotranspiration at spatio-temporal scale for managing scarce resources. In this study, an attempt has been made to understand citrus crop growth stages using Normalized Difference Vegetation Index (NDVI) time series data obtained from the Landsat archives (http://earthexplorer.usgs.gov/). A total of 388 Landsat 4, 5, 7 and 8 scenes (from 1990 to Aug. 2015) for Worldwide Reference System (WRS) 2, path 145 and row 45 were selected to understand seasonal variations in citrus crop growth. Considering the 30-meter spatial resolution of Landsat, orchards with crop cover larger than 2 hectares were selected to obtain homogeneous pixels. To account for changes in wavelength bandwidth (radiometric resolution) across the Landsat sensors (i.e., 4, 5, 7 and 8), NDVI was chosen to obtain a continuous, sensor-independent time series. The obtained crop growth stage information has been used to estimate the citrus basal crop coefficient (Kcb). Satellite-based Kcb estimates were used with a proximal agrometeorological sensing system

  17. Value of the distant future: Model-independent results

    NASA Astrophysics Data System (ADS)

    Katz, Yuri A.

    2017-01-01

    This paper shows that a model-independent account of correlations in an interest rate process or a log-consumption growth process leads to declining long-term tails of discount curves. Under the assumption of an exponentially decaying memory in fluctuations of risk-free real interest rates, I derive the analytical expression for an apt value of the long-run discount factor and provide a detailed comparison of the obtained result with the outcome of the benchmark risk-free interest rate models. Utilizing the standard consumption-based model with an isoelastic power utility of the representative economic agent, I derive the non-Markovian generalization of the Ramsey discounting formula. The obtained analytical results, which allow simple calibration, may augment the rigorous cost-benefit and regulatory impact analysis of long-term environmental and infrastructure projects.

  18. Topics in global convergence of density estimates

    NASA Technical Reports Server (NTRS)

    Devroye, L.

    1982-01-01

    The problem of estimating a density f on R^d from a sample X(1),...,X(n) of independent identically distributed random vectors is critically examined, and some recent results in the field are reviewed. The following statements are qualified: (1) for any sequence of density estimates f(n), an arbitrarily slow rate of convergence to 0 is possible for E(∫|f(n)-f|); (2) in theoretical comparisons of density estimates, ∫|f(n)-f| should be used and not ∫|f(n)-f|^p, p > 1; and (3) for most reasonable nonparametric density estimates, either ∫|f(n)-f| converges (and then the convergence is in the strongest possible sense for all f), or it does not converge (even in the weakest possible sense for a single f). There is no intermediate situation.

  19. An iterative procedure for obtaining maximum-likelihood estimates of the parameters for a mixture of normal distributions

    NASA Technical Reports Server (NTRS)

    Peters, B. C., Jr.; Walker, H. F.

    1975-01-01

    A general iterative procedure is given for determining the consistent maximum likelihood estimates of the parameters of a mixture of normal distributions. In addition, convergence to a local maximum of the log-likelihood function, Newton's method, a method of scoring, and modifications of these procedures are discussed.
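
    A minimal EM iteration for a two-component univariate normal mixture, as an illustrative sketch rather than the paper's exact procedure or its Newton/scoring variants, looks like:

```python
import numpy as np

def em_gmm(x, n_iter=100):
    """EM for a 2-component univariate normal mixture.
    Returns weights, means, variances and the log-likelihood trace."""
    # crude initialisation from the data range
    mu = np.array([x.min(), x.max()], float)
    var = np.array([x.var(), x.var()])
    pi = np.array([0.5, 0.5])
    ll_trace = []
    for _ in range(n_iter):
        # E-step: posterior responsibilities of each component
        dens = (pi / np.sqrt(2 * np.pi * var)
                * np.exp(-0.5 * (x[:, None] - mu) ** 2 / var))
        total = dens.sum(axis=1)
        ll_trace.append(np.log(total).sum())
        r = dens / total[:, None]
        # M-step: responsibility-weighted updates
        nk = r.sum(axis=0)
        pi = nk / len(x)
        mu = (r * x[:, None]).sum(axis=0) / nk
        var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / nk
    return pi, mu, var, np.array(ll_trace)

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(-2, 1, 500), rng.normal(3, 1, 500)])
pi, mu, var, ll = em_gmm(x)
```

    The monotone non-decrease of the log-likelihood trace is the defining property of EM; convergence is only guaranteed to a local maximum, which is why the record also discusses Newton-type alternatives.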

  20. Finite-size analysis of continuous-variable measurement-device-independent quantum key distribution

    NASA Astrophysics Data System (ADS)

    Zhang, Xueying; Zhang, Yichen; Zhao, Yijia; Wang, Xiangyu; Yu, Song; Guo, Hong

    2017-10-01

    We study the impact of the finite-size effect on the continuous-variable measurement-device-independent quantum key distribution (CV-MDI QKD) protocol, mainly considering the finite-size effect on the parameter estimation procedure. The central-limit theorem and maximum likelihood estimation theorem are used to estimate the parameters. We also analyze the relationship between the number of exchanged signals and the optimal modulation variance in the protocol. It is proved that when Charlie's position is close to Bob, the CV-MDI QKD protocol has the farthest transmission distance in the finite-size scenario. Finally, we discuss the impact of finite-size effects related to the practical detection in the CV-MDI QKD protocol. The overall results indicate that the finite-size effect has a great influence on the secret-key rate of the CV-MDI QKD protocol and should not be ignored.

  1. Exposure time independent summary statistics for assessment of drug dependent cell line growth inhibition.

    PubMed

    Falgreen, Steffen; Laursen, Maria Bach; Bødker, Julie Støve; Kjeldsen, Malene Krag; Schmitz, Alexander; Nyegaard, Mette; Johnsen, Hans Erik; Dybkær, Karen; Bøgsted, Martin

    2014-06-05

    In vitro generated dose-response curves of human cancer cell lines are widely used to develop new therapeutics. The curves are summarised by simplified statistics that ignore the conventionally used dose-response curves' dependency on drug exposure time and growth kinetics. This may lead to suboptimal exploitation of data and biased conclusions on the potential of the drug in question. Therefore we set out to improve the dose-response assessments by eliminating the impact of time dependency. First, a mathematical model for drug induced cell growth inhibition was formulated and used to derive novel dose-response curves and improved summary statistics that are independent of time under the proposed model. Next, a statistical analysis workflow for estimating the improved statistics was suggested consisting of 1) nonlinear regression models for estimation of cell counts and doubling times, 2) isotonic regression for modelling the suggested dose-response curves, and 3) resampling based method for assessing variation of the novel summary statistics. We document that conventionally used summary statistics for dose-response experiments depend on time so that fast growing cell lines compared to slowly growing ones are considered overly sensitive. The adequacy of the mathematical model is tested for doxorubicin and found to fit real data to an acceptable degree. Dose-response data from the NCI60 drug screen were used to illustrate the time dependency and demonstrate an adjustment correcting for it. The applicability of the workflow was illustrated by simulation and application on a doxorubicin growth inhibition screen. The simulations show that under the proposed mathematical model the suggested statistical workflow results in unbiased estimates of the time independent summary statistics. Variance estimates of the novel summary statistics are used to conclude that the doxorubicin screen covers a significantly diverse range of responses ensuring it is useful for biological

  2. Exposure time independent summary statistics for assessment of drug dependent cell line growth inhibition

    PubMed Central

    2014-01-01

    Background In vitro generated dose-response curves of human cancer cell lines are widely used to develop new therapeutics. The curves are summarised by simplified statistics that ignore the conventionally used dose-response curves’ dependency on drug exposure time and growth kinetics. This may lead to suboptimal exploitation of data and biased conclusions on the potential of the drug in question. Therefore we set out to improve the dose-response assessments by eliminating the impact of time dependency. Results First, a mathematical model for drug induced cell growth inhibition was formulated and used to derive novel dose-response curves and improved summary statistics that are independent of time under the proposed model. Next, a statistical analysis workflow for estimating the improved statistics was suggested consisting of 1) nonlinear regression models for estimation of cell counts and doubling times, 2) isotonic regression for modelling the suggested dose-response curves, and 3) resampling based method for assessing variation of the novel summary statistics. We document that conventionally used summary statistics for dose-response experiments depend on time so that fast growing cell lines compared to slowly growing ones are considered overly sensitive. The adequacy of the mathematical model is tested for doxorubicin and found to fit real data to an acceptable degree. Dose-response data from the NCI60 drug screen were used to illustrate the time dependency and demonstrate an adjustment correcting for it. The applicability of the workflow was illustrated by simulation and application on a doxorubicin growth inhibition screen. The simulations show that under the proposed mathematical model the suggested statistical workflow results in unbiased estimates of the time independent summary statistics. Variance estimates of the novel summary statistics are used to conclude that the doxorubicin screen covers a significantly diverse range of responses ensuring it is

  3. Effects of sampling strategy, detection probability, and independence of counts on the use of point counts

    USGS Publications Warehouse

    Pendleton, G.W.; Ralph, C. John; Sauer, John R.; Droege, Sam

    1995-01-01

    Many factors affect the use of point counts for monitoring bird populations, including sampling strategies, variation in detection rates, and independence of sample points. The most commonly used sampling plans are stratified sampling, cluster sampling, and systematic sampling. Each of these might be most useful for different objectives or field situations. Variation in detection probabilities and lack of independence among sample points can bias estimates and measures of precision. All of these factors should be considered when using point count methods.

  4. Independent control of differently-polarized waves using anisotropic gradient-index metamaterials

    PubMed Central

    Ma, Hui Feng; Wang, Gui Zhen; Jiang, Wei Xiang; Cui, Tie Jun

    2014-01-01

    We propose a kind of anisotropic gradient-index (GRIN) metamaterials, which can be used to control differently-polarized waves independently. We show that two three-dimensional (3D) planar lenses made of such anisotropic GRIN metamaterials are able to make arbitrary beam deflections for the vertical (or horizontal) polarization but have no response to the horizontal (or vertical) polarization. Then the vertically- and horizontally-polarized waves are separated and controlled independently to deflect to arbitrarily different directions by designing the anisotropic GRIN planar lenses. We make experimental verifications of the lenses using such a special metamaterial, which has both electric and magnetic responses simultaneously to reach approximately equal permittivity and permeability. Hence excellent impedance matching is obtained between the GRIN planar lenses and the air. The measurement results demonstrate good performance on the independent controls of differently-polarized waves, as observed in the numerical simulations. PMID:25231412

  5. A channel estimation scheme for MIMO-OFDM systems

    NASA Astrophysics Data System (ADS)

    He, Chunlong; Tian, Chu; Li, Xingquan; Zhang, Ce; Zhang, Shiqi; Liu, Chaowen

    2017-08-01

    To address the trade-off between the performance of time-domain least squares (LS) channel estimation and its practical implementation complexity, a reduced-complexity pilot-based channel estimation method for multiple input multiple output-orthogonal frequency division multiplexing (MIMO-OFDM) is obtained. This approach transforms the MIMO-OFDM channel estimation problem into a simple single input single output-orthogonal frequency division multiplexing (SISO-OFDM) channel estimation problem, so no large matrix pseudo-inverse is needed, which greatly reduces the complexity of the algorithm. Simulation results show that the bit error rate (BER) performance of the obtained method with time-orthogonal training sequences and the linear minimum mean square error (LMMSE) criterion is better than that of the time-domain LS estimator and is nearly optimal.

  6. Utilization of accident databases and fuzzy sets to estimate frequency of HazMat transport accidents.

    PubMed

    Qiao, Yuanhua; Keren, Nir; Mannan, M Sam

    2009-08-15

    Risk assessment and management of transportation of hazardous materials (HazMat) require the estimation of accident frequency. This paper presents a methodology to estimate hazardous materials transportation accident frequency by utilizing publicly available databases and expert knowledge. The estimation process addresses route-dependent and route-independent variables. Negative binomial regression is applied to an analysis of the Department of Public Safety (DPS) accident database to derive basic accident frequency as a function of route-dependent variables, while the effects of route-independent variables are modeled by fuzzy logic. The integrated methodology provides the basis for an overall transportation risk analysis, which can be used later to develop a decision support system.
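
    As a hedged illustration of the first modelling step only, a negative binomial (NB2) regression for accident counts can be fit by maximizing its log-likelihood directly; the single covariate and data below are synthetic stand-ins, not the DPS accident database, and the fuzzy-logic stage is omitted:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import gammaln

def nb2_negloglik(params, X, y):
    """Negative log-likelihood of an NB2 regression with log link.
    params = (beta..., log_alpha); Var(y) = mu + alpha * mu**2."""
    beta, alpha = params[:-1], np.exp(params[-1])
    mu = np.exp(X @ beta)
    r = 1.0 / alpha                     # NB 'size' parameter
    ll = (gammaln(y + r) - gammaln(r) - gammaln(y + 1)
          + r * np.log(r / (r + mu)) + y * np.log(mu / (r + mu)))
    return -ll.sum()

rng = np.random.default_rng(0)
n = 2000
X = np.column_stack([np.ones(n), rng.normal(size=n)])  # intercept + one route variable
beta_true, alpha_true = np.array([0.5, 0.8]), 0.5
mu = np.exp(X @ beta_true)
r = 1.0 / alpha_true
y = rng.negative_binomial(r, r / (r + mu))             # overdispersed counts

res = minimize(nb2_negloglik, x0=np.zeros(3), args=(X, y), method="BFGS")
beta_hat = res.x[:2]
alpha_hat = np.exp(res.x[-1])
```

    The fitted coefficients recover the simulated route effect, and the estimated alpha captures the overdispersion that motivates using a negative binomial rather than a Poisson model for accident frequencies.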

  7. A fast Monte Carlo EM algorithm for estimation in latent class model analysis with an application to assess diagnostic accuracy for cervical neoplasia in women with AGC

    PubMed Central

    Kang, Le; Carter, Randy; Darcy, Kathleen; Kauderer, James; Liao, Shu-Yuan

    2013-01-01

    In this article we use a latent class model (LCM) with prevalence modeled as a function of covariates to assess diagnostic test accuracy in situations where the true disease status is not observed, but observations on three or more conditionally independent diagnostic tests are available. A fast Monte Carlo EM (MCEM) algorithm with binary (disease) diagnostic data is implemented to estimate parameters of interest; namely, sensitivity, specificity, and prevalence of the disease as a function of covariates. To obtain standard errors for confidence interval construction of estimated parameters, the missing information principle is applied to adjust information matrix estimates. We compare the adjusted information matrix based standard error estimates with the bootstrap standard error estimates, both obtained using the fast MCEM algorithm, through an extensive Monte Carlo study. Simulation demonstrates that the adjusted information matrix approach estimates the standard error similarly to the bootstrap methods under certain scenarios. The bootstrap percentile intervals have satisfactory coverage probabilities. We then apply the LCM analysis to a real data set of 122 subjects from a Gynecologic Oncology Group (GOG) study of significant cervical lesion (S-CL) diagnosis in women with atypical glandular cells of undetermined significance (AGC) to compare the diagnostic accuracy of a histology-based evaluation, a CA-IX biomarker-based test and a human papillomavirus (HPV) DNA test. PMID:24163493

  8. Motivated independence? Implicit party identity predicts political judgments among self-proclaimed Independents.

    PubMed

    Hawkins, Carlee Beth; Nosek, Brian A

    2012-11-01

    Reporting an Independent political identity does not guarantee the absence of partisanship. Independents demonstrated considerable variability in relative identification with Republicans versus Democrats as measured by an Implicit Association Test (IAT; M = 0.10, SD = 0.47). To test whether this variation predicted political judgment, participants read a newspaper article describing two competing welfare (Study 1) or special education (Study 2) policies. The authors manipulated which policy was proposed by which party. Among self-proclaimed Independents, those who were implicitly Democratic preferred the liberal welfare plan, and those who were implicitly Republican preferred the conservative welfare plan. Regardless of the policy details, these implicit partisans preferred the policy proposed by "their" party, and this effect occurred more strongly for implicit than explicit plan preference. The authors suggest that implicitly partisan Independents may consciously override some partisan influence when making explicit political judgments, and Independents may identify as such to appear objective even when they are not.

  9. Estimating soil temperature using neighboring station data via multi-nonlinear regression and artificial neural network models.

    PubMed

    Bilgili, Mehmet; Sahin, Besir; Sangun, Levent

    2013-01-01

    The aim of this study is to estimate the soil temperatures of a target station using only the soil temperatures of neighboring stations, without any consideration of other variables or parameters related to soil properties. For this aim, the soil temperatures were measured at depths of 5, 10, 20, 50, and 100 cm below the earth surface at eight measuring stations in Turkey. Firstly, multiple nonlinear regression analysis was performed with the "Enter" method to determine the relationship between the values of the target station and neighboring stations. Then, stepwise regression analysis was applied to determine the best independent variables. Finally, an artificial neural network (ANN) model was developed to estimate the soil temperature of a target station. According to the derived results for the training data set, the mean absolute percentage error and correlation coefficient ranged from 1.45% to 3.11% and from 0.9979 to 0.9986, respectively, while corresponding ranges of 1.685-3.65% and 0.9988-0.9991, respectively, were obtained based on the testing data set. The obtained results show that the developed ANN model provides a simple and accurate prediction to determine the soil temperature. In addition, the missing data at the target station could be determined with a high degree of accuracy.

  10. Scaling estimates of vegetation structure in Amazonian tropical forests using multi-angle MODIS observations

    PubMed Central

    de Moura, Yhasmin Mendes; Hilker, Thomas; Goncalves, Fabio Guimarães; Galvão, Lênio Soares; dos Santos, João Roberto; Lyapustin, Alexei; Maeda, Eduardo Eiji; de Jesus Silva, Camila Valéria

    2018-01-01

    Detailed knowledge of vegetation structure is required for accurate modelling of terrestrial ecosystems, but direct measurements of the three-dimensional distribution of canopy elements, for instance from LiDAR, are not widely available. We investigate the potential for modelling vegetation roughness, a key parameter for climatological models, from directional scattering of visible and near-infrared (NIR) reflectance acquired from NASA’s Moderate Resolution Imaging Spectroradiometer (MODIS). We compare our estimates across different tropical forest types to independent measures obtained from: (1) airborne laser scanning (ALS), (2) the spaceborne Geoscience Laser Altimeter System (GLAS)/ICESat, and (3) the spaceborne SeaWinds/QuikSCAT. Our results showed a linear correlation between MODIS-derived anisotropy and ALS-derived entropy (r2 = 0.54, RMSE = 0.11), even in high biomass regions. Significant relationships were also obtained between MODIS-derived anisotropy and GLAS-derived entropy (0.52 ≤ r2 ≤ 0.61; p < 0.05), with similar slopes and offsets found throughout the season, and RMSE between 0.26 and 0.30 (units of entropy). The relationship between MODIS-derived anisotropy and backscattering measurements (σ0) from SeaWinds/QuikSCAT presented an r2 of 0.59 and an RMSE of 0.11. We conclude that multi-angular MODIS observations are suitable for extrapolating measures of canopy entropy across different forest types, providing additional estimates of vegetation structure in the Amazon. PMID:29618964

  11. Estimating Bacterial Production in Marine Waters from the Simultaneous Incorporation of Thymidine and Leucine

    PubMed Central

    Chin-Leo, Gerardo; Kirchman, David L.

    1988-01-01

    We examined the simultaneous incorporation of [3H]thymidine and [14C]leucine to obtain two independent indices of bacterial production (DNA and protein syntheses) in a single incubation. Incorporation rates of leucine estimated by the dual-label method were generally higher than those obtained by the single-label method, but the differences were small (dual/single = 1.1 ± 0.2 [mean ± standard deviation]) and were probably due to the presence of labeled leucyl-tRNA in the cold trichloroacetic acid-insoluble fraction. There were no significant differences in thymidine incorporation between dual- and single-label incubations (dual/single = 1.03 ± 0.13). Addition of the two substrates in relatively large amounts (25 nM) did not apparently increase bacterial activity during short incubations (<5 h). With the dual-label method we found that thymidine and leucine incorporation rates covaried over depth profiles of the Chesapeake Bay. Estimates of bacterial production based on thymidine and leucine differed by less than 25%. Although the need for appropriate conversion factors has not been eliminated, the dual-label approach can be used to examine the variation in bacterial production while ensuring that the observed variation in incorporation rates is due to real changes in bacterial production rather than changes in conversion factors or introduction of other artifacts. PMID:16347706

  12. Evaluation of Bayesian Sequential Proportion Estimation Using Analyst Labels

    NASA Technical Reports Server (NTRS)

    Lennington, R. K.; Abotteen, K. M. (Principal Investigator)

    1980-01-01

    The author has identified the following significant results. A total of ten Large Area Crop Inventory Experiment Phase 3 blind sites and analyst-interpreter labels were used in a study to compare proportional estimates obtained by the Bayes sequential procedure with estimates obtained from simple random sampling and from Procedure 1. The analyst error rate using the Bayes technique was shown to be no greater than that for the simple random sampling. Also, the segment proportion estimates produced using this technique had smaller bias and mean squared errors than the estimates produced using either simple random sampling or Procedure 1.

  13. A General Approach for Estimating Scale Score Reliability for Panel Survey Data

    ERIC Educational Resources Information Center

    Biemer, Paul P.; Christ, Sharon L.; Wiesen, Christopher A.

    2009-01-01

    Scale score measures are ubiquitous in the psychological literature and can be used as both dependent and independent variables in data analysis. Poor reliability of scale score measures leads to inflated standard errors and/or biased estimates, particularly in multivariate analysis. Reliability estimation is usually an integral step to assess…

  14. Atmospheric Turbulence Estimates from a Pulsed Lidar

    NASA Technical Reports Server (NTRS)

    Pruis, Matthew J.; Delisi, Donald P.; Ahmad, Nash'at N.; Proctor, Fred H.

    2013-01-01

    Estimates of the eddy dissipation rate (EDR) were obtained from measurements made by a coherent pulsed lidar and compared with estimates from mesoscale model simulations and measurements from an in situ sonic anemometer at the Denver International Airport and with EDR estimates from the last observation time of the trailing vortex pair. The estimates of EDR from the lidar were obtained using two different methodologies. The two methodologies show consistent estimates of the vertical profiles. Comparison of EDR derived from the Weather Research and Forecast (WRF) mesoscale model with the in situ lidar estimates show good agreement during the daytime convective boundary layer, but the WRF simulations tend to overestimate EDR during the nighttime. The EDR estimates from a sonic anemometer located at 7.3 meters above ground level are approximately one order of magnitude greater than both the WRF and lidar estimates - which are from greater heights - during the daytime convective boundary layer and substantially greater during the nighttime stable boundary layer. The consistency of the EDR estimates from different methods suggests a reasonable ability to predict the temporal evolution of a spatially averaged vertical profile of EDR in an airport terminal area using a mesoscale model during the daytime convective boundary layer. In the stable nighttime boundary layer, there may be added value to EDR estimates provided by in situ lidar measurements.

  15. Observer variability in estimating numbers: An experiment

    USGS Publications Warehouse

    Erwin, R.M.

    1982-01-01

    Census estimates of bird populations provide an essential framework for a host of research and management questions. However, with some exceptions, the reliability of numerical estimates and the factors influencing them have received insufficient attention. Independent of the problems associated with habitat type, weather conditions, cryptic coloration, etc., estimates may vary widely due only to intrinsic differences in observers' abilities to estimate numbers. Lessons learned in the field of perceptual psychology may be usefully applied to 'real world' problems in field ornithology. Based largely on dot discrimination tests in the laboratory, it was found that numerical abundance, density of objects, spatial configuration, color, background, and other variables influence individual accuracy in estimating numbers. The primary purpose of the present experiment was to assess the effects of observer, prior experience, and numerical range on accuracy in estimating numbers of waterfowl from black-and-white photographs. By using photographs of animals rather than black dots, I felt the results could be applied more meaningfully to field situations. Further, reinforcement was provided throughout some experiments to examine the influence of training on accuracy.

  16. Flood of June 22-24, 2006, in North-Central Ohio, With Emphasis on the Cuyahoga River Near Independence

    USGS Publications Warehouse

    Sherwood, James M.; Ebner, Andrew D.; Koltun, G.F.; Astifan, Brian M.

    2007-01-01

    Heavy rains caused severe flooding on June 22-24, 2006, and damaged approximately 4,580 homes and 48 businesses in Cuyahoga County. Damage estimates in Cuyahoga County for the two days of flooding exceed $47 million; statewide damage estimates exceed $150 million. Six counties (Cuyahoga, Erie, Huron, Lucas, Sandusky, and Stark) in northeast Ohio were declared Federal disaster areas. One death, in Lorain County, was attributed to the flooding. The peak streamflow of 25,400 cubic feet per second and corresponding peak gage height of 23.29 feet were the highest recorded at the U.S. Geological Survey (USGS) streamflow-gaging station Cuyahoga River at Independence (04208000) since the gaging station began operation in 1922, exceeding the previous peak streamflow of 24,800 cubic feet per second that occurred on January 22, 1959. An indirect calculation of the peak streamflow was made by use of a step-backwater model because all roads leading to the gaging station were inundated during the flood and field crews could not reach the station to make a direct measurement. Because of a statistically significant and persistent positive trend in the annual-peak-streamflow time series for the Cuyahoga River at Independence, a method was developed and applied to detrend the annual-peak-streamflow time series prior to the traditional log-Pearson Type III flood-frequency analysis. Based on this analysis, the recurrence interval of the computed peak streamflow was estimated to be slightly less than 100 years. Peak-gage-height data, peak-streamflow data, and recurrence-interval estimates for the June 22-24, 2006, flood are tabulated for the Cuyahoga River at Independence and 10 other USGS gaging stations in north-central Ohio. Because flooding along the Cuyahoga River near Independence and Valley View was particularly severe, a study was done to document the peak water-surface profile during the flood from approximately 2 miles downstream from the USGS streamflow-gaging station at

  17. A hierarchical estimator development for estimation of tire-road friction coefficient.

    PubMed

    Zhang, Xudong; Göhlich, Dietmar

    2017-01-01

The performance of vehicle active safety systems depends on the friction force arising from the contact between the tires and the road surface. Therefore, adequate knowledge of the tire-road friction coefficient is of great importance for achieving good performance of these control systems. This paper presents a tire-road friction coefficient estimation method for an advanced vehicle configuration, four-motorized-wheel electric vehicles, in which the longitudinal tire force is easily obtained. A hierarchical structure is adopted for the proposed estimation design. An upper estimator is developed based on an unscented Kalman filter to estimate vehicle state information, while a hybrid estimation method is applied as the lower estimator to identify the tire-road friction coefficient using a general regression neural network (GRNN) and Bayes' theorem. GRNN aims at detecting the road friction coefficient under small excitations, which are the most common situations in daily driving. GRNN is able to accurately create a mapping from input parameters to the friction coefficient, avoiding storing an entire complex tire model. For large excitations, the estimation algorithm is based on Bayes' theorem and a simplified "magic formula" tire model. The integrated estimation method is established by combining the above-mentioned estimators. Finally, simulations based on a high-fidelity CarSim vehicle model are carried out on different road surfaces and driving maneuvers to verify the effectiveness of the proposed estimation method.

  18. A hierarchical estimator development for estimation of tire-road friction coefficient

    PubMed Central

    Zhang, Xudong; Göhlich, Dietmar

    2017-01-01

The performance of vehicle active safety systems depends on the friction force arising from the contact between the tires and the road surface. Therefore, adequate knowledge of the tire-road friction coefficient is of great importance for achieving good performance of these control systems. This paper presents a tire-road friction coefficient estimation method for an advanced vehicle configuration, four-motorized-wheel electric vehicles, in which the longitudinal tire force is easily obtained. A hierarchical structure is adopted for the proposed estimation design. An upper estimator is developed based on an unscented Kalman filter to estimate vehicle state information, while a hybrid estimation method is applied as the lower estimator to identify the tire-road friction coefficient using a general regression neural network (GRNN) and Bayes' theorem. GRNN aims at detecting the road friction coefficient under small excitations, which are the most common situations in daily driving. GRNN is able to accurately create a mapping from input parameters to the friction coefficient, avoiding storing an entire complex tire model. For large excitations, the estimation algorithm is based on Bayes' theorem and a simplified “magic formula” tire model. The integrated estimation method is established by combining the above-mentioned estimators. Finally, simulations based on a high-fidelity CarSim vehicle model are carried out on different road surfaces and driving maneuvers to verify the effectiveness of the proposed estimation method. PMID:28178332
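A GRNN prediction is, in essence, a Gaussian-kernel weighted average of training targets (the Nadaraya-Watson form), which is why it can map excitation features to a friction coefficient without storing a full tire model. A minimal sketch, assuming illustrative inputs; the feature layout and the smoothing width sigma are placeholders, not values from the paper:

```python
import numpy as np

def grnn_predict(X_train, y_train, x_query, sigma=0.1):
    """General regression neural network output: a Gaussian-kernel
    weighted average of the training targets (Nadaraya-Watson form)."""
    d2 = np.sum((np.asarray(X_train) - np.asarray(x_query)) ** 2, axis=1)
    w = np.exp(-d2 / (2.0 * sigma ** 2))
    return float(np.dot(w, y_train) / np.sum(w))
```

In the paper's setting the rows of X_train would hold excitation features (e.g. slip ratio and normalized longitudinal tire force) and y_train the corresponding friction coefficients.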

  19. The predictive information obtained by testing multiple software versions

    NASA Technical Reports Server (NTRS)

    Lee, Larry D.

    1987-01-01

Multiversion programming is a redundancy approach to developing highly reliable software. In applications of this method, two or more versions of a program are developed independently by different programmers, and the versions are combined to form a redundant system. One variation of this approach consists of developing a set of n program versions and testing the versions to predict the failure probability of a particular program or of a system formed from a subset of the programs. The precision that might be obtained is examined, as is the effect of programmer variability when predictions are made over repetitions of the process of generating different program versions.
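The kind of prediction described above can be sketched from a shared test-case matrix. This is a hypothetical data layout, not the paper's estimator: each row is a test case, each column a version, and a two-version redundant system is assumed to fail only when both versions fail on the same input.

```python
import numpy as np

def version_failure_rates(fail):
    """fail[i, j] = 1 if version j failed test case i; per-version failure rate."""
    return fail.mean(axis=0)

def coincident_failure_rate(fail, j, k):
    """Estimated failure probability of a 2-version redundant system built
    from versions j and k: both must fail on the same input."""
    return float(np.mean(fail[:, j] & fail[:, k]))
```

Note that the coincident rate can exceed the product of the individual rates when failures are correlated, which is precisely why testing multiple versions together is informative.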

  20. Estimating Canopy Dark Respiration for Crop Models

    NASA Technical Reports Server (NTRS)

    Monje Mejia, Oscar Alberto

    2014-01-01

Estimates of crop production are obtained from accurate estimates of daily carbon gain. Canopy gross photosynthesis (Pgross) can be estimated from biochemical models of photosynthesis using sun and shaded leaf portions and the amount of intercepted photosynthetically active radiation (PAR). In turn, canopy daily net carbon gain can be estimated from canopy daily gross photosynthesis when canopy dark respiration (Rd) is known.
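The accounting relation above can be sketched as follows. This is a minimal illustration, not the crop model itself; the units and the convention that dark respiration runs over the full 24 hours are assumptions (some models charge Rd only during the dark period).

```python
def daily_net_carbon_gain(p_gross, r_dark, photoperiod_h):
    """Daily net canopy carbon gain (mol CO2 m^-2 d^-1): gross photosynthesis
    accumulated over the photoperiod minus dark respiration over 24 h.
    p_gross and r_dark are rates in mol CO2 m^-2 h^-1."""
    return p_gross * photoperiod_h - r_dark * 24.0
```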

  1. Estimate of Errors of Pressure Predictions Without Meteorological Forecasts

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    1957-07-31

Independent methods of estimating pressure were considered; the range of application in height is from that of baro-fuzed tactical weapons (a few thousand feet) to that of the control of height of aircraft at high altitude (45,000 feet).

  2. Estimation of the Invisible Energy in Extensive Air Showers with the Data Collected by the Pierre Auger Observatory

    NASA Astrophysics Data System (ADS)

    Mariazzi, Analisa

    The determination of the energy of primary cosmic rays from their extensive air showers using the fluorescence technique requires an estimation of the energy carried away by particles reaching the ground that do not deposit all their energy in the atmosphere. This estimation is typically made using Monte Carlo simulations and depends on the assumed primary particle mass and on model predictions for hadron-air collisions at high energies. In this work we review the method that the Pierre Auger Collaboration uses to obtain the invisible energy directly from hybrid events measured simultaneously with the fluorescence and the surface detectors of the Pierre Auger Observatory. As a corroboration of these results, a new method for the determination of the invisible energy using an independent data set is also presented. Both methods agree within systematic uncertainties, reducing significantly the biases related to differences between the high energy hadronic interaction models and data.

  3. Calibrating recruitment estimates for mourning doves from harvest age ratios

    USGS Publications Warehouse

    Miller, David A.; Otis, David L.

    2010-01-01

    We examined results from the first national-scale effort to estimate mourning dove (Zenaida macroura) age ratios and developed a simple, efficient, and generalizable methodology for calibrating estimates. Our method predicted age classes of unknown-age wings based on backward projection of molt distributions from fall harvest collections to preseason banding. We estimated 1) the proportion of late-molt individuals in each age class, and 2) the molt rates of juvenile and adult birds. Monte Carlo simulations demonstrated our estimator was minimally biased. We estimated model parameters using 96,811 wings collected from hunters and 42,189 birds banded during preseason from 68 collection blocks in 22 states during the 2005–2007 hunting seasons. We also used estimates to derive a correction factor, based on latitude and longitude of samples, which can be applied to future surveys. We estimated differential vulnerability of age classes to harvest using data from banded birds and applied that to harvest age ratios to estimate population age ratios. Average, uncorrected age ratio of known-age wings for states that allow hunting was 2.25 (SD 0.85) juveniles:adult, and average, corrected ratio was 1.91 (SD 0.68), as determined from harvest age ratios from an independent sample of 41,084 wings collected from random hunters in 2007 and 2008. We used an independent estimate of differential vulnerability to adjust corrected harvest age ratios and estimated the average population age ratio as 1.45 (SD 0.52), a direct measure of recruitment rates. Average annual recruitment rates were highest east of the Mississippi River and in the northwestern United States, with lower rates between. Our results demonstrate a robust methodology for calibrating recruitment estimates for mourning doves and represent the first large-scale estimates of recruitment for the species. 
Our methods can be used by managers to correct future harvest survey data to generate recruitment estimates for use in
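The correction chain described above (harvest age ratio, adjusted for the differential vulnerability of juveniles to harvest, yields a population age ratio) can be sketched as a one-line calculation. The numbers in the test are illustrative, not the paper's estimates.

```python
def population_age_ratio(harvest_age_ratio, differential_vulnerability):
    """Juveniles are typically more vulnerable to harvest than adults, so the
    raw harvest age ratio overstates recruitment; dividing by the
    differential vulnerability (ratio of juvenile to adult harvest rates)
    converts it to a population age ratio."""
    return harvest_age_ratio / differential_vulnerability
```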

  4. Online estimation of room reverberation time

    NASA Astrophysics Data System (ADS)

    Ratnam, Rama; Jones, Douglas L.; Wheeler, Bruce C.; Feng, Albert S.

    2003-04-01

    The reverberation time (RT) is an important parameter for characterizing the quality of an auditory space. Sounds in reverberant environments are subject to coloration. This affects speech intelligibility and sound localization. State-of-the-art signal processing algorithms for hearing aids are expected to have the ability to evaluate the characteristics of the listening environment and turn on an appropriate processing strategy accordingly. Thus, a method for the characterization of room RT based on passively received microphone signals represents an important enabling technology. Current RT estimators, such as Schroeder's method or regression, depend on a controlled sound source, and thus cannot produce an online, blind RT estimate. Here, we describe a method for estimating RT without prior knowledge of sound sources or room geometry. The diffusive tail of reverberation was modeled as an exponentially damped Gaussian white noise process. The time constant of the decay, which provided a measure of the RT, was estimated using a maximum-likelihood procedure. The estimates were obtained continuously, and an order-statistics filter was used to extract the most likely RT from the accumulated estimates. The procedure was illustrated for connected speech. Results obtained for simulated and real room data are in good agreement with the real RT values.
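The decay model described above (reverberant tails as exponentially damped Gaussian white noise, with the time constant fit by maximum likelihood) can be sketched with a one-dimensional profile-likelihood grid search. This is a simplified stand-in for the paper's estimator: for fixed per-sample decay rate a, the ML noise variance has a closed form, so it is profiled out; the grid bounds and sample rate are assumptions.

```python
import numpy as np

def ml_decay_rate(y, grid=np.linspace(0.990, 0.99999, 2000)):
    """ML estimate of a in y[n] = sigma * a**n * w[n], w[n] ~ N(0, 1)."""
    y = np.asarray(y, float)
    n = np.arange(len(y))
    def profile_nll(a):
        s2 = np.mean(y ** 2 * a ** (-2.0 * n))   # ML sigma^2 given a
        return n.sum() * np.log(a) + 0.5 * len(y) * np.log(s2)
    return grid[np.argmin([profile_nll(a) for a in grid])]

def rt60_from_rate(a, fs):
    """Time (s) for the amplitude envelope a**n to fall by 60 dB."""
    return -3.0 / (np.log10(a) * fs)
```

In the paper, such estimates are accumulated continuously and an order-statistics filter extracts the most likely RT from the running collection.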

  5. An attempt to estimate students' workload.

    PubMed

    Pogacnik, M; Juznic, P; Kosorok-Drobnic, M; Pogacnik, A; Cestnik, V; Kogovsek, J; Pestevsek, U; Fernandes, Tito

    2004-01-01

    Following the recent introduction of the European Credit Transfer System (ECTS) into several European university programs, a new interest has developed in determining students' workload. ECTS credits are numerical values describing the student workload required to complete course units; ECTS has the potential to facilitate comparison and create transparency between institutional curricula. ECTS credits are frequently listed alongside institutional credits in course outlines and module summaries. Measuring student workload has been difficult; to a large extent, estimates are based only upon anecdotal and casual information. To gather more systematic information, we asked students at the Veterinary Faculty, University of Ljubljana, to estimate the actual total workload they committed to fulfill their coursework obligations for specific subjects in the veterinary degree program by reporting their attendance at defined contact hours and their estimated time for outside study, including the time required for examinations and other activities. Students also reported the final grades they received for these subjects. The results show that certain courses require much more work than others, independent of credit unit assignment. Generally, the courses with more contact hours tend also to demand more independent work; the best predictor of both actual student workload and student success is the amount of contact time in which they participate. The data failed to show any strong connection between students' total workload and grades they received; rather, they showed some evidence that regular presence at contact hours was the most positive influence on grades. Less frequent presence at lectures tended to indicate less time spent on independent study. It was also found that pre-clinical and clinical courses tended to require more work from students than other, more general subjects. 
While the present study does not provide conclusive evidence, it does indicate the need for

  6. Bootstrap Estimates of Standard Errors in Generalizability Theory

    ERIC Educational Resources Information Center

    Tong, Ye; Brennan, Robert L.

    2007-01-01

    Estimating standard errors of estimated variance components has long been a challenging task in generalizability theory. Researchers have speculated about the potential applicability of the bootstrap for obtaining such estimates, but they have identified problems (especially bias) in using the bootstrap. Using Brennan's bias-correcting procedures…
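The basic bootstrap idea being evaluated can be sketched generically. This is the plain nonparametric bootstrap, not Brennan's bias-correcting procedure; the statistic could be any estimated variance component.

```python
import numpy as np

def bootstrap_se(data, statistic, n_boot=2000, seed=0):
    """Nonparametric bootstrap standard error: resample with replacement,
    recompute the statistic, take the SD of the replicates."""
    rng = np.random.default_rng(seed)
    data = np.asarray(data)
    reps = [statistic(data[rng.integers(0, len(data), len(data))])
            for _ in range(n_boot)]
    return float(np.std(reps, ddof=1))
```

For variance components estimated from crossed designs, the resampling unit matters (persons, items, or both), which is one source of the bias problems the article discusses.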

  7. No correlation between ultrasound placental grading at 31-34 weeks of gestation and a surrogate estimate of organ function at term obtained by stereological analysis.

    PubMed

    Yin, T T; Loughna, P; Ong, S S; Padfield, J; Mayhew, T M

    2009-08-01

    We test the experimental hypothesis that early changes in the ultrasound appearance of the placenta reflect poor or reduced placental function. The sonographic (Grannum) grade of placental maturity was compared to placental function as expressed by the morphometric oxygen diffusive conductance of the villous membrane. Ultrasonography was used to assess the Grannum grade of 32 placentas at 31-34 weeks of gestation. Indications for the scans included a history of previous fetal abnormalities, previous fetal growth problems or suspicion of IUGR. Placentas were classified from grade 0 (most immature) to grade III (most mature). We did not exclude smokers or complicated pregnancies as we aimed to correlate the early appearance of mature placentas with placental function. After delivery, microscopical fields on formalin-fixed, trichrome-stained histological sections of each placenta were obtained by multistage systematic uniform random sampling. Using design-based stereological methods, the exchange surface areas of peripheral (terminal and intermediate) villi and their fetal capillaries and the arithmetic and harmonic mean thicknesses of the villous membrane (maternal surface of villous trophoblast to adluminal surface of vascular endothelium) were estimated. An index of the variability in thickness of this membrane, and an estimate of its oxygen diffusive conductance, were derived secondarily as were estimates of the mean diameters and total lengths of villi and fetal capillaries. Group comparisons were drawn using analysis of variance. We found no significant differences in placental volume or composition or in the dimensions or diffusive conductances of the villous membrane. Subsequent exclusion of smokers did not alter these main findings. Grannum grades at 31-34 weeks of gestation appear not to provide reliable predictors of the functional capacity of the term placenta as expressed by the surrogate measure, morphometric diffusive conductance.

  8. [A method for obtaining redshifts of quasars based on wavelet multi-scaling feature matching].

    PubMed

    Liu, Zhong-Tian; Li, Xiang-Ru; Wu, Fu-Chao; Zhao, Yong-Heng

    2006-09-01

The LAMOST project, the world's largest sky survey project being implemented in China, is expected to obtain 10^5 quasar spectra. The main objective of the present article is to explore methods that can be used to estimate the redshifts of quasar spectra from LAMOST. Firstly, the features of the broad emission lines are extracted from the quasar spectra to overcome the disadvantage of low signal-to-noise ratio. Then the redshifts of quasar spectra can be estimated by using multi-scaling feature matching. The experiment with the 15,715 quasars from the SDSS DR2 shows that the correct rate of redshift estimation by the method is 95.13% within an error range of 0.02. This method was designed to obtain the redshifts of quasar spectra with relative flux and a low signal-to-noise ratio, which is applicable to the LAMOST data and helps the study of quasars and the large-scale structure of the universe.
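The core of emission-line redshift matching can be sketched very simply: shift a set of rest-frame line wavelengths by trial redshifts and pick the redshift that best aligns them with the observed line positions. This is a toy stand-in for the wavelet multi-scaling feature matching of the paper; the line list and grid are illustrative.

```python
import numpy as np

def estimate_redshift(observed_lines, rest_lines,
                      z_grid=np.linspace(0.0, 5.0, 5001)):
    """Pick the trial redshift whose shifted rest-frame emission lines
    best match the observed line wavelengths (in Angstroms)."""
    observed = np.asarray(observed_lines, float)
    rest = np.asarray(rest_lines, float)
    def cost(z):
        shifted = rest * (1.0 + z)
        return sum(np.min(np.abs(o - shifted)) for o in observed)
    return min(z_grid, key=cost)
```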

  9. Predictors of chain acquisition among independent dialysis facilities.

    PubMed

    Pozniak, Alyssa S; Hirth, Richard A; Banaszak-Holl, Jane; Wheeler, John R C

    2010-04-01

    To determine the predictors of chain acquisition among independent dialysis providers. Retrospective facility-level data combined from CMS Cost Reports, Medical Evidence Forms, Annual Facility Surveys, and claims for 1996-2003. Independent dialysis facilities' probability of acquisition by a dialysis chain (overall and by chain size) was estimated using a discrete time hazard rate model, controlling for financial and clinical performance, practice patterns, market factors, and other facility characteristics. The sample includes all U.S. freestanding dialysis facilities that report not being chain affiliated for at least 1 year between 1997 and 2003. Above-average costs and better quality outcomes are significant determinants of dialysis chain acquisition. Facilities in larger markets were more likely to be acquired by a chain. Furthermore, small dialysis chains have different acquisition strategies than large chains. Dialysis chains appear to employ a mix of turn-around and cream-skimming strategies. Poor financial health is a predictor of chain acquisition as in other health care sectors, but the increased likelihood of chain acquisition among higher quality facilities is unique to the dialysis industry. Significant differences among predictors of acquisition by small and large chains reinforce the importance of using a richer classification for chain status.

  10. Independent Predictors of Prognosis Based on Oral Cavity Squamous Cell Carcinoma Surgical Margins.

    PubMed

    Buchakjian, Marisa R; Ginader, Timothy; Tasche, Kendall K; Pagedar, Nitin A; Smith, Brian J; Sperry, Steven M

    2018-05-01

    Objective To conduct a multivariate analysis of a large cohort of oral cavity squamous cell carcinoma (OCSCC) cases for independent predictors of local recurrence (LR) and overall survival (OS), with emphasis on the relationship between (1) prognosis and (2) main specimen permanent margins and intraoperative tumor bed frozen margins. Study Design Retrospective cohort study. Setting Tertiary academic head and neck cancer program. Subjects and Methods This study included 426 patients treated with OCSCC resection between 2005 and 2014 at University of Iowa Hospitals and Clinics. Patients underwent excision of OCSCC with intraoperative tumor bed frozen margin sampling and main specimen permanent margin assessment. Multivariate analysis of the data set to predict LR and OS was performed. Results Independent predictors of LR included nodal involvement, histologic grade, and main specimen permanent margin status. Specifically, the presence of a positive margin (odds ratio, 6.21; 95% CI, 3.3-11.9) or <1-mm/carcinoma in situ margin (odds ratio, 2.41; 95% CI, 1.19-4.87) on the main specimen was an independent predictor of LR, whereas intraoperative tumor bed margins were not predictive of LR on multivariate analysis. Similarly, independent predictors of OS on multivariate analysis included nodal involvement, extracapsular extension, and a positive main specimen margin. Tumor bed margins did not independently predict OS. Conclusion The main specimen margin is a strong independent predictor of LR and OS on multivariate analysis. Intraoperative tumor bed frozen margins do not independently predict prognosis. We conclude that emphasis should be placed on evaluating the main specimen margins when estimating prognosis after OCSCC resection.

  11. Assessing Interval Estimation Methods for Hill Model ...

    EPA Pesticide Factsheets

    The Hill model of concentration-response is ubiquitous in toxicology, perhaps because its parameters directly relate to biologically significant metrics of toxicity such as efficacy and potency. Point estimates of these parameters obtained through least squares regression or maximum likelihood are commonly used in high-throughput risk assessment, but such estimates typically fail to include reliable information concerning confidence in (or precision of) the estimates. To address this issue, we examined methods for assessing uncertainty in Hill model parameter estimates derived from concentration-response data. In particular, using a sample of ToxCast concentration-response data sets, we applied four methods for obtaining interval estimates that are based on asymptotic theory, bootstrapping (two varieties), and Bayesian parameter estimation, and then compared the results. These interval estimation methods generally did not agree, so we devised a simulation study to assess their relative performance. We generated simulated data by constructing four statistical error models capable of producing concentration-response data sets comparable to those observed in ToxCast. We then applied the four interval estimation methods to the simulated data and compared the actual coverage of the interval estimates to the nominal coverage (e.g., 95%) in order to quantify performance of each of the methods in a variety of cases (i.e., different values of the true Hill model paramet
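One of the interval methods compared above, the nonparametric bootstrap, can be sketched for a Hill fit. This is a minimal illustration, not the study's implementation: the Hill slope is fixed at 1, the fit is a coarse grid-search least squares rather than a proper optimizer, and all parameter names are placeholders.

```python
import numpy as np

def hill(c, top, ac50):
    """Hill concentration-response with slope fixed at 1."""
    return top * c / (ac50 + c)

def fit_hill(conc, resp):
    """Coarse grid-search least-squares fit for (top, AC50)."""
    tops = np.linspace(0.5 * resp.max(), 1.5 * resp.max(), 40)
    ac50s = np.geomspace(conc.min(), conc.max(), 40)
    sse = [[np.sum((resp - hill(conc, t, k)) ** 2) for k in ac50s]
           for t in tops]
    i, j = np.unravel_index(np.argmin(sse), (40, 40))
    return tops[i], ac50s[j]

def bootstrap_ac50_interval(conc, resp, n_boot=200, seed=0):
    """Nonparametric bootstrap percentile interval for the AC50."""
    rng = np.random.default_rng(seed)
    n = len(conc)
    reps = []
    for _ in range(n_boot):
        s = rng.integers(0, n, n)
        reps.append(fit_hill(conc[s], resp[s])[1])
    return np.percentile(reps, [2.5, 97.5])
```

The study's point is that such intervals can disagree substantially with asymptotic and Bayesian intervals, so their actual coverage has to be checked by simulation.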

  12. Method-independent, Computationally Frugal Convergence Testing for Sensitivity Analysis Techniques

    NASA Astrophysics Data System (ADS)

    Mai, Juliane; Tolson, Bryan

    2017-04-01

The increasing complexity and runtime of environmental models lead to the current situation that the calibration of all model parameters or the estimation of all of their uncertainties is often computationally infeasible. Hence, techniques to determine the sensitivity of model parameters are used to identify the most important parameters or model processes. All subsequent model calibrations or uncertainty estimation procedures then focus only on these subsets of parameters and are hence less computationally demanding. While the examination of the convergence of calibration and uncertainty methods is state-of-the-art, the convergence of the sensitivity methods is usually not checked. If at all, bootstrapping of the sensitivity results is used to determine the reliability of the estimated indexes. Bootstrapping, however, might as well become computationally expensive in case of large model outputs and a high number of bootstraps. We, therefore, present a Model Variable Augmentation (MVA) approach to check the convergence of sensitivity indexes without performing any additional model run. This technique is method- and model-independent. It can be applied either during the sensitivity analysis (SA) or afterwards. The latter case enables the checking of already processed sensitivity indexes. To demonstrate the method independency of the convergence testing method, we applied it to three widely used, global SA methods: the screening method known as the Morris method or Elementary Effects (Morris 1991, Campolongo et al., 2000), the variance-based Sobol' method (Sobol' 1993, Saltelli et al. 2010), and a derivative-based method known as the Parameter Importance index (Goehler et al. 2013). The new convergence testing method is first scrutinized using 12 analytical benchmark functions (Cuntz & Mai et al. 2015) where the true indexes of the aforementioned three methods are known. This proof of principle shows that the method reliably determines the uncertainty of the SA results when different

  13. Metallic nano-structures for polarization-independent multi-spectral filters

    NASA Astrophysics Data System (ADS)

    Tang, Yongan; Vlahovic, Branislav; Brady, David Jones

    2011-05-01

Cross-shaped-hole arrays (CSHAs) are selected for diminishing the polarization-dependent transmission differences of incident plane waves. We investigate the light transmission spectrum of CSHAs in a thin gold film over a wide range of feature sizes. It is observed that two well-separated, high-transmission-efficiency peaks can be obtained by designing the parameters of the CSHAs for both p-polarized and s-polarized waves; a clean transmission band-pass is also observed for specific parameters of a CSHA. This implies the possibility of obtaining a desired polarization-independent transmission spectrum from CSHAs by designing their parameters. These findings suggest potential applications of metallic nano-structures in optical filters, optical band-passes, optical imaging, optical sensing, and biosensors.

  14. Figure-ground segmentation based on class-independent shape priors

    NASA Astrophysics Data System (ADS)

    Li, Yang; Liu, Yang; Liu, Guojun; Guo, Maozu

    2018-01-01

    We propose a method to generate figure-ground segmentation by incorporating shape priors into the graph-cuts algorithm. Given an image, we first obtain a linear representation of an image and then apply directional chamfer matching to generate class-independent, nonparametric shape priors, which provide shape clues for the graph-cuts algorithm. We then enforce shape priors in a graph-cuts energy function to produce object segmentation. In contrast to previous segmentation methods, the proposed method shares shape knowledge for different semantic classes and does not require class-specific model training. Therefore, the approach obtains high-quality segmentation for objects. We experimentally validate that the proposed method outperforms previous approaches using the challenging PASCAL VOC 2010/2012 and Berkeley (BSD300) segmentation datasets.

  15. Attitude-Independent Magnetometer Calibration for Spin-Stabilized Spacecraft

    NASA Technical Reports Server (NTRS)

    Natanson, Gregory

    2005-01-01

    The paper describes a three-step estimator to calibrate a Three-Axis Magnetometer (TAM) using TAM and slit Sun or star sensor measurements. In the first step, the Calibration Utility forms a loss function from the residuals of the magnitude of the geomagnetic field. This loss function is minimized with respect to biases, scale factors, and nonorthogonality corrections. The second step minimizes residuals of the projection of the geomagnetic field onto the spin axis under the assumption that spacecraft nutation has been suppressed by a nutation damper. Minimization is done with respect to various directions of the body spin axis in the TAM frame. The direction of the spin axis in the inertial coordinate system required for the residual computation is assumed to be unchanged with time. It is either determined independently using other sensors or included in the estimation parameters. In both cases all estimation parameters can be found using simple analytical formulas derived in the paper. The last step is to minimize a third loss function formed by residuals of the dot product between the geomagnetic field and Sun or star vector with respect to the misalignment angle about the body spin axis. The method is illustrated by calibrating TAM for the Fast Auroral Snapshot Explorer (FAST) using in-flight TAM and Sun sensor data. The estimated parameters include magnetic biases, scale factors, and misalignment angles of the spin axis in the TAM frame. Estimation of the misalignment angle about the spin axis was inconclusive since (at least for the selected time interval) the Sun vector was about 15 degrees from the direction of the spin axis; as a result residuals of the dot product between the geomagnetic field and Sun vectors were to a large extent minimized as a by-product of the second step.

  16. Myth or Truth: Independence Day.

    ERIC Educational Resources Information Center

    Gardner, Traci

    Most Americans think of the Fourth of July as Independence Day, but is it really the day the U.S. declared and celebrated independence? By exploring myths and truths surrounding Independence Day, this lesson asks students to think critically about commonly believed stories regarding the beginning of the Revolutionary War and the Independence Day…

  17. Cary Potter on Independent Education

    ERIC Educational Resources Information Center

    Potter, Cary

    1978-01-01

    Cary Potter was President of the National Association of Independent Schools from 1964-1978. As he leaves NAIS he gives his views on education, on independence, on the independent school, on public responsibility, on choice in a free society, on educational change, and on the need for collective action by independent schools. (Author/RK)

  18. Analysis of short pulse laser altimetry data obtained over horizontal path

    NASA Technical Reports Server (NTRS)

    Im, K. E.; Tsai, B. M.; Gardner, C. S.

    1983-01-01

    Recent pulsed measurements of atmospheric delay obtained by ranging to the more realistic targets including a simulated ocean target and an extended plate target are discussed. These measurements are used to estimate the expected timing accuracy of a correlation receiver system. The experimental work was conducted using a pulsed two color laser altimeter.

  19. 3D pose estimation and motion analysis of the articulated human hand-forearm limb in an industrial production environment

    NASA Astrophysics Data System (ADS)

    Hahn, Markus; Barrois, Björn; Krüger, Lars; Wöhler, Christian; Sagerer, Gerhard; Kummert, Franz

    2010-09-01

This study introduces an approach to model-based 3D pose estimation and instantaneous motion analysis of the human hand-forearm limb in the application context of safe human-robot interaction. 3D pose estimation is performed using two approaches: The Multiocular Contracting Curve Density (MOCCD) algorithm is a top-down technique based on pixel statistics around a contour model projected into the images from several cameras. The Iterative Closest Point (ICP) algorithm is a bottom-up approach which uses a motion-attributed 3D point cloud to estimate the object pose. Due to their orthogonal properties, a fusion of these algorithms is shown to be favorable. The fusion is performed by a weighted combination of the extracted pose parameters in an iterative manner. The analysis of object motion is based on the pose estimation result and the motion-attributed 3D points belonging to the hand-forearm limb using an extended constraint-line approach which does not rely on any temporal filtering. A further refinement is obtained using the Shape Flow algorithm, a temporal extension of the MOCCD approach, which estimates the temporal pose derivative based on the current and the two preceding images, corresponding to temporal filtering with a short response time of two or at most three frames. Combining the results of the two motion estimation stages provides information about the instantaneous motion properties of the object. Experimental investigations are performed on real-world image sequences displaying several test persons performing different working actions typically occurring in an industrial production scenario. In all example scenes, the background is cluttered, and the test persons wear various kinds of clothes. For evaluation, independently obtained ground truth data are used.

  20. Tracking the global generation and exports of e-waste. Do existing estimates add up?

    PubMed

    Breivik, Knut; Armitage, James M; Wania, Frank; Jones, Kevin C

    2014-01-01

The transport of discarded electronic and electrical appliances (e-waste) to developing regions has received considerable attention, but it is difficult to assess the significance of this issue without a quantitative understanding of the amounts involved. The main objective of this study is to track the global transport of e-wastes by compiling and constraining existing estimates of the amount of e-waste generated domestically in each country (MGEN), exported from countries belonging to the Organization for Economic Cooperation and Development (OECD) (MEXP), and imported in countries outside of the OECD (MIMP). The reference year is 2005, and all estimates are given with an uncertainty range. Estimates of MGEN obtained by apportioning a global total of ∼35,000 kt (range 20,000-50,000 kt) based on a nation's gross domestic product agree well with independent estimates of MGEN for individual countries. Import estimates MIMP for the countries believed to be the major recipients of e-waste exports from the OECD globally (China, India, and five West African countries) suggest that ∼5,000 kt (3,600-7,300 kt) may have been imported annually to these non-OECD countries alone, which represents ∼23% (17%-34%) of the amount of e-waste generated domestically within the OECD. MEXP for each OECD country is then estimated by applying this fraction of 23% to its MGEN. By allocating each country's MGEN, MIMP, MEXP, and MNET = MGEN + MIMP - MEXP, we can map the global generation and flows of e-waste from OECD to non-OECD countries. While significant uncertainties remain, we note that the estimated imports into seven non-OECD countries alone are often at the higher end of estimates of exports from OECD countries.
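The two bookkeeping steps above (GDP apportionment of a global total, and the mass balance MNET = MGEN + MIMP - MEXP) can be sketched directly; the figures in the test are illustrative, not the study's country data.

```python
def apportion_by_gdp(global_total, gdp_by_country):
    """Allocate a global e-waste total across countries in proportion to GDP."""
    total_gdp = sum(gdp_by_country.values())
    return {c: global_total * g / total_gdp
            for c, g in gdp_by_country.items()}

def net_ewaste(m_gen, m_imp, m_exp):
    """Net e-waste handled within a country: generated + imported - exported."""
    return m_gen + m_imp - m_exp
```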

  1. Score tests for independence in semiparametric competing risks models.

    PubMed

    Saïd, Mériem; Ghazzali, Nadia; Rivest, Louis-Paul

    2009-12-01

    A popular model for competing risks postulates the existence of a latent unobserved failure time for each risk. Assuming that these underlying failure times are independent is attractive since it allows standard statistical tools for right-censored lifetime data to be used in the analysis. This paper proposes simple independence score tests for the validity of this assumption when the individual risks are modeled using semiparametric proportional hazards regressions. It assumes that covariates are available, making the model identifiable. The score tests are derived for alternatives that specify that copulas are responsible for a possible dependency between the competing risks. The test statistics are constructed by adding to the partial likelihoods for the individual risks an explanatory variable for the dependency between the risks. A variance estimator is derived by writing the score function and the Fisher information matrix for the marginal models as stochastic integrals. Pitman efficiencies are used to compare test statistics. A simulation study and a numerical example illustrate the methodology proposed in this paper.

  2. Estimating discharge in rivers using remotely sensed hydraulic information

    USGS Publications Warehouse

    Bjerklie, D.M.; Moller, D.; Smith, L.C.; Dingman, S.L.

    2005-01-01

    A methodology to estimate in-bank river discharge exclusively from remotely sensed hydraulic data is developed. Water-surface width and maximum channel width measured from 26 aerial and digital orthophotos of 17 single-channel rivers and 41 SAR images of three braided rivers were coupled with channel slope data obtained from topographic maps to estimate the discharge. The standard error of the discharge estimates was within a factor of 1.5-2 (50-100%) of the observed, with the mean estimate accuracy within 10%. This level of accuracy was achieved using calibration functions developed from observed discharge. The calibration functions use reach-specific geomorphic variables, the maximum channel width and the channel slope, to predict a correction factor. The calibration functions are related to channel type. Surface velocity and width information, obtained from a single C-band image acquired by the Jet Propulsion Laboratory's (JPL's) AirSAR, was also used to estimate discharge for a reach of the Missouri River. Without using a calibration function, the estimate accuracy was +72% of the observed discharge, which is within the expected range of uncertainty for the method. However, using the observed velocity to calibrate the initial estimate improved the estimate accuracy to within +10% of the observed. Remotely sensed discharge estimates with accuracies reported in this paper could be useful for regional or continental scale hydrologic studies, or in regions where ground-based data are lacking. © 2004 Elsevier B.V. All rights reserved.
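    The kind of rating used above, which maps remotely sensed width and map-derived slope to discharge, can be sketched as a generic power law Q = k·W^a·S^b. The coefficient and exponents below are placeholders for illustration, not the calibrated values from the study.

```python
def discharge_power_law(width_m, slope, k=1.0, a=1.6, b=0.3):
    """Illustrative power-law rating: Q = k * W**a * S**b.

    width_m : water-surface width (m), e.g. from imagery
    slope   : channel slope (dimensionless), e.g. from topographic maps
    k, a, b : placeholder calibration constants (not the study's values)
    """
    return k * width_m ** a * slope ** b

# Wider channels and steeper slopes yield larger discharge estimates.
q_narrow = discharge_power_law(50.0, 0.001)
q_wide = discharge_power_law(100.0, 0.001)
print(q_narrow < q_wide)
```

    In the study itself the raw estimate is further multiplied by a correction factor predicted from reach-specific geomorphic variables; that calibration step is omitted here.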

  3. Linking occupancy surveys with habitat characteristics to estimate abundance and distribution in an endangered cryptic bird

    USGS Publications Warehouse

    Crampton, Lisa H.; Brinck, Kevin W.; Pias, Kyle E.; Heindl, Barbara A. P.; Savre, Thomas; Diegmann, Julia S.; Paxton, Eben H.

    2017-01-01

    Accurate estimates of the distribution and abundance of endangered species are crucial to determine their status and plan recovery options, but such estimates are often difficult to obtain for species with low detection probabilities or that occur in inaccessible habitats. The Puaiohi (Myadestes palmeri) is a cryptic species endemic to Kauaʻi, Hawai‘i, and restricted to high elevation ravines that are largely inaccessible. To improve current population estimates, we developed an approach to model distribution and abundance of Puaiohi across their range by linking occupancy surveys to habitat characteristics, territory density, and landscape attributes. Occupancy per station ranged from 0.17 to 0.82, and was best predicted by the number and vertical extent of cliffs, cliff slope, stream width, and elevation. To link occupancy estimates with abundance, we used territory mapping data to estimate the average number of territories per survey station (0.44 and 0.66 territories per station in low and high occupancy streams, respectively), and the average number of individuals per territory (1.9). We then modeled Puaiohi occupancy as a function of two remote-sensed measures of habitat (stream sinuosity and elevation) to predict occupancy across its entire range. We combined predicted occupancy with estimates of birds per station to produce a global population estimate of 494 (95% CI 414–580) individuals. Our approach is a model for using multiple independent sources of information to accurately track population trends, and we discuss future directions for modeling abundance of this, and other, rare species.

  4. A-posteriori error estimation for second order mechanical systems

    NASA Astrophysics Data System (ADS)

    Ruiner, Thomas; Fehr, Jörg; Haasdonk, Bernard; Eberhard, Peter

    2012-06-01

    One important issue for the simulation of flexible multibody systems is the reduction of the flexible bodies' degrees of freedom. As far as safety questions are concerned, knowledge about the error introduced by the reduction of the flexible degrees of freedom is helpful and very important. In this work, an a-posteriori error estimator for linear first-order systems is extended for error estimation of mechanical second-order systems. Due to the special second-order structure of mechanical systems, an improvement of the a-posteriori error estimator is achieved. A major advantage of the a-posteriori error estimator is that the estimator is independent of the used reduction technique. Therefore, it can be used for moment-matching-based, Gramian-matrix-based or modal-based model reduction techniques. The capability of the proposed technique is demonstrated by the a-posteriori error estimation of a mechanical system, and a sensitivity analysis of the parameters involved in the error estimation process is conducted.

  5. Optimization of attenuation estimation in reflection for in vivo human dermis characterization at 20 MHz.

    PubMed

    Fournier, Céline; Bridal, S Lori; Coron, Alain; Laugier, Pascal

    2003-04-01

    In vivo skin attenuation estimators must be applicable to backscattered radio frequency signals obtained in a pulse-echo configuration. This work compares three such estimators: short-time Fourier multinarrowband (MNB), short-time Fourier centroid shift (FC), and autoregressive centroid shift (ARC). All provide estimations of the attenuation slope (β, dB·cm⁻¹·MHz⁻¹); MNB also provides an independent estimation of the mean attenuation level (IA, dB·cm⁻¹). Practical approaches are proposed for data windowing, spectral variance characterization, and bandwidth selection. Then, based on simulated data, FC and ARC were selected as the best (compromise between bias and variance) attenuation slope estimators. The FC, ARC, and MNB were applied to in vivo human skin data acquired at 20 MHz to estimate βFC, βARC, and IA(MNB), respectively (without diffraction correction, between 11 and 27 MHz). Lateral heterogeneity had less effect and day-to-day reproducibility was smaller for IA than for β. The IA and βARC were dependent on pressure applied to skin during acquisition, and IA on room and skin-surface temperatures. Negative values of IA imply that IA and β may be influenced not only by skin's attenuation but also by structural heterogeneity across dermal depth. Even so, IA was correlated to subject age, and IA, βFC, and βARC were dependent on subject gender. Thus, in vivo attenuation measurements reveal interesting variations with subject age and gender and therefore appear promising for detecting skin structure modifications.

  6. Estimation of continuous multi-DOF finger joint kinematics from surface EMG using a multi-output Gaussian Process.

    PubMed

    Ngeo, Jimson; Tamei, Tomoya; Shibata, Tomohiro

    2014-01-01

    Surface electromyographic (EMG) signals have often been used in estimating upper and lower limb dynamics and kinematics for the purpose of controlling robotic devices such as robot prostheses and finger exoskeletons. However, when estimating kinematics for a high number of degrees of freedom (DOFs) from EMG, the output DOFs are usually estimated independently. In this study, we estimate finger joint kinematics from EMG signals using a multi-output convolved Gaussian Process (Multi-output Full GP) that considers dependencies between outputs. We show that estimation of finger joints from muscle activation inputs can be improved by using a regression model that considers inherent coupling or correlation within the hand and finger joints. We also provide a comparison of estimation performance between different regression methods, such as Artificial Neural Networks (ANNs), which are used by many of the related studies. We show that using a multi-output GP gives improved estimation compared to a multi-output ANN and even dedicated or independent regression models.

  7. Estimating Selected Streamflow Statistics Representative of 1930-2002 in West Virginia

    USGS Publications Warehouse

    Wiley, Jeffrey B.

    2008-01-01

    Regional equations and procedures were developed for estimating 1-, 3-, 7-, 14-, and 30-day 2-year; 1-, 3-, 7-, 14-, and 30-day 5-year; and 1-, 3-, 7-, 14-, and 30-day 10-year hydrologically based low-flow frequency values for unregulated streams in West Virginia. Regional equations and procedures also were developed for estimating the 1-day, 3-year and 4-day, 3-year biologically based low-flow frequency values; the U.S. Environmental Protection Agency harmonic-mean flows; and the 10-, 25-, 50-, 75-, and 90-percent flow-duration values. Regional equations were developed using ordinary least-squares regression using statistics from 117 U.S. Geological Survey continuous streamflow-gaging stations as dependent variables and basin characteristics as independent variables. Equations for three regions in West Virginia - North, South-Central, and Eastern Panhandle - were determined. Drainage area, precipitation, and longitude of the basin centroid are significant independent variables in one or more of the equations. Estimating procedures are presented for determining statistics at a gaging station, a partial-record station, and an ungaged location. Examples of some estimating procedures are presented.

  8. Predictors of Independent Aging and Survival: A 16-Year Follow-Up Report in Octogenarian Men.

    PubMed

    Franzon, Kristin; Byberg, Liisa; Sjögren, Per; Zethelius, Björn; Cederholm, Tommy; Kilander, Lena

    2017-09-01

    To examine the longitudinal associations between aging with preserved functionality (i.e., independent aging) or survival and lifestyle variables, dietary pattern, and cardiovascular risk factors. Cohort study. Uppsala Longitudinal Study of Adult Men, Sweden. Swedish men (n = 1,104) at a mean age of 71 (range 69.4-74.1) were investigated, 369 of whom were evaluated for independent aging 16 years later, at a mean age of 87 (range 84.8-88.9). A questionnaire was used to obtain information on lifestyle, including education, living conditions, and physical activity. Adherence to a Mediterranean-like diet was assessed according to a modified Mediterranean Diet Score derived from 7-day food records. Cardiovascular risk factors were measured. Independent aging at a mean age of 87 was defined as lack of diagnosed dementia, a Mini-Mental State Examination score of 25 or greater, not institutionalized, independence in personal activities of daily living, and ability to walk outdoors alone. Complete survival data at age 85 were obtained from the Swedish Cause of Death Register. Fifty-seven percent of the men survived to age 85, and 75% of the participants at a mean age of 87 displayed independent aging. Independent aging was associated with never smoking (vs current) (odds ratio (OR) = 2.20, 95% confidence interval (CI) = 1.05-4.60) and high (vs low) adherence to a Mediterranean-like diet (OR = 2.69, 95% CI = 1.14-6.80). Normal weight or overweight and waist circumference of 102 cm or less were also associated with independent aging. Similar associations were observed with survival. Lifestyle factors such as never smoking, maintaining a healthy diet, and not being obese at age 71 were associated with survival and independent aging at age 85 and older in men. © 2017, Copyright the Authors Journal compilation © 2017, The American Geriatrics Society.

  9. Partitioning the Uncertainty in Estimates of Mean Basal Area Obtained from 10-year Diameter Growth Model Predictions

    Treesearch

    Ronald E. McRoberts

    2005-01-01

    Uncertainty in model-based predictions of individual tree diameter growth is attributed to three sources: measurement error for predictor variables, residual variability around model predictions, and uncertainty in model parameter estimates. Monte Carlo simulations are used to propagate the uncertainty from the three sources through a set of diameter growth models to...

  10. Independent evaluation of the SNODAS snow depth product using regional-scale lidar-derived measurements

    NASA Astrophysics Data System (ADS)

    Hedrick, A.; Marshall, H.-P.; Winstral, A.; Elder, K.; Yueh, S.; Cline, D.

    2015-01-01

    Repeated light detection and ranging (lidar) surveys are quickly becoming the de facto method for measuring spatial variability of montane snowpacks at high resolution. This study examines the potential of a 750 km2 lidar-derived data set of snow depths, collected during the 2007 northern Colorado Cold Lands Processes Experiment (CLPX-2), as a validation source for an operational hydrologic snow model. The SNOw Data Assimilation System (SNODAS) model framework, operated by the US National Weather Service, combines a physically based energy-and-mass-balance snow model with satellite, airborne and automated ground-based observations to provide daily estimates of snowpack properties at nominally 1 km resolution over the conterminous United States. Independent validation data are scarce due to the assimilating nature of SNODAS, compelling the need for an independent validation data set with substantial geographic coverage. Within 12 distinctive 500 × 500 m study areas located throughout the survey swath, ground crews performed approximately 600 manual snow depth measurements during each of the CLPX-2 lidar acquisitions. This supplied a data set for constraining the uncertainty of upscaled lidar estimates of snow depth at the 1 km SNODAS resolution, resulting in a root-mean-square difference of 13 cm. Upscaled lidar snow depths were then compared to the SNODAS estimates over the entire study area for the dates of the lidar flights. The remotely sensed snow depths provided a more spatially continuous comparison data set and agreed more closely with the model estimates than did the in situ measurements alone. Finally, the results revealed three distinct areas where the differences between lidar observations and SNODAS estimates were most drastic, providing insight into the causal influences of natural processes on model uncertainty.
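    The root-mean-square difference used above to compare upscaled lidar depths against manual measurements is the standard formula; a minimal sketch (snow-depth values invented for illustration):

```python
import math

def rmsd(a, b):
    """Root-mean-square difference between two equal-length sequences."""
    if len(a) != len(b):
        raise ValueError("sequences must have equal length")
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)) / len(a))

# Hypothetical snow depths (cm): upscaled lidar vs. manual probe means.
lidar = [120.0, 95.0, 140.0]
manual = [110.0, 100.0, 130.0]
print(round(rmsd(lidar, manual), 1))  # → 8.7
```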

  11. Can Lagrangian models reproduce the migration time of European eel obtained from otolith analysis?

    NASA Astrophysics Data System (ADS)

    Rodríguez-Díaz, L.; Gómez-Gesteira, M.

    2017-12-01

    European eel can be found in the Bay of Biscay after a long migration across the Atlantic. The duration of migration, which takes place at the larval stage, is of primary importance to understand eel ecology and, hence, its survival. This duration is still a controversial matter since it can range from 7 months to > 4 years depending on the method used to estimate duration. The minimum migration duration estimated from our Lagrangian model is similar to the duration obtained from the microstructure of eel otoliths, which is typically on the order of 7-9 months. The Lagrangian model proved sensitive to different conditions such as spatial and temporal resolution, release depth, release area and initial distribution. In general, migration was faster when the release depth was decreased and the resolution of the model was increased. On average, the fastest migration was obtained when only advective horizontal movement was considered. However, in some cases even faster migration was obtained when locally oriented random migration was taken into account.

  12. Cetacean population density estimation from single fixed sensors using passive acoustics.

    PubMed

    Küsel, Elizabeth T; Mellinger, David K; Thomas, Len; Marques, Tiago A; Moretti, David; Ward, Jessica

    2011-06-01

    Passive acoustic methods are increasingly being used to estimate animal population density. Most density estimation methods are based on estimates of the probability of detecting calls as functions of distance. Typically these are obtained using receivers capable of localizing calls or from studies of tagged animals. However, both approaches are expensive to implement. The approach described here uses a Monte Carlo model to estimate the probability of detecting calls from single sensors. The passive sonar equation is used to predict signal-to-noise ratios (SNRs) of received clicks, which are then combined with a detector characterization that predicts probability of detection as a function of SNR. Input distributions for source level, beam pattern, and whale depth are obtained from the literature. Acoustic propagation modeling is used to estimate transmission loss. Other inputs for density estimation are call rate, obtained from the literature, and false positive rate, obtained from manual analysis of a data sample. The method is applied to estimate density of Blainville's beaked whales over a 6-day period around a single hydrophone located in the Tongue of the Ocean, Bahamas. Results are consistent with those from previous analyses, which use additional tag data. © 2011 Acoustical Society of America.
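    The Monte Carlo logic described above can be sketched with a simplified passive sonar equation, SNR = SL - TL - NL. The spherical-spreading loss, the source-level distribution, and the logistic detector curve below are generic placeholders, not the study's inputs (which use full propagation modeling and literature-derived distributions).

```python
import math
import random

def detection_probability(range_m, trials=10_000, seed=1):
    """Monte Carlo estimate of single-sensor click detection probability.

    Assumed placeholder model: SNR = SL - TL - NL with spherical-spreading
    transmission loss TL = 20*log10(r), Gaussian source level SL, constant
    noise level NL, and a logistic detector characterization P(detect|SNR).
    """
    rng = random.Random(seed)
    nl = 50.0                            # noise level, dB (assumed constant)
    tl = 20.0 * math.log10(range_m)      # spherical spreading loss, dB
    detected = 0
    for _ in range(trials):
        sl = rng.gauss(200.0, 10.0)      # source level draw, dB re 1 uPa
        snr = sl - tl - nl
        p = 1.0 / (1.0 + math.exp(-(snr - 10.0)))  # detector curve
        if rng.random() < p:
            detected += 1
    return detected / trials

# Detection probability falls off with range from the hydrophone.
print(detection_probability(100), detection_probability(10**7))
```

    Averaging such probabilities over the assumed distance distribution of animals gives the effective detection probability that density estimation then combines with call rate and false-positive rate.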

  13. Estimation in SEM: A Concrete Example

    ERIC Educational Resources Information Center

    Ferron, John M.; Hess, Melinda R.

    2007-01-01

    A concrete example is used to illustrate maximum likelihood estimation of a structural equation model with two unknown parameters. The fitting function is found for the example, as are the vector of first-order partial derivatives, the matrix of second-order partial derivatives, and the estimates obtained from each iteration of the Newton-Raphson…
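    A one-parameter analogue of the Newton-Raphson iteration described above can be shown with the exponential distribution standing in for the structural equation model; the data and function names are invented for illustration.

```python
# Newton-Raphson MLE for the rate of an exponential(lam) sample.
# Log-likelihood: l(lam) = n*ln(lam) - lam*sum(x); the iteration uses the
# first and second derivatives, mirroring the vector/matrix case in SEM.

def newton_mle_exponential(data, lam=1.0, tol=1e-10, max_iter=100):
    n = len(data)
    s = sum(data)
    for _ in range(max_iter):
        score = n / lam - s          # first-order derivative of l(lam)
        hessian = -n / lam ** 2      # second-order derivative of l(lam)
        step = score / hessian
        lam -= step                  # Newton-Raphson update
        if abs(step) < tol:
            break
    return lam

data = [0.5, 1.2, 0.8, 2.0, 1.5]
print(newton_mle_exponential(data))  # ≈ n / sum(data), the closed-form MLE
```

    In the SEM case the score becomes the vector of first-order partials and the Hessian the matrix of second-order partials, but the update has the same structure.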

  14. Validity, reliability and support for implementation of independence-scaled procedural assessment in laparoscopic surgery.

    PubMed

    Kramp, Kelvin H; van Det, Marc J; Veeger, Nic J G M; Pierie, Jean-Pierre E N

    2016-06-01

    There is no widely used method to evaluate procedure-specific laparoscopic skills. The first aim of this study was to develop a procedure-based assessment method. The second aim was to compare its validity, reliability and feasibility with currently available global rating scales (GRSs). An independence-scaled procedural assessment was created by linking the procedural key steps of the laparoscopic cholecystectomy to an independence scale. Subtitled and blinded videos of a novice, an intermediate and an almost competent trainee, were evaluated with GRSs (OSATS and GOALS) and the independence-scaled procedural assessment by seven surgeons, three senior trainees and six scrub nurses. Participants received a short introduction to the GRSs and independence-scaled procedural assessment before assessment. The validity was estimated with the Friedman and Wilcoxon test and the reliability with the intra-class correlation coefficient (ICC). A questionnaire was used to evaluate user opinion. Independence-scaled procedural assessment and GRS scores improved significantly with surgical experience (OSATS p = 0.001, GOALS p < 0.001, independence-scaled procedural assessment p < 0.001). The ICCs of the OSATS, GOALS and independence-scaled procedural assessment were 0.78, 0.74 and 0.84, respectively, among surgeons. The ICCs increased when the ratings of scrub nurses were added to those of the surgeons. The independence-scaled procedural assessment was not considered more of an administrative burden than the GRSs (p = 0.692). A procedural assessment created by combining procedural key steps with an independence scale is a valid, reliable and acceptable assessment instrument in surgery. In contrast to the GRSs, the reliability of the independence-scaled procedural assessment exceeded the threshold of 0.8, indicating that it can also be used for summative assessment. It furthermore seems that scrub nurses can assess the operative competence of surgical trainees.

  15. Applicability of the independence principle to subsonic turbulent flow over a swept rearward-facing step

    NASA Technical Reports Server (NTRS)

    Selby, G. V.

    1983-01-01

    Prandtl (1946) has concluded that for yawed laminar incompressible flows the streamwise flow is independent of the spanwise flow. However, Ashkenas and Riddell (1955) have reported that for turbulent flow the 'independence principle' does not apply to yawed flat plates. On the other hand, it was also found that this principle may be applicable to many turbulent flows. As the sweep angle is increased, a sweep angle is reached which defines the interval over which the 'independence principle' is valid. The results obtained in the present investigation indicate the magnitude of the critical angle for subsonic turbulent flow over a swept rearward-facing step.

  16. LACIE large area acreage estimation. [United States of America

    NASA Technical Reports Server (NTRS)

    Chhikara, R. S.; Feiveson, A. H. (Principal Investigator)

    1979-01-01

    A sample wheat acreage for a large area is obtained by multiplying its small grains acreage estimate as computed by the classification and mensuration subsystem by the best available ratio of wheat to small grains acreages obtained from historical data. In the United States, as in other countries with detailed historical data, an additional level of aggregation was required because sample allocation was made at the substratum level. The essential features of the estimation procedure for LACIE countries are included along with procedures for estimating wheat acreage in the United States.

  17. The Improved Estimation of Ratio of Two Population Proportions

    ERIC Educational Resources Information Center

    Solanki, Ramkrishna S.; Singh, Housila P.

    2016-01-01

    In this article, first we obtained the correct mean square error expression of Gupta and Shabbir's linear weighted estimator of the ratio of two population proportions. Later we suggested the general class of ratio estimators of two population proportions. The usual ratio estimator, Wynn-type estimator, Singh, Singh, and Kaur difference-type…

  18. Revised motion estimation algorithm for PROPELLER MRI.

    PubMed

    Pipe, James G; Gibbs, Wende N; Li, Zhiqiang; Karis, John P; Schar, Michael; Zwart, Nicholas R

    2014-08-01

    To introduce a new algorithm for estimating data shifts (used for both rotation and translation estimates) for motion-corrected PROPELLER MRI. The method estimates shifts for all blades jointly, emphasizing blade-pair correlations that are both strong and more robust to noise. The heads of three volunteers were scanned using a PROPELLER acquisition while they exhibited various amounts of motion. All data were reconstructed twice, using motion estimates from the original and new algorithm. Two radiologists independently and blindly compared 216 image pairs from these scans, ranking the left image as substantially better or worse than, slightly better or worse than, or equivalent to the right image. In the aggregate of 432 scores, the new method was judged substantially better than the old method 11 times, and was never judged substantially worse. The new algorithm compared favorably with the old in its ability to estimate bulk motion in a limited study of volunteer motion. A larger study of patients is planned for future work. Copyright © 2013 Wiley Periodicals, Inc.

  19. Ring profiler: a new method for estimating tree-ring density for improved estimates of carbon storage

    Treesearch

    David W. Vahey; C. Tim Scott; J.Y. Zhu; Kenneth E. Skog

    2012-01-01

    Methods for estimating present and future carbon storage in trees and forests rely on measurements or estimates of tree volume or volume growth multiplied by specific gravity. Wood density can vary by tree ring and height in a tree. If data on density by tree ring could be obtained and linked to tree size and stand characteristics, it would be possible to more...

  20. Characteristics of Rural Communities with a Sole, Independently Owned Pharmacy.

    PubMed

    Nattinger, Matthew; Ullrich, Fred; Mueller, Keith J

    2015-04-01

    Prior RUPRI Center policy briefs have described the role of rural pharmacies in providing many essential clinical services (in addition to prescription and nonprescription medications), such as blood pressure monitoring, immunizations, and diabetes counseling, and the adverse effects of Medicare Part D negotiated networks on the financial viability of rural pharmacies.1 Because rural pharmacies play such a broad role in health care delivery, pharmacy closures can sharply reduce access to essential health care services in rural and underserved communities. These closures are of particular concern in rural areas served by a sole, independently owned pharmacy (i.e., a pharmacy unaffiliated with a chain or franchise). This policy brief characterizes the population of rural areas served by a sole, independently owned pharmacy. Dependent on a sole pharmacy, these areas are at highest risk to lose access to many essential clinical services. Key Findings. (1) In 2014 over 2.7 million people lived in 663 rural communities served by a sole, independently owned pharmacy. (2) More than one-quarter of these residents (27.9 percent) were living below 150 percent of the federal poverty level. (3) Based on estimates from 2012, a substantial portion of the residents of these areas were dependent on public insurance (i.e., Medicare and/or Medicaid, 20.5 percent) or were uninsured (15.0 percent). (4) If the sole, independent retail pharmacy in these communities were to close, the next closest retail pharmacy would be over 10 miles away for a majority of rural communities (69.7 percent).

  1. Biomass Estimation for some Shrubs from Northeastern Minnesota

    Treesearch

    David F. Grigal; Lewis F. Ohmann

    1977-01-01

    Biomass prediction equations were developed for 23 northeastern Minnesota shrub species. The allometric function was used to predict leaf, current annual woody twig, stem, and total woody biomass (dry mass), using stem diameter class, estimated to the nearest 0.25 cm at 15 cm above ground level, as the independent variable.
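    Allometric (power-law) prediction equations of the kind described above reduce to ordinary least squares on log-transformed data, ln(biomass) = a + b·ln(diameter). A minimal sketch with invented diameter/biomass pairs:

```python
import math

def fit_allometric(diams_cm, biomass_g):
    """OLS fit of ln(biomass) = a + b*ln(diameter); returns (a, b)."""
    x = [math.log(d) for d in diams_cm]
    y = [math.log(m) for m in biomass_g]
    n = len(x)
    xbar = sum(x) / n
    ybar = sum(y) / n
    b = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y)) / \
        sum((xi - xbar) ** 2 for xi in x)
    a = ybar - b * xbar
    return a, b

def predict_biomass(a, b, diam_cm):
    """Back-transform the log-linear fit to predict biomass."""
    return math.exp(a + b * math.log(diam_cm))

# Invented data: biomass growing roughly as diameter**2.5.
diams = [1.0, 2.0, 4.0, 8.0]
biomass = [3.0, 17.0, 96.0, 540.0]
a, b = fit_allometric(diams, biomass)
print(predict_biomass(a, b, 3.0))
```

    Note that back-transforming from log space introduces a small downward bias; published allometric equations often apply a correction factor, omitted here.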

  2. Reexamination of optimal quantum state estimation of pure states

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hayashi, A.; Hashimoto, T.; Horibe, M.

    2005-09-15

    A direct derivation is given for the optimal mean fidelity of quantum state estimation of a d-dimensional unknown pure state with its N copies given as input, which was first obtained by Hayashi in terms of an infinite set of covariant positive operator valued measures (POVMs) and by Bruss and Macchiavello establishing a connection to optimal quantum cloning. An explicit condition for POVM measurement operators for optimal estimators is obtained, by which we construct optimal estimators with finite POVMs using exact quadratures on a hypersphere. These finite optimal estimators are not generally universal, where universality means the fidelity is independent of input states. However, any optimal estimator with finite POVM for M(>N) copies is universal if it is used for N copies as input.

  3. Estimating residual fault hitting rates by recapture sampling

    NASA Technical Reports Server (NTRS)

    Lee, Larry; Gupta, Rajan

    1988-01-01

    For the recapture debugging design introduced by Nayak (1988), the problem of estimating the hitting rates of the faults remaining in the system is considered. In the context of a conditional likelihood, moment estimators are derived and are shown to be asymptotically normal and fully efficient. Fixed sample properties of the moment estimators are compared, through simulation, with those of the conditional maximum likelihood estimators. Properties of the conditional model are investigated, such as the asymptotic distribution of linear functions of the fault hitting frequencies and a representation of the full data vector in terms of a sequence of independent random vectors. It is assumed that the residual hitting rates follow a log-linear rate model and that the testing process is truncated when the gaps between the detection of new errors exceed a fixed amount of time.

  4. Development and validation of a MRgHIFU non-invasive tissue acoustic property estimation technique.

    PubMed

    Johnson, Sara L; Dillon, Christopher; Odéen, Henrik; Parker, Dennis; Christensen, Douglas; Payne, Allison

    2016-11-01

    MR-guided high-intensity focussed ultrasound (MRgHIFU) non-invasive ablative surgeries have advanced into clinical trials for treating many pathologies and cancers. A remaining challenge of these surgeries is accurately planning and monitoring tissue heating in the face of patient-specific and dynamic acoustic properties of tissues. Currently, non-invasive measurements of acoustic properties have not been implemented in MRgHIFU treatment planning and monitoring procedures. This methods-driven study presents a technique using MR temperature imaging (MRTI) during low-temperature HIFU sonications to non-invasively estimate sample-specific acoustic absorption and speed of sound values in tissue-mimicking phantoms. Using measured thermal properties, specific absorption rate (SAR) patterns are calculated from the MRTI data and compared to simulated SAR patterns iteratively generated via the Hybrid Angular Spectrum (HAS) method. Once the error between the simulated and measured patterns is minimised, the estimated acoustic property values are compared to the true phantom values obtained via an independent technique. The estimated values are then used to simulate temperature profiles in the phantoms, and compared to experimental temperature profiles. This study demonstrates that trends in acoustic absorption and speed of sound can be non-invasively estimated with average errors of 21% and 1%, respectively. Additionally, temperature predictions using the estimated properties on average match within 1.2 °C of the experimental peak temperature rises in the phantoms. The positive results achieved in tissue-mimicking phantoms presented in this study indicate that this technique may be extended to in vivo applications, improving HIFU sonication temperature rise predictions and treatment assessment.

  5. Juvenile body mass estimation: A methodological evaluation.

    PubMed

    Cowgill, Libby

    2018-02-01

    Two attempts have been made to develop body mass prediction formulae specifically for immature remains: Ruff (Ruff, C.C., 2007, Body size prediction from juvenile skeletal remains. American Journal Physical Anthropology 133, 698-716) and Robbins et al. (Robbins, G., Sciulli, P.W., Blatt, S.H., 2010. Estimating body mass in subadult human skeletons. American Journal Physical Anthropology 143, 146-150). While both were developed from the same reference population, they differ in their independent variable selection: Ruff (2008) used measures of metaphyseal and articular surface size to predict body mass in immature remains, whereas Robbins et al. (2010) relied on cross-sectional properties. Both methods perform well on independent testing samples; however, differences between the two methods exist in the predicted values. This research evaluates the differences in the body mass estimates from these two methods in seven geographically diverse skeletal samples under the age of 18 (n = 461). The purpose of this analysis is not to assess which method performs with greater accuracy or precision; instead, differences between the two methods are used as a heuristic device to focus attention on the unique challenges affecting the prediction of immature body mass estimates in particular. The two methods differ by population only in some cases, which may be a reflection of activity variation or nutritional status. In addition, cross-sectional properties almost always produce higher estimates than metaphyseal surface size across all age categories. This highlights the difficulty in teasing apart information related to body mass from that relevant to loading, particularly when the original reference population is urban/industrial. Copyright © 2017 Elsevier Ltd. All rights reserved.

  6. Acute health impacts of airborne particles estimated from satellite remote sensing.

    PubMed

    Wang, Zhaoxi; Liu, Yang; Hu, Mu; Pan, Xiaochuan; Shi, Jing; Chen, Feng; He, Kebin; Koutrakis, Petros; Christiani, David C

    2013-01-01

    Satellite-based remote sensing provides a unique opportunity to monitor air quality from space at global, continental, national and regional scales. Most current research has focused on developing empirical models using ground measurements of ambient particulate matter. However, the application of satellite-based exposure assessment in environmental health is still limited, especially for acute effects, because the development of satellite PM(2.5) models depends on the availability of ground measurements. We tested the hypothesis that MODIS AOD (aerosol optical depth) exposure estimates, obtained from NASA satellites, are directly associated with daily health outcomes. Three independent healthcare databases were used: unscheduled outpatient visits, hospital admissions, and mortality collected in the Beijing metropolitan area, China, during 2006. We used generalized linear models to compare the short-term effects of air pollution assessed by ground monitoring (PM(10)) with adjustment for absolute humidity (AH) and by AH-calibrated AOD. Across all databases we found that both AH-calibrated AOD and PM(10) (adjusted by AH) were consistently associated with elevated daily events on the current day and/or lag days for cardiovascular diseases, ischemic heart diseases, and COPD. The relative risks estimated by AH-calibrated AOD and PM(10) (adjusted by AH) were similar. Additionally, compared to ground PM(10), we found that AH-calibrated AOD had narrower confidence intervals for all models and was more robust in estimating the current day and lag day effects. Our preliminary findings suggested that, with proper adjustment of meteorological factors, satellite AOD can be used directly to estimate the acute health impacts of ambient particles without prior calibration to the sparse ground monitoring networks. Copyright © 2012 Elsevier Ltd. All rights reserved.
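Relative risks from a log-linear GLM of the kind used above can be computed directly from the fitted coefficient. A minimal sketch, assuming a model of the form log E[Y] = alpha + beta * X; the coefficient and standard error below are hypothetical, not values from the study.

```python
import math

def relative_risk(beta, increment):
    """Relative risk for a given exposure increment under a log-linear
    (Poisson-family GLM) model: log E[Y] = alpha + beta * X."""
    return math.exp(beta * increment)

def rr_confidence_interval(beta, se, increment, z=1.96):
    """Approximate 95% confidence interval for the relative risk."""
    lo = math.exp((beta - z * se) * increment)
    hi = math.exp((beta + z * se) * increment)
    return lo, hi

# Hypothetical coefficient per unit of AH-calibrated AOD-derived exposure
beta, se = 0.0008, 0.0002
rr = relative_risk(beta, 10)              # RR per 10-unit increment
ci = rr_confidence_interval(beta, se, 10)
```

The "narrower confidence intervals" reported for AOD correspond to a smaller `se` here, which tightens `ci` around the same relative risk.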

  7. Stroke Location Is an Independent Predictor of Cognitive Outcome.

    PubMed

    Munsch, Fanny; Sagnier, Sharmila; Asselineau, Julien; Bigourdan, Antoine; Guttmann, Charles R; Debruxelles, Sabrina; Poli, Mathilde; Renou, Pauline; Perez, Paul; Dousset, Vincent; Sibon, Igor; Tourdias, Thomas

    2016-01-01

    Beyond functional outcome, accurate prediction of cognitive outcome for stroke patients is an unmet need with major implications for clinical management. We investigated whether stroke location may contribute independent prognostic value to multifactorial predictive models of functional and cognitive outcomes. Four hundred twenty-eight consecutive patients with ischemic stroke were prospectively assessed with magnetic resonance imaging at 24 to 72 hours and at 3 months for functional outcome using the modified Rankin Scale and cognitive outcome using the Montreal Cognitive Assessment (MoCA). Statistical maps of functional and cognitive eloquent regions were derived from the first 215 patients (development sample) using voxel-based lesion-symptom mapping. We used multivariate logistic regression models to study the influence of stroke location (number of eloquent voxels from voxel-based lesion-symptom mapping maps), age, initial National Institutes of Health Stroke Scale and stroke volume on modified Rankin Scale and MoCA. The second part of our cohort was used as an independent replication sample. In univariate analyses, stroke location, age, initial National Institutes of Health Stroke Scale, and stroke volume were all predictive of poor modified Rankin Scale and MoCA. In multivariable analyses, stroke location remained the strongest independent predictor of MoCA and significantly improved the prediction compared with using only age, initial National Institutes of Health Stroke Scale, and stroke volume (area under the curve increased from 0.697 to 0.771; difference=0.073; 95% confidence interval, 0.008-0.155). In contrast, stroke location did not persist as an independent predictor of modified Rankin Scale, which was mainly driven by initial National Institutes of Health Stroke Scale (area under the curve going from 0.840 to 0.835). Similar results were obtained in the replication sample. Stroke location is an independent predictor of cognitive outcome (MoCA) at 3 months.

  8. Estimating North American background ozone in U.S. surface air with two independent global models: Variability, uncertainties, and recommendations

    EPA Science Inventory

    Accurate estimates for North American background (NAB) ozone (O3) in surface air over the United States are needed for setting and implementing an attainable national O3 standard. These estimates rely on simulations with atmospheric chemistry-transport models that set North Amer...

  9. Noise normalization and windowing functions for VALIDAR in wind parameter estimation

    NASA Astrophysics Data System (ADS)

    Beyon, Jeffrey Y.; Koch, Grady J.; Li, Zhiwen

    2006-05-01

    The wind parameter estimates from a state-of-the-art 2-μm coherent lidar system located at NASA Langley, Virginia, named VALIDAR (validation lidar), were compared after normalizing the noise by its estimated power spectra via the periodogram and the linear predictive coding (LPC) scheme. The power spectra and the Doppler shift estimates were the main parameter estimates for comparison. Different types of windowing functions were implemented in the VALIDAR data processing algorithm and their impact on the wind parameter estimates was observed. Time- and frequency-independent windowing functions such as the Rectangular, Hanning, and Kaiser-Bessel windows were compared with a time- and frequency-dependent apodized windowing function. A briefing on current nonlinear algorithm development for Doppler shift correction follows.

  10. Efficient reporting of the estimated glomerular filtration rate without height in pediatric patients with cancer.

    PubMed

    Jeong, Tae-Dong; Cho, Eun-Jung; Lee, Woochang; Chun, Sail; Hong, Ki-Sook; Min, Won-Ki

    2017-10-26

    The updated bedside Schwartz equation requires a constant, the serum creatinine concentration, and the patient's height to calculate the estimated glomerular filtration rate (eGFR) in pediatric patients. Unlike serum creatinine levels, height information is not always available from the laboratory information system (LIS) in a clinical laboratory. Recently, a height-independent eGFR equation, the full age spectrum (FAS) equation, has been introduced. We evaluated the performance of the height-independent eGFR equation in Korean children with cancer. A total of 250 children who underwent chromium-51-ethylenediaminetetraacetic acid (51Cr-EDTA)-based glomerular filtration rate (GFR) measurements were enrolled. The 51Cr-EDTA GFR was used as the reference GFR. The bias (eGFR - measured GFR), precision (root mean square error [RMSE]) and accuracy (P30) of the FAS equation were compared to those of the updated Schwartz equation. P30 was defined as the percentage of patients whose eGFR was within ±30% of the measured GFR. The FAS equation showed significantly lower bias (mL/min/1.73 m2) than the updated Schwartz equation (4.2 vs. 8.7, p<0.001). The RMSE and P30 were 43.8 and 64.4%, respectively, for the updated Schwartz equation, and 42.7 and 66.8%, respectively, for the FAS equation. The height-independent eGFR-FAS equation was less biased than and as accurate as the updated Schwartz equation in Korean children. The use of the height-independent eGFR equation will allow for efficient reporting of eGFR through the LIS in clinical laboratories.
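The evaluation metrics used above (bias, RMSE, and P30) are straightforward to compute; a minimal sketch with toy values (not the study data):

```python
import math

def egfr_metrics(estimated, measured):
    """Bias, RMSE and P30 for eGFR equation evaluation.
    P30 = percentage of patients whose eGFR is within +/-30%
    of the measured (reference) GFR."""
    n = len(estimated)
    diffs = [e - m for e, m in zip(estimated, measured)]
    bias = sum(diffs) / n
    rmse = math.sqrt(sum(d * d for d in diffs) / n)
    p30 = 100.0 * sum(abs(e - m) <= 0.30 * m
                      for e, m in zip(estimated, measured)) / n
    return bias, rmse, p30

# Toy example (mL/min/1.73 m^2), illustrative only
measured  = [100.0, 80.0, 60.0, 120.0]
estimated = [110.0, 75.0, 90.0, 125.0]
bias, rmse, p30 = egfr_metrics(estimated, measured)
```

Note that bias is signed (systematic over- or under-estimation), while RMSE and P30 capture overall precision and accuracy.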

  11. Constructing a cosmological model-independent Hubble diagram of type Ia supernovae with cosmic chronometers

    NASA Astrophysics Data System (ADS)

    Li, Zhengxiang; Gonzalez, J. E.; Yu, Hongwei; Zhu, Zong-Hong; Alcaniz, J. S.

    2016-02-01

    We apply two methods, i.e., the Gaussian processes and the nonparametric smoothing procedure, to reconstruct the Hubble parameter H (z ) as a function of redshift from 15 measurements of the expansion rate obtained from age estimates of passively evolving galaxies. These reconstructions enable us to derive the luminosity distance to a certain redshift z , calibrate the light-curve fitting parameters accounting for the (unknown) intrinsic magnitude of type Ia supernova (SNe Ia), and construct cosmological model-independent Hubble diagrams of SNe Ia. In order to test the compatibility between the reconstructed functions of H (z ), we perform a statistical analysis considering the latest SNe Ia sample, the so-called joint light-curve compilation. We find that, for the Gaussian processes, the reconstructed functions of Hubble parameter versus redshift, and thus the following analysis on SNe Ia calibrations and cosmological implications, are sensitive to prior mean functions. However, for the nonparametric smoothing method, the reconstructed functions are not dependent on initial guess models, and consistently require high values of H0, which are in excellent agreement with recent measurements of this quantity from Cepheids and other local distance indicators.
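A minimal sketch of the Gaussian-process reconstruction step, assuming a squared-exponential kernel and a zero prior mean function (the abstract notes that results are sensitive to this prior mean choice). The H(z) points and errors below are illustrative, not the actual cosmic-chronometer measurements.

```python
import numpy as np

def gp_reconstruct(z_obs, H_obs, sigma, z_star, ell=1.0, sf=100.0):
    """Gaussian-process regression with a squared-exponential kernel
    and zero prior mean: reconstruct H(z) at the points z_star."""
    def k(a, b):
        return sf**2 * np.exp(-0.5 * (a[:, None] - b[None, :])**2 / ell**2)
    K = k(z_obs, z_obs) + np.diag(sigma**2)   # kernel + measurement noise
    Ks = k(z_star, z_obs)
    mean = Ks @ np.linalg.solve(K, H_obs)
    cov = k(z_star, z_star) - Ks @ np.linalg.solve(K, Ks.T)
    return mean, np.sqrt(np.clip(np.diag(cov), 0, None))

# Illustrative H(z) data (km/s/Mpc) with uncertainties
z_obs = np.array([0.1, 0.4, 0.9, 1.3, 1.75])
H_obs = np.array([69.0, 83.0, 104.0, 128.0, 155.0])
sigma = np.array([5.0, 6.0, 8.0, 10.0, 14.0])
mean, std = gp_reconstruct(z_obs, H_obs, sigma, np.array([0.5]))
```

The reconstructed mean and its uncertainty band are what feed into the luminosity-distance integral and the SNe Ia light-curve calibration.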

  12. Method of estimating flood-frequency parameters for streams in Idaho

    USGS Publications Warehouse

    Kjelstrom, L.C.; Moffatt, R.L.

    1981-01-01

    Skew coefficients for the log-Pearson type III distribution are generalized on the basis of some similarity of floods in the Snake River basin and other parts of Idaho. Generalized skew coefficients aid in shaping flood-frequency curves because skew coefficients computed from gaging stations having relatively short periods of peak flow records can be unreliable. Generalized skew coefficients can be obtained for a gaging station from one of three maps in this report. The map to be used depends on whether (1) snowmelt floods are dominant (generally when more than 20 percent of the drainage area is above 6,000 feet altitude), (2) rainstorm floods are dominant (generally when the mean altitude is less than 3,000 feet), or (3) either snowmelt or rainstorm floods can be the annual maximum discharge. For the latter case, frequency curves constructed using separate arrays of each type of runoff can be combined into one curve, which, for some stations, is significantly different from the frequency curve constructed using only annual maximum discharges. For 269 gaging stations, flood-frequency curves that include the generalized skew coefficients in the computation of the log-Pearson type III equation tend to fit the data better than previous analyses. Frequency curves for ungaged sites can be derived by estimating three statistics of the log-Pearson type III distribution. The mean and standard deviation of logarithms of annual maximum discharges are estimated by regression equations that use basin characteristics as independent variables. Skew coefficient estimates are the generalized skews. The log-Pearson type III equation is then applied with the three estimated statistics to compute the discharge at selected exceedance probabilities. Standard errors at the 2-percent exceedance probability range from 41 to 90 percent. (USGS)
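The final step (applying the log-Pearson type III equation with the three estimated statistics) can be sketched as follows. This is a minimal illustration using the Wilson-Hilferty approximation for the frequency factor K, one common approximation, not necessarily the one used in the report; the station statistics are hypothetical.

```python
from statistics import NormalDist

def lp3_discharge(mean_log, std_log, skew, exceed_prob):
    """Log-Pearson Type III quantile: log10 Q = mean + K * std, with the
    frequency factor K from the Wilson-Hilferty approximation."""
    z = NormalDist().inv_cdf(1.0 - exceed_prob)  # standard normal variate
    if abs(skew) < 1e-9:
        k = z                                    # zero skew: lognormal case
    else:
        g = skew
        k = (2.0 / g) * ((1.0 + g * z / 6.0 - g**2 / 36.0) ** 3 - 1.0)
    return 10.0 ** (mean_log + k * std_log)

# Hypothetical station statistics (log10 of annual peak discharge)
q100 = lp3_discharge(mean_log=3.5, std_log=0.25, skew=0.2,
                     exceed_prob=0.01)          # 1% exceedance (100-year flood)
```

For an ungaged site, `mean_log` and `std_log` would come from the regression equations on basin characteristics, and `skew` from the generalized skew maps.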

  13. Application of a hybrid model to reduce bias and improve precision in population estimates for elk (Cervus elaphus) inhabiting a cold desert ecosystem

    USGS Publications Warehouse

    Schoenecker, Kathryn A.; Lubow, Bruce C.

    2016-01-01

    Accurately estimating the size of wildlife populations is critical to wildlife management and conservation of species. Raw counts or “minimum counts” are still used as a basis for wildlife management decisions. Uncorrected raw counts are not only negatively biased due to failure to account for undetected animals, but also provide no estimate of precision on which to judge the utility of counts. We applied a hybrid population estimation technique that combined sightability modeling, radio collar-based mark-resight, and simultaneous double count (double-observer) modeling to estimate the population size of elk in a high elevation desert ecosystem. Combining several models maximizes the strengths of each individual model while minimizing their singular weaknesses. We collected data with aerial helicopter surveys of the elk population in the San Luis Valley and adjacent mountains in Colorado, USA, in 2005 and 2007. We present estimates from 7 alternative analyses: 3 based on different methods for obtaining a raw count and 4 based on different statistical models to correct for sighting probability bias. The most reliable of these approaches is a hybrid double-observer sightability model (model MH), which uses detection patterns of 2 independent observers in a helicopter plus telemetry-based detections of radio collared elk groups. Data were fit to customized mark-resight models with individual sighting covariates. Error estimates were obtained by a bootstrapping procedure. The hybrid method was an improvement over commonly used alternatives, with improved precision compared to sightability modeling and reduced bias compared to double-observer modeling. The resulting population estimate corrected for multiple sources of undercount bias that, if left uncorrected, would have underestimated the true population size by as much as 22.9%. Our comparison of these alternative methods demonstrates how various components of our method contribute to improving the final
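The double-observer component can be illustrated with the classical two-observer (Lincoln-Petersen style) estimator, in which the overlap between observers yields detection probabilities and a corrected abundance. This is a simplified sketch of one ingredient of the hybrid model, not the full model MH, and the counts are invented.

```python
def double_observer_estimate(seen_only_1, seen_only_2, seen_both):
    """Two-observer abundance estimator: detection probabilities are
    inferred from the overlap between the two independent observers."""
    x1, x2, b = seen_only_1, seen_only_2, seen_both
    p1 = b / (x2 + b)                 # observer 1 detection probability
    p2 = b / (x1 + b)                 # observer 2 detection probability
    n_hat = (x1 + b) * (x2 + b) / b   # corrected abundance estimate
    return n_hat, p1, p2

# Toy survey: groups seen by observer 1 only, observer 2 only, and both
n_hat, p1, p2 = double_observer_estimate(20, 10, 70)
```

Note that the corrected estimate exceeds the raw count of 100 groups, which is exactly the undercount bias the abstract warns about when raw or "minimum" counts are used directly.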

  14. Estimation of Ecosystem Parameters of the Community Land Model with DREAM: Evaluation of the Potential for Upscaling Net Ecosystem Exchange

    NASA Astrophysics Data System (ADS)

    Hendricks Franssen, H. J.; Post, H.; Vrugt, J. A.; Fox, A. M.; Baatz, R.; Kumbhar, P.; Vereecken, H.

    2015-12-01

    Estimation of net ecosystem exchange (NEE) by land surface models is strongly affected by uncertain ecosystem parameters and initial conditions. A possible approach is the estimation of plant functional type (PFT) specific parameters for sites with measurement data like NEE and application of the parameters at other sites with the same PFT and no measurements. This upscaling strategy was evaluated in this work for sites in Germany and France. Ecosystem parameters and initial conditions were estimated with NEE time series of one year in length, or of only one season. The DREAM(zs) algorithm was used for the estimation of parameters and initial conditions. DREAM(zs) is not limited to Gaussian distributions and can condition to large time series of measurement data simultaneously. DREAM(zs) was used in combination with the Community Land Model (CLM) v4.5. Parameter estimates were evaluated by model predictions at the same site for an independent verification period. In addition, the parameter estimates were evaluated at other, independent sites situated >500km away with the same PFT. The main conclusions are: i) simulations with estimated parameters better reproduced the NEE measurement data in the verification periods, including the annual NEE-sum (23% improvement), annual NEE-cycle and average diurnal NEE course (error reduction by a factor of 1.6); ii) estimated parameters based on seasonal NEE-data outperformed estimated parameters based on yearly data; iii) those seasonal parameters were often also significantly different from their yearly equivalents; iv) estimated parameters were significantly different if initial conditions were estimated together with the parameters. We conclude that estimated PFT-specific parameters improve land surface model predictions significantly at independent verification sites and for independent verification periods so that their potential for upscaling is demonstrated. However, simulation results also indicate
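DREAM(zs) belongs to the family of Markov chain Monte Carlo samplers that condition model parameters on time series of observations. A minimal random-walk Metropolis sketch of that family is shown below; DREAM(zs) itself is a far more elaborate multi-chain, adaptive sampler, and the one-parameter "NEE" model and data here are entirely synthetic.

```python
import numpy as np

def metropolis(log_post, x0, steps, scale, rng):
    """Minimal random-walk Metropolis sampler: accept a proposed move
    with probability min(1, posterior ratio)."""
    chain = [x0]
    lp = log_post(x0)
    for _ in range(steps):
        cand = chain[-1] + rng.normal(0, scale)
        lpc = log_post(cand)
        if np.log(rng.uniform()) < lpc - lp:
            chain.append(cand)
            lp = lpc
        else:
            chain.append(chain[-1])
    return np.array(chain)

# Synthetic "NEE-like" observations with one unknown amplitude parameter
rng = np.random.default_rng(3)
t = np.linspace(0, 6, 200)
theta_true = 2.0
obs = theta_true * np.sin(t) + rng.normal(0, 0.2, t.size)

def log_post(theta):
    """Gaussian likelihood with flat prior (up to a constant)."""
    resid = obs - theta * np.sin(t)
    return -0.5 * np.sum(resid ** 2) / 0.2 ** 2

chain = metropolis(log_post, 0.0, 3000, 0.1, rng)
estimate = chain[1500:].mean()   # posterior mean after burn-in
```

Conditioning on a season versus a full year corresponds to changing which observations enter `log_post`, which is why the abstract finds seasonal and yearly parameter estimates can differ.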

  15. Affected States soft independent modeling by class analogy from the relation between independent variables, number of independent variables and sample size.

    PubMed

    Kanık, Emine Arzu; Temel, Gülhan Orekici; Erdoğan, Semra; Kaya, Irem Ersöz

    2013-03-01

    The aim of this study is to introduce the method of Soft Independent Modeling of Class Analogy (SIMCA) and to determine whether the method is affected by the number of independent variables, the relationship between variables, and the sample size. Simulation study. The SIMCA model is performed in two stages. Simulations were run to determine whether the method is influenced by the number of independent variables, the relationship between variables, and the sample size. The conditions considered were: equal sample sizes in both groups of 30, 100 and 1000; 2, 3, 5, 10, 50 and 100 variables; and relationships between variables that were quite high, medium, or quite low. The average classification accuracy of the simulations, which were repeated 1000 times for each condition of the trial plan, is given in tables. Diagnostic accuracy is seen to increase as the number of independent variables increases. SIMCA is thus a method suited to data in which the relationships between variables are quite high, the independent variables are many in number, and outlier values are present.
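A minimal sketch of SIMCA-style classification, assuming a principal-component model per class and assignment by smallest residual distance to the class subspace; the two-class data, dimensionality, and component count are illustrative.

```python
import numpy as np

def fit_class_model(X, n_components):
    """SIMCA-style class model: class mean plus the leading principal
    components of the class data (via SVD)."""
    mu = X.mean(axis=0)
    _, _, vt = np.linalg.svd(X - mu, full_matrices=False)
    return mu, vt[:n_components]

def residual_distance(x, model):
    """Squared residual of x after projection onto the class subspace."""
    mu, pcs = model
    d = x - mu
    return float(d @ d - (pcs @ d) @ (pcs @ d))

def classify(x, models):
    """Assign x to the class whose model leaves the smallest residual."""
    return min(models, key=lambda c: residual_distance(x, models[c]))

# Two synthetic classes, one PCA component each
rng = np.random.default_rng(2)
A = rng.normal([0, 0, 0], 0.3, (100, 3))
B = rng.normal([3, 3, 3], 0.3, (100, 3))
models = {"A": fit_class_model(A, 1), "B": fit_class_model(B, 1)}
label = classify(np.array([2.9, 3.1, 3.0]), models)
```

Full SIMCA additionally sets class-specific critical limits on the residual (an F-test on the Q statistic), so a sample can also be rejected by every class; that stage is omitted here.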

  16. Evaluating Remotely-Sensed Surface Soil Moisture Estimates Using Triple Collocation

    USDA-ARS?s Scientific Manuscript database

    Recent work has demonstrated the potential of enhancing remotely-sensed surface soil moisture validation activities through the application of triple collocation techniques which compare time series of three mutually independent geophysical variable estimates in order to acquire the root-mean-square...
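The classical triple collocation estimator recovers the error variance of each of the three mutually independent products from their pairwise covariances. A minimal sketch with synthetic data of known noise levels (the product labels are illustrative):

```python
import numpy as np

def triple_collocation(x, y, z):
    """Classical triple collocation: error variances of three mutually
    independent estimates of the same geophysical variable, from the
    covariance matrix of the three time series."""
    c = np.cov(np.vstack([x, y, z]))
    ex2 = c[0, 0] - c[0, 1] * c[0, 2] / c[1, 2]
    ey2 = c[1, 1] - c[0, 1] * c[1, 2] / c[0, 2]
    ez2 = c[2, 2] - c[0, 2] * c[1, 2] / c[0, 1]
    return ex2, ey2, ez2

# Synthetic check: a common truth plus independent noise of known variance
rng = np.random.default_rng(1)
truth = rng.normal(0.25, 0.05, 20000)           # e.g. soil moisture (m3/m3)
x = truth + rng.normal(0, 0.02, truth.size)     # "satellite" product
y = truth + rng.normal(0, 0.03, truth.size)     # "model" product
z = truth + rng.normal(0, 0.04, truth.size)     # "in situ" product
ex2, ey2, ez2 = triple_collocation(x, y, z)
```

The key assumptions are zero cross-correlation between the three error series and errors uncorrelated with the truth; violations (e.g. two products sharing forcing data) bias the recovered variances.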

  17. Online Kinematic and Dynamic-State Estimation for Constrained Multibody Systems Based on IMUs

    PubMed Central

    Torres-Moreno, José Luis; Blanco-Claraco, José Luis; Giménez-Fernández, Antonio; Sanjurjo, Emilio; Naya, Miguel Ángel

    2016-01-01

    This article addresses the problem of online estimation of the kinematic and dynamic states of a mechanism from a sequence of noisy measurements. In particular, we focus on a planar four-bar linkage equipped with inertial measurement units (IMUs). Firstly, we describe how the position, velocity, and acceleration of all parts of the mechanism can be derived from IMU signals by means of multibody kinematics. Next, we propose the novel idea of integrating the generic multibody dynamic equations into two variants of Kalman filtering, i.e., the extended Kalman filter (EKF) and the unscented Kalman filter (UKF), in a way that enables us to handle closed-loop, constrained mechanisms, whose state space variables are not independent and would normally prevent the direct use of such estimators. The proposal in this work is to apply those estimators over the manifolds of allowed positions and velocities, by means of estimating a subset of independent coordinates only. The proposed techniques are experimentally validated on a testbed equipped with encoders as a means of establishing the ground-truth. Estimators are run online in real-time, a feature not matched by any previous procedure of those reported in the literature on multibody dynamics. PMID:26959027

  18. EFFECTS OF BIASES IN VIRIAL MASS ESTIMATION ON COSMIC SYNCHRONIZATION OF QUASAR ACCRETION

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Steinhardt, Charles L.

    2011-09-01

    Recent work using virial mass estimates and the quasar mass-luminosity plane has yielded several new puzzles regarding quasar accretion, including a sub-Eddington boundary (SEB) on most quasar accretion, near-independence of the accretion rate from properties of the host galaxy, and a cosmic synchronization of accretion among black holes of a common mass. We consider how these puzzles might change if virial mass estimation turns out to have a systematic bias. As examples, we consider two recent claims of mass-dependent biases in Mg II masses. Under any such correction, the surprising cosmic synchronization of quasar accretion rates and independence from the host galaxy remain. The slope and location of the SEB are very sensitive to biases in virial mass estimation, and various mass calibrations appear to favor different possible physical explanations for feedback between the central black hole and its environment. The alternative mass estimators considered do not simply remove puzzling quasar behavior, but rather replace it with new puzzles that may be more difficult to solve than those using current virial mass estimators and the Shen et al. catalog.

  19. Using Correlation to Compute Better Probability Estimates in Plan Graphs

    NASA Technical Reports Server (NTRS)

    Bryce, Daniel; Smith, David E.

    2006-01-01

    Plan graphs are commonly used in planning to help compute heuristic "distance" estimates between states and goals. A few authors have also attempted to use plan graphs in probabilistic planning to compute estimates of the probability that propositions can be achieved and actions can be performed. This is done by propagating probability information forward through the plan graph from the initial conditions through each possible action to the action effects, and hence to the propositions at the next layer of the plan graph. The problem with these calculations is that they make very strong independence assumptions - in particular, they usually assume that the preconditions for each action are independent of each other. This can lead to gross overestimates in probability when the plans for those preconditions interfere with each other. It can also lead to gross underestimates of probability when there is synergy between the plans for two or more preconditions. In this paper we introduce a notion of the binary correlation between two propositions and actions within a plan graph, show how to propagate this information within a plan graph, and show how this improves probability estimates for planning. This notion of correlation can be thought of as a continuous generalization of the notion of mutual exclusion (mutex) often used in plan graphs. At one extreme (correlation = 0) two propositions or actions are completely mutex. With correlation = 1, two propositions or actions are independent, and with correlation > 1, two propositions or actions are synergistic. Intermediate values can and do occur, indicating different degrees to which propositions and actions interfere or are synergistic. We compare this approach with another recent approach by Bryce that computes probability estimates using Monte Carlo simulation of possible worlds in plan graphs.
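The correlation idea can be sketched as follows: the correlation is the ratio of the joint probability to the independent product, and the joint probability of two preconditions is then recovered as the correlation-weighted product. `joint_prob` is a hypothetical helper (not the paper's propagation rules) that clips the corrected product to the valid probability range.

```python
def correlation(p_ab, p_a, p_b):
    """Binary correlation between two propositions:
    0 = mutex, 1 = independent, >1 = synergistic."""
    return p_ab / (p_a * p_b)

def joint_prob(p_a, p_b, corr):
    """Joint probability under the correlation model, clipped so it
    never exceeds either marginal."""
    return min(corr * p_a * p_b, min(p_a, p_b))

# Independent preconditions: corr = 1 recovers the naive product
p_indep = joint_prob(0.6, 0.5, 1.0)
# Interfering plans (corr < 1) lower the estimate; mutex drives it to 0
p_interf = joint_prob(0.6, 0.5, 0.5)
p_mutex = joint_prob(0.6, 0.5, 0.0)
```

Propagating these pairwise correlations layer by layer is what lets the plan graph avoid both the overestimates (interference treated as independence) and the underestimates (ignored synergy) described above.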

  20. Forage quantity estimation from MERIS using band depth parameters

    NASA Astrophysics Data System (ADS)

    Ullah, Saleem; Yali, Si; Schlerf, Martin

    Forage quantity is an important factor influencing the feeding patterns and distribution of wildlife. The main objective of this study was to evaluate the predictive performance of vegetation indices and band depth analysis parameters for the estimation of green biomass using MERIS data. Green biomass was best predicted by the NBDI (normalized band depth index), which yielded a calibration R2 of 0.73 and an accuracy (independent validation dataset, n=30) of 136.2 g/m2 (47% of the measured mean), compared to a much lower accuracy obtained by the soil adjusted vegetation index SAVI (444.6 g/m2, 154% of the mean) and by other vegetation indices. This study will contribute to mapping and monitoring foliar biomass over the year at regional scale, which in turn can aid the understanding of bird migration patterns. Keywords: Biomass, Nitrogen density, Nitrogen concentration, Vegetation indices, Band depth analysis parameters 1 Faculty of Geo-Information Science and Earth Observation (ITC), University of Twente, The Netherlands
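The two families of predictors compared above can be sketched side by side: SAVI from two bands, and continuum-removed band depths from a reflectance spectrum. This is a minimal illustration; the exact NBDI formulation used in the study is not reproduced here, and the toy spectrum is invented.

```python
import numpy as np

def savi(nir, red, L=0.5):
    """Soil Adjusted Vegetation Index."""
    return (1.0 + L) * (nir - red) / (nir + red + L)

def band_depth(wavelengths, reflectance, left, right):
    """Continuum-removed band depth, 1 - R/Rc, where the continuum Rc
    is a straight line between the absorption-feature shoulders."""
    wl = np.asarray(wavelengths, float)
    r = np.asarray(reflectance, float)
    rl = np.interp(left, wl, r)
    rr = np.interp(right, wl, r)
    continuum = rl + (rr - rl) * (wl - left) / (right - left)
    inside = (wl >= left) & (wl <= right)
    return 1.0 - r[inside] / continuum[inside]

# Toy absorption feature between two shoulder wavelengths (nm)
wl = np.array([660.0, 680.0, 700.0, 720.0, 740.0])
refl = np.array([0.30, 0.18, 0.15, 0.22, 0.32])
depths = band_depth(wl, refl, 660.0, 740.0)
v = savi(nir=0.45, red=0.12)
```

Band depth parameters exploit the shape of an absorption feature rather than a two-band ratio, which is one plausible reason the band-depth predictor outperformed SAVI for biomass in this study.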