Sample records for yield significant errors

  1. Statistical rice yield modeling using blended MODIS-Landsat based crop phenology metrics in Taiwan

    NASA Astrophysics Data System (ADS)

    Chen, C. R.; Chen, C. F.; Nguyen, S. T.; Lau, K. V.

    2015-12-01

    Taiwan is a densely populated island with the majority of residents settled in the western plains, where soils are suitable for rice cultivation. Rice is not only the most important commodity but also plays a critical role in agricultural and food marketing. Information on rice production is thus important for policymakers devising timely plans to ensure sustainable socioeconomic development. Because rice fields in Taiwan are generally small, and crop monitoring requires phenology information matched to the spatiotemporal resolution of satellite data, this study used Landsat-MODIS fusion data for rice yield modeling in Taiwan. We processed the data for the first crop (Feb-Mar to Jun-Jul) and the second crop (Aug-Sep to Nov-Dec) of 2014 through five main steps: (1) data pre-processing to account for geometric and radiometric errors in the Landsat data, (2) Landsat-MODIS data fusion using the spatial-temporal adaptive reflectance fusion model, (3) construction of a smooth time series of the enhanced vegetation index 2 (EVI2), (4) rice yield modeling using EVI2-based crop phenology metrics, and (5) error verification. A comparison between EVI2 derived from the fusion image and EVI2 from the reference Landsat image indicated close agreement between the two datasets (R2 > 0.8). We analysed the smooth EVI2 curves to extract phenology metrics, or phenological variables, for establishing rice yield models. The established yield models explained more than 70% of the variability in the data (p-value < 0.001). Comparison of the estimated yields with the government's yield statistics for the first and second crops indicated a close, significant relationship between the two datasets (R2 > 0.8) in both cases. The root mean square error (RMSE) and mean absolute error (MAE) used to measure model accuracy confirmed the consistency between the estimated yields and the government's yield statistics. This study demonstrates the advantages of using EVI2-based phenology metrics derived from Landsat-MODIS fusion data for rice yield estimation in Taiwan prior to the harvest period.
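
    As a reminder of the accuracy metrics used here, a minimal sketch with hypothetical yield values (not the study's data) computes RMSE and MAE between estimated and official yields:

      import numpy as np

      # Hypothetical estimated yields and government statistics (t/ha); not the study's data.
      estimated = np.array([5.9, 6.1, 5.4, 6.3, 5.7])
      official  = np.array([6.0, 5.8, 5.6, 6.2, 5.9])

      residuals = estimated - official
      rmse = np.sqrt(np.mean(residuals ** 2))   # penalizes large deviations more heavily
      mae  = np.mean(np.abs(residuals))         # average absolute deviation

      print(f"RMSE = {rmse:.3f} t/ha, MAE = {mae:.3f} t/ha")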

  2. Statistical Analysis Experiment for Freshman Chemistry Lab.

    ERIC Educational Resources Information Center

    Salzsieder, John C.

    1995-01-01

    Describes a laboratory experiment dissolving zinc from galvanized nails in which data can be gathered very quickly for statistical analysis. The data have sufficient significant figures and the experiment yields a nice distribution of random errors. Freshman students can gain an appreciation of the relationships between random error, number of…

  3. Evaluating the utility of dynamical downscaling in agricultural impacts projections

    PubMed Central

    Glotter, Michael; Elliott, Joshua; McInerney, David; Best, Neil; Foster, Ian; Moyer, Elisabeth J.

    2014-01-01

    Interest in estimating the potential socioeconomic costs of climate change has led to the increasing use of dynamical downscaling—nested modeling in which regional climate models (RCMs) are driven with general circulation model (GCM) output—to produce fine-spatial-scale climate projections for impacts assessments. We evaluate here whether this computationally intensive approach significantly alters projections of agricultural yield, one of the greatest concerns under climate change. Our results suggest that it does not. We simulate US maize yields under current and future CO2 concentrations with the widely used Decision Support System for Agrotechnology Transfer crop model, driven by a variety of climate inputs including two GCMs, each in turn downscaled by two RCMs. We find that no climate model output can reproduce yields driven by observed climate unless a bias correction is first applied. Once a bias correction is applied, GCM- and RCM-driven US maize yields are essentially indistinguishable in all scenarios (<10% discrepancy, equivalent to error from observations). Although RCMs correct some GCM biases related to fine-scale geographic features, errors in yield are dominated by broad-scale (100s of kilometers) GCM systematic errors that RCMs cannot compensate for. These results support previous suggestions that the benefits for impacts assessments of dynamically downscaling raw GCM output may not be sufficient to justify its computational demands. Progress on fidelity of yield projections may benefit more from continuing efforts to understand and minimize systematic error in underlying climate projections. PMID:24872455
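
    The abstract does not specify the bias-correction scheme; a minimal additive ("delta") correction of GCM output against historical observations, with made-up temperature series, would look like the sketch below (quantile mapping is a common, more elaborate alternative):

      import numpy as np

      rng = np.random.default_rng(0)

      # Hypothetical daily temperatures (deg C) for one month; not the study's data.
      obs_hist = rng.normal(22.0, 3.0, 30)   # observed historical climate
      gcm_hist = rng.normal(24.5, 3.5, 30)   # GCM output over the same historical period
      gcm_fut  = rng.normal(27.0, 3.5, 30)   # GCM projection for a future period

      # Simple additive ("delta") bias correction: remove the historical mean bias
      # from the projection before feeding it to the crop model.
      bias = gcm_hist.mean() - obs_hist.mean()
      gcm_fut_corrected = gcm_fut - bias

      print(f"estimated GCM bias: {bias:.2f} C")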

  4. Apparent polyploidization after gamma irradiation: pitfalls in the use of quantitative polymerase chain reaction (qPCR) for the estimation of mitochondrial and nuclear DNA gene copy numbers.

    PubMed

    Kam, Winnie W Y; Lake, Vanessa; Banos, Connie; Davies, Justin; Banati, Richard

    2013-05-30

    Quantitative polymerase chain reaction (qPCR) has been widely used to quantify changes in gene copy numbers after radiation exposure. Here, we show that gamma irradiation of cells and cell-free DNA samples at doses ranging from 10 to 100 Gy significantly affects the measured qPCR yield, owing to radiation-induced fragmentation of the DNA template, and therefore introduces errors into the estimation of gene copy numbers. The radiation-induced DNA fragmentation, and thus the measured qPCR yield, varies with temperature not only in living cells but also in isolated DNA irradiated under cell-free conditions. In summary, the variability in measured qPCR yield from irradiated samples introduces a significant error into the estimation of both mitochondrial and nuclear gene copy numbers and may give spurious evidence for polyploidization.

  5. Network Adjustment of Orbit Errors in SAR Interferometry

    NASA Astrophysics Data System (ADS)

    Bahr, Hermann; Hanssen, Ramon

    2010-03-01

    Orbit errors can induce significant long-wavelength error signals in synthetic aperture radar (SAR) interferograms and thus bias estimates of wide-scale deformation phenomena. The presented approach aims to correct orbit errors in a preprocessing step to deformation analysis by modifying state vectors. Whereas absolute errors in the orbital trajectory are negligible, the influence of relative errors (baseline errors) is parametrised by their parallel and perpendicular components as linear functions of time. As the sensitivity of the interferometric phase is only significant with respect to the perpendicular baseline and the rate of change of the parallel baseline, the algorithm focuses on estimating updates to these two parameters. This is achieved by a least-squares approach in which the unwrapped residual interferometric phase is the observation and atmospheric contributions are treated as stochastic with constant mean. To enhance reliability, baseline errors are adjusted in an overdetermined network of interferograms, yielding individual orbit corrections per acquisition.
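
    As a heavily simplified stand-in for this estimation step (the paper parametrises parallel and perpendicular baseline errors and adjusts them jointly across a network of interferograms), the core least-squares idea of fitting a long-wavelength ramp to unwrapped residual phase can be sketched on synthetic data:

      import numpy as np

      rng = np.random.default_rng(1)

      # Synthetic unwrapped residual phase on a small grid (radians); illustrative only.
      ny, nx = 50, 60
      y, x = np.mgrid[0:ny, 0:nx]
      true_ramp = 0.002 * x + 0.001 * y                       # long-wavelength orbit-error signal
      phase = true_ramp + rng.normal(0, 0.05, (ny, nx))       # plus "atmospheric" noise

      # Least-squares estimate of a bilinear ramp phi = a + b*x + c*y.
      A = np.column_stack([np.ones(phase.size), x.ravel(), y.ravel()])
      coeffs, *_ = np.linalg.lstsq(A, phase.ravel(), rcond=None)
      corrected = phase - (A @ coeffs).reshape(ny, nx)

      print("estimated ramp coefficients (a, b, c):", np.round(coeffs, 4))
      print(f"residual std before/after: {phase.std():.3f} / {corrected.std():.3f}")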

  6. Impact of Spatial Soil and Climate Input Data Aggregation on Regional Yield Simulations

    PubMed Central

    Hoffmann, Holger; Zhao, Gang; Asseng, Senthold; Bindi, Marco; Biernath, Christian; Constantin, Julie; Coucheney, Elsa; Dechow, Rene; Doro, Luca; Eckersten, Henrik; Gaiser, Thomas; Grosz, Balázs; Heinlein, Florian; Kassie, Belay T.; Kersebaum, Kurt-Christian; Klein, Christian; Kuhnert, Matthias; Lewan, Elisabet; Moriondo, Marco; Nendel, Claas; Priesack, Eckart; Raynal, Helene; Roggero, Pier P.; Rötter, Reimund P.; Siebert, Stefan; Specka, Xenia; Tao, Fulu; Teixeira, Edmar; Trombi, Giacomo; Wallach, Daniel; Weihermüller, Lutz; Yeluripati, Jagadeesh; Ewert, Frank

    2016-01-01

    We show the error in water-limited yields simulated by crop models that is associated with spatially aggregated soil and climate input data. Crop simulations at large scales (regional, national, continental) frequently use input data of low resolution; climate and soil data are therefore often generated via averaging and sampling by area majority. This may bias simulated yields at large scales, and the bias varies widely across models. We therefore evaluated the error associated with spatially aggregated soil and climate data for 14 crop models. Yields of winter wheat and silage maize were simulated under water-limited production conditions. We calculated this error from crop yields simulated at spatial resolutions from 1 to 100 km for the state of North Rhine-Westphalia, Germany. Most models showed yields biased by <15% when aggregating only soil data. The relative mean absolute error (rMAE) of most models using aggregated soil data was in the range of, or larger than, the inter-annual or inter-model variability in yields. This error increased further when both climate and soil data were aggregated. Distinct error patterns indicate that the rMAE may be estimated from a few soil variables. By illustrating the range of these aggregation effects across models, this study is a first step towards an ex-ante assessment of aggregation errors in large-scale simulations. PMID:27055028
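
    One plausible formulation of the rMAE reported here (the paper's exact normalization may differ) compares yields simulated with aggregated inputs against the high-resolution reference, normalized by the reference mean:

      import numpy as np

      # Hypothetical simulated yields (t/ha): reference run vs. run with aggregated inputs.
      yield_ref = np.array([7.2, 6.8, 7.5, 6.9, 7.1])   # high-resolution soil/climate inputs
      yield_agg = np.array([7.6, 6.5, 7.9, 7.3, 6.8])   # spatially aggregated inputs

      # Relative mean absolute error, expressed as a percentage of the reference mean.
      rmae = np.mean(np.abs(yield_agg - yield_ref)) / np.mean(yield_ref) * 100
      print(f"rMAE = {rmae:.1f}%")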

  7. Impact of Spatial Soil and Climate Input Data Aggregation on Regional Yield Simulations.

    PubMed

    Hoffmann, Holger; Zhao, Gang; Asseng, Senthold; Bindi, Marco; Biernath, Christian; Constantin, Julie; Coucheney, Elsa; Dechow, Rene; Doro, Luca; Eckersten, Henrik; Gaiser, Thomas; Grosz, Balázs; Heinlein, Florian; Kassie, Belay T; Kersebaum, Kurt-Christian; Klein, Christian; Kuhnert, Matthias; Lewan, Elisabet; Moriondo, Marco; Nendel, Claas; Priesack, Eckart; Raynal, Helene; Roggero, Pier P; Rötter, Reimund P; Siebert, Stefan; Specka, Xenia; Tao, Fulu; Teixeira, Edmar; Trombi, Giacomo; Wallach, Daniel; Weihermüller, Lutz; Yeluripati, Jagadeesh; Ewert, Frank

    2016-01-01

    We show the error in water-limited yields simulated by crop models that is associated with spatially aggregated soil and climate input data. Crop simulations at large scales (regional, national, continental) frequently use input data of low resolution; climate and soil data are therefore often generated via averaging and sampling by area majority. This may bias simulated yields at large scales, and the bias varies widely across models. We therefore evaluated the error associated with spatially aggregated soil and climate data for 14 crop models. Yields of winter wheat and silage maize were simulated under water-limited production conditions. We calculated this error from crop yields simulated at spatial resolutions from 1 to 100 km for the state of North Rhine-Westphalia, Germany. Most models showed yields biased by <15% when aggregating only soil data. The relative mean absolute error (rMAE) of most models using aggregated soil data was in the range of, or larger than, the inter-annual or inter-model variability in yields. This error increased further when both climate and soil data were aggregated. Distinct error patterns indicate that the rMAE may be estimated from a few soil variables. By illustrating the range of these aggregation effects across models, this study is a first step towards an ex-ante assessment of aggregation errors in large-scale simulations.

  8. Impact of SST Anomaly Events over the Kuroshio-Oyashio Extension on the "Summer Prediction Barrier"

    NASA Astrophysics Data System (ADS)

    Wu, Yujie; Duan, Wansuo

    2018-04-01

    The "summer prediction barrier" (SPB) of SST anomalies (SSTA) over the Kuroshio-Oyashio Extension (KOE) refers to the phenomenon that prediction errors of KOE-SSTA tend to increase rapidly during boreal summer, resulting in large prediction uncertainties. The fast error growth associated with the SPB occurs in the mature-to-decaying transition phase, which is usually during the August-September-October (ASO) season, of the KOE-SSTA events to be predicted. Thus, the role of KOE-SSTA evolutionary characteristics in the transition phase in inducing the SPB is explored by performing perfect model predictability experiments in a coupled model, indicating that the SSTA events with larger mature-to-decaying transition rates (Category-1) favor a greater possibility of yielding a more significant SPB than those events with smaller transition rates (Category-2). The KOE-SSTA events in Category-1 tend to have more significant anomalous Ekman pumping in their transition phase, resulting in larger prediction errors of vertical oceanic temperature advection associated with the SSTA events. Consequently, Category-1 events possess faster error growth and larger prediction errors. In addition, the anomalous Ekman upwelling (downwelling) in the ASO season also causes SSTA cooling (warming), accelerating the transition rates of warm (cold) KOE-SSTA events. Therefore, the SSTA transition rate and error growth rate are both related with the anomalous Ekman pumping of the SSTA events to be predicted in their transition phase. This may explain why the SSTA events transferring more rapidly from the mature to decaying phase tend to have a greater possibility of yielding a more significant SPB.

  9. Post-wildfire recovery of water yield in the Sydney Basin water supply catchments: An assessment of the 2001/2002 wildfires

    NASA Astrophysics Data System (ADS)

    Heath, J. T.; Chafer, C. J.; van Ogtrop, F. F.; Bishop, T. F. A.

    2014-11-01

    Wildfire is a recurring event that is acknowledged in the literature to impact the hydrological cycle of a catchment; hence, wildfire may have a significant impact on water yield within a catchment. In Australia, studies of the effect of fire on water yield have been limited to obligate seeder vegetation communities. These communities regenerate from seed banks in the ground or within woody fruits and are generally activated by fire. In contrast, the Sydney Basin is dominated by obligate resprouter communities, which regenerate from fire-resistant buds on the plant and are generally found in regions where wildfire is a regular occurrence. The 2001/2002 wildfires in the Sydney Basin provided an opportunity to investigate the impacts of wildfire on water yield in a number of catchments dominated by obligate resprouting communities. The overall aim of this study was to investigate whether there was a difference in water yield post-wildfire. Four burnt subcatchments and three control subcatchments were assessed. A generalized additive model was calibrated using pre-wildfire data and then used to predict post-wildfire water yield from post-wildfire predictor data. The model errors were analysed, and the errors for all subcatchments showed similar trends for the post-wildfire period. This finding demonstrates that wildfires within the Sydney Basin have no significant medium-term impact on water yield.

  10. Observation of B+ → Ξc0 Λc+ and evidence for B0 → Ξc- Λc+

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chistov, R.; Aushev, T.; Balagura, V.

    We report the first observation of the decay B+ → Ξc0 Λc+ with a significance of 8.7σ and evidence for the decay B0 → Ξc- Λc+ with a significance of 3.8σ. The product B(B+ → Ξc0 Λc+) x B(Ξc0 → Ξ+ π-) is measured to be (4.8 +1.0/-0.9 ± 1.1 ± 1.2) x 10^-5, and B(B0 → Ξc- Λc+) x B(Ξc- → Ξ+ π- π-) is measured to be (9.3 +3.7/-2.8 ± 1.9 ± 2.4) x 10^-5. The errors are statistical, systematic and the error of the Λc+ → p K- π+ branching fraction, respectively. The decay B+ → Ξc0 Λc+ is the first example of a two-body exclusive B+ decay into two charmed baryons. The data used for this analysis were accumulated at the Υ(4S) resonance, using the Belle detector at the e+e- asymmetric-energy collider KEKB. The integrated luminosity of the data sample is equal to 357 fb^-1, corresponding to 386 x 10^6 BB pairs.

  11. Remotely sensed rice yield prediction using multi-temporal NDVI data derived from NOAA's-AVHRR.

    PubMed

    Huang, Jingfeng; Wang, Xiuzhen; Li, Xinxing; Tian, Hanqin; Pan, Zhuokun

    2013-01-01

    Grain-yield prediction using remotely sensed data has been intensively studied in wheat and maize, but such information is limited in rice, barley, oats and soybeans. The present study proposes a new framework for rice-yield prediction, which eliminates the influence of technology development, fertilizer application, and management improvement and can be used for the development and implementation of provincial rice-yield predictions. The technique requires the collection of remotely sensed data over an adequate time frame and a corresponding record of the region's crop yields. Longer normalized-difference-vegetation-index (NDVI) time series are preferable to shorter ones for the purposes of rice-yield prediction because the well-contrasted seasons in a longer time series provide the opportunity to build regression models with a wide application range. A regression analysis of the yield versus the year indicated an annual gain in the rice yield of 50 to 128 kg ha-1. Stepwise regression models for the remotely sensed rice-yield predictions have been developed for five typical rice-growing provinces in China. The prediction models for the remotely sensed rice yield indicated that the influences of the NDVIs on the rice yield were always positive. The association between the predicted and observed rice yields was highly significant without obvious outliers from 1982 to 2004. Independent validation found that the overall relative error is approximately 5.82%, and a majority of the relative errors were less than 5% in 2005 and 2006, depending on the study area. The proposed models can be used in an operational context to predict rice yields at the provincial level in China. The methodologies described in the present paper can be applied to any crop for which a sufficient time series of NDVI data and the corresponding historical yield information are available, as long as the historical yield increases significantly.
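
    A stripped-down version of this workflow, with hypothetical NDVI metrics and yields rather than the study's stepwise multi-temporal models, is an ordinary least-squares fit followed by relative-error validation:

      import numpy as np

      # Hypothetical provincial data: one NDVI metric per year and observed rice yield (kg/ha).
      ndvi      = np.array([0.61, 0.58, 0.66, 0.70, 0.64, 0.68])
      yield_obs = np.array([5600, 5400, 6100, 6500, 5900, 6300], dtype=float)

      # Ordinary least squares: yield = b0 + b1 * NDVI (the study used stepwise multi-date NDVI terms).
      X = np.column_stack([np.ones_like(ndvi), ndvi])
      b, *_ = np.linalg.lstsq(X, yield_obs, rcond=None)

      # Validate on a held-out year via relative error.
      ndvi_new, yield_new = 0.63, 5800.0
      pred = b[0] + b[1] * ndvi_new
      rel_err = abs(pred - yield_new) / yield_new * 100
      print(f"predicted {pred:.0f} kg/ha, relative error {rel_err:.1f}%")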

  12. Remotely Sensed Rice Yield Prediction Using Multi-Temporal NDVI Data Derived from NOAA's-AVHRR

    PubMed Central

    Huang, Jingfeng; Wang, Xiuzhen; Li, Xinxing; Tian, Hanqin; Pan, Zhuokun

    2013-01-01

    Grain-yield prediction using remotely sensed data has been intensively studied in wheat and maize, but such information is limited in rice, barley, oats and soybeans. The present study proposes a new framework for rice-yield prediction, which eliminates the influence of technology development, fertilizer application, and management improvement and can be used for the development and implementation of provincial rice-yield predictions. The technique requires the collection of remotely sensed data over an adequate time frame and a corresponding record of the region's crop yields. Longer normalized-difference-vegetation-index (NDVI) time series are preferable to shorter ones for the purposes of rice-yield prediction because the well-contrasted seasons in a longer time series provide the opportunity to build regression models with a wide application range. A regression analysis of the yield versus the year indicated an annual gain in the rice yield of 50 to 128 kg ha−1. Stepwise regression models for the remotely sensed rice-yield predictions have been developed for five typical rice-growing provinces in China. The prediction models for the remotely sensed rice yield indicated that the influences of the NDVIs on the rice yield were always positive. The association between the predicted and observed rice yields was highly significant without obvious outliers from 1982 to 2004. Independent validation found that the overall relative error is approximately 5.82%, and a majority of the relative errors were less than 5% in 2005 and 2006, depending on the study area. The proposed models can be used in an operational context to predict rice yields at the provincial level in China. The methodologies described in the present paper can be applied to any crop for which a sufficient time series of NDVI data and the corresponding historical yield information are available, as long as the historical yield increases significantly. PMID:23967112

  13. Infant search and object permanence: a meta-analysis of the A-not-B error.

    PubMed

    Wellman, H M; Cross, D; Bartsch, K

    1987-01-01

    Research on Piaget's stage 4 object concept has failed to reveal a clear or consistent pattern of results. Piaget found that 8-12-month-old infants would make perseverative errors; his explanation for this phenomenon was that the infant's concept of the object was contextually dependent on his or her actions. Some studies designed to test Piaget's explanation have replicated Piaget's basic finding, yet many have found no preference for the A location or the B location or an actual preference for the B location. More recently, researchers have attempted to uncover the causes for these results concerning the A-not-B error. Again, however, different studies have yielded different results, and qualitative reviews have failed to yield a consistent explanation for the results of the individual studies. This state of affairs suggests that the phenomenon may simply be too complex to be captured by individual studies varying 1 factor at a time and by reviews based on similar qualitative considerations. Therefore, the current investigation undertook a meta-analysis, a synthesis capturing the quantitative information across the now sizable number of studies. We entered several important factors into the meta-analysis, including the effects of age, the number of A trials, the length of delay between hiding and search, the number of locations, the distances between locations, and the distinctive visual properties of the hiding arrays. Of these, the analysis consistently indicated that age, delay, and number of hiding locations strongly influence infants' search. The pattern of specific findings also yielded new information about infant search. A general characterization of the results is that, at every age, both above-chance and below-chance performance was observed. That is, at each age at least 1 combination of delay and number of locations yielded above-chance A-not-B errors or significant perseverative search. At the same time, at each age at least 1 alternative combination of delay and number of locations yielded below-chance errors and significant above-chance correct performance, that is, significantly accurate search. These 2 findings, appropriately elaborated, allow us to evaluate all extant theories of stage 4 infant search. When this is done, all these extant accounts prove to be incorrect. That is, they are incommensurate with one aspect or another of the pooled findings in the meta-analysis. Therefore, we end by proposing a new account that is consistent with the entire data set.

  14. Efficiency, error and yield in light-directed maskless synthesis of DNA microarrays

    PubMed Central

    2011-01-01

    Background: Light-directed in situ synthesis of DNA microarrays using computer-controlled projection from a digital micromirror device--maskless array synthesis (MAS)--has proved to be successful at both commercial and laboratory scales. The chemical synthetic cycle in MAS is quite similar to that of conventional solid-phase synthesis of oligonucleotides, but the complexity of microarrays and unique synthesis kinetics on the glass substrate require a careful tuning of parameters and unique modifications to the synthesis cycle to obtain optimal deprotection and phosphoramidite coupling. In addition, unintended deprotection due to scattering and diffraction introduces insertion errors that contribute significantly to the overall error rate. Results: Stepwise phosphoramidite coupling yields have been greatly improved and are now comparable to those obtained in solid-phase synthesis of oligonucleotides. Extended chemical exposure in the synthesis of complex, long oligonucleotide arrays results in lower--but still high--final average yields, which approach 99%. The new synthesis chemistry includes elimination of the standard oxidation until the final step, and improved coupling and light deprotection. Insertions due to stray light are the limiting factor in sequence quality for oligonucleotide synthesis for gene assembly. Diffraction and local flare are by far the largest contributors to loss of optical contrast. Conclusions: Maskless array synthesis is an efficient and versatile method for synthesizing high-density arrays of long oligonucleotides for hybridization- and other molecular-binding-based experiments. For applications requiring high sequence purity, such as gene assembly, diffraction and flare remain significant obstacles, but they can be substantially reduced with straightforward experimental strategies. PMID:22152062

  15. Field Comparison between Sling Psychrometer and Meteorological Measuring Set AN/TMQ-22

    DTIC Science & Technology

    the ML-224 Sling Psychrometer. From a series of independent tests designed to minimize error, it was concluded that the AN/TMQ-22 yielded a more accurate...dew point reading. The average relative-humidity error using the sling psychrometer was +9%, while the AN/TMQ-22 had a ±2% error. Even with cautious measurement, the sling yielded a +4% error.

  16. Analysis of D0 -> K anti-K X Decays

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jessop, Colin P.

    2003-06-06

    Using data taken with the CLEO II detector, they have studied the decays of the D0 to K+K-, K0 anti-K0, KS0 KS0, KS0 KS0 π0, and K+K- π0. The authors present significantly improved results for B(D0 → K+K-) = (0.454 ± 0.028 ± 0.035)%, B(D0 → K0 anti-K0) = (0.054 ± 0.012 ± 0.010)% and B(D0 → KS0 KS0 KS0) = (0.074 ± 0.010 ± 0.015)%, where the first errors are statistical and the second errors are the estimate of their systematic uncertainty. They also present a new upper limit B(D0 → KS0 KS0 π0) < 0.059% at the 90% confidence level and the first measurement of B(D0 → K+K- π0) = (0.14 ± 0.04)%.

  17. MISSE 2 PEACE Polymers Experiment Atomic Oxygen Erosion Yield Error Analysis

    NASA Technical Reports Server (NTRS)

    McCarthy, Catherine E.; Banks, Bruce A.; deGroh, Kim K.

    2010-01-01

    Atomic oxygen erosion of polymers in low Earth orbit (LEO) poses a serious threat to spacecraft performance and durability. To address this, 40 different polymer samples and a sample of pyrolytic graphite, collectively called the PEACE (Polymer Erosion and Contamination Experiment) Polymers, were exposed to the LEO space environment on the exterior of the International Space Station (ISS) for nearly 4 years as part of the Materials International Space Station Experiment 1 & 2 (MISSE 1 & 2). The purpose of the PEACE Polymers experiment was to obtain accurate mass loss measurements in space to combine with ground measurements in order to accurately calculate the atomic oxygen erosion yields of a wide variety of polymeric materials exposed to the LEO space environment for a long period of time. Error calculations were performed in order to determine the accuracy of the mass measurements and therefore of the erosion yield values. The standard deviation, or error, of each factor was incorporated into the fractional uncertainty of the erosion yield for each of three different situations, depending on the post-flight weighing procedure. The resulting error calculations showed the erosion yield values to be very accurate, with an average error of 3.30 percent.
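
    The abstract does not reproduce the formulas; assuming the usual definition of erosion yield as mass loss divided by exposed area, fluence and density, independent fractional uncertainties combine in quadrature, as in this sketch with illustrative (non-MISSE) numbers:

      import math

      # Illustrative values only (not MISSE data): erosion yield Ey = dM / (A * F * rho).
      dM, sigma_dM   = 0.120, 1e-4      # mass loss and its uncertainty (g)
      A,  sigma_A    = 3.5, 0.01        # exposed area (cm^2)
      F,  sigma_F    = 8.4e21, 2e20     # atomic-oxygen fluence (atoms/cm^2)
      rho, sigma_rho = 1.43, 0.01       # polymer density (g/cm^3)

      Ey = dM / (A * F * rho)           # erosion yield (cm^3/atom)

      # For a product/quotient of independent factors, fractional errors add in quadrature.
      frac = math.sqrt((sigma_dM / dM) ** 2 + (sigma_A / A) ** 2 +
                       (sigma_F / F) ** 2 + (sigma_rho / rho) ** 2)
      print(f"Ey = {Ey:.3e} cm^3/atom  +/- {frac * 100:.2f}%")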

  18. Error in the Honeybee Waggle Dance Improves Foraging Flexibility

    PubMed Central

    Okada, Ryuichi; Ikeno, Hidetoshi; Kimura, Toshifumi; Ohashi, Mizue; Aonuma, Hitoshi; Ito, Etsuro

    2014-01-01

    The honeybee waggle dance communicates the location of profitable food sources, usually with a certain degree of error in the directional information ranging from 10–15° at the lower margin. We simulated one-day colonial foraging to address the biological significance of information error in the waggle dance. When the error was 30° or larger, the waggle dance was not beneficial. If the error was 15°, the waggle dance was beneficial when the food sources were scarce. When the error was 10° or smaller, the waggle dance was beneficial under all the conditions tested. Our simulation also showed that precise information (0–5° error) yielded great success in finding feeders, but also caused failures at finding new feeders, i.e., a high-risk high-return strategy. The observation that actual bees perform the waggle dance with an error of 10–15° might reflect, at least in part, the maintenance of a successful yet risky foraging trade-off. PMID:24569525

  19. Sea ice classification using fast learning neural networks

    NASA Technical Reports Server (NTRS)

    Dawson, M. S.; Fung, A. K.; Manry, M. T.

    1992-01-01

    A fast learning neural network approach to the classification of sea ice is presented. The fast learning (FL) neural network and a multilayer perceptron (MLP) trained with backpropagation learning (BP network) were tested on simulated data sets based on the known dominant scattering characteristics of the target class. Four classes were used in the data simulation: open water, thick lossy saline ice, thin saline ice, and multiyear ice. The BP network was unable to consistently converge to less than 25 percent error while the FL method yielded an average error of approximately 1 percent on the first iteration of training. The fast learning method presented can significantly reduce the CPU time necessary to train a neural network as well as consistently yield higher classification accuracy than BP networks.

  20. The effect of the Earth's oblate spheroid shape on the accuracy of a time-of-arrival lightning ground strike locating system

    NASA Technical Reports Server (NTRS)

    Casper, Paul W.; Bent, Rodney B.

    1991-01-01

    The algorithm used in previous-technology time-of-arrival lightning mapping systems was based on the assumption that the earth is a perfect sphere. These systems yield highly accurate lightning locations, which is their major strength. However, extensive analysis of tower strike data has revealed occasionally significant (one to two kilometer) systematic offset errors which are not explained by the usual error sources. It was determined that these systematic errors reduce dramatically (in some cases) when the oblate shape of the earth is taken into account. The oblate spheroid correction algorithm and a case example are presented.

  1. Computing the biomass potentials for maize and two alternative energy crops, triticale and cup plant (Silphium perfoliatum L.), with the crop model BioSTAR in the region of Hannover (Germany).

    PubMed

    Bauböck, Roland; Karpenstein-Machan, Marianne; Kappas, Martin

    2014-01-01

    Lower Saxony has the highest installed electric capacity from biogas in Germany. Most of this electricity is generated with maize; reasons for this are the high yields and the economic incentive. In parts of Lower Saxony, an expansion of maize cultivation has led to ecological problems and a negative image of bioenergy as such. Winter triticale and cup plant have both shown their suitability as alternative energy crops for biogas production and could help to reduce maize cultivation. The model Biomass Simulation Tool for Agricultural Resources (BioSTAR) has been validated with observed yield data from the region of Hannover for maize and winter wheat. Predicted yields for these crops show satisfactory error values of 9.36% (maize) and 11.5% (winter wheat). Correlations with observed data are significant (P < 0.01), with R = 0.75 for maize and 0.6 for winter wheat. Biomass potential calculations for triticale and cup plant have shown both crops to be high yielding and a promising alternative to maize in the region of Hannover and other places in Lower Saxony. The BioSTAR model simulated yields for maize and winter wheat in the region of Hannover at a good overall level of accuracy (combined error 10.4%), although, due to input-data aggregation, individual years show high errors (up to 30%). Nevertheless, the BioSTAR crop model has proven to be a functioning tool for the prediction of agricultural biomass potentials under varying environmental and crop-management conditions.

  2. Does a better model yield a better argument? An info-gap analysis

    NASA Astrophysics Data System (ADS)

    Ben-Haim, Yakov

    2017-04-01

    Theories, models and computations underlie reasoned argumentation in many areas. The possibility of error in these arguments, though of low probability, may be highly significant when the argument is used in predicting the probability of rare high-consequence events. This implies that the choice of a theory, model or computational method for predicting rare high-consequence events must account for the probability of error in these components. However, error may result from lack of knowledge or surprises of various sorts, and predicting the probability of error is highly uncertain. We show that the putatively best, most innovative and sophisticated argument may not actually have the lowest probability of error. Innovative arguments may entail greater uncertainty than more standard but less sophisticated methods, creating an innovation dilemma in formulating the argument. We employ info-gap decision theory to characterize and support the resolution of this problem and present several examples.

  3. Developmental change in cognitive organization underlying stroop tasks of Japanese orthographies.

    PubMed

    Toma, C; Toshima, T

    1989-01-01

    Cognitive processes underlying Stroop interference tasks in two Japanese orthographies, hiragana (a phonetic orthography) and kanji (a logographic orthography), were studied from a developmental point of view. Four age groups (first, second, and third graders, and university students) served as subjects. Significant interference was observed in both the hiragana and the kanji versions. Performance time on the interference task decreased with age. For elementary school children, the error frequency on the interference task was higher than on the task of naming patch colors or on the task of reading words printed in black ink, but the error frequencies did not differ among tasks for university students. In the interference task, more word-reading errors occurred in the kanji version than in the hiragana version during and after third grade. The findings suggested that (1) the recognition systems for hiragana and kanji become qualitatively different during and after third grade, (2) the integrative system, which organizes the cognitive processes underlying the Stroop task, develops with age, and (3) the efficiency of this organization increases with age.

  4. Measurement of Fracture Aperture Fields Using Transmitted Light: An Evaluation of Measurement Errors and their Influence on Simulations of Flow and Transport through a Single Fracture

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Detwiler, Russell L.; Glass, Robert J.; Pringle, Scott E.

    Understanding of single- and multi-phase flow and transport in fractures can be greatly enhanced through experimentation in transparent systems (analogs or replicas), where light transmission techniques yield quantitative measurements of aperture, solute concentration, and phase saturation fields. Here we quantify aperture field measurement error and demonstrate the influence of this error on the results of flow and transport simulations (hypothesized experimental results) through saturated and partially saturated fractures. We find that precision and accuracy can be balanced to greatly improve the technique, and we present a measurement protocol to obtain a minimum error field. Simulation results show an increased sensitivity to error as we move from flow to transport and from saturated to partially saturated conditions. Significant sensitivity under partially saturated conditions results in differences in channeling and multiple-peaked breakthrough curves. These results emphasize the critical importance of defining and minimizing error for studies of flow and transport in single fractures.

  5. Tests for detecting overdispersion in models with measurement error in covariates.

    PubMed

    Yang, Yingsi; Wong, Man Yu

    2015-11-30

    Measurement error in covariates can affect the accuracy in count data modeling and analysis. In overdispersion identification, the true mean-variance relationship can be obscured under the influence of measurement error in covariates. In this paper, we propose three tests for detecting overdispersion when covariates are measured with error: a modified score test and two score tests based on the proposed approximate likelihood and quasi-likelihood, respectively. The proposed approximate likelihood is derived under the classical measurement error model, and the resulting approximate maximum likelihood estimator is shown to have superior efficiency. Simulation results also show that the score test based on approximate likelihood outperforms the test based on quasi-likelihood and other alternatives in terms of empirical power. By analyzing a real dataset containing the health-related quality-of-life measurements of a particular group of patients, we demonstrate the importance of the proposed methods by showing that the analyses with and without measurement error correction yield significantly different results. Copyright © 2015 John Wiley & Sons, Ltd.
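
    The abstract does not give the test statistics; for orientation, a minimal sketch of the classical score test for overdispersion in a Poisson model, without the measurement-error correction that is the paper's contribution, might look like this on simulated counts:

      import numpy as np
      import statsmodels.api as sm

      rng = np.random.default_rng(1)

      # Hypothetical count data with one covariate (no measurement-error correction here).
      n = 500
      x = rng.normal(size=n)
      mu_true = np.exp(0.3 + 0.5 * x)
      y = rng.negative_binomial(5, 5 / (5 + mu_true))   # overdispersed relative to Poisson

      # Fit the Poisson null model.
      X = sm.add_constant(x)
      fit = sm.GLM(y, X, family=sm.families.Poisson()).fit()
      mu = fit.fittedvalues

      # Classical score statistic for overdispersion (approximately N(0,1) under the null).
      T = np.sum((y - mu) ** 2 - y) / np.sqrt(2 * np.sum(mu ** 2))
      print(f"score statistic = {T:.2f}")   # large positive values indicate overdispersion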

  6. Estimating and testing interactions when explanatory variables are subject to non-classical measurement error.

    PubMed

    Murad, Havi; Kipnis, Victor; Freedman, Laurence S

    2016-10-01

    Assessing interactions in linear regression models when covariates have measurement error (ME) is complex. We previously described regression calibration (RC) methods that yield consistent estimators and standard errors for interaction coefficients of normally distributed covariates having classical ME. Here we extend normal based RC (NBRC) and linear RC (LRC) methods to a non-classical ME model, and describe more efficient versions that combine estimates from the main study and internal sub-study. We apply these methods to data from the Observing Protein and Energy Nutrition (OPEN) study. Using simulations we show that (i) for normally distributed covariates efficient NBRC and LRC were nearly unbiased and performed well with sub-study size ≥200; (ii) efficient NBRC had lower MSE than efficient LRC; (iii) the naïve test for a single interaction had type I error probability close to the nominal significance level, whereas efficient NBRC and LRC were slightly anti-conservative but more powerful; (iv) for markedly non-normal covariates, efficient LRC yielded less biased estimators with smaller variance than efficient NBRC. Our simulations suggest that it is preferable to use: (i) efficient NBRC for estimating and testing interaction effects of normally distributed covariates and (ii) efficient LRC for estimating and testing interactions for markedly non-normal covariates. © The Author(s) 2013.
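
    As background for the regression-calibration idea (not the paper's NBRC/LRC estimators, which handle non-classical error and combine main-study and sub-study information efficiently), a bare-bones RC fit of an interaction model on simulated data could look like the sketch below; naive standard errors from the second stage understate uncertainty, so a bootstrap would be used in practice:

      import numpy as np
      import statsmodels.api as sm

      rng = np.random.default_rng(0)

      # Simulated data: true covariate x, precise covariate z, error-prone surrogate w.
      n, n_sub = 2000, 300                                  # main study and internal sub-study sizes
      x = rng.normal(size=n)
      z = rng.normal(size=n)
      w = 0.2 + 0.8 * x + rng.normal(scale=0.7, size=n)     # biased (non-classical) surrogate for x
      y = 1.0 + 0.5 * x + 0.3 * z + 0.4 * x * z + rng.normal(size=n)

      # Step 1 (calibration): in the sub-study, where x is observed, regress x on (w, z).
      sub = slice(0, n_sub)
      cal = sm.OLS(x[sub], sm.add_constant(np.column_stack([w[sub], z[sub]]))).fit()

      # Step 2: impute E[x | w, z] for everyone and fit the outcome model with the interaction.
      x_hat = cal.predict(sm.add_constant(np.column_stack([w, z])))
      X_out = sm.add_constant(np.column_stack([x_hat, z, x_hat * z]))
      fit = sm.OLS(y, X_out).fit()
      print(fit.params.round(2))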

  7. EPE fundamentals and impact of EUV: Will traditional design-rule calculations work in the era of EUV?

    NASA Astrophysics Data System (ADS)

    Gabor, Allen H.; Brendler, Andrew C.; Brunner, Timothy A.; Chen, Xuemei; Culp, James A.; Levinson, Harry J.

    2018-03-01

    The relationship between edge placement error, semiconductor design-rule determination and predicted yield in the era of EUV lithography is examined. This paper starts with the basics of edge placement error and then builds up to design-rule calculations. We show that edge placement error (EPE) definitions can be used as the building blocks for design-rule equations but that in the last several years the term "EPE" has been used in the literature to refer to many patterning errors that are not EPE. We then explore the concept of "Good Fields" [1] and use it to predict the n-sigma value needed for design-rule determination. Specifically, fundamental yield calculations based on the failure opportunities per chip are used to determine at what n-sigma "value" design-rules need to be tested to ensure high yield. The "value" can be a space between two features, an intersect area between two features, a minimum area of a feature, etc. It is shown that across-chip variation of design-rule-critical values needs to be tested at sigma values between seven and eight, which is much higher than the four-sigma values traditionally used for design-rule determination. After recommending new statistics be used for design-rule calculations, the paper examines the impact of EUV lithography on sources of variation important for design-rule calculations. We show that stochastics can be treated as an effective dose variation that is fully sampled across every chip. Combining the increased within-chip variation from EUV with the understanding that across-chip variation of design-rule-critical values must not cause yield loss at significantly higher sigma values than have traditionally been examined, the conclusion is reached that across-wafer, wafer-to-wafer and lot-to-lot variation will have to overscale for any technology introducing EUV lithography where stochastic noise is a significant fraction of the effective dose variation. We will emphasize stochastic effects on edge placement error distributions and appropriate design-rule setting. While CD distributions with long tails coming from stochastic effects do bring increased risk of failure (especially on chips that may have over a billion failure opportunities per layer), there are other sources of variation that have sharp cutoffs, i.e. have no tails. We will review these sources and show how distributions with different skew and kurtosis values combine.

  8. The role of model errors represented by nonlinear forcing singular vector tendency error in causing the "spring predictability barrier" within ENSO predictions

    NASA Astrophysics Data System (ADS)

    Duan, Wansuo; Zhao, Peng

    2017-04-01

    Within the Zebiak-Cane model, the nonlinear forcing singular vector (NFSV) approach is used to investigate the role of model errors in the "Spring Predictability Barrier" (SPB) phenomenon within ENSO predictions. NFSV-related errors have the largest negative effect on the uncertainties of El Niño predictions. NFSV errors can be classified into two types: the first is characterized by a zonal dipolar pattern of SST anomalies (SSTA), with the western poles centered in the equatorial central-western Pacific exhibiting positive anomalies and the eastern poles in the equatorial eastern Pacific exhibiting negative anomalies; and the second is characterized by a pattern almost opposite the first type. The first type of error tends to have the worst effects on El Niño growth-phase predictions, whereas the latter often yields the largest negative effects on decaying-phase predictions. The evolution of prediction errors caused by NFSV-related errors exhibits prominent seasonality, with the fastest error growth in the spring and/or summer seasons; hence, these errors result in a significant SPB related to El Niño events. The linear counterpart of NFSVs, the (linear) forcing singular vector (FSV), induces a less significant SPB because it contains smaller prediction errors. Random errors cannot generate a SPB for El Niño events. These results show that the occurrence of an SPB is related to the spatial patterns of tendency errors. The NFSV tendency errors cause the most significant SPB for El Niño events. In addition, NFSVs often concentrate these large value errors in a few areas within the equatorial eastern and central-western Pacific, which likely represent those areas sensitive to El Niño predictions associated with model errors. Meanwhile, these areas are also exactly consistent with the sensitive areas related to initial errors determined by previous studies. This implies that additional observations in the sensitive areas would not only improve the accuracy of the initial field but also promote the reduction of model errors to greatly improve ENSO forecasts.

  9. Efficient Bit-to-Symbol Likelihood Mappings

    NASA Technical Reports Server (NTRS)

    Moision, Bruce E.; Nakashima, Michael A.

    2010-01-01

    This innovation is an efficient algorithm designed to perform bit-to-symbol and symbol-to-bit likelihood mappings that represent a significant portion of the complexity of an error-correction code decoder for high-order constellations. Recent implementation of the algorithm in hardware has yielded an 8-percent reduction in overall area relative to the prior design.
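
    The record gives no implementation details; as a generic illustration of the operation being optimized, a max-log bit-LLR mapping for a small Gray-labelled 4-PAM alphabet (the constellation, bit labels and noise model are assumptions, not the innovation's design) can be sketched as:

      import numpy as np

      # Gray-mapped 4-PAM constellation: 2 bits per symbol (labels are an assumption).
      symbols = np.array([-3.0, -1.0, 1.0, 3.0])
      bits = np.array([[0, 0], [0, 1], [1, 1], [1, 0]])   # bits[i] labels symbols[i]

      def bit_llrs(r, noise_var):
          """Max-log bit LLRs for one received 4-PAM sample r in AWGN."""
          # Symbol log-likelihoods up to a common constant.
          metric = -(r - symbols) ** 2 / (2 * noise_var)
          llrs = []
          for b in range(bits.shape[1]):
              m0 = metric[bits[:, b] == 0].max()   # best symbol with bit b = 0
              m1 = metric[bits[:, b] == 1].max()   # best symbol with bit b = 1
              llrs.append(m0 - m1)                 # >0 favours bit 0, <0 favours bit 1
          return llrs

      print(bit_llrs(r=0.7, noise_var=0.5))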

  10. Commercially sterilized mussel meats (Mytilus chilensis): a study on process yield.

    PubMed

    Almonacid, S; Bustamante, J; Simpson, R; Urtubia, A; Pinto, M; Teixeira, A

    2012-06-01

    The processing steps most responsible for yield loss in the manufacture of canned mussel meats are the thermal treatments of precooking, to remove meats from shells, and thermal processing (retorting), to render the final canned product commercially sterile for long-term shelf stability. The objective of this study was to investigate and evaluate the impact of different combinations of process variables on the ultimate drained weight of the final mussel product (Mytilus chilensis), while verifying that any differences found were statistically and economically significant. The process variables selected for this study were precooking time, brine salt concentration, and retort temperature. Results indicated two combinations of process variables producing the widest difference in final drained weight, designated the best combination and the worst combination, with 35% and 29% yield, respectively. The significance of this difference was determined by employing a Bootstrap methodology, which assumes an empirical distribution of statistical error. A difference of nearly 6 percentage points in total yield was found, representing a 20% increase in annual sales from the same quantity of raw material. In addition to the increase in yield, the conditions for the best process included a retort process time 65% shorter than that for the worst process. This difference in yield could have a significant economic impact, important to the mussel canning industry. © 2012 Institute of Food Technologists®
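
    As an illustration of the bootstrap comparison described (with made-up per-batch yields, not the study's data), the difference in mean drained-weight yield and its bootstrap confidence interval can be computed as:

      import numpy as np

      rng = np.random.default_rng(42)

      # Hypothetical per-batch drained-weight yields (fraction of raw weight).
      best  = rng.normal(0.35, 0.02, 30)    # "best" combination of process variables
      worst = rng.normal(0.29, 0.02, 30)    # "worst" combination

      observed_diff = best.mean() - worst.mean()

      # Nonparametric bootstrap of the difference in mean yield.
      boot = np.array([
          rng.choice(best, best.size).mean() - rng.choice(worst, worst.size).mean()
          for _ in range(10_000)
      ])
      lo, hi = np.percentile(boot, [2.5, 97.5])
      print(f"diff = {observed_diff:.3f}, 95% bootstrap CI = ({lo:.3f}, {hi:.3f})")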

  11. Impact of Uncertainties in Exposure Assessment on Thyroid Cancer Risk among Persons in Belarus Exposed as Children or Adolescents Due to the Chernobyl Accident.

    PubMed

    Little, Mark P; Kwon, Deukwoo; Zablotska, Lydia B; Brenner, Alina V; Cahoon, Elizabeth K; Rozhko, Alexander V; Polyanskaya, Olga N; Minenko, Victor F; Golovanov, Ivan; Bouville, André; Drozdovitch, Vladimir

    2015-01-01

    The excess incidence of thyroid cancer in Ukraine and Belarus observed a few years after the Chernobyl accident is considered to be largely the result of 131I released from the reactor. Although the Belarus thyroid cancer prevalence data has been previously analyzed, no account was taken of dose measurement error. We examined dose-response patterns in a thyroid screening prevalence cohort of 11,732 persons aged under 18 at the time of the accident, diagnosed during 1996-2004, who had direct thyroid 131I activity measurement, and were resident in the most radioactively contaminated regions of Belarus. Three methods of dose-error correction (regression calibration, Monte Carlo maximum likelihood, Bayesian Markov Chain Monte Carlo) were applied. There was a statistically significant (p<0.001) increasing dose-response for prevalent thyroid cancer, irrespective of regression-adjustment method used. Without adjustment for dose errors the excess odds ratio was 1.51 Gy^-1 (95% CI 0.53, 3.86), which was reduced by 13% when regression-calibration adjustment was used, 1.31 Gy^-1 (95% CI 0.47, 3.31). A Monte Carlo maximum likelihood method yielded an excess odds ratio of 1.48 Gy^-1 (95% CI 0.53, 3.87), about 2% lower than the unadjusted analysis. The Bayesian method yielded a maximum posterior excess odds ratio of 1.16 Gy^-1 (95% BCI 0.20, 4.32), 23% lower than the unadjusted analysis. There were borderline significant (p = 0.053-0.078) indications of downward curvature in the dose response, depending on the adjustment methods used. There were also borderline significant (p = 0.102) modifying effects of gender on the radiation dose trend, but no significant modifying effects of age at the time of the accident or age at screening on the dose response (p>0.2). In summary, the relatively small contribution of unshared classical dose error in the current study results in comparatively modest effects on the regression parameters.

  12. A Fast Surrogate-facilitated Data-driven Bayesian Approach to Uncertainty Quantification of a Regional Groundwater Flow Model with Structural Error

    NASA Astrophysics Data System (ADS)

    Xu, T.; Valocchi, A. J.; Ye, M.; Liang, F.

    2016-12-01

    Due to simplification and/or misrepresentation of the real aquifer system, numerical groundwater flow and solute transport models are usually subject to model structural error. During model calibration, the hydrogeological parameters may be overly adjusted to compensate for unknown structural error. This may result in biased predictions when models are used to forecast aquifer response to new forcing. In this study, we extend a fully Bayesian method [Xu and Valocchi, 2015] to calibrate a real-world, regional groundwater flow model. The method uses a data-driven error model to describe model structural error and jointly infers model parameters and structural error. Here, Bayesian inference is facilitated using high-performance computing and fast surrogate models. The surrogate models are constructed using machine learning techniques to emulate the response simulated by the computationally expensive groundwater model. We demonstrate in the real-world case study that explicitly accounting for model structural error yields parameter posterior distributions that are substantially different from those derived by classical Bayesian calibration that does not account for model structural error. In addition, the Bayesian method with the data-driven error model gives significantly more accurate predictions along with reasonable credible intervals.

  13. Performance analysis of adaptive equalization for coherent acoustic communications in the time-varying ocean environment.

    PubMed

    Preisig, James C

    2005-07-01

    Equations are derived for analyzing the performance of channel estimate based equalizers. The performance is characterized in terms of the mean squared soft decision error (σ²_s) of each equalizer. This error is decomposed into two components. These are the minimum achievable error (σ²_0) and the excess error (σ²_e). The former is the soft decision error that would be realized by the equalizer if the filter coefficient calculation were based upon perfect knowledge of the channel impulse response and statistics of the interfering noise field. The latter is the additional soft decision error that is realized due to errors in the estimates of these channel parameters. These expressions accurately predict the equalizer errors observed in the processing of experimental data by a channel estimate based decision feedback equalizer (DFE) and a passive time-reversal equalizer. Further expressions are presented that allow equalizer performance to be predicted given the scattering function of the acoustic channel. The analysis using these expressions yields insights into the features of surface scattering that most significantly impact equalizer performance in shallow water environments and motivates the implementation of a DFE that is robust with respect to channel estimation errors.

  14. Solutions to decrease a systematic error related to AAPH addition in the fluorescence-based ORAC assay.

    PubMed

    Mellado-Ortega, Elena; Zabalgogeazcoa, Iñigo; Vázquez de Aldana, Beatriz R; Arellano, Juan B

    2017-02-15

    Oxygen radical absorbance capacity (ORAC) assay in 96-well multi-detection plate readers is a rapid method to determine total antioxidant capacity (TAC) in biological samples. A disadvantage of this method is that the antioxidant inhibition reaction does not start in all of the 96 wells at the same time due to technical limitations when dispensing the free radical-generating azo initiator 2,2'-azobis (2-methyl-propanimidamide) dihydrochloride (AAPH). The time delay between wells yields a systematic error that causes statistically significant differences in TAC determination of antioxidant solutions depending on their plate position. We propose two alternative solutions to avoid this AAPH-dependent error in ORAC assays. Copyright © 2016 Elsevier Inc. All rights reserved.

  15. Coherent triplet excitation suppresses the heading error of the avian compass

    NASA Astrophysics Data System (ADS)

    Katsoprinakis, G. E.; Dellis, A. T.; Kominis, I. K.

    2010-08-01

    Radical-ion pair reactions are currently understood to underlie the biochemical magnetic compass of migratory birds. It was recently shown that radical-ion pair reactions form a rich playground for the application of quantum-information-science concepts and effects. We will show here that the intricate interplay between the quantum Zeno effect and the coherent excitation of radical-ion pairs leads to an exquisite angular sensitivity of the reaction yields. This results in a significant and previously unanticipated suppression of the avian compass heading error, opening the way to quantum engineering precision biological sensors.

  16. Phoneme Error Pattern by Heritage Speakers of Spanish on an English Word Recognition Test.

    PubMed

    Shi, Lu-Feng

    2017-04-01

    Heritage speakers acquire their native language from home use in their early childhood. As the native language is typically a minority language in the society, these individuals receive their formal education in the majority language and eventually develop greater competency with the majority than their native language. To date, there have not been specific research attempts to understand word recognition by heritage speakers. It is not clear if and to what degree we may infer from evidence based on bilingual listeners in general. This preliminary study investigated how heritage speakers of Spanish perform on an English word recognition test and analyzed their phoneme errors. A prospective, cross-sectional, observational design was employed. Twelve normal-hearing adult Spanish heritage speakers (four men, eight women, 20-38 yr old) participated in the study. Their language background was obtained through the Language Experience and Proficiency Questionnaire. Nine English monolingual listeners (three men, six women, 20-41 yr old) were also included for comparison purposes. Listeners were presented with 200 Northwestern University Auditory Test No. 6 words in quiet. They repeated each word orally and in writing. Their responses were scored by word, word-initial consonant, vowel, and word-final consonant. Performance was compared between groups with Student's t test or analysis of variance. Group-specific error patterns were primarily descriptive, but intergroup comparisons were made using 95% or 99% confidence intervals for proportional data. The two groups of listeners yielded comparable scores when their responses were examined by word, vowel, and final consonant. However, heritage speakers of Spanish misidentified significantly more word-initial consonants and had significantly more difficulty with initial /p, b, h/ than their monolingual peers. The two groups yielded similar patterns for vowel and word-final consonants, but heritage speakers made significantly fewer errors with /e/ and more errors with word-final /p, k/. Data reported in the present study lead to a twofold conclusion. On the one hand, normal-hearing heritage speakers of Spanish may misidentify English phonemes in patterns different from those of English monolingual listeners. Not all phoneme errors can be readily understood by comparing Spanish and English phonology, suggesting that Spanish heritage speakers differ in performance from other Spanish-English bilingual listeners. On the other hand, the absolute number of errors and the error pattern of most phonemes were comparable between English monolingual listeners and Spanish heritage speakers, suggesting that audiologists may assess word recognition in quiet in the same way for these two groups of listeners, if diagnosis is based on words, not phonemes. American Academy of Audiology

  17. A Criterion to Control Nonlinear Error in the Mixed-Mode Bending Test

    NASA Technical Reports Server (NTRS)

    Reeder, James R.

    2002-01-01

    The mixed-mode bending (MMB) test has been widely used to measure delamination toughness and was recently standardized by ASTM as Standard Test Method D6671-01. This simple test is a combination of the standard Mode I (opening) test and a Mode II (sliding) test. It uses a unidirectional composite specimen with an artificial delamination, subjected to bending loads, to characterize when a delamination will extend. When the displacements become large, the linear theory used to analyze the test results yields errors in the calculated toughness values. The current standard places no limit on the specimen loading, so test data that are significantly in error can be produced while still following the standard. A method of limiting the error that can be incurred in the calculated toughness values is needed. In this paper, nonlinear models of the MMB test are refined. One of the nonlinear models is then used to develop a simple criterion for prescribing conditions under which the nonlinear error will remain below 5%.

  18. Emotion perception, non-social cognition and symptoms as predictors of theory of mind in schizophrenia.

    PubMed

    Vaskinn, Anja; Andersson, Stein; Østefjells, Tiril; Andreassen, Ole A; Sundet, Kjetil

    2018-06-05

    Theory of mind (ToM) can be divided into cognitive and affective ToM, and a distinction can be made between overmentalizing and undermentalizing errors. Research has shown that ToM in schizophrenia is associated with non-social and social cognition, and with clinical symptoms. In this study, we investigate cognitive and clinical predictors of different ToM processes. Ninety-one individuals with schizophrenia participated. ToM was measured with the Movie for the Assessment of Social Cognition (MASC) yielding six scores (total ToM, cognitive ToM, affective ToM, overmentalizing errors, undermentalizing errors and no mentalizing errors). Neurocognition was indexed by a composite score based on the non-social cognitive tests in the MATRICS Consensus Cognitive Battery (MCCB). Emotion perception was measured with Emotion in Biological Motion (EmoBio), a point-light walker task. Clinical symptoms were assessed with the Positive and Negative Syndrome Scale (PANSS). Seventy-one healthy control (HC) participants completed the MASC. Individuals with schizophrenia showed large impairments compared to HC for all MASC scores, except overmentalizing errors. Hierarchical regression analyses with the six different MASC scores as dependent variables revealed that MCCB was a significant predictor of all MASC scores, explaining 8-18% of the variance. EmoBio increased the explained variance significantly, to 17-28%, except for overmentalizing errors. PANSS excited symptoms increased explained variance for total ToM, affective ToM and no mentalizing errors. Both social and non-social cognition were significant predictors of ToM. Overmentalizing was only predicted by non-social cognition. Excited symptoms contributed to overall and affective ToM, and to no mentalizing errors. Copyright © 2018 Elsevier Inc. All rights reserved.

  19. Discrimination of Aspergillus isolates at the species and strain level by matrix-assisted laser desorption/ionization time-of-flight mass spectrometry fingerprinting.

    PubMed

    Hettick, Justin M; Green, Brett J; Buskirk, Amanda D; Kashon, Michael L; Slaven, James E; Janotka, Erika; Blachere, Francoise M; Schmechel, Detlef; Beezhold, Donald H

    2008-09-15

    Matrix-assisted laser desorption/ionization time-of-flight mass spectrometry (MALDI-TOF MS) was used to generate highly reproducible mass spectral fingerprints for 12 species of fungi of the genus Aspergillus and 5 different strains of Aspergillus flavus. Prior to MALDI-TOF MS analysis, the fungi were subjected to three 1-min bead beating cycles in an acetonitrile/trifluoroacetic acid solvent. The mass spectra contain abundant peaks in the range of 5 to 20 kDa and may be used to discriminate between species unambiguously. A discriminant analysis using all peaks from the MALDI-TOF MS data yielded error rates for classification of 0 and 18.75% for resubstitution and cross-validation methods, respectively. If a subset of 28 significant peaks is chosen, resubstitution and cross-validation error rates are 0%. Discriminant analysis of the MALDI-TOF MS data for 5 strains of A. flavus using all peaks yielded error rates for classification of 0 and 5% for resubstitution and cross-validation methods, respectively. These data indicate that MALDI-TOF MS data may be used for unambiguous identification of members of the genus Aspergillus at both the species and strain levels.

  20. Obligation towards medical errors disclosure at a tertiary care hospital in Dubai, UAE

    PubMed Central

    Zaghloul, Ashraf Ahmad; Rahman, Syed Azizur; Abou El-Enein, Nagwa Younes

    2016-01-01

    OBJECTIVE: The study aimed to identify healthcare providers’ obligation towards medical errors disclosure as well as to study the association between the severity of the medical error and the intention to disclose the error to the patients and their families. DESIGN: A cross-sectional study design was followed to identify the magnitude of disclosure among healthcare providers in different departments at a randomly selected tertiary care hospital in Dubai. SETTING AND PARTICIPANTS: The total sample size accounted for 106 respondents. Data were collected using a questionnaire composed of two sections: demographic variables of the respondents, and variables relevant to medical error disclosure. RESULTS: Statistical analysis yielded a significant association between the obligation to disclose medical errors and being a male healthcare provider (χ2 = 5.1) or a physician (χ2 = 19.3). Obligation towards medical error disclosure was also significantly associated with not having committed any medical errors during the past year (χ2 = 9.8) and with any type of medical error regardless of the cause or extent of harm (χ2 = 8.7). Variables included in the binary logistic regression model were: status (Exp β (Physician) = 0.39, 95% CI 0.16–0.97), gender (Exp β (Male) = 4.81, 95% CI 1.84–12.54), and medical errors during the last year (Exp β (None) = 2.11, 95% CI 0.6–2.3). CONCLUSION: Education and training of physicians about disclosure conversations needs to start as early as medical school. Like the training in other competencies required of physicians, education in communicating about medical errors could help reduce physicians’ apprehension and make them more comfortable with disclosure conversations. PMID:27567766

  1. The Robustness of Acoustic Analogies

    NASA Technical Reports Server (NTRS)

    Freund, J. B.; Lele, S. K.; Wei, M.

    2004-01-01

    Acoustic analogies for the prediction of flow noise are exact rearrangements of the flow equations N(q) = 0 into a nominal sound source S(q) and a sound propagation operator L, such that L(q) = S(q), where q denotes the vector of flow variables. In practice, the sound source is typically modeled and the propagation operator inverted to make predictions. Since the rearrangement is exact, any sufficiently accurate model of the source will yield the correct sound, so other factors must determine the merits of any particular formulation. Using data from a two-dimensional mixing layer direct numerical simulation (DNS), we evaluate the robustness of two analogy formulations to different errors intentionally introduced into the source. The motivation is that since S cannot be perfectly modeled, analogies that are less sensitive to errors in S are preferable. Our assessment is made within the framework of Goldstein's generalized acoustic analogy, in which different choices of a base flow used in constructing L give different sources S and thus different analogies. A uniform base flow yields a Lighthill-like analogy, which we evaluate against a formulation in which the base flow is the actual mean flow of the DNS. The more complex mean flow formulation is found to be significantly more robust to errors in the energetic turbulent fluctuations, but its advantage is less pronounced when errors are made in the smaller scales.

  2. Multipollutant measurement error in air pollution epidemiology studies arising from predicting exposures with penalized regression splines

    PubMed Central

    Bergen, Silas; Sheppard, Lianne; Kaufman, Joel D.; Szpiro, Adam A.

    2016-01-01

    Summary Air pollution epidemiology studies are trending towards a multi-pollutant approach. In these studies, exposures at subject locations are unobserved and must be predicted using observed exposures at misaligned monitoring locations. This induces measurement error, which can bias the estimated health effects and affect standard error estimates. We characterize this measurement error and develop an analytic bias correction when using penalized regression splines to predict exposure. Our simulations show bias from multi-pollutant measurement error can be severe, and in opposite directions or simultaneously positive or negative. Our analytic bias correction combined with a non-parametric bootstrap yields accurate coverage of 95% confidence intervals. We apply our methodology to analyze the association of systolic blood pressure with PM2.5 and NO2 in the NIEHS Sister Study. We find that NO2 confounds the association of systolic blood pressure with PM2.5 and vice versa. Elevated systolic blood pressure was significantly associated with increased PM2.5 and decreased NO2. Correcting for measurement error bias strengthened these associations and widened 95% confidence intervals. PMID:27789915

  3. Mitigating Photon Jitter in Optical PPM Communication

    NASA Technical Reports Server (NTRS)

    Moision, Bruce

    2008-01-01

    A theoretical analysis of photon-arrival jitter in an optical pulse-position-modulation (PPM) communication channel has been performed, and now constitutes the basis of a methodology for designing receivers to compensate so that errors attributable to photon-arrival jitter would be minimized or nearly minimized. Photon-arrival jitter is an uncertainty in the estimated time of arrival of a photon relative to the boundaries of a PPM time slot. Photon-arrival jitter is attributable to two main causes: (1) receiver synchronization error [error in the receiver operation of partitioning time into PPM slots] and (2) random delay between the time of arrival of a photon at a detector and the generation, by the detector circuitry, of a pulse in response to the photon. For channels with sufficiently long time slots, photon-arrival jitter is negligible. However, as durations of PPM time slots are reduced in efforts to increase throughputs of optical PPM communication channels, photon-arrival jitter becomes a significant source of error, leading to significant degradation of performance if not taken into account in design. For the purpose of the analysis, a receiver was assumed to operate in a photon-starved regime, in which photon counts follow a Poisson distribution. The analysis included derivation of exact equations for symbol likelihoods in the presence of photon-arrival jitter. These equations describe what is well known in the art as a matched filter for a channel containing Gaussian noise. These equations would yield an optimum receiver if they could be implemented in practice. Because the exact equations may be too complex to implement in practice, approximations that would yield suboptimal receivers were also derived.
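    A minimal sketch of the symbol-likelihood idea follows, assuming Poisson slot counts, a Gaussian jitter model, and illustrative signal/background levels; the function names and numbers are hypothetical and are not the receiver equations derived in the report.

      import numpy as np
      from scipy.stats import norm

      def slot_fractions(num_slots, pulse_slot, jitter_sigma):
          # Fraction of the pulse energy landing in each slot when the photon
          # arrival time is smeared by zero-mean Gaussian jitter (slot units).
          edges = np.arange(num_slots + 1, dtype=float)
          cdf = norm.cdf(edges, loc=pulse_slot + 0.5, scale=jitter_sigma)
          return np.diff(cdf)

      def symbol_log_likelihoods(counts, n_signal, n_background, jitter_sigma):
          # Poisson log-likelihood of the observed slot counts for each
          # candidate PPM symbol, with the signal spread over slots by jitter.
          M = len(counts)
          out = np.empty(M)
          for c in range(M):
              lam = n_background + n_signal * slot_fractions(M, c, jitter_sigma)
              out[c] = np.sum(counts * np.log(lam) - lam)
          return out

      counts = np.array([0, 1, 5, 2, 0, 0, 1, 0])          # illustrative slot counts
      print(np.argmax(symbol_log_likelihoods(counts, n_signal=6.0,
                                             n_background=0.2, jitter_sigma=0.3)))

    When the jitter is negligible, the per-symbol statistic collapses onto the pulsed-slot count alone, which is the standard Poisson PPM decision rule.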

  4. Optimization of processing parameters of UAV integral structural components based on yield response

    NASA Astrophysics Data System (ADS)

    Chen, Yunsheng

    2018-05-01

    In order to improve the overall strength of an unmanned aerial vehicle (UAV), it is necessary to optimize the machining parameters of its structural components, which are affected by initial residual stress during processing. Because machining errors occur easily, an optimization model for the machining parameters of UAV integral structural components based on yield response is proposed. The finite element method is used to simulate the machining of the components, a prediction model for workpiece surface machining error is established, and the influence of the tool path on the residual stress of the integral structure is studied according to the stress state of the component. The yield response of the time-varying stiffness and the stress evolution mechanism of the UAV integral structure are analyzed. Simulation results show that the method optimizes the machining parameters of UAV integral structural components and improves the precision of UAV milling. Machining error is reduced, and deformation prediction and error compensation for the integral structural parts are realized, thus improving machining quality.

  5. Target motion tracking in MRI-guided transrectal robotic prostate biopsy.

    PubMed

    Tadayyon, Hadi; Lasso, Andras; Kaushal, Aradhana; Guion, Peter; Fichtinger, Gabor

    2011-11-01

    MRI-guided prostate needle biopsy requires compensation for organ motion between target planning and needle placement. Two questions are studied and answered in this paper: 1) is rigid registration sufficient to track the targets with an error smaller than the clinically significant size of prostate cancer, and 2) what is the effect of the number of intraoperative slices on registration accuracy and speed? We propose multislice-to-volume registration algorithms for tracking the biopsy targets within the prostate. Three orthogonal plus additional transverse intraoperative slices are acquired in the approximate center of the prostate and registered with a high-resolution target planning volume. Both rigid and deformable scenarios were implemented. Both simulated and clinical MRI-guided robotic prostate biopsy data were used to assess tracking accuracy. Average registration errors in clinical patient data were 2.6 mm for the rigid algorithm and 2.1 mm for the deformable algorithm. Rigid tracking appears to be promising, and using only three tracking slices yields significantly higher registration speed with an acceptable error.

  6. Determination of design and operation parameters for upper atmospheric research instrumentation to yield optimum resolution with deconvolution

    NASA Technical Reports Server (NTRS)

    Ioup, George E.; Ioup, Juliette W.

    1991-01-01

    The final report for work on the determination of design and operation parameters for upper atmospheric research instrumentation to yield optimum resolution with deconvolution is presented. Papers and theses prepared during the research report period are included. Among all the research results reported, note should be made of the specific investigation of the determination of design and operation parameters for upper atmospheric research instrumentation to yield optimum resolution with deconvolution. A methodology was developed to determine design and operation parameters for error minimization when deconvolution is included in data analysis. An error surface is plotted versus the signal-to-noise ratio (SNR) and all parameters of interest. Instrumental characteristics will determine a curve in this space. The SNR and parameter values which give the projection from the curve to the surface, corresponding to the smallest value for the error, are the optimum values. These values are constrained by the curve and so will not necessarily correspond to an absolute minimum in the error surface.
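    As a minimal sketch of this error-surface idea, the snippet below uses a purely illustrative error model and instrument constraint (the slit-width parameter, the 1/SNR error term, and the SNR-versus-width curve are all assumptions, not any instrument treated in the report).

      import numpy as np

      # Assumed error model: deconvolution error grows with noise (1/SNR) and
      # with the smoothing applied by the instrument (here a slit "width").
      width = np.linspace(0.1, 5.0, 500)                 # hypothetical design parameter
      def error_model(snr, w):
          return 1.0 / snr + 0.05 * w ** 2

      # Assumed instrument characteristic: a wider slit gathers more light, so
      # SNR rises with width; this defines a curve through the error surface.
      snr_of_width = 20.0 * width
      error_on_curve = error_model(snr_of_width, width)

      i_opt = np.argmin(error_on_curve)
      print(f"optimum width ≈ {width[i_opt]:.2f}, SNR ≈ {snr_of_width[i_opt]:.1f}, "
            f"error ≈ {error_on_curve[i_opt]:.3f}")

    The constrained minimum along the instrument curve, rather than the unconstrained minimum of the full surface, is what determines the optimum design and operation point.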

  7. The impacts of data constraints on the predictive performance of a general process-based crop model (PeakN-crop v1.0)

    NASA Astrophysics Data System (ADS)

    Caldararu, Silvia; Purves, Drew W.; Smith, Matthew J.

    2017-04-01

    Improving international food security under a changing climate and increasing human population will be greatly aided by improving our ability to modify, understand and predict crop growth. What we predominantly have at our disposal are either process-based models of crop physiology or statistical analyses of yield datasets, both of which suffer from various sources of error. In this paper, we present a generic process-based crop model (PeakN-crop v1.0) which we parametrise using a Bayesian model-fitting algorithm applied to three different data sources: satellite-based vegetation indices, eddy covariance productivity measurements and regional crop yields. We show that the model parametrised without data, based on prior knowledge of the parameters, can largely capture the observed behaviour, but the data-constrained model both greatly improves the model fit and reduces prediction uncertainty. We investigate the extent to which each dataset contributes to the model performance and show that while all data improve on the prior model fit, the satellite-based data and crop yield estimates are particularly important for reducing model error and uncertainty. Despite these improvements, we conclude that there are still significant knowledge gaps, in terms of available data for model parametrisation, but our study can help indicate the necessary data collection to improve our predictions of crop yields and crop responses to environmental changes.

  8. Interval sampling methods and measurement error: a computer simulation.

    PubMed

    Wirth, Oliver; Slaven, James; Taylor, Matthew A

    2014-01-01

    A simulation study was conducted to provide a more thorough account of measurement error associated with interval sampling methods. A computer program simulated the application of momentary time sampling, partial-interval recording, and whole-interval recording methods on target events randomly distributed across an observation period. The simulation yielded measures of error for multiple combinations of observation period, interval duration, event duration, and cumulative event duration. The simulations were conducted up to 100 times to yield measures of error variability. Although the present simulation confirmed some previously reported characteristics of interval sampling methods, it also revealed many new findings that pertain to each method's inherent strengths and weaknesses. The analysis and resulting error tables can help guide the selection of the most appropriate sampling method for observation-based behavioral assessments. © Society for the Experimental Analysis of Behavior.
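    A minimal sketch of such a simulation is given below; the observation length, interval length, event duration and number of events are arbitrary illustrative values, not the parameter grid of the published study.

      import numpy as np

      rng = np.random.default_rng(0)

      obs_len, interval, event_len, n_events = 600.0, 10.0, 4.0, 20   # seconds
      starts = np.sort(rng.uniform(0.0, obs_len - event_len, n_events))

      def occupied(t):
          # True if any event is in progress at time t.
          return np.any((starts <= t) & (t < starts + event_len))

      t_fine = np.arange(0.0, obs_len, 0.1)
      true_prop = np.mean([occupied(t) for t in t_fine])

      edges = np.arange(0.0, obs_len + interval, interval)
      mts = pir = wir = 0
      for a, b in zip(edges[:-1], edges[1:]):
          inside = [occupied(t) for t in np.arange(a, b, 0.1)]
          mts += occupied(b - 0.05)      # status sampled at the interval's end
          pir += any(inside)             # scored if the event occurs at all
          wir += all(inside)             # scored only if it fills the interval

      n_int = len(edges) - 1
      for name, score in (("MTS", mts), ("PIR", pir), ("WIR", wir)):
          est = score / n_int
          print(f"{name}: estimate = {est:.3f}, error = {est - true_prop:+.3f}")

    With short events and long intervals, partial-interval recording typically overestimates and whole-interval recording underestimates the true proportion of time, which is the kind of systematic error the study tabulates.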

  9. On how to avoid input and structural uncertainties corrupt the inference of hydrological parameters using a Bayesian framework

    NASA Astrophysics Data System (ADS)

    Hernández, Mario R.; Francés, Félix

    2015-04-01

    One phase of the hydrological model implementation process that contributes significantly to the uncertainty of hydrological predictions is calibration, in which values of the unknown model parameters are tuned by optimizing an objective function. An unsuitable error model (e.g., Standard Least Squares, SLS) introduces noise into the estimation of the parameters. The main sources of this noise are input errors and structural deficiencies of the hydrological model. The resulting biased calibrated parameters cause the model divergence phenomenon, in which the error variance of the (spatially and temporally) forecasted flows far exceeds the error variance in the fitting period, and provoke the loss of part or all of the physical meaning of the modeled processes; in other words, they yield a calibrated hydrological model that works well, but not for the right reasons. In addition, an unsuitable error model yields an unreliable predictive uncertainty assessment. Hence, with the aim of preventing these undesirable effects, this research focuses on the Bayesian joint inference (BJI) of both the hydrological and error model parameters, considering a general additive (GA) error model that allows for correlation, non-stationarity (in variance and bias) and non-normality of model residuals. As the hydrological model, a conceptual distributed model called TETIS has been used, with a particular split structure of the effective model parameters. Bayesian inference has been performed with the aid of a Markov Chain Monte Carlo (MCMC) algorithm called DREAM-ZS, which quantifies the uncertainty of the hydrological and error model parameters by obtaining the joint posterior probability distribution conditioned on the observed flows. The BJI methodology is a very powerful and reliable tool, but it must be used correctly; that is, if non-stationarity in error variance and bias is modeled, the Total Laws must be taken into account. The results of this research show that applying BJI with a GA error model improves the robustness of the hydrological parameters (diminishing the model divergence phenomenon) and improves the reliability of the streamflow predictive distribution, compared with the results of an unsuitable error model such as SLS. Finally, the most likely prediction in a validation period shows similar performance for the BJI+GA and SLS error models.

  10. Linking Field and Satellite Observations to Reveal Differences in Single vs. Double-Cropped Soybean Yields in Central Brazil

    NASA Astrophysics Data System (ADS)

    Jeffries, G. R.; Cohn, A.

    2016-12-01

    Soy-corn double cropping (DC) has been widely adopted in Central Brazil alongside single cropped (SC) soybean production. DC involves different cropping calendars and soy varieties, and may be associated with different crop yield patterns and volatility than SC. Study of the performance of the region's agriculture in a changing climate depends on tracking differences in the productivity of SC vs. DC, but has been limited by crop yield data that conflate the two systems. We predicted SC and DC yields across Central Brazil, drawing on field observations and remotely sensed data. We first modeled field yield estimates as a function of remotely sensed DC status and vegetation index (VI) metrics, and other management and biophysical factors. We then used the estimated statistical model to predict SC and DC soybean yields at each 500 m grid cell of Central Brazil for harvest years 2001-2015. The yield estimation model was constructed using 1) a repeated cross-sectional survey of soybean yields and management factors for years 2007-2015, 2) a custom agricultural land cover classification dataset which assimilates earlier datasets for the region, and 3) 500 m 8-day MODIS image composites used to calculate the wide dynamic range vegetation index (WDRVI) and derivative metrics such as area under the curve for WDRVI values in critical crop development periods. A statistical yield estimation model which primarily entails WDRVI metrics, DC status, and spatial fixed effects was developed on a subset of the yield dataset. Model validation was conducted by predicting previously withheld yield records, and then assessing error and goodness-of-fit for predicted values with metrics including root mean squared error (RMSE), mean squared error (MSE), and R2. We found a statistical yield estimation model which incorporates WDRVI and DC status to be an effective way to estimate crop yields over the region. Statistical properties of the resulting gridded yield dataset may be valuable for understanding linkages between crop yields, farm management factors, and climate.
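    For reference, a minimal sketch of the WDRVI and an area-under-the-curve metric over an assumed critical window follows; the weighting coefficient, dates and reflectance series are illustrative values, not data from the study.

      import numpy as np

      def wdrvi(nir, red, alpha=0.2):
          # Wide dynamic range vegetation index; alpha = 0.2 is an assumed weight.
          nir, red = np.asarray(nir, float), np.asarray(red, float)
          return (alpha * nir - red) / (alpha * nir + red)

      def auc(values, dates, start, end):
          # Trapezoidal area under the index over an assumed critical period (days).
          m = (dates >= start) & (dates <= end)
          v, d = values[m], dates[m]
          return float(np.sum(0.5 * (v[1:] + v[:-1]) * np.diff(d)))

      dates = np.arange(0, 120, 8)                       # 8-day composites
      nir = 0.25 + 0.25 * np.exp(-((dates - 60) / 25.0) ** 2)   # illustrative reflectances
      red = 0.12 - 0.07 * np.exp(-((dates - 60) / 25.0) ** 2)
      index = wdrvi(nir, red)
      print(f"AUC over days 40-80: {auc(index, dates, 40, 80):.2f}")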

  11. Determination of Earth orientation using the Global Positioning System

    NASA Technical Reports Server (NTRS)

    Freedman, A. P.

    1989-01-01

    Modern spacecraft tracking and navigation require highly accurate Earth-orientation parameters. For near-real-time applications, errors in these quantities and their extrapolated values are a significant error source. A globally distributed network of high-precision receivers observing the full Global Positioning System (GPS) configuration of 18 or more satellites may be an efficient and economical method for the rapid determination of short-term variations in Earth orientation. A covariance analysis using the JPL Orbit Analysis and Simulation Software (OASIS) was performed to evaluate the errors associated with GPS measurements of Earth orientation. These GPS measurements appear to be highly competitive with those from other techniques and can potentially yield frequent and reliable centimeter-level Earth-orientation information while simultaneously allowing the oversubscribed Deep Space Network (DSN) antennas to be used more for direct project support.

  12. A method for in situ absolute DD yield calibration of neutron time-of-flight detectors on OMEGA using CR-39-based proton detectors.

    PubMed

    Waugh, C J; Rosenberg, M J; Zylstra, A B; Frenje, J A; Séguin, F H; Petrasso, R D; Glebov, V Yu; Sangster, T C; Stoeckl, C

    2015-05-01

    Neutron time of flight (nTOF) detectors are used routinely to measure the absolute DD neutron yield at OMEGA. To check the DD yield calibration of these detectors, originally calibrated using indium activation systems, which in turn were cross-calibrated to NOVA nTOF detectors in the early 1990s, a direct in situ calibration method using CR-39 range filter proton detectors has been successfully developed. By measuring DD neutron and proton yields from a series of exploding pusher implosions at OMEGA, a yield calibration coefficient of 1.09 ± 0.02 (relative to the previous coefficient) was determined for the 3 m nTOF detector. In addition, comparison of these and other shots indicates that significant reduction in charged particle flux anisotropies is achieved when bang time occurs significantly (on the order of 500 ps) after the trailing edge of the laser pulse. This is an important observation as the main source of the yield calibration error is due to particle anisotropies caused by field effects. The results indicate that the CR-39-nTOF in situ calibration method can serve as a valuable technique for calibrating and reducing the uncertainty in the DD absolute yield calibration of nTOF detector systems on OMEGA, the National Ignition Facility, and the Laser Megajoule.
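    The calibration itself reduces to a ratio of proton-inferred to nTOF-reported yields averaged over shots; a minimal sketch with made-up yield numbers (not the OMEGA data) is:

      import numpy as np

      # Illustrative per-shot yields; the coefficient is the mean CR-39/nTOF ratio
      # and the quoted uncertainty is its standard error over the shot series.
      proton_yield = np.array([1.05e10, 8.9e9, 1.21e10, 9.6e9])   # assumed CR-39 yields
      ntof_yield   = np.array([9.6e9,  8.2e9, 1.12e10, 8.8e9])    # assumed nTOF yields

      ratios = proton_yield / ntof_yield
      coeff = ratios.mean()
      stderr = ratios.std(ddof=1) / np.sqrt(len(ratios))
      print(f"calibration coefficient = {coeff:.3f} ± {stderr:.3f}")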

  13. A method for in situ absolute DD yield calibration of neutron time-of-flight detectors on OMEGA using CR-39-based proton detectors

    DOE PAGES

    Waugh, C. J.; Rosenberg, M. J.; Zylstra, A. B.; ...

    2015-05-27

    Neutron time of flight (nTOF) detectors are used routinely to measure the absolute DD neutron yield at OMEGA. To check the DD yield calibration of these detectors, originally calibrated using indium activation systems, which in turn were cross-calibrated to NOVA nTOF detectors in the early 1990s, a direct in situ calibration method using CR-39 range filter proton detectors has been successfully developed. By measuring DD neutron and proton yields from a series of exploding pusher implosions at OMEGA, a yield calibration coefficient of 1.09 ± 0.02 (relative to the previous coefficient) was determined for the 3 m nTOF detector. In addition, comparison of these and other shots indicates that significant reduction in charged particle flux anisotropies is achieved when bang time occurs significantly (on the order of 500 ps) after the trailing edge of the laser pulse. This is an important observation as the main source of the yield calibration error is due to particle anisotropies caused by field effects. The results indicate that the CR-39-nTOF in situ calibration method can serve as a valuable technique for calibrating and reducing the uncertainty in the DD absolute yield calibration of nTOF detector systems on OMEGA, the National Ignition Facility, and the Laser Megajoule.

  14. A method for in situ absolute DD yield calibration of neutron time-of-flight detectors on OMEGA using CR-39-based proton detectors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Waugh, C. J.; Rosenberg, M. J.; Zylstra, A. B.

    Neutron time of flight (nTOF) detectors are used routinely to measure the absolute DD neutron yield at OMEGA. To check the DD yield calibration of these detectors, originally calibrated using indium activation systems, which in turn were cross-calibrated to NOVA nTOF detectors in the early 1990s, a direct in situ calibration method using CR-39 range filter proton detectors has been successfully developed. By measuring DD neutron and proton yields from a series of exploding pusher implosions at OMEGA, a yield calibration coefficient of 1.09 ± 0.02 (relative to the previous coefficient) was determined for the 3 m nTOF detector. In addition, comparison of these and other shots indicates that significant reduction in charged particle flux anisotropies is achieved when bang time occurs significantly (on the order of 500 ps) after the trailing edge of the laser pulse. This is an important observation as the main source of the yield calibration error is due to particle anisotropies caused by field effects. The results indicate that the CR-39-nTOF in situ calibration method can serve as a valuable technique for calibrating and reducing the uncertainty in the DD absolute yield calibration of nTOF detector systems on OMEGA, the National Ignition Facility, and the Laser Megajoule.

  15. A method for in situ absolute DD yield calibration of neutron time-of-flight detectors on OMEGA using CR-39-based proton detectors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Waugh, C. J., E-mail: cjwaugh@mit.edu; Zylstra, A. B.; Frenje, J. A.

    2015-05-15

    Neutron time of flight (nTOF) detectors are used routinely to measure the absolute DD neutron yield at OMEGA. To check the DD yield calibration of these detectors, originally calibrated using indium activation systems, which in turn were cross-calibrated to NOVA nTOF detectors in the early 1990s, a direct in situ calibration method using CR-39 range filter proton detectors has been successfully developed. By measuring DD neutron and proton yields from a series of exploding pusher implosions at OMEGA, a yield calibration coefficient of 1.09 ± 0.02 (relative to the previous coefficient) was determined for the 3 m nTOF detector. In addition, comparison of these and other shots indicates that significant reduction in charged particle flux anisotropies is achieved when bang time occurs significantly (on the order of 500 ps) after the trailing edge of the laser pulse. This is an important observation as the main source of the yield calibration error is due to particle anisotropies caused by field effects. The results indicate that the CR-39-nTOF in situ calibration method can serve as a valuable technique for calibrating and reducing the uncertainty in the DD absolute yield calibration of nTOF detector systems on OMEGA, the National Ignition Facility, and the Laser Megajoule.

  16. Modifying Spearman's Attenuation Equation to Yield Partial Corrections for Measurement Error--With Application to Sample Size Calculations

    ERIC Educational Resources Information Center

    Nicewander, W. Alan

    2018-01-01

    Spearman's correction for attenuation (measurement error) corrects a correlation coefficient for measurement errors in either-or-both of two variables, and follows from the assumptions of classical test theory. Spearman's equation removes all measurement error from a correlation coefficient which translates into "increasing the reliability of…
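    A minimal sketch of the correction, including the partial form in which only one variable is corrected, is given below; the reliabilities and observed correlation are illustrative values, not data from the article.

      import math

      def disattenuate(r_xy, rel_x, rel_y=1.0):
          # Spearman's correction: divide the observed correlation by the square
          # root of the product of the reliabilities. Passing rel_y = 1.0 corrects
          # for measurement error in x only (a "partial" correction).
          return r_xy / math.sqrt(rel_x * rel_y)

      print(disattenuate(0.42, 0.80, 0.75))   # correct both variables -> ~0.54
      print(disattenuate(0.42, 0.80))         # correct x only -> ~0.47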

  17. Error analysis for reducing noisy wide-gap concentric cylinder rheometric data for nonlinear fluids - Theory and applications

    NASA Technical Reports Server (NTRS)

    Borgia, Andrea; Spera, Frank J.

    1990-01-01

    This work discusses the propagation of errors for the recovery of the shear rate from wide-gap concentric cylinder viscometric measurements of non-Newtonian fluids. A least-square regression of stress on angular velocity data to a system of arbitrary functions is used to propagate the errors for the series solution to the viscometric flow developed by Krieger and Elrod (1953) and Pawlowski (1953) ('power-law' approximation) and for the first term of the series developed by Krieger (1968). A numerical experiment shows that, for measurements affected by significant errors, the first term of the Krieger-Elrod-Pawlowski series ('infinite radius' approximation) and the power-law approximation may recover the shear rate with equal accuracy as the full Krieger-Elrod-Pawlowski solution. An experiment on a clay slurry indicates that the clay has a larger yield stress at rest than during shearing, and that, for the range of shear rates investigated, a four-parameter constitutive equation approximates reasonably well its rheology. The error analysis presented is useful for studying the rheology of fluids such as particle suspensions, slurries, foams, and magma.

  18. Analyzing self-controlled case series data when case confirmation rates are estimated from an internal validation sample.

    PubMed

    Xu, Stanley; Clarke, Christina L; Newcomer, Sophia R; Daley, Matthew F; Glanz, Jason M

    2018-05-16

    Vaccine safety studies are often electronic health record (EHR)-based observational studies. These studies often face significant methodological challenges, including confounding and misclassification of adverse event. Vaccine safety researchers use self-controlled case series (SCCS) study design to handle confounding effect and employ medical chart review to ascertain cases that are identified using EHR data. However, for common adverse events, limited resources often make it impossible to adjudicate all adverse events observed in electronic data. In this paper, we considered four approaches for analyzing SCCS data with confirmation rates estimated from an internal validation sample: (1) observed cases, (2) confirmed cases only, (3) known confirmation rate, and (4) multiple imputation (MI). We conducted a simulation study to evaluate these four approaches using type I error rates, percent bias, and empirical power. Our simulation results suggest that when misclassification of adverse events is present, approaches such as observed cases, confirmed case only, and known confirmation rate may inflate the type I error, yield biased point estimates, and affect statistical power. The multiple imputation approach considers the uncertainty of estimated confirmation rates from an internal validation sample, yields a proper type I error rate, largely unbiased point estimate, proper variance estimate, and statistical power. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
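    A minimal sketch of the multiple-imputation idea follows, using a deliberately simplified rate-ratio estimator and made-up counts rather than the paper's SCCS likelihood; the confirmation rate, its standard error, and the window person-times are all assumptions.

      import numpy as np

      rng = np.random.default_rng(1)

      def pooled_log_rr(obs_risk, obs_control, time_risk, time_control,
                        conf_rate, conf_se, M=200):
          # Impute confirmed case counts M times with an uncertain confirmation
          # rate, compute a crude log rate ratio each time, pool with Rubin's rules.
          ests, variances = [], []
          for _ in range(M):
              p = np.clip(rng.normal(conf_rate, conf_se), 0.0, 1.0)
              a = rng.binomial(obs_risk, p)          # confirmed cases, risk window
              b = rng.binomial(obs_control, p)       # confirmed cases, control window
              if a == 0 or b == 0:
                  continue
              ests.append(np.log((a / time_risk) / (b / time_control)))
              variances.append(1.0 / a + 1.0 / b)
          ests, variances = np.array(ests), np.array(variances)
          qbar, ubar = ests.mean(), variances.mean()
          between = ests.var(ddof=1)
          total_var = ubar + (1 + 1 / len(ests)) * between     # Rubin's rules
          return qbar, np.sqrt(total_var)

      est, se = pooled_log_rr(obs_risk=60, obs_control=220, time_risk=1.0,
                              time_control=5.0, conf_rate=0.7, conf_se=0.05)
      print(f"log rate ratio = {est:.3f} (SE {se:.3f})")

    The between-imputation variance term is what carries the uncertainty of the estimated confirmation rate into the final standard error, which is why this approach preserves the type I error rate.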

  19. Memory and the Moses illusion: failures to detect contradictions with stored knowledge yield negative memorial consequences.

    PubMed

    Bottoms, Hayden C; Eslick, Andrea N; Marsh, Elizabeth J

    2010-08-01

    Although contradictions with stored knowledge are common in daily life, people often fail to notice them. For example, in the Moses illusion, participants fail to notice errors in questions such as "How many animals of each kind did Moses take on the Ark?" despite later showing knowledge that the Biblical reference is to Noah, not Moses. We examined whether error prevalence affected participants' ability to detect distortions in questions, and whether this in turn had memorial consequences. Many of the errors were overlooked, but participants were better able to catch them when they were more common. More generally, the failure to detect errors had negative memorial consequences, increasing the likelihood that the errors were used to answer later general knowledge questions. Methodological implications of this finding are discussed, as it suggests that typical analyses likely underestimate the size of the Moses illusion. Overall, answering distorted questions can yield errors in the knowledge base; most importantly, prior knowledge does not protect against these negative memorial consequences.

  20. Rapid Measurement and Correction of Phase Errors from B0 Eddy Currents: Impact on Image Quality for Non-Cartesian Imaging

    PubMed Central

    Brodsky, Ethan K.; Klaers, Jessica L.; Samsonov, Alexey A.; Kijowski, Richard; Block, Walter F.

    2014-01-01

    Non-Cartesian imaging sequences and navigational methods can be more sensitive to scanner imperfections that have little impact on conventional clinical sequences, an issue which has repeatedly complicated the commercialization of these techniques by frustrating transitions to multi-center evaluations. One such imperfection is phase errors caused by resonant frequency shifts from eddy currents induced in the cryostat by time-varying gradients, a phenomenon known as B0 eddy currents. These phase errors can have a substantial impact on sequences that use ramp sampling, bipolar gradients, and readouts at varying azimuthal angles. We present a method for measuring and correcting phase errors from B0 eddy currents and examine the results on two different scanner models. This technique yields significant improvements in image quality for high-resolution joint imaging on certain scanners. The results suggest that correction of short-time B0 eddy currents in manufacturer-provided service routines would simplify adoption of non-Cartesian sampling methods. PMID:22488532

  1. Broadband linearisation of high-efficiency power amplifiers

    NASA Technical Reports Server (NTRS)

    Kenington, Peter B.; Parsons, Kieran J.; Bennett, David W.

    1993-01-01

    A feedforward-based amplifier linearization technique is presented which is capable of yielding significant improvements in both linearity and power efficiency over conventional amplifier classes (e.g. class-A or class-AB). Theoretical and practical results are presented showing that class-C stages may be used for both the main and error amplifiers, yielding practical efficiencies well in excess of 30 percent, with theoretical efficiencies of much greater than 40 percent being possible. The levels of linearity which may be achieved meet the requirements of most satellite systems; however, if greater linearity is required, the technique may be used in addition to conventional pre-distortion techniques.

  2. Evaluation of the CEAS model for barley yields in North Dakota and Minnesota

    NASA Technical Reports Server (NTRS)

    Barnett, T. L. (Principal Investigator)

    1981-01-01

    The CEAS yield model is based upon multiple regression analysis at the CRD and state levels. For the historical time series, yield is regressed on a set of variables derived from monthly mean temperature and monthly precipitation. Technological trend is represented by piecewise linear and/or quadratic functions of year. Indicators of yield reliability obtained from a ten-year bootstrap test (1970-79) demonstrated that biases are small and that performance, as indicated by the root mean square errors, is acceptable for the intended application; however, model response for individual years, particularly unusual years, is not very reliable and shows some large errors. The model is objective, adequate, timely, simple and not costly. It considers scientific knowledge on a broad scale but not in detail, and does not provide a good current measure of modeled yield reliability.
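    A minimal sketch of a model of this form follows, with synthetic weather and yield data standing in for the historical CRD series; all coefficients, month choices and the trend breakpoint are assumptions for illustration.

      import numpy as np

      rng = np.random.default_rng(2)
      years = np.arange(1950, 1980)
      temp = rng.normal(18, 2, (years.size, 3))          # e.g. three monthly mean temperatures
      precip = rng.normal(60, 20, (years.size, 3))       # e.g. three monthly precipitation totals
      trend = np.clip(years - 1960, 0, None)             # piecewise-linear technology trend
      yield_obs = (20 + 0.4 * trend - 0.6 * (temp[:, 2] - 18)
                   + 0.05 * precip[:, 1] + rng.normal(0, 1.5, years.size))

      # Multiple regression of yield on trend plus the monthly weather variables.
      X = np.column_stack([np.ones(years.size), trend, temp, precip])
      coef, *_ = np.linalg.lstsq(X, yield_obs, rcond=None)
      rmse = np.sqrt(np.mean((yield_obs - X @ coef) ** 2))
      print(f"in-sample RMSE = {rmse:.2f}")

    A bootstrap test like the one described would instead refit the model with each year withheld in turn and compare the predictions against the withheld yields.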

  3. Evidence for Direct CP Violation in the Measurement of the Cabibbo-Kobayashi-Maskawa Angle γ with B± → D(*)K(*)± Decays

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Amo Sanchez, P. del; Lees, J. P.; Poireau, V.

    2010-09-17

    We report the measurement of the Cabibbo-Kobayashi-Maskawa CP-violating angle γ through a Dalitz plot analysis of neutral D-meson decays to K_S^0 π+π− and K_S^0 K+K− produced in the processes B± → DK±, B± → D*K± with D* → Dπ0, Dγ, and B± → DK*± with K*± → K_S^0 π±, using 468 million BB pairs collected by the BABAR detector at the PEP-II asymmetric-energy e+e− collider at SLAC. We measure γ = (68 ± 14 ± 4 ± 3)° (modulo 180°), where the first error is statistical, the second is the experimental systematic uncertainty, and the third reflects the uncertainty in the description of the neutral D decay amplitudes. This result is inconsistent with γ = 0 (no direct CP violation) with a significance of 3.5 standard deviations.

  4. Calibrated Bayes Factors Should Not Be Used: A Reply to Hoijtink, van Kooten, and Hulsker.

    PubMed

    Morey, Richard D; Wagenmakers, Eric-Jan; Rouder, Jeffrey N

    2016-01-01

    Hoijtink, van Kooten, and Hulsker (2016) present a method for choosing the prior distribution for an analysis with Bayes factor that is based on controlling error rates, which they advocate as an alternative to our more subjective methods (Morey & Rouder, 2014; Rouder, Speckman, Sun, Morey, & Iverson, 2009; Wagenmakers, Wetzels, Borsboom, & van der Maas, 2011). We show that the method they advocate amounts to a simple significance test, and that the resulting Bayes factors are not interpretable. Additionally, their method fails in common circumstances, and has the potential to yield arbitrarily high Type II error rates. After critiquing their method, we outline the position on subjectivity that underlies our advocacy of Bayes factors.

  5. A Systems Modeling Approach to Forecast Corn Economic Optimum Nitrogen Rate.

    PubMed

    Puntel, Laila A; Sawyer, John E; Barker, Daniel W; Thorburn, Peter J; Castellano, Michael J; Moore, Kenneth J; VanLoocke, Andrew; Heaton, Emily A; Archontoulis, Sotirios V

    2018-01-01

    Historically crop models have been used to evaluate crop yield responses to nitrogen (N) rates after harvest when it is too late for the farmers to make in-season adjustments. We hypothesize that the use of a crop model as an in-season forecast tool will improve current N decision-making. To explore this, we used the Agricultural Production Systems sIMulator (APSIM) calibrated with long-term experimental data for central Iowa, USA (16-years in continuous corn and 15-years in soybean-corn rotation) combined with actual weather data up to a specific crop stage and historical weather data thereafter. The objectives were to: (1) evaluate the accuracy and uncertainty of corn yield and economic optimum N rate (EONR) predictions at four forecast times (planting time, 6th and 12th leaf, and silking phenological stages); (2) determine whether the use of analogous historical weather years based on precipitation and temperature patterns as opposed to using a 35-year dataset could improve the accuracy of the forecast; and (3) quantify the value added by the crop model in predicting annual EONR and yields using the site-mean EONR and the yield at the EONR to benchmark predicted values. Results indicated that the mean corn yield predictions at planting time ( R 2 = 0.77) using 35-years of historical weather was close to the observed and predicted yield at maturity ( R 2 = 0.81). Across all forecasting times, the EONR predictions were more accurate in corn-corn than soybean-corn rotation (relative root mean square error, RRMSE, of 25 vs. 45%, respectively). At planting time, the APSIM model predicted the direction of optimum N rates (above, below or at average site-mean EONR) in 62% of the cases examined ( n = 31) with an average error range of ±38 kg N ha -1 (22% of the average N rate). Across all forecast times, prediction error of EONR was about three times higher than yield predictions. The use of the 35-year weather record was better than using selected historical weather years to forecast (RRMSE was on average 3% lower). Overall, the proposed approach of using the crop model as a forecasting tool could improve year-to-year predictability of corn yields and optimum N rates. Further improvements in modeling and set-up protocols are needed toward more accurate forecast, especially for extreme weather years with the most significant economic and environmental cost.
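    The economic optimum itself comes from a yield-response curve; a minimal sketch with an assumed quadratic response and assumed prices (illustrative numbers, not APSIM output) is:

      import numpy as np

      # yield(N) = a + b*N + c*N^2 (kg/ha), with c < 0; all values assumed.
      a, b, c = 8000.0, 25.0, -0.06
      corn_price = 0.15                      # $ per kg grain (assumed)
      n_price = 0.9                          # $ per kg N (assumed)

      # Profit(N) = corn_price*(a + b*N + c*N^2) - n_price*N; set dProfit/dN = 0.
      eonr = (n_price / corn_price - b) / (2 * c)
      yield_at_eonr = a + b * eonr + c * eonr ** 2
      print(f"EONR ≈ {eonr:.0f} kg N/ha, yield at EONR ≈ {yield_at_eonr:.0f} kg/ha")

    Forecasting the EONR before the season ends amounts to predicting the response coefficients from simulated yields under many candidate weather realizations, which is where the crop model enters.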

  6. A Systems Modeling Approach to Forecast Corn Economic Optimum Nitrogen Rate

    PubMed Central

    Puntel, Laila A.; Sawyer, John E.; Barker, Daniel W.; Thorburn, Peter J.; Castellano, Michael J.; Moore, Kenneth J.; VanLoocke, Andrew; Heaton, Emily A.; Archontoulis, Sotirios V.

    2018-01-01

    Historically crop models have been used to evaluate crop yield responses to nitrogen (N) rates after harvest when it is too late for the farmers to make in-season adjustments. We hypothesize that the use of a crop model as an in-season forecast tool will improve current N decision-making. To explore this, we used the Agricultural Production Systems sIMulator (APSIM) calibrated with long-term experimental data for central Iowa, USA (16-years in continuous corn and 15-years in soybean-corn rotation) combined with actual weather data up to a specific crop stage and historical weather data thereafter. The objectives were to: (1) evaluate the accuracy and uncertainty of corn yield and economic optimum N rate (EONR) predictions at four forecast times (planting time, 6th and 12th leaf, and silking phenological stages); (2) determine whether the use of analogous historical weather years based on precipitation and temperature patterns as opposed to using a 35-year dataset could improve the accuracy of the forecast; and (3) quantify the value added by the crop model in predicting annual EONR and yields using the site-mean EONR and the yield at the EONR to benchmark predicted values. Results indicated that the mean corn yield predictions at planting time (R2 = 0.77) using 35-years of historical weather was close to the observed and predicted yield at maturity (R2 = 0.81). Across all forecasting times, the EONR predictions were more accurate in corn-corn than soybean-corn rotation (relative root mean square error, RRMSE, of 25 vs. 45%, respectively). At planting time, the APSIM model predicted the direction of optimum N rates (above, below or at average site-mean EONR) in 62% of the cases examined (n = 31) with an average error range of ±38 kg N ha−1 (22% of the average N rate). Across all forecast times, prediction error of EONR was about three times higher than yield predictions. The use of the 35-year weather record was better than using selected historical weather years to forecast (RRMSE was on average 3% lower). Overall, the proposed approach of using the crop model as a forecasting tool could improve year-to-year predictability of corn yields and optimum N rates. Further improvements in modeling and set-up protocols are needed toward more accurate forecast, especially for extreme weather years with the most significant economic and environmental cost. PMID:29706974

  7. Sudden Possibilities: Porpoises, Eggcorns, and Error

    ERIC Educational Resources Information Center

    Crovitz, Darren

    2011-01-01

    This article discusses how amusing mistakes can make for serious language instruction. The notion that close analysis of language errors can yield insight into how one thinks and learns seems fundamentally obvious. Yet until relatively recently, language errors were primarily treated as indicators of learner deficiency rather than opportunities to…

  8. A provisional effective evaluation when errors are present in independent variables

    NASA Technical Reports Server (NTRS)

    Gurin, L. S.

    1983-01-01

    Algorithms are examined for evaluating the parameters of a regression model when there are errors in the independent variables. The algorithms are fast and the estimates they yield are stable with respect to the correlation of errors and measurements of both the dependent variable and the independent variables.
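    One classical estimator of this kind is the method-of-moments (reliability-ratio) correction for a noisy regressor; the sketch below uses simulated data and assumes the measurement-error variance is known, which is only for illustration and is not the algorithm of the report.

      import numpy as np

      rng = np.random.default_rng(3)

      # When the independent variable is observed with error, OLS attenuates the
      # slope; dividing by the reliability ratio var(x_true)/var(x_obs) undoes it.
      n = 5000
      x_true = rng.normal(0, 1, n)
      x_obs = x_true + rng.normal(0, 0.5, n)          # measurement error, sd = 0.5
      y = 2.0 * x_true + rng.normal(0, 1, n)

      slope_ols = np.cov(x_obs, y)[0, 1] / np.var(x_obs, ddof=1)
      reliability = 1.0 / (1.0 + 0.5 ** 2)            # assumed known error variance
      print(f"naive slope {slope_ols:.2f}, corrected slope {slope_ols / reliability:.2f}")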

  9. Selective Weighted Least Squares Method for Fourier Transform Infrared Quantitative Analysis.

    PubMed

    Wang, Xin; Li, Yan; Wei, Haoyun; Chen, Xia

    2017-06-01

    Classical least squares (CLS) regression is a popular multivariate statistical method used frequently for quantitative analysis using Fourier transform infrared (FT-IR) spectrometry. Classical least squares provides the best unbiased estimator for uncorrelated residual errors with zero mean and equal variance. However, the noise in FT-IR spectra, which accounts for a large portion of the residual errors, is heteroscedastic. Thus, if this noise with zero mean dominates in the residual errors, the weighted least squares (WLS) regression method described in this paper is a better estimator than CLS. However, if bias errors, such as the residual baseline error, are significant, WLS may perform worse than CLS. In this paper, we compare the effect of noise and bias error in using CLS and WLS in quantitative analysis. Results indicated that for wavenumbers with low absorbance, the bias error significantly affected the error, such that the performance of CLS is better than that of WLS. However, for wavenumbers with high absorbance, the noise significantly affected the error, and WLS proves to be better than CLS. Thus, we propose a selective weighted least squares (SWLS) regression that processes data with different wavenumbers using either CLS or WLS based on a selection criterion, i.e., lower or higher than an absorbance threshold. The effects of various factors on the optimal threshold value (OTV) for SWLS have been studied through numerical simulations. These studies reported that: (1) the concentration and the analyte type had minimal effect on OTV; and (2) the major factor that influences OTV is the ratio between the bias error and the standard deviation of the noise. The last part of this paper is dedicated to quantitative analysis of methane gas spectra, and methane/toluene mixtures gas spectra as measured using FT-IR spectrometry and CLS, WLS, and SWLS. The standard error of prediction (SEP), bias of prediction (bias), and the residual sum of squares of the errors (RSS) from the three quantitative analyses were compared. In methane gas analysis, SWLS yielded the lowest SEP and RSS among the three methods. In methane/toluene mixture gas analysis, a modification of the SWLS has been presented to tackle the bias error from other components. The SWLS without modification presents the lowest SEP in all cases but not bias and RSS. The modification of SWLS reduced the bias, which showed a lower RSS than CLS, especially for small components.
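    A minimal sketch of the selection idea follows; folding the CLS- and WLS-treated wavenumbers into a single weighted solve is my interpretation of the scheme, and the reference spectra, noise model and threshold are all assumed for illustration.

      import numpy as np

      def swls_concentrations(A, K, noise_sd, threshold):
          # A: measured absorbance spectrum; K: pure-component reference spectra
          # (n_wavenumbers x n_components); noise_sd: assumed per-wavenumber noise.
          w = np.ones_like(A)                                       # CLS treatment (equal weights)
          high = A >= threshold
          w[high] = (noise_sd[high].mean() / noise_sd[high]) ** 2   # relative WLS weights
          return np.linalg.solve(K.T @ (w[:, None] * K), K.T @ (w * A))

      rng = np.random.default_rng(4)
      wn = np.linspace(1000, 3000, 400)
      K = np.column_stack([np.exp(-((wn - 1700) / 60) ** 2),        # assumed component 1
                           np.exp(-((wn - 2900) / 80) ** 2)])       # assumed component 2
      c_true = np.array([0.8, 0.3])
      noise_sd = 0.002 + 0.01 * (K @ c_true)                        # heteroscedastic noise model
      A = K @ c_true + rng.normal(0.0, noise_sd)
      print(swls_concentrations(A, K, noise_sd, threshold=0.1))     # ~[0.8, 0.3]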

  10. National Aeronautics and Space Administration "threat and error" model applied to pediatric cardiac surgery: error cycles precede ∼85% of patient deaths.

    PubMed

    Hickey, Edward J; Nosikova, Yaroslavna; Pham-Hung, Eric; Gritti, Michael; Schwartz, Steven; Caldarone, Christopher A; Redington, Andrew; Van Arsdell, Glen S

    2015-02-01

    We hypothesized that the National Aeronautics and Space Administration "threat and error" model (which is derived from analyzing >30,000 commercial flights, and explains >90% of crashes) is directly applicable to pediatric cardiac surgery. We implemented a unit-wide performance initiative, whereby every surgical admission constitutes a "flight" and is tracked in real time, with the aim of identifying errors. The first 500 consecutive patients (524 flights) were analyzed, with an emphasis on the relationship between error cycles and permanent harmful outcomes. Among 524 patient flights (risk adjustment for congenital heart surgery category: 1-6; median: 2) 68 (13%) involved residual hemodynamic lesions, 13 (2.5%) permanent end-organ injuries, and 7 deaths (1.3%). Preoperatively, 763 threats were identified in 379 (72%) flights. Only 51% of patient flights (267) were error free. In the remaining 257 flights, 430 errors occurred, most commonly related to proficiency (280; 65%) or judgment (69, 16%). In most flights with errors (173 of 257; 67%), an unintended clinical state resulted, ie, the error was consequential. In 60% of consequential errors (n = 110; 21% of total), subsequent cycles of additional error/unintended states occurred. Cycles, particularly those containing multiple errors, were very significantly associated with permanent harmful end-states, including residual hemodynamic lesions (P < .0001), end-organ injury (P < .0001), and death (P < .0001). Deaths were almost always preceded by cycles (6 of 7; P < .0001). Human error, if not mitigated, often leads to cycles of error and unintended patient states, which are dangerous and precede the majority of harmful outcomes. Efforts to manage threats and error cycles (through crew resource management techniques) are likely to yield large increases in patient safety. Copyright © 2015. Published by Elsevier Inc.

  11. A re-examination of paleomagnetic results from NA Jurassic sedimentary rocks: Additional evidence for proposed Jurassic MUTO?

    NASA Astrophysics Data System (ADS)

    Housen, B. A.

    2015-12-01

    Kent and Irving (2010) and Kent et al. (2015) propose a monster shift in the position of Jurassic (160 to 145 Ma) paleopoles for North America, defined by results from igneous rocks. This monster shift is likely an unrecognized true polar wander occurrence. Although subject to inclination error, results from sedimentary rocks from North America, if corrected for these effects, can be used to supplement the available data for this time period. Steiner (2003) reported results from 48 stratigraphic horizons sampled from the Callovian Summerville Fm, from NE New Mexico. A recalculated mean of these results yields a mean direction of D = 332°, I = 39°, n = 48, k = 15, α95 = 5.4°. These data were analyzed for possible inclination error; although the dataset is small, the E-I results yielded a corrected I = 53°. This yields a corrected paleopole for NA at ~165 Ma located at 67° N and 168° E. Paleomagnetic results from the Black Hills (Kilanowski, 2002, for the Callovian Hulett Mbr of the Sundance Fm; Gregiore, 2001, for the Oxfordian-Tithonian Morrison Fm) have previously been interpreted to represent Eocene-aged remagnetizations, due to the nearly exact coincidence between the in-situ pole positions of these Jurassic units and the Eocene pole for NA. Both of the tilt-corrected results for these units have high-latitude poles (Sundance Fm: 79° N, 146° E; Morrison Fm: 89° N, 165° E). An E-I analysis of these data will be presented; using a provisional inclination error of 10°, the corrected paleopoles are: Sundance Fm: 76° N, 220° E; Morrison Fm: 77° N, 266° E. The Black Hills 165 Ma (Sundance Fm) and 145 Ma (Morrison Fm) poles, provisionally corrected for 10° inclination error, occur fairly close to the NA APWP proposed by Kent et al. (2015) using an updated set of results from kimberlites; the agreement between the Sundance Fm and the Triple-B (158 Ma) pole would be nearly exact with a slightly smaller inclination error. The Summerville Fm, which is thought to be roughly coeval with the Sundance Fm, is significantly offset from this newer NA path, but a larger inclination error for this unit would produce better agreement. Thus, pending more precise estimates of inclination error from these units, middle-late Jurassic sedimentary rocks from NA do support the existence of a MUTO (Monster Unknown True polar wander Occurrence) during Jurassic time.
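    For reference, the standard inclination-shallowing relation used in such corrections is tan(I_observed) = f · tan(I_true); a minimal sketch follows, where the flattening factor f = 0.6 is an assumption for illustration rather than a value derived in the abstract.

      import math

      def unflatten(inc_obs_deg, f):
          # Undo sediment inclination shallowing: tan(I_obs) = f * tan(I_true).
          return math.degrees(math.atan(math.tan(math.radians(inc_obs_deg)) / f))

      print(f"{unflatten(39.0, 0.6):.1f}")   # a mean I of 39 deg with assumed f = 0.6 gives ~53 deg

    An f near 0.6 happens to reproduce the corrected inclination of about 53° reported above for the Summerville Fm.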

  12. The buffer value of groundwater when well yield is limited

    NASA Astrophysics Data System (ADS)

    Foster, T.; Brozović, N.; Speir, C.

    2017-04-01

    A large proportion of the total value of groundwater in conjunctive use systems is associated with the ability to smooth out shortfalls in surface water supply during droughts. Previous research has argued that aquifer depletion in these regions will impact farmers negatively by reducing the available stock of groundwater to buffer production in future periods, and also by increasing the costs of groundwater extraction. However, existing studies have not considered how depletion may impact the productivity of groundwater stocks in conjunctive use systems through reductions in well yields. In this work, we develop a hydro-economic modeling framework to quantify the effects of changes in well yields on the buffer value of groundwater, and apply this model to an illustrative case study of tomato production in California's Central Valley. Our findings demonstrate that farmers with low well yields are forced to forgo significant production and profits because instantaneous groundwater supply is insufficient to buffer surface water shortfalls in drought years. Negative economic impacts of low well yields are an increasing function of surface water variability, and are also greatest for farmers operating less efficient irrigation systems. These results indicate that impacts of well yield reductions on the productivity of groundwater are an important economic impact of aquifer depletion, and that failure to consider this feedback may lead to significant errors in estimates of the value of groundwater management in conjunctive use systems.

  13. Improving precision of forage yield trials: A case study

    USDA-ARS?s Scientific Manuscript database

    Field-based agronomic and genetic research relies heavily on the data generated from field evaluations. Therefore, it is imperative to optimize the precision of yield estimates in cultivar evaluation trials to make reliable selections. Experimental error in yield trials is sensitive to several facto...

  14. Spatial Assessment of Model Errors from Four Regression Techniques

    Treesearch

    Lianjun Zhang; Jeffrey H. Gove; Jeffrey H. Gove

    2005-01-01

    Forest modelers have attempted to account for the spatial autocorrelations among trees in growth and yield models by applying alternative regression techniques such as linear mixed models (LMM), generalized additive models (GAM), and geographically weighted regression (GWR). However, the model errors are commonly assessed using average errors across the entire study...

  15. 7 CFR 275.23 - Determination of State agency program performance.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... NUTRITION SERVICE, DEPARTMENT OF AGRICULTURE FOOD STAMP AND FOOD DISTRIBUTION PROGRAM PERFORMANCE REPORTING... section, the adjusted regressed payment error rate shall be calculated to yield the State agency's payment error rate. The adjusted regressed payment error rate is given by r 1″ + r 2″. (ii) If FNS determines...

  16. Hand-movement-based in-vehicle driver/front-seat passenger discrimination for centre console controls

    NASA Astrophysics Data System (ADS)

    Herrmann, Enrico; Makrushin, Andrey; Dittmann, Jana; Vielhauer, Claus; Langnickel, Mirko; Kraetzer, Christian

    2010-01-01

    Successful user discrimination in a vehicle environment may yield a reduction in the number of switches, thus significantly reducing costs while increasing user convenience. The personalization of individual controls permits conditional passenger-enable/driver-disable options (and vice versa), which may yield safety improvements. The authors propose a prototypic optical sensing system based on hand movement segmentation in near-infrared image sequences, implemented in an Audi A6 Avant. By analyzing the number of movements in special regions, the system recognizes the direction of the forearm and hand motion and decides whether the driver or the front-seat passenger is touching a control. The experimental evaluation is performed independently for uniformly and non-uniformly illuminated video data as well as for the complete video data set, which includes both subsets. The general test results in error rates of up to 14.41% FPR / 16.82% FNR and 17.61% FPR / 14.77% FNR for driver and passenger, respectively. Finally, the authors discuss the causes of the most frequently occurring errors as well as the prospects and limitations of optical sensing for user discrimination in passenger compartments.

  17. Magnetic resonance imaging-targeted, 3D transrectal ultrasound-guided fusion biopsy for prostate cancer: Quantifying the impact of needle delivery error on diagnosis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Martin, Peter R., E-mail: pmarti46@uwo.ca; Cool, Derek W.; Romagnoli, Cesare

    2014-07-15

    Purpose: Magnetic resonance imaging (MRI)-targeted, 3D transrectal ultrasound (TRUS)-guided “fusion” prostate biopsy intends to reduce the ∼23% false negative rate of clinical two-dimensional TRUS-guided sextant biopsy. Although it has been reported to double the positive yield, MRI-targeted biopsies continue to yield false negatives. Therefore, the authors propose to investigate how biopsy system needle delivery error affects the probability of sampling each tumor, by accounting for uncertainties due to guidance system error, image registration error, and irregular tumor shapes. Methods: T2-weighted, dynamic contrast-enhanced T1-weighted, and diffusion-weighted prostate MRI and 3D TRUS images were obtained from 49 patients. A radiologist and radiology resident contoured 81 suspicious regions, yielding 3D tumor surfaces that were registered to the 3D TRUS images using an iterative closest point prostate surface-based method to yield 3D binary images of the suspicious regions in the TRUS context. The probability P of obtaining a sample of tumor tissue in one biopsy core was calculated by integrating a 3D Gaussian distribution over each suspicious region domain. Next, the authors performed an exhaustive search to determine the maximum root mean squared error (RMSE, in mm) of a biopsy system that gives P ≥ 95% for each tumor sample, and then repeated this procedure for equal-volume spheres corresponding to each tumor sample. Finally, the authors investigated the effect of probe-axis-direction error on measured tumor burden by studying the relationship between the error and estimated percentage of core involvement. Results: Given a 3.5 mm RMSE for contemporary fusion biopsy systems, P ≥ 95% for 21 out of 81 tumors. The authors determined that for a biopsy system with 3.5 mm RMSE, one cannot expect to sample tumors of approximately 1 cm³ or smaller with 95% probability with only one biopsy core. The predicted maximum RMSE giving P ≥ 95% for each tumor was consistently greater when using spherical tumor shapes as opposed to no shape assumption. However, an assumption of spherical tumor shape for RMSE = 3.5 mm led to a mean overestimation of tumor sampling probabilities of 3%, implying that assuming spherical tumor shape may be reasonable for many prostate tumors. The authors also determined that a biopsy system would need to have a RMS needle delivery error of no more than 1.6 mm in order to sample 95% of tumors with one core. The authors’ experiments also indicated that the effect of axial-direction error on the measured tumor burden was mitigated by the 18 mm core length at 3.5 mm RMSE. Conclusions: For biopsy systems with RMSE ≥ 3.5 mm, more than one biopsy core must be taken from the majority of tumors to achieve P ≥ 95%. These observations support the authors’ perspective that some tumors of clinically significant sizes may require more than one biopsy attempt in order to be sampled during the first biopsy session. This motivates the authors’ ongoing development of an approach to optimize biopsy plans with the aim of achieving a desired probability of obtaining a sample from each tumor, while minimizing the number of biopsies. Optimized planning of within-tumor targets for MRI-3D TRUS fusion biopsy could support earlier diagnosis of prostate cancer while it remains localized to the gland and curable.
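
    A hedged sketch of the probability calculation described above, under simplifying assumptions: the suspicious region is replaced by an equal-volume sphere, the biopsy core is treated as a point (core length ignored), registration error is lumped into a single isotropic 3D Gaussian, and the quoted RMSE is interpreted as the per-axis standard deviation of that Gaussian. The paper integrates over the actual contoured tumor shapes, so the numbers produced here are illustrative only and will not reproduce its results.

    ```python
    # Monte Carlo estimate of P(needle lands inside an equal-volume sphere)
    # given a 3D Gaussian needle delivery error; all assumptions as stated above.
    import numpy as np

    rng = np.random.default_rng(1)

    def hit_probability(tumor_volume_cm3, rmse_mm, n=200_000):
        radius_mm = (3.0 * tumor_volume_cm3 * 1000.0 / (4.0 * np.pi)) ** (1.0 / 3.0)
        sigma = rmse_mm                      # assumption: RMSE taken as per-axis sigma
        err = rng.normal(0.0, sigma, size=(n, 3))   # needle placement error (mm)
        return np.mean(np.linalg.norm(err, axis=1) <= radius_mm)

    print("P(hit) for a 1 cm^3 sphere at RMSE = 3.5 mm:",
          round(hit_probability(1.0, 3.5), 3))

    # Crude descending search for the largest RMSE still giving P >= 95%
    for rmse in np.arange(3.5, 0.5, -0.1):
        if hit_probability(1.0, rmse) >= 0.95:
            print("max RMSE giving P >= 95% (toy model):", round(rmse, 1), "mm")
            break
    ```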

  18. An improved error assessment for the GEM-T1 gravitational model

    NASA Technical Reports Server (NTRS)

    Lerch, F. J.; Marsh, J. G.; Klosko, S. M.; Pavlis, E. C.; Patel, G. B.; Chinn, D. S.; Wagner, C. A.

    1988-01-01

    Several tests were designed to determine the correct error variances for the Goddard Earth Model (GEM)-T1 gravitational solution which was derived exclusively from satellite tracking data. The basic method employs both wholly independent and dependent subset data solutions and produces a full field coefficient estimate of the model uncertainties. The GEM-T1 errors were further analyzed using a method based upon eigenvalue-eigenvector analysis which calibrates the entire covariance matrix. Dependent satellite and independent altimetric and surface gravity data sets, as well as independent satellite deep resonance information, confirm essentially the same error assessment. These calibrations (utilizing each of the major data subsets within the solution) yield very stable calibration factors which vary by approximately 10 percent over the range of tests employed. Measurements of gravity anomalies obtained from altimetry were also used directly as observations to show that GEM-T1 is calibrated. The mathematical representation of the covariance error in the presence of unmodeled systematic error effects in the data is analyzed and an optimum weighting technique is developed for these conditions. This technique yields an internal self-calibration of the error model, a process which GEM-T1 is shown to approximate.

  19. Defining near misses: towards a sharpened definition based on empirical data about error handling processes.

    PubMed

    Kessels-Habraken, Marieke; Van der Schaaf, Tjerk; De Jonge, Jan; Rutte, Christel

    2010-05-01

    Medical errors in health care still occur frequently. Unfortunately, errors cannot be completely prevented and 100% safety can never be achieved. Therefore, in addition to error reduction strategies, health care organisations could also implement strategies that promote timely error detection and correction. Reporting and analysis of so-called near misses - usually defined as incidents without adverse consequences for patients - are necessary to gather information about successful error recovery mechanisms. This study establishes the need for a clearer and more consistent definition of near misses to enable large-scale reporting and analysis in order to obtain such information. Qualitative incident reports and interviews were collected on four units of two Dutch general hospitals. Analysis of the 143 accompanying error handling processes demonstrated that different incident types each provide unique information about error handling. Specifically, error handling processes underlying incidents that did not reach the patient differed significantly from those of incidents that reached the patient, irrespective of harm, because of successful countermeasures that had been taken after error detection. We put forward two possible definitions of near misses and argue that, from a practical point of view, the optimal definition may be contingent on organisational context. Both proposed definitions could yield large-scale reporting of near misses. Subsequent analysis could enable health care organisations to improve the safety and quality of care proactively by (1) eliminating failure factors before real accidents occur, (2) enhancing their ability to intercept errors in time, and (3) improving their safety culture. Copyright 2010 Elsevier Ltd. All rights reserved.

  20. Assessment of Grating Acuity in Infants and Toddlers Using an Electronic Acuity Card: The Dobson Card.

    PubMed

    Mohan, Kathleen M; Miller, Joseph M; Harvey, Erin M; Gerhart, Kimberly D; Apple, Howard P; Apple, Deborah; Smith, Jordana M; Davis, Amy L; Leonard-Green, Tina; Campus, Irene; Dennis, Leslie K

    2016-01-01

    To determine if testing binocular visual acuity in infants and toddlers using the Acuity Card Procedure (ACP) with electronic grating stimuli yields clinically useful data. Participants were infants and toddlers ages 5 to 36.7 months referred by pediatricians due to failed automated vision screening. The ACP was used to test binocular grating acuity. Stimuli were presented on the Dobson Card. The Dobson Card consists of a handheld matte-black plexiglass frame with two flush-mounted tablet computers and is similar in size and form to commercially available printed grating acuity testing stimuli (Teller Acuity Cards II [TACII]; Stereo Optical, Inc., Chicago, IL). On each trial, one tablet displayed a square-wave grating and the other displayed a luminance-matched uniform gray patch. The electronic stimuli were roughly equivalent to those available in the printed TACII set. After acuity testing, each child received a cycloplegic eye examination. Based on cycloplegic retinoscopy, patients were categorized as having high or low refractive error per American Association for Pediatric Ophthalmology and Strabismus vision screening referral criteria. Mean acuities for high and low refractive error groups were compared using analysis of covariance, controlling for age. Mean visual acuity was significantly poorer in children with high refractive error than in those with low refractive error (P = .015). Electronic stimuli presented using the ACP can yield clinically useful measurements of grating acuity in infants and toddlers. Further research is needed to determine the optimal conditions and procedures for obtaining accurate and clinically useful automated measurements of visual acuity in infants and toddlers. Copyright 2016, SLACK Incorporated.

  1. Three-dimensional quantitative structure-activity relationship studies on novel series of benzotriazine based compounds acting as Src inhibitors using CoMFA and CoMSIA.

    PubMed

    Gueto, Carlos; Ruiz, José L; Torres, Juan E; Méndez, Jefferson; Vivas-Reyes, Ricardo

    2008-03-01

    Comparative molecular field analysis (CoMFA) and comparative molecular similarity indices analysis (CoMSIA) were performed on a series of benzotriazine derivatives acting as Src inhibitors. Ligand molecular superimposition on the template structure was performed by the database alignment method. A statistically significant model was established from 72 molecules and validated by a test set of six compounds. The CoMFA model yielded q(2)=0.526, non-cross-validated R(2) of 0.781, F value of 88.132, bootstrapped R(2) of 0.831, standard error of prediction=0.587, and standard error of estimate=0.351, while the CoMSIA model yielded the best predictive model with q(2)=0.647, non-cross-validated R(2) of 0.895, F value of 115.906, bootstrapped R(2) of 0.953, standard error of prediction=0.519, and standard error of estimate=0.178. The contour maps obtained from the 3D-QSAR studies were appraised for activity trends across the molecules analyzed. Results indicate that small steric volumes in the hydrophobic region, electron-withdrawing groups next to the aryl linker region, and atoms close to the solvent-accessible region increase the Src inhibitory activity of the compounds. In fact, adding substituents at positions 5, 6, and 8 of the benzotriazine nucleus generated new compounds with higher predicted activity. The data generated from the present study will further help to design novel, potent, and selective Src inhibitors as anticancer therapeutic agents.

  2. Field design factors affecting the precision of ryegrass forage yield estimation

    USDA-ARS?s Scientific Manuscript database

    Field-based agronomic and genetic research relies heavily on the data generated from field evaluations. Therefore, it is imperative to optimize the precision and accuracy of yield estimates in cultivar evaluation trials to make reliable selections. Experimental error in yield trials is sensitive to ...

  3. Topography-Dependent Motion Compensation: Application to UAVSAR Data

    NASA Technical Reports Server (NTRS)

    Jones, Cathleen E.; Hensley, Scott; Michel, Thierry

    2009-01-01

    The UAVSAR L-band synthetic aperture radar system has been designed for repeat track interferometry in support of Earth science applications that require high-precision measurements of small surface deformations over timescales from hours to years. Conventional motion compensation algorithms, which are based upon assumptions of a narrow beam and flat terrain, yield unacceptably large errors in areas with even moderate topographic relief, i.e., in most areas of interest. This often limits the ability to achieve sub-centimeter surface change detection over significant portions of an acquired scene. To reduce this source of error in the interferometric phase, we have implemented an advanced motion compensation algorithm that corrects for the scene topography and radar beam width. Here we discuss the algorithm used, its implementation in the UAVSAR data processor, and the improvement in interferometric phase and correlation achieved in areas with significant topographic relief.

  4. Estimation of Rice Crop Yields Using Random Forests in Taiwan

    NASA Astrophysics Data System (ADS)

    Chen, C. F.; Lin, H. S.; Nguyen, S. T.; Chen, C. R.

    2017-12-01

    Rice is globally one of the most important food crops, directly feeding more people than any other crop. Rice is not only the most important commodity, but also plays a critical role in the economy of Taiwan because it provides employment and income for large rural populations. The rice harvested area and production are thus monitored yearly due to the government's initiatives. Agronomic planners need such information for more precise assessment of food production to tackle issues of national food security and policymaking. This study aimed to develop a machine-learning approach using physical parameters to estimate rice crop yields in Taiwan. We processed the data for the 2014 cropping seasons, following three main steps: (1) data pre-processing to construct input layers, including soil types and weather parameters (e.g., maximum and minimum air temperature, precipitation, and solar radiation) obtained from meteorological stations across the country; (2) crop yield estimation using random forests, owing to its merits: it can process thousands of variables, estimate missing data, maintain the accuracy level when a large proportion of the data is missing, overcome most over-fitting problems, and run fast and efficiently when handling large datasets; and (3) error verification. To execute the model, we separated the datasets into two groups of pixels: group-1 (70% of pixels) for training the model and group-2 (30% of pixels) for testing the model. Once the model is trained to produce a small and stable out-of-bag error (i.e., the mean squared error between predicted and actual values), it can be used for estimating rice yields of cropping seasons. Comparison of the results obtained from the random forests-based regression with the actual yield statistics indicated that the root mean square error (RMSE) and mean absolute error (MAE) achieved for the first rice crop were 6.2% and 2.7%, respectively, while those for the second rice crop were 5.3% and 2.9%, respectively. Although there are several uncertainties attributed to the data quality of input layers, our study demonstrates the promising application of random forests for estimating rice crop yields at the national level in Taiwan. This approach could be transferable to other regions of the world for improving large-scale estimation of rice crop yields.
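
    A minimal sketch of the workflow described above (70/30 pixel split, out-of-bag error, RMSE/MAE verification). The feature list and the synthetic data below are placeholders, not the study's actual input layers.

    ```python
    # Random-forest yield regression sketch with a 70/30 split and OOB score.
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import mean_squared_error, mean_absolute_error

    rng = np.random.default_rng(42)
    n_pixels = 5000
    X = np.column_stack([
        rng.uniform(25, 35, n_pixels),    # max air temperature (deg C)
        rng.uniform(15, 25, n_pixels),    # min air temperature (deg C)
        rng.uniform(0, 300, n_pixels),    # precipitation (mm)
        rng.uniform(10, 25, n_pixels),    # solar radiation (MJ m-2 day-1)
        rng.integers(0, 5, n_pixels),     # soil type code
    ])
    # synthetic "true" yield (kg/ha) with noise, only for demonstration
    y = 4000 + 80 * X[:, 3] - 30 * np.abs(X[:, 0] - 30) + rng.normal(0, 300, n_pixels)

    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.30, random_state=0)

    rf = RandomForestRegressor(n_estimators=500, oob_score=True, random_state=0, n_jobs=-1)
    rf.fit(X_train, y_train)

    pred = rf.predict(X_test)
    rmse = mean_squared_error(y_test, pred) ** 0.5
    mae = mean_absolute_error(y_test, pred)
    print(f"OOB R^2: {rf.oob_score_:.3f}  RMSE: {rmse:.0f} kg/ha  MAE: {mae:.0f} kg/ha")
    ```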

  5. Accuracy of acoustic velocity metering systems for measurement of low velocity in open channels

    USGS Publications Warehouse

    Laenen, Antonius; Curtis, R. E.

    1989-01-01

    Acoustic velocity meter (AVM) accuracy depends on equipment limitations, the accuracy of acoustic-path length and angle determination, and the stability of the mean velocity to acoustic-path velocity relation. Equipment limitations depend on path length and angle, transducer frequency, timing oscillator frequency, and signal-detection scheme. Typically, the velocity error from this source is about ±1 to ±10 mm/s. An error in acoustic-path angle or length will result in a proportional measurement bias. Typically, an angle error of one degree will result in a velocity error of 2%, and a path-length error of one meter in 100 meters will result in an error of 1%. Ray bending (signal refraction) depends on path length and the density gradients present in the stream. Any deviation from a straight acoustic path between transducers will change the unique relation between path velocity and mean velocity. These deviations will then introduce error into the mean velocity computation. Typically, for a 200-meter path length, the resultant error is less than one percent, but for a 1,000-meter path length, the error can be greater than 10%. Recent laboratory and field tests have substantiated assumptions of equipment limitations. Tow-tank tests of an AVM system with a 4.69-meter path length yielded an average standard deviation error of 9.3 mm/s, and field tests of an AVM system with a 20.5-meter path length yielded an average standard deviation error of 4 mm/s. (USGS)
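
    The quoted sensitivities can be checked numerically with a short sketch (not from the report): if the line velocity is computed as V = L(t2 - t1) / (2 t1 t2 cos θ), the relative error is roughly tan(θ)·Δθ for an angle error and ΔL/L for a path-length error. The 45-degree path angle and geometry below are assumed for illustration.

    ```python
    # Sensitivity check of AVM velocity to angle and path-length errors.
    import numpy as np

    def line_velocity(L, theta, t1, t2):
        return L * (t2 - t1) / (2.0 * t1 * t2 * np.cos(theta))

    L, theta = 100.0, np.radians(45.0)      # assumed path length (m) and angle
    c, v_true = 1480.0, 0.5                 # speed of sound in water, true velocity (m/s)
    # upstream/downstream travel times from the flow component along the path
    t1 = L / (c + v_true * np.cos(theta))
    t2 = L / (c - v_true * np.cos(theta))

    v0 = line_velocity(L, theta, t1, t2)
    v_angle = line_velocity(L, theta + np.radians(1.0), t1, t2)   # 1 degree angle error
    v_len = line_velocity(L + 1.0, theta, t1, t2)                 # 1 m error in 100 m

    print(f"angle error of 1 deg   -> {100*abs(v_angle - v0)/v0:.1f}% velocity error")
    print(f"length error of 1/100  -> {100*abs(v_len - v0)/v0:.1f}% velocity error")
    ```

    At a 45-degree path angle this reproduces the roughly 2% and 1% figures cited in the abstract.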

  6. On the Discriminant Analysis in the 2-Populations Case

    NASA Astrophysics Data System (ADS)

    Rublík, František

    2008-01-01

    The empirical Bayes Gaussian rule, which in the normal case yields good values of the probability of total error, may yield high values of the maximum probability of error. From this point of view, the presented modified version of the classification rule of Broffitt, Randles and Hogg appears to be superior. The modification included in this paper is termed the WR method, and the choice of its weights is discussed. The mentioned methods are also compared with the K nearest neighbours classification rule.

  7. Empirically Defined Patterns of Executive Function Deficits in Schizophrenia and Their Relation to Everyday Functioning: A Person-Centered Approach

    PubMed Central

    Iampietro, Mary; Giovannetti, Tania; Drabick, Deborah A. G.; Kessler, Rachel K.

    2013-01-01

    Executive function (EF) deficits in schizophrenia (SZ) are well documented, although much less is known about patterns of EF deficits and their association to differential impairments in everyday functioning. The present study empirically defined SZ groups based on measures of various EF abilities and then compared these EF groups on everyday action errors. Participants (n=45) completed various subtests from the Delis–Kaplan Executive Function System (D-KEFS) and the Naturalistic Action Test (NAT), a performance-based measure of everyday action that yields scores reflecting total errors and a range of different error types (e.g., omission, perseveration). Results of a latent class analysis revealed three distinct EF groups, characterized by (a) multiple EF deficits, (b) relatively spared EF, and (c) perseverative responding. Follow-up analyses revealed that the classes differed significantly on NAT total errors, total commission errors, and total perseveration errors; the two classes with EF impairment performed comparably on the NAT but performed worse than the class with relatively spared EF. In sum, people with SZ demonstrate variable patterns of EF deficits, and distinct aspects of these EF deficit patterns (i.e., poor mental control abilities) may be associated with everyday functioning capabilities. PMID:23035705

  8. Glaucoma and Driving: On-Road Driving Characteristics

    PubMed Central

    Wood, Joanne M.; Black, Alex A.; Mallon, Kerry; Thomas, Ravi; Owsley, Cynthia

    2016-01-01

    Purpose To comprehensively investigate the types of driving errors and locations that are most problematic for older drivers with glaucoma compared to those without glaucoma using a standardized on-road assessment. Methods Participants included 75 drivers with glaucoma (mean = 73.2±6.0 years) with mild to moderate field loss (better-eye MD = -1.21 dB; worse-eye MD = -7.75 dB) and 70 age-matched controls without glaucoma (mean = 72.6 ± 5.0 years). On-road driving performance was assessed in a dual-brake vehicle by an occupational therapist using a standardized scoring system which assessed the types of driving errors and the locations where they were made and the number of critical errors that required an instructor intervention. Driving safety was rated on a 10-point scale. Self-reported driving ability and difficulties were recorded using the Driving Habits Questionnaire. Results Drivers with glaucoma were rated as significantly less safe, made more driving errors, and had almost double the rate of critical errors than those without glaucoma. Driving errors involved lane positioning and planning/approach, and were significantly more likely to occur at traffic lights and yield/give-way intersections. There were few between group differences in self-reported driving ability. Conclusions Older drivers with glaucoma with even mild to moderate field loss exhibit impairments in driving ability, particularly during complex driving situations that involve tactical problems with lane-position, planning ahead and observation. These results, together with the fact that these drivers self-report their driving to be relatively good, reinforce the need for evidence-based on-road assessments for evaluating driving fitness. PMID:27472221

  9. Glaucoma and Driving: On-Road Driving Characteristics.

    PubMed

    Wood, Joanne M; Black, Alex A; Mallon, Kerry; Thomas, Ravi; Owsley, Cynthia

    2016-01-01

    To comprehensively investigate the types of driving errors and locations that are most problematic for older drivers with glaucoma compared to those without glaucoma using a standardized on-road assessment. Participants included 75 drivers with glaucoma (mean = 73.2±6.0 years) with mild to moderate field loss (better-eye MD = -1.21 dB; worse-eye MD = -7.75 dB) and 70 age-matched controls without glaucoma (mean = 72.6 ± 5.0 years). On-road driving performance was assessed in a dual-brake vehicle by an occupational therapist using a standardized scoring system which assessed the types of driving errors and the locations where they were made and the number of critical errors that required an instructor intervention. Driving safety was rated on a 10-point scale. Self-reported driving ability and difficulties were recorded using the Driving Habits Questionnaire. Drivers with glaucoma were rated as significantly less safe, made more driving errors, and had almost double the rate of critical errors than those without glaucoma. Driving errors involved lane positioning and planning/approach, and were significantly more likely to occur at traffic lights and yield/give-way intersections. There were few between group differences in self-reported driving ability. Older drivers with glaucoma with even mild to moderate field loss exhibit impairments in driving ability, particularly during complex driving situations that involve tactical problems with lane-position, planning ahead and observation. These results, together with the fact that these drivers self-report their driving to be relatively good, reinforce the need for evidence-based on-road assessments for evaluating driving fitness.

  10. A visual detection model for DCT coefficient quantization

    NASA Technical Reports Server (NTRS)

    Ahumada, Albert J., Jr.; Watson, Andrew B.

    1994-01-01

    The discrete cosine transform (DCT) is widely used in image compression and is part of the JPEG and MPEG compression standards. The degree of compression and the amount of distortion in the decompressed image are controlled by the quantization of the transform coefficients. The standards do not specify how the DCT coefficients should be quantized. One approach is to set the quantization level for each coefficient so that the quantization error is near the threshold of visibility. Results from previous work are combined to form the current best detection model for DCT coefficient quantization noise. This model predicts sensitivity as a function of display parameters, enabling quantization matrices to be designed for display situations varying in luminance, veiling light, and spatial frequency related conditions (pixel size, viewing distance, and aspect ratio). It also allows arbitrary color space directions for the representation of color. A model-based method of optimizing the quantization matrix for an individual image was developed. The model described above provides visual thresholds for each DCT frequency. These thresholds are adjusted within each block for visual light adaptation and contrast masking. For a given quantization matrix, the DCT quantization errors are scaled by the adjusted thresholds to yield perceptual errors. These errors are pooled nonlinearly over the image to yield total perceptual error. With this model one may estimate the quantization matrix for a particular image that yields minimum bit rate for a given total perceptual error, or minimum perceptual error for a given bit rate. Custom matrices for a number of images show clear improvement over image-independent matrices. Custom matrices are compatible with the JPEG standard, which requires transmission of the quantization matrix.
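
    The error-pooling step can be illustrated with a short sketch for a single 8x8 block: per-coefficient quantization errors are divided by per-frequency visibility thresholds and combined with a nonlinear (Minkowski) pooling. The quantization matrix and threshold matrix below are toy placeholders, not the fitted visual model described in the abstract.

    ```python
    # Perceptual-error pooling sketch for one 8x8 block (illustrative values only).
    import numpy as np
    from scipy.fft import dctn

    rng = np.random.default_rng(7)
    block = rng.integers(0, 256, size=(8, 8)).astype(float)        # one 8x8 image block

    coeffs = dctn(block - 128.0, norm='ortho')                     # forward 2D DCT
    q_matrix = 16.0 + 4.0 * np.add.outer(np.arange(8), np.arange(8))   # toy quantization matrix
    quantized = np.round(coeffs / q_matrix) * q_matrix             # quantize / dequantize

    thresholds = 4.0 + 2.0 * np.add.outer(np.arange(8), np.arange(8))  # toy visibility thresholds
    perceptual_err = (quantized - coeffs) / thresholds             # error in threshold units

    beta = 4.0                                                     # Minkowski pooling exponent
    pooled = np.sum(np.abs(perceptual_err) ** beta) ** (1.0 / beta)
    print(f"pooled perceptual error for this block: {pooled:.3f}")
    ```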

  11. Evidence for Direct CP Violation in the Measurement of the Cabibbo-Kobayashi-Maskawa Angle gamma with B-+ to D(*) K(*)-+ Decays

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    del Amo Sanchez, P.; Lees, J.P.; Poireau, V.

    2011-08-19

    We report the measurement of the Cabibbo-Kobayashi-Maskawa CP-violating angle γ through a Dalitz plot analysis of neutral D meson decays to K⁰_S π⁺π⁻ and K⁰_S K⁺K⁻ produced in the processes B∓ → DK∓, B∓ → D*K∓ with D* → Dπ⁰, Dγ, and B∓ → DK*∓ with K*∓ → K⁰_S π∓, using 468 million BB̄ pairs collected by the BABAR detector at the PEP-II asymmetric-energy e⁺e⁻ collider at SLAC. We measure γ = (68 ± 14 ± 4 ± 3)° (modulo 180°), where the first error is statistical, the second is the experimental systematic uncertainty, and the third reflects the uncertainty in the description of the neutral D decay amplitudes. This result is inconsistent with γ = 0 (no direct CP violation) with a significance of 3.5 standard deviations.

  12. Normalized Rotational Multiple Yield Surface Framework (NRMYSF) stress-strain curve prediction method based on small strain triaxial test data on undisturbed Auckland residual clay soils

    NASA Astrophysics Data System (ADS)

    Noor, M. J. Md; Ibrahim, A.; Rahman, A. S. A.

    2018-04-01

    Small-strain triaxial test measurement is considered to be significantly more accurate than external strain measurement using the conventional method, owing to the systematic errors normally associated with the test. Three submersible miniature linear variable differential transducers (LVDTs) were mounted on yokes clamped directly onto the soil sample, spaced equally at 120° from one another. The device setup, using a 0.4 N resolution load cell and a 16-bit AD converter, was capable of consistently resolving displacements of less than 1 µm and measuring axial strains ranging from less than 0.001% to 2.5%. Further analysis of the small-strain local measurement data was performed using the new Normalized Rotational Multiple Yield Surface Framework (NRMYSF) method and compared with the existing Rotational Multiple Yield Surface Framework (RMYSF) prediction method. The prediction of shear strength based on the combined intrinsic curvilinear shear strength envelope using small-strain triaxial test data confirmed the significant improvement and reliability of the measurement and analysis methods. Moreover, the NRMYSF method shows excellent data prediction and a significant improvement toward more reliable prediction of soil strength that can reduce the cost and time of experimental laboratory testing.

  13. Study of B to pi l nu and B to rho l nu Decays and Determination of |V_ub|

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    del Amo Sanchez, P.; Lees, J.P.; Poireau, V.

    2011-12-09

    We present an analysis of exclusive charmless semileptonic B-meson decays based on 377 million BB̄ pairs recorded with the BABAR detector at the Υ(4S) resonance. We select four event samples corresponding to the decay modes B⁰ → π⁻ℓ⁺ν, B⁺ → π⁰ℓ⁺ν, B⁰ → ρ⁻ℓ⁺ν, and B⁺ → ρ⁰ℓ⁺ν, and find the measured branching fractions to be consistent with isospin symmetry. Assuming isospin symmetry, we combine the two B → πℓν samples, and similarly the two B → ρℓν samples, and measure the branching fractions B(B⁰ → π⁻ℓ⁺ν) = (1.41 ± 0.05 ± 0.07) × 10⁻⁴ and B(B⁰ → ρ⁻ℓ⁺ν) = (1.75 ± 0.15 ± 0.27) × 10⁻⁴, where the errors are statistical and systematic. We compare the measured distribution in q², the momentum transfer squared, with predictions for the form factors from QCD calculations and determine the CKM matrix element |V_ub|. Based on the measured partial branching fraction for B → πℓν in the range q² < 12 GeV² and the most recent LCSR calculations we obtain |V_ub| = (3.78 ± 0.13 +0.55/−0.40) × 10⁻³, where the errors refer to the experimental and theoretical uncertainties. From a simultaneous fit to the data over the full q² range and the FNAL/MILC lattice QCD results, we obtain |V_ub| = (2.95 ± 0.31) × 10⁻³ from B → πℓν, where the error is the combined experimental and theoretical uncertainty.

  14. Test-Retest Reliability of the Adaptive Chemistry Assessment Survey for Teachers: Measurement Error and Alternatives to Correlation

    ERIC Educational Resources Information Center

    Harshman, Jordan; Yezierski, Ellen

    2016-01-01

    Determining the error of measurement is a necessity for researchers engaged in bench chemistry, chemistry education research (CER), and a multitude of other fields. Discussions regarding what constructs measurement error entails and how to best measure them have occurred, but the critiques about traditional measures have yielded few alternatives.…

  15. Optimal Tuner Selection for Kalman Filter-Based Aircraft Engine Performance Estimation

    NASA Technical Reports Server (NTRS)

    Simon, Donald L.; Garg, Sanjay

    2010-01-01

    A linear point design methodology for minimizing the error in on-line Kalman filter-based aircraft engine performance estimation applications is presented. This technique specifically addresses the underdetermined estimation problem, where there are more unknown parameters than available sensor measurements. A systematic approach is applied to produce a model tuning parameter vector of appropriate dimension to enable estimation by a Kalman filter, while minimizing the estimation error in the parameters of interest. Tuning parameter selection is performed using a multi-variable iterative search routine which seeks to minimize the theoretical mean-squared estimation error. This paper derives theoretical Kalman filter estimation error bias and variance values at steady-state operating conditions, and presents the tuner selection routine applied to minimize these values. Results from the application of the technique to an aircraft engine simulation are presented and compared to the conventional approach of tuner selection. Experimental simulation results are found to be in agreement with theoretical predictions. The new methodology is shown to yield a significant improvement in on-line engine performance estimation accuracy.
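
    The tuner-selection idea can be caricatured with a much-simplified sketch: for a hypothetical system with more unknown parameters than sensors, candidate tuner sets (here plain subsets of the parameters, not the optimal linear combinations constructed in the paper) are ranked by the theoretical steady-state estimation error obtained from iterating the Kalman/Riccati recursion, and the candidate with the smallest error is selected. All matrices below are invented placeholders.

    ```python
    # Compare candidate tuner subsets by theoretical steady-state Kalman error.
    import numpy as np
    from itertools import combinations

    rng = np.random.default_rng(3)
    n_params, n_meas = 3, 2
    H_full = rng.normal(size=(n_meas, n_params))   # sensor sensitivity to all parameters
    Q_full = np.diag([1.0, 0.5, 0.2])              # assumed parameter variability
    R = 0.1 * np.eye(n_meas)                       # measurement noise covariance

    def steady_state_cov(H, Q, R, n_iter=500):
        """Iterate the Riccati recursion for a random-walk parameter model (F = I)."""
        P = Q.copy()
        for _ in range(n_iter):
            S = H @ P @ H.T + R
            K = P @ H.T @ np.linalg.inv(S)
            P = (np.eye(P.shape[0]) - K @ H) @ P + Q
        return P

    best = None
    for subset in combinations(range(n_params), n_meas):   # candidate tuner selections
        idx = list(subset)
        P = steady_state_cov(H_full[:, idx], Q_full[np.ix_(idx, idx)], R)
        mse = np.trace(P)                                   # theoretical mean-squared error
        print(f"tuner subset {subset}: steady-state MSE = {mse:.3f}")
        if best is None or mse < best[1]:
            best = (subset, mse)
    print("selected tuner subset:", best[0])
    ```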

  16. Prevalence of refractive errors in Tibetan adolescents.

    PubMed

    Qian, Xuehan; Liu, Beihong; Wang, Jing; Wei, Nan; Qi, Xiaoli; Li, Xue; Li, Jing; Zhang, Ying; Hua, Ning; Ning, Yuxian; Ding, Gang; Ma, Xu; Wang, Binbin

    2018-05-11

    The prevalence of adolescent eye disease in remote areas of the Qinghai-Tibet Plateau has rarely been reported. To understand the prevalence of common eye diseases in Tibet, we performed ocular-disease screening on students from primary and secondary schools in Tibet, and compared the prevalence to that in the Central China Plain (referred to here as the "plains area"). The refractive status of students was evaluated with a Spot™ vision screener. The test was conducted three or fewer times for both eyes of each student and results with best correction were recorded. A total of 3246 students from primary and secondary schools in the Tibet Naidong district were screened, yielding a refractive error rate of 28.51%, which was significantly lower than that of the plains group (28.51% vs. 56.92%, p < 0.001). In both groups, the prevalence of refractive errors among females was higher than that among males. We found that Tibetan adolescents had a lower prevalence of refractive errors than did adolescents in the plains area, which may be related to less intensive schooling and greater exposure to sunlight.
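
    The reported prevalence comparison can be reproduced in spirit with a two-proportion test; the Tibet sample size is taken from the abstract, while the plains-group sample size below is an assumed placeholder since it is not given.

    ```python
    # Two-proportion z-test for the refractive-error rates quoted above.
    from statsmodels.stats.proportion import proportions_ztest

    n_tibet = 3246
    k_tibet = round(0.2851 * n_tibet)    # students with refractive error in Tibet
    n_plain = 3000                       # assumed plains-group sample size (placeholder)
    k_plain = round(0.5692 * n_plain)

    stat, pvalue = proportions_ztest([k_tibet, k_plain], [n_tibet, n_plain])
    print(f"z = {stat:.1f}, p = {pvalue:.2e}")   # p << 0.001, consistent with the reported difference
    ```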

  17. Climate Change and Its Impact on the Yield of Major Food Crops: Evidence from Pakistan

    PubMed Central

    Ali, Sajjad; Liu, Ying; Ishaq, Muhammad; Shah, Tariq; Abdullah; Ilyas, Aasir; Din, Izhar Ud

    2017-01-01

    Pakistan is vulnerable to climate change, and extreme climatic conditions are threatening food security. This study examines the effects of climate change (e.g., maximum temperature, minimum temperature, rainfall, relative humidity, and sunshine) on the major crops of Pakistan (e.g., wheat, rice, maize, and sugarcane). The methods of feasible generalized least squares (FGLS) and heteroscedasticity- and autocorrelation-consistent (HAC) standard errors were employed using time series data for the period 1989 to 2015. The results of the study reveal that maximum temperature adversely affects wheat production, while the effect of minimum temperature is positive and significant for all crops. The effect of rainfall on the yield of the selected crops is negative, except for wheat. To cope with and mitigate the adverse effects of climate change, there is a need for the development of heat- and drought-resistant high-yielding varieties to ensure food security in the country. PMID:28538704

  18. Climate Change and Its Impact on the Yield of Major Food Crops: Evidence from Pakistan.

    PubMed

    Ali, Sajjad; Liu, Ying; Ishaq, Muhammad; Shah, Tariq; Abdullah; Ilyas, Aasir; Din, Izhar Ud

    2017-05-24

    Pakistan is vulnerable to climate change, and extreme climatic conditions are threatening food security. This study examines the effects of climate change (e.g., maximum temperature, minimum temperature, rainfall, relative humidity, and sunshine) on the major crops of Pakistan (e.g., wheat, rice, maize, and sugarcane). The methods of feasible generalized least squares (FGLS) and heteroscedasticity- and autocorrelation-consistent (HAC) standard errors were employed using time series data for the period 1989 to 2015. The results of the study reveal that maximum temperature adversely affects wheat production, while the effect of minimum temperature is positive and significant for all crops. The effect of rainfall on the yield of the selected crops is negative, except for wheat. To cope with and mitigate the adverse effects of climate change, there is a need for the development of heat- and drought-resistant high-yielding varieties to ensure food security in the country.
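
    The HAC (Newey-West) standard-error step described above can be sketched as follows on synthetic annual data standing in for the 1989-2015 series; the variable names and coefficients are placeholders, not the study's estimates.

    ```python
    # OLS regression of yield on climate variables with Newey-West (HAC) standard errors.
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(5)
    years = np.arange(1989, 2016)
    tmax = 30 + 0.03 * (years - 1989) + rng.normal(0, 0.8, years.size)
    tmin = 15 + 0.02 * (years - 1989) + rng.normal(0, 0.6, years.size)
    rain = rng.normal(500, 80, years.size)

    # synthetic wheat yield (t/ha) with an autocorrelated error term
    e = np.zeros(years.size)
    for t in range(1, years.size):
        e[t] = 0.5 * e[t - 1] + rng.normal(0, 0.05)
    yield_t = 2.5 - 0.04 * tmax + 0.06 * tmin + 0.0004 * rain + e

    X = sm.add_constant(np.column_stack([tmax, tmin, rain]))
    ols = sm.OLS(yield_t, X).fit()
    hac = ols.get_robustcov_results(cov_type='HAC', maxlags=2)   # Newey-West standard errors
    print(hac.summary())
    ```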

  19. A reassessment of ground water flow conditions and specific yield at Borden and Cape Cod

    USGS Publications Warehouse

    Grimestad, Garry

    2002-01-01

    Recent widely accepted findings respecting the origin and nature of specific yield in unconfined aquifers rely heavily on water level changes observed during two pumping tests, one conducted at Borden, Ontario, Canada, and the other at Cape Cod, Massachusetts. The drawdown patterns observed during those tests have been taken as proof that unconfined specific yield estimates obtained from long-duration pumping tests should approach the laboratory-estimated effective porosity of representative aquifer formation samples. However, both of the original test reports included direct or referential descriptions of potential supplemental sources of pumped water that would have introduced intractable complications and errors into straightforward interpretations of the drawdown observations if actually present. Searches for evidence of previously neglected sources were performed by screening the original drawdown observations from both locations for signs of diagnostic skewing that should be present only if some of the extracted water was derived from sources other than main aquifer storage. The data screening was performed using error-guided computer assisted fitting techniques, capable of accurately sensing and simulating the effects of a wide range of non-traditional and external sources. The drawdown curves from both tests proved to be inconsistent with traditional single-source pumped aquifer models but consistent with site-specific alternatives that included significant contributions of water from external sources. The corrected pumping responses shared several important features. Unsaturated drainage appears to have ceased effectively at both locations within the first day of pumping, and estimates of specific yield stabilized at levels considerably smaller than the corresponding laboratory-measured or probable effective porosity. Separate sequential analyses of progressively later field observations gave stable and nearly constant specific yield estimates for each location, with no evidence from either test that more prolonged pumping would have induced substantially greater levels of unconfined specific yield.

  20. Attitude-error compensation for airborne down-looking synthetic-aperture imaging lidar

    NASA Astrophysics Data System (ADS)

    Li, Guang-yuan; Sun, Jian-feng; Zhou, Yu; Lu, Zhi-yong; Zhang, Guo; Cai, Guang-yu; Liu, Li-ren

    2017-11-01

    Target-coordinate transformation in the lidar spot of the down-looking synthetic-aperture imaging lidar (SAIL) was performed, and the attitude errors were deduced in the process of imaging, according to the principle of the airborne down-looking SAIL. The influence of the attitude errors on the imaging quality was analyzed theoretically. A compensation method for the attitude errors was proposed and theoretically verified. An airborne down-looking SAIL experiment was performed and yielded the same results. A point-by-point error-compensation method for solving the azimuthal-direction space-dependent attitude errors was also proposed.

  1. The associations of insomnia with costly workplace accidents and errors: results from the America Insomnia Survey.

    PubMed

    Shahly, Victoria; Berglund, Patricia A; Coulouvrat, Catherine; Fitzgerald, Timothy; Hajak, Goeran; Roth, Thomas; Shillington, Alicia C; Stephenson, Judith J; Walsh, James K; Kessler, Ronald C

    2012-10-01

    Insomnia is a common and seriously impairing condition that often goes unrecognized. To examine associations of broadly defined insomnia (ie, meeting inclusion criteria for a diagnosis from International Statistical Classification of Diseases, 10th Revision, DSM-IV, or Research Diagnostic Criteria/International Classification of Sleep Disorders, Second Edition) with costly workplace accidents and errors after excluding other chronic conditions among workers in the America Insomnia Survey (AIS). A national cross-sectional telephone survey (65.0% cooperation rate) of commercially insured health plan members selected from the more than 34 million in the HealthCore Integrated Research Database. Four thousand nine hundred ninety-one employed AIS respondents. Costly workplace accidents or errors in the 12 months before the AIS interview were assessed with one question about workplace accidents "that either caused damage or work disruption with a value of $500 or more" and another about other mistakes "that cost your company $500 or more." Current insomnia with duration of at least 12 months was assessed with the Brief Insomnia Questionnaire, a validated (area under the receiver operating characteristic curve, 0.86 compared with diagnoses based on blinded clinical reappraisal interviews), fully structured diagnostic interview. Eighteen other chronic conditions were assessed with medical/pharmacy claims records and validated self-report scales. Insomnia had a significant odds ratio (1.4) for workplace accidents and/or errors after controlling for other chronic conditions. The odds ratio did not vary significantly with respondent age, sex, educational level, or comorbidity. The average costs of insomnia-related accidents and errors ($32 062) were significantly higher than those of other accidents and errors ($21 914). Simulations estimated that insomnia was associated with 7.2% of all costly workplace accidents and errors and 23.7% of all the costs of these incidents. These proportions are higher than for any other chronic condition, with annualized US population projections of 274 000 costly insomnia-related workplace accidents and errors having a combined value of US $31.1 billion. Effectiveness trials are needed to determine whether expanded screening, outreach, and treatment of workers with insomnia would yield a positive return on investment for employers.

  2. A neural network for real-time retrievals of PWV and LWP from Arctic millimeter-wave ground-based observations.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cadeddu, M. P.; Turner, D. D.; Liljegren, J. C.

    2009-07-01

    This paper presents a new neural network (NN) algorithm for real-time retrievals of low amounts of precipitable water vapor (PWV) and integrated liquid water from millimeter-wave ground-based observations. Measurements are collected by the 183.3-GHz G-band vapor radiometer (GVR) operating at the Atmospheric Radiation Measurement (ARM) Program Climate Research Facility, Barrow, AK. The NN provides the means to explore the nonlinear regime of the measurements and investigate the physical boundaries of the operability of the instrument. A methodology to compute individual error bars associated with the NN output is developed, and a detailed error analysis of the network output is provided. Through the error analysis, it is possible to isolate several components contributing to the overall retrieval errors and to analyze the dependence of the errors on the inputs. The network outputs and associated errors are then compared with results from a physical retrieval and with the ARM two-channel microwave radiometer (MWR) statistical retrieval. When the NN is trained with a seasonal training data set, the retrievals of water vapor yield results that are comparable to those obtained from a traditional physical retrieval, with a retrieval error percentage of ~5% when the PWV is between 2 and 10 mm, but with the advantages that the NN algorithm does not require vertical profiles of temperature and humidity as input and is significantly faster computationally. Liquid water path (LWP) retrievals from the NN have a significantly improved clear-sky bias (mean of ~2.4 g/m²) and a retrieval error varying from 1 to about 10 g/m² when the PWV amount is between 1 and 10 mm. As an independent validation of the LWP retrieval, the longwave downwelling surface flux was computed and compared with observations. The comparison shows a significant improvement with respect to the MWR statistical retrievals, particularly for LWP amounts of less than 60 g/m².

  3. Estimation of 305 Day Milk Yield from Cumulative Monthly and Bimonthly Test Day Records in Indonesian Holstein Cattle

    NASA Astrophysics Data System (ADS)

    Rahayu, A. P.; Hartatik, T.; Purnomoadi, A.; Kurnianto, E.

    2018-02-01

    The aims of this study were to estimate the 305-day first-lactation milk yield of Indonesian Holstein cattle from cumulative monthly and bimonthly test-day records and to analyze the accuracy of the estimates. The first-lactation records of 258 dairy cows from 2006 to 2014, consisting of 2571 monthly (MTDY) and 1281 bimonthly (BTDY) test-day yield records, were used. Milk yields were estimated by the regression method. Correlation coefficients between actual and estimated milk yield by cumulative MTDY were 0.70, 0.78, 0.83, 0.86, 0.89, 0.92, 0.94 and 0.96 for 2-9 months, respectively, while those by cumulative BTDY were 0.69, 0.81, 0.87 and 0.92 for 2, 4, 6 and 8 months, respectively. The accuracy of the fitted regression models (R2) increased with the number of cumulative test days used. The use of 5 cumulative MTDY records was considered sufficient for estimating the 305-day first-lactation milk yield, with 80.6% accuracy and a 7% error percentage of estimation. Estimated milk yield from MTDY was more accurate than that from BTDY, with a 1.1 to 2% lower error percentage over the same period.
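
    The regression step can be sketched as below: the 305-day yield is regressed on the cumulative yield of the first k monthly test days, and accuracy improves as k grows. The data are simulated stand-ins, not the Indonesian Holstein records.

    ```python
    # Regress 305-day yield on cumulative k-month test-day yield for several k.
    import numpy as np
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(11)
    n_cows = 258
    monthly = rng.normal(500, 80, size=(n_cows, 10)).clip(min=100)   # 10 monthly test-day yields (kg)
    total_305d = monthly.sum(axis=1) + rng.normal(0, 120, n_cows)    # "actual" 305-day yield (kg)

    for k in (2, 5, 9):
        cum_k = monthly[:, :k].sum(axis=1).reshape(-1, 1)            # cumulative k-month yield
        model = LinearRegression().fit(cum_k, total_305d)
        pred = model.predict(cum_k)
        r2 = model.score(cum_k, total_305d)
        err_pct = 100 * np.mean(np.abs(pred - total_305d) / total_305d)
        print(f"{k} cumulative months: R^2 = {r2:.2f}, mean error = {err_pct:.1f}%")
    ```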

  4. Reduction of wafer-edge overlay errors using advanced correction models, optimized for minimal metrology requirements

    NASA Astrophysics Data System (ADS)

    Kim, Min-Suk; Won, Hwa-Yeon; Jeong, Jong-Mun; Böcker, Paul; Vergaij-Huizer, Lydia; Kupers, Michiel; Jovanović, Milenko; Sochal, Inez; Ryan, Kevin; Sun, Kyu-Tae; Lim, Young-Wan; Byun, Jin-Moo; Kim, Gwang-Gon; Suh, Jung-Joon

    2016-03-01

    In order to optimize yield in DRAM semiconductor manufacturing for 2x nodes and beyond, the (processing induced) overlay fingerprint towards the edge of the wafer needs to be reduced. Traditionally, this is achieved by acquiring denser overlay metrology at the edge of the wafer, to feed field-by-field corrections. Although field-by-field corrections can be effective in reducing localized overlay errors, the requirement for dense metrology to determine the corrections can become a limiting factor due to a significant increase of metrology time and cost. In this study, a more cost-effective solution has been found in extending the regular correction model with an edge-specific component. This new overlay correction model can be driven by an optimized, sparser sampling especially at the wafer edge area, and also allows for a reduction of noise propagation. Lithography correction potential has been maximized while requiring significantly less metrology. Evaluations have been performed, demonstrating the benefit of edge models in terms of on-product overlay performance, as well as cell based overlay performance based on metrology-to-cell matching improvements. Performance can be increased compared to POR modeling and sampling, which can contribute to (overlay based) yield improvement. Based on advanced modeling including edge components, metrology requirements have been optimized, enabling integrated metrology which drives down overall metrology fab footprint and lithography cycle time.

  5. Adaptive Offset Correction for Intracortical Brain Computer Interfaces

    PubMed Central

    Homer, Mark L.; Perge, János A.; Black, Michael J.; Harrison, Matthew T.; Cash, Sydney S.; Hochberg, Leigh R.

    2014-01-01

    Intracortical brain computer interfaces (iBCIs) decode intended movement from neural activity for the control of external devices such as a robotic arm. Standard approaches include a calibration phase to estimate decoding parameters. During iBCI operation, the statistical properties of the neural activity can depart from those observed during calibration, sometimes hindering a user’s ability to control the iBCI. To address this problem, we adaptively correct the offset terms within a Kalman filter decoder via penalized maximum likelihood estimation. The approach can handle rapid shifts in neural signal behavior (on the order of seconds) and requires no knowledge of the intended movement. The algorithm, called MOCA, was tested using simulated neural activity and evaluated retrospectively using data collected from two people with tetraplegia operating an iBCI. In 19 clinical research test cases, where a nonadaptive Kalman filter yielded relatively high decoding errors, MOCA significantly reduced these errors (10.6 ±10.1%; p<0.05, pairwise t-test). MOCA did not significantly change the error in the remaining 23 cases where a nonadaptive Kalman filter already performed well. These results suggest that MOCA provides more robust decoding than the standard Kalman filter for iBCIs. PMID:24196868

  6. Adaptive offset correction for intracortical brain-computer interfaces.

    PubMed

    Homer, Mark L; Perge, Janos A; Black, Michael J; Harrison, Matthew T; Cash, Sydney S; Hochberg, Leigh R

    2014-03-01

    Intracortical brain-computer interfaces (iBCIs) decode intended movement from neural activity for the control of external devices such as a robotic arm. Standard approaches include a calibration phase to estimate decoding parameters. During iBCI operation, the statistical properties of the neural activity can depart from those observed during calibration, sometimes hindering a user's ability to control the iBCI. To address this problem, we adaptively correct the offset terms within a Kalman filter decoder via penalized maximum likelihood estimation. The approach can handle rapid shifts in neural signal behavior (on the order of seconds) and requires no knowledge of the intended movement. The algorithm, called multiple offset correction algorithm (MOCA), was tested using simulated neural activity and evaluated retrospectively using data collected from two people with tetraplegia operating an iBCI. In 19 clinical research test cases, where a nonadaptive Kalman filter yielded relatively high decoding errors, MOCA significantly reduced these errors ( 10.6 ± 10.1% ; p < 0.05, pairwise t-test). MOCA did not significantly change the error in the remaining 23 cases where a nonadaptive Kalman filter already performed well. These results suggest that MOCA provides more robust decoding than the standard Kalman filter for iBCIs.

  7. Charge renormalization at the large-D limit for N-electron atoms and weakly bound systems

    NASA Astrophysics Data System (ADS)

    Kais, S.; Bleil, R.

    1995-05-01

    We develop a systematic way to determine an effective nuclear charge ZRD such that the Hartree-Fock results will be significantly closer to the exact energies by utilizing the analytically known large-D limit energies. This method yields an expansion for the effective nuclear charge in powers of (1/D), which we have evaluated to the first order. This first order approximation to the desired effective nuclear charge has been applied to two-electron atoms with Z=2-20, and weakly bound systems such as H⁻. The errors for the two-electron atoms when compared with exact results were reduced from ˜0.2% for Z=2 to ˜0.002% for large Z. Although usual Hartree-Fock calculations for H⁻ show this ion to be unstable, our results reduce the percent error of the Hartree-Fock energy from 7.6% to 1.86% and predict the anion to be stable. For N-electron atoms (N=3-18, Z=3-28), using only the zeroth order approximation for the effective charge significantly reduces the error of Hartree-Fock calculations and recovers more than 80% of the correlation energy.

  8. LACIE performance predictor final operational capability program description, volume 3

    NASA Technical Reports Server (NTRS)

    1976-01-01

    The requirements and processing logic for the LACIE Error Model program (LEM) are described. This program is an integral part of the Large Area Crop Inventory Experiment (LACIE) system. LEM is that portion of the LPP (LACIE Performance Predictor) which simulates the sample segment classification, strata yield estimation, and production aggregation. LEM controls repetitive Monte Carlo trials based on input error distributions to obtain statistical estimates of the wheat area, yield, and production at different levels of aggregation. LEM interfaces with the rest of the LPP through a set of data files.
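
    The Monte Carlo aggregation idea can be illustrated with a short sketch (not the LEM code): per-stratum area and yield errors are drawn from assumed input distributions, production is aggregated for each trial, and the spread across trials gives the statistical estimate. Stratum values and error levels below are placeholders.

    ```python
    # Monte Carlo aggregation of area x yield with assumed per-stratum errors.
    import numpy as np

    rng = np.random.default_rng(1976)
    true_area = np.array([2.0e6, 3.5e6, 1.2e6])      # hectares per stratum (placeholders)
    true_yield = np.array([2.1, 2.6, 1.8])           # t/ha per stratum (placeholders)
    area_cv, yield_cv = 0.06, 0.08                   # assumed relative errors

    n_trials = 10_000
    area = true_area * (1 + area_cv * rng.standard_normal((n_trials, 3)))
    yld = true_yield * (1 + yield_cv * rng.standard_normal((n_trials, 3)))
    production = (area * yld).sum(axis=1)            # aggregated production per trial (t)

    print(f"mean production: {production.mean()/1e6:.2f} Mt, "
          f"CV: {100*production.std()/production.mean():.1f}%")
    ```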

  9. Digital implementation of a laser frequency stabilisation technique in the telecommunications band

    NASA Astrophysics Data System (ADS)

    Jivan, Pritesh; van Brakel, Adriaan; Manuel, Rodolfo Martínez; Grobler, Michael

    2016-02-01

    Laser frequency stabilisation in the telecommunications band was realised using the Pound-Drever-Hall (PDH) error signal. The transmission spectrum of the Fabry-Perot cavity was used as opposed to the traditionally used reflected spectrum. A comparison was made between an analogue and a digitally implemented system. This study forms part of an initial step towards developing a portable optical time and frequency standard. The frequency discriminator used in the experimental setup was a fibre-based Fabry-Perot etalon. The phase-sensitive system made use of the optical heterodyne technique to detect changes in the phase of the system. A lock-in amplifier was used to filter and mix the input signals to generate the error signal. This error signal may then be used to generate a control signal via a PID controller. An error signal was realised at a wavelength of 1556 nm, which corresponds to an optical frequency of approximately 192.6 THz. An implementation of the analogue PDH technique yielded an error signal with a bandwidth of 6.134 GHz, while a digital implementation yielded a bandwidth of 5.774 GHz.
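
    The control step, an error signal driving a PID correction back onto the laser frequency, can be sketched digitally as below. The plant model (linear error-signal slope, constant drift) and the gains are invented placeholders, not the instrument's parameters.

    ```python
    # Minimal digital PID loop acting on a PDH-like error signal.
    class PID:
        def __init__(self, kp, ki, kd, dt):
            self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
            self.integral = 0.0
            self.prev_err = 0.0

        def step(self, error):
            self.integral += error * self.dt
            derivative = (error - self.prev_err) / self.dt
            self.prev_err = error
            return self.kp * error + self.ki * self.integral + self.kd * derivative

    # toy locking loop: the laser detuning drifts; error signal ~ slope * detuning
    dt, slope = 1e-4, 1.0            # loop period (s), error-signal slope (arb. units/Hz)
    pid = PID(kp=0.4, ki=50.0, kd=0.0, dt=dt)
    detuning, drift = 200.0, 5.0     # initial offset (Hz) and drift per step (Hz)

    for _ in range(2000):
        error_signal = slope * detuning
        correction = pid.step(error_signal)
        detuning += drift - correction      # control pushes the detuning back toward zero

    print(f"residual detuning after locking: {detuning:.2f} Hz")
    ```

    The integral term is what removes the steady drift; a purely proportional loop would settle with a constant residual offset.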

  10. Planting data and wheat yield models. [Kansas, South Dakota, and U.S.S.R.

    NASA Technical Reports Server (NTRS)

    Feyerherm, A. M. (Principal Investigator)

    1977-01-01

    The author has identified the following significant results. A variable date starter model for spring wheat depending on temperature was more precise than a fixed date model. The same conclusions for fall-planted wheat were not reached. If the largest and smallest of eight temperatures were used to estimate daily maximum and minimum temperatures; respectively, a 1-4 F bias would be introduced into these extremes. For Kansas, a reduction of 0.5 bushels/acre in the root-mean-square-error between model and SRS yields was achieved by a six fold increase (7 to 42) in the density of weather stations. An additional reduction of 0.3 b/A was achieved by incorporating losses due to rusts in the model.

  11. Effects of fog on the bit-error rate of a free-space laser communication system.

    PubMed

    Strickland, B R; Lavan, M J; Woodbridge, E; Chan, V

    1999-01-20

    Free-space laser communication (lasercom) systems are subject to performance degradation when heavy fog or smoke obscures the line of sight. The bit-error rate (BER) of a high-bandwidth (570 Mbit/s) lasercom system was correlated with the atmospheric transmission over a folded path of 2.4 km. BERs of 10⁻⁷ were observed when the atmospheric transmission was as low as 0.25%, whereas BERs of less than 10⁻¹⁰ were observed when the transmission was above 2.5%. System performance was approximately 10 dB less than calculated, with the discrepancy attributed to scintillation, multiple scattering, and absorption. Peak power of the 810-nm communications laser was 186 mW, and the beam divergence was purposely degraded to 830 μrad. These results were achieved without the use of error-correction schemes or active tracking. An optimized system with narrower beam divergence and active tracking could be expected to yield significantly better performance.

  12. Analysis of MINIE2013 Explosion Air-Blast Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schnurr, Julie M.; Rodgers, Arthur J.; Kim, Keehoon

    We report analysis of air-blast overpressure measurements from the MINIE2013 explosive experiments. The MINIE2013 experiment involved a series of nearly 70 near-surface (height-of-burst, HOB, ranging from -1 to +4 m), low-yield (W=2-20 kg TNT equivalent) chemical high-explosives tests that were recorded at local distances (230 m – 28.5 km). Many of the W and HOB combinations were repeated, allowing for quantification of the variability in air-blast features and corresponding yield estimates. We measured canonical signal features (peak overpressure, impulse per unit area, and positive pulse duration) from the air-blast data and compared these to existing air-blast models. Peak overpressure measurements showed good agreement with the models at close ranges but tended to attenuate more rapidly at longer range (~1 km), which is likely caused by upward refraction of acoustic waves due to a negative vertical gradient of sound speed. We estimated yields of the MINIE2013 explosions using the Integrated Yield Determination Tool (IYDT). Errors of the estimated yields were on average within 30% of the reported yields, and there were no significant differences in the accuracy of the IYDT predictions grouped by yield. IYDT estimates tend to be lower than ground-truth yields, possibly because of reduced overpressure amplitudes caused by upward refraction. Finally, we report preliminary results on the development of a new parameterized air-blast waveform.

  13. Detecting and overcoming systematic errors in genome-scale phylogenies.

    PubMed

    Rodríguez-Ezpeleta, Naiara; Brinkmann, Henner; Roure, Béatrice; Lartillot, Nicolas; Lang, B Franz; Philippe, Hervé

    2007-06-01

    Genome-scale data sets result in an enhanced resolution of the phylogenetic inference by reducing stochastic errors. However, there is also an increase of systematic errors due to model violations, which can lead to erroneous phylogenies. Here, we explore the impact of systematic errors on the resolution of the eukaryotic phylogeny using a data set of 143 nuclear-encoded proteins from 37 species. The initial observation was that, despite the impressive amount of data, some branches had no significant statistical support. To demonstrate that this lack of resolution is due to a mutual annihilation of phylogenetic and nonphylogenetic signals, we created a series of data sets with slightly different taxon sampling. As expected, these data sets yielded strongly supported but mutually exclusive trees, thus confirming the presence of conflicting phylogenetic and nonphylogenetic signals in the original data set. To decide on the correct tree, we applied several methods expected to reduce the impact of some kinds of systematic error. Briefly, we show that (i) removing fast-evolving positions, (ii) recoding amino acids into functional categories, and (iii) using a site-heterogeneous mixture model (CAT) are three effective means of increasing the ratio of phylogenetic to nonphylogenetic signal. Finally, our results allow us to formulate guidelines for detecting and overcoming phylogenetic artefacts in genome-scale phylogenetic analyses.

  14. Design of experiments-based monitoring of critical quality attributes for the spray-drying process of insulin by NIR spectroscopy.

    PubMed

    Maltesen, Morten Jonas; van de Weert, Marco; Grohganz, Holger

    2012-09-01

    Moisture content and aerodynamic particle size are critical quality attributes for spray-dried protein formulations. In this study, spray-dried insulin powders intended for pulmonary delivery were produced applying design-of-experiments methodology. Near-infrared spectroscopy (NIR), in combination with preprocessing and multivariate analysis in the form of partial least squares projections to latent structures (PLS), was used to correlate the spectral data with moisture content and aerodynamic particle size measured by a time-of-flight principle. PLS models predicting the moisture content were based on the chemical information of the water molecules in the NIR spectrum. Models yielded prediction errors (RMSEP) between 0.39% and 0.48%, with thermogravimetric analysis used as the reference method. The PLS models predicting the aerodynamic particle size were based on baseline offset in the NIR spectra and yielded prediction errors between 0.27 and 0.48 μm. The morphology of the spray-dried particles had a significant impact on the predictive ability of the models. Good predictive models could be obtained for spherical particles with a calibration error (RMSECV) of 0.22 μm, whereas wrinkled particles resulted in much less robust models with a Q² of 0.69. Based on the results of this study, NIR is a suitable tool for process analysis of the spray-drying process and for control of moisture content and particle size, in particular for smooth and spherical particles.
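
    The kind of PLS calibration and RMSEP evaluation described here can be sketched as follows; the synthetic "spectra", the three-component model, and the noise level are invented, and the scikit-learn API (PLSRegression) is assumed to be available.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)

# Synthetic stand-in for NIR spectra (200 samples x 100 wavelengths) and moisture (%).
moisture = rng.uniform(1.0, 5.0, 200)
spectra = np.outer(moisture, rng.normal(size=100)) + rng.normal(scale=0.5, size=(200, 100))

X_cal, X_val, y_cal, y_val = train_test_split(spectra, moisture, random_state=0)

pls = PLSRegression(n_components=3)     # component count chosen arbitrarily here
pls.fit(X_cal, y_cal)

# Root mean square error of prediction on the held-out set.
rmsep = np.sqrt(np.mean((pls.predict(X_val).ravel() - y_val) ** 2))
print(f"RMSEP: {rmsep:.3f} % moisture")
```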

  15. Using process elicitation and validation to understand and improve chemotherapy ordering and delivery.

    PubMed

    Mertens, Wilson C; Christov, Stefan C; Avrunin, George S; Clarke, Lori A; Osterweil, Leon J; Cassells, Lucinda J; Marquard, Jenna L

    2012-11-01

    Chemotherapy ordering and administration, in which errors have potentially severe consequences, was quantitatively and qualitatively evaluated by employing process formalism (or formal process definition), a technique derived from software engineering, to elicit and rigorously describe the process, after which validation techniques were applied to confirm the accuracy of the described process. The chemotherapy ordering and administration process, including exceptional situations and individuals' recognition of and responses to those situations, was elicited through informal, unstructured interviews with members of an interdisciplinary team. The process description (or process definition), written in a notation developed for software quality assessment purposes, guided process validation (which consisted of direct observations and semistructured interviews to confirm the elicited details for the treatment plan portion of the process). The overall process definition yielded 467 steps; 207 steps (44%) were dedicated to handling 59 exceptional situations. Validation yielded 82 unique process events (35 new expected but not yet described steps, 16 new exceptional situations, and 31 new steps in response to exceptional situations). Process participants actively altered the process as ambiguities and conflicts were discovered by the elicitation and validation components of the study. Chemotherapy error rates declined significantly during and after the project, which was conducted from October 2007 through August 2008. Each elicitation method and the subsequent validation discussions contributed uniquely to understanding the chemotherapy treatment plan review process, supporting rapid adoption of changes, improved communication regarding the process, and ensuing error reduction.

  16. From GCM grid cell to agricultural plot: scale issues affecting modelling of climate impact

    PubMed Central

    Baron, Christian; Sultan, Benjamin; Balme, Maud; Sarr, Benoit; Traore, Seydou; Lebel, Thierry; Janicot, Serge; Dingkuhn, Michael

    2005-01-01

    General circulation models (GCM) are increasingly capable of making relevant predictions of seasonal and long-term climate variability, thus improving prospects of predicting impact on crop yields. This is particularly important for semi-arid West Africa, where climate variability and drought threaten food security. Translating GCM outputs into attainable crop yields is difficult because GCM grid boxes are of larger scale than the processes governing yield, involving partitioning of rain among runoff, evaporation, transpiration, drainage and storage at plot scale. This study analyses the bias introduced to crop simulation when climatic data are aggregated spatially or in time, resulting in loss of relevant variation. A detailed case study was conducted using historical weather data for Senegal, applied to the crop model SARRA-H (version for millet). The study was then extended to a 10°N–17°N climatic gradient and a 31-year climate sequence to evaluate yield sensitivity to the variability of solar radiation and rainfall. Finally, a down-scaling model called LGO (Lebel–Guillot–Onibon), generating local rain patterns from grid cell means, was used to restore the variability lost by aggregation. Results indicate that forcing the crop model with spatially aggregated rainfall causes yield overestimations of 10–50% in dry latitudes, but nearly none in humid zones, due to a biased fraction of rainfall available for crop transpiration. Aggregation of solar radiation data caused significant bias in wetter zones where radiation was limiting yield. Where climatic gradients are steep, these two situations can occur within the same GCM grid cell. Disaggregation of grid cell means into a pattern of virtual synoptic stations having high-resolution rainfall distribution removed much of the bias caused by aggregation and gave realistic simulations of yield. It is concluded that coupling of GCM outputs with plot-level crop models can cause large systematic errors due to scale incompatibility. These errors can be avoided by transforming GCM outputs, especially rainfall, to simulate the variability found at plot level. PMID:16433096
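
    The yield overestimation caused by spatially aggregated rainfall is essentially a nonlinearity (Jensen's inequality) effect: runoff responds nonlinearly to daily rain, so averaging rainfall before applying the nonlinearity inflates the water apparently available to the crop. The toy runoff rule and rainfall statistics below are hypothetical and are not the SARRA-H model or the Senegal data.

```python
import numpy as np

rng = np.random.default_rng(2)

def runoff(rain_mm, threshold=20.0, frac=0.4):
    # Toy rule: a fixed fraction of rain above a threshold runs off (not SARRA-H).
    return np.maximum(rain_mm - threshold, 0.0) * frac

# Daily rainfall at 25 virtual stations inside one GCM grid cell, 120-day season.
station_rain = rng.gamma(shape=0.4, scale=25.0, size=(120, 25))

# Plot-scale water supply: apply the runoff rule station by station, then average.
supply_local = np.mean(station_rain - runoff(station_rain))

# Grid-cell approach: average rainfall first, then apply the same runoff rule.
cell_rain = station_rain.mean(axis=1)
supply_aggregated = np.mean(cell_rain - runoff(cell_rain))

print(f"mean daily crop water supply, local rainfall:     {supply_local:.2f} mm")
print(f"mean daily crop water supply, aggregated rainfall: {supply_aggregated:.2f} mm")
```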

  17. Computer Simulations to Study Diffraction Effects of Stacking Faults in Beta-SiC: II. Experimental Verification. 2; Experimental Verification

    NASA Technical Reports Server (NTRS)

    Pujar, Vijay V.; Cawley, James D.; Levine, S. (Technical Monitor)

    2000-01-01

    Earlier results from computer simulation studies suggest a correlation between the spatial distribution of stacking errors in the Beta-SiC structure and features observed in X-ray diffraction patterns of the material. Reported here are experimental results obtained from two types of nominally Beta-SiC specimens, which yield distinct XRD data. These samples were analyzed using high-resolution transmission electron microscopy (HRTEM), and the stacking error distribution was directly determined. The HRTEM results compare well with those deduced by matching the XRD data with simulated spectra, confirming the hypothesis that the XRD data are indicative not only of the presence and density of stacking errors but also of their distribution. In addition, the stacking error population in both specimens is related to their synthesis conditions and appears to follow the relation developed by others to explain the formation of the corresponding polytypes.

  18. EarthSat spring wheat yield system test 1975

    NASA Technical Reports Server (NTRS)

    1976-01-01

    The results of an operational test of the EarthSat System during the period 1 June - 30 August 1975 over the spring wheat regions of North Dakota, South Dakota, and Minnesota are presented. The errors associated with each sub-element of the system during the operational test and the sensitivity of the complete system and each major functional sub-element to the observed errors were evaluated. Evaluations and recommendations for future operational users of the system include: (1) changes in various system sub-elements, (2) changes in the yield model to effect improved accuracy, (3) changes in the number of geobased cells needed to develop an accurate aggregated yield estimate, (4) changes associated with the implementation of future operational satellites and data processing systems, and (5) detailed system documentation.

  19. Comparison of postoperative refractive outcomes: IOLMaster® versus immersion ultrasound.

    PubMed

    Whang, Woong-Joo; Jung, Byung-Ju; Oh, Tae-Hoon; Byun, Yong-Soo; Joo, Choun-Ki

    2012-01-01

    To compare the postoperative refractive outcomes between IOLMaster biometry (Carl Zeiss Meditec, Inc., Dublin, CA) and immersion ultrasound biometry for axial length measurements. Refractive outcomes in 354 eyes were compared using the IOLMaster and the immersion ultrasound biometry. Predicted refraction was determined using manual keratometry and the SRK-T formula with personalized A-constant. The axial lengths measured using the IOLMaster and immersion ultrasound were 24.49 ± 2.11 and 24.46 ± 2.11 mm, respectively, and the difference was significant (P < .05). The mean errors were 0.000 ± 0.578 D with the IOLMaster, and 0.000 ± 0.599 D with the immersion ultrasound, but the difference was not significant. The mean absolute error was smaller with the IOLMaster than with immersion ultrasound (0.463 ± 0.341 vs 0.479 ± 0.359 D), but the difference was not significant. IOLMaster biometry yields highly accurate results in cataract surgery. However, if the IOLMaster is unavailable, immersion ultrasound biometry with personalized intraocular lens constants is an acceptable alternative. Copyright 2012, SLACK Incorporated.

  20. Antineutrino analysis for continuous monitoring of nuclear reactors: Sensitivity study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stewart, Christopher; Erickson, Anna

    This paper explores the various contributors to uncertainty in predictions of the antineutrino source term, which is used for reactor antineutrino experiments and is proposed as a safeguard mechanism for future reactor installations. The errors introduced during simulation of the reactor burnup cycle from variation in nuclear reaction cross sections, operating power, and other factors are combined with those from experimental and predicted antineutrino yields resulting from fissions, then evaluated and compared. The most significant contributor to uncertainty in the reactor antineutrino source term, when the reactor was modeled in 3D fidelity with assembly-level heterogeneity, was found to be the uncertainty in the antineutrino yields. Using the reactor simulation uncertainty data, the dedicated observation of a rigorously modeled small, fast reactor by a few-ton near-field detector was estimated to offer reduction of uncertainty on antineutrino yields in the 3.0–6.5 MeV range to a few percent for the primary power-producing fuel isotopes, even with zero prior knowledge of the yields.

  1. Analyzing Hydraulic Conductivity Sampling Schemes in an Idealized Meandering Stream Model

    NASA Astrophysics Data System (ADS)

    Stonedahl, S. H.; Stonedahl, F.

    2017-12-01

    Hydraulic conductivity (K) is an important parameter affecting the flow of water through sediments under streams, and it can vary by orders of magnitude within a stream reach. Measuring heterogeneous K distributions in the field is limited by time and resources. This study investigates hypothetical sampling practices within a modeling framework on a highly idealized meandering stream. We generated three sets of 100 hydraulic conductivity grids containing two sands with connectivity values of 0.02, 0.08, and 0.32. We investigated systems with twice as much fast (K=0.1 cm/s) sand as slow sand (K=0.01 cm/s) and the reverse ratio on the same grids. The K values did not vary with depth. For these 600 cases, we calculated the homogeneous K value, Keq, that would yield the same flux into the sediments as the corresponding heterogeneous grid. We then investigated sampling schemes with six weighted probability distributions derived from the homogeneous case: uniform, flow-paths, velocity, in-stream, flux-in, and flux-out. For each grid, we selected locations from these distributions and compared the arithmetic, geometric, and harmonic means of these samples to the corresponding Keq using the root-mean-square deviation. We found that arithmetic averaging of samples outperformed geometric or harmonic means for all sampling schemes. Of the sampling schemes, flux-in (sampling inside the stream in an inward flux-weighted manner) yielded the least error and flux-out yielded the most error. All three sampling schemes outside of the stream yielded very similar results. Grids with lower connectivity values (fewer and larger clusters) showed the most sensitivity to the choice of sampling scheme, and thus improved the most with flux-in sampling. We also explored the relationship between the number of samples taken and the resulting error. Increasing the number of sampling points reduced error for the arithmetic mean with diminishing returns, but did not substantially reduce the error associated with geometric and harmonic means.
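
    A minimal sketch of the averaging comparison described above: draw point samples of K from a binary grid under one sampling scheme and compare the arithmetic, geometric, and harmonic means against an assumed flux-equivalent Keq. The grid, the uniform sampling scheme, and the Keq value are placeholders, not the study's meandering-stream model.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

# Binary K grid (cm/s): two-thirds fast sand, one-third slow sand (placeholder field).
grid = rng.choice([0.1, 0.01], size=(50, 50), p=[2 / 3, 1 / 3])
k_eq = 0.05   # assumed flux-equivalent homogeneous K for this grid (hypothetical)

# One hypothetical sampling scheme: n samples at uniform random locations.
n = 10
idx = rng.choice(grid.size, size=n, replace=False)
samples = grid.ravel()[idx]

means = {
    "arithmetic": samples.mean(),
    "geometric": stats.gmean(samples),
    "harmonic": stats.hmean(samples),
}
for name, value in means.items():
    print(f"{name:>10s} mean: {value:.4f}  |error vs Keq| = {abs(value - k_eq):.4f}")
```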

  2. Scaled test statistics and robust standard errors for non-normal data in covariance structure analysis: a Monte Carlo study.

    PubMed

    Chou, C P; Bentler, P M; Satorra, A

    1991-11-01

    Research studying robustness of maximum likelihood (ML) statistics in covariance structure analysis has concluded that test statistics and standard errors are biased under severe non-normality. An estimation procedure known as asymptotic distribution free (ADF), making no distributional assumption, has been suggested to avoid these biases. Corrections to the normal theory statistics to yield more adequate performance have also been proposed. This study compares the performance of a scaled test statistic and robust standard errors for two models under several non-normal conditions and also compares these with the results from ML and ADF methods. Both ML and ADF test statistics performed rather well in one model and considerably worse in the other. In general, the scaled test statistic seemed to behave better than the ML test statistic, and the ADF statistic performed the worst. The robust and ADF standard errors yielded more appropriate estimates of sampling variability than the ML standard errors, which were usually downward biased, in both models under most of the non-normal conditions. ML test statistics and standard errors were found to be quite robust to violation of the normality assumption when data had either symmetric and platykurtic distributions or non-symmetric distributions with zero kurtosis.

  3. Application of artificial intelligent tools to modeling of glucosamine preparation from exoskeleton of shrimp.

    PubMed

    Valizadeh, Hadi; Pourmahmood, Mohammad; Mojarrad, Javid Shahbazi; Nemati, Mahboob; Zakeri-Milani, Parvin

    2009-04-01

    The objective of this study was to forecast and optimize the glucosamine production yield from chitin (obtained from Persian Gulf shrimp) by means of genetic algorithm (GA), particle swarm optimization (PSO), and artificial neural network (ANN) methods as artificial intelligence tools. Three factors (acid concentration, acid solution to chitin ratio, and reaction time) were used as the input parameters of the models investigated. According to the obtained results, the production yield of glucosamine hydrochloride depends linearly on acid concentration, acid solution to solid ratio, and time, and also on the cross-product of acid concentration and time and the cross-product of solids to acid solution ratio and time. The production yield increased significantly with an increase in acid concentration, acid solution ratio, and reaction time. The production yield is inversely related to the cross-product of acid concentration and time, meaning that at high acid concentrations, longer reaction times give lower production yields. The results revealed that the average percent errors (PE) for prediction of production yield by GA, PSO, and ANN are 6.84%, 7.11%, and 5.49%, respectively. Considering the low PE, it might be concluded that these models have good predictive power in the studied range of variables and the ability to generalize to unknown cases.

  4. Procedures for establishing and maintaining permanent plots for silvicultural and yield research.

    Treesearch

    Robert O. Curtis

    1983-01-01

    This paper reviews procedures for establishing and maintaining permanent plots for silvicultural and yield research; discusses purposes, sampling, and plot design; points out common errors; and makes recommendations for research plot designs and procedures for measuring and recording data.

  5. Association between split selection instability and predictive error in survival trees.

    PubMed

    Radespiel-Tröger, M; Gefeller, O; Rabenstein, T; Hothorn, T

    2006-01-01

    To evaluate split selection instability in six survival tree algorithms and its relationship with predictive error by means of a bootstrap study. We study the following algorithms: logrank statistic with multivariate p-value adjustment without pruning (LR), Kaplan-Meier distance of survival curves (KM), martingale residuals (MR), Poisson regression for censored data (PR), within-node impurity (WI), and exponential log-likelihood loss (XL). With the exception of LR, initial trees are pruned by using split-complexity, and final trees are selected by means of cross-validation. We employ a real dataset from a clinical study of patients with gallbladder stones. The predictive error is evaluated using the integrated Brier score for censored data. The relationship between split selection instability and predictive error is evaluated by means of box-percentile plots, covariate and cutpoint selection entropy, and cutpoint selection coefficients of variation, respectively, in the root node. We found a positive association between covariate selection instability and predictive error in the root node. LR yields the lowest predictive error, while KM and MR yield the highest predictive error. The predictive error of survival trees is related to split selection instability. Based on the low predictive error of LR, we recommend the use of this algorithm for the construction of survival trees. Unpruned survival trees with multivariate p-value adjustment can perform equally well compared to pruned trees. The analysis of split selection instability can be used to communicate the results of tree-based analyses to clinicians and to support the application of survival trees.

  6. Formaldehyde Distribution over North America: Implications for Satellite Retrievals of Formaldehyde Columns and Isoprene Emission

    NASA Technical Reports Server (NTRS)

    Millet, Dylan B.; Jacob, Daniel J.; Turquety, Solene; Hudman, Rynda C.; Wu, Shiliang; Anderson, Bruce E.; Fried, Alan; Walega, James; Heikes, Brian G.; Blake, Donald R.

    2006-01-01

    Formaldehyde (HCHO) columns measured from space provide constraints on emissions of volatile organic compounds (VOCs). Quantitative interpretation requires characterization of errors in HCHO column retrievals and relating these columns to VOC emissions. Retrieval error is mainly in the air mass factor (AMF), which relates fitted backscattered radiances to vertical columns and requires external information on HCHO, aerosols, and clouds. Here we use aircraft data collected over North America and the Atlantic to determine the local relationships between HCHO columns and VOC emissions, calculate AMFs for HCHO retrievals, assess the errors in deriving AMFs with a chemical transport model (GEOS-Chem), and draw conclusions regarding space-based mapping of VOC emissions. We show that isoprene drives observed HCHO column variability over North America; HCHO column data from space can thus be used effectively as a proxy for isoprene emission. From observed HCHO and isoprene profiles we find an HCHO molar yield from isoprene oxidation of 1.6 ± 0.5, consistent with current chemical mechanisms. Clouds are the primary error source in the AMF calculation; errors in the HCHO vertical profile and aerosols have comparatively little effect. The mean bias and 1σ uncertainty in the GEOS-Chem AMF calculation increase from <1% and 15% for clear skies to 17% and 24% for half-cloudy scenes. With fitting errors, this gives an overall 1σ error in HCHO satellite measurements of 25-31%. Retrieval errors, combined with uncertainties in the HCHO yield from isoprene oxidation, result in a 40% (1σ) error in inferring isoprene emissions from HCHO satellite measurements.

  7. Evaluation of NMME temperature and precipitation bias and forecast skill for South Asia

    NASA Astrophysics Data System (ADS)

    Cash, Benjamin A.; Manganello, Julia V.; Kinter, James L.

    2017-08-01

    Systematic error and forecast skill for temperature and precipitation in two regions of Southern Asia are investigated using hindcasts initialized May 1 from the North American Multi-Model Ensemble. We focus on two contiguous but geographically and dynamically diverse regions: the Extended Indian Monsoon Rainfall region (70-100E, 10-30N) and the nearby mountainous area of Pakistan and Afghanistan (60-75E, 23-39N). Forecast skill is assessed using the sign-test framework, a rigorous statistical method that can be applied to non-Gaussian variables such as precipitation and to different ensemble sizes without introducing bias. We find that models show significant systematic error in both precipitation and temperature for both regions. The multi-model ensemble mean (MMEM) consistently yields the lowest systematic error and the highest forecast skill for both regions and variables. However, we also find that the MMEM consistently provides a statistically significant increase in skill over climatology only in the first month of the forecast. While the MMEM tends to provide higher overall skill than climatology later in the forecast, the differences are not significant at the 95% level. We also find that MMEMs constructed with a relatively small number of ensemble members per model can equal or exceed the skill of MMEMs constructed with more members. This suggests that some ensemble members either provide no contribution to overall skill or even detract from it.
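
    The sign-test idea used for forecast skill here can be sketched as follows: count the cases in which the forecast beats climatology in absolute error and test that count against a fair coin. The error series below are random placeholders, and scipy.stats.binomtest (SciPy 1.7 or later) is assumed.

```python
import numpy as np
from scipy.stats import binomtest

rng = np.random.default_rng(4)

# Placeholder absolute errors for 30 forecast cases: model vs. climatology.
err_model = rng.gamma(2.0, 0.5, 30)
err_climo = rng.gamma(2.2, 0.5, 30)

wins = int(np.sum(err_model < err_climo))      # cases where the model is closer
result = binomtest(wins, n=30, p=0.5, alternative="greater")
print(f"model wins {wins}/30 cases, one-sided p-value = {result.pvalue:.3f}")
```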

  8. Second derivatives for approximate spin projection methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Thompson, Lee M.; Hratchian, Hrant P., E-mail: hhratchian@ucmerced.edu

    2015-02-07

    The use of broken-symmetry electronic structure methods is required in order to obtain correct behavior of electronically strained open-shell systems, such as transition states, biradicals, and transition metals. This approach often has issues with spin contamination, which can lead to significant errors in predicted energies, geometries, and properties. Approximate projection schemes are able to correct for spin contamination and can often yield improved results. To fully make use of these methods and to carry out exploration of the potential energy surface, it is desirable to develop an efficient second energy derivative theory. In this paper, we formulate the analytical second derivatives for the Yamaguchi approximate projection scheme, building on recent work that has yielded an efficient implementation of the analytical first derivatives.

  9. Estimation of small-scale soil erosion in laboratory experiments with Structure from Motion photogrammetry

    NASA Astrophysics Data System (ADS)

    Balaguer-Puig, Matilde; Marqués-Mateu, Ángel; Lerma, José Luis; Ibáñez-Asensio, Sara

    2017-10-01

    The quantitative estimation of changes in terrain surfaces caused by water erosion can be carried out from precise descriptions of surfaces given by digital elevation models (DEMs). Some stages of water erosion research are conducted in the laboratory using rainfall simulators and soil boxes with areas of less than 1 m². Under these conditions, erosive processes can lead to very small surface variations, and high-precision DEMs are needed to account for differences measured in millimetres. In this paper, we used a photogrammetric Structure from Motion (SfM) technique to build DEMs of a 0.5 m² soil box to monitor several simulated rainfall episodes in the laboratory. The technique of DEM of difference (DoD) was then applied using GIS tools to compute estimates of volumetric changes between each pair of rainfall episodes. The aim was to classify the soil surface into three classes: erosion areas, deposition areas, and unchanged or neutral areas, and to quantify the volume of soil that was eroded and deposited. We used a thresholding criterion for changes based on the estimated error of the difference of DEMs, which in turn was obtained from the root mean square error of the individual DEMs. Experimental tests showed that the choice of different threshold values in the DoD can lead to volume differences as large as 60% when compared to the direct volumetric difference; the choice of that threshold is therefore a key point in this method. In parallel to the photogrammetric work, we collected sediments from each rain episode and obtained a series of corresponding measured sediment yields. Computed and measured sediment yields were significantly correlated, especially when considering the accumulated value of the five simulations. The computed sediment yield was 13% greater than the measured sediment yield. The procedure presented in this paper proved to be suitable for the determination of sediment yields in rainfall-driven soil erosion experiments conducted in the laboratory.
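
    The thresholding step described above, in which the two DEMs' errors are propagated into a minimum detectable change, can be sketched as below; the grids, RMSE values, grid spacing, and confidence multiplier are illustrative assumptions rather than values from the experiments.

```python
import numpy as np

rng = np.random.default_rng(5)

# Two synthetic DEMs of the soil box (elevations in mm) before and after a rainfall episode.
dem_before = rng.normal(100.0, 5.0, size=(200, 200))
dem_after = dem_before + rng.normal(-0.3, 0.8, size=(200, 200))

dod = dem_after - dem_before                       # DEM of difference

# Propagate the individual DEM errors (illustrative RMSE values, in mm).
rmse_before, rmse_after = 0.5, 0.5
min_detectable = 1.96 * np.sqrt(rmse_before**2 + rmse_after**2)   # 95% threshold

cell_area_mm2 = 2.5**2                             # assumed 2.5 mm grid spacing
erosion = dod[dod < -min_detectable]
deposition = dod[dod > min_detectable]
print(f"threshold: ±{min_detectable:.2f} mm")
print(f"eroded volume:    {abs(erosion.sum()) * cell_area_mm2 / 1e3:.1f} cm^3")
print(f"deposited volume: {deposition.sum() * cell_area_mm2 / 1e3:.1f} cm^3")
```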

  10. Onorbit IMU alignment error budget

    NASA Technical Reports Server (NTRS)

    Corson, R. W.

    1980-01-01

    The Star Tracker, Crew Optical Alignment Sight (COAS), and Inertial Measurement Unit (IMU) form a complex navigation system with a multitude of error sources. A complete list of the system errors is presented. The errors were combined in a rational way to yield an estimate of the IMU alignment accuracy for STS-1. The expected standard deviation in the IMU alignment error for STS-1 type alignments was determined to be 72 arc seconds per axis for star tracker alignments and 188 arc seconds per axis for COAS alignments. These estimates are based on current knowledge of the star tracker, COAS, IMU, and navigation base error specifications, and were partially verified by preliminary Monte Carlo analysis.
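
    A budget of this kind typically combines independent per-axis error sources by root-sum-square; the sketch below shows that combination with made-up error magnitudes, not the STS-1 budget values.

```python
import math

# Hypothetical independent 1-sigma error sources per axis (arc seconds); not the STS-1 budget.
star_tracker_sources = {
    "tracker noise": 40.0,
    "tracker-to-nav-base mounting": 35.0,
    "IMU resolver quantization": 30.0,
    "star catalog / aberration": 25.0,
    "thermal distortion": 20.0,
}

def rss(sources):
    """Root-sum-square of independent 1-sigma contributions."""
    return math.sqrt(sum(v**2 for v in sources.values()))

print(f"combined per-axis alignment error: {rss(star_tracker_sources):.0f} arcsec (1-sigma)")
```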

  11. Abstract Syntax in Sentence Production: Evidence from Stem-Exchange Errors

    ERIC Educational Resources Information Center

    Lane, Liane Wardlow; Ferreira, Victor S.

    2010-01-01

    Three experiments tested theories of syntactic representation by assessing "stem-exchange" errors ("hates the record"[right arrow]"records the hate"). Previous research has shown that in stem exchanges, speakers pronounce intended nouns ("REcord") as verbs ("reCORD"), yielding syntactically well-formed utterances. By "lexically based" theories,…

  12. Computer Programs for the Semantic Differential: Further Modifications.

    ERIC Educational Resources Information Center

    Lawson, Edwin D.; And Others

    The original nine programs for semantic differential analysis have been condensed into three programs which have been further refined and augmented. They yield: (1) means, standard deviations, and standard errors for each subscale on each concept; (2) Evaluation, Potency, and Activity (EPA) means, standard deviations, and standard errors; (3)…

  13. Simulating eroded soil organic carbon with the SWAT-C model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Xuesong

    Soil erosion and the associated lateral movement of eroded carbon (C) have been identified as a possible mechanism explaining the elusive terrestrial C sink of ca. 1.7-2.6 PgC yr⁻¹. Here we evaluated the SWAT-C model for simulating long-term soil erosion and the associated eroded C yields. Our method couples the CENTURY carbon cycling processes with a Modified Universal Soil Loss Equation (MUSLE) to estimate C losses associated with soil erosion. The results show that SWAT-C is able to simulate long-term average eroded C yields well, and to correctly estimate the relative magnitude of eroded C yields by crop rotations. We also evaluated three methods of calculating the C enrichment ratio in mobilized sediments, and found that errors associated with enrichment ratio estimation represent a significant uncertainty in SWAT-C simulations. Furthermore, we discussed limitations and future development directions for SWAT-C to advance C cycling modeling and assessment.

  14. What is the evidence for retrieval problems in the elderly?

    PubMed

    White, N; Cunningham, W R

    1982-01-01

    To determine whether older adults experience particular problems with retrieval, groups of young and elderly adults were given free recall and recognition tests of supraspan lists of unrelated words. Analysis of number of words correctly recalled and recognized yielded a significant age by retention test interaction: greater age differences were observed for recall than for recognition. In a second analysis of words recalled and recognized, corrected for guessing, the interaction disappeared. It was concluded that previous interpretations that age by retention test interactions are indicative of retrieval problems of the elderly may have been confounded by methodological problems. Furthermore, it was suggested that researchers in aging and memory need to be explicit in identifying their underlying models of error processes when analyzing recognition scores: different error models may lead to different results and interpretations.

  15. Techniques for avoiding discrimination errors in the dynamic sampling of condensable vapors

    NASA Technical Reports Server (NTRS)

    Lincoln, K. A.

    1983-01-01

    In the mass spectrometric sampling of dynamic systems, measurements of the relative concentrations of condensable and noncondensable vapors can be significantly distorted if some subtle, but important, instrumental factors are overlooked. Even with in situ measurements, the condensables are readily lost to the container walls, and the noncondensables can persist within the vacuum chamber and yield a disproportionately high output signal. Where single pulses of vapor are sampled, this source of error is avoided by gating either the mass spectrometer "on" or the data acquisition instrumentation "on" only during the very brief time window when the initial vapor cloud emanating directly from the vapor source passes through the ionizer. Instrumentation for these techniques is detailed, and its effectiveness is demonstrated by comparing gated and nongated spectra obtained from the pulsed-laser vaporization of several materials.

  16. Numerically accurate computational techniques for optimal estimator analyses of multi-parameter models

    NASA Astrophysics Data System (ADS)

    Berger, Lukas; Kleinheinz, Konstantin; Attili, Antonio; Bisetti, Fabrizio; Pitsch, Heinz; Mueller, Michael E.

    2018-05-01

    Modelling unclosed terms in partial differential equations typically involves two steps: First, a set of known quantities needs to be specified as input parameters for a model, and second, a specific functional form needs to be defined to model the unclosed terms by the input parameters. Both steps involve a certain modelling error, with the former known as the irreducible error and the latter referred to as the functional error. Typically, only the total modelling error, which is the sum of functional and irreducible error, is assessed, but the concept of the optimal estimator enables the separate analysis of the total and the irreducible errors, yielding a systematic modelling error decomposition. In this work, attention is paid to the techniques themselves required for the practical computation of irreducible errors. Typically, histograms are used for optimal estimator analyses, but this technique is found to add a non-negligible spurious contribution to the irreducible error if models with multiple input parameters are assessed. Thus, the error decomposition of an optimal estimator analysis becomes inaccurate, and misleading conclusions concerning modelling errors may be drawn. In this work, numerically accurate techniques for optimal estimator analyses are identified and a suitable evaluation of irreducible errors is presented. Four different computational techniques are considered: a histogram technique, artificial neural networks, multivariate adaptive regression splines, and an additive model based on a kernel method. For multiple input parameter models, only artificial neural networks and multivariate adaptive regression splines are found to yield satisfactorily accurate results. Beyond a certain number of input parameters, the assessment of models in an optimal estimator analysis even becomes practically infeasible if histograms are used. The optimal estimator analysis in this paper is applied to modelling the filtered soot intermittency in large eddy simulations using a dataset of a direct numerical simulation of a non-premixed sooting turbulent flame.
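
    A minimal illustration of the decomposition discussed above: the irreducible error is the variance of the target about its conditional mean given the model inputs, which the histogram (binning) technique approximates. The one-parameter synthetic data, bin count, and noise level below are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(6)

# Synthetic target q depending on one input parameter phi plus irreducible noise.
phi = rng.uniform(0.0, 1.0, 50_000)
q = np.sin(2 * np.pi * phi) + rng.normal(scale=0.3, size=phi.size)

# Histogram (binning) estimate of the optimal estimator E[q | phi].
bins = np.linspace(0.0, 1.0, 41)
which = np.digitize(phi, bins) - 1
cond_mean = np.array([q[which == b].mean() for b in range(len(bins) - 1)])

irreducible = np.mean((q - cond_mean[which]) ** 2)     # variance about E[q | phi]
total_const = np.mean((q - q.mean()) ** 2)             # total error of a trivial constant model
print(f"estimated irreducible error: {irreducible:.3f} (true noise variance 0.09)")
print(f"total error of constant model: {total_const:.3f}")
```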

  17. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carpenter, Daniel; Westover, Tyler; Howe, Daniel

    Here we report on an experimental study to produce refinery-ready fuel blendstocks via catalytic hydrodeoxygenation (upgrading) of pyrolysis oil using several biomass feedstocks and various blends. Blends were tested along with the pure materials to determine the effect of blending on product yields and qualities. Within experimental error, oil yields from fast pyrolysis and upgrading are shown to be linear functions of the blend components. Switchgrass exhibited lower fast pyrolysis and upgrading yields than the woody samples, which included clean pine, oriented strand board (OSB), and a mix of pinon and juniper (PJ). The notable exception was PJ, for which the poor upgrading yield of 18% was likely associated with the very high viscosity of the PJ fast pyrolysis oil (947 cP). The highest fast pyrolysis yield (54% dry basis) was obtained from clean pine, while the highest upgrading yield (50%) was obtained from a blend of 80% clean pine and 20% OSB (CP 8OSB 2). For switchgrass, reducing the fast pyrolysis temperature to 450 degrees C resulted in a significant increase in the pyrolysis oil yield and reduced hydrogen consumption during hydrotreating, but did not directly affect the hydrotreating oil yield. The water content of the fast pyrolysis oils was also observed to increase linearly with the summed content of potassium and sodium, ranging from 21% for clean pine to 37% for switchgrass. Multiple linear regression models demonstrate that fast pyrolysis yield is strongly dependent upon the contents of lignin and volatile matter as well as the sum of potassium and sodium.

  18. Simulation of corn yields and parameters uncertainties analysis in Hebei and Sichuang, China

    NASA Astrophysics Data System (ADS)

    Fu, A.; Xue, Y.; Hartman, M. D.; Chandran, A.; Qiu, B.; Liu, Y.

    2016-12-01

    Corn is one of the most important agricultural products in China. Research on the impacts of climate change and human activities on corn yields is important for understanding and mitigating the negative effects of environmental factors on corn yields and for maintaining stable corn production. Using climatic data, including daily temperature, precipitation, and solar radiation from 1948 to 2010, soil properties, observed corn yields, and farmland management information, corn yields in Sichuang and Hebei Provinces of China over the past 63 years were simulated using the Daycent model, and the results were evaluated using root mean square error, bias, simulation efficiency, and standard deviation. The primary climatic factors influencing corn yields were examined, the uncertainties of climatic factors were analyzed, and the uncertainties of human activity parameters were also studied by changing fertilization levels and cultivation methods. The results showed that: (1) the Daycent model is capable of simulating corn yields in Sichuang and Hebei Provinces of China, with observed and simulated corn yields showing a similar increasing trend with time; (2) the minimum daily temperature is the primary factor influencing corn yields in Sichuang, whereas in Hebei Province daily temperature, precipitation, and wind speed significantly affect corn yields; (3) when the global warming trend was removed from the original data, simulated corn yields were lower than before, decreasing by about 687 kg/hm² from 1992 to 2010; and when the fertilization level and cultivation method were varied by 50% and 75%, respectively, in the Schedule file of the Daycent model, the simulated corn yields increased by 1206 kg/hm² and 776 kg/hm², respectively, with the enhancement of the fertilization level and the improvement of the cultivation method. This study provides a scientific basis for selecting a suitable fertilization level and cultivation method for corn fields in China.
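
    The evaluation statistics named above can be computed as in the sketch below; the yield series are invented, and "simulation efficiency" is interpreted here as the Nash-Sutcliffe efficiency, which is an assumption rather than a detail given in the abstract.

```python
import numpy as np

observed = np.array([5200.0, 5450.0, 4980.0, 6010.0, 5890.0, 6120.0])   # kg/hm2, invented
simulated = np.array([5050.0, 5600.0, 5100.0, 5880.0, 6050.0, 5950.0])  # kg/hm2, invented

residual = simulated - observed
rmse = np.sqrt(np.mean(residual**2))
bias = residual.mean()
# Nash-Sutcliffe efficiency (assumed meaning of "simulation efficiency").
nse = 1.0 - np.sum(residual**2) / np.sum((observed - observed.mean()) ** 2)

print(f"RMSE: {rmse:.0f} kg/hm2, bias: {bias:+.0f} kg/hm2, NSE: {nse:.2f}")
```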

  19. LACIE - An application of meteorology for United States and foreign wheat assessment

    NASA Technical Reports Server (NTRS)

    Hill, J. D.; Strommen, N. D.; Sakamoto, C. M.; Leduc, S. K.

    1980-01-01

    This paper describes the overall Large Area Crop Inventory Experiment technical approach, utilizing the global weather-reporting network and the Landsat satellite to make a quasi-operational application of existing research results, and the accomplishments of this cooperative experiment in utilizing the weather information. Global weather data were utilized in preparing timely yield estimates for selected areas of the U.S. Great Plains, the U.S.S.R., and Canada. Additionally, wheat yield models were developed and pilot tested for Brazil, Australia, India and Argentina. The results of the work show that heading dates for wheat in North America can be predicted with an average absolute error of about 5 days for winter wheat and 4 days for spring wheat. Independent tests of wheat yield models over a 10-year period for the U.S. Great Plains produced a root-mean-square error of 1.12 quintals per hectare (q/ha), while similar tests in the U.S.S.R. produced an error of 1.31 q/ha. Research designed to improve the initial capability is described, as is the rationale for further evolution of a capability to monitor global climate and assess its impact on world food supplies.

  20. Assessing the Effects of Climate Variability on Orange Yield in Florida to Reduce Production Forecast Errors

    NASA Astrophysics Data System (ADS)

    Concha Larrauri, P.

    2015-12-01

    Orange production in Florida has experienced a decline over the past decade. Hurricanes in 2004 and 2005 greatly affected production, almost to the same degree as the strong freezes that occurred in the 1980's. The spread of citrus greening disease after the hurricanes has also contributed to the reduction in orange production in Florida. The occurrence of hurricanes and diseases cannot easily be predicted, but the additional effects of climate on orange yield can be studied and incorporated into existing production forecasts that are based on physical surveys, such as the October Citrus forecast issued every year by the USDA. Specific climate variables occurring before and after the October forecast is issued can have impacts on flowering, orange drop rates, growth, and maturation, and can contribute to the forecast error. Here we present a methodology to incorporate local climate variables to predict the error of the USDA's orange production forecast, and we study the local effects of climate on yield in different counties in Florida. This information can help farmers gain insight into what to expect during the orange production cycle, and can help supply chain managers to better plan their strategy.

  1. Prediction of protein tertiary structure from sequences using a very large back-propagation neural network

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, X.; Wilcox, G.L.

    1993-12-31

    We have implemented large-scale back-propagation neural networks on a 544-node Connection Machine CM-5, using the C language in MIMD mode. The program running on 512 processors performs backpropagation learning at 0.53 Gflops, which provides 76 million connection updates per second. We have applied the network to the prediction of protein tertiary structure from sequence information alone. A neural network with one hidden layer and 40 million connections is trained to learn the relationship between sequence and tertiary structure. The trained network yields predicted structures of some proteins on which it has not been trained, given only their sequences. Presentation of the Fourier transform of the sequences accentuates periodicity in the sequence and yields good generalization with greatly increased training efficiency. Training simulations with a large, heterologous set of protein structures (111 proteins, trained on the CM-5) reached solutions with under 2% RMS residual error within the training set (random responses give an RMS error of about 20%). Presentation of 15 sequences of related proteins in a testing set of 24 proteins yields predicted structures with less than 8% RMS residual error, indicating good apparent generalization.

  2. Perceptually-Based Adaptive JPEG Coding

    NASA Technical Reports Server (NTRS)

    Watson, Andrew B.; Rosenholtz, Ruth; Null, Cynthia H. (Technical Monitor)

    1996-01-01

    An extension to the JPEG standard (ISO/IEC DIS 10918-3) allows spatially adaptive coding of still images. As with baseline JPEG coding, one quantization matrix applies to an entire image channel, but in addition the user may specify a multiplier for each 8 x 8 block, which scales the quantization matrix, yielding the new matrix for the block. MPEG-1 and MPEG-2 use much the same scheme, except that there the multiplier changes only on macroblock boundaries. We propose a method for perceptual optimization of the set of multipliers. We compute the perceptual error for each block based upon DCT quantization error adjusted according to contrast sensitivity, light adaptation, and contrast masking, and pick the set of multipliers which yields maximally flat perceptual error over the blocks of the image. We investigate the bitrate savings due to this adaptive coding scheme and the relative importance of the different sorts of masking on adaptive coding.
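
    The block-multiplier mechanism amounts to scaling one base quantization matrix per 8 x 8 block; the sketch below shows only that quantization step, with a flat stand-in quantization matrix and arbitrary multipliers rather than the perceptually optimized ones the paper derives.

```python
import numpy as np
from scipy.fft import dctn, idctn

rng = np.random.default_rng(7)

base_q = np.full((8, 8), 16.0)           # stand-in quantization matrix, not the JPEG table
block = rng.normal(128.0, 40.0, (8, 8))  # one 8 x 8 image block (placeholder data)

def code_block(block, multiplier):
    """Quantize one block with the base matrix scaled by a per-block multiplier."""
    coeffs = dctn(block - 128.0, norm="ortho")
    q = base_q * multiplier
    quantized = np.round(coeffs / q)
    return idctn(quantized * q, norm="ortho") + 128.0   # decoded block

for m in (0.5, 1.0, 2.0):                # arbitrary multipliers
    err = np.abs(code_block(block, m) - block).mean()
    print(f"multiplier {m:3.1f}: mean absolute reconstruction error {err:5.2f}")
```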

  3. Regression Equations for Monthly and Annual Mean and Selected Percentile Streamflows for Ungaged Rivers in Maine

    USGS Publications Warehouse

    Dudley, Robert W.

    2015-12-03

    The largest average errors of prediction are associated with regression equations for the lowest streamflows, derived for months during which the lowest streamflows of the year occur (such as the 5th and 1st monthly percentiles for August and September). The regression equations were derived on the basis of streamflow and basin characteristics data for unregulated, rural drainage basins without substantial streamflow or drainage modifications (for example, diversions and (or) regulation by dams or reservoirs, tile drainage, irrigation, channelization, and impervious paved surfaces); therefore, using the equations for regulated or urbanized basins with substantial streamflow or drainage modifications will yield results of unknown error. Input basin characteristics derived using techniques or datasets other than those documented in this report, or using values outside the ranges used to develop these regression equations, will also yield results of unknown error.

  4. Teamwork and error in the operating room: analysis of skills and roles.

    PubMed

    Catchpole, K; Mishra, A; Handa, A; McCulloch, P

    2008-04-01

    To analyze the effects of surgical, anesthetic, and nursing teamwork skills on technical outcomes. The value of team skills in reducing adverse events in the operating room is presently receiving considerable attention. Current work has not yet identified in detail how the teamwork and communication skills of surgeons, anesthetists, and nurses affect the course of an operation. Twenty-six laparoscopic cholecystectomies and 22 carotid endarterectomies were studied using direct observation methods. For each operation, teams' skills were scored for the whole team, and for nursing, surgical, and anesthetic subteams on 4 dimensions (leadership and management [LM]; teamwork and cooperation; problem solving and decision making; and situation awareness). Operating time, errors in surgical technique, and other procedural problems and errors were measured as outcome parameters for each operation. The relationships between teamwork scores and these outcome parameters within each operation were examined using analysis of variance and linear regression. Surgical (F(2,42) = 3.32, P = 0.046) and anesthetic (F(2,42) = 3.26, P = 0.048) LM had significant but opposite relationships with operating time in each operation: operating time increased significantly with higher anesthetic but decreased with higher surgical LM scores. Errors in surgical technique had a strong association with surgical situation awareness (F(2,42) = 7.93, P < 0.001) in each operation. Other procedural problems and errors were related to the intraoperative LM skills of the nurses (F(5,1) = 3.96, P = 0.027). Detailed analysis of team interactions and dimensions is feasible and valuable, yielding important insights into relationships between nontechnical skills, technical performance, and operative duration. These results support the concept that interventions designed to improve teamwork and communication may have beneficial effects on technical performance and patient outcome.

  5. Accuracy Assessment and Correction of Vaisala RS92 Radiosonde Water Vapor Measurements

    NASA Technical Reports Server (NTRS)

    Whiteman, David N.; Miloshevich, Larry M.; Vomel, Holger; Leblanc, Thierry

    2008-01-01

    Relative humidity (RH) measurements from Vaisala RS92 radiosondes are widely used in both research and operational applications, although the measurement accuracy is not well characterized as a function of its known dependences on height, RH, and time of day (or solar altitude angle). This study characterizes RS92 mean bias error as a function of its dependences by comparing simultaneous measurements from RS92 radiosondes and from three reference instruments of known accuracy. The cryogenic frostpoint hygrometer (CFH) gives the RS92 accuracy above the 700 mb level; the ARM microwave radiometer gives the RS92 accuracy in the lower troposphere; and the ARM SurTHref system gives the RS92 accuracy at the surface using 6 RH probes with NIST-traceable calibrations. These RS92 assessments are combined using the principle of Consensus Referencing to yield a detailed estimate of RS92 accuracy from the surface to the lowermost stratosphere. An empirical bias correction is derived to remove the mean bias error, yielding corrected RS92 measurements whose mean accuracy is estimated to be ±3% of the measured RH value for nighttime soundings and ±4% for daytime soundings, plus an RH offset uncertainty of ±0.5% RH that is significant for dry conditions. The accuracy of individual RS92 soundings is further characterized by the 1-sigma "production variability," estimated to be ±1.5% of the measured RH value. The daytime bias correction should not be applied to cloudy daytime soundings, because clouds affect the solar radiation error in a complicated and uncharacterized way.

  6. Integrated model for predicting rice yield with climate change

    NASA Astrophysics Data System (ADS)

    Park, Jin-Ki; Das, Amrita; Park, Jong-Hwa

    2018-04-01

    Rice is the chief agricultural product and one of the primary food sources. For this reason, it is of pivotal importance to the worldwide economy and development. Forecasting yield is therefore vital in a decision-support system, both for farmers and for the planning and management of the country's economy. However, crop yield, which depends on the soil-bio-atmospheric system, is difficult to represent in statistical language. This paper describes a novel approach to predicting rice yield using an artificial neural network, spatial interpolation, remote sensing, and GIS methods. Herein, the variation in yield is attributed to climatic parameters and crop health, and the normalized difference vegetation index from MODIS is used as an indicator of plant health and growth. Due importance was given to scaling up the input parameters using spatial interpolation and GIS and to minimising the sources of error in every step of the modelling. The low percentage error (2.91%) and high correlation (0.76) signify the robust performance of the proposed model. This simple but effective approach is then used to estimate the influence of climate change on South Korean rice production. Under the RCP8.5 scenario, the projected rise in temperature may increase rice yield throughout South Korea.

  7. Trade-off between reservoir yield and evaporation losses as a function of lake morphology in semi-arid Brazil.

    PubMed

    Campos, José N B; Lima, Iran E; Studart, Ticiana M C; Nascimento, Luiz S V

    2016-05-31

    This study investigates the relationships between yield and evaporation as a function of lake morphology in semi-arid Brazil. First, a new methodology was proposed to classify the morphology of 40 reservoirs in the Ceará State, with storage capacities ranging from approximately 5 to 4500 hm³. Then, Monte Carlo simulations were conducted to study the effect of reservoir morphology (including real and simplified conical forms) on the water storage process at different reliability levels. The reservoirs were categorized as convex (60.0%), slightly convex (27.5%) or linear (12.5%). When the conical approximation was used instead of the real lake form, a trade-off occurred between reservoir yield and evaporation losses, with different trends for the convex, slightly convex and linear reservoirs. Using the conical approximation, the water yield prediction errors reached approximately 5% of the mean annual inflow, which is negligible for large reservoirs. However, for smaller reservoirs, this error became important. Therefore, this paper presents a new procedure for correcting the yield-evaporation relationships that were obtained by assuming a conical approximation rather than the real reservoir morphology. The combination of this correction with the Regulation Triangle Diagram is useful for rapidly and objectively predicting reservoir yield and evaporation losses in semi-arid environments.
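
    The conical approximation ties surface area to stored volume through a single shape relation (A proportional to V^(2/3)), which is what couples evaporation losses to yield; a minimal annual water-balance step under that assumption is sketched below with an invented shape coefficient, inflow series, and demand.

```python
import numpy as np

# Conical reservoir: surface area grows with stored volume as A = ALPHA * V**(2/3).
ALPHA = 0.1   # hypothetical shape coefficient (km2 per hm3**(2/3)), not from the paper

def reliability(inflows_hm3, yield_hm3, evap_m, capacity_hm3=1000.0):
    """Annual storage balance under the conical area-volume relation.

    Returns the fraction of years in which the full yield could be supplied.
    """
    storage, years_met = capacity_hm3 / 2.0, 0
    for inflow in inflows_hm3:
        area_km2 = ALPHA * storage ** (2.0 / 3.0)
        evap_hm3 = evap_m * area_km2               # 1 m of depth over 1 km2 = 1 hm3
        available = max(storage + inflow - evap_hm3, 0.0)
        supplied = min(yield_hm3, available)
        years_met += supplied >= yield_hm3
        storage = min(available - supplied, capacity_hm3)
    return years_met / len(inflows_hm3)

rng = np.random.default_rng(8)
inflows = rng.gamma(shape=2.0, scale=150.0, size=500)   # synthetic annual inflows (hm3)
print(f"reliability of a 200 hm3/yr yield: {reliability(inflows, 200.0, 2.0):.1%}")
```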

  8. Computerized assessment of sustained attention: interactive effects of task demand, noise, and anxiety.

    PubMed

    Ballard, J C

    1996-12-01

    In a sample of 163 college undergraduates, the effects of task demand, noise, and anxiety on Continuous Performance Test (CPT) errors were evaluated with multiple regression and multivariate analysis of variance. Results indicated significantly more omission errors on the difficult task. Complex interaction effects of noise and self-reported anxiety yielded more omissions in quiet intermittent white noise, particularly for high-anxious subjects performing the difficult task. Anxiety levels tended to increase from pretest to posttest, particularly for low-anxious subjects in the quiet, difficult-task condition, while a decrease was seen for high-anxious subjects in the loud, easy-task condition. Commission errors were unrelated to any predictor variables, suggesting that "attention" cannot be considered a unitary phenomenon. The variety of direct and interactive effects on vigilance performance underscores the need for clinicians to use a variety of measures to assess attentional skills, to avoid diagnosing attention deficits on the basis of a single computerized task performance, and to rule out anxiety and other contributors to poor vigilance task performance.

  9. The limits of crop productivity: validating theoretical estimates and determining the factors that limit crop yields in optimal environments

    NASA Technical Reports Server (NTRS)

    Bugbee, B.; Monje, O.

    1992-01-01

    Plant scientists have sought to maximize the yield of food crops since the beginning of agriculture. There are numerous reports of record food and biomass yields (per unit area) in all major crop plants, but many of the record yield reports are in error because they exceed the maximal theoretical rates of the component processes. In this article, we review the component processes that govern yield limits and describe how each process can be individually measured. This procedure has helped us validate theoretical estimates and determine what factors limit yields in optimal environments.

  10. Simulation of relationship between river discharge and sediment yield in the semi-arid river watersheds

    NASA Astrophysics Data System (ADS)

    Khaleghi, Mohammad Reza; Varvani, Javad

    2018-02-01

    Complex and variable nature of river sediment yield causes many problems in estimating the long-term sediment yield and the sediment input into reservoirs. Sediment Rating Curves (SRCs) are generally used to estimate the suspended sediment load of rivers and drainage watersheds. Since the regression equations of the SRCs are obtained by logarithmic retransformation and rely on a single independent variable, they overestimate or underestimate the true sediment load of the rivers. To evaluate the bias correction factors in the Kalshor and Kashafroud watersheds, seven hydrometric stations of this region with suitable upstream watersheds and spatial distribution were selected. Investigation of the accuracy index (ratio of estimated sediment yield to observed sediment yield) and the precision index of different bias correction factors, namely FAO, Quasi-Maximum Likelihood Estimator (QMLE), Smearing, and Minimum-Variance Unbiased Estimator (MVUE), with the LSD test showed that the FAO coefficient increases the estimation error at all of the stations. Application of MVUE in the linear and mean load rating curves has no statistically meaningful effect. QMLE and smearing factors increased the estimation error in the mean load rating curve, but have no effect on the linear rating curve estimation.
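    For readers unfamiliar with rating-curve bias correction, the sketch below fits a log-log sediment rating curve to synthetic data and applies Duan's smearing estimator; the FAO, QMLE, and MVUE corrections studied in the paper follow the same pattern with different multipliers. The data and coefficients are invented for illustration.

```python
# Minimal sketch (not the authors' code): log-log sediment rating curve with the
# smearing bias-correction factor applied to the retransformed estimates.
import numpy as np

rng = np.random.default_rng(2)
Q = rng.lognormal(mean=2.0, sigma=0.8, size=300)              # discharge, m^3/s (synthetic)
true_a, true_b = 0.5, 1.6
Qs = true_a * Q**true_b * np.exp(rng.normal(0, 0.5, Q.size))  # suspended load, t/day (synthetic)

# Fit log(Qs) = log(a) + b*log(Q) by ordinary least squares.
b, log_a = np.polyfit(np.log(Q), np.log(Qs), 1)
resid = np.log(Qs) - (log_a + b * np.log(Q))

# Naive retransformation underestimates the mean load; the smearing factor corrects it.
naive = np.exp(log_a) * Q**b
smearing_factor = np.mean(np.exp(resid))                      # Duan's smearing estimator
corrected = smearing_factor * naive

for name, est in [("naive", naive), ("smearing-corrected", corrected)]:
    print(f"{name:>20}: estimated/observed total load = {est.sum() / Qs.sum():.2f}")
```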

  11. Increasing point-count duration increases standard error

    USGS Publications Warehouse

    Smith, W.P.; Twedt, D.J.; Hamel, P.B.; Ford, R.P.; Wiedenfeld, D.A.; Cooper, R.J.

    1998-01-01

    We examined data from point counts of varying duration in bottomland forests of west Tennessee and the Mississippi Alluvial Valley to determine if counting interval influenced sampling efficiency. Estimates of standard error increased as point count duration increased both for cumulative number of individuals and species in both locations. Although point counts appear to yield data with standard errors proportional to means, a square root transformation of the data may stabilize the variance. Using long (>10 min) point counts may reduce sample size and increase sampling error, both of which diminish statistical power and thereby the ability to detect meaningful changes in avian populations.
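    A toy simulation of Poisson-like counts (a simplification of real bird-count data) illustrates the two statistical points in the abstract: standard errors grow as count duration grows, and a square-root transform approximately stabilizes the variance. The detection rate and station count below are hypothetical.

```python
# Toy simulation (hypothetical detection rate and station count): longer point counts
# give larger mean counts and larger standard errors, while a square-root transform
# of the counts keeps the standard error roughly constant.
import numpy as np

rng = np.random.default_rng(3)
rate_per_min = 0.8        # detections per minute at a point (assumed)
n_points = 50             # stations sampled per duration
n_reps = 1000             # simulation replicates

for minutes in (3, 5, 10, 20):
    counts = rng.poisson(rate_per_min * minutes, size=(n_reps, n_points))
    se_raw = (counts.std(axis=1, ddof=1) / np.sqrt(n_points)).mean()
    se_sqrt = (np.sqrt(counts).std(axis=1, ddof=1) / np.sqrt(n_points)).mean()
    print(f"{minutes:>2}-min counts: mean {counts.mean():5.2f}, "
          f"SE {se_raw:.3f}, SE after sqrt transform {se_sqrt:.3f}")
```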

  12. Hadronic light-by-light scattering contribution to the muon anomalous magnetic moment from lattice QCD

    DOE PAGES

    Blum, Thomas; Chowdhury, Saumitra; Hayakawa, Masashi; ...

    2015-01-07

    The form factor that yields the light-by-light scattering contribution to the muon anomalous magnetic moment is computed in lattice QCD+QED and QED. A non-perturbative treatment of QED is used and is checked against perturbation theory. The hadronic contribution is calculated for unphysical quark and muon masses, and only the diagram with a single quark loop is computed. Statistically significant signals are obtained. Initial results appear promising, and the prospect for a complete calculation with physical masses and controlled errors is discussed.

  13. High-efficient Extraction of Drainage Networks from Digital Elevation Model Data Constrained by Enhanced Flow Enforcement from Known River Map

    NASA Astrophysics Data System (ADS)

    Wu, T.; Li, T.; Li, J.; Wang, G.

    2017-12-01

    Improved drainage network extraction can be achieved by flow enforcement, whereby information from known river maps is imposed on the flow-path modeling process. However, the common elevation-based stream burning method can sometimes cause unintended topological errors and misinterpret the overall drainage pattern. We present an enhanced flow enforcement method to facilitate an accurate and efficient drainage network extraction process. Both the topology of the mapped hydrography and the initial landscape of the DEM are well preserved and fully utilized in the proposed method. An improved stream rasterization is achieved here, yielding a continuous, unambiguous and stream-collision-free raster equivalent of the stream vectors for flow enforcement. By imposing priority-based enforcement with a complementary flow direction enhancement procedure, the drainage patterns of the mapped hydrography are fully represented in the derived results. The proposed method was tested over the Rogue River Basin, using DEMs with various resolutions. As indicated by the visual and statistical analyses, the proposed method has three major advantages: (1) it significantly reduces the occurrence of topological errors, yielding very accurate watershed partition and channel delineation, (2) it ensures scale-consistent performance for DEMs of various resolutions, and (3) the entire extraction process is well designed to achieve great computational efficiency.

  14. Estimation of groundwater consumption by phreatophytes using diurnal water table fluctuations: A saturated‐unsaturated flow assessment

    USGS Publications Warehouse

    Loheide, Steven P.; Butler, James J.; Gorelick, Steven M.

    2005-01-01

    Groundwater consumption by phreatophytes is a difficult‐to‐measure but important component of the water budget in many arid and semiarid environments. Over the past 70 years the consumptive use of groundwater by phreatophytes has been estimated using a method that analyzes diurnal trends in hydrographs from wells that are screened across the water table (White, 1932). The reliability of estimates obtained with this approach has never been rigorously evaluated using saturated‐unsaturated flow simulation. We present such an evaluation for common flow geometries and a range of hydraulic properties. Results indicate that the major source of error in the White method is the uncertainty in the estimate of specific yield. Evapotranspirative consumption of groundwater will often be significantly overpredicted with the White method if the effects of drainage time and the depth to the water table on specific yield are ignored. We utilize the concept of readily available specific yield as the basis for estimation of the specific yield value appropriate for use with the White method. Guidelines are defined for estimating readily available specific yield based on sediment texture. Use of these guidelines with the White method should enable the evapotranspirative consumption of groundwater to be more accurately quantified.
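    The White (1932) calculation referenced here is compact enough to show directly: ET_g = Sy·(24·r + s), where r is the water-table recovery rate between midnight and 4 a.m. and s is the net decline of the water table over the 24-hour period. The hourly hydrograph below is synthetic and the specific-yield value is assumed.

```python
# Sketch of the classic White (1932) method discussed in the abstract, applied to a
# synthetic diurnal water-table hydrograph. Sy is the "readily available" specific
# yield the authors recommend estimating from sediment texture (value assumed here).
import numpy as np

hours = np.arange(48)
recovery = 0.002 * hours                                     # steady groundwater inflow, m
daylight = np.clip(np.sin((hours % 24 - 6) / 12 * np.pi), 0, None)
et_draw = 0.006 * np.cumsum(daylight)                        # daytime plant water use, m
wt = 10.0 + recovery - et_draw                               # synthetic water-table elevation, m

Sy = 0.12                                    # readily available specific yield (assumed)
r = (wt[4] - wt[0]) / 4.0                    # recovery rate between midnight and 4 a.m., m/h
s = wt[0] - wt[24]                           # net decline over 24 h, m (negative = net rise)
ET_g = Sy * (24.0 * r + s)                   # White (1932): groundwater evapotranspiration, m/day
print(f"r = {r * 1000:.2f} mm/h, s = {s * 1000:.1f} mm, ET_g = {ET_g * 1000:.2f} mm/day")
```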

  15. Comparing risk in conventional and organic dairy farming in the Netherlands: an empirical analysis.

    PubMed

    Berentsen, P B M; Kovacs, K; van Asseldonk, M A P M

    2012-07-01

    This study was undertaken to contribute to the understanding of why most dairy farmers do not convert to organic farming. Therefore, the objective of this research was to assess and compare risks for conventional and organic farming in the Netherlands with respect to gross margin and the underlying price and production variables. To investigate the risk factors a farm accountancy database was used containing panel data from both conventional and organic representative Dutch dairy farms (2001-2007). Variables with regard to price and production risk were identified using a gross margin analysis scheme. Price risk variables were milk price and concentrate price. The main production risk variables were milk yield per cow, roughage yield per hectare, and veterinary costs per cow. To assess risk, an error component implicit detrending method was applied and the resulting detrended standard deviations were compared between conventional and organic farms. Results indicate that the risk included in the gross margin per cow is significantly higher in organic farming. This is caused by both higher price and production risks. Price risks are significantly higher in organic farming for both milk price and concentrate price. With regard to production risk, only milk yield per cow poses a significantly higher risk in organic farming. Copyright © 2012 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.

  16. A model based on feature objects aided strategy to evaluate the methane generation from food waste by anaerobic digestion.

    PubMed

    Yu, Meijuan; Zhao, Mingxing; Huang, Zhenxing; Xi, Kezhong; Shi, Wansheng; Ruan, Wenquan

    2018-02-01

    A model based on a feature objects (FOs) aided strategy was used to evaluate the methane generation from food waste by anaerobic digestion. The kinetics of the feature objects were tested with the modified Gompertz model and the first-order kinetic model, and the first-order kinetic hydrolysis constants were used to estimate the reaction rate of homemade and actual food waste. The results showed that the methane yields of the four feature objects were significantly different. The anaerobic digestion of homemade food waste and actual food waste had various methane yields and kinetic constants due to the different contents of FOs in food waste. Combining the kinetic equations with the multiple linear regression equation could well express the methane yield of food waste, as the R^2 of food waste was more than 0.9. The predictive methane yields of the two actual food wastes were 528.22 mL g^-1 TS and 545.29 mL g^-1 TS with the model, while the experimental values were 527.47 mL g^-1 TS and 522.1 mL g^-1 TS, respectively. The relative errors between the experimental cumulative methane yields and the predicted cumulative methane yields were both less than 5%. Copyright © 2017 Elsevier Ltd. All rights reserved.
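    The two kinetic forms named in the abstract can be fit with standard curve fitting; the sketch below uses synthetic cumulative methane data and hypothetical parameter values, so it illustrates the fitting step only, not the authors' data or results.

```python
# Sketch: fit the first-order model B(t) = B0*(1 - exp(-k*t)) and the modified
# Gompertz model to a synthetic cumulative methane-yield curve with scipy.
import numpy as np
from scipy.optimize import curve_fit

def first_order(t, B0, k):
    return B0 * (1.0 - np.exp(-k * t))

def mod_gompertz(t, B0, Rm, lag):
    return B0 * np.exp(-np.exp(Rm * np.e / B0 * (lag - t) + 1.0))

t = np.arange(0, 31, 1.0)                                       # digestion time, days
rng = np.random.default_rng(4)
obs = first_order(t, 520.0, 0.18) + rng.normal(0, 8, t.size)    # mL CH4 g^-1 TS (synthetic)

p_fo, _ = curve_fit(first_order, t, obs, p0=[500, 0.1])
p_gz, _ = curve_fit(mod_gompertz, t, obs, p0=[500, 80, 0.5], maxfev=10000)

print(f"first-order:   B0 = {p_fo[0]:.0f} mL/g TS, k = {p_fo[1]:.2f} 1/day")
print(f"mod. Gompertz: B0 = {p_gz[0]:.0f} mL/g TS, Rm = {p_gz[1]:.0f} mL/(g TS day), lag = {p_gz[2]:.1f} days")
rel_err = abs(first_order(t[-1], *p_fo) - obs[-1]) / obs[-1] * 100
print(f"relative error of the fit at day 30: {rel_err:.1f}%")
```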

  17. Development and investigation of single-scan TV radiography for the acquisition of dynamic physiologic data

    NASA Technical Reports Server (NTRS)

    Baily, N. A.

    1975-01-01

    A light amplifier for large flat-screen fluoroscopy was investigated which will decrease both its size and weight. The work on organ contouring was extended to yield volumes. This is a simple extension since the fluoroscopic image contains density (gray scale) information which can be translated as tissue thickness and integrated, yielding accurate volume data in an on-line situation. A number of devices were developed for analog image processing of video signals, operating on-line in real time, and with simple selection mechanisms. The results show that this approach is feasible and produces an improvement in image quality which should make diagnostic error significantly lower. These are all low-cost devices, small and light in weight, thereby making them usable in a space environment, on the Ames centrifuge, and in a typical clinical situation.

  18. Deep-space navigation with differenced data types. Part 3: An expanded information content and sensitivity analysis

    NASA Technical Reports Server (NTRS)

    Estefan, J. A.; Thurman, S. W.

    1992-01-01

    An approximate six-parameter analytic model for Earth-based differenced range measurements is presented and is used to derive a representative analytic approximation for differenced Doppler measurements. The analytical models are tasked to investigate the ability of these data types to estimate spacecraft geocentric angular motion, Deep Space Network station oscillator (clock/frequency) offsets, and signal-path calibration errors over a period of a few days, in the presence of systematic station location and transmission media calibration errors. Quantitative results indicate that a few differenced Doppler plus ranging passes yield angular position estimates with a precision on the order of 0.1 to 0.4 micro-rad, and angular rate precision on the order of 10 to 25 x 10(exp -12) rad/sec, assuming no a priori information on the coordinate parameters. Sensitivity analyses suggest that troposphere zenith delay calibration error is the dominant systematic error source in most of the tracking scenarios investigated; as expected, the differenced Doppler data were found to be much more sensitive to troposphere calibration errors than differenced range. By comparison, results computed using wideband and narrowband (delta) VLBI under similar circumstances yielded angular precisions of 0.07 to 0.4 micro-rad, and angular rate precisions of 0.5 to 1.0 x 10(exp -12) rad/sec.

  19. Deep-space navigation with differenced data types. Part 3: An expanded information content and sensitivity analysis

    NASA Technical Reports Server (NTRS)

    Estefan, J. A.; Thurman, S. W.

    1992-01-01

    An approximate six-parameter analytic model for Earth-based differenced range measurements is presented and is used to derive a representative analytic approximation for differenced Doppler measurements. The analytical models are tasked to investigate the ability of these data types to estimate spacecraft geocentric angular motion, Deep Space Network station oscillator (clock/frequency) offsets, and signal-path calibration errors over a period of a few days, in the presence of systematic station location and transmission media calibration errors. Quantitative results indicate that a few differenced Doppler plus ranging passes yield angular position estimates with a precision on the order of 0.1 to 0.4 microrad, and angular rate precision on the order of 10 to 25 x 10(exp -12) rad/sec, assuming no a priori information on the coordinate parameters. Sensitivity analyses suggest that troposphere zenith delay calibration error is the dominant systematic error source in most of the tracking scenarios investigated; as expected, the differenced Doppler data were found to be much more sensitive to troposphere calibration errors than differenced range. By comparison, results computed using wide band and narrow band (delta) VLBI under similar circumstances yielded angular precisions of 0.07 to 0.4 microrad, and angular rate precisions of 0.5 to 1.0 x 10(exp -12) rad/sec.

  20. Optimal Halbach Permanent Magnet Designs for Maximally Pulling and Pushing Nanoparticles

    PubMed Central

    Sarwar, A.; Nemirovski, A.; Shapiro, B.

    2011-01-01

    Optimization methods are presented to design Halbach arrays to maximize the forces applied on magnetic nanoparticles at deep tissue locations. In magnetic drug targeting, where magnets are used to focus therapeutic nanoparticles to disease locations, the sharp fall off of magnetic fields and forces with distances from magnets has limited the depth of targeting. Creating stronger forces at depth by optimally designed Halbach arrays would allow treatment of a wider class of patients, e.g. patients with deeper tumors. The presented optimization methods are based on semi-definite quadratic programming, yield provably globally optimal Halbach designs in 2 and 3-dimensions, for maximal pull or push magnetic forces (stronger pull forces can collect nano-particles against blood forces in deeper vessels; push forces can be used to inject particles into precise locations, e.g. into the inner ear). These Halbach designs, here tested in simulations of Maxwell’s equations, significantly outperform benchmark magnets of the same size and strength. For example, a 3-dimensional 36 element 2000 cm3 volume optimal Halbach design yields a ×5 greater force at a 10 cm depth compared to a uniformly magnetized magnet of the same size and strength. The designed arrays should be feasible to construct, as they have a similar strength (≤ 1 Tesla), size (≤ 2000 cm3), and number of elements (≤ 36) as previously demonstrated arrays, and retain good performance for reasonable manufacturing errors (element magnetization direction errors ≤ 5°), thus yielding practical designs to improve magnetic drug targeting treatment depths. PMID:23335834

  1. Flame exposure time on Langmuir probe degradation, ion density, and thermionic emission for flame temperature.

    PubMed

    Doyle, S J; Salvador, P R; Xu, K G

    2017-11-01

    The paper examines the effect of exposure time of Langmuir probes in an atmospheric premixed methane-air flame. The effects of probe size and material composition on current measurements were investigated, with molybdenum and tungsten probe tips ranging in diameter from 0.0508 to 0.1651 mm. Repeated prolonged exposures to the flame, with five runs of 60 s, resulted in gradual probe degradation (-6% to -62% area loss) which affected the measurements. Due to long flame exposures, two ion saturation currents were observed, resulting in significantly different ion densities ranging from 1.16 × 10^16 to 2.71 × 10^19 m^-3. The difference between the saturation currents is caused by thermionic emission from the probe tip. As thermionic emission is temperature dependent, the flame temperature could thus be estimated from the change in current. The flame temperatures calculated from the difference in saturation currents (1734-1887 K) were compared to those from a conventional thermocouple (1580-1908 K). Temperature measurements obtained from tungsten probes placed in rich flames yielded the highest percent error (9.66%-18.70%) due to smaller emission current densities at lower temperatures. The molybdenum probe yielded an accurate temperature value with only 1.29% error. Molybdenum also demonstrated very low probe degradation in comparison to the tungsten probe tips (area reductions of 6% vs. 58%, respectively). The results also show that very little exposure time (<5 s) is needed to obtain a valid ion density measurement and that prolonged flame exposure can yield the flame temperature but also risks damage to the Langmuir probe tip.
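    The temperature estimate described above can be illustrated by attributing the difference between the two saturation currents to thermionic emission and inverting the Richardson-Dushman law J = A·T^2·exp(-W/(k_B·T)). The probe dimensions, currents, and work function in this sketch are illustrative assumptions, not values from the paper.

```python
# Hedged sketch: estimate the probe-tip temperature from the thermionic contribution
# to the measured current, using the Richardson-Dushman law. All inputs are assumed.
import numpy as np
from scipy.optimize import brentq

kB = 8.617e-5            # Boltzmann constant, eV/K
A_R = 1.2e6              # Richardson constant, A m^-2 K^-2
W = 4.55                 # effective work function of the tip material, eV (assumed)

d, L = 0.1651e-3, 2.0e-3                 # probe tip diameter and exposed length, m (assumed)
area = np.pi * d * L                     # emitting surface area, m^2

I_sat1, I_sat2 = 2.0e-7, 9.0e-7          # the two observed saturation currents, A (hypothetical)
J_th = (I_sat2 - I_sat1) / area          # thermionic current density, A/m^2

def richardson_residual(T):
    return A_R * T**2 * np.exp(-W / (kB * T)) - J_th

T_tip = brentq(richardson_residual, 1000.0, 3000.0)
print(f"thermionic current density {J_th:.2f} A/m^2 -> estimated tip temperature {T_tip:.0f} K")
```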

  2. Optimal Halbach Permanent Magnet Designs for Maximally Pulling and Pushing Nanoparticles.

    PubMed

    Sarwar, A; Nemirovski, A; Shapiro, B

    2012-03-01

    Optimization methods are presented to design Halbach arrays to maximize the forces applied on magnetic nanoparticles at deep tissue locations. In magnetic drug targeting, where magnets are used to focus therapeutic nanoparticles to disease locations, the sharp fall off of magnetic fields and forces with distances from magnets has limited the depth of targeting. Creating stronger forces at depth by optimally designed Halbach arrays would allow treatment of a wider class of patients, e.g. patients with deeper tumors. The presented optimization methods are based on semi-definite quadratic programming, yield provably globally optimal Halbach designs in 2 and 3-dimensions, for maximal pull or push magnetic forces (stronger pull forces can collect nano-particles against blood forces in deeper vessels; push forces can be used to inject particles into precise locations, e.g. into the inner ear). These Halbach designs, here tested in simulations of Maxwell's equations, significantly outperform benchmark magnets of the same size and strength. For example, a 3-dimensional 36 element 2000 cm(3) volume optimal Halbach design yields a ×5 greater force at a 10 cm depth compared to a uniformly magnetized magnet of the same size and strength. The designed arrays should be feasible to construct, as they have a similar strength (≤ 1 Tesla), size (≤ 2000 cm(3)), and number of elements (≤ 36) as previously demonstrated arrays, and retain good performance for reasonable manufacturing errors (element magnetization direction errors ≤ 5°), thus yielding practical designs to improve magnetic drug targeting treatment depths.

  3. Magnitude error bounds for sampled-data frequency response obtained from the truncation of an infinite series, and compensator improvement program

    NASA Technical Reports Server (NTRS)

    Mitchell, J. R.

    1972-01-01

    The frequency response method of analyzing control system performance is discussed, and the difficulty of obtaining the sampled frequency response of the continuous system is considered. An upper bound magnitude error equation is obtained which yields reasonable estimates of the actual error. Finalization of the compensator improvement program is also reported, and the program was used to design compensators for Saturn 5/S1-C dry workshop and Saturn 5/S1-C Skylab.

  4. Open quantum systems and error correction

    NASA Astrophysics Data System (ADS)

    Shabani Barzegar, Alireza

    Quantum effects can be harnessed to manipulate information in a desired way. Quantum systems designed for this purpose suffer from harmful interactions with their surrounding environment and from inaccuracies in the control forces. Engineering methods to combat errors in quantum devices is therefore highly demanding. In this thesis, I focus on realistic formulations of quantum error correction methods. A realistic formulation is one that incorporates experimental challenges. This thesis is presented in two parts: open quantum systems and quantum error correction. Chapters 2 and 3 cover the material on open quantum system theory. It is essential to first study a noise process and then to contemplate methods to cancel its effect. In the second chapter, I present the non-completely positive formulation of quantum maps. Most of these results are published in [Shabani and Lidar, 2009b,a], except a subsection on the geometric characterization of the positivity domain of a quantum map. The real-time formulation of the dynamics is the topic of the third chapter. After introducing the concept of the Markovian regime, a new post-Markovian quantum master equation is derived, published in [Shabani and Lidar, 2005a]. The quantum error correction part is presented in chapters 4, 5, 6 and 7. In chapter 4, we introduce a generalized theory of decoherence-free subspaces and subsystems (DFSs), which do not require accurate initialization (published in [Shabani and Lidar, 2005b]). In chapter 5, we present a semidefinite program optimization approach to quantum error correction that yields codes and recovery procedures that are robust against significant variations in the noise channel. Our approach allows us to optimize the encoding, recovery, or both, and is amenable to approximations that significantly reduce computational cost while retaining fidelity (see [Kosut et al., 2008] for a published version). Chapter 6 is devoted to a theory of quantum error correction (QEC) that applies to any linear map, in particular maps that are not completely positive (CP). This is complementary to the second chapter and is published in [Shabani and Lidar, 2007]. In the last chapter, 7, before the conclusion, a formulation for evaluating the performance of quantum error correcting codes for a general error model is presented, also published in [Shabani, 2005]. In this formulation, the correlation between errors is quantified by a Hamiltonian description of the noise process. In particular, we consider Calderbank-Shor-Steane codes and observe a better performance in the presence of correlated errors, depending on the timing of the error recovery.

  5. The Driver Behaviour Questionnaire: a North American analysis.

    PubMed

    Cordazzo, Sheila T D; Scialfa, Charles T; Bubric, Katherine; Ross, Rachel Jones

    2014-09-01

    The Driver Behaviour Questionnaire (DBQ), originally developed in Britain by Reason et al. [Reason, J., Manstead, A., Stradling, S., Baxter, J., & Campbell, K. (1990). Errors and violations on the road: A real distinction? Ergonomics, 33, 1315-1332] is one of the most widely used instruments for measuring driver behaviors linked to collision risk. The goals of the study were to adapt the DBQ for a North American driving population, assess the component structure of the items, and to determine whether scores on the DBQ could predict self-reported traffic collisions. Of the original Reason et al. items, our data indicate a two-component solution involving errors and violations. Evidence for a Lapses component was not found. The 20 items most closely resembling those of Parker et al. [Parker, D., Reason, J. T., Manstead, A. S. R., & Stradling, S. G. (1995). Driving errors, driving violations and accident involvement. Ergonomics, 38, 1036-1048] yielded a solution with 3 orthogonal components that reflect errors, lapses, and violations. Although violations and Lapses were positively and significantly correlated with self-reported collision involvement, the classification accuracy of the resulting models was quite poor. A North American DBQ has the same component structure as reported previously, but has limited ability to predict self-reported collisions. Copyright © 2014 National Safety Council and Elsevier Ltd. All rights reserved.

  6. Prophylactic Bracing Has No Effect on Lower Extremity Alignment or Functional Performance.

    PubMed

    Hueber, Garrett A; Hall, Emily A; Sage, Brad W; Docherty, Carrie L

    2017-07-01

    Prophylactic ankle bracing is commonly used during physical activity. Understanding how bracing affects body mechanics is critically important when discussing both injury prevention and sport performance. The purpose was to determine whether ankle bracing affects lower extremity mechanics during the Landing Error Scoring System test (LESS) and Sage Sway Index (SSI). Thirty physically active participants volunteered for this study. Participants completed the LESS and SSI in both braced and unsupported conditions. Total errors were recorded for the LESS. Total errors and time (seconds) were recorded for the SSI. The Wilcoxon signed-rank test was utilized to evaluate any differences between the brace conditions for each dependent variable. The a priori alpha level was set at p<0.05. The Wilcoxon signed-rank test yielded no significant difference between the braced and unsupported conditions for the LESS (Z=-0.35, p=0.72), SSI time (Z=-0.36, p=0.72), or SSI errors (Z=-0.37, p=0.71). Ankle braces had no effect on subjective clinical assessments of lower extremity alignment or postural stability. Utilization of a prophylactic support at the ankle did not substantially alter the proximal components of the lower kinetic chain. © Georg Thieme Verlag KG Stuttgart · New York.

  7. Topographic analysis of individual activation patterns in medial frontal cortex in schizophrenia

    PubMed Central

    Stern, Emily R.; Welsh, Robert C.; Fitzgerald, Kate D.; Taylor, Stephan F.

    2009-01-01

    Individual variability in the location of neural activations poses a unique problem for neuroimaging studies employing group averaging techniques to investigate the neural bases of cognitive and emotional functions. This may be especially challenging for studies examining patient groups, which often have limited sample sizes and increased intersubject variability. In particular, medial frontal cortex (MFC) dysfunction is thought to underlie performance monitoring dysfunction among patients with schizophrenia, yet previous studies using group averaging to compare schizophrenic patients to controls have yielded conflicting results. To examine individual activations in MFC associated with two aspects of performance monitoring, interference and error processing, functional magnetic resonance imaging (fMRI) data were acquired while 17 patients with schizophrenia and 21 healthy controls performed an event-related version of the multi-source interference task. Comparisons of averaged data revealed few differences between the groups. By contrast, topographic analysis of individual activations for errors showed that control subjects exhibited activations spanning across both posterior and anterior regions of MFC while patients primarily activated posterior MFC, possibly reflecting an impaired emotional response to errors in schizophrenia. This discrepancy between topographic and group-averaged results may be due to the significant dispersion among individual activations, particularly among healthy controls, highlighting the importance of considering intersubject variability when interpreting the medial frontal response to error commission. PMID:18819107

  8. Three-dimensional quantitative structure-activity relationship CoMSIA/CoMFA and LeapFrog studies on novel series of bicyclo [4.1.0] heptanes derivatives as melanin-concentrating hormone receptor R1 antagonists.

    PubMed

    Morales-Bayuelo, Alejandro; Ayazo, Hernan; Vivas-Reyes, Ricardo

    2010-10-01

    Comparative molecular similarity indices analysis (CoMSIA) and comparative molecular field analysis (CoMFA) were performed on a series of bicyclo[4.1.0]heptane derivatives as melanin-concentrating hormone receptor R1 antagonists (MCHR1 antagonists). Molecular superimposition of the antagonists on the template structure was performed by the database alignment method. The statistically significant model was established on sixty-five molecules, which were validated by a test set of ten molecules. The CoMSIA model yielded the best predictive model with a q(2) = 0.639, non-cross-validated R(2) of 0.953, F value of 92.802, bootstrapped R(2) of 0.971, standard error of prediction = 0.402, and standard error of estimate = 0.146, while the CoMFA model yielded a q(2) = 0.680, non-cross-validated R(2) of 0.922, F value of 114.351, bootstrapped R(2) of 0.925, standard error of prediction = 0.364, and standard error of estimate = 0.180. CoMFA analysis maps were employed for generating a pseudo cavity for the LeapFrog calculation. The contour maps obtained from the 3D-QSAR studies were appraised for activity trends for the molecules analyzed. The results show the variability of the steric and electrostatic contributions that determine the activity of the MCHR1 antagonists. Based on these results, we proposed new antagonists that may be more potent than those previously reported; these novel antagonists were designed by adding highly electronegative groups to the di(i-C(3)H(7))N- substituent of the bicyclo[4.1.0]heptane scaffold, using the CoMFA model, which was also employed for molecular design with the LeapFrog technique. The data generated from the present study will further help to design novel, potent, and selective MCHR1 antagonists. Copyright (c) 2010 Elsevier Masson SAS. All rights reserved.

  9. Impact of human error on lumber yield in rough mills

    Treesearch

    Urs Buehlmann; R. Edward Thomas; R. Edward Thomas

    2002-01-01

    Rough sawn, kiln-dried lumber contains characteristics such as knots and bark pockets that are considered by most people to be defects. When using boards to produce furniture components, these defects are removed to produce clear, defect-free parts. Currently, human operators identify and locate the unusable board areas containing defects. Errors in determining a...

  10. A false positive food chain error associated with a generic predator gut content ELISA

    USDA-ARS?s Scientific Manuscript database

    Conventional prey-specific gut content ELISA and PCR assays are useful for identifying predators of insect pests in nature. However, these assays are prone to yielding certain types of food chain errors. For instance, it is possible that prey remains can pass through the food chain as the result of ...

  11. Metabolite and transcript markers for the prediction of potato drought tolerance.

    PubMed

    Sprenger, Heike; Erban, Alexander; Seddig, Sylvia; Rudack, Katharina; Thalhammer, Anja; Le, Mai Q; Walther, Dirk; Zuther, Ellen; Köhl, Karin I; Kopka, Joachim; Hincha, Dirk K

    2018-04-01

    Potato (Solanum tuberosum L.) is one of the most important food crops worldwide. Current potato varieties are highly susceptible to drought stress. In view of global climate change, selection of cultivars with improved drought tolerance and high yield potential is of paramount importance. Drought tolerance breeding of potato is currently based on direct selection according to yield and phenotypic traits and requires multiple trials under drought conditions. Marker-assisted selection (MAS) is cheaper, faster and reduces classification errors caused by noncontrolled environmental effects. We analysed 31 potato cultivars grown under optimal and reduced water supply in six independent field trials. Drought tolerance was determined as tuber starch yield. Leaf samples from young plants were screened for preselected transcript and nontargeted metabolite abundance using qRT-PCR and GC-MS profiling, respectively. Transcript marker candidates were selected from a published RNA-Seq data set. A Random Forest machine learning approach extracted metabolite and transcript markers for drought tolerance prediction with low error rates of 6% and 9%, respectively. Moreover, by combining transcript and metabolite markers, the prediction error was reduced to 4.3%. Feature selection from Random Forest models allowed model minimization, yielding a minimal combination of only 20 metabolite and transcript markers that were successfully tested for their reproducibility in 16 independent agronomic field trials. We demonstrate that a minimum combination of transcript and metabolite markers sampled at early cultivation stages predicts potato yield stability under drought largely independent of seasonal and regional agronomic conditions. © 2017 The Authors. Plant Biotechnology Journal published by Society for Experimental Biology and The Association of Applied Biologists and John Wiley & Sons Ltd.
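    As an illustration of the marker-selection step (not the authors' pipeline), the sketch below trains a Random Forest classifier on synthetic metabolite/transcript abundances and reduces it to a small panel of the most important markers; sample sizes, feature counts, and class labels are invented.

```python
# Minimal sketch (synthetic data): classify drought-tolerant vs. sensitive samples from
# combined marker abundances with a Random Forest, then shrink to a 20-marker panel.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectFromModel
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(5)
n_samples, n_features = 120, 200                  # cultivar x trial samples, candidate markers
X = rng.normal(size=(n_samples, n_features))
informative = rng.choice(n_features, size=20, replace=False)   # 20 truly predictive markers
y = (X[:, informative].sum(axis=1) + rng.normal(0, 2, n_samples)) > 0

rf = RandomForestClassifier(n_estimators=300, random_state=0)
print("all markers,     CV error: %.3f" % (1 - cross_val_score(rf, X, y, cv=5).mean()))

# Keep only the 20 most important markers (cf. the minimal 20-marker panel in the abstract).
selector = SelectFromModel(RandomForestClassifier(n_estimators=300, random_state=0),
                           max_features=20, threshold=-np.inf).fit(X, y)
X_panel = selector.transform(X)
print("20-marker panel, CV error: %.3f" % (1 - cross_val_score(rf, X_panel, y, cv=5).mean()))
```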

  12. Permanent-plot procedures for silvicultural and yield research.

    Treesearch

    Robert O. Curtis; David D. Marshall

    2005-01-01

    This paper reviews purposes and procedures for establishing and maintaining permanent plots for silvicultural and yield research, sampling and plot design, common errors, and procedures for measuring and recording data. It is a revision and update of a 1983 publication. Although some details are specific to coastal Pacific Northwest conditions, most of the material is...

  13. Evaluation of seeding depth and gauge-wheel load effects on maize emergence and yield

    USDA-ARS?s Scientific Manuscript database

    Planting represents perhaps the most important field operation, with errors likely to negatively affect crop yield and thereby farm profitability. Performance of row-crop planters is evaluated by their ability to accurately place seeds into the soil at an adequate and pre-determined depth, the goal ...

  14. Assimilating Remote Sensing Observations of Leaf Area Index and Soil Moisture for Wheat Yield Estimates: An Observing System Simulation Experiment

    NASA Technical Reports Server (NTRS)

    Nearing, Grey S.; Crow, Wade T.; Thorp, Kelly R.; Moran, Mary S.; Reichle, Rolf H.; Gupta, Hoshin V.

    2012-01-01

    Observing system simulation experiments were used to investigate ensemble Bayesian state updating data assimilation of observations of leaf area index (LAI) and soil moisture (theta) for the purpose of improving single-season wheat yield estimates with the Decision Support System for Agrotechnology Transfer (DSSAT) CropSim-Ceres model. Assimilation was conducted in an energy-limited environment and a water-limited environment. Modeling uncertainty was prescribed to weather inputs, soil parameters and initial conditions, and cultivar parameters and through perturbations to model state transition equations. The ensemble Kalman filter and the sequential importance resampling filter were tested for the ability to attenuate effects of these types of uncertainty on yield estimates. LAI and theta observations were synthesized according to characteristics of existing remote sensing data, and effects of observation error were tested. Results indicate that the potential for assimilation to improve end-of-season yield estimates is low. Limitations are due to a lack of root zone soil moisture information, error in LAI observations, and a lack of correlation between leaf and grain growth.
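    The core of the assimilation step is an ensemble Kalman filter update. The sketch below applies a perturbed-observation EnKF update to a toy state vector (LAI, soil moisture, biomass) given a single synthetic LAI observation; it is conceptual only and does not use DSSAT or the authors' error settings.

```python
# Conceptual sketch of an ensemble Kalman filter state update for a single LAI
# observation. The state vector, ensemble spread, and observation error are placeholders.
import numpy as np

rng = np.random.default_rng(6)
n_ens = 50
# Ensemble of model states: [LAI, root-zone soil moisture, biomass]
states = np.column_stack([
    rng.normal(3.0, 0.5, n_ens),      # LAI, m^2/m^2
    rng.normal(0.25, 0.04, n_ens),    # soil moisture, m^3/m^3
    rng.normal(4000, 600, n_ens),     # biomass, kg/ha
])

H = np.array([[1.0, 0.0, 0.0]])       # observation operator: LAI is observed directly
obs, obs_var = 3.6, 0.3**2            # remotely sensed LAI and its error variance (assumed)

X = states.T                                              # (n_state, n_ens)
A = X - X.mean(axis=1, keepdims=True)                     # ensemble anomalies
P = A @ A.T / (n_ens - 1)                                 # sample covariance
K = P @ H.T @ np.linalg.inv(H @ P @ H.T + obs_var)        # Kalman gain

perturbed_obs = obs + rng.normal(0, np.sqrt(obs_var), n_ens)   # perturbed-observation EnKF
X_a = X + K @ (perturbed_obs[None, :] - H @ X)            # analysis (updated) ensemble

print(f"prior LAI mean {X[0].mean():.2f} -> posterior {X_a[0].mean():.2f}")
# Unobserved states move only insofar as they correlate with LAI within the ensemble.
print(f"posterior biomass mean {X_a[2].mean():.0f} kg/ha")
```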

  15. Wildlife management by habitat units: A preliminary plan of action

    NASA Technical Reports Server (NTRS)

    Frentress, C. D.; Frye, R. G.

    1975-01-01

    Procedures for yielding vegetation type maps were developed using LANDSAT data and a computer assisted classification analysis (LARSYS) to assist in managing populations of wildlife species by defined area units. Ground cover in Travis County, Texas was classified on two occasions using a modified version of the unsupervised approach to classification. The first classification produced a total of 17 classes. Examination revealed that further grouping was justified. A second analysis produced 10 classes which were displayed on printouts which were later color-coded. The final classification was 82 percent accurate. While the classification map appeared to satisfactorily depict the existing vegetation, two classes were determined to contain significant error. The major sources of error could have been eliminated by stratifying cluster sites more closely among previously mapped soil associations that are identified with particular plant associations and by precisely defining class nomenclature using established criteria early in the analysis.

  16. DIRBoost-an algorithm for boosting deformable image registration: application to lung CT intra-subject registration.

    PubMed

    Muenzing, Sascha E A; van Ginneken, Bram; Viergever, Max A; Pluim, Josien P W

    2014-04-01

    We introduce a boosting algorithm to improve on existing methods for deformable image registration (DIR). The proposed DIRBoost algorithm is inspired by the theory on hypothesis boosting, well known in the field of machine learning. DIRBoost utilizes a method for automatic registration error detection to obtain estimates of local registration quality. All areas detected as erroneously registered are subjected to boosting, i.e. undergo iterative registrations by employing boosting masks on both the fixed and moving image. We validated the DIRBoost algorithm on three different DIR methods (ANTS gSyn, NiftyReg, and DROP) on three independent reference datasets of pulmonary image scan pairs. DIRBoost reduced registration errors significantly and consistently on all reference datasets for each DIR algorithm, yielding an improvement of the registration accuracy by 5-34% depending on the dataset and the registration algorithm employed. Copyright © 2014 Elsevier B.V. All rights reserved.

  17. CBP for Field Workers – Results and Insights from Three Usability and Interface Design Evaluations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Oxstrand, Johanna Helene; Le Blanc, Katya Lee; Bly, Aaron Douglas

    2015-09-01

    Nearly all activities that involve human interaction with the systems in a nuclear power plant are guided by procedures. Even though the paper-based procedures (PBPs) currently used by industry have a demonstrated history of ensuring safety, improving procedure use could yield significant savings in increased efficiency as well as improved nuclear safety through human performance gains. The nuclear industry is constantly trying to find ways to decrease the human error rate, especially the human errors associated with procedure use. As a step toward the goal of improving procedure use and adherence, researchers in the Light-Water Reactor Sustainability (LWRS) Program, together with the nuclear industry, have been investigating the possibility and feasibility of replacing the current paper-based procedure process with a computer-based procedure (CBP) system. This report describes a field evaluation of new design concepts of a prototype computer-based procedure system.

  18. Measuring Cyclic Error in Laser Heterodyne Interferometers

    NASA Technical Reports Server (NTRS)

    Ryan, Daniel; Abramovici, Alexander; Zhao, Feng; Dekens, Frank; An, Xin; Azizi, Alireza; Chapsky, Jacob; Halverson, Peter

    2010-01-01

    An improved method and apparatus have been devised for measuring cyclic errors in the readouts of laser heterodyne interferometers that are configured and operated as displacement gauges. The cyclic errors arise as a consequence of mixing of spurious optical and electrical signals in beam launchers that are subsystems of such interferometers. The conventional approach to measurement of cyclic error involves phase measurements and yields values precise to within about 10 pm over air optical paths at laser wavelengths in the visible and near infrared. The present approach, which involves amplitude measurements instead of phase measurements, yields values precise to about 0.1 pm, roughly 100 times the precision of the conventional approach. In a displacement gauge of the type of interest here, the laser heterodyne interferometer is used to measure any change in distance along an optical axis between two corner-cube retroreflectors. One of the corner-cube retroreflectors is mounted on a piezoelectric transducer (see figure), which is used to introduce a low-frequency periodic displacement that can be measured by the gauges. The transducer is excited at a frequency of 9 Hz by a triangular waveform to generate a 9-Hz triangular-wave displacement having an amplitude of 25 microns. The displacement gives rise to both amplitude and phase modulation of the heterodyne signals in the gauges. The modulation includes cyclic error components, and the magnitude of the cyclic-error component of the phase modulation is what one needs to measure in order to determine the magnitude of the cyclic displacement error. The precision attainable in the conventional (phase measurement) approach to measuring cyclic error is limited because the phase measurements are af-

  19. Monte Carlo proton dose calculations using a radiotherapy specific dual-energy CT scanner for tissue segmentation and range assessment

    NASA Astrophysics Data System (ADS)

    Almeida, Isabel P.; Schyns, Lotte E. J. R.; Vaniqui, Ana; van der Heyden, Brent; Dedes, George; Resch, Andreas F.; Kamp, Florian; Zindler, Jaap D.; Parodi, Katia; Landry, Guillaume; Verhaegen, Frank

    2018-06-01

    Proton beam ranges derived from dual-energy computed tomography (DECT) images from a dual-spiral radiotherapy (RT)-specific CT scanner were assessed using Monte Carlo (MC) dose calculations. Images from a dual-source and a twin-beam DECT scanner were also used to establish a comparison to the RT-specific scanner. Proton range calculations based on conventional single-energy CT (SECT) were additionally performed to benchmark against literature values. Using two phantoms, a DECT methodology was tested as input for GEANT4 MC proton dose calculations. Proton ranges were calculated for different mono-energetic proton beams irradiating both phantoms; the results were compared to the ground truth based on the phantom compositions. The same methodology was applied in a head-and-neck cancer patient using both SECT and dual-spiral DECT scans from the RT-specific scanner. A pencil-beam-scanning plan was designed, which was subsequently optimized by MC dose calculations, and differences in proton range for the different image-based simulations were assessed. For the phantoms, the DECT method yielded overall better material segmentation, with >86% of the voxels correctly assigned for the dual-spiral and dual-source scanners, but only 64% for the twin-beam scanner. For the calibration phantom, the dual-spiral scanner yielded range errors below 1.2 mm (0.6% of range), similar to the errors yielded by the dual-source scanner (<1.1 mm, <0.5%). With the validation phantom, the dual-spiral scanner yielded errors below 0.8 mm (0.9%), whereas SECT yielded errors up to 1.6 mm (2%). For the patient case, where the absolute truth was missing, proton range differences between DECT and SECT were on average -1.2 ± 1.2 mm (-0.5% ± 0.5%). MC dose calculations were successfully performed on DECT images, where the dual-spiral scanner resulted in media segmentation and range accuracy as good as the dual-source CT. In the patient, the various methods showed relevant range differences.

  20. Global Velocities from VLBI

    NASA Technical Reports Server (NTRS)

    Ma, Chopo; Gordon, David; MacMillan, Daniel

    1999-01-01

    Precise geodetic Very Long Baseline Interferometry (VLBI) measurements have been made since 1979 at about 130 points on all major tectonic plates, including stable interiors and deformation zones. From the data set of about 2900 observing sessions and about 2.3 million observations, useful three-dimensional velocities can be derived for about 80 sites using an incremental least-squares adjustment of terrestrial, celestial, Earth rotation and site/session-specific parameters. The long history and high precision of the data yield formal errors for horizontal velocity as low as 0.1 mm/yr, but the limitation on the interpretation of individual site velocities is the tie to the terrestrial reference frame. Our studies indicate that the effect of converting precise relative VLBI velocities to individual site velocities is an error floor of about 0.4 mm/yr. Most VLBI horizontal velocities in stable plate interiors agree with the NUVEL-1A model, but there are significant departures in Africa and the Pacific. Vertical precision is worse by a factor of 2-3, and there are significant non-zero values that can be interpreted as post-glacial rebound, regional effects, and local disturbances.

  1. The Harm Done to Reproducibility by the Culture of Null Hypothesis Significance Testing.

    PubMed

    Lash, Timothy L

    2017-09-15

    In the last few years, stakeholders in the scientific community have raised alarms about a perceived lack of reproducibility of scientific results. In reaction, guidelines for journals have been promulgated and grant applicants have been asked to address the rigor and reproducibility of their proposed projects. Neither solution addresses a primary culprit, which is the culture of null hypothesis significance testing that dominates statistical analysis and inference. In an innovative research enterprise, selection of results for further evaluation based on null hypothesis significance testing is doomed to yield a low proportion of reproducible results and a high proportion of effects that are initially overestimated. In addition, the culture of null hypothesis significance testing discourages quantitative adjustments to account for systematic errors and quantitative incorporation of prior information. These strategies would otherwise improve reproducibility and have not been previously proposed in the widely cited literature on this topic. Without discarding the culture of null hypothesis significance testing and implementing these alternative methods for statistical analysis and inference, all other strategies for improving reproducibility will yield marginal gains at best. © The Author(s) 2017. Published by Oxford University Press on behalf of the Johns Hopkins Bloomberg School of Public Health. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  2. Error Analysis of Deep Sequencing of Phage Libraries: Peptides Censored in Sequencing

    PubMed Central

    Matochko, Wadim L.; Derda, Ratmir

    2013-01-01

    Next-generation sequencing techniques empower selection of ligands from phage-display libraries because they can detect low-abundance clones and quantify changes in the copy numbers of clones without excessive selection rounds. Identification of errors in deep sequencing data is the most critical step in this process because these techniques have error rates >1%. Mechanisms that yield errors in Illumina and other techniques have been proposed, but no reports to date describe error analysis in phage libraries. Our paper focuses on error analysis of 7-mer peptide libraries sequenced by the Illumina method. The low theoretical complexity of this phage library, as compared to the complexity of long genetic reads and genomes, allowed us to describe this library using a convenient linear vector and operator framework. We describe a phage library as an N × 1 frequency vector n = ||n_i||, where n_i is the copy number of the i-th sequence and N is the theoretical diversity, that is, the total number of all possible sequences. Any manipulation of the library is an operator acting on n. Selection, amplification, or sequencing can be described as a product of an N × N matrix and a stochastic sampling operator (S_a). The latter is a random diagonal matrix that describes sampling of a library. In this paper, we focus on the properties of S_a and use them to define the sequencing operator (Seq). Sequencing without any bias and errors is Seq = S_a I_N, where I_N is an N × N unity matrix. Any bias in sequencing changes I_N to a non-unity matrix. We identified a diagonal censorship matrix (CEN), which describes elimination, or statistically significant downsampling, of specific reads during the sequencing process. PMID:24416071
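    The vector-and-operator framework is easy to make concrete numerically. The toy example below builds a small frequency vector n, implements the stochastic sampling operator S_a as a multinomial draw, and applies a diagonal censorship matrix CEN that downweights selected sequences; the diversity, depth, and censorship factors are arbitrary illustrative choices.

```python
# Numerical illustration (toy numbers) of the abstract's framework: library as a
# frequency vector, sampling as a random operator, censorship as a diagonal matrix.
import numpy as np

rng = np.random.default_rng(7)
N = 10                                               # toy "theoretical diversity" (7-mer space is 20**7)
n = rng.integers(1, 1000, size=N).astype(float)      # copy numbers n_i of each sequence

def sampling_operator(freq, depth):
    """S_a: draw `depth` reads from the library frequencies (multinomial sampling)."""
    return rng.multinomial(depth, freq / freq.sum()).astype(float)

CEN = np.diag(np.where(np.arange(N) % 4 == 0, 0.2, 1.0))   # every 4th sequence censored 5x

reads_unbiased = sampling_operator(n, depth=5000)           # Seq = S_a I_N acting on n
reads_censored = sampling_operator(CEN @ n, depth=5000)     # sequencing with censorship bias

for i in range(N):
    flag = "censored" if CEN[i, i] < 1 else ""
    print(f"seq {i}: true {n[i]:5.0f}  unbiased {reads_unbiased[i]:5.0f}  "
          f"biased {reads_censored[i]:5.0f} {flag}")
```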

  3. Catalytic hydroprocessing of fast pyrolysis oils: Impact of biomass feedstock on process efficiency

    DOE PAGES

    Carpenter, Daniel; Westover, Tyler; Howe, Daniel; ...

    2016-12-01

    We report here on an experimental study to produce refinery-ready fuel blendstocks via catalytic hydrodeoxygenation (upgrading) of pyrolysis oil using several biomass feedstocks and various blends. Blends were tested along with the pure materials to determine the effect of blending on product yields and qualities. Within experimental error, oil yields from fast pyrolysis and upgrading are shown to be linear functions of the blend components. Switchgrass exhibited lower fast pyrolysis and upgrading yields than the woody samples, which included clean pine, oriented strand board (OSB), and a mix of pinon and juniper (PJ). The notable exception was PJ, for which the poor upgrading yield of 18% was likely associated with the very high viscosity of the PJ fast pyrolysis oil (947 cP). The highest fast pyrolysis yield (54% dry basis) was obtained from clean pine, while the highest upgrading yield (50%) was obtained from a blend of 80% clean pine and 20% OSB (CP8OSB2). For switchgrass, reducing the fast pyrolysis temperature to 450 degrees C resulted in a significant increase in the pyrolysis oil yield and reduced hydrogen consumption during hydrotreating, but did not directly affect the hydrotreating oil yield. The water content of fast pyrolysis oils was also observed to increase linearly with the summed content of potassium and sodium, ranging from 21% for clean pine to 37% for switchgrass. Multiple linear regression models demonstrate that fast pyrolysis is strongly dependent upon the contents of lignin and volatile matter as well as the sum of potassium and sodium.

  4. A novel registration-based methodology for prediction of trabecular bone fabric from clinical QCT: A comprehensive analysis

    PubMed Central

    Reyes, Mauricio; Zysset, Philippe

    2017-01-01

    Osteoporosis leads to hip fractures in aging populations and is diagnosed by modern medical imaging techniques such as quantitative computed tomography (QCT). Hip fracture sites involve trabecular bone, whose strength is determined by volume fraction and orientation, known as fabric. However, bone fabric cannot be reliably assessed in clinical QCT images of proximal femur. Accordingly, we propose a novel registration-based estimation of bone fabric designed to preserve tensor properties of bone fabric and to map bone fabric by a global and local decomposition of the gradient of a non-rigid image registration transformation. Furthermore, no comprehensive analysis on the critical components of this methodology has been previously conducted. Hence, the aim of this work was to identify the best registration-based strategy to assign bone fabric to the QCT image of a patient’s proximal femur. The normalized correlation coefficient and curvature-based regularization were used for image-based registration and the Frobenius norm of the stretch tensor of the local gradient was selected to quantify the distance among the proximal femora in the population. Based on this distance, closest, farthest and mean femora with a distinction of sex were chosen as alternative atlases to evaluate their influence on bone fabric prediction. Second, we analyzed different tensor mapping schemes for bone fabric prediction: identity, rotation-only, rotation and stretch tensor. Third, we investigated the use of a population average fabric atlas. A leave one out (LOO) evaluation study was performed with a dual QCT and HR-pQCT database of 36 pairs of human femora. The quality of the fabric prediction was assessed with three metrics, the tensor norm (TN) error, the degree of anisotropy (DA) error and the angular deviation of the principal tensor direction (PTD). The closest femur atlas (CTP) with a full rotation (CR) for fabric mapping delivered the best results with a TN error of 7.3 ± 0.9%, a DA error of 6.6 ± 1.3% and a PTD error of 25 ± 2°. The closest to the population mean femur atlas (MTP) using the same mapping scheme yielded only slightly higher errors than CTP for substantially less computing efforts. The population average fabric atlas yielded substantially higher errors than the MTP with the CR mapping scheme. Accounting for sex did not bring any significant improvements. The identified fabric mapping methodology will be exploited in patient-specific QCT-based finite element analysis of the proximal femur to improve the prediction of hip fracture risk. PMID:29176881

  5. Estimating error rates for firearm evidence identifications in forensic science

    PubMed Central

    Song, John; Vorburger, Theodore V.; Chu, Wei; Yen, James; Soons, Johannes A.; Ott, Daniel B.; Zhang, Nien Fan

    2018-01-01

    Estimating error rates for firearm evidence identification is a fundamental challenge in forensic science. This paper describes the recently developed congruent matching cells (CMC) method for image comparisons, its application to firearm evidence identification, and its usage and initial tests for error rate estimation. The CMC method divides compared topography images into correlation cells. Four identification parameters are defined for quantifying both the topography similarity of the correlated cell pairs and the pattern congruency of the registered cell locations. A declared match requires a significant number of CMCs, i.e., cell pairs that meet all similarity and congruency requirements. Initial testing on breech face impressions of a set of 40 cartridge cases fired with consecutively manufactured pistol slides showed wide separation between the distributions of CMC numbers observed for known matching and known non-matching image pairs. Another test on 95 cartridge cases from a different set of slides manufactured by the same process also yielded widely separated distributions. The test results were used to develop two statistical models for the probability mass function of CMC correlation scores. The models were applied to develop a framework for estimating cumulative false positive and false negative error rates and individual error rates of declared matches and non-matches for this population of breech face impressions. The prospect for applying the models to large populations and realistic case work is also discussed. The CMC method can provide a statistical foundation for estimating error rates in firearm evidence identifications, thus emulating methods used for forensic identification of DNA evidence. PMID:29331680
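    A much-simplified version of the CMC idea (not the NIST implementation) can be sketched as follows: split two topography images into cells, find each cell's best offset by normalized cross-correlation, and count the cells whose similarity and registration offsets are congruent. Cell size, search window, and thresholds below are arbitrary.

```python
# Simplified sketch of congruent matching cells (CMC) counting on synthetic images.
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation of two equally sized patches."""
    a, b = a - a.mean(), b - b.mean()
    d = np.sqrt((a * a).sum() * (b * b).sum())
    return (a * b).sum() / d if d > 0 else 0.0

def cmc_count(img1, img2, cell=16, search=4, t_ccf=0.6, t_reg=1):
    offsets, scores = [], []
    for i in range(0, img1.shape[0] - cell + 1, cell):
        for j in range(0, img1.shape[1] - cell + 1, cell):
            ref = img1[i:i+cell, j:j+cell]
            best, best_off = -1.0, (0, 0)
            for di in range(-search, search + 1):
                for dj in range(-search, search + 1):
                    ii, jj = i + di, j + dj
                    if 0 <= ii <= img2.shape[0] - cell and 0 <= jj <= img2.shape[1] - cell:
                        s = ncc(ref, img2[ii:ii+cell, jj:jj+cell])
                        if s > best:
                            best, best_off = s, (di, dj)
            offsets.append(best_off)
            scores.append(best)
    offsets, scores = np.array(offsets), np.array(scores)
    consensus = np.median(offsets, axis=0)                       # dominant registration pattern
    congruent = np.all(np.abs(offsets - consensus) <= t_reg, axis=1)
    return int(np.sum((scores >= t_ccf) & congruent))            # cells passing both criteria

rng = np.random.default_rng(8)
base = rng.normal(size=(64, 64))
matching = np.roll(base, (2, 1), axis=(0, 1)) + 0.2 * rng.normal(size=base.shape)
nonmatching = rng.normal(size=(64, 64))
print("CMCs, known match:    ", cmc_count(base, matching))
print("CMCs, known non-match:", cmc_count(base, nonmatching))
```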

  6. Estimating error rates for firearm evidence identifications in forensic science.

    PubMed

    Song, John; Vorburger, Theodore V; Chu, Wei; Yen, James; Soons, Johannes A; Ott, Daniel B; Zhang, Nien Fan

    2018-03-01

    Estimating error rates for firearm evidence identification is a fundamental challenge in forensic science. This paper describes the recently developed congruent matching cells (CMC) method for image comparisons, its application to firearm evidence identification, and its usage and initial tests for error rate estimation. The CMC method divides compared topography images into correlation cells. Four identification parameters are defined for quantifying both the topography similarity of the correlated cell pairs and the pattern congruency of the registered cell locations. A declared match requires a significant number of CMCs, i.e., cell pairs that meet all similarity and congruency requirements. Initial testing on breech face impressions of a set of 40 cartridge cases fired with consecutively manufactured pistol slides showed wide separation between the distributions of CMC numbers observed for known matching and known non-matching image pairs. Another test on 95 cartridge cases from a different set of slides manufactured by the same process also yielded widely separated distributions. The test results were used to develop two statistical models for the probability mass function of CMC correlation scores. The models were applied to develop a framework for estimating cumulative false positive and false negative error rates and individual error rates of declared matches and non-matches for this population of breech face impressions. The prospect for applying the models to large populations and realistic case work is also discussed. The CMC method can provide a statistical foundation for estimating error rates in firearm evidence identifications, thus emulating methods used for forensic identification of DNA evidence. Published by Elsevier B.V.

  7. Nematode Damage Functions: The Problems of Experimental and Sampling Error

    PubMed Central

    Ferris, H.

    1984-01-01

    The development and use of pest damage functions involves measurement and experimental errors associated with cultural, environmental, and distributional factors. Damage predictions are more valuable if considered with associated probability. Collapsing population densities into a geometric series of population classes allows a pseudo-replication removal of experimental and sampling error in damage function development. Recognition of the nature of sampling error for aggregated populations allows assessment of probability associated with the population estimate. The product of the probabilities incorporated in the damage function and in the population estimate provides a basis for risk analysis of the yield loss prediction and the ensuing management decision. PMID:19295865
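    The risk-analysis idea above, multiplying the probability attached to the damage-function prediction by the probability of the population estimate, can be sketched as follows; the Seinhorst-style yield function and every numeric value are illustrative stand-ins, not values from this record.

```python
# Minimal sketch: combine the confidence attached to a damage-function
# prediction with the confidence of the (aggregated) population estimate.
# The Seinhorst-style damage function and all numbers are illustrative.
def relative_yield(population, tolerance=50.0, min_yield=0.4, z=0.995):
    """Illustrative damage function: relative yield vs. nematode density."""
    if population <= tolerance:
        return 1.0
    return min_yield + (1.0 - min_yield) * z ** (population - tolerance)

p_damage_function = 0.80   # assumed probability the damage function applies
p_population_class = 0.70  # assumed probability the sampled density class is correct

predicted_loss = 1.0 - relative_yield(400)
risk_weight = p_damage_function * p_population_class
print(f"predicted loss {predicted_loss:.0%} carries joint probability {risk_weight:.2f}")
```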

  8. Pressing the Approach: A NASA Study of 19 Recent Accidents Yields a New Perspective on Pilot Error

    NASA Technical Reports Server (NTRS)

    Berman, Benjamin A.; Dismukes, R. Key

    2007-01-01

    This article begins with a review of two sample airplane accidents that were caused by pilot error. The analysis of these and 17 other accidents suggested that almost any experienced pilot, operating in the same environment in which the accident crews were operating and knowing only what the accident crews knew at each moment of the flight, would be vulnerable to making similar decisions and similar errors. Whether a particular crew in a given situation makes errors depends on a somewhat random interaction of factors. Two themes that seem to be prevalent in these cases are plan continuation bias and snowballing workload.

  9. Image data compression having minimum perceptual error

    NASA Technical Reports Server (NTRS)

    Watson, Andrew B. (Inventor)

    1995-01-01

    A method for performing image compression that eliminates redundant and invisible image components is described. The image compression uses a Discrete Cosine Transform (DCT), and each DCT coefficient yielded by the transform is quantized by an entry in a quantization matrix which determines the perceived image quality and the bit rate of the image being compressed. The present invention adapts or customizes the quantization matrix to the image being compressed. The quantization matrix incorporates visual masking by luminance and contrast techniques and by an error pooling technique, all resulting in a minimum perceptual error for any given bit rate, or a minimum bit rate for a given perceptual error.
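    A minimal sketch of the DCT-quantize-reconstruct step that this kind of method builds on; the perceptual (visual-masking) adaptation of the quantization matrix is not reproduced, and the flat matrix used here is purely illustrative.

```python
# Hedged sketch of the core DCT -> quantize -> dequantize -> inverse DCT step;
# the perceptually adapted quantization matrix described in the abstract is
# replaced here by a fixed illustrative matrix.
import numpy as np

def dct_matrix(n=8):
    """Orthonormal DCT-II basis matrix."""
    k = np.arange(n)
    c = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    c[0, :] = np.sqrt(1.0 / n)
    return c

def compress_block(block, q_matrix):
    """Forward DCT, quantize, dequantize, inverse DCT for one 8x8 block."""
    c = dct_matrix(block.shape[0])
    coeffs = c @ block @ c.T                  # forward 2-D DCT
    quantized = np.round(coeffs / q_matrix)   # lossy step: divide by quantization entries
    return c.T @ (quantized * q_matrix) @ c   # reconstruct the block

rng = np.random.default_rng(0)
block = rng.integers(0, 256, size=(8, 8)).astype(float)
q = np.full((8, 8), 16.0)                     # illustrative flat quantization matrix
recon = compress_block(block, q)
print("max reconstruction error:", np.abs(recon - block).max())
```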

  10. Prevalence of uncorrected refractive errors among children aged 3-10 years in western Saudi Arabia

    PubMed Central

    Alrahili, Nojood Hameed R.; Jadidy, Esraa S.; Alahmadi, Bayan Sulieman H.; Abdula’al, Mohammed F.; Jadidy, Alaa S.; Alhusaini, Abdulaziz A.; Mojaddidi, Moaz A.; Al-Barry, Maan A.

    2017-01-01

    Objectives: To determine the prevalence of uncorrected refractive errors (URE) among children 3-10 years and to affirm the necessity of a national school-based visual screening program for school-aged children. Methods: This retrospective cross-sectional study was conducted in Medina, Saudi Arabia in 2015. Children were selected through a multistage stratified random sampling from 8 kindergarten and 8 primary schools. Those included were screened to diagnose UREs using a visual acuity chart and an auto refractometer according to American guidelines. The prevalence and types of UREs were estimated. Results: Of the 2121 children enumerated, 1893 were examined, yielding a response rate of 89.3%. The prevalence of UREs was 34.9% (95% CI = 32.8%-37.1%), with significant differences in different age groups. The prevalence of astigmatism (25.3%) was higher compared to that of anisometropia (7.4%), hypermetropia (1.5%), and myopia (0.7%). Risk of uncorrected refractive error was positively associated with age, and this was noted in astigmatism, myopia, and anisometropia. In addition, the risk of hypermetropia was associated with boys and that of myopia was associated with girls. Conclusions: The prevalence of UREs, particularly astigmatism, was high among children aged 3-10 years in Medina, with significant age differences. Vision screening programs targeting kindergarten and primary schoolchildren are crucial to lessen the risk of preventable visual impairment due to UREs. PMID:28762432

  11. Prevalence of uncorrected refractive errors among children aged 3-10 years in western Saudi Arabia.

    PubMed

    Alrahili, Nojood Hameed R; Jadidy, Esraa S; Alahmadi, Bayan Sulieman H; Abdula'al, Mohammed F; Jadidy, Alaa S; Alhusaini, Abdulaziz A; Mojaddidi, Moaz A; Al-Barry, Maan A

    2017-08-01

    To determine the prevalence of uncorrected refractive errors (URE) among children 3-10 years and to affirm the necessity of a national school-based visual screening program for school-aged children. Methods: This retrospective cross-sectional study was conducted in Medina, Saudi Arabia in 2015. Children were selected through a multistage stratified random sampling from 8 kindergarten and 8 primary schools. Those included were screened to diagnose UREs using a visual acuity chart and an auto refractometer according to American guidelines. The prevalence and types of UREs were estimated. Results: Of the 2121 children enumerated, 1893 were examined, yielding a response rate of 89.3%. The prevalence of UREs was 34.9% (95% CI = 32.8%-37.1%), with significant differences in different age groups. The prevalence of astigmatism (25.3%) was higher compared to that of anisometropia (7.4%), hypermetropia (1.5%), and myopia (0.7%). Risk of uncorrected refractive error was positively associated with age, and this was noted in astigmatism, myopia, and anisometropia. In addition, the risk of hypermetropia was associated with boys and that of myopia was associated with girls. Conclusions: The prevalence of UREs, particularly astigmatism, was high among children aged 3-10 years in Medina, with significant age differences. Vision screening programs targeting kindergarten and primary schoolchildren are crucial to lessen the risk of preventable visual impairment due to UREs.
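    As a quick check, the reported interval can be reproduced from the counts with a normal-approximation confidence interval; the calculation below is a sketch and assumes simple binomial sampling.

```python
# Minimal sketch: prevalence of uncorrected refractive error with a normal-
# approximation 95% confidence interval, using the counts reported above.
import math

examined = 1893
cases = round(0.349 * examined)   # 34.9% prevalence among 1893 children examined
p = cases / examined
se = math.sqrt(p * (1 - p) / examined)
lo, hi = p - 1.96 * se, p + 1.96 * se
print(f"prevalence {p:.1%} (95% CI {lo:.1%} to {hi:.1%})")
```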

  12. Optimization of multimagnetometer systems on a spacecraft

    NASA Technical Reports Server (NTRS)

    Neubauer, F. M.

    1975-01-01

    The problem of optimizing the position of magnetometers along a boom of given length to yield a minimized total error is investigated. The discussion is limited to at most four magnetometers, which seems to be a practical limit due to weight, power, and financial considerations. The outlined error analysis is applied to some illustrative cases. The optimal magnetometer locations, for which the total error is minimum, are computed for given boom length, instrument errors, and very conservative magnetic field models characteristic for spacecraft with only a restricted or ineffective magnetic cleanliness program. It is shown that the error contribution by the magnetometer inaccuracy is increased as the number of magnetometers is increased, whereas the spacecraft field uncertainty is diminished by an appreciably larger amount.

  13. Spectroscopic ellipsometer based on direct measurement of polarization ellipticity.

    PubMed

    Watkins, Lionel R

    2011-06-20

    A polarizer-sample-Wollaston prism analyzer ellipsometer is described in which the ellipsometric angles ψ and Δ are determined by direct measurement of the elliptically polarized light reflected from the sample. With the Wollaston prism initially set to transmit p- and s-polarized light, the azimuthal angle P of the polarizer is adjusted until the two beams have equal intensity. This condition yields ψ=±P and ensures that the reflected elliptically polarized light has an azimuthal angle of ±45° and maximum ellipticity. Rotating the Wollaston prism through 45° and adjusting the analyzer azimuth until the two beams again have equal intensity yields the ellipticity that allows Δ to be determined via a simple linear relationship. The errors produced by nonideal components are analyzed. We show that the polarizer dominates these errors but that for most practical purposes, the error in ψ is negligible and the error in Δ may be corrected exactly. A native oxide layer on a silicon substrate was measured at a single wavelength and multiple angles of incidence and spectroscopically at a single angle of incidence. The best fit film thicknesses obtained were in excellent agreement with those determined using a traditional null ellipsometer.

  14. Modeling Infrared Signal Reflections to Characterize Indoor Multipath Propagation

    PubMed Central

    De-La-Llana-Calvo, Álvaro; Lázaro-Galilea, José Luis; Gardel-Vicente, Alfredo; Rodríguez-Navarro, David; Bravo-Muñoz, Ignacio; Tsirigotis, Georgios; Iglesias-Miguel, Juan

    2017-01-01

    In this paper, we propose a model to characterize Infrared (IR) signal reflections on any kind of surface material, together with a simplified procedure to compute the model parameters. The model works within the framework of Local Positioning Systems (LPS) based on IR signals (IR-LPS) to evaluate the behavior of transmitted signal Multipaths (MP), which are the main cause of error in IR-LPS, and makes several contributions to mitigation methods. Current methods are based on physics, optics, geometry and empirical methods, but these do not meet our requirements because of the need to apply several different restrictions and employ complex tools. We propose a simplified model based on only two reflection components, together with a method for determining the model parameters based on 12 empirical measurements that are easily performed in the real environment where the IR-LPS is being applied. Our experimental results show that the model provides a comprehensive solution to the real behavior of IR MP, yielding small errors when comparing real and modeled data (the mean error ranges from 1% to 4% depending on the environment surface materials). Other state-of-the-art methods yielded mean errors ranging from 15% to 40% in test measurements. PMID:28406436

  15. Propagation of resist heating mask error to wafer level

    NASA Astrophysics Data System (ADS)

    Babin, S. V.; Karklin, Linard

    2006-10-01

    As technology approaches 45 nm and below, the IC industry is experiencing a severe product yield hit due to rapidly shrinking process windows and unavoidable manufacturing process variations. Current EDA tools are by their nature unable to deliver optimized, process-centered designs, which creates the need for 'post-design' localized layout optimization DFM tools. To evaluate the impact of different manufacturing process variations on the final product, it is important to trace and evaluate all errors through the design-to-manufacturing flow. The photo mask is one of the critical parts of this flow, and special attention should be paid to the photo mask manufacturing process, especially to tight mask CD control. Electron beam lithography (EBL) is a major technique used for fabrication of high-end photo masks. During the writing process, resist heating is one of the sources of mask CD variation. Electron energy is released in the mask body mainly as heat, leading to significant temperature fluctuations in local areas. The temperature fluctuations cause changes in resist sensitivity, which in turn lead to CD variations. These CD variations depend on mask writing speed, order of exposure, and pattern density and its distribution. Recent measurements revealed up to 45 nm CD variation on the mask when using ZEP resist. The resist heating problem with CAR resists is significantly smaller than with other types of resists, partially due to their higher sensitivity and the lower exposure dose required. However, there are no data yet showing CD errors on the wafer induced by CAR resist heating on the mask. This effect can be amplified by high MEEF values and should be carefully evaluated at 45 nm and below technology nodes, where tight CD control is required. In this paper, we simulated CD variation on the mask due to resist heating; a mask pattern with the heating error was then transferred onto the wafer, so that the CD error on the wafer attributable to a single term of the mask error budget, the resist heating CD error, could be evaluated. In the simulation of exposure using a stepper, a variable MEEF was considered.

  16. Comparing Thermal Process Validation Methods for Salmonella Inactivation on Almond Kernels.

    PubMed

    Jeong, Sanghyup; Marks, Bradley P; James, Michael K

    2017-01-01

    Ongoing regulatory changes are increasing the need for reliable process validation methods for pathogen reduction processes involving low-moisture products; however, the reliability of various validation methods has not been evaluated. Therefore, the objective was to quantify accuracy and repeatability of four validation methods (two biologically based and two based on time-temperature models) for thermal pasteurization of almonds. Almond kernels were inoculated with Salmonella Enteritidis phage type 30 or Enterococcus faecium (NRRL B-2354) at ~10^8 CFU/g, equilibrated to 0.24, 0.45, 0.58, or 0.78 water activity (a_w), and then heated in a pilot-scale, moist-air impingement oven (dry bulb 121, 149, or 177°C; dew point <33.0, 69.4, 81.6, or 90.6°C; v_air = 2.7 m/s) to a target lethality of ~4 log. Almond surface temperatures were measured in two ways, and those temperatures were used to calculate Salmonella inactivation using a traditional (D, z) model and a modified model accounting for process humidity. Among the process validation methods, both methods based on time-temperature models had better repeatability, with replication errors approximately half those of the surrogate (E. faecium). Additionally, the modified model yielded the lowest root mean squared error in predicting Salmonella inactivation (1.1 to 1.5 log CFU/g); in contrast, E. faecium yielded a root mean squared error of 1.2 to 1.6 log CFU/g, and the traditional model yielded an unacceptably high error (3.4 to 4.4 log CFU/g). Importantly, the surrogate and modified model both yielded lethality predictions that were statistically equivalent (α = 0.05) to actual Salmonella lethality. The results demonstrate the importance of methodology, a_w, and process humidity when validating thermal pasteurization processes for low-moisture foods, which should help processors select and interpret validation methods to ensure product safety.
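    A hedged sketch of the traditional (D, z) log-linear lethality calculation the study compares against; the humidity-modified model is not reproduced, and the D_ref, z, and temperature history below are illustrative, not the study's values.

```python
# Hedged sketch of a traditional (D, z) inactivation calculation: integrate
# log10 reductions over a measured surface-temperature history. D_ref, T_ref,
# z, and the temperature trace are illustrative placeholders.
import numpy as np

def log_reduction(temps_c, times_min, d_ref=1.0, t_ref=95.0, z=15.0):
    """Total log10 reduction accumulated over a piecewise-constant temperature history."""
    temps = np.asarray(temps_c, dtype=float)
    dts = np.diff(np.asarray(times_min, dtype=float))
    # D-value at each temperature: D(T) = D_ref * 10**((T_ref - T) / z)
    d_vals = d_ref * 10.0 ** ((t_ref - temps[:-1]) / z)
    return float(np.sum(dts / d_vals))

times = [0, 2, 4, 6, 8]                 # minutes
surface_temp = [60, 75, 85, 92, 95]     # deg C, illustrative almond surface temperatures
print(f"predicted reduction: {log_reduction(surface_temp, times):.1f} log CFU/g")
```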

  17. Testing the accuracy of growth and yield models for southern hardwood forests

    Treesearch

    H. Michael Rauscher; Michael J. Young; Charles D. Webb; Daniel J. Robison

    2000-01-01

    The accuracy of ten growth and yield models for Southern Appalachian upland hardwood forests and southern bottomland forests was evaluated. In technical applications, accuracy is the composite of both bias (average error) and precision. Results indicate that GHAT, NATPIS, and a locally calibrated version of NETWIGS may be regarded as being operationally valid...
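    The bias-plus-precision view of accuracy used above can be sketched as follows; the predicted and observed stand volumes are illustrative placeholders.

```python
# Minimal sketch of splitting model error into bias (average error) and
# precision (spread around the bias), the accuracy definition used above.
import numpy as np

def accuracy_components(predicted, observed):
    errors = np.asarray(predicted, dtype=float) - np.asarray(observed, dtype=float)
    bias = errors.mean()                  # systematic over/under-prediction
    precision = errors.std(ddof=1)        # scatter of errors around the bias
    rmse = np.sqrt(np.mean(errors ** 2))  # composite accuracy measure
    return bias, precision, rmse

# Illustrative predicted vs. observed stand volumes (m^3/ha).
pred = [210, 185, 250, 300, 175]
obs = [200, 190, 240, 320, 180]
print("bias=%.1f precision=%.1f rmse=%.1f" % accuracy_components(pred, obs))
```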

  18. MCCE2: improving protein pKa calculations with extensive side chain rotamer sampling.

    PubMed

    Song, Yifan; Mao, Junjun; Gunner, M R

    2009-11-15

    Multiconformation continuum electrostatics (MCCE) explores different conformational degrees of freedom in Monte Carlo calculations of protein residue and ligand pK(a)s. Explicit changes in side chain conformations throughout a titration create a position dependent, heterogeneous dielectric response giving a more accurate picture of coupled ionization and position changes. The MCCE2 methods for choosing a group of input heavy atom and proton positions are described. The pK(a)s calculated with different isosteric conformers, heavy atom rotamers and proton positions, with different degrees of optimization are tested against a curated group of 305 experimental pK(a)s in 33 proteins. QUICK calculations, with rotation around Asn and Gln termini, sampling His tautomers and torsion minimum hydroxyls yield an RMSD of 1.34 with 84% of the errors being <1.5 pH units. FULL calculations adding heavy atom rotamers and side chain optimization yield an RMSD of 0.90 with 90% of the errors <1.5 pH unit. Good results are also found for pK(a)s in the membrane protein bacteriorhodopsin. The inclusion of extra side chain positions distorts the dielectric boundary and also biases the calculated pK(a)s by creating more neutral than ionized conformers. Methods for correcting these errors are introduced. Calculations are compared with multiple X-ray and NMR derived structures in 36 soluble proteins. Calculations with X-ray structures give significantly better pK(a)s. Results with the default protein dielectric constant of 4 are as good as those using a value of 8. The MCCE2 program can be downloaded from http://www.sci.ccny.cuny.edu/~mcce. 2009 Wiley Periodicals, Inc.

  19. Bone orientation and position estimation errors using Cosserat point elements and least squares methods: Application to gait.

    PubMed

    Solav, Dana; Camomilla, Valentina; Cereatti, Andrea; Barré, Arnaud; Aminian, Kamiar; Wolf, Alon

    2017-09-06

    The aim of this study was to analyze the accuracy of bone pose estimation based on sub-clusters of three skin-markers characterized by triangular Cosserat point elements (TCPEs) and to evaluate the capability of four instantaneous physical parameters, which can be measured non-invasively in vivo, to identify the most accurate TCPEs. Moreover, TCPE pose estimations were compared with the estimations of two least squares minimization methods applied to the cluster of all markers, using rigid body (RBLS) and homogeneous deformation (HDLS) assumptions. Analysis was performed on previously collected in vivo treadmill gait data composed of simultaneous measurements of the gold-standard bone pose by bi-plane fluoroscopy tracking the subjects' knee prosthesis and a stereophotogrammetric system tracking skin-markers affected by soft tissue artifact. Femur orientation and position errors estimated from skin-marker clusters were computed for 18 subjects using clusters of up to 35 markers. Results based on gold-standard data revealed that instantaneous subsets of TCPEs exist which estimate the femur pose with reasonable accuracy (median root mean square error during stance/swing: 1.4/2.8 deg for orientation, 1.5/4.2 mm for position). A non-invasive and instantaneous criterion to select accurate TCPEs for pose estimation (4.8/7.3 deg, 5.8/12.3 mm) was compared with RBLS (4.3/6.6 deg, 6.9/16.6 mm) and HDLS (4.6/7.6 deg, 6.7/12.5 mm). Accounting for homogeneous deformation, using HDLS or selected TCPEs, yielded more accurate position estimations than the RBLS method, which, conversely, yielded more accurate orientation estimations. Further investigation is required to devise effective criteria for cluster selection that could represent a significant improvement in bone pose estimation accuracy. Copyright © 2017 Elsevier Ltd. All rights reserved.
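    A generic sketch of a rigid-body least-squares (RBLS-style) fit of a marker cluster via the Kabsch/SVD method, included only to illustrate the kind of pose estimation being compared; it is not the study's processing pipeline, and the marker coordinates are synthetic.

```python
# Hedged sketch: rigid-body least-squares fit of a marker cluster to its
# reference configuration using the Kabsch/SVD method. All data are synthetic.
import numpy as np

def rigid_body_fit(reference, measured):
    """Return rotation R and translation t minimizing sum ||R @ ref_i + t - meas_i||^2."""
    ref_c = reference - reference.mean(axis=0)
    meas_c = measured - measured.mean(axis=0)
    u, _, vt = np.linalg.svd(ref_c.T @ meas_c)
    d = np.sign(np.linalg.det(vt.T @ u.T))          # guard against reflections
    r = vt.T @ np.diag([1.0, 1.0, d]) @ u.T
    t = measured.mean(axis=0) - r @ reference.mean(axis=0)
    return r, t

rng = np.random.default_rng(1)
ref = rng.normal(size=(6, 3)) * 50.0                # reference marker cluster (mm)
true_r, _ = np.linalg.qr(rng.normal(size=(3, 3)))   # random orthogonal matrix
if np.linalg.det(true_r) < 0:
    true_r[:, 0] *= -1                              # make it a proper rotation
meas = ref @ true_r.T + np.array([5.0, -2.0, 8.0]) + rng.normal(scale=1.0, size=ref.shape)
r_est, t_est = rigid_body_fit(ref, meas)
cos_angle = np.clip((np.trace(r_est @ true_r.T) - 1) / 2, -1, 1)
print("orientation error (deg):", np.degrees(np.arccos(cos_angle)))
```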

  20. Handheld Automated Microsurgical Instrumentation for Intraocular Laser Surgery

    PubMed Central

    Yang, Sungwook; Lobes, Louis A.; Martel, Joseph N.; Riviere, Cameron N.

    2016-01-01

    Background and Objective: Laser photocoagulation is a mainstay or adjuvant treatment for a variety of common retinal diseases. Automated laser photocoagulation during intraocular surgery has not yet been established. The authors introduce an automated laser photocoagulation system for intraocular surgery, based on a novel handheld instrument. The goals of the system are to enhance accuracy and efficiency and improve safety. Materials and Methods: Triple-ring patterns are introduced as a typical arrangement for the treatment of proliferative retinopathy and registered to a preoperative fundus image. In total, 32 target locations are specified along the circumferences of three rings having diameters of 1, 2, and 3 mm, with a burn spacing of 600 μm. Given the initial system calibration, the retinal surface is reconstructed using stereo vision, and the targets specified on the preoperative image are registered with the control system. During automated operation, the laser probe attached to the manipulator of the active handheld instrument is deflected as needed via visual servoing in order to correct the error between the aiming beam and a specified target, regardless of any erroneous handle motion by the surgeon. A constant distance of the laser probe from the retinal surface is maintained in order to yield consistent size of burns and ensure safety during operation. Real-time tracking of anatomical features enables compensation for any movement of the eye. A graphical overlay system within the operating microscope provides the surgeon with guidance cues for automated operation. Two retinal surgeons performed automated and manual trials in an artificial model of the eye, with each trial repeated three times. For the automated trials, various targeting thresholds (50–200 μm) were used to automatically trigger laser firing. In manual operation, fixed repetition rates were used, with frequencies of 1.0–2.5 Hz. The power of the 532 nm laser was set at 3.0 W with a duration of 20 ms. After completion of each trial, the speed of operation and placement error of burns were measured. The performance of the automated laser photocoagulation was compared with manual operation, using interpolated data for equivalent firing rates from 1.0 to 1.75 Hz. Results: In automated trials, average error increased from 45 ± 27 to 60 ± 37 μm as the targeting threshold varied from 50 to 200 μm, while average firing rate significantly increased from 0.69 to 1.71 Hz. The average error in the manual trials increased from 102 ± 67 to 174 ± 98 μm as firing rate increased from 1.0 to 2.5 Hz. Compared to the manual trials, the average error in the automated trials was reduced by 53.0–56.4%, resulting in statistically significant differences (P ≤ 10^-20) for all equivalent frequencies (1.0–1.75 Hz). The depth of the laser tip in the automated trials was consistently maintained within 18 ± 2 μm root-mean-square (RMS) of its initial position, whereas it significantly varied in the manual trials, yielding an error of 296 ± 30 μm RMS. At high firing rates in manual trials, such as at 2.5 Hz, laser photocoagulation is marginally attained, yielding failed burns of 30% over the entire pattern, whereas no failed burns are found in automated trials. Relatively regular burn sizes are attained in the automated trials by the depth servoing of the laser tip, while burn sizes in the manual trials vary considerably. Automated avoidance of blood vessels was also successfully demonstrated, utilizing the retina-tracking feature to identify avoidance zones. Conclusion: Automated intraocular laser surgery can improve the accuracy of photocoagulation while ensuring safety during operation. This paper provides an initial demonstration of the technique under reasonably realistic laboratory conditions; development of a clinically applicable system requires further work. PMID:26287813

  1. Measurement of CP Asymmetries and Branching Fractions in B0 -> pi+ pi-, B0 -> K+ pi-, B0 -> pi0 pi0, B0 -> K0 pi0 and Isospin Analysis of B -> pi pi Decays

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aubert, Bernard; Bona, M.; Karyotakis, Y.

    2008-08-01

    The authors present preliminary results of improved measurements of the CP-violating asymmetries and branching fractions in the decays B0 → π+π−, B0 → K+π−, B0 → π0π0, and B0 → K0π0. This update includes all data taken at the Υ(4S) resonance by the BABAR experiment at the asymmetric PEP-II B-meson factory at SLAC, corresponding to 467 ± 5 million BB̄ pairs. They find S_ππ = −0.68 ± 0.10 ± 0.03, C_ππ = −0.25 ± 0.08 ± 0.02, A_Kπ = −0.107 ± 0.016 +0.006/−0.004, C_π0π0 = −0.43 ± 0.26 ± 0.05, B(B0 → π0π0) = (1.83 ± 0.21 ± 0.13) × 10^-6, and B(B0 → K0π0) = (10.1 ± 0.6 ± 0.4) × 10^-6, where the first error is statistical and the second is systematic. They observe CP violation with a significance of 6.7σ in B0 → π+π− and 6.1σ in B0 → K+π−. Constraints on the Unitarity Triangle angle α are determined from the isospin relation between all B → ππ rates and asymmetries.

  2. Holistic, model-based optimization of edge leveling as an enabler for lithographic focus control: application to a memory use case

    NASA Astrophysics Data System (ADS)

    Hasan, T.; Kang, Y.-S.; Kim, Y.-J.; Park, S.-J.; Jang, S.-Y.; Hu, K.-Y.; Koop, E. J.; Hinnen, P. C.; Voncken, M. M. A. J.

    2016-03-01

    Advancement of the next generation technology nodes and emerging memory devices demand tighter lithographic focus control. Although the leveling performance of the latest-generation scanners is state of the art, challenges remain at the wafer edge due to large process variations. There are several customer configurable leveling control options available in ASML scanners, some of which are application specific in their scope of leveling improvement. In this paper, we assess the usability of leveling non-correctable error models to identify yield limiting edge dies. We introduce a novel dies-inspec based holistic methodology for leveling optimization to guide tool users in selecting an optimal configuration of leveling options. Significant focus gain, and consequently yield gain, can be achieved with this integrated approach. The Samsung site in Hwaseong observed an improved edge focus performance in a production of a mid-end memory product layer running on an ASML NXT 1960 system. 50% improvement in focus and a 1.5%p gain in edge yield were measured with the optimized configurations.

  3. The Influence of Observation Errors on Analysis Error and Forecast Skill Investigated with an Observing System Simulation Experiment

    NASA Technical Reports Server (NTRS)

    Prive, N. C.; Errico, R. M.; Tai, K.-S.

    2013-01-01

    The Global Modeling and Assimilation Office (GMAO) observing system simulation experiment (OSSE) framework is used to explore the response of analysis error and forecast skill to observation quality. In an OSSE, synthetic observations may be created that have much smaller error than real observations, and precisely quantified error may be applied to these synthetic observations. Three experiments are performed in which synthetic observations with magnitudes of applied observation error that vary from zero to twice the estimated realistic error are ingested into the Goddard Earth Observing System Model (GEOS-5) with Gridpoint Statistical Interpolation (GSI) data assimilation for a one-month period representing July. The analysis increment and observation innovation are strongly impacted by observation error, with much larger variances for increased observation error. The analysis quality is degraded by increased observation error, but the change in root-mean-square error of the analysis state is small relative to the total analysis error. Surprisingly, in the 120 hour forecast increased observation error only yields a slight decline in forecast skill in the extratropics, and no discernable degradation of forecast skill in the tropics.

  4. WE-H-BRC-05: Catastrophic Error Metrics for Radiation Therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Murphy, S; Molloy, J

    Purpose: Intuitive evaluation of complex radiotherapy treatments is impractical, while data transfer anomalies create the potential for catastrophic treatment delivery errors. Contrary to prevailing wisdom, logical scrutiny can be applied to patient-specific machine settings. Such tests can be automated, applied at the point of treatment delivery and can be dissociated from prior states of the treatment plan, potentially revealing errors introduced early in the process. Methods: Analytical metrics were formulated for conventional and intensity modulated RT (IMRT) treatments. These were designed to assess consistency between monitor unit settings, wedge values, prescription dose and leaf positioning (IMRT). Institutional metric averages for 218 clinical plans were stratified over multiple anatomical sites. Treatment delivery errors were simulated using a commercial treatment planning system and metric behavior assessed via receiver-operator-characteristic (ROC) analysis. A positive result was returned if the erred plan metric value exceeded a given number of standard deviations, e.g. 2. The finding was declared true positive if the dosimetric impact exceeded 25%. ROC curves were generated over a range of metric standard deviations. Results: Data for the conventional treatment metric indicated standard deviations of 3%, 12%, 11%, 8%, and 5% for brain, pelvis, abdomen, lung and breast sites, respectively. Optimum error declaration thresholds yielded true positive rates (TPR) between 0.7 and 1, and false positive rates (FPR) between 0 and 0.2. Two proposed IMRT metrics possessed standard deviations of 23% and 37%. The superior metric returned TPR and FPR of 0.7 and 0.2, respectively, when both leaf position and MUs were modelled. Isolation to only leaf position errors yielded TPR and FPR values of 0.9 and 0.1. Conclusion: Logical tests can reveal treatment delivery errors and prevent large, catastrophic errors. Analytical metrics are able to identify errors in monitor units, wedging and leaf positions with favorable sensitivity and specificity. In part by Varian.
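    A minimal sketch of the ROC-style evaluation described above: flag a plan when its metric deviates from the institutional mean by more than k standard deviations, then score the flag against whether the simulated error had a large dosimetric impact. All data below are synthetic, and the >25% impact criterion is represented only by a random label.

```python
# Hedged sketch of a threshold sweep for TPR/FPR on synthetic plan-metric data.
import numpy as np

rng = np.random.default_rng(7)
metric_dev = np.abs(rng.normal(0.0, 1.0, size=200))   # metric deviation in SD units
large_impact = rng.random(200) < 0.1                  # "true" catastrophic errors (synthetic)
metric_dev[large_impact] += rng.normal(4.0, 1.0, size=large_impact.sum())

for k in (1.0, 2.0, 3.0):
    flagged = metric_dev > k
    tpr = (flagged & large_impact).sum() / large_impact.sum()
    fpr = (flagged & ~large_impact).sum() / (~large_impact).sum()
    print(f"threshold {k:.0f} SD: TPR={tpr:.2f}, FPR={fpr:.2f}")
```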

  5. CO2 laser ranging systems study

    NASA Technical Reports Server (NTRS)

    Filippi, C. A.

    1975-01-01

    The conceptual design and error performance of a CO2 laser ranging system are analyzed. Ranging signal and subsystem processing alternatives are identified, and their comprehensive evaluation yields preferred candidate solutions which are analyzed to derive range and range rate error contributions. The performance results are presented in the form of extensive tables and figures which identify the ranging accuracy compromises as a function of the key system design parameters and subsystem performance indexes. The ranging errors obtained are noted to be within the high accuracy requirements of existing NASA/GSFC missions with a proper system design.

  6. Image Data Compression Having Minimum Perceptual Error

    NASA Technical Reports Server (NTRS)

    Watson, Andrew B. (Inventor)

    1997-01-01

    A method is presented for performing color or grayscale image compression that eliminates redundant and invisible image components. The image compression uses a Discrete Cosine Transform (DCT), and each DCT coefficient yielded by the transform is quantized by an entry in a quantization matrix which determines the perceived image quality and the bit rate of the image being compressed. The quantization matrix incorporates visual masking by luminance and contrast techniques, all resulting in a minimum perceptual error for any given bit rate, or a minimum bit rate for a given perceptual error.

  7. Software errors and complexity: An empirical investigation

    NASA Technical Reports Server (NTRS)

    Basili, Victor R.; Perricone, Berry T.

    1983-01-01

    The distributions and relationships derived from the change data collected during the development of a medium scale satellite software project show that meaningful results can be obtained which allow an insight into software traits and the environment in which it is developed. Modified and new modules were shown to behave similarly. An abstract classification scheme for errors which allows a better understanding of the overall traits of a software project is also shown. Finally, various size and complexity metrics are examined with respect to errors detected within the software yielding some interesting results.

  8. Software errors and complexity: An empirical investigation

    NASA Technical Reports Server (NTRS)

    Basili, V. R.; Perricone, B. T.

    1982-01-01

    The distributions and relationships derived from the change data collected during the development of a medium scale satellite software project show that meaningful results can be obtained which allow an insight into software traits and the environment in which it is developed. Modified and new modules were shown to behave similarly. An abstract classification scheme for errors which allows a better understanding of the overall traits of a software project is also shown. Finally, various size and complexity metrics are examined with respect to errors detected within the software yielding some interesting results.

  9. Coupling constant for N*(1535)Nρ

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xie Jujun; Graduate University of Chinese Academy of Sciences, Beijing 100049; Wilkin, Colin

    2008-05-15

    The value of the N*(1535)Nρ coupling constant g_{N*Nρ} derived from the N*(1535) → Nρ → Nππ decay is compared with that deduced from the radiative decay N*(1535) → Nγ using the vector-meson-dominance model. On the basis of an effective Lagrangian approach, we show that the values of g_{N*Nρ} extracted from the available experimental data on the two decays are consistent, though the error bars are rather large.

  10. Should drivers be operating within an automation-free bandwidth? Evaluating haptic steering support systems with different levels of authority.

    PubMed

    Petermeijer, Sebastiaan M; Abbink, David A; de Winter, Joost C F

    2015-02-01

    The aim of this study was to compare continuous versus bandwidth haptic steering guidance in terms of lane-keeping behavior, aftereffects, and satisfaction. An important human factors question is whether operators should be supported continuously or only when tolerance limits are exceeded. We aimed to clarify this issue for haptic steering guidance by investigating costs and benefits of both approaches in a driving simulator. Thirty-two participants drove five trials, each with a different level of haptic support: no guidance (Manual); guidance outside a 0.5-m bandwidth (Band1); a hysteresis version of Band1, which guided back to the lane center once triggered (Band2); continuous guidance (Cont); and Cont with double feedback gain (ContS). Participants performed a reaction time task while driving. Toward the end of each trial, the guidance was unexpectedly disabled to investigate aftereffects. All four guidance systems prevented large lateral errors (>0.7 m). Cont and especially ContS yielded smaller lateral errors and higher time to line crossing than Manual, Band1, and Band2. Cont and ContS yielded short-lasting aftereffects, whereas Band1 and Band2 did not. Cont yielded higher self-reported satisfaction and faster reaction times than Band1. Continuous and bandwidth guidance both prevent large driver errors. Continuous guidance yields improved performance and satisfaction over bandwidth guidance at the cost of aftereffects and variability in driver torque (indicating human-automation conflicts). The presented results are useful for designers of haptic guidance systems and support critical thinking about the costs and benefits of automation support systems.

  11. Are Shunt Revisions Associated with IQ in Congenital Hydrocephalus? A Meta-Analysis.

    PubMed

    Arrington, C Nikki; Ware, Ashley L; Ahmed, Yusra; Kulesz, Paulina A; Dennis, Maureen; Fletcher, Jack M

    2016-12-01

    Although it is generally acknowledged that shunt revisions are associated with reductions in cognitive functions in individuals with congenital hydrocephalus, the literature yields mixed results and is inconclusive. The current study used meta-analytic methods to empirically synthesize studies addressing the association of shunt revisions and IQ in individuals with congenital hydrocephalus. Six studies and three in-house datasets yielded 11 independent samples for meta-analysis. Groups representing lower and higher numbers of shunt revisions were coded to generate effect sizes for differences in IQ scores. The mean effect size across studies was statistically significant, but small (Hedges' g = 0.25, p < 0.001, 95% CI [0.08, 0.43]), with more shunt revisions associated with lower IQ scores. Results show an association between more shunt revisions and lower IQ of about 3 IQ points, a small effect, but within the error of measurement associated with IQ tests. Although the clinical significance of this effect is not clear, the results suggest that repeated shunt revisions because of shunt failure are associated with a reduction in cognitive functions.
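    A minimal sketch of the effect-size pooling behind such an analysis: Hedges' g per study and an inverse-variance weighted mean. The study means, SDs, and sample sizes below are illustrative, not the meta-analysis inputs.

```python
# Hedged sketch: bias-corrected standardized mean difference (Hedges' g) per
# study and a fixed-effect (inverse-variance weighted) pooled estimate.
import numpy as np

def hedges_g(mean1, mean2, sd1, sd2, n1, n2):
    """Hedges' g and an approximate sampling variance."""
    sp = np.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    d = (mean1 - mean2) / sp
    j = 1.0 - 3.0 / (4.0 * (n1 + n2) - 9.0)        # small-sample correction
    g = j * d
    var = (n1 + n2) / (n1 * n2) + g**2 / (2.0 * (n1 + n2))
    return g, var

# (IQ mean, SD, n) for fewer- vs. more-revision groups; illustrative numbers only.
studies = [((92, 15, 40), (88, 16, 35)),
           ((85, 14, 25), (83, 15, 30)),
           ((97, 13, 60), (93, 14, 55))]
gs, ws = [], []
for (m1, s1, n1), (m2, s2, n2) in studies:
    g, var = hedges_g(m1, m2, s1, s2, n1, n2)
    gs.append(g)
    ws.append(1.0 / var)
print(f"fixed-effect pooled Hedges' g = {np.average(gs, weights=ws):.2f}")
```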

  12. Adjoint-Based, Three-Dimensional Error Prediction and Grid Adaptation

    NASA Technical Reports Server (NTRS)

    Park, Michael A.

    2002-01-01

    Engineering computational fluid dynamics (CFD) analysis and design applications focus on output functions (e.g., lift, drag). Errors in these output functions are generally unknown and conservatively accurate solutions may be computed. Computable error estimates can offer the possibility to minimize computational work for a prescribed error tolerance. Such an estimate can be computed by solving the flow equations and the linear adjoint problem for the functional of interest. The computational mesh can be modified to minimize the uncertainty of a computed error estimate. This robust mesh-adaptation procedure automatically terminates when the simulation is within a user specified error tolerance. This procedure for estimating and adapting to error in a functional is demonstrated for three-dimensional Euler problems. An adaptive mesh procedure that links to a Computer Aided Design (CAD) surface representation is demonstrated for wing, wing-body, and extruded high lift airfoil configurations. The error estimation and adaptation procedure yielded corrected functions that are as accurate as functions calculated on uniformly refined grids with ten times as many grid points.

  13. Wind power error estimation in resource assessments.

    PubMed

    Rodríguez, Osvaldo; Del Río, Jesús A; Jaramillo, Oscar A; Martínez, Manuel

    2015-01-01

    Estimating the power output is one of the elements that determine the techno-economic feasibility of a renewable project. At present, there is a need to develop reliable methods that achieve this goal, thereby contributing to wind power penetration. In this study, we propose a method for wind power error estimation based on the wind speed measurement error, probability density function, and wind turbine power curves. This method uses the actual wind speed data without prior statistical treatment and 28 wind turbine power curves, which were fitted by Lagrange's method, to calculate the estimated wind power output and the corresponding error propagation. We found that wind speed percentage errors of 10% were propagated into the power output estimates, thereby yielding an error of 5%. The proposed error propagation complements the traditional power resource assessments. The wind power estimation error also allows us to estimate intervals for the levelized cost of power production or the time of investment return. The implementation of this method increases the reliability of techno-economic resource assessment studies.

  14. Wind Power Error Estimation in Resource Assessments

    PubMed Central

    Rodríguez, Osvaldo; del Río, Jesús A.; Jaramillo, Oscar A.; Martínez, Manuel

    2015-01-01

    Estimating the power output is one of the elements that determine the techno-economic feasibility of a renewable project. At present, there is a need to develop reliable methods that achieve this goal, thereby contributing to wind power penetration. In this study, we propose a method for wind power error estimation based on the wind speed measurement error, probability density function, and wind turbine power curves. This method uses the actual wind speed data without prior statistical treatment and 28 wind turbine power curves, which were fitted by Lagrange's method, to calculate the estimated wind power output and the corresponding error propagation. We found that wind speed percentage errors of 10% were propagated into the power output estimates, thereby yielding an error of 5%. The proposed error propagation complements the traditional power resource assessments. The wind power estimation error also allows us to estimate intervals for the levelized cost of power production or the time of investment return. The implementation of this method increases the reliability of techno-economic resource assessment studies. PMID:26000444
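    A hedged sketch of propagating a wind-speed measurement error through a turbine power curve; an idealized cubic-region curve stands in for the 28 Lagrange-fitted curves used in the study, and the wind record, rated power, and 10% error are illustrative.

```python
# Hedged sketch: propagate a wind-speed measurement error through a simple
# power curve to bound the resulting error in mean power output.
import numpy as np

def power_curve(v, rated_power=2000.0, cut_in=3.0, rated_speed=12.0, cut_out=25.0):
    """Idealized power curve (kW) as a function of wind speed (m/s)."""
    v = np.asarray(v, dtype=float)
    p = np.where((v >= cut_in) & (v < rated_speed),
                 rated_power * ((v - cut_in) / (rated_speed - cut_in)) ** 3, 0.0)
    p = np.where((v >= rated_speed) & (v <= cut_out), rated_power, p)
    return p

rng = np.random.default_rng(3)
wind = rng.weibull(2.0, size=10_000) * 8.0          # synthetic wind-speed record
speed_error = 0.10                                  # assumed 10% wind-speed measurement error
p_nominal = power_curve(wind).mean()
p_high = power_curve(wind * (1 + speed_error)).mean()
p_low = power_curve(wind * (1 - speed_error)).mean()
print(f"mean power {p_nominal:.0f} kW, propagated range {p_low:.0f}-{p_high:.0f} kW")
```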

  15. Analysis of the Capability and Limitations of Relativistic Gravity Measurements Using Radio Astronomy Methods

    NASA Technical Reports Server (NTRS)

    Shapiro, I. I.; Counselman, C. C., III

    1975-01-01

    The uses of radar observations of planets and very-long-baseline radio interferometric observations of extragalactic objects to test theories of gravitation are described in detail, with special emphasis on sources of error. The accuracy achievable in these tests with data already obtained can be summarized in terms of: retardation of signal propagation (radar), deflection of radio waves (interferometry), advance of planetary perihelia (radar), gravitational quadrupole moment of the sun (radar), and time variation of the gravitational constant (radar). The analyses completed to date have yielded no significant disagreement with the predictions of general relativity.

  16. Numerical prediction of a draft tube flow taking into account uncertain inlet conditions

    NASA Astrophysics Data System (ADS)

    Brugiere, O.; Balarac, G.; Corre, C.; Metais, O.; Flores, E.; Pleroy

    2012-11-01

    The swirling turbulent flow in a hydroturbine draft tube is computed with a non-intrusive uncertainty quantification (UQ) method coupled to Reynolds-Averaged Navier-Stokes (RANS) modelling in order to take into account in the numerical prediction the physical uncertainties existing on the inlet flow conditions. The proposed approach yields not only mean velocity fields to be compared with measured profiles, as is customary in Computational Fluid Dynamics (CFD) practice, but also variance of these quantities from which error bars can be deduced on the computed profiles, thus making more significant the comparison between experiment and computation.

  17. Evaluating the Sensitivity of Agricultural Model Performance to Different Climate Inputs: Supplemental Material

    NASA Technical Reports Server (NTRS)

    Glotter, Michael J.; Ruane, Alex C.; Moyer, Elisabeth J.; Elliott, Joshua W.

    2015-01-01

    Projections of future food production necessarily rely on models, which must themselves be validated through historical assessments comparing modeled and observed yields. Reliable historical validation requires both accurate agricultural models and accurate climate inputs. Problems with either may compromise the validation exercise. Previous studies have compared the effects of different climate inputs on agricultural projections, but either incompletely or without a ground truth of observed yields that would allow distinguishing errors due to climate inputs from those intrinsic to the crop model. This study is a systematic evaluation of the reliability of a widely used crop model for simulating U.S. maize yields when driven by multiple observational data products. The parallelized Decision Support System for Agrotechnology Transfer (pDSSAT) is driven with climate inputs from multiple sources (reanalysis, reanalysis that is bias corrected with observed climate, and a control dataset) and compared with observed historical yields. The simulations show that model output is more accurate when driven by any observation-based precipitation product than when driven by non-bias-corrected reanalysis. The simulations also suggest, in contrast to previous studies, that biased precipitation distribution is significant for yields only in arid regions. Some issues persist for all choices of climate inputs: crop yields appear to be oversensitive to precipitation fluctuations but undersensitive to floods and heat waves. These results suggest that the most important issue for agricultural projections may be not climate inputs but structural limitations in the crop models themselves.

  18. Evaluating the sensitivity of agricultural model performance to different climate inputs

    PubMed Central

    Glotter, Michael J.; Moyer, Elisabeth J.; Ruane, Alex C.; Elliott, Joshua W.

    2017-01-01

    Projections of future food production necessarily rely on models, which must themselves be validated through historical assessments comparing modeled to observed yields. Reliable historical validation requires both accurate agricultural models and accurate climate inputs. Problems with either may compromise the validation exercise. Previous studies have compared the effects of different climate inputs on agricultural projections, but either incompletely or without a ground truth of observed yields that would allow distinguishing errors due to climate inputs from those intrinsic to the crop model. This study is a systematic evaluation of the reliability of a widely-used crop model for simulating U.S. maize yields when driven by multiple observational data products. The parallelized Decision Support System for Agrotechnology Transfer (pDSSAT) is driven with climate inputs from multiple sources – reanalysis, reanalysis bias-corrected with observed climate, and a control dataset – and compared to observed historical yields. The simulations show that model output is more accurate when driven by any observation-based precipitation product than when driven by un-bias-corrected reanalysis. The simulations also suggest, in contrast to previous studies, that biased precipitation distribution is significant for yields only in arid regions. However, some issues persist for all choices of climate inputs: crop yields appear oversensitive to precipitation fluctuations but undersensitive to floods and heat waves. These results suggest that the most important issue for agricultural projections may be not climate inputs but structural limitations in the crop models themselves. PMID:29097985

  19. Photosynthetic and Canopy Characteristics of Different Varieties at the Early Elongation Stage and Their Relationships with the Cane Yield in Sugarcane

    PubMed Central

    Luo, Jun; Pan, Yong-Bao; Xu, Liping; Zhang, Yuye; Zhang, Hua; Chen, Rukai

    2014-01-01

    During sugarcane growth, the Early Elongation stage is critical to cane yield formation. In this study, parameters of 17 sugarcane varieties were determined at the Early Elongation stage using a CI-301 photosynthesis measuring system and a CI-100 digital plant canopy imager. The data analysis showed highly significant differences in leaf area index (LAI), mean foliage inclination angle (MFIA), transmission coefficient for diffused light penetration (TD), transmission coefficient for solar beam radiation penetration (TR), leaf distribution (LD), net photosynthetic rate (PN), transpiration rate (E), and stomatal conductance (GS) among sugarcane varieties. Based on the photosynthetic or canopy parameters, the 17 sugarcane varieties were classified into four categories. Through the factor analysis, nine parameters were represented by three principal factors, of which the cumulative rate of variance contributions reached 85.77%. A regression for sugarcane yield, with a relative error of yield fitting less than 0.05, was successfully established: sugarcane yield = −27.19 − 1.69 × PN + 0.17 × E + 90.43 × LAI − 408.81 × LD + 0.0015 × NSH + 101.38 × D (R^2 = 0.928**). This study helps provide a theoretical basis and technical guidance for the screening of new sugarcane varieties with high net photosynthetic rate and ideal canopy structure. PMID:25045742
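    A minimal sketch of fitting such a multiple linear regression by ordinary least squares; the predictor matrix and yields are synthetic placeholders, not the study's measurements, and only four of the regressors are mimicked.

```python
# Hedged sketch: ordinary least squares fit of yield on a few photosynthetic
# and canopy parameters, with R^2 computed from the fitted values.
import numpy as np

rng = np.random.default_rng(5)
n = 17                                              # 17 varieties
X = np.column_stack([
    rng.normal(20, 3, n),                           # PN, net photosynthetic rate
    rng.normal(4, 1, n),                            # E, transpiration rate
    rng.normal(3.5, 0.5, n),                        # LAI
    rng.normal(0.12, 0.02, n),                      # LD, leaf distribution
])
true_beta = np.array([-1.7, 0.2, 90.0, -400.0])     # synthetic "true" coefficients
yield_t_ha = 30.0 + X @ true_beta + rng.normal(0, 3, n)

A = np.column_stack([np.ones(n), X])                # add intercept
coef, *_ = np.linalg.lstsq(A, yield_t_ha, rcond=None)
pred = A @ coef
r2 = 1 - np.sum((yield_t_ha - pred) ** 2) / np.sum((yield_t_ha - yield_t_ha.mean()) ** 2)
print("intercept and coefficients:", np.round(coef, 2), "R^2 = %.3f" % r2)
```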

  20. Homing by path integration when a locomotion trajectory crosses itself.

    PubMed

    Yamamoto, Naohide; Meléndez, Jayleen A; Menzies, Derek T

    2014-01-01

    Path integration is a process with which navigators derive their current position and orientation by integrating self-motion signals along a locomotion trajectory. It has been suggested that path integration becomes disproportionately erroneous when the trajectory crosses itself. However, there is a possibility that this previous finding was confounded by effects of the length of a traveled path and the amount of turns experienced along the path, two factors that are known to affect path integration performance. The present study was designed to investigate whether the crossover of a locomotion trajectory truly increases errors of path integration. In an experiment, blindfolded human navigators were guided along four paths that varied in their lengths and turns, and attempted to walk directly back to the beginning of the paths. Only one of the four paths contained a crossover. Results showed that errors yielded from the path containing the crossover were not always larger than those observed in other paths, and the errors were attributed solely to the effects of longer path lengths or greater degrees of turns. These results demonstrated that path crossover does not always cause significant disruption in path integration processes. Implications of the present findings for models of path integration are discussed.

  1. Ideal, nonideal, and no-marker variables: The confirmatory factor analysis (CFA) marker technique works when it matters.

    PubMed

    Williams, Larry J; O'Boyle, Ernest H

    2015-09-01

    A persistent concern in the management and applied psychology literature is the effect of common method variance on observed relations among variables. Recent work (i.e., Richardson, Simmering, & Sturman, 2009) evaluated 3 analytical approaches to controlling for common method variance, including the confirmatory factor analysis (CFA) marker technique. Their findings indicated significant problems with this technique, especially with nonideal marker variables (those with theoretical relations with substantive variables). Based on their simulation results, Richardson et al. concluded that not correcting for method variance provides more accurate estimates than using the CFA marker technique. We reexamined the effects of using marker variables in a simulation study and found the degree of error in estimates of a substantive factor correlation was relatively small in most cases, and much smaller than error associated with making no correction. Further, in instances in which the error was large, the correlations between the marker and substantive scales were higher than that found in organizational research with marker variables. We conclude that in most practical settings, the CFA marker technique yields parameter estimates close to their true values, and the criticisms made by Richardson et al. are overstated. (c) 2015 APA, all rights reserved.

  2. Trends in the suspended-sediment yields of coastal rivers of northern California, 1955–2010

    USGS Publications Warehouse

    Warrick, J.A.; Madej, Mary Ann; Goñi, M. A.; Wheatcroft, R.A.

    2013-01-01

    Time-dependencies of suspended-sediment discharge from six coastal watersheds of northern California – Smith River, Klamath River, Trinity River, Redwood Creek, Mad River, and Eel River – were evaluated using monitoring data from 1955 to 2010. Suspended-sediment concentrations revealed time-dependent hysteresis and multi-year trends. The multi-year trends had two primary patterns relative to river discharge: (i) increases in concentration resulting from both land clearing from logging and the flood of record during December 1964 (water year 1965), and (ii) continual decreases in concentration during the decades following this flood. Data from the Eel River revealed that changes in suspended-sediment concentrations occurred for all grain-size fractions, but were most pronounced for the sand fraction. Because of these changes, the use of bulk discharge-concentration relationships (i.e., “sediment rating curves”) without time-dependencies in these relationships resulted in substantial errors in sediment load estimates, including 2.5-fold over-prediction of Eel River sediment loads since 1979. We conclude that sediment discharge and sediment discharge relationships (such as sediment rating curves) from these coastal rivers have varied substantially with time in response to land use and climate. Thus, the use of historical river sediment data and sediment rating curves without considerations for time-dependent trends may result in significant errors in sediment yield estimates from the globally-important steep, small watersheds.
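    A hedged sketch of fitting a power-law sediment rating curve, C = a * Q^b, in log space for two separate periods, which is the kind of time-dependent relationship the trends above call for; all discharge and concentration data are synthetic.

```python
# Hedged sketch: log-space least-squares fit of a sediment rating curve for
# two time periods with different coefficients. Data are synthetic.
import numpy as np

def fit_rating_curve(discharge, concentration):
    """Least-squares fit of log10(C) = log10(a) + b * log10(Q); returns (a, b)."""
    b, log_a = np.polyfit(np.log10(discharge), np.log10(concentration), 1)
    return 10.0 ** log_a, b

rng = np.random.default_rng(11)
q = 10 ** rng.uniform(1, 4, 300)                    # discharge, m^3/s
# Two periods with different rating-curve coefficients plus lognormal scatter.
c_early = 0.05 * q ** 1.3 * 10 ** rng.normal(0, 0.1, 300)
c_late = 0.02 * q ** 1.3 * 10 ** rng.normal(0, 0.1, 300)
for label, conc in (("early period", c_early), ("late period", c_late)):
    a, b = fit_rating_curve(q, conc)
    print(f"{label}: C = {a:.3f} * Q^{b:.2f}")
```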

  3. THE MASS-RICHNESS RELATION OF MaxBCG CLUSTERS FROM QUASAR LENSING MAGNIFICATION USING VARIABILITY

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bauer, Anne H.; Baltay, Charles; Ellman, Nancy

    2012-04-10

    Accurate measurement of galaxy cluster masses is an essential component not only in studies of cluster physics but also for probes of cosmology. However, different mass measurement techniques frequently yield discrepant results. The Sloan Digital Sky Survey MaxBCG catalog's mass-richness relation has previously been constrained using weak lensing shear, Sunyaev-Zeldovich (SZ), and X-ray measurements. The mass normalization of the clusters as measured by weak lensing shear is ≳25% higher than that measured using SZ and X-ray methods, a difference much larger than the stated measurement errors in the analyses. We constrain the mass-richness relation of the MaxBCG galaxy cluster catalog by measuring the gravitational lensing magnification of type I quasars in the background of the clusters. The magnification is determined using the quasars' variability and the correlation between quasars' variability amplitude and intrinsic luminosity. The mass-richness relation determined through magnification is in agreement with that measured using shear, confirming that the lensing strength of the clusters implies a high mass normalization and that the discrepancy with other methods is not due to a shear-related systematic measurement error. We study the dependence of the measured mass normalization on the cluster halo orientation. As expected, line-of-sight clusters yield a higher normalization; however, this minority of haloes does not significantly bias the average mass-richness relation of the catalog.

  4. Comparing diagnostic tests on benefit-risk.

    PubMed

    Pennello, Gene; Pantoja-Galicia, Norberto; Evans, Scott

    2016-01-01

    Comparing diagnostic tests on accuracy alone can be inconclusive. For example, a test may have better sensitivity than another test yet worse specificity. Comparing tests on benefit risk may be more conclusive because clinical consequences of diagnostic error are considered. For benefit-risk evaluation, we propose diagnostic yield, the expected distribution of subjects with true positive, false positive, true negative, and false negative test results in a hypothetical population. We construct a table of diagnostic yield that includes the number of false positive subjects experiencing adverse consequences from unnecessary work-up. We then develop a decision theory for evaluating tests. The theory provides additional interpretation to quantities in the diagnostic yield table. It also indicates that the expected utility of a test relative to a perfect test is a weighted accuracy measure, the average of sensitivity and specificity weighted for prevalence and relative importance of false positive and false negative testing errors, also interpretable as the cost-benefit ratio of treating non-diseased and diseased subjects. We propose plots of diagnostic yield, weighted accuracy, and relative net benefit of tests as functions of prevalence or cost-benefit ratio. Concepts are illustrated with hypothetical screening tests for colorectal cancer with test positive subjects being referred to colonoscopy.
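    A minimal sketch of the diagnostic-yield table and a prevalence- and cost-weighted accuracy measure; the weighting formula is one plausible reading of the verbal definition above, and the sensitivity, specificity, prevalence, and cost-benefit weight are illustrative.

```python
# Hedged sketch: diagnostic-yield counts for a hypothetical screened population
# and a weighted accuracy combining sensitivity and specificity.
def diagnostic_yield(sens, spec, prevalence, population=100_000):
    diseased = prevalence * population
    healthy = population - diseased
    return {
        "true_positive": sens * diseased,
        "false_negative": (1 - sens) * diseased,
        "true_negative": spec * healthy,
        "false_positive": (1 - spec) * healthy,
    }

def weighted_accuracy(sens, spec, prevalence, weight_fp_vs_fn):
    """Average of sensitivity and specificity weighted by prevalence and error costs
    (one possible formalization of the verbal definition; an assumption here)."""
    w = prevalence / (prevalence + weight_fp_vs_fn * (1 - prevalence))
    return w * sens + (1 - w) * spec

table = diagnostic_yield(sens=0.85, spec=0.90, prevalence=0.01)
print({k: round(v) for k, v in table.items()})
print("weighted accuracy:", round(weighted_accuracy(0.85, 0.90, 0.01, weight_fp_vs_fn=0.1), 3))
```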

  5. Estimating standard errors in feature network models.

    PubMed

    Frank, Laurence E; Heiser, Willem J

    2007-05-01

    Feature network models are graphical structures that represent proximity data in a discrete space while using the same formalism that is the basis of least squares methods employed in multidimensional scaling. Existing methods to derive a network model from empirical data only give the best-fitting network and yield no standard errors for the parameter estimates. The additivity properties of networks make it possible to consider the model as a univariate (multiple) linear regression problem with positivity restrictions on the parameters. In the present study, both theoretical and empirical standard errors are obtained for the constrained regression parameters of a network model with known features. The performance of both types of standard error is evaluated using Monte Carlo techniques.

  6. Application of an Optimal Tuner Selection Approach for On-Board Self-Tuning Engine Models

    NASA Technical Reports Server (NTRS)

    Simon, Donald L.; Armstrong, Jeffrey B.; Garg, Sanjay

    2012-01-01

    An enhanced design methodology for minimizing the error in on-line Kalman filter-based aircraft engine performance estimation applications is presented in this paper. It specifically addresses the under-determined estimation problem, in which there are more unknown parameters than available sensor measurements. This work builds upon an existing technique for systematically selecting a model tuning parameter vector of appropriate dimension to enable estimation by a Kalman filter, while minimizing the estimation error in the parameters of interest. While the existing technique was optimized for open-loop engine operation at a fixed design point, in this paper an alternative formulation is presented that enables the technique to be optimized for an engine operating under closed-loop control throughout the flight envelope. The theoretical Kalman filter mean squared estimation error at a steady-state closed-loop operating point is derived, and the tuner selection approach applied to minimize this error is discussed. A technique for constructing a globally optimal tuning parameter vector, which enables full-envelope application of the technology, is also presented, along with design steps for adjusting the dynamic response of the Kalman filter state estimates. Results from the application of the technique to linear and nonlinear aircraft engine simulations are presented and compared to the conventional approach of tuner selection. The new methodology is shown to yield a significant improvement in on-line Kalman filter estimation accuracy.
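
    The steady-state estimation error referred to above can be illustrated, for a generic linear model, by solving the discrete algebraic Riccati equation. This is a minimal sketch with an arbitrary placeholder system, not the engine model or closed-loop formulation used in the paper.

    ```python
    import numpy as np
    from scipy.linalg import solve_discrete_are

    # Placeholder discrete-time model: x_{k+1} = A x_k + w_k, y_k = C x_k + v_k,
    # with fewer measurements than states (under-determined estimation).
    A = np.array([[0.95, 0.05],
                  [0.00, 0.90]])
    C = np.array([[1.0, 0.0]])
    Q = np.diag([1e-4, 1e-4])          # process noise covariance
    R = np.array([[1e-3]])             # measurement noise covariance

    # Steady-state (a priori) error covariance from the discrete Riccati equation.
    P = solve_discrete_are(A.T, C.T, Q, R)
    K = P @ C.T @ np.linalg.inv(C @ P @ C.T + R)   # steady-state Kalman gain
    print("theoretical steady-state MSE per state:", np.diag(P))
    print("steady-state gain:\n", K)
    ```

    The diagonal of P is the theoretical steady-state mean squared estimation error of each state; in the paper this quantity is derived at a closed-loop operating point and minimized over candidate tuning parameter vectors.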

  7. The performance of projective standardization for digital subtraction radiography.

    PubMed

    Mol, André; Dunn, Stanley M

    2003-09-01

    We sought to test the performance and robustness of projective standardization in preserving invariant properties of subtraction images in the presence of irreversible projection errors. Twenty bone chips (1-10 mg each) were placed on dentate dry mandibles. Follow-up images were obtained without the bone chips, and irreversible projection errors of up to 6 degrees were introduced. Digitized image intensities were normalized, and follow-up images were geometrically reconstructed by 2 operators using anatomical and fiduciary landmarks. Subtraction images were analyzed by 3 observers. Regression analysis revealed a linear relationship between radiographic estimates of mineral loss and actual mineral loss (R² = 0.99; P < .05). The effect of projection error was not significant (general linear model [GLM]: P > .05). There was no difference between the radiographic estimates from images standardized with anatomical landmarks and those standardized with fiduciary landmarks (Wilcoxon signed rank test: P > .05). Operator variability was low for image analysis alone (R² = 0.99; P < .05), as well as for the entire procedure (R² = 0.98; P < .05). The predicted detection limit was smaller than 1 mg. Subtraction images registered by projective standardization yield estimates of osseous change that are invariant to irreversible projection errors of up to 6 degrees. Within these limits, operator precision is high and anatomical landmarks can be used to establish correspondence.

  8. Comparison of Highly Resolved Model-Based Exposure ...

    EPA Pesticide Factsheets

    Human exposure to air pollution in many studies is represented by ambient concentrations from space-time kriging of observed values. Space-time kriging techniques based on a limited number of ambient monitors may fail to capture the concentration from local sources. Further, because people spend most of their time indoors, using ambient concentration to represent exposure may cause error. To quantify the associated exposure error, we computed a series of six different hourly-based exposure metrics at 16,095 Census blocks of three counties in North Carolina for CO, NOx, PM2.5, and elemental carbon (EC) during 2012. These metrics include ambient background concentration from space-time ordinary kriging (STOK), ambient on-road concentration from the Research LINE source dispersion model (R-LINE), a hybrid concentration combining STOK and R-LINE, and their associated indoor concentrations from an indoor infiltration mass balance model. Using a hybrid-based indoor concentration as the standard, the comparison showed that outdoor STOK metrics yielded large error at both the population (67% to 93%) and individual levels (average bias between −10% and 95%). For pollutants with significant contribution from on-road emission (EC and NOx), the on-road based indoor metric performs the best at the population level (error less than 52%). At the individual level, however, the STOK-based indoor concentration performs the best (average bias below 30%). For PM2.5, due to the relatively low co
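
    The indoor infiltration mass balance mentioned above is often reduced, at steady state, to an infiltration factor multiplying the ambient concentration. A minimal sketch with illustrative parameter values, not the study's model inputs:

    ```python
    # Hypothetical steady-state infiltration mass balance (no indoor sources):
    #   C_in = (P * a) / (a + k) * C_out
    # P = particle penetration efficiency, a = air exchange rate (1/h),
    # k = indoor deposition/decay rate (1/h).
    def indoor_concentration(c_out, penetration=1.0, air_exchange=0.5, loss_rate=0.2):
        f_inf = penetration * air_exchange / (air_exchange + loss_rate)
        return f_inf * c_out

    print(indoor_concentration(c_out=12.0))   # e.g. ambient PM2.5 in ug/m3
    ```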

  9. Multielevation calibration of frequency-domain electromagnetic data

    USGS Publications Warehouse

    Minsley, Burke J.; Kass, M. Andy; Hodges, Greg; Smith, Bruce D.

    2014-01-01

    Systematic calibration errors must be taken into account because they can substantially impact the accuracy of inverted subsurface resistivity models derived from frequency-domain electromagnetic data, resulting in potentially misleading interpretations. We have developed an approach that uses data acquired at multiple elevations over the same location to assess calibration errors. A significant advantage is that this method does not require prior knowledge of subsurface properties from borehole or ground geophysical data (though these can be readily incorporated if available), and is, therefore, well suited to remote areas. The multielevation data were used to solve for calibration parameters and a single subsurface resistivity model that are self-consistent over all elevations. The deterministic and Bayesian formulations of the multielevation approach illustrate parameter sensitivity and uncertainty using synthetic- and field-data examples. Multiplicative calibration errors (gain and phase) were found to be better resolved at high frequencies and when data were acquired over a relatively conductive area, whereas additive errors (bias) were reasonably resolved over conductive and resistive areas at all frequencies. The Bayesian approach outperformed the deterministic approach when estimating calibration parameters using multielevation data at a single location; however, joint analysis of multielevation data at multiple locations using the deterministic algorithm yielded the most accurate estimates of calibration parameters. Inversion results using calibration-corrected data revealed marked improvement in misfit, lending added confidence to the interpretation of these models.
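
    As a rough illustration of how the multiplicative (gain and phase) and additive (bias) calibration errors discussed above might enter a complex frequency-domain EM datum, a minimal sketch with illustrative values; the paper's actual parameterization may differ.

    ```python
    import numpy as np

    def apply_calibration_errors(d_true, gain, phase_rad, bias):
        """d_true: complex predicted datum (in-phase + i*quadrature, ppm).
        Multiplicative gain/phase error plus an additive complex bias."""
        return gain * np.exp(1j * phase_rad) * d_true + bias

    # Hypothetical datum and calibration parameters.
    d_true = 120.0 + 45.0j
    d_obs = apply_calibration_errors(d_true, gain=1.03,
                                     phase_rad=np.deg2rad(2.0),
                                     bias=(-5.0 + 3.0j))
    print(d_obs)
    ```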

  10. Spatial range of illusory effects in Müller-Lyer figures.

    PubMed

    Predebon, J

    2001-11-01

    The spatial range of the illusory effects in Müller-Lyer (M-L) figures was examined in three experiments. Experiments 1 and 2 assessed the pattern of bisection errors along the shaft of the standard or double-angle (experiment 1) and the single-angle (experiment 2) M-L figures: Subjects bisected the shaft and the resulting two half-segments of the shaft to produce apparently equal quarters, and then each of the quarters to produce eight equal-appearing segments. The bisection judgments of each segment were referenced to the segment's physical midpoints. The expansion or wings-out and the contraction or wings-in figures yielded similar patterns of bisection errors. For the standard M-L figures, there were significant errors in bisecting each half, and each end-quarter, but not the two central quarters of the shaft. For the single-angle M-L figures, there were significant errors in bisecting the length of the shaft, the half-segment, and the quarter, of the shaft adjacent to the vertex but not the second quarter from the vertex nor in dividing the half of the shaft at the open end of the figure into four equal intervals. Experiment 3 assessed the apparent length of the half-segment of the shaft at the open end of the single-angle figures. Length judgments were unaffected by the vertex at the opposite end of the shaft. Taken together, the results indicate that the length distortions in both the standard and single-angle M-L figures are not uniformly distributed along the shaft but rather are confined mainly to the quarters adjacent to the vertices. The present findings imply that theories of the M-L illusion which assume uniform expansion or contraction of the shafts are incomplete.

  11. Trueness verification of actual creatinine assays in the European market demonstrates a disappointing variability that needs substantial improvement. An international study in the framework of the EC4 creatinine standardization working group.

    PubMed

    Delanghe, Joris R; Cobbaert, Christa; Galteau, Marie-Madeleine; Harmoinen, Aimo; Jansen, Rob; Kruse, Rolf; Laitinen, Päivi; Thienpont, Linda M; Wuyts, Birgitte; Weykamp, Cas; Panteghini, Mauro

    2008-01-01

    The European In Vitro Diagnostics (IVD) directive requires traceability to reference methods and materials of analytes. It is a task of the profession to verify the trueness of results and IVD compatibility. The results of a trueness verification study by the European Communities Confederation of Clinical Chemistry (EC4) working group on creatinine standardization are described, in which 189 European laboratories analyzed serum creatinine in a commutable serum-based material, using analytical systems from seven companies. Values were targeted using isotope dilution gas chromatography/mass spectrometry. Results were tested on their compliance to a set of three criteria: trueness, i.e., no significant bias relative to the target value, between-laboratory variation and within-laboratory variation relative to the maximum allowable error. For the lower and intermediate level, values differed significantly from the target value in the Jaffe and the dry chemistry methods. At the high level, dry chemistry yielded higher results. Between-laboratory coefficients of variation ranged from 4.37% to 8.74%. Total error budget was mainly consumed by the bias. Non-compensated Jaffe methods largely exceeded the total error budget. Best results were obtained for the enzymatic method. The dry chemistry method consumed a large part of its error budget due to calibration bias. Despite the European IVD directive and the growing needs for creatinine standardization, an unacceptable inter-laboratory variation was observed, which was mainly due to calibration differences. The calibration variation has major clinical consequences, in particular in pediatrics, where reference ranges for serum and plasma creatinine are low, and in the estimation of glomerular filtration rate.

  12. Gaze Compensation as a Technique for Improving Hand–Eye Coordination in Prosthetic Vision

    PubMed Central

    Titchener, Samuel A.; Shivdasani, Mohit N.; Fallon, James B.; Petoe, Matthew A.

    2018-01-01

    Purpose Shifting the region-of-interest within the input image to compensate for gaze shifts (“gaze compensation”) may improve hand–eye coordination in visual prostheses that incorporate an external camera. The present study investigated the effects of eye movement on hand-eye coordination under simulated prosthetic vision (SPV), and measured the coordination benefits of gaze compensation. Methods Seven healthy-sighted subjects performed a target localization-pointing task under SPV. Three conditions were tested, modeling: retinally stabilized phosphenes (uncompensated); gaze compensation; and no phosphene movement (center-fixed). The error in pointing was quantified for each condition. Results Gaze compensation yielded a significantly smaller pointing error than the uncompensated condition for six of seven subjects, and a similar or smaller pointing error than the center-fixed condition for all subjects (two-way ANOVA, P < 0.05). Pointing error eccentricity and gaze eccentricity were moderately correlated in the uncompensated condition (azimuth: R2 = 0.47; elevation: R2 = 0.51) but not in the gaze-compensated condition (azimuth: R2 = 0.01; elevation: R2 = 0.00). Increased variability in gaze at the time of pointing was correlated with greater reduction in pointing error in the center-fixed condition compared with the uncompensated condition (R2 = 0.64). Conclusions Eccentric eye position impedes hand–eye coordination in SPV. While limiting eye eccentricity in uncompensated viewing can reduce errors, gaze compensation is effective in improving coordination for subjects unable to maintain fixation. Translational Relevance The results highlight the present necessity for suppressing eye movement and support the use of gaze compensation to improve hand–eye coordination and localization performance in prosthetic vision. PMID:29321945

  13. A comparison of hydrostatic weighing and air displacement plethysmography in adults with spinal cord injury.

    PubMed

    Clasey, Jody L; Gater, David R

    2005-11-01

    To compare (1) total body volume (V(b)) and density (D(b)) measurements obtained by hydrostatic weighing (HW) and air displacement plethysmography (ADP) in adults with spinal cord injury (SCI); (2) measured and predicted thoracic gas volume (V(TG)); and (3) differences in percentage of fat measurements using ADP-obtained D(b) and HW-obtained D(b) measures that were interchanged in a 4-compartment body composition model (4-comp %fat). Twenty adults with SCI below the T3 vertebrae and motor complete paraplegia underwent ADP, V(TG), and HW testing in research laboratories in a university setting; in a subgroup (n=13) of subjects, 4-comp %fat procedures were computed. Outcome measures included statistical analyses with determination of group mean differences, shared variance, total error, and 95% confidence intervals. The 2 methods yielded small yet significantly different V(b) and D(b). The groups' mean V(TG) did not differ significantly, but the large relative differences indicated an unacceptable amount of individual error. When the 4-comp %fat measurements were compared, there was a trend toward significant differences (P=.08). ADP is a valid alternative method of determining the V(b) and D(b) in adults with SCI; however, the predicted V(TG) should be used with caution.

  14. Electrochemical sensors applied to pollution monitoring: Measurement error and gas ratio bias - A volcano plume case study

    NASA Astrophysics Data System (ADS)

    Roberts, T. J.; Saffell, J. R.; Oppenheimer, C.; Lurton, T.

    2014-06-01

    There is an increasing scientific interest in the use of miniature electrochemical sensors to detect and quantify atmospheric trace gases. This has led to the development of ‘Multi-Gas' systems applied to measurements of both volcanic gas emissions, and urban air pollution. However, such measurements are subject to uncertainties introduced by sensor response time, a critical issue that has received limited attention to date. Here, a detailed analysis of output from an electrochemical SO2 sensor and two H2S sensors (contrasting in their time responses and cross-sensitivities) demonstrates how instrument errors arise under the conditions of rapidly fluctuating (by dilution) gas abundances, leading to scatter and importantly bias in the reported gas ratios. In a case study at Miyakejima volcano (Japan), electrochemical sensors were deployed at both the crater-rim and downwind locations, thereby exposed to rapidly fluctuating and smoothly varying plume gas concentrations, respectively. Discrepancies in the H2S/SO2 gas mixing ratios derived from these measurements are attributed to the sensors' differing time responses to SO2 and H2S under fluctuating plume conditions, with errors magnified by the need to correct for SO2 interference in the H2S readings. Development of a sensor response model that reproduces sensor t90 behaviour (the time required to reach 90% of the final signal following a step change in gas abundance) during calibration enabled this measurement error to be simulated numerically. The sensor response times were characterised as SO2 sensor (t90 ~ 13 s), H2S sensor without interference (t90 ~ 11 s), and H2S sensor with interference (t90 ~ 20 s to H2S and ~ 32 s to SO2). We show that a method involving data integration between periods of episodic plume exposure identifiable in the sensor output yields a less biased H2S/SO2 ratio estimate than that derived from standard analysis approaches. For the Miyakejima crater-rim dataset this method yields highly correlated H2S and SO2 abundances (R2 > 0.99) and the improved crater-rim data analysis combined with downwind measurements yields H2S/SO2 = 0.11 ± 0.01. Our analysis has significant implications for the reliance that can be placed on ‘Multi-Gas'-derived gas ratios, whether for volcanological or other purposes, in the absence of consideration of the complexities of sensor response times.
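
    A first-order lag is the simplest sensor model consistent with the t90 values quoted above (for a first-order system, t90 = τ·ln 10). The sketch below uses an idealized square-wave plume with the SO2 and interference-corrected H2S response times from the study to show how differing response times distort instantaneous gas ratios under rapidly fluctuating concentrations, while an integrated ratio is less affected. The plume time series and the first-order model are assumptions for illustration, not the authors' numerical sensor model.

    ```python
    import numpy as np

    def first_order_response(c_true, t90, dt=1.0):
        """Simulate sensor output for a true concentration time series,
        assuming a first-order lag with the given t90 (seconds)."""
        tau = t90 / np.log(10.0)       # t90 = tau * ln(10) for a first-order system
        out = np.zeros_like(c_true)
        for i in range(1, len(c_true)):
            out[i] = out[i - 1] + (dt / tau) * (c_true[i] - out[i - 1])
        return out

    # Rapidly fluctuating plume: alternating 30 s gas pulses, fixed H2S/SO2 = 0.1.
    t = np.arange(0, 600, 1.0)
    so2_true = np.where((t // 30) % 2 == 0, 5.0, 0.0)     # ppm
    h2s_true = 0.1 * so2_true

    so2_meas = first_order_response(so2_true, t90=13.0)   # response times from the study
    h2s_meas = first_order_response(h2s_true, t90=20.0)

    mask = so2_meas > 0.5
    print("mean of instantaneous H2S/SO2 readings:",
          np.mean(h2s_meas[mask] / so2_meas[mask]))
    print("ratio of integrated signals:", h2s_meas.sum() / so2_meas.sum())
    ```

    Integrating over whole plume-exposure episodes, as the authors propose, largely cancels the lag-induced distortion because both simulated sensors pass the integrated (zero-frequency) signal without attenuation.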

  15. Crop Yield Predictions - High Resolution Statistical Model for Intra-season Forecasts Applied to Corn in the US

    NASA Astrophysics Data System (ADS)

    Cai, Y.

    2017-12-01

    Accurately forecasting crop yields has broad implications for economic trading, food production monitoring, and global food security. However, variation in environmental variables makes it challenging to model yields accurately, especially when the lack of highly accurate measurements makes it difficult to build models that succeed across space and time. In 2016, we developed a sequence of machine-learning based models forecasting end-of-season corn yields for the US at both the county and national levels. We combined machine learning algorithms in a hierarchical way, and used an understanding of physiological processes in temporal feature selection, to achieve high precision in our intra-season forecasts, including in very anomalous seasons. During the live run, we predicted the national corn yield within 1.40% of the final USDA number as early as August. In backtesting over the 2000-2015 period, our model predicts national yield within 2.69% of the actual yield, on average, by mid-August. At the county level, our model predicts 77% of the variation in final yield using data through the beginning of August and improves to 80% by the beginning of October, with the percentage of counties predicted within 10% of the average yield increasing from 68% to 73%. Further, the lowest errors are in the most significant producing regions, resulting in very high precision national-level forecasts. In addition, we identify the changes of important variables throughout the season, specifically early-season land surface temperature, and mid-season land surface temperature and vegetation index. For the 2017 season, we feed 2016 data to the training set, together with additional geospatial data sources, aiming to make the current model even more precise. We will show how our 2017 US corn yield forecasts converge in time, which factors affect the yield the most, as well as present our plans for 2018 model adjustments.

  16. Systematic Evaluation of Wajima Superposition (Steady-State Concentration to Mean Residence Time) in the Estimation of Human Intravenous Pharmacokinetic Profile.

    PubMed

    Lombardo, Franco; Berellini, Giuliano; Labonte, Laura R; Liang, Guiqing; Kim, Sean

    2016-03-01

    We present a systematic evaluation of the Wajima superpositioning method to estimate the human intravenous (i.v.) pharmacokinetic (PK) profile based on a set of 54 marketed drugs with diverse structure and range of physicochemical properties. We illustrate the use of average of "best methods" for the prediction of clearance (CL) and volume of distribution at steady state (VDss) as described in our earlier work (Lombardo F, Waters NJ, Argikar UA, et al. J Clin Pharmacol. 2013;53(2):178-191; Lombardo F, Waters NJ, Argikar UA, et al. J Clin Pharmacol. 2013;53(2):167-177). These methods provided much more accurate prediction of human PK parameters, yielding 88% and 70% of the prediction within 2-fold error for VDss and CL, respectively. The prediction of human i.v. profile using Wajima superpositioning of rat, dog, and monkey time-concentration profiles was tested against the observed human i.v. PK using fold error statistics. The results showed that 63% of the compounds yielded a geometric mean of fold error below 2-fold, and an additional 19% yielded a geometric mean of fold error between 2- and 3-fold, leaving only 18% of the compounds with a relatively poor prediction. Our results showed that good superposition was observed in any case, demonstrating the predictive value of the Wajima approach, and that the cause of poor prediction of human i.v. profile was mainly due to the poorly predicted CL value, while VDss prediction had a minor impact on the accuracy of human i.v. profile prediction. Copyright © 2016. Published by Elsevier Inc.
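
    The fold-error statistic quoted above (geometric mean of the fold error) is conventionally computed as below; the predicted and observed values are hypothetical placeholders, not data from the study.

    ```python
    import numpy as np

    def geometric_mean_fold_error(predicted, observed):
        """GMFE = 10 ** mean(|log10(pred / obs)|); 1 is a perfect match,
        2 means predictions are on average within 2-fold of observations."""
        predicted = np.asarray(predicted, dtype=float)
        observed = np.asarray(observed, dtype=float)
        return 10.0 ** np.mean(np.abs(np.log10(predicted / observed)))

    # Hypothetical predicted vs. observed concentrations along an i.v. profile.
    pred = [12.0, 7.5, 4.2, 2.0, 1.1]
    obs = [10.0, 8.0, 5.0, 2.5, 0.9]
    print(geometric_mean_fold_error(pred, obs))
    ```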

  17. Forecasting volcanic air pollution in Hawaii: Tests of time series models

    NASA Astrophysics Data System (ADS)

    Reikard, Gordon

    2012-12-01

    Volcanic air pollution, known as vog (volcanic smog) has recently become a major issue in the Hawaiian islands. Vog is caused when volcanic gases react with oxygen and water vapor. It consists of a mixture of gases and aerosols, which include sulfur dioxide and other sulfates. The source of the volcanic gases is the continuing eruption of Mount Kilauea. This paper studies predicting vog using statistical methods. The data sets include time series for SO2 and SO4, over locations spanning the west, south and southeast coasts of Hawaii, and the city of Hilo. The forecasting models include regressions and neural networks, and a frequency domain algorithm. The most typical pattern for the SO2 data is for the frequency domain method to yield the most accurate forecasts over the first few hours, and at the 24 h horizon. The neural net places second. For the SO4 data, the results are less consistent. At two sites, the neural net generally yields the most accurate forecasts, except at the 1 and 24 h horizons, where the frequency domain technique wins narrowly. At one site, the neural net and the frequency domain algorithm yield comparable errors over the first 5 h, after which the neural net dominates. At the remaining site, the frequency domain method is more accurate over the first 4 h, after which the neural net achieves smaller errors. For all the series, the average errors are well within one standard deviation of the actual data at all the horizons. However, the errors also show irregular outliers. In essence, the models capture the central tendency of the data, but are less effective in predicting the extreme events.

  18. Heterogeneous reactions of HNO3(g) + NaCl(s) yields HCl(g) + NaNO3(s) and N2O5(g) + NaCl(s) yields ClNO2(g) + NaNO3(s)

    NASA Technical Reports Server (NTRS)

    Leu, Ming-Taun; Timonen, Raimo S.; Keyser, Leon F.; Yung, Yuk L.

    1995-01-01

    The heterogeneous reactions of HNO3(g) + NaCl(s) → HCl(g) + NaNO3(s) (reaction 1) and N2O5(g) + NaCl(s) → ClNO2(g) + NaNO3(s) (reaction 2) were investigated over the temperature range 223-296 K in a flow-tube reactor coupled to a quadrupole mass spectrometer. Either a chemical ionization mass spectrometer (CIMS) or an electron-impact ionization mass spectrometer (EIMS) was used to provide suitable detection sensitivity and selectivity. In order to mimic atmospheric conditions, partial pressures of HNO3 and N2O5 in the range 6 × 10⁻⁸ to 2 × 10⁻⁶ Torr were used. Granule sizes and surface roughness of the solid NaCl substrates were determined by using a scanning electron microscope. For dry NaCl substrates, decay rates of HNO3 were used to obtain γ1 = 0.013 ± 0.004 (1σ) at 296 K and > 0.008 at 223 K, respectively. The error quoted is the statistical error. After all corrections were made, the overall error, including systematic error, was estimated to be about a factor of 2. HCl was found to be the sole gas-phase product of reaction 1. The mechanism changed from heterogeneous reaction to predominantly physical adsorption when the reactor was cooled from 296 to 223 K. For reaction 2 using dry salts, γ2 was found to be less than 1.0 × 10⁻⁴ at both 223 and 296 K. The gas-phase reaction product was identified as ClNO2 in previous studies using an infrared spectrometer. An enhancement in reaction probability was observed if water was not completely removed from salt surfaces, probably due to the reaction N2O5(g) + H2O(s) → 2HNO3(g). Our results are compared with previous literature values obtained using different experimental techniques and conditions. The implications of the present results for the enhancement of the hydrogen chloride column density in the lower stratosphere after the El Chichon volcanic eruption and for the chemistry of HCl and HNO3 in the marine troposphere are discussed.

  19. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Buras, Andrzej J.; /Munich, Tech. U.; Gorbahn, Martin

    The authors calculate the complete next-to-next-to-leading order QCD corrections to the charm contribution of the rare decay K+ → π+νν̄. They encounter several new features, which were absent in lower orders. They discuss them in detail and present the results for the two-loop matching conditions of the Wilson coefficients, the three-loop anomalous dimensions, and the two-loop matrix elements of the relevant operators that enter the next-to-next-to-leading order renormalization group analysis of the Z-penguin and the electroweak box contribution. The inclusion of the next-to-next-to-leading order QCD corrections leads to a significant reduction of the theoretical uncertainty from ±9.8% down to ±2.4% in the relevant parameter Pc(X), implying the leftover scale uncertainties in B(K+ → π+νν̄) and in the determination of |Vtd|, sin 2β, and γ from the K → πνν̄ system to be ±1.3%, ±1.0%, ±0.006, and ±1.2°, respectively. For the charm quark MS-bar mass mc(mc) = (1.30 ± 0.05) GeV and |Vus| = 0.2248, the next-to-leading order value Pc(X) = 0.37 ± 0.06 is modified to Pc(X) = 0.38 ± 0.04 at the next-to-next-to-leading order level, with the latter error fully dominated by the uncertainty in mc(mc). They present tables for Pc(X) as a function of mc(mc) and αs(MZ) and a very accurate analytic formula that summarizes these two dependences as well as the dominant theoretical uncertainties. Adding the recently calculated long-distance contributions, they find B(K+ → π+νν̄) = (8.0 ± 1.1) × 10⁻¹¹, with the present uncertainties in mc(mc) and the Cabibbo-Kobayashi-Maskawa elements being the dominant individual sources in the quoted error. They also emphasize that improved calculations of the long-distance contributions to K+ → π+νν̄ and of the isospin-breaking corrections in the evaluation of the weak current matrix elements from K+ → π0 e+ ν would be valuable in order to increase the potential of the two golden K → πνν̄ decays in the search for new physics.

  20. Structure analysis of tax revenue and inflation rate in Banda Aceh using vector error correction model with multiple alpha

    NASA Astrophysics Data System (ADS)

    Sofyan, Hizir; Maulia, Eva; Miftahuddin

    2017-11-01

    A country has several important parameters to achieve economic prosperity, such as tax revenue and the inflation rate. One of the largest revenues of the State Budget in Indonesia comes from the tax sector. Meanwhile, the rate of inflation occurring in a country can be used as an indicator of the economic conditions, good or bad, faced by the country. Given the importance of tax revenue and inflation rate control in achieving economic prosperity, it is necessary to analyze the structure of the relationship between tax revenue and the inflation rate. This study aims to produce the best VECM (Vector Error Correction Model) with optimal lag using various alpha levels, and to perform structural analysis using the Impulse Response Function (IRF) of the VECM models to examine the relationship of tax revenue and inflation in Banda Aceh. The results showed that the best model for the tax revenue and inflation rate data in Banda Aceh City using alpha 0.01 is a VECM with optimal lag 2, while the best model using alpha 0.05 and 0.1 is a VECM with optimal lag 3. The VECM model with alpha 0.01 yielded four significant models: the income tax model and the models for the overall, health, and education inflation rates in Banda Aceh. The VECM model with alpha 0.05 and 0.1 yielded one significant model, the income tax model. Based on the VECM models, two structural IRF analyses were then formed to examine the relationship of tax revenue and inflation in Banda Aceh: the IRF with VECM(2) and the IRF with VECM(3).
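
    As a rough illustration of the error-correction structure underlying a VECM, a minimal two-step (Engle-Granger style) sketch with simulated series standing in for tax revenue and inflation; this is not the authors' maximum-likelihood VECM estimation, lag selection, or data.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Simulated monthly series sharing a stochastic trend (cointegrated pair).
    n = 120
    common = np.cumsum(rng.normal(size=n))
    tax = 2.0 * common + rng.normal(scale=0.5, size=n)
    infl = common + rng.normal(scale=0.5, size=n)

    # Step 1: cointegrating regression tax_t = b0 + b1 * infl_t + u_t.
    X = np.column_stack([np.ones(n), infl])
    b, *_ = np.linalg.lstsq(X, tax, rcond=None)
    ect = tax - X @ b                                   # error-correction term u_t

    # Step 2: error-correction equation for d(tax) with two lagged differences
    # (cf. the optimal lag 2 reported above).
    d_tax, d_infl = np.diff(tax), np.diff(infl)
    Y = d_tax[2:]
    Z = np.column_stack([np.ones(len(Y)), ect[2:-1],
                         d_tax[1:-1], d_tax[:-2],
                         d_infl[1:-1], d_infl[:-2]])
    coef, *_ = np.linalg.lstsq(Z, Y, rcond=None)
    print("cointegrating vector (b0, b1):", np.round(b, 3))
    print("adjustment coefficient on the error-correction term:", round(coef[1], 3))
    ```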

  1. ON THE HUBBLE SPACE TELESCOPE TRIGONOMETRIC PARALLAX OF THE DWARF NOVA SS CYGNI

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nelan, Edmund P.; Bond, Howard E., E-mail: nelan@stsci.edu, E-mail: heb11@psu.edu

    SS Cygni is one of the brightest dwarf novae (DNe), and one of the best studied prototypes of the cataclysmic variables. Astrometric observations with the Fine Guidance Sensors (FGSs) on the Hubble Space Telescope (HST), published in 2004, gave an absolute trigonometric parallax of 6.06 ± 0.44 mas. However, recent very long baseline interferometry (VLBI), obtained during radio outbursts of SS Cyg, has yielded a significantly larger absolute parallax of 8.80 ± 0.12 mas, as well as a large difference in the direction of the proper motion (PM) compared to the HST result. The VLBI distance reduces the implied luminosity of SS Cyg by about a factor of two, giving good agreement with predictions based on accretion-disk theory in order to explain the observed DN outburst behavior. This discrepancy raises the possibility of significant systematic errors in FGS parallaxes and PMs. We have reanalyzed the archival HST/FGS data, including (1) a critical redetermination of the parallaxes of the background astrometric reference stars, (2) updated input values of the reference-star PMs, and (3) correction of the position measurements for color-dependent shifts. Our new analysis yields a PM of SS Cyg that agrees well with the VLBI motion, and an absolute parallax of 8.30 ± 0.41 mas, also statistically concordant with the VLBI result at the ~1.2σ level. Our results suggest that HST/FGS parallaxes are free of large systematic errors, when the data are reduced using high-quality input values for the astrometry of the reference stars, and when instrumental signatures are properly removed.

  2. Robust LOD scores for variance component-based linkage analysis.

    PubMed

    Blangero, J; Williams, J T; Almasy, L

    2000-01-01

    The variance component method is now widely used for linkage analysis of quantitative traits. Although this approach offers many advantages, the importance of the underlying assumption of multivariate normality of the trait distribution within pedigrees has not been studied extensively. Simulation studies have shown that traits with leptokurtic distributions yield linkage test statistics that exhibit excessive Type I error when analyzed naively. We derive analytical formulae relating the deviation from the expected asymptotic distribution of the lod score to the kurtosis and total heritability of the quantitative trait. A simple correction constant yields a robust lod score for any deviation from normality and for any pedigree structure, and effectively eliminates the problem of inflated Type I error due to misspecification of the underlying probability model in variance component-based linkage analysis.

  3. Performance of a Space-Based Wavelet Compressor for Plasma Count Data on the MMS Fast Plasma Investigation

    NASA Technical Reports Server (NTRS)

    Barrie, A. C.; Smith, S. E.; Dorelli, J. C.; Gershman, D. J.; Yeh, P.; Schiff, C.; Avanov, L. A.

    2017-01-01

    Data compression has been a staple of imaging instruments for years. Recently, plasma measurements have utilized compression with relatively low compression ratios. The Fast Plasma Investigation (FPI) on board the Magnetospheric Multiscale (MMS) mission generates data roughly 100 times faster than previous plasma instruments, requiring a higher compression ratio to fit within the telemetry allocation. This study investigates the performance of a space-based compression standard employing a Discrete Wavelet Transform and a Bit Plane Encoder (DWT/BPE) in compressing FPI plasma count data. Data from the first 6 months of FPI operation are analyzed to explore the error modes evident in the data and how to adapt to them. While approximately half of the Dual Electron Spectrometer (DES) maps had some level of loss, it was found that there is little effect on the plasma moments and that errors present in individual sky maps are typically minor. The majority of Dual Ion Spectrometer burst sky maps compressed in a lossless fashion, with no error introduced during compression. Because of induced compression error, the size limit for DES burst images has been increased for Phase 1B. Additionally, it was found that the floating point compression mode yielded better results when images have significant compression error, leading to floating point mode being used for the fast survey mode of operation for Phase 1B. Despite the suggested tweaks, it was found that wavelet-based compression, and a DWT/BPE algorithm in particular, is highly suitable to data compression for plasma measurement instruments and can be recommended for future missions.

  4. Assessing Working Memory in Mild Cognitive Impairment with Serial Order Recall.

    PubMed

    Emrani, Sheina; Libon, David J; Lamar, Melissa; Price, Catherine C; Jefferson, Angela L; Gifford, Katherine A; Hohman, Timothy J; Nation, Daniel A; Delano-Wood, Lisa; Jak, Amy; Bangen, Katherine J; Bondi, Mark W; Brickman, Adam M; Manly, Jennifer; Swenson, Rodney; Au, Rhoda

    2018-01-01

    Working memory (WM) is often assessed with serial order tests such as repeating digits backward. In prior dementia research using the Backward Digit Span Test (BDT), only aggregate test performance was examined. The current research tallied primacy/recency effects, out-of-sequence transposition errors, perseverations, and omissions to assess WM deficits in patients with mild cognitive impairment (MCI). Memory clinic patients (n = 66) were classified into three groups: single domain amnestic MCI (aMCI), combined mixed domain/dysexecutive MCI (mixed/dys MCI), and non-MCI where patients did not meet criteria for MCI. Serial order/WM ability was assessed by asking participants to repeat 7 trials of five digits backwards. Serial order position accuracy, transposition errors, perseverations, and omission errors were tallied. A 3 (group) × 5 (serial position) repeated measures ANOVA yielded a significant group × trial interaction. Follow-up analyses found attenuation of the recency effect for mixed/dys MCI patients. Mixed/dys MCI patients scored lower than non-MCI patients for serial position 3 (p < 0.003) and serial position 4 (p < 0.002), and lower than both other groups for serial position 5 (recency; p < 0.002). Mixed/dys MCI patients also produced more transposition errors than both other groups (p < 0.010), and more omission (p < 0.020) and perseveration errors (p < 0.018) than non-MCI patients. The attenuation of a recency effect using serial order parameters obtained from the BDT may provide a useful operational definition as well as additional diagnostic information regarding working memory deficits in MCI.

  5. Quantifying differences in the impact of variable chemistry on equilibrium uranium(VI) adsorption properties of aquifer sediments

    USGS Publications Warehouse

    Stoliker, Deborah L.; Kent, Douglas B.; Zachara, John M.

    2011-01-01

    Uranium adsorption-desorption on sediment samples collected from the Hanford 300-Area, Richland, WA varied extensively over a range of field-relevant chemical conditions, complicating assessment of possible differences in equilibrium adsorption properties. Adsorption equilibrium was achieved in 500-1000 h although dissolved uranium concentrations increased over thousands of hours owing to changes in aqueous chemical composition driven by sediment-water reactions. A nonelectrostatic surface complexation reaction, >SOH + UO2²⁺ + 2CO3²⁻ = >SOUO2(CO3HCO3)²⁻, provided the best fit to experimental data for each sediment sample, resulting in a range of conditional equilibrium constants (logKc) from 21.49 to 21.76. Potential differences in uranium adsorption properties could be assessed in plots based on the generalized mass-action expressions yielding linear trends displaced vertically by differences in logKc values. Using this approach, logKc values for seven sediment samples were not significantly different. However, a significant difference in adsorption properties between one sediment sample and the fines (Kc uncertainty were improved by capturing all data points within experimental errors. The mass-action expression plots demonstrate that applying models outside the range of conditions used in model calibration greatly increases potential errors.
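
    For the surface complexation reaction quoted above, the conditional equilibrium constant takes the usual mass-action form; the rearrangement below is a standard one (with activity corrections folded into the conditional constant), not a formula quoted from the paper.

    ```latex
    K_c \;=\;
    \frac{\big[{>}\mathrm{SOUO_2(CO_3HCO_3)^{2-}}\big]}
         {\big[{>}\mathrm{SOH}\big]\,\big[\mathrm{UO_2^{2+}}\big]\,\big[\mathrm{CO_3^{2-}}\big]^{2}}
    \qquad\Longrightarrow\qquad
    \log\frac{\big[{>}\mathrm{SOUO_2(CO_3HCO_3)^{2-}}\big]}{\big[{>}\mathrm{SOH}\big]}
    \;=\; \log K_c + \log\big[\mathrm{UO_2^{2+}}\big] + 2\log\big[\mathrm{CO_3^{2-}}\big]
    ```

    Plotting the left-hand side of the second expression against log[UO2²⁺] + 2 log[CO3²⁻] gives the linear trends, displaced vertically by differences in logKc, that the abstract describes.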

  6. PAPER-CHROMATOGRAM MEASUREMENT OF SUBSTANCES LABELLED WITH H$sup 3$ (in German)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wenzel, M.

    1961-03-01

    Compounds labelled with H³ can be detected with a paper chromatogram using a methane flow counter with a count yield of 1%. The yield can be estimated from the beta maximum energy. A new double counter was developed which increases the count yield to 2% and also considerably decreases the margin of error. Calibration curves with leucine and glucosamine show satisfactory linearity between measured and applied activity in the range from 4 to 50 × 10⁻³ μc of H³. (auth)

  7. Explosion yield estimation from pressure wave template matching

    PubMed Central

    Arrowsmith, Stephen; Bowman, Daniel

    2017-01-01

    A method for estimating the yield of explosions from shock-wave and acoustic-wave measurements is presented. The method exploits full waveforms by comparing pressure measurements against an empirical stack of prior observations using scaling laws. The approach can be applied to measurements across a wide-range of source-to-receiver distances. The method is applied to data from two explosion experiments in different regions, leading to mean relative errors in yield estimates of 0.13 using prior data from the same region, and 0.2 when applied to a new region. PMID:28618805

  8. Effect of retinal defocus on basketball free throw shooting performance.

    PubMed

    Bulson, Ryan C; Ciuffreda, Kenneth J; Hayes, John; Ludlam, Diana P

    2015-07-01

    Vision plays a critical role in athletic performance; however, previous studies have demonstrated that a variety of simulated athletic sensorimotor tasks can be surprisingly resilient to retinal defocus (blurred vision). The purpose of the present study was to extend this work to determine the effect of retinal defocus on overall basketball free throw performance, as well as for the factors gender, refractive error and experience. Forty-four young adult participants of both genders were recruited. They had a range of refractive errors and basketball experience. Each performed 20 standard basketball free throws under five lens defocus conditions in a randomised manner: plano, +1.50 D, +3.00 D, +4.50 D and +10.00 D. Overall, free throw performance was significantly reduced under the +10.00 D lens defocus condition only. Previous experience, but neither refractive error nor gender, yielded a statistically significant difference in performance. Consistent with previous studies of complex sensorimotor tasks, basketball free throw performance was resilient to low and moderate levels of retinal defocus. Thus, for a relatively non-dynamic motor task at a fixed far distance, such as the basketball free throw, precise visual clarity was not critical. Other factors such as motor memory may be important. However, in the dynamic athletic competitive environment it is likely that visual clarity plays a more critical role in one's performance level, at least for specific task demands. © 2015 The Authors. Clinical and Experimental Optometry © 2015 Optometry Australia.

  9. The compression–error trade-off for large gridded data sets

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Silver, Jeremy D.; Zender, Charles S.

    The netCDF-4 format is widely used for large gridded scientific data sets and includes several compression methods: lossy linear scaling and the non-lossy deflate and shuffle algorithms. Many multidimensional geoscientific data sets exhibit considerable variation over one or several spatial dimensions (e.g., vertically) with less variation in the remaining dimensions (e.g., horizontally). On such data sets, linear scaling with a single pair of scale and offset parameters often entails considerable loss of precision. We introduce an alternative compression method called "layer-packing" that simultaneously exploits lossy linear scaling and lossless compression. Layer-packing stores arrays (instead of a scalar pair) of scale and offset parameters. An implementation of this method is compared with lossless compression, storing data at fixed relative precision (bit-grooming) and scalar linear packing in terms of compression ratio, accuracy and speed. When viewed as a trade-off between compression and error, layer-packing yields similar results to bit-grooming (storing between 3 and 4 significant figures). Bit-grooming and layer-packing offer significantly better control of precision than scalar linear packing. Relative performance, in terms of compression and errors, of bit-groomed and layer-packed data was strongly predicted by the entropy of the exponent array, and lossless compression was well predicted by entropy of the original data array. Layer-packed data files must be "unpacked" to be readily usable. The compression and precision characteristics make layer-packing a competitive archive format for many scientific data sets.

  10. The compression–error trade-off for large gridded data sets

    DOE PAGES

    Silver, Jeremy D.; Zender, Charles S.

    2017-01-27

    The netCDF-4 format is widely used for large gridded scientific data sets and includes several compression methods: lossy linear scaling and the non-lossy deflate and shuffle algorithms. Many multidimensional geoscientific data sets exhibit considerable variation over one or several spatial dimensions (e.g., vertically) with less variation in the remaining dimensions (e.g., horizontally). On such data sets, linear scaling with a single pair of scale and offset parameters often entails considerable loss of precision. We introduce an alternative compression method called "layer-packing" that simultaneously exploits lossy linear scaling and lossless compression. Layer-packing stores arrays (instead of a scalar pair) of scale and offset parameters. An implementation of this method is compared with lossless compression, storing data at fixed relative precision (bit-grooming) and scalar linear packing in terms of compression ratio, accuracy and speed. When viewed as a trade-off between compression and error, layer-packing yields similar results to bit-grooming (storing between 3 and 4 significant figures). Bit-grooming and layer-packing offer significantly better control of precision than scalar linear packing. Relative performance, in terms of compression and errors, of bit-groomed and layer-packed data was strongly predicted by the entropy of the exponent array, and lossless compression was well predicted by entropy of the original data array. Layer-packed data files must be "unpacked" to be readily usable. The compression and precision characteristics make layer-packing a competitive archive format for many scientific data sets.

  11. LACIE--An Application of Meteorology for United States and Foreign Wheat Assessment.

    NASA Astrophysics Data System (ADS)

    Hill, Jerry D.; Strommen, Norton D.; Sakamoto, Clarence M.; Leduc, Sharon K.

    1980-01-01

    The development of a critical world food situation during the early 1970's was the background leading to the Large Area Crop Inventory Experiment (LACIE). The need was to develop a capability for timely monitoring of crops on a global scale. Three U.S. Government agencies, NASA, NOAA and USDA, undertook the task of developing technology to extract the crop-related information available from the global weather-reporting network and the Landsat satellite. This paper describes the overall LACIE technical approach to make a quasi-operational application of existing research results and the accomplishments of this cooperative experiment in utilizing the weather information. Using available agrometeorological data, techniques were implemented to estimate crop development, assess relative crop vigor and estimate yield for wheat, the crop of principal interest to the experiment. Global weather data were utilized in preparing timely yield estimates for selected areas of the U.S. Great Plains, the U.S.S.R. and Canada. Additionally, wheat yield models were developed and pilot tested for Brazil, Australia, India and Argentina. The results of the work show that heading dates for wheat in North America can be predicted with an average absolute error of about 5 days for winter wheat and 4 days for spring wheat. Independent tests of wheat yield models over a 10-year period for the U.S. Great Plains produced a root-mean-square error of 1.12 quintals per hectare (q ha⁻¹) while similar tests in the U.S.S.R. produced an error of 1.31 q ha⁻¹. Research designed to improve the initial capability is described as is the rationale for further evolution of a capability to monitor global climate and assess its impact on world food supplies.

  12. Online automatic tuning and control for fed-batch cultivation

    PubMed Central

    van Straten, Gerrit; van der Pol, Leo A.; van Boxtel, Anton J. B.

    2007-01-01

    Performance of controllers applied in biotechnological production is often below expectation. Online automatic tuning has the capability to improve control performance by adjusting control parameters. This work presents automatic tuning approaches for model reference specific growth rate control during fed-batch cultivation. The approaches are direct methods that use the error between observed specific growth rate and its set point; systematic perturbations of the cultivation are not necessary. Two automatic tuning methods proved to be efficient, in which the adaptation rate is based on a combination of the error, squared error and integral error. These methods are relatively simple and robust against disturbances, parameter uncertainties, and initialization errors. Application of the specific growth rate controller yields a stable system. The controller and automatic tuning methods are qualified by simulations and laboratory experiments with Bordetella pertussis. PMID:18157554
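
    A minimal sketch of an adaptation law in the spirit described above, adjusting a controller gain from a combination of the tracking error, a signed squared error, and the integral error. The coefficients, the toy process, and the update form are illustrative assumptions, not the authors' controller or tuning method.

    ```python
    def adapt_gain(gain, error, integral_error, g1=0.05, g2=0.01, g3=0.02, dt=0.1):
        """One adaptation step for a controller gain, combining the tracking error,
        a signed squared error, and the integral error. Coefficients are illustrative."""
        delta = g1 * error + g2 * error * abs(error) + g3 * integral_error
        return gain + dt * delta

    # Toy loop: specific growth rate mu tracking a set point mu_sp.
    mu_sp, mu, gain, integral_error = 0.10, 0.06, 1.0, 0.0
    for _ in range(200):
        error = mu_sp - mu
        integral_error += error * 0.1
        gain = adapt_gain(gain, error, integral_error)
        mu += 0.1 * gain * error       # crude stand-in for controller + process
    print(round(mu, 4), round(gain, 3))
    ```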

  13. A second-order 3D electromagnetics algorithm for curved interfaces between anisotropic dielectrics on a Yee mesh

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bauer, Carl A., E-mail: bauerca@colorado.ed; Werner, Gregory R.; Cary, John R.

    A new frequency-domain electromagnetics algorithm is developed for simulating curved interfaces between anisotropic dielectrics embedded in a Yee mesh with second-order error in resonant frequencies. The algorithm is systematically derived using the finite integration formulation of Maxwell's equations on the Yee mesh. Second-order convergence of the error in resonant frequencies is achieved by guaranteeing first-order error on dielectric boundaries and second-order error in bulk (possibly anisotropic) regions. Convergence studies, conducted for an analytically solvable problem and for a photonic crystal of ellipsoids with anisotropic dielectric constant, both show second-order convergence of frequency error; the convergence is sufficiently smooth that Richardson extrapolation yields roughly third-order convergence. The convergence of electric fields near the dielectric interface for the analytic problem is also presented.

  14. False consensus and adolescent peer contagion: examining discrepancies between perceptions and actual reported levels of friends' deviant and health risk behaviors.

    PubMed

    Prinstein, Mitchell J; Wang, Shirley S

    2005-06-01

    Adolescents' perceptions of their friends' behavior strongly predict adolescents' own behavior; however, these perceptions are often erroneous. This study examined correlates of discrepancies between adolescents' perceptions and friends' reports of behavior. A total of 120 11th-grade adolescents provided data regarding their engagement in deviant and health risk behaviors, as well as their perceptions of the behavior of their best friend, as identified through sociometric assessment. Data from friends' own report were used to calculate discrepancy measures of adolescents' overestimations and estimation errors (absolute value of discrepancies) of friends' behavior. Adolescents also completed a measure of friendship quality, and a sociometric assessment yielding measures of peer acceptance/rejection and aggression. Findings revealed that adolescents' peer rejection and aggression were associated with greater overestimations of friends' behavior. This effect was partially mediated by adolescents' own behavior, consistent with a false consensus effect. Low levels of positive friendship quality were significantly associated with estimation errors, but not overestimations specifically.

  15. Contrast improvement of continuous wave diffuse optical tomography reconstruction by hybrid approach using least square and genetic algorithm

    NASA Astrophysics Data System (ADS)

    Patra, Rusha; Dutta, Pranab K.

    2015-07-01

    Reconstruction of the absorption coefficient of tissue with good contrast is of key importance in functional diffuse optical imaging. A hybrid approach using model-based iterative image reconstruction and a genetic algorithm is proposed to enhance the contrast of the reconstructed image. The proposed method yields an observed contrast of 98.4%, mean square error of 0.638×10⁻³, and object centroid error of (0.001 to 0.22) mm. Experimental validation of the proposed method has also been provided with tissue-like phantoms which shows a significant improvement in image quality and thus establishes the potential of the method for functional diffuse optical tomography reconstruction with continuous wave setup. A case study of finger joint imaging is illustrated as well to show the prospect of the proposed method in clinical diagnosis. The method can also be applied to the concentration measurement of a region of interest in a turbid medium.

  16. Objective sea level pressure analysis for sparse data areas

    NASA Technical Reports Server (NTRS)

    Druyan, L. M.

    1972-01-01

    A computer procedure was used to analyze the pressure distribution over the North Pacific Ocean for eleven synoptic times in February, 1967. Independent knowledge of the central pressures of lows is shown to reduce the analysis errors for very sparse data coverage. The application of planned remote sensing of sea-level wind speeds is shown to make a significant contribution to the quality of the analysis especially in the high gradient mid-latitudes and for sparse coverage of conventional observations (such as over Southern Hemisphere oceans). Uniform distribution of the available observations of sea-level pressure and wind velocity yields results far superior to those derived from a random distribution. A generalization of the results indicates that the average lower limit for analysis errors is between 2 and 2.5 mb based on the perfect specification of the magnitude of the sea-level pressure gradient from a known verification analysis. A less than perfect specification will derive from wind-pressure relationships applied to satellite observed wind speeds.

  17. Automated spike sorting algorithm based on Laplacian eigenmaps and k-means clustering.

    PubMed

    Chah, E; Hok, V; Della-Chiesa, A; Miller, J J H; O'Mara, S M; Reilly, R B

    2011-02-01

    This study presents a new automatic spike sorting method based on feature extraction by Laplacian eigenmaps combined with k-means clustering. The performance of the proposed method was compared against previously reported algorithms such as principal component analysis (PCA) and amplitude-based feature extraction. Two types of classifier (namely k-means and classification expectation-maximization) were incorporated within the spike sorting algorithms, in order to find a suitable classifier for the feature sets. Simulated data sets and in-vivo tetrode multichannel recordings were employed to assess the performance of the spike sorting algorithms. The results show that the proposed algorithm yields significantly improved performance, with a mean sorting accuracy of 73% and sorting error of 10%, compared to PCA combined with k-means, which had a sorting accuracy of 58% and sorting error of 10%.
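
    A minimal sketch of the feature-extraction-plus-clustering pipeline described above, using scikit-learn's spectral embedding (Laplacian eigenmaps) and k-means on synthetic spike waveforms. The waveform model, noise level, and parameters are illustrative, not the study's recordings or its exact algorithm.

    ```python
    import numpy as np
    from sklearn.manifold import SpectralEmbedding
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(0)

    # Hypothetical spike waveforms: 3 units with different amplitudes, 32 samples each.
    templates = np.stack([np.sin(np.linspace(0, np.pi, 32)) * a for a in (1.0, 1.8, 2.6)])
    labels_true = rng.integers(0, 3, size=300)
    waveforms = templates[labels_true] + rng.normal(scale=0.15, size=(300, 32))

    # Feature extraction by Laplacian eigenmaps (spectral embedding), then k-means.
    features = SpectralEmbedding(n_components=2, n_neighbors=10).fit_transform(waveforms)
    labels_pred = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(features)

    # Contingency table of true units vs. predicted clusters.
    table = np.zeros((3, 3), dtype=int)
    for t_lab, p_lab in zip(labels_true, labels_pred):
        table[t_lab, p_lab] += 1
    print(table)
    ```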

  18. Passive measurement-device-independent quantum key distribution with orbital angular momentum and pulse position modulation

    NASA Astrophysics Data System (ADS)

    Wang, Lian; Zhou, Yuan-yuan; Zhou, Xue-jun; Chen, Xiao

    2018-03-01

    Based on the orbital angular momentum and pulse position modulation, we present a novel passive measurement-device-independent quantum key distribution (MDI-QKD) scheme with the two-mode source. Combined with the tight bounds of the yield and error rate of single-photon pairs given in our paper, we conduct performance analysis on the scheme with a heralded single-photon source. The numerical simulations show that the performance of our scheme is significantly superior to the traditional MDI-QKD in the error rate, key generation rate and secure transmission distance, since the application of orbital angular momentum and pulse position modulation can exclude the basis-dependent flaw and increase the information content for each single photon. Moreover, the performance is improved with the rise of the frame length. Therefore, our scheme, without intensity modulation, avoids the source side channels and enhances the key generation rate. It has great utility value in MDI-QKD setups.

  19. Analysis of Factors Influencing Measurement Accuracy of Al Alloy Tensile Test Results

    NASA Astrophysics Data System (ADS)

    Podgornik, Bojan; Žužek, Borut; Sedlaček, Marko; Kevorkijan, Varužan; Hostej, Boris

    2016-02-01

    In order to properly use materials in design, a complete understanding of and information on their mechanical properties, such as yield and ultimate tensile strength must be obtained. Furthermore, as the design of automotive parts is constantly pushed toward higher limits, excessive measuring uncertainty can lead to unexpected premature failure of the component, thus requiring reliable determination of material properties with low uncertainty. The aim of the present work was to evaluate the effect of different metrology factors, including the number of tested samples, specimens machining and surface quality, specimens input diameter, type of testing and human error on the tensile test results and measurement uncertainty when performed on 2xxx series Al alloy. Results show that the most significant contribution to measurement uncertainty comes from the number of samples tested, which can even exceed 1 %. Furthermore, moving from experimental laboratory conditions to very intense industrial environment further amplifies measurement uncertainty, where even if using automated systems human error cannot be neglected.
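
    The dependence of measurement uncertainty on the number of tested samples can be illustrated with a standard GUM-style Type A evaluation. A minimal sketch with hypothetical yield-strength results, not data from the study:

    ```python
    import numpy as np
    from scipy import stats

    def type_a_uncertainty(measurements, coverage=0.95):
        """Standard uncertainty of the mean from repeated tensile tests, plus an
        expanded uncertainty using a t-based coverage factor (GUM Type A evaluation)."""
        x = np.asarray(measurements, dtype=float)
        n = x.size
        u = x.std(ddof=1) / np.sqrt(n)                    # standard uncertainty of the mean
        k = stats.t.ppf(0.5 + coverage / 2.0, df=n - 1)   # coverage factor
        return x.mean(), u, k * u

    # Hypothetical yield-strength results (MPa) for two sample sizes.
    small = [452.1, 449.8, 455.0]
    large = small + [451.2, 453.6, 450.4, 454.1, 452.9, 448.7, 451.8]
    for data in (small, large):
        mean, u, U = type_a_uncertainty(data)
        print(f"n={len(data)}: mean={mean:.1f} MPa, u={u:.2f}, "
              f"U(95%)={U:.2f} ({100 * U / mean:.2f} %)")
    ```

    With only three specimens, the t-based coverage factor alone (k ≈ 4.3) pushes the expanded uncertainty in this toy example well past the 1 % level noted above, while ten specimens bring it down substantially.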

  20. A spectral-spatial-dynamic hierarchical Bayesian (SSD-HB) model for estimating soybean yield

    NASA Astrophysics Data System (ADS)

    Kazama, Yoriko; Kujirai, Toshihiro

    2014-10-01

    A method called a "spectral-spatial-dynamic hierarchical-Bayesian (SSD-HB) model," which can deal with many parameters (such as spectral and weather information together) by reducing the occurrence of multicollinearity, is proposed. Experiments conducted on soybean fields in Brazil with a RapidEye satellite image indicate that the proposed SSD-HB model can predict soybean yield with a higher degree of accuracy than other estimation methods commonly used in remote-sensing applications. In the case of the SSD-HB model, the mean absolute error between the estimated yield of the target area and the actual yield is 0.28 t/ha, compared to 0.34 t/ha when conventional PLS regression was applied, showing the potential effectiveness of the proposed model.

  1. Quantization error of CCD cameras and their influence on phase calculation in fringe pattern analysis.

    PubMed

    Skydan, Oleksandr A; Lilley, Francis; Lalor, Michael J; Burton, David R

    2003-09-10

    We present an investigation into the phase errors that occur in fringe pattern analysis that are caused by quantization effects. When acquisition devices with a limited value of camera bit depth are used, there are a limited number of quantization levels available to record the signal. This may adversely affect the recorded signal and add a potential source of instrumental error to the measurement system. Quantization effects also determine the accuracy that may be achieved by acquisition devices in a measurement system. We used the Fourier fringe analysis measurement technique. However, the principles can be applied equally well to other phase-measuring techniques to yield a phase error distribution that is caused by the camera bit depth.
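    A small sketch, under assumed fringe parameters, of how a limited camera bit depth propagates into the phase recovered by a Fourier method: a synthetic fringe signal is quantized at several bit depths and the resulting phase error is compared with the unquantized case.

```python
# Sketch: phase error introduced by camera bit depth when recovering fringe phase
# with a Fourier method. Synthetic 1-D fringe signal; parameters are illustrative.
import numpy as np

def fringe(n, freq, phase):
    x = np.arange(n)
    return 0.5 + 0.5 * np.cos(2 * np.pi * freq * x / n + phase)

def fft_phase(signal, freq):
    """Phase of the selected fringe frequency taken from the FFT spectrum."""
    return np.angle(np.fft.rfft(signal)[freq])

n, freq, true_phase = 1024, 32, 0.7
clean = fringe(n, freq, true_phase)

for bits in (4, 6, 8, 10, 12):
    levels = 2 ** bits - 1
    quantized = np.round(clean * levels) / levels      # limited quantization levels
    err = fft_phase(quantized, freq) - fft_phase(clean, freq)
    print(f"{bits:2d}-bit camera: phase error ~ {err:+.2e} rad")
```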

  2. The use of a tablet computer to complete the DASH questionnaire.

    PubMed

    Dy, Christopher J; Schmicker, Thomas; Tran, Quynh; Chadwick, Brian; Daluiski, Aaron

    2012-12-01

    To determine whether electronic self-administration of the Disabilities of the Arm, Shoulder, and Hand (DASH) questionnaire using a tablet computer increased completion rate compared with paper self-administration. We gave the DASH in self-administered paper form to 222 new patients in a single hand surgeon's practice. After a washout period of 5 weeks, we gave the DASH in self-administered tablet computer form to 264 new patients. A maximum of 3 questions could be omitted before the questionnaire was considered unscorable. We reviewed the submitted surveys to determine the number of scorable questionnaires and the number of omitted questions in each survey. We completed univariate analysis and regression modeling to determine the influence of survey administration type on respondent error while controlling for patient age and sex. Of the 486 total surveys, 60 (12%) were not scorable. A significantly higher proportion of the paper surveys (24%) were unscorable compared with electronic surveys (2%), with significantly more questions omitted in each paper survey (2.6 ± 4.4 questions) than in each electronic survey (0.1 ± 0.8 questions). Logistic regression analysis revealed survey administration mode to be significantly associated with DASH scorability while controlling for age and sex, with electronic survey administration being 14 times more likely than paper administration to yield a scorable DASH. In our retrospective series, electronic self-administration of the DASH decreased the number of omitted questions and yielded a higher number of scorable questionnaires. Prospective, randomized evaluation is needed to better delineate the effect of survey administration on respondent error. Administration of the DASH with a tablet computer may be beneficial for both clinical and research endeavors to increase completion rate and to gain other benefits from electronic data capture. Copyright © 2012 American Society for Surgery of the Hand. Published by Elsevier Inc. All rights reserved.
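    A minimal sketch of the kind of logistic regression described above (scorability regressed on administration mode while controlling for age and sex), fitted to synthetic data with an assumed effect size; it is not the study's data or model output.

```python
# Sketch: logistic regression of questionnaire scorability on administration mode,
# controlling for age and sex. Synthetic data; the effect sizes are assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
n = 486
electronic = rng.integers(0, 2, n)            # 1 = tablet, 0 = paper
age = rng.normal(55, 15, n)
male = rng.integers(0, 2, n)

# Assumed data-generating process: electronic administration raises scorability.
logit = 1.0 + 2.5 * electronic - 0.01 * (age - 55) + 0.1 * male
scorable = rng.random(n) < 1 / (1 + np.exp(-logit))

X = np.column_stack([electronic, age, male])
model = LogisticRegression(C=1e6).fit(X, scorable)   # large C: almost no shrinkage
odds_ratio = np.exp(model.coef_[0][0])
print(f"odds ratio for electronic vs paper administration: {odds_ratio:.1f}")
```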

  3. MR Imaging Radiomics Signatures for Predicting the Risk of Breast Cancer Recurrence as Given by Research Versions of MammaPrint, Oncotype DX, and PAM50 Gene Assays.

    PubMed

    Li, Hui; Zhu, Yitan; Burnside, Elizabeth S; Drukker, Karen; Hoadley, Katherine A; Fan, Cheng; Conzen, Suzanne D; Whitman, Gary J; Sutton, Elizabeth J; Net, Jose M; Ganott, Marie; Huang, Erich; Morris, Elizabeth A; Perou, Charles M; Ji, Yuan; Giger, Maryellen L

    2016-11-01

    Purpose To investigate relationships between computer-extracted breast magnetic resonance (MR) imaging phenotypes with multigene assays of MammaPrint, Oncotype DX, and PAM50 to assess the role of radiomics in evaluating the risk of breast cancer recurrence. Materials and Methods Analysis was conducted on an institutional review board-approved retrospective data set of 84 deidentified, multi-institutional breast MR examinations from the National Cancer Institute Cancer Imaging Archive, along with clinical, histopathologic, and genomic data from The Cancer Genome Atlas. The data set of biopsy-proven invasive breast cancers included 74 (88%) ductal, eight (10%) lobular, and two (2%) mixed cancers. Of these, 73 (87%) were estrogen receptor positive, 67 (80%) were progesterone receptor positive, and 19 (23%) were human epidermal growth factor receptor 2 positive. For each case, computerized radiomics of the MR images yielded computer-extracted tumor phenotypes of size, shape, margin morphology, enhancement texture, and kinetic assessment. Regression and receiver operating characteristic analysis were conducted to assess the predictive ability of the MR radiomics features relative to the multigene assay classifications. Results Multiple linear regression analyses demonstrated significant associations (R 2 = 0.25-0.32, r = 0.5-0.56, P < .0001) between radiomics signatures and multigene assay recurrence scores. Important radiomics features included tumor size and enhancement texture, which indicated tumor heterogeneity. Use of radiomics in the task of distinguishing between good and poor prognosis yielded area under the receiver operating characteristic curve values of 0.88 (standard error, 0.05), 0.76 (standard error, 0.06), 0.68 (standard error, 0.08), and 0.55 (standard error, 0.09) for MammaPrint, Oncotype DX, PAM50 risk of relapse based on subtype, and PAM50 risk of relapse based on subtype and proliferation, respectively, with all but the latter showing statistical difference from chance. Conclusion Quantitative breast MR imaging radiomics shows promise for image-based phenotyping in assessing the risk of breast cancer recurrence. © RSNA, 2016 Online supplemental material is available for this article.
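    As an illustration of the ROC analysis step, the sketch below computes an area under the ROC curve and a bootstrap standard error for a synthetic "radiomics score" against a synthetic prognosis label; the sample size, data and effect size are assumptions, not the study's values.

```python
# Sketch: ROC AUC with a bootstrap standard error for distinguishing good vs poor
# prognosis from a radiomics score. Synthetic scores and labels only.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(3)
n = 84
poor = rng.integers(0, 2, n)                              # 1 = poor-prognosis class
score = 0.8 * poor + rng.standard_normal(n)               # radiomics-signature stand-in

auc = roc_auc_score(poor, score)
boot = []
for _ in range(1000):
    idx = rng.integers(0, n, n)
    if len(np.unique(poor[idx])) == 2:                    # need both classes present
        boot.append(roc_auc_score(poor[idx], score[idx]))
print(f"AUC = {auc:.2f} (bootstrap SE = {np.std(boot):.2f})")
```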

  4. Development of a traveltime prediction equation for streams in Arkansas

    USGS Publications Warehouse

    Funkhouser, Jaysson E.; Barks, C. Shane

    2004-01-01

    During 1971-1981 and 2001-2003, traveltime measurements were made at 33 sample sites on 18 streams throughout northern and western Arkansas using fluorescent dye. Most measurements were made during steady-state base-flow conditions, with the exception of three measurements made during near steady-state medium-flow conditions (for the study described in this report, medium flow is approximately 100-150 percent of the mean monthly streamflow during the month the dye trace was conducted). These traveltime data were compared to the U.S. Geological Survey's national traveltime prediction equation and used to develop a specific traveltime prediction equation for Arkansas streams. In general, the national traveltime prediction equation over-predicted the velocity of the streams for 29 of the 33 sites measured. The standard error for the national traveltime prediction equation was 105 percent and the coefficient of determination was 0.78. The Arkansas prediction equation developed from a regression analysis of dye-tracing results was a significant improvement over the national prediction equation, yielding a standard error of 46 percent and a coefficient of determination of 0.74. The predicted velocities from this equation compared better with measured velocities. Using the variables in a regression analysis, the Arkansas prediction equation derived for the peak velocity in feet per second was: (Actual Equation Shown in report) In addition to knowing when the peak concentration will arrive at a site, it is of great interest to know when the leading edge of a contaminant plume will arrive. The traveltime of the leading edge of a contaminant plume indicates when a potential problem might first develop and also defines the overall shape of the concentration response function. Previous USGS reports have shown no significant relation between any of the variables and the time from injection to the arrival of the leading edge of the dye plume. For this report, the analysis of the dye-tracing data yielded a significant correlation between traveltime of the leading edge and traveltime of the peak concentration, with an R2 value of 0.99. These data indicate that the traveltime of the leading edge can be estimated from: (Actual Equation Shown in Report)

  5. Effects of parallel planning on agreement production.

    PubMed

    Veenstra, Alma; Meyer, Antje S; Acheson, Daniel J

    2015-11-01

    An important issue in current psycholinguistics is how the time course of utterance planning affects the generation of grammatical structures. The current study investigated the influence of parallel activation of the components of complex noun phrases on the generation of subject-verb agreement. Specifically, the lexical interference account (Gillespie & Pearlmutter, 2011b; Solomon & Pearlmutter, 2004) predicts more agreement errors (i.e., attraction) for subject phrases in which the head and local noun mismatch in number (e.g., the apple next to the pears) when nouns are planned in parallel than when they are planned in sequence. We used a speeded picture description task that yielded sentences such as the apple next to the pears is red. The objects mentioned in the noun phrase were either semantically related or unrelated. To induce agreement errors, pictures sometimes mismatched in number. In order to manipulate the likelihood of parallel processing of the objects and to test the hypothesized relationship between parallel processing and the rate of agreement errors, the pictures were either placed close together or far apart. Analyses of the participants' eye movements and speech onset latencies indicated slower processing of the first object and stronger interference from the related (compared to the unrelated) second object in the close than in the far condition. Analyses of the agreement errors yielded an attraction effect, with more errors in mismatching than in matching conditions. However, the magnitude of the attraction effect did not differ across the close and far conditions. Thus, spatial proximity encouraged parallel processing of the pictures, which led to interference of the associated conceptual and/or lexical representation, but, contrary to the prediction, it did not lead to more attraction errors. Copyright © 2015 Elsevier B.V. All rights reserved.

  6. An analysis of cropland mask choice and ancillary data for annual corn yield forecasting using MODIS data

    NASA Astrophysics Data System (ADS)

    Shao, Yang; Campbell, James B.; Taff, Gregory N.; Zheng, Baojuan

    2015-06-01

    The Midwestern United States is one of the world's most important corn-producing regions. Monitoring and forecasting of corn yields in this intensive agricultural region are important activities to support food security, commodity markets, bioenergy industries, and formation of national policies. This study aims to develop forecasting models that have the capability to provide mid-season prediction of county-level corn yields for the entire Midwestern United States. We used multi-temporal MODIS NDVI (normalized difference vegetation index) 16-day composite data as the primary input, with digital elevation model (DEM) and parameter-elevation relationships on independent slopes model (PRISM) climate data as additional inputs. The DEM and PRISM data, along with three types of cropland masks, were tested and compared to evaluate their impacts on model predictive accuracy. Our results suggested that the use of general cropland masks (e.g., summer crop or cultivated crops) generated results similar to those obtained with an annual corn-specific mask. Leave-one-year-out cross-validation resulted in an average R2 of 0.75 and RMSE value of 1.10 t/ha. Using a DEM as an additional model input slightly improved performance, while inclusion of PRISM climate data appeared not to be important for our regional corn-yield model. Furthermore, our model has potential for real-time/early prediction. Our corn yield estimates are available as early as late July, which is an improvement upon previous corn-yield prediction models. In addition to annual corn yield forecasting, we examined model uncertainties through spatial and temporal analysis of the model's predictive error distribution. The magnitude of predictive error (by county) appears to be associated with the spatial patterns of corn fields in the study area.
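    A sketch of leave-one-year-out cross-validation for a yield model driven by multi-temporal NDVI features, using synthetic data and a ridge regression stand-in for the study's model; the shapes, years and regression choice are assumptions.

```python
# Sketch: leave-one-year-out cross-validation of a county-level yield model driven
# by multi-temporal NDVI features. Synthetic data; shapes are illustrative.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import LeaveOneGroupOut

rng = np.random.default_rng(4)
n_counties, n_years, n_ndvi = 200, 8, 10          # 16-day NDVI composites as features

years = np.repeat(np.arange(2006, 2006 + n_years), n_counties)
ndvi = rng.random((n_counties * n_years, n_ndvi))
yield_tha = 7 + 5 * ndvi.mean(axis=1) + 0.3 * rng.standard_normal(len(years))

r2s, rmses = [], []
for train, test in LeaveOneGroupOut().split(ndvi, yield_tha, groups=years):
    model = Ridge(alpha=1.0).fit(ndvi[train], yield_tha[train])
    pred = model.predict(ndvi[test])
    rmses.append(np.sqrt(np.mean((pred - yield_tha[test]) ** 2)))
    r2s.append(model.score(ndvi[test], yield_tha[test]))
print(f"mean R2 = {np.mean(r2s):.2f}, mean RMSE = {np.mean(rmses):.2f} t/ha")
```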

  7. Hybrid computer technique yields random signal probability distributions

    NASA Technical Reports Server (NTRS)

    Cameron, W. D.

    1965-01-01

    Hybrid computer determines the probability distributions of instantaneous and peak amplitudes of random signals. This combined digital and analog computer system reduces the errors and delays of manual data analysis.

  8. Developing a scalable model of recombinant protein yield from Pichia pastoris: the influence of culture conditions, biomass and induction regime

    PubMed Central

    Holmes, William J; Darby, Richard AJ; Wilks, Martin DB; Smith, Rodney; Bill, Roslyn M

    2009-01-01

    Background The optimisation and scale-up of process conditions leading to high yields of recombinant proteins is an enduring bottleneck in the post-genomic sciences. Typical experiments rely on varying selected parameters through repeated rounds of trial-and-error optimisation. To rationalise this, several groups have recently adopted the 'design of experiments' (DoE) approach frequently used in industry. Studies have focused on parameters such as medium composition, nutrient feed rates and induction of expression in shake flasks or bioreactors, as well as oxygen transfer rates in micro-well plates. In this study we wanted to generate a predictive model that described small-scale screens and to test its scalability to bioreactors. Results Here we demonstrate how the use of a DoE approach in a multi-well mini-bioreactor permitted the rapid establishment of high yielding production phase conditions that could be transferred to a 7 L bioreactor. Using green fluorescent protein secreted from Pichia pastoris, we derived a predictive model of protein yield as a function of the three most commonly-varied process parameters: temperature, pH and the percentage of dissolved oxygen in the culture medium. Importantly, when yield was normalised to culture volume and density, the model was scalable from mL to L working volumes. By increasing pre-induction biomass accumulation, model-predicted yields were further improved. Yield improvement was most significant, however, on varying the fed-batch induction regime to minimise methanol accumulation so that the productivity of the culture increased throughout the whole induction period. These findings suggest the importance of matching the rate of protein production with the host metabolism. Conclusion We demonstrate how a rational, stepwise approach to recombinant protein production screens can reduce process development time. PMID:19570229

  9. Errors in MR-based attenuation correction for brain imaging with PET/MR scanners

    NASA Astrophysics Data System (ADS)

    Rota Kops, Elena; Herzog, Hans

    2013-02-01

    Aim: Attenuation correction of PET data acquired by hybrid MR/PET scanners remains a challenge, even if several methods for brain and whole-body measurements have been developed recently. A template-based attenuation correction for brain imaging proposed by our group is easy to handle and delivers reliable attenuation maps in a short time. However, some potential error sources are analyzed in this study. We investigated the choice of the template reference head among all the available data (error A), and possible skull anomalies of the specific patient, such as discontinuities due to surgery (error B). Materials and methods: An anatomical MR measurement and a 2-bed-position transmission scan covering the whole head and neck region were performed in eight normal subjects (4 females, 4 males). Error A: Taking alternatively one of the eight heads as reference, eight different templates were created by nonlinearly registering the images to the reference and calculating the average. Eight patients (4 females, 4 males; 4 with brain lesions, 4 w/o brain lesions) were measured in the Siemens BrainPET/MR scanner. The eight templates were used to generate the patients' attenuation maps required for reconstruction. ROI and VOI atlas-based comparisons were performed employing all the reconstructed images. Error B: CT-based attenuation maps of two volunteers were manipulated by manually inserting several skull lesions and filling a nasal cavity. The corresponding attenuation coefficients were substituted with the coefficient of water (0.096/cm). Results: Error A: The mean SUVs over the eight template pairs for all eight patients and all VOIs did not differ significantly from each other. Standard deviations up to 1.24% were found. Error B: After reconstruction of the volunteers' BrainPET data with the CT-based attenuation maps without and with skull anomalies, a VOI-atlas analysis was performed, revealing very little influence of the skull lesions (less than 3%), while the filled nasal cavity yielded an overestimation in the cerebellum of up to 5%. Conclusions: The present error analysis confirms that our template-based attenuation method provides reliable attenuation correction of PET brain imaging measured in PET/MR scanners.

  10. The use of one- versus two-tailed tests to evaluate prevention programs.

    PubMed

    Ringwalt, Chris; Paschall, M J; Gorman, Dennis; Derzon, James; Kinlaw, Alan

    2011-06-01

    Investigators have used both one- and two-tailed tests to determine the significance of findings yielded by program evaluations. While the literature that addresses the appropriate use of each type of significance test is historically inconsistent, almost all authorities now agree that one-tailed tests are rarely (if ever) appropriate. A review of 85 published evaluations of school-based drug prevention curricula specified on the National Registry of Effective Programs and Practices revealed that 20% employed one-tailed tests and, within this subgroup, an additional 4% also employed two-tailed tests. The majority of publications either did not specify the type of statistical test employed or used some other criterion such as effect sizes or confidence intervals. Evaluators reported that they used one-tailed tests either because they stipulated the direction of expected findings in advance, or because prior evaluations of similar programs had yielded no negative results. The authors conclude that one-tailed tests should never be used because they introduce greater potential for Type I errors and create an uneven playing field when outcomes are compared across programs. The authors also conclude that the traditional threshold of significance that places α at .05 is arbitrary and obsolete, and that evaluators should consistently report the exact p values they find.
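    For readers unfamiliar with the distinction, the sketch below computes one- and two-tailed p-values for the same t statistic on toy data; the one-tailed p-value is half the two-tailed value when the effect lies in the hypothesized direction, which is what makes one-tailed tests more permissive.

```python
# Sketch: one- vs two-tailed p-values for the same test statistic on toy data.
# The group sizes and effect size are arbitrary, unrelated to the reviewed programs.
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
control = rng.normal(0.00, 1.0, 60)
program = rng.normal(0.35, 1.0, 60)

t, p_two = stats.ttest_ind(program, control)
# One-tailed p-value in the hypothesized (positive) direction.
p_one = p_two / 2 if t > 0 else 1 - p_two / 2
print(f"t = {t:.2f}, two-tailed p = {p_two:.3f}, one-tailed p = {p_one:.3f}")
```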

  11. Optimization of spherical facets for parabolic solar concentrators

    NASA Technical Reports Server (NTRS)

    White, J. E.; Erikson, R. J.; Sturgis, J. D.; Elfe, T. B.

    1986-01-01

    Solar concentrator designs which employ deployable hexagonal panels are being developed for space power systems. An offset optical configuration has been developed which offers significant system level advantages over previously proposed collector designs for space applications. Optical analyses have been performed which show offset reflector intercept factors to be only slightly lower than those for symmetric reflectors with the same slope error. Fluxes on the receiver walls are asymmetric but manageable by varying the tilt angle of the receiver. Greater producibility is achieved by subdividing the hexagonal panels into triangular mirror facets of spherical contour. Optical analysis has been performed upon these to yield near-optimum sizes and radii.

  12. Representing winter wheat in the Community Land Model (version 4.5)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lu, Yaqiong; Williams, Ian N.; Bagley, Justin E.

    Winter wheat is a staple crop for global food security, and is the dominant vegetation cover for a significant fraction of Earth's croplands. As such, it plays an important role in carbon cycling and land–atmosphere interactions in these key regions. Accurate simulation of winter wheat growth is not only crucial for future yield prediction under a changing climate, but also for accurately predicting the energy and water cycles for winter wheat dominated regions. We modified the winter wheat model in the Community Land Model (CLM) to better simulate winter wheat leaf area index, latent heat flux, net ecosystem exchange of CO2, and grain yield. These included schemes to represent vernalization as well as frost tolerance and damage. We calibrated three key parameters (minimum planting temperature, maximum crop growth days, and initial value of leaf carbon allocation coefficient) and modified the grain carbon allocation algorithm for simulations at the US Southern Great Plains ARM site (US-ARM), and validated the model performance at eight additional sites across North America. We found that the new winter wheat model improved the prediction of monthly variation in leaf area index, reduced latent heat flux, and net ecosystem exchange root mean square error (RMSE) by 41 and 35 % during the spring growing season. The model accurately simulated the interannual variation in yield at the US-ARM site, but underestimated yield at sites and in regions (northwestern and southeastern US) with historically greater yields by 35 %.

  13. Satellite-based assessment of yield variation and its determinants in smallholder African systems

    PubMed Central

    Lobell, David B.

    2017-01-01

    The emergence of satellite sensors that can routinely observe millions of individual smallholder farms raises possibilities for monitoring and understanding agricultural productivity in many regions of the world. Here we demonstrate the potential to track smallholder maize yield variation in western Kenya, using a combination of 1-m Terra Bella imagery and intensive field sampling on thousands of fields over 2 y. We find that agreement between satellite-based and traditional field survey-based yield estimates depends significantly on the quality of the field-based measures, with agreement highest (R2 up to 0.4) when using precise field measures of plot area and when using larger fields for which rounding errors are smaller. We further show that satellite-based measures are able to detect positive yield responses to fertilizer and hybrid seed inputs and that the inferred responses are statistically indistinguishable from estimates based on survey-based yields. These results suggest that high-resolution satellite imagery can be used to make predictions of smallholder agricultural productivity that are roughly as accurate as the survey-based measures traditionally used in research and policy applications, and they indicate a substantial near-term potential to quickly generate useful datasets on productivity in smallholder systems, even with minimal or no field training data. Such datasets could rapidly accelerate learning about which interventions in smallholder systems have the most positive impact, thus enabling more rapid transformation of rural livelihoods. PMID:28202728

  14. Representing winter wheat in the Community Land Model (version 4.5)

    NASA Astrophysics Data System (ADS)

    Lu, Yaqiong; Williams, Ian N.; Bagley, Justin E.; Torn, Margaret S.; Kueppers, Lara M.

    2017-05-01

    Winter wheat is a staple crop for global food security, and is the dominant vegetation cover for a significant fraction of Earth's croplands. As such, it plays an important role in carbon cycling and land-atmosphere interactions in these key regions. Accurate simulation of winter wheat growth is not only crucial for future yield prediction under a changing climate, but also for accurately predicting the energy and water cycles for winter wheat dominated regions. We modified the winter wheat model in the Community Land Model (CLM) to better simulate winter wheat leaf area index, latent heat flux, net ecosystem exchange of CO2, and grain yield. These included schemes to represent vernalization as well as frost tolerance and damage. We calibrated three key parameters (minimum planting temperature, maximum crop growth days, and initial value of leaf carbon allocation coefficient) and modified the grain carbon allocation algorithm for simulations at the US Southern Great Plains ARM site (US-ARM), and validated the model performance at eight additional sites across North America. We found that the new winter wheat model improved the prediction of monthly variation in leaf area index, reduced latent heat flux, and net ecosystem exchange root mean square error (RMSE) by 41 and 35 % during the spring growing season. The model accurately simulated the interannual variation in yield at the US-ARM site, but underestimated yield at sites and in regions (northwestern and southeastern US) with historically greater yields by 35 %.

  15. Representing winter wheat in the Community Land Model (version 4.5)

    DOE PAGES

    Lu, Yaqiong; Williams, Ian N.; Bagley, Justin E.; ...

    2017-05-05

    Winter wheat is a staple crop for global food security, and is the dominant vegetation cover for a significant fraction of Earth's croplands. As such, it plays an important role in carbon cycling and land–atmosphere interactions in these key regions. Accurate simulation of winter wheat growth is not only crucial for future yield prediction under a changing climate, but also for accurately predicting the energy and water cycles for winter wheat dominated regions. We modified the winter wheat model in the Community Land Model (CLM) to better simulate winter wheat leaf area index, latent heat flux, net ecosystem exchange of CO2, and grain yield. These included schemes to represent vernalization as well as frost tolerance and damage. We calibrated three key parameters (minimum planting temperature, maximum crop growth days, and initial value of leaf carbon allocation coefficient) and modified the grain carbon allocation algorithm for simulations at the US Southern Great Plains ARM site (US-ARM), and validated the model performance at eight additional sites across North America. We found that the new winter wheat model improved the prediction of monthly variation in leaf area index, reduced latent heat flux, and net ecosystem exchange root mean square error (RMSE) by 41 and 35 % during the spring growing season. The model accurately simulated the interannual variation in yield at the US-ARM site, but underestimated yield at sites and in regions (northwestern and southeastern US) with historically greater yields by 35 %.

  16. Satellite-based assessment of yield variation and its determinants in smallholder African systems.

    PubMed

    Burke, Marshall; Lobell, David B

    2017-02-28

    The emergence of satellite sensors that can routinely observe millions of individual smallholder farms raises possibilities for monitoring and understanding agricultural productivity in many regions of the world. Here we demonstrate the potential to track smallholder maize yield variation in western Kenya, using a combination of 1-m Terra Bella imagery and intensive field sampling on thousands of fields over 2 y. We find that agreement between satellite-based and traditional field survey-based yield estimates depends significantly on the quality of the field-based measures, with agreement highest (R2 up to 0.4) when using precise field measures of plot area and when using larger fields for which rounding errors are smaller. We further show that satellite-based measures are able to detect positive yield responses to fertilizer and hybrid seed inputs and that the inferred responses are statistically indistinguishable from estimates based on survey-based yields. These results suggest that high-resolution satellite imagery can be used to make predictions of smallholder agricultural productivity that are roughly as accurate as the survey-based measures traditionally used in research and policy applications, and they indicate a substantial near-term potential to quickly generate useful datasets on productivity in smallholder systems, even with minimal or no field training data. Such datasets could rapidly accelerate learning about which interventions in smallholder systems have the most positive impact, thus enabling more rapid transformation of rural livelihoods.

  17. Unexpected role of activated carbon in promoting transformation of secondary amines to N-nitrosamines.

    PubMed

    Padhye, Lokesh; Wang, Pei; Karanfil, Tanju; Huang, Ching-Hua

    2010-06-01

    Activated carbon (AC) is the most common solid phase extraction material used for analysis of nitrosamines in water. It is also widely used for the removal of organics in water treatment and as a catalyst or catalyst support in some industrial applications. In this study, it was discovered that AC materials can catalyze the transformation of secondary amines to yield trace levels of N-nitrosamines under ambient aerobic conditions. All 11 commercial ACs tested in the study formed nitrosamines from secondary amines. Among the different ACs, the N-nitrosodimethylamine (NDMA) yield at pH 7.5 ranged from 0.001% to 0.01% of the initial aqueous dimethylamine (DMA) concentration, but from 0.05% to 0.29% of the amount of DMA adsorbed by the AC. Nitrosamine yield increased with higher pH and for higher molecular weight secondary amines, probably because of increased adsorption of amines. The presence of oxygen was a critical factor in the transformation of secondary amines, since ACs with adsorbed secondary amines dried under air for a longer period of time exhibited significantly higher nitrosamine yields. The AC-catalyzed nitrosamine formation was also observed in surface water and wastewater effluent samples. Properties of the AC play an important role in the nitrosamine yields. Preliminary evaluation indicated that nitrosamine formation was higher on reduced than on oxidized AC surfaces. Overall, the study results show that the selection of ACs and reaction conditions is important to minimize analytical errors and undesirable formation associated with nitrosamines in water samples.

  18. An evaluation of three growth and yield simulators for even-aged hardwood forests of the mid-Appalachian region

    Treesearch

    John R. Brooks; Gary W. Miller

    2011-01-01

    Data from even-aged hardwood stands in four ecoregions across the mid-Appalachian region were used to test projection accuracy for three available growth and yield software systems: SILVAH, the Forest Vegetation Simulator, and the Stand Damage Model. Average root mean squared error (RMSE) ranged from 20 to 140 percent of actual trees per acre while RMSE ranged from 2...

  19. Remote Estimation of Vegetation Fraction and Yield in Oilseed Rape with Unmanned Aerial Vehicle Data

    NASA Astrophysics Data System (ADS)

    Peng, Y.; Fang, S.; Liu, K.; Gong, Y.

    2017-12-01

    This study developed an approach for remote estimation of Vegetation Fraction (VF) and yield in oilseed rape, which is a crop species with conspicuous flowers during reproduction. Canopy reflectance in green, red, red edge and NIR bands was obtained by a camera system mounted on an unmanned aerial vehicle (UAV) when oilseed rape was in the vegetative growth and flowering stage. The relationship of several widely-used Vegetation Indices (VI) vs. VF was tested and found to be different in different phenology stages. At the same VF when oilseed rape was flowering, canopy reflectance increased in all bands, and the tested VI decreased. Therefore, two algorithms to estimate VF were calibrated respectively, one for samples during vegetative growth and the other for samples during flowering stage. During the flowering season, we also explored the potential of using canopy reflectance or VIs to estimate Flower Fraction (FF) in oilseed rape. Based on FF estimates, rape yield can be estimated using canopy reflectance data. Our model was validated in oilseed rape planted under different nitrogen fertilization applications and in different phenology stages. The results showed that it was able to predict VF and FF accurately in oilseed rape with estimation error below 6% and predict yield with estimation error below 20%.

  20. Evaluating the capabilities of watershed-scale models in estimating sediment yield at field-scale.

    PubMed

    Sommerlot, Andrew R; Nejadhashemi, A Pouyan; Woznicki, Sean A; Giri, Subhasis; Prohaska, Michael D

    2013-09-30

    Many watershed model interfaces have been developed in recent years for predicting field-scale sediment loads. They share the goal of providing data for decisions aimed at improving watershed health and the effectiveness of water quality conservation efforts. The objectives of this study were to: 1) compare three watershed-scale models (Soil and Water Assessment Tool (SWAT), Field_SWAT, and the High Impact Targeting (HIT) model) against a calibrated field-scale model (RUSLE2) in estimating sediment yield from 41 randomly selected agricultural fields within the River Raisin watershed; 2) evaluate the statistical significance of differences among models; 3) assess the watershed models' capabilities in identifying areas of concern at the field level; 4) evaluate the reliability of the watershed-scale models for field-scale analysis. The SWAT model produced the most similar estimates to RUSLE2 by providing the closest median and the lowest absolute error in sediment yield predictions, while the HIT model estimates were the worst. Concerning statistically significant differences between models, SWAT was the only model found to be not significantly different from the calibrated RUSLE2 at α = 0.05. Meanwhile, all models were incapable of identifying priority areas similar to the RUSLE2 model. Overall, SWAT provided the most correct estimates (51%) within the uncertainty bounds of RUSLE2 and is the most reliable among the studied models, while HIT is the least reliable. The results of this study suggest caution should be exercised when using watershed-scale models for field-level decision-making, while field-specific data are of paramount importance. Copyright © 2013 Elsevier Ltd. All rights reserved.

  1. Evaluation of Thompson-type trend and monthly weather data models for corn yields in Iowa, Illinois, and Indiana

    NASA Technical Reports Server (NTRS)

    French, V. (Principal Investigator)

    1982-01-01

    An evaluation was made of Thompson-Type models which use trend terms (as a surrogate for technology), meteorological variables based on monthly average temperature, and total precipitation to forecast and estimate corn yields in Iowa, Illinois, and Indiana. Pooled and unpooled Thompson-type models were compared. Neither was found to be consistently superior to the other. Yield reliability indicators show that the models are of limited use for large area yield estimation. The models are objective and consistent with scientific knowledge. Timely yield forecasts and estimates can be made during the growing season by using normals or long range weather forecasts. The models are not costly to operate and are easy to use and understand. The model standard errors of prediction do not provide a useful current measure of modeled yield reliability.

  2. Joint Source Location and Focal Mechanism Inversion: efficiency, accuracy and applications

    NASA Astrophysics Data System (ADS)

    Liang, C.; Yu, Y.

    2017-12-01

    The analysis of induced seismicity has become a common practice to evaluate the results of hydraulic fracturing treatment. Liang et al (2016) proposed a joint Source Scanning Algorithm (jSSA for short) to obtain microseismic events and focal mechanisms simultaneously. The jSSA is superior to the traditional SSA in many aspects, but its computational cost is too significant for it to be applied in real-time monitoring. In this study, we have developed several scanning schemas to reduce computation time. A multi-stage scanning schema is shown to improve the efficiency significantly while retaining accuracy. A series of tests has been carried out using both real field data and synthetic data to evaluate the accuracy of the method and its dependence on noise level, source depths, focal mechanisms and other factors. The surface-based arrays provide better constraints on horizontal location errors (<20 m) and angular errors of P axes (within 10 degrees, for S/N>0.5). For sources with varying rakes, dips, strikes and depths, the errors are mostly controlled by the partition of positive and negative polarities in different quadrants. More evenly partitioned polarities in different quadrants yield better results in both locations and focal mechanisms. Nevertheless, even with poor resolution for some focal mechanisms, the optimized jSSA method can still improve location accuracy significantly. Based on much more densely distributed events and focal mechanisms, a gridded stress inversion is conducted to obtain an evenly distributed stress field. The full potential of the jSSA has yet to be explored in different directions, especially in earthquake seismology as seismic arrays become increasingly dense.

  3. Effects of room environment and nursing experience on clinical blood pressure measurement: an observational study.

    PubMed

    Zhang, Meng; Zhang, Xuemei; Chen, Fei; Dong, Birong; Chen, Aiqing; Zheng, Dingchang

    2017-04-01

    This study aimed to examine the effects of measurement room environment and nursing experience on the accuracy of manual auscultatory blood pressure (BP) measurement. A training database with 32 Korotkoff sounds recordings from the British Hypertension Society was played randomly to 20 observers who were divided into four groups according to the years of their nursing experience (i.e. ≥10 years, 1-9 years, nursing students with frequent training, and those without any medical background; five observers in each group). All the observers were asked to determine manual auscultatory systolic blood pressure (SBP) and diastolic blood pressure (DBP) both in a quiet clinical assessment room and in a noisy nurse station area. This procedure was repeated on another day, yielding a total of four measurements from each observer (i.e. two room environments and two repeated determinations on 2 separate days) for each Korotkoff sound. The measurement error was then calculated against the reference answer, with the effects of room environment and nursing experience of the observer investigated. Our results showed that there was no statistically significant difference for BPs measured under both quiet and noisy environments (P>0.80 for both SBP and DBP). However, there was a significant effect on the measurement accuracy between the observer groups (P<0.001 for both SBP and DBP). The nursing students performed best with overall SBP and DBP errors of -0.8±2.4 and 0.1±1.8 mmHg, respectively. The SBP measurement error from the nursing students was significantly smaller than that for each of the other three groups (all P<0.001). Our results indicate that frequent nursing trainings are important for nurses to achieve accurate manual auscultatory BP measurement.

  4. First measurements of J/ψ decays into Σ⁺Σ̄⁻ and Ξ⁰Ξ̄⁰

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ablikim, M.; Bai, J. Z.; Bai, Y.

    Based on 58×10⁶ J/ψ events collected with the BESII detector at the Beijing Electron-Positron Collider, the baryon pair processes J/ψ → Σ⁺Σ̄⁻ and J/ψ → Ξ⁰Ξ̄⁰ are observed for the first time. The branching fractions are measured to be B(J/ψ → Σ⁺Σ̄⁻) = (1.50 ± 0.10 ± 0.22)×10⁻³ and B(J/ψ → Ξ⁰Ξ̄⁰) = (1.20 ± 0.12 ± 0.21)×10⁻³, where the first errors are statistical and the second ones are systematic.

  5. LACIE performance predictor FOC users manual

    NASA Technical Reports Server (NTRS)

    1976-01-01

    The LACIE Performance Predictor (LPP) is a computer simulation of the LACIE process for predicting worldwide wheat production. The simulation provides for the introduction of various errors into the system and provides estimates based on these errors, thus allowing the user to determine the impact of selected error sources. The FOC LPP simulates the acquisition of the sample segment data by the LANDSAT Satellite (DAPTS), the classification of the agricultural area within the sample segment (CAMS), the estimation of the wheat yield (YES), and the production estimation and aggregation (CAS). These elements include data acquisition characteristics, environmental conditions, classification algorithms, the LACIE aggregation and data adjustment procedures. The operational structure for simulating these elements consists of the following key programs: (1) LACIE Utility Maintenance Process, (2) System Error Executive, (3) Ephemeris Generator, (4) Access Generator, (5) Acquisition Selector, (6) LACIE Error Model (LEM), and (7) Post Processor.

  6. Derivation of an analytic expression for the error associated with the noise reduction rating

    NASA Astrophysics Data System (ADS)

    Murphy, William J.

    2005-04-01

    Hearing protection devices are assessed using the Real Ear Attenuation at Threshold (REAT) measurement procedure for the purpose of estimating the amount of noise reduction provided when worn by a subject. The rating number provided on the protector label is a function of the mean and standard deviation of the REAT results achieved by the test subjects. If a group of subjects have a large variance, then it follows that the certainty of the rating should be correspondingly lower. No estimate of the error of a protector's rating is given by existing standards or regulations. Propagation of errors was applied to the Noise Reduction Rating to develop an analytic expression for the hearing protector rating error term. Comparison of the analytic expression for the error to the standard deviation estimated from Monte Carlo simulation of subject attenuations yielded a linear relationship across several protector types and assumptions for the variance of the attenuations.
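    The NRR itself is defined over octave-band data, so the sketch below uses a simplified "mean attenuation minus two standard deviations" stand-in to show how a propagation-of-errors estimate of the rating's uncertainty can be checked against Monte Carlo simulation of subject panels. The rating formula, panel size and REAT statistics are placeholders, not the standard's procedure.

```python
# Simplified illustration: compare an analytic (propagation-of-errors) estimate of a
# rating's uncertainty with a Monte Carlo estimate over simulated subject panels.
# "rating = mean REAT - 2*SD" is a stand-in, not the actual NRR computation.
import numpy as np

rng = np.random.default_rng(6)
mu, sigma, n_subjects = 30.0, 5.0, 20       # assumed REAT mean/SD in dB, panel size

def rating(attens):
    return attens.mean() - 2.0 * attens.std(ddof=1)

# Monte Carlo: many simulated panels of subject attenuations.
ratings = np.array([rating(rng.normal(mu, sigma, n_subjects)) for _ in range(20000)])

# Propagation of errors: var(mean) = s^2/n, var(SD) ~ s^2/(2(n-1)), assumed independent.
analytic_sd = sigma * np.sqrt(1.0 / n_subjects + 4.0 / (2.0 * (n_subjects - 1)))
print(f"Monte Carlo SD of rating: {ratings.std():.2f} dB")
print(f"Analytic estimate:        {analytic_sd:.2f} dB")
```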

  7. Half-lives of 214Pb and 214Bi.

    PubMed

    Martz, D E; Langner, G H; Johnson, P R

    1991-10-01

    New measurements on chemically separated samples of 214Bi have yielded a mean half-life value of 19.71 +/- 0.02 min, where the error quoted is twice the standard deviation of the mean based on 23 decay runs. This result provides strong support for the historic 19.72 +/- 0.04 min half-life value and essentially excludes the 19.9-min value, both reported in previous studies. New measurements of the decay rate of 222Rn progeny activity initially in radioactive equilibrium have yielded a value of 26.89 +/- 0.03 min for the half-life of 214Pb, where the error quoted is twice the standard deviation of the mean based on 12 decay runs. This value is 0.1 min longer than the currently accepted 214Pb half-life value of 26.8 min.
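    A sketch of the underlying estimation task: fitting an exponential decay to count-rate data and reading off the half-life and its uncertainty from the fit covariance. The simulated data assume a 19.71 min half-life purely for illustration.

```python
# Sketch: estimating a half-life by fitting an exponential decay to count-rate data.
# Synthetic 214Bi-like data; the assumed true half-life is for illustration only.
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(7)
true_half_life = 19.71                       # minutes (assumed for the simulation)
t = np.linspace(0, 100, 60)                  # measurement times in minutes

def decay(t, a, hl):
    return a * np.exp(-np.log(2) * t / hl)

counts = decay(t, 1000.0, true_half_life) * (1 + 0.01 * rng.standard_normal(t.size))

(a_fit, hl_fit), cov = curve_fit(decay, t, counts, p0=(900.0, 25.0))
print(f"fitted half-life: {hl_fit:.2f} +/- {np.sqrt(cov[1, 1]):.2f} min")
```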

  8. Financial model calibration using consistency hints.

    PubMed

    Abu-Mostafa, Y S

    2001-01-01

    We introduce a technique for forcing the calibration of a financial model to produce valid parameters. The technique is based on learning from hints. It converts simple curve fitting into genuine calibration, where broad conclusions can be inferred from parameter values. The technique augments the error function of curve fitting with consistency hint error functions based on the Kullback-Leibler distance. We introduce an efficient EM-type optimization algorithm tailored to this technique. We also introduce other consistency hints, and balance their weights using canonical errors. We calibrate the correlated multifactor Vasicek model of interest rates, and apply it successfully to Japanese Yen swaps market and US dollar yield market.

  9. Image-adapted visually weighted quantization matrices for digital image compression

    NASA Technical Reports Server (NTRS)

    Watson, Andrew B. (Inventor)

    1994-01-01

    A method for performing image compression that eliminates redundant and invisible image components is presented. The image compression uses a Discrete Cosine Transform (DCT) and each DCT coefficient yielded by the transform is quantized by an entry in a quantization matrix which determines the perceived image quality and the bit rate of the image being compressed. The present invention adapts or customizes the quantization matrix to the image being compressed. The quantization matrix comprises visual masking by luminance and contrast techniques and by an error pooling technique all resulting in a minimum perceptual error for any given bit rate, or minimum bit rate for a given perceptual error.
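    A short sketch of the DCT-plus-quantization-matrix step the patent describes, applied to a synthetic 8x8 block; the quantization matrix here is an arbitrary placeholder, not a visually weighted matrix derived from luminance/contrast masking or error pooling.

```python
# Sketch: quantizing 8x8 DCT coefficients of an image block with a quantization
# matrix, JPEG-style. The matrix values are arbitrary placeholders, not a
# perceptually optimized (visually weighted) matrix.
import numpy as np
from scipy.fft import dctn, idctn

rng = np.random.default_rng(8)
block = rng.integers(0, 256, (8, 8)).astype(float)        # stand-in image block

# Coarse placeholder quantization matrix: larger steps at higher frequencies.
q = 10 + 4 * (np.arange(8)[:, None] + np.arange(8)[None, :])

coeffs = dctn(block, norm="ortho")            # forward 2-D DCT
quantized = np.round(coeffs / q)              # quantization sets quality vs bit rate
reconstructed = idctn(quantized * q, norm="ortho")

rmse = np.sqrt(np.mean((block - reconstructed) ** 2))
print(f"reconstruction RMSE for this quantization matrix: {rmse:.1f} grey levels")
```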

  10. Improvement of operational stability of Ogataea minuta carbonyl reductase for chiral alcohol production.

    PubMed

    Honda, Kohsuke; Inoue, Mizuha; Ono, Tomohiro; Okano, Kenji; Dekishima, Yasumasa; Kawabata, Hiroshi

    2017-06-01

    Directed evolution of enantio-selective carbonyl reductase from Ogataea minuta was conducted to improve the operational stability of the enzyme. A mutant library was constructed by an error-prone PCR and screened using a newly developed colorimetric assay. The stability of a mutant with two amino acid substitutions was significantly higher than that of the wild type at 50°C in the presence of dimethyl sulfoxide. Site-directed mutagenesis analysis showed that the improved stability of the enzyme can be attributed to the amino acid substitution of V166A. The half-lives of the V166A mutant were 11- and 6.1-times longer than those of the wild type at 50°C in the presence and absence, respectively, of 20% (v/v) dimethyl sulfoxide. No significant differences in the substrate specificity and enantio-selectivity of the enzyme were observed. The mutant enzyme converted 60 mM 2,2,2-trifluoroacetophenone to (R)-(-)-α-(trifluoromethyl)benzyl alcohol in a molar yield of 71% whereas the conversion yield with an equivalent concentration of the wild-type enzyme was 27%. Copyright © 2017 The Society for Biotechnology, Japan. Published by Elsevier B.V. All rights reserved.

  11. Enumerating Sparse Organisms in Ships’ Ballast Water: Why Counting to 10 Is Not So Easy

    PubMed Central

    2011-01-01

    To reduce ballast water-borne aquatic invasions worldwide, the International Maritime Organization and United States Coast Guard have each proposed discharge standards specifying maximum concentrations of living biota that may be released in ships’ ballast water (BW), but these regulations still lack guidance for standardized type approval and compliance testing of treatment systems. Verifying whether BW meets a discharge standard poses significant challenges. Properly treated BW will contain extremely sparse numbers of live organisms, and robust estimates of rare events require extensive sampling efforts. A balance of analytical rigor and practicality is essential to determine the volume of BW that can be reasonably sampled and processed, yet yield accurate live counts. We applied statistical modeling to a range of sample volumes, plankton concentrations, and regulatory scenarios (i.e., levels of type I and type II errors), and calculated the statistical power of each combination to detect noncompliant discharge concentrations. The model expressly addresses the roles of sampling error, BW volume, and burden of proof on the detection of noncompliant discharges in order to establish a rigorous lower limit of sampling volume. The potential effects of recovery errors (i.e., incomplete recovery and detection of live biota) in relation to sample volume are also discussed. PMID:21434685
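    A sketch of the power-calculation idea, assuming Poisson-distributed organism counts: for a given sampled volume and a hypothetical discharge standard, it computes the probability of detecting truly noncompliant concentrations at a fixed type I error rate. The standard, concentrations and volumes are illustrative placeholders, not regulatory values.

```python
# Sketch: statistical power to detect a noncompliant discharge as a function of
# sampled volume, assuming organisms are Poisson-distributed in the water.
# The discharge standard and concentrations below are illustrative placeholders.
import numpy as np
from scipy import stats

standard = 10.0          # allowed concentration, organisms per m^3 (assumed)
alpha = 0.05             # type I error rate for the compliance test

for volume in (0.5, 1.0, 3.0, 7.0):                       # sampled volume in m^3
    # Critical count: declare noncompliance if the observed count exceeds this value.
    crit = stats.poisson.ppf(1 - alpha, mu=standard * volume)
    for true_conc in (15.0, 30.0):                         # actual (noncompliant) levels
        power = 1 - stats.poisson.cdf(crit, mu=true_conc * volume)
        print(f"V={volume:4.1f} m^3, true={true_conc:4.0f}/m^3 -> power={power:.2f}")
```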

  12. A new convergence analysis and perturbation resilience of some accelerated proximal forward-backward algorithms with errors

    NASA Astrophysics Data System (ADS)

    Reem, Daniel; De Pierro, Alvaro

    2017-04-01

    Many problems in science and engineering involve, as part of their solution process, the consideration of a separable function which is the sum of two convex functions, one of them possibly non-smooth. Recently a few works have discussed inexact versions of several accelerated proximal methods aiming at solving this minimization problem. This paper shows that inexact versions of a method of Beck and Teboulle (the fast iterative shrinkage-thresholding algorithm, FISTA) preserve, in a Hilbert space setting, the same (non-asymptotic) rate of convergence under some assumptions on the decay rate of the error terms. The notion of inexactness discussed here seems to be rather simple but, interestingly, when comparing to related works, closely related decay rates of the error terms yield closely related convergence rates. The derivation sheds some light on the somewhat mysterious origin of some parameters which appear in various accelerated methods. A consequence of the analysis is that the accelerated method is perturbation resilient, making it suitable, in principle, for the superiorization methodology. By taking this into account, we re-examine the superiorization methodology and significantly extend its scope. This work was supported by FAPESP 2013/19504-9. The second author was also supported by CNPq grant 306030/2014-4.
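    A compact sketch of FISTA applied to a LASSO problem, with an optional perturbation injected into each proximal step whose magnitude decays fast enough to be summable, mimicking the inexact setting analyzed above. The problem sizes, decay rate and error scale are illustrative assumptions.

```python
# Sketch: FISTA (the Beck-Teboulle method referenced above) for a LASSO problem,
# with an optional decaying perturbation added to each step to mimic "inexact" FISTA.
import numpy as np

rng = np.random.default_rng(9)
m, n, lam = 60, 100, 0.1
A = rng.standard_normal((m, n))
b = rng.standard_normal(m)
L = np.linalg.norm(A, 2) ** 2                 # Lipschitz constant of the smooth part

def soft_threshold(v, tau):
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def fista(n_iter=200, error_scale=0.0, decay=2.0):
    x = y = np.zeros(n)
    t = 1.0
    for k in range(1, n_iter + 1):
        grad = A.T @ (A @ y - b)
        x_new = soft_threshold(y - grad / L, lam / L)
        x_new += error_scale * rng.standard_normal(n) / k ** decay   # summable errors
        t_new = (1 + np.sqrt(1 + 4 * t * t)) / 2
        y = x_new + ((t - 1) / t_new) * (x_new - x)
        x, t = x_new, t_new
    return 0.5 * np.linalg.norm(A @ x - b) ** 2 + lam * np.abs(x).sum()

print("exact FISTA objective:  ", fista())
print("inexact FISTA objective:", fista(error_scale=1e-2))
```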

  13. Sampling Error in Relation to Cyst Nematode Population Density Estimation in Small Field Plots.

    PubMed

    Župunski, Vesna; Jevtić, Radivoje; Jokić, Vesna Spasić; Župunski, Ljubica; Lalošević, Mirjana; Ćirić, Mihajlo; Ćurčić, Živko

    2017-06-01

    Cyst nematodes are serious plant-parasitic pests which could cause severe yield losses and extensive damage. Since there is still very little information about error of population density estimation in small field plots, this study contributes to the broad issue of population density assessment. It was shown that there was no significant difference between cyst counts of five or seven bulk samples taken per each 1-m 2 plot, if average cyst count per examined plot exceeds 75 cysts per 100 g of soil. Goodness of fit of data to probability distribution tested with χ 2 test confirmed a negative binomial distribution of cyst counts for 21 out of 23 plots. The recommended measure of sampling precision of 17% expressed through coefficient of variation ( cv ) was achieved if the plots of 1 m 2 contaminated with more than 90 cysts per 100 g of soil were sampled with 10-core bulk samples taken in five repetitions. If plots were contaminated with less than 75 cysts per 100 g of soil, 10-core bulk samples taken in seven repetitions gave cv higher than 23%. This study indicates that more attention should be paid on estimation of sampling error in experimental field plots to ensure more reliable estimation of population density of cyst nematodes.

  14. Enumerating sparse organisms in ships' ballast water: why counting to 10 is not so easy.

    PubMed

    Miller, A Whitman; Frazier, Melanie; Smith, George E; Perry, Elgin S; Ruiz, Gregory M; Tamburri, Mario N

    2011-04-15

    To reduce ballast water-borne aquatic invasions worldwide, the International Maritime Organization and United States Coast Guard have each proposed discharge standards specifying maximum concentrations of living biota that may be released in ships' ballast water (BW), but these regulations still lack guidance for standardized type approval and compliance testing of treatment systems. Verifying whether BW meets a discharge standard poses significant challenges. Properly treated BW will contain extremely sparse numbers of live organisms, and robust estimates of rare events require extensive sampling efforts. A balance of analytical rigor and practicality is essential to determine the volume of BW that can be reasonably sampled and processed, yet yield accurate live counts. We applied statistical modeling to a range of sample volumes, plankton concentrations, and regulatory scenarios (i.e., levels of type I and type II errors), and calculated the statistical power of each combination to detect noncompliant discharge concentrations. The model expressly addresses the roles of sampling error, BW volume, and burden of proof on the detection of noncompliant discharges in order to establish a rigorous lower limit of sampling volume. The potential effects of recovery errors (i.e., incomplete recovery and detection of live biota) in relation to sample volume are also discussed.

  15. Too True to be Bad: When Sets of Studies With Significant and Nonsignificant Findings Are Probably True.

    PubMed

    Lakens, Daniël; Etz, Alexander J

    2017-11-01

    Psychology journals rarely publish nonsignificant results. At the same time, it is often very unlikely (or "too good to be true") that a set of studies yields exclusively significant results. Here, we use likelihood ratios to explain when sets of studies that contain a mix of significant and nonsignificant results are likely to be true or "too true to be bad." As we show, mixed results are not only likely to be observed in lines of research but also, when observed, often provide evidence for the alternative hypothesis, given reasonable levels of statistical power and an adequately controlled low Type 1 error rate. Researchers should feel comfortable submitting such lines of research with an internal meta-analysis for publication. A better understanding of probabilities, accompanied by more realistic expectations of what real sets of studies look like, might be an important step in mitigating publication bias in the scientific literature.
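    A minimal sketch of the likelihood-ratio argument: compare the probability of observing k significant results out of n studies when every study has only an alpha-level false-positive chance against the probability when every study has a given true power. The alpha and power values are illustrative.

```python
# Sketch: likelihood ratio for observing k significant results out of n studies
# under H1 (studies have a given true power) versus H0 (only alpha-level false
# positives). Alpha and power are illustrative choices.
from scipy.stats import binom

def mixed_results_lr(k, n, alpha=0.05, power=0.8):
    """P(k of n significant | H1) / P(k of n significant | H0)."""
    return binom.pmf(k, n, power) / binom.pmf(k, n, alpha)

# Two of three studies significant is far more likely under H1 than under H0,
# even though the set contains a nonsignificant result.
for k in range(4):
    print(f"{k} of 3 significant: LR = {mixed_results_lr(k, 3):.2f}")
```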

  16. Optimized tuner selection for engine performance estimation

    NASA Technical Reports Server (NTRS)

    Simon, Donald L. (Inventor); Garg, Sanjay (Inventor)

    2013-01-01

    A methodology for minimizing the error in on-line Kalman filter-based aircraft engine performance estimation applications is presented. This technique specifically addresses the underdetermined estimation problem, where there are more unknown parameters than available sensor measurements. A systematic approach is applied to produce a model tuning parameter vector of appropriate dimension to enable estimation by a Kalman filter, while minimizing the estimation error in the parameters of interest. Tuning parameter selection is performed using a multi-variable iterative search routine which seeks to minimize the theoretical mean-squared estimation error. Theoretical Kalman filter estimation error bias and variance values are derived at steady-state operating conditions, and the tuner selection routine is applied to minimize these values. The new methodology yields an improvement in on-line engine performance estimation accuracy.

  17. Association rule mining on grid monitoring data to detect error sources

    NASA Astrophysics Data System (ADS)

    Maier, Gerhild; Schiffers, Michael; Kranzlmueller, Dieter; Gaidioz, Benjamin

    2010-04-01

    Error handling is a crucial task in an infrastructure as complex as a grid. There are several monitoring tools in place, which report failing grid jobs including exit codes. However, the exit codes do not always denote the actual fault which caused the job failure. Human time and knowledge are required to manually trace errors back to the real fault underlying an error. We perform association rule mining on grid job monitoring data to automatically retrieve knowledge about the behavior of grid components by taking dependencies between grid job characteristics into account. In this way, problematic grid components are located automatically and this information - expressed by association rules - is visualized in a web interface. This work reduces the time needed for fault recovery and yields an improvement in the grid's reliability.

  18. Illiquidity premium and expected stock returns in the UK: A new approach

    NASA Astrophysics Data System (ADS)

    Chen, Jiaqi; Sherif, Mohamed

    2016-09-01

    This study examines the relative importance of liquidity risk for the time-series and cross-section of stock returns in the UK. We propose a simple way to capture the multidimensionality of illiquidity. Our analysis indicates that existing illiquidity measures have considerable asset-specific components, which justifies our new approach. Further, we use an alternative test of the Amihud (2002) measure and parametric and non-parametric methods to investigate whether liquidity risk is priced in the UK. We find that the inclusion of the illiquidity factor in the capital asset pricing model plays a significant role in explaining the cross-sectional variation in stock returns, in particular with the Fama-French three-factor model. Further, using Hansen-Jagannathan non-parametric bounds, we find that the illiquidity-augmented capital asset pricing models yield a small distance error, whereas other non-liquidity-based models fail to yield economically plausible distance values. Our findings have important implications for managing the liquidity risk of equity portfolios.
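
    For readers unfamiliar with the Amihud (2002) measure mentioned above, a minimal sketch of its computation is given below (assumed daily return and trading-volume inputs; this is not the authors' code or their alternative test):

      # Sketch of the Amihud (2002) price-impact illiquidity measure.
      import numpy as np

      def amihud_illiquidity(daily_returns, daily_volume):
          """Mean of |r_t| / volume_t over the sample.

          `daily_volume` is trading volume in currency units; higher values
          indicate a larger price impact per unit traded, i.e. lower liquidity.
          """
          r = np.asarray(daily_returns, dtype=float)
          v = np.asarray(daily_volume, dtype=float)
          return np.mean(np.abs(r) / v)

      # Hypothetical example: identical returns, very different trading volume.
      returns = [0.01, -0.02, 0.015, -0.005]
      print(amihud_illiquidity(returns, [1e6, 1e6, 1e6, 1e6]))   # liquid stock
      print(amihud_illiquidity(returns, [1e4, 1e4, 1e4, 1e4]))   # illiquid: measure 100x larger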

  19. The effects of finite mass, adiabaticity, and isothermality in nonlinear plasma wave studies

    NASA Astrophysics Data System (ADS)

    Hellberg, Manfred A.; Verheest, Frank; Mace, Richard L.

    2018-03-01

    The propagation of arbitrary amplitude ion-acoustic solitons is investigated in a plasma containing cool adiabatic positive ions and hot electrons or negative ions. The latter can be described by polytropic pressure-density relations, both with and without the retention of inertial effects. For analytical tractability, the resulting Sagdeev pseudopotential needs to be expressed in terms of the hot negative species density, rather than the electrostatic potential. The inclusion of inertia is found to have no qualitative effect, but yields quantitative differences that vary monotonically with the mass ratio and the polytropic index. This result contrasts with results for analogous problems involving three species, where it was found that inertia could yield significant qualitative differences. Attention is also drawn to the fact that in the literature there are numerous papers in which species are assumed to behave adiabatically, where the isothermal assumption would be more appropriate. Such an assumption leads to quantitative errors and, in some instances, even qualitative gaps for "reverse polarity" solitons.

  20. Discrimination and quantification of Fe and Ni abundances in Genesis solar wind implanted collectors using X-ray standing wave fluorescence yield depth profiling with internal referencing

    DOE PAGES

    Choi, Y.; Eng, P.; Stubbs, J.; ...

    2016-08-21

    In this paper, X-ray standing wave fluorescence yield depth profiling was used to determine the solar wind implanted Fe and Ni fluences in a silicon-on-sapphire (SoS) Genesis collector (60326). An internal reference standardization method was developed based on fluorescence from Si and Al in the collector materials. Measured Fe fluence agreed well with that measured previously by us on a sapphire collector (50722) as well as SIMS results by Jurewicz et al. Measured Ni fluence was higher than expected by a factor of two; neither instrumental errors nor solar wind fractionation effects are considered significant perturbations to this value. Impurity Ni within the epitaxial Si layer, if present, could explain the high Ni fluences and therefore needs further investigation. As they stand, these results are consistent with minor temporally-variable Fe and Ni fractionation on the timescale of a year.

  1. Effect of wet tropospheric path delays on estimation of geodetic baselines in the Gulf of California using the Global Positioning System

    NASA Technical Reports Server (NTRS)

    Tralli, David M.; Dixon, Timothy H.; Stephens, Scott A.

    1988-01-01

    Surface Meteorological (SM) and Water Vapor Radiometer (WVR) measurements are used to provide an independent means of calibrating the GPS signal for the wet tropospheric path delay in a study of geodetic baseline measurements in the Gulf of California using GPS in which high tropospheric water vapor content yielded wet path delays in excess of 20 cm at zenith. Residual wet delays at zenith are estimated as constants and as first-order exponentially correlated stochastic processes. Calibration with WVR data is found to yield the best repeatabilities, with improved results possible if combined carrier phase and pseudorange data are used. Although SM measurements can introduce significant errors in baseline solutions if used with a simple atmospheric model and estimation of residual zenith delays as constants, SM calibration and stochastic estimation for residual zenith wet delays may be adequate for precise estimation of GPS baselines. For dry locations, WVRs may not be required to accurately model tropospheric effects on GPS baselines.

  2. Land use and sediment yield

    USGS Publications Warehouse

    Leopold, Luna Bergere; Thomas, William L.

    1956-01-01

    When the vegetal cover is removed from a land surface, the rate of removal of the soil material, at least initially, increases rapidly. So well known is this principle that it hardly needs restatement.If attention is focused on any individual drainage basin in its natural state, large or small, and inquiry is made as to the rate of denudation, a quantitative answer is not easily obtained. The possible error in any computation of rate of sediment production from any given drainage basin is considerable. Significant variations are found in sediment yields from closely adjacent watersheds which appear to be generally similar. To make a quantitative evaluation of the change in the rate of denudation when the natural vegetation is disturbed is, therefore, even more difficult. Considering the fact that "soil conservation" has been promoted to the status of a science, our lack of ability to answer what is apparently so simple a question may seem surprising. Let us look at some of the reasons.

  3. Discrimination and quantification of Fe and Ni abundances in Genesis solar wind implanted collectors using X-ray standing wave fluorescence yield depth profiling with internal referencing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Choi, Y.; Eng, P.; Stubbs, J.

    In this paper, X-ray standing wave fluorescence yield depth profiling was used to determine the solar wind implanted Fe and Ni fluences in a silicon-on-sapphire (SoS) Genesis collector (60326). An internal reference standardization method was developed based on fluorescence from Si and Al in the collector materials. Measured Fe fluence agreed well with that measured previously by us on a sapphire collector (50722) as well as SIMS results by Jurewicz et al. Measured Ni fluence was higher than expected by a factor of two; neither instrumental errors nor solar wind fractionation effects are considered significant perturbations to this value. Impurity Ni within the epitaxial Si layer, if present, could explain the high Ni fluences and therefore needs further investigation. As they stand, these results are consistent with minor temporally-variable Fe and Ni fractionation on the timescale of a year.

  4. Dynamic Wireless Network Based on Open Physical Layer

    DTIC Science & Technology

    2011-02-18

    would yield the error-exponent optimal solutions. We solved this problem, and the detailed work is reported in [?]. It turns out that when Renyi ...is, during the communication session. A natural set of metrics of interest is the family of Renyi divergences. With a parameter α that can be...tuned, the Renyi entropy of a given distribution corresponds to the Shannon entropy at α = 1 and to the probability of detection error at α = ∞. This gives a

  5. Rover mast calibration, exact camera pointing, and camera handoff for visual target tracking

    NASA Technical Reports Server (NTRS)

    Kim, Won S.; Ansar, Adnan I.; Steele, Robert D.

    2005-01-01

    This paper presents three technical elements that we have developed to improve the accuracy of the visual target tracking for single-sol approach-and-instrument placement in future Mars rover missions. An accurate, straightforward method of rover mast calibration is achieved by using a total station, a camera calibration target, and four prism targets mounted on the rover. The method was applied to Rocky8 rover mast calibration and yielded a 1.1-pixel rms residual error. Camera pointing requires inverse kinematic solutions for mast pan and tilt angles such that the target image appears right at the center of the camera image. Two issues were raised. Mast camera frames are in general not parallel to the masthead base frame. Further, the optical axis of the camera model in general does not pass through the center of the image. Despite these issues, we managed to derive non-iterative closed-form exact solutions, which were verified with Matlab routines. Actual camera pointing experiments over 50 random target image points yielded less than 1.3-pixel rms pointing error. Finally, a purely geometric method for camera handoff using stereo views of the target has been developed. Experimental test runs show less than 2.5 pixels error on high-resolution Navcam for Pancam-to-Navcam handoff, and less than 4 pixels error on lower-resolution Hazcam for Navcam-to-Hazcam handoff.

  6. Olive response to water availability: yield response functions, soil water content indicators and evaluation of adaptability to climate change

    NASA Astrophysics Data System (ADS)

    Riccardi, Maria; Alfieri, Silvia Maria; Basile, Angelo; Bonfante, Antonello; Menenti, Massimo; Monaco, Eugenia; De Lorenzi, Francesca

    2013-04-01

    Climate evolution, with the foreseen increase in temperature and frequency of drought events during the summer, could cause significant changes in the availability of water resources, especially in the Mediterranean region. European countries need to encourage sustainable agricultural practices, reducing inputs, especially of water, and minimizing any negative impact on crop quantity and quality. Olive is an important crop in the Mediterranean region that has traditionally been cultivated without irrigation and is known to attain acceptable production under dry farming. Therefore this crop will not compete for the foreseen reduced water resources. However, good quantitative knowledge must be available about the effects of reduced precipitation and water availability on yield. Yield response functions, coupled with indicators of soil water availability, provide a quantitative description of the cultivar-specific behavior in relation to hydrological conditions. Yield response functions of 11 olive cultivars, typical of the Mediterranean environment, were determined using experimental data (unpublished or reported in the scientific literature). The yield was expressed as relative yield (Yr); the soil water availability was described by means of different indicators: relative soil water deficit (RSWD), relative evapotranspiration (RED) and transpiration deficit (RTD). Crops can respond nonlinearly to changes in their growing conditions and exhibit threshold responses, so for the yield functions of each olive cultivar both linear regression and threshold-slope models were considered to evaluate the best fit. The level of relative yield attained under rain-fed conditions was identified and defined as the acceptable yield level (Yrrainfed). The value of each indicator (RSWD, RED and RTD) corresponding to Yrrainfed was determined for each cultivar and indicated as the critical value of water availability. The error in the determination of the critical value was estimated. By means of a simulation model of water flow in the soil-plant-atmosphere system, the indicators of soil water availability were calculated for different soil units in an area of Southern Italy traditionally cultivated with olive. Simulations were performed for two climate scenarios: reference (1961-90) and future climate (2021-50). The potential of the indicators RSWD, RED and RTD to describe soil water availability was evaluated using simulated and experimental data. The analysis showed that RED values were correlated with RTD, and that RTD was more effective than RED in representing crop water availability. RSWD is very well correlated with RTD, and the degree of correlation depends on the period of deficit considered. The probability of adaptation of each cultivar was calculated for both climatic periods by comparing the critical values (and their error distribution) with the soil water availability indicators. Keywords: Olea europaea, soil water deficit, water availability critical value. The work was carried out within the Italian national project AGROSCENARI funded by the Ministry for Agricultural, Food and Forest Policies (MIPAAF, D.M. 8608/7303/2008)
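
    The threshold-slope fit described above can be sketched as a piecewise model in which relative yield is constant up to a critical value of the deficit indicator and declines linearly beyond it; the data, starting values, and model form below are illustrative, not the project's actual fitting code:

      # Illustrative threshold-slope fit (made-up data, assumed model form).
      import numpy as np
      from scipy.optimize import curve_fit

      def threshold_slope(x, x_crit, slope):
          """Relative yield: 1 up to the critical deficit, then a linear decline."""
          return np.where(x <= x_crit, 1.0, 1.0 - slope * (x - x_crit))

      # Hypothetical relative transpiration deficit (RTD) vs relative yield (Yr) data.
      rtd = np.array([0.05, 0.10, 0.20, 0.30, 0.40, 0.50, 0.60, 0.70])
      yr = np.array([1.00, 0.99, 0.97, 0.90, 0.78, 0.66, 0.55, 0.41])

      (x_crit, slope), _ = curve_fit(threshold_slope, rtd, yr, p0=[0.2, 1.0])
      lin = np.polyfit(rtd, yr, 1)

      # Compare the two candidate models by residual sum of squares.
      rss_threshold = np.sum((yr - threshold_slope(rtd, x_crit, slope)) ** 2)
      rss_linear = np.sum((yr - np.polyval(lin, rtd)) ** 2)
      print("critical deficit:", round(x_crit, 3),
            "RSS threshold-slope vs linear:", round(rss_threshold, 4), round(rss_linear, 4))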

  7. Conduct urban agglomeration with the baton of transportation.

    DOT National Transportation Integrated Search

    2013-12-01

    A key indicator of traffic activity patterns is commuting distance. Shorter commuting distances yield less traffic, fewer emissions, and lower energy consumption. This study develops a spatial error seemingly unrelated regression model to investiga...

  8. Effects of preparation time and trial type probability on performance of anti- and pro-saccades.

    PubMed

    Pierce, Jordan E; McDowell, Jennifer E

    2016-02-01

    Cognitive control optimizes responses to relevant task conditions by balancing bottom-up stimulus processing with top-down goal pursuit. It can be investigated using the ocular motor system by contrasting basic prosaccades (look toward a stimulus) with complex antisaccades (look away from a stimulus). Furthermore, the amount of time allotted between trials, the need to switch task sets, and the time allowed to prepare for an upcoming saccade all impact performance. In this study the relative probabilities of anti- and pro-saccades were manipulated across five blocks of interleaved trials, while the inter-trial interval and trial type cue duration were varied across subjects. Results indicated that inter-trial interval had no significant effect on error rates or reaction times (RTs), while a shorter trial type cue led to more antisaccade errors and faster overall RTs. Responses following a shorter cue duration also showed a stronger effect of trial type probability, with more antisaccade errors in blocks with a low antisaccade probability and slower RTs for each saccade task when its trial type was unlikely. A longer cue duration yielded fewer errors and slower RTs, with a larger switch cost for errors compared to a short cue duration. Findings demonstrated that when the trial type cue duration was shorter, visual motor responsiveness was faster and subjects relied upon the implicit trial probability context to improve performance. When the cue duration was longer, increased fixation-related activity may have delayed saccade motor preparation and slowed responses, guiding subjects to respond in a controlled manner regardless of trial type probability. Copyright © 2016 Elsevier B.V. All rights reserved.

  9. Identification of hydraulic conductivity structure in sand and gravel aquifers: Cape Cod data set

    USGS Publications Warehouse

    Eggleston, J.R.; Rojstaczer, S.A.; Peirce, J.J.

    1996-01-01

    This study evaluates commonly used geostatistical methods to assess reproduction of hydraulic conductivity (K) structure and sensitivity under limiting amounts of data. Extensive conductivity measurements from the Cape Cod sand and gravel aquifer are used to evaluate two geostatistical estimation methods, conditional mean as an estimate and ordinary kriging, and two stochastic simulation methods, simulated annealing and sequential Gaussian simulation. Our results indicate that for relatively homogeneous sand and gravel aquifers such as the Cape Cod aquifer, neither estimation methods nor stochastic simulation methods give highly accurate point predictions of hydraulic conductivity despite the high density of collected data. Although the stochastic simulation methods yielded higher errors than the estimation methods, the stochastic simulation methods yielded better reproduction of the measured ln(K) distribution and better reproduction of local contrasts in ln(K). The inability of kriging to reproduce high ln(K) values, as reaffirmed by this study, provides strong motivation for choosing stochastic simulation methods to generate conductivity fields when performing fine-scale contaminant transport modeling. Results also indicate that estimation error is relatively insensitive to the number of hydraulic conductivity measurements so long as more than a threshold number of data are used to condition the realizations. This threshold occurs for the Cape Cod site when there are approximately three conductivity measurements per integral volume. The lack of improvement with additional data suggests that although fine-scale hydraulic conductivity structure is evident in the variogram, it is not accurately reproduced by geostatistical estimation methods. If the Cape Cod aquifer spatial conductivity characteristics are indicative of other sand and gravel deposits, then the results on predictive error versus data collection obtained here have significant practical consequences for site characterization. Heavily sampled sand and gravel aquifers, such as Cape Cod and Borden, may have large amounts of redundant data, while in more common real-world settings, our results suggest that denser data collection will likely improve understanding of permeability structure.

  10. Probabilistic Analysis of Pattern Formation in Monotonic Self-Assembly

    PubMed Central

    Moore, Tyler G.; Garzon, Max H.; Deaton, Russell J.

    2015-01-01

    Inspired by biological systems, self-assembly aims to construct complex structures. It functions through piece-wise, local interactions among component parts and has the potential to produce novel materials and devices at the nanoscale. Algorithmic self-assembly models the product of self-assembly as the output of some computational process, and attempts to control the process of assembly algorithmically. Though providing fundamental insights, these computational models have yet to fully account for the randomness that is inherent in experimental realizations, which tend to be based on trial and error methods. In order to develop a method of analysis that addresses experimental parameters, such as error and yield, this work focuses on the capability of assembly systems to produce a pre-determined set of target patterns, either accurately or perhaps only approximately. Self-assembly systems that assemble patterns that are similar to the targets in a significant percentage are “strong” assemblers. In addition, assemblers should predominantly produce target patterns, with a small percentage of errors or junk. These definitions approximate notions of yield and purity in chemistry and manufacturing. By combining these definitions, a criterion for efficient assembly is developed that can be used to compare the ability of different assembly systems to produce a given target set. Efficiency is a composite measure of the accuracy and purity of an assembler. Typical examples in algorithmic assembly are assessed in the context of these metrics. In addition to validating the method, they also provide some insight that might be used to guide experimentation. Finally, some general results are established that, for efficient assembly, imply that every target pattern is guaranteed to be assembled with a minimum common positive probability, regardless of its size, and that a trichotomy exists to characterize the global behavior of typical efficient, monotonic self-assembly systems in the literature. PMID:26421616

  11. Hubble Frontier Fields: systematic errors in strong lensing models of galaxy clusters - implications for cosmography

    NASA Astrophysics Data System (ADS)

    Acebron, Ana; Jullo, Eric; Limousin, Marceau; Tilquin, André; Giocoli, Carlo; Jauzac, Mathilde; Mahler, Guillaume; Richard, Johan

    2017-09-01

    Strong gravitational lensing by galaxy clusters is a fundamental tool to study dark matter and constrain the geometry of the Universe. Recently, the Hubble Space Telescope Frontier Fields programme has allowed a significant improvement of mass and magnification measurements, but lensing models still have a residual root mean square between 0.2 arcsec and a few arcseconds, which is not yet completely understood. Systematic errors have to be better understood and treated in order to use strong lensing clusters as reliable cosmological probes. We have analysed two simulated Hubble-Frontier-Fields-like clusters from the Hubble Frontier Fields Comparison Challenge, Ares and Hera. We use several estimators (relative bias on magnification, density profiles, ellipticity and orientation) to quantify the goodness of our reconstructions by comparing our multiple models, optimized with the parametric software lenstool, with the input models. We have quantified the impact of systematic errors arising, first, from the choice of different density profiles and configurations and, secondly, from the availability of constraints (spectroscopic or photometric redshifts, redshift ranges of the background sources) in the parametric modelling of strong lensing galaxy clusters and therefore on the retrieval of cosmological parameters. We find that substructures in the outskirts have a significant impact on the position of the multiple images, yielding tighter cosmological contours. The need for wide-field imaging around massive clusters is thus reinforced. We show that competitive cosmological constraints can be obtained also with complex multimodal clusters and that photometric redshifts improve the constraints on cosmological parameters when considering a narrow range of (spectroscopic) redshifts for the sources.

  12. How well does multiple OCR error correction generalize?

    NASA Astrophysics Data System (ADS)

    Lund, William B.; Ringger, Eric K.; Walker, Daniel D.

    2013-12-01

    As the digitization of historical documents, such as newspapers, becomes more common, the need of the archive patron for accurate digital text from those documents increases. Building on our earlier work, the contributions of this paper are: (1) demonstrating the applicability of novel methods for correcting optical character recognition (OCR) on disparate data sets, including a new synthetic training set, (2) enhancing the correction algorithm with novel features, and (3) assessing the data requirements of the correction learning method. First, we correct errors using conditional random fields (CRF) trained on synthetic training data sets in order to demonstrate the applicability of the methodology to unrelated test sets. Second, we show the strength of lexical features from the training sets on two unrelated test sets, yielding a relative reduction in word error rate on the test sets of 6.52%. New features capture the recurrence of hypothesis tokens and yield an additional relative reduction in WER of 2.30%. Further, we show that only 2.0% of the full training corpus of over 500,000 feature cases is needed to achieve correction results comparable to those using the entire training corpus, effectively reducing both the complexity of the training process and the learned correction model.
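
    Word error rate, the metric quoted above, is a word-level edit distance normalized by the reference length; a generic implementation (not the authors' evaluation code) is sketched below:

      # Generic word error rate via dynamic-programming edit distance.
      def word_error_rate(reference, hypothesis):
          """WER = (substitutions + insertions + deletions) / reference length."""
          ref, hyp = reference.split(), hypothesis.split()
          d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
          for i in range(len(ref) + 1):
              d[i][0] = i
          for j in range(len(hyp) + 1):
              d[0][j] = j
          for i in range(1, len(ref) + 1):
              for j in range(1, len(hyp) + 1):
                  cost = 0 if ref[i - 1] == hyp[j - 1] else 1
                  d[i][j] = min(d[i - 1][j] + 1, d[i][j - 1] + 1, d[i - 1][j - 1] + cost)
          return d[len(ref)][len(hyp)] / len(ref)

      # Toy example: one OCR error corrected, one residual insertion.
      baseline = word_error_rate("the quick brown fox", "tha quick brown fox jumps")
      corrected = word_error_rate("the quick brown fox", "the quick brown fox jumps")
      print(baseline, corrected, "relative reduction:", (baseline - corrected) / baseline)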

  13. Further Improvements to Linear Mixed Models for Genome-Wide Association Studies

    PubMed Central

    Widmer, Christian; Lippert, Christoph; Weissbrod, Omer; Fusi, Nicolo; Kadie, Carl; Davidson, Robert; Listgarten, Jennifer; Heckerman, David

    2014-01-01

    We examine improvements to the linear mixed model (LMM) that better correct for population structure and family relatedness in genome-wide association studies (GWAS). LMMs rely on the estimation of a genetic similarity matrix (GSM), which encodes the pairwise similarity between every two individuals in a cohort. These similarities are estimated from single nucleotide polymorphisms (SNPs) or other genetic variants. Traditionally, all available SNPs are used to estimate the GSM. In empirical studies across a wide range of synthetic and real data, we find that modifications to this approach improve GWAS performance as measured by type I error control and power. Specifically, when only population structure is present, a GSM constructed from SNPs that well predict the phenotype in combination with principal components as covariates controls type I error and yields more power than the traditional LMM. In any setting, with or without population structure or family relatedness, a GSM consisting of a mixture of two component GSMs, one constructed from all SNPs and another constructed from SNPs that well predict the phenotype again controls type I error and yields more power than the traditional LMM. Software implementing these improvements and the experimental comparisons are available at http://microsoft.com/science. PMID:25387525
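
    A hedged numpy sketch of the two ingredients described above, a GSM built from a subset of SNPs and a two-component mixture GSM, is given below; the 0/1/2 genotype coding, the choice of the first 20 SNPs as the "predictive" set, and the fixed mixing weight are stand-ins, not the authors' procedure or software:

      # Sketch only: GSM construction and a two-component mixture.
      import numpy as np

      def genetic_similarity_matrix(genotypes):
          """GSM K = Z Z^T / m from an (individuals x SNPs) matrix of 0/1/2 codes."""
          z = (genotypes - genotypes.mean(axis=0)) / genotypes.std(axis=0)
          return z @ z.T / genotypes.shape[1]

      rng = np.random.default_rng(0)
      all_snps = rng.integers(0, 3, size=(50, 1000)).astype(float)
      selected = all_snps[:, :20]        # stand-in for SNPs that well predict the phenotype

      k_all = genetic_similarity_matrix(all_snps)
      k_sel = genetic_similarity_matrix(selected)

      a = 0.5                            # mixing weight; in practice chosen by maximizing the likelihood
      k_mix = a * k_all + (1 - a) * k_sel
      print(k_mix.shape)                 # (50, 50) similarity matrix used by the LMM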

  14. Effects of true density, compacted mass, compression speed, and punch deformation on the mean yield pressure.

    PubMed

    Gabaude, C M; Guillot, M; Gautier, J C; Saudemon, P; Chulia, D

    1999-07-01

    Compressibility properties of pharmaceutical materials are widely characterized by measuring the volume reduction of a powder column under pressure. Experimental data are commonly analyzed using the Heckel model from which powder deformation mechanisms are determined using mean yield pressure (Py). Several studies from the literature have shown the effects of operating conditions on the determination of Py and have pointed out the limitations of this model. The Heckel model requires true density and compacted mass values to determine Py from force-displacement data. It is likely that experimental errors will be introduced when measuring the true density and compacted mass. This study investigates the effects of true density and compacted mass on Py. Materials having different particle deformation mechanisms are studied. Punch displacement and applied pressure are measured for each material at two compression speeds. For each material, three different true density and compacted mass values are utilized to evaluate their effect on Py. The calculated variation of Py reaches 20%. This study demonstrates that the errors in measuring true density and compacted mass have a greater effect on Py than the errors incurred from not correcting the displacement measurements due to punch elasticity.
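
    For context, the Heckel analysis referred to above fits ln(1/(1-D)) against pressure, with D the relative density, and takes Py as the reciprocal of the slope; the sketch below uses made-up data and shows how an error in the assumed true density propagates into Py:

      # Heckel fit sketch with illustrative (made-up) compaction data.
      import numpy as np

      def heckel_mean_yield_pressure(pressure_mpa, compact_density, true_density):
          """Fit ln(1/(1-D)) = K*P + A and return Py = 1/K.

          D = compact density / true density; in practice only the quasi-linear
          portion of the compaction profile should be used.
          """
          d = np.asarray(compact_density) / true_density
          y = np.log(1.0 / (1.0 - d))
          k, _ = np.polyfit(pressure_mpa, y, 1)
          return 1.0 / k

      pressure = np.array([50, 100, 150, 200, 250])        # MPa
      density = np.array([1.05, 1.14, 1.21, 1.26, 1.30])   # g/cm^3 compact density
      print(heckel_mean_yield_pressure(pressure, density, true_density=1.50))
      # A 2% error in the measured true density shifts D and hence Py:
      print(heckel_mean_yield_pressure(pressure, density, true_density=1.53))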

  15. On NUFFT-based gridding for non-Cartesian MRI

    NASA Astrophysics Data System (ADS)

    Fessler, Jeffrey A.

    2007-10-01

    For MRI with non-Cartesian sampling, the conventional approach to reconstructing images is to use the gridding method with a Kaiser-Bessel (KB) interpolation kernel. Recently, Sha et al. [L. Sha, H. Guo, A.W. Song, An improved gridding method for spiral MRI using nonuniform fast Fourier transform, J. Magn. Reson. 162(2) (2003) 250-258] proposed an alternative method based on a nonuniform FFT (NUFFT) with least-squares (LS) design of the interpolation coefficients. They described this LS_NUFFT method as shift variant and reported that it yielded smaller reconstruction approximation errors than the conventional shift-invariant KB approach. This paper analyzes the LS_NUFFT approach in detail. We show that when one accounts for a certain linear phase factor, the core of the LS_NUFFT interpolator is in fact real and shift invariant. Furthermore, we find that the KB approach yields smaller errors than the original LS_NUFFT approach. We show that optimizing certain scaling factors can lead to a somewhat improved LS_NUFFT approach, but the high computation cost seems to outweigh the modest reduction in reconstruction error. We conclude that the standard KB approach, with appropriate parameters as described in the literature, remains the practical method of choice for gridding reconstruction in MRI.

  16. Further Improvements to Linear Mixed Models for Genome-Wide Association Studies

    NASA Astrophysics Data System (ADS)

    Widmer, Christian; Lippert, Christoph; Weissbrod, Omer; Fusi, Nicolo; Kadie, Carl; Davidson, Robert; Listgarten, Jennifer; Heckerman, David

    2014-11-01

    We examine improvements to the linear mixed model (LMM) that better correct for population structure and family relatedness in genome-wide association studies (GWAS). LMMs rely on the estimation of a genetic similarity matrix (GSM), which encodes the pairwise similarity between every two individuals in a cohort. These similarities are estimated from single nucleotide polymorphisms (SNPs) or other genetic variants. Traditionally, all available SNPs are used to estimate the GSM. In empirical studies across a wide range of synthetic and real data, we find that modifications to this approach improve GWAS performance as measured by type I error control and power. Specifically, when only population structure is present, a GSM constructed from SNPs that well predict the phenotype in combination with principal components as covariates controls type I error and yields more power than the traditional LMM. In any setting, with or without population structure or family relatedness, a GSM consisting of a mixture of two component GSMs, one constructed from all SNPs and another constructed from SNPs that well predict the phenotype again controls type I error and yields more power than the traditional LMM. Software implementing these improvements and the experimental comparisons are available at http://microsoft.com/science.

  17. Further improvements to linear mixed models for genome-wide association studies.

    PubMed

    Widmer, Christian; Lippert, Christoph; Weissbrod, Omer; Fusi, Nicolo; Kadie, Carl; Davidson, Robert; Listgarten, Jennifer; Heckerman, David

    2014-11-12

    We examine improvements to the linear mixed model (LMM) that better correct for population structure and family relatedness in genome-wide association studies (GWAS). LMMs rely on the estimation of a genetic similarity matrix (GSM), which encodes the pairwise similarity between every two individuals in a cohort. These similarities are estimated from single nucleotide polymorphisms (SNPs) or other genetic variants. Traditionally, all available SNPs are used to estimate the GSM. In empirical studies across a wide range of synthetic and real data, we find that modifications to this approach improve GWAS performance as measured by type I error control and power. Specifically, when only population structure is present, a GSM constructed from SNPs that well predict the phenotype in combination with principal components as covariates controls type I error and yields more power than the traditional LMM. In any setting, with or without population structure or family relatedness, a GSM consisting of a mixture of two component GSMs, one constructed from all SNPs and another constructed from SNPs that well predict the phenotype again controls type I error and yields more power than the traditional LMM. Software implementing these improvements and the experimental comparisons are available at http://microsoft.com/science.

  18. Monitoring interannual variation in global crop yield using long-term AVHRR and MODIS observations

    NASA Astrophysics Data System (ADS)

    Zhang, Xiaoyang; Zhang, Qingyuan

    2016-04-01

    Advanced Very High Resolution Radiometer (AVHRR) and Moderate Resolution Imaging Spectroradiometer (MODIS) data have been extensively applied for crop yield prediction because of their daily temporal resolution and global coverage. This study investigated global crop yield using the daily two-band Enhanced Vegetation Index (EVI2) derived from AVHRR (1981-1999) and MODIS (2000-2013) observations at a spatial resolution of 0.05° (∼5 km). Specifically, the EVI2 temporal trajectory of crop growth was simulated using a hybrid piecewise logistic model (HPLM) for individual pixels, which was used to detect crop phenological metrics. The derived crop phenology was then applied to calculate crop greenness, defined as EVI2 amplitude and EVI2 integration during annual crop growing seasons, which was further aggregated over the croplands in each country. The interannual variations in EVI2 amplitude and EVI2 integration were combined to correlate with the variation in cereal yield from 1982-2012 for each country using a stepwise regression model. The results show that the confidence level of the established regression models was higher than 90% (p-value < 0.1) in most countries in the northern hemisphere, although it was relatively poor in the southern hemisphere (mainly in Africa). The error in the yield prediction was relatively smaller in America, Europe and East Asia than in Africa. In the 10 countries with the largest cereal production across the world, the prediction error was less than 9% during the past three decades. This suggests that crop phenology-controlled greenness from coarse-resolution satellite data has the capability of predicting national crop yield across the world, which could provide timely and reliable crop information for global agricultural trade and policymakers.

  19. Mixtures of Berkson and classical covariate measurement error in the linear mixed model: Bias analysis and application to a study on ultrafine particles.

    PubMed

    Deffner, Veronika; Küchenhoff, Helmut; Breitner, Susanne; Schneider, Alexandra; Cyrys, Josef; Peters, Annette

    2018-05-01

    The ultrafine particle measurements in the Augsburger Umweltstudie, a panel study conducted in Augsburg, Germany, exhibit measurement error from various sources. Measurements from mobile devices show classical, possibly individual-specific, measurement error; Berkson-type error, which may also vary individually, occurs if measurements from fixed monitoring stations are used. The combination of fixed-site and individual exposure measurements results in a mixture of the two error types. We extended existing bias analysis approaches to linear mixed models with a complex error structure including individual-specific error components, autocorrelated errors, and a mixture of classical and Berkson error. Theoretical considerations and simulation results show that autocorrelation may severely change the attenuation of the effect estimates. Furthermore, unbalanced designs and the inclusion of confounding variables influence the degree of attenuation. Bias correction with the method of moments using data with mixture measurement error partially yielded better results than using incomplete data with classical error. Confidence intervals (CIs) based on the delta method achieved better coverage probabilities than those based on bootstrap samples. Moreover, we present the application of these new methods to heart rate measurements within the Augsburger Umweltstudie: the corrected effect estimates were slightly higher than their naive equivalents. The substantial measurement error of ultrafine particle measurements has little impact on the results. The developed methodology is generally applicable to longitudinal data with measurement error. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
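
    The classical-error part of the bias analysis above rests on the familiar attenuation (reliability) factor; a minimal sketch with made-up variances, ignoring the autocorrelation and Berkson components treated in the paper, is shown below:

      # Minimal attenuation-correction sketch (classical error only, made-up variances).
      def attenuation_correction(beta_naive, var_true_exposure, var_classical_error):
          """Method-of-moments correction for classical measurement error.

          Under classical error the naive slope is attenuated by the reliability
          ratio lambda = var(X) / (var(X) + var(U)); Berkson error, by contrast,
          leaves the slope unbiased in the simple linear case.
          """
          reliability = var_true_exposure / (var_true_exposure + var_classical_error)
          return beta_naive / reliability

      # Hypothetical values: naive effect 0.8 per unit exposure,
      # exposure variance 4, classical error variance 1 -> reliability 0.8.
      print(attenuation_correction(0.8, 4.0, 1.0))   # corrected estimate 1.0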

  20. Near Global Mosaic of Mercury

    NASA Astrophysics Data System (ADS)

    Becker, K. J.; Robinson, M. S.; Becker, T. L.; Weller, L. A.; Turner, S.; Nguyen, L.; Selby, C.; Denevi, B. W.; Murchie, S. L.; McNutt, R. L.; Solomon, S. C.

    2009-12-01

    In 2008 the MESSENGER spacecraft made two close flybys (M1 and M2) of Mercury and imaged about 74% of the planet at a resolution of 1 km per pixel, and at higher resolution for smaller portions of the planet. The Mariner 10 spacecraft imaged about 42% of Mercury’s surface more than 30 years ago. Combining image data collected by the two missions yields coverage of about 83% of Mercury’s surface. MESSENGER will perform its third and final flyby of Mercury (M3) on 29 September 2009. This will yield approximately 86% coverage of Mercury, leaving only the north and south polar regions yet to be imaged by MESSENGER after orbit insertion in March 2011. A new global mosaic of Mercury was constructed using 325 images containing 3566 control points (8110 measures) from M1 and 225 images containing 1465 control points (3506 measures) from M2. The M3 flyby is shifted in subsolar longitude only by 4° from M2, so the added coverage is very small. However, this small slice of Mercury fills a gore in the mosaic between the M1 and M2 data and allows a complete cartographic tie around the equator. We will run a new bundle block adjustment with the additional images acquired from M3. This new edition of the MESSENGER Mercury Dual Imaging System (MDIS) Narrow Angle Camera (NAC) global mosaic of Mercury includes many improvements since the M2 flyby in October 2008. A new distortion model for the NAC camera greatly improves the image-to-image registration. Optical distortion correction is independent of pointing error correction, and both are required for a mosaic of high quality. The new distortion model alone reduced residual pointing errors for both flybys significantly; residual pixel error improved from 0.71 average (3.7 max) to 0.13 average (1.7 max) for M1 and from 0.72 average (4.8 max.) to 0.17 average (3.5 max) for M2. Analysis quantifying pivot motor position has led to development of a new model that improves accuracy of the pivot platform attitude. This model improves the accuracy of pointing knowledge and reduces overall registration errors between adjacent images. The net effect of these improvements is an overall offset of up to 10 km in some locations across the mosaic. In addition, the radiometric calibration process for the NAC has been improved to yield a better dynamic range across the mosaic by 20%. The new global mosaic of Mercury will be used in scientific analysis and aid in planning observation sequences leading up to and including orbit insertion of the MESSENGER spacecraft in 2011.

  1. Least-Squares, Continuous Sensitivity Analysis for Nonlinear Fluid-Structure Interaction

    DTIC Science & Technology

    2009-08-20

    Tangential stress optimization convergence to a uniform value of 1.797 as a function of eccentric anomaly E, and objective function value as a ... up to the domain dimension, n_domain. Equation (3.7) expands as ... truncation error versus round-off error with decreasing step size (finite-difference error) ... force, and E is Young's modulus. Equations (3.31) and (3.32) may be directly integrated to yield the stress and displacement solutions, which, for no ...

  2. Data error and highly parameterized groundwater models

    USGS Publications Warehouse

    Hill, M.C.

    2008-01-01

    Strengths and weaknesses of highly parameterized models, in which the number of parameters exceeds the number of observations, are demonstrated using a synthetic test case. Results suggest that the approach can yield close matches to observations but also serious errors in system representation. It is proposed that avoiding the difficulties of highly parameterized models requires close evaluation of: (1) model fit, (2) performance of the regression, and (3) estimated parameter distributions. Comparisons to hydrogeologic information are expected to be critical to obtaining credible models. Copyright ?? 2008 IAHS Press.

  3. Passive Ranging Using Infra-Red Atmospheric Attenuation

    DTIC Science & Technology

    2010-03-01

    was the Bomem MR-154 Fourier Transform Spectrometer (FTS). The FTS used both an HgCdTe and InSb detector. For this study, the primary source of data...also outfitted with an HgCdTe and InSb detector. Again, only data from the InSb detector was used. The spectral range of data collected was from...an uncertainty in transmittance of 0.01 (figure 20). This would yield an error in range of 6%. Other sources of error include detector noise or

  4. Predicting cotton yield of small field plots in a cotton breeding program using UAV imagery data

    NASA Astrophysics Data System (ADS)

    Maja, Joe Mari J.; Campbell, Todd; Camargo Neto, Joao; Astillo, Philip

    2016-05-01

    One of the major criteria used for advancing experimental lines in a breeding program is yield performance. Obtaining yield performance data requires machine picking each plot with a cotton picker modified to weigh individual plots. Harvesting thousands of small field plots requires a great deal of time and resources. The efficiency of cotton breeding could be increased significantly, and its cost decreased, if accurate methods to predict yield performance were available. This work investigates the feasibility of using an image processing technique, with a commercial off-the-shelf (COTS) camera mounted on a small Unmanned Aerial Vehicle (sUAV) to collect normal RGB images, for predicting cotton yield on small plots. An orthonormal image was generated from multiple images and used to process multiple segmented plots. A Gaussian blur was used to eliminate the high-frequency component of the images, which corresponds to the cotton pixels, and an image subtraction technique was used to generate high-frequency pixel images. The cotton pixels were then separated using k-means clustering with 5 classes. Based on the current work, the percentage cotton area was computed as the generated high-frequency image (cotton pixels) divided by the total area of the plot. Preliminary results (five flights, three altitudes) showed that multiple pre-selected 227 sq. m plots produced an average cotton cover of 8%, which translates to approximately 22.3 kg of cotton. The yield prediction equation generated from the test site was then used on a separate validation site and produced a prediction error of less than 10%. In summary, the results indicate that a COTS camera with an appropriate image processing technique can produce results that are comparable to expensive sensors.
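
    A hedged sketch of the described processing chain (Gaussian blur, subtraction to isolate the high-frequency cotton pixels, k-means with five classes) is given below; it uses scipy and scikit-learn rather than the authors' software, and taking the brightest cluster as cotton is an assumption:

      # Sketch of cotton-cover estimation from a single-band plot image.
      import numpy as np
      from scipy.ndimage import gaussian_filter
      from sklearn.cluster import KMeans

      def cotton_cover_fraction(gray_plot_image, sigma=5.0, n_classes=5):
          """Fraction of plot pixels assigned to the brightest high-frequency cluster."""
          img = gray_plot_image.astype(float)
          high_freq = img - gaussian_filter(img, sigma=sigma)   # blur then subtract
          labels = KMeans(n_clusters=n_classes, n_init=10, random_state=0).fit_predict(
              high_freq.reshape(-1, 1))
          means = [high_freq.reshape(-1)[labels == k].mean() for k in range(n_classes)]
          cotton_class = int(np.argmax(means))                  # assumed cotton = brightest cluster
          return np.mean(labels == cotton_class)

      # Synthetic example: dark soil background with a few bright "cotton" patches.
      rng = np.random.default_rng(1)
      plot = rng.normal(40, 5, size=(200, 200))
      plot[50:60, 50:60] = 220
      plot[120:140, 80:90] = 220
      print(cotton_cover_fraction(plot))   # small fraction corresponding to the bright patches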

  5. Incorporating Yearly Derived Winter Wheat Maps Into Winter Wheat Yield Forecasting Model

    NASA Technical Reports Server (NTRS)

    Skakun, S.; Franch, B.; Roger, J.-C.; Vermote, E.; Becker-Reshef, I.; Justice, C.; Santamaría-Artigas, A.

    2016-01-01

    Wheat is one of the most important cereal crops in the world. Timely and accurate forecasting of wheat yield and production at the global scale is vital in implementing food security policy. Becker-Reshef et al. (2010) developed a generalized empirical model for forecasting winter wheat production using remote sensing data and official statistics. This model was implemented using static wheat maps. In this paper, we analyze the impact of incorporating yearly wheat masks into the forecasting model. We propose a new approach for producing in-season winter wheat maps exploiting satellite data and official statistics on crop area only. Validation on independent data showed that the proposed approach reached omission errors of 6% to 23% and commission errors of 10% to 16% when mapping winter wheat 2-3 months before harvest. In general, we found a limited impact of using yearly winter wheat masks over a static mask for the study regions.
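
    The omission and commission errors quoted above come directly from the map-versus-reference confusion matrix; a generic sketch (not the authors' validation code) is shown below:

      # Omission/commission errors from binary wheat masks.
      import numpy as np

      def omission_commission(reference_mask, predicted_mask):
          """Omission = missed wheat / reference wheat; commission = false wheat / mapped wheat."""
          ref = np.asarray(reference_mask, dtype=bool)
          pred = np.asarray(predicted_mask, dtype=bool)
          omission = np.sum(ref & ~pred) / np.sum(ref)
          commission = np.sum(~ref & pred) / np.sum(pred)
          return omission, commission

      # Toy one-pixel-per-field example.
      reference = [1, 1, 1, 1, 0, 0, 0, 0, 1, 1]
      predicted = [1, 1, 1, 0, 0, 1, 0, 0, 1, 1]
      print(omission_commission(reference, predicted))   # (0.167, 0.167)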

  6. Geometric analysis and restitution of digital multispectral scanner data arrays

    NASA Technical Reports Server (NTRS)

    Baker, J. R.; Mikhail, E. M.

    1975-01-01

    An investigation was conducted to define causes of geometric defects within digital multispectral scanner (MSS) data arrays, to analyze the resulting geometric errors, and to investigate restitution methods to correct or reduce these errors. Geometric transformation relationships for scanned data, from which collinearity equations may be derived, served as the basis of parametric methods of analysis and restitution of MSS digital data arrays. The linearization of these collinearity equations is presented. Algorithms considered for use in analysis and restitution included the MSS collinearity equations, piecewise polynomials based on linearized collinearity equations, and nonparametric algorithms. A proposed system for geometric analysis and restitution of MSS digital data arrays was used to evaluate these algorithms, utilizing actual MSS data arrays. It was shown that collinearity equations and nonparametric algorithms both yield acceptable results, but nonparametric algorithms possess definite advantages in computational efficiency. Piecewise polynomials were found to yield inferior results.

  7. Accuracy of least-squares methods for the Navier-Stokes equations

    NASA Technical Reports Server (NTRS)

    Bochev, Pavel B.; Gunzburger, Max D.

    1993-01-01

    Recently there has been substantial interest in least-squares finite element methods for velocity-vorticity-pressure formulations of the incompressible Navier-Stokes equations. The main cause for this interest is the fact that algorithms for the resulting discrete equations can be devised which require the solution of only symmetric, positive definite systems of algebraic equations. On the other hand, it is well-documented that methods using the vorticity as a primary variable often yield very poor approximations. Thus, here we study the accuracy of these methods through a series of computational experiments, and also comment on theoretical error estimates. It is found, despite the failure of standard methods for deriving error estimates, that computational evidence suggests that these methods are, at the least, nearly optimally accurate. Thus, in addition to the desirable matrix properties yielded by least-squares methods, one also obtains accurate approximations.

  8. Benefit-risk Evaluation for Diagnostics: A Framework (BED-FRAME).

    PubMed

    Evans, Scott R; Pennello, Gene; Pantoja-Galicia, Norberto; Jiang, Hongyu; Hujer, Andrea M; Hujer, Kristine M; Manca, Claudia; Hill, Carol; Jacobs, Michael R; Chen, Liang; Patel, Robin; Kreiswirth, Barry N; Bonomo, Robert A

    2016-09-15

    The medical community needs systematic and pragmatic approaches for evaluating the benefit-risk trade-offs of diagnostics that assist in medical decision making. Benefit-Risk Evaluation of Diagnostics: A Framework (BED-FRAME) is a strategy for pragmatic evaluation of diagnostics designed to supplement traditional approaches. BED-FRAME evaluates diagnostic yield and addresses 2 key issues: (1) that diagnostic yield depends on prevalence, and (2) that different diagnostic errors carry different clinical consequences. As such, evaluating and comparing diagnostics depends on prevalence and the relative importance of potential errors. BED-FRAME provides a tool for communicating the expected clinical impact of diagnostic application and the expected trade-offs of diagnostic alternatives. BED-FRAME is a useful fundamental supplement to the standard analysis of diagnostic studies that will aid in clinical decision making. © The Author 2016. Published by Oxford University Press for the Infectious Diseases Society of America. All rights reserved. For permissions, e-mail journals.permissions@oup.com.
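
    The two points above, that diagnostic yield depends on prevalence and that different errors carry different consequences, can be illustrated with a small expected-burden calculation; the error weights below are hypothetical and are not defaults of the BED-FRAME framework:

      # Expected weighted error burden per 1000 patients tested (hypothetical weights).
      def expected_error_burden(prevalence, sensitivity, specificity,
                                cost_false_negative=10.0, cost_false_positive=1.0,
                                n_patients=1000):
          """Weight false negatives and false positives differently, then sum."""
          fn = n_patients * prevalence * (1 - sensitivity)
          fp = n_patients * (1 - prevalence) * (1 - specificity)
          return cost_false_negative * fn + cost_false_positive * fp

      # Same test, two prevalence settings: the benefit-risk picture changes.
      for prev in (0.02, 0.30):
          print(prev, expected_error_burden(prev, sensitivity=0.95, specificity=0.90))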

  9. Robust control of electrostatic torsional micromirrors using adaptive sliding-mode control

    NASA Astrophysics Data System (ADS)

    Sane, Harshad S.; Yazdi, Navid; Mastrangelo, Carlos H.

    2005-01-01

    This paper presents high-resolution control of torsional electrostatic micromirrors beyond their inherent pull-in instability using robust sliding-mode control (SMC). The objectives of this paper are twofold: first, to demonstrate the applicability of SMC to MEMS devices; second, to present a modified SMC algorithm that yields improved control accuracy. SMC enables compact realization of a robust controller tolerant of device characteristic variations and nonlinearities. Robustness of the control loop is demonstrated through extensive simulations and measurements on MEMS with a wide range of characteristics. Control of two-axis gimbaled micromirrors beyond their pull-in instability with overall 10-bit pointing accuracy is confirmed experimentally. In addition, this paper presents an analysis of the sources of errors in discrete-time implementation of the control algorithm. To minimize these errors, we present an adaptive version of the SMC algorithm that yields substantial performance improvement without considerably increasing implementation complexity.

  10. DD3MAT - a code for yield criteria anisotropy parameters identification.

    NASA Astrophysics Data System (ADS)

    Barros, P. D.; Carvalho, P. D.; Alves, J. L.; Oliveira, M. C.; Menezes, L. F.

    2016-08-01

    This work presents the main strategies and algorithms adopted in the DD3MAT in-house code, specifically developed for identifying the anisotropy parameters. The algorithm adopted is based on the minimization of an error function, using a downhill simplex method. The set of experimental values can consider yield stresses and r-values obtained from in-plane tension, for different angles to the rolling direction (RD), the yield stress and r-value obtained for a biaxial stress state, and yield stresses from shear tests also performed for different angles to RD. All these values can be defined for a specific value of plastic work. Moreover, it can also include the yield stresses obtained from in-plane compression tests. The anisotropy parameters are identified for an AA2090-T3 aluminium alloy, highlighting the importance of user intervention to improve the numerical fit.
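
    The identification strategy can be sketched as follows: predict yield stresses and r-values from a candidate parameter set, form a weighted error function against the experimental values, and minimize it with a downhill simplex (Nelder-Mead). The sketch below uses a simplified Hill'48-type criterion and made-up data, not DD3MAT's actual material models or the AA2090-T3 measurements:

      # Sketch of downhill-simplex identification with a simplified Hill'48 criterion.
      import numpy as np
      from scipy.optimize import minimize

      def hill48_predictions(params, angles_deg):
          """In-plane yield-stress ratios and r-values for parameters (F, G, N), with G + H = 1."""
          f, g, n = params
          h = 1.0 - g                        # convention: the rolling-direction yield stress is the reference
          t = np.radians(angles_deg)
          s2, c2 = np.sin(t) ** 2, np.cos(t) ** 2
          denom = f * s2 ** 2 + g * c2 ** 2 + h * (c2 - s2) ** 2 + 2.0 * n * s2 * c2
          sigma_ratio = 1.0 / np.sqrt(denom)
          r_value = (h + (2.0 * n - f - g - 4.0 * h) * s2 * c2) / (f * s2 + g * c2)
          return sigma_ratio, r_value

      def error_function(params, angles_deg, exp_sigma, exp_r, w_sigma=1.0, w_r=1.0):
          sigma, r = hill48_predictions(params, angles_deg)
          return w_sigma * np.sum((sigma - exp_sigma) ** 2) + w_r * np.sum((r - exp_r) ** 2)

      # Made-up experimental values at 0/45/90 degrees to RD (not AA2090-T3 data).
      angles = np.array([0.0, 45.0, 90.0])
      exp_sigma = np.array([1.00, 0.96, 1.02])   # yield stresses normalized by the RD value
      exp_r = np.array([0.70, 1.10, 0.85])

      result = minimize(error_function, x0=[0.5, 0.6, 1.5],
                        args=(angles, exp_sigma, exp_r), method="Nelder-Mead")
      print("identified (F, G, N):", result.x, " residual:", result.fun)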

  11. [Regional scale remote sensing-based yield estimation of winter wheat by using MODIS-NDVI data: a case study of Jining City in Shandong Province].

    PubMed

    Ren, Jianqiang; Chen, Zhongxin; Tang, Huajun

    2006-12-01

    Taking Jining City of Shandong Province, one of the most important winter wheat production regions in the Huanghuaihai Plain, as an example, the winter wheat yield was estimated using 250 m MODIS-NDVI data smoothed by a Savitzky-Golay filter. The NDVI values between 0.20 and 0.80 were selected, and the sum of NDVI values for each county was calculated to build its relation with winter wheat yield. Using a stepwise regression method, a linear regression model between NDVI and winter wheat yield was established, with the precision validated by ground survey data. The results showed that the relative error of the predicted yield was between -3.6% and 3.9%, suggesting that the method is relatively accurate and feasible.

  12. Distributed Beamforming in a Swarm UAV Network

    DTIC Science & Technology

    2008-03-01

    indicates random (noncoherent) transmission. For coherent transmission, there are no phase differences, so ∆^2 ≅ 0 and Equation (2.32) yields...so ∆^2 → ∞ is assumed and Equation (2.32) yields P_L = P_t G_t G_b λ^2 N / (4πR)^2 (2.34). This result occurs for N noncoherent transmitters, such as... [Figure 7: Monte Carlo simulation results with 100 trials at each value of N, noncoherent vs. coherent; RMS error = 1.0919 degrees.] Figure 7 shows a

  13. Instructional design affects the efficacy of simulation-based training in central venous catheterization.

    PubMed

    Craft, Christopher; Feldon, David F; Brown, Eric A

    2014-05-01

    Simulation-based learning is a common educational tool in health care training and frequently involves instructional designs based on Experiential Learning Theory (ELT). However, little research explores the effectiveness and efficiency of different instructional design methodologies appropriate for simulations. The aim of this study was to compare 2 instructional design models, ELT and Guided Experiential Learning (GEL), to determine which is more effective for training the central venous catheterization procedure. Using a quasi-experimental randomized block design, nurse anesthetists completed training under 1 of the 2 instructional design models. Performance was assessed using a checklist of central venous catheterization performance, pass rates, and critical action errors. Participants in the GEL condition performed significantly better than those in the ELT condition on the overall checklist score after controlling for individual practice time (F[1, 29] = 4.021, P = .027, Cohen's d = .71), had higher pass rates (P = .006, Cohen's d = 1.15), and had lower rates of failure due to critical action errors (P = .038, Cohen's d = .81). The GEL model of instructional design is significantly more effective than ELT for simulation-based learning of the central venous catheterization procedure, yielding large differences in effect size. Copyright © 2014 Elsevier Inc. All rights reserved.

  14. Modeling systematic errors: polychromatic sources of Beer-Lambert deviations in HPLC/UV and nonchromatographic spectrophotometric assays.

    PubMed

    Galli, C

    2001-07-01

    It is well established that the use of polychromatic radiation in spectrophotometric assays leads to excursions from the Beer-Lambert limit. This Note models the resulting systematic error as a function of assay spectral width, slope of molecular extinction coefficient, and analyte concentration. The theoretical calculations are compared with recent experimental results; a parameter is introduced which can be used to estimate the magnitude of the systematic error in both chromatographic and nonchromatographic spectrophotometric assays. It is important to realize that the polychromatic radiation employed in common laboratory equipment can yield assay errors up to approximately 4%, even at absorption levels generally considered 'safe' (i.e. absorption <1). Thus careful consideration of instrumental spectral width, analyte concentration, and slope of molecular extinction coefficient is required to ensure robust analytical methods.
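
    The mechanism behind the deviation is that the detector averages transmittance, not absorbance, across the instrument bandpass; a numeric sketch with a hypothetical 4 nm bandpass and a sloping extinction coefficient (not the Note's exact parameters) reproduces the effect:

      # Sketch with a hypothetical bandpass and extinction-coefficient slope.
      import numpy as np

      def apparent_absorbance(concentration, path_cm, eps, source_intensity):
          """Absorbance observed with a polychromatic source.

          The detector averages transmitted intensity over the bandpass, so
          A_obs = -log10( sum(I * 10**(-eps*c*l)) / sum(I) ), which falls below
          the monochromatic Beer-Lambert value when eps varies across the band.
          """
          transmittance = 10.0 ** (-eps * concentration * path_cm)
          return -np.log10(np.sum(source_intensity * transmittance) / np.sum(source_intensity))

      wavelength = np.linspace(248.0, 252.0, 41)     # hypothetical 4 nm bandpass
      eps = 1.0e4 - 500.0 * (wavelength - 250.0)     # L mol^-1 cm^-1, sloping across the band
      intensity = np.ones_like(wavelength)           # flat source profile (an assumption)

      for c in (1e-5, 5e-5, 1e-4):
          ideal = 1.0e4 * c * 1.0                    # monochromatic value at the band centre
          observed = apparent_absorbance(c, 1.0, eps, intensity)
          print(f"c={c:.0e}  ideal A={ideal:.3f}  observed A={observed:.4f}  "
                f"error={100.0 * (observed - ideal) / ideal:+.2f}%")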

  15. Methods for accurate estimation of net discharge in a tidal channel

    USGS Publications Warehouse

    Simpson, M.R.; Bland, R.

    2000-01-01

    Accurate estimates of net residual discharge in tidally affected rivers and estuaries are possible because of recently developed ultrasonic discharge measurement techniques. Previous discharge estimates using conventional mechanical current meters and methods based on stage/discharge relations or water slope measurements often yielded errors that were as great as or greater than the computed residual discharge. Ultrasonic measurement methods consist of: 1) the use of ultrasonic instruments for the measurement of a representative 'index' velocity used for in situ estimation of mean water velocity and 2) the use of the acoustic Doppler current discharge measurement system to calibrate the index velocity measurement data. Methods used to calibrate (rate) the index velocity to the channel velocity measured using the Acoustic Doppler Current Profiler are the most critical factors affecting the accuracy of net discharge estimation. The index velocity first must be related to mean channel velocity and then used to calculate instantaneous channel discharge. Finally, discharge is low-pass filtered to remove the effects of the tides. An ultrasonic velocity meter discharge-measurement site in a tidally affected region of the Sacramento-San Joaquin Rivers was used to study the accuracy of the index velocity calibration procedure. Calibration data consisting of ultrasonic velocity meter index velocity and concurrent acoustic Doppler discharge measurement data were collected during three time periods. Two sets of data were collected during a spring tide (monthly maximum tidal current) and one set was collected during a neap tide (monthly minimum tidal current). The relative magnitudes of instrumental errors, acoustic Doppler discharge measurement errors, and calibration errors were evaluated. Calibration error was found to be the most significant source of error in estimating net discharge. Using a comprehensive calibration method, net discharge estimates developed from the three sets of calibration data differed by less than an average of 4 cubic meters per second, or less than 0.5% of a typical peak tidal discharge rate of 750 cubic meters per second.
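
    The two-step procedure described above can be sketched as a linear rating of index velocity against ADCP-derived mean channel velocity, followed by low-pass filtering of the instantaneous discharge to remove the tides; the Butterworth filter and the 30-hour cutoff below are assumptions, not necessarily the choices made in the study:

      # Sketch: index-velocity rating plus tidal low-pass filtering.
      import numpy as np
      from scipy.signal import butter, filtfilt

      def rate_index_velocity(index_velocity, adcp_mean_velocity):
          """Least-squares rating: mean channel velocity = a * index velocity + b."""
          a, b = np.polyfit(index_velocity, adcp_mean_velocity, 1)
          return a, b

      def net_discharge(index_velocity, area_m2, rating, dt_hours=0.25, cutoff_hours=30.0):
          """Instantaneous discharge from the rating, then low-pass filtered to remove tides."""
          a, b = rating
          q = (a * index_velocity + b) * area_m2                  # m^3/s
          nyquist = 0.5 / dt_hours                                # cycles per hour
          b_f, a_f = butter(4, (1.0 / cutoff_hours) / nyquist)    # 4th-order low-pass (assumed choice)
          return filtfilt(b_f, a_f, q)

      # Synthetic 10-day record: semidiurnal tide riding on a small net (residual) flow.
      t = np.arange(0, 240, 0.25)                                 # hours
      index_v = 0.05 + 1.0 * np.sin(2 * np.pi * t / 12.42)        # m/s
      rating = (0.95, 0.01)                                       # hypothetical calibration coefficients
      q_net = net_discharge(index_v, area_m2=1500.0, rating=rating)
      print(q_net[len(q_net) // 2])                               # approximately the residual discharge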

  16. Influence of body weight and body conformation on the pressure-volume curve during capnoperitoneum in dogs.

    PubMed

    Dorn, Melissa J; Bockstahler, Barbara A; Dupré, Gilles P

    2017-05-01

    OBJECTIVE To evaluate the pressure-volume relationship during capnoperitoneum in dogs and effects of body weight and body conformation. ANIMALS 86 dogs scheduled for routine laparoscopy. PROCEDURES Dogs were allocated into 3 groups on the basis of body weight. Body measurements, body condition score, and body conformation indices were calculated. Carbon dioxide was insufflated into the abdomen with a syringe, and pressure was measured at the laparoscopic cannula. Volume and pressure data were processed, and the yield point, defined by use of a cutoff volume (COV) and cutoff pressure (COP), was calculated. RESULTS 20 dogs were excluded because of recording errors, air leakage attributable to surgical flaws, or trocar defects. For the remaining 66 dogs, the pressure-volume curve was linear-like until the yield point was reached, and then it became visibly exponential. Mean ± SD COP was 5.99 ± 0.805 mm Hg. No correlation was detected between yield point, body variables, or body weight. Mean COV was 1,196.2 ± 697.9 mL (65.15 ± 20.83 mL of CO2/kg), and COV was correlated significantly with body weight and one of the body condition indices but not with other variables. CONCLUSION AND CLINICAL RELEVANCE In this study, there was a similar COP for all dogs of all sizes. In addition, results suggested that increasing the abdominal pressure after the yield point was reached did not contribute to a substantial increase in working space in the abdomen. No correlation was found between yield point, body variables, and body weight.

  17. Scenario analysis of fertilizer management practices for N2O mitigation from corn systems in Canada.

    PubMed

    Abalos, Diego; Smith, Ward N; Grant, Brian B; Drury, Craig F; MacKell, Sarah; Wagner-Riddle, Claudia

    2016-12-15

    Effective management of nitrogen (N) fertilizer application by farmers provides great potential for reducing emissions of the potent greenhouse gas nitrous oxide (N2O). However, such potential is rarely achieved because our understanding of what practices (or combination of practices) lead to N2O reductions without compromising crop yields remains far from complete. Using scenario analysis with the process-based model DNDC, this study explored the effects of nine fertilizer practices on N2O emissions and crop yields from two corn production systems in Canada. The scenarios differed in: timing of fertilizer application, fertilizer rate, number of applications, fertilizer type, method of application and use of nitrification/urease inhibitors. Statistical analysis showed that during the initial calibration and validation stages the simulated results had no significant total error or bias compared to measured values, yet grain yield estimations warrant further model improvement. Sidedress fertilizer applications reduced yield-scaled N2O emissions by c. 60% compared to fall fertilization. Nitrification inhibitors further reduced yield-scaled N2O emissions by c. 10%; urease inhibitors had no effect on either N2O emissions or crop productivity. The combined adoption of split fertilizer application with inhibitors at a rate 10% lower than the conventional application rate (i.e. 150 kg N ha⁻¹) was successful, but the benefits were lower than those achieved with single fertilization at sidedress. Our study provides a comprehensive assessment of fertilizer management practices that enables policy development regarding N2O mitigation from agricultural soils in Canada. Copyright © 2016 Elsevier B.V. All rights reserved.

  18. Fast retinal layer segmentation of spectral domain optical coherence tomography images

    NASA Astrophysics Data System (ADS)

    Zhang, Tianqiao; Song, Zhangjun; Wang, Xiaogang; Zheng, Huimin; Jia, Fucang; Wu, Jianhuang; Li, Guanglin; Hu, Qingmao

    2015-09-01

    An approach to segmenting macular layer thicknesses from spectral domain optical coherence tomography is proposed. The main contribution is to decrease computational cost while maintaining high accuracy by exploiting Kalman filtering, a customized active contour, and curve smoothing. Validation on 21 normal volumes shows that 8 layer boundaries could be segmented within 5.8 s with an average layer boundary error <2.35 μm. The approach has been compared with state-of-the-art methods for both normal and age-related macular degeneration cases, yielding similar or significantly better accuracy while being 37 times faster. The proposed method could be a potential tool to clinically quantify the retinal layer boundaries.

  19. Advanced Control Algorithms for Compensating the Phase Distortion Due to Transport Delay in Human-Machine Systems

    NASA Technical Reports Server (NTRS)

    Guo, Liwen; Cardullo, Frank M.; Kelly, Lon C.

    2007-01-01

    The desire to create more complex visual scenes in modern flight simulators outpaces recent increases in processor speed. As a result, simulation transport delay remains a problem. New approaches for compensating the transport delay in a flight simulator have been developed and are presented in this report. The lead/lag filter, the McFarland compensator and the Sobiski/Cardullo state space filter are three prominent compensators. The lead/lag filter provides some phase lead, while introducing significant gain distortion in the same frequency interval. The McFarland predictor can compensate for much longer delays and causes smaller gain errors at low frequencies than the lead/lag filter, but the gain distortion beyond the design frequency interval is still significant, and it also causes large spikes in prediction. Though, theoretically, the Sobiski/Cardullo predictor, a state space filter, can compensate for the longest delay with the least gain distortion among the three, it has remained in laboratory use due to several limitations. The first novel compensator is an adaptive predictor that makes use of the Kalman filter algorithm in a unique manner, allowing the predictor to accurately provide the desired amount of prediction while significantly reducing the large spikes caused by the McFarland predictor. Among several simplified online adaptive predictors, this report illustrates mathematically why the stochastic approximation algorithm achieves the best compensation results. A second novel approach employed a reference aircraft dynamics model to implement a state space predictor on a flight simulator. The practical implementation formed the filter state vector from the operator's control input and the aircraft states. The relationship between the reference model and the compensator performance was investigated in great detail, and the best performing reference model was selected for implementation in the final tests. Analyses of data from offline simulations with time delay compensation show that both novel predictors effectively suppress the large spikes caused by the McFarland compensator. The phase errors of the three predictors are not significant. The adaptive predictor yields greater gain errors than the McFarland predictor for short delays (96 and 138 ms), but shows smaller errors for long delays (186 and 282 ms). The advantage of the adaptive predictor becomes more obvious for a longer time delay. Conversely, the state space predictor results in substantially smaller gain error than the other two predictors for all four delay cases.
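
    The report's predictors are not reproduced here, but the basic idea of compensating a known transport delay by propagating a state estimate forward can be sketched as follows. The linear dynamics, the zero-order hold on the control input, and all numbers are illustrative assumptions, not the McFarland or Sobiski/Cardullo formulations.

```python
# Minimal sketch of delay compensation by state-space prediction: propagate the
# current state forward by the known transport delay tau under assumed linear
# dynamics x_dot = A x + B u, holding u constant over the delay.
import numpy as np
from scipy.linalg import expm

def predict_ahead(x, u, A, B, tau):
    """Predict x(t + tau) given x(t), with a zero-order hold on the input u."""
    n = A.shape[0]
    # Discretize [[A, B], [0, 0]] over tau to get Phi (state) and Gamma (input).
    M = np.zeros((n + B.shape[1], n + B.shape[1]))
    M[:n, :n] = A
    M[:n, n:] = B
    Md = expm(M * tau)
    Phi, Gamma = Md[:n, :n], Md[:n, n:]
    return Phi @ x + Gamma @ u

# Example: second-order pitch-like dynamics, 186 ms delay (all values hypothetical)
A = np.array([[0.0, 1.0], [-4.0, -0.8]])
B = np.array([[0.0], [1.0]])
x_now = np.array([0.02, 0.1])      # hypothetical attitude and rate
u_now = np.array([0.05])           # hypothetical stick input
print(predict_ahead(x_now, u_now, A, B, tau=0.186))
```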

  20. Medical errors in primary care clinics – a cross sectional study

    PubMed Central

    2012-01-01

    Background Patient safety is vital in patient care. There is a lack of studies on medical errors in primary care settings. The aim of the study is to determine the extent of diagnostic inaccuracies and management errors in publicly funded primary care clinics. Methods This was a cross-sectional study conducted in twelve publicly funded primary care clinics in Malaysia. A total of 1753 medical records were randomly selected in 12 primary care clinics in 2007 and were reviewed by trained family physicians for diagnostic, management and documentation errors, potential errors causing serious harm and likelihood of preventability of such errors. Results The majority of patient encounters (81%) were with medical assistants. Diagnostic errors were present in 3.6% (95% CI: 2.2, 5.0) of medical records and management errors in 53.2% (95% CI: 46.3, 60.2). For management errors, medication errors were present in 41.1% (95% CI: 35.8, 46.4) of records, investigation errors in 21.7% (95% CI: 16.5, 26.8) and decision making errors in 14.5% (95% CI: 10.8, 18.2). A total of 39.9% (95% CI: 33.1, 46.7) of these errors had the potential to cause serious harm. Problems of documentation including illegible handwriting were found in 98.0% (95% CI: 97.0, 99.1) of records. Nearly all errors (93.5%) detected were considered preventable. Conclusions The occurrence of medical errors was high in primary care clinics, particularly documentation and medication errors. Nearly all were preventable. Remedial interventions addressing the completeness of documentation and prescriptions are likely to yield a reduction in errors. PMID:23267547

  1. CDGPS-Based Relative Navigation for Multiple Spacecraft

    NASA Technical Reports Server (NTRS)

    Mitchell, Megan Leigh

    2004-01-01

    This thesis investigates the use of Carrier-phase Differential GPS (CDGPS) in relative navigation filters for formation flying spacecraft. This work analyzes the relationship between the Extended Kalman Filter (EKF) design parameters and the resulting estimation accuracies, and in particular, the effect of the process and measurement noises on the semimajor axis error. This analysis clearly demonstrates that CDGPS-based relative navigation Kalman filters yield good estimation performance without satisfying the strong correlation property that previous work had associated with "good" navigation filters. Several examples are presented to show that the Kalman filter can be forced to create solutions with stronger correlations, but these always result in larger semimajor axis errors. These linear and nonlinear simulations also demonstrated the crucial role of the process noise in determining the semimajor axis knowledge. More sophisticated nonlinear models were included to reduce the propagation error in the estimator, but for long time steps and large separations, the EKF, which only uses a linearized covariance propagation, yielded very poor performance. In contrast, the CDGPS-based Unscented Kalman relative navigation Filter (UKF) handled the dynamic and measurement nonlinearities much better and yielded far superior performance to the EKF. The UKF produced good estimates for scenarios with long baselines and time steps for which the EKF would diverge rapidly. A hardware-in-the-loop testbed that is compatible with the Spirent Simulator at NASA GSFC was developed to provide a very flexible and robust capability for demonstrating CDGPS technologies in closed-loop. This extended previous work to implement the decentralized relative navigation algorithms in real time.
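
    For readers unfamiliar with why the UKF tolerates nonlinearities that degrade the EKF, the sketch below shows the unscented transform that propagates a mean and covariance through a nonlinear function using sigma points. The function and numbers are illustrative only and are not the CDGPS relative-navigation dynamics from the thesis.

```python
# Rough sketch of the unscented transform at the heart of a UKF; an EKF would
# instead linearize f about the mean, which is what degrades for large
# separations and long time steps.
import numpy as np

def unscented_transform(f, x, P, kappa=1.0):
    """Propagate mean x and covariance P through nonlinearity f via sigma points."""
    n = len(x)
    S = np.linalg.cholesky((n + kappa) * P)
    sigmas = np.vstack([x, x + S.T, x - S.T])           # 2n + 1 sigma points
    w = np.full(2 * n + 1, 1.0 / (2 * (n + kappa)))
    w[0] = kappa / (n + kappa)
    y = np.array([f(s) for s in sigmas])
    y_mean = w @ y
    y_cov = sum(wi * np.outer(yi - y_mean, yi - y_mean) for wi, yi in zip(w, y))
    return y_mean, y_cov

# Example: a mildly nonlinear range-like measurement of a 2-state vector
f = lambda s: np.array([np.hypot(s[0], s[1])])
x = np.array([10.0, 2.0])
P = np.diag([0.5, 0.5])
print(unscented_transform(f, x, P))
```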

  2. Longitudinal data analysis of polymorphisms in the κ-casein and β-lactoglobulin genes shows differential effects along the trajectory of the lactation curve in tropical dairy goats.

    PubMed

    Cardona, Samir Julián Calvo; Cadavid, Henry Cardona; Corrales, Juan David; Munilla, Sebastián; Cantet, Rodolfo J C; Rogberg-Muñoz, Andrés

    2016-09-01

    The κ-casein (CSN-3) and β-lactoglobulin (BLG) genes are extensively polymorphic in ruminants. Several association studies have estimated the effects of polymorphisms in these genes on milk yield, milk composition, and cheese-manufacturing properties. Usually, these results are based on production integrated over the lactation curve or on cross-sectional studies at specific days in milk (DIM). However, as differential expression of milk protein genes occurs over lactation, the effect of the polymorphisms may change over time. In this study, we fitted a mixed-effects regression model to test-day records of milk yield and milk quality traits (fat, protein, and total solids yields) from Colombian tropical dairy goats. We used the well-characterized A/B polymorphisms in the CSN-3 and BLG genes. We argued that this approach provided more efficient estimators than cross-sectional designs, given the same number and pattern of observations, and allowed exclusion of between-subject variation from model error. The BLG genotype AA showed a greater performance than the BB genotype for all traits along the whole lactation curve, whereas the heterozygote showed an intermediate performance. We observed no such constant pattern for the CSN-3 gene between the AA homozygote and the heterozygote (the BB genotype was absent from the sample). The differences among the genotypic effects of the BLG and the CSN-3 polymorphisms were statistically significant during peak and mid lactation (around 40-160 DIM) for the BLG gene and only for mid lactation (80-145 DIM) for the CSN-3 gene. We also estimated the additive and dominant effects of the BLG locus. The locus showed a statistically significant additive behavior along the whole lactation trajectory for all quality traits, whereas for milk yield the effect was not significant at later stages. In turn, we detected a statistically significant dominance effect only for fat yield in the early and peak stages of lactation (at about 1-45 DIM). The longitudinal analysis of test-day records allowed us to estimate the differential effects of polymorphisms along the lactation curve, pointing toward stages that could be affected by the gene. Copyright © 2016 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
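
    A longitudinal test-day analysis of the kind described above can be sketched with a random-intercept mixed model. The snippet below uses statsmodels on synthetic data; the column names (goat_id, dim, genotype, milk_kg) and effect sizes are hypothetical and are not the study's data or its exact model.

```python
# Minimal sketch of a test-day mixed-effects model: fixed effects for genotype and
# days in milk, plus a random intercept per goat that absorbs between-subject
# variation (as argued for in the record above). Data are synthetic.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_goats, n_tests = 40, 8
df = pd.DataFrame({
    "goat_id": np.repeat(np.arange(n_goats), n_tests),
    "dim": np.tile(np.linspace(10, 250, n_tests), n_goats),          # days in milk
    "genotype": np.repeat(rng.choice(["AA", "AB", "BB"], n_goats), n_tests),
})
goat_effect = np.repeat(rng.normal(0, 0.3, n_goats), n_tests)
df["milk_kg"] = (2.5 + 0.2 * (df["genotype"] == "AA")
                 - 0.004 * df["dim"] + goat_effect
                 + rng.normal(0, 0.2, len(df)))

model = smf.mixedlm("milk_kg ~ genotype + dim", df, groups=df["goat_id"])
print(model.fit().summary())
```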

  3. Ensemble-type numerical uncertainty information from single model integrations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rauser, Florian, E-mail: florian.rauser@mpimet.mpg.de; Marotzke, Jochem; Korn, Peter

    2015-07-01

    We suggest an algorithm that quantifies the discretization error of time-dependent physical quantities of interest (goals) for numerical models of geophysical fluid dynamics. The goal discretization error is estimated using a sum of weighted local discretization errors. The key feature of our algorithm is that these local discretization errors are interpreted as realizations of a random process. The random process is determined by the model and the flow state. From a class of local error random processes we select a suitable specific random process by integrating the model over a short time interval at different resolutions. The weights of the influences of the local discretization errors on the goal are modeled as goal sensitivities, which are calculated via automatic differentiation. The integration of the weighted realizations of local error random processes yields a posterior ensemble of goal approximations from a single run of the numerical model. From the posterior ensemble we derive the uncertainty information of the goal discretization error. This algorithm bypasses the requirement of detailed knowledge about the model's discretization to generate numerical error estimates. The algorithm is evaluated for the spherical shallow-water equations. For two standard test cases we successfully estimate the error of regional potential energy, track its evolution, and compare it to standard ensemble techniques. The posterior ensemble shares linear-error-growth properties with ensembles of multiple model integrations when comparably perturbed. The posterior ensemble numerical error estimates are of comparable size to those of a stochastic physics ensemble.

  4. Error reduction in EMG signal decomposition

    PubMed Central

    Kline, Joshua C.

    2014-01-01

    Decomposition of the electromyographic (EMG) signal into constituent action potentials and the identification of individual firing instances of each motor unit in the presence of ambient noise are inherently probabilistic processes, whether performed manually or with automated algorithms. Consequently, they are subject to errors. We set out to classify and reduce these errors by analyzing 1,061 motor-unit action-potential trains (MUAPTs), obtained by decomposing surface EMG (sEMG) signals recorded during human voluntary contractions. Decomposition errors were classified into two general categories: location errors representing variability in the temporal localization of each motor-unit firing instance and identification errors consisting of falsely detected or missed firing instances. To mitigate these errors, we developed an error-reduction algorithm that combines multiple decomposition estimates to determine a more probable estimate of motor-unit firing instances with fewer errors. The performance of the algorithm is governed by a trade-off between the yield of MUAPTs obtained above a given accuracy level and the time required to perform the decomposition. When the algorithm was applied to a set of sEMG signals synthesized from real MUAPTs, the identification error was reduced by an average of 1.78%, improving the accuracy to 97.0%, and the location error was reduced by an average of 1.66 ms. The error-reduction algorithm in this study is not limited to any specific decomposition strategy. Rather, we propose it be used for other decomposition methods, especially when analyzing precise motor-unit firing instances, as occurs when measuring synchronization. PMID:25210159
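
    The combination step can be illustrated with a toy consensus rule: pool firing-time estimates from several decompositions and keep only instances confirmed by a majority within a small tolerance window. The tolerance, the voting rule, and the data below are assumptions, not the authors' algorithm.

```python
# Toy sketch of combining multiple decomposition estimates of one motor unit's
# firing times (ms) into a consensus train with fewer false/missed instances.
import numpy as np

def consensus_firings(estimates, tol_ms=2.0, min_votes=2):
    """Merge several lists of firing times (ms) into a consensus train."""
    pooled = np.sort(np.concatenate(estimates))
    consensus, used = [], np.zeros(pooled.size, dtype=bool)
    for i, t in enumerate(pooled):
        if used[i]:
            continue
        cluster = np.abs(pooled - t) <= tol_ms       # firings agreeing within tol
        if cluster.sum() >= min_votes:
            consensus.append(pooled[cluster].mean())
        used |= cluster
    return np.array(consensus)

est_a = [100.0, 151.0, 249.5, 310.0]                 # three hypothetical decompositions
est_b = [99.0, 150.0, 250.5]
est_c = [101.0, 250.0, 309.0, 402.0]
print(consensus_firings([est_a, est_b, est_c]))      # 402.0 is dropped (single vote)
```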

  5. Quantifying Errors in TRMM-Based Multi-Sensor QPE Products Over Land in Preparation for GPM

    NASA Technical Reports Server (NTRS)

    Peters-Lidard, Christa D.; Tian, Yudong

    2011-01-01

    Determining uncertainties in satellite-based multi-sensor quantitative precipitation estimates over land is of fundamental importance to both data producers and hydroclimatological applications. Evaluating TRMM-era products also lays the groundwork and sets the direction for algorithm and applications development for future missions, including GPM. QPE uncertainties result mostly from the interplay of systematic errors and random errors. In this work, we will synthesize our recent results quantifying the error characteristics of satellite-based precipitation estimates. Both systematic errors and total uncertainties have been analyzed for six different TRMM-era precipitation products (3B42, 3B42RT, CMORPH, PERSIANN, NRL and GSMap). For systematic errors, we devised an error decomposition scheme to separate errors in precipitation estimates into three independent components: hit biases, missed precipitation and false precipitation. This decomposition scheme reveals hydroclimatologically-relevant error features and provides a better link to the error sources than conventional analysis, because in the latter these error components tend to cancel one another when aggregated or averaged in space or time. For the random errors, we calculated the measurement spread from the ensemble of these six quasi-independent products, and thus produced a global map of measurement uncertainties. The map yields a global view of the error characteristics and their regional and seasonal variations, reveals many undocumented error features over areas with no validation data available, and provides better guidance to global assimilation of satellite-based precipitation data. Insights gained from these results and how they could help with GPM will be highlighted.
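
    The hit/miss/false decomposition lends itself to a compact sketch: given co-located satellite and reference rain series, split the total bias into a hit bias, missed precipitation, and false precipitation. The rain/no-rain threshold and the bookkeeping below are assumptions, not the exact scheme used in the study.

```python
# Sketch of decomposing satellite-minus-reference precipitation bias into hit
# bias, missed precipitation, and false precipitation components.
import numpy as np

def decompose_bias(sat, ref, rain_thresh=0.1):
    """Split the bias into hit bias, missed precipitation, and false precipitation;
    drizzle below the rain/no-rain threshold is ignored in this toy version."""
    sat, ref = np.asarray(sat, float), np.asarray(ref, float)
    sat_rain, ref_rain = sat > rain_thresh, ref > rain_thresh
    hit_bias = np.sum(sat[sat_rain & ref_rain] - ref[sat_rain & ref_rain])
    missed = -np.sum(ref[~sat_rain & ref_rain])    # rain the satellite failed to see
    false_p = np.sum(sat[sat_rain & ~ref_rain])    # rain reported where there was none
    return hit_bias, missed, false_p

# Toy example: four co-located accumulations (mm)
print(decompose_bias([0.0, 2.0, 5.0, 0.3], [0.5, 1.0, 0.0, 0.3]))
```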

  6. Restoration of the Patient-Specific Anatomy of the Proximal and Distal Parts of the Humerus: Statistical Shape Modeling Versus Contralateral Registration Method.

    PubMed

    Vlachopoulos, Lazaros; Lüthi, Marcel; Carrillo, Fabio; Gerber, Christian; Székely, Gábor; Fürnstahl, Philipp

    2018-04-18

    In computer-assisted reconstructive surgeries, the contralateral anatomy is established as the best available reconstruction template. However, existing intra-individual bilateral differences or a pathological, contralateral humerus may limit the applicability of the method. The aim of the study was to evaluate whether a statistical shape model (SSM) has the potential to predict accurately the pretraumatic anatomy of the humerus from the posttraumatic condition. Three-dimensional (3D) triangular surface models were extracted from the computed tomographic data of 100 paired cadaveric humeri without a pathological condition. An SSM was constructed, encoding the characteristic shape variations among the individuals. To predict the patient-specific anatomy of the proximal (or distal) part of the humerus with the SSM, we generated segments of the humerus of predefined length excluding the part to predict. The proximal and distal humeral prediction (p-HP and d-HP) errors, defined as the deviation of the predicted (bone) model from the original (bone) model, were evaluated. For comparison with the state-of-the-art technique, i.e., the contralateral registration method, we used the same segments of the humerus to evaluate whether the SSM or the contralateral anatomy yields a more accurate reconstruction template. The p-HP error (mean and standard deviation, 3.8° ± 1.9°) using 85% of the distal end of the humerus to predict the proximal humeral anatomy was significantly smaller (p = 0.001) compared with the contralateral registration method. The difference between the d-HP error (mean, 5.5° ± 2.9°), using 85% of the proximal part of the humerus to predict the distal humeral anatomy, and the contralateral registration method was not significant (p = 0.61). The restoration of the humeral length was not significantly different between the SSM and the contralateral registration method. SSMs accurately predict the patient-specific anatomy of the proximal and distal aspects of the humerus. The prediction errors of the SSM depend on the size of the healthy part of the humerus. The prediction of the patient-specific anatomy of the humerus is of fundamental importance for computer-assisted reconstructive surgeries.

  7. The effect of flow data resolution on sediment yield estimation and channel design

    NASA Astrophysics Data System (ADS)

    Rosburg, Tyler T.; Nelson, Peter A.; Sholtes, Joel S.; Bledsoe, Brian P.

    2016-07-01

    The decision to use either daily-averaged or sub-daily streamflow records has the potential to impact the calculation of sediment transport metrics and stream channel design. Using bedload and suspended load sediment transport measurements collected at 138 sites across the United States, we calculated the effective discharge, sediment yield, and half-load discharge using sediment rating curves over long time periods (median record length = 24 years) with both daily-averaged and sub-daily streamflow records. A comparison of sediment transport metrics calculated with both daily-average and sub-daily stream flow data at each site showed that daily-averaged flow data do not adequately represent the magnitude of high stream flows at hydrologically flashy sites. Daily-average stream flow data cause an underestimation of sediment transport and sediment yield (including the half-load discharge) at flashy sites. The degree of underestimation was correlated with the level of flashiness and the exponent of the sediment rating curve. No consistent relationship between the use of either daily-average or sub-daily streamflow data and the resultant effective discharge was found. When used in channel design, computed sediment transport metrics may have errors due to flow data resolution; these errors can propagate into design slope calculations and, if implemented, could lead to unwanted aggradation or degradation in the design channel. This analysis illustrates the importance of using sub-daily flow data in the calculation of sediment yield in urbanizing or otherwise flashy watersheds. Furthermore, this analysis provides practical charts for estimating and correcting these types of underestimation errors commonly incurred in sediment yield calculations.
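
    The underestimation mechanism follows from the convexity of a power-law rating curve Qs = a*Q^b with b > 1: averaging flows before applying the curve lowers the computed load. The sketch below illustrates this with a synthetic flashy hydrograph; the rating coefficients and the hydrograph are made up, not taken from the 138 study sites.

```python
# Why daily averaging underestimates sediment yield at flashy sites: for a convex
# rating curve Qs = a * Q**b (b > 1), mean(Q)**b < mean(Q**b).
import numpy as np

a, b = 1e-4, 2.0                                  # hypothetical rating coefficients
hours = np.arange(0, 24 * 30)                     # one month of hourly flows
base = 20.0 + 5.0 * np.sin(2 * np.pi * hours / (24 * 15))
storms = 150.0 * (np.random.default_rng(1).random(hours.size) > 0.99)  # short spikes
q_hourly = base + storms                          # flashy hourly hydrograph, m^3/s

qs_subdaily = a * q_hourly ** b                   # load from sub-daily flows
q_daily = q_hourly.reshape(-1, 24).mean(axis=1)   # daily-averaged flows
qs_daily = np.repeat(a * q_daily ** b, 24)        # load computed from daily means

print(f"underestimation from daily averaging: "
      f"{100 * (1 - qs_daily.sum() / qs_subdaily.sum()):.1f}%")
```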

  8. Context Specificity of Post-Error and Post-Conflict Cognitive Control Adjustments

    PubMed Central

    Forster, Sarah E.; Cho, Raymond Y.

    2014-01-01

    There has been accumulating evidence that cognitive control can be adaptively regulated by monitoring for processing conflict as an index of online control demands. However, it is not yet known whether top-down control mechanisms respond to processing conflict in a manner specific to the operative task context or confer a more generalized benefit. While previous studies have examined the taskset-specificity of conflict adaptation effects, yielding inconsistent results, control-related performance adjustments following errors have been largely overlooked. This gap in the literature underscores recent debate as to whether post-error performance represents a strategic, control-mediated mechanism or a nonstrategic consequence of attentional orienting. In the present study, evidence of generalized control following both high conflict correct trials and errors was explored in a task-switching paradigm. Conflict adaptation effects were not found to generalize across tasksets, despite a shared response set. In contrast, post-error slowing effects were found to extend to the inactive taskset and were predictive of enhanced post-error accuracy. In addition, post-error performance adjustments were found to persist for several trials and across multiple task switches, a finding inconsistent with attentional orienting accounts of post-error slowing. These findings indicate that error-related control adjustments confer a generalized performance benefit and suggest dissociable mechanisms of post-conflict and post-error control. PMID:24603900

  9. Assessing the use of immersive virtual reality, mouse and touchscreen in pointing and dragging-and-dropping tasks among young, middle-aged and older adults.

    PubMed

    Chen, Jiayin; Or, Calvin

    2017-11-01

    This study assessed the use of an immersive virtual reality (VR) interface, a mouse and a touchscreen for one-directional pointing, multi-directional pointing, and dragging-and-dropping tasks involving targets of smaller and larger widths by young (n = 18; 18-30 years), middle-aged (n = 18; 40-55 years) and older adults (n = 18; 65-75 years). A three-way, mixed-factorial design was used for data collection. The dependent variables were the movement time required and the error rate. Our main findings were that the participants took more time and made more errors in using the VR input interface than in using the mouse or the touchscreen. This pattern applied in all three age groups in all tasks, except for multi-directional pointing with a larger target width among the older group. Overall, older adults took longer to complete the tasks and made more errors than young or middle-aged adults. Larger target widths yielded shorter movement times and lower error rates in pointing tasks, but larger targets yielded higher rates of error in dragging-and-dropping tasks. Our study indicated that virtual environments similar to the one we tested may be more suitable for displaying scenes than for manipulating objects that are small and require fine control. Although interacting with VR is relatively difficult, especially for older adults, there is still potential for them to adapt to that interface. Furthermore, adjusting the width of objects according to the type of manipulation required might be an effective way to promote performance. Copyright © 2017 Elsevier Ltd. All rights reserved.

  10. MkMRCC, APUCC and APUBD approaches to 1,n-didehydropolyene diradicals: the nature of through-bond exchange interactions

    NASA Astrophysics Data System (ADS)

    Nishihara, Satomichi; Saito, Toru; Yamanaka, Shusuke; Kitagawa, Yasutaka; Kawakami, Takashi; Okumura, Mitsutaka; Yamaguchi, Kizashi

    2010-10-01

    Mukherjee-type (Mk) state-specific (SS) multi-reference (MR) coupled-cluster (CC) calculations of 1,n-didehydropolyene diradicals were carried out to elucidate singlet-triplet energy gaps arising from through-bond coupling between terminal radicals. Spin-unrestricted Hartree-Fock (UHF) based CC computations of these diradicals were also performed. Comparison between symmetry-adapted MkMRCC and broken-symmetry (BS) UHF-CC results indicated that the spin-contamination error of the UHF-CC solutions persisted at the SD level, although this error had been thought to be negligible for the CC scheme in general. In order to eliminate the spin-contamination error, the approximate spin-projection (AP) scheme was applied to UCC, and the AP procedure indeed eliminated the error, yielding good agreement with MRCC energies. CCD with spin-unrestricted Brueckner orbitals (UB) was also employed for these polyene diradicals, showing that the large spin-contamination errors of the UHF solutions are dramatically reduced, so that the AP scheme for UBD easily removed the residual spin contamination. Pure and hybrid density functional theory (DFT) calculations of the species were also performed. Three different computational schemes for the total spin angular momenta were examined for the AP correction of the hybrid DFT. The AP DFT calculations yielded singlet-triplet energy gaps in good agreement with those of MRCC, AP UHF-CC and AP UB-CC. Chemical indices such as the diradical character were calculated with all these methods. Implications of the present computational results are discussed in relation to previous RMRCC calculations of diradical species and BS calculations of large exchange-coupled systems.
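
    For reference, the approximate spin-projection correction referred to above is commonly quoted in the Yamaguchi form below, where BS and T denote the broken-symmetry (low-spin) and triplet (high-spin) solutions; the exact variant and sign conventions used in the study may differ.

```latex
% Commonly quoted (Yamaguchi-type) approximate spin projection; conventions vary.
% With the Heisenberg Hamiltonian H = -2 J S_1 . S_2, the singlet-triplet gap of a
% two-spin-1/2 system is E_S - E_T = 2J.
\begin{equation}
  J_{\mathrm{AP}} \;=\;
  \frac{E_{\mathrm{BS}} - E_{\mathrm{T}}}
       {\langle \hat{S}^2 \rangle_{\mathrm{T}} - \langle \hat{S}^2 \rangle_{\mathrm{BS}}}
\end{equation}
```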

  11. Interactions of task and subject variables among continuous performance tests.

    PubMed

    Denney, Colin B; Rapport, Mark D; Chung, Kyong-Mee

    2005-04-01

    Contemporary models of working memory suggest that target paradigm (TP) and target density (TD) should interact as influences on error rates derived from continuous performance tests (CPTs). The present study evaluated this hypothesis empirically in a typically developing, ethnically diverse sample of children. The extent to which scores based on different combinations of these task parameters showed different patterns of relationship to age, intelligence, and gender was also assessed. Four continuous performance tests were derived by combining two target paradigms (AX and repeated letter target stimuli) with two levels of target density (8.3% and 33%). Variations in mean omission (OE) and commission (CE) error rates were examined within and across combinations of TP and TD. In addition, a nested series of structural equation models was utilized to examine patterns of relationship among error rates, age, intelligence, and gender. Target paradigm and target density interacted as influences on error rates. Increasing density resulted in higher OE and CE rates for the AX paradigm. In contrast, the high density condition yielded a decline in OE rates accompanied by a small increase in CEs using the repeated letter CPT. Target paradigms were also distinguishable on the basis of age when using OEs as the performance measure, whereas combinations of age and intelligence distinguished between density levels but not target paradigms using CEs as the dependent measure. Different combinations of target paradigm and target density appear to yield scores that are conceptually and psychometrically distinguishable. Consequently, developmentally appropriate interpretation of error rates across tasks may require (a) careful analysis of working memory and attentional resources required for successful performance, and (b) normative data bases that are differently stratified with respect to combinations of age and intelligence.

  12. Estimating pole/zero errors in GSN-IRIS/USGS network calibration metadata

    USGS Publications Warehouse

    Ringler, A.T.; Hutt, C.R.; Aster, R.; Bolton, H.; Gee, L.S.; Storm, T.

    2012-01-01

    Mapping the digital record of a seismograph into true ground motion requires the correction of the data by some description of the instrument's response. For the Global Seismographic Network (Butler et al., 2004), as well as many other networks, this instrument response is represented as a Laplace domain pole–zero model and published in the Standard for the Exchange of Earthquake Data (SEED) format. This Laplace representation assumes that the seismometer behaves as a linear system, with any abrupt changes described adequately via multiple time-invariant epochs. The SEED format allows for published instrument response errors as well, but these typically have not been estimated or provided to users. We present an iterative three-step method to estimate the instrument response parameters (poles and zeros) and their associated errors using random calibration signals. First, we solve a coarse nonlinear inverse problem using a least-squares grid search to yield a first approximation to the solution. This approach reduces the likelihood of poorly estimated parameters (a local-minimum solution) caused by noise in the calibration records and enhances algorithm convergence. Second, we iteratively solve a nonlinear parameter estimation problem to obtain the least-squares best-fit Laplace pole–zero–gain model. Third, by applying the central limit theorem, we estimate the errors in this pole–zero model by solving the inverse problem at each frequency in a two-thirds octave band centered at each best-fit pole–zero frequency. This procedure yields error estimates of the 99% confidence interval. We demonstrate the method by applying it to a number of recent Incorporated Research Institutions in Seismology/United States Geological Survey (IRIS/USGS) network calibrations (network code IU).
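
    Step two of the procedure, the nonlinear least-squares fit of a pole-zero-gain model to a measured response, can be sketched with scipy as below. The two-pole/two-zero structure, the synthetic "instrument", and the starting values are all assumptions for illustration, not the IRIS/USGS calibration code.

```python
# Sketch of fitting a Laplace-domain pole-zero-gain model to a measured frequency
# response with nonlinear least squares (real and imaginary parts as residuals).
import numpy as np
from scipy.optimize import least_squares

def response(params, freqs):
    """Conjugate pole pair, conjugate zero pair, and gain, evaluated at s = i*2*pi*f."""
    gain, p1r, p1i, z1r, z1i = params
    s = 2j * np.pi * freqs
    poles = np.array([p1r + 1j * p1i, p1r - 1j * p1i])
    zeros = np.array([z1r + 1j * z1i, z1r - 1j * z1i])
    num = np.prod(s[:, None] - zeros, axis=1)
    den = np.prod(s[:, None] - poles, axis=1)
    return gain * num / den

def residuals(params, freqs, measured):
    r = response(params, freqs) - measured
    return np.concatenate([r.real, r.imag])

freqs = np.logspace(-3, 1, 200)
true = np.array([3.0, -0.037, 0.037, 0.0, 0.0])          # hypothetical instrument
noise = 1 + 0.01 * np.random.default_rng(2).normal(size=freqs.size)
measured = response(true, freqs) * noise                  # synthetic calibration data
fit = least_squares(residuals, x0=[1.0, -0.05, 0.05, 0.001, 0.001],
                    args=(freqs, measured))
print(fit.x)                                              # recovered gain, pole, zero
```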

  13. Analysis of operator splitting errors for near-limit flame simulations

    NASA Astrophysics Data System (ADS)

    Lu, Zhen; Zhou, Hua; Li, Shan; Ren, Zhuyin; Lu, Tianfeng; Law, Chung K.

    2017-04-01

    High-fidelity simulations of ignition, extinction and oscillatory combustion processes are of practical interest in a broad range of combustion applications. Splitting schemes, widely employed in reactive flow simulations, could fail for stiff reaction-diffusion systems exhibiting near-limit flame phenomena. The present work first employs a model perfectly stirred reactor (PSR) problem with an Arrhenius reaction term and a linear mixing term to study the effects of splitting errors on the near-limit combustion phenomena. Analysis shows that the errors induced by decoupling of the fractional steps may result in unphysical extinction or ignition. The analysis is then extended to the prediction of ignition, extinction and oscillatory combustion in unsteady PSRs of various fuel/air mixtures with a 9-species detailed mechanism for hydrogen oxidation and an 88-species skeletal mechanism for n-heptane oxidation, together with a Jacobian-based analysis for the time scales. The tested schemes include the Strang splitting, the balanced splitting, and a newly developed semi-implicit midpoint method. Results show that the semi-implicit midpoint method can accurately reproduce the dynamics of the near-limit flame phenomena and it is second-order accurate over a wide range of time step size. For the extinction and ignition processes, both the balanced splitting and midpoint method can yield accurate predictions, whereas the Strang splitting can lead to significant shifts on the ignition/extinction processes or even unphysical results. With an enriched H radical source in the inflow stream, a delay of the ignition process and the deviation on the equilibrium temperature are observed for the Strang splitting. On the contrary, the midpoint method that solves reaction and diffusion together matches the fully implicit accurate solution. The balanced splitting predicts the temperature rise correctly but with an over-predicted peak. For the sustainable and decaying oscillatory combustion from cool flames, both the Strang splitting and the midpoint method can successfully capture the dynamic behavior, whereas the balanced splitting scheme results in significant errors.
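
    A toy version of the PSR test problem makes the splitting idea concrete: advance the linear mixing term and the Arrhenius reaction term in separate sub-steps arranged symmetrically (Strang splitting). The nondimensional model and parameters below are illustrative, not the paper's configuration, and near-limit conditions are exactly where such decoupling can fail.

```python
# Toy perfectly stirred reactor with a linear mixing term and an Arrhenius source,
# advanced with Strang splitting: half mixing step, full reaction step, half mixing step.
import numpy as np
from scipy.integrate import solve_ivp

def react(T, dt, A=2.0e3, Ta=8.0):
    """Integrate the stiff reaction sub-step dT/dt = A*exp(-Ta/T)*(1 - T) over dt."""
    sol = solve_ivp(lambda t, y: A * np.exp(-Ta / y[0]) * (1.0 - y[0]),
                    (0.0, dt), [T], method="BDF")
    return sol.y[0, -1]

def mix(T, dt, T_in=0.15, tau=1.0):
    """Exact solution of the linear mixing sub-step dT/dt = (T_in - T)/tau."""
    return T_in + (T - T_in) * np.exp(-dt / tau)

def strang_step(T, dt):
    T = mix(T, 0.5 * dt)
    T = react(T, dt)
    return mix(T, 0.5 * dt)

T, dt = 0.9, 0.05                  # nondimensional temperature and step size
for _ in range(200):
    T = strang_step(T, dt)
print(f"temperature after 200 steps: {T:.4f}")
```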

  14. Analysis of operator splitting errors for near-limit flame simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lu, Zhen; Zhou, Hua; Li, Shan

    High-fidelity simulations of ignition, extinction and oscillatory combustion processes are of practical interest in a broad range of combustion applications. Splitting schemes, widely employed in reactive flow simulations, could fail for stiff reaction–diffusion systems exhibiting near-limit flame phenomena. The present work first employs a model perfectly stirred reactor (PSR) problem with an Arrhenius reaction term and a linear mixing term to study the effects of splitting errors on the near-limit combustion phenomena. Analysis shows that the errors induced by decoupling of the fractional steps may result in unphysical extinction or ignition. The analysis is then extended to the prediction of ignition, extinction and oscillatory combustion in unsteady PSRs of various fuel/air mixtures with a 9-species detailed mechanism for hydrogen oxidation and an 88-species skeletal mechanism for n-heptane oxidation, together with a Jacobian-based analysis for the time scales. The tested schemes include the Strang splitting, the balanced splitting, and a newly developed semi-implicit midpoint method. Results show that the semi-implicit midpoint method can accurately reproduce the dynamics of the near-limit flame phenomena and it is second-order accurate over a wide range of time step size. For the extinction and ignition processes, both the balanced splitting and midpoint method can yield accurate predictions, whereas the Strang splitting can lead to significant shifts on the ignition/extinction processes or even unphysical results. With an enriched H radical source in the inflow stream, a delay of the ignition process and the deviation on the equilibrium temperature are observed for the Strang splitting. On the contrary, the midpoint method that solves reaction and diffusion together matches the fully implicit accurate solution. The balanced splitting predicts the temperature rise correctly but with an over-predicted peak. For the sustainable and decaying oscillatory combustion from cool flames, both the Strang splitting and the midpoint method can successfully capture the dynamic behavior, whereas the balanced splitting scheme results in significant errors.

  15. Disrupted Prediction Error Links Excessive Amygdala Activation to Excessive Fear.

    PubMed

    Sengupta, Auntora; Winters, Bryony; Bagley, Elena E; McNally, Gavan P

    2016-01-13

    Basolateral amygdala (BLA) is critical for fear learning, and its heightened activation is widely thought to underpin a variety of anxiety disorders. Here we used chemogenetic techniques in rats to study the consequences of heightened BLA activation for fear learning and memory, and to specifically identify a mechanism linking increased activity of BLA glutamatergic neurons to aberrant fear. We expressed the excitatory hM3Dq DREADD in rat BLA glutamatergic neurons and showed that CNO acted selectively to increase their activity, depolarizing these neurons and increasing their firing rates. This chemogenetic excitation of BLA glutamatergic neurons had no effect on the acquisition of simple fear learning, regardless of whether this learning led to a weak or strong fear memory. However, in an associative blocking task, chemogenetic excitation of BLA glutamatergic neurons yielded significant learning to a blocked conditioned stimulus, which otherwise should not have been learned about. Moreover, in an overexpectation task, chemogenetic manipulation of BLA glutamatergic neurons prevented use of negative prediction error to reduce fear learning, leading to significant impairments in fear inhibition. These effects were not attributable to the chemogenetic manipulation enhancing arousal, increasing asymptotic levels of fear learning or fear memory consolidation. Instead, chemogenetic excitation of BLA glutamatergic neurons disrupted use of prediction error to regulate fear learning. Several neuropsychiatric disorders are characterized by heightened activation of the amygdala. This heightened activation has been hypothesized to underlie increased emotional reactivity, fear overgeneralization, and deficits in fear inhibition. Yet the mechanisms linking heightened amygdala activation to heightened emotional learning are elusive. Here we combined chemogenetic excitation of rat basolateral amygdala glutamatergic neurons with a variety of behavioral approaches to show that, although simple fear learning is unaffected, the use of prediction error to regulate this learning is profoundly disrupted, leading to formation of inappropriate fear associations and impaired fear inhibition. Copyright © 2016 the authors 0270-6474/16/360385-11$15.00/0.

  16. EEG-based decoding of error-related brain activity in a real-world driving task

    NASA Astrophysics Data System (ADS)

    Zhang, H.; Chavarriaga, R.; Khaliliardali, Z.; Gheorghe, L.; Iturrate, I.; Millán, J. d. R.

    2015-12-01

    Objectives. Recent studies have started to explore the implementation of brain-computer interfaces (BCI) as part of driving assistant systems. The current study presents an EEG-based BCI that decodes error-related brain activity. Such information can be used, e.g., to predict the driver's intended turning direction before reaching road intersections. Approach. We executed experiments in a car simulator (N = 22) and a real car (N = 8). While the subject was driving, a directional cue was shown before reaching an intersection, and we classified the presence or absence of error-related potentials from EEG to infer whether the cued direction coincided with the subject's intention. In this protocol, the directional cue can correspond to an estimation of the driving direction provided by a driving assistance system. We analyzed ERPs elicited during normal driving and evaluated the classification performance in both offline and online tests. Results. An average classification accuracy of 0.698 ± 0.065 was obtained in offline experiments in the car simulator, while tests in the real car yielded a performance of 0.682 ± 0.059. The results were significantly higher than chance level for all cases. Online experiments led to equivalent performances in both simulated and real car driving experiments. These results support the feasibility of decoding these signals to help estimating whether the driver's intention coincides with the advice provided by the driving assistant in a real car. Significance. The study demonstrates a BCI system in real-world driving, extending the work from previous simulated studies. As far as we know, this is the first online study decoding the driver's error-related brain activity in a real car. Given the encouraging results, the paradigm could be further improved by using more sophisticated machine learning approaches and possibly be combined with applications in intelligent vehicles.

  17. Bayesian analysis of input uncertainty in hydrological modeling: 2. Application

    NASA Astrophysics Data System (ADS)

    Kavetski, Dmitri; Kuczera, George; Franks, Stewart W.

    2006-03-01

    The Bayesian total error analysis (BATEA) methodology directly addresses both input and output errors in hydrological modeling, requiring the modeler to make explicit, rather than implicit, assumptions about the likely extent of data uncertainty. This study considers a BATEA assessment of two North American catchments: (1) the French Broad River and (2) the Potomac basins. It assesses the performance of the conceptual Variable Infiltration Capacity (VIC) model with and without accounting for input (precipitation) uncertainty. The results show the considerable effects of precipitation errors on the predicted hydrographs (especially the prediction limits) and on the calibrated parameters. In addition, the performance of BATEA in the presence of severe model errors is analyzed. While BATEA allows a very direct treatment of input uncertainty and yields some limited insight into model errors, it requires the specification of valid error models, which are currently poorly understood and require further work. Moreover, it leads to computationally challenging, high-dimensional problems. For some types of models, including the VIC implemented using robust numerical methods, the computational cost of BATEA can be reduced using Newton-type methods.

  18. Fixing Stellarator Magnetic Surfaces

    NASA Astrophysics Data System (ADS)

    Hanson, James D.

    1999-11-01

    Magnetic surfaces are a perennial issue for stellarators. The design heuristic of finding a magnetic field with zero perpendicular component on a specified outer surface often yields inner magnetic surfaces with very small resonant islands. However, magnetic fields in the laboratory are not design fields. Island-causing errors can arise from coil placement errors, stray external fields, and design inadequacies such as ignoring coil leads and incomplete characterization of current distributions within the coil pack. The problem addressed is how to eliminate such error-caused islands. I take a perturbation approach, where the zero order field is assumed to have good magnetic surfaces, and comes from a VMEC equilibrium. The perturbation field consists of error and correction pieces. The error correction method is to determine the correction field so that the sum of the error and correction fields gives zero island size at specified rational surfaces. It is particularly important to correctly calculate the island size for a given perturbation field. The method works well with many correction knobs, and a Singular Value Decomposition (SVD) technique is used to determine minimal corrections necessary to eliminate islands.
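
    The SVD step can be sketched as a minimum-norm solve: given a sensitivity matrix of island drives with respect to correction knobs, choose the smallest correction currents that cancel the error-driven resonant components. The matrix below is random stand-in data, not a real stellarator model.

```python
# Sketch of minimal-correction island healing via the SVD-based pseudoinverse:
# find the minimum-norm knob currents that null the error-driven island drives.
import numpy as np

rng = np.random.default_rng(3)
n_resonances, n_knobs = 4, 10
S = rng.normal(size=(n_resonances, n_knobs))   # d(island drive)/d(knob current), stand-in
b_error = rng.normal(size=n_resonances)        # island drives caused by the field errors

i_correction = np.linalg.pinv(S) @ (-b_error)  # minimum-norm solution of S i = -b_error
residual = S @ i_correction + b_error
print("correction currents:", np.round(i_correction, 3))
print("residual island drives:", np.round(residual, 6))
```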

  19. Results of the Simulation of the HTR-Proteus Core 4.2 Using PEBBED-COMBINE: FY10 Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hans Gougar

    2010-07-01

    The Idaho National Laboratory’s deterministic neutronics analysis codes and methods were applied to the computation of the core multiplication factor of the HTR-Proteus pebble bed reactor critical facility. This report is a follow-on to INL/EXT-09-16620 in which the same calculation was performed but using earlier versions of the codes and less developed methods. In that report, results indicated that the cross sections generated using COMBINE-7.0 did not yield satisfactory estimates of keff. It was concluded in the report that the modeling of control rods was not satisfactory. In the past year, improvements to the homogenization capability in COMBINE have enabled the explicit modeling of TRISO particles, pebbles, and heterogeneous core zones including control rod regions using a new multi-scale version of COMBINE in which the 1-dimensional discrete ordinates transport code ANISN has been integrated. The new COMBINE is shown to yield benchmark quality results for pebble unit cell models, the first step in preparing few-group diffusion parameters for core simulations. In this report, the full critical core is modeled once again but with cross sections generated using the capabilities and physics of the improved COMBINE code. The new PEBBED-COMBINE model enables the exact modeling of the pebbles and control rod region along with better approximation to structures in the reflector. Initial results for the core multiplication factor indicate significant improvement in the INL’s tools for modeling the neutronic properties of a pebble bed reactor. Errors on the order of 1.6-2.5% in keff are obtained, a significant improvement over the 5-6% error observed in the earlier analysis. This is acceptable for a code system and model in the early stages of development but still too high for a production code. Analysis of a simpler core model indicates an over-prediction of the flux in the low end of the thermal spectrum. Causes of this discrepancy are under investigation. New homogenization techniques and assumptions were used in this analysis and as such, they require further confirmation and validation. Further refinement and review of the complex Proteus core model are likely to reduce the errors even further.

  20. Spectroscopy Made Easy: Evolution

    NASA Astrophysics Data System (ADS)

    Piskunov, Nikolai; Valenti, Jeff A.

    2017-01-01

    Context. The Spectroscopy Made Easy (SME) package has become a popular tool for analyzing stellar spectra, often in connection with large surveys or exoplanet research. SME has evolved significantly since it was first described in 1996, but many of the original caveats and potholes still haunt users. The main drivers for this paper are the complexity of the modeling task, the large user community, and the massive effort that has gone into SME. Aims: We do not intend to give a comprehensive introduction to stellar atmospheres, but will describe changes to key components of SME: the equation of state, opacities, and radiative transfer. We will describe the analysis and fitting procedure and investigate various error sources that affect inferred parameters. Methods: We review the current status of SME, emphasizing new algorithms and methods. We describe some best practices for using the package, based on lessons learned over two decades of SME usage. We present a new way to assess uncertainties in derived stellar parameters. Results: Improvements made to SME, better line data, and new model atmospheres yield more realistic stellar spectra, but in many cases systematic errors still dominate over measurement uncertainty. Future enhancements are outlined.

  1. Nonspinning numerical relativity waveform surrogates: assessing the model

    NASA Astrophysics Data System (ADS)

    Field, Scott; Blackman, Jonathan; Galley, Chad; Scheel, Mark; Szilagyi, Bela; Tiglio, Manuel

    2015-04-01

    Recently, multi-modal gravitational waveform surrogate models have been built directly from data numerically generated by the Spectral Einstein Code (SpEC). I will describe ways in which the surrogate model error can be quantified. This task, in turn, requires (i) characterizing differences between waveforms computed by SpEC and those predicted by the surrogate model and (ii) estimating errors associated with the SpEC waveforms from which the surrogate is built. Both pieces can have numerous sources of numerical and systematic errors. We make an attempt to study the most dominant error sources and, ultimately, the surrogate model's fidelity. These investigations yield information about the surrogate model's uncertainty as a function of time (or frequency) and parameter, and could be useful in parameter estimation studies which seek to incorporate model error. Finally, I will conclude by comparing the numerical relativity surrogate model to other inspiral-merger-ringdown models. A companion talk will cover the building of multi-modal surrogate models.

  2. Eigenvector method for umbrella sampling enables error analysis

    PubMed Central

    Thiede, Erik H.; Van Koten, Brian; Weare, Jonathan; Dinner, Aaron R.

    2016-01-01

    Umbrella sampling efficiently yields equilibrium averages that depend on exploring rare states of a model by biasing simulations to windows of coordinate values and then combining the resulting data with physical weighting. Here, we introduce a mathematical framework that casts the step of combining the data as an eigenproblem. The advantage to this approach is that it facilitates error analysis. We discuss how the error scales with the number of windows. Then, we derive a central limit theorem for averages that are obtained from umbrella sampling. The central limit theorem suggests an estimator of the error contributions from individual windows, and we develop a simple and computationally inexpensive procedure for implementing it. We demonstrate this estimator for simulations of the alanine dipeptide and show that it emphasizes low free energy pathways between stable states in comparison to existing approaches for assessing error contributions. Our work suggests the possibility of using the estimator and, more generally, the eigenvector method for umbrella sampling to guide adaptation of the simulation parameters to accelerate convergence. PMID:27586912

  3. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Proctor, Timothy; Rudinger, Kenneth; Young, Kevin

    Randomized benchmarking (RB) is widely used to measure an error rate of a set of quantum gates, by performing random circuits that would do nothing if the gates were perfect. In the limit of no finite-sampling error, the exponential decay rate of the observable survival probabilities, versus circuit length, yields a single error metric r. For Clifford gates with arbitrary small errors described by process matrices, r was believed to reliably correspond to the mean, over all Clifford gates, of the average gate infidelity between the imperfect gates and their ideal counterparts. We show that this quantity is not a well-defined property of a physical gate set. It depends on the representations used for the imperfect and ideal gates, and the variant typically computed in the literature can differ from r by orders of magnitude. We present new theories of the RB decay that are accurate for all small errors describable by process matrices, and show that the RB decay curve is a simple exponential for all such errors. Here, these theories allow explicit computation of the error rate that RB measures (r), but as far as we can tell it does not correspond to the infidelity of a physically allowed (completely positive) representation of the imperfect gates.

  4. Topological quantum error correction in the Kitaev honeycomb model

    NASA Astrophysics Data System (ADS)

    Lee, Yi-Chan; Brell, Courtney G.; Flammia, Steven T.

    2017-08-01

    The Kitaev honeycomb model is an approximate topological quantum error correcting code in the same phase as the toric code, but requiring only a 2-body Hamiltonian. As a frustrated spin model, it is well outside the commuting models of topological quantum codes that are typically studied, but its exact solubility makes it more amenable to analysis of effects arising in this noncommutative setting than a generic topologically ordered Hamiltonian. Here we study quantum error correction in the honeycomb model using both analytic and numerical techniques. We first prove explicit exponential bounds on the approximate degeneracy, local indistinguishability, and correctability of the code space. These bounds are tighter than can be achieved using known general properties of topological phases. Our proofs are specialized to the honeycomb model, but some of the methods may nonetheless be of broader interest. Following this, we numerically study noise caused by thermalization processes in the perturbative regime close to the toric code renormalization group fixed point. The appearance of non-topological excitations in this setting has no significant effect on the error correction properties of the honeycomb model in the regimes we study. Although the behavior of this model is found to be qualitatively similar to that of the standard toric code in most regimes, we find numerical evidence of an interesting effect in the low-temperature, finite-size regime where a preferred lattice direction emerges and anyon diffusion is geometrically constrained. We expect this effect to yield an improvement in the scaling of the lifetime with system size as compared to the standard toric code.

  5. Frequency and analysis of non-clinical errors made in radiology reports using the National Integrated Medical Imaging System voice recognition dictation software.

    PubMed

    Motyer, R E; Liddy, S; Torreggiani, W C; Buckley, O

    2016-11-01

    Voice recognition (VR) dictation of radiology reports has become the mainstay of reporting in many institutions worldwide. Despite its benefits, such software is not without limitations, and transcription errors have been widely reported. The aim was to evaluate the frequency and nature of non-clinical transcription errors using VR dictation software. A retrospective audit of 378 finalised radiology reports was performed. Errors were counted and categorised by significance, error type and sub-type. Data regarding imaging modality, report length and dictation time were collected. 67 (17.72 %) reports contained ≥1 error, with 7 (1.85 %) containing 'significant' and 9 (2.38 %) containing 'very significant' errors. A total of 90 errors were identified from the 378 reports analysed, with 74 (82.22 %) classified as 'insignificant', 7 (7.78 %) as 'significant', and 9 (10 %) as 'very significant'. 68 (75.56 %) errors were 'spelling and grammar', 20 (22.22 %) 'missense' and 2 (2.22 %) 'nonsense'. 'Punctuation' error was the most common sub-type, accounting for 27 errors (30 %). Complex imaging modalities had higher error rates per report and per sentence. Computed tomography contained 0.040 errors per sentence compared to plain film with 0.030. Longer reports had a higher error rate, with reports >25 sentences containing an average of 1.23 errors per report compared with reports of 0-5 sentences, which contained 0.09. These findings highlight the limitations of VR dictation software. While most errors were deemed insignificant, there were occurrences with the potential to alter report interpretation and patient management. Longer reports and reports on more complex imaging had higher error rates, and this should be taken into account by the reporting radiologist.

  6. Use of vegetation health data for estimation of aus rice yield in bangladesh.

    PubMed

    Rahman, Atiqur; Roytman, Leonid; Krakauer, Nir Y; Nizamuddin, Mohammad; Goldberg, Mitch

    2009-01-01

    Rice is a vital staple crop for Bangladesh and surrounding countries, with interannual variation in yields depending on climatic conditions. We compared Bangladesh yield of aus rice, one of the main varieties grown, from official agricultural statistics with Vegetation Health (VH) Indices [Vegetation Condition Index (VCI), Temperature Condition Index (TCI) and Vegetation Health Index (VHI)] computed from Advanced Very High Resolution Radiometer (AVHRR) data covering a period of 15 years (1991-2005). A strong correlation was found between aus rice yield and VCI and VHI during the critical period of aus rice development that occurs during March-April (weeks 8-13 of the year), several months in advance of the rice harvest. Stepwise principal component regression (PCR) was used to construct a model to predict yield as a function of critical-period VHI. The model reduced the yield prediction error variance by 62% compared with a prediction of average yield for each year. Remote sensing is a valuable tool for estimating rice yields well in advance of harvest and at a low cost.
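
    The principal component regression step can be sketched as below: form principal components of the critical-period VHI matrix and regress yield on the leading scores. The synthetic data, the number of retained components, and the printed variance reduction are illustrative assumptions, not the study's stepwise procedure or its results.

```python
# Sketch of principal component regression of yield on critical-period VHI.
import numpy as np

rng = np.random.default_rng(4)
n_years, n_weeks = 15, 6                     # weeks 8-13 of each year
vhi = rng.normal(50, 10, size=(n_years, n_weeks))
yield_obs = 1.2 + 0.01 * vhi.mean(axis=1) + rng.normal(0, 0.03, n_years)

# Principal components of the (centered) VHI matrix
X = vhi - vhi.mean(axis=0)
U, s, Vt = np.linalg.svd(X, full_matrices=False)
n_pc = 2                                      # keep the leading components
scores = U[:, :n_pc] * s[:n_pc]

# Ordinary least squares of yield on the retained PC scores
A = np.column_stack([np.ones(n_years), scores])
coef, *_ = np.linalg.lstsq(A, yield_obs, rcond=None)
pred = A @ coef
err_var_model = np.var(yield_obs - pred)
err_var_clim = np.var(yield_obs - yield_obs.mean())
print(f"error variance reduced by {100 * (1 - err_var_model / err_var_clim):.0f}%")
```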

  7. Use of Vegetation Health Data for Estimation of Aus Rice Yield in Bangladesh

    PubMed Central

    Rahman, Atiqur; Roytman, Leonid; Krakauer, Nir Y.; Nizamuddin, Mohammad; Goldberg, Mitch

    2009-01-01

    Rice is a vital staple crop for Bangladesh and surrounding countries, with interannual variation in yields depending on climatic conditions. We compared Bangladesh yield of aus rice, one of the main varieties grown, from official agricultural statistics with Vegetation Health (VH) Indices [Vegetation Condition Index (VCI), Temperature Condition Index (TCI) and Vegetation Health Index (VHI)] computed from Advanced Very High Resolution Radiometer (AVHRR) data covering a period of 15 years (1991–2005). A strong correlation was found between aus rice yield and VCI and VHI during the critical period of aus rice development that occurs during March–April (weeks 8–13 of the year), several months in advance of the rice harvest. Stepwise principal component regression (PCR) was used to construct a model to predict yield as a function of critical-period VHI. The model reduced the yield prediction error variance by 62% compared with a prediction of average yield for each year. Remote sensing is a valuable tool for estimating rice yields well in advance of harvest and at a low cost. PMID:22574057
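
    The principal component regression step described above can be sketched in a few lines. The following is a minimal illustration only: the weekly VHI array, the number of retained components, and the use of scikit-learn are assumptions, and the stepwise selection used in the study is omitted.

    ```python
    # Minimal sketch of principal component regression (PCR) of rice yield on
    # critical-period VHI values (weeks 8-13). The arrays are illustrative
    # placeholders, not the study's AVHRR-derived indices or official yields.
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.linear_model import LinearRegression
    from sklearn.pipeline import make_pipeline

    rng = np.random.default_rng(0)
    n_years, n_weeks = 15, 6                                 # 1991-2005, weeks 8-13
    vhi = rng.uniform(20, 80, size=(n_years, n_weeks))       # weekly VHI per year
    yield_t_ha = 1.5 + 0.01 * vhi.mean(axis=1) + rng.normal(0, 0.05, n_years)

    # PCR: project the correlated weekly indices onto a few principal
    # components, then regress yield on those components.
    pcr = make_pipeline(PCA(n_components=2), LinearRegression())
    pcr.fit(vhi, yield_t_ha)

    pred = pcr.predict(vhi)
    baseline_var = np.var(yield_t_ha)            # error variance of "predict the mean"
    residual_var = np.var(yield_t_ha - pred)
    print(f"error variance reduced by {100 * (1 - residual_var / baseline_var):.0f}%")
    ```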

  8. Real-time line-width measurements: a new feature for reticle inspection systems

    NASA Astrophysics Data System (ADS)

    Eran, Yair; Greenberg, Gad; Joseph, Amnon; Lustig, Cornel; Mizrahi, Eyal

    1997-07-01

    The significance of line width control in mask production has increased as defect sizes have shrunk. Two conventional methods are used for controlling line width dimensions in the manufacture of masks for sub-micron devices: critical dimension (CD) measurement and the detection of edge defects. Achieving reliable and accurate control of line width errors is one of the most challenging tasks in mask production. Neither of the two methods cited above guarantees the detection of line width errors with good sensitivity over the whole mask area. This stems from the fact that CD measurement provides only statistical data on the mask features, whereas edge defect detection checks each edge by itself and does not supply information on the combined result of errors on two adjacent edges. For example, a combination of a small edge defect and a CD non-uniformity, each within its allowed tolerance, may yield a significant line width error that will not be detected using the conventional methods (see figure 1). A new approach for the detection of line width errors which overcomes this difficulty is presented. Based on this approach, a new sensitive line width error detector was developed and added to Orbot's RT-8000 die-to-database reticle inspection system. This detector operates continuously during the mask inspection process and inspects the entire area of the reticle for line width errors. The detection is based on a comparison of line width measurements taken on both the design database and the scanned image of the reticle. In section 2, the motivation for developing this new detector is presented; the section covers an analysis of various defect types which are difficult to detect using conventional edge detection methods or, alternatively, CD measurements. In section 3, the basic concept of the new approach is introduced together with a description of the new detector and its characteristics. In section 4, the calibration process carried out to achieve reliable and repeatable line width measurements is presented. A description of the experiments conducted to evaluate the sensitivity of the new detector is given in section 5, followed by the results of this evaluation. The conclusions are presented in section 6.

  9. Temporal Correlations and Neural Spike Train Entropy

    NASA Astrophysics Data System (ADS)

    Schultz, Simon R.; Panzeri, Stefano

    2001-06-01

    Sampling considerations limit the experimental conditions under which information theoretic analyses of neurophysiological data yield reliable results. We develop a procedure for computing the full temporal entropy and information of ensembles of neural spike trains, which performs reliably for limited samples of data. This approach also yields insight into the role of correlations between spikes in temporal coding mechanisms. The method, when applied to recordings from complex cells of the monkey primary visual cortex, results in lower rms error in information estimates than a "brute force" approach.
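
    For context, the baseline that such bias-corrected procedures improve on is the naive plug-in estimate of word entropy. The sketch below shows only that plug-in calculation on synthetic binary spike words; the binning, word length, and random data are assumptions, and it does not reproduce the paper's estimator.

    ```python
    # Naive plug-in ("brute force") estimate of spike-train word entropy,
    # which is biased downward for limited samples. The random spike trains
    # and binning parameters below are illustrative assumptions.
    import numpy as np

    rng = np.random.default_rng(1)
    n_trials, n_bins = 200, 8                       # 8-bin words of binary spike counts
    words = (rng.random((n_trials, n_bins)) < 0.2).astype(int)

    # Count how often each distinct binary word occurs across trials.
    _, counts = np.unique(words, axis=0, return_counts=True)
    p = counts / counts.sum()

    entropy_bits = -np.sum(p * np.log2(p))          # plug-in entropy estimate
    print(f"plug-in word entropy: {entropy_bits:.2f} bits")
    ```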

  10. Comparison of undulation difference accuracies using gravity anomalies and gravity disturbances. [for ocean geoid

    NASA Technical Reports Server (NTRS)

    Jekeli, C.

    1980-01-01

    Errors in the outer zone contribution to oceanic undulation differences computed from a finite set of potential coefficients based on satellite measurements of gravity anomalies and gravity disturbances are analyzed. Equations are derived for the truncation errors resulting from the lack of high-degree coefficients and the commission errors arising from errors in the available lower-degree coefficients, and it is assumed that the inner zone (spherical cap) is sufficiently covered by surface gravity measurements in conjunction with altimetry or by gravity anomaly data. Numerical computations of error for various observational conditions reveal undulation difference errors ranging from 13 to 15 cm and from 6 to 36 cm in the cases of gravity anomaly and gravity disturbance data, respectively for a cap radius of 10 deg and mean anomalies accurate to 10 mgal, with a reduction of errors in both cases to less than 10 cm as mean anomaly accuracy is increased to 1 mgal. In the absence of a spherical cap, both cases yield error estimates of 68 cm for an accuracy of 1 mgal and between 93 and 160 cm for the lesser accuracy, which can be reduced to about 110 cm by the introduction of a perfect 30-deg reference field.

  11. Yield Mapping for Different Crops in Sudano-Sahelian Smallholder Farming Systems: Results Based on Metric Worldview and Decametric SPOT-5 Take5 Time Series

    NASA Astrophysics Data System (ADS)

    Blaes, X.; Lambert, M.-J.; Chome, G.; Traore, P. S.; de By, R. A.; Defourny, P.

    2016-08-01

    Efficient yield mapping in Sudano-Sahelian Africa, characterized by a very heterogeneous landscape, is crucial to help ensure food security and decrease smallholder farmers' vulnerability. Thanks to an unprecedented in-situ dataset and HR and VHR remote sensing time series collected in the Koutiala district (south-eastern Mali), yield and some key factors affecting yield estimation were assessed. A crop-specific biomass map was derived with a mean absolute error of 20% using metric WorldView and 25% using decametric SPOT-5 Take5 image time series. The very high intra- and inter-field heterogeneity was captured efficiently. The presence of trees in the fields led to a general overestimation of yields, while mixed pixels at the field borders introduced noise into the biomass predictions.

  12. The Forced Soft Spring Equation

    ERIC Educational Resources Information Center

    Fay, T. H.

    2006-01-01

    Through numerical investigations, this paper studies examples of the forced Duffing type spring equation with [epsilon] negative. By performing trial-and-error numerical experiments, the existence is demonstrated of stability boundaries in the phase plane indicating initial conditions yielding bounded solutions. Subharmonic boundaries are…

  13. Fluidized-bed pyrolysis of oil shale: oil yield, composition, and kinetics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Richardson, J H; Huss, E B; Ott, L L

    1982-09-01

    A quartz isothermal fluidized-bed reactor has been used to measure kinetics and oil properties relevant to surface processing of oil shale. The rate of oil formation has been described with two sequential first-order rate equations characterized by two rate constants, k1 = 2.18 × 10^10 exp(-41.6 kcal/RT) s^-1 and k2 = 4.4 × 10^6 exp(-29.7 kcal/RT) s^-1. These rate constants, together with an expression for the appropriate weighting coefficients, describe approximately 97+% of the total oil produced. A description is given of the results of different attempts to mathematically describe the data in a manner suitable for modeling applications. Preliminary results are also presented for species-selective kinetics of methane, ethene, ethane and hydrogen, where the latter is clearly distinguished as the product of a distinct intermediate. Oil yields from Western oil shale are approximately 100% of Fischer assay. Oil composition is as expected based on previous work and the higher heating rates (temperatures) inherent in fluidized-bed pyrolysis. Neither the oil yield, composition nor the kinetics varied with particle size between 0.2 and 2.0 mm within experimental error. The qualitatively expected change in oil composition due to cracking was observed over the temperature range studied (460 to 540 °C). Eastern shale exhibited significantly faster kinetics and higher oil yields than did Western shale.
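
    The two reported rate constants can be evaluated directly at a retort temperature. The sketch below assumes a simple sequential first-order scheme (kerogen to intermediate to oil) and does not reproduce the paper's weighting-coefficient expression; the temperature and time grid are illustrative.

    ```python
    # Evaluate the two reported first-order rate constants for oil formation
    # and the oil fraction evolved under an assumed sequential scheme
    # kerogen --k1--> intermediate --k2--> oil.
    import numpy as np

    R = 1.987e-3                          # gas constant, kcal mol^-1 K^-1

    def k1(T):
        return 2.18e10 * np.exp(-41.6 / (R * T))   # s^-1

    def k2(T):
        return 4.4e6 * np.exp(-29.7 / (R * T))     # s^-1

    T = 500.0 + 273.15                    # 500 C, mid-range of the 460-540 C window
    t = np.linspace(0.0, 600.0, 601)      # seconds

    a, b = k1(T), k2(T)
    # Analytical oil fraction for two sequential first-order steps.
    oil_fraction = 1.0 - (b * np.exp(-a * t) - a * np.exp(-b * t)) / (b - a)

    print(f"k1 = {a:.3e} s^-1, k2 = {b:.3e} s^-1 at {T - 273.15:.0f} C")
    print(f"oil fraction evolved after 60 s: {oil_fraction[60]:.2f}")
    ```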

  14. Yield estimation of corn with multispectral data and the potential of using imaging spectrometers

    NASA Astrophysics Data System (ADS)

    Bach, Heike

    1997-05-01

    In the frame of the special yield estimation, a regular procedure conducted for the European Union to more accurately estimate agricultural yield, a project was conducted for the state ministry for Rural Environment, Food and Forestry of Baden-Wuerttemberg, Germany, to test remote sensing data with advanced yield formation models for accuracy and timeliness of yield estimation of corn. The methodology employed uses field-based plant parameter estimation from atmospherically corrected multitemporal/multispectral LANDSAT-TM data. An agrometeorological plant-production model is used for yield prediction. Based solely on four LANDSAT-derived estimates and daily meteorological data, the grain yield of corn stands was determined for 1995. The modeled yield was compared with results independently gathered within the special yield estimation for 23 test fields in the Upper Rhine Valley. The agreement between the LANDSAT-based estimates and the special yield estimation shows a relative error of 2.3 percent. The comparison of the results for single fields shows that six weeks before harvest the grain yield of single corn fields was estimated with a mean relative accuracy of 13 percent using satellite information. The presented methodology can be transferred to other crops and geographical regions. For future applications, hyperspectral sensors show great potential to further enhance the results of yield prediction with remote sensing.

  15. Reliability of landmark identification in cephalometric radiography acquired by a storage phosphor imaging system.

    PubMed

    Chen, Y-J; Chen, S-K; Huang, H-W; Yao, C-C; Chang, H-F

    2004-09-01

    To compare cephalometric landmark identification on softcopy and hardcopy of direct digital cephalography acquired by a storage-phosphor (SP) imaging system. Ten digital cephalograms and their conventional counterparts, hardcopies on transparent blue film, were obtained with an SP imaging system and a dye sublimation printer. Twelve orthodontic residents identified 19 cephalometric landmarks on monitor-displayed SP digital images with a computer-aided method and on their hardcopies with the conventional method. The x- and y-coordinates for each landmark, indicating the horizontal and vertical positions, were analysed to assess the reliability of landmark identification and to evaluate the concordance of the landmark locations in softcopy and hardcopy of SP digital cephalometric radiography. For each of the 19 landmarks, the location differences as well as their horizontal and vertical components were statistically significant between SP digital cephalometric radiography and its hardcopy. Smaller interobserver errors on SP digital images than on their hardcopies were noted for all landmarks except point Go in the vertical direction. The scatter-plots demonstrate the characteristic distribution of the interobserver error in both horizontal and vertical directions. Generally, the dispersion of interobserver error on SP digital cephalometric radiography is less than that on its hardcopy with the conventional method. SP digital cephalometric radiography can yield a better or comparable level of performance in landmark identification compared with its hardcopy, except for point Go in the vertical direction.

  16. Conditions that influence the accuracy of anthropometric parameter estimation for human body segments using shape-from-silhouette

    NASA Astrophysics Data System (ADS)

    Mundermann, Lars; Mundermann, Annegret; Chaudhari, Ajit M.; Andriacchi, Thomas P.

    2005-01-01

    Anthropometric parameters are fundamental for a wide variety of applications in biomechanics, anthropology, medicine and sports. Recent technological advancements provide methods for constructing 3D surfaces directly. Of these new technologies, visual hull construction may be the most cost-effective yet sufficiently accurate method. However, the conditions influencing the accuracy of anthropometric measurements based on visual hull reconstruction are unknown. The purpose of this study was to evaluate the conditions that influence the accuracy of 3D shape-from-silhouette reconstruction of body segments dependent on number of cameras, camera resolution and object contours. The results demonstrate that the visual hulls lacked accuracy in concave regions and narrow spaces, but setups with a high number of cameras reconstructed a human form with an average accuracy of 1.0 mm. In general, setups with less than 8 cameras yielded largely inaccurate visual hull constructions, while setups with 16 and more cameras provided good volume estimations. Body segment volumes were obtained with an average error of 10% at a 640x480 resolution using 8 cameras. Changes in resolution did not significantly affect the average error. However, substantial decreases in error were observed with increasing number of cameras (33.3% using 4 cameras; 10.5% using 8 cameras; 4.1% using 16 cameras; 1.2% using 64 cameras).

  17. Optimizing pattern recognition-based control for partial-hand prosthesis application.

    PubMed

    Earley, Eric J; Adewuyi, Adenike A; Hargrove, Levi J

    2014-01-01

    Partial-hand amputees often retain good residual wrist motion, which is essential for functional activities involving use of the hand. Thus, a crucial design criterion for a myoelectric, partial-hand prosthesis control scheme is that it allows the user to retain residual wrist motion. Pattern recognition (PR) of electromyographic (EMG) signals is a well-studied method of controlling myoelectric prostheses. However, wrist motion degrades a PR system's ability to correctly predict hand-grasp patterns. We studied the effects of (1) window length and number of hand-grasps, (2) static and dynamic wrist motion, and (3) EMG muscle source on the ability of a PR-based control scheme to classify functional hand-grasp patterns. Our results show that training PR classifiers with both extrinsic and intrinsic muscle EMG yields a lower error rate than training with either group by itself (p<0.001); and that training in only variable wrist positions, with only dynamic wrist movements, or with both variable wrist positions and movements results in lower error rates than training in only the neutral wrist position (p<0.001). Finally, our results show that both an increase in window length and a decrease in the number of grasps available to the classifier significantly decrease classification error (p<0.001). These results remained consistent whether the classifier selected or maintained a hand-grasp.
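
    A minimal sketch of the pattern-recognition pipeline implied above (window the EMG, extract a feature per channel, train a classifier on data pooled across conditions) is given below. The synthetic signals, the mean-absolute-value feature, and the LDA classifier from scikit-learn are assumptions, not the study's implementation.

    ```python
    # Sketch of EMG pattern-recognition grasp classification: window the
    # signals, compute one feature per channel, and estimate the
    # cross-validated classification error. Data are synthetic placeholders.
    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(2)
    fs, win_ms = 1000, 250                    # sampling rate (Hz), window length (ms)
    win = int(fs * win_ms / 1000)
    n_channels, n_grasps, n_windows = 8, 4, 60

    X, y = [], []
    for grasp in range(n_grasps):
        # Each grasp gets a distinct per-channel amplitude profile.
        amp = 0.5 + rng.random(n_channels) + 0.3 * grasp
        for _ in range(n_windows):
            emg = rng.normal(0, amp, size=(win, n_channels))
            X.append(np.mean(np.abs(emg), axis=0))   # mean-absolute-value feature
            y.append(grasp)
    X, y = np.array(X), np.array(y)

    acc = cross_val_score(LinearDiscriminantAnalysis(), X, y, cv=5).mean()
    print(f"classification error: {100 * (1 - acc):.1f}%")
    ```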

  18. Data Envelopment Analysis in the Presence of Measurement Error: Case Study from the National Database of Nursing Quality Indicators® (NDNQI®)

    PubMed Central

    Gajewski, Byron J.; Lee, Robert; Dunton, Nancy

    2012-01-01

    Data Envelopment Analysis (DEA) is the most commonly used approach for evaluating healthcare efficiency (Hollingsworth, 2008), but a long-standing concern is that DEA assumes that data are measured without error. This assumption is quite unlikely to hold, and DEA and other efficiency analysis techniques may yield biased efficiency estimates if measurement error is not accounted for (Gajewski, Lee, Bott, Piamjariyakul and Taunton, 2009; Ruggiero, 2004). We propose to address measurement error systematically using a Bayesian method (Bayesian DEA). We will apply Bayesian DEA to data from the National Database of Nursing Quality Indicators® (NDNQI®) to estimate nursing units' efficiency. Several external reliability studies inform the posterior distribution of the measurement error on the DEA variables. We will discuss the case of generalizing the approach to situations where an external reliability study is not feasible. PMID:23328796

  19. Improved model predictive control of resistive wall modes by error field estimator in EXTRAP T2R

    NASA Astrophysics Data System (ADS)

    Setiadi, A. C.; Brunsell, P. R.; Frassinetti, L.

    2016-12-01

    Many implementations of a model-based approach for toroidal plasmas have shown better control performance compared to conventional feedback controllers. One prerequisite of model-based control is the availability of a control-oriented model. This model can be obtained empirically through a systematic procedure called system identification. Such a model is used in this work to design a model predictive controller to stabilize multiple resistive wall modes in the EXTRAP T2R reversed-field pinch. Model predictive control is an advanced control method that can optimize the future behaviour of a system. Furthermore, this paper discusses an additional use of the empirical model, namely to estimate the error field in EXTRAP T2R. Two potential methods that can estimate the error field are discussed. The error field estimator is then combined with the model predictive controller and yields better radial magnetic field suppression.

  20. Bayesian truncation errors in chiral effective field theory: model checking and accounting for correlations

    NASA Astrophysics Data System (ADS)

    Melendez, Jordan; Wesolowski, Sarah; Furnstahl, Dick

    2017-09-01

    Chiral effective field theory (EFT) predictions are necessarily truncated at some order in the EFT expansion, which induces an error that must be quantified for robust statistical comparisons to experiment. A Bayesian model yields posterior probability distribution functions for these errors based on expectations of naturalness encoded in Bayesian priors and the observed order-by-order convergence pattern of the EFT. As a general example of a statistical approach to truncation errors, the model was applied to chiral EFT for neutron-proton scattering using various semi-local potentials of Epelbaum, Krebs, and Meißner (EKM). Here we discuss how our model can learn correlation information from the data and how to perform Bayesian model checking to validate that the EFT is working as advertised. Supported in part by NSF PHY-1614460 and DOE NUCLEI SciDAC DE-SC0008533.

  1. Fault tolerance with noisy and slow measurements and preparation.

    PubMed

    Paz-Silva, Gerardo A; Brennen, Gavin K; Twamley, Jason

    2010-09-03

    It is not so well known that measurement-free quantum error correction protocols can be designed to achieve fault-tolerant quantum computing. Despite their potential advantages in terms of the relaxation of accuracy, speed, and addressing requirements, they have usually been overlooked because they are expected to yield a very bad threshold. We show that this is not the case. We design fault-tolerant circuits for the 9-qubit Bacon-Shor code and find an error threshold for unitary gates and preparation of p_thresh^(p,g) = 3.76 × 10^-5 (30% of the best known result for the same code using measurement), while admitting up to 1/3 error rates for measurements and placing no constraints on measurement speed. We further show that demanding gate error rates sufficiently below the threshold pushes the preparation threshold up to p_thresh^(p) = 1/3.

  2. Intraoperative visualisation of functional structures facilitates safe frameless stereotactic biopsy in the motor eloquent regions of the brain.

    PubMed

    Zhang, Jia-Shu; Qu, Ling; Wang, Qun; Jin, Wei; Hou, Yuan-Zheng; Sun, Guo-Chen; Li, Fang-Ye; Yu, Xin-Guang; Xu, Ban-Nan; Chen, Xiao-Lei

    2017-12-20

    For stereotactic brain biopsy involving motor eloquent regions, the surgical objective is to enhance diagnostic yield and preserve neurological function. To achieve this aim, we implemented functional neuro-navigation and intraoperative magnetic resonance imaging (iMRI) into the biopsy procedure. The impact of this integrated technique on the surgical outcome and postoperative neurological function was investigated and evaluated. Thirty-nine patients with lesions involving motor eloquent structures underwent frameless stereotactic biopsy assisted by functional neuro-navigation and iMRI. Intraoperative visualisation was realised by integrating anatomical and functional information into a navigation framework to improve biopsy trajectories and preserve eloquent structures. iMRI was conducted to confirm biopsy accuracy and detect intraoperative complications. The perioperative change in motor function and the biopsy error before and after iMRI were recorded; the role of functional information in trajectory selection and the relationship between the distance from the sampling site to nearby eloquent structures and neurological deterioration were further analyzed. Functional neuro-navigation helped modify the original trajectories and sampling sites in 35.90% (16/39) of cases to avoid damage to eloquent structures. Even though all the lesions carried a high risk of causing neurological deficits, no significant difference was found between preoperative and postoperative muscle strength. After data analysis, 3 mm was found to be the safe distance for avoiding transient neurological deterioration. During surgery, the use of iMRI significantly reduced the biopsy errors (p = 0.042) and potentially increased the diagnostic yield from 84.62% (33/39) to 94.87% (37/39). Moreover, iMRI detected intraoperative haemorrhage in 5.13% (2/39) of patients, both of whom benefited from intraoperative strategies based on the iMRI findings. Intraoperative visualisation of functional structures could be a feasible, safe and effective technique. Combined with intraoperative high-field MRI, it contributed to enhanced biopsy accuracy and fewer neurological complications in stereotactic brain biopsy involving motor eloquent areas.

  3. Peak-locking error reduction by birefringent optical diffusers

    NASA Astrophysics Data System (ADS)

    Kislaya, Ankur; Sciacchitano, Andrea

    2018-02-01

    The use of optical diffusers for the reduction of peak-locking errors in particle image velocimetry is investigated. The working principle of the optical diffusers is based on the concept of birefringence, where the incoming rays are subject to different deflections depending on the light direction and polarization. The performance of the diffusers is assessed via wind tunnel measurements in uniform flow and wall-bounded turbulence. Comparison with best-practice image defocusing is also conducted. It is found that the optical diffusers yield an increase in the particle image diameter of up to 10 µm in the sensor plane. Comparison with reference measurements showed a reduction of both random and systematic errors by a factor of 3, even at low imaging signal-to-noise ratio.

  4. The uncertainty of crop yield projections is reduced by improved temperature response functions.

    PubMed

    Wang, Enli; Martre, Pierre; Zhao, Zhigan; Ewert, Frank; Maiorano, Andrea; Rötter, Reimund P; Kimball, Bruce A; Ottman, Michael J; Wall, Gerard W; White, Jeffrey W; Reynolds, Matthew P; Alderman, Phillip D; Aggarwal, Pramod K; Anothai, Jakarat; Basso, Bruno; Biernath, Christian; Cammarano, Davide; Challinor, Andrew J; De Sanctis, Giacomo; Doltra, Jordi; Fereres, Elias; Garcia-Vila, Margarita; Gayler, Sebastian; Hoogenboom, Gerrit; Hunt, Leslie A; Izaurralde, Roberto C; Jabloun, Mohamed; Jones, Curtis D; Kersebaum, Kurt C; Koehler, Ann-Kristin; Liu, Leilei; Müller, Christoph; Naresh Kumar, Soora; Nendel, Claas; O'Leary, Garry; Olesen, Jørgen E; Palosuo, Taru; Priesack, Eckart; Eyshi Rezaei, Ehsan; Ripoche, Dominique; Ruane, Alex C; Semenov, Mikhail A; Shcherbak, Iurii; Stöckle, Claudio; Stratonovitch, Pierre; Streck, Thilo; Supit, Iwan; Tao, Fulu; Thorburn, Peter; Waha, Katharina; Wallach, Daniel; Wang, Zhimin; Wolf, Joost; Zhu, Yan; Asseng, Senthold

    2017-07-17

    Increasing the accuracy of crop productivity estimates is a key element in planning adaptation strategies to ensure global food security under climate change. Process-based crop models are effective means to project climate impact on crop yield, but have large uncertainty in yield simulations. Here, we show that variations in the mathematical functions currently used to simulate temperature responses of physiological processes in 29 wheat models account for >50% of uncertainty in simulated grain yields for mean growing season temperatures from 14 °C to 33 °C. We derived a set of new temperature response functions that when substituted in four wheat models reduced the error in grain yield simulations across seven global sites with different temperature regimes by 19% to 50% (42% average). We anticipate the improved temperature responses to be a key step to improve modelling of crops under rising temperature and climate change, leading to higher skill of crop yield projections.

  5. The Uncertainty of Crop Yield Projections Is Reduced by Improved Temperature Response Functions

    NASA Technical Reports Server (NTRS)

    Wang, Enli; Martre, Pierre; Zhao, Zhigan; Ewert, Frank; Maiorano, Andrea; Rotter, Reimund P.; Kimball, Bruce A.; Ottman, Michael J.; White, Jeffrey W.; Reynolds, Matthew P.; hide

    2017-01-01

    Increasing the accuracy of crop productivity estimates is a key element in planning adaptation strategies to ensure global food security under climate change. Process-based crop models are effective means to project climate impact on crop yield, but have large uncertainty in yield simulations. Here, we show that variations in the mathematical functions currently used to simulate temperature responses of physiological processes in 29 wheat models account for greater than 50% of uncertainty in simulated grain yields for mean growing season temperatures from 14 °C to 33 °C. We derived a set of new temperature response functions that, when substituted in four wheat models, reduced the error in grain yield simulations across seven global sites with different temperature regimes by 19% to 50% (42% average). We anticipate the improved temperature responses to be a key step to improve modelling of crops under rising temperature and climate change, leading to higher skill of crop yield projections.
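
    One common functional form for such temperature responses in wheat models is a beta-type function between cardinal temperatures. The sketch below uses a generic Wang-Engel formulation with assumed cardinal temperatures (0, 27.7, and 40 °C); it is illustrative only and is not the set of improved response functions derived in the study.

    ```python
    # Generic Wang-Engel beta-type temperature response, a form commonly used
    # in crop models for processes such as development rate. The cardinal
    # temperatures below are illustrative assumptions.
    import numpy as np

    def wang_engel(T, t_min=0.0, t_opt=27.7, t_max=40.0):
        """Return a 0-1 temperature response factor."""
        T = np.asarray(T, dtype=float)
        alpha = np.log(2.0) / np.log((t_max - t_min) / (t_opt - t_min))
        u = np.clip(T - t_min, 0.0, None)
        v = t_opt - t_min
        f = (2 * u**alpha * v**alpha - u**(2 * alpha)) / v**(2 * alpha)
        return np.where((T <= t_min) | (T >= t_max), 0.0, np.clip(f, 0.0, 1.0))

    # Evaluate across the span of mean growing-season temperatures cited above.
    for temp in (14.0, 22.0, 27.7, 33.0):
        print(f"T = {temp:5.1f} C -> response = {float(wang_engel(temp)):.2f}")
    ```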

  6. Coagulation Function of Stored Whole Blood is Preserved for 14 Days in Austere Conditions: A ROTEM Feasibility Study During a Norwegian Antipiracy Mission and Comparison to Equal Ratio Reconstituted Blood

    DTIC Science & Technology

    2015-06-24

    mechanical piston movements measured by the ROTEM device. Error messages were recorded in 4 (1.5%) of 267 tests. CWB yielded reproducible ROTEM results... piston movement analysis, error message frequency, and result variability and (2) compare the clotting properties of cold-stored WB obtained from a walking...signed the selection form, which tracked TTD screening and blood grouping results. That same form doubled as a transfusion form and was used to

  7. Terrestrial Water Mass Load Changes from Gravity Recovery and Climate Experiment (GRACE)

    NASA Technical Reports Server (NTRS)

    Seo, K.-W.; Wilson, C. R.; Famiglietti, J. S.; Chen, J. L.; Rodell M.

    2006-01-01

    Recent studies show that data from the Gravity Recovery and Climate Experiment (GRACE) is promising for basin- to global-scale water cycle research. This study provides varied assessments of errors associated with GRACE water storage estimates. Thirteen monthly GRACE gravity solutions from August 2002 to December 2004 are examined, along with synthesized GRACE gravity fields for the same period that incorporate simulated errors. The synthetic GRACE fields are calculated using numerical climate models and GRACE internal error estimates. We consider the influence of measurement noise, spatial leakage error, and atmospheric and ocean dealiasing (AOD) model error as the major contributors to the error budget. Leakage error arises from the limited range of GRACE spherical harmonics not corrupted by noise. AOD model error is due to imperfect correction for atmosphere and ocean mass redistribution applied during GRACE processing. Four methods of forming water storage estimates from GRACE spherical harmonics (four different basin filters) are applied to both GRACE and synthetic data. Two basin filters use Gaussian smoothing, and the other two are dynamic basin filters which use knowledge of geographical locations where water storage variations are expected. Global maps of measurement noise, leakage error, and AOD model errors are estimated for each basin filter. Dynamic basin filters yield the smallest errors and highest signal-to-noise ratio. Within 12 selected basins, GRACE and synthetic data show similar amplitudes of water storage change. Using 53 river basins, covering most of Earth's land surface excluding Antarctica and Greenland, we document how error changes with basin size, latitude, and shape. Leakage error is most affected by basin size and latitude, and AOD model error is most dependent on basin latitude.

  8. Usability of NASA Satellite Imagery-Based Daily Solar Radiation for Crop Yield Simulation and Management Decisions

    NASA Astrophysics Data System (ADS)

    Yang, H.; Cassman, K. G.; Stackhouse, P. W.; Hoell, J. M.

    2007-12-01

    We tested the usability of NASA satellite imagery-based daily solar radiation for farm-specific crop yield simulation and management decisions using the Hybrid-Maize model (www.hybridmaize.unl.edu). Solar radiation is one of the key inputs for crop yield simulation. Farm-specific crop management decisions using simulation models require long-term (i.e., 20 years or longer) daily local weather data, including solar radiation, for assessing crop yield potential and its variation, optimizing crop planting date, and predicting crop yield in a real-time mode. Weather stations that record daily solar radiation have sparse coverage, and many of them have records shorter than 15 years. Based on satellite imagery and other remotely sensed information, NASA has provided estimates of daily climatic data, including solar radiation, on a 1-degree grid over the Earth's surface from 1983 to 2005. NASA is currently continuing to update the database and plans to provide near real-time data in the future. This database, which is free to the public at http://power.larc.nasa.gov, is a potential surrogate for ground-measured climatic data for farm-specific crop yield simulation and management decisions. In this report, we quantified (1) the similarities between NASA daily solar radiation and ground-measured data at 20 US sites and four international sites, and (2) the accuracy and precision of simulated corn yield potential and its variability using NASA solar radiation coupled with other weather data from ground measurements. The 20 US sites are in the western Corn Belt, including Iowa, South Dakota, Nebraska, and Kansas. The four international sites are Los Banos in the Philippines, Beijing in China, Cali in Colombia, and Ibatan in Nigeria. These sites were selected because of their high-quality weather records and long duration (more than 20 years on average). We found that NASA solar radiation was highly significantly correlated (mean r2 = 0.88**) with the ground measurements at the 20 US sites, while the correlation was poorer (mean r2 = 0.55**, though significant) at the four international sites. At the 20 US sites, the mean root mean square error (RMSE) between NASA solar radiation and the ground data was 2.7 MJ/m2/d, or 19% of the mean daily ground value. At the four international sites, the mean RMSE was 4.0 MJ/m2/d, or 25% of the mean daily ground value. Large differences between NASA solar radiation and the ground data were likely associated with tropical environments or significant variation in elevation within a short distance. When using NASA solar radiation coupled with other weather data from ground measurements, the simulated corn yields were highly significantly correlated (mean r2 = 0.85**) with those using complete ground weather data at the 20 US sites, while the correlation (mean r2 = 0.48**) was poorer at the four international sites. At the 20 US sites, the mean RMSE between the two sets of simulated corn yields was 0.50 Mg/ha, or 3% of the mean absolute value using the ground data. At the four international sites, the RMSE of the simulated yields was 1.5 Mg/ha, or 13% of the mean absolute value using the ground data. We conclude that NASA satellite imagery-based daily solar radiation is a reasonably reliable surrogate for ground observations for farm-specific crop yield simulation and management decisions, especially at locations where ground-measured solar radiation is unavailable.
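
    The agreement statistics quoted above (r2 and RMSE expressed as a percentage of the mean ground value) can be computed as sketched below. The two daily solar-radiation series are synthetic placeholders, not the NASA or station records used in the study.

    ```python
    # Sketch of the agreement statistics between satellite-estimated and
    # ground-measured daily solar radiation: r^2 and RMSE as a percentage of
    # the mean ground value. Both series below are illustrative placeholders.
    import numpy as np

    rng = np.random.default_rng(3)
    ground = rng.uniform(5, 30, size=365)               # MJ m^-2 d^-1, station record
    satellite = ground + rng.normal(0, 2.7, size=365)   # satellite-based estimate

    r2 = np.corrcoef(ground, satellite)[0, 1] ** 2
    rmse = np.sqrt(np.mean((satellite - ground) ** 2))
    print(f"r^2 = {r2:.2f}")
    print(f"RMSE = {rmse:.1f} MJ m^-2 d^-1 ({100 * rmse / ground.mean():.0f}% of mean)")
    ```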

  9. Fabrication of five-level ultraplanar micromirror arrays by flip-chip assembly

    NASA Astrophysics Data System (ADS)

    Michalicek, M. Adrian; Bright, Victor M.

    2001-10-01

    This paper reports a detailed study of the fabrication of various piston, torsion, and cantilever style micromirror arrays using a novel, simple, and inexpensive flip-chip assembly technique. Several rectangular and polar arrays were commercially prefabricated in the MUMPs process and then flip-chip bonded to form advanced micromirror arrays where adverse effects typically associated with surface micromachining were removed. These arrays were bonded by directly fusing the MUMPs gold layers with no complex preprocessing. The modules were assembled using a computer-controlled, custom-built flip-chip bonding machine. Topographically opposed bond pads were designed to correct for slight misalignment errors during bonding and typically result in less than 2 micrometers of lateral alignment error. Although flip-chip micromirror performance is briefly discussed, the means used to create these arrays is the focus of the paper. A detailed study of flip-chip process yield is presented which describes the primary failure mechanisms for flip-chip bonding. Studies of alignment tolerance, bonding force, stress concentration, module planarity, bonding machine calibration techniques, prefabrication errors, and release procedures are presented in relation to specific observations in process yield. Ultimately, the standard thermo-compression flip-chip assembly process remains a viable technique to develop highly complex prototypes of advanced micromirror arrays.

  10. Calculating sediment discharge from a highway construction site in central Pennsylvania

    USGS Publications Warehouse

    Reed, L.A.; Ward, J.R.; Wetzel, K.L.

    1985-01-01

    The Pennsylvania Department of Transportation, the Federal Highway Administration, and the U.S. Geological Survey have cooperated in a study to evaluate two methods of predicting sediment yields during highway construction. Sediment yields were calculated using the Universal Soil Loss Equation and the Younkin Sediment Prediction Equation. Results were compared to the actual measured values, and standard errors and coefficients of correlation were calculated. Sediment discharge from the construction area was determined for storms that occurred during construction of Interstate 81 in a 0.38-square mile basin near Harrisburg, Pennsylvania. Precipitation data tabulated included total rainfall, maximum 30-minute rainfall, kinetic energy, and the erosive index of the precipitation. Highway construction data tabulated included the area disturbed by clearing and grubbing, the area in cuts and fills, the average depths of cuts and fills, the area seeded and mulched, and the area paved. Using the Universal Soil Loss Equation, sediment discharge from the construction area was calculated for storms. The standard error of estimate was 0.40 (about 105 percent), and the coefficient of correlation was 0.79. Sediment discharge from the construction area was also calculated using the Younkin Equation. The standard error of estimate, 0.42 (about 110 percent), and the coefficient of correlation, 0.77, are comparable to those from the Universal Soil Loss Equation.
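
    The comparison statistics used above can be reproduced in outline as follows. Working in log10 units of sediment load (so that a standard error near 0.4 corresponds to roughly ±100 percent) is an assumption here, and the storm data are placeholders.

    ```python
    # Sketch of comparing predicted and measured per-storm sediment loads via
    # the standard error of estimate and the correlation coefficient, in
    # log10 units (an assumed convention). Data are illustrative placeholders.
    import numpy as np

    rng = np.random.default_rng(4)
    measured = 10 ** rng.uniform(-1, 2, size=40)            # tons per storm
    predicted = measured * 10 ** rng.normal(0, 0.4, 40)     # prediction equation with scatter

    log_m, log_p = np.log10(measured), np.log10(predicted)
    residuals = log_p - log_m
    see = np.sqrt(np.sum(residuals**2) / (len(residuals) - 2))   # standard error of estimate
    r = np.corrcoef(log_m, log_p)[0, 1]
    print(f"standard error of estimate = {see:.2f} log units, r = {r:.2f}")
    ```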

  11. Aspheric glass lens modeling and machining

    NASA Astrophysics Data System (ADS)

    Johnson, R. Barry; Mandina, Michael

    2005-08-01

    The incorporation of aspheric lenses in complex lens systems can provide significant image quality improvement, reduction of the number of lens elements, smaller size, and lower weight. Recently, it has become practical to manufacture aspheric glass lenses using diamond-grinding methods. The evolution of the manufacturing technology is discussed for a specific aspheric glass lens. When a prototype all-glass lens system (80 mm efl, F/2.5) was fabricated and tested, it was observed that the image quality was significantly less than was predicted by the optical design software. The cause of the degradation was identified as the large aspheric element in the lens. Identification was possible by precision mapping of the spatial coordinates of the lens surface and then transforming this data into an appropriate optical surface defined by derived grid sag data. The resulting optical analysis yielded a modeled image consistent with that observed when testing the prototype lens system in the laboratory. This insight into a localized slope-error problem allowed improvements in the fabrication process to be implemented. In the second fabrication attempt, the resulting aspheric lens provided a remarkable improvement in the observed image quality, although still falling somewhat short of the desired image quality goal. In parallel with the fabrication enhancement effort, optical modeling of the surface was undertaken to determine how much surface error, and which error types, were allowable to achieve the desired image quality goal. With this knowledge, final improvements were made to the fabrication process. The third prototype lens achieved the goal of optical performance. Rapid development of the aspheric glass lens was made possible by the interactive relationship between the optical designer, diamond-grinding personnel, and the metrology personnel. With rare exceptions, the subsequent production lenses were optically acceptable and afforded reasonable manufacturing costs.

  12. Probabilistic Air Segmentation and Sparse Regression Estimated Pseudo CT for PET/MR Attenuation Correction

    PubMed Central

    Chen, Yasheng; Juttukonda, Meher; Su, Yi; Benzinger, Tammie; Rubin, Brian G.; Lee, Yueh Z.; Lin, Weili; Shen, Dinggang; Lalush, David

    2015-01-01

    Purpose To develop a positron emission tomography (PET) attenuation correction method for brain PET/magnetic resonance (MR) imaging by estimating pseudo computed tomographic (CT) images from T1-weighted MR and atlas CT images. Materials and Methods In this institutional review board–approved and HIPAA-compliant study, PET/MR/CT images were acquired in 20 subjects after obtaining written consent. A probabilistic air segmentation and sparse regression (PASSR) method was developed for pseudo CT estimation. Air segmentation was performed with assistance from a probabilistic air map. For nonair regions, the pseudo CT numbers were estimated via sparse regression by using atlas MR patches. The mean absolute percentage error (MAPE) on PET images was computed as the normalized mean absolute difference in PET signal intensity between a method and the reference standard continuous CT attenuation correction method. Friedman analysis of variance and Wilcoxon matched-pairs tests were performed for statistical comparison of MAPE between the PASSR method and the Dixon segmentation, CT segmentation, and population-averaged CT atlas (mean atlas) methods. Results The PASSR method yielded a mean MAPE ± standard deviation of 2.42% ± 1.0 in the whole brain, 3.28% ± 0.93 in gray matter, and 2.16% ± 1.75 in white matter, which were significantly lower than the Dixon, CT segmentation, and mean atlas values (P < .01). Moreover, 68.0% ± 16.5, 85.8% ± 12.9, and 96.0% ± 2.5 of whole-brain volume had percentage errors within ±2%, ±5%, and ±10%, respectively, when using PASSR, which was significantly higher than with the other methods (P < .01). Conclusion PASSR outperformed the Dixon, CT segmentation, and mean atlas methods by reducing PET error owing to attenuation correction. © RSNA, 2014 PMID:25521778
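
    A compact sketch of the MAPE metric described above is given below; the voxel arrays and brain mask are placeholders, and the formula is the generic normalized mean absolute difference rather than the authors' exact implementation.

    ```python
    # Sketch of the mean absolute percentage error (MAPE) between PET images
    # reconstructed with a candidate attenuation-correction map and with the
    # reference CT-based map. Voxel arrays and mask are placeholders.
    import numpy as np

    rng = np.random.default_rng(5)
    pet_reference = rng.uniform(100, 1000, size=(64, 64, 64))   # CT-corrected PET
    pet_candidate = pet_reference * (1 + rng.normal(0, 0.03, (64, 64, 64)))
    brain_mask = np.ones_like(pet_reference, dtype=bool)        # placeholder brain mask

    def mape(candidate, reference, mask):
        """Mean absolute difference normalized by the reference, in percent."""
        diff = np.abs(candidate[mask] - reference[mask])
        return 100.0 * np.mean(diff / reference[mask])

    print(f"MAPE = {mape(pet_candidate, pet_reference, brain_mask):.2f}%")
    ```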

  13. Ready-to-use pre-filled syringes of atropine for anaesthesia care in French hospitals - a budget impact analysis.

    PubMed

    Benhamou, Dan; Piriou, Vincent; De Vaumas, Cyrille; Albaladejo, Pierre; Malinovsky, Jean-Marc; Doz, Marianne; Lafuma, Antoine; Bouaziz, Hervé

    2017-04-01

    Patient safety is improved by the use of labelled, ready-to-use, pre-filled syringes (PFS) when compared to conventional methods of syringe preparation (CMP) of the same product from an ampoule. However, the PFS presentation costs more than the CMP presentation. To estimate the budget impact for French hospitals of switching from atropine in ampoules to atropine PFS for anaesthesia care. A model was constructed to simulate the financial consequences of the use of atropine PFS in operating theatres, taking into account wastage and medication errors. The model tested different scenarios and a sensitivity analysis was performed. In a reference scenario, the systematic use of atropine PFS rather than atropine CMP yielded a net one-year budget saving of €5,255,304. Medication errors outweighed other cost factors relating to the use of atropine CMP (€9,425,448). Avoidance of wastage in the case of atropine CMP (prepared and unused) was a major source of savings (€1,167,323). Significant savings were made by means of other scenarios examined. The sensitivity analysis suggests that the results obtained are robust and stable for a range of parameter estimates and assumptions. The financial model was based on data obtained from the literature and expert opinions. The budget impact analysis shows that even though atropine PFS is more expensive than atropine CMP, its use would lead to significant cost savings. Savings would mainly be due to fewer medication errors and their associated consequences and the absence of wastage when atropine syringes are prepared in advance. Copyright © 2016 Société française d'anesthésie et de réanimation (Sfar). Published by Elsevier Masson SAS. All rights reserved.

  14. A Quadratic Spring Equation

    ERIC Educational Resources Information Center

    Fay, Temple H.

    2010-01-01

    Through numerical investigations, we study examples of the forced quadratic spring equation [image omitted]. By performing trial-and-error numerical experiments, we demonstrate the existence of stability boundaries in the phase plane indicating initial conditions yielding bounded solutions, investigate the resonance boundary in the [omega]…

  15. Too True to be Bad

    PubMed Central

    Etz, Alexander J.

    2017-01-01

    Psychology journals rarely publish nonsignificant results. At the same time, it is often very unlikely (or “too good to be true”) that a set of studies yields exclusively significant results. Here, we use likelihood ratios to explain when sets of studies that contain a mix of significant and nonsignificant results are likely to be true or “too true to be bad.” As we show, mixed results are not only likely to be observed in lines of research but also, when observed, often provide evidence for the alternative hypothesis, given reasonable levels of statistical power and an adequately controlled low Type 1 error rate. Researchers should feel comfortable submitting such lines of research with an internal meta-analysis for publication. A better understanding of probabilities, accompanied by more realistic expectations of what real sets of studies look like, might be an important step in mitigating publication bias in the scientific literature. PMID:29276574
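
    The likelihood-ratio reasoning can be illustrated with a small binomial calculation: compare the probability of k significant results out of n studies when every effect is null (significance occurs only at the Type 1 error rate) with the probability when every study has the stated power. The alpha, power, and counts below are illustrative assumptions.

    ```python
    # Binomial likelihood ratio for a mixed set of study results: how much
    # more probable is "3 of 5 significant" if every study has 80% power
    # than if every effect is null with alpha = 0.05? Numbers are illustrative.
    from math import comb

    def binom_pmf(k, n, p):
        return comb(n, k) * p**k * (1 - p)**(n - k)

    alpha, power = 0.05, 0.80
    n_studies, k_significant = 5, 3                     # a "mixed" set of results

    p_h0 = binom_pmf(k_significant, n_studies, alpha)   # all effects null
    p_h1 = binom_pmf(k_significant, n_studies, power)   # all effects real
    print(f"likelihood ratio H1/H0 = {p_h1 / p_h0:.0f}")
    ```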

  16. Eigenvalue computations with the QUAD4 consistent-mass matrix

    NASA Technical Reports Server (NTRS)

    Butler, Thomas A.

    1990-01-01

    The NASTRAN user has the option of using either a lumped-mass matrix or a consistent- (coupled-) mass matrix with the QUAD4 shell finite element. At the Sixteenth NASTRAN Users' Colloquium (1988), Melvyn Marcus and associates of the David Taylor Research Center summarized a study comparing the results of the QUAD4 element with results of other NASTRAN shell elements for a cylindrical-shell modal analysis. Results of this study, in which both the lumped- and consistent-mass matrix formulations were used, implied that the consistent-mass matrix yielded poor results. In an effort to further evaluate the consistent-mass matrix, a study was performed using both a cylindrical-shell geometry and a flat-plate geometry. Modal parameters were extracted for several modes for both geometries, leading to some significant conclusions. First, there do not appear to be any fundamental errors associated with the consistent-mass matrix. However, its accuracy is quite different for the two different geometries studied. The consistent-mass matrix yields better results for the flat-plate geometry and the lumped-mass matrix seems to be the better choice for cylindrical-shell geometries.

  17. Psychosocial work environment and health in U.S. metropolitan areas: a test of the demand-control and demand-control-support models.

    PubMed

    Muntaner, C; Schoenbach, C

    1994-01-01

    The authors use confirmatory factor analysis to investigate the psychosocial dimensions of work environments relevant to health outcomes, in a representative sample of five U.S. metropolitan areas. Through an aggregated inference system, scales from Schwartz and associates' job scoring system and from the Dictionary of Occupational Titles (DOT) were employed to examine two alternative models: the demand-control model of Karasek and Theorell and Johnson's demand-control-support model. Confirmatory factor analysis was used to test the two models. The two multidimensional models yielded better fits than an unstructured model. After allowing for the measurement error variance due to the method of assessment (Schwartz and associates' system or DOT), both models yielded acceptable goodness-of-fit indices, but the fit of the demand-control-support model was significantly better. Overall these results indicate that the dimensions of Control (substantive complexity of work, skill discretion, decision authority), Demands (physical exertion, physical demands and hazards), and Social Support (coworker and supervisor social supports) provide an acceptable account of the psychosocial dimensions of work associated with health outcomes.

  18. Enhanced sequencing coverage with digital droplet multiple displacement amplification

    PubMed Central

    Sidore, Angus M.; Lan, Freeman; Lim, Shaun W.; Abate, Adam R.

    2016-01-01

    Sequencing small quantities of DNA is important for applications ranging from the assembly of uncultivable microbial genomes to the identification of cancer-associated mutations. To obtain sufficient quantities of DNA for sequencing, the small amount of starting material must be amplified significantly. However, existing methods often yield errors or non-uniform coverage, reducing sequencing data quality. Here, we describe digital droplet multiple displacement amplification, a method that enables massive amplification of low-input material while maintaining sequence accuracy and uniformity. The low-input material is compartmentalized as single molecules in millions of picoliter droplets. Because the molecules are isolated in compartments, they amplify to saturation without competing for resources; this yields uniform representation of all sequences in the final product and, in turn, enhances the quality of the sequence data. We demonstrate the ability to uniformly amplify the genomes of single Escherichia coli cells, comprising just 4.7 fg of starting DNA, and obtain sequencing coverage distributions that rival that of unamplified material. Digital droplet multiple displacement amplification provides a simple and effective method for amplifying minute amounts of DNA for accurate and uniform sequencing. PMID:26704978

  19. Heterogeneity in Trauma Registry Data Quality: Implications for Regional and National Performance Improvement in Trauma.

    PubMed

    Dente, Christopher J; Ashley, Dennis W; Dunne, James R; Henderson, Vernon; Ferdinand, Colville; Renz, Barry; Massoud, Romeo; Adamski, John; Hawke, Thomas; Gravlee, Mark; Cascone, John; Paynter, Steven; Medeiros, Regina; Atkins, Elizabeth; Nicholas, Jeffrey M

    2016-03-01

    Led by the American College of Surgeons Trauma Quality Improvement Program, performance improvement efforts have expanded to regional and national levels. The American College of Surgeons Trauma Quality Improvement Program recommends 5 audit filters to identify records with erroneous data, and the Georgia Committee on Trauma instituted standardized audit filter analysis in all Level I and II trauma centers in the state. Audit filter reports were performed from July 2013 to September 2014. Records were reviewed to determine whether there was erroneous data abstraction. Percent yield was defined as number of errors divided by number of charts captured. Twelve centers submitted complete datasets. During 15 months, 21,115 patient records were subjected to analysis. Audit filter captured 2,901 (14%) records and review yielded 549 (2.5%) records with erroneous data. Audit filter 1 had the highest number of records identified and audit filter 3 had the highest percent yield. Individual center error rates ranged from 0.4% to 5.2%. When comparing quarters 1 and 2 with quarters 4 and 5, there were 7 of 12 centers with substantial decreases in error rates. The most common missed complications were pneumonia, urinary tract infection, and acute renal failure. The most common missed comorbidities were hypertension, diabetes, and substance abuse. In Georgia, the prevalence of erroneous data in trauma registries varies among centers, leading to heterogeneity in data quality, and suggests that targeted educational opportunities exist at the institutional level. Standardized audit filter assessment improved data quality in the majority of participating centers. Copyright © 2016 American College of Surgeons. Published by Elsevier Inc. All rights reserved.

  20. Canopy Chlorophyll Density Based Index for Estimating Nitrogen Status and Predicting Grain Yield in Rice

    PubMed Central

    Liu, Xiaojun; Zhang, Ke; Zhang, Zeyu; Cao, Qiang; Lv, Zunfu; Yuan, Zhaofeng; Tian, Yongchao; Cao, Weixing; Zhu, Yan

    2017-01-01

    Canopy chlorophyll density (Chl) has a pivotal role in diagnosing crop growth and nutrition status. The purpose of this study was to develop Chl-based models for estimating N status and predicting grain yield of rice (Oryza sativa L.) from leaf area index (LAI) and chlorophyll concentration of the upper leaves. Six field experiments were conducted in Jiangsu Province of East China during 2007, 2008, 2009, 2013, and 2014. Different N rates were applied to generate contrasting conditions of N availability in six Japonica cultivars (9915, 27123, Wuxiangjing 14, Wuyunjing 19, Yongyou 8, and Wuyunjing 24) and two Indica cultivars (Liangyoupei 9, YLiangyou 1). The SPAD values of the four uppermost leaves and LAI were measured from the tillering to flowering growth stages. Two N indicators, leaf N accumulation (LNA) and plant N accumulation (PNA), were measured. The LAI values estimated by LAI-2000 and LI-3050C were compared and calibrated with a conversion equation. A linear regression analysis showed significant relationships between Chl value and the N indicators; the equations were as follows: PNA = (0.092 × Chl) − 1.179 (R2 = 0.94, P < 0.001, relative root mean square error (RRMSE) = 0.196) and LNA = (0.052 × Chl) − 0.269 (R2 = 0.93, P < 0.001, RRMSE = 0.185). A standardized method was used to quantify the correlation between Chl value and grain yield: normalized yield = (0.601 × normalized Chl) + 0.400 (R2 = 0.81, P < 0.001, RRMSE = 0.078). Independent experimental data also validated the use of Chl value to accurately estimate rice N status and predict grain yield. PMID:29163568
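
    The fitted equations reported above can be applied directly, as in the sketch below; the example Chl input and the normalized-Chl value are placeholders, and only the published coefficients are reused.

    ```python
    # Apply the regression equations reported above to a canopy chlorophyll
    # density (Chl) value. The Chl input and the normalized-Chl value are
    # placeholders; only the published coefficients are reused.
    def plant_n_accumulation(chl):
        return 0.092 * chl - 1.179              # PNA = 0.092*Chl - 1.179

    def leaf_n_accumulation(chl):
        return 0.052 * chl - 0.269              # LNA = 0.052*Chl - 0.269

    def normalized_yield(normalized_chl):
        return 0.601 * normalized_chl + 0.400   # both variables scaled to 0-1

    chl = 60.0                                  # placeholder canopy Chl value
    print(f"PNA = {plant_n_accumulation(chl):.2f}")
    print(f"LNA = {leaf_n_accumulation(chl):.2f}")
    print(f"normalized yield at normalized Chl = 0.7: {normalized_yield(0.7):.2f}")
    ```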

  1. Understanding fatal older road user crash circumstances and risk factors.

    PubMed

    Koppel, Sjaan; Bugeja, Lyndal; Smith, Daisy; Lamb, Ashne; Dwyer, Jeremy; Fitzharris, Michael; Newstead, Stuart; D'Elia, Angelo; Charlton, Judith

    2018-02-28

    This study used medicolegal data to investigate fatal older road user (ORU) crash circumstances and risk factors relating to four key components of the Safe System approach (e.g., roads and roadsides, vehicles, road users, and speeds) to identify areas of priority for targeted prevention activity. The Coroners Court of Victoria's Surveillance Database was searched to identify coronial records with at least one deceased ORU in the state of Victoria, Australia, for 2013-2014. Information relating to the ORU, crash characteristics and circumstances, and risk factors was extracted and analyzed. The average rate of fatal ORU crashes per 100,000 population was 8.1 (95% confidence interval [CI] 6.0-10.2), which was more than double the average rate of fatal middle-aged road user crashes (3.6, 95% CI 2.5-4.6). There was a significant relationship between age group and deceased road user type (χ 2 (15, N = 226) = 3.56, p < 0.001). The proportion of deceased drivers decreased with age, whereas the proportion of deceased pedestrians increased with age. The majority of fatal ORU crashes involved a counterpart (another vehicle: 59.4%; fixed/stationary object: 25.4%), and occurred "on road" (87.0%), on roads that were paved (94.2%), dry (74.2%), and had light traffic volume (38.3%). Road user error was identified by the police and/or coroner for the majority of fatal ORU crashes (57.9%), with a significant proportion of deceased ORU deemed to have "misjudged" (40.9%) or "failed to yield" (37.9%). Road user error was the most significant risk factor identified in fatal ORU crashes, which suggests that there is a limited capacity of the Victorian road system to fully accommodate road user errors. Initiatives related to safer roads and roadsides, vehicles, and speed zones, as well as behavioral approaches, are key areas of priority for targeted activity to prevent fatal older road user crashes in the future.

  2. Backus Effect and Perpendicular Errors in Harmonic Models of Real vs. Synthetic Data

    NASA Technical Reports Server (NTRS)

    Voorhies, C. V.; Santana, J.; Sabaka, T.

    1999-01-01

    Measurements of geomagnetic scalar intensity on a thin spherical shell alone are not enough to separate internal from external source fields; moreover, such scalar data are not enough for accurate modeling of the vector field from internal sources because of unmodeled fields and small data errors. Spherical harmonic models of the geomagnetic potential fitted to scalar data alone therefore suffer from the well-understood Backus effect and perpendicular errors. Curiously, errors in some models of simulated 'data' are very much less than those in models of real data. We analyze select Magsat vector and scalar measurements separately to illustrate Backus effect and perpendicular errors in models of real scalar data. By using a model to synthesize 'data' at the observation points, and by adding various types of 'noise', we illustrate such errors in models of synthetic 'data'. Perpendicular errors prove quite sensitive to the maximum degree in the spherical harmonic expansion of the potential field model fitted to the scalar data. Small errors in models of synthetic 'data' are found to be an artifact of matched truncation levels. For example, consider scalar synthetic 'data' computed from a degree 14 model. A degree 14 model fitted to such synthetic 'data' yields negligible error, but amplifies 4 nT (rmss) of added noise into a 60 nT error (rmss); however, a degree 12 model fitted to the noisy 'data' suffers a 492 nT error (rmss through degree 12). Geomagnetic measurements remain unaware of model truncation, so the small errors indicated by some simulations cannot be realized in practice. Errors in models fitted to scalar data alone approach 1000 nT (rmss) and several thousand nT (maximum).

  3. Correlation of Head Impacts to Change in Balance Error Scoring System Scores in Division I Men's Lacrosse Players.

    PubMed

    Miyashita, Theresa L; Diakogeorgiou, Eleni; Marrie, Kaitlyn

    Investigation into the effect of cumulative subconcussive head impacts has yielded various results in the literature, with many supporting a link to neurological deficits. Little research has been conducted on men's lacrosse and associated balance deficits from head impacts. It was hypothesized that (1) athletes would commit more errors on the postseason Balance Error Scoring System (BESS) test and (2) there would be a positive correlation between change in BESS scores and head impact exposure data. Prospective longitudinal study; level of evidence, 3. Thirty-four Division I men's lacrosse players (age, 19.59 ± 1.42 years) wore helmets instrumented with a sensor to collect head impact exposure data over the course of a competitive season. Players completed a BESS test at the start and end of the competitive season. The number of errors from pre- to postseason increased during the double-leg stance on foam (P < 0.001), tandem stance on foam (P = 0.009), total number of errors on a firm surface (P = 0.042), and total number of errors on a foam surface (P = 0.007). There were significant correlations only between the total errors on a foam surface and linear acceleration (P = 0.038, r = 0.36), head injury criteria (P = 0.024, r = 0.39), and Gadd Severity Index scores (P = 0.031, r = 0.37). Changes in the total number of errors on a foam surface may be considered a sensitive measure to detect balance deficits associated with cumulative subconcussive head impacts sustained over the course of 1 lacrosse season, as measured by average linear acceleration, head injury criteria, and Gadd Severity Index scores. If there is microtrauma to the vestibular system due to repetitive subconcussive impacts, only an assessment that highly stresses the vestibular system may be able to detect these changes. Cumulative subconcussive impacts may result in neurocognitive dysfunction, including balance deficits, which are associated with an increased risk for injury. The development of a strategy to reduce the total number of head impacts may curb the associated sequelae. Incorporation of a modified BESS test, firm surface only, may not be recommended as it may not detect changes due to repetitive impacts over the course of a competitive season.

  4. Linguistic Determinants of the Difficulty of True-False Test Items

    ERIC Educational Resources Information Center

    Peterson, Candida C.; Peterson, James L.

    1976-01-01

    Adults read a prose passage and responded to statements based on it that were either true or false and were phrased either affirmatively or negatively. True negatives yielded the most errors, followed in order by false negatives, true affirmatives, and false affirmatives. (Author/RC)

  5. Relative L-shell X-ray intensities of Pt, Pb and Bi following ionization by 59.54 keV γ-rays

    NASA Astrophysics Data System (ADS)

    Dhal, B. B.; Padhi, H. C.

    1994-12-01

    Relative L-shell X-ray intensities of Pt, Pb and Bi have been measured following ionization by 59.54 keV photons from an 241Am point source. The measured ratios have been compared with the theoretical ratios estimated using the photoionization cross-sections of Scofield and different decay yield data. The comparison shows good agreement for Pb and Bi with the decay yield data of Krause, but the decay yield data of Xu and Xu overestimate the ratios, particularly for the Iγ/Iα ratio. Our results for Pb and Bi with improved error limits also agree with the previous experimental results of Shatendra et al. For Pt our present results are found to lie between the two theoretical results obtained by using different sets of decay yield data.

  6. [Adaptability of APSIM model in Southwestern China: A case study of winter wheat in Chongqing City].

    PubMed

    Dai, Tong; Wang, Jing; He, Di; Zhang, Jian-ping; Wang, Na

    2015-04-01

    Field experimental data for winter wheat and parallel daily meteorological data at four typical stations in Chongqing City were used to calibrate and validate the APSIM-wheat model and to determine the genetic parameters for 12 varieties of winter wheat. The results showed that there was good agreement between the simulated and observed growth periods from sowing to emergence, flowering and maturity of wheat. Root mean squared errors (RMSEs) between simulated and observed emergence, flowering and maturity were 0-3, 1-8, and 0-8 d, respectively. Normalized root mean squared errors (NRMSEs) between simulated and observed above-ground biomass for the 12 study varieties were less than 30%. NRMSEs between simulated and observed yields for 10 of the 12 study varieties were less than 30%. The APSIM-wheat model performed well in simulating phenology, aboveground biomass and yield of winter wheat in Chongqing City, and could provide foundational support for assessing the impact of climate change on wheat production in the study area based on the model.
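
    As a quick illustration of the agreement statistics quoted above, the sketch below computes RMSE and NRMSE (RMSE as a percentage of the observed mean) for simulated versus observed values; the example numbers are invented, not the Chongqing data.

```python
# Minimal sketch: RMSE and normalized RMSE between simulated and observed
# values, as used to judge model agreement. Example arrays are hypothetical.
import numpy as np

def rmse(sim, obs):
    sim, obs = np.asarray(sim, float), np.asarray(obs, float)
    return np.sqrt(np.mean((sim - obs) ** 2))

def nrmse(sim, obs):
    # RMSE expressed as a percentage of the observed mean
    return 100.0 * rmse(sim, obs) / np.mean(obs)

obs_yield = [4.2, 5.1, 3.8, 4.9]   # t/ha, hypothetical observations
sim_yield = [4.5, 4.8, 4.1, 5.3]   # t/ha, hypothetical simulations
print(f"RMSE = {rmse(sim_yield, obs_yield):.2f} t/ha, "
      f"NRMSE = {nrmse(sim_yield, obs_yield):.1f}%")
```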

  7. LAI inversion from optical reflectance using a neural network trained with a multiple scattering model

    NASA Technical Reports Server (NTRS)

    Smith, James A.

    1992-01-01

    The inversion of the leaf area index (LAI) canopy parameter from optical spectral reflectance measurements is obtained using a backpropagation artificial neural network trained on input-output pairs generated by a multiple scattering reflectance model. The problem of LAI estimation over sparse canopies (LAI < 1.0) with varying soil reflectance backgrounds is particularly difficult. Standard multiple regression methods applied to canopies within a single homogeneous soil type yield good results but perform unacceptably when applied across soil boundaries, resulting in absolute percentage errors of >1000 percent for low LAI. Minimization methods applied to merit functions constructed from differences between measured reflectances and reflectances predicted by multiple-scattering models are unacceptably sensitive to the initial guess for the desired parameter. In contrast, the reported neural network generally yields absolute percentage errors of <30 percent when weighting coefficients trained on one soil type are applied to canopy reflectance predicted for a different soil background.
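
    A minimal sketch of the inversion idea, assuming a toy two-band forward model in place of the multiple-scattering model used in the paper: generate reflectance-LAI pairs with the forward model, train a small feed-forward network on them, and invert reflectances simulated over different soil backgrounds.

```python
# Hedged sketch: train a small network on (reflectance -> LAI) pairs produced
# by a forward canopy model, then invert new reflectances. The toy forward
# model below is an assumption for illustration only.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
lai = rng.uniform(0.1, 1.0, 2000)            # sparse canopies, LAI < 1
soil = rng.uniform(0.05, 0.35, 2000)         # varying soil background reflectance

def toy_forward(lai, soil):
    # crude exponential mixing of canopy and soil reflectance (red, NIR bands)
    red = 0.04 + (soil - 0.04) * np.exp(-0.7 * lai)
    nir = 0.45 - (0.45 - soil) * np.exp(-0.9 * lai)
    return np.column_stack([red, nir])

X = toy_forward(lai, soil)
net = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=5000, random_state=0)
net.fit(X, lai)

lai_test = rng.uniform(0.1, 1.0, 200)
X_test = toy_forward(lai_test, rng.uniform(0.05, 0.35, 200))  # new soil types
ape = 100 * np.abs(net.predict(X_test) - lai_test) / lai_test
print(f"median absolute percentage error: {np.median(ape):.1f}%")
```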

  8. Quantifying the uncertainty introduced by discretization and time-averaging in two-fluid model predictions

    DOE PAGES

    Syamlal, Madhava; Celik, Ismail B.; Benyahia, Sofiane

    2017-07-12

    The two-fluid model (TFM) has become a tool for the design and troubleshooting of industrial fluidized bed reactors. To use TFM for scale-up with confidence, the uncertainty in its predictions must be quantified. Here, we study two sources of uncertainty: discretization and time-averaging. First, we show that successive grid refinement may not yield grid-independent transient quantities, including cross-section-averaged quantities. Successive grid refinement would, however, yield grid-independent time-averaged quantities on sufficiently fine grids. A Richardson extrapolation can then be used to estimate the discretization error, and the grid convergence index gives an estimate of the uncertainty. Richardson extrapolation may not work for industrial-scale simulations that use coarse grids. We present an alternative method for coarse grids and assess its ability to estimate the discretization error. Second, we assess two methods (autocorrelation and binning) and find that the autocorrelation method is more reliable for estimating the uncertainty introduced by time-averaging TFM data.
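
    A brief sketch of the Richardson extrapolation and grid convergence index (GCI) procedure mentioned above, for a time-averaged quantity computed on three grids with a constant refinement ratio; the values and the 1.25 safety factor are illustrative conventions, not results from the paper.

```python
# Sketch of Richardson extrapolation and the GCI for a time-averaged quantity
# computed on coarse, medium, and fine grids with refinement ratio r.
import math

f_fine, f_med, f_coarse = 0.412, 0.405, 0.384   # e.g. time-averaged voidage (toy values)
r = 2.0                                          # grid refinement ratio
Fs = 1.25                                        # safety factor for a 3-grid study

# observed order of accuracy
p = math.log(abs(f_coarse - f_med) / abs(f_med - f_fine)) / math.log(r)

# Richardson-extrapolated (grid-independent) estimate
f_exact = f_fine + (f_fine - f_med) / (r**p - 1.0)

# GCI on the fine grid: uncertainty band as a fraction of the fine-grid value
e_rel = abs((f_med - f_fine) / f_fine)
gci_fine = Fs * e_rel / (r**p - 1.0)

print(f"p = {p:.2f}, extrapolated value = {f_exact:.4f}, "
      f"GCI_fine = {100 * gci_fine:.2f}%")
```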

  9. Effect of the mandible on mouthguard measurements of head kinematics.

    PubMed

    Kuo, Calvin; Wu, Lyndia C; Hammoor, Brad T; Luck, Jason F; Cutcliffe, Hattie C; Lynall, Robert C; Kait, Jason R; Campbell, Kody R; Mihalik, Jason P; Bass, Cameron R; Camarillo, David B

    2016-06-14

    Wearable sensors are becoming increasingly popular for measuring head motions and detecting head impacts. Many sensors are worn on the skin or in headgear and can suffer from motion artifacts introduced by the compliance of soft tissue or decoupling of headgear from the skull. The instrumented mouthguard is designed to couple directly to the upper dentition, which is made of hard enamel and anchored in a bony socket by stiff ligaments. This gives the mouthguard superior coupling to the skull compared with other systems. However, multiple validation studies have yielded conflicting results with respect to the mouthguard's head kinematics measurement accuracy. Here, we demonstrate that imposing different constraints on the mandible (lower jaw) can alter mouthguard kinematic accuracy in dummy headform testing. In addition, post mortem human surrogate tests utilizing the worst-case unconstrained mandible condition yield 40% and 80% normalized root mean square error in angular velocity and angular acceleration, respectively. These errors can be modeled using a simple spring-mass system in which the soft mouthguard material near the sensors acts as a spring and the mandible as a mass. However, the mouthguard can be designed to mitigate these disturbances by isolating sensors from mandible loads, improving accuracy to below 15% normalized root mean square error in all kinematic measures. Thus, while current mouthguards would suffer from measurement errors in the worst-case unconstrained mandible condition, future mouthguards should be designed to account for these disturbances and future validation testing should include unconstrained mandibles to ensure proper accuracy. Copyright © 2016 Elsevier Ltd. All rights reserved.

  10. Multimodel ensembles of wheat growth: many models are better than one.

    PubMed

    Martre, Pierre; Wallach, Daniel; Asseng, Senthold; Ewert, Frank; Jones, James W; Rötter, Reimund P; Boote, Kenneth J; Ruane, Alex C; Thorburn, Peter J; Cammarano, Davide; Hatfield, Jerry L; Rosenzweig, Cynthia; Aggarwal, Pramod K; Angulo, Carlos; Basso, Bruno; Bertuzzi, Patrick; Biernath, Christian; Brisson, Nadine; Challinor, Andrew J; Doltra, Jordi; Gayler, Sebastian; Goldberg, Richie; Grant, Robert F; Heng, Lee; Hooker, Josh; Hunt, Leslie A; Ingwersen, Joachim; Izaurralde, Roberto C; Kersebaum, Kurt Christian; Müller, Christoph; Kumar, Soora Naresh; Nendel, Claas; O'leary, Garry; Olesen, Jørgen E; Osborne, Tom M; Palosuo, Taru; Priesack, Eckart; Ripoche, Dominique; Semenov, Mikhail A; Shcherbak, Iurii; Steduto, Pasquale; Stöckle, Claudio O; Stratonovitch, Pierre; Streck, Thilo; Supit, Iwan; Tao, Fulu; Travasso, Maria; Waha, Katharina; White, Jeffrey W; Wolf, Joost

    2015-02-01

    Crop models of crop growth are increasingly used to quantify the impact of global changes due to climate or crop management. Therefore, accuracy of simulation results is a major concern. Studies with ensembles of crop models can give valuable information about model accuracy and uncertainty, but such studies are difficult to organize and have only recently begun. We report on the largest ensemble study to date, of 27 wheat models tested in four contrasting locations for their accuracy in simulating multiple crop growth and yield variables. The relative error averaged over models was 24-38% for the different end-of-season variables including grain yield (GY) and grain protein concentration (GPC). There was little relation between error of a model for GY or GPC and error for in-season variables. Thus, most models did not arrive at accurate simulations of GY and GPC by accurately simulating preceding growth dynamics. Ensemble simulations, taking either the mean (e-mean) or median (e-median) of simulated values, gave better estimates than any individual model when all variables were considered. Compared to individual models, e-median ranked first in simulating measured GY and third in GPC. The error of e-mean and e-median declined with an increasing number of ensemble members, with little decrease beyond 10 models. We conclude that multimodel ensembles can be used to create new estimators with improved accuracy and consistency in simulating growth dynamics. We argue that these results are applicable to other crop species, and hypothesize that they apply more generally to ecological system models. © 2014 John Wiley & Sons Ltd.
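
    The e-mean/e-median idea can be illustrated with a short sketch: synthetic "model" predictions (observation plus model-specific bias plus noise) stand in for the 27 wheat models, and the relative RMSE of the ensemble mean and median is tracked as members are added.

```python
# Sketch: build e-mean and e-median estimators from an ensemble of model
# predictions and check how error declines with the number of members.
# The simulated predictions are synthetic stand-ins, not the study's models.
import numpy as np

rng = np.random.default_rng(1)
n_models, n_sites = 27, 40
obs = rng.uniform(3.0, 9.0, n_sites)                       # grain yield, t/ha
bias = rng.normal(0.0, 0.8, n_models)[:, None]             # per-model bias
sims = obs + bias + rng.normal(0.0, 1.0, (n_models, n_sites))

def rel_rmse(pred, obs):
    return 100 * np.sqrt(np.mean((pred - obs) ** 2)) / np.mean(obs)

for k in (1, 3, 5, 10, 20, 27):
    members = sims[:k]
    print(f"{k:2d} members: e-mean {rel_rmse(members.mean(axis=0), obs):5.1f}%  "
          f"e-median {rel_rmse(np.median(members, axis=0), obs):5.1f}%")
```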

  11. Smooth extrapolation of unknown anatomy via statistical shape models

    NASA Astrophysics Data System (ADS)

    Grupp, R. B.; Chiang, H.; Otake, Y.; Murphy, R. J.; Gordon, C. R.; Armand, M.; Taylor, R. H.

    2015-03-01

    Several methods to perform extrapolation of unknown anatomy were evaluated. The primary application is to enhance surgical procedures that may use partial medical images or medical images of incomplete anatomy. Le Fort-based, face-jaw-teeth transplant is one such procedure. From CT data of 36 skulls and 21 mandibles separate Statistical Shape Models of the anatomical surfaces were created. Using the Statistical Shape Models, incomplete surfaces were projected to obtain complete surface estimates. The surface estimates exhibit non-zero error in regions where the true surface is known; it is desirable to keep the true surface and seamlessly merge the estimated unknown surface. Existing extrapolation techniques produce non-smooth transitions from the true surface to the estimated surface, resulting in additional error and a less aesthetically pleasing result. The three extrapolation techniques evaluated were: copying and pasting of the surface estimate (non-smooth baseline), a feathering between the patient surface and surface estimate, and an estimate generated via a Thin Plate Spline trained from displacements between the surface estimate and corresponding vertices of the known patient surface. Feathering and Thin Plate Spline approaches both yielded smooth transitions. However, feathering corrupted known vertex values. Leave-one-out analyses were conducted, with 5% to 50% of known anatomy removed from the left-out patient and estimated via the proposed approaches. The Thin Plate Spline approach yielded smaller errors than the other two approaches, with an average vertex error improvement of 1.46 mm and 1.38 mm for the skull and mandible respectively, over the baseline approach.
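
    A hedged sketch of the thin-plate-spline step described above, using scipy's RBFInterpolator with a thin-plate-spline kernel on toy point sets (the study used full skull and mandible meshes and its own TPS formulation): a smooth displacement field is learned where both the patient surface and the statistical-shape-model estimate are known, then applied to the estimated vertices in the missing region so the extrapolated surface meets the true surface smoothly.

```python
# Hedged sketch of the TPS idea: learn displacements where the true surface is
# known, then extrapolate them smoothly onto the estimated (unknown) region.
# Point sets are toy data, not anatomical meshes.
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(0)
known_est = rng.uniform(-1, 1, (200, 3))            # SSM estimate, known region
known_true = known_est + 0.05 * np.sin(known_est)   # corresponding true vertices
displacement = known_true - known_est

# One thin-plate-spline interpolant per coordinate of the displacement vector
tps = RBFInterpolator(known_est, displacement, kernel='thin_plate_spline')

unknown_est = rng.uniform(-1, 1, (50, 3))           # SSM estimate, missing region
corrected = unknown_est + tps(unknown_est)          # smoothly blended estimate
print(corrected.shape)
```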

  12. Multimodel Ensembles of Wheat Growth: More Models are Better than One

    NASA Technical Reports Server (NTRS)

    Martre, Pierre; Wallach, Daniel; Asseng, Senthold; Ewert, Frank; Jones, James W.; Rotter, Reimund P.; Boote, Kenneth J.; Ruane, Alex C.; Thorburn, Peter J.; Cammarano, Davide; hide

    2015-01-01

    Crop models of crop growth are increasingly used to quantify the impact of global changes due to climate or crop management. Therefore, accuracy of simulation results is a major concern. Studies with ensembles of crop models can give valuable information about model accuracy and uncertainty, but such studies are difficult to organize and have only recently begun. We report on the largest ensemble study to date, of 27 wheat models tested in four contrasting locations for their accuracy in simulating multiple crop growth and yield variables. The relative error averaged over models was 24-38% for the different end-of-season variables including grain yield (GY) and grain protein concentration (GPC). There was little relation between error of a model for GY or GPC and error for in-season variables. Thus, most models did not arrive at accurate simulations of GY and GPC by accurately simulating preceding growth dynamics. Ensemble simulations, taking either the mean (e-mean) or median (e-median) of simulated values, gave better estimates than any individual model when all variables were considered. Compared to individual models, e-median ranked first in simulating measured GY and third in GPC. The error of e-mean and e-median declined with an increasing number of ensemble members, with little decrease beyond 10 models. We conclude that multimodel ensembles can be used to create new estimators with improved accuracy and consistency in simulating growth dynamics. We argue that these results are applicable to other crop species, and hypothesize that they apply more generally to ecological system models.

  13. Multimodel Ensembles of Wheat Growth: Many Models are Better than One

    NASA Technical Reports Server (NTRS)

    Martre, Pierre; Wallach, Daniel; Asseng, Senthold; Ewert, Frank; Jones, James W.; Rotter, Reimund P.; Boote, Kenneth J.; Ruane, Alexander C.; Thorburn, Peter J.; Cammarano, Davide; hide

    2015-01-01

    Crop models of crop growth are increasingly used to quantify the impact of global changes due to climate or crop management. Therefore, accuracy of simulation results is a major concern. Studies with ensembles of crop models can give valuable information about model accuracy and uncertainty, but such studies are difficult to organize and have only recently begun. We report on the largest ensemble study to date, of 27 wheat models tested in four contrasting locations for their accuracy in simulating multiple crop growth and yield variables. The relative error averaged over models was 24-38% for the different end-of-season variables including grain yield (GY) and grain protein concentration (GPC). There was little relation between error of a model for GY or GPC and error for in-season variables. Thus, most models did not arrive at accurate simulations of GY and GPC by accurately simulating preceding growth dynamics. Ensemble simulations, taking either the mean (e-mean) or median (e-median) of simulated values, gave better estimates than any individual model when all variables were considered. Compared to individual models, e-median ranked first in simulating measured GY and third in GPC. The error of e-mean and e-median declined with an increasing number of ensemble members, with little decrease beyond 10 models. We conclude that multimodel ensembles can be used to create new estimators with improved accuracy and consistency in simulating growth dynamics. We argue that these results are applicable to other crop species, and hypothesize that they apply more generally to ecological system models.

  14. Tissue resistivity estimation in the presence of positional and geometrical uncertainties.

    PubMed

    Baysal, U; Eyüboğlu, B M

    2000-08-01

    Geometrical uncertainties (organ boundary variation and electrode position uncertainties) are the biggest sources of error in estimating electrical resistivity of tissues from body surface measurements. In this study, in order to decrease estimation errors, the statistically constrained minimum mean squared error estimation algorithm (MiMSEE) is constrained with a priori knowledge of the geometrical uncertainties in addition to the constraints based on geometry, resistivity range, linearization and instrumentation errors. The MiMSEE calculates an optimum inverse matrix, which maps the surface measurements to the unknown resistivity distribution. The required data are obtained from four-electrode impedance measurements, similar to injected-current electrical impedance tomography (EIT). In this study, the surface measurements are simulated by using a numerical thorax model. The data are perturbed with additive instrumentation noise. Simulated surface measurements are then used to estimate the tissue resistivities by using the proposed algorithm. The results are compared with the results of conventional least squares error estimator (LSEE). Depending on the region, the MiMSEE yields an estimation error between 0.42% and 31.3% compared with 7.12% to 2010% for the LSEE. It is shown that the MiMSEE is quite robust even in the case of geometrical uncertainties.
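
    The contrast between the constrained minimum mean squared error estimate and plain least squares can be sketched as a Bayesian linear estimator; the matrices below are random stand-ins for the linearized measurement operator and the a priori resistivity and noise covariances of the thorax model.

```python
# Sketch of a statistically constrained MMSE (Bayesian linear) estimate versus
# plain least squares. A maps regional resistivities to surface measurements;
# Cx and Cn encode a priori resistivity variability and instrumentation noise.
# All values are illustrative stand-ins.
import numpy as np

rng = np.random.default_rng(2)
n_meas, n_regions = 64, 8
A = rng.normal(size=(n_meas, n_regions))
x_true = rng.uniform(2.0, 20.0, n_regions)           # ohm*m, hypothetical
x_prior = np.full(n_regions, 10.0)
Cx = np.diag(np.full(n_regions, 25.0))               # prior resistivity variance
Cn = np.diag(np.full(n_meas, 0.5))                   # instrumentation noise
y = A @ x_true + rng.multivariate_normal(np.zeros(n_meas), Cn)

# MMSE: x_hat = x_prior + Cx A^T (A Cx A^T + Cn)^-1 (y - A x_prior)
K = Cx @ A.T @ np.linalg.inv(A @ Cx @ A.T + Cn)
x_mmse = x_prior + K @ (y - A @ x_prior)
x_lse = np.linalg.lstsq(A, y, rcond=None)[0]          # conventional LSE

err = lambda x: 100 * np.abs(x - x_true) / x_true
print("MMSE errors (%):", np.round(err(x_mmse), 1))
print("LSE  errors (%):", np.round(err(x_lse), 1))
```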

  15. Software Would Largely Automate Design of Kalman Filter

    NASA Technical Reports Server (NTRS)

    Chuang, Jason C. H.; Negast, William J.

    2005-01-01

    Embedded Navigation Filter Automatic Designer (ENFAD) is a computer program being developed to automate the most difficult tasks in designing embedded software to implement a Kalman filter in a navigation system. The most difficult tasks are selection of error states of the filter and tuning of filter parameters, which are time-consuming trial-and-error tasks that require expertise and rarely yield optimum results. An optimum selection of error states and filter parameters depends on navigation-sensor and vehicle characteristics, and on filter processing time. ENFAD would include a simulation module that would incorporate all possible error states with respect to a given set of vehicle and sensor characteristics. The first of two iterative optimization loops would vary the selection of error states until the best filter performance was achieved in Monte Carlo simulations. For a fixed selection of error states, the second loop would vary the filter parameter values until an optimal performance value was obtained. Design constraints would be satisfied in the optimization loops. Users would supply vehicle and sensor test data that would be used to refine digital models in ENFAD. Filter processing time and filter accuracy would be computed by ENFAD.

  16. Evaluating mixed samples as a source of error in non-invasive genetic studies using microsatellites

    USGS Publications Warehouse

    Roon, David A.; Thomas, M.E.; Kendall, K.C.; Waits, L.P.

    2005-01-01

    The use of noninvasive genetic sampling (NGS) for surveying wild populations is increasing rapidly. Currently, only a limited number of studies have evaluated potential biases associated with NGS. This paper evaluates the potential errors associated with analysing mixed samples drawn from multiple animals. Most NGS studies assume that mixed samples will be identified and removed during the genotyping process. We evaluated this assumption by creating 128 mixed samples of extracted DNA from brown bear (Ursus arctos) hair samples. These mixed samples were genotyped and screened for errors at six microsatellite loci according to protocols consistent with those used in other NGS studies. Five mixed samples produced acceptable genotypes after the first screening. However, all mixed samples produced multiple alleles at one or more loci, amplified as only one of the source samples, or yielded inconsistent electropherograms by the final stage of the error-checking process. These processes could potentially reduce the number of individuals observed in NGS studies, but errors should be conservative within demographic estimates. Researchers should be aware of the potential for mixed samples and carefully design gel analysis criteria and error checking protocols to detect mixed samples.

  17. What Randomized Benchmarking Actually Measures

    DOE PAGES

    Proctor, Timothy; Rudinger, Kenneth; Young, Kevin; ...

    2017-09-28

    Randomized benchmarking (RB) is widely used to measure an error rate of a set of quantum gates, by performing random circuits that would do nothing if the gates were perfect. In the limit of no finite-sampling error, the exponential decay rate of the observable survival probabilities, versus circuit length, yields a single error metric r. For Clifford gates with arbitrary small errors described by process matrices, r was believed to reliably correspond to the mean, over all Clifford gates, of the average gate infidelity between the imperfect gates and their ideal counterparts. We show that this quantity is not a well-defined property of a physical gate set. It depends on the representations used for the imperfect and ideal gates, and the variant typically computed in the literature can differ from r by orders of magnitude. We present new theories of the RB decay that are accurate for all small errors describable by process matrices, and show that the RB decay curve is a simple exponential for all such errors. Here, these theories allow explicit computation of the error rate that RB measures (r), but as far as we can tell it does not correspond to the infidelity of a physically allowed (completely positive) representation of the imperfect gates.
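
    For concreteness, the single error metric r is conventionally extracted by fitting the decay model P(m) = A·f^m + B to survival probabilities and converting f via r = (d − 1)(1 − f)/d; the sketch below does this on synthetic single-qubit data and illustrates the standard procedure, not the new theories of the paper.

```python
# Sketch: fit the standard RB decay A * f**m + B to survival probabilities
# and convert the decay constant f to the RB error metric r (d = 2 for one
# qubit). The survival data below are synthetic.
import numpy as np
from scipy.optimize import curve_fit

def decay(m, A, B, f):
    return A * f**m + B

d = 2
m = np.array([1, 2, 4, 8, 16, 32, 64, 128, 256])
f_true = 0.995
rng = np.random.default_rng(3)
p_surv = 0.5 * f_true**m + 0.5 + rng.normal(0, 0.005, m.size)  # noisy "data"

(A, B, f), _ = curve_fit(decay, m, p_surv, p0=[0.5, 0.5, 0.99])
r = (d - 1) * (1 - f) / d
print(f"fitted f = {f:.5f}, RB error metric r = {r:.2e}")
```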

  18. Error behaviors associated with loss of competency in Alzheimer's disease.

    PubMed

    Marson, D C; Annis, S M; McInturff, B; Bartolucci, A; Harrell, L E

    1999-12-10

    To investigate qualitative behavioral changes associated with declining medical decision-making capacity (competency) in patients with AD. Qualitative measures can yield clinical information about functional changes in neurologic disease not available through quantitative measures. Normal older controls (n = 21) and patients with mild and moderate probable AD (n = 72) were compared using a standardized competency measure and neuropsychological measures. A system of 16 qualitative error scores representing conceptual domains of language, executive dysfunction, affective dysfunction, and compensatory responses was used to analyze errors produced on the competency measure. Patterns of errors were examined across groups. Relationships between error behaviors and competency performance were determined, and neurocognitive correlates of specific error behaviors were identified. AD patients demonstrated more miscomprehension, factual confusion, intrusions, incoherent responses, nonresponsive answers, loss of task, and delegation than controls. Errors in the executive domain (loss of task, nonresponsive answer, and loss of detachment) were key predictors of declining competency performance by AD patients. Neuropsychological analyses in the AD group generally confirmed the conceptual domain assignments of the qualitative scores. Loss of task, nonresponsive answers, and loss of detachment were key behavioral changes associated with declining competency of AD patients and with neurocognitive measures of executive dysfunction. These findings support the growing linkage between executive dysfunction and competency loss.

  19. Combined proportional and additive residual error models in population pharmacokinetic modelling.

    PubMed

    Proost, Johannes H

    2017-11-15

    In pharmacokinetic modelling, a combined proportional and additive residual error model is often preferred over a proportional or additive residual error model. Different approaches have been proposed, but a comparison between approaches is still lacking. The theoretical background of the methods is described. Method VAR assumes that the variance of the residual error is the sum of the statistically independent proportional and additive components; this method can be coded in three ways. Method SD assumes that the standard deviation of the residual error is the sum of the proportional and additive components. Using datasets from the literature and simulations based on these datasets, the methods are compared using NONMEM. The three codings of method VAR yield identical results. Using method SD, the values of the parameters describing residual error are lower than for method VAR, but the values of the structural parameters and their inter-individual variability are hardly affected by the choice of method. Both methods are valid approaches to combined proportional and additive residual error modelling, and selection may be based on the OFV. When the result of an analysis is used for simulation purposes, it is essential that the simulation tool uses the same method as used during the analysis. Copyright © 2017 Elsevier B.V. All rights reserved.
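
    The two parameterizations can be written down directly; the sketch below evaluates the residual standard deviation under method VAR (components added on the variance scale) and method SD (components added on the standard-deviation scale) for a range of predicted concentrations, with illustrative parameter values.

```python
# Sketch of the two parameterizations applied to a predicted concentration f:
# method VAR combines the proportional and additive components on the variance
# scale, method SD combines them on the standard-deviation scale.
import numpy as np

sigma_prop, sigma_add = 0.15, 0.05   # 15% proportional, 0.05 units additive (illustrative)
f = np.linspace(0.1, 10.0, 5)        # model-predicted concentrations

sd_var = np.sqrt((sigma_prop * f) ** 2 + sigma_add ** 2)   # method VAR
sd_sd = sigma_prop * f + sigma_add                         # method SD

for fi, v, s in zip(f, sd_var, sd_sd):
    print(f"pred {fi:5.2f}: residual SD  VAR = {v:.3f}  SD = {s:.3f}")
```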

  20. Extraction of the proton radius from electron-proton scattering data

    DOE PAGES

    Lee, Gabriel; Arrington, John R.; Hill, Richard J.

    2015-07-27

    We perform a new analysis of electron-proton scattering data to determine the proton electric and magnetic radii, enforcing model-independent constraints from form factor analyticity. A wide-ranging study of possible systematic effects is performed. An improved analysis is developed that rebins data taken at identical kinematic settings and avoids a scaling assumption of systematic errors with statistical errors. Employing standard models for radiative corrections, our improved analysis of the 2010 Mainz A1 Collaboration data yields a proton electric radius r_E = 0.895(20) fm and magnetic radius r_M = 0.776(38) fm. A similar analysis applied to world data (excluding Mainz data) implies r_E = 0.916(24) fm and r_M = 0.914(35) fm. The Mainz and world values of the charge radius are consistent, and a simple combination yields a value r_E = 0.904(15) fm that is 4σ larger than the CREMA Collaboration muonic hydrogen determination. The Mainz and world values of the magnetic radius differ by 2.7σ, and a simple average yields r_M = 0.851(26) fm. As a result, the circumstances under which published muonic hydrogen and electron scattering data could be reconciled are discussed, including a possible deficiency in the standard radiative correction model which requires further analysis.
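
    The "simple combination" quoted above is an inverse-variance weighted average; the sketch below reproduces r_E = 0.904(15) fm from the Mainz and world fits and the rough 4σ gap relative to a muonic-hydrogen value taken here as 0.841(1) fm (a rounded stand-in for the CREMA result, assumed for illustration).

```python
# Sketch: inverse-variance combination of the two electric-radius fits and the
# sigma-level discrepancy with an assumed muonic-hydrogen value.
import math

def combine(values_and_errors):
    w = [1.0 / e**2 for _, e in values_and_errors]
    mean = sum(wi * v for wi, (v, _) in zip(w, values_and_errors)) / sum(w)
    return mean, 1.0 / math.sqrt(sum(w))

r_e, dr_e = combine([(0.895, 0.020), (0.916, 0.024)])   # Mainz, world (fm)
mu_h, dmu_h = 0.841, 0.001                               # assumed muonic-H value (fm)
n_sigma = abs(r_e - mu_h) / math.sqrt(dr_e**2 + dmu_h**2)
print(f"combined r_E = {r_e:.3f} +/- {dr_e:.3f} fm, {n_sigma:.1f} sigma from muonic H")
```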

  1. Photon-number-splitting versus cloning attacks in practical implementations of the Bennett-Brassard 1984 protocol for quantum cryptography

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Niederberger, Armand; Scarani, Valerio; Gisin, Nicolas

    2005-04-01

    In practical quantum cryptography, the source sometimes produces multiphoton pulses, thus enabling the eavesdropper Eve to perform the powerful photon-number-splitting (PNS) attack. Recently, it was shown by Curty and Luetkenhaus [Phys. Rev. A 69, 042321 (2004)] that the PNS attack is not always the optimal attack when two photons are present: if errors are present in the correlations Alice-Bob and if Eve cannot modify Bob's detection efficiency, Eve gains a larger amount of information using another attack based on a 2→3 cloning machine. In this work, we extend this analysis to all distances Alice-Bob. We identify a new incoherent 2→3 cloning attack which performs better than those described before. Using it, we confirm that, in the presence of errors, Eve's better strategy uses 2→3 cloning attacks instead of the PNS. However, this improvement is very small for the implementations of the Bennett-Brassard 1984 (BB84) protocol. Thus, the existence of these new attacks is conceptually interesting but basically does not change the value of the security parameters of BB84. The main results are valid both for Poissonian and sub-Poissonian sources.

  2. Random Forests for Global and Regional Crop Yield Predictions.

    PubMed

    Jeong, Jig Han; Resop, Jonathan P; Mueller, Nathaniel D; Fleisher, David H; Yun, Kyungdahm; Butler, Ethan E; Timlin, Dennis J; Shim, Kyo-Moon; Gerber, James S; Reddy, Vangimalla R; Kim, Soo-Hyung

    2016-01-01

    Accurate predictions of crop yield are critical for developing effective agricultural and food policies at the regional and global scales. We evaluated a machine-learning method, Random Forests (RF), for its ability to predict crop yield responses to climate and biophysical variables at global and regional scales in wheat, maize, and potato in comparison with multiple linear regressions (MLR) serving as a benchmark. We used crop yield data from various sources and regions for model training and testing: 1) gridded global wheat grain yield, 2) maize grain yield from US counties over thirty years, and 3) potato tuber and maize silage yield from the northeastern seaboard region. RF was found highly capable of predicting crop yields and outperformed MLR benchmarks in all performance statistics that were compared. For example, the root mean square errors (RMSE) ranged between 6 and 14% of the average observed yield with RF models in all test cases whereas these values ranged from 14% to 49% for MLR models. Our results show that RF is an effective and versatile machine-learning method for crop yield predictions at regional and global scales for its high accuracy and precision, ease of use, and utility in data analysis. RF may result in a loss of accuracy when predicting the extreme ends or responses beyond the boundaries of the training data.
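
    A hedged sketch of the benchmark comparison, with synthetic features standing in for the climate and biophysical predictors used in the study: fit a Random Forest and a multiple linear regression, and report RMSE as a percentage of the mean observed yield.

```python
# Sketch: Random Forest vs. multiple linear regression for yield prediction,
# scored by RMSE as a percentage of mean observed yield. Features and yields
# are synthetic stand-ins for the study's predictors.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(4)
n = 1000
X = rng.normal(size=(n, 5))                       # e.g. temperature, rain, N ...
y = 5.0 + X[:, 0] - 0.5 * X[:, 1] ** 2 + 0.8 * np.tanh(X[:, 2]) \
    + rng.normal(0, 0.3, n)                       # nonlinear yield response

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

def rel_rmse(model):
    pred = model.fit(X_tr, y_tr).predict(X_te)
    return 100 * np.sqrt(np.mean((pred - y_te) ** 2)) / np.mean(y_te)

print(f"RF  RMSE: {rel_rmse(RandomForestRegressor(random_state=0)):.1f}% of mean yield")
print(f"MLR RMSE: {rel_rmse(LinearRegression()):.1f}% of mean yield")
```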

  3. Assessing uncertainties in crop and pasture ensemble model simulations of productivity and N2 O emissions.

    PubMed

    Ehrhardt, Fiona; Soussana, Jean-François; Bellocchi, Gianni; Grace, Peter; McAuliffe, Russel; Recous, Sylvie; Sándor, Renáta; Smith, Pete; Snow, Val; de Antoni Migliorati, Massimiliano; Basso, Bruno; Bhatia, Arti; Brilli, Lorenzo; Doltra, Jordi; Dorich, Christopher D; Doro, Luca; Fitton, Nuala; Giacomini, Sandro J; Grant, Brian; Harrison, Matthew T; Jones, Stephanie K; Kirschbaum, Miko U F; Klumpp, Katja; Laville, Patricia; Léonard, Joël; Liebig, Mark; Lieffering, Mark; Martin, Raphaël; Massad, Raia S; Meier, Elizabeth; Merbold, Lutz; Moore, Andrew D; Myrgiotis, Vasileios; Newton, Paul; Pattey, Elizabeth; Rolinski, Susanne; Sharp, Joanna; Smith, Ward N; Wu, Lianhai; Zhang, Qing

    2018-02-01

    Simulation models are extensively used to predict agricultural productivity and greenhouse gas emissions. However, the uncertainties of (reduced) model ensemble simulations have not been assessed systematically for variables affecting food security and climate change mitigation, within multi-species agricultural contexts. We report an international model comparison and benchmarking exercise, showing the potential of multi-model ensembles to predict productivity and nitrous oxide (N2O) emissions for wheat, maize, rice and temperate grasslands. Using a multi-stage modelling protocol, from blind simulations (stage 1) to partial (stages 2-4) and full calibration (stage 5), 24 process-based biogeochemical models were assessed individually or as an ensemble against long-term experimental data from four temperate grassland and five arable crop rotation sites spanning four continents. Comparisons were performed by reference to the experimental uncertainties of observed yields and N2O emissions. Results showed that across sites and crop/grassland types, 23%-40% of the uncalibrated individual models were within two standard deviations (SD) of observed yields, while 42% (rice) to 96% (grasslands) of the models were within 1 SD of observed N2O emissions. At stage 1, ensembles formed from the three models with the lowest prediction errors predicted both yields and N2O emissions within experimental uncertainties for 44% and 33% of the crop and grassland growth cycles, respectively. Partial model calibration (stages 2-4) markedly reduced prediction errors of the full model ensemble E-median for crop grain yields (from 36% at stage 1 down to 4% on average) and grassland productivity (from 44% to 27%) and to a lesser and more variable extent for N2O emissions. Yield-scaled N2O emissions (N2O emissions divided by crop yields) were ranked accurately by three-model ensembles across crop species and field sites. The potential of using process-based model ensembles to jointly predict productivity and N2O emissions at field scale is discussed. © 2017 John Wiley & Sons Ltd.

  4. New method to estimate paleoprecipitation using fossil amphibians and reptiles and the middle and late Miocene precipitation gradients in Europe

    NASA Astrophysics Data System (ADS)

    Böhme, M.; Ilg, A.; Ossig, A.; Küchenhoff, H.

    2006-06-01

    Existing methods for determining paleoprecipitation are subject to large errors (±350-400 mm or more using mammalian proxies), or are restricted to wet climate systems due to their strong facies dependence (paleobotanical proxies). Here we describe a new paleoprecipitation tool based on an indexing of ecophysiological groups within herpetological communities. In recent communities these indices show a highly significant correlation to annual precipitation (r2 = 0.88), and yield paleoprecipitation estimates with average errors of ±250-280 mm. The approach was validated by comparison with published paleoprecipitation estimates from other methods. The method expands the application of paleoprecipitation tools to dry climate systems and in this way contributes to the establishment of a more comprehensive paleoprecipitation database. This method is applied to two high-resolution time intervals from the European Neogene: the early middle Miocene (early Langhian) and the early late Miocene (early Tortonian). The results indicate that both periods show significant meridional precipitation gradients in Europe, these being stronger in the early Langhian (threefold decrease toward the south) than in the early Tortonian (twofold decrease toward the south). This pattern indicates a strengthening of climatic belts during the middle Miocene climatic optimum due to Southern Hemisphere cooling and an increased contribution of Arctic low-pressure cells to the precipitation from the late Miocene onward due to Northern Hemisphere cooling.

  5. Quantifying Differences in the Impact of Variable Chemistry on Equilibrium Uranium(VI) Adsorption Properties of Aquifer Sediments

    PubMed Central

    2011-01-01

    Uranium adsorption–desorption on sediment samples collected from the Hanford 300-Area, Richland, WA varied extensively over a range of field-relevant chemical conditions, complicating assessment of possible differences in equilibrium adsorption properties. Adsorption equilibrium was achieved in 500–1000 h although dissolved uranium concentrations increased over thousands of hours owing to changes in aqueous chemical composition driven by sediment-water reactions. A nonelectrostatic surface complexation reaction, >SOH + UO₂²⁺ + 2CO₃²⁻ = >SOUO₂(CO₃HCO₃)²⁻, provided the best fit to experimental data for each sediment sample resulting in a range of conditional equilibrium constants (logKc) from 21.49 to 21.76. Potential differences in uranium adsorption properties could be assessed in plots based on the generalized mass-action expressions yielding linear trends displaced vertically by differences in logKc values. Using this approach, logKc values for seven sediment samples were not significantly different. However, a significant difference in adsorption properties between one sediment sample and the fines (<0.063 mm) of another could be demonstrated despite the fines requiring a different reaction stoichiometry. Estimates of logKc uncertainty were improved by capturing all data points within experimental errors. The mass-action expression plots demonstrate that applying models outside the range of conditions used in model calibration greatly increases potential errors. PMID:21923109

  6. Quantifying differences in the impact of variable chemistry on equilibrium Uranium(VI) adsorption properties of aquifer sediments.

    PubMed

    Stoliker, Deborah L; Kent, Douglas B; Zachara, John M

    2011-10-15

    Uranium adsorption-desorption on sediment samples collected from the Hanford 300-Area, Richland, WA varied extensively over a range of field-relevant chemical conditions, complicating assessment of possible differences in equilibrium adsorption properties. Adsorption equilibrium was achieved in 500-1000 h although dissolved uranium concentrations increased over thousands of hours owing to changes in aqueous chemical composition driven by sediment-water reactions. A nonelectrostatic surface complexation reaction, >SOH + UO₂²⁺ + 2CO₃²⁻ = >SOUO₂(CO₃HCO₃)²⁻, provided the best fit to experimental data for each sediment sample resulting in a range of conditional equilibrium constants (logK(c)) from 21.49 to 21.76. Potential differences in uranium adsorption properties could be assessed in plots based on the generalized mass-action expressions yielding linear trends displaced vertically by differences in logK(c) values. Using this approach, logK(c) values for seven sediment samples were not significantly different. However, a significant difference in adsorption properties between one sediment sample and the fines (< 0.063 mm) of another could be demonstrated despite the fines requiring a different reaction stoichiometry. Estimates of logK(c) uncertainty were improved by capturing all data points within experimental errors. The mass-action expression plots demonstrate that applying models outside the range of conditions used in model calibration greatly increases potential errors.

  7. Development of Physics-Based Hurricane Wave Response Functions: Application to Selected Sites on the U.S. Gulf Coast

    NASA Astrophysics Data System (ADS)

    McLaughlin, P. W.; Kaihatu, J. M.; Irish, J. L.; Taylor, N. R.; Slinn, D.

    2013-12-01

    Recent hurricane activity in the Gulf of Mexico has led to a need for accurate, computationally efficient prediction of hurricane damage so that communities can better assess risk of local socio-economic disruption. This study focuses on developing robust, physics-based non-dimensional equations that accurately predict maximum significant wave height at different locations near a given hurricane track. These equations (denoted as Wave Response Functions, or WRFs) were developed from presumed physical dependencies between wave heights and hurricane characteristics and fitted to data from numerical models of waves and surge under hurricane conditions. After curve fitting, constraints which correct for fully developed sea state were used to limit the wind wave growth. When applied to the region near Gulfport, MS, back prediction of maximum significant wave height yielded root mean square errors of 0.22-0.42 m at open-coast stations and 0.07-0.30 m at bay stations when compared to the numerical model data. The WRF method was also applied to Corpus Christi, TX, and Panama City, FL, with similar results. Back-prediction errors will be included in uncertainty evaluations connected to risk calculations using joint probability methods. These methods require thousands of simulations to quantify extreme value statistics, thus requiring the use of reduced methods such as the WRF to represent the relevant physical processes.

  8. Yield estimation of corn based on multitemporal LANDSAT-TM data as input for an agrometeorological model

    NASA Astrophysics Data System (ADS)

    Bach, Heike

    1998-07-01

    In order to test remote sensing data with advanced yield formation models for accuracy and timeliness of yield estimation of corn, a project was conducted for the State Ministry for Rural Environment, Food, and Forestry of Baden-Württemberg (Germany). This project was carried out during the course of the 'Special Yield Estimation', a regular procedure conducted for the European Union, to more accurately estimate agricultural yield. The methodology employed uses field-based plant parameter estimation from atmospherically corrected multitemporal/multispectral LANDSAT-TM data. An agrometeorological plant-production-model is used for yield prediction. Based solely on four LANDSAT-derived estimates (between May and August) and daily meteorological data, the grain yield of corn fields was determined for 1995. The modelled yields were compared with results gathered independently within the Special Yield Estimation for 23 test fields in the upper Rhine valley. The agreement between LANDSAT-based estimates (six weeks before harvest) and Special Yield Estimation (at harvest) shows a relative error of 2.3%. The comparison of the results for single fields shows that six weeks before harvest, the grain yield of corn was estimated with a mean relative accuracy of 13% using satellite information. The presented methodology can be transferred to other crops and geographical regions. For future applications hyperspectral sensors show great potential to further enhance the results for yield prediction with remote sensing.

  9. Understanding the dynamics of correct and error responses in free recall: evidence from externalized free recall.

    PubMed

    Unsworth, Nash; Brewer, Gene A; Spillers, Gregory J

    2010-06-01

    The dynamics of correct and error responses in a variant of delayed free recall were examined in the present study. In the externalized free recall paradigm, participants were presented with lists of words and were instructed to subsequently recall not only the words that they could remember from the most recently presented list, but also any other words that came to mind during the recall period. Externalized free recall is useful for elucidating both sampling and postretrieval editing processes, thereby yielding more accurate estimates of the total number of error responses, which are typically sampled and subsequently edited during free recall. The results indicated that the participants generally sampled correct items early in the recall period and then transitioned to sampling more erroneous responses. Furthermore, the participants generally terminated their search after sampling too many errors. An examination of editing processes suggested that the participants were quite good at identifying errors, but this varied systematically on the basis of a number of factors. The results from the present study are framed in terms of generate-edit models of free recall.

  10. Injecting Errors for Testing Built-In Test Software

    NASA Technical Reports Server (NTRS)

    Gender, Thomas K.; Chow, James

    2010-01-01

    Two algorithms have been conceived to enable automated, thorough testing of built-in test (BIT) software. The first algorithm applies to BIT routines that define pass/fail criteria based on values of data read from such hardware devices as memories, input ports, or registers. This algorithm simulates effects of errors in a device under test by (1) intercepting data from the device and (2) performing AND operations between the data and the data mask specific to the device. This operation yields values not expected by the BIT routine. This algorithm entails very small, permanent instrumentation of the software under test (SUT) for performing the AND operations. The second algorithm applies to BIT programs that provide services to users' application programs via commands or callable interfaces and requires a capability for test-driver software to read and write the memory used in execution of the SUT. This algorithm identifies all SUT code execution addresses where errors are to be injected, then temporarily replaces the code at those addresses with small test code sequences to inject latent severe errors, then determines whether, as desired, the SUT detects the errors and recovers.
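
    A minimal sketch of the first algorithm's masking idea (names and masks are invented for illustration; the real implementation instruments flight software): the value read from a device is ANDed with a device-specific mask so the BIT pass/fail check sees a value it should reject.

```python
# Sketch: inject a simulated device error by AND-masking the value read from
# hardware before the BIT routine's pass/fail check. Names and bit layouts
# are hypothetical.
STATUS_MASK = 0b1111_0111          # clears the "ready" bit the BIT routine expects

def read_status_register():
    return 0b1111_1111             # stand-in for a healthy hardware read

def read_status_register_with_fault():
    # interception point: simulate a stuck-low bit by masking the real read
    return read_status_register() & STATUS_MASK

def bit_check(status):
    return "PASS" if status & 0b0000_1000 else "FAIL"

print(bit_check(read_status_register()))             # PASS
print(bit_check(read_status_register_with_fault()))  # FAIL (error detected)
```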

  11. Improved method for implicit Monte Carlo

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brown, F. B.; Martin, W. R.

    2001-01-01

    The Implicit Monte Carlo (IMC) method has been used for over 30 years to analyze radiative transfer problems, such as those encountered in stellar atmospheres or inertial confinement fusion. Reference [2] provided an exact error analysis of IMC for 0-D problems and demonstrated that IMC can exhibit substantial errors when timesteps are large. These temporal errors are inherent in the method and are in addition to spatial discretization errors and approximations that address nonlinearities (due to variation of physical constants). In Reference [3], IMC and four other methods were analyzed in detail and compared on both theoretical grounds and the accuracy of numerical tests. As discussed there, two alternative schemes for solving the radiative transfer equations, the Carter-Forest (C-F) method and the Ahrens-Larsen (A-L) method, do not exhibit the errors found in IMC; for 0-D, both of these methods are exact for all time, while for 3-D, A-L is exact for all time and C-F is exact within a timestep. These methods can yield substantially superior results to IMC.

  12. A comparison of different statistical methods analyzing hypoglycemia data using bootstrap simulations.

    PubMed

    Jiang, Honghua; Ni, Xiao; Huster, William; Heilmann, Cory

    2015-01-01

    Hypoglycemia has long been recognized as a major barrier to achieving normoglycemia with intensive diabetic therapies. It is a common safety concern for diabetes patients. Therefore, it is important to apply appropriate statistical methods when analyzing hypoglycemia data. Here, we carried out bootstrap simulations to investigate the performance of four commonly used statistical models (Poisson, negative binomial, analysis of covariance [ANCOVA], and rank ANCOVA) based on data from a diabetes clinical trial. The zero-inflated Poisson (ZIP) model and zero-inflated negative binomial (ZINB) model were also evaluated. Simulation results showed that the Poisson model inflated the type I error, while the negative binomial model was overly conservative. However, after adjusting for dispersion, both the Poisson and negative binomial models yielded slightly inflated type I errors that were close to the nominal level, and reasonable power. Reasonable control of the type I error was associated with the ANCOVA model. The rank ANCOVA model was associated with the greatest power and with reasonable control of the type I error. Inflated type I error was observed with the ZIP and ZINB models.
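
    A hedged sketch of the simulation idea using statsmodels rather than the trial data: two arms with identical overdispersed event rates are generated repeatedly, Poisson and negative binomial GLMs are fitted, and the fraction of runs declaring a "significant" treatment effect estimates the type I error. Parameters are illustrative.

```python
# Sketch: estimate type I error of Poisson and negative binomial GLMs on
# simulated two-arm hypoglycemia counts with no true treatment effect.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(5)
n_per_arm, n_sim, alpha = 100, 500, 0.05
group = np.r_[np.zeros(n_per_arm), np.ones(n_per_arm)]
X = sm.add_constant(group)

rejections = {"poisson": 0, "negbin": 0}
for _ in range(n_sim):
    # same event rate (mean 4) in both arms, variance > mean (overdispersion)
    y = rng.negative_binomial(n=2, p=2 / (2 + 4.0), size=2 * n_per_arm)
    for name, fam in (("poisson", sm.families.Poisson()),
                      ("negbin", sm.families.NegativeBinomial(alpha=0.5))):
        res = sm.GLM(y, X, family=fam).fit()
        rejections[name] += res.pvalues[1] < alpha

for name, k in rejections.items():
    print(f"{name}: empirical type I error = {k / n_sim:.3f}")
```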

  13. Fault and Error Latency Under Real Workload: an Experimental Study. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Chillarege, Ram

    1986-01-01

    A practical methodology for the study of fault and error latency is demonstrated under a real workload. This is the first study that measures and quantifies the latency under real workload and fills a major gap in the current understanding of workload-failure relationships. The methodology is based on low level data gathered on a VAX 11/780 during the normal workload conditions of the installation. Fault occurrence is simulated on the data, and the error generation and discovery process is reconstructed to determine latency. The analysis proceeds to combine the low level activity data with high level machine performance data to yield a better understanding of the phenomena. A strong relationship exists between latency and workload and that relationship is quantified. The sampling and reconstruction techniques used are also validated. Error latency in the memory where the operating system resides was studied using data on the physical memory access. Fault latency in the paged section of memory was determined using data from physical memory scans. Error latency in the microcontrol store was studied using data on the microcode access and usage.

  14. Corrigendum to "Sinusoidal potential cycling operation of a direct ethanol fuel cell to improving carbon dioxide yields" [J. Power Sources 268 (5 December 2014) 439-442]

    NASA Astrophysics Data System (ADS)

    Majidi, Pasha; Pickup, Peter G.

    2016-09-01

    The authors regret that Equation (5) is incorrect and has resulted in errors in Fig. 4 and the efficiencies stated on p. 442. The corrected equation, figure and text are presented below. In addition, the title should be 'Sinusoidal potential cycling operation of a direct ethanol fuel cell to improve carbon dioxide yields', and the reversible cell potential quoted on p. 441 should be 1.14 V. The authors would like to apologise for any inconvenience caused.

  15. Retrieving the Vertical Structure of the Effective Aerosol Complex Index of Refraction from a Combination of Aerosol in Situ and Remote Sensing Measurements During TARFOX

    NASA Technical Reports Server (NTRS)

    Redemann, J.; Turco, R. P.; Liou, K. N.; Russell, P. B.; Bergstrom, R. W.; Schmid, B.; Livingston, J. M.; Hobbs, P. V.; Hartley, W. S.; Ismail, S.

    2000-01-01

    The largest uncertainty in estimates of the effects of atmospheric aerosols on climate stems from uncertainties in the determination of their microphysical properties, including the aerosol complex index of refraction, which in turn determines their optical properties. A novel technique is used to estimate the aerosol complex index of refraction in distinct vertical layers from a combination of aerosol in situ size distribution and remote sensing measurements during the Tropospheric Aerosol Radiative Forcing Observational Experiment (TARFOX). In particular, aerosol backscatter measurements using the NASA Langley LASE (Lidar Atmospheric Sensing Experiment) instrument and in situ aerosol size distribution data are utilized to derive vertical profiles of the 'effective' aerosol complex index of refraction at 815 nm (i.e., the refractive index that would provide the same backscatter signal in a forward calculation on the basis of the measured in situ particle size distributions for homogeneous, spherical aerosols). A sensitivity study shows that this method yields small errors in the retrieved aerosol refractive indices, provided the errors in the lidar-derived aerosol backscatter are less than 30% and random in nature. Absolute errors in the estimated aerosol refractive indices are generally less than 0.04 for the real part and can be as much as 0.042 for the imaginary part in the case of a 30% error in the lidar-derived aerosol backscatter. The measurements of aerosol optical depth from the NASA Ames Airborne Tracking Sunphotometer (AATS-6) are successfully incorporated into the new technique and help constrain the retrieved aerosol refractive indices. An application of the technique to two TARFOX case studies yields the occurrence of vertical layers of distinct aerosol refractive indices. Values of the estimated complex aerosol refractive index range from 1.33 to 1.45 for the real part and 0.001 to 0.008 for the imaginary part. The methodology devised in this study provides, for the first time, a complete set of vertically resolved aerosol size distribution and refractive index data, yielding the vertical distribution of aerosol optical properties required for the determination of aerosol-induced radiative flux changes.

  16. Retrieving the Vertical Structure of the Effective Aerosol Complex Index of Refraction from a Combination of Aerosol in Situ and Remote Sensing Measurements During TARFOX

    NASA Technical Reports Server (NTRS)

    Redemann, J.; Turco, R. P.; Liou, K. N.; Russell, P. B.; Bergstrom, R. W.; Schmid, B.; Livingston, J. M.; Hobbs, P. V.; Hartley, W. S.; Ismail, S.; hide

    2000-01-01

    The largest uncertainty in estimates of the effects of atmospheric aerosols on climate stems from uncertainties in the determination of their microphysical properties, including the aerosol complex index of refraction, which in turn determines their optical properties. A novel technique is used to estimate the aerosol complex index of refraction in distinct vertical layers from a combination of aerosol in situ size distribution and remote sensing measurements during the Tropospheric Aerosol Radiative Forcing Observational Experiment (TARFOX). In particular, aerosol backscatter measurements using the NASA Langley LASE (Lidar Atmospheric Sensing Experiment) instrument and in situ aerosol size distribution data are utilized to derive vertical profiles of the "effective" aerosol complex index of refraction at 815 nm (i.e., the refractive index that would provide the same backscatter signal in a forward calculation on the basis of the measured in situ particle size distributions for homogeneous, spherical aerosols). A sensitivity study shows that this method yields small errors in the retrieved aerosol refractive indices, provided the errors in the lidar-derived aerosol backscatter are less than 30% and random in nature. Absolute errors in the estimated aerosol refractive indices are generally less than 0.04 for the real part and can be as much as 0.042 for the imaginary part in the case of a 30% error in the lidar-derived aerosol backscatter. The measurements of aerosol optical depth from the NASA Ames Airborne Tracking Sunphotometer (AATS-6) are successfully incorporated into the new technique and help constrain the retrieved aerosol refractive indices. An application of the technique to two TARFOX case studies yields the occurrence of vertical layers of distinct aerosol refractive indices. Values of the estimated complex aerosol refractive index range from 1.33 to 1.45 for the real part and 0.001 to 0.008 for the imaginary part. The methodology devised in this study provides, for the first time, a complete set of vertically resolved aerosol size distribution and refractive index data, yielding the vertical distribution of aerosol optical properties required for the determination of aerosol-induced radiative flux changes.

  17. Accuracy assessment in the Large Area Crop Inventory Experiment

    NASA Technical Reports Server (NTRS)

    Houston, A. G.; Pitts, D. E.; Feiveson, A. H.; Badhwar, G.; Ferguson, M.; Hsu, E.; Potter, J.; Chhikara, R.; Rader, M.; Ahlers, C.

    1979-01-01

    The Accuracy Assessment System (AAS) of the Large Area Crop Inventory Experiment (LACIE) was responsible for determining the accuracy and reliability of LACIE estimates of wheat production, area, and yield, made at regular intervals throughout the crop season, and for investigating the various LACIE error sources, quantifying these errors, and relating them to their causes. Some results of using the AAS during the three years of LACIE are reviewed. As the program culminated, AAS was able not only to meet the goal of obtaining accurate statistical estimates of sampling and classification accuracy, but also the goal of evaluating component labeling errors. Furthermore, the ground-truth data processing matured from collecting data for one crop (small grains) to collecting, quality-checking, and archiving data for all crops in a LACIE small segment.

  18. Importance of Geosat orbit and tidal errors in the estimation of large-scale Indian Ocean variations

    NASA Technical Reports Server (NTRS)

    Perigaud, Claire; Zlotnicki, Victor

    1992-01-01

    To improve the accuracy of estimates of large-scale meridional sea-level variations, Geosat ERM data on the Indian Ocean for a 26-month period were processed using two different techniques of orbit error reduction. The first technique removes an along-track polynomial of degree 1 over about 5000 km, and the second removes an along-track once-per-revolution sine wave over about 40,000 km. Results obtained show that the polynomial technique produces stronger attenuation of both the tidal error and the large-scale oceanic signal. After filtering, the residual difference between the two methods represents 44 percent of the total variance and 23 percent of the annual variance. The sine-wave method yields a larger estimate of annual and interannual meridional variations.
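
    To make the contrast between the two orbit-error reduction techniques concrete, the following minimal sketch (synthetic data only, not the Geosat processing code; wavelengths, amplitudes and noise levels are invented) fits and removes a degree-1 polynomial per ~5000 km arc versus a once-per-revolution sine wave over the full ~40,000 km revolution, and shows how much of a large-scale ocean signal each approach preserves.

```python
# Synthetic comparison of the two along-track orbit-error reduction techniques.
import numpy as np

rng = np.random.default_rng(0)
rev_km = 40000.0
s = np.linspace(0.0, rev_km, 2000, endpoint=False)        # along-track distance over one revolution (km)

ocean = 0.10 * np.sin(2 * np.pi * s / 8000.0)              # large-scale ocean signal (m), invented
orbit_err = 0.25 * np.sin(2 * np.pi * s / rev_km + 0.3)    # once-per-revolution orbit error (m), invented
h = ocean + orbit_err + 0.02 * rng.standard_normal(s.size)

def remove_arc_polynomials(s, h, arc_km=5000.0, degree=1):
    """Technique 1: fit and remove a degree-1 polynomial independently over each ~5000 km arc."""
    out = np.empty_like(h)
    for lo in np.arange(0.0, s[-1], arc_km):
        m = (s >= lo) & (s < lo + arc_km)
        out[m] = h[m] - np.polyval(np.polyfit(s[m], h[m], degree), s[m])
    return out

def remove_once_per_rev(s, h, rev_km):
    """Technique 2: fit and remove a once-per-revolution sine wave (a*sin + b*cos + c)."""
    w = 2 * np.pi / rev_km
    A = np.column_stack([np.sin(w * s), np.cos(w * s), np.ones_like(s)])
    coef, *_ = np.linalg.lstsq(A, h, rcond=None)
    return h - A @ coef

for name, resid in [("arc polynomials", remove_arc_polynomials(s, h)),
                    ("once-per-rev sine", remove_once_per_rev(s, h, rev_km))]:
    # The arc polynomials absorb part of the ocean signal along with the orbit error.
    print(f"{name:>18s}: rms deviation from true ocean signal = {np.std(resid - ocean):.3f} m")
```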

  19. Avoiding common pitfalls in qualitative data collection and transcription.

    PubMed

    Easton, K L; McComish, J F; Greenberg, R

    2000-09-01

    The subjective nature of qualitative research necessitates scrupulous scientific methods to ensure valid results. Although qualitative methods such as grounded theory, phenomenology, and ethnography yield rich data, consumers of research need to be able to trust the findings reported in such studies. Researchers are responsible for establishing the trustworthiness of qualitative research through a variety of ways. Specific challenges faced in the field can seriously threaten the dependability of the data. However, by minimizing potential errors that can occur when doing fieldwork, researchers can increase the trustworthiness of the study. The purpose of this article is to present three of the pitfalls that can occur in qualitative research during data collection and transcription: equipment failure, environmental hazards, and transcription errors. Specific strategies to minimize the risk for avoidable errors will be discussed.

  20. Measurement of solar radius changes

    NASA Technical Reports Server (NTRS)

    Labonte, B. J.; Howard, R.

    1981-01-01

    Results of daily photometric measurements of the solar radius from Mt. Wilson over the past seven years are reported. Reduction of the full disk magnetograms yields a formal error of 0.1 arcsec in the boustrophedonic scans in the 5250.2 A FeI line. 150 scan lines comprise each observation; 1,412 observations were made from 1974-1981. Measurement procedures, determination of the scattered light of the optics and the atmosphere, and error calculations are described, noting that days of poor atmospheric visibility are omitted from the data. The horizontal diameter of the sun remains visually fixed while the vertical component changes due to atmospheric refraction; errors due to thermal effects, telescope aberrations, and instrument calibration are discussed, and results, within instrument accuracy, indicate no change in the solar radius over the last seven years.

  1. The DiskMass Survey. II. Error Budget

    NASA Astrophysics Data System (ADS)

    Bershady, Matthew A.; Verheijen, Marc A. W.; Westfall, Kyle B.; Andersen, David R.; Swaters, Rob A.; Martinsson, Thomas

    2010-06-01

    We present a performance analysis of the DiskMass Survey. The survey uses collisionless tracers in the form of disk stars to measure the surface density of spiral disks, to provide an absolute calibration of the stellar mass-to-light ratio (Υ_{*}), and to yield robust estimates of the dark-matter halo density profile in the inner regions of galaxies. We find that a disk inclination range of 25°-35° is optimal for our measurements, consistent with our survey design to select nearly face-on galaxies. Uncertainties in disk scale heights are significant, but can be estimated from radial scale lengths to 25% now, and more precisely in the future. We detail the spectroscopic analysis used to derive line-of-sight velocity dispersions, precise at low surface brightness and accurate in the presence of composite stellar populations. Our methods take full advantage of large-grasp integral-field spectroscopy and an extensive library of observed stars. We show that the baryon-to-total mass fraction ({F}_bar) is not a well-defined observational quantity because it is coupled to the halo mass model. This remains true even when the disk mass is known and spatially extended rotation curves are available. In contrast, the fraction of the rotation speed supplied by the disk at 2.2 scale lengths (disk maximality) is a robust observational indicator of the baryonic disk contribution to the potential. We construct the error budget for the key quantities: dynamical disk mass surface density (Σdyn), disk stellar mass-to-light ratio (Υ^disk_{*}), and disk maximality ({F}_{*,max}^disk≡ V^disk_{*,max}/ V_c). Random and systematic errors in these quantities for individual galaxies will be ~25%, while survey precision for sample quartiles is reduced to 10%, largely devoid of systematic errors outside of distance uncertainties.

  2. Numerical analysis of the pressure drop across highly-eccentric coronary stenoses: application to the calculation of the fractional flow reserve.

    PubMed

    Agujetas, R; González-Fernández, M R; Nogales-Asensio, J M; Montanero, J M

    2018-05-30

    Fractional flow reserve (FFR) is the gold-standard assessment of the hemodynamic significance of coronary stenoses. However, it requires the catheterization of the coronary artery to determine the pressure waveforms proximal and distal to the stenosis. In contrast, computational fluid dynamics enables the calculation of the FFR value from relatively non-invasive computed tomography angiography (CTA). We analyze the flow across idealized highly-eccentric coronary stenoses by solving the Navier-Stokes equations. We examine the influence of several aspects (approximations) of the simulation method on the calculation of the FFR value. We study the effects on the FFR value of errors made in the segmentation of clinical images. For this purpose, we compare the FFR value for the nominal geometry with that calculated for other shapes that slightly deviate from that geometry. This analysis is conducted for a range of stenosis severities and different inlet velocity and pressure waveforms. The errors made in assuming a uniform velocity profile in front of the stenosis, as well as those due to the Newtonian and laminar approximations, are negligible for stenosis severities leading to FFR values around the threshold 0.8. The limited resolution of the stenosis geometry reconstruction is the major source of error when predicting the FFR value. Both systematic errors in the contour detection of just 1-pixel size in the CTA images and a low-quality representation of the stenosis surface (coarse faceted geometry) may yield wrong outcomes of the FFR assessment for an important set of eccentric stenoses. In contrast, the spatial resolution of images acquired with optical coherence tomography may be sufficient to ensure accurate predictions for the FFR value.

  3. Performance and structure of single-mode bosonic codes

    NASA Astrophysics Data System (ADS)

    Albert, Victor V.; Noh, Kyungjoo; Duivenvoorden, Kasper; Young, Dylan J.; Brierley, R. T.; Reinhold, Philip; Vuillot, Christophe; Li, Linshu; Shen, Chao; Girvin, S. M.; Terhal, Barbara M.; Jiang, Liang

    2018-03-01

    The early Gottesman, Kitaev, and Preskill (GKP) proposal for encoding a qubit in an oscillator has recently been followed by cat- and binomial-code proposals. Numerically optimized codes have also been proposed, and we introduce codes of this type here. These codes have yet to be compared using the same error model; we provide such a comparison by determining the entanglement fidelity of all codes with respect to the bosonic pure-loss channel (i.e., photon loss) after the optimal recovery operation. We then compare achievable communication rates of the combined encoding-error-recovery channel by calculating the channel's hashing bound for each code. Cat and binomial codes perform similarly, with binomial codes outperforming cat codes at small loss rates. Despite not being designed to protect against the pure-loss channel, GKP codes significantly outperform all other codes for most values of the loss rate. We show that the performance of GKP and some binomial codes increases monotonically with increasing average photon number of the codes. In order to corroborate our numerical evidence of the cat-binomial-GKP order of performance occurring at small loss rates, we analytically evaluate the quantum error-correction conditions of those codes. For GKP codes, we find an essential singularity in the entanglement fidelity in the limit of vanishing loss rate. In addition to comparing the codes, we draw parallels between binomial codes and discrete-variable systems. First, we characterize one- and two-mode binomial as well as multiqubit permutation-invariant codes in terms of spin-coherent states. Such a characterization allows us to introduce check operators and error-correction procedures for binomial codes. Second, we introduce a generalization of spin-coherent states, extending our characterization to qudit binomial codes and yielding a multiqudit code.

  4. Working memory load impairs the evaluation of behavioral errors in the medial frontal cortex.

    PubMed

    Maier, Martin E; Steinhauser, Marco

    2017-10-01

    Early error monitoring in the medial frontal cortex enables error detection and the evaluation of error significance, which helps prioritize adaptive control. This ability has been assumed to be independent from central capacity, a limited pool of resources assumed to be involved in cognitive control. The present study investigated whether error evaluation depends on central capacity by measuring the error-related negativity (Ne/ERN) in a flanker paradigm while working memory load was varied on two levels. We used a four-choice flanker paradigm in which participants had to classify targets while ignoring flankers. Errors could be due to responding either to the flankers (flanker errors) or to none of the stimulus elements (nonflanker errors). With low load, the Ne/ERN was larger for flanker errors than for nonflanker errors-an effect that has previously been interpreted as reflecting differential significance of these error types. With high load, no such effect of error type on the Ne/ERN was observable. Our findings suggest that working memory load does not impair the generation of an Ne/ERN per se but rather impairs the evaluation of error significance. They demonstrate that error monitoring is composed of capacity-dependent and capacity-independent mechanisms. © 2017 Society for Psychophysiological Research.

  5. Accuracy of measurement in electrically evoked compound action potentials.

    PubMed

    Hey, Matthias; Müller-Deile, Joachim

    2015-01-15

    Electrically evoked compound action potentials (ECAP) in cochlear implant (CI) patients are characterized by the amplitude of the N1P1 complex. The measurement of evoked potentials yields a combination of the measured signal with various noise components but for ECAP procedures performed in the clinical routine, only the averaged curve is accessible. To date no detailed analysis of error dimension has been published. The aim of this study was to determine the error of the N1P1 amplitude and to determine the factors that impact the outcome. Measurements were performed on 32 CI patients with either CI24RE (CA) or CI512 implants using the Software Custom Sound EP (Cochlear). N1P1 error approximation of non-averaged raw data consisting of recorded single-sweeps was compared to methods of error approximation based on mean curves. The error approximation of the N1P1 amplitude using averaged data showed comparable results to single-point error estimation. The error of the N1P1 amplitude depends on the number of averaging steps and amplification; in contrast, the error of the N1P1 amplitude is not dependent on the stimulus intensity. Single-point error showed smaller N1P1 error and better coincidence with 1/√(N) function (N is the number of measured sweeps) compared to the known maximum-minimum criterion. Evaluation of N1P1 amplitude should be accompanied by indication of its error. The retrospective approximation of this measurement error from the averaged data available in clinically used software is possible and best done utilizing the D-trace in forward masking artefact reduction mode (no stimulation applied and recording contains only the switch-on-artefact). Copyright © 2014 Elsevier B.V. All rights reserved.
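
    The 1/√(N) behaviour referred to above can be illustrated with a short simulation. The sketch below (a toy evoked-potential template with invented amplitude and noise figures, not Custom Sound EP data) averages N noisy sweeps, estimates a peak-to-peak amplitude, and shows its empirical error shrinking roughly as 1/√(N).

```python
# Toy illustration of how the error of an averaged evoked-potential amplitude
# scales roughly as 1/sqrt(N) with the number of averaged sweeps N.
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0, 1.6e-3, 160)                               # 1.6 ms recording window (s)
template = 100e-6 * np.exp(-((t - 0.4e-3) / 0.15e-3) ** 2)    # ~100 uV N1P1-like deflection (invented)
noise_sd = 50e-6                                              # single-sweep noise (V), invented

def averaged_amplitude_error(n_sweeps, n_trials=500):
    """Empirical SD of a crude peak-to-peak amplitude estimated from n_sweeps-averaged data."""
    peaks = []
    for _ in range(n_trials):
        sweeps = template + noise_sd * rng.standard_normal((n_sweeps, t.size))
        avg = sweeps.mean(axis=0)
        peaks.append(avg.max() - avg.min())
    return np.std(peaks)

for n in (25, 50, 100, 200):
    print(f"N={n:4d}  amplitude SD ~ {averaged_amplitude_error(n)*1e6:6.2f} uV  "
          f"(1/sqrt(N) scale: {noise_sd/np.sqrt(n)*1e6:6.2f} uV)")
```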

  6. Planetary Transmission Diagnostics

    NASA Technical Reports Server (NTRS)

    Lewicki, David G. (Technical Monitor); Samuel, Paul D.; Conroy, Joseph K.; Pines, Darryll J.

    2004-01-01

    This report presents a methodology for detecting and diagnosing gear faults in the planetary stage of a helicopter transmission. This diagnostic technique is based on the constrained adaptive lifting algorithm. The lifting scheme, developed by Wim Sweldens of Bell Labs, is a time domain, prediction-error realization of the wavelet transform that allows for greater flexibility in the construction of wavelet bases. Classic lifting analyzes a given signal using wavelets derived from a single fundamental basis function. A number of researchers have proposed techniques for adding adaptivity to the lifting scheme, allowing the transform to choose from a set of fundamental bases the basis that best fits the signal. This characteristic is desirable for gear diagnostics as it allows the technique to tailor itself to a specific transmission by selecting a set of wavelets that best represent vibration signals obtained while the gearbox is operating under healthy-state conditions. However, constraints on certain basis characteristics are necessary to enhance the detection of local wave-form changes caused by certain types of gear damage. The proposed methodology analyzes individual tooth-mesh waveforms from a healthy-state gearbox vibration signal that was generated using the vibration separation (synchronous signal-averaging) algorithm. Each waveform is separated into analysis domains using zeros of its slope and curvature. The bases selected in each analysis domain are chosen to minimize the prediction error, and constrained to have the same-sign local slope and curvature as the original signal. The resulting set of bases is used to analyze future-state vibration signals and the lifting prediction error is inspected. The constraints allow the transform to effectively adapt to global amplitude changes, yielding small prediction errors. However, local wave-form changes associated with certain types of gear damage are poorly adapted, causing a significant change in the prediction error. The constrained adaptive lifting diagnostic algorithm is validated using data collected from the University of Maryland Transmission Test Rig and the results are discussed.
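
    As background for the prediction-error idea, the sketch below implements a single split/predict/update lifting step with the simple Haar predictor; the constrained adaptive basis selection and the vibration-separation preprocessing described in the report are not reproduced here.

```python
# One Haar lifting step: a local waveform change shows up as a large prediction error.
import numpy as np

def haar_lifting_step(x):
    """Split into even/odd samples, predict odd from even, update to keep the running mean."""
    even, odd = x[0::2], x[1::2]
    detail = odd - even            # predict: prediction error of odd samples
    approx = even + detail / 2.0   # update: coarse approximation
    return approx, detail

def inverse_haar_lifting_step(approx, detail):
    even = approx - detail / 2.0
    odd = even + detail
    x = np.empty(even.size + odd.size)
    x[0::2], x[1::2] = even, odd
    return x

signal = np.array([2.0, 2.1, 2.0, 5.0, 2.1, 2.0, 1.9, 2.0])   # a local "fault-like" jump at index 3
approx, detail = haar_lifting_step(signal)
print("prediction error:", detail)                            # large error flags the local change
print("reconstruction ok:", np.allclose(inverse_haar_lifting_step(approx, detail), signal))
```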

  7. Cluster-Continuum Calculations of Hydration Free Energies of Anions and Group 12 Divalent Cations.

    PubMed

    Riccardi, Demian; Guo, Hao-Bo; Parks, Jerry M; Gu, Baohua; Liang, Liyuan; Smith, Jeremy C

    2013-01-08

    Understanding aqueous phase processes involving group 12 metal cations is relevant to both environmental and biological sciences. Here, quantum chemical methods and polarizable continuum models are used to compute the hydration free energies of a series of divalent group 12 metal cations (Zn(2+), Cd(2+), and Hg(2+)) together with Cu(2+) and the anions OH(-), SH(-), Cl(-), and F(-). A cluster-continuum method is employed, in which gas-phase clusters of the ion and explicit solvent molecules are immersed in a dielectric continuum. Two approaches to define the size of the solute-water cluster are compared, in which the number of explicit waters used is either held constant or determined variationally as that of the most favorable hydration free energy. Results obtained with various polarizable continuum models are also presented. Each leg of the relevant thermodynamic cycle is analyzed in detail to determine how different terms contribute to the observed mean signed error (MSE) and the standard deviation of the error (STDEV) between theory and experiment. The use of a constant number of water molecules for each set of ions is found to lead to predicted relative trends that benefit from error cancellation. Overall, the best results are obtained with MP2 and the Solvent Model D polarizable continuum model (SMD), with eight explicit water molecules for anions and 10 for the metal cations, yielding a STDEV of 2.3 kcal mol(-1) and MSE of 0.9 kcal mol(-1) between theoretical and experimental hydration free energies, which range from -72.4 kcal mol(-1) for SH(-) to -505.9 kcal mol(-1) for Cu(2+). Using B3PW91 with DFT-D3 dispersion corrections (B3PW91-D) and SMD yields a STDEV of 3.3 kcal mol(-1) and MSE of 1.6 kcal mol(-1), to which adding MP2 corrections from smaller divalent metal cation water molecule clusters yields very good agreement with the full MP2 results. Using B3PW91-D and SMD, with two explicit water molecules for anions and six for divalent metal cations, also yields reasonable agreement with experimental values, due in part to fortuitous error cancellation associated with the metal cations. Overall, the results indicate that the careful application of quantum chemical cluster-continuum methods provides valuable insight into aqueous ionic processes that depend on both local and long-range electrostatic interactions with the solvent.

  8. The statistical significance of error probability as determined from decoding simulations for long codes

    NASA Technical Reports Server (NTRS)

    Massey, J. L.

    1976-01-01

    The very low error probability obtained with long error-correcting codes results in a very small number of observed errors in simulation studies of practical size and renders the usual confidence interval techniques inapplicable to the observed error probability. A natural extension of the notion of a 'confidence interval' is made and applied to such determinations of error probability by simulation. An example is included to show the surprisingly great significance of as few as two decoding errors in a very large number of decoding trials.
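
    A standard way to attach an exact confidence interval to an error probability estimated from very few observed errors is the Clopper-Pearson construction; the sketch below (illustrative only, not Massey's extended notion of a confidence interval) shows how wide such an interval remains when only two decoding errors are observed in a million trials.

```python
# Exact (Clopper-Pearson) binomial confidence interval for a simulated error probability.
from scipy.stats import beta

def clopper_pearson(k, n, conf=0.95):
    """Two-sided exact confidence interval for a probability, given k errors in n trials."""
    alpha = 1.0 - conf
    lo = 0.0 if k == 0 else beta.ppf(alpha / 2, k, n - k + 1)
    hi = 1.0 if k == n else beta.ppf(1 - alpha / 2, k + 1, n - k)
    return lo, hi

# Two decoding errors in one million trials: the point estimate is 2e-6, but the
# interval shows how much (or how little) those two errors really pin down.
k, n = 2, 1_000_000
lo, hi = clopper_pearson(k, n)
print(f"observed {k}/{n}: p_hat = {k/n:.1e}, 95% CI = [{lo:.1e}, {hi:.1e}]")
```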

  9. Greenhouse gas emissions from fen soils used for forage production in northern Germany

    NASA Astrophysics Data System (ADS)

    Poyda, Arne; Reinsch, Thorsten; Kluß, Christof; Loges, Ralf; Taube, Friedhelm

    2016-09-01

    A large share of peatlands in northwestern Germany is drained for agricultural purposes, thereby emitting high amounts of greenhouse gases (GHGs). In order to quantify the climatic impact of fen soils in dairy farming systems of northern Germany, GHG exchange and forage yield were determined on four experimental sites which differed in terms of management and drainage intensity: (a) rewetted and unutilized grassland (UG), (b) intensive and wet grassland (GW), (c) intensive and moist grassland (GM) and (d) arable forage cropping (AR). Net ecosystem exchange (NEE) of CO2 and fluxes of CH4 and N2O were measured using closed manual chambers. CH4 fluxes were significantly affected by groundwater level (GWL) and soil temperature, whereas N2O fluxes showed a significant relation to the amount of nitrate in top soil. Annual balances of all three gases, as well as the global warming potential (GWP), were significantly correlated to mean annual GWL. A 2-year mean GWP, combined from CO2-C eq. of NEE, CH4 and N2O emissions, as well as C input (slurry) and C output (harvest), was 3.8, 11.7, 17.7 and 17.3 Mg CO2-C eq. ha-1 a-1 for sites UG, GW, GM and AR, respectively (standard error (SE) 2.8, 1.2, 1.8, 2.6). Yield-related emissions for the three agricultural sites were 201, 248 and 269 kg CO2-C eq. (GJ net energy lactation; NEL)-1 for sites GW, GM and AR, respectively (SE 17, 9, 19). The carbon footprint of agricultural commodities grown on fen soils depended on long-term drainage intensity rather than type of management, but management and climate strongly influenced interannual on-site variability. However, arable forage production revealed a high uncertainty of yield and therefore was an unsuitable land use option. Lowest yield-related GHG emissions were achieved by a three-cut system of productive grassland swards in combination with a high GWL (long-term mean ≤ 20 cm below the surface).

  10. Crop yield monitoring in the Sahel using root zone soil moisture anomalies derived from SMOS soil moisture data assimilation

    NASA Astrophysics Data System (ADS)

    Gibon, François; Pellarin, Thierry; Alhassane, Agali; Traoré, Seydou; Baron, Christian

    2017-04-01

    West Africa is greatly vulnerable, especially in terms of food sustainability. Agriculture is mainly rainfed, so the high variability of the rainy season strongly impacts crop production, which is driven by soil water availability. To monitor this water availability, classical methods are based on daily precipitation measurements. However, the rain gauge network in Africa suffers from poor density (roughly one gauge per 10,000 km2). Alternatively, real-time satellite-derived precipitation can be used, but it is known to suffer from large uncertainties, which produce significant errors in crop yield estimates. The present study proposes to use root-zone soil moisture rather than precipitation to evaluate crop yield variations. First, a local analysis of the spatiotemporal impact of water deficit on millet crop production in Niger was carried out using in-situ soil moisture measurements (AMMA-CATCH/OZCAR (French Critical Zone exploration network)) and an in-situ millet yield survey. Crop yield measurements were obtained for 10 villages located in the Niamey region from 2005 to 2012. The mean production (over 8 years) is 690 kg/ha and ranges from 381 to 872 kg/ha during this period. Various statistical relationships based on soil moisture estimates were tested, and the most promising one (R>0.9) linked the 30-cm soil moisture anomalies from mid-August to mid-September (grain filling period) to the crop yield anomalies. Based on this local study, it was proposed to derive regional statistical relationships using 30-cm soil moisture maps over West Africa. The selected approach was to use a simple hydrological model, the Antecedent Precipitation Index (API), forced by real-time satellite-based precipitation (CMORPH, PERSIANN, TRMM3B42). To reduce uncertainties related to the quality of real-time rainfall satellite products, SMOS soil moisture measurements were assimilated into the API model through a particle filter algorithm. The resulting soil moisture anomalies were then compared to 17 years of crop yield estimates from the FAOSTAT database (1998-2014). Results showed that the 30-cm soil moisture anomalies explained 89% of the crop yield variation in Niger, 72% in Burkina Faso, 82% in Mali and 84% in Senegal.
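
    The core of the regional approach is the API recursion and the seasonal soil-moisture anomaly. The sketch below (hypothetical recession coefficient and synthetic rainfall; the SMOS assimilation via the particle filter is omitted) illustrates how a 30-cm-like soil-moisture proxy and its standardized anomaly over the grain-filling window could be computed.

```python
# Antecedent Precipitation Index (API) recursion and a grain-filling-window anomaly, on synthetic rainfall.
import numpy as np

def api_series(precip_mm, k=0.95, api0=0.0):
    """API_t = k * API_{t-1} + P_t, a crude proxy for root-zone soil moisture (k is hypothetical)."""
    api = np.empty_like(precip_mm, dtype=float)
    prev = api0
    for i, p in enumerate(precip_mm):
        prev = k * prev + p
        api[i] = prev
    return api

rng = np.random.default_rng(2)
years = 17
daily_rain = rng.gamma(shape=0.3, scale=8.0, size=(years, 365))   # synthetic Sahel-like daily rainfall (mm)
api = np.array([api_series(daily_rain[y]) for y in range(years)])

window = slice(226, 258)                        # roughly mid-August to mid-September (DOY 227-258)
seasonal_mean = api[:, window].mean(axis=1)     # one value per year
anomaly = (seasonal_mean - seasonal_mean.mean()) / seasonal_mean.std()
print("standardized grain-filling soil-moisture anomalies:", np.round(anomaly, 2))
```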

  11. Evaluation of the geometric stability and the accuracy potential of digital cameras — Comparing mechanical stabilisation versus parameterisation

    NASA Astrophysics Data System (ADS)

    Rieke-Zapp, D.; Tecklenburg, W.; Peipe, J.; Hastedt, H.; Haig, Claudia

    Recent tests on the geometric stability of several digital cameras that were not designed for photogrammetric applications have shown that the accomplished accuracies in object space are either limited or that the accuracy potential is not exploited to the fullest extent. A total of 72 calibrations were calculated with four different software products for eleven digital camera models with different hardware setups, some with mechanical fixation of one or more parts. The calibration procedure was chosen in accord to a German guideline for evaluation of optical 3D measuring systems [VDI/VDE, VDI/VDE 2634 Part 1, 2002. Optical 3D Measuring Systems-Imaging Systems with Point-by-point Probing. Beuth Verlag, Berlin]. All images were taken with ringflashes which was considered a standard method for close-range photogrammetry. In cases where the flash was mounted to the lens, the force exerted on the lens tube and the camera mount greatly reduced the accomplished accuracy. Mounting the ringflash to the camera instead resulted in a large improvement of accuracy in object space. For standard calibration best accuracies in object space were accomplished with a Canon EOS 5D and a 35 mm Canon lens where the focusing tube was fixed with epoxy (47 μm maximum absolute length measurement error in object space). The fixation of the Canon lens was fairly easy and inexpensive resulting in a sevenfold increase in accuracy compared with the same lens type without modification. A similar accuracy was accomplished with a Nikon D3 when mounting the ringflash to the camera instead of the lens (52 μm maximum absolute length measurement error in object space). Parameterisation of geometric instabilities by introduction of an image variant interior orientation in the calibration process improved results for most cameras. In this case, a modified Alpa 12 WA yielded the best results (29 μm maximum absolute length measurement error in object space). Extending the parameter model with FiBun software to model not only an image variant interior orientation, but also deformations in the sensor domain of the cameras, showed significant improvements only for a small group of cameras. The Nikon D3 camera yielded the best overall accuracy (25 μm maximum absolute length measurement error in object space) with this calibration procedure indicating at the same time the presence of image invariant error in the sensor domain. Overall, calibration results showed that digital cameras can be applied for an accurate photogrammetric survey and that only a little effort was sufficient to greatly improve the accuracy potential of digital cameras.

  12. Development of estimation method for crop yield using MODIS satellite imagery data and process-based model for corn and soybean in US Corn-Belt region

    NASA Astrophysics Data System (ADS)

    Lee, J.; Kang, S.; Jang, K.; Ko, J.; Hong, S.

    2012-12-01

    Crop productivity is associated with food security, and hence several models have been developed to estimate crop yield by combining remote sensing data with carbon cycle processes. In the present study, we attempted to estimate crop GPP and NPP using an algorithm based on the LUE model and a simplified respiration model. The states of Iowa and Illinois were chosen as the study site for estimating crop yield over a 5-year period (2006-2010), as they form the main Corn Belt area in the US. The present study focuses on developing crop-specific parameters for corn and soybean to estimate crop productivity and to map yield using satellite remote sensing data. We utilized 10 km spatial resolution daily meteorological data from WRF to provide cloudy-day meteorological variables, whereas on clear-sky days MODIS-based meteorological data were utilized to estimate daily GPP, NPP, and biomass. County-level statistics on yield, area harvested, and production were used to test model-predicted crop yield. The estimated input meteorological variables from MODIS and WRF showed good agreement with the ground observations from 6 Ameriflux tower sites in 2006. For example, correlation coefficients ranged from 0.93 to 0.98 for Tmin and Tavg, from 0.68 to 0.85 for daytime mean VPD, and from 0.85 to 0.96 for daily shortwave radiation. We developed a county-specific crop conversion coefficient, i.e., the ratio of yield to biomass on DOY 260, and then validated the estimated county-level crop yield against the statistical yield data. The estimated corn and soybean yields at the county level ranged from 671 gm-2 y-1 to 1393 gm-2 y-1 and from 213 gm-2 y-1 to 421 gm-2 y-1, respectively. The county-specific yield estimation mostly showed errors of less than 10%. Furthermore, we estimated crop yields at the state level, which were validated against the statistics data and showed errors of less than 1%. Further analysis of the crop conversion coefficient was conducted for DOY 200 and DOY 280. For DOY 280, crop yield estimation showed better accuracy for soybean at the county level. Although the DOY 200 case resulted in lower accuracy (i.e., a 20% mean bias), it provides a useful tool for early forecasting of crop yield. We improved the spatial accuracy of estimated crop yield at the county level by developing county-specific crop conversion coefficients. Our results indicate that aboveground crop biomass can be estimated successfully with the simple LUE and respiration models combined with MODIS data, and that the county-specific conversion coefficients differ across counties. Hence, applying region-specific conversion coefficients is necessary to estimate crop yield with better accuracy.
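
    The following sketch illustrates the LUE-plus-respiration logic described above with invented parameter values (maximum LUE, temperature and VPD ramps, respiration coefficient, and the yield-to-biomass conversion coefficient are all placeholders, not the calibrated crop-specific values of the study).

```python
# Light-use-efficiency sketch: GPP from scalars and absorbed PAR, NPP from a simple
# respiration term, and yield from a conversion coefficient applied to accumulated biomass.
import numpy as np

def gpp_lue(par, fpar, tmin_c, vpd_pa, lue_max=1.2):
    """Daily GPP (gC m-2 d-1) = LUE_max * f(Tmin) * f(VPD) * fPAR * PAR (ramps are placeholders)."""
    f_tmin = np.clip((tmin_c - (-8.0)) / (10.0 - (-8.0)), 0.0, 1.0)
    f_vpd = np.clip((3500.0 - vpd_pa) / (3500.0 - 650.0), 0.0, 1.0)
    return lue_max * f_tmin * f_vpd * fpar * par

def npp_from_gpp(gpp, tavg_c, maint_coeff=0.011):
    """NPP = GPP minus a very simplified temperature-dependent respiration term."""
    resp = maint_coeff * np.exp(0.07 * tavg_c) * gpp
    return np.maximum(gpp - resp, 0.0)

# One synthetic growing season
days = 150
par = np.full(days, 10.0)                                   # MJ m-2 d-1
fpar = np.clip(np.sin(np.linspace(0, np.pi, days)), 0.05, None)
tmin, tavg = np.full(days, 12.0), np.full(days, 22.0)
vpd = np.full(days, 1200.0)

npp = npp_from_gpp(gpp_lue(par, fpar, tmin, vpd), tavg)
biomass = npp.sum()                                         # gC m-2 accumulated by harvest-time DOY
conversion_coeff = 0.45                                     # hypothetical yield-to-biomass ratio
print(f"biomass ~ {biomass:.0f} gC m-2, estimated yield ~ {conversion_coeff*biomass:.0f} g m-2")
```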

  13. Optimizer convergence and local minima errors and their clinical importance

    NASA Astrophysics Data System (ADS)

    Jeraj, Robert; Wu, Chuan; Mackie, Thomas R.

    2003-09-01

    Two of the errors common in the inverse treatment planning optimization have been investigated. The first error is the optimizer convergence error, which appears because of non-perfect convergence to the global or local solution, usually caused by a non-zero stopping criterion. The second error is the local minima error, which occurs when the objective function is not convex and/or the feasible solution space is not convex. The magnitude of the errors, their relative importance in comparison to other errors as well as their clinical significance in terms of tumour control probability (TCP) and normal tissue complication probability (NTCP) were investigated. Two inherently different optimizers, a stochastic simulated annealing and deterministic gradient method were compared on a clinical example. It was found that for typical optimization the optimizer convergence errors are rather small, especially compared to other convergence errors, e.g., convergence errors due to inaccuracy of the current dose calculation algorithms. This indicates that stopping criteria could often be relaxed leading into optimization speed-ups. The local minima errors were also found to be relatively small and typically in the range of the dose calculation convergence errors. Even for the cases where significantly higher objective function scores were obtained the local minima errors were not significantly higher. Clinical evaluation of the optimizer convergence error showed good correlation between the convergence of the clinical TCP or NTCP measures and convergence of the physical dose distribution. On the other hand, the local minima errors resulted in significantly different TCP or NTCP values (up to a factor of 2) indicating clinical importance of the local minima produced by physical optimization.

  14. Optimizer convergence and local minima errors and their clinical importance.

    PubMed

    Jeraj, Robert; Wu, Chuan; Mackie, Thomas R

    2003-09-07

    Two of the errors common in the inverse treatment planning optimization have been investigated. The first error is the optimizer convergence error, which appears because of non-perfect convergence to the global or local solution, usually caused by a non-zero stopping criterion. The second error is the local minima error, which occurs when the objective function is not convex and/or the feasible solution space is not convex. The magnitude of the errors, their relative importance in comparison to other errors as well as their clinical significance in terms of tumour control probability (TCP) and normal tissue complication probability (NTCP) were investigated. Two inherently different optimizers, a stochastic simulated annealing and deterministic gradient method were compared on a clinical example. It was found that for typical optimization the optimizer convergence errors are rather small, especially compared to other convergence errors, e.g., convergence errors due to inaccuracy of the current dose calculation algorithms. This indicates that stopping criteria could often be relaxed leading into optimization speed-ups. The local minima errors were also found to be relatively small and typically in the range of the dose calculation convergence errors. Even for the cases where significantly higher objective function scores were obtained the local minima errors were not significantly higher. Clinical evaluation of the optimizer convergence error showed good correlation between the convergence of the clinical TCP or NTCP measures and convergence of the physical dose distribution. On the other hand, the local minima errors resulted in significantly different TCP or NTCP values (up to a factor of 2) indicating clinical importance of the local minima produced by physical optimization.

  15. HIGH-FIDELITY RADIO ASTRONOMICAL POLARIMETRY USING A MILLISECOND PULSAR AS A POLARIZED REFERENCE SOURCE

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Van Straten, W., E-mail: vanstraten.willem@gmail.com

    2013-01-15

    A new method of polarimetric calibration is presented in which the instrumental response is derived from regular observations of PSR J0437-4715 based on the assumption that the mean polarized emission from this millisecond pulsar remains constant over time. The technique is applicable to any experiment in which high-fidelity polarimetry is required over long timescales; it is demonstrated by calibrating 7.2 years of high-precision timing observations of PSR J1022+1001 made at the Parkes Observatory. Application of the new technique followed by arrival time estimation using matrix template matching yields post-fit residuals with an uncertainty-weighted standard deviation of 880 ns, two times smaller than that of arrival time residuals obtained via conventional methods of calibration and arrival time estimation. The precision achieved by this experiment yields the first significant measurements of the secular variation of the projected semimajor axis, the precession of periastron, and the Shapiro delay; it also places PSR J1022+1001 among the 10 best pulsars regularly observed as part of the Parkes Pulsar Timing Array (PPTA) project. It is shown that the timing accuracy of a large fraction of the pulsars in the PPTA is currently limited by the systematic timing error due to instrumental polarization artifacts. More importantly, long-term variations of systematic error are correlated between different pulsars, which adversely affects the primary objectives of any pulsar timing array experiment. These limitations may be overcome by adopting the techniques presented in this work, which relax the demand for instrumental polarization purity and thereby have the potential to reduce the development cost of next-generation telescopes such as the Square Kilometre Array.

  16. The Earth Gravitational Observatory (EGO): Nanosat Constellations For Advanced Gravity Mapping

    NASA Astrophysics Data System (ADS)

    Yunck, T.; Saltman, A.; Bettadpur, S. V.; Nerem, R. S.; Abel, J.

    2017-12-01

    The trend to nanosats for space-based remote sensing is transforming system architectures: fleets of "cellular" craft scanning Earth with exceptional precision and economy. GeoOptics Inc has been selected by NASA to develop a vision for that transition with an initial focus on advanced gravity field mapping. Building on our spaceborne GNSS technology we introduce innovations that will improve gravity mapping roughly tenfold over previous missions at a fraction of the cost. The power of EGO is realized in its N-satellite form where all satellites in a cluster receive dual-frequency crosslinks from all other satellites, yielding N(N-1)/2 independent measurements. Twelve "cells" thus yield 66 independent links. Because the cells form a 2D arc with spacings ranging from 200 km to 3,000 km, EGO senses a wider range of gravity wavelengths and offers greater geometrical observing strength. The benefits are two-fold: Improved time resolution enables observation of sub-seasonal processes, as from hydro-meteorological phenomena; improved measurement quality enhances all gravity solutions. For the GRACE mission, key limitations arise from such spacecraft factors as long-term accelerometer error, attitude knowledge and thermal stability, which are largely independent from cell to cell. Data from a dozen cells reduces their impact by 3x, by the "root-n" averaging effect. Multi-cell closures improve on this further. The many closure paths among 12 cells provide strong constraints to correct for observed range changes not compatible with a gravity source, including accelerometer errors in measuring non-conservative forces. Perhaps more significantly from a science standpoint, system-level estimates with data from diverse orbits can attack the many scientifically limiting sources of temporal aliasing.

  17. Influence of the nanoparticles agglomeration state in the quantum-confinement effects: Experimental evidences

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lorite, I., E-mail: lorite@physik.uni-leipzig.de; Division of Superconductivity and Magnetism, Faculty of Physics and Earth Sciences, Linnestrasse 5, D-04103 Leipzig; Romero, J. J.

    2015-03-15

    The agglomeration state facilitates particle-particle interaction, which produces important changes in the phonon confinement effects at the nanoscale. A partial phonon transmission between close nanoparticles yields a lower relaxation of momentum conservation than in a single isolated nanoparticle. This results in a larger red shift and broadening of the Raman modes than expected from Raman quantum confinement effects alone. This particle-particle interaction can lead to errors when Raman responses are used to estimate the size of nanoscaled materials. In this work, different corrections are suggested to overcome this source of error.

  18. Error measure comparison of currently employed dose-modulation schemes for e-beam proximity effect control

    NASA Astrophysics Data System (ADS)

    Peckerar, Martin C.; Marrian, Christie R.

    1995-05-01

    Standard matrix inversion methods of e-beam proximity correction are compared with a variety of pseudoinverse approaches based on gradient descent. It is shown that the gradient descent methods can be modified using 'regularizers' (terms added to the cost function minimized during gradient descent). This modification solves the 'negative dose' problem in a mathematically sound way. Different techniques are contrasted using a weighted error measure approach. It is shown that the regularization approach leads to the highest quality images. In some cases, ignoring negative doses yields results which are worse than employing an uncorrected dose file.
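
    The regularized, non-negativity-constrained formulation discussed above can be sketched as projected gradient descent on a least-squares cost; the proximity matrix below is a toy 1D double-Gaussian kernel, not a calibrated e-beam point-spread function, and the regularizer is a plain Tikhonov term.

```python
# Projected gradient descent for dose modulation: minimize ||D d - t||^2 + lam*||d||^2 with d >= 0,
# so no physically meaningless negative doses appear.
import numpy as np

def proximity_matrix(n, alpha=1.0, beta=6.0, eta=0.3):
    """Toy 1D forward/backscatter kernel, row-normalized (illustrative, not calibrated)."""
    x = np.arange(n)
    K = np.exp(-((x[:, None] - x[None, :]) / alpha) ** 2) \
        + eta * np.exp(-((x[:, None] - x[None, :]) / beta) ** 2)
    return K / K.sum(axis=1, keepdims=True)

def regularized_projected_descent(D, target, lam=1e-2, step=0.5, iters=2000):
    """Gradient descent on the regularized cost, with projection onto the non-negative orthant."""
    d = target.copy()
    for _ in range(iters):
        grad = 2.0 * D.T @ (D @ d - target) + 2.0 * lam * d
        d = np.clip(d - step * grad, 0.0, None)     # projection enforces non-negative dose
    return d

n = 64
D = proximity_matrix(n)
target = np.zeros(n)
target[20:30] = 1.0
target[40:44] = 1.0                                 # desired exposure pattern

dose = regularized_projected_descent(D, target)
print("max residual |D d - t|:", np.abs(D @ dose - target).max())
print("any negative doses?   :", bool((dose < 0).any()))
```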

  19. A simulation test of the effectiveness of several methods for error-checking non-invasive genetic data

    USGS Publications Warehouse

    Roon, David A.; Waits, L.P.; Kendall, K.C.

    2005-01-01

    Non-invasive genetic sampling (NGS) is becoming a popular tool for population estimation. However, multiple NGS studies have demonstrated that polymerase chain reaction (PCR) genotyping errors can bias demographic estimates. These errors can be detected by comprehensive data filters such as the multiple-tubes approach, but this approach is expensive and time consuming as it requires three to eight PCR replicates per locus. Thus, researchers have attempted to correct PCR errors in NGS datasets using non-comprehensive error-checking methods, but these approaches have not been evaluated for reliability. We simulated NGS studies with and without PCR error and 'filtered' datasets using non-comprehensive approaches derived from published studies and calculated mark-recapture estimates using CAPTURE. In the absence of data filtering, simulated error resulted in serious inflations in CAPTURE estimates; some estimates exceeded N by 200% or more. When data filters were used, CAPTURE estimate reliability varied with the per-locus error rate. At a per-locus error rate of 0.01, CAPTURE estimates from filtered data displayed < 5% deviance from error-free estimates. When the per-locus error rate was 0.05 or 0.09, some CAPTURE estimates from filtered data displayed biases in excess of 10%. Biases were positive at high sampling intensities; negative biases were observed at low sampling intensities. We caution researchers against using non-comprehensive data filters in NGS studies, unless they can achieve baseline per-locus error rates below 0.05 and, ideally, near 0.01. However, we suggest that data filters can be combined with careful technique and thoughtful NGS study design to yield accurate demographic information. © 2005 The Zoological Society of London.

  20. Dissociable Genetic Contributions to Error Processing: A Multimodal Neuroimaging Study

    PubMed Central

    Agam, Yigal; Vangel, Mark; Roffman, Joshua L.; Gallagher, Patience J.; Chaponis, Jonathan; Haddad, Stephen; Goff, Donald C.; Greenberg, Jennifer L.; Wilhelm, Sabine; Smoller, Jordan W.; Manoach, Dara S.

    2014-01-01

    Background Neuroimaging studies reliably identify two markers of error commission: the error-related negativity (ERN), an event-related potential, and functional MRI activation of the dorsal anterior cingulate cortex (dACC). While theorized to reflect the same neural process, recent evidence suggests that the ERN arises from the posterior cingulate cortex not the dACC. Here, we tested the hypothesis that these two error markers also have different genetic mediation. Methods We measured both error markers in a sample of 92 comprised of healthy individuals and those with diagnoses of schizophrenia, obsessive-compulsive disorder or autism spectrum disorder. Participants performed the same task during functional MRI and simultaneously acquired magnetoencephalography and electroencephalography. We examined the mediation of the error markers by two single nucleotide polymorphisms: dopamine D4 receptor (DRD4) C-521T (rs1800955), which has been associated with the ERN and methylenetetrahydrofolate reductase (MTHFR) C677T (rs1801133), which has been associated with error-related dACC activation. We then compared the effects of each polymorphism on the two error markers modeled as a bivariate response. Results We replicated our previous report of a posterior cingulate source of the ERN in healthy participants in the schizophrenia and obsessive-compulsive disorder groups. The effect of genotype on error markers did not differ significantly by diagnostic group. DRD4 C-521T allele load had a significant linear effect on ERN amplitude, but not on dACC activation, and this difference was significant. MTHFR C677T allele load had a significant linear effect on dACC activation but not ERN amplitude, but the difference in effects on the two error markers was not significant. Conclusions DRD4 C-521T, but not MTHFR C677T, had a significant differential effect on two canonical error markers. Together with the anatomical dissociation between the ERN and error-related dACC activation, these findings suggest that these error markers have different neural and genetic mediation. PMID:25010186

  1. Energy balance and mass conservation in reduced order models of fluid flows

    NASA Astrophysics Data System (ADS)

    Mohebujjaman, Muhammad; Rebholz, Leo G.; Xie, Xuping; Iliescu, Traian

    2017-10-01

    In this paper, we investigate theoretically and computationally the conservation properties of reduced order models (ROMs) for fluid flows. Specifically, we investigate whether the ROMs satisfy the same (or similar) energy balance and mass conservation as those satisfied by the Navier-Stokes equations. All of our theoretical findings are illustrated and tested in numerical simulations of a 2D flow past a circular cylinder at a Reynolds number Re = 100. First, we investigate the ROM energy balance. We show that using the snapshot average for the centering trajectory (which is a popular treatment of nonhomogeneous boundary conditions in ROMs) yields an incorrect energy balance. Then, we propose a new approach, in which we replace the snapshot average with the Stokes extension. Theoretically, the Stokes extension produces an accurate energy balance. Numerically, the Stokes extension yields more accurate results than the standard snapshot average, especially for longer time intervals. Our second contribution centers around ROM mass conservation. We consider ROMs created using two types of finite elements: the standard Taylor-Hood (TH) element, which satisfies the mass conservation weakly, and the Scott-Vogelius (SV) element, which satisfies the mass conservation pointwise. Theoretically, the error estimates for the SV-ROM are sharper than those for the TH-ROM. Numerically, the SV-ROM yields significantly more accurate results, especially for coarser meshes and longer time intervals.

  2. Are false-positive rates leading to an overestimation of noise-induced hearing loss?

    PubMed

    Schlauch, Robert S; Carney, Edward

    2011-04-01

    To estimate false-positive rates for rules proposed to identify early noise-induced hearing loss (NIHL) using the presence of notches in audiograms. Audiograms collected from school-age children in a national survey of health and nutrition (the Third National Health and Nutrition Examination Survey [NHANES III]; National Center for Health Statistics, 1994) were examined using published rules for identifying noise notches at various pass-fail criteria. These results were compared with computer-simulated "flat" audiograms. The proportion of these identified as having a noise notch is an estimate of the false-positive rate for a particular rule. Audiograms from the NHANES III for children 6-11 years of age yielded notched audiograms at rates consistent with simulations, suggesting that this group does not have significant NIHL. Further, pass-fail criteria for rules suggested by expert clinicians, applied to NHANES III audiometric data, yielded unacceptably high false-positive rates. Computer simulations provide an effective method for estimating false-positive rates for protocols used to identify notched audiograms. Audiometric precision could possibly be improved by (a) eliminating systematic calibration errors, including a possible problem with reference levels for TDH-style earphones; (b) repeating and averaging threshold measurements; and (c) using earphones that yield lower variability for 6.0 and 8.0 kHz, two frequencies critical for identifying noise notches.
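
    The simulation idea is easy to reproduce in outline: generate flat audiograms whose only structure is test-retest error rounded to 5 dB steps, apply a notch rule, and count the flagged fraction. The rule coded below (4 kHz at least 15 dB worse than both 1 kHz and 8 kHz) is a generic example for illustration, not one of the specific published rules evaluated in the study.

```python
# Monte Carlo estimate of a notch rule's false-positive rate on simulated "flat" audiograms.
import numpy as np

rng = np.random.default_rng(3)
freqs = [0.5, 1.0, 2.0, 3.0, 4.0, 6.0, 8.0]      # kHz
n_ears = 100_000
true_level = 5.0                                  # flat 5 dB HL hearing (invented)
measurement_sd = 5.0                              # per-frequency test-retest variability (dB, invented)

thresholds = true_level + measurement_sd * rng.standard_normal((n_ears, len(freqs)))
thresholds = 5.0 * np.round(thresholds / 5.0)     # audiometers report thresholds in 5 dB steps

i1, i4, i8 = freqs.index(1.0), freqs.index(4.0), freqs.index(8.0)
notch = (thresholds[:, i4] - thresholds[:, i1] >= 15) & \
        (thresholds[:, i4] - thresholds[:, i8] >= 15)
print(f"false-positive rate of this notch rule on flat audiograms: {notch.mean():.3%}")
```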

  3. Final Progress Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Josef Michl

    2011-10-31

    In this project we have established guidelines for the design of organic chromophores suitable for producing high triplet yields via singlet fission. We have proven their utility by identifying a chromophore of a structural class that had never been examined for singlet fission before, 1,3-diphenylisobenzofuran, and demonstrating in two independent ways that a thin layer of this material produces a triplet yield of 200% within experimental error. We have also designed a second chromophore of a very different type, again of a structural class that had not been examined for singlet fission before, and found that in a thin layer it produces a 70% triplet yield. Finally, we have enhanced the theoretical understanding of the quantum mechanical nature of the singlet fission process.

  4. Experimental study of fusion neutron and proton yields produced by petawatt-laser-irradiated D₂-³He or CD₄-³He clustering gases.

    PubMed

    Bang, W; Barbui, M; Bonasera, A; Quevedo, H J; Dyer, G; Bernstein, A C; Hagel, K; Schmidt, K; Gaul, E; Donovan, M E; Consoli, F; De Angelis, R; Andreoli, P; Barbarino, M; Kimura, S; Mazzocco, M; Natowitz, J B; Ditmire, T

    2013-09-01

    We report on experiments in which the Texas Petawatt laser irradiated a mixture of deuterium or deuterated methane clusters and helium-3 gas, generating three types of nuclear fusion reactions: D(d,^{3}He)n, D(d,t)p, and ^{3}He(d,p)^{4}He. We measured the yields of fusion neutrons and protons from these reactions and found them to agree with yields based on a simple cylindrical plasma model using known cross sections and measured plasma parameters. Within our measurement errors, the fusion products were isotropically distributed. Plasma temperatures, important for the cross sections, were determined by two independent methods: (1) deuterium ion time of flight and (2) utilizing the ratio of neutron yield to proton yield from D(d,^{3}He)n and ^{3}He(d,p)^{4}He reactions, respectively. This experiment produced the highest ion temperature ever achieved with laser-irradiated deuterium clusters.

  5. Measurement of CP observables in B{sup {+-}}{yields}D{sub CP}K{sup {+-}} decays and constraints on the CKM angle {gamma}

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Amo Sanchez, P. del; Lees, J. P.; Poireau, V.

    Using the entire sample of 467x10{sup 6} {Upsilon}(4S){yields}BB decays collected with the BABAR detector at the PEP-II asymmetric-energy B factory at the SLAC National Accelerator Laboratory, we perform an analysis of B{sup {+-}}{yields}DK{sup {+-}} decays, using decay modes in which the neutral D meson decays to either CP-eigenstates or non-CP-eigenstates. We measure the partial decay rate charge asymmetries for CP-even and CP-odd D final states to be A{sub CP+}=0.25{+-}0.06{+-}0.02 and A{sub CP-}=-0.09{+-}0.07{+-}0.02, respectively, where the first error is statistical and the second is systematic. The parameter A{sub CP+} is different from zero with a significance of 3.6 standard deviations, constituting evidence for direct CP violation. We also measure the ratios of the charge-averaged B partial decay rates in CP and non-CP decays, R{sub CP+}=1.18{+-}0.09{+-}0.05 and R{sub CP-}=1.07{+-}0.08{+-}0.04. We infer frequentist confidence intervals for the angle {gamma} of the unitarity triangle, for the strong phase difference {delta}{sub B}, and for the amplitude ratio r{sub B}, which are related to the B{sup -}{yields}DK{sup -} decay amplitude by r{sub B}e{sup i({delta}{sub B}-{gamma})}=A(B{sup -}{yields}{bar D}{sup 0}K{sup -})/A(B{sup -}{yields}D{sup 0}K{sup -}). Including statistical and systematic uncertainties, we obtain 0.24

  6. A procedure for the significance testing of unmodeled errors in GNSS observations

    NASA Astrophysics Data System (ADS)

    Li, Bofeng; Zhang, Zhetao; Shen, Yunzhong; Yang, Ling

    2018-01-01

    It is a crucial task to establish a precise mathematical model for global navigation satellite system (GNSS) observations in precise positioning. Due to the spatiotemporal complexity of, and limited knowledge on, systematic errors in GNSS observations, some residual systematic errors would inevitably remain even after correction with empirical models and parameterization. These residual systematic errors are referred to as unmodeled errors. However, most of the existing studies mainly focus on handling the systematic errors that can be properly modeled and simply ignore the unmodeled errors that may actually exist. To further improve the accuracy and reliability of GNSS applications, such unmodeled errors must be handled, especially when they are significant. Therefore, the first question is how to statistically validate the significance of unmodeled errors. In this research, we propose a procedure to examine the significance of these unmodeled errors by the combined use of hypothesis tests. With this testing procedure, three components of unmodeled errors, i.e., the nonstationary signal, stationary signal and white noise, are identified. The procedure is tested using simulated data and real BeiDou datasets with varying error sources. The results show that the unmodeled errors can be discriminated by our procedure with approximately 90% confidence. The efficiency of the proposed procedure is further confirmed by applying the time-domain Allan variance analysis and the frequency-domain fast Fourier transform. In summary, spatiotemporally correlated unmodeled errors commonly exist in GNSS observations and are mainly governed by the residual atmospheric biases and multipath. Their patterns may also be impacted by the receiver.
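
    Two of the diagnostics mentioned above can be sketched directly on a residual series: a non-overlapping Allan variance, whose dependence on averaging time separates white noise from correlated unmodeled error, and a simple lag-1 autocorrelation check of whiteness. This is not the authors' combined hypothesis-testing procedure; the residual series below is synthetic.

```python
# Allan variance and a crude whiteness check applied to a synthetic GNSS residual series.
import numpy as np

def allan_variance(x, taus):
    """Non-overlapping Allan variance of a residual series x for block sizes taus."""
    out = []
    for m in taus:
        n_blocks = x.size // m
        block_means = x[: n_blocks * m].reshape(n_blocks, m).mean(axis=1)
        out.append(0.5 * np.mean(np.diff(block_means) ** 2))
    return np.array(out)

rng = np.random.default_rng(4)
n = 4096
white = 0.003 * rng.standard_normal(n)                     # 3 mm white noise (invented)
multipath = np.cumsum(0.0002 * rng.standard_normal(n))     # slowly varying unmodeled error (invented)
residuals = white + multipath

taus = np.array([1, 2, 4, 8, 16, 32, 64, 128])
avar = allan_variance(residuals, taus)
print("tau, Allan variance:", list(zip(taus.tolist(), np.round(avar, 8).tolist())))

r1 = np.corrcoef(residuals[:-1], residuals[1:])[0, 1]
print(f"lag-1 autocorrelation = {r1:.2f} "
      f"({'significant correlation' if abs(r1) > 2 / np.sqrt(n) else 'consistent with white noise'})")
```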

  7. Dopamine Modulates Adaptive Prediction Error Coding in the Human Midbrain and Striatum.

    PubMed

    Diederen, Kelly M J; Ziauddeen, Hisham; Vestergaard, Martin D; Spencer, Tom; Schultz, Wolfram; Fletcher, Paul C

    2017-02-15

    Learning to optimally predict rewards requires agents to account for fluctuations in reward value. Recent work suggests that individuals can efficiently learn about variable rewards through adaptation of the learning rate, and coding of prediction errors relative to reward variability. Such adaptive coding has been linked to midbrain dopamine neurons in nonhuman primates, and evidence in support for a similar role of the dopaminergic system in humans is emerging from fMRI data. Here, we sought to investigate the effect of dopaminergic perturbations on adaptive prediction error coding in humans, using a between-subject, placebo-controlled pharmacological fMRI study with a dopaminergic agonist (bromocriptine) and antagonist (sulpiride). Participants performed a previously validated task in which they predicted the magnitude of upcoming rewards drawn from distributions with varying SDs. After each prediction, participants received a reward, yielding trial-by-trial prediction errors. Under placebo, we replicated previous observations of adaptive coding in the midbrain and ventral striatum. Treatment with sulpiride attenuated adaptive coding in both midbrain and ventral striatum, and was associated with a decrease in performance, whereas bromocriptine did not have a significant impact. Although we observed no differential effect of SD on performance between the groups, computational modeling suggested decreased behavioral adaptation in the sulpiride group. These results suggest that normal dopaminergic function is critical for adaptive prediction error coding, a key property of the brain thought to facilitate efficient learning in variable environments. Crucially, these results also offer potential insights for understanding the impact of disrupted dopamine function in mental illness. SIGNIFICANCE STATEMENT To choose optimally, we have to learn what to expect. Humans dampen learning when there is a great deal of variability in reward outcome, and two brain regions that are modulated by the brain chemical dopamine are sensitive to reward variability. Here, we aimed to directly relate dopamine to learning about variable rewards, and the neural encoding of associated teaching signals. We perturbed dopamine in healthy individuals using dopaminergic medication and asked them to predict variable rewards while we made brain scans. Dopamine perturbations impaired learning and the neural encoding of reward variability, thus establishing a direct link between dopamine and adaptation to reward variability. These results aid our understanding of clinical conditions associated with dopaminergic dysfunction, such as psychosis. Copyright © 2017 Diederen et al.

  8. Bar Code Medication Administration Technology: Characterization of High-Alert Medication Triggers and Clinician Workarounds.

    PubMed

    Miller, Daniel F; Fortier, Christopher R; Garrison, Kelli L

    2011-02-01

    Bar code medication administration (BCMA) technology is gaining acceptance for its ability to prevent medication administration errors. However, studies suggest that improper use of BCMA technology can yield unsatisfactory error prevention and introduction of new potential medication errors. To evaluate the incidence of high-alert medication BCMA triggers and alert types and discuss the type of nursing and pharmacy workarounds occurring with the use of BCMA technology and the electronic medication administration record (eMAR). Medication scanning and override reports from January 1, 2008, through November 30, 2008, for all adult medical/surgical units were retrospectively evaluated for high-alert medication system triggers, alert types, and override reason documentation. An observational study of nursing workarounds on an adult medicine step-down unit was performed and an analysis of potential pharmacy workarounds affecting BCMA and the eMAR was also conducted. Seventeen percent of scanned medications triggered an error alert of which 55% were for high-alert medications. Insulin aspart, NPH insulin, hydromorphone, potassium chloride, and morphine were the top 5 high-alert medications that generated alert messages. Clinician override reasons for alerts were documented in only 23% of administrations. Observational studies assessing for nursing workarounds revealed a median of 3 clinician workarounds per administration. Specific nursing workarounds included a failure to scan medications/patient armband and scanning the bar code once the dosage has been removed from the unit-dose packaging. Analysis of pharmacy order entry process workarounds revealed the potential for missed doses, duplicate doses, and doses being scheduled at the wrong time. BCMA has the potential to prevent high-alert medication errors by alerting clinicians through alert messages. Nursing and pharmacy workarounds can limit the recognition of optimal safety outcomes and therefore workflow processes must be continually analyzed and restructured to yield the intended full benefits of BCMA technology. © 2011 SAGE Publications.

  9. Super-global distortion correction for a rotational C-arm x-ray image intensifier.

    PubMed

    Liu, R R; Rudin, S; Bednarek, D R

    1999-09-01

    Image intensifier (II) distortion changes as a function of C-arm rotation angle because of changes in the orientation of the II with the earth's or other stray magnetic fields. For cone-beam computed tomography (CT), distortion correction for all angles is essential. The new super-global distortion correction consists of a model to continuously correct II distortion not only at each location in the image but for every rotational angle of the C arm. Calibration bead images were acquired with a standard C arm in 9 in. II mode. The super-global (SG) model is obtained from the single-plane global correction of the selected calibration images with a given sampling angle interval. The fifth-order single-plane global corrections yielded a residual rms error of 0.20 pixels, while the SG model yielded an rms error of 0.21 pixels, a negligibly small difference. We evaluated the accuracy dependence of the SG model on various factors, such as the single-plane global fitting order, SG order, and angular sampling interval. We found that a good SG model can be obtained using a sixth-order SG polynomial fit based on the fifth-order single-plane global correction, and that a 10-degree sampling interval was sufficient. Thus, the SG model saves processing resources and storage space. The residual errors from the mechanical errors of the x-ray system were also investigated, and found to be comparable with the SG residual error. Additionally, a single-plane global correction was done in the cylindrical coordinate system, and physical information about pincushion distortion and S distortion was observed and analyzed; however, this method is not recommended due to a lack of computational efficiency. In conclusion, the SG model provides an accurate, fast, and simple correction for rotational C-arm images, which may be used for cone-beam CT.
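
    The super-global idea can be illustrated with a toy calculation: given per-angle polynomial correction coefficients, fit each coefficient as a polynomial of the C-arm angle so the correction can be evaluated at angles that were never sampled. The sketch below uses fabricated one-dimensional coefficients and illustrative orders; the paper's actual corrections are fifth-order two-dimensional image-plane fits.

```python
import numpy as np

# Toy stand-in for the per-angle ("single-plane global") fits: at each sampled C-arm angle
# we assume we already have the polynomial coefficients of the in-plane distortion correction.
angles_deg = np.arange(0, 181, 10)              # 10-degree sampling interval (assumed)
n_coeffs = 6                                    # number of in-plane coefficients (illustrative)
rng = np.random.default_rng(1)
# Fabricated smooth variation of each coefficient with angle, standing in for measured fits.
per_angle_coeffs = np.stack([
    0.5 * np.sin(np.radians(angles_deg) + k) + 0.05 * rng.normal(size=angles_deg.size)
    for k in range(n_coeffs)
], axis=1)                                      # shape (n_angles, n_coeffs)

# Super-global step: fit each in-plane coefficient as a polynomial in the rotation angle.
sg_order = 6
sg_fits = [np.polyfit(angles_deg, per_angle_coeffs[:, k], sg_order) for k in range(n_coeffs)]

def coeffs_at_angle(theta_deg):
    """Evaluate the distortion-correction coefficients at an arbitrary C-arm angle."""
    return np.array([np.polyval(f, theta_deg) for f in sg_fits])

print(coeffs_at_angle(37.0))   # coefficients for an angle that was never sampled
```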

  10. Particle drag history in a subcritical post-shock flow - data analysis method and uncertainty

    NASA Astrophysics Data System (ADS)

    Ding, Liuyang; Bordoloi, Ankur; Adrian, Ronald; Prestridge, Kathy; Arizona State University Team; Los Alamos National Laboratory Team

    2017-11-01

    A novel data analysis method for measuring particle drag in an 8-pulse particle tracking velocimetry-accelerometry (PTVA) experiment is described. We represented the particle drag history, CD(t), using polynomials up to the third order. An analytical model for continuous particle position history was derived by integrating an equation relating CD(t) with particle velocity and acceleration. The coefficients of CD(t) were then calculated by fitting the position history model to eight measured particle locations in the least-squares sense. A preliminary test with experimental data showed that the new method yielded physically more reasonable particle velocity and acceleration histories than conventionally adopted polynomial fitting. To fully assess and optimize the performance of the new method, we performed a PTVA simulation by assuming a ground truth of particle motion based on an ensemble of experimental data. The results indicated a significant reduction in the RMS error of CD. We also found that for particle locating noise between 0.1 and 3 pixels, a range encountered in our experiment, the lowest RMS error was achieved by using the quadratic CD(t) model. Furthermore, we also discuss the optimization of the pulse timing configuration.
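
    A simplified sketch of the fitting idea follows: represent CD(t) as a polynomial, integrate a hypothetical, much-simplified 1D drag law to obtain a continuous position history, and recover the polynomial coefficients by least squares on the eight pulse positions. The drag law, constants, pulse timing, and noise level are assumptions, not values from the experiment.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import least_squares

# Hypothetical 1D model: a particle decelerating relative to a uniform post-shock gas velocity u_g.
# The constant 'k' lumps fluid density, particle density, and diameter; it is an assumption here,
# as is the simple drag law -- the paper's full model is not reproduced.
u_g, k = 50.0, 0.8
t_pulses = np.linspace(0.0, 7e-3, 8)            # 8 PTVA pulse times (assumed spacing)

def simulate_positions(cd_coeffs, x0=0.0, v0=200.0):
    """Integrate dx/dt = v, dv/dt = -k*CD(t)*(v-u_g)*|v-u_g| and sample x at the pulse times."""
    def rhs(t, y):
        x, v = y
        cd = np.polyval(cd_coeffs, t)
        rel = v - u_g
        return [v, -k * cd * rel * abs(rel)]
    sol = solve_ivp(rhs, (t_pulses[0], t_pulses[-1]), [x0, v0], t_eval=t_pulses, rtol=1e-8)
    return sol.y[0]

# Synthetic "measured" positions from a known quadratic CD(t), plus particle-locating noise.
true_cd = np.array([2.0e4, -150.0, 1.2])        # quadratic coefficients, highest order first
rng = np.random.default_rng(2)
measured = simulate_positions(true_cd) + rng.normal(0.0, 1e-5, size=8)

# Fit the CD(t) polynomial coefficients by least squares on the position residuals.
fit = least_squares(lambda c: simulate_positions(c) - measured, x0=np.array([0.0, 0.0, 1.0]))
print("recovered CD(t) coefficients:", fit.x)
```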

  11. Fine-Scale Population Estimation by 3D Reconstruction of Urban Residential Buildings

    PubMed Central

    Wang, Shixin; Tian, Ye; Zhou, Yi; Liu, Wenliang; Lin, Chenxi

    2016-01-01

    Fine-scale population estimation is essential in emergency response and epidemiological applications as well as urban planning and management. However, representing populations in heterogeneous urban regions with a finer resolution is a challenge. This study aims to obtain fine-scale population distribution based on 3D reconstruction of urban residential buildings with morphological operations using optical high-resolution (HR) images from the Chinese No. 3 Resources Satellite (ZY-3). Specifically, the research area was first divided into three categories when dasymetric mapping was taken into consideration. The results demonstrate that the morphological building index (MBI) yielded better results than built-up presence index (PanTex) in building detection, and the morphological shadow index (MSI) outperformed color invariant indices (CIIT) in shadow extraction and height retrieval. Building extraction and height retrieval were then combined to reconstruct 3D models and to estimate population. Final results show that this approach is effective in fine-scale population estimation, with a mean relative error of 16.46% and an overall Relative Total Absolute Error (RATE) of 0.158. This study gives significant insights into fine-scale population estimation in complicated urban landscapes, when detailed 3D information of buildings is unavailable. PMID:27775670
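
    The core allocation step can be sketched in a few lines: once residential buildings and their heights are reconstructed, a census unit's population is distributed dasymetrically in proportion to building volume. The numbers below are illustrative only, not values from the study.

```python
import numpy as np

# Toy dasymetric allocation: distribute a census-unit population over residential buildings
# in proportion to their reconstructed 3D volume (footprint area x estimated height).
footprint_m2 = np.array([420.0, 650.0, 300.0, 980.0])   # detected building footprints
height_m     = np.array([ 15.0,  33.0,  12.0,  48.0])   # heights retrieved from shadow length
unit_population = 2600                                   # known population of the census unit

volume = footprint_m2 * height_m
building_pop = unit_population * volume / volume.sum()

for i, p in enumerate(building_pop):
    print(f"building {i}: ~{p:.0f} residents")

# A simple accuracy metric analogous to the paper's relative error would be
# |estimated - reference| / reference, averaged over validation units.
```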

  12. Deviations from Vegard's law in semiconductor thin films measured with X-ray diffraction and Rutherford backscattering: The Ge1-ySny and Ge1-xSix cases

    NASA Astrophysics Data System (ADS)

    Xu, Chi; Senaratne, Charutha L.; Culbertson, Robert J.; Kouvetakis, John; Menéndez, José

    2017-09-01

    The compositional dependence of the lattice parameter in Ge1-ySny alloys has been determined from combined X-ray diffraction and Rutherford Backscattering (RBS) measurements of a large set of epitaxial films with compositions in the 0 < y < 0.14 range. In view of contradictory prior results, a critical analysis of this method has been carried out, with emphasis on nonlinear elasticity corrections and systematic errors in popular RBS simulation codes. The approach followed is validated by showing that measurements of Ge1-xSix films yield a bowing parameter θGeSi =-0.0253(30) Å, in excellent agreement with the classic work by Dismukes. When the same methodology is applied to Ge1-ySny alloy films, it is found that the bowing parameter θGeSn is zero within experimental error, so that the system follows Vegard's law. This is in qualitative agreement with ab initio theory, but the value of the experimental bowing parameter is significantly smaller than the theoretical prediction. Possible reasons for this discrepancy are discussed in detail.
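
    The quantity being tested is the bowing parameter θ in a(y) = (1 - y) a_A + y a_B + θ y(1 - y); Vegard's law corresponds to θ = 0. A minimal sketch of fitting θ by linear least squares to composition/lattice-parameter pairs is shown below, using synthetic Ge1-xSix-like data; the endpoint lattice constants are standard room-temperature values, everything else is fabricated.

```python
import numpy as np

# Lattice parameter of an A(1-y)B(y) alloy with a quadratic deviation from Vegard's law:
#   a(y) = (1 - y) * a_A + y * a_B + theta * y * (1 - y)
a_Ge, a_Si = 5.6579, 5.4310   # angstroms (room-temperature values)

def lattice_parameter(y, theta):
    return (1.0 - y) * a_Ge + y * a_Si + theta * y * (1.0 - y)

# Synthetic data generated with theta = -0.025 A, mimicking the Ge(1-x)Si(x) case.
rng = np.random.default_rng(3)
y = np.linspace(0.02, 0.30, 12)
a_meas = lattice_parameter(y, -0.025) + rng.normal(0.0, 5e-4, y.size)

# Linear least squares: a_meas minus the linear Vegard term = theta * y*(1-y)
design = (y * (1.0 - y))[:, None]
target = a_meas - ((1.0 - y) * a_Ge + y * a_Si)
theta_fit, *_ = np.linalg.lstsq(design, target, rcond=None)
print(f"fitted bowing parameter: {theta_fit[0]:+.4f} A")
```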

  13. Solar Irradiance from GOES Albedo performance in a Hydrologic Model Simulation of Snowmelt Runoff

    NASA Astrophysics Data System (ADS)

    Sumargo, E.; Cayan, D. R.; McGurk, B. J.

    2015-12-01

    In many hydrologic modeling applications, solar radiation has been parameterized using commonly available measures, such as the daily temperature range, due to the scarcity of in situ solar radiation measurement networks. However, these parameterized estimates often produce significant biases. Here we test hourly solar irradiance derived from the Geostationary Operational Environmental Satellite (GOES) visible albedo product, using several established algorithms. Focusing on the Sierra Nevada and White Mountain in California, we compared the GOES irradiance and that from a traditional temperature-based algorithm with incoming irradiance from pyranometers at 19 stations. The GOES-based estimates yielded a 21-27% reduction in root-mean-squared error (averaged over the 19 sites). The derived irradiance is then prescribed as an input to the Precipitation-Runoff Modeling System (PRMS). We constrain our experiment to the Tuolumne River watershed and focus our attention on the winter and spring of 1996-2014. A root-mean-squared error reduction of 2-6% in daily inflow to Hetch Hetchy at the lower end of the Tuolumne catchment was achieved by incorporating the insolation estimates at only 8 out of 280 Hydrologic Response Units (HRUs) within the basin. Our ongoing work endeavors to apply satellite-derived irradiance at each individual HRU.

  14. Hadronic light-by-light scattering contribution to the muon anomalous magnetic moment from lattice QCD.

    PubMed

    Blum, Thomas; Chowdhury, Saumitra; Hayakawa, Masashi; Izubuchi, Taku

    2015-01-09

    The most compelling possibility for a new law of nature beyond the four fundamental forces comprising the standard model of high-energy physics is the discrepancy between measurements and calculations of the muon anomalous magnetic moment. Until now a key part of the calculation, the hadronic light-by-light contribution, has only been accessible from models of QCD, the quantum description of the strong force, whose accuracy at the required level may be questioned. A first principles calculation with systematically improvable errors is needed, along with the upcoming experiments, to decisively settle the matter. For the first time, the form factor that yields the light-by-light scattering contribution to the muon anomalous magnetic moment is computed in such a framework, lattice QCD+QED and QED. A nonperturbative treatment of QED is used and checked against perturbation theory. The hadronic contribution is calculated for unphysical quark and muon masses, and only the diagram with a single quark loop is computed for which statistically significant signals are obtained. Initial results are promising, and the prospect for a complete calculation with physical masses and controlled errors is discussed.

  15. Evaluation of the microsoft kinect skeletal versus depth data analysis for timed-up and go and figure of 8 walk tests.

    PubMed

    Hotrabhavananda, Benjamin; Mishra, Anup K; Skubic, Marjorie; Hotrabhavananda, Nijaporn; Abbott, Carmen

    2016-08-01

    We compared the performance of the Kinect skeletal data with the Kinect depth data in capturing different gait parameters during the Timed-up and Go Test (TUG) and Figure of 8 Walk Test (F8W). The gait parameters considered were stride length, stride time, and walking speed for the TUG, and number of steps and completion time for the F8W. A marker-based Vicon motion capture system was used for the ground-truth measurements. Five healthy participants were recruited for the experiment and were asked to perform three trials of each task. Results show that depth data analysis yields stride length and stride time measures with significantly lower percentage errors compared with the skeletal data analysis. However, the skeletal and depth data performed similarly, with less than 3% absolute mean percentage error, in determining the walking speed for the TUG and both parameters of the F8W. The results show potential capabilities of Kinect depth data analysis in computing many gait parameters, whereas the Kinect skeletal data can also be used for walking speed in the TUG and for both F8W parameters.

  16. Relating Regime Structure to Probability Distribution and Preferred Structure of Small Errors in a Large Atmospheric GCM

    NASA Astrophysics Data System (ADS)

    Straus, D. M.

    2007-12-01

    The probability distribution (pdf) of errors is followed in identical twin studies using the COLA T63 AGCM, integrated with observed SST for 15 recent winters. 30 integrations per winter (for 15 winters) are available with initial errors that are extremely small. The evolution of the pdf is tested for multi-modality, and the results interpreted in terms of clusters / regimes found in: (a) the set of 15x30 integrations mentioned, and (b) a larger ensemble of 55x15 integrations made with the same GCM using the same SSTs. The mapping of pdf evolution and clusters is also carried out for each winter separately, using the clusters found in the 55-member ensemble for the same winter alone. This technique yields information on the change in regimes caused by different boundary forcing (Straus and Molteni, 2004; Straus, Corti and Molteni, 2006). Analysis of the growing errors in terms of baroclinic and barotropic components allows for interpretation of the corresponding instabilities.

  17. Bathymetric surveying with GPS and heave, pitch, and roll compensation

    USGS Publications Warehouse

    Work, P.A.; Hansen, M.; Rogers, W.E.

    1998-01-01

    Field and laboratory tests of a shipborne hydrographic survey system were conducted. The system consists of two 12-channel GPS receivers (one on-board, one fixed on shore), a digital acoustic fathometer, and a digital heave-pitch-roll (HPR) recorder. Laboratory tests of the HPR recorder and fathometer are documented. Results of field tests of the isolated GPS system and then of the entire suite of instruments are presented. A method for data reduction is developed to account for vertical errors introduced by roll and pitch of the survey vessel, which can be substantial (decimeters). The GPS vertical position data are found to be reliable to 2-3 cm and the fathometer to 5 cm in the laboratory. The field test of the complete system in shallow water (<2 m) indicates absolute vertical accuracy of 10-20 cm. Much of this error is attributed to the fathometer. Careful surveying and equipment setup can minimize systematic error and yield much smaller average errors.
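
    A minimal sketch of the attitude correction for a single sounding is given below, assuming a hull-fixed, nominally vertical beam so that roll and pitch lengthen the reported depth by a factor of 1/(cos roll · cos pitch); the paper's full reduction also handles lever-arm offsets and timing, which are not reproduced here. All offsets and angles in the example are made up.

```python
import numpy as np

def seabed_elevation(antenna_height, antenna_to_transducer, depth_reading, roll_deg, pitch_deg):
    """
    Simplified reduction of a single sounding (a sketch, not the paper's full method).

    antenna_height        : GPS height of the antenna above the vertical datum (m)
    antenna_to_transducer : vertical offset from antenna down to transducer (m)
    depth_reading         : fathometer depth below the transducer (m)
    roll_deg, pitch_deg   : vessel attitude from the heave-pitch-roll recorder (degrees)

    Assumes the beam is fixed to the hull and nominally vertical, so tilting by roll/pitch
    makes the reported depth longer than the true vertical depth.
    """
    roll, pitch = np.radians(roll_deg), np.radians(pitch_deg)
    vertical_depth = depth_reading * np.cos(roll) * np.cos(pitch)
    return antenna_height - antenna_to_transducer - vertical_depth

# Example: a 5-degree roll on a 1.8 m sounding changes the depth by roughly 7 mm, while the
# induced vertical displacement of the transducer itself (lever arm) can reach decimeters.
print(seabed_elevation(10.00, 2.50, 1.80, roll_deg=5.0, pitch_deg=1.0))
```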

  18. Visually lossless compression of digital hologram sequences

    NASA Astrophysics Data System (ADS)

    Darakis, Emmanouil; Kowiel, Marcin; Näsänen, Risto; Naughton, Thomas J.

    2010-01-01

    Digital hologram sequences have great potential for the recording of 3D scenes of moving macroscopic objects as their numerical reconstruction can yield a range of perspective views of the scene. Digital holograms inherently have large information content and lossless coding of holographic data is rather inefficient due to the speckled nature of the interference fringes they contain. Lossy coding of still holograms and hologram sequences has shown promising results. By definition, lossy compression introduces errors in the reconstruction. In all of the previous studies, numerical metrics were used to measure the compression error and through it, the coding quality. Digital hologram reconstructions are highly speckled and the speckle pattern is very sensitive to data changes. Hence, numerical quality metrics can be misleading. For example, for low compression ratios, a numerically significant coding error can have visually negligible effects. Yet, in several cases, it is of high interest to know how much lossy compression can be achieved, while maintaining the reconstruction quality at visually lossless levels. Using an experimental threshold estimation method, the staircase algorithm, we determined the highest compression ratio that was not perceptible to human observers for objects compressed with Dirac and MPEG-4 compression methods. This level of compression can be regarded as the point below which compression is perceptually lossless although physically the compression is lossy. It was found that up to 4 to 7.5 fold compression can be obtained with the above methods without any perceptible change in the appearance of video sequences.
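
    The threshold-estimation logic can be illustrated with a generic adaptive up-down staircase: raise the compression ratio after consecutive "no visible difference" judgements, lower it whenever a difference is seen, and estimate the visually lossless limit from the reversal points. This is a sketch of the general technique under assumed step sizes and rules, not the exact procedure used in the study.

```python
import random

def staircase_threshold(perceives_difference, start_ratio=2.0, step=1.25, n_reversals=8):
    """
    Generic 1-up/2-down adaptive staircase. 'perceives_difference(ratio)' should return True
    if an observer can tell the compressed reconstruction from the original at that ratio.
    Returns an estimate of the highest compression ratio that is still visually lossless.
    """
    ratio, streak, reversals, last_direction = start_ratio, 0, [], None
    while len(reversals) < n_reversals:
        if perceives_difference(ratio):
            direction = "down"           # visible artefacts: back off the compression
            ratio /= step
            streak = 0
        else:
            streak += 1
            direction = last_direction
            if streak == 2:              # two consecutive "invisible" judgements: push harder
                ratio *= step
                streak = 0
                direction = "up"
        if last_direction and direction and direction != last_direction:
            reversals.append(ratio)      # record the ratio at each reversal of direction
        last_direction = direction
    return sum(reversals) / len(reversals)

# Simulated observer who starts seeing artefacts above a ratio of ~6 (with occasional lapses).
sim = lambda r: r > 6.0 and random.random() > 0.1
print(staircase_threshold(sim))
```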

  19. Beyond crisis resource management: new frontiers in human factors training for acute care medicine.

    PubMed

    Petrosoniak, Andrew; Hicks, Christopher M

    2013-12-01

    Error is ubiquitous in medicine, particularly during critical events and resuscitation. A significant proportion of adverse events can be attributed to inadequate team-based skills such as communication, leadership, situation awareness and resource utilization. Aviation-based crisis resource management (CRM) training using high-fidelity simulation has been proposed as a strategy to improve team behaviours. This review will address key considerations in CRM training and outline recommendations for the future of human factors education in healthcare. A critical examination of the current literature yields several important considerations to guide the development and implementation of effective simulation-based CRM training. These include defining a priori domain-specific objectives, creating an immersive environment that encourages deliberate practice and transfer-appropriate processing, and the importance of effective team debriefing. Building on research from high-risk industry, we suggest that traditional CRM training may be augmented with new training techniques that promote the development of shared mental models for team and task processes, address the effect of acute stress on team performance, and integrate strategies to improve clinical reasoning and the detection of cognitive errors. The evolution of CRM training involves a 'Triple Threat' approach that integrates mental model theory for team and task processes, training for stressful situations and metacognition and error theory towards a more comprehensive training paradigm, with roots in high-risk industry and cognitive psychology. Further research is required to evaluate the impact of this approach on patient-oriented outcomes.

  20. On Statistical Analysis of Neuroimages with Imperfect Registration

    PubMed Central

    Kim, Won Hwa; Ravi, Sathya N.; Johnson, Sterling C.; Okonkwo, Ozioma C.; Singh, Vikas

    2016-01-01

    A variety of studies in neuroscience/neuroimaging seek to perform statistical inference on the acquired brain image scans for diagnosis as well as understanding the pathological manifestation of diseases. To do so, an important first step is to register (or co-register) all of the image data into a common coordinate system. This permits meaningful comparison of the intensities at each voxel across groups (e.g., diseased versus healthy) to evaluate the effects of the disease and/or use machine learning algorithms in a subsequent step. But errors in the underlying registration make this problematic: they either decrease the statistical power or make the follow-up inference tasks less effective/accurate. In this paper, we derive a novel algorithm which offers immunity to local errors in the underlying deformation field obtained from registration procedures. By deriving a deformation invariant representation of the image, the downstream analysis can be made more robust as if one had access to a (hypothetical) far superior registration procedure. Our algorithm is based on recent work on the scattering transform. Using this as a starting point, we show how results from harmonic analysis (especially non-Euclidean wavelets) yield strategies for designing deformation and additive noise invariant representations of large 3-D brain image volumes. We present a set of results on synthetic and real brain images where we achieve robust statistical analysis even in the presence of substantial deformation errors; here, standard analysis procedures significantly underperform and fail to identify the true signal. PMID:27042168

  1. HYDROLOGIC MODEL CALIBRATION AND UNCERTAINTY IN SCENARIO ANALYSIS

    EPA Science Inventory

    A systematic analysis of model performance during simulations based on observed land-cover/use change is used to quantify error associated with water-yield simulations for a series of known landscape conditions over a 24-year period with the goal of evaluatin...

  2. Written Corrective Feedback and Peer Review in the BYOD Classroom

    ERIC Educational Resources Information Center

    Ferreira, Daniel

    2013-01-01

    Error correction in the English as a Foreign Language (EFL) writing curriculum is a practice both teachers and students agree is important for writing proficiency development (Ferris, 2004; Van Beuningen, De Jong, & Kuiken, 2012; Vyatkina, 2010, 2011). Research suggests student dependency on teacher corrective feedback yields few long-term…

  3. Control and automation of multilayered integrated microfluidic device fabrication.

    PubMed

    Kipper, Sarit; Frolov, Ludmila; Guy, Ortal; Pellach, Michal; Glick, Yair; Malichi, Asaf; Knisbacher, Binyamin A; Barbiro-Michaely, Efrat; Avrahami, Dorit; Yavets-Chen, Yehuda; Levanon, Erez Y; Gerber, Doron

    2017-01-31

    Integrated microfluidics is a sophisticated three-dimensional (multi layer) solution for high complexity serial or parallel processes. Fabrication of integrated microfluidic devices requires soft lithography and the stacking of thin-patterned PDMS layers. Precise layer alignment and bonding is crucial. There are no previously reported standards for alignment of the layers, which is mostly performed using uncontrolled processes with very low alignment success. As a result, integrated microfluidics is mostly used in academia rather than in the many potential industrial applications. We have designed and manufactured a semiautomatic Microfluidic Device Assembly System (μDAS) for full device production. μDAS comprises an electrooptic mechanical system consisting of four main parts: optical system, smart media holder (for PDMS), a micropositioning xyzθ system and a macropositioning XY mechanism. The use of the μDAS yielded valuable information regarding PDMS as the material for device fabrication, revealed previously unidentified errors, and enabled optimization of a robust fabrication process. In addition, we have demonstrated the utilization of the μDAS technology for fabrication of a complex 3 layered device with over 12 000 micromechanical valves and an array of 64 × 64 DNA spots on a glass substrate with high yield and high accuracy. We increased fabrication yield from 25% to about 85% with an average layer alignment error of just ∼4 μm. It also increased our protein expression yields from 80% to over 90%, allowing us to investigate more proteins per experiment. The μDAS has great potential to become a valuable tool for both advancing integrated microfluidics in academia and producing and applying microfluidic devices in the industry.

  4. Accounting for the decrease of photosystem photochemical efficiency with increasing irradiance to estimate quantum yield of leaf photosynthesis.

    PubMed

    Yin, Xinyou; Belay, Daniel W; van der Putten, Peter E L; Struik, Paul C

    2014-12-01

    Maximum quantum yield for leaf CO2 assimilation under limiting light conditions (Φ CO2LL) is commonly estimated as the slope of the linear regression of net photosynthetic rate against absorbed irradiance over a range of low-irradiance conditions. Methodological errors associated with this estimation have often been attributed either to light absorptance by non-photosynthetic pigments or to some data points being beyond the linear range of the irradiance response, both causing an underestimation of Φ CO2LL. We demonstrate here that a decrease in photosystem (PS) photochemical efficiency with increasing irradiance, even at very low levels, is another source of error that causes a systematic underestimation of Φ CO2LL. A model method accounting for this error was developed, and was used to estimate Φ CO2LL from simultaneous measurements of gas exchange and chlorophyll fluorescence on leaves using various combinations of species, CO2, O2, or leaf temperature levels. The conventional linear regression method under-estimated Φ CO2LL by ca. 10-15%. Differences in the estimated Φ CO2LL among measurement conditions were generally accounted for by different levels of photorespiration as described by the Farquhar-von Caemmerer-Berry model. However, our data revealed that the temperature dependence of PSII photochemical efficiency under low light was an additional factor that should be accounted for in the model.
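
    For reference, the conventional estimate criticized above is just the slope of a straight-line fit of net assimilation against absorbed irradiance at low light, as sketched below with fabricated data; the paper's correction additionally uses chlorophyll-fluorescence estimates of photosystem efficiency, which this sketch does not implement.

```python
import numpy as np

# Conventional estimate of the maximum quantum yield for CO2 assimilation: the slope of net
# photosynthesis (A, umol CO2 m-2 s-1) regressed on absorbed irradiance (Iabs, umol photons
# m-2 s-1) over the low-light range. The data below are fabricated for illustration only.
i_abs = np.array([10., 20., 30., 40., 60., 80., 100., 120.])
a_net = np.array([-0.4, 0.2, 0.8, 1.3, 2.3, 3.1, 3.8, 4.4])

slope, intercept = np.polyfit(i_abs, a_net, 1)
print(f"conventional quantum-yield estimate: {slope:.3f} mol CO2 / mol photons")

# The paper's point: because photosystem photochemical efficiency already declines over this
# range, the straight-line slope is biased low (by roughly 10-15% in their data), so the
# corrected estimate requires pairing gas exchange with fluorescence-based PSII efficiency
# rather than fitting a simple line.
```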

  5. Building a better methane generation model: Validating models with methane recovery rates from 35 Canadian landfills.

    PubMed

    Thompson, Shirley; Sawyer, Jennifer; Bonam, Rathan; Valdivia, J E

    2009-07-01

    The German EPER, TNO, Belgium, LandGEM, and Scholl Canyon models for estimating methane production were compared to methane recovery rates for 35 Canadian landfills, assuming that 20% of emissions were not recovered. Two different fractions of degradable organic carbon (DOC(f)) were applied in all models. Most models performed better when the DOC(f) was 0.5 compared to 0.77. The Belgium, Scholl Canyon, and LandGEM version 2.01 models produced the best results of the existing models with respective mean absolute errors compared to methane generation rates (recovery rates + 20%) of 91%, 71%, and 89% at 0.50 DOC(f) and 171%, 115%, and 81% at 0.77 DOC(f). The Scholl Canyon model typically overestimated methane recovery rates and the LandGEM version 2.01 model, which modifies the Scholl Canyon model by dividing waste by 10, consistently underestimated methane recovery rates; this comparison suggested that modifying the divisor for waste in the Scholl Canyon model between one and ten could improve its accuracy. At 0.50 DOC(f) and 0.77 DOC(f) the modified model had the lowest absolute mean error when divided by 1.5 yielding 63 +/- 45% and 2.3 yielding 57 +/- 47%, respectively. These modified models reduced error and variability substantially and both have a strong correlation of r = 0.92.
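
    All of the models compared belong to the first-order-decay family, so the effect of the waste divisor discussed above can be illustrated with a generic sketch; the parameter values are typical textbook numbers, not the calibrated Canadian values.

```python
import numpy as np

def first_order_methane(waste_tonnes_per_year, k=0.05, L0=100.0, divisor=10.0):
    """
    Generic first-order-decay landfill gas model (Scholl Canyon / LandGEM family), sketched
    for illustration. waste_tonnes_per_year[i] is the waste accepted in year i.
    k  : decay rate constant (1/yr);  L0 : methane generation potential (m3 CH4 / tonne).
    'divisor' plays the role of the waste divisor discussed in the abstract (10 in LandGEM
    v2.01, 1 in the plain Scholl Canyon form, roughly 1.5-2.3 in the modified models).
    Returns modelled methane generation (m3/yr) for each year of the record.
    """
    waste = np.asarray(waste_tonnes_per_year, dtype=float)
    years = np.arange(waste.size)
    q = np.zeros(waste.size)
    for t in years:
        ages = t - years[: t + 1]                  # age of each waste cohort in year t
        q[t] = np.sum(k * L0 * (waste[: t + 1] / divisor) * np.exp(-k * ages))
    return q

annual_waste = np.full(20, 50_000.0)               # 20 years of constant acceptance (toy input)
for d in (1.0, 1.5, 2.3, 10.0):
    print(f"divisor {d:4.1f}: year-20 generation = {first_order_methane(annual_waste, divisor=d)[-1]:,.0f} m3/yr")
```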

  6. Wafer-level colinearity monitoring for TFH applications

    NASA Astrophysics Data System (ADS)

    Moore, Patrick; Newman, Gary; Abreau, Kelly J.

    2000-06-01

    Advances in thin film head (TFH) designs continue to outpace those in the IC industry. The transition to giant magneto resistive (GMR) designs is underway along with the push toward areal densities in the 20 Gbit/inch2 regime and beyond. This comes at a time when the popularity of the low-cost personal computer (PC) is extremely high, and PC prices are continuing to fall. Consequently, TFH manufacturers are forced to deal with pricing pressure in addition to technological demands. New methods of monitoring and improving yield are required along with advanced head designs. TFH manufacturing is a two-step process. The first is a wafer-level process consisting of manufacturing devices on substrates using processes similar to those in the IC industry. The second half is a slider-level process where wafers are diced into 'rowbars' containing many heads. Each rowbar is then lapped to obtain the desired performance from each head. Variation in the placement of specific layers of each device on the bar, known as a colinearity error, causes a change in device performance and directly impacts yield. The photolithography tool and process contribute to colinearity errors. These components include stepper lens distortion errors, stepper stage errors, reticle fabrication errors, and CD uniformity errors. Currently, colinearity is only very roughly estimated during wafer-level TFH production. An absolute metrology tool, such as a Nikon XY, could be used to quantify colinearity with improved accuracy, but this technique is impractical since TFH manufacturers typically do not have this type of equipment at the production site. More importantly, this measurement technique does not provide the rapid feedback needed in a high-volume production facility. Consequently, the wafer-fab must rely on resistivity-based measurements from slider-fab to quantify colinearity errors. The feedback of this data may require several weeks, making it useless as a process diagnostic. This study examines a method of quickly estimating colinearity at the wafer-level with a test reticle and metrology equipment routinely found in TFH facilities. Colinearity results are correlated to slider-fab measurements on production devices. Stepper contributions to colinearity are estimated, and compared across multiple steppers and stepper generations. Multiple techniques of integrating this diagnostic into production are investigated and discussed.

  7. Correlations between Preoperative Angle Parameters and Postoperative Unpredicted Refractive Errors after Cataract Surgery in Open Angle Glaucoma (AOD 500).

    PubMed

    Lee, Wonseok; Bae, Hyoung Won; Lee, Si Hyung; Kim, Chan Yun; Seong, Gong Je

    2017-03-01

    To assess the accuracy of intraocular lens (IOL) power prediction for cataract surgery with open angle glaucoma (OAG) and to identify preoperative angle parameters correlated with postoperative unpredicted refractive errors. This study comprised 45 eyes from 45 OAG subjects and 63 eyes from 63 non-glaucomatous cataract subjects (controls). We investigated differences in preoperative predicted refractive errors and postoperative refractive errors for each group. Preoperative predicted refractive errors were obtained by biometry (IOL-master) and compared to postoperative refractive errors measured by auto-refractometer 2 months postoperatively. Anterior angle parameters were determined using swept source optical coherence tomography. We investigated correlations between preoperative angle parameters [angle open distance (AOD); trabecular iris surface area (TISA); angle recess area (ARA); trabecular iris angle (TIA)] and postoperative unpredicted refractive errors. In patients with OAG, significant differences were noted between preoperative predicted and postoperative real refractive errors, with more myopia than predicted. No significant differences were recorded in controls. Angle parameters (AOD, ARA, TISA, and TIA) at the superior and inferior quadrant were significantly correlated with differences between predicted and postoperative refractive errors in OAG patients (-0.321 to -0.408, p<0.05). Superior quadrant AOD 500 was significantly correlated with postoperative refractive differences in multivariate linear regression analysis (β=-2.925, R²=0.404). Clinically unpredicted refractive errors after cataract surgery were more common in OAG than in controls. Certain preoperative angle parameters, especially AOD 500 at the superior quadrant, were significantly correlated with these unpredicted errors.

  8. Correlations between Preoperative Angle Parameters and Postoperative Unpredicted Refractive Errors after Cataract Surgery in Open Angle Glaucoma (AOD 500)

    PubMed Central

    Lee, Wonseok; Bae, Hyoung Won; Lee, Si Hyung; Kim, Chan Yun

    2017-01-01

    Purpose To assess the accuracy of intraocular lens (IOL) power prediction for cataract surgery with open angle glaucoma (OAG) and to identify preoperative angle parameters correlated with postoperative unpredicted refractive errors. Materials and Methods This study comprised 45 eyes from 45 OAG subjects and 63 eyes from 63 non-glaucomatous cataract subjects (controls). We investigated differences in preoperative predicted refractive errors and postoperative refractive errors for each group. Preoperative predicted refractive errors were obtained by biometry (IOL-master) and compared to postoperative refractive errors measured by auto-refractometer 2 months postoperatively. Anterior angle parameters were determined using swept source optical coherence tomography. We investigated correlations between preoperative angle parameters [angle open distance (AOD); trabecular iris surface area (TISA); angle recess area (ARA); trabecular iris angle (TIA)] and postoperative unpredicted refractive errors. Results In patients with OAG, significant differences were noted between preoperative predicted and postoperative real refractive errors, with more myopia than predicted. No significant differences were recorded in controls. Angle parameters (AOD, ARA, TISA, and TIA) at the superior and inferior quadrant were significantly correlated with differences between predicted and postoperative refractive errors in OAG patients (-0.321 to -0.408, p<0.05). Superior quadrant AOD 500 was significantly correlated with postoperative refractive differences in multivariate linear regression analysis (β=-2.925, R2=0.404). Conclusion Clinically unpredicted refractive errors after cataract surgery were more common in OAG than in controls. Certain preoperative angle parameters, especially AOD 500 at the superior quadrant, were significantly correlated with these unpredicted errors. PMID:28120576

  9. Partial uniparental isodisomy of chromosome 16 unmasks a deleterious biallelic mutation in IFT140 that causes Mainzer-Saldino syndrome.

    PubMed

    Helm, Benjamin M; Willer, Jason R; Sadeghpour, Azita; Golzio, Christelle; Crouch, Eric; Vergano, Samantha Schrier; Katsanis, Nicholas; Davis, Erica E

    2017-07-19

    The ciliopathies represent an umbrella group of >50 clinical entities that share both clinical features and molecular etiology underscored by structural and functional defects of the primary cilium. Despite the advances in gene discovery, this group of entities continues to pose a diagnostic challenge, in part due to significant genetic and phenotypic heterogeneity and variability. We consulted a pediatric case from asymptomatic, non-consanguineous parents who presented as a suspected ciliopathy due to a constellation of retinal, renal, and skeletal findings. Although clinical panel sequencing of genes implicated in nephrotic syndromes yielded no likely causal mutation, an oligo-SNP microarray identified a ~20-Mb region of homozygosity, with no altered gene dosage, on chromosome 16p13. Intersection of the proband's phenotypes with known disease genes within the homozygous region yielded a single candidate, IFT140, encoding a retrograde intraflagellar transport protein implicated previously in several ciliopathies, including the phenotypically overlapping Mainzer-Saldino syndrome (MZSDS). Sanger sequencing yielded a maternally inherited homozygous c.634G>A; p.Gly212Arg mutation altering the exon 6 splice donor site. Functional studies in cells from the proband showed that the locus produced two transcripts: a majority message containing a mis-splicing event that caused a premature termination codon and a minority message homozygous for the p.Gly212Arg allele. Zebrafish in vivo complementation studies of the latter transcript demonstrated a loss of function effect. Finally, we conducted post-hoc trio-based whole exome sequencing studies to (a) test the possibility of other causal loci in the proband and (b) explain the Mendelian error of segregation for the IFT140 mutation. We show that the proband harbors a chromosome 16 maternal heterodisomy, with segmental isodisomy at 16p13, likely due to a meiosis I error in the maternal gamete. Using clinical phenotyping combined with research-based genetic and functional studies, we have characterized a recurrent IFT140 mutation in the proband; together, these data are consistent with MZSDS. Additionally, we report a rare instance of a uniparental isodisomy unmasking a deleterious mutation to cause a ciliary disorder.

  10. Understanding the Nature of Measurement Error When Estimating Energy Expenditure and Physical Activity via Physical Activity Recall.

    PubMed

    Paul, David R; McGrath, Ryan; Vella, Chantal A; Kramer, Matthew; Baer, David J; Moshfegh, Alanna J

    2018-03-26

    The National Health and Nutrition Examination Survey physical activity questionnaire (PAQ) is used to estimate activity energy expenditure (AEE) and moderate to vigorous physical activity (MVPA). Bias and variance in estimates of AEE and MVPA from the PAQ have not been described, nor the impact of measurement error when utilizing the PAQ to predict biomarkers and categorize individuals. The PAQ was administered to 385 adults to estimate AEE (AEE:PAQ) and MVPA (MVPA:PAQ), while simultaneously measuring AEE with doubly labeled water (DLW; AEE:DLW) and MVPA with an accelerometer (MVPA:A). Although AEE:PAQ [3.4 (2.2) MJ·d -1 ] was not significantly different from AEE:DLW [3.6 (1.6) MJ·d -1 ; P > .14], MVPA:PAQ [36.2 (24.4) min·d -1 ] was significantly higher than MVPA:A [8.0 (10.4) min·d -1 ; P < .0001]. AEE:PAQ regressed on AEE:DLW and MVPA:PAQ regressed on MVPA:A yielded not only significant positive relationships but also large residual variances. The relationships between AEE and MVPA, and 10 of the 12 biomarkers were underestimated by the PAQ. When compared with accelerometers, the PAQ overestimated the number of participants who met the Physical Activity Guidelines for Americans. Group-level bias in AEE:PAQ was small, but large for MVPA:PAQ. Poor within-participant estimates of AEE:PAQ and MVPA:PAQ lead to attenuated relationships with biomarkers and misclassifications of participants who met or who did not meet the Physical Activity Guidelines for Americans.

  11. Performance monitoring and error significance in patients with obsessive-compulsive disorder.

    PubMed

    Endrass, Tanja; Schuermann, Beate; Kaufmann, Christan; Spielberg, Rüdiger; Kniesche, Rainer; Kathmann, Norbert

    2010-05-01

    Performance monitoring has been consistently found to be overactive in obsessive-compulsive disorder (OCD). The present study examines whether performance monitoring in OCD is adjusted with error significance. To this end, errors in a flanker task were followed by neutral feedback (standard condition) or punishment feedback (punishment condition). In the standard condition, patients had significantly larger error-related negativity (ERN) and correct-related negativity (CRN) amplitudes than controls. However, in the punishment condition, the groups did not differ in ERN and CRN amplitudes. While healthy controls showed an amplitude enhancement between the standard and punishment conditions, OCD patients showed no variation. In contrast, group differences were not found for the error positivity (Pe): both groups had larger Pe amplitudes in the punishment condition. Results confirm earlier findings of overactive error monitoring in OCD. The absence of a variation with error significance might indicate that OCD patients are unable to down-regulate their monitoring activity according to external requirements. Copyright 2010 Elsevier B.V. All rights reserved.

  12. Parallel transmission pulse design with explicit control for the specific absorption rate in the presence of radiofrequency errors.

    PubMed

    Martin, Adrian; Schiavi, Emanuele; Eryaman, Yigitcan; Herraiz, Joaquin L; Gagoski, Borjan; Adalsteinsson, Elfar; Wald, Lawrence L; Guerin, Bastien

    2016-06-01

    A new framework for the design of parallel transmit (pTx) pulses is presented, introducing constraints for local and global specific absorption rate (SAR) in the presence of errors in the radiofrequency (RF) transmit chain. The first step is the design of a pTx RF pulse with explicit constraints for global and local SAR. Then, the worst possible SAR associated with that pulse due to RF transmission errors ("worst-case SAR") is calculated. Finally, this information is used to re-calculate the pulse with lower SAR constraints, iterating this procedure until its worst-case SAR is within safety limits. Analysis of an actual pTx RF transmit chain revealed amplitude errors as high as 8% (20%) and phase errors above 3° (15°) for spokes (spiral) pulses. Simulations show that using the proposed framework, pulses can be designed with controlled "worst-case SAR" in the presence of errors of this magnitude at a minor cost to excitation profile quality. Our worst-case SAR-constrained pTx design strategy yields pulses with local and global SAR within the safety limits even in the presence of RF transmission errors. This strategy is a natural way to incorporate SAR safety factors in the design of pTx pulses. Magn Reson Med 75:2493-2504, 2016. © 2015 Wiley Periodicals, Inc.
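
    The iterative strategy can be paraphrased as a short loop: design a pulse under a SAR budget, evaluate the worst-case SAR under the assumed RF amplitude/phase error bounds, and shrink the budget until the worst case respects the safety limit. The sketch below uses toy placeholder models for the pulse design and the SAR calculation; only the control flow mirrors the described framework.

```python
import numpy as np

rng = np.random.default_rng(4)
A = rng.normal(size=(64, 8)) + 1j * rng.normal(size=(64, 8))   # toy system matrix (voxels x channels)
target = np.ones(64)                                           # desired excitation profile

def design_pulse(sar_budget):
    """Regularized least-squares design; the regularization weight stands in for a SAR constraint."""
    lam = 100.0 / sar_budget
    return np.linalg.solve(A.conj().T @ A + lam * np.eye(8), A.conj().T @ target)

def worst_case_sar(w, amp_err=0.08):
    """Quadratic SAR surrogate evaluated at the worst allowed amplitude error (phase errors omitted)."""
    return float(np.real(w.conj() @ w)) * (1.0 + amp_err) ** 2

safety_limit, sar_budget = 0.01, 1.0
for _ in range(20):
    w = design_pulse(sar_budget)
    wc = worst_case_sar(w)
    if wc <= safety_limit:
        break
    sar_budget *= safety_limit / wc             # tighten the budget and redesign
print(f"worst-case SAR {wc:.4f} vs. limit {safety_limit}; "
      f"excitation error {np.linalg.norm(A @ w - target) / np.linalg.norm(target):.3f}")
```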

  13. Predicting Classifier Performance with Limited Training Data: Applications to Computer-Aided Diagnosis in Breast and Prostate Cancer

    PubMed Central

    Basavanhally, Ajay; Viswanath, Satish; Madabhushi, Anant

    2015-01-01

    Clinical trials increasingly employ medical imaging data in conjunction with supervised classifiers, where the latter require large amounts of training data to accurately model the system. Yet, a classifier selected at the start of the trial based on smaller and more accessible datasets may yield inaccurate and unstable classification performance. In this paper, we aim to address two common concerns in classifier selection for clinical trials: (1) predicting expected classifier performance for large datasets based on error rates calculated from smaller datasets and (2) the selection of appropriate classifiers based on expected performance for larger datasets. We present a framework for comparative evaluation of classifiers using only limited amounts of training data by using random repeated sampling (RRS) in conjunction with a cross-validation sampling strategy. Extrapolated error rates are subsequently validated via comparison with leave-one-out cross-validation performed on a larger dataset. The ability to predict error rates as dataset size increases is demonstrated on both synthetic data as well as three different computational imaging tasks: detecting cancerous image regions in prostate histopathology, differentiating high and low grade cancer in breast histopathology, and detecting cancerous metavoxels in prostate magnetic resonance spectroscopy. For each task, the relationships between 3 distinct classifiers (k-nearest neighbor, naive Bayes, Support Vector Machine) are explored. Further quantitative evaluation in terms of interquartile range (IQR) suggests that our approach consistently yields error rates with lower variability (mean IQRs of 0.0070, 0.0127, and 0.0140) than a traditional RRS approach (mean IQRs of 0.0297, 0.0779, and 0.305) that does not employ cross-validation sampling for all three datasets. PMID:25993029
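
    The general flavor of extrapolating classifier error from small samples can be sketched with repeated random subsampling and an inverse power-law learning-curve fit, as below; the classifiers, datasets, and the paper's specific RRS-with-cross-validation protocol are replaced with simple stand-ins.

```python
import numpy as np
from scipy.optimize import curve_fit
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

# Synthetic "large" dataset standing in for the full clinical cohort.
X, y = make_classification(n_samples=2000, n_features=20, n_informative=8, random_state=0)
rng = np.random.default_rng(0)

sizes = np.array([50, 100, 200, 400, 800])
mean_err = []
for n in sizes:
    errs = []
    for _ in range(20):                                      # repeated random sampling
        idx = rng.choice(len(y), size=n, replace=False)
        acc = cross_val_score(KNeighborsClassifier(), X[idx], y[idx], cv=5).mean()
        errs.append(1.0 - acc)
    mean_err.append(np.mean(errs))

# Inverse power-law learning curve: err(n) ~ a * n**(-b) + c, then extrapolate to a larger n.
power_law = lambda n, a, b, c: a * np.power(n, -b) + c
params, _ = curve_fit(power_law, sizes, mean_err, p0=(1.0, 0.5, 0.05), maxfev=10_000)
print("predicted error at n=2000:", power_law(2000, *params))
```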

  14. Spine stereotactic body radiotherapy utilizing cone-beam CT image-guidance with a robotic couch: intrafraction motion analysis accounting for all six degrees of freedom.

    PubMed

    Hyde, Derek; Lochray, Fiona; Korol, Renee; Davidson, Melanie; Wong, C Shun; Ma, Lijun; Sahgal, Arjun

    2012-03-01

    To evaluate the residual setup error and intrafraction motion following kilovoltage cone-beam CT (CBCT) image guidance, for immobilized spine stereotactic body radiotherapy (SBRT) patients, with positioning corrected for in all six degrees of freedom. Analysis is based on 42 consecutive patients (48 thoracic and/or lumbar metastases) treated with a total of 106 fractions and 307 image registrations. Following initial setup, a CBCT was acquired for patient alignment and a pretreatment CBCT taken to verify shifts and determine the residual setup error, followed by a midtreatment and posttreatment CBCT image. For 13 single-fraction SBRT patients, two midtreatment CBCT images were obtained. Initially, a 1.5-mm and 1° tolerance was used to reposition the patient following couch shifts, which was subsequently reduced to 1 mm and 1° after the first 10 patients. Small positioning errors after the initial CBCT setup were observed, with 90% occurring within 1 mm and 97% within 1°. In analyzing the impact of the time interval for verification imaging (10 ± 3 min) and subsequent image acquisitions (17 ± 4 min), the residual setup error was not significantly different (p > 0.05). A significant difference (p = 0.04) in the average three-dimensional intrafraction positional deviations favoring a stricter tolerance in translation (1 mm vs. 1.5 mm) was observed. The absolute intrafraction motion averaged over all patients and all directions along the x, y, and z axes (± SD) was 0.7 ± 0.5 mm and 0.5 ± 0.4 mm for the 1.5 mm and 1 mm tolerance, respectively. Based on a 1-mm and 1° correction threshold, the target was localized to within 1.2 mm and 0.9° with 95% confidence. Near-rigid body immobilization, intrafraction CBCT imaging approximately every 15-20 min, and strict repositioning thresholds in six degrees of freedom yield minimal intrafraction motion, allowing for safe spine SBRT delivery. Copyright © 2012 Elsevier Inc. All rights reserved.

  15. A stochastic dynamic model for human error analysis in nuclear power plants

    NASA Astrophysics Data System (ADS)

    Delgado-Loperena, Dharma

    Nuclear disasters like Three Mile Island and Chernobyl indicate that human performance is a critical safety issue, sending a clear message about the need to include environmental press and competence aspects in research. This investigation was undertaken to serve as a roadmap for studying human behavior through the formulation of a general solution equation. The theoretical model integrates models from two heretofore-dissociated disciplines (behavior specialists and technical specialists) that have historically studied the nature of error and human behavior independently; it incorporates concepts derived from fractal and chaos theory and suggests a re-evaluation of base theory regarding human error. The results of this research were based on a comprehensive analysis of patterns of error, with the omnipresent underlying structure of chaotic systems. The study of patterns led to a dynamic formulation that can serve as a basis for other formulations used to study the consequences of human error. The literature search regarding error yielded insight into the need to include concepts rooted in chaos theory and strange attractors, heretofore unconsidered by mainstream researchers who investigated human error in nuclear power plants or those who employed the ecological model in their work. The study of patterns obtained from a steam generator tube rupture (SGTR) event simulation provided a direct application to aspects of control room operations in nuclear power plants. In doing so, a conceptual foundation based on an understanding of the patterns of human error can be gleaned, leading to the reduction and prevention of undesirable events.

  16. On using summary statistics from an external calibration sample to correct for covariate measurement error.

    PubMed

    Guo, Ying; Little, Roderick J; McConnell, Daniel S

    2012-01-01

    Covariate measurement error is common in epidemiologic studies. Current methods for correcting measurement error with information from external calibration samples are insufficient to provide valid adjusted inferences. We consider the problem of estimating the regression of an outcome Y on covariates X and Z, where Y and Z are observed, X is unobserved, but a variable W that measures X with error is observed. Information about measurement error is provided in an external calibration sample where data on X and W (but not Y and Z) are recorded. We describe a method that uses summary statistics from the calibration sample to create multiple imputations of the missing values of X in the regression sample, so that the regression coefficients of Y on X and Z and associated standard errors can be estimated using simple multiple imputation combining rules, yielding valid statistical inferences under the assumption of a multivariate normal distribution. The proposed method is shown by simulation to provide better inferences than existing methods, namely the naive method, classical calibration, and regression calibration, particularly for correction for bias and achieving nominal confidence levels. We also illustrate our method with an example using linear regression to examine the relation between serum reproductive hormone concentrations and bone mineral density loss in midlife women in the Michigan Bone Health and Metabolism Study. Existing methods fail to adjust appropriately for bias due to measurement error in the regression setting, particularly when measurement error is substantial. The proposed method corrects this deficiency.

  17. Single Versus Multiple Events Error Potential Detection in a BCI-Controlled Car Game With Continuous and Discrete Feedback.

    PubMed

    Kreilinger, Alex; Hiebel, Hannah; Müller-Putz, Gernot R

    2016-03-01

    This work aimed to find and evaluate a new method for detecting errors in continuous brain-computer interface (BCI) applications. Instead of classifying errors on a single-trial basis, the new method was based on multiple events (MEs) analysis to increase the accuracy of error detection. In a BCI-driven car game, based on motor imagery (MI), discrete events were triggered whenever subjects collided with coins and/or barriers. Coins counted as correct events, whereas barriers were errors. This new method, termed ME method, combined and averaged the classification results of single events (SEs) and determined the correctness of MI trials, which consisted of event sequences instead of SEs. The benefit of this method was evaluated in an offline simulation. In an online experiment, the new method was used to detect erroneous MI trials. Such MI trials were discarded and could be repeated by the users. We found that, even with low SE error potential (ErrP) detection rates, feasible accuracies can be achieved when combining MEs to distinguish erroneous from correct MI trials. Online, all subjects reached higher scores with error detection than without, at the cost of longer times needed for completing the game. Findings suggest that ErrP detection may become a reliable tool for monitoring continuous states in BCI applications when combining MEs. This paper demonstrates a novel technique for detecting errors in online continuous BCI applications, which yields promising results even with low single-trial detection rates.
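
    The multiple-events aggregation itself is simple: average the single-event error evidence across the events of a trial and threshold the mean, trading time for accuracy. The simulation below, with an assumed 65% single-event hit rate and six events per trial, illustrates why even a weak single-event detector can yield usable trial-level decisions.

```python
import numpy as np

def trial_is_erroneous(single_event_probs, threshold=0.5):
    """
    Multiple-events (ME) decision: average the single-event error probabilities produced by
    an ErrP classifier over all events in a motor-imagery trial, then threshold the mean.
    A sketch of the aggregation idea only; classifier, features, and threshold are assumptions.
    """
    return float(np.mean(single_event_probs)) > threshold

# Simulated weak single-event detector: 65% chance of a "high" error signal on true errors.
rng = np.random.default_rng(5)
def simulate_trial(is_error_trial, n_events=6):
    base = 0.65 if is_error_trial else 0.35
    return rng.uniform(0, 1, n_events) < base          # booleans used as 0/1 "probabilities"

trials = [(simulate_trial(lbl), lbl) for lbl in rng.integers(0, 2, 5000).astype(bool)]
correct = sum(trial_is_erroneous(p) == lbl for p, lbl in trials)
print(f"trial-level accuracy from weak single-event detections: {correct / len(trials):.2%}")
```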

  18. A multiloop generalization of the circle criterion for stability margin analysis

    NASA Technical Reports Server (NTRS)

    Safonov, M. G.; Athans, M.

    1979-01-01

    In order to provide a theoretical tool suited for characterizing the stability margins of multiloop feedback systems, multiloop input-output stability results generalizing the circle stability criterion are considered. Generalized conic sectors with 'centers' and 'radii' determined by linear dynamical operators are employed to specify the stability margins as a frequency dependent convex set of modeling errors (including nonlinearities, gain variations and phase variations) which the system must be able to tolerate in each feedback loop without instability. The resulting stability criterion gives sufficient conditions for closed loop stability in the presence of frequency dependent modeling errors, even when the modeling errors occur simultaneously in all loops. The stability conditions yield an easily interpreted scalar measure of the amount by which a multiloop system exceeds, or falls short of, its stability margin specifications.

  19. Why a simulation system doesn't match the plant

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sowell, R.

    1998-03-01

    Process simulations, or mathematical models, are widely used by plant engineers and planners to obtain a better understanding of a particular process. These simulations are used to answer questions such as how can feed rate be increased, how can yields be improved, how can energy consumption be decreased, or how should the available independent variables be set to maximize profit? Although current process simulations are greatly improved over those of the '70s and '80s, there are many reasons why a process simulation doesn't match the plant. Understanding these reasons can assist in using simulations to maximum advantage. The reasons simulations do not match the plant may be placed in three main categories: simulation effects or inherent error, sampling and analysis effects or measurement error, and misapplication effects or set-up error.

  20. Methods for determining and processing 3D errors and uncertainties for AFM data analysis

    NASA Astrophysics Data System (ADS)

    Klapetek, P.; Nečas, D.; Campbellová, A.; Yacoot, A.; Koenders, L.

    2011-02-01

    This paper describes the processing of three-dimensional (3D) scanning probe microscopy (SPM) data. It is shown that 3D volumetric calibration error and uncertainty data can be acquired for both metrological atomic force microscope systems and commercial SPMs. These data can be used within nearly all the standard SPM data processing algorithms to determine local values of uncertainty of the scanning system. If the error function of the scanning system is determined for the whole measurement volume of an SPM, it can be converted to yield local dimensional uncertainty values that can in turn be used for evaluation of uncertainties related to the acquired data and for further data processing applications (e.g. area, ACF, roughness) within direct or statistical measurements. These have been implemented in the software package Gwyddion.
