Science.gov

Sample records for absolute average error

  1. Relative errors can cue absolute visuomotor mappings.

    PubMed

    van Dam, Loes C J; Ernst, Marc O

    2015-12-01

    When repeatedly switching between two visuomotor mappings, e.g. in a reaching or pointing task, adaptation tends to speed up over time. That is, when the error in the feedback corresponds to a mapping switch, fast adaptation occurs. Yet, what is learned, the relative error or the absolute mappings? When switching between mappings, errors with a size corresponding to the relative difference between the mappings will occur more often than other large errors. Thus, we could learn to correct more for errors with this familiar size (Error Learning). On the other hand, it has been shown that the human visuomotor system can store several absolute visuomotor mappings (Mapping Learning) and can use associated contextual cues to retrieve them. Thus, when contextual information is present, no error feedback is needed to switch between mappings. Using a rapid pointing task, we investigated how these two types of learning may each contribute when repeatedly switching between mappings in the absence of task-irrelevant contextual cues. After training, we examined how participants changed their behaviour when a single error probe indicated either the often-experienced error (Error Learning) or one of the previously experienced absolute mappings (Mapping Learning). Results were consistent with Mapping Learning despite the relative nature of the error information in the feedback. This shows that errors in the feedback can have a double role in visuomotor behaviour: they drive the general adaptation process by making corrections possible on subsequent movements, as well as serve as contextual cues that can signal a learned absolute mapping. PMID:26280315

  2. On the Error Sources in Absolute Individual Antenna Calibrations

    NASA Astrophysics Data System (ADS)

    Aerts, Wim; Baire, Quentin; Bilich, Andria; Bruyninx, Carine; Legrand, Juliette

    2013-04-01

    field) multi path errors, both during calibration and later on at the station, absolute sub-millimeter positioning with GPS is not (yet) possible. References [1] G. Wübbena, M. Schmitz, G. Boettcher, C. Schumann, "Absolute GNSS Antenna Calibration with a Robot: Repeatability of Phase Variations, Calibration of GLONASS and Determination of Carrier-to-Noise Pattern", International GNSS Service: Analysis Center workshop, 8-12 May 2006, Darmstadt, Germany. [2] P. Zeimetz, H. Kuhlmann, "On the Accuracy of Absolute GNSS Antenna Calibration and the Conception of a New Anechoic Chamber", FIG Working Week 2008, 14-19 June 2008, Stockholm, Sweden. [3] P. Zeimetz, H. Kuhlmann, L. Wanninger, V. Frevert, S. Schön and K. Strauch, "Ringversuch 2009", 7th GNSS-Antennen-Workshop, 19-20 March 2009, Dresden, Germany.

  3. Estimates of Random Error in Satellite Rainfall Averages

    NASA Technical Reports Server (NTRS)

    Bell, Thomas L.; Kundu, Prasun K.

    2003-01-01

    Satellite rain estimates are most accurate when obtained with microwave instruments on low earth-orbiting satellites. Estimation of daily or monthly total areal rainfall, typically of interest to hydrologists and climate researchers, is made difficult, however, by the relatively poor coverage generally available from such satellites. Intermittent coverage by the satellites leads to random "sampling error" in the satellite products. The inexact information about hydrometeors inferred from microwave data also leads to random "retrieval errors" in the rain estimates. In this talk we will review approaches to quantitative estimation of the sampling error in area/time averages of satellite rain retrievals using ground-based observations, and methods of estimating rms random error, both sampling and retrieval, in averages using satellite measurements themselves.
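The sampling-error idea can be illustrated with a toy Monte Carlo experiment (the rain model, visit schedule, and all parameters below are illustrative assumptions, not the authors' estimators):

```python
import random

def rms_sampling_error(n_days=64, samples_per_day=24, visits_per_day=2,
                       trials=2000, seed=0):
    """Monte Carlo estimate of the rms sampling error made when an
    intermittently visiting satellite averages a rain time series."""
    rng = random.Random(seed)
    errors = []
    for _ in range(trials):
        # Toy rain record: mostly dry hours, occasional exponential bursts.
        rain = [rng.expovariate(1.0) if rng.random() < 0.1 else 0.0
                for _ in range(n_days * samples_per_day)]
        true_mean = sum(rain) / len(rain)
        # The satellite sees only a few evenly spaced snapshots per day.
        step = samples_per_day // visits_per_day
        seen = rain[::step]
        errors.append(sum(seen) / len(seen) - true_mean)
    return (sum(e * e for e in errors) / trials) ** 0.5

sparse = rms_sampling_error(visits_per_day=2)   # 2 overpasses per day
dense = rms_sampling_error(visits_per_day=8)    # 8 overpasses per day
# Denser coverage shrinks the random sampling error.
```

The same subsampling comparison, applied to ground-based gauge or radar records, is the essence of the validation approach sketched in the abstract.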

  4. On various definitions of shadowing with average error in tracing

    NASA Astrophysics Data System (ADS)

    Wu, Xinxing; Oprocha, Piotr; Chen, Guanrong

    2016-07-01

    When computing a trajectory of a dynamical system, the influence of noise can lead to large perturbations which appear, however, only with small probability. When calculating approximate trajectories, it therefore makes sense to consider errors that are small on average, since controlling them in each iteration may be impossible. The demand to relate approximate trajectories to genuine orbits leads to various notions of shadowing (on average), which we consider in this paper. As the main tools in our studies we provide a few equivalent characterizations of the average shadowing property, which also partly apply to other notions of shadowing. We prove that almost specification on the whole space induces this property on the measure center, which in turn implies the average shadowing property. Finally, we study connections among sensitivity, transitivity, equicontinuity and (average) shadowing.
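For orientation, one standard formulation of the average shadowing property (following common usage in the literature; the paper's precise definitions may differ in detail) is:

```latex
% A sequence $\{x_i\}_{i\ge 0}$ is a $\delta$-average-pseudo-orbit of $f$ if
\limsup_{n\to\infty} \frac{1}{n}\sum_{i=0}^{n-1} d\bigl(f(x_i), x_{i+1}\bigr) < \delta .
% The map $f$ has the average shadowing property if for every
% $\varepsilon > 0$ there is $\delta > 0$ such that every
% $\delta$-average-pseudo-orbit $\{x_i\}$ is $\varepsilon$-shadowed in
% average by some point $z$:
\limsup_{n\to\infty} \frac{1}{n}\sum_{i=0}^{n-1} d\bigl(f^{i}(z), x_i\bigr) < \varepsilon .
```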

  5. Bounding quantum gate error rate based on reported average fidelity

    NASA Astrophysics Data System (ADS)

    Sanders, Yuval R.; Wallman, Joel J.; Sanders, Barry C.

    2016-01-01

    Remarkable experimental advances in quantum computing are exemplified by recent announcements of impressive average gate fidelities exceeding 99.9% for single-qubit gates and 99% for two-qubit gates. Although these high numbers engender optimism that fault-tolerant quantum computing is within reach, the connection of average gate fidelity with fault-tolerance requirements is not direct. Here we use reported average gate fidelity to determine an upper bound on the quantum-gate error rate, which is the appropriate metric for assessing progress towards fault-tolerant quantum computation, and we demonstrate that this bound is asymptotically tight for general noise. Although this bound is unlikely to be saturated by experimental noise, we demonstrate using explicit examples that the bound indicates a realistic deviation between the true error rate and the reported average fidelity. We introduce the Pauli distance as a measure of this deviation, and we show that knowledge of the Pauli distance enables tighter estimates of the error rate of quantum gates.
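The gap between average fidelity and worst-case error rate can be illustrated numerically. The conversions below are standard textbook relations, and the square-root scaling is only the qualitative behavior for coherent noise; the constants are not the paper's exact bound:

```python
import math

def infidelity_conversions(f_avg: float, d: int = 2):
    """Illustrative conversions between a reported average gate fidelity
    and error-rate-like quantities for a d-dimensional system."""
    r_avg = 1.0 - f_avg                  # average gate infidelity
    r_proc = (d + 1) / d * r_avg         # process (entanglement) infidelity
    # For purely stochastic (Pauli-like) noise the worst-case error rate
    # tracks the infidelity roughly linearly; for coherent noise it can
    # scale like the square root of the infidelity, which is far larger.
    coherent_scale = d * math.sqrt(r_proc)
    return r_avg, r_proc, coherent_scale

r_avg, r_proc, worst = infidelity_conversions(0.999, d=2)
# A 99.9% single-qubit fidelity leaves r_avg = 1e-3, yet the
# square-root scaling permits worst-case error rates of a few percent.
```

This order-of-magnitude gap is why the paper argues that average fidelity alone cannot certify progress toward fault-tolerance thresholds.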

  6. Bounding quantum gate error rate based on reported average fidelity

    NASA Astrophysics Data System (ADS)

    Sanders, Yuval; Wallman, Joel; Sanders, Barry

    Remarkable experimental advances in quantum computing are exemplified by recent announcements of impressive average gate fidelities exceeding 99.9% for single-qubit gates and 99% for two-qubit gates. Although these high numbers engender optimism that fault-tolerant quantum computing is within reach, the connection of average gate fidelity with fault-tolerance requirements is not direct. Here we use reported average gate fidelity to determine an upper bound on the quantum-gate error rate, which is the appropriate metric for assessing progress towards fault-tolerant quantum computation, and we demonstrate that this bound is asymptotically tight for general noise. Although this bound is unlikely to be saturated by experimental noise, we demonstrate using explicit examples that the bound indicates a realistic deviation between the true error rate and the reported average fidelity. We introduce the Pauli-distance as a measure of this deviation, and we show that knowledge of the Pauli-distance enables tighter estimates of the error rate of quantum gates.

  7. Absolute vs. relative error characterization of electromagnetic tracking accuracy

    NASA Astrophysics Data System (ADS)

    Matinfar, Mohammad; Narayanasamy, Ganesh; Gutierrez, Luis; Chan, Raymond; Jain, Ameet

    2010-02-01

    Electromagnetic (EM) tracking systems are often used for real time navigation of medical tools in an Image Guided Therapy (IGT) system. They are specifically advantageous when the medical device requires tracking within the body of a patient where line of sight constraints prevent the use of conventional optical tracking. EM tracking systems are however very sensitive to electromagnetic field distortions. These distortions, arising from changes in the electromagnetic environment due to the presence of conductive ferromagnetic surgical tools or other medical equipment, limit the accuracy of EM tracking, in some cases potentially rendering tracking data unusable. We present a mapping method for the operating region over which EM tracking sensors are used, allowing for characterization of measurement errors, in turn providing physicians with visual feedback about measurement confidence or reliability of localization estimates. In this instance, we employ a calibration phantom to assess distortion within the operating field of the EM tracker and to display in real time the distribution of measurement errors, as well as the location and extent of the field associated with minimal spatial distortion. The accuracy is assessed relative to successive measurements. Error is computed for a reference point and consecutive measurement errors are displayed relative to the reference in order to characterize the accuracy in near-real-time. In an initial set-up phase, the phantom geometry is calibrated by registering the data from a multitude of EM sensors in a non-ferromagnetic ("clean") EM environment. The registration results in the locations of sensors with respect to each other and defines the geometry of the sensors in the phantom. In a measurement phase, the position and orientation data from all sensors are compared with the known geometry of the sensor spacing, and localization errors (displacement and orientation) are computed. Based on error thresholds provided by the
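The measurement-phase comparison against the calibrated phantom geometry can be sketched as follows (the three-sensor layout and coordinates are invented for illustration; the actual phantom and error thresholds are not fully described above):

```python
import math

def pairwise_distance_errors(calibrated, measured):
    """Compare measured sensor positions against the phantom's calibrated
    geometry via inter-sensor distances, which are invariant to the
    phantom's pose in the tracker field."""
    def dist(a, b):
        return math.sqrt(sum((ai - bi) ** 2 for ai, bi in zip(a, b)))
    errors = []
    n = len(calibrated)
    for i in range(n):
        for j in range(i + 1, n):
            errors.append(abs(dist(measured[i], measured[j]) -
                              dist(calibrated[i], calibrated[j])))
    return errors

# Geometry registered in the non-ferromagnetic ("clean") environment (mm)...
clean = [(0.0, 0.0, 0.0), (50.0, 0.0, 0.0), (0.0, 50.0, 0.0)]
# ...and one frame of tracker readings taken near a field distortion.
frame = [(0.2, -0.1, 0.0), (50.9, 0.1, 0.3), (0.1, 49.5, -0.2)]
errs = pairwise_distance_errors(clean, frame)
worst_mm = max(errs)  # candidate value to test against an error threshold
```

In a real system this per-frame error map would be rendered in near-real-time to give the physician the measurement-confidence feedback described above.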

  8. Measurements of Aperture Averaging on Bit-Error-Rate

    NASA Technical Reports Server (NTRS)

    Bastin, Gary L.; Andrews, Larry C.; Phillips, Ronald L.; Nelson, Richard A.; Ferrell, Bobby A.; Borbath, Michael R.; Galus, Darren J.; Chin, Peter G.; Harris, William G.; Marin, Jose A.; Burdge, Geoffrey L.; Wayne, David; Pescatore, Robert

    2005-01-01

    We report on measurements made at the Shuttle Landing Facility (SLF) runway at Kennedy Space Center of receiver aperture averaging effects on a propagating optical Gaussian beam wave over a propagation path of 1,000 m. A commercially available instrument with both transmit and receive apertures was used to transmit a modulated laser beam operating at 1550 nm through a transmit aperture of 2.54 cm. An identical model of the same instrument was used as a receiver with a single aperture that was varied in size up to 20 cm to measure the effect of receiver aperture averaging on Bit Error Rate. Simultaneous measurements were also made with a scintillometer instrument and local weather station instruments to characterize atmospheric conditions along the propagation path during the experiments.

  9. Absolute Plate Velocities from Seismic Anisotropy: Importance of Correlated Errors

    NASA Astrophysics Data System (ADS)

    Gordon, R. G.; Zheng, L.; Kreemer, C.

    2014-12-01

    The orientation of seismic anisotropy inferred beneath the interiors of plates may provide a means to estimate the motions of the plate relative to the deeper mantle. Here we analyze a global set of shear-wave splitting data to estimate plate motions and to better understand the dispersion of the data, correlations in the errors, and their relation to plate speed. The errors in plate motion azimuths inferred from shear-wave splitting beneath any one tectonic plate are shown to be correlated with the errors of other azimuths from the same plate. To account for these correlations, we adopt a two-tier analysis: First, find the pole of rotation and confidence limits for each plate individually. Second, solve for the best fit to these poles while constraining relative plate angular velocities to consistency with the MORVEL relative plate angular velocities. Our preferred set of angular velocities, SKS-MORVEL, is determined from the poles from eight plates weighted proportionally to the root-mean-square velocity of each plate. SKS-MORVEL indicates that eight plates (Amur, Antarctica, Caribbean, Eurasia, Lwandle, Somalia, Sundaland, and Yangtze) have angular velocities that differ insignificantly from zero. The net rotation of the lithosphere is 0.25±0.11° Ma-1 (95% confidence limits) right-handed about 57.1°S, 68.6°E. The within-plate dispersion of seismic anisotropy for oceanic lithosphere (σ=19.2°) differs insignificantly from that for continental lithosphere (σ=21.6°). The between-plate dispersion, however, is significantly smaller for oceanic lithosphere (σ=7.4°) than for continental lithosphere (σ=14.7°). Two of the slowest-moving plates, Antarctica (vRMS=4 mm a-1, σ=29°) and Eurasia (vRMS=3 mm a-1, σ=33°), have two of the largest within-plate dispersions, which may indicate that a plate must move faster than ≈5 mm a-1 to result in seismic anisotropy useful for estimating plate motion.

  10. Comparing Absolute Error with Squared Error for Evaluating Empirical Models of Continuous Variables: Compositions, Implications, and Consequences

    NASA Astrophysics Data System (ADS)

    Gao, J.

    2014-12-01

    Reducing modeling error is often a major concern of empirical geophysical models. However, modeling errors can be defined in different ways: When the response variable is continuous, the most commonly used metrics are squared (SQ) and absolute (ABS) errors. For most applications, ABS error is the more natural, but SQ error is mathematically more tractable, so is often used as a substitute with little scientific justification. Existing literature has not thoroughly investigated the implications of using SQ error in place of ABS error, especially not geospatially. This study compares the two metrics through the lens of bias-variance decomposition (BVD). BVD breaks down the expected modeling error of each model evaluation point into bias (systematic error), variance (model sensitivity), and noise (observation instability). It offers a way to probe the composition of various error metrics. I analytically derived the BVD of ABS error and compared it with the well-known SQ error BVD, and found that not only the two metrics measure the characteristics of the probability distributions of modeling errors differently, but also the effects of these characteristics on the overall expected error are different. Most notably, under SQ error all bias, variance, and noise increase expected error, while under ABS error certain parts of the error components reduce expected error. Since manipulating these subtractive terms is a legitimate way to reduce expected modeling error, SQ error can never capture the complete story embedded in ABS error. I then empirically compared the two metrics with a supervised remote sensing model for mapping surface imperviousness. Pair-wise spatially-explicit comparison for each error component showed that SQ error overstates all error components in comparison to ABS error, especially variance-related terms. Hence, substituting ABS error with SQ error makes model performance appear worse than it actually is, and the analyst would more likely accept a
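The practical difference between the two metrics is easy to demonstrate (a generic illustration, not the paper's bias-variance decomposition): two error samples with identical ABS error can have very different SQ error, because squaring weights the spread of the errors.

```python
def mae(errors):
    """Mean absolute (ABS) error."""
    return sum(abs(e) for e in errors) / len(errors)

def rmse(errors):
    """Root of the mean squared (SQ) error."""
    return (sum(e * e for e in errors) / len(errors)) ** 0.5

errs_uniform = [1.0, 1.0, 1.0, 1.0]   # steady, moderate errors
errs_outlier = [0.0, 0.0, 0.0, 4.0]   # same total error, one large miss
# mae() is identical for both samples, but rmse() doubles for the outlier
# sample -- SQ error emphasizes variance-related terms, as the study finds.
```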

  11. Absolute plate velocities from seismic anisotropy: Importance of correlated errors

    NASA Astrophysics Data System (ADS)

    Zheng, Lin; Gordon, Richard G.; Kreemer, Corné

    2014-09-01

    The errors in plate motion azimuths inferred from shear wave splitting beneath any one tectonic plate are shown to be correlated with the errors of other azimuths from the same plate. To account for these correlations, we adopt a two-tier analysis: First, find the pole of rotation and confidence limits for each plate individually. Second, solve for the best fit to these poles while constraining relative plate angular velocities to consistency with the MORVEL relative plate angular velocities. Our preferred set of angular velocities, SKS-MORVEL, is determined from the poles from eight plates weighted proportionally to the root-mean-square velocity of each plate. SKS-MORVEL indicates that eight plates (Amur, Antarctica, Caribbean, Eurasia, Lwandle, Somalia, Sundaland, and Yangtze) have angular velocities that differ insignificantly from zero. The net rotation of the lithosphere is 0.25 ± 0.11° Ma-1 (95% confidence limits) right handed about 57.1°S, 68.6°E. The within-plate dispersion of seismic anisotropy for oceanic lithosphere (σ = 19.2°) differs insignificantly from that for continental lithosphere (σ = 21.6°). The between-plate dispersion, however, is significantly smaller for oceanic lithosphere (σ = 7.4°) than for continental lithosphere (σ = 14.7°). Two of the slowest-moving plates, Antarctica (vRMS = 4 mm a-1, σ = 29°) and Eurasia (vRMS = 3 mm a-1, σ = 33°), have two of the largest within-plate dispersions, which may indicate that a plate must move faster than ≈ 5 mm a-1 to result in seismic anisotropy useful for estimating plate motion. The tendency of observed azimuths on the Arabia plate to be counterclockwise of plate motion may provide information about the direction and amplitude of superposed asthenospheric flow or about anisotropy in the lithospheric mantle.

  12. Students' Mathematical Work on Absolute Value: Focusing on Conceptions, Errors and Obstacles

    ERIC Educational Resources Information Center

    Elia, Iliada; Özel, Serkan; Gagatsis, Athanasios; Panaoura, Areti; Özel, Zeynep Ebrar Yetkiner

    2016-01-01

    This study investigates students' conceptions of absolute value (AV), their performance in various items on AV, their errors in these items and the relationships between students' conceptions and their performance and errors. The Mathematical Working Space (MWS) is used as a framework for studying students' mathematical work on AV and the…

  13. Oral Reading Errors of Average and Superior Reading Ability Children.

    ERIC Educational Resources Information Center

    Geoffrion, Leo David

    Oral reading samples were gathered from a group of twenty normal boys from the fourth through sixth grades. All reading errors were coded and classified using a modified version of the taxonomies of Goodman and Burke. Through cluster analysis two distinct error patterns were found. One group consisted of students whose performance was limited…

  14. Assessing Suturing Skills in a Self-Guided Learning Setting: Absolute Symmetry Error

    ERIC Educational Resources Information Center

    Brydges, Ryan; Carnahan, Heather; Dubrowski, Adam

    2009-01-01

    Directed self-guidance, whereby trainees independently practice a skill-set in a structured setting, may be an effective technique for novice training. Currently, however, most evaluation methods require an expert to be present during practice. The study aim was to determine if absolute symmetry error, a clinically important measure that can be…

  15. IMPROVEMENT OF SMVGEAR II ON VECTOR AND SCALAR MACHINES THROUGH ABSOLUTE ERROR TOLERANCE CONTROL (R823186)

    EPA Science Inventory

    The computer speed of SMVGEAR II was improved markedly on scalar and vector machines with relatively little loss in accuracy. The improvement was due to a method of frequently recalculating the absolute error tolerance instead of keeping it constant for a given set of chemistry. ...

  16. Relative and Absolute Error Control in a Finite-Difference Method Solution of Poisson's Equation

    ERIC Educational Resources Information Center

    Prentice, J. S. C.

    2012-01-01

    An algorithm for error control (absolute and relative) in the five-point finite-difference method applied to Poisson's equation is described. The algorithm is based on discretization of the domain of the problem by means of three rectilinear grids, each of different resolution. We discuss some hardware limitations associated with the algorithm,…

  17. Demonstrating the Error Budget for the Climate Absolute Radiance and Refractivity Observatory Through Solar Irradiance Measurements

    NASA Technical Reports Server (NTRS)

    Thome, Kurtis; McCorkel, Joel; McAndrew, Brendan

    2016-01-01

    The Climate Absolute Radiance and Refractivity Observatory (CLARREO) mission addresses the need to observe high-accuracy, long-term climate change trends and to use decadal change observations as a method to determine the accuracy of climate change projections. A CLARREO objective is to improve the accuracy of SI-traceable, absolute calibration at infrared and reflected solar wavelengths to reach on-orbit accuracies required to allow climate change observations to survive data gaps and observe climate change at the limit of natural variability. Such an effort will also demonstrate National Institute of Standards and Technology (NIST) approaches for use in future spaceborne instruments. The current work describes the results of laboratory and field measurements with the Solar, Lunar for Absolute Reflectance Imaging Spectroradiometer (SOLARIS) which is the calibration demonstration system (CDS) for the reflected solar portion of CLARREO. SOLARIS allows testing and evaluation of calibration approaches, alternate design and/or implementation approaches and components for the CLARREO mission. SOLARIS also provides a test-bed for detector technologies, non-linearity determination and uncertainties, and application of future technology developments and suggested spacecraft instrument design modifications. Results of laboratory calibration measurements are provided to demonstrate key assumptions about instrument behavior that are needed to achieve CLARREO's climate measurement requirements. Absolute radiometric response is determined using laser-based calibration sources and applied to direct solar views for comparison with accepted solar irradiance models to demonstrate accuracy values giving confidence in the error budget for the CLARREO reflectance retrieval.

  18. Demonstrating the error budget for the Climate Absolute Radiance and Refractivity Observatory through solar irradiance measurements

    NASA Astrophysics Data System (ADS)

    Thome, Kurtis; McCorkel, Joel; McAndrew, Brendan

    2015-09-01

    The Climate Absolute Radiance and Refractivity Observatory (CLARREO) mission addresses the need to observe high-accuracy, long-term climate change trends and to use decadal change observations as a method to determine the accuracy of climate change projections. A CLARREO objective is to improve the accuracy of SI-traceable, absolute calibration at infrared and reflected solar wavelengths to reach on-orbit accuracies required to allow climate change observations to survive data gaps and observe climate change at the limit of natural variability. Such an effort will also demonstrate National Institute of Standards and Technology (NIST) approaches for use in future spaceborne instruments. The current work describes the results of laboratory and field measurements with the Solar, Lunar for Absolute Reflectance Imaging Spectroradiometer (SOLARIS) which is the calibration demonstration system (CDS) for the reflected solar portion of CLARREO. SOLARIS allows testing and evaluation of calibration approaches, alternate design and/or implementation approaches and components for the CLARREO mission. SOLARIS also provides a testbed for detector technologies, non-linearity determination and uncertainties, and application of future technology developments and suggested spacecraft instrument design modifications. Results of laboratory calibration measurements are provided to demonstrate key assumptions about instrument behavior that are needed to achieve CLARREO's climate measurement requirements. Absolute radiometric response is determined using laser-based calibration sources and applied to direct solar views for comparison with accepted solar irradiance models to demonstrate accuracy values giving confidence in the error budget for the CLARREO reflectance retrieval.

  19. Generalized approach for using unbiased symmetric metrics with negative values: normalized mean bias factor and normalized mean absolute error factor

    EPA Science Inventory

    Unbiased symmetric metrics provide a useful measure to quickly compare two datasets, with similar interpretations for both under and overestimations. Two examples include the normalized mean bias factor and normalized mean absolute error factor. However, the original formulations...
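One widely cited formulation of these two metrics is sketched below (assumed here from the earlier literature; the record above concerns a generalization that also handles negative values, which this sketch does not attempt):

```python
def nmbf(model, obs):
    """Normalized mean bias factor: positive branch when the model
    overestimates on average, negative branch when it underestimates."""
    mbar = sum(model) / len(model)
    obar = sum(obs) / len(obs)
    return mbar / obar - 1.0 if mbar >= obar else 1.0 - obar / mbar

def nmaef(model, obs):
    """Normalized mean absolute error factor, with the same branching."""
    mbar = sum(model) / len(model)
    obar = sum(obs) / len(obs)
    abs_sum = sum(abs(m - o) for m, o in zip(model, obs))
    return abs_sum / sum(obs) if mbar >= obar else abs_sum / sum(model)

# Symmetry: a factor-of-two overestimate and a factor-of-two underestimate
# give NMBF values of equal magnitude and opposite sign.
over = nmbf([2.0, 2.0], [1.0, 1.0])
under = nmbf([1.0, 1.0], [2.0, 2.0])
```

Both branches divide by a mean of the data, which is exactly what breaks down when that mean can be negative, hence the generalization the record describes.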

  20. Effective connectivity associated with auditory error detection in musicians with absolute pitch

    PubMed Central

    Parkinson, Amy L.; Behroozmand, Roozbeh; Ibrahim, Nadine; Korzyukov, Oleg; Larson, Charles R.; Robin, Donald A.

    2014-01-01

    It is advantageous to study a wide range of vocal abilities in order to fully understand how vocal control measures vary across the full spectrum. Individuals with absolute pitch (AP) are able to assign a verbal label to musical notes and have enhanced abilities in pitch identification without reliance on an external referent. In this study we used dynamic causal modeling (DCM) to model effective connectivity of ERP responses to pitch perturbation in voice auditory feedback in musicians with relative pitch (RP), AP, and non-musician controls. We identified a network comprising left and right hemisphere superior temporal gyrus (STG), primary motor cortex (M1), and premotor cortex (PM). We specified nine models and compared two main factors examining various combinations of STG involvement in the feedback pitch error detection/correction process. Our results suggest that modulation of left to right STG connections is important in the identification of self-voice error and sensory motor integration in AP musicians. We also identify reduced connectivity of left hemisphere PM to STG connections in AP and RP groups during the error detection and correction process relative to non-musicians. We suggest that this suppression may allow for enhanced connectivity relating to pitch identification in the right hemisphere in those with more precise pitch matching abilities. Musicians with enhanced pitch identification abilities likely have an improved auditory error detection and correction system involving connectivity of STG regions. Our findings here also suggest that individuals with AP are more adept at using feedback related to pitch from the right hemisphere. PMID:24634644

  1. Effects of error covariance structure on estimation of model averaging weights and predictive performance

    USGS Publications Warehouse

    Lu, Dan; Ye, Ming; Meyer, Philip D.; Curtis, Gary P.; Shi, Xiaoqing; Niu, Xu-Feng; Yabusaki, Steve B.

    2013-01-01

    When conducting model averaging for assessing groundwater conceptual model uncertainty, the averaging weights are often evaluated using model selection criteria such as AIC, AICc, BIC, and KIC (Akaike Information Criterion, Corrected Akaike Information Criterion, Bayesian Information Criterion, and Kashyap Information Criterion, respectively). However, this method often leads to an unrealistic situation in which the best model receives overwhelmingly large averaging weight (close to 100%), which cannot be justified by available data and knowledge. It was found in this study that this problem was caused by using the covariance matrix, CE, of measurement errors for estimating the negative log likelihood function common to all the model selection criteria. This problem can be resolved by using the covariance matrix, Cek, of total errors (including model errors and measurement errors) to account for the correlation between the total errors. An iterative two-stage method was developed in the context of maximum likelihood inverse modeling to iteratively infer the unknown Cek from the residuals during model calibration. The inferred Cek was then used in the evaluation of model selection criteria and model averaging weights. While this method was limited to serial data using time series techniques in this study, it can be extended to spatial data using geostatistical techniques. The method was first evaluated in a synthetic study and then applied to an experimental study, in which alternative surface complexation models were developed to simulate column experiments of uranium reactive transport. It was found that the total errors of the alternative models were temporally correlated due to the model errors. The iterative two-stage method using Cek resolved the problem that the best model receives 100% model averaging weight, and the resulting model averaging weights were supported by the calibration results and physical understanding of the alternative models. Using Cek
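The weight evaluation that produces the near-100% pathology can be sketched generically (the criterion values below are invented; the exp(-Δ/2) form is the standard conversion from any of AIC/AICc/BIC/KIC values to averaging weights):

```python
import math

def averaging_weights(criteria):
    """Model averaging weights from information-criterion values,
    w_i proportional to exp(-0.5 * (IC_i - IC_best))."""
    best = min(criteria)
    raw = [math.exp(-0.5 * (c - best)) for c in criteria]
    total = sum(raw)
    return [r / total for r in raw]

# Even a modest spread in criterion values concentrates nearly all weight
# on the best model -- the unrealistic situation the abstract describes.
w = averaging_weights([100.0, 110.0, 118.0])
```

Because the criterion enters through an exponential, any bias in the likelihood term, such as ignoring correlated model errors, is amplified into extreme weights, which motivates the study's corrected error covariance.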

  2. Spatial averaging errors in creating hemispherical reflectance (albedo) maps from directional reflectance data

    NASA Technical Reports Server (NTRS)

    Kimes, D. S.; Kerber, A. G.; Sellers, P. J.

    1993-01-01

    Spatial averaging errors which may occur when creating hemispherical reflectance (albedo) maps for different cover types using a direct nadir technique to estimate hemispherical reflectance are assessed by comparing the results with those obtained with a knowledge-based system called VEG (Kimes et al., 1991, 1992). It was found that the hemispherical reflectance errors obtained using VEG are much smaller than those from the direct nadir technique, depending on conditions. Suggestions are made concerning sampling and averaging strategies for creating hemispherical reflectance maps for photosynthetic, carbon cycle, and climate change studies.

  3. High-order averaging schemes with error bounds for thermodynamical properties calculations by molecular dynamics simulations.

    PubMed

    Cancès, Eric; Castella, François; Chartier, Philippe; Faou, Erwan; Le Bris, Claude; Legoll, Frédéric; Turinici, Gabriel

    2004-12-01

    We introduce high-order formulas for the computation of statistical averages based on the long-time simulation of molecular dynamics trajectories. In some cases, this allows us to significantly improve the convergence rate of time averages toward ensemble averages. We provide some numerical examples that show the efficiency of our scheme. When trajectories are approximated using symplectic integration schemes (such as velocity Verlet), we give some error bounds that allow one to fix the parameters of the computation in order to reach a given desired accuracy in the most efficient manner. PMID:15549912
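A minimal sketch of plain (first-order) time averaging along a velocity Verlet trajectory, the baseline that higher-order averaging schemes improve on; the oscillator, step size, and run length below are illustrative choices:

```python
def velocity_verlet_time_average(x=1.0, v=0.0, dt=0.01, steps=200000):
    """Velocity Verlet for a unit-mass harmonic oscillator (V = x^2/2),
    accumulating the running time average of the potential energy.
    The virial theorem gives <V> = E/2 along a trajectory, which the
    time average should approach for long runs."""
    def force(x):
        return -x
    energy0 = 0.5 * v * v + 0.5 * x * x  # conserved (up to Verlet error)
    acc = 0.0
    f = force(x)
    for _ in range(steps):
        x += v * dt + 0.5 * f * dt * dt   # position update
        f_new = force(x)
        v += 0.5 * (f + f_new) * dt       # velocity update
        f = f_new
        acc += 0.5 * x * x                # accumulate V(x)
    return acc / steps, energy0

avg_V, E = velocity_verlet_time_average()
# avg_V should be close to E / 2 for a long trajectory.
```

The paper's contribution is to replace the uniform accumulation step with higher-order weightings that converge faster to the ensemble average; this sketch shows only the zeroth-order scheme being improved.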

  4. High-order averaging schemes with error bounds for thermodynamical properties calculations by molecular dynamics simulations

    NASA Astrophysics Data System (ADS)

    Cancès, Eric; Castella, François; Chartier, Philippe; Faou, Erwan; Le Bris, Claude; Legoll, Frédéric; Turinici, Gabriel

    2004-12-01

    We introduce high-order formulas for the computation of statistical averages based on the long-time simulation of molecular dynamics trajectories. In some cases, this allows us to significantly improve the convergence rate of time averages toward ensemble averages. We provide some numerical examples that show the efficiency of our scheme. When trajectories are approximated using symplectic integration schemes (such as velocity Verlet), we give some error bounds that allow one to fix the parameters of the computation in order to reach a given desired accuracy in the most efficient manner.

  5. Preliminary Error Budget for the Reflected Solar Instrument for the Climate Absolute Radiance and Refractivity Observatory

    NASA Technical Reports Server (NTRS)

    Thome, Kurtis; Gubbels, Timothy; Barnes, Robert

    2011-01-01

    The Climate Absolute Radiance and Refractivity Observatory (CLARREO) plans to observe climate change trends over decadal time scales to determine the accuracy of climate projections. The project relies on spaceborne earth observations of SI-traceable variables sensitive to key decadal change parameters. The mission includes a reflected solar instrument retrieving at-sensor reflectance over the 320 to 2300 nm spectral range with 500-m spatial resolution and 100-km swath. Reflectance is obtained from the ratio of measurements of the earth's surface to those while viewing the sun relying on a calibration approach that retrieves reflectance with uncertainties less than 0.3%. The calibration is predicated on heritage hardware, reduction of sensor complexity, adherence to detector-based calibration standards, and an ability to simulate in the laboratory on-orbit sources in both size and brightness to provide the basis of a transfer to orbit of the laboratory calibration including a link to absolute solar irradiance measurements. The Climate Absolute Radiance and Refractivity Observatory (CLARREO) mission addresses the need to observe high-accuracy, long-term climate change trends and to use decadal change observations as the most critical method to determine the accuracy of climate change projections such as those in the IPCC Report. A rigorously known accuracy of both decadal change observations as well as climate projections is critical in order to enable sound policy decisions. The CLARREO Project will implement a spaceborne earth observation mission designed to provide rigorous SI traceable observations (i.e., radiance, reflectance, and refractivity) that are sensitive to a wide range of key decadal change variables, including: 1) Surface temperature and atmospheric temperature profile 2) Atmospheric water vapor profile 3) Far infrared water vapor greenhouse 4) Aerosol properties and anthropogenic aerosol direct radiative forcing 5) Total and spectral solar

  6. Error Budget for a Calibration Demonstration System for the Reflected Solar Instrument for the Climate Absolute Radiance and Refractivity Observatory

    NASA Technical Reports Server (NTRS)

    Thome, Kurtis; McCorkel, Joel; McAndrew, Brendan

    2013-01-01

    A goal of the Climate Absolute Radiance and Refractivity Observatory (CLARREO) mission is to observe high-accuracy, long-term climate change trends over decadal time scales. The key to such a goal is improving the accuracy of SI-traceable absolute calibration across infrared and reflected solar wavelengths, allowing climate change to be separated from the limit of natural variability. The advances required to reach on-orbit absolute accuracy that allows climate change observations to survive data gaps exist at NIST in the laboratory, but it remains to be demonstrated that these advances can move successfully from NIST to NASA and/or instrument vendor capabilities for spaceborne instruments. The current work describes the radiometric calibration error budget for the Solar, Lunar for Absolute Reflectance Imaging Spectroradiometer (SOLARIS), which is the calibration demonstration system (CDS) for the reflected solar portion of CLARREO. The goal of the CDS is to allow the testing and evaluation of calibration approaches, alternate design and/or implementation approaches, and components for the CLARREO mission. SOLARIS also provides a test-bed for detector technologies, non-linearity determination and uncertainties, and application of future technology developments and suggested spacecraft instrument design modifications. The resulting SI-traceable error budget for reflectance retrieval using solar irradiance as a reference, together with methods for laboratory-based absolute calibration suitable for climate-quality data collections, is given. Key components in the error budget are geometry differences between the solar and earth views, knowledge of attenuator behavior when viewing the sun, and sensor behavior such as detector linearity and noise behavior. Methods for demonstrating this error budget are also presented.

  7. Signal window minimum average error algorithm for multi-phase level computer-generated holograms

    NASA Astrophysics Data System (ADS)

    El Bouz, Marwa; Heggarty, Kevin

    2000-06-01

    This paper extends the article "Signal window minimum average error algorithm for computer-generated holograms" (JOSA A 1998) to multi-phase level CGHs. We show that using the same rule for calculating the complex error diffusion weights, iterative-algorithm-like low-error signal windows can be obtained for any window shape or position (on- or off-axis) and any number of CGH phase levels. Important algorithm parameters such as amplitude normalisation level and phase freedom diffusers are described and investigated to optimize the algorithm. We show that, combined with a suitable diffuser, the algorithm makes feasible the calculation of high performance CGHs far larger than currently practical with iterative algorithms yet now realisable with modern fabrication techniques. Preliminary experimental optical reconstructions are presented.

  8. Statistical theory for estimating sampling errors of regional radiation averages based on satellite measurements

    NASA Technical Reports Server (NTRS)

    Smith, G. L.; Bess, T. D.; Minnis, P.

    1983-01-01

    The processes which determine the weather and climate are driven by the radiation received by the earth and the radiation subsequently emitted. A knowledge of the absorbed and emitted components of radiation is thus fundamental for the study of these processes. In connection with the desire to improve the quality of long-range forecasting, NASA is developing the Earth Radiation Budget Experiment (ERBE), consisting of a three-channel scanning radiometer and a package of nonscanning radiometers. A set of these instruments is to be flown on both the NOAA-F and NOAA-G spacecraft, in sun-synchronous orbits, and on an Earth Radiation Budget Satellite. The purpose of the scanning radiometer is to obtain measurements from which the average reflected solar radiant exitance and the average earth-emitted radiant exitance at a reference level can be established. The estimate of regional average exitance obtained will not exactly equal the true value of the regional average exitance, but will differ due to spatial sampling. A method is presented for evaluating this spatial sampling error.

  9. Generalized approach for using unbiased symmetric metrics with negative values: normalized mean bias factor and normalized mean absolute error factor

    SciTech Connect

    Gustafson, William I.; Yu, Shaocai

    2012-10-23

    Unbiased symmetric metrics provide a useful measure to quickly compare two datasets, with similar interpretations for both under- and overestimations. Two examples include the normalized mean bias factor and normalized mean absolute error factor. However, the original formulations of these metrics are only valid for datasets with positive means. This paper presents a methodology to use and interpret the metrics with datasets that have negative means. The updated formulations give identical results compared to the original formulations for the case of positive means, so researchers are encouraged to use the updated formulations going forward without introducing ambiguity.
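
To make the definitions concrete, here is a small sketch of the original positive-mean formulations of these two metrics (the generalized negative-mean forms are the subject of the paper itself); the function names are mine:

```python
import numpy as np

def nmbf(model, obs):
    """Normalized mean bias factor (original positive-mean form)."""
    m, o = np.mean(model), np.mean(obs)
    if m >= o:
        return m / o - 1.0          # overestimation: factor >= 0
    return 1.0 - o / m              # underestimation: factor <= 0

def nmaef(model, obs):
    """Normalized mean absolute error factor (original positive-mean form)."""
    m, o = np.mean(model), np.mean(obs)
    num = np.sum(np.abs(np.asarray(model) - np.asarray(obs)))
    denom = np.sum(obs) if m >= o else np.sum(model)
    return num / denom

obs   = np.array([1.0, 2.0, 3.0])
model = np.array([2.0, 4.0, 6.0])   # overestimates by a factor of 2
print(nmbf(model, obs))             # prints 1.0
```

The symmetry is the point: a model that is exactly 2x the observations gives NMBF = +1, and a model that is exactly half gives NMBF = -1.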

  10. Error analysis in the measurement of average power with application to switching controllers

    NASA Technical Reports Server (NTRS)

    Maisel, J. E.

    1980-01-01

    Power measurement errors due to the bandwidth of a power meter and the sampling of the input voltage and current of a power meter were investigated assuming sinusoidal excitation and periodic signals generated by a model of a simple chopper system. Errors incurred in measuring power using a microcomputer with limited data storage were also considered. The behavior of the power measurement error due to the frequency responses of first order transfer functions between the input sinusoidal voltage, input sinusoidal current, and the signal multiplier was studied. Results indicate that this power measurement error can be minimized if the frequency responses of the first order transfer functions are identical. The power error analysis was extended to include the power measurement error for a model of a simple chopper system with a power source and an ideal shunt motor acting as an electrical load for the chopper. The behavior of the power measurement error was determined as a function of the chopper's duty cycle and back EMF of the shunt motor. Results indicate that the error is large when the duty cycle or back EMF is small. Theoretical and experimental results indicate that the power measurement error due to sampling of sinusoidal voltages and currents becomes excessively large when the number of observation periods approaches one-half the size of the microcomputer data memory allocated to the storage of either the input sinusoidal voltage or current.
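
The sampling-related error can be illustrated with a toy numerical experiment (my own construction, not the paper's chopper model): averaging sampled v·i over a non-integer number of periods deviates from the true average power, and the deviation shrinks as the observation interval grows:

```python
import numpy as np

# Toy illustration (not the paper's model): average power of sinusoidal
# voltage and current estimated from samples spanning a non-integer number
# of periods, versus the exact value Vm*Im/2*cos(theta).
Vm, Im, theta, f = 10.0, 2.0, np.pi / 6, 60.0
P_true = 0.5 * Vm * Im * np.cos(theta)

def sampled_power(n_periods, n_samples=4096):
    t = np.linspace(0.0, n_periods / f, n_samples, endpoint=False)
    v = Vm * np.cos(2 * np.pi * f * t)
    i = Im * np.cos(2 * np.pi * f * t - theta)
    return np.mean(v * i)   # discrete approximation of average power

for n_periods in (1.25, 10.25, 100.25):   # quarter-period misalignment
    err = abs(sampled_power(n_periods) - P_true) / P_true
    print(f"{n_periods:7.2f} periods: relative error {err:.1e}")
```

The residual comes from the double-frequency term in v·i, whose mean over a non-integer number of periods is nonzero but decays roughly as the inverse of the observation length.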

  11. Error analysis in the measurement of average power with application to switching controllers

    NASA Technical Reports Server (NTRS)

    Maisel, J. E.

    1979-01-01

    The behavior of the power measurement error due to the frequency responses of first order transfer functions between the input sinusoidal voltage, input sinusoidal current and the signal multiplier was studied. It was concluded that this measurement error can be minimized if the frequency responses of the first order transfer functions are identical.

  12. Recovery of absolute phases for the fringe patterns of three selected wavelengths with improved anti-error capability

    NASA Astrophysics Data System (ADS)

    Long, Jiale; Xi, Jiangtao; Zhang, Jianmin; Zhu, Ming; Cheng, Wenqing; Li, Zhongwei; Shi, Yusheng

    2016-09-01

    In a recently published work, we proposed a technique to recover the absolute phase maps of fringe patterns with two selected fringe wavelengths. To achieve higher anti-error capability, the proposed method requires employing fringe patterns with longer wavelengths; however, longer wavelengths may degrade the signal-to-noise ratio (SNR) of the surface measurement. In this paper, we propose a new approach to unwrap the phase maps from their wrapped versions based on the use of fringes with three different wavelengths, which is characterized by improved anti-error capability and SNR. While the previous method works on two phase maps obtained from six-step phase-shifting profilometry (PSP) (thus requiring 12 fringe patterns), the proposed technique performs very well on three phase maps from three-step PSP, requiring only nine fringe patterns, and is hence more efficient. Moreover, the advantages of the two-wavelength method, namely simple implementation and flexibility in the use of fringe patterns, are also retained. Theoretical analysis and experimental results are presented to confirm the effectiveness of the proposed method.
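
The multi-wavelength idea can be sketched numerically: the difference of two wrapped phase maps behaves like a wrapped phase at a much longer beat (equivalent) wavelength, which is what extends the unambiguous range. This is a generic two-wavelength illustration with made-up wavelengths, not the paper's three-wavelength algorithm:

```python
import numpy as np

# Generic two-wavelength illustration (wavelengths in pixels are made up):
# subtracting two wrapped phases yields a wrapped phase at the beat
# wavelength L = l1*l2/(l2 - l1), which extends the unambiguous range.
def wrap(p):
    return np.mod(p + np.pi, 2 * np.pi) - np.pi

l1, l2 = 16.0, 18.0
beat = l1 * l2 / (l2 - l1)               # 8x longer than l2

x = np.arange(120, dtype=float)          # ground-truth position
phi1 = wrap(2 * np.pi * x / l1)          # wrapped phase at wavelength l1
phi2 = wrap(2 * np.pi * x / l2)          # wrapped phase at wavelength l2
phi_beat = wrap(phi1 - phi2)             # behaves like phase at `beat`

print(beat)                              # prints 144.0
```

With the beat phase in hand, the shorter-wavelength maps can be unwrapped; the trade-off discussed in the abstract is that a longer beat wavelength amplifies noise, which is what the three-wavelength scheme mitigates.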

  13. Sampling Errors of SSM/I and TRMM Rainfall Averages: Comparison with Error Estimates from Surface Data and a Sample Model

    NASA Technical Reports Server (NTRS)

    Bell, Thomas L.; Kundu, Prasun K.; Kummerow, Christian D.; Einaudi, Franco (Technical Monitor)

    2000-01-01

    Quantitative use of satellite-derived maps of monthly rainfall requires some measure of the accuracy of the satellite estimates. The rainfall estimate for a given map grid box is subject to both remote-sensing error and, in the case of low-orbiting satellites, sampling error due to the limited number of observations of the grid box provided by the satellite. A simple model of rain behavior predicts that root-mean-square (RMS) random error in grid-box averages should depend in a simple way on the local average rain rate, and the predicted behavior has been seen in simulations using surface rain-gauge and radar data. This relationship was examined using satellite SSM/I data obtained over the western equatorial Pacific during TOGA COARE. RMS error inferred directly from SSM/I rainfall estimates was found to be larger than predicted from surface data, and to depend less on local rain rate than was predicted. Preliminary examination of TRMM microwave estimates shows better agreement with surface data. A simple method of estimating RMS error in satellite rainfall estimates is suggested, based on quantities that can be directly computed from the satellite data.
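
The dependence of sampling error on local rain rate can be mimicked with a toy subsampling experiment; the intermittent-rain model and overpass schedule below are my assumptions, not the paper's:

```python
import numpy as np

# Toy experiment: RMS error of a "monthly" grid-box mean estimated from
# sparse satellite-like overpasses, for a drier and a wetter box. The
# intermittent exponential rain model and visit schedule are assumptions.
rng = np.random.default_rng(2)

def rms_sampling_error(mean_rate, n_trials=2000):
    n_hours = 720                            # one month of hourly rain
    visits = np.arange(0, n_hours, 12)       # ~2 overpasses per day
    errs = []
    for _ in range(n_trials):
        raining = rng.random(n_hours) < 0.1  # 10% rain occurrence
        rate = raining * rng.exponential(mean_rate / 0.1, n_hours)
        errs.append(rate[visits].mean() - rate.mean())
    return np.sqrt(np.mean(np.square(errs)))

errs = {r: rms_sampling_error(r) for r in (0.1, 0.4)}
for r, e in errs.items():
    print(f"mean rate {r} mm/h -> RMS sampling error {e:.3f} mm/h")
```

In this toy model the error simply scales with the mean rate; the abstract's point is that real SSM/I data depart from the simple predicted dependence.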

  14. Average BER performance of FSO SIM-QAM systems in the presence of atmospheric turbulence and pointing errors

    NASA Astrophysics Data System (ADS)

    Djordjevic, Goran T.; Petkovic, Milica I.

    2016-04-01

    This paper presents the exact average bit error rate (BER) analysis of the free-space optical system employing subcarrier intensity modulation (SIM) with Gray-coded quadrature amplitude modulation (QAM). The intensity fluctuations of the received optical signal are caused by the path loss, atmospheric turbulence and pointing errors. The exact closed-form analytical expressions for the average BER are derived assuming SIM-QAM with arbitrary constellation size in the presence of Gamma-Gamma scintillation. Simple approximate average BER expressions are also provided, considering only the dominant term in the finite summations of the obtained expressions. The derived expressions reduce to the special case when optical signal transmission is affected only by atmospheric turbulence. Numerical results are presented in order to illustrate the usefulness of the derived expressions and to give insights into the effects of different modulation, channel and receiver parameters on the average BER performance. The results show that misalignment between the transmitter laser and receiver detector has a strong effect on the average BER, especially at high values of the average electrical signal-to-noise ratio.

  15. Left-hemisphere activation is associated with enhanced vocal pitch error detection in musicians with absolute pitch

    PubMed Central

    Behroozmand, Roozbeh; Ibrahim, Nadine; Korzyukov, Oleg; Robin, Donald A.; Larson, Charles R.

    2014-01-01

    The ability to process auditory feedback for vocal pitch control is crucial during speaking and singing. Previous studies have suggested that musicians with absolute pitch (AP) develop specialized left-hemisphere mechanisms for pitch processing. The present study adopted an auditory feedback pitch perturbation paradigm combined with ERP recordings to test the hypothesis that left-hemisphere neural mechanisms enhance vocal pitch error detection and control in AP musicians compared with relative pitch (RP) musicians and non-musicians (NM). Results showed a stronger N1 response to pitch-shifted voice feedback in the right hemisphere for both AP and RP musicians compared with the NM group. However, left-hemisphere P2 component activation was greater in AP and RP musicians compared with NMs, and also for AP compared with RP musicians. The NM group was slower in generating compensatory vocal reactions to feedback pitch perturbation compared with musicians, and they failed to re-adjust their vocal pitch after the feedback perturbation was removed. These findings suggest that in the earlier stages of cortical neural processing, the right hemisphere is more active in musicians for detecting pitch changes in voice feedback. In the later stages, the left hemisphere is more active during the processing of auditory feedback for vocal motor control and seems to involve specialized mechanisms that facilitate pitch processing in AP compared with RP musicians. These findings indicate that the left-hemisphere mechanisms of AP ability are associated with improved auditory feedback pitch processing during vocal pitch control in tasks such as speaking or singing. PMID:24355545

  16. A neural reward prediction error revealed by a meta-analysis of ERPs using great grand averages.

    PubMed

    Sambrook, Thomas D; Goslin, Jeremy

    2015-01-01

    Economic approaches to decision making assume that people attach values to prospective goods and act to maximize their obtained value. Neuroeconomics strives to observe these values directly in the brain. A widely used valuation term in formal learning and decision-making models is the reward prediction error: the value of an outcome relative to its expected value. An influential theory (Holroyd & Coles, 2002) claims that an electrophysiological component, feedback related negativity (FRN), codes a reward prediction error in the human brain. Such a component should be sensitive to both the prior likelihood of reward and its magnitude on receipt. A number of studies have found the FRN to be insensitive to reward magnitude, thus questioning the Holroyd and Coles account. However, because of marked inconsistencies in how the FRN is measured, a meaningful synthesis of this evidence is highly problematic. We conducted a meta-analysis of the FRN's response to both reward magnitude and likelihood using a novel method in which published effect sizes were disregarded in favor of direct measurement of the published waveforms themselves, with these waveforms then averaged to produce "great grand averages." Under this standardized measure, the meta-analysis revealed strong effects of magnitude and likelihood on the FRN, consistent with it encoding a reward prediction error. In addition, it revealed strong main effects of reward magnitude and likelihood across much of the waveform, indicating sensitivity to unsigned prediction errors or "salience." The great grand average technique is proposed as a general method for meta-analysis of event-related potentials (ERPs). PMID:25495239

  18. An efficient computational method for characterizing the effects of random surface errors on the average power pattern of reflectors

    NASA Technical Reports Server (NTRS)

    Rahmat-Samii, Y.

    1983-01-01

    Based on the works of Ruze (1966) and Vu (1969), a novel mathematical model has been developed to determine efficiently the average power pattern degradations caused by random surface errors. In this model, both nonuniform root mean square (rms) surface errors and nonuniform illumination functions are employed. In addition, the model incorporates the dependence on F/D in the construction of the solution. The mathematical foundation of the model rests on the assumption that in each prescribed annular region of the antenna, the geometrical rms surface value is known. It is shown that closed-form expressions can then be derived, which result in a very efficient computational method for the average power pattern. Detailed parametric studies are performed with these expressions to determine the effects of different random errors and illumination tapers on parameters such as gain loss and sidelobe levels. The results clearly demonstrate that as sidelobe levels decrease, their dependence on the surface rms/wavelength becomes much stronger and, for a specified tolerance level, a considerably smaller rms/wavelength is required to maintain the low sidelobes within the required bounds.
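
For the uniform-error special case, the model reduces to Ruze's classical gain-loss relation, G/G0 = exp[-(4*pi*eps/lambda)^2]; a quick numerical check (my own sketch, not the paper's closed-form expressions):

```python
import numpy as np

# Ruze's gain-loss relation for a reflector with uniform rms surface
# error eps: G/G0 = exp(-(4*pi*eps/lambda)**2). The paper's model
# generalizes this to nonuniform rms errors and illumination tapers.
def ruze_gain_loss_db(eps_over_lambda):
    loss_factor = np.exp(-(4.0 * np.pi * eps_over_lambda) ** 2)
    return -10.0 * np.log10(loss_factor)    # positive dB of gain loss

for r in (0.01, 0.02, 0.05):
    print(f"rms/lambda = {r:.2f}: gain loss = {ruze_gain_loss_db(r):.2f} dB")
```

The steep growth of the loss with rms/wavelength mirrors the abstract's conclusion that low-sidelobe designs demand a considerably smaller rms/wavelength.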

  19. Design, performance, and calculated error of a Faraday cup for absolute beam current measurements of 600-MeV protons

    NASA Technical Reports Server (NTRS)

    Beck, S. M.

    1975-01-01

    A mobile self-contained Faraday cup system for beam current measurements of nominal 600-MeV protons was designed, constructed, and used at the NASA Space Radiation Effects Laboratory. The cup is of reentrant design with a length of 106.7 cm and an outside diameter of 20.32 cm. The inner diameter is 15.24 cm and the base thickness is 30.48 cm. The primary absorber is commercially available lead hermetically sealed in a 0.32-cm-thick copper jacket. Several possible systematic errors in using the cup are evaluated. The largest source of error arises from high-energy electrons which are ejected from the entrance window and enter the cup. A total systematic error of -0.83 percent is calculated to be the decrease from the true current value. From data obtained in calibrating helium-filled ion chambers with the Faraday cup, the mean energy required to produce one ion pair in helium is found to be 30.76 ± 0.95 eV for nominal 600-MeV protons. This value agrees well, within experimental error, with reported values of 29.9 eV and 30.2 eV.

  20. Easy Absolute Values? Absolutely

    ERIC Educational Resources Information Center

    Taylor, Sharon E.; Mittag, Kathleen Cage

    2015-01-01

    The authors teach a problem-solving course for preservice middle-grades education majors that includes concepts dealing with absolute-value computations, equations, and inequalities. Many of these students like mathematics and plan to teach it, so they are adept at symbolic manipulations. Getting them to think differently about a concept that they…

  1. Field evaluation of the error arising from inadequate time averaging in the standard use of depth-integrating suspended-sediment samplers

    USGS Publications Warehouse

    Topping, David J.; Rubin, David M.; Wright, Scott A.; Melis, Theodore S.

    2011-01-01

    Several common methods for measuring suspended-sediment concentration in rivers in the United States use depth-integrating samplers to collect a velocity-weighted suspended-sediment sample in a subsample of a river cross section. Because depth-integrating samplers are always moving through the water column as they collect a sample, and can collect only a limited volume of water and suspended sediment, they collect only minimally time-averaged data. Four sources of error exist in the field use of these samplers: (1) bed contamination, (2) pressure-driven inrush, (3) inadequate sampling of the cross-stream spatial structure in suspended-sediment concentration, and (4) inadequate time averaging. The first two of these errors arise from misuse of suspended-sediment samplers, and the third has been the subject of previous study using data collected in the sand-bedded Middle Loup River in Nebraska. Of these four sources of error, the least understood source of error arises from the fact that depth-integrating samplers collect only minimally time-averaged data. To evaluate this fourth source of error, we collected suspended-sediment data between 1995 and 2007 at four sites on the Colorado River in Utah and Arizona, using a P-61 suspended-sediment sampler deployed in both point- and one-way depth-integrating modes, and D-96-A1 and D-77 bag-type depth-integrating suspended-sediment samplers. These data indicate that the minimal duration of time averaging during standard field operation of depth-integrating samplers leads to an error that is comparable in magnitude to that arising from inadequate sampling of the cross-stream spatial structure in suspended-sediment concentration. This random error arising from inadequate time averaging is positively correlated with grain size and does not largely depend on flow conditions or, for a given size class of suspended sediment, on elevation above the bed. Averaging over time scales >1 minute is the likely minimum duration required

  2. Average bit error rate performance analysis of subcarrier intensity modulated MRC and EGC FSO systems with dual branches over M distribution turbulence channels

    NASA Astrophysics Data System (ADS)

    Wang, Ran-ran; Wang, Ping; Cao, Tian; Guo, Li-xin; Yang, Yintang

    2015-07-01

    Based on space diversity reception, the binary phase-shift keying (BPSK) modulated free-space optical (FSO) system over Málaga (M) fading channels is investigated in detail. Under independently and identically distributed and independently and non-identically distributed dual branches, the analytical average bit error rate (ABER) expressions in terms of Fox's H-function for maximal ratio combining (MRC) and equal gain combining (EGC) diversity techniques are derived, respectively, by transforming the modified Bessel function of the second kind into the integral form of the Meijer G-function. Monte Carlo (MC) simulation is also provided to verify the accuracy of the presented models.
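
A crude Monte Carlo sanity check of dual-branch MRC versus EGC can be sketched as follows; note that this uses BPSK over a lognormal fading stand-in rather than the Málaga (M) model, and all parameter values are my assumptions:

```python
import numpy as np

# Crude Monte Carlo sketch: dual-branch MRC vs EGC for BPSK, with
# lognormal fading as a simple stand-in for the Malaga (M) channel.
rng = np.random.default_rng(1)
n, snr_db = 200_000, 4.0
sigma_n = 10 ** (-snr_db / 20)                  # noise std per branch

bits = rng.integers(0, 2, n)
s = 2.0 * bits - 1.0                            # BPSK symbols +/-1
sig = 0.3                                       # log-amplitude std (assumed)
h = np.exp(sig * rng.standard_normal((2, n)) - sig**2 / 2)  # unit-mean fading
r = h * s + sigma_n * rng.standard_normal((2, n))

dec_mrc = np.sum(h * r, axis=0)                 # weight branches by gain
dec_egc = np.sum(r, axis=0)                     # equal-gain combining
ber_mrc = np.mean((dec_mrc > 0) != (bits == 1))
ber_egc = np.mean((dec_egc > 0) != (bits == 1))
print(f"BER  MRC: {ber_mrc:.2e}   EGC: {ber_egc:.2e}")
```

MRC needs per-branch channel-gain estimates, while EGC only co-phases and sums, which is the usual complexity/performance trade-off the paper quantifies analytically.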

  3. Effects of aperture averaging and beam width on a partially coherent Gaussian beam over free-space optical links with turbulence and pointing errors.

    PubMed

    Lee, It Ee; Ghassemlooy, Zabih; Ng, Wai Pang; Khalighi, Mohammad-Ali; Liaw, Shien-Kuei

    2016-01-01

    Joint effects of aperture averaging and beam width on the performance of free-space optical communication links, under the impairments of atmospheric loss, turbulence, and pointing errors (PEs), are investigated from an information theory perspective. The propagation of a spatially partially coherent Gaussian-beam wave through a random turbulent medium is characterized, taking into account the diverging and focusing properties of the optical beam as well as the scintillation and beam wander effects. Results show that a noticeable improvement in the average channel capacity can be achieved with an enlarged receiver aperture in the moderate-to-strong turbulence regime, even without knowledge of the channel state information. In particular, it is observed that the optimum beam width can be reduced to improve the channel capacity, despite the presence of scintillation and PEs, provided that either one or both of these adverse effects are least dominant. We show that, under strong turbulence conditions, the beam width increases linearly with the Rytov variance for a relatively smaller PE loss but changes exponentially with steeper increments for higher PE losses. Our findings conclude that the optimal beam width is dependent on the combined effects of turbulence and PEs, and this parameter should be adjusted according to the varying atmospheric channel conditions. Therefore, we demonstrate that the maximum channel capacity is best achieved through the introduction of a larger receiver aperture and a beam-width optimization technique.

  4. Combined Use of Absolute and Differential Seismic Arrival Time Data to Improve Absolute Event Location

    NASA Astrophysics Data System (ADS)

    Myers, S.; Johannesson, G.

    2012-12-01

    Arrival time measurements based on waveform cross correlation are becoming more common as advanced signal processing methods are applied to seismic data archives and real-time data streams. Waveform correlation can precisely measure the time difference between the arrival of two phases, and differential time data can be used to constrain relative location of events. Absolute locations are needed for many applications, which generally requires the use of absolute time data. Current methods for measuring absolute time data are approximately two orders of magnitude less precise than differential time measurements. To exploit the strengths of both absolute and differential time data, we extend our multiple-event location method Bayesloc, which previously used absolute time data only, to include the use of differential time measurements that are based on waveform cross correlation. Fundamentally, Bayesloc is a formulation of the joint probability over all parameters comprising the multiple event location system. The Markov-Chain Monte Carlo method is used to sample from the joint probability distribution given arrival data sets. The differential time component of Bayesloc includes scaling a stochastic estimate of differential time measurement precision based on the waveform correlation coefficient for each datum. For a regional-distance synthetic data set with absolute and differential time measurement error of 0.25 seconds and 0.01 second, respectively, epicenter location accuracy is improved from an average of 1.05 km when solely absolute time data are used to 0.28 km when absolute and differential time data are used jointly (73% improvement). The improvement in absolute location accuracy is the result of conditionally limiting absolute location probability regions based on the precise relative position with respect to neighboring events. Bayesloc estimates of data precision are found to be accurate for the synthetic test, with absolute and differential time measurement

  5. Absolute Summ

    NASA Astrophysics Data System (ADS)

    Phillips, Alfred, Jr.

    Summ means the entirety of the multiverse. It seems clear, from the inflation theories of A. Guth and others, that the creation of many universes is plausible. We argue that Absolute cosmological ideas, not unlike those of I. Newton, may be consistent with dynamic multiverse creations. As suggested in W. Heisenberg's uncertainty principle, and with the Anthropic Principle defended by S. Hawking, et al., human consciousness, buttressed by findings of neuroscience, may have to be considered in our models. Predictability, as A. Einstein realized with Invariants and General Relativity, may be required for new ideas to be part of physics. We present here a two-postulate model geared to an Absolute Summ. The seedbed of this work is part of Akhnaton's philosophy (see S. Freud, Moses and Monotheism). Most important, however, is that the structure of human consciousness, manifest in Kenya's Rift Valley 200,000 years ago as Homo sapiens, who were the culmination of the six million year co-creation process of Hominins and Nature in Africa, allows us to do the physics that we do.

  6. Absolute calibration of optical flats

    DOEpatents

    Sommargren, Gary E.

    2005-04-05

    The invention uses the phase shifting diffraction interferometer (PSDI) to provide a true point-by-point measurement of absolute flatness over the surface of optical flats. Beams exiting the fiber optics in a PSDI have perfect spherical wavefronts. The measurement beam is reflected from the optical flat and passed through an auxiliary optic to then be combined with the reference beam on a CCD. The combined beams include phase errors due to both the optic under test and the auxiliary optic. Standard phase extraction algorithms are used to calculate this combined phase error. The optical flat is then removed from the system and the measurement fiber is moved to recombine the two beams. The newly combined beams include only the phase errors due to the auxiliary optic. When the second phase measurement is subtracted from the first phase measurement, the absolute phase error of the optical flat is obtained.
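
The two-measurement subtraction at the heart of the method can be sketched in a few lines (synthetic phase maps; shapes and error magnitudes are arbitrary):

```python
import numpy as np

# Minimal numerical sketch of the two-measurement subtraction described
# above (synthetic phase maps; magnitudes in waves are made up).
rng = np.random.default_rng(0)
shape = (64, 64)

phi_flat = 0.05 * rng.standard_normal(shape)   # unknown flat error
phi_aux  = 0.10 * rng.standard_normal(shape)   # auxiliary-optic error

meas1 = phi_flat + phi_aux    # flat in place: both errors present
meas2 = phi_aux               # flat removed: auxiliary optic only

recovered = meas1 - meas2     # absolute phase error of the flat alone
print(np.allclose(recovered, phi_flat))   # prints True
```

The point of the PSDI geometry is that the reference wavefronts themselves are essentially perfect spheres, so the only systematic term common to both measurements is the auxiliary optic, which the subtraction removes.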

  7. An Analysis of Java Programming Behaviors, Affect, Perceptions, and Syntax Errors among Low-Achieving, Average, and High-Achieving Novice Programmers

    ERIC Educational Resources Information Center

    Rodrigo, Ma. Mercedes T.; Andallaza, Thor Collin S.; Castro, Francisco Enrique Vicente G.; Armenta, Marc Lester V.; Dy, Thomas T.; Jadud, Matthew C.

    2013-01-01

    In this article we quantitatively and qualitatively analyze a sample of novice programmer compilation log data, exploring whether (or how) low-achieving, average, and high-achieving students vary in their grasp of these introductory concepts. High-achieving students self-reported having the easiest time learning the introductory programming…

  8. Experimental results for absolute cylindrical wavefront testing

    NASA Astrophysics Data System (ADS)

    Reardon, Patrick J.; Alatawi, Ayshah

    2014-09-01

    Applications for cylindrical and near-cylindrical surfaces are ever increasing. However, fabrication of high-quality cylindrical surfaces is limited by the difficulty of accurate and affordable metrology. Absolute testing of such surfaces represents a challenge to the optical testing community, as cylindrical reference wavefronts are difficult to produce. In this paper, preliminary results for a new method of absolute testing of cylindrical wavefronts are presented. The method is based on merging the random ball test with the fiber optic reference test. The random ball test averages a large number of interferograms of a good-quality sphere whose errors are statistically distributed such that their average goes to zero. The fiber optic reference test utilizes a specially processed optical fiber to provide a clean, high-quality reference wave from an incident line focus from the cylindrical wave under test. By taking measurements at different rotations and translations of the fiber, an analogous procedure can be employed to determine the quality of the converging cylindrical wavefront with high accuracy. This paper presents and discusses the results of recent tests of this method using a null optic formed by a COTS cylindrical lens and a free-form polished corrector element.

  9. FOREGROUND MODEL AND ANTENNA CALIBRATION ERRORS IN THE MEASUREMENT OF THE SKY-AVERAGED λ21 cm SIGNAL AT z ∼ 20

    SciTech Connect

    Bernardi, G.; McQuinn, M.; Greenhill, L. J.

    2015-01-20

    The most promising near-term observable of the cosmic dark age prior to widespread reionization (z ∼ 15-200) is the sky-averaged λ21 cm background arising from hydrogen in the intergalactic medium. Though an individual antenna could in principle detect the line signature, data analysis must separate foregrounds that are orders of magnitude brighter than the λ21 cm background (but that are anticipated to vary monotonically and gradually with frequency, e.g., they are considered "spectrally smooth"). Using more physically motivated models for foregrounds than in previous studies, we show that the intrinsic spectral smoothness of the foregrounds is likely not a concern, and that data analysis for an ideal antenna should be able to detect the λ21 cm signal after subtracting a ∼fifth-order polynomial in log ν. However, we find that the foreground signal is corrupted by the angular and frequency-dependent response of a real antenna. The frequency dependence complicates modeling of foregrounds commonly based on the assumption of spectral smoothness. Our calculations focus on the Large-aperture Experiment to detect the Dark Age, which combines both radiometric and interferometric measurements. We show that statistical uncertainty remaining after fitting antenna gain patterns to interferometric measurements is not anticipated to compromise extraction of the λ21 cm signal for a range of cosmological models after fitting a seventh-order polynomial to radiometric data. Our results generalize to most efforts to measure the sky-averaged spectrum.
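The polynomial-in-log-frequency subtraction described above is easy to prototype: a smooth foreground (here a pure power law, so the fit is exact up to rounding) is fit by a fifth-order polynomial in log ν and the residual is inspected. The band, amplitude, and spectral index below are illustrative, not the paper's values:

```python
import numpy as np

# Toy power-law foreground over an illustrative band (not the paper's numbers);
# fit a fifth-order polynomial in centered log(frequency), then inspect residuals.
nu = np.linspace(40e6, 120e6, 200)                        # Hz
x = np.log(nu) - np.mean(np.log(nu))                      # centered log-frequency
log_fg = np.log(1e4) - 2.5 * (np.log(nu) - np.log(80e6))  # log K, toy index -2.5
coeffs = np.polyfit(x, log_fg, deg=5)
residual = log_fg - np.polyval(coeffs, x)                 # a 21 cm signal would live here
```

For this idealized smooth spectrum the residual is at the level of machine precision; real antenna chromaticity is what breaks this picture.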

  10. Teaching Absolute Value Meaningfully

    ERIC Educational Resources Information Center

    Wade, Angela

    2012-01-01

    What is the meaning of absolute value? And why do teachers teach students how to solve absolute value equations? Absolute value is a concept introduced in first-year algebra and then reinforced in later courses. Various authors have suggested instructional methods for teaching absolute value to high school students (Wei 2005; Stallings-Roberts…

  11. Frequency-domain analysis of absolute gravimeters

    NASA Astrophysics Data System (ADS)

    Svitlov, S.

    2012-12-01

    An absolute gravimeter is analysed as a linear time-invariant system in the frequency domain. Frequency responses of absolute gravimeters are derived analytically based on the propagation of the complex exponential signal through their linear measurement functions. Depending on the model of motion and the number of time-distance coordinates, an absolute gravimeter is considered as a second-order (three-level scheme) or third-order (multiple-level scheme) low-pass filter. It is shown that the behaviour of an atom absolute gravimeter in the frequency domain corresponds to that of the three-level corner-cube absolute gravimeter. Theoretical results are applied for evaluation of random and systematic measurement errors and optimization of an experiment. The developed theory agrees with known results of an absolute gravimeter analysis in the time and frequency domains and can be used for measurement uncertainty analyses, building of vibration-isolation systems and synthesis of digital filtering algorithms.

  12. Quaternion Averaging

    NASA Technical Reports Server (NTRS)

    Markley, F. Landis; Cheng, Yang; Crassidis, John L.; Oshman, Yaakov

    2007-01-01

    Many applications require an algorithm that averages quaternions in an optimal manner. For example, when combining the quaternion outputs of multiple star trackers having this output capability, it is desirable to properly average the quaternions without recomputing the attitude from the raw star tracker data. Other applications requiring some form of optimal quaternion averaging include particle filtering and multiple-model adaptive estimation, where weighted quaternions are used to determine the quaternion estimate. For spacecraft attitude estimation applications, prior work derives an optimal averaging scheme to compute the average of a set of weighted attitude matrices using the singular value decomposition method and, focusing on a four-dimensional quaternion Gaussian distribution on the unit hypersphere, provides an approach to computing the average quaternion by minimizing a quaternion cost function that is equivalent to the attitude matrix cost function. Motivated by and extending those results, this Note derives an algorithm that determines an optimal average quaternion from a set of scalar- or matrix-weighted quaternions. Furthermore, a sufficient condition for the uniqueness of the average quaternion, and the equivalence of the minimization problem stated herein to maximum likelihood estimation, are shown.
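In the scalar-weighted case, this kind of optimal average reduces to an eigenvalue problem: the average quaternion is the eigenvector, with largest eigenvalue, of the weighted sum of quaternion outer products. A minimal NumPy sketch of that idea (scalar weights only; the function name and quaternion convention are illustrative, not from the Note):

```python
import numpy as np

def average_quaternions(quats, weights=None):
    """Average unit quaternions (rows, scalar-last convention) as the dominant
    eigenvector of the weighted outer-product matrix M = sum_i w_i q_i q_i^T."""
    quats = np.asarray(quats, dtype=float)
    if weights is None:
        weights = np.ones(len(quats))
    # The sign ambiguity (q and -q encode the same rotation) cancels in q q^T.
    M = sum(w * np.outer(q, q) for w, q in zip(weights, quats))
    eigvals, eigvecs = np.linalg.eigh(M)   # eigenvalues in ascending order
    return eigvecs[:, -1]                  # unit-norm eigenvector, largest eigenvalue

# Two quaternions for the same rotation with opposite signs average consistently:
q = np.array([0.0, 0.0, 0.0, 1.0])
avg = average_quaternions([q, -q])
```

Naive componentwise averaging would return zero for this pair; the eigenvector formulation does not, which is one motivation for schemes of this type.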

  13. Eosinophil count - absolute

    MedlinePlus

    Eosinophils; Absolute eosinophil count ... the white blood cell count to give the absolute eosinophil count. ... than 500 cells per microliter (cells/mcL). Normal value ranges may vary slightly among different laboratories. Talk ...

  14. Unforced errors and error reduction in tennis

    PubMed Central

    Brody, H

    2006-01-01

    Only at the highest level of tennis is the number of winners comparable to the number of unforced errors. As the average player loses many more points due to unforced errors than due to winners by an opponent, if the rate of unforced errors can be reduced, it should lead to an increase in points won. This article shows how players can improve their game by understanding and applying the laws of physics to reduce the number of unforced errors. PMID:16632568

  15. Absolute nuclear material assay

    DOEpatents

    Prasad, Manoj K.; Snyderman, Neal J.; Rowland, Mark S.

    2012-05-15

    A method of absolute nuclear material assay of an unknown source comprising counting neutrons from the unknown source and providing an absolute nuclear material assay utilizing a model to optimally compare to the measured count distributions. In one embodiment, the step of providing an absolute nuclear material assay comprises utilizing a random sampling of analytically computed fission chain distributions to generate a continuous time-evolving sequence of event-counts by spreading the fission chain distribution in time.

  16. Absolute nuclear material assay

    DOEpatents

    Prasad, Manoj K.; Snyderman, Neal J.; Rowland, Mark S.

    2010-07-13

    A method of absolute nuclear material assay of an unknown source comprising counting neutrons from the unknown source and providing an absolute nuclear material assay utilizing a model to optimally compare to the measured count distributions. In one embodiment, the step of providing an absolute nuclear material assay comprises utilizing a random sampling of analytically computed fission chain distributions to generate a continuous time-evolving sequence of event-counts by spreading the fission chain distribution in time.

  17. Absolute biological needs.

    PubMed

    McLeod, Stephen

    2014-07-01

    Absolute needs (as against instrumental needs) are independent of the ends, goals and purposes of personal agents. Against the view that the only needs are instrumental needs, David Wiggins and Garrett Thomson have defended absolute needs on the grounds that the verb 'need' has instrumental and absolute senses. While remaining neutral about it, this article does not adopt that approach. Instead, it suggests that there are absolute biological needs. The absolute nature of these needs is defended by appeal to: their objectivity (as against mind-dependence); the universality of the phenomenon of needing across the plant and animal kingdoms; the impossibility that biological needs depend wholly upon the exercise of the abilities characteristic of personal agency; the contention that the possession of biological needs is prior to the possession of the abilities characteristic of personal agency. Finally, three philosophical usages of 'normative' are distinguished. On two of these, to describe a phenomenon or claim as 'normative' is to describe it as value-dependent. A description of a phenomenon or claim as 'normative' in the third sense does not entail such value-dependency, though it leaves open the possibility that value depends upon the phenomenon or upon the truth of the claim. It is argued that while survival needs (or claims about them) may well be normative in this third sense, they are normative in neither of the first two. Thus, the idea of absolute need is not inherently normative in either of the first two senses. PMID:23586876


  19. Issues in Absolute Spectral Radiometric Calibration: Intercomparison of Eight Sources

    NASA Technical Reports Server (NTRS)

    Goetz, Alexander F. H.; Kindel, Bruce; Pilewskie, Peter

    1998-01-01

    The application of atmospheric models to AVIRIS and other spectral imaging data to derive surface reflectance requires that the sensor output be calibrated to absolute radiance. Uncertainties in absolute calibration are to be expected, and claims of 92% accuracy have been published. Measurements of accurate surface albedos and cloud absorption to be used in radiative balance calculations depend critically on knowing the absolute spectral-radiometric response of the sensor. The Earth Observing System project is implementing a rigorous program of absolute radiometric calibration for all optical sensors. Since a number of imaging instruments that provide output in terms of absolute radiance are calibrated at different sites, it is important to determine the errors that can be expected among calibration sites. Another question exists about the errors in the absolute knowledge of the exoatmospheric spectral solar irradiance.

  20. The absolute path command

    2012-05-11

    The ap command traverses all symlinks in a given file, directory, or executable name to identify the final absolute path. It can print just the final path, each intermediate link along with the symlink chain, and the permissions and ownership of each directory component in the final path. It has functionality similar to "which", except that it shows the final path instead of the first path. It is also similar to "pwd", but it can provide the absolute path to a relative directory from the current working directory.
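The core resolution step that ap performs can be approximated with the Python standard library: os.path.realpath follows every symlink to a final absolute path. This is an illustration of the behaviour, not the ap tool itself:

```python
import os
import tempfile

# Build a throwaway symlink and resolve it, the way `ap` resolves its argument.
d = tempfile.mkdtemp()
target = os.path.join(d, "real.txt")
open(target, "w").close()
link = os.path.join(d, "link.txt")
os.symlink(target, link)            # link.txt -> real.txt
resolved = os.path.realpath(link)   # follows the chain to the final absolute path
```

Unlike ap, this one-liner does not report the intermediate links or the permissions of each path component; it only produces the final path.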

  1. The absolute path command

    SciTech Connect

    Moody, A.

    2012-05-11

    The ap command traverses all symlinks in a given file, directory, or executable name to identify the final absolute path. It can print just the final path, each intermediate link along with the symlink chain, and the permissions and ownership of each directory component in the final path. It has functionality similar to "which", except that it shows the final path instead of the first path. It is also similar to "pwd", but it can provide the absolute path to a relative directory from the current working directory.

  2. Absolute Timing of the Crab Pulsar with RXTE

    NASA Technical Reports Server (NTRS)

    Rots, Arnold H.; Jahoda, Keith; Lyne, Andrew G.

    2004-01-01

    We have monitored the phase of the main X-ray pulse of the Crab pulsar with the Rossi X-ray Timing Explorer (RXTE) for almost eight years, since the start of the mission in January 1996. The absolute time of RXTE's clock is sufficiently accurate to allow this phase to be compared directly with the radio profile. Our monitoring observations of the pulsar took place bi-weekly (during the periods when it was at least 30 degrees from the Sun) and we correlated the data with radio timing ephemerides derived from observations made at Jodrell Bank. We have determined the phase of the X-ray main pulse for each observation with a typical error in the individual data points of 50 microseconds. The total ensemble is consistent with a phase that is constant over the monitoring period, with the X-ray pulse leading the radio pulse by 0.01025 plus or minus 0.00120 period in phase, or 344 plus or minus 40 microseconds in time. The error estimate is dominated by a systematic error of 40 microseconds, most likely constant, arising from uncertainties in the instrumental calibration of the radio data. The statistical error is 0.00015 period, or 5 microseconds. The separation of the main pulse and interpulse appears to be unchanging at time scales of a year or less, with an average value of 0.4001 plus or minus 0.0002 period. There is no apparent variation in these values with energy over the 2-30 keV range. The lag between the radio and X-ray pulses may be constant in phase (i.e., rotational in nature) or constant in time (i.e., due to a pathlength difference). We are not (yet) able to distinguish between these two interpretations.

  3. ABSOLUTE POLARIMETRY AT RHIC.

    SciTech Connect

    OKADA; BRAVAR, A.; BUNCE, G.; GILL, R.; HUANG, H.; MAKDISI, Y.; NASS, A.; WOOD, J.; ZELENSKI, Z.; ET AL.

    2007-09-10

    Precise and absolute beam polarization measurements are critical for the RHIC spin physics program. Because all experimental spin-dependent results are normalized by beam polarization, the normalization uncertainty contributes directly to final physics uncertainties. We aimed to perform the beam polarization measurement to an accuracy of ΔP_beam/P_beam < 5%. The absolute polarimeter consists of a polarized atomic hydrogen gas-jet target and left-right pairs of silicon strip detectors, and was installed in the RHIC ring in 2004. This system features proton-proton elastic scattering in the Coulomb nuclear interference (CNI) region. Precise measurements of the analyzing power A_N of this process have allowed us to achieve ΔP_beam/P_beam = 4.2% in 2005 for the first long spin-physics run. In this report, we describe the entire setup and performance of the system. The procedure of beam polarization measurement and analysis results from 2004-2005 are described. Physics topics of A_N in the CNI region (four-momentum transfer squared 0.001 < -t < 0.032 (GeV/c)²) are also discussed. We point out the current issues and the optimum accuracy expected in 2006 and the future.

  4. First Absolutely Calibrated Localized Measurements of Ion Velocity in the MST in Locked and Rotating Plasmas

    NASA Astrophysics Data System (ADS)

    Baltzer, M.; Craig, D.; den Hartog, D. J.; Nornberg, M. D.; Munaretto, S.

    2015-11-01

    An Ion Doppler Spectrometer (IDS) is used on MST for high time-resolution passive and active measurements of impurity ion emission. Absolutely calibrated measurements of flow are difficult because the spectrometer records data within 0.3 nm of the C+5 line of interest, and commercial calibration lamps do not produce lines in this narrow range. A novel optical system was designed to absolutely calibrate the IDS. The device uses a UV LED to produce a broad emission curve in the desired region. A Fabry-Perot etalon filters this light, cutting transmittance peaks into the pattern of the LED emission. An optical train of fused silica lenses focuses the light into the IDS at f/4. A holographic diffuser blurs the light cone to increase homogeneity. Using this light source, the absolute Doppler shift of ion emissions can be measured in MST plasmas. In combination with charge exchange recombination spectroscopy, localized ion velocities can now be measured. Previously, a time-averaged measurement along the chord bisecting the poloidal plane was used to calibrate the IDS; the quality of these central-chord calibrations can be characterized with our absolute calibration. Calibration errors may also be quantified and minimized by optimizing the curve-fitting process. Preliminary measurements of toroidal velocity in locked and rotating plasmas will be shown. This work has been supported by the US DOE.

  5. Absolute Equilibrium Entropy

    NASA Technical Reports Server (NTRS)

    Shebalin, John V.

    1997-01-01

    The entropy associated with absolute equilibrium ensemble theories of ideal, homogeneous, fluid and magneto-fluid turbulence is discussed and the three-dimensional fluid case is examined in detail. A sigma-function is defined, whose minimum value with respect to global parameters is the entropy. A comparison is made between the use of global functions sigma and phase functions H (associated with the development of various H-theorems of ideal turbulence). It is shown that the two approaches are complementary though conceptually different: H-theorems show that an isolated system tends to equilibrium while sigma-functions allow the demonstration that entropy never decreases when two previously isolated systems are combined. This provides a more complete picture of entropy in the statistical mechanics of ideal fluids.

  6. Developing control charts to review and monitor medication errors.

    PubMed

    Ciminera, J L; Lease, M P

    1992-03-01

    There is a need to monitor reported medication errors in a hospital setting. Because the quantity of errors varies due to external reporting, quantifying the data is extremely difficult. Typically, these errors are reviewed using classification systems that often have wide variations in the numbers per class per month. The authors recommend the use of control charts to review historical data and to monitor future data. The procedure they have adopted is a modification of schemes using absolute (i.e., positive) values of successive differences to estimate the standard deviation when only single incidence values are available in time rather than sample averages, and when many successive differences may be zero. PMID:10116719
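The standard scheme of this kind is the individuals-and-moving-range (XmR) chart, where sigma is estimated from the mean absolute successive difference. A sketch with toy monthly error counts, in the spirit of the modification described above (the constant 2.66 is 3/d2 with d2 = 1.128 for a moving range of two observations):

```python
def xmr_limits(counts):
    """Individuals (XmR) control limits: center line at the mean, and
    3-sigma limits with sigma estimated from the average moving range."""
    n = len(counts)
    mean = sum(counts) / n
    # Moving range: absolute values of successive differences.
    mr = [abs(counts[i] - counts[i - 1]) for i in range(1, n)]
    mr_bar = sum(mr) / len(mr)
    return mean - 2.66 * mr_bar, mean, mean + 2.66 * mr_bar

# Toy monthly counts of reported errors in one class (illustrative data only):
lcl, center, ucl = xmr_limits([4, 6, 5, 7, 5, 6, 4, 8, 5, 6])
```

A month falling outside the (lcl, ucl) band would then flag a genuine shift rather than routine reporting noise; for count data a lower limit below zero is usually clipped to zero.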

  7. Refractive Errors

    MedlinePlus

    ... and lens of your eye helps you focus. Refractive errors are vision problems that happen when the shape ... cornea, or aging of the lens. Four common refractive errors are Myopia, or nearsightedness - clear vision close up ...

  8. Error correction for rotationally asymmetric surface deviation testing based on rotational shears.

    PubMed

    Wang, Weibo; Liu, Pengfei; Xing, Yaolong; Tan, Jiubin; Liu, Jian

    2016-09-10

    We present a practical method for absolute testing of rotationally asymmetric surface deviation based on rotation averaging, additional compensation, and azimuthal errors correction. The errors of angular orders kNθ neglected in the traditional multiangle averaging method can be reconstructed and compensated with the help of least-squares fitting of Zernike polynomials by an additional rotation measurement with a suitable selection of rotation angles. The estimation algorithm adopts the least-squares technique to eliminate azimuthal errors caused by rotation inaccuracy. The unknown relative alignment of the measurements also can be estimated through the differences in measurement results at overlapping areas. The method proposed combines the advantages of the single-rotation and multiangle averaging methods and realizes a balance between the efficiency and accuracy of the measurements. Experimental results show that the method proposed can obtain high accuracy even with fewer rotation measurements. PMID:27661385

  9. Modelling non-Gaussianity of background and observational errors by the Maximum Entropy method

    NASA Astrophysics Data System (ADS)

    Pires, Carlos; Talagrand, Olivier; Bocquet, Marc

    2010-05-01

    The Best Linear Unbiased Estimator (BLUE) has been widely used in atmospheric-oceanic data assimilation. However, when data errors have non-Gaussian pdfs, the BLUE differs from the absolute Minimum Variance Unbiased Estimator (MVUE), which minimizes the mean square analysis error. The non-Gaussianity of errors can be due to the statistical skewness and positiveness of some physical observables (e.g. moisture, chemical species) or to the nonlinearity of the data assimilation models and observation operators acting on Gaussian errors. Non-Gaussianity of assimilated data errors can be justified from a priori hypotheses or inferred from statistical diagnostics of innovations (observation minus background). Following this rationale, we compute measures of innovation non-Gaussianity, namely its skewness and kurtosis, relating them to: a) the non-Gaussianity of the individual errors themselves, b) the correlation between nonlinear functions of errors, and c) the heteroscedasticity of errors within diagnostic samples. Those relationships impose bounds on the skewness and kurtosis of errors which are critically dependent on the error variances, thus leading to a necessary tuning of error variances in order to achieve consistency with innovations. We evaluate the sub-optimality of the BLUE as compared to the MVUE, in terms of excess error variance, in the presence of non-Gaussian errors. The error pdfs are obtained by the maximum entropy method constrained by error moments up to fourth order, from which the Bayesian probability density function and the MVUE are computed. The impact is higher for skewed extreme innovations and grows on average with the skewness of data errors, especially if those skewnesses have the same sign. The method has been applied to the quality-accepted ECMWF innovations of brightness temperatures for a set of High Resolution Infrared Sounder channels. In this context, the MVUE has led in some extreme cases to a potential error reduction of 20-60%.
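The two innovation diagnostics named above, skewness and kurtosis, are ratios of central moments. A self-contained sketch (non-excess kurtosis, so a Gaussian sample would tend toward 3; the function name is illustrative):

```python
from statistics import mean

def skewness_kurtosis(xs):
    """Sample skewness m3/m2^1.5 and (non-excess) kurtosis m4/m2^2,
    computed from central moments of the sample."""
    m = mean(xs)
    n = len(xs)
    m2 = sum((x - m) ** 2 for x in xs) / n
    m3 = sum((x - m) ** 3 for x in xs) / n
    m4 = sum((x - m) ** 4 for x in xs) / n
    return m3 / m2 ** 1.5, m4 / m2 ** 2

# A symmetric toy sample: zero skewness, sub-Gaussian kurtosis.
skew, kurt = skewness_kurtosis([-1.0, 1.0, -2.0, 2.0, -1.0, 1.0])
```

In the study's setting these statistics are computed on innovation samples and then compared against the bounds implied by the assumed error variances.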

  10. Reconsideration of measurement of error in human motor learning.

    PubMed

    Crabtree, D A; Antrim, L R

    1988-10-01

    Human motor learning is often measured by error scores. The convention of using mean absolute error, mean constant error, and variable error shows lack of desirable parsimony and interpretability. This paper provides the background of error measurement and states criticisms of conventional methodology. A parsimonious model of error analysis is provided, along with operationalized interpretations and implications for motor learning. Teaching, interpreting, and using error scores in research may be simplified and facilitated with the model.
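The three conventional scores the paper critiques are simple to state in code: absolute error (mean unsigned deviation from the target), constant error (signed bias), and variable error (spread about the performer's own mean). A minimal sketch with toy response data:

```python
from statistics import mean, pstdev

def error_scores(responses, target):
    """Conventional motor-learning error measures for responses to a target:
    absolute error (AE), constant error (CE), and variable error (VE)."""
    ae = mean(abs(r - target) for r in responses)  # unsigned accuracy
    ce = mean(r - target for r in responses)       # signed bias
    ve = pstdev(responses)                         # consistency about own mean
    return ae, ce, ve

# Four toy trials aiming at a target of 10.0 (arbitrary units):
ae, ce, ve = error_scores([9.0, 11.0, 10.0, 12.0], target=10.0)
```

The example makes the paper's parsimony point concrete: AE mixes the bias and spread that CE and VE report separately, which is why the three scores can be hard to interpret jointly.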

  11. Absolute neutrino mass measurements

    SciTech Connect

    Wolf, Joachim

    2011-10-06

    The neutrino mass plays an important role in particle physics, astrophysics and cosmology. In recent years the detection of neutrino flavour oscillations proved that neutrinos carry mass. However, oscillation experiments are only sensitive to the mass-squared differences of the mass eigenvalues. In contrast to cosmological observations and neutrinoless double beta decay (0ν2β) searches, single β-decay experiments provide a direct, model-independent way to determine the absolute neutrino mass by measuring the energy spectrum of decay electrons at the endpoint region with high accuracy. Currently the best kinematic upper limits on the neutrino mass, 2.2 eV, have been set by two experiments in Mainz and Troitsk, using tritium as the beta emitter. The next-generation tritium β-decay experiment KATRIN is currently under construction in Karlsruhe, Germany by an international collaboration. KATRIN intends to improve the sensitivity by one order of magnitude to 0.2 eV. The investigation of a second isotope (¹⁸⁷Re) is being pursued by the international MARE collaboration using micro-calorimeters to measure the beta spectrum. The technology needed to reach 0.2 eV sensitivity is still in the R&D phase. This paper reviews the present status of neutrino-mass measurements with cosmological data, 0ν2β decay and single β-decay.

  12. Absolute method of measuring magnetic susceptibility

    USGS Publications Warehouse

    Thorpe, A.; Senftle, F.E.

    1959-01-01

    An absolute method of standardization and measurement of the magnetic susceptibility of small samples is presented which can be applied to most techniques based on the Faraday method. The fact that the susceptibility is a function of the area under the curve of sample displacement versus distance of the magnet from the sample, offers a simple method of measuring the susceptibility without recourse to a standard sample. Typical results on a few substances are compared with reported values, and an error of less than 2% can be achieved. © 1959 The American Institute of Physics.
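The area-under-the-curve step lends itself to a trapezoidal rule over discrete displacement readings. A toy sketch only: the numbers below are invented, and the conversion from area to susceptibility (which depends on the paper's apparatus constants) is omitted:

```python
def trapezoid_area(xs, ys):
    """Trapezoidal-rule area under a sampled curve y(x)."""
    return sum((xs[i + 1] - xs[i]) * (ys[i + 1] + ys[i]) / 2.0
               for i in range(len(xs) - 1))

# Toy displacement-vs-distance readings (arbitrary units):
distances = [0.0, 1.0, 2.0, 3.0]
displacements = [0.0, 2.0, 2.0, 0.0]
area = trapezoid_area(distances, displacements)
```

With real data the area would then be scaled by the calibration factors of the balance and magnet to yield the susceptibility.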

  13. Absolute radiometric calibration of the CCRS SAR

    NASA Astrophysics Data System (ADS)

    Ulander, Lars M. H.; Hawkins, Robert K.; Livingstone, Charles E.; Lukowski, Tom I.

    1991-11-01

    Determining the radar scattering coefficients from SAR (synthetic aperture radar) image data requires absolute radiometric calibration of the SAR system. The authors describe an internal calibration methodology for the airborne Canada Centre for Remote Sensing (CCRS) SAR system, based on radar theory, a detailed model of the radar system, and measurements of system parameters. The methodology is verified by analyzing external calibration data acquired over a 6-month period in 1988 by the C-band radar using HH polarization. The results indicate that the overall error is +/- 0.8 dB (1-sigma) for incidence angles +/- 20 deg from antenna boresight. The dominant error contributions are due to the antenna radome and uncertainties in the elevation angle relative to the antenna boresight.

  14. Absolute Identification by Relative Judgment

    ERIC Educational Resources Information Center

    Stewart, Neil; Brown, Gordon D. A.; Chater, Nick

    2005-01-01

    In unidimensional absolute identification tasks, participants identify stimuli that vary along a single dimension. Performance is surprisingly poor compared with discrimination of the same stimuli. Existing models assume that identification is achieved using long-term representations of absolute magnitudes. The authors propose an alternative…

  15. Be Resolute about Absolute Value

    ERIC Educational Resources Information Center

    Kidd, Margaret L.

    2007-01-01

    This article explores how conceptualization of absolute value can start long before it is introduced. The manner in which absolute value is introduced to students in middle school has far-reaching consequences for their future mathematical understanding. It begins to lay the foundation for students' understanding of algebra, which can change…

  16. The Carina Project: Absolute and Relative Calibrations

    NASA Astrophysics Data System (ADS)

    Corsi, C. E.; Bono, G.; Walker, A. R.; Brocato, E.; Buonanno, R.; Caputo, F.; Castellani, M.; Castellani, V.; Dall'Ora, M.; Marconi, M.; Monelli, M.; Nonino, M.; Pulone, L.; Ripepi, V.; Smith, H. A.

    We discuss the reduction strategy adopted to perform the relative and the absolute calibration of the Wide Field Imager (WFI) available at the 2.2m ESO/MPI telescope and of the Mosaic Camera (MC) available at the 4m CTIO Blanco telescope. To properly constrain the occurrence of deceptive systematic errors in the relative calibration we observed with each chip the same set of stars. Current photometry seems to suggest that the WFI shows a positional effect when moving from the top to the bottom of individual chips. Preliminary results based on an independent data set collected with the MC suggest that this camera is only marginally affected by the same problem. To perform the absolute calibration we observed with each chip the same set of standard stars. The sample covers a wide color range and the accuracy both in the B and in the V-band appears to be of the order of a few hundredths of magnitude. Finally, we briefly outline the observing strategy to improve both relative and absolute calibrations of mosaic CCD cameras.

  17. Error Analysis

    NASA Astrophysics Data System (ADS)

    Scherer, Philipp O. J.

    Input data as well as the results of elementary operations have to be represented by machine numbers, the subset of real numbers which is used by the arithmetic unit of today's computers. Generally this generates rounding errors. This kind of numerical error can be avoided in principle by using arbitrary precision arithmetics or symbolic algebra programs. But this is impractical in many cases due to the increase in computing time and memory requirements. Results from more complex operations like square roots or trigonometric functions can have even larger errors since series expansions have to be truncated and iterations accumulate the errors of the individual steps. In addition, the precision of input data from an experiment is limited. In this chapter we study the influence of numerical errors on the uncertainties of the calculated results and the stability of simple algorithms.
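The rounding-error point is easy to demonstrate: 0.1 is not exactly representable in binary, so naive summation of ten copies accumulates one rounding error per addition, while a compensated summation recovers the exact value:

```python
import math

values = [0.1] * 10
naive = sum(values)        # one rounding error per addition accumulates
exact = math.fsum(values)  # compensated (Shewchuk) summation
```

Here `naive` differs from 1.0 in the last bit while `exact` equals 1.0, which is precisely the kind of accumulated machine-number error the chapter studies.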

  18. Absolute magnitudes and kinematics of barium stars.

    NASA Astrophysics Data System (ADS)

    Gomez, A. E.; Luri, X.; Grenier, S.; Prevot, L.; Mennessier, M. O.; Figueras, F.; Torra, J.

    1997-03-01

    The absolute magnitude of barium stars has been obtained from kinematical data using a new algorithm based on the maximum-likelihood principle. The method allows a sample to be separated into groups characterized by different mean absolute magnitudes, kinematics and z-scale heights. It also takes into account, simultaneously, the censorship in the sample and the errors on the observables. The method has been applied to a sample of 318 barium stars. Four groups have been detected. Three of them show a kinematical behaviour corresponding to disk-population stars. The fourth group contains stars with halo kinematics. The luminosities of the disk-population groups span a large range. The intrinsically brightest one (M_v = -1.5 mag, σ_M = 0.5 mag) seems to be an inhomogeneous group containing barium binaries as well as AGB single stars. The most numerous group (about 150 stars) has a mean absolute magnitude corresponding to stars on the red giant branch (M_v = 0.9 mag, σ_M = 0.8 mag). The third group contains barium dwarfs; the obtained mean absolute magnitude is characteristic of stars on the main sequence or on the subgiant branch (M_v = 3.3 mag, σ_M = 0.5 mag). The obtained mean luminosities as well as the kinematical results are compatible with an evolutionary link between barium dwarfs and classical barium giants. The highly luminous group is not linked with these last two groups. More high-resolution spectroscopic data will be necessary in order to better discriminate between barium and non-barium stars.

  19. Medication Errors

    MedlinePlus

    ... to reduce the risk of medication errors to industry and others at FDA. Additionally, DMEPA prospectively reviews ... List of Abbreviations Regulations and Guidances Guidance for Industry: Safety Considerations for Product Design to Minimize Medication ...

  20. Medication Errors

    MedlinePlus

    Medicines cure infectious diseases, prevent problems from chronic diseases, and ease pain. But medicines can also cause harmful reactions if not used ... You can help prevent errors by Knowing your medicines. Keep a list of the names of your ...

  1. Singular perturbation of absolute stability.

    NASA Technical Reports Server (NTRS)

    Siljak, D. D.

    1972-01-01

    It was previously shown (author, 1969) that the regions of absolute stability in the parameter space can be determined when the parameters appear on the right-hand side of the system equations, i.e., the regular case. Here, the effect on absolute stability of a small parameter attached to higher derivatives in the equations (the singular case) is studied. The Lur'e-Postnikov class of nonlinear systems is considered.

  2. Neutron resonance averaging

    SciTech Connect

    Chrien, R.E.

    1986-10-01

    The principles of resonance averaging as applied to neutron capture reactions are described. Several illustrations of resonance averaging to problems of nuclear structure and the distribution of radiative strength in nuclei are provided. 30 refs., 12 figs.

  3. Paradoxes in Averages.

    ERIC Educational Resources Information Center

    Mitchem, John

    1989-01-01

    Examples used to illustrate Simpson's paradox for secondary students include probabilities, university admissions, batting averages, student-faculty ratios, and average and expected class sizes. Each result is explained. (DC)

  4. Absolute Radiometer for Reproducing the Solar Irradiance Unit

    NASA Astrophysics Data System (ADS)

    Sapritskii, V. I.; Pavlovich, M. N.

    1989-01-01

    A high-precision absolute radiometer with a thermally stabilized cavity as receiving element has been designed for use in solar irradiance measurements. The State Special Standard of the Solar Irradiance Unit has been built on the basis of the developed absolute radiometer. The Standard also includes the sun tracking system and the system for automatic thermal stabilization and information processing, comprising a built-in microcalculator which calculates the irradiance according to the input program. During metrological certification of the Standard, main error sources have been analysed and the non-excluded systematic and accidental errors of the irradiance-unit realization have been determined. The total error of the Standard does not exceed 0.3%. Beginning in 1984 the Standard has been taking part in a comparison with the Å 212 pyrheliometer and other Soviet and foreign standards. In 1986 it took part in the international comparison of absolute radiometers and standard pyrheliometers of socialist countries. The results of the comparisons proved the high metrological quality of this Standard based on an absolute radiometer.

  5. Computation of Standard Errors

    PubMed Central

    Dowd, Bryan E; Greene, William H; Norton, Edward C

    2014-01-01

    Objectives We discuss the problem of computing the standard errors of functions involving estimated parameters and provide the relevant computer code for three different computational approaches using two popular computer packages. Study Design We show how to compute the standard errors of several functions of interest: the predicted value of the dependent variable for a particular subject, the effect of a change in an explanatory variable on the predicted value of the dependent variable for an individual subject, and the average effect for a sample of subjects. Empirical Application Using a publicly available dataset, we explain three different methods of computing standard errors: the delta method, Krinsky-Robb, and bootstrapping. We provide computer code for Stata 12 and LIMDEP 10/NLOGIT 5. Conclusions In most applications, the choice of computational method for standard errors of functions of estimated parameters is a matter of convenience. However, when computing standard errors of the sample average of functions that involve both estimated parameters and nonstochastic explanatory variables, it is important to consider the sources of variation in the function's values. PMID:24800304
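    As a rough illustration of two of the three approaches named above (the data, the function g, and all sample sizes are invented; the paper's own code targets Stata and LIMDEP), the delta-method and bootstrap standard errors of a function of an estimated parameter can be computed as:

```python
import math
import random

random.seed(0)
x = [random.gauss(1.0, 0.5) for _ in range(200)]   # invented sample
n = len(x)

mean = sum(x) / n
var = sum((xi - mean) ** 2 for xi in x) / (n - 1)
se_mean = math.sqrt(var / n)                        # SE of the estimate itself

# Delta method: se[g(theta_hat)] ~= |g'(theta_hat)| * se[theta_hat],
# here with g(t) = exp(t), so g'(t) = exp(t).
se_delta = math.exp(mean) * se_mean

# Nonparametric bootstrap: resample the data, recompute g, take the SD.
B = 500
reps = []
for _ in range(B):
    sample = [random.choice(x) for _ in range(n)]
    reps.append(math.exp(sum(sample) / n))
mb = sum(reps) / B
se_boot = math.sqrt(sum((r - mb) ** 2 for r in reps) / (B - 1))

print(se_delta, se_boot)   # the two estimates should agree closely
```

    For a smooth function and a moderate sample, the two estimates typically agree to within a few percent; the bootstrap becomes preferable when g is strongly nonlinear near the estimate.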

  6. Clinical implementation and error sensitivity of a 3D quality assurance protocol for prostate and thoracic IMRT.

    PubMed

    Gueorguiev, Gueorgui; Cotter, Christopher; Turcotte, Julie Catherine; Crawford, Bruce; Sharp, Gregory; Mah'D, Mufeed

    2015-01-01

    This work aims at three goals: first, to define a set of statistical parameters and plan structures for a 3D pretreatment thoracic and prostate intensity-modulated radiation therapy (IMRT) quality assurance (QA) protocol; second, to test whether the 3D QA protocol is able to detect certain clinical errors; and third, to compare the 3D QA method with QA performed with a single ion chamber and a 2D gamma test in detecting those errors. The 3D QA protocol measurements were performed on 13 prostate and 25 thoracic IMRT patients using IBA's COMPASS system. For each treatment planning structure included in the protocol, the following statistical parameters were evaluated: average absolute dose difference (AADD), percent structure volume with absolute dose difference greater than 6% (ADD6), and a 3D gamma test. To test the 3D QA protocol's error sensitivity, two prostate and two thoracic step-and-shoot IMRT patients were investigated. Errors introduced to each of the treatment plans included energy switched from 6 MV to 10 MV, multileaf collimator (MLC) leaf errors, linac jaw errors, monitor unit (MU) errors, MLC and gantry angle errors, and detector shift errors. QA was performed on each plan using a single ion chamber and a 2D array of ion chambers for 2D and 3D QA. Based on the measurements performed, we established a uniform set of tolerance levels to determine whether QA passes for each IMRT treatment plan structure: the maximum allowed AADD is 6%; at most 4% of any structure volume may have an absolute dose difference greater than 6% (ADD6); and at most 4% of any structure volume may fail the 3D gamma test with test parameters 3%/3 mm DTA. Of the three QA methods tested, the single ion chamber performed the worst, detecting 4 of 18 introduced errors; 2D QA detected 11 of 18, and 3D QA detected 14 of 18. PMID:26699299
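    The gamma test with 3%/3 mm DTA criteria can be illustrated with a 1D sketch (the clinical test is 3D; the grid, dose profile, and shift below are invented for demonstration):

```python
import math

def gamma_index(ref_pos, ref_dose, eval_pos, eval_dose,
                dose_crit=0.03, dist_crit=3.0):
    """Gamma at each reference point: the minimum over all evaluated points of
    sqrt((dose diff / dose criterion)^2 + (distance / DTA criterion)^2).
    dose_crit is a fraction of the reference maximum; dist_crit is in mm."""
    d_norm = dose_crit * max(ref_dose)
    gammas = []
    for rp, rd in zip(ref_pos, ref_dose):
        g = min(math.sqrt(((ed - rd) / d_norm) ** 2 +
                          ((ep - rp) / dist_crit) ** 2)
                for ep, ed in zip(eval_pos, eval_dose))
        gammas.append(g)
    return gammas

# A 1-mm grid with the evaluated profile shifted by one grid point:
pos = [float(i) for i in range(11)]
ref = [100 - (i - 5) ** 2 for i in range(11)]   # peaked "dose" profile
ev = ref[1:] + [ref[-1]]                        # ~1 mm spatial shift
g = gamma_index(pos, ref, pos, ev)
pass_rate = sum(1 for gi in g if gi <= 1.0) / len(g)
print(pass_rate)   # fraction of points passing 3%/3 mm
```

    A point passes when gamma <= 1, i.e. when some evaluated point is simultaneously close enough in dose and in distance; the 1-mm shift above is caught only at the profile edge.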

  7. Absolute radiometric calibration of the Thematic Mapper

    NASA Technical Reports Server (NTRS)

    Slater, P. N.; Biggar, S. F.; Holm, R. G.; Jackson, R. D.; Mao, Y.

    1986-01-01

    Calibration data for the solar reflective bands of the Landsat-5 TM obtained from five in-flight absolute radiometric calibrations from July 1984-November 1985 at White Sands, New Mexico are presented and analyzed. Ground reflectance and atmospheric data were utilized to predict the spectral radiance at the entrance pupil of the TM and the average number of digital counts in each TM band. The calibration of each of the TM solar reflective bands was calculated in terms of average digital counts/unit spectral radiance for each band. It is observed that for the 12 reflectance-based measurements the rms variation from the means as a percentage of the mean is + or - 1.9 percent; for the 11 measurements in the IR bands, it is + or - 3.4 percent; and the rms variation for all 23 measurements is + or - 2.8 percent.

  8. Absolute Radiometric Calibration of EUNIS-06

    NASA Technical Reports Server (NTRS)

    Thomas, R. J.; Rabin, D. M.; Kent, B. J.; Paustian, W.

    2007-01-01

    The Extreme-Ultraviolet Normal-Incidence Spectrometer (EUNIS) is a sounding-rocket payload that obtains imaged high-resolution spectra of individual solar features, providing information about the Sun's corona and upper transition region. Shortly after its successful initial flight last year, a complete end-to-end calibration was carried out to determine the instrument's absolute radiometric response over its longwave bandpass of 300-370 A. The measurements were done at the Rutherford Appleton Laboratory (RAL) in England, using the same vacuum facility and EUV radiation source used in the pre-flight calibrations of both SOHO/CDS and Hinode/EIS, as well as in three post-flight calibrations of our SERTS sounding-rocket payload, the precursor to EUNIS. The unique radiation source provided by the Physikalisch-Technische Bundesanstalt (PTB) had been calibrated to an absolute accuracy of 7% (1-sigma) at 12 wavelengths covering our bandpass, directly against the Berlin electron storage ring BESSY, which is itself a primary radiometric source standard. Scans of the EUNIS aperture were made to determine the instrument's absolute spectral sensitivity to ±25%, considering all sources of error, and demonstrate that EUNIS-06 was the most sensitive solar EUV spectrometer yet flown. The results will be matched against prior calibrations, which relied on combining measurements of individual optical components and on comparisons with theoretically predicted 'insensitive' line ratios. Coordinated observations were made during the EUNIS-06 flight by SOHO/CDS and EIT that will allow re-calibrations of those instruments as well. In addition, future EUNIS flights will provide similar calibration updates for TRACE, Hinode/EIS, and STEREO/SECCHI/EUVI.

  9. Clock time is absolute and universal

    NASA Astrophysics Data System (ADS)

    Shen, Xinhang

    2015-09-01

    A critical error is found in the Special Theory of Relativity (STR): mixing up the concepts of the STR abstract time of a reference frame and the displayed time of a physical clock, which leads to using the properties of the abstract time to predict time dilation of physical clocks and all other physical processes. Actually, a clock can never directly measure the abstract time; it can only record the result of a physical process during a period of the abstract time, such as the number of cycles of oscillation, which is the product of the abstract time and the frequency of oscillation. After a Lorentz transformation, the abstract time of a reference frame expands by a factor gamma, but the frequency of a clock decreases by the same factor gamma, so the resulting product, i.e. the displayed time of a moving clock, remains unchanged. That is, the displayed time of any physical clock is an invariant of the Lorentz transformation. The Lorentz invariance of the displayed times of clocks can further prove, within the framework of STR, that our earth-based standard physical time is absolute, universal and independent of inertial reference frames, as confirmed both by the physical fact of the universal synchronization of clocks on the GPS satellites and clocks on the earth, and by the theoretical existence of the absolute and universal Galilean time in STR, which has proved that time dilation and space contraction are pure illusions of STR. The existence of the absolute and universal time in STR directly denies that the reference-frame-dependent abstract time of STR is the physical time; therefore, STR is wrong and all its predictions can never happen in the physical world.

  10. Absolute geostrophic currents in global tropical oceans

    NASA Astrophysics Data System (ADS)

    Yang, Lina; Yuan, Dongliang

    2016-11-01

    A set of absolute geostrophic current (AGC) data for the period January 2004 to December 2012 are calculated using the P-vector method based on monthly gridded Argo profiles in the world tropical oceans. The AGCs agree well with altimeter geostrophic currents, Ocean Surface Current Analysis-Real time currents, and moored current-meter measurements at 10-m depth, based on which the classical Sverdrup circulation theory is evaluated. Calculations have shown that errors of wind stress calculation, AGC transport, and depth ranges of vertical integration cannot explain non-Sverdrup transport, which is mainly in the subtropical western ocean basins and equatorial currents near the Equator in each ocean basin (except the North Indian Ocean, where the circulation is dominated by monsoons). The identified non-Sverdrup transport is thereby robust and attributed to the joint effect of baroclinicity and relief of the bottom (JEBAR) and mesoscale eddy nonlinearity.
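    The Sverdrup relation evaluated in the abstract, beta * V = curl_z(tau) / rho, can be sketched numerically (all values below are typical mid-latitude numbers, not taken from the paper):

```python
import math

rho = 1025.0                  # seawater density, kg/m^3
omega = 7.292e-5              # Earth's rotation rate, rad/s
earth_radius = 6.371e6        # m
lat = math.radians(25.0)
beta = 2 * omega * math.cos(lat) / earth_radius   # df/dy, 1/(m s)

curl_tau = -1e-7              # wind-stress curl, N/m^3 (subtropical gyre value)
V = curl_tau / (rho * beta)   # depth-integrated meridional transport per unit
                              # longitude, m^2/s

# Integrate across a 6000-km-wide basin to get a gyre transport in Sverdrups
# (1 Sv = 1e6 m^3/s):
basin_width = 6.0e6           # m
transport_Sv = V * basin_width / 1e6
print(round(transport_Sv, 1))  # ~ -28 Sv (equatorward interior flow)
```

    Departures of the observed AGC transport from this wind-driven prediction are the "non-Sverdrup transport" the abstract attributes to JEBAR and eddy nonlinearity.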

  11. Absolute flux scale for radioastronomy

    SciTech Connect

    Ivanov, V.P.; Stankevich, K.S.

    1986-07-01

    The authors propose and provide support for a new absolute flux scale for radio astronomy, which is not encumbered with the inadequacies of the previous scales. In constructing it, the method of relative spectra (a powerful tool for choosing reference spectra) was used. A review of previous flux scales is given, and the authors compare the AIS scale with the scale they propose. Both scales are based on absolute measurements by the "artificial moon" method, and they are practically coincident in the range from 0.96 to 6 GHz. At frequencies above 6 GHz and below 0.96 GHz, the AIS scale is overestimated because of incorrect extrapolation of the spectra of the primary and secondary standards. The major results that have emerged from this review of absolute scales in radio astronomy are summarized.

  12. Absolute Proper Motions of Southern Globular Clusters

    NASA Astrophysics Data System (ADS)

    Dinescu, D. I.; Girard, T. M.; van Altena, W. F.

    1996-05-01

    Our program involves the determination of absolute proper motions, with respect to galaxies, for a sample of globular clusters situated in the southern sky. The plates cover a 6° x 6° area and are taken with the 51-cm double astrograph at Cesco Observatory in El Leoncito, Argentina. We have developed special methods to deal with the modelling error of the plate transformation, and we correct for magnitude equation using the cluster stars. This careful astrometric treatment leads to accuracies of 0.5 to 1.0 mas/yr for the absolute proper motion of each cluster, depending primarily on the number of measurable cluster stars, which in turn is related to the cluster's distance. Space velocities are then derived which, in association with metallicities, provide key information for the formation scenario of the Galaxy, i.e. accretion and/or dissipational collapse. Here we present results for NGC 1851, NGC 6752, NGC 6584, NGC 6362 and NGC 288.

  13. Assessment of absolute added correlative coding in optical intensity modulation and direct detection channels

    NASA Astrophysics Data System (ADS)

    Dong-Nhat, Nguyen; Elsherif, Mohamed A.; Malekmohammadi, Amin

    2016-06-01

    The performance of the absolute added correlative coding (AACC) modulation format with direct detection has been reported numerically and analytically, targeting metro data-center interconnects. The focus lies on bit-error-rate performance, noise contributions, spectral efficiency, and chromatic-dispersion tolerance. The signal-space model of AACC, from which the average electrical and optical power expressions are derived for the first time, is also delineated. The proposed modulation format was compared to other well-known signaling formats, such as on-off keying (OOK) and four-level pulse-amplitude modulation, at the same bit rate in a directly modulated vertical-cavity surface-emitting laser-based transmission system. The comparison results show a clear advantage of AACC in achieving longer fiber delivery distances due to its higher dispersion tolerance.

  14. Absolute wavelength calibration of a Doppler spectrometer with a custom Fabry-Perot optical system

    NASA Astrophysics Data System (ADS)

    Baltzer, M. M.; Craig, D.; Den Hartog, D. J.; Nishizawa, T.; Nornberg, M. D.

    2016-11-01

    An Ion Doppler Spectrometer (IDS) is used for fast measurements of C VI line emission (343.4 nm) in the Madison Symmetric Torus. Absolutely calibrated flow measurements are difficult because the IDS records data within 0.25 nm of the line. Commercial calibration lamps do not produce lines in this narrow range. A light source using an ultraviolet LED and etalon was designed to provide a fiducial marker 0.08 nm wide. The light is coupled into the IDS at f/4, and a holographic diffuser increases homogeneity of the final image. Random and systematic errors in data analysis were assessed. The calibration is accurate to 0.003 nm, allowing for flow measurements accurate to 3 km/s. This calibration is superior to the previous method which used a time-averaged measurement along a chord believed to have zero net Doppler shift.
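    A quick consistency check of the quoted numbers (not part of the paper's analysis): by the non-relativistic Doppler relation v = c * (dlambda / lambda), a 0.003 nm calibration accuracy at the 343.4 nm C VI line corresponds to roughly 3 km/s of flow accuracy.

```python
c = 2.998e8        # speed of light, m/s
lam = 343.4e-9     # C VI line wavelength, m
dlam = 0.003e-9    # calibration accuracy, m

v = c * dlam / lam  # velocity uncertainty, m/s
print(round(v / 1e3, 1))  # ~2.6 km/s, consistent with the quoted 3 km/s
```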

  15. Current Absolute Plate Velocities Inferred from Hotspot Tracks, Comparison with Absolute Velocities Inferred from Seismic Anisotropy, and Bounds on Rates of Motion Between Groups of Hotspots

    NASA Astrophysics Data System (ADS)

    Wang, C.; Gordon, R. G.; Zheng, L.

    2015-12-01

    Hotspot tracks have been widely used to estimate the velocities of plates relative to the lower mantle. Here we analyze the hotspot azimuth data set of Morgan and Phipps Morgan [2007] and show that the errors in plate velocity azimuths inferred from hotspot tracks in any one plate are correlated with the errors of other azimuths in the same plate. We use a two-tier analysis to account for this correlated error. First, we determine an individual best-fitting pole for each plate. Second, we determine the absolute plate velocity by minimizing the misfit while constrained by the MORVEL relative plate velocities [DeMets et al. 2010]. Our preferred model, HS4-MORVEL, uses azimuths from 9 major plates, which are weighted equally. We find that the Pacific plate rotates 0.86±0.016° Ma-1 right-handed about 63.3°S, 96.1°E. Angular velocities of four plates (Amur, Eurasia, Yangtze and Antarctic) differ insignificantly from zero. The net rotation of the lithosphere is 0.24±0.014° Ma-1 right-handed about 52.3°S, 56.9°E. The angular velocities differ insignificantly from the absolute angular velocities inferred from the orientation of seismic anisotropy [Zheng et al. 2014]. The within-plate dispersion of hotspot-track azimuths is 14°, which is comparable to the within-plate dispersion found from orientations of seismic anisotropy. The between-plate dispersion is 6.9±2.4° (95% confidence limits), which is smaller than that found from seismic anisotropy. The between-plate dispersion of 4.5° to 9.3° can be used to place bounds on how fast hotspots under one plate move relative to hotspots under another plate. For an average plate absolute speed of ≈50 mm/yr, the between-plate dispersion indicates a rate of motion of 4 mm/yr to 8 mm/yr for the component of hotspot motion perpendicular to plate motion. This upper bound is consistent with prior work that indicated upper bounds on motion between Pacific hotspots and Indo-Atlantic hotspots over the past 48 Ma of 8-13 mm

  16. Relativistic Absolutism in Moral Education.

    ERIC Educational Resources Information Center

    Vogt, W. Paul

    1982-01-01

    Discusses Emile Durkheim's "Moral Education: A Study in the Theory and Application of the Sociology of Education," which holds that morally healthy societies may vary in culture and organization but must possess absolute rules of moral behavior. Compares this moral theory with current theory and practice of American educators. (MJL)

  17. Absolute transition probabilities of phosphorus.

    NASA Technical Reports Server (NTRS)

    Miller, M. H.; Roig, R. A.; Bengtson, R. D.

    1971-01-01

    A gas-driven shock tube was used to measure the absolute strengths of 21 P I lines and 126 P II lines (from 3300 to 6900 A). The accuracy for prominent, isolated neutral and ionic lines is estimated to be 28 to 40% and 18 to 30%, respectively. The data and the corresponding theoretical predictions are examined for conformity with the sum rules.

  18. Absolute Magnitudes of Pan-STARRS PS1 Asteroids

    NASA Astrophysics Data System (ADS)

    Veres, Peter; Jedicke, R.; Fitzsimmons, A.; Denneau, L.; Wainscoat, R.; Bolin, B.; PS1SC Collaboration

    2013-10-01

    Absolute magnitude (H) is a fundamental parameter describing the size and the apparent brightness of an asteroid. Because of its surface shape, properties, and changing illumination, the brightness changes with the observing geometry and is described by the phase function, governed by the slope parameter (G). Although many years have been spent on detailed observations of individual asteroids to provide H and G, the vast majority of minor planets have H based on an assumed G, and because the input photometry comes from multiple sources the errors of these values are unknown. We compute H for ~180,000 asteroids, and G for a few thousand, observed with the Pan-STARRS PS1 telescope in well-defined photometric systems. The mean photometric error is 0.04 mag. Because on average there are only 7 detections per asteroid in our sample, we employed a Monte Carlo (MC) technique to generate clones simulating all possible rotation periods, amplitudes, and colors of detected asteroids. Known asteroid colors were taken from the SDSS database. We used debiased spin and amplitude distributions dependent on size, spectral-class distributions of asteroids dependent on semi-major axis, and starting values of G from previous works. H and G (G12, respectively) were derived with the phase functions of Bowell et al. (1989) and Muinonen et al. (2010). We confirmed that there is a systematic offset between H based on PS1 asteroids and the Minor Planet Center database of up to -0.3 mag, peaking at H = 14. A similar offset was first mentioned in the analysis of SDSS asteroids and was believed to be solved by weighting and normalizing magnitudes by observatory code. MC shows that there is only a negligible difference between Bowell's and Muinonen's solutions for H; however, Muinonen's phase function provides smaller errors on H. We also derived G and G12 for thousands of asteroids.
    For known spectral classes, the slope parameters agree with previous work in general; however, the standard deviation of G in our sample is twice
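    The H-G system of Bowell et al. (1989) used above can be sketched as follows (the function name and the example geometry are invented for illustration): H is the apparent magnitude corrected for distance and for the phase-angle brightness drop.

```python
import math

def hg_absolute_magnitude(V, r, delta, alpha_deg, G=0.15):
    """H = V - 5*log10(r*delta) + 2.5*log10((1-G)*Phi1 + G*Phi2),
    with r, delta in au, phase angle alpha in degrees, and the standard
    H-G basis functions Phi_i = exp(-A_i * tan(alpha/2)**B_i)."""
    a = math.radians(alpha_deg)
    phi1 = math.exp(-3.33 * math.tan(a / 2) ** 0.63)
    phi2 = math.exp(-1.87 * math.tan(a / 2) ** 1.22)
    return (V - 5 * math.log10(r * delta)
            + 2.5 * math.log10((1 - G) * phi1 + G * phi2))

# Made-up geometry: a main-belt asteroid at r = 2.5 au, delta = 1.6 au,
# phase angle 10 deg, observed at V = 16.0 with the default slope G = 0.15.
H = hg_absolute_magnitude(16.0, 2.5, 1.6, 10.0)
print(round(H, 2))
```

    At zero phase angle both basis functions equal 1 and the correction vanishes, so H reduces to V - 5 log10(r * delta), as it should.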

  19. The surveillance error grid.

    PubMed

    Klonoff, David C; Lias, Courtney; Vigersky, Robert; Clarke, William; Parkes, Joan Lee; Sacks, David B; Kirkman, M Sue; Kovatchev, Boris

    2014-07-01

    Currently used error grids for assessing clinical accuracy of blood glucose monitors are based on out-of-date medical practices. Error grids have not been widely embraced by regulatory agencies for clearance of monitors, but this type of tool could be useful for surveillance of the performance of cleared products. Diabetes Technology Society together with representatives from the Food and Drug Administration, the American Diabetes Association, the Endocrine Society, and the Association for the Advancement of Medical Instrumentation, and representatives of academia, industry, and government, have developed a new error grid, called the surveillance error grid (SEG) as a tool to assess the degree of clinical risk from inaccurate blood glucose (BG) monitors. A total of 206 diabetes clinicians were surveyed about the clinical risk of errors of measured BG levels by a monitor. The impact of such errors on 4 patient scenarios was surveyed. Each monitor/reference data pair was scored and color-coded on a graph per its average risk rating. Using modeled data representative of the accuracy of contemporary meters, the relationships between clinical risk and monitor error were calculated for the Clarke error grid (CEG), Parkes error grid (PEG), and SEG. SEG action boundaries were consistent across scenarios, regardless of whether the patient was type 1 or type 2 or using insulin or not. No significant differences were noted between responses of adult/pediatric or 4 types of clinicians. Although small specific differences in risk boundaries between US and non-US clinicians were noted, the panel felt they did not justify separate grids for these 2 types of clinicians. The data points of the SEG were classified in 15 zones according to their assigned level of risk, which allowed for comparisons with the classic CEG and PEG. 
Modeled glucose monitor data with realistic self-monitoring of blood glucose errors derived from meter testing experiments plotted on the SEG when compared to

  20. Attractors and Time Averages for Random Maps

    NASA Astrophysics Data System (ADS)

    Araujo, Vitor

    2006-07-01

    Considering random noise in finite dimensional parameterized families of diffeomorphisms of a compact finite dimensional boundaryless manifold M, we show the existence of time averages for almost every orbit of each point of M, imposing mild conditions on the families. Moreover these averages are given by a finite number of physical absolutely continuous stationary probability measures. We use this result to deduce that situations with infinitely many sinks and Henon-like attractors are not stable under random perturbations, e.g., Newhouse's and Colli's phenomena in the generic unfolding of a quadratic homoclinic tangency by a one-parameter family of diffeomorphisms.

  1. Absolute measurement of the extreme UV solar flux

    NASA Technical Reports Server (NTRS)

    Carlson, R. W.; Ogawa, H. S.; Judge, D. L.; Phillips, E.

    1984-01-01

    A windowless rare-gas ionization chamber has been developed to measure the absolute value of the solar extreme-UV flux in the 50-575 A region. Successful results were obtained on a solar-pointing sounding rocket. The ionization chamber, operated in total absorption, is an inherently stable absolute detector of ionizing UV radiation and was designed to be independent of effects from secondary ionization and gas effusion. The net error of the measurement is ±7.3 percent, which is primarily due to residual outgassing in the instrument; other errors, such as multiple ionization, photoelectron collection, and extrapolation to zero atmospheric optical depth, are small in comparison. For the day of the flight, Aug. 10, 1982, the solar irradiance (50-575 A), normalized to unit solar distance, was found to be (5.71 ± 0.42) x 10^10 photons cm^-2 s^-1.
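    The normalization to unit solar distance mentioned at the end is a simple inverse-square scaling; a hedged sketch (the distance and the measured flux below are illustrative values, not the paper's):

```python
# Irradiance scales as the inverse square of the Sun-observer distance, so a
# flux measured at distance d (in au) is referred to 1 au by multiplying by d^2.
d_au = 1.014            # assumed Earth-Sun distance in early August, au
f_observed = 5.60e10    # hypothetical measured flux, photons cm^-2 s^-1
f_1au = f_observed * d_au ** 2
print(f"{f_1au:.3g}")   # flux normalized to unit solar distance
```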

  2. Mathematical Model for Absolute Magnetic Measuring Systems in Industrial Applications

    NASA Astrophysics Data System (ADS)

    Fügenschuh, Armin; Fügenschuh, Marzena; Ludszuweit, Marina; Mojsic, Aleksandar; Sokół, Joanna

    2015-09-01

    Scales for measuring systems are based on either incremental or absolute measuring methods. Incremental scales need to initialize a measurement cycle at a reference point; from there, the position is computed by counting increments of a periodic graduation. Absolute methods do not need reference points, since the position can be read directly from the scale. The positions on complete scales are encoded using two incremental tracks with different graduations. We present a new method for absolute measurement using only one track for position encoding, down to the micrometre range. Instead of the common perpendicular magnetic areas, we use a pattern of trapezoidal magnetic areas to store more complex information. For positioning, we use the magnetic field, where every position is characterized by a set of values measured by a Hall-sensor array. We implement a method for reconstructing absolute positions from the set of unique measured values. We compare two patterns with respect to uniqueness, accuracy, stability, and robustness of positioning, and we discuss how stability and robustness are influenced by different errors during measurement in real applications and how those errors can be compensated.
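    The reconstruction idea can be shown with a toy sketch (the calibration table, sensor count, and readings are all invented): each position has a unique signature of sensor values, and an unknown position is decoded as the stored signature nearest to the measurement.

```python
import math

# Pretend calibration table: position (mm) -> readings of a 3-sensor array.
table = {
    0.0: (0.10, 0.80, 0.35),
    0.5: (0.22, 0.74, 0.41),
    1.0: (0.35, 0.60, 0.55),
    1.5: (0.47, 0.42, 0.68),
}

def decode(measured):
    """Return the position whose stored signature is closest (Euclidean)."""
    return min(table, key=lambda p: math.dist(table[p], measured))

# A noisy measurement near the 1.0 mm signature still decodes correctly,
# illustrating the robustness-to-measurement-error discussion:
print(decode((0.33, 0.62, 0.53)))
```

    Decoding stays robust as long as the noise is smaller than half the minimum distance between any two stored signatures, which is why the abstract compares patterns on uniqueness of the measured value sets.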

  3. Application Bayesian Model Averaging method for ensemble system for Poland

    NASA Astrophysics Data System (ADS)

    Guzikowski, Jakub; Czerwinska, Agnieszka

    2014-05-01

    The aim of the project is to evaluate methods for generating numerical ensemble weather predictions using meteorological data from the Weather Research & Forecasting (WRF) model and calibrating these data by means of the Bayesian Model Averaging (WRF BMA) approach. We construct high-resolution short-range ensemble forecasts using meteorological data (temperature) generated by nine WRF models. The WRF models have 35 vertical levels and 2.5 km x 2.5 km horizontal resolution. The main emphasis is that the ensemble members use different parameterizations of the physical phenomena occurring in the boundary layer. To calibrate an ensemble forecast we use the Bayesian Model Averaging (BMA) approach. The BMA predictive probability density function (PDF) is a weighted average of the predictive PDFs associated with the individual ensemble members, with weights that reflect each member's relative skill. As a test we chose a case with a heat wave and convective weather conditions over Poland from 23 July to 1 August 2013. From 23 July to 29 July 2013 the temperature oscillated below and above 30 °C at many meteorological stations and new temperature records were set. During this time an increase in patients hospitalized with cardiovascular problems was registered. On 29 July 2013 an advection of moist tropical air masses was recorded over Poland, causing a strong convection event with a mesoscale convective system (MCS). The MCS caused local flooding, damage to transport infrastructure, destroyed buildings and trees, and led to injuries and a direct threat to life. A comparison of the meteorological data from the ensemble system with the data recorded at 74 weather stations located in Poland is made. We prepare a set of model-observation pairs. Then the data obtained from single ensemble members and the median from the WRF BMA system are evaluated on the basis of deterministic statistical errors: root mean square error (RMSE) and mean absolute error (MAE).
To evaluation
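    A minimal sketch of the BMA predictive PDF described above (the member forecasts, weights, and spread parameter are invented; in practice the weights and sigma are fitted by maximum likelihood over a training period):

```python
import math

def normal_pdf(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

members = [29.1, 30.4, 31.0, 29.8, 30.2]   # member temperature forecasts, deg C
weights = [0.30, 0.25, 0.15, 0.20, 0.10]   # relative skill weights (sum to 1)
sigma = 1.2                                 # common spread parameter, deg C

def bma_pdf(x):
    """Weighted mixture of normal densities centered on the member forecasts."""
    return sum(w * normal_pdf(x, m, sigma) for w, m in zip(weights, members))

# Deterministic verification of the BMA mean against one observation, in the
# spirit of the RMSE/MAE evaluation mentioned above:
bma_mean = sum(w * m for w, m in zip(weights, members))
obs = 30.6
print(round(bma_mean, 2), round(abs(bma_mean - obs), 2))  # forecast, abs. error
```

    The mixture mean is the deterministic BMA forecast; the full mixture additionally quantifies forecast uncertainty, which a single deterministic run cannot.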

  4. Optomechanics for absolute rotation detection

    NASA Astrophysics Data System (ADS)

    Davuluri, Sankar

    2016-07-01

    In this article, we present an application of an optomechanical cavity to absolute rotation detection. The optomechanical cavity is arranged in a Michelson interferometer in such a way that the classical centrifugal force due to rotation changes the length of the cavity. The change in cavity length induces a shift in the frequency of the cavity mode. The phase shift corresponding to this frequency shift is measured at the interferometer output to estimate the angular velocity of the absolute rotation. We derive an analytic expression for the minimum detectable rotation rate in our scheme for a given optomechanical cavity. The temperature dependence of the rotation-detection sensitivity is also studied.

  5. Moral absolutism and ectopic pregnancy.

    PubMed

    Kaczor, C

    2001-02-01

    If one accepts a version of absolutism that excludes the intentional killing of any innocent human person from conception to natural death, ectopic pregnancy poses vexing difficulties. Given that the embryonic life almost certainly will die anyway, how can one retain one's moral principle and yet adequately respond to a situation that gravely threatens the life of the mother and her future fertility? The four options of treatment most often discussed in the literature are non-intervention, salpingectomy (removal of tube with embryo), salpingostomy (removal of embryo alone), and use of methotrexate (MXT). In this essay, I review these four options and introduce a fifth (the milking technique). In order to assess these options in terms of the absolutism mentioned, it will also be necessary to discuss various accounts of the intention/foresight distinction. I conclude that salpingectomy, salpingostomy, and the milking technique are compatible with absolutist presuppositions, but not the use of methotrexate. PMID:11262641

  7. The Absolute Spectrum Polarimeter (ASP)

    NASA Technical Reports Server (NTRS)

    Kogut, A. J.

    2010-01-01

    The Absolute Spectrum Polarimeter (ASP) is an Explorer-class mission to map the absolute intensity and linear polarization of the cosmic microwave background and diffuse astrophysical foregrounds over the full sky from 30 GHz to 5 THz. The principal science goal is the detection and characterization of linear polarization from an inflationary epoch in the early universe, with tensor-to-scalar ratio r much greater than 10^(-3) and Compton distortion y < 10^(-6). We describe the ASP instrument and mission architecture needed to detect the signature of an inflationary epoch in the early universe using only 4 semiconductor bolometers.

  8. Classification images predict absolute efficiency.

    PubMed

    Murray, Richard F; Bennett, Patrick J; Sekuler, Allison B

    2005-02-24

    How well do classification images characterize human observers' strategies in perceptual tasks? We show mathematically that from the classification image of a noisy linear observer, it is possible to recover the observer's absolute efficiency. If we could similarly predict human observers' performance from their classification images, this would suggest that the linear model that underlies use of the classification image method is adequate over the small range of stimuli typically encountered in a classification image experiment, and that a classification image captures most important aspects of human observers' performance over this range. In a contrast discrimination task and in a shape discrimination task, we found that observers' absolute efficiencies were generally well predicted by their classification images, although consistently slightly (approximately 13%) higher than predicted. We consider whether a number of plausible nonlinearities can account for the slight underprediction, and of these we find that only a form of phase uncertainty can account for the discrepancy.

  9. Time-resolved Absolute Velocity Quantification with Projections

    PubMed Central

    Langham, Michael C.; Jain, Varsha; Magland, Jeremy F.; Wehrli, Felix W.

    2010-01-01

    Quantitative information on time-resolved blood velocity along the femoral/popliteal artery can provide clinical information on peripheral arterial disease and complement MR angiography since not all stenoses are hemodynamically significant. The key disadvantages of the most widely used approach to time-resolve pulsatile blood flow by cardiac-gated velocity-encoded gradient-echo imaging are gating errors and long acquisition time. Here we demonstrate a rapid non-triggered method that quantifies absolute velocity on the basis of phase difference between successive velocity-encoded projections after selectively removing the background static tissue signal via a reference image. The tissue signal from the reference image’s center k-space line is isolated by masking out the vessels in the image domain. The performance of the technique, in terms of reproducibility and agreement with results obtained with conventional phase contrast (PC)-MRI was evaluated at 3T field strength with a variable-flow rate phantom and in vivo of the triphasic velocity waveforms at several segments along the femoral and popliteal arteries. Additionally, time-resolved flow velocity was quantified in five healthy subjects and compared against gated PC-MRI results. To illustrate clinical feasibility the proposed method was shown to be able to identify hemodynamic abnormalities and impaired reactivity in a diseased femoral artery. For both phantom and in vivo studies, velocity measurements were within 1.5 cm/s and the coefficient of variation was less than 5% in an in vivo reproducibility study. In five healthy subjects, the average differences in mean peak velocities and their temporal locations were within 1 cm/s and 10 ms compared to gated PC-MRI. In conclusion, the proposed method provides temporally-resolved arterial velocity with a temporal resolution of 20 ms with minimal post-processing. PMID:20677235
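
    The phase-to-velocity conversion underlying both the projection method and conventional PC-MRI is linear in the measured phase difference. A minimal sketch (the venc value and test velocity below are illustrative, not from the study):

```python
import math

def velocity_from_phase(delta_phi, venc):
    """Standard phase-contrast relation: v = venc * delta_phi / pi, where
    delta_phi is the phase difference (rad) between two velocity encodings
    and venc is the velocity that maps to a phase difference of pi."""
    return venc * delta_phi / math.pi

# a spin moving at 30 cm/s, encoded with venc = 80 cm/s
true_v = 30.0
phi = math.pi * true_v / 80.0           # phase imparted by the bipolar gradients
print(velocity_from_phase(phi, 80.0))   # recovers the true velocity
```

    Velocities above venc wrap past ±π and alias, which is why venc is chosen above the expected peak velocity.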

  10. Flexible time domain averaging technique

    NASA Astrophysics Data System (ADS)

    Zhao, Ming; Lin, Jing; Lei, Yaguo; Wang, Xiufeng

    2013-09-01

    Time domain averaging (TDA) is essentially a comb filter; it cannot extract specified harmonics that may be caused by certain faults, such as gear eccentricity. Meanwhile, TDA always suffers from period cutting error (PCE) to some extent. Several improved TDA methods have been proposed, but they cannot completely eliminate the waveform reconstruction error caused by PCE. To overcome the shortcomings of conventional methods, a flexible time domain averaging (FTDA) technique is established, which adapts to the analyzed signal by adjusting each harmonic of the comb filter. In this technique, the explicit form of FTDA is first constructed by frequency-domain sampling. Subsequently, the chirp Z-transform (CZT) is employed in the FTDA algorithm, which improves the computational efficiency significantly. Since the signal is reconstructed in the continuous time domain, there is no PCE in the FTDA. To validate the effectiveness of FTDA in signal de-noising, interpolation and harmonic reconstruction, a simulated multi-component periodic signal corrupted by noise is processed by FTDA. The simulation results show that the FTDA is capable of recovering the periodic components from the background noise effectively. Moreover, it improves the signal-to-noise ratio by 7.9 dB compared with conventional methods. Experiments are also carried out on gearbox test rigs with a chipped tooth and an eccentric gear, respectively. It is shown that the FTDA can identify the direction and severity of the gear eccentricity, and further enhances the amplitudes of impulses by 35%. The proposed technique not only solves the problem of PCE, but also provides a useful tool for fault symptom extraction in rotating machinery.
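
    For reference, conventional TDA (the comb filter that FTDA generalizes) can be sketched in a few lines; the integer-sample period assumed here is exactly what real signals violate, producing period cutting error:

```python
import math
import random

def time_domain_average(signal, period, n_periods):
    """Average n_periods consecutive segments of `period` samples each.
    Assumes the period is an exact integer number of samples; violating
    that assumption is what produces period cutting error in practice."""
    return [sum(signal[k * period + i] for k in range(n_periods)) / n_periods
            for i in range(period)]

random.seed(0)
period, n = 100, 200
clean = [math.sin(2 * math.pi * i / period) for i in range(period)]
noisy = [clean[i % period] + random.gauss(0, 1.0) for i in range(period * n)]

avg = time_domain_average(noisy, period, n)
rms_before = math.sqrt(sum((noisy[i] - clean[i % period]) ** 2
                           for i in range(period * n)) / (period * n))
rms_after = math.sqrt(sum((a - c) ** 2 for a, c in zip(avg, clean)) / period)
print(rms_before, rms_after)  # noise RMS drops by roughly sqrt(200), about 14x
```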

  11. Absolute magnitudes of asteroids and a revision of asteroid albedo estimates from WISE thermal observations

    NASA Astrophysics Data System (ADS)

    Pravec, Petr; Harris, Alan W.; Kušnirák, Peter; Galád, Adrián; Hornoch, Kamil

    2012-09-01

    We obtained estimates of the Johnson V absolute magnitudes (H) and slope parameters (G) for 583 main-belt and near-Earth asteroids observed at Ondřejov and Table Mountain Observatory from 1978 to 2011. Uncertainties of the absolute magnitudes in our sample are <0.21 mag, with a median value of 0.10 mag. We compared the H data with absolute magnitude values given in the MPCORB, Pisa AstDyS and JPL Horizons orbit catalogs. We found that while the catalog absolute magnitudes for large asteroids are relatively good on average, showing only small biases of less than 0.1 mag, there is a systematic offset of the catalog values for smaller asteroids that becomes prominent in a range of H greater than ∼10 and is particularly large above H ∼ 12. The mean (Hcatalog - H) value is negative, i.e., the catalog H values are systematically too bright. This systematic negative offset of the catalog values reaches a maximum around H = 14 where the mean (Hcatalog - H) is -0.4 to -0.5. We also found smaller correlations of the offset of the catalog H values with taxonomic types and with lightcurve amplitude, up to ∼0.1 mag or less. We discuss a few possible observational causes for the observed correlations, but the reason for the large bias of the catalog absolute magnitudes peaking around H = 14 is unknown; we suspect that the problem lies in the magnitude estimates reported by asteroid surveys. With our photometric H and G data, we revised the preliminary WISE albedo estimates made by Masiero et al. (Masiero, J.R. et al. [2011]. Astrophys. J. 741, 68-89) and Mainzer et al. (Mainzer, A. et al. [2011b]. Astrophys. J. 743, 156-172) for asteroids in our sample. We found that the mean geometric albedo of Tholen/Bus/DeMeo C/G/B/F/P/D types with sizes of 25-300 km is pV = 0.057 with the standard deviation (dispersion) of the sample of 0.013 and the mean albedo of S/A/L types with sizes 0.6-200 km is 0.197 with the standard deviation of the sample of 0.051. The standard errors of the
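
    The albedo revision rests on the standard relation between diameter, absolute magnitude and geometric albedo, D = 1329 km · pV^(-1/2) · 10^(-H/5). A sketch of how a catalog H bias propagates into a derived albedo (the numbers are illustrative):

```python
import math

def albedo_from_H_D(H, D_km):
    """Geometric albedo pV from absolute magnitude H and diameter D (km),
    inverting the standard relation D = 1329 km / sqrt(pV) * 10**(-H/5)."""
    return (1329.0 / D_km * 10 ** (-H / 5)) ** 2

pv_true = albedo_from_H_D(14.0, 5.0)
pv_biased = albedo_from_H_D(14.0 - 0.45, 5.0)  # catalog H ~0.45 mag too bright
print(pv_true, pv_biased / pv_true)  # since pV scales as 10**(-0.4*H),
                                     # a -0.45 mag bias inflates pV by ~50%
```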

  12. Error Analysis of non-TLD HDR Brachytherapy Dosimetric Techniques

    NASA Astrophysics Data System (ADS)

    Amoush, Ahmad

    The American Association of Physicists in Medicine Task Group Report 43 (AAPM-TG43) and its updated version TG-43U1 rely on the LiF TLD detector to determine the experimental absolute dose rate for brachytherapy. The recommended uncertainty estimates associated with TLD experimental dosimetry include 5% for statistical errors (Type A) and 7% for systematic errors (Type B). The TG-43U1 protocol does not include recommendations for other experimental dosimetric techniques to calculate the absolute dose for brachytherapy. This research used two independent experimental methods and Monte Carlo simulations to investigate and analyze uncertainties and errors associated with absolute dosimetry of HDR brachytherapy for a Tandem applicator. An A16 MicroChamber* and one dose MOSFET detectors† were selected to meet the TG-43U1 recommendations for experimental dosimetry. Statistical and systematic uncertainty analyses associated with each experimental technique were analyzed quantitatively using MCNPX 2.6‡ to evaluate source positional error, Tandem positional error, the source spectrum, phantom size effect, reproducibility, temperature and pressure effects, volume averaging, stem and wall effects, and Tandem effect. Absolute dose calculations for clinical use are based on Treatment Planning System (TPS) with no corrections for the above uncertainties. Absolute dose and uncertainties along the transverse plane were predicted for the A16 microchamber. The generated overall uncertainties are 22%, 17%, 15%, 15%, 16%, 17%, and 19% at 1cm, 2cm, 3cm, 4cm, and 5cm, respectively. Predicting the dose beyond 5cm is complicated due to low signal-to-noise ratio, cable effect, and stem effect for the A16 microchamber. Since dose beyond 5cm adds no clinical information, it has been ignored in this study. The absolute dose was predicted for the MOSFET detector from 1cm to 7cm along the transverse plane. The generated overall uncertainties are 23%, 11%, 8%, 7%, 7%, 9%, and 8% at 1cm, 2cm, 3cm
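
    For reference, independent Type A and Type B components such as the TLD figures quoted above combine in quadrature under the usual GUM treatment; this is a generic sketch, not a step taken from the thesis:

```python
import math

def combined_standard_uncertainty(type_a, type_b):
    """Combine independent Type A (statistical) and Type B (systematic)
    relative uncertainties in quadrature, per standard GUM practice."""
    return math.sqrt(type_a ** 2 + type_b ** 2)

u = combined_standard_uncertainty(0.05, 0.07)  # the TG-43U1 TLD estimates
print(round(100 * u, 1))  # combined relative uncertainty, in percent
```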

  13. The average enzyme principle.

    PubMed

    Reznik, Ed; Chaudhary, Osman; Segrè, Daniel

    2013-09-01

    The Michaelis-Menten equation for an irreversible enzymatic reaction depends linearly on the enzyme concentration. Even if the enzyme concentration changes in time, this linearity implies that the amount of substrate depleted during a given time interval depends only on the average enzyme concentration. Here, we use a time re-scaling approach to generalize this result to a broad category of multi-reaction systems, whose constituent enzymes have the same dependence on time, e.g. they belong to the same regulon. This "average enzyme principle" provides a natural methodology for jointly studying metabolism and its regulation.
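
    The principle is easy to check numerically for a single irreversible Michaelis-Menten reaction: with dS/dt = -kcat·E(t)·S/(Km + S), separating variables shows the substrate depleted over [0, T] depends only on the integral (hence the average) of E. A minimal sketch with illustrative rate constants:

```python
import math

def deplete(S0, E_of_t, kcat=1.0, Km=0.5, T=10.0, dt=1e-4):
    """Euler-integrate dS/dt = -kcat * E(t) * S / (Km + S) from t=0 to T."""
    S, t = S0, 0.0
    while t < T:
        S -= dt * kcat * E_of_t(t) * S / (Km + S)
        t += dt
    return S

Ebar = 0.3
S_const = deplete(2.0, lambda t: Ebar)  # constant enzyme at the average level
S_varying = deplete(2.0, lambda t: Ebar * (1 + 0.9 * math.sin(2 * math.pi * t)))
print(S_const, S_varying)  # nearly equal: only the average of E(t) matters
```

    The oscillating profile spans an integer number of periods over [0, 10], so its time average equals Ebar and the two runs deplete the same amount of substrate up to discretization error.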

  14. Phase Errors and the Capture Effect

    SciTech Connect

    Blair, J., and Machorro, E.

    2011-11-01

    This slide-show presents analysis of spectrograms and the phase error of filtered noise in a signal. When the filtered noise is smaller than the signal amplitude, the phase error can never exceed 90°, so the average phase error over many cycles is zero: this is called the capture effect because the largest signal captures the phase and frequency determination.
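
    The claim is easy to verify numerically: the phase of a unit signal plus weaker noise, arg(1 + r·e^(iθ)) with r < 1, can never exceed arcsin(r) < 90°, and it averages to zero over random noise phases. A minimal sketch:

```python
import cmath
import math
import random

random.seed(1)
r = 0.7  # noise amplitude, smaller than the unit-amplitude signal
phases = []
for _ in range(100000):
    theta = random.uniform(-math.pi, math.pi)   # random noise phase
    phases.append(cmath.phase(1 + r * cmath.exp(1j * theta)))

print(max(abs(p) for p in phases))   # bounded by asin(0.7) ~ 0.775 rad < 90 deg
print(sum(phases) / len(phases))     # near zero: the capture effect
```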

  15. The AFGL absolute gravity program

    NASA Technical Reports Server (NTRS)

    Hammond, J. A.; Iliff, R. L.

    1978-01-01

    A brief discussion of the AFGL (Air Force Geophysics Laboratory) program in absolute gravity is presented. Support of outside work and in-house studies relating to gravity instrumentation are discussed. A description of the current transportable system is included and the latest results are presented. These results show good agreement with measurements at the AFGL site by an Italian system. The accuracy obtained by the transportable apparatus is better than 0.1 μm/s² (10 μGal), and agreement with previous measurements is within the combined uncertainties of the measurements.

  16. Familial Aggregation of Absolute Pitch

    PubMed Central

    Baharloo, Siamak; Service, Susan K.; Risch, Neil; Gitschier, Jane; Freimer, Nelson B.

    2000-01-01

    Absolute pitch (AP) is a behavioral trait that is defined as the ability to identify the pitch of tones in the absence of a reference pitch. AP is an ideal phenotype for investigation of gene and environment interactions in the development of complex human behaviors. Individuals who score exceptionally well on formalized auditory tests of pitch perception are designated as “AP-1.” As described in this report, auditory testing of siblings of AP-1 probands and of a control sample indicates that AP-1 aggregates in families. The implications of this finding for the mapping of loci for AP-1 predisposition are discussed. PMID:10924408

  17. Absolute reliability of isokinetic knee flexion and extension measurements adopting a prone position.

    PubMed

    Ayala, F; De Ste Croix, M; Sainz de Baranda, P; Santonja, F

    2013-01-01

    The main purpose of this study was to determine the absolute and relative reliability of isokinetic peak torque (PT), angle of peak torque (APT), average power (PW) and total work (TW) for knee flexion and extension during concentric and eccentric actions measured in a prone position at 60, 180 and 240° s(-1). A total of 50 recreational athletes completed the study. PT, APT, PW and TW for concentric and eccentric knee extension and flexion were recorded at three different angular velocities (60, 180 and 240° s(-1)) on three different occasions with a 72- to 96-h rest interval between consecutive testing sessions. Absolute reliability was examined through typical percentage error (CV(TE)), percentage change in the mean (ChM) and relative reliability with intraclass correlations (ICC(3,1)). For both the knee extensor and flexor muscle groups, all strength data (except APT during knee flexion movements) demonstrated moderate absolute reliability (ChM < 3%; ICCs > 0·70; and CV(TE) < 20%) independent of the knee movement (flexion and extension), type of muscle action (concentric and eccentric) and angular velocity (60, 180 and 240° s(-1)). Therefore, the current study suggests that the CV(TE) values reported for PT (8-20%), APT (8-18%) (only during knee extension movements), PW (14-20%) and TW (12-28%) may be acceptable to detect the large changes usually observed after rehabilitation programmes, but not acceptable to examine the effect of preventative training programmes in healthy individuals.
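
    As a sketch of how a typical-error CV (CV(TE)) of the kind reported here is obtained from two testing sessions: the typical error is the standard deviation of the test-retest differences divided by √2, expressed relative to the grand mean. The torque values below are hypothetical:

```python
import math

def typical_error_cv(test1, test2):
    """Typical (within-subject) error from two trials, as a CV (%):
    TE = sd(differences) / sqrt(2), divided by the grand mean."""
    diffs = [b - a for a, b in zip(test1, test2)]
    n = len(diffs)
    mean_d = sum(diffs) / n
    sd_d = math.sqrt(sum((d - mean_d) ** 2 for d in diffs) / (n - 1))
    te = sd_d / math.sqrt(2)
    grand_mean = (sum(test1) + sum(test2)) / (2 * n)
    return 100 * te / grand_mean

# hypothetical peak-torque values (N*m) from two sessions, 8 participants
s1 = [120, 150, 135, 160, 142, 128, 155, 147]
s2 = [125, 146, 140, 154, 149, 131, 150, 151]
print(round(typical_error_cv(s1, s2), 1))  # CV(TE) in percent
```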

  18. MODEL AVERAGING BASED ON KULLBACK-LEIBLER DISTANCE

    PubMed Central

    Zhang, Xinyu; Zou, Guohua; Carroll, Raymond J.

    2016-01-01

    This paper proposes a model averaging method based on Kullback-Leibler distance under a homoscedastic normal error term. The resulting model average estimator is proved to be asymptotically optimal. When combining least squares estimators, the model average estimator is shown to have the same large sample properties as the Mallows model average (MMA) estimator developed by Hansen (2007). We show via simulations that, in terms of mean squared prediction error and mean squared parameter estimation error, the proposed model average estimator is more efficient than the MMA estimator and the estimator based on model selection using the corrected Akaike information criterion in small sample situations. A modified version of the new model average estimator is further suggested for the case of heteroscedastic random errors. The method is applied to a data set from the Hong Kong real estate market.
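
    A minimal sketch of the Mallows-type criterion that the MMA estimator minimizes, here for just two nested least-squares fits on synthetic data: residual sum of squares of the weighted fit plus a 2σ² penalty on the averaged parameter count. The data, grid search, and constants are illustrative, not the paper's estimator:

```python
import random

random.seed(2)
n = 60
x = [i / n for i in range(n)]
y = [0.5 + 0.3 * xi + random.gauss(0, 0.5) for xi in x]

# ordinary least squares for the larger (linear) model
xbar, ybar = sum(x) / n, sum(y) / n
b1 = (sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))
      / sum((xi - xbar) ** 2 for xi in x))
b0 = ybar - b1 * xbar

fit1 = [ybar] * n                   # candidate 1: intercept only (1 parameter)
fit2 = [b0 + b1 * xi for xi in x]   # candidate 2: intercept + slope (2 params)
sigma2 = sum((yi - fi) ** 2 for yi, fi in zip(y, fit2)) / (n - 2)

def mallows(w):
    """Mallows-type criterion for weight w on the simpler model."""
    rss = sum((yi - (w * f1 + (1 - w) * f2)) ** 2
              for yi, f1, f2 in zip(y, fit1, fit2))
    return rss + 2 * sigma2 * (w * 1 + (1 - w) * 2)

w_best = min((i / 100 for i in range(101)), key=mallows)
print(w_best)  # weight placed on the simpler model
```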

  19. Fiber-optic large area average temperature sensor

    SciTech Connect

    Looney, L.L.; Forman, P.R.

    1994-05-01

    In many instances the desired temperature measurement is only the spatial average temperature over a large area; e.g., ground-truth calibration for a satellite imaging system, or the average temperature of a farm field. By making an accurate measurement of the optical length of a long fiber-optic cable, we can determine the absolute temperature averaged over its length and hence the temperature of the material in contact with it.
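
    The principle reduces to the fact that optical path length is an integral of refractive index along the fiber, and index varies (approximately linearly) with temperature, so the measured length recovers the spatial average exactly. The constants below are illustrative silica-like values, not the authors':

```python
import math

n0, T0, dndT = 1.468, 20.0, 1.2e-5   # assumed index, reference temp, dn/dT

def optical_length(temps, seg_len):
    """Sum n(T)*dx over fiber segments; n varies linearly with temperature."""
    return sum((n0 + dndT * (T - T0)) * seg_len for T in temps)

temps = [15 + 10 * math.sin(x / 7.0) for x in range(1000)]  # nonuniform field
L_opt = optical_length(temps, seg_len=1.0)                  # 1000 m of fiber

T_avg_inferred = T0 + (L_opt / 1000.0 - n0) / dndT
T_avg_true = sum(temps) / len(temps)
print(T_avg_inferred, T_avg_true)  # equal up to floating-point rounding
```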

  20. Average density in cosmology

    SciTech Connect

    Bonnor, W.B.

    1987-05-01

    The Einstein-Straus (1945) vacuole is here used to represent a bound cluster of galaxies embedded in a standard pressure-free cosmological model, and the average density of the cluster is compared with the density of the surrounding cosmic fluid. The two are nearly but not quite equal, and the more condensed the cluster, the greater the difference. A theoretical consequence of the discrepancy between the two densities is discussed. 25 references.

  1. [Errors Analysis and Correction in Atmospheric Methane Retrieval Based on Greenhouse Gases Observing Satellite Data].

    PubMed

    Bu, Ting-ting; Wang, Xian-hua; Ye, Han-han; Jiang, Xin-hua

    2016-01-01

    High precision retrieval of atmospheric CH4 is influenced by a variety of factors. The uncertainties of ground properties and atmospheric conditions are important factors, such as surface reflectance, temperature profile, humidity profile and pressure profile. Surface reflectance is affected by many factors so that it is difficult to get the precise value. The uncertainty of surface reflectance will cause large error to retrieval result. The uncertainties of temperature profile, humidity profile and pressure profile are also important sources of retrieval error and they will cause unavoidable systematic error. This error is hard to eliminate only using CH4 band. In this paper, ratio spectrometry method and CO2 band correction method are proposed to reduce the error caused by these factors. Ratio spectrometry method can decrease the effect of surface reflectance in CH4 retrieval by converting absolute radiance spectrometry into ratio spectrometry. CO2 band correction method converts column amounts of CH4 into column averaged mixing ratio by using CO2 1.61 μm band and it can correct the systematic error caused by temperature profile, humidity profile and pressure profile. The combination of these two correction methods will decrease the effect caused by surface reflectance, temperature profile, humidity profile and pressure profile at the same time and reduce the retrieval error. GOSAT data were used to retrieve atmospheric CH4 to test and validate the two correction methods. The results showed that CH4 column averaged mixing ratio retrieved after correction was close to GOSAT Level2 product and the retrieval precision was up to -0.24%. The studies suggest that the error of CH4 retrieval caused by the uncertainties of ground properties and atmospheric conditions can be significantly reduced and the retrieval precision can be highly improved by using ratio spectrometry method and CO2 band correction method. PMID:27228765
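
    The reflectance cancellation at the heart of the ratio-spectrometry idea can be seen in a toy two-band model: if the unknown surface reflectance multiplies the radiance in both bands identically, it divides out of the band ratio. The transmittance values below are made-up numbers, not retrieval quantities:

```python
F = 1.0                      # incident solar term (arbitrary units)
t_ch4, t_co2 = 0.82, 0.90    # illustrative band transmittances

def radiance(rho, t):
    """Toy top-of-atmosphere radiance: reflectance * solar * transmittance."""
    return rho * F * t

# the ratio is identical for any surface reflectance rho
ratios = [radiance(rho, t_ch4) / radiance(rho, t_co2) for rho in (0.05, 0.2, 0.6)]
print(ratios)  # same value three times: rho has cancelled
```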

  3. Estimating Climatological Bias Errors for the Global Precipitation Climatology Project (GPCP)

    NASA Technical Reports Server (NTRS)

    Adler, Robert; Gu, Guojun; Huffman, George

    2012-01-01

    A procedure is described to estimate bias errors for mean precipitation by using multiple estimates from different algorithms, satellite sources, and merged products. The Global Precipitation Climatology Project (GPCP) monthly product is used as a base precipitation estimate, with other input products included when they are within +/- 50% of the GPCP estimates on a zonal-mean basis (ocean and land separately). The standard deviation s of the included products is then taken to be the estimated systematic, or bias, error. The results allow one to examine monthly climatologies and the annual climatology, producing maps of estimated bias errors, zonal-mean errors, and estimated errors over large areas such as ocean and land for both the tropics and the globe. For ocean areas, where there is the largest question as to absolute magnitude of precipitation, the analysis shows spatial variations in the estimated bias errors, indicating areas where one should have more or less confidence in the mean precipitation estimates. In the tropics, relative bias error estimates (s/m, where m is the mean precipitation) over the eastern Pacific Ocean are as large as 20%, as compared with 10%-15% in the western Pacific part of the ITCZ. An examination of latitudinal differences over ocean clearly shows an increase in estimated bias error at higher latitudes, reaching up to 50%. Over land, the error estimates also locate regions of potential problems in the tropics and larger cold-season errors at high latitudes that are due to snow. An empirical technique to area average the gridded errors (s) is described that allows one to make error estimates for arbitrary areas and for the tropics and the globe (land and ocean separately, and combined). Over the tropics this calculation leads to a relative error estimate for tropical land and ocean combined of 7%, which is considered to be an upper bound because of the lack of sign-of-the-error canceling when integrating over different areas with a
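
    The screening-and-spread step described above can be sketched as follows; the product values for the single grid cell are hypothetical:

```python
import math

def bias_error(gpcp_mean, product_means):
    """Estimated bias error: sample std dev of the products that fall
    within +/- 50% of the GPCP base estimate."""
    kept = [p for p in product_means if 0.5 * gpcp_mean <= p <= 1.5 * gpcp_mean]
    m = sum(kept) / len(kept)
    s = math.sqrt(sum((p - m) ** 2 for p in kept) / (len(kept) - 1))
    return s, s / m  # absolute error s and relative error s/m

# hypothetical monthly means (mm/day) for one ocean grid cell
s, rel = bias_error(5.0, [4.2, 5.5, 6.1, 4.8, 11.0])  # 11.0 excluded (>150%)
print(round(s, 2), round(100 * rel, 1))  # s in mm/day, s/m in percent
```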

  4. Analysis of errors in medical rapid prototyping models.

    PubMed

    Choi, J Y; Choi, J H; Kim, N K; Kim, Y; Lee, J K; Kim, M K; Lee, J H; Kim, M J

    2002-02-01

    Rapid prototyping (RP) is a relatively new technology that produces physical models by selectively solidifying UV-sensitive liquid resin using a laser beam. The technology has gained a great amount of attention, particularly in oral and maxillofacial surgery. An important issue in RP applications in this field is how to obtain RP models of the required accuracy. We investigated errors generated during the production of medical RP models, and identified the factors that caused dimensional errors in each production phase. The errors were mainly due to the volume-averaging effect, threshold value, and difficulty in the exact replication of landmark locations. We made 16 linear measurements on a dry skull, a replicated three-dimensional (3-D) visual (STL) model, and an RP model. The results showed that the absolute mean deviation between the original dry skull and the RP model over the 16 linear measurements was 0.62 +/- 0.35 mm (0.56 +/- 0.39%), which is smaller than values reported in previous studies. A major emphasis is placed on the dumb-bell effect. Classifying measurements as internal and external measurements, we observed that the effect of an inadequate threshold value differs with the type of measurement.

  5. Cosmology with negative absolute temperatures

    NASA Astrophysics Data System (ADS)

    Vieira, J. P. P.; Byrnes, Christian T.; Lewis, Antony

    2016-08-01

    Negative absolute temperatures (NAT) are an exotic thermodynamical consequence of quantum physics which has been known since the 1950s (having been achieved in the lab on a number of occasions). Recently, the work of Braun et al. [1] has rekindled interest in negative temperatures and hinted at a possibility of using NAT systems in the lab as dark energy analogues. This paper goes one step further, looking into the cosmological consequences of the existence of a NAT component in the Universe. NAT-dominated expanding Universes experience a borderline phantom expansion (w < -1) with no Big Rip, and their contracting counterparts are forced to bounce after the energy density becomes sufficiently large. Both scenarios might be used to solve horizon and flatness problems analogously to standard inflation and bouncing cosmologies. We discuss the difficulties in obtaining and ending a NAT-dominated epoch, and possible ways of obtaining density perturbations with an acceptable spectrum.

  6. Apparatus for absolute pressure measurement

    NASA Technical Reports Server (NTRS)

    Hecht, R. (Inventor)

    1969-01-01

    An absolute pressure sensor (e.g., the diaphragm of a capacitance manometer) was subjected to a superimposed potential to effectively reduce the mechanical stiffness of the sensor. This substantially increases the sensitivity of the sensor and is particularly useful in vacuum gauges. An oscillating component of the superimposed potential induced vibrations of the sensor. The phase of these vibrations with respect to that of the oscillating component was monitored, and served to initiate an automatic adjustment of the static component of the superimposed potential, so as to bring the sensor into resonance at the frequency of the oscillating component. This establishes a selected sensitivity for the sensor, since a definite relationship exists between resonant frequency and sensitivity.

  8. Absolute Plate Velocities from Seismic Anisotropy

    NASA Astrophysics Data System (ADS)

    Kreemer, Corné; Zheng, Lin; Gordon, Richard

    2015-04-01

    The orientation of seismic anisotropy inferred beneath plate interiors may provide a means to estimate the motions of the plate relative to the sub-asthenospheric mantle. Here we analyze two global sets of shear-wave splitting data, that of Kreemer [2009] and an updated and expanded data set, to estimate plate motions and to better understand the dispersion of the data, correlations in the errors, and their relation to plate speed. We also explore the effect of using geologically current plate velocities (i.e., the MORVEL set of angular velocities [DeMets et al. 2010]) compared with geodetically current plate velocities (i.e., the GSRM v1.2 angular velocities [Kreemer et al. 2014]). We demonstrate that the errors in plate motion azimuths inferred from shear-wave splitting beneath any one tectonic plate are correlated with the errors of other azimuths from the same plate. To account for these correlations, we adopt a two-tier analysis: First, find the pole of rotation and confidence limits for each plate individually. Second, solve for the best fit to these poles while constraining relative plate angular velocities to consistency with the MORVEL relative plate angular velocities. The SKS-MORVEL absolute plate angular velocities (based on the Kreemer [2009] data set) are determined from the poles from eight plates weighted proportionally to the root-mean-square velocity of each plate. SKS-MORVEL indicates that eight plates (Amur, Antarctica, Caribbean, Eurasia, Lwandle, Somalia, Sundaland, and Yangtze) have angular velocities that differ insignificantly from zero. The net rotation of the lithosphere is 0.25±0.11° Ma-1 (95% confidence limits) right-handed about 57.1°S, 68.6°E. The within-plate dispersion of seismic anisotropy for oceanic lithosphere (σ=19.2° ) differs insignificantly from that for continental lithosphere (σ=21.6° ). The between-plate dispersion, however, is significantly smaller for oceanic lithosphere (σ=7.4° ) than for continental

  9. [Diagnostic Errors in Medicine].

    PubMed

    Buser, Claudia; Bankova, Andriyana

    2015-12-01

    The recognition of diagnostic errors in everyday practice can help improve patient safety. The most common diagnostic errors are the cognitive errors, followed by system-related errors and no fault errors. The cognitive errors often result from mental shortcuts, known as heuristics. The rate of cognitive errors can be reduced by a better understanding of heuristics and the use of checklists. The autopsy as a retrospective quality assessment of clinical diagnosis has a crucial role in learning from diagnostic errors. Diagnostic errors occur more often in primary care in comparison to hospital settings. On the other hand, the inpatient errors are more severe than the outpatient errors. PMID:26649954

  10. [Diagnostic Errors in Medicine].

    PubMed

    Buser, Claudia; Bankova, Andriyana

    2015-12-01

    The recognition of diagnostic errors in everyday practice can help improve patient safety. The most common diagnostic errors are the cognitive errors, followed by system-related errors and no fault errors. The cognitive errors often result from mental shortcuts, known as heuristics. The rate of cognitive errors can be reduced by a better understanding of heuristics and the use of checklists. The autopsy as a retrospective quality assessment of clinical diagnosis has a crucial role in learning from diagnostic errors. Diagnostic errors occur more often in primary care in comparison to hospital settings. On the other hand, the inpatient errors are more severe than the outpatient errors. PMID:26649954

  11. Error growth in operational ECMWF forecasts

    NASA Technical Reports Server (NTRS)

    Kalnay, E.; Dalcher, A.

    1985-01-01

    A parameterization scheme used at the European Centre for Medium Range Forecasting to model the average growth of the difference between forecasts on consecutive days was extended by including the effect of forecast model deficiencies on error growth. Error was defined as the difference between the forecast and analysis fields at verification time. Systematic and random errors were considered separately in calculating the error variance for a 10 day operational forecast. A good fit was obtained to the measured forecast errors, and a satisfactory trend was achieved in the difference between forecasts. Fitting six parameters to forecast errors and differences, performed separately for each wavenumber, revealed that the error growth rate increased with wavenumber. The saturation error decreased with the total wavenumber, and the limit of predictability, i.e., the time when error variance reaches 95 percent of saturation, decreased monotonically with the total wavenumber.
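
    Error-growth parameterizations of this family are often written as dE/dt = (a·E + s)(1 − E/E_sat), where a is an intrinsic growth rate, s a source term representing model deficiencies, and E_sat the saturation error variance; whether this matches the ECMWF fit exactly is an assumption here, and the coefficients below are purely illustrative. A minimal Euler integration shows the saturating behaviour:

```python
def integrate_error_growth(e0, a, s, e_sat, days, dt=0.01):
    """Euler-integrate dE/dt = (a*E + s) * (1 - E/E_sat): exponential error
    growth plus a constant model-error source, damped toward saturation."""
    e = e0
    for _ in range(int(days / dt)):
        e += dt * (a * e + s) * (1.0 - e / e_sat)
    return e

# Illustrative values only: growth rate 0.4/day, source 0.02/day, saturation 1.0
for day in (1, 5, 10, 30):
    print(day, round(integrate_error_growth(0.01, 0.4, 0.02, 1.0, day), 3))
```

The limit of predictability in the abstract's sense is the day at which this curve crosses 95 percent of E_sat.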

  12. On the Absolute Age of the Metal-rich Globular M71 (NGC 6838). I. Optical Photometry

    NASA Astrophysics Data System (ADS)

    Di Cecco, A.; Bono, G.; Prada Moroni, P. G.; Tognelli, E.; Allard, F.; Stetson, P. B.; Buonanno, R.; Ferraro, I.; Iannicola, G.; Monelli, M.; Nonino, M.; Pulone, L.

    2015-08-01

    We investigated the absolute age of the Galactic globular cluster M71 (NGC 6838) using optical ground-based images (u′, g′, r′, i′, z′) collected with the MegaCam camera at the Canada-France-Hawaii Telescope (CFHT). We performed a robust selection of field and cluster stars by applying a new method based on the 3D (r′, u′-g′, g′-r′) color-color-magnitude diagram. A comparison between the color-magnitude diagram (CMD) of the candidate cluster stars and a new set of isochrones at the locus of the main sequence turn-off (MSTO) suggests an absolute age of 12 ± 2 Gyr. The absolute age was also estimated using the difference in magnitude between the MSTO and the so-called main sequence knee (MSK), a well-defined bending occurring in the lower main sequence. This feature was originally detected in the near-infrared bands and explained as a consequence of an opacity mechanism (collisionally induced absorption of molecular hydrogen) in the atmosphere of cool low-mass stars. The same feature was also detected in the r′, u′-g′ and in the r′, g′-r′ CMDs, thus supporting previous theoretical predictions by Borysow et al. The key advantage in using the Δ(TO-Knee) as an age diagnostic is that it is independent of uncertainties affecting the distance, the reddening, and the photometric zero point. We found an absolute age of 12 ± 1 Gyr that agrees, within the errors, with similar age estimates, but the uncertainty is on average a factor of two smaller. We also found that the Δ(TO-Knee) is more sensitive to the metallicity than the MSTO, but the dependence vanishes when using the difference in color between the MSK and the MSTO.

  13. Sounding rocket measurement of the absolute solar EUV flux utilizing a silicon photodiode

    NASA Technical Reports Server (NTRS)

    Ogawa, H. S.; Mcmullin, D.; Judge, D. L.; Canfield, L. R.

    1990-01-01

    A newly developed stable and high quantum efficiency silicon photodiode was used to obtain an accurate measurement of the integrated absolute magnitude of the solar extreme UV photon flux in the spectral region between 50 and 800 A. The adjusted daily 10.7-cm solar radio flux and sunspot number were 168.4 and 121, respectively. The unattenuated absolute value of the solar EUV flux at 1 AU in the specified wavelength region was 6.81 × 10^10 photons/sq cm per s. Based on a nominal probable error of 7 percent for National Institute of Standards and Technology detector efficiency measurements in the 50- to 500-A region (5 percent on longer wavelength measurements between 500 and 1216 A), and based on experimental errors associated with the present rocket instrumentation and analysis, a conservative total error estimate of about 14 percent is assigned to the absolute integral solar flux obtained.
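
    Total error estimates like the ~14 percent quoted above are typically obtained by combining independent error sources in quadrature (root-sum-square). A generic sketch follows; the component values are placeholders, not the rocket experiment's actual error budget.

```python
import math

def rss(*components):
    """Combine independent fractional errors in quadrature (root-sum-square)."""
    return math.sqrt(sum(c * c for c in components))

# Placeholder components: detector efficiency, aperture area, pointing, electronics
print(rss(0.07, 0.08, 0.05, 0.06))
```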

  14. Precision goniometer equipped with a 22-bit absolute rotary encoder.

    PubMed

    Xiaowei, Z; Ando, M; Jidong, W

    1998-05-01

    The calibration of a compact precision goniometer equipped with a 22-bit absolute rotary encoder is presented. The goniometer is a modified Huber 410 goniometer: the diffraction angles can be coarsely generated by a stepping-motor-driven worm gear and precisely interpolated by a piezoactuator-driven tangent arm. The angular accuracy of the precision rotary stage was evaluated with an autocollimator. It was shown that the deviation from circularity of the rolling bearing utilized in the precision rotary stage restricts the angular positioning accuracy of the goniometer, and results in an angular accuracy ten times larger than the angular resolution of 0.01 arcsec. The 22-bit encoder was calibrated by an incremental rotary encoder. It became evident that the accuracy of the absolute encoder is approximately 18 bit due to systematic errors.

  15. Flow rate calibration for absolute cell counting rationale and design.

    PubMed

    Walker, Clare; Barnett, David

    2006-05-01

    There is a need for absolute leukocyte enumeration in the clinical setting, and accurate, reliable (and affordable) technology to determine absolute leukocyte counts has been developed. Such technology includes single platform and dual platform approaches. Derivations of these counts commonly incorporate the addition of a known number of latex microsphere beads to a blood sample, although it has been suggested that the addition of beads to a sample may only be required to act as an internal quality control procedure for assessing the pipetting error. This unit provides the technical details for undertaking flow rate calibration that obviates the need to add reference beads to each sample. It is envisaged that this report will provide the basis for subsequent clinical evaluations of this novel approach. PMID:18770842
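
    The two approaches can be contrasted in a small sketch. In the bead-based single-platform method the absolute concentration follows from the ratio of cell events to bead events; with flow-rate calibration it follows from the sample volume actually analyzed (flow rate × acquisition time), so no beads are needed. Function names and numbers below are illustrative, not from the unit itself.

```python
def count_with_beads(cell_events, bead_events, beads_added, sample_volume_ul):
    """Single-platform bead method: cells/uL from the cell:bead event ratio
    and the known number of beads spiked into a known sample volume."""
    return cell_events / bead_events * beads_added / sample_volume_ul

def count_with_flow_rate(cell_events, flow_rate_ul_per_min, acquisition_min):
    """Flow-rate calibration: cells/uL from the analyzed volume alone."""
    analyzed_volume_ul = flow_rate_ul_per_min * acquisition_min
    return cell_events / analyzed_volume_ul

print(count_with_beads(5000, 10000, 10000, 100))  # 50.0 cells/uL
print(count_with_flow_rate(3000, 60.0, 1.0))      # 50.0 cells/uL
```

In the flow-rate scheme, beads added to a sample serve only as an internal quality control on pipetting, as the unit notes.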

  16. Absolute magnitude calibration using trigonometric parallax - Incomplete, spectroscopic samples

    NASA Technical Reports Server (NTRS)

    Ratnatunga, Kavan U.; Casertano, Stefano

    1991-01-01

    A new numerical algorithm is used to calibrate the absolute magnitude of spectroscopically selected stars from their observed trigonometric parallax. This procedure, based on maximum-likelihood estimation, can retrieve unbiased estimates of the intrinsic absolute magnitude and its dispersion even from incomplete samples suffering from selection biases in apparent magnitude and color. It can also make full use of low accuracy and negative parallaxes and incorporate censorship on reported parallax values. Accurate error estimates are derived for each of the fitted parameters. The algorithm allows an a posteriori check of whether the fitted model gives a good representation of the observations. The procedure is described in general and applied to both real and simulated data.

  17. Americans' Average Radiation Exposure

    SciTech Connect

    NA

    2000-08-11

    We live with radiation every day. We receive radiation exposures from cosmic rays from outer space, from radon gas, and from other naturally radioactive elements in the earth. This is called natural background radiation. It includes the radiation we get from plants, animals, and from our own bodies. We also are exposed to man-made sources of radiation, including medical and dental treatments, television sets and emissions from coal-fired power plants. Generally, radiation exposures from man-made sources are only a fraction of those received from natural sources. One exception is the high exposures used by doctors to treat cancer patients. Each year in the United States, the average dose to people from natural and man-made radiation sources is about 360 millirem. A millirem is an extremely tiny amount of energy absorbed by tissues in the body.

  18. Total pressure averaging in pulsating flows

    NASA Technical Reports Server (NTRS)

    Krause, L. N.; Dudzinski, T. J.; Johnson, R. C.

    1972-01-01

    A number of total-pressure tubes were tested in a non-steady flow generator in which the fraction of period that pressure is a maximum is approximately 0.8, thereby simulating turbomachine-type flow conditions. Most of the tubes indicated a pressure which was higher than the true average. Organ-pipe resonance which further increased the indicated pressure was encountered within the tubes at discrete frequencies. There was no obvious combination of tube diameter, length, and/or geometry variation used in the tests which resulted in negligible averaging error. A pneumatic-type probe was found to measure true average pressure, and is suggested as a comparison instrument to determine whether nonlinear averaging effects are serious in unknown pulsation profiles. The experiments were performed at a pressure level of 1 bar, for Mach number up to near 1, and frequencies up to 3 kHz.

  19. The absolute amplitude calibration of the SEASAT synthetic aperture radar - An intercomparison with other L-band radar systems

    NASA Technical Reports Server (NTRS)

    Held, D.; Werner, C.; Wall, S.

    1983-01-01

    The absolute amplitude calibration of the spaceborne Seasat SAR data set is presented based on previous relative calibration studies. A scale factor making it possible to express the perceived radar brightness of a scene in units of sigma-zero is established. The system components are analyzed for error contribution, and the calibration techniques are introduced for each stage. These include: A/D converter saturation tests; prevention of clipping in the processing step; and converting the digital image into the units of received power. Experimental verification was performed by screening and processing the data of the lava flow surrounding the Pisgah Crater in Southern California, for which previous C-130 airborne scatterometer data were available. The average backscatter difference between the two data sets is estimated to be 2 dB in the brighter, and 4 dB in the dimmer regions. For the SAR a calculated uncertainty of 3 dB is expected.

  20. Absolute configuration of isovouacapenol C

    PubMed Central

    Fun, Hoong-Kun; Yodsaoue, Orapun; Karalai, Chatchanok; Chantrapromma, Suchada

    2010-01-01

    The title compound, C27H34O5 {systematic name: (4aR,5R,6R,6aS,7R,11aS,11bR)-4a,6-dihydroxy-4,4,7,11b-tetramethyl-1,2,3,4,4a,5,6,6a,7,11,11a,11b-dodecahydrophenanthro[3,2-b]furan-5-yl benzoate}, is a cassane furanoditerpene, which was isolated from the roots of Caesalpinia pulcherrima. The three cyclohexane rings are trans fused: two of these are in chair conformations with the third in a twisted half-chair conformation, whereas the furan ring is almost planar (r.m.s. deviation = 0.003 Å). An intramolecular C—H⋯O interaction generates an S(6) ring. The absolute configurations of the stereogenic centres at positions 4a, 5, 6, 6a, 7, 11a and 11b are R, R, R, S, R, S and R, respectively. In the crystal, molecules are linked into infinite chains along [010] by O—H⋯O hydrogen bonds. C⋯O [3.306 (2)–3.347 (2) Å] short contacts and C—H⋯π interactions also occur. PMID:21588364

  1. Measuring Postglacial Rebound with GPS and Absolute Gravity

    NASA Technical Reports Server (NTRS)

    Larson, Kristine M.; vanDam, Tonie

    2000-01-01

    We compare vertical rates of deformation derived from continuous Global Positioning System (GPS) observations and episodic measurements of absolute gravity. We concentrate on four sites in a region of North America experiencing postglacial rebound. The rates of uplift from gravity and GPS agree within one standard deviation for all sites. The GPS vertical deformation rates are significantly more precise than the gravity rates, primarily because of the denser temporal spacing provided by continuous GPS tracking. We conclude that continuous GPS observations are more cost efficient and provide more precise estimates of vertical deformation rates than campaign style gravity observations where systematic errors are difficult to quantify.

  2. Sun compass error model

    NASA Technical Reports Server (NTRS)

    Blucker, T. J.; Ferry, W. W.

    1971-01-01

    An error model is described for the Apollo 15 sun compass, a contingency navigational device. Field test data are presented along with significant results of the test. The errors reported include a random error resulting from tilt in leveling the sun compass, a random error because of observer sighting inaccuracies, a bias error because of mean tilt in compass leveling, a bias error in the sun compass itself, and a bias error because the device is leveled to the local terrain slope.

  3. A Comparison of Error Correction Procedures on Word Reading

    ERIC Educational Resources Information Center

    Syrek, Andrea L.; Hixson, Micheal D.; Jacob, Susan; Morgan, Sandra

    2007-01-01

    The effectiveness and efficiency of two error correction procedures on word reading were compared. Three students with below average reading skills and one student with average reading skills were provided with weekly instruction on sets of 20 unknown words. Students' errors during instruction were followed by either word supply error correction…

  4. Propagation of radiosonde pressure sensor errors to ozonesonde measurements

    NASA Astrophysics Data System (ADS)

    Stauffer, R. M.; Morris, G. A.; Thompson, A. M.; Joseph, E.; Coetzee, G. J. R.; Nalli, N. R.

    2014-01-01

    Several previous studies highlight pressure (or equivalently, pressure altitude) discrepancies between the radiosonde pressure sensor and that derived from a GPS flown with the radiosonde. The offsets vary during the ascent both in absolute and percent pressure differences. To investigate this problem further, a total of 731 radiosonde/ozonesonde launches from the Southern Hemisphere subtropics to northern mid-latitudes are considered, with launches between 2005 and 2013 from both longer term and campaign-based intensive stations. Five series of radiosondes from two manufacturers (International Met Systems: iMet, iMet-P, iMet-S, and Vaisala: RS80-15N and RS92-SGP) are analyzed to determine the magnitude of the pressure offset. Additionally, electrochemical concentration cell (ECC) ozonesondes from three manufacturers (Science Pump Corporation; SPC and ENSCI/Droplet Measurement Technologies; DMT) are analyzed to quantify the effects these offsets have on the calculation of ECC ozone (O3) mixing ratio profiles (O3MR) from the ozonesonde-measured partial pressure. Approximately half of all offsets are > ±0.6 hPa in the free troposphere, with nearly a third > ±1.0 hPa at 26 km, where the 1.0 hPa error represents ~ 5% of the total atmospheric pressure. Pressure offsets have negligible effects on O3MR below 20 km (96% of launches lie within ±5% O3MR error at 20 km). Ozone mixing ratio errors above 10 hPa (~ 30 km), can approach greater than ±10% (> 25% of launches that reach 30 km exceed this threshold). These errors cause disagreement between the integrated ozonesonde-only column O3 from the GPS and radiosonde pressure profile by an average of +6.5 DU. Comparisons of total column O3 between the GPS and radiosonde pressure profiles yield average differences of +1.1 DU when the O3 is integrated to burst with addition of the McPeters and Labow (2012) above-burst O3 column climatology. Total column differences are reduced to an average of -0.5 DU when the O3 profile is

  5. Propagation of Radiosonde Pressure Sensor Errors to Ozonesonde Measurements

    NASA Technical Reports Server (NTRS)

    Stauffer, R. M.; Morris, G.A.; Thompson, A. M.; Joseph, E.; Coetzee, G. J. R.; Nalli, N. R.

    2014-01-01

    Several previous studies highlight pressure (or equivalently, pressure altitude) discrepancies between the radiosonde pressure sensor and that derived from a GPS flown with the radiosonde. The offsets vary during the ascent both in absolute and percent pressure differences. To investigate this problem further, a total of 731 radiosonde-ozonesonde launches from the Southern Hemisphere subtropics to Northern mid-latitudes are considered, with launches between 2005 - 2013 from both longer-term and campaign-based intensive stations. Five series of radiosondes from two manufacturers (International Met Systems: iMet, iMet-P, iMet-S, and Vaisala: RS80-15N and RS92-SGP) are analyzed to determine the magnitude of the pressure offset. Additionally, electrochemical concentration cell (ECC) ozonesondes from three manufacturers (Science Pump Corporation; SPC and ENSCI-Droplet Measurement Technologies; DMT) are analyzed to quantify the effects these offsets have on the calculation of ECC ozone (O3) mixing ratio profiles (O3MR) from the ozonesonde-measured partial pressure. Approximately half of all offsets are > ±0.6 hPa in the free troposphere, with nearly a third > ±1.0 hPa at 26 km, where the 1.0 hPa error represents ~5 percent of the total atmospheric pressure. Pressure offsets have negligible effects on O3MR below 20 km (96 percent of launches lie within ±5 percent O3MR error at 20 km). Ozone mixing ratio errors above 10 hPa (~30 km) can approach greater than ±10 percent (> 25 percent of launches that reach 30 km exceed this threshold). These errors cause disagreement between the integrated ozonesonde-only column O3 from the GPS and radiosonde pressure profile by an average of +6.5 DU. Comparisons of total column O3 between the GPS and radiosonde pressure profiles yield average differences of +1.1 DU when the O3 is integrated to burst with addition of the McPeters and Labow (2012) above-burst O3 column climatology. Total column differences are reduced to an average of -0.5 DU when
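
    The altitude dependence described above follows directly from how the mixing ratio is formed: O3MR is proportional to the measured O3 partial pressure divided by the ambient pressure, so a fixed offset ΔP contributes a fractional mixing-ratio error of roughly ΔP/P, which grows as P falls with altitude. A sketch with illustrative pressure levels:

```python
def o3mr_fractional_error(pressure_offset_hpa, ambient_pressure_hpa):
    """Fractional ozone mixing-ratio error from a radiosonde pressure offset.
    O3MR ~ pO3 / P, so an offset dP shifts O3MR by about dP / P."""
    return pressure_offset_hpa / ambient_pressure_hpa

# A 1.0 hPa offset is negligible in the troposphere but large in the stratosphere
for p_hpa in (500.0, 100.0, 20.0, 10.0):
    print(p_hpa, round(100 * o3mr_fractional_error(1.0, p_hpa), 1), "% error")
```

At ~26 km (P ≈ 20 hPa) this reproduces the ~5 percent figure quoted in the abstract, and above 10 hPa the error exceeds 10 percent.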

  6. Error in radiology.

    PubMed

    Goddard, P; Leslie, A; Jones, A; Wakeley, C; Kabala, J

    2001-10-01

    The level of error in radiology has been tabulated from articles on error and on "double reporting" or "double reading". The level of error varies depending on the radiological investigation, but the range is 2-20% for clinically significant or major error. The greatest reduction in error rates will come from changes in systems.

  7. Dissociating Averageness and Attractiveness: Attractive Faces Are Not Always Average

    ERIC Educational Resources Information Center

    DeBruine, Lisa M.; Jones, Benedict C.; Unger, Layla; Little, Anthony C.; Feinberg, David R.

    2007-01-01

    Although the averageness hypothesis of facial attractiveness proposes that the attractiveness of faces is mostly a consequence of their averageness, 1 study has shown that caricaturing highly attractive faces makes them mathematically less average but more attractive. Here the authors systematically test the averageness hypothesis in 5 experiments…

  8. Updated Absolute Flux Calibration of the COS FUV Modes

    NASA Astrophysics Data System (ADS)

    Massa, D.; Ely, J.; Osten, R.; Penton, S.; Aloisi, A.; Bostroem, A.; Roman-Duval, J.; Proffitt, C.

    2014-03-01

    We present newly derived point source absolute flux calibrations for the COS FUV modes at both the original and second lifetime positions. The analysis includes observations through the Primary Science Aperture (PSA) of the standard stars WD0308-565, GD71, WD1057+729 and WD0947+857 obtained as part of two calibration programs. Data were obtained for all of the gratings at all of the original CENWAVE settings at both the original and second lifetime positions and for the G130M CENWAVE = 1222 at the second lifetime position. Data were also obtained with the FUVB segment for the G130M CENWAVE = 1055 and 1096 settings at the second lifetime position. We also present the derivation of L-flats that were used in processing the data and show that the internal consistency of the primary standards is 1%. The accuracy of the absolute flux calibrations over the UV is estimated to be 1-2% for the medium resolution gratings, and 2-3% over most of the wavelength range of the G140L grating, although the uncertainty can be as large as 5% or more at some G140L wavelengths. We note that these errors are all relative to the optical flux near the V band and small additional errors may be present due to inaccuracies in the V band calibration. In addition, these error estimates are for the time at which the flux calibration data were obtained; the accuracy of the flux calibration at other times can be affected by errors in the time-dependent sensitivity (TDS) correction.

  9. Absolute Income, Relative Income, and Happiness

    ERIC Educational Resources Information Center

    Ball, Richard; Chernova, Kateryna

    2008-01-01

    This paper uses data from the World Values Survey to investigate how an individual's self-reported happiness is related to (i) the level of her income in absolute terms, and (ii) the level of her income relative to other people in her country. The main findings are that (i) both absolute and relative income are positively and significantly…

  10. Investigating Absolute Value: A Real World Application

    ERIC Educational Resources Information Center

    Kidd, Margaret; Pagni, David

    2009-01-01

    Making connections between various representations is important in mathematics. In this article, the authors discuss the numeric, algebraic, and graphical representations of sums of absolute values of linear functions. The initial explanations are accessible to all students who have experience graphing and who understand that absolute value simply…

  11. Preschoolers' Success at Coding Absolute Size Values.

    ERIC Educational Resources Information Center

    Russell, James

    1980-01-01

    Forty-five 2-year-old and forty-five 3-year-old children coded relative and absolute sizes using 1.5-inch, 6-inch, and 18-inch cardboard squares. Results indicate that absolute coding is possible for children of this age. (Author/RH)

  12. Introducing the Mean Absolute Deviation "Effect" Size

    ERIC Educational Resources Information Center

    Gorard, Stephen

    2015-01-01

    This paper revisits the use of effect sizes in the analysis of experimental and similar results, and reminds readers of the relative advantages of the mean absolute deviation as a measure of variation, as opposed to the more complex standard deviation. The mean absolute deviation is easier to use and understand, and more tolerant of extreme…
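
    Gorard's point is easy to see in code: the mean absolute deviation stays in the units of the data and needs no squaring, and the corresponding "effect" size is simply a difference in group means divided by a mean absolute deviation. The pooling choice below (MAD of the combined groups) is one plausible variant, used here only for illustration.

```python
def mean_absolute_deviation(xs):
    """Average absolute distance of the observations from their mean."""
    m = sum(xs) / len(xs)
    return sum(abs(x - m) for x in xs) / len(xs)

def mad_effect_size(group_a, group_b):
    """Difference in group means divided by the MAD of the pooled data."""
    diff = sum(group_a) / len(group_a) - sum(group_b) / len(group_b)
    return diff / mean_absolute_deviation(group_a + group_b)

print(mean_absolute_deviation([2, 4, 6, 8]))  # 2.0
print(mad_effect_size([4, 6], [2, 4]))        # 2.0
```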

  13. Monolithically integrated absolute frequency comb laser system

    DOEpatents

    Wanke, Michael C.

    2016-07-12

    Rather than down-convert optical frequencies, a QCL laser system directly generates a THz frequency comb in a compact monolithically integrated chip that can be locked to an absolute frequency without the need of a frequency-comb synthesizer. The monolithic, absolute frequency comb can provide a THz frequency reference and tool for high-resolution broad band spectroscopy.

  14. Estimating the absolute wealth of households

    PubMed Central

    Gerkey, Drew; Hadley, Craig

    2015-01-01

    Abstract Objective To estimate the absolute wealth of households using data from demographic and health surveys. Methods We developed a new metric, the absolute wealth estimate, based on the rank of each surveyed household according to its material assets and the assumed shape of the distribution of wealth among surveyed households. Using data from 156 demographic and health surveys in 66 countries, we calculated absolute wealth estimates for households. We validated the method by comparing the proportion of households defined as poor using our estimates with published World Bank poverty headcounts. We also compared the accuracy of absolute versus relative wealth estimates for the prediction of anthropometric measures. Findings The median absolute wealth estimates of 1 403 186 households were 2056 international dollars per capita (interquartile range: 723–6103). The proportion of poor households based on absolute wealth estimates were strongly correlated with World Bank estimates of populations living on less than 2.00 United States dollars per capita per day (R2 = 0.84). Absolute wealth estimates were better predictors of anthropometric measures than relative wealth indexes. Conclusion Absolute wealth estimates provide new opportunities for comparative research to assess the effects of economic resources on health and human capital, as well as the long-term health consequences of economic change and inequality. PMID:26170506
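
    The core of the method is to map each household's within-survey asset rank onto an assumed parametric wealth distribution. The sketch below assumes a log-normal distribution (a common modelling choice for wealth; the paper's actual distributional assumptions and parameters may differ) and converts the rank's mid-quantile position into a wealth value. All numbers are illustrative.

```python
import math
from statistics import NormalDist

def absolute_wealth_estimate(rank, n_households, median_wealth, sigma_log):
    """Map a household's asset rank (1 = poorest) onto an assumed log-normal
    wealth distribution: the mid-rank quantile gives a standard-normal score,
    which scales the log-wealth around the assumed median."""
    p = (rank - 0.5) / n_households     # mid-rank quantile position in (0, 1)
    z = NormalDist().inv_cdf(p)         # standard-normal quantile
    return median_wealth * math.exp(sigma_log * z)

# The median household of 1001 lands exactly on the assumed median wealth
print(round(absolute_wealth_estimate(501, 1001, 2056.0, 1.2), 1))  # 2056.0
```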

  15. Absolute optical metrology : nanometers to kilometers

    NASA Technical Reports Server (NTRS)

    Dubovitsky, Serge; Lay, O. P.; Peters, R. D.; Liebe, C. C.

    2005-01-01

    We provide an overview of the developments in the field of high-accuracy absolute optical metrology, with emphasis on space-based applications. Specific work on the Modulation Sideband Technology for Absolute Ranging (MSTAR) sensor is described, along with novel applications of the sensor.

  16. Impact of measurement error on testing genetic association with quantitative traits.

    PubMed

    Liao, Jiemin; Li, Xiang; Wong, Tien-Yin; Wang, Jie Jin; Khor, Chiea Chuen; Tai, E Shyong; Aung, Tin; Teo, Yik-Ying; Cheng, Ching-Yu

    2014-01-01

    Measurement error of a phenotypic trait reduces the power to detect genetic associations. We examined the impact of sample size, allele frequency and effect size in presence of measurement error for quantitative traits. The statistical power to detect genetic association with phenotype mean and variability was investigated analytically. The non-centrality parameter for a non-central F distribution was derived and verified using computer simulations. We obtained equivalent formulas for the cost of phenotype measurement error. Effects of differences in measurements were examined in a genome-wide association study (GWAS) of two grading scales for cataract and a replication study of genetic variants influencing blood pressure. The mean absolute difference between the analytic power and simulation power for comparison of phenotypic means and variances was less than 0.005, and the absolute difference did not exceed 0.02. To maintain the same power, a measurement error of one standard deviation (SD) in a standard normally distributed trait required a one-fold increase in sample size for comparison of means, and a three-fold increase in sample size for comparison of variances. GWAS results revealed almost no overlap in the significant SNPs (p < 10^-5) for the two cataract grading scales, while replication results in genetic variants of blood pressure displayed no significant differences between averaged blood pressure measurements and single blood pressure measurements. We have developed a framework for researchers to quantify power in the presence of measurement error, which will be applicable to studies of phenotypes in which the measurement is highly variable. PMID:24475218
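
    The reported sample-size cost for comparing means follows from standard variance inflation under classical additive measurement error: the observed trait variance becomes σ² + σe², so n must scale by (1 + σe²/σ²) to preserve power. With σe equal to one SD of the trait that factor is 2, i.e. the one-fold increase quoted above (the three-fold factor for variance comparisons is stated in the abstract, not derived here). A minimal sketch:

```python
def sample_size_inflation_for_means(sigma_trait, sigma_error):
    """Factor by which sample size must grow to keep the same power for a
    comparison of means under classical additive measurement error."""
    return 1.0 + (sigma_error / sigma_trait) ** 2

print(sample_size_inflation_for_means(1.0, 1.0))  # 2.0  (a one-fold increase)
print(sample_size_inflation_for_means(1.0, 0.5))  # 1.25
```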

  17. An Investigation of Mars NIR Spectral Features using Absolutely Calibrated Images

    NASA Astrophysics Data System (ADS)

    Klassen, D. R.; Bell, J. F., III

    1998-09-01

    We used the NSFCAM 256x256 InSb array camera at the NASA Infrared Telescope Facility to gather near-infrared (NIR) spectral image sets of Mars through the 1995 opposition. In previous studies with these data [1-6] we noted several interesting spectral features, some of which are diagnostic volatile absorption bands that allow the discrimination between CO2 or H2O ices. Band depth maps of these regions show polar and morning and evening limb ices composed of water and some indication of polar CO2 ices. Other features, near 3.33 and 3.4 µm, appear to be confined to particular geographic regions; specifically Syrtis Major. However, the images used in these previous studies were calibrated to either the disk average or only to a rough scaled reflectance by simple division by solar-type star data gathered at the same time as the images. This only allowed determinations of spectral features either relative to some global average of the feature, or to some unit not directly comparable to other published data. For at least three of our observation nights the conditions and data are sufficient to absolutely calibrate the images to radiance factors. For this work we reinvestigate the spectra and band depth mapping results using these absolutely calibrated images. In general we find that bright regions have peak radiance factors of 0.5 to 0.6 at 2.25 µm and 0.3 to 0.4 at 3.5 µm; dark regions have radiance factors of 0.2 to 0.25 at 2.25 µm and 0.1 to 0.15 at 3.5 µm. Overall, precision errors are about 0.025 in radiance factor and absolute errors are at the 10-15% level. These results are consistent with previous studies that found radiance factors of 0.35 in Tharsis, 0.47 in Elysium, and 0.26 in dark regions at 2.25 µm [7,8] and 0.3 in bright regions and 0.1 in dark regions at 3.5 µm [8]. These absolute flux values will allow direct comparison of these results to radiative transfer models of the behavior of the surface and

  18. Time-average-based Methods for Multi-angular Scale Analysis of Cosmic-Ray Data

    NASA Astrophysics Data System (ADS)

    Iuppa, R.; Di Sciascio, G.

    2013-04-01

    Over the past decade, a number of experiments dealt with the problem of measuring the arrival direction distribution of cosmic rays, looking for information on the propagation mechanisms and the identification of their sources. Any deviation from isotropy may be regarded as a signature of unforeseen or unknown phenomena, especially if well localized in the sky and occurring at low rigidity. This induced experimenters to search for excesses down to angular scales as narrow as 10°, disclosing the issue of properly filtering out contributions from wider structures. A solution commonly envisaged was based on time-average methods to determine the reference value of the cosmic-ray flux. Such techniques are nearly insensitive to signals wider than the time window in use, thus allowing the analysis to focus on medium- and small-scale signals. Nonetheless, the signal often cannot be excluded from the calculation of the reference value, which induces systematic errors. The use of time-average methods recently revealed important discoveries about the medium-scale cosmic-ray anisotropy, present both in the northern and southern hemispheres. It is known that the excess (or deficit) is observed as less intense than in reality, and that fake deficit zones are rendered around true excesses, because there is no a priori knowledge of which signal is true and which is not. This work is an attempt to critically review the use of time-average-based methods for observing extended features in the cosmic-ray arrival distribution pattern.

  19. Absolute instability of the Gaussian wake profile

    NASA Technical Reports Server (NTRS)

    Hultgren, Lennart S.; Aggarwal, Arun K.

    1987-01-01

    Linear parallel-flow stability theory has been used to investigate the effect of viscosity on the local absolute instability of a family of wake profiles with a Gaussian velocity distribution. The type of local instability, i.e., convective or absolute, is determined by the location of a branch-point singularity with zero group velocity of the complex dispersion relation for the instability waves. The effects of viscosity were found to be weak for values of the wake Reynolds number, based on the center-line velocity defect and the wake half-width, larger than about 400. Absolute instability occurs only for sufficiently large values of the center-line wake defect. The critical value of this parameter increases with decreasing wake Reynolds number, thereby indicating a shrinking region of absolute instability with decreasing wake Reynolds number. If backflow is not allowed, absolute instability does not occur for wake Reynolds numbers smaller than about 38.

  20. CONTROLLING ABSOLUTE FREQUENCY OF FEEDBACK IN A SELF-CONTROLLED SITUATION ENHANCES MOTOR LEARNING.

    PubMed

    Tsai, Min-Jen; Jwo, Hank

    2015-12-01

    The guidance hypothesis suggests that excessive extrinsic feedback facilitates motor performance but blocks the processing of intrinsic information. The present study tested this tenet of the guidance hypothesis in a self-controlled feedback setting by controlling the feedback frequency, examining the motor learning effect of limiting the absolute feedback frequency. Thirty-six participants (25 men, 11 women; M age = 25.1 yr., SD = 2.2) practiced a hand-grip force control task on a dynamometer with the non-dominant hand under varying amounts of feedback. They were randomly assigned to (a) Self-controlled, (b) Yoked with self-controlled, and (c) Limited self-controlled conditions. In acquisition, two-way analysis of variance indicated significantly lower absolute error in both the Yoked and Limited self-controlled groups than in the Self-controlled group. The effect size of absolute error between trials with and without feedback was larger in the Limited self-controlled condition than in the Self-controlled condition. In the retention and transfer tests, the Limited self-controlled feedback group had significantly lower absolute error than the other two groups. The results indicate an increased motor learning effect of limiting the absolute frequency of feedback in the self-controlled condition.
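
    For reference, the absolute-error score used in force-production studies of this kind is simply the unsigned deviation from the target on each trial, averaged over a block. A minimal sketch with invented trial values (not the study's data):

```python
import numpy as np

# Hypothetical hand-grip trials against an 18.0 kg force target.
target = 18.0
trials = np.array([17.2, 18.9, 18.4, 16.8, 18.1, 19.0])

abs_err = np.abs(trials - target)      # unsigned error per trial
print(round(abs_err.mean(), 3))        # mean absolute error for the block → 0.733
```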

  1. Set standard deviation, repeatability and offset of absolute gravimeter A10-008

    USGS Publications Warehouse

    Schmerge, D.; Francis, O.

    2006-01-01

    The set standard deviation, repeatability and offset of absolute gravimeter A10-008 were assessed at the Walferdange Underground Laboratory for Geodynamics (WULG) in Luxembourg. Analysis of the data indicates that the instrument performed within the specifications of the manufacturer. For A10-008, the average set standard deviation was (1.6 ± 0.6) µGal (1 Gal = 1 cm s⁻²), the average repeatability was (2.9 ± 1.5) µGal, and the average offset compared to absolute gravimeter FG5-216 was (3.2 ± 3.5) µGal. © 2006 BIPM and IOP Publishing Ltd.

  2. Picoliter Well Array Chip-Based Digital Recombinase Polymerase Amplification for Absolute Quantification of Nucleic Acids.

    PubMed

    Li, Zhao; Liu, Yong; Wei, Qingquan; Liu, Yuanjie; Liu, Wenwen; Zhang, Xuelian; Yu, Yude

    2016-01-01

    Absolute, precise quantification methods expand the scope of nucleic acids research and have many practical applications. Digital polymerase chain reaction (dPCR) is a powerful method for nucleic acid detection and absolute quantification. However, it requires thermal cycling and accurate temperature control, which are difficult in resource-limited conditions. Accordingly, isothermal methods, such as recombinase polymerase amplification (RPA), are more attractive. We developed a picoliter well array (PWA) chip with 27,000 consistently sized picoliter reactions (314 pL) for isothermal DNA quantification using digital RPA (dRPA) at 39°C. Sample loading using a scraping liquid blade was simple, fast, and required small reagent volumes (i.e., <20 μL). Passivating the chip surface using a methoxy-PEG-silane agent effectively eliminated cross-contamination during dRPA. Our creative optical design enabled wide-field fluorescence imaging in situ and both end-point and real-time analyses of picoliter wells in a 6-cm² area. It was not necessary to use scan shooting and stitch serial small images together. Using this method, we quantified serial dilutions of a Listeria monocytogenes gDNA stock solution from 9 × 10⁻¹ to 4 × 10⁻³ copies per well with an average error of less than 11% (N = 15). Overall dRPA-on-chip processing required less than 30 min, which was a 4-fold decrease compared to dPCR, requiring approximately 2 h. dRPA on the PWA chip provides a simple and highly sensitive method to quantify nucleic acids without thermal cycling or precise micropump/microvalve control. It has applications in fast field analysis and critical clinical diagnostics under resource-limited settings. PMID:27074005
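
    Absolute quantification in digital assays of this kind rests on Poisson statistics: with copies distributed randomly over partitions, a fraction p of positive wells implies a mean of λ = -ln(1 - p) copies per well. A minimal sketch, using the chip's well count and well volume from the abstract but a hypothetical positive-well count:

```python
import math

def copies_per_well(n_positive: int, n_wells: int) -> float:
    """Mean copies per well inferred from the positive-well fraction (Poisson)."""
    p = n_positive / n_wells
    return -math.log(1.0 - p)

n_wells = 27000                        # picoliter wells on the PWA chip
lam = copies_per_well(2300, n_wells)   # hypothetical positive-well count
conc = lam / 314e-6                    # 314 pL = 3.14e-4 µL per well → copies/µL
print(f"{lam:.4f} copies/well, {conc:.1f} copies/uL")
```

The logarithmic correction accounts for wells that receive more than one copy, which is what makes digital counting absolute rather than merely proportional.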

  5. Absolute V-R colors of trans-Neptunian objects

    NASA Astrophysics Data System (ADS)

    Alvarez-Candal, Alvaro; Ayala-Loera, Carmen; Ortiz, Jose-Luis; Duffard, Rene; Estela, Fernandez-Valenzuela; Santos-Sanz, Pablo

    2016-10-01

    The absolute magnitude of a minor body is the apparent magnitude that the body would have if observed from the Sun at a distance of 1 AU. Absolute magnitudes are measured using phase curves, showing the change of the magnitude, normalized to unit heliocentric and geocentric distance, vs. phase angle. The absolute magnitude is then the Y-intercept of the curve. Absolute magnitudes are related to the total reflecting surface of the body and thus carry information about its size, coupled with its reflecting properties. Since 2011 our team has been collecting data from several telescopes spread over Europe and South America. We complemented our data with those available in the literature in order to construct phase curves of trans-Neptunian objects with at least three points. In a first release (Alvarez-Candal et al. 2016, A&A, 586, A155) we showed results for 110 trans-Neptunian objects using V magnitudes only, assuming an overall linear trend and taking into consideration rotational effects for objects with known light-curves. In this contribution we show results for more than 130 objects, about 100 of them with phase curves in two filters: V and R. We compute absolute magnitudes and phase coefficients in both filters, when available. The average values are HV = 6.39 ± 2.37, βV = (0.09 ± 0.32) mag per degree, HR = 5.38 ± 2.30, and βR = (0.08 ± 0.42) mag per degree.
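
    The phase-curve reduction described above amounts to a straight-line fit of reduced magnitude versus phase angle, m(1,1,α) = H + β·α, with the absolute magnitude H as the Y-intercept and β as the phase coefficient. A minimal sketch with synthetic points chosen near the quoted average values (not real TNO data):

```python
import numpy as np

# Synthetic reduced magnitudes around m = 6.40 + 0.09 * alpha, with small noise.
alpha = np.array([0.3, 0.6, 0.9, 1.2, 1.5])                   # phase angle, deg
m_red = 6.40 + 0.09 * alpha + np.array([0.01, -0.02, 0.0, 0.02, -0.01])

beta, H = np.polyfit(alpha, m_red, 1)       # degree-1 fit: slope, then intercept
print(f"H = {H:.2f}, beta = {beta:.3f} mag/deg")   # → H = 6.40, beta = 0.090 mag/deg
```

TNOs are only observable at small phase angles from Earth, which is why the intercept must be extrapolated from a fit rather than observed directly.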

  6. Conditional Density Estimation in Measurement Error Problems.

    PubMed

    Wang, Xiao-Feng; Ye, Deping

    2015-01-01

    This paper is motivated by a wide range of background correction problems in gene array data analysis, where the raw gene expression intensities are measured with error. Estimating a conditional density function from the contaminated expression data is a key aspect of statistical inference and visualization in these studies. We propose re-weighted deconvolution kernel methods to estimate the conditional density function in an additive error model, when the error distribution is known as well as when it is unknown. Theoretical properties of the proposed estimators are investigated with respect to the mean absolute error from a "double asymptotic" view. Practical rules are developed for the selection of smoothing parameters. Simulated examples and an application to an Illumina bead microarray study are presented to illustrate the viability of the methods. PMID:25284902

  7. Measurement error analysis of taxi meter

    NASA Astrophysics Data System (ADS)

    He, Hong; Li, Dan; Li, Hang; Zhang, Da-Jian; Hou, Ming-Feng; Zhang, Shi-pu

    2011-12-01

    The error test of the taximeter covers two aspects: (1) a test of the time error of the taximeter, and (2) a test of the distance (usage) error of the machine. The paper first gives the working principle of the meter and the principle of the error verification device. Based on JJG 517-2009, "Taximeter Verification Regulation", the paper focuses on analyzing the machine error and test error of the taxi meter, and the detection methods for time error and distance error are discussed as well. Under identical conditions, standard uncertainty components (Type A) are evaluated, while under differing conditions, standard uncertainty components (Type B) are evaluated from repeated measurements. Comparison and analysis of the results show that the meter conforms to JJG 517-2009, thereby largely improving accuracy and efficiency. In practice, the meter not only compensates for a lack of accuracy but also ensures fair transactions between drivers and passengers, enhancing the value of the taxi as a mode of transportation.

  8. Measurement of absolute optical thickness of mask glass by wavelength-tuning Fourier analysis.

    PubMed

    Kim, Yangjin; Hibino, Kenichi; Sugita, Naohiko; Mitsuishi, Mamoru

    2015-07-01

    Optical thickness is a fundamental characteristic of an optical component. A measurement method combining discrete Fourier-transform (DFT) analysis and a phase-shifting technique gives an appropriate value for the absolute optical thickness of a transparent plate. However, there is a systematic error caused by the nonlinearity of the phase-shifting technique. In this research the absolute optical-thickness distribution of mask blank glass was measured using DFT and wavelength-tuning Fizeau interferometry without using sensitive phase-shifting techniques. The error occurring during the DFT analysis was compensated for by using the unwrapping correction. The experimental results indicated that the absolute optical thickness of mask glass was measured with an accuracy of 5 nm.

  9. Absolute Radiometric Calibration of KOMPSAT-3A

    NASA Astrophysics Data System (ADS)

    Ahn, H. Y.; Shin, D. Y.; Kim, J. S.; Seo, D. C.; Choi, C. U.

    2016-06-01

    This paper presents a vicarious radiometric calibration of the Korea Multi-Purpose Satellite-3A (KOMPSAT-3A) performed by the Korea Aerospace Research Institute (KARI) and the Pukyong National University Remote Sensing Group (PKNU RSG) in 2015. The primary stages of this study are summarized as follows: (1) A field campaign to determine radiometrically calibrated target fields was undertaken in Mongolia and South Korea. Surface reflectance data obtained in the campaign were input to a radiative transfer code that predicted at-sensor radiance. Through this process, equations and parameters were derived for the KOMPSAT-3A sensor to enable the conversion of calibrated DN to physical units, such as at-sensor radiance or TOA reflectance. (2) To validate the absolute calibration coefficients for the KOMPSAT-3A sensor, we performed a radiometric validation comparing KOMPSAT-3A and Landsat-8 TOA reflectance over one of the six PICS (Libya 4). Correlations between top-of-atmosphere (TOA) radiances and the spectral band responses of the KOMPSAT-3A sensors at the Zuunmod, Mongolia and Goheung, South Korea sites were significant for multispectral bands. The average difference in TOA reflectance between KOMPSAT-3A and Landsat-8 images over the Libya 4 site in the red-green-blue (RGB) region was under 3%, whereas in the NIR band the TOA reflectance of KOMPSAT-3A was lower than that of Landsat-8 due to the difference in the band passes of the two sensors. The KOMPSAT-3A sensor includes a band pass near 940 nm that can be strongly absorbed by water vapor and therefore displayed low reflectance. To overcome this, we need to undertake a detailed analysis using rescale methods, such as the spectral bandwidth adjustment factor.

  10. On the absolute calibration of SO2 cameras

    NASA Astrophysics Data System (ADS)

    Lübcke, P.; Bobrowski, N.; Illing, S.; Kern, C.; Alvarez Nieves, J. M.; Vogel, L.; Zielcke, J.; Delgado Granados, H.; Platt, U.

    2012-09-01

    results are compared with measurements from an imaging DOAS (IDOAS) to verify the calibration curve over the spatial extent of the image. Our results show that calibration cells can lead to an overestimation of the SO2 CD by up to 60% compared with CDs from the DOAS measurements. Besides these calibration errors, radiative transfer effects (e.g. light dilution, multiple scattering) can significantly influence the results of both instrument types. These effects can lead to an even more significant overestimation or, depending on the measurement conditions, an underestimation of the true CD. Previous investigations found that possible errors can be more than an order of magnitude. However, the spectral information from the DOAS measurements allows correction for these radiative transfer effects. The measurements presented in this work were taken at Popocatépetl, Mexico, between 1 March 2011 and 4 March 2011. Average SO2 emission rates between 4.00 kg s-1 and 14.34 kg s-1 were observed.

  11. Absolute flatness testing of skip-flat interferometry by matrix analysis in polar coordinates.

    PubMed

    Han, Zhi-Gang; Yin, Lu; Chen, Lei; Zhu, Ri-Hong

    2016-03-20

    A new method utilizing matrix analysis in polar coordinates has been presented for absolute testing of skip-flat interferometry. The retrieval of the absolute profile mainly includes three steps: (1) transform the wavefront maps of the two cavity measurements into data in polar coordinates; (2) retrieve the profile of the reflective flat in polar coordinates by matrix analysis; and (3) transform the profile of the reflective flat back into data in Cartesian coordinates and retrieve the profile of the sample. Simulation of synthetic surface data has been provided, showing the capability of the approach to achieve an accuracy of the order of 0.01 nm RMS. The absolute profile can be retrieved by a set of closed mathematical formulas without polynomial fitting of wavefront maps or the iterative evaluation of an error function, making the new method more efficient for absolute testing.

  12. Absolute Quantification of Rifampicin by MALDI Imaging Mass Spectrometry Using Multiple TOF/TOF Events in a Single Laser Shot

    NASA Astrophysics Data System (ADS)

    Prentice, Boone M.; Chumbley, Chad W.; Caprioli, Richard M.

    2016-09-01

    Matrix-assisted laser desorption/ionization imaging mass spectrometry (MALDI IMS) allows for the visualization of molecular distributions within tissue sections. While providing excellent molecular specificity and spatial information, absolute quantification by MALDI IMS remains challenging. Especially in the low molecular weight region of the spectrum, analysis is complicated by matrix interferences and ionization suppression. Though tandem mass spectrometry (MS/MS) can be used to ensure chemical specificity and improve sensitivity by eliminating chemical noise, typical MALDI MS/MS modalities only scan for a single MS/MS event per laser shot. Herein, we describe TOF/TOF instrumentation that enables multiple fragmentation events to be performed in a single laser shot, allowing the intensity of the analyte to be referenced to the intensity of the internal standard in each laser shot while maintaining the benefits of MS/MS. This approach is illustrated by the quantitative analyses of rifampicin (RIF), an antibiotic used to treat tuberculosis, in pooled human plasma using rifapentine (RPT) as an internal standard. The results show greater than 4-fold improvements in relative standard deviation as well as improved coefficients of determination (R2) and accuracy (>93% quality controls, <9% relative errors). This technology is used as an imaging modality to measure absolute RIF concentrations in liver tissue from an animal dosed in vivo. Each microspot in the quantitative image measures the local RIF concentration in the tissue section, providing absolute pixel-to-pixel quantification from different tissue microenvironments. The average concentration determined by IMS is in agreement with the concentration determined by HPLC-MS/MS, showing a percent difference of 10.6%.
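
    The per-shot internal-standard scheme described above reduces to calibrating the analyte-to-standard intensity ratio against standards of known concentration, then inverting the calibration line for unknowns. A sketch with invented intensity ratios (the RIF/RPT pairing is from the abstract; none of the numbers are the paper's data):

```python
import numpy as np

# Calibration standards: known analyte concentration vs. measured
# analyte/internal-standard intensity ratio (all values hypothetical).
std_conc = np.array([1.0, 2.5, 5.0, 10.0])       # e.g. µg/mL of RIF
std_ratio = np.array([0.21, 0.52, 1.01, 2.05])   # RIF/RPT intensity ratio

slope, intercept = np.polyfit(std_conc, std_ratio, 1)   # linear calibration

# Unknown sample: ratio averaged over replicate laser shots, so shot-to-shot
# ionization variation cancels in the ratio before the curve is inverted.
sample_ratio = np.mean([0.78, 0.81, 0.80])
conc = (sample_ratio - intercept) / slope
print(f"{conc:.2f} ug/mL")
```

Referencing the analyte to a co-measured standard in the same shot is what suppresses the matrix and ionization-suppression effects the abstract mentions; the calibration line itself is ordinary least squares.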

  13. The Reproducibility and Absolute Values of Echocardiographic Measurements of Left Ventricular Size and Function in Children are Algorithm Dependent

    PubMed Central

    Margossian, Renee; Chen, Shan; Sleeper, Lynn A.; Tani, Lloyd Y.; Shirali, Girish; Golding, Fraser; Tierney, Elif Seda Selamet; Altmann, Karen; Campbell, Michael J.; Szwast, Anita; Sharkey, Angela; Radojewski, Elizabeth; Colan, Steven D.

    2015-01-01

    Background Several quantification algorithms for measuring left ventricular (LV) size and function are used in clinical and research settings. We investigated the effect of the measurement algorithm and beat averaging on the reproducibility of measurements of the LV and assessed the magnitude of agreement among the algorithms in children with dilated cardiomyopathy (DCM). Methods Echocardiograms were obtained on 169 children from 8 clinical centers. Inter- and intra-reader reproducibility were assessed on measurements of LV volumes using biplane Simpson, modified Simpson (MS), and 5/6 × area × length (5/6AL) algorithms. Percent error (%error) was calculated as the inter- or intra-reader difference / mean × 100. Single-beat measurements and the 3-beat average (3BA) were compared. Intra-class correlation coefficients (ICC) were calculated to assess agreement. Results Single-beat inter-reader reproducibility was lowest (%error was highest) using biplane Simpson; 5/6AL and MS were similar but significantly better than biplane Simpson (p<.05). Single-beat intra-reader reproducibility was highest using 5/6AL (p<.05). The 3BA improved reproducibility for almost all measures (p<.05). Reproducibility of both single-beat and 3BA values fell with greater LV dilation and systolic dysfunction (p<.05). ICCs were > 0.95 across measures, although absolute volume and mass values were systematically lower for biplane Simpson compared to MS and to 5/6AL. Conclusions The reproducibility of LV size and function measurements in children with DCM is highest using the 5/6AL algorithm, and can be further improved by using the 3BA. However, values derived from different algorithms are not interchangeable. PMID:25728351
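
    For concreteness, the %error metric defined in the Methods can be written out directly; the paired volume readings below are invented, not study data:

```python
import numpy as np

# Two readers' volume measurements (hypothetical, mL) on the same three studies.
reader1 = np.array([102.0, 88.5, 131.0])
reader2 = np.array([ 97.0, 92.5, 124.0])

# %error = |inter-reader difference| / mean of the pair × 100
pct_err = np.abs(reader1 - reader2) / ((reader1 + reader2) / 2) * 100
print(np.round(pct_err, 2))        # → [5.03 4.42 5.49]
```

Normalizing by the pair mean rather than one reader's value keeps the metric symmetric in the two readers.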

  14. Absolute magnitudes of trans-neptunian objects

    NASA Astrophysics Data System (ADS)

    Duffard, R.; Alvarez-candal, A.; Pinilla-Alonso, N.; Ortiz, J. L.; Morales, N.; Santos-Sanz, P.; Thirouin, A.

    2015-10-01

    Accurate measurements of diameters of trans-Neptunian objects are extremely complicated to obtain. Radiometric techniques applied to thermal measurements can provide good results, but precise absolute magnitudes are needed to constrain diameters and albedos. Our objective is to measure accurate absolute magnitudes for a sample of trans-Neptunian objects, many of which have been observed, and modelled, by the "TNOs are cool" team, one of the Herschel Space Observatory key projects, granted ~400 hours of observing time. We observed 56 objects in the V and R filters, where possible. These data, along with data available in the literature, were used to obtain phase curves and to measure absolute magnitudes by assuming a linear trend of the phase curves and considering magnitude variability due to the rotational light-curve. In total we obtained 234 new magnitudes for the 56 objects, 6 of them with no previously reported measurements. Including the data from the literature, we report a total of 109 absolute magnitudes.

  15. A New Gimmick for Assigning Absolute Configuration.

    ERIC Educational Resources Information Center

    Ayorinde, F. O.

    1983-01-01

    A five-step procedure is provided to help students make the assignment of absolute configuration less bothersome. Examples of both single (2-butanol) and multiple chiral-carbon (3-chloro-2-butanol) molecules are included. (JN)

  16. The Simplicity Argument and Absolute Morality

    ERIC Educational Resources Information Center

    Mijuskovic, Ben

    1975-01-01

    In this paper the author has maintained that there is a similarity of thought to be found in the writings of Cudworth, Emerson, and Husserl in his investigation of an absolute system of morality. (Author/RK)

  17. Determination and error analysis of emittance and spectral emittance measurements by remote sensing

    NASA Technical Reports Server (NTRS)

    Dejesusparada, N. (Principal Investigator); Kumar, R.

    1977-01-01

    The author has identified the following significant results. From the theory of remote sensing of surface temperatures, an equation for the upper bound of the absolute error of emittance was determined. It showed that the absolute error decreased with an increase in contact temperature, whereas it increased with an increase in environmental integrated radiant flux density. A change in emittance had little effect on the absolute error. A plot of the difference between temperature and band radiance temperature vs. emittance was provided for the wavelength intervals 4.5 to 5.5 microns, 8 to 13.5 microns, and 10.2 to 12.5 microns.

  18. Precision Absolute Beam Current Measurement of Low Power Electron Beam

    SciTech Connect

    Ali, M. M.; Bevins, M. E.; Degtiarenko, P.; Freyberger, A.; Krafft, G. A.

    2012-11-01

    Precise measurements of low power CW electron beam current for the Jefferson Lab Nuclear Physics program have been performed using a Tungsten calorimeter. This paper describes the rationale for the choice of the calorimeter technique, as well as the design and calibration of the device. The calorimeter is in use presently to provide a 1% absolute current measurement of CW electron beam with 50 to 500 nA of average beam current and 1-3 GeV beam energy. Results from these recent measurements will also be presented.

  19. Propagation of radiosonde pressure sensor errors to ozonesonde measurements

    NASA Astrophysics Data System (ADS)

    Stauffer, R. M.; Morris, G. A.; Thompson, A. M.; Joseph, E.; Coetzee, G. J. R.

    2013-08-01

    Several previous studies highlight pressure (or, equivalently, pressure altitude) discrepancies between the radiosonde pressure sensor and that derived from a GPS flown with the radiosonde. The offsets vary during the ascent both in absolute and percent pressure differences. To investigate this, a total of 501 radiosonde/ozonesonde launches from the Southern Hemisphere subtropics to northern mid-latitudes are considered, with launches between 2006 and 2013 from both historical and campaign-based intensive stations. Three types of electrochemical concentration cell (ECC) ozonesondes (from Science Pump Corporation [SPC] and ENSCI/Droplet Measurement Technologies [DMT]) and five series of radiosondes from two manufacturers (International Met Systems: iMet, iMet-P, iMet-S; and Vaisala: RS80 and RS92) are analyzed to determine the magnitude of the pressure offsets and the effects these offsets have on the calculation of ECC ozone (O3) mixing ratio profiles (O3MR) from the ozonesonde-measured partial pressure. Approximately half of all offsets are > ±0.7 hPa in the free troposphere, with nearly a quarter > ±1.0 hPa at 26 km, where a 1.0 hPa error represents ~5% of the total atmospheric pressure. Pressure offsets have negligible effects on O3MR below 20 km (98% of launches lie within ±5% O3MR error at 20 km). Ozone mixing ratio errors in the 7-15 hPa layer (29-32 km), a region critical for detection of long-term O3 trends, can exceed ±10% (>25% of launches that reach 30 km exceed this threshold). Comparisons of total column O3 yield average differences of +1.6 DU (-1.1 to +4.9 DU, 10th to 90th percentiles) when the O3 is integrated to burst with addition of the McPeters and Labow (2012) above-burst O3 column climatology. Total column differences are reduced to an average of +0.1 DU (-1.1 to +2.2 DU) when the O3 profile is integrated to 10 hPa with subsequent addition of the O3 climatology above 10 hPa. The RS92 radiosondes are clearly distinguishable
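
    The pressure sensitivity described above follows directly from how the mixing ratio is formed, O3MR = pO3 / p: a radiosonde pressure offset dp scales the derived mixing ratio by p / (p + dp), so the fractional error grows as ambient pressure falls. A worked example at the 10 hPa level with a 1 hPa offset (the partial pressure value is illustrative, in the range typical near 30 km):

```python
# True ambient pressure, sensor offset, and ozone partial pressure, all in hPa.
p_true = 10.0
dp = 1.0
p_o3 = 1.5e-3          # ~15 mPa ozone partial pressure, illustrative

o3mr_true = p_o3 / p_true          # mixing ratio with the true pressure
o3mr_meas = p_o3 / (p_true + dp)   # mixing ratio with the offset pressure

print(f"{(o3mr_meas / o3mr_true - 1) * 100:.1f}% error")   # about -9%
```

This is consistent with the abstract's finding that offsets which are negligible in the troposphere can push O3MR errors past ±10% in the 7-15 hPa layer.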

  20. Clover: Compiler directed lightweight soft error resilience

    DOE PAGES

    Liu, Qingrui; Lee, Dongyoon; Jung, Changhee; Tiwari, Devesh

    2015-05-01

    This paper presents Clover, a compiler-directed soft error detection and recovery scheme for lightweight soft error resilience. The compiler carefully generates soft error tolerant code based on idempotent processing without explicit checkpoints. During program execution, Clover relies on a small number of acoustic wave detectors deployed in the processor to identify soft errors by sensing the wave made by a particle strike. To cope with DUE (detected unrecoverable errors) caused by the sensing latency of error detection, Clover leverages a novel selective instruction duplication technique called tail-DMR (dual modular redundancy). Once a soft error is detected by either the sensor or the tail-DMR, Clover takes care of the error as in the case of exception handling. To recover from the error, Clover simply redirects program control to the beginning of the code region where the error is detected. Lastly, the experiment results demonstrate that the average runtime overhead is only 26%, which is a 75% reduction compared to that of the state-of-the-art soft error resilience technique.

  2. Landsat-7 ETM+ radiometric stability and absolute calibration

    USGS Publications Warehouse

    Markham, B.L.; Barker, J.L.; Barsi, J.A.; Kaita, E.; Thome, K.J.; Helder, D.L.; Palluconi, Frank Don; Schott, J.R.; Scaramuzza, P.; ,

    2002-01-01

    Launched in April 1999, the Landsat-7 ETM+ instrument is in its fourth year of operation. The quality of the acquired calibrated imagery continues to be high, especially with respect to its three most important radiometric performance parameters: reflective band instrument stability to better than ±1%, reflective band absolute calibration to better than ±5%, and thermal band absolute calibration to better than ±0.6 K. The ETM+ instrument has been the most stable of any of the Landsat instruments, in both the reflective and thermal channels. To date, the best on-board calibration source for the reflective bands has been the Full Aperture Solar Calibrator, which has indicated changes of at most -1.8% to -2.0% (95% C.I.) per year in the ETM+ gain (band 4). However, this change is believed to be caused by changes in the solar diffuser panel, as opposed to a change in the instrument's gain. This belief is based partially on ground observations, which bound the changes in gain in band 4 at -0.7% to +1.5%. Also, ETM+ stability is indicated by the monitoring of desert targets. These image-based results for four Saharan and Arabian sites, for a collection of 35 scenes over the three years since launch, bound the gain change at -0.7% to +0.5% in band 4. Thermal calibration from ground observations revealed an offset error of +0.31 W/m² sr µm soon after launch. This offset was corrected within the U.S. ground processing system at EROS Data Center on 21-Dec-00, and since then, the band 6 on-board calibration has indicated changes of at most +0.02% to +0.04% (95% C.I.) per year. The latest ground observations have detected no remaining offset error, with an RMS error of ±0.6 K. The stability and absolute calibration of the Landsat-7 ETM+ sensor make it an ideal candidate to be used as a reference source for radiometric cross-calibration to other land remote sensing satellite systems.

  3. Landsat-7 ETM+ radiometric stability and absolute calibration

    NASA Astrophysics Data System (ADS)

    Markham, Brian L.; Barker, John L.; Barsi, Julia A.; Kaita, Ed; Thome, Kurtis J.; Helder, Dennis L.; Palluconi, Frank D.; Schott, John R.; Scaramuzza, Pat

    2003-04-01

    Launched in April 1999, the Landsat-7 ETM+ instrument is in its fourth year of operation. The quality of the acquired calibrated imagery continues to be high, especially with respect to its three most important radiometric performance parameters: reflective band instrument stability to better than ±1%, reflective band absolute calibration to better than ±5%, and thermal band absolute calibration to better than ±0.6 K. The ETM+ instrument has been the most stable of any of the Landsat instruments, in both the reflective and thermal channels. To date, the best on-board calibration source for the reflective bands has been the Full Aperture Solar Calibrator, which has indicated a change of at most -1.8% to -2.0% (95% C.I.) per year in the ETM+ gain (band 4). However, this change is believed to be caused by changes in the solar diffuser panel, as opposed to a change in the instrument's gain. This belief is based partially on ground observations, which bound the changes in gain in band 4 at -0.7% to +1.5%. Also, ETM+ stability is indicated by the monitoring of desert targets. These image-based results for four Saharan and Arabian sites, for a collection of 35 scenes over the three years since launch, bound the gain change at -0.7% to +0.5% in band 4. Thermal calibration from ground observations revealed an offset error of +0.31 W/(m² sr µm) soon after launch. This offset was corrected within the U.S. ground processing system at EROS Data Center on 21-Dec-00, and since then, the band 6 on-board calibration has indicated changes of at most +0.02% to +0.04% (95% C.I.) per year. The latest ground observations have detected no remaining offset error, with an RMS error of ±0.6 K. The stability and absolute calibration of the Landsat-7 ETM+ sensor make it an ideal candidate to be used as a reference source for radiometric cross-calibration of other land remote sensing satellite systems.

  4. Correction due to the finite speed of light in absolute gravimeters

    NASA Astrophysics Data System (ADS)

    Nagornyi, V. D.; Zanimonskiy, Y. M.; Zanimonskiy, Y. Y.

    2011-06-01

    Equations (45) and (47) in our paper [1] in this issue have an incorrect sign and should read $\tilde T_i = T_i + \frac{b \mp S_i}{c}$ and $\tilde T_i = T_i \mp \frac{S_i}{c}$. The error traces back to our formula (3), inherited from the paper [2]. According to the technical documentation [3, 4], formula (3) is implemented by several commercially available instruments. An incorrect sign would cause a bias of about 20 µGal not known for these instruments, which probably indicates that the documentation incorrectly reflects the implemented measurement equation. Our attention was drawn to the error by the paper [5], also in this issue, where the sign is given correctly. References: [1] Nagornyi V D, Zanimonskiy Y M and Zanimonskiy Y Y 2011 Correction due to the finite speed of light in absolute gravimeters Metrologia 48 101-13 [2] Niebauer T M, Sasagawa G S, Faller J E, Hilt R and Klopping F 1995 A new generation of absolute gravimeters Metrologia 32 159-80 [3] Micro-g LaCoste, Inc. 2006 FG5 Absolute Gravimeter Users Manual [4] Micro-g LaCoste, Inc. 2007 g7 Users Manual [5] Niebauer T M, Billson R, Ellis B, Mason B, van Westrum D and Klopping F 2011 Simultaneous gravity and gradient measurements from a recoil-compensated absolute gravimeter Metrologia 48 154-63

  5. Field error lottery

    NASA Astrophysics Data System (ADS)

    James Elliott, C.; McVey, Brian D.; Quimby, David C.

    1991-07-01

    The level of field errors in a free electron laser (FEL) is an important determinant of its performance. We have computed 3D performance of a large laser subsystem subjected to field errors of various types. These calculations have been guided by simple models such as SWOOP. The technique of choice is use of the FELEX free electron laser code that now possesses extensive engineering capabilities. Modeling includes the ability to establish tolerances of various types: fast and slow scale field bowing, field error level, beam position monitor error level, gap errors, defocusing errors, energy slew, displacement and pointing errors. Many effects of these errors on relative gain and relative power extraction are displayed and are the essential elements of determining an error budget. The random errors also depend on the particular random number seed used in the calculation. The simultaneous display of the performance versus error level of cases with multiple seeds illustrates the variations attributable to stochasticity of this model. All these errors are evaluated numerically for comprehensive engineering of the system. In particular, gap errors are found to place requirements beyond convenient mechanical tolerances of ± 25 μm, and amelioration of these may occur by a procedure using direct measurement of the magnetic fields at assembly time.

  6. Field error lottery

    NASA Astrophysics Data System (ADS)

    Elliott, C. James; McVey, Brian D.; Quimby, David C.

    1990-11-01

    The level of field errors in an FEL is an important determinant of its performance. We have computed 3D performance of a large laser subsystem subjected to field errors of various types. These calculations have been guided by simple models such as SWOOP. The technique of choice is utilization of the FELEX free electron laser code that now possesses extensive engineering capabilities. Modeling includes the ability to establish tolerances of various types: fast and slow scale field bowing, field error level, beam position monitor error level, gap errors, defocusing errors, energy slew, displacement, and pointing errors. Many effects of these errors on relative gain and relative power extraction are displayed and are the essential elements of determining an error budget. The random errors also depend on the particular random number seed used in the calculation. The simultaneous display of the performance versus error level of cases with multiple seeds illustrates the variations attributable to stochasticity of this model. All these errors are evaluated numerically for comprehensive engineering of the system. In particular, gap errors are found to place requirements beyond mechanical tolerances of ±25 µm, and amelioration of these may occur by a procedure utilizing direct measurement of the magnetic fields at assembly time.

  7. Field error lottery

    SciTech Connect

    Elliott, C.J.; McVey, B.; Quimby, D.C.

    1990-01-01

    The level of field errors in an FEL is an important determinant of its performance. We have computed 3D performance of a large laser subsystem subjected to field errors of various types. These calculations have been guided by simple models such as SWOOP. The technique of choice is utilization of the FELEX free electron laser code that now possesses extensive engineering capabilities. Modeling includes the ability to establish tolerances of various types: fast and slow scale field bowing, field error level, beam position monitor error level, gap errors, defocusing errors, energy slew, displacement and pointing errors. Many effects of these errors on relative gain and relative power extraction are displayed and are the essential elements of determining an error budget. The random errors also depend on the particular random number seed used in the calculation. The simultaneous display of the performance versus error level of cases with multiple seeds illustrates the variations attributable to stochasticity of this model. All these errors are evaluated numerically for comprehensive engineering of the system. In particular, gap errors are found to place requirements beyond mechanical tolerances of ±25 µm, and amelioration of these may occur by a procedure utilizing direct measurement of the magnetic fields at assembly time. 4 refs., 12 figs.

  8. Jasminum flexile flower absolute from India--a detailed comparison with three other jasmine absolutes.

    PubMed

    Braun, Norbert A; Kohlenberg, Birgit; Sim, Sherina; Meier, Manfred; Hammerschmidt, Franz-Josef

    2009-09-01

    Jasminum flexile flower absolute from the south of India and the corresponding vacuum headspace (VHS) sample of the absolute were analyzed using GC and GC-MS. Three other commercially available Indian jasmine absolutes from the species: J. sambac, J. officinale subsp. grandiflorum, and J. auriculatum and the respective VHS samples were used for comparison purposes. One hundred and twenty-one compounds were characterized in J. flexile flower absolute, with methyl linolate, benzyl salicylate, benzyl benzoate, (2E,6E)-farnesol, and benzyl acetate as the main constituents. A detailed olfactory evaluation was also performed.

  9. Do Diurnal Aerosol Changes Affect Daily Average Radiative Forcing?

    SciTech Connect

    Kassianov, Evgueni I.; Barnard, James C.; Pekour, Mikhail S.; Berg, Larry K.; Michalsky, Joseph J.; Lantz, K.; Hodges, G. B.

    2013-06-17

    Strong diurnal variability of aerosol has been observed frequently for many urban/industrial regions. How this variability may alter the direct aerosol radiative forcing (DARF), however, is largely unknown. To quantify changes in the time-averaged DARF, we perform an assessment of 29 days of high temporal resolution ground-based data collected during the Two-Column Aerosol Project (TCAP) on Cape Cod, which is downwind of metropolitan areas. We demonstrate that strong diurnal changes of aerosol loading (about 20% on average) have a negligible impact on the 24-h average DARF, when daily averaged optical properties are used to find this quantity. However, when there is a sparse temporal sampling of aerosol properties, which may preclude the calculation of daily averaged optical properties, large errors (up to 100%) in the computed DARF may occur. We describe a simple way of reducing these errors, which suggests the minimal temporal sampling needed to accurately find the forcing.
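
    The sampling point above can be made concrete with a toy calculation (all numbers illustrative, not TCAP data): a properly weighted daily-average optical depth reproduces the 24-h mean forcing exactly, while a single off-peak snapshot does not.

```python
import math

# Toy diurnal cycles (hourly): aerosol optical depth (AOD) with a ~20%
# midday enhancement, and a normalized daytime insolation weight.
hours = range(24)
aod = [0.10 * (1 + 0.2 * math.sin(math.pi * h / 24) ** 2) for h in hours]
sun = [max(0.0, math.sin(math.pi * (h - 6) / 12)) for h in hours]

# Toy forcing model: instantaneous forcing proportional to AOD * insolation.
k = -25.0  # W m^-2 per unit AOD, illustrative
daily_mean = sum(k * a * s for a, s in zip(aod, sun)) / 24

# An insolation-weighted daily-average AOD reproduces the 24-h mean exactly.
avg_aod = sum(a * s for a, s in zip(aod, sun)) / sum(sun)
mean_from_avg = k * avg_aod * sum(sun) / 24

# Sparse sampling: a single midnight snapshot misses the diurnal enhancement.
snapshot_mean = k * aod[0] * sum(sun) / 24
rel_err = abs(snapshot_mean - daily_mean) / abs(daily_mean)
```

    With these assumed cycles the snapshot underestimates the magnitude of the daily forcing by roughly 15%, illustrating how sparse temporal sampling inflates the error.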

  10. Averaging Models: Parameters Estimation with the R-Average Procedure

    ERIC Educational Resources Information Center

    Vidotto, G.; Massidda, D.; Noventa, S.

    2010-01-01

    The Functional Measurement approach, proposed within the theoretical framework of Information Integration Theory (Anderson, 1981, 1982), can be a useful multi-attribute analysis tool. Compared to the majority of statistical models, the averaging model can account for interaction effects without adding complexity. The R-Average method (Vidotto &…

  11. Accepting error to make less error.

    PubMed

    Einhorn, H J

    1986-01-01

    In this article I argue that the clinical and statistical approaches rest on different assumptions about the nature of random error and the appropriate level of accuracy to be expected in prediction. To examine this, a case is made for each approach. The clinical approach is characterized as being deterministic, causal, and less concerned with prediction than with diagnosis and treatment. The statistical approach accepts error as inevitable and in so doing makes less error in prediction. This is illustrated using examples from probability learning and equal weighting in linear models. Thereafter, a decision analysis of the two approaches is proposed. Of particular importance are the errors that characterize each approach: myths, magic, and illusions of control in the clinical; lost opportunities and illusions of the lack of control in the statistical. Each approach represents a gamble with corresponding risks and benefits.

  12. Inborn errors of metabolism

    MedlinePlus

    Metabolism - inborn errors of ... Bodamer OA. Approach to inborn errors of metabolism. In: Goldman L, Schafer AI, eds. Goldman's Cecil Medicine . 25th ed. Philadelphia, PA: Elsevier Saunders; 2015:chap 205. Rezvani I, Rezvani G. An ...

  13. [Paradigm errors in the old biomedical science].

    PubMed

    Skurvydas, Albertas

    2008-01-01

    The aim of this article was to review the basic drawbacks of deterministic and reductionistic thinking in biomedical science and to suggest ways of dealing with them. The present paradigm of research in biomedical science has not yet rid itself of the errors of the old science, i.e. the errors of absolute determinism and reductionism. These errors restrict the view and thinking of scholars engaged in the study of complex and dynamic phenomena and mechanisms. Recently, discussions on the science paradigm, aimed at spreading the new paradigm of complex dynamic systems and chaos theory, have been in progress all over the world. The near future will show which of the two, the old or the new science, will prevail. Our main conclusion is that deterministic and reductionistic thinking, applied in an improper way, can cause substantial damage rather than provide benefits for biomedical science. PMID:18541951

  14. Measurement error in human dental mensuration.

    PubMed

    Kieser, J A; Groeneveld, H T; McKee, J; Cameron, N

    1990-01-01

    The reliability of human odontometric data was evaluated in a sample of 60 teeth. Three observers, using their own instruments and the same definition of the mesiodistal and buccolingual dimensions were asked to repeat their measurements after 2 months. Precision, or repeatability, was analysed by means of Pearsonian correlation coefficients and mean absolute error values. Accuracy, or the absence of bias, was evaluated by means of Bland-Altman procedures and attendant Student t-tests, and also by an ANOVA procedure. The present investigation suggests that odontometric data have a high interobserver error component. Mesiodistal dimensions show greater imprecision and bias than buccolingual measurements. The results of the ANOVA suggest that bias is the result of interobserver error and is not due to the time between repeated measurements.
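
    The two analyses named above can be sketched minimally with made-up paired measurements (not the study's data): mean absolute error for precision, and a Bland-Altman mean difference with limits of agreement for bias.

```python
import statistics

# Illustrative paired mesiodistal measurements (mm) by two observers.
obs_a = [8.1, 7.9, 9.3, 6.8, 10.2, 8.5]
obs_b = [8.3, 7.8, 9.6, 6.9, 10.0, 8.9]

# Precision: mean absolute error between the paired measurements.
mae = statistics.mean(abs(a - b) for a, b in zip(obs_a, obs_b))

# Accuracy (Bland-Altman): the mean difference estimates systematic bias,
# and bias +/- 1.96 SD of the differences gives the limits of agreement.
diffs = [a - b for a, b in zip(obs_a, obs_b)]
bias = statistics.mean(diffs)
sd = statistics.stdev(diffs)
loa = (bias - 1.96 * sd, bias + 1.96 * sd)
```
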

  15. Universal Cosmic Absolute and Modern Science

    NASA Astrophysics Data System (ADS)

    Kostro, Ludwik

    The official sciences, especially all natural sciences, respect in their research the principle of methodological naturalism, i.e. they consider all phenomena as entirely natural and therefore never adduce supernatural entities and forces in their scientific explanations. The purpose of this paper is to show that Modern Science has its own self-existent, self-acting, and self-sufficient Natural All-in Being or Omni-Being, i.e. the entire Nature as a Whole, that justifies this scientific methodological naturalism. Since this Natural All-in Being is one and only, It should be considered the scientifically justified Natural Absolute of Science and should be called, in my opinion, the Universal Cosmic Absolute of Modern Science. It will also be shown that the Universal Cosmic Absolute is ontologically enormously stratified and is, in its ultimate, i.e. most fundamental, stratum, trans-reistic and trans-personal. This means that in its basic stratum It is neither a Thing nor a Person, although It contains in Itself all things and persons, as well as all other sentient and conscious individuals. At the turn of the 20th century, science began to look for a theory of everything, a final theory, a master theory. In my opinion, the natural Universal Cosmic Absolute will constitute in such a theory the radical, all-penetrating Ultimate Basic Reality and will substitute, step by step, for the traditional supernatural personal Absolute.

  16. Drug Errors in Anaesthesiology

    PubMed Central

    Jain, Rajnish Kumar; Katiyar, Sarika

    2009-01-01

    Summary Medication errors are a leading cause of morbidity and mortality in hospitalized patients. The incidence of these drug errors during anaesthesia is not certain. They impose a considerable financial burden to health care systems apart from the patient losses. Common causes of these errors and their prevention is discussed. PMID:20640103

  17. Measurement of the absolute differential cross section for np elastic scattering at 194 MeV

    SciTech Connect

    Sarsour, M.; Peterson, T.; Planinic, M.; Vigdor, S. E.; Allgower, C.; Hossbach, T.; Jacobs, W. W.; Klyachko, A. V.; Rinckel, T.; Stephenson, E. J.; Wissink, S. W.; Zhou, Y.; Bergenwall, B.; Blomgren, J.; Johansson, C.; Klug, J.; Nadel-Turonski, P.; Nilsson, L.; Olsson, N.; Pomp, S.

    2006-10-15

    A tagged medium-energy neutron beam was used in a precise measurement of the absolute differential cross section for np backscattering. The results resolve significant discrepancies within the np database concerning the angular dependence in this regime. The experiment has determined the absolute normalization with ±1.5% uncertainty, suitable to verify constraints of supposedly comparable precision that arise from the rest of the database in partial wave analyses. The analysis procedures, especially those associated with the evaluation of systematic errors in the experiment, are described in detail so that systematic uncertainties may be included in a reasonable way in subsequent partial wave analysis fits incorporating the present results.

  18. Morphology and Absolute Magnitudes of the SDSS DR7 QSOs

    NASA Astrophysics Data System (ADS)

    Coelho, B.; Andrei, A. H.; Antón, S.

    2014-10-01

    The ESA mission Gaia will furnish a complete census of the Milky Way, delivering astrometric, dynamical, and astrophysical information for 1 billion stars. Operating in all-sky repeated survey mode, Gaia will also provide measurements of extragalactic objects. Among the latter there will be at least 500,000 QSOs that will be used to build the reference frame upon which the several independent observations will be combined and interpreted. Not all QSOs are equally suited to fulfill this role of fundamental, fiducial grid points. Brightness, morphology, and variability define the astrometric error budget for each object. We made use of 3 morphological parameters based on PSF sharpness, circularity, and gaussianity, which enable us to distinguish the "real point-like" QSOs. These parameters are being explored on the spectroscopically certified QSOs of the SDSS DR7, to compare the performance against other morphology classification schemes, as well as to derive properties of the host galaxy. We present a new method, based on the Gaia quasar database, to derive absolute magnitudes in the SDSS filter domain. The method can be extrapolated over the entire optical window, including the Gaia filters. We discuss colors derived from SDSS apparent magnitudes and colors based on absolute magnitudes that we obtained taking into account corrections for dust extinction, either intergalactic or from the QSO host, and for the Lyman α forest. In the future we want to further discuss properties of the host galaxies, comparing, for example, the obtained morphological classification with the color, the apparent and absolute magnitudes, and the redshift distributions.
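
    The underlying conversion from apparent to absolute magnitude is the standard distance-modulus relation with extinction (and, for QSOs, K-correction) terms subtracted. A generic sketch, not the paper's specific Gaia-based method, with illustrative numbers:

```python
import math

def absolute_magnitude(m_app, d_lum_pc, extinction=0.0, k_corr=0.0):
    """Distance modulus relation: M = m - 5*log10(d/10 pc) - A - K."""
    return m_app - 5 * math.log10(d_lum_pc / 10) - extinction - k_corr

# A QSO observed at r = 19.0 with a luminosity distance of 1 Gpc and an
# assumed 0.1 mag extinction correction (illustrative numbers):
M = absolute_magnitude(19.0, 1e9, extinction=0.1)
```
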

  19. Absolute positioning using DORIS tracking of the SPOT-2 satellite

    NASA Technical Reports Server (NTRS)

    Watkins, M. M.; Ries, J. C.; Davis, G. W.

    1992-01-01

    The ability of the French DORIS system operating on the SPOT-2 satellite to provide absolute site positioning at the 20-30-centimeter level using 80 d of data is demonstrated. The accuracy of the vertical component is comparable to that of the horizontal components, indicating that residual troposphere error is not a limiting factor. The translation parameters indicate that the DORIS network realizes a geocentric frame to about 50 mm in each component. The considerable amount of data provided by the nearly global, all-weather DORIS network allowed the complex parameterization required to reduce the unmodeled forces acting on the low-Earth satellite. Site velocities with accuracies better than 10 mm/yr should certainly be possible using the multiyear span of the SPOT series and TOPEX/Poseidon missions.

  20. Full field imaging based instantaneous hyperspectral absolute refractive index measurement

    SciTech Connect

    Baba, Justin S; Boudreaux, Philip R

    2012-01-01

    Multispectral refractometers typically measure refractive index (RI) at discrete monochromatic wavelengths via a serial process. We report on the demonstration of a white-light, full field imaging based refractometer capable of instantaneous multispectral measurement of the absolute RI of clear liquid/gel samples across the entire visible light spectrum. The broad optical bandwidth refractometer is capable of hyperspectral measurement of RI in the range 1.30-1.70 between 400 nm and 700 nm, with a maximum error of 0.0036 units (0.24% of actual) at 414 nm for an n = 1.50 sample. We present system design and calibration method details as well as results from a system validation sample.

  1. Quantum theory allows for absolute maximal contextuality

    NASA Astrophysics Data System (ADS)

    Amaral, Barbara; Cunha, Marcelo Terra; Cabello, Adán

    2015-12-01

    Contextuality is a fundamental feature of quantum theory and a necessary resource for quantum computation and communication. It is therefore important to investigate how large contextuality can be in quantum theory. Linear contextuality witnesses can be expressed as a sum S of n probabilities, and the independence number α and the Tsirelson-like number ϑ of the corresponding exclusivity graph are, respectively, the maximum of S for noncontextual theories and for the theory under consideration. A theory allows for absolute maximal contextuality if it has scenarios in which ϑ /α approaches n . Here we show that quantum theory allows for absolute maximal contextuality despite what is suggested by the examination of the quantum violations of Bell and noncontextuality inequalities considered in the past. Our proof is not constructive and does not single out explicit scenarios. Nevertheless, we identify scenarios in which quantum theory allows for almost-absolute-maximal contextuality.

  2. Absolute calibration in vivo measurement systems

    SciTech Connect

    Kruchten, D.A.; Hickman, D.P.

    1991-02-01

    Lawrence Livermore National Laboratory (LLNL) is currently investigating a new method for obtaining absolute calibration factors for radiation measurement systems used to measure internally deposited radionuclides in vivo. Absolute calibration of in vivo measurement systems will eliminate the need to generate a series of human surrogate structures (i.e., phantoms) for calibrating in vivo measurement systems. The absolute calibration of in vivo measurement systems utilizes magnetic resonance imaging (MRI) to define physiological structure, size, and composition. The MRI image provides a digitized representation of the physiological structure, which allows for any mathematical distribution of radionuclides within the body. Using Monte Carlo transport codes, the emission spectrum from the body is predicted. The in vivo measurement equipment is calibrated using the Monte Carlo code and adjusting for the intrinsic properties of the detection system. The calibration factors are verified using measurements of existing phantoms and previously obtained measurements of human volunteers. 8 refs.

  3. Stimulus probability effects in absolute identification.

    PubMed

    Kent, Christopher; Lamberts, Koen

    2016-05-01

    This study investigated the effect of stimulus presentation probability on accuracy and response times in an absolute identification task. Three schedules of presentation were used to investigate the interaction between presentation probability and stimulus position within the set. Data from individual participants indicated strong effects of presentation probability on both proportion correct and response times. The effects were moderated by the ubiquitous stimulus position effect. The accuracy and response time data were predicted by an exemplar-based model of perceptual cognition (Kent & Lamberts, 2005). The bow in discriminability was also attenuated when presentation probability for middle items was relatively high, an effect that will constrain future model development. The study provides evidence for item-specific learning in absolute identification. Implications for other theories of absolute identification are discussed.

  4. Quantitative standards for absolute linguistic universals.

    PubMed

    Piantadosi, Steven T; Gibson, Edward

    2014-01-01

    Absolute linguistic universals are often justified by cross-linguistic analysis: If all observed languages exhibit a property, the property is taken to be a likely universal, perhaps specified in the cognitive or linguistic systems of language learners and users. In many cases, these patterns are then taken to motivate linguistic theory. Here, we show that cross-linguistic analysis will very rarely be able to statistically justify absolute, inviolable patterns in language. We formalize two statistical methods--frequentist and Bayesian--and show that in both it is possible to find strict linguistic universals, but that the number of independent languages necessary to do so is generally unachievable. This suggests that methods other than typological statistics are necessary to establish absolute properties of human language, and thus that many of the purported universals in linguistics have not received sufficient empirical justification.
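
    The frequentist side of this argument can be made concrete: if a fraction p of possible languages actually violates a candidate universal, observing zero violations in n independent languages rejects that violation rate at level α only when (1 − p)^n ≤ α. A sketch of the sample-size arithmetic (generic statistics, not the paper's exact formalization):

```python
import math

def languages_needed(p_violation, alpha=0.05):
    """Smallest n with (1 - p_violation)**n <= alpha: independent languages
    needed so that zero observed violations rejects violation rate p."""
    return math.ceil(math.log(alpha) / math.log(1 - p_violation))

# Rarer violation rates demand far more genuinely independent languages
# than are available for sampling:
counts = [languages_needed(p) for p in (0.10, 0.01, 0.001)]
```
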

  5. Absolute photoacoustic thermometry in deep tissue.

    PubMed

    Yao, Junjie; Ke, Haixin; Tai, Stephen; Zhou, Yong; Wang, Lihong V

    2013-12-15

    Photoacoustic thermometry is a promising tool for temperature measurement in deep tissue. Here we propose an absolute temperature measurement method based on the dual temperature dependences of the Grüneisen parameter and the speed of sound in tissue. By taking ratiometric measurements at two adjacent temperatures, we can eliminate the factors that are temperature irrelevant but difficult to correct for in deep tissue. To validate our method, absolute temperatures of blood-filled tubes embedded ~9 mm deep in chicken tissue were measured in a biologically relevant range from 28°C to 46°C. The temperature measurement accuracy was ~0.6°C. The results suggest that our method can be potentially used for absolute temperature monitoring in deep tissue during thermotherapy.
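
    The ratiometric idea can be sketched numerically: the photoacoustic amplitude is proportional to the Grüneisen parameter times fluence, absorption, and detection factors that are unknown in deep tissue but cancel in a ratio of signals at two adjacent temperatures. The linear Grüneisen calibration and all numbers below are illustrative assumptions, not the paper's values.

```python
# All numbers below are illustrative assumptions, not the paper's values.
a, b = 0.005, 0.05  # assumed linear Gruneisen calibration: Gamma(T) = a*T + b

def gruneisen(t_celsius):
    return a * t_celsius + b

unknown = 3.7  # fluence * absorption * detection gain: unknown in deep tissue

def pa_signal(t):
    # Photoacoustic amplitude ~ Gruneisen parameter times unknown factors.
    return gruneisen(t) * unknown

t_true, dt = 37.0, 1.0
ratio = pa_signal(t_true + dt) / pa_signal(t_true)  # `unknown` cancels here

# Invert ratio = (a*(t+dt) + b) / (a*t + b) for t:
t_est = (a * dt - (ratio - 1) * b) / ((ratio - 1) * a)
```

    Because the unknown factor divides out, the recovered temperature depends only on the measurable ratio and the Grüneisen calibration.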

  6. Molecular iodine absolute frequencies. Final report

    SciTech Connect

    Sansonetti, C.J.

    1990-06-25

    Fifty specified lines of {sup 127}I{sub 2} were studied by Doppler-free frequency modulation spectroscopy. For each line the classification of the molecular transition was determined, hyperfine components were identified, and one well-resolved component was selected for precise determination of its absolute frequency. In 3 cases, a nearby alternate line was selected for measurement because no well-resolved component was found for the specified line. Absolute frequency determinations were made with an estimated uncertainty of 1.1 MHz by locking a dye laser to the selected hyperfine component and measuring its wave number with a high-precision Fabry-Perot wavemeter. For each line results of the absolute measurement, the line classification, and a Doppler-free spectrum are given.

  7. Forecasts of time averages with a numerical weather prediction model

    NASA Technical Reports Server (NTRS)

    Roads, J. O.

    1986-01-01

    Forecasts of time averages of 1-10 days in duration by an operational numerical weather prediction model are documented for the global 500 mb height field in spectral space. Error growth in very idealized models is described in order to anticipate various features of these forecasts and in order to anticipate what the results might be if forecasts longer than 10 days were carried out by present day numerical weather prediction models. The data set for this study is described, and the equilibrium spectra and error spectra are documented; then, the total error is documented. It is shown how forecasts can immediately be improved by removing the systematic error, by using statistical filters, and by ignoring forecasts beyond about a week. Temporal variations in the error field are also documented.

  8. Averaging Internal Consistency Reliability Coefficients

    ERIC Educational Resources Information Center

    Feldt, Leonard S.; Charter, Richard A.

    2006-01-01

    Seven approaches to averaging reliability coefficients are presented. Each approach starts with a unique definition of the concept of "average," and no approach is more correct than the others. Six of the approaches are applicable to internal consistency coefficients. The seventh approach is specific to alternate-forms coefficients. Although the…
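
    The article's premise, that each averaging approach starts from a different definition of "average", can be illustrated with one textbook option for correlation-type coefficients, Fisher's z transform; whether it is appropriate for internal-consistency coefficients is exactly the kind of question such comparisons weigh. Values are illustrative:

```python
import math

# Illustrative reliability coefficients for three forms (made-up values).
alphas = [0.82, 0.88, 0.91]

# One definition of "average": transform each coefficient with Fisher's z,
# average in z-space, and back-transform. Compare with the plain mean.
z_mean = sum(math.atanh(a) for a in alphas) / len(alphas)
avg_fisher = math.tanh(z_mean)
avg_arithmetic = sum(alphas) / len(alphas)
```

    Because the z transform is convex on (0, 1), the back-transformed mean is never below the arithmetic mean, so the choice of definition changes the reported value.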

  9. Absolute Stability And Hyperstability In Hilbert Space

    NASA Technical Reports Server (NTRS)

    Wen, John Ting-Yung

    1989-01-01

    Theorems on the stability of feedback control systems are proved. The paper presents recent developments regarding theorems of absolute stability and hyperstability for feedforward-and-feedback control systems. The theorems are applied in the analysis of nonlinear, adaptive, and robust control. They are extended to provide sufficient conditions for stability in a system comprising a nonlinear feedback subsystem and a linear time-invariant (LTI) feedforward subsystem whose state space is a Hilbert space and whose input and output spaces have finitely many dimensions. (In the case of absolute stability, the feedback subsystem is memoryless and possibly time-varying; for hyperstability, the feedback subsystem is a dynamical system.)

  10. Variable selection for modeling the absolute magnitude at maximum of Type Ia supernovae

    NASA Astrophysics Data System (ADS)

    Uemura, Makoto; Kawabata, Koji S.; Ikeda, Shiro; Maeda, Keiichi

    2015-06-01

    We discuss what is an appropriate set of explanatory variables in order to predict the absolute magnitude at the maximum of Type Ia supernovae. In order to have a good prediction, the error for future data, which is called the "generalization error," should be small. We use cross-validation in order to control the generalization error and a LASSO-type estimator in order to choose the set of variables. This approach can be used even in the case that the number of samples is smaller than the number of candidate variables. We studied the Berkeley supernova database with our approach. Candidates for the explanatory variables include normalized spectral data, variables about lines, and previously proposed flux ratios, as well as the color and light-curve widths. As a result, we confirmed the past understanding about Type Ia supernovae: (i) The absolute magnitude at maximum depends on the color and light-curve width. (ii) The light-curve width depends on the strength of Si II. Recent studies have suggested adding more variables in order to explain the absolute magnitude. However, our analysis does not support adding any other variables in order to have a better generalization error.
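
    The variable-selection logic, choosing the model whose cross-validated prediction error (the generalization error) is smallest rather than the one that fits best in-sample, can be sketched with ordinary least squares standing in for the authors' LASSO estimator. The data below are synthetic and illustrative only:

```python
import random

random.seed(0)

# Synthetic supernova-like data: magnitude depends on a "color" variable;
# a second candidate variable is pure noise.
n = 40
color = [random.uniform(-0.2, 0.4) for _ in range(n)]
noise_var = [random.gauss(0, 1) for _ in range(n)]
mag = [-19.3 + 2.5 * c + random.gauss(0, 0.08) for c in color]

def fit_ols(X, y):
    """Least squares via normal equations (tiny Gaussian elimination)."""
    k = len(X[0])
    A = [[sum(row[a] * row[b] for row in X) for b in range(k)] for a in range(k)]
    v = [sum(row[a] * yi for row, yi in zip(X, y)) for a in range(k)]
    for c in range(k):                      # forward elimination
        for r in range(c + 1, k):
            f = A[r][c] / A[c][c]
            for cc in range(c, k):
                A[r][cc] -= f * A[c][cc]
            v[r] -= f * v[c]
    beta = [0.0] * k
    for c in reversed(range(k)):            # back-substitution
        beta[c] = (v[c] - sum(A[c][j] * beta[j] for j in range(c + 1, k))) / A[c][c]
    return beta

def loocv_mse(cols):
    """Leave-one-out cross-validation estimate of the generalization error."""
    X = [[1.0] + [col[i] for col in cols] for i in range(n)]
    err = 0.0
    for i in range(n):
        beta = fit_ols(X[:i] + X[i + 1:], mag[:i] + mag[i + 1:])
        pred = sum(b * x for b, x in zip(beta, X[i]))
        err += (pred - mag[i]) ** 2
    return err / n

mse_mean = loocv_mse([])                    # intercept-only baseline
mse_color = loocv_mse([color])              # color clearly helps
mse_both = loocv_mse([color, noise_var])    # noise variable should not help
```

    Adding the genuine predictor slashes the cross-validated error, while the noise variable typically leaves it unchanged or slightly worse, which is the criterion for leaving it out.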

  11. The Average of Rates and the Average Rate.

    ERIC Educational Resources Information Center

    Lindstrom, Peter

    1988-01-01

    Defines arithmetic, harmonic, and weighted harmonic means, and discusses their properties. Describes the application of these properties in problems involving fuel economy estimates and average rates of motion. Gives example problems and solutions. (CW)
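
    The distinction the article draws can be checked directly: for equal distances covered at different speeds, the average rate is the harmonic mean, not the arithmetic mean, of the speeds.

```python
from statistics import harmonic_mean

# Same distance at 30 mph out and 60 mph back.
speeds = [30, 60]
arithmetic = sum(speeds) / len(speeds)   # 45 mph: wrong for equal distances
average_rate = harmonic_mean(speeds)     # 40 mph

# First-principles check with 60 miles each way:
total_time = 60 / 30 + 60 / 60           # 3 hours
assert abs(120 / total_time - average_rate) < 1e-9
```
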

  12. [Medical errors in obstetrics].

    PubMed

    Marek, Z

    1984-08-01

    Errors in medicine may fall into 3 main categories: 1) medical errors made only by physicians, 2) technical errors made by physicians and other health care specialists, and 3) organizational errors associated with mismanagement of medical facilities. This classification of medical errors, as well as the definition and treatment of them, fully applies to obstetrics. However, the difference between obstetrics and other fields of medicine stems from the fact that an obstetrician usually deals with healthy women. Conversely, professional risk in obstetrics is very high, as errors and malpractice can lead to very serious complications. Observations show that the most frequent obstetrical errors occur in induced abortions, diagnosis of pregnancy, selection of optimal delivery techniques, treatment of hemorrhages, and other complications. Therefore, the obstetrician should be prepared to use intensive care procedures similar to those used for resuscitation.

  13. Error reduction in EMG signal decomposition.

    PubMed

    Kline, Joshua C; De Luca, Carlo J

    2014-12-01

    Decomposition of the electromyographic (EMG) signal into constituent action potentials and the identification of individual firing instances of each motor unit in the presence of ambient noise are inherently probabilistic processes, whether performed manually or with automated algorithms. Consequently, they are subject to errors. We set out to classify and reduce these errors by analyzing 1,061 motor-unit action-potential trains (MUAPTs), obtained by decomposing surface EMG (sEMG) signals recorded during human voluntary contractions. Decomposition errors were classified into two general categories: location errors representing variability in the temporal localization of each motor-unit firing instance and identification errors consisting of falsely detected or missed firing instances. To mitigate these errors, we developed an error-reduction algorithm that combines multiple decomposition estimates to determine a more probable estimate of motor-unit firing instances with fewer errors. The performance of the algorithm is governed by a trade-off between the yield of MUAPTs obtained above a given accuracy level and the time required to perform the decomposition. When applied to a set of sEMG signals synthesized from real MUAPTs, the identification error was reduced by an average of 1.78%, improving the accuracy to 97.0%, and the location error was reduced by an average of 1.66 ms. The error-reduction algorithm in this study is not limited to any specific decomposition strategy. Rather, we propose it be used for other decomposition methods, especially when analyzing precise motor-unit firing instances, as occurs when measuring synchronization.
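
The abstract says the error-reduction algorithm "combines multiple decomposition estimates to determine a more probable estimate of motor-unit firing instances." A hedged sketch of that idea (not the authors' algorithm; the tolerance, vote threshold, and firing times below are invented) is a majority vote over firing times, keeping only firings that appear, within a tolerance, in most of the estimates:

```python
# Sketch: combine several decomposition estimates of one motor unit's firing
# times by majority vote across estimates (times in ms).
def combine_estimates(estimates, tol=2.0, min_votes=None):
    """estimates: list of lists of firing times. Returns consensus times."""
    if min_votes is None:
        min_votes = len(estimates) // 2 + 1      # simple majority
    pool = sorted(t for est in estimates for t in est)
    combined, cluster = [], []
    for t in pool:
        # start a new cluster when t falls outside the tolerance window
        if cluster and t - cluster[0] > tol:
            if len(cluster) >= min_votes:
                combined.append(sum(cluster) / len(cluster))
            cluster = []
        cluster.append(t)
    if len(cluster) >= min_votes:
        combined.append(sum(cluster) / len(cluster))
    return combined

est = [[10.0, 50.2, 90.1],        # three hypothetical decomposition runs
       [10.4, 49.8, 130.0],       # 130.0 is a spurious (falsely detected) firing
       [9.9, 50.0, 90.3]]
print(combine_estimates(est))     # consensus firings near 10, 50, and 90 ms
```

Averaging the clustered times also reduces the location error of each retained firing, while the vote threshold suppresses falsely detected instances that appear in only one run.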

  14. Information systems and human error in the lab.

    PubMed

    Bissell, Michael G

    2004-01-01

    Health system costs are incurred daily in clinical laboratories due to human error. Indeed, a major impetus for automating clinical laboratories has always been the opportunity to simultaneously reduce cost and improve quality of operations by decreasing human error. But merely automating these processes is not enough. To the extent that the introduction of these systems leaves operators with less practice in dealing with unexpected events, or deskilled in problem-solving, new kinds of error will likely appear. Clinical laboratories could benefit by integrating findings on human error from modern behavioral science into their operations. Fully understanding human error requires a deep understanding of human information processing and cognition; predicting and preventing negative consequences requires applying this understanding to laboratory operations. Although the occurrence of a particular error at a particular instant cannot be absolutely prevented, human error rates can be reduced. The following principles are key: an understanding of the process of learning in relation to error; understanding the origin of errors, since this knowledge can be used to reduce their occurrence; optimal systems should be forgiving to the operator by absorbing errors, at least for a time; although much is known by industrial psychologists about how to write operating procedures and instructions in ways that reduce the probability of error, this expertise is hardly ever put to use in the laboratory; and a feedback mechanism must be designed into the system that enables the operator to recognize in real time that an error has occurred.

  15. A method to evaluate dose errors introduced by dose mapping processes for mass conserving deformations

    PubMed Central

    Yan, C.; Hugo, G.; Salguero, F. J.; Saleh-Sayah, N.; Weiss, E.; Sleeman, W. C.; Siebers, J. V.

    2012-01-01

    Purpose: To present a method to evaluate the dose mapping error introduced by the dose mapping process, and to apply the method to the 4D dose calculation process implemented in a research version of a commercial treatment planning system for a patient case. Methods: The average dose accumulated in a finite volume should be unchanged when the dose delivered to one anatomic instance of that volume is mapped to a different anatomic instance, provided that the tissue deformation between the anatomic instances is mass conserving. The average dose to a finite volume on image S is defined as d̄_S = e_S/m_S, where e_S is the energy deposited in the mass m_S contained in the volume. Since mass and energy should be conserved, when d̄_S is mapped to an image R (d̄_(S→R) = d̄_R), the mean dose mapping error is defined as Δd̄_m = |d̄_R - d̄_S| = |e_R/m_R - e_S/m_S|, where e_R and e_S are integral doses (energy deposited) and m_R and m_S are the masses within the region of interest (ROI) on image R and the corresponding ROI on image S, R and S being two anatomic instances from the same patient. Alternatively, applying simple differential propagation yields the differential dose mapping error, Δd̄_d = |(∂d̄/∂e)Δe + (∂d̄/∂m)Δm| = |(e_S - e_R)/m_R - ((m_S - m_R)/m_R^2)·e_R| = α|d̄_R - d̄_S|, with α = m_S/m_R. A 4D treatment plan on a ten-phase 4D-CT lung patient is used to demonstrate the dose mapping error evaluations for a patient case, in which the accumulated dose, D̄_R = Σ_(S=0..9) d̄_(S→R), and the associated error values (ΔD̄_m and ΔD̄_d) are calculated for a uniformly spaced set of ROIs. Results: For the single sample patient dose distribution, the average accumulated differential dose mapping error is 4.3%, the average absolute differential dose mapping error is 10.8%, and the average accumulated mean dose mapping error is 5.0%. Accumulated differential dose mapping errors within the gross tumor volume (GTV) and planning target volume (PTV) are lower, 0
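
The two error definitions in the abstract can be read off numerically. The energies and masses below are invented for a single ROI; the functions are a direct transcription of the formulas Δd̄_m = |e_R/m_R - e_S/m_S| and Δd̄_d = α|d̄_R - d̄_S| with α = m_S/m_R:

```python
# Numerical reading of the mean and differential dose mapping errors for one
# ROI, with hypothetical energies (J) and masses (kg).
def mean_dose(e, m):
    return e / m

def mean_dose_mapping_error(e_s, m_s, e_r, m_r):
    return abs(mean_dose(e_r, m_r) - mean_dose(e_s, m_s))

def differential_dose_mapping_error(e_s, m_s, e_r, m_r):
    alpha = m_s / m_r   # mass ratio scaling the mean error, per the abstract
    return alpha * mean_dose_mapping_error(e_s, m_s, e_r, m_r)

e_s, m_s = 2.0, 1.00    # energy and mass in the source ROI (hypothetical)
e_r, m_r = 2.0, 0.95    # same energy mapped into a slightly lighter target ROI
print(mean_dose_mapping_error(e_s, m_s, e_r, m_r))          # ~0.105 Gy
print(differential_dose_mapping_error(e_s, m_s, e_r, m_r))  # ~0.111 Gy
```

If the deformation were exactly mass conserving (m_R = m_S and e_R = e_S), both errors would be zero, which is the consistency check the method is built on.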

  16. [Comparison on the methods for spatial interpolation of the annual average precipitation in the Loess Plateau region].

    PubMed

    Yu, Yang; Wei, Wei; Chen, Li-ding; Yang, Lei; Zhang, Han-dan

    2015-04-01

    Based on 57 years (1957-2013) of daily precipitation data from the 85 meteorological stations in the Loess Plateau region, different spatial interpolation methods, including ordinary kriging (OK), inverse distance weighting (IDW) and radial basis functions (RBF), were applied to analyze the regional spatial variation of annual average precipitation. Meanwhile, the mean absolute error (MAE), the root mean square error (RMSE), the accuracy (AC) and the Pearson correlation coefficient (R) were compared among the interpolation results in order to quantify the effects of the different interpolation methods on the spatial variation of the annual average precipitation. The results showed that the Moran's I index was 0.67 for the 57-year annual average precipitation in the Loess Plateau region, indicating strong spatial correlation among the meteorological stations. The validation results of the 63 training stations and 22 test stations indicated that there were significant correlations between the training and test values among the different interpolation methods. However, the RMSE (IDW = 51.49, RBF = 43.79) and MAE (IDW = 38.98, RBF = 34.61) of the IDW and RBF methods were higher than those of the OK method. In addition, the comparison of the four semivariogram models (circular, spherical, exponential and Gaussian) for the OK method indicated that the circular model had the lowest MAE (32.34) and the highest accuracy (0.976), while the MAE of the exponential model was the highest (33.24). In conclusion, comparing the validation between the training data and test results of the different spatial interpolation methods, the circular model of the OK method was the best for obtaining accurate spatial interpolation of annual average precipitation in the Loess Plateau region.
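
Of the methods compared, inverse distance weighting is the simplest to state: the estimate at an unsampled point is a weighted mean of station values, with weights decaying as a power of distance. A minimal sketch, with invented station coordinates, precipitation values, and power parameter:

```python
# Inverse distance weighting (IDW) at a point, from scattered station data.
import math

def idw(x, y, stations, power=2):
    """stations: list of (xi, yi, value). Returns the IDW estimate at (x, y)."""
    num = den = 0.0
    for xi, yi, v in stations:
        d = math.hypot(x - xi, y - yi)
        if d == 0:
            return v                   # exact hit on a station: return its value
        w = 1.0 / d ** power           # closer stations get larger weights
        num += w * v
        den += w
    return num / den

# three hypothetical stations with annual average precipitation in mm
stations = [(0, 0, 400.0), (10, 0, 500.0), (0, 10, 450.0)]
print(idw(5, 5, stations))  # equidistant from all three -> plain mean, 450.0
```

Cross-validation as in the study (held-out stations scored by MAE/RMSE) is then a matter of predicting each test station from the training stations and comparing to its observed value.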

  17. Aircraft system modeling error and control error

    NASA Technical Reports Server (NTRS)

    Kulkarni, Nilesh V. (Inventor); Kaneshige, John T. (Inventor); Krishnakumar, Kalmanje S. (Inventor); Burken, John J. (Inventor)

    2012-01-01

    A method for modeling error-driven adaptive control of an aircraft. Normal aircraft plant dynamics is modeled, using an original plant description in which a controller responds to a tracking error e(k) to drive the component to a normal reference value according to an asymptote curve. Where the system senses that (1) at least one aircraft plant component is experiencing an excursion and (2) the return of this component value toward its reference value is not proceeding according to the expected controller characteristics, neural network (NN) modeling of aircraft plant operation may be changed. However, if (1) is satisfied but the error component is returning toward its reference value according to expected controller characteristics, the NN will continue to model operation of the aircraft plant according to an original description.

  18. Absolute Points for Multiple Assignment Problems

    ERIC Educational Resources Information Center

    Adlakha, V.; Kowalski, K.

    2006-01-01

    An algorithm is presented to solve multiple assignment problems in which a cost is incurred only when an assignment is made at a given cell. The proposed method recursively searches for single/group absolute points to identify cells that must be loaded in any optimal solution. Unlike other methods, the first solution is the optimal solution. The…

  19. Absolute partial photoionization cross sections of ozone.

    SciTech Connect

    Berkowitz, J.; Chemistry

    2008-04-01

    Despite the current concerns about ozone, absolute partial photoionization cross sections for this molecule in the vacuum ultraviolet (valence) region have been unavailable. By eclectic re-evaluation of old/new data and plausible assumptions, such cross sections have been assembled to fill this void.

  20. Stimulus Probability Effects in Absolute Identification

    ERIC Educational Resources Information Center

    Kent, Christopher; Lamberts, Koen

    2016-01-01

    This study investigated the effect of stimulus presentation probability on accuracy and response times in an absolute identification task. Three schedules of presentation were used to investigate the interaction between presentation probability and stimulus position within the set. Data from individual participants indicated strong effects of…

  1. Teaching Absolute Value Inequalities to Mature Students

    ERIC Educational Resources Information Center

    Sierpinska, Anna; Bobos, Georgeana; Pruncut, Andreea

    2011-01-01

    This paper gives an account of a teaching experiment on absolute value inequalities, whose aim was to identify characteristics of an approach that would realize the potential of the topic to develop theoretical thinking in students enrolled in prerequisite mathematics courses at a large, urban North American university. The potential is…

  2. Solving Absolute Value Equations Algebraically and Geometrically

    ERIC Educational Resources Information Center

    Shiyuan, Wei

    2005-01-01

    The way in which students can improve their comprehension by understanding the geometrical meaning of algebraic equations or solving algebraic equation geometrically is described. Students can experiment with the conditions of the absolute value equation presented, for an interesting way to form an overall understanding of the concept.

  3. Increasing Capacity: Practice Effects in Absolute Identification

    ERIC Educational Resources Information Center

    Dodds, Pennie; Donkin, Christopher; Brown, Scott D.; Heathcote, Andrew

    2011-01-01

    In most of the long history of the study of absolute identification--since Miller's (1956) seminal article--a severe limit on performance has been observed, and this limit has resisted improvement even by extensive practice. In a startling result, Rouder, Morey, Cowan, and Pfaltz (2004) found substantially improved performance with practice in the…

  4. Absolute Radiometric Calibration Of The Thematic Mapper

    NASA Astrophysics Data System (ADS)

    Slater, P. N.; Biggar, S. F.; Holm, R. G.; Jackson, R. D.; Mao, Y.; Moran, M. S.; Palmer, J. M.; Yuan, B.

    1986-11-01

    The results are presented of five in-flight absolute radiometric calibrations, made in the period July 1984 to November 1985 at White Sands, New Mexico, of the solar-reflective bands of the Landsat-5 Thematic Mapper (TM). The 23 band calibrations made on the five dates show a ±2.8% RMS variation from the mean, as a percentage of the mean.

  5. On Relative and Absolute Conviction in Mathematics

    ERIC Educational Resources Information Center

    Weber, Keith; Mejia-Ramos, Juan Pablo

    2015-01-01

    Conviction is a central construct in mathematics education research on justification and proof. In this paper, we claim that it is important to distinguish between absolute conviction and relative conviction. We argue that researchers in mathematics education frequently have not done so and this has lead to researchers making unwarranted claims…

  6. Modified McLeod pressure gage eliminates measurement errors

    NASA Technical Reports Server (NTRS)

    Kells, M. C.

    1966-01-01

    Modification of a McLeod gage eliminates errors in measuring absolute pressure of gases in the vacuum range. A valve which is internal to the gage and is magnetically actuated is positioned between the mercury reservoir and the sample gas chamber.

  7. High average power pockels cell

    DOEpatents

    Daly, Thomas P.

    1991-01-01

    A high average power pockels cell is disclosed which reduces the effect of thermally induced strains in high average power laser technology. The pockels cell includes an elongated, substantially rectangular crystalline structure formed from a KDP-type material to eliminate shear strains. The X- and Y-axes are oriented substantially perpendicular to the edges of the crystal cross-section and to the C-axis direction of propagation to eliminate shear strains.

  8. An Inequality between the Weighted Average and the Rowwise Correlation Coefficient for Proximity Matrices.

    ERIC Educational Resources Information Center

    Krijnen, Wim P.

    1994-01-01

    To assess association between rows of proximity matrices, H. de Vries (1993) introduces weighted average and row-wise average variants for Pearson's product-moment correlation, Spearman's rank correlation, and Kendall's rank correlation. For all three, the absolute value of the first variant is greater than or equal to the second. (SLD)

  9. The correction of vibration in frequency scanning interferometry based absolute distance measurement system for dynamic measurements

    NASA Astrophysics Data System (ADS)

    Lu, Cheng; Liu, Guodong; Liu, Bingguo; Chen, Fengdong; Zhuang, Zhitao; Xu, Xinke; Gan, Yu

    2015-10-01

    Absolute distance measurement systems are of significant interest in the field of metrology; they could improve the manufacturing efficiency and accuracy of large assemblies in fields such as aircraft construction, automotive engineering, and the production of modern windmill blades. Frequency scanning interferometry offers notable advantages as an absolute distance measurement technique: it has high precision and does not depend on a cooperative target. In this paper, the influence of inevitable vibration in the frequency-scanning-interferometry-based absolute distance measurement system is analyzed. The distance spectrum is broadened by the Doppler effect caused by vibration, which introduces a measurement error more than 10^3 times larger than the underlying change in optical path difference. To decrease the influence of vibration, the changes of the optical path difference are monitored by a frequency-stabilized laser running parallel to the frequency scanning interferometry. Experiments verified the effectiveness of this method.

  10. Software error detection

    NASA Technical Reports Server (NTRS)

    Buechler, W.; Tucker, A. G.

    1981-01-01

    Several methods were employed to detect both the occurrence and source of errors in the operational software of the AN/SLQ-32, a large embedded real-time electronic warfare command and control system for the ROLM 1606 computer. The ROLM computer provides information about invalid addressing, improper use of privileged instructions, stack overflows, and unimplemented instructions. Additionally, software techniques were developed to detect invalid jumps, indices out of range, infinite loops, stack underflows, and field size errors. Finally, data are saved to provide information about the status of the system when an error is detected, including I/O buffers, interrupt counts, stack contents, and recently passed locations. The various errors detected, techniques to assist in debugging problems, and segment simulation on a nontarget computer are discussed. These error detection techniques were a major factor in finding the primary cause of error in 98% of over 500 system dumps.

  11. Error detection method

    DOEpatents

    Olson, Eric J.

    2013-06-11

    An apparatus, program product, and method that run an algorithm on a hardware based processor, generate a hardware error as a result of running the algorithm, generate an algorithm output for the algorithm, compare the algorithm output to another output for the algorithm, and detect the hardware error from the comparison. The algorithm is designed to cause the hardware based processor to heat to a degree that increases the likelihood of hardware errors to manifest, and the hardware error is observable in the algorithm output. As such, electronic components may be sufficiently heated and/or sufficiently stressed to create better conditions for generating hardware errors, and the output of the algorithm may be compared at the end of the run to detect a hardware error that occurred anywhere during the run that may otherwise not be detected by traditional methodologies (e.g., due to cooling, insufficient heat and/or stress, etc.).

  12. TU-A-12A-09: Absolute Blood Flow Measurement in a Cardiac Phantom Using Low Dose CT

    SciTech Connect

    Ziemer, B; Hubbard, L; Lipinski, J; Molloi, S

    2014-06-15

    Purpose: To investigate a first pass analysis technique to measure absolute flow from low dose CT images in a cardiac phantom. This technique can be combined with a myocardial mass assignment to yield absolute perfusion using only two volume scans, reducing the radiation dose to the patient. Methods: A four-chamber cardiac phantom and perfusion chamber were constructed from poly-acrylic and connected with tubing to approximate anatomical features. The system was connected to a pulsatile pump, input/output reservoirs and a power contrast injector. Flow was varied in the range of 1-2.67 mL/s with the pump operating at 60 beats/min. The system was imaged once a second for 14 seconds with a 320-row scanner (Toshiba Medical Systems) using a contrast-enhanced, prospective-gated cardiac perfusion protocol. Flow was calculated by the following steps: subsequent images of the perfusion volume were subtracted to find the contrast entering the volume; this was normalized by an upstream region of known volume to convert Hounsfield unit (HU) values to concentration; and this was divided by the time difference between the subtracted images. The technique requires a relatively stable input contrast concentration, and no contrast can leave the perfusion volume before the flow measurement is completed. Results: The flow calculated from the images showed an excellent correlation with the known rates. The data were fit to a linear function with slope 1.03, intercept 0.02 and an R^2 value of 0.99. The average root mean square (RMS) error was 0.15 mL/s and the average standard deviation was 0.14 mL/s. The flow rate was stable within 7.7% across the full scan, which served to validate the model assumptions. Conclusion: Accurate, absolute flow rates were measured from CT images using a conservation of mass model. Measurements can be made using two volume scans, which can substantially reduce the radiation dose compared with current dynamic perfusion techniques.
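
The three-step flow calculation the abstract outlines (subtract successive enhancement images, normalize by an upstream reference to convert HU to contrast volume, divide by the frame interval) can be written schematically. All quantities and numbers below are invented for illustration, not taken from the study:

```python
# Schematic first-pass flow estimate for one frame pair.
def first_pass_flow(hu_sum_t0, hu_sum_t1, ref_hu_per_ml, dt_s):
    """hu_sum_*: integrated enhancement (HU·voxel) over the perfusion volume
    at two consecutive frames; ref_hu_per_ml: HU·voxel per mL of contrast,
    from the upstream reference region; dt_s: frame interval in seconds.
    Returns flow in mL/s."""
    delta_contrast_ml = (hu_sum_t1 - hu_sum_t0) / ref_hu_per_ml
    return delta_contrast_ml / dt_s

# e.g. enhancement rises by 300 HU·voxel over 1 s, and the upstream reference
# says 150 HU·voxel corresponds to 1 mL of contrast
print(first_pass_flow(1000.0, 1300.0, 150.0, 1.0))  # 2.0 mL/s
```

This mirrors the conservation-of-mass idea in the conclusion: contrast entering the volume per unit time equals the volumetric flow, provided none has yet left.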

  13. Application of a Combined Model with Autoregressive Integrated Moving Average (ARIMA) and Generalized Regression Neural Network (GRNN) in Forecasting Hepatitis Incidence in Heng County, China

    PubMed Central

    Liang, Hao; Gao, Lian; Liang, Bingyu; Huang, Jiegang; Zang, Ning; Liao, Yanyan; Yu, Jun; Lai, Jingzhen; Qin, Fengxiang; Su, Jinming; Ye, Li; Chen, Hui

    2016-01-01

    Background: Hepatitis is a serious public health problem with increasing cases and property damage in Heng County. It is necessary to develop a model to predict the hepatitis epidemic that could be useful for preventing this disease. Methods: The autoregressive integrated moving average (ARIMA) model and the generalized regression neural network (GRNN) model were used to fit the incidence data from the Heng County CDC (Center for Disease Control and Prevention) from January 2005 to December 2012. Then, the ARIMA-GRNN hybrid model was developed. The incidence data from January 2013 to December 2013 were used to validate the models. Several parameters, including mean absolute error (MAE), root mean square error (RMSE), mean absolute percentage error (MAPE) and mean square error (MSE), were used to compare the performance among the three models. Results: The morbidity of hepatitis from Jan 2005 to Dec 2012 showed seasonal variation and a slightly rising trend. The ARIMA(0,1,2)(1,1,1)_12 model was the most appropriate one, with the residual test showing a white noise sequence. The smoothing factors of the basic GRNN model and the combined model were 1.8 and 0.07, respectively. The four parameters of the hybrid model were lower than those of the two single models in the validation. The parameter values of the GRNN model were the lowest in the fitting of the three models. Conclusions: The hybrid ARIMA-GRNN model showed better hepatitis incidence forecasting in Heng County than the single ARIMA model and the basic GRNN model. It is a potential decision-supportive tool for controlling hepatitis in Heng County. PMID:27258555
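
The four comparison metrics named in the abstract (MAE, RMSE, MAPE, MSE) are standard and easy to write out explicitly. The toy observed/fitted series below are invented for illustration:

```python
# The four model-comparison metrics from the abstract.
import math

def mae(y, f):  return sum(abs(a - b) for a, b in zip(y, f)) / len(y)
def mse(y, f):  return sum((a - b) ** 2 for a, b in zip(y, f)) / len(y)
def rmse(y, f): return math.sqrt(mse(y, f))
def mape(y, f): return 100 * sum(abs((a - b) / a) for a, b in zip(y, f)) / len(y)

y = [10.0, 12.0, 8.0, 11.0]   # observed monthly incidence (invented)
f = [9.0, 13.0, 8.5, 10.0]    # fitted incidence (invented)
print(mae(y, f), rmse(y, f), mape(y, f))
```

Lower values on held-out (validation) months, rather than on the fitting period, are what justify preferring the hybrid model, since in-sample fit alone rewards overfitting (note the GRNN was best in fitting but not in validation).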

  14. The Error in Total Error Reduction

    PubMed Central

    Witnauer, James E.; Urcelay, Gonzalo P.; Miller, Ralph R.

    2013-01-01

    Most models of human and animal learning assume that learning is proportional to the discrepancy between a delivered outcome and the outcome predicted by all cues present during that trial (i.e., total error across a stimulus compound). This total error reduction (TER) view has been implemented in connectionist and artificial neural network models to describe the conditions under which weights between units change. Electrophysiological work has revealed that the activity of dopamine neurons is correlated with the total error signal in models of reward learning. Similar neural mechanisms presumably support fear conditioning, human contingency learning, and other types of learning. Using a computational modelling approach, we compared several TER models of associative learning to an alternative model that rejects the TER assumption in favor of local error reduction (LER), which assumes that learning about each cue is proportional to the discrepancy between the delivered outcome and the outcome predicted by that specific cue on that trial. The LER model provided a better fit to the reviewed data than the TER models. Given the superiority of the LER model with the present data sets, acceptance of TER should be tempered. PMID:23891930

  15. Spin glasses and error-correcting codes

    NASA Technical Reports Server (NTRS)

    Belongie, M. L.

    1994-01-01

    In this article, we study a model for error-correcting codes that comes from spin glass theory and leads to both new codes and a new decoding technique. Using the theory of spin glasses, it has been proven that a simple construction yields a family of binary codes whose performance asymptotically approaches the Shannon bound for the Gaussian channel. The limit is approached as the number of information bits per codeword approaches infinity while the rate of the code approaches zero. Thus, the codes rapidly become impractical. We present simulation results that show the performance of a few manageable examples of these codes. In the correspondence that exists between spin glasses and error-correcting codes, the concept of a thermal average leads to a method of decoding that differs from the standard method of finding the most likely information sequence for a given received codeword. Whereas the standard method corresponds to calculating the thermal average at temperature zero, calculating the thermal average at a certain optimum temperature results instead in the sequence of most likely information bits. Since linear block codes and convolutional codes can be viewed as examples of spin glasses, this new decoding method can be used to decode these codes in a way that minimizes the bit error rate instead of the codeword error rate. We present simulation results that show a small improvement in bit error rate by using the thermal average technique.
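
The decoding distinction the article draws, most likely codeword (the "temperature zero" rule) versus most likely individual bits (the thermal-average rule, which minimizes bit error rate), can be illustrated by exhaustive enumeration on a tiny code. The [4,2] code, noise model, and received vector below are invented for illustration and are not the spin-glass codes of the article:

```python
# Codeword-MAP vs. bitwise-MAP decoding on a toy linear code, by enumeration.
import itertools, math

def encode(u):
    # invented [4,2] code: info bits (u0, u1) -> (u0, u1, u0 XOR u1, u0)
    return (u[0], u[1], u[0] ^ u[1], u[0])

def posteriors(received, sigma=1.0):
    """Unnormalized P(u | received) for BPSK (bit b -> 1-2b) in Gaussian noise."""
    post = {}
    for u in itertools.product((0, 1), repeat=2):
        x = [1 - 2 * b for b in encode(u)]
        d2 = sum((r - xi) ** 2 for r, xi in zip(received, x))
        post[u] = math.exp(-d2 / (2 * sigma ** 2))
    return post

received = (0.9, -0.2, -0.8, 1.1)        # noisy observation of some codeword
post = posteriors(received)
word_map = max(post, key=post.get)       # most likely information sequence
z = sum(post.values())
bit_map = tuple(int(sum(p for u, p in post.items() if u[i]) / z > 0.5)
                for i in range(2))       # most likely value of each bit
print(word_map, bit_map)
```

Here the two rules agree, but in general they can differ: bitwise-MAP marginalizes over all codewords (the analogue of a thermal average at finite temperature) and so minimizes the bit error rate, whereas codeword-MAP minimizes the codeword error rate.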

  16. Model Error Budgets

    NASA Technical Reports Server (NTRS)

    Briggs, Hugh C.

    2008-01-01

    An error budget is a commonly used tool in the design of complex aerospace systems. It represents system performance requirements in terms of allowable errors and flows these down through a hierarchical structure to lower assemblies and components. The requirements may simply be 'allocated' based upon heuristics or experience, or they may be designed through use of physics-based models. This paper presents a basis for developing an error budget for models of the system, as opposed to the system itself. The need for model error budgets arises when system models are a principal design agent, as is increasingly common for poorly testable high performance space systems.

  17. TIME-AVERAGE-BASED METHODS FOR MULTI-ANGULAR SCALE ANALYSIS OF COSMIC-RAY DATA

    SciTech Connect

    Iuppa, R.; Di Sciascio, G. E-mail: giuseppe.disciascio@roma2.infn.it

    2013-04-01

    Over the past decade, a number of experiments dealt with the problem of measuring the arrival direction distribution of cosmic rays, looking for information on the propagation mechanisms and the identification of their sources. Any deviation from isotropy may be regarded as a signature of unforeseen or unknown phenomena, especially if well localized in the sky and occurring at low rigidity. This induced experimenters to search for excesses down to angular scales as narrow as 10°, raising the issue of properly filtering out contributions from wider structures. A solution commonly envisaged is based on time-average methods to determine the reference value of the cosmic-ray flux. Such techniques are nearly insensitive to signals wider than the time window in use, thus allowing the analysis to focus on medium- and small-scale signals. Nonetheless, the signal often cannot be excluded from the calculation of the reference value, which induces systematic errors. The use of time-average methods recently led to important discoveries about the medium-scale cosmic-ray anisotropy, present in both the northern and southern hemispheres. It is known that an excess (or deficit) is observed as less intense than it really is, and that fake deficit zones are rendered around true excesses, because there is no a priori knowledge of which signal is true and which is not. This work is an attempt to critically review the use of time-average-based methods for observing extended features in the cosmic-ray arrival distribution pattern.

  18. Application of an autoregressive integrated moving average model for predicting injury mortality in Xiamen, China

    PubMed Central

    Lin, Yilan; Chen, Min; Chen, Guowei; Wu, Xiaoqing; Lin, Tianquan

    2015-01-01

    Objective: Injury is currently an increasing public health problem in China, and reducing the loss due to injuries has become a main priority of public health policies. Early warning of injury mortality based on surveillance information is essential for reducing or controlling the disease burden of injuries. We conducted this study to assess the feasibility of applying autoregressive integrated moving average (ARIMA) models to predict mortality from injuries in Xiamen. Method: The monthly mortality data on injuries in Xiamen (1 January 2002 to 31 December 2013) were used to fit the ARIMA model with the conditional least-squares method. The values p, d and q in the ARIMA (p, d, q) model refer to the numbers of autoregressive lags, differences and moving average lags, respectively. The Ljung-Box test was used to check the residuals for white noise. The mean absolute percentage error (MAPE) between observed and fitted values was used to evaluate the predictive accuracy of the constructed models. Results: A total of 8274 injury-related deaths in Xiamen were identified during the study period; the average annual mortality rate was 40.99/100 000 persons. Three models, ARIMA (0, 1, 1), ARIMA (4, 1, 0) and ARIMA (1, 1, (2)), passed the parameter (p<0.01) and residual (p>0.05) tests, with MAPE 11.91%, 11.96% and 11.90%, respectively. We chose ARIMA (0, 1, 1) as the optimum model, whose MAPE value was similar to that of the other models but with the fewest parameters. According to the model, 54 persons would die from injuries each month in Xiamen in 2014. Conclusion: The ARIMA (0, 1, 1) model could be applied to predict mortality from injuries in Xiamen. PMID:26656013

  19. Determining GPS average performance metrics

    NASA Technical Reports Server (NTRS)

    Moore, G. V.

    1995-01-01

    Analytic and semi-analytic methods are used to show that users of the GPS constellation can expect performance variations based on their location. Specifically, performance is shown to be a function of both altitude and latitude. These results stem from the fact that the GPS constellation is itself non-uniform. For example, GPS satellites are over four times as likely to be directly over Tierra del Fuego than over Hawaii or Singapore. Inevitable performance variations due to user location occur for ground, sea, air and space GPS users. These performance variations can be studied in an average relative sense. A semi-analytic tool which symmetrically allocates GPS satellite latitude belt dwell times among longitude points is used to compute average performance metrics. These metrics include average number of GPS vehicles visible, relative average accuracies in the radial, intrack and crosstrack (or radial, north/south, east/west) directions, and relative average PDOP or GDOP. The tool can be quickly changed to incorporate various user antenna obscuration models and various GPS constellation designs. Among other applications, tool results can be used in studies to: predict locations and geometries of best/worst case performance, design GPS constellations, determine optimal user antenna location and understand performance trends among various users.

  20. Vocal attractiveness increases by averaging.

    PubMed

    Bruckert, Laetitia; Bestelmeyer, Patricia; Latinus, Marianne; Rouger, Julien; Charest, Ian; Rousselet, Guillaume A; Kawahara, Hideki; Belin, Pascal

    2010-01-26

    Vocal attractiveness has a profound influence on listeners, a bias known as the "what sounds beautiful is good" vocal attractiveness stereotype [1], with tangible impact on a voice owner's success at mating, job applications, and/or elections. The prevailing view holds that attractive voices are those that signal desirable attributes in a potential mate [2-4], e.g., lower pitch in male voices. However, this account does not explain our preferences in more general social contexts in which voices of both genders are evaluated. Here we show that averaging voices via auditory morphing [5] results in more attractive voices, irrespective of the speaker's or listener's gender. Moreover, we show that this phenomenon is largely explained by two independent by-products of averaging: a smoother voice texture (reduced aperiodicities) and a greater similarity in pitch and timbre with the average of all voices (reduced "distance to mean"). These results provide the first evidence for a phenomenon of vocal attractiveness increasing by averaging, analogous to the well-established effect of facial averaging [6, 7]. They highlight prototype-based coding [8] as a central feature of voice perception, emphasizing the similarity in the mechanisms of face and voice perception.

  1. Vocal attractiveness increases by averaging.

    PubMed

    Bruckert, Laetitia; Bestelmeyer, Patricia; Latinus, Marianne; Rouger, Julien; Charest, Ian; Rousselet, Guillaume A; Kawahara, Hideki; Belin, Pascal

    2010-01-26

    Vocal attractiveness has a profound influence on listeners-a bias known as the "what sounds beautiful is good" vocal attractiveness stereotype [1]-with tangible impact on a voice owner's success at mating, job applications, and/or elections. The prevailing view holds that attractive voices are those that signal desirable attributes in a potential mate [2-4]-e.g., lower pitch in male voices. However, this account does not explain our preferences in more general social contexts in which voices of both genders are evaluated. Here we show that averaging voices via auditory morphing [5] results in more attractive voices, irrespective of the speaker's or listener's gender. Moreover, we show that this phenomenon is largely explained by two independent by-products of averaging: a smoother voice texture (reduced aperiodicities) and a greater similarity in pitch and timbre with the average of all voices (reduced "distance to mean"). These results provide the first evidence for a phenomenon of vocal attractiveness increases by averaging, analogous to a well-established effect of facial averaging [6, 7]. They highlight prototype-based coding [8] as a central feature of voice perception, emphasizing the similarity in the mechanisms of face and voice perception. PMID:20129047

  2. Absolute testing of flats in sub-stitching interferometer by rotation-shift method

    NASA Astrophysics Data System (ADS)

    Jia, Xin; Xu, Fuchao; Xie, Weimin; Li, Yun; Xing, Tingwen

    2015-09-01

    Most commercially available sub-aperture stitching interferometers measure the surface with a standard lens that produces a reference wavefront, and the precision of the interferometer is generally limited by that standard lens. Higher test accuracy can be achieved by removing the error of the reference surface with an absolute testing method. When the testing accuracy (repeatability and reproducibility) approaches 1 nm, factors beyond the reference surface also affect the measuring accuracy, such as the environment, zoom magnification, stitching precision, tooling and fixturing, and the characteristics of the optical materials. We established a stitching system in a class-1000 cleanroom. The system includes a Zygo interferometer and a motion system with a Bilz active isolation system at level VC-F. We review the traditional absolute flat testing methods and emphasize the rotation-shift method. Using the rotation-shift method, we obtain the profiles of the reference lens and the test lens. The main difficulty of the rotation-shift method is tilt error; in the motion system, we keep the tilt error below 4 arcsec to reduce it. To obtain higher testing accuracy, we analyze the influence of the environment on surface-shape measurement accuracy by recording the environmental error with Fluke test equipment.

  3. Vibrational averages along thermal lines

    NASA Astrophysics Data System (ADS)

    Monserrat, Bartomeu

    2016-01-01

    A method is proposed for the calculation of vibrational quantum and thermal expectation values of physical properties from first principles. Thermal lines are introduced: these are lines in configuration space parametrized by temperature, such that the value of any physical property along them is approximately equal to the vibrational average of that property. The number of sampling points needed to explore the vibrational phase space is reduced by up to an order of magnitude when the full vibrational density is replaced by thermal lines. Calculations of the vibrational averages of several properties and systems are reported, namely, the internal energy and the electronic band gap of diamond and silicon, and the chemical shielding tensor of L-alanine. Thermal lines pave the way for complex calculations of vibrational averages, including large systems and methods beyond semilocal density functional theory.

  4. An absolute measure for a key currency

    NASA Astrophysics Data System (ADS)

    Oya, Shunsuke; Aihara, Kazuyuki; Hirata, Yoshito

    It is generally considered that the US dollar and the euro are the key currencies in the world and in Europe, respectively. However, there is no absolute general measure for a key currency. Here, we investigate the 24-hour periodicity of foreign exchange markets using a recurrence plot, and define an absolute measure for a key currency based on the strength of the periodicity. Moreover, we analyze the time evolution of this measure. The results show that the credibility of the US dollar has not decreased significantly since the Lehman shock, when Lehman Brothers went bankrupt and disrupted economic markets, and has even increased relative to that of the euro and the Japanese yen.

  5. Probing absolute spin polarization at the nanoscale.

    PubMed

    Eltschka, Matthias; Jäck, Berthold; Assig, Maximilian; Kondrashov, Oleg V; Skvortsov, Mikhail A; Etzkorn, Markus; Ast, Christian R; Kern, Klaus

    2014-12-10

    Probing absolute values of spin polarization at the nanoscale offers insight into the fundamental mechanisms of spin-dependent transport. Employing the Zeeman splitting in superconducting tips (Meservey-Tedrow-Fulde effect), we introduce a novel spin-polarized scanning tunneling microscopy that combines the probing capability of the absolute values of spin polarization with precise control at the atomic scale. We utilize our novel approach to measure the locally resolved spin polarization of magnetic Co nanoislands on Cu(111). We find that the spin polarization is enhanced by 65% when increasing the width of the tunnel barrier by only 2.3 Å due to the different decay of the electron orbitals into vacuum. PMID:25423049

  6. Absolute radiometry and the solar constant

    NASA Technical Reports Server (NTRS)

    Willson, R. C.

    1974-01-01

    A series of active cavity radiometers (ACRs) are described which have been developed as standard detectors for the accurate measurement of irradiance in absolute units. It is noted that the ACR is an electrical substitution calorimeter, is designed for automatic remote operation in any environment, and can make irradiance measurements in the range from low-level IR fluxes up to 30 solar constants with small absolute uncertainty. The instrument operates in a differential mode by chopping the radiant flux to be measured at a slow rate, and irradiance is determined from two electrical power measurements together with the instrumental constant. Results are reported for measurements of the solar constant with two types of ACRs. The more accurate measurement yielded a value of 136.6 plus or minus 0.7 mW/sq cm (1.958 plus or minus 0.010 cal/sq cm per min).

  7. From Hubble's NGSL to Absolute Fluxes

    NASA Technical Reports Server (NTRS)

    Heap, Sara R.; Lindler, Don

    2012-01-01

    Hubble's Next Generation Spectral Library (NGSL) consists of R ~ 1000 spectra of 374 stars of assorted temperature, gravity, and metallicity. Each spectrum covers the wavelength range 0.18-1.00 microns. The library can be viewed and/or downloaded from the website, http://archive.stsci.edu/prepds/stisngsll. Stars in the NGSL are now being used as absolute flux standards at ground-based observatories. However, the uncertainty in the absolute flux is about 2%, which does not meet the requirements of dark-energy surveys. We are therefore developing an observing procedure that should yield fluxes with uncertainties less than 1% and will take part in an HST proposal to observe up to 15 stars using this new procedure.

  8. Impact of Winko on absolute discharges.

    PubMed

    Balachandra, Krishna; Swaminath, Sam; Litman, Larry C

    2004-01-01

    In Canada, case laws have had a significant impact on the way mentally ill offenders are managed, both in the criminal justice system and in the forensic mental health system. The Supreme Court of Canada's decision with respect to Winko has set a major precedent in the application of the test of significant risk to the safety of the public in making dispositions by the Ontario Review Board and granting absolute discharges to the mentally ill offenders in the forensic health system. Our study examines the impact of the Supreme Court of Canada's decision before and after Winko. The results show that the numbers of absolute discharges have increased post-Winko, which was statistically significant, but there could be other factors influencing this increase.

  9. Asteroid absolute magnitudes and slope parameters

    NASA Technical Reports Server (NTRS)

    Tedesco, Edward F.

    1991-01-01

    A new listing of absolute magnitudes (H) and slope parameters (G) has been created and published in the Minor Planet Circulars; this same listing will appear in the 1992 Ephemerides of Minor Planets. Unlike previous listings, the values of the current list were derived from fits of data at the V band. All observations were reduced in the same fashion using, where appropriate, a single basis default value of 0.15 for the slope parameter. Distances and phase angles were computed for each observation. The data for 113 asteroids was of sufficiently high quality to permit derivation of their H and G. These improved absolute magnitudes and slope parameters will be used to deduce the most reliable bias-corrected asteroid size-frequency distribution yet made.
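
    The H and G values in such a listing feed the standard IAU two-parameter (H, G) phase law, with G defaulting to 0.15 as the abstract notes. The sketch below uses that standard phase function; the asteroid values are illustrative, not taken from the listing:

    ```python
    import math

    def hg_magnitude(H, G, alpha_deg, r_au, delta_au):
        """Apparent V magnitude under the IAU (H, G) phase law (sketch).

        H: absolute magnitude; G: slope parameter; alpha_deg: solar phase
        angle; r_au, delta_au: heliocentric and geocentric distances (AU).
        """
        a = math.radians(alpha_deg)
        # Bowell et al. phase functions (standard constants).
        phi1 = math.exp(-3.33 * math.tan(a / 2) ** 0.63)
        phi2 = math.exp(-1.87 * math.tan(a / 2) ** 1.22)
        reduced = H - 2.5 * math.log10((1 - G) * phi1 + G * phi2)
        return reduced + 5 * math.log10(r_au * delta_au)

    # At zero phase angle and unit distances the result is H itself.
    m = hg_magnitude(H=3.34, G=0.15, alpha_deg=0.0, r_au=1.0, delta_au=1.0)
    ```

    Fitting V-band observations reduced to unit distance against this law, with G fixed at 0.15 where the data are sparse, is the kind of procedure the abstract describes.
    
    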

  10. Absolute-magnitude distributions of supernovae

    SciTech Connect

    Richardson, Dean; Wright, John; Jenkins III, Robert L.; Maddox, Larry

    2014-05-01

    The absolute-magnitude distributions of seven supernova (SN) types are presented. The data used here were primarily taken from the Asiago Supernova Catalogue, but were supplemented with additional data. We accounted for both foreground and host-galaxy extinction. A bootstrap method is used to correct the samples for Malmquist bias. Separately, we generate volume-limited samples, restricted to events within 100 Mpc. We find that the superluminous events (M_B < -21) make up only about 0.1% of all SNe in the bias-corrected sample. The subluminous events (M_B > -15) make up about 3%. The normal Ia distribution was the brightest with a mean absolute blue magnitude of -19.25. The IIP distribution was the dimmest at -16.75.

  11. Absolute and relative dosimetry for ELIMED

    SciTech Connect

    Cirrone, G. A. P.; Schillaci, F.; Scuderi, V.; Cuttone, G.; Candiano, G.; Musumarra, A.; Pisciotta, P.; Romano, F.; Carpinelli, M.; Presti, D. Lo; Raffaele, L.; Tramontana, A.; Cirio, R.; Sacchi, R.; Monaco, V.; Marchetto, F.; Giordanengo, S.

    2013-07-26

    The definition of detectors, methods and procedures for the absolute and relative dosimetry of laser-driven proton beams is a crucial step toward the clinical use of this new kind of beam. Hence, one of the ELIMED tasks will be the definition of procedures aimed at obtaining an absolute dose measurement at the end of the transport beamline with an accuracy as close as possible to the one required for clinical applications (i.e., of the order of 5% or less). Relative dosimetry procedures must be established as well: they are necessary in order to determine and verify the beam dose distributions and to monitor the beam fluence and the energy spectra during irradiations. Radiochromic films, CR39, a Faraday cup, a Secondary Emission Monitor (SEM) and a transmission ionization chamber will be considered, designed and studied in order to perform a full dosimetric characterization of the ELIMED proton beam.

  12. Absolute photoionization cross sections of atomic oxygen

    NASA Technical Reports Server (NTRS)

    Samson, J. A. R.; Pareek, P. N.

    1985-01-01

    The absolute values of photoionization cross sections of atomic oxygen were measured from the ionization threshold to 120 A. An auto-ionizing resonance belonging to the 2S2P4(4P)3P(3Do, 3So) transition was observed at 479.43 A and another line at 389.97 A. The experimental data is in excellent agreement with rigorous close-coupling calculations that include electron correlations in both the initial and final states.

  14. The absolute spectrophotometric catalog by Anita Cochran

    NASA Astrophysics Data System (ADS)

    Burnashev, V. I.; Burnasheva, B. A.; Ruban, E. V.; Hagen-Torn, E. I.

    2014-06-01

    The absolute spectrophotometric catalog by Anita Cochran is presented in a machine-readable form. The catalog systematizes observations acquired at the McDonald Observatory in 1977-1978. The data are compared with other sources, in particular, the calculated broadband stellar magnitudes are compared with photometric observations by other authors, to show that the observational data given in the catalog are reliable and suitable for a variety of applications. Observations of variable stars of different types make Cochran's catalog especially valuable.

  15. Preventing errors in laterality.

    PubMed

    Landau, Elliot; Hirschorn, David; Koutras, Iakovos; Malek, Alexander; Demissie, Seleshie

    2015-04-01

    An error in laterality is the reporting of a finding that is present on the right side as on the left, or vice versa. While different medical and surgical specialties have implemented protocols to help prevent such errors, very few studies have been published that describe these errors in radiology reports and ways to prevent them. We devised a system that allows the radiologist to view reports in a separate window, displayed in a simple font and with all terms of laterality highlighted in separate colors. This allows the radiologist to correlate all detected laterality terms in the report with the images open in PACS and correct them before the report is finalized. The system was monitored every time an error in laterality was detected. The system detected 32 errors in laterality over a 7-month period (a rate of 0.0007%), with CT having the highest error-detection rate of all modalities. Significantly more errors were detected in male patients than in female patients. In conclusion, our study demonstrated that with our system, laterality errors can be detected and corrected prior to finalizing reports.
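
    The highlighting step described above can be sketched with a simple pattern match. The term list and the bracket markers below are illustrative stand-ins for the authors' color-coded display, not their actual implementation:

    ```python
    import re

    # Hypothetical laterality terms to flag in a report.
    LATERALITY = re.compile(r"\b(left|right|bilateral)\b", re.IGNORECASE)

    def highlight_laterality(report):
        """Wrap each laterality term in markers (stand-ins for colors)
        so the radiologist can verify each one against the images."""
        return LATERALITY.sub(lambda m: f"[{m.group(0).upper()}]", report)

    text = "Nodule in the right upper lobe; the left lung is clear."
    marked = highlight_laterality(text)
    # -> "Nodule in the [RIGHT] upper lobe; the [LEFT] lung is clear."
    ```

    A production system would also need to handle abbreviations (e.g., "R"/"L") and negations, which this sketch omits.
    
    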

  16. Refractive error blindness.

    PubMed Central

    Dandona, R.; Dandona, L.

    2001-01-01

    Recent data suggest that a large number of people are blind in different parts of the world due to high refractive error because they are not using appropriate refractive correction. Refractive error as a cause of blindness has been recognized only recently with the increasing use of presenting visual acuity for defining blindness. In addition to blindness due to naturally occurring high refractive error, inadequate refractive correction of aphakia after cataract surgery is also a significant cause of blindness in developing countries. Blindness due to refractive error in any population suggests that eye care services in general in that population are inadequate since treatment of refractive error is perhaps the simplest and most effective form of eye care. Strategies such as vision screening programmes need to be implemented on a large scale to detect individuals suffering from refractive error blindness. Sufficient numbers of personnel to perform reasonable quality refraction need to be trained in developing countries. Also adequate infrastructure has to be developed in underserved areas of the world to facilitate the logistics of providing affordable reasonable-quality spectacles to individuals suffering from refractive error blindness. Long-term success in reducing refractive error blindness worldwide will require attention to these issues within the context of comprehensive approaches to reduce all causes of avoidable blindness. PMID:11285669

  17. Everyday Scale Errors

    ERIC Educational Resources Information Center

    Ware, Elizabeth A.; Uttal, David H.; DeLoache, Judy S.

    2010-01-01

    Young children occasionally make "scale errors"--they attempt to fit their bodies into extremely small objects or attempt to fit a larger object into another, tiny, object. For example, a child might try to sit in a dollhouse-sized chair or try to stuff a large doll into it. Scale error research was originally motivated by parents' and…

  18. Action errors, error management, and learning in organizations.

    PubMed

    Frese, Michael; Keith, Nina

    2015-01-01

    Every organization is confronted with errors. Most errors are corrected easily, but some may lead to negative consequences. Organizations often focus on error prevention as a single strategy for dealing with errors. Our review suggests that error prevention needs to be supplemented by error management--an approach directed at effectively dealing with errors after they have occurred, with the goal of minimizing negative and maximizing positive error consequences (examples of the latter are learning and innovations). After defining errors and related concepts, we review research on error-related processes affected by error management (error detection, damage control). Empirical evidence on positive effects of error management in individuals and organizations is then discussed, along with emotional, motivational, cognitive, and behavioral pathways of these effects. Learning from errors is central, but like other positive consequences, learning occurs under certain circumstances--one being the development of a mind-set of acceptance of human error.

  19. Absolute flux density calibrations: Receiver saturation effects

    NASA Technical Reports Server (NTRS)

    Freiley, A. J.; Ohlson, J. E.; Seidel, B. L.

    1978-01-01

    The effect of receiver saturation was examined for a total power radiometer which uses an ambient load for calibration. Extension to other calibration schemes is indicated. The analysis shows that a monotonic receiver saturation characteristic could cause either positive or negative measurement errors, with polarity depending upon operating conditions. A realistic model of the receiver was made by using a linear-cubic voltage transfer characteristic. The evaluation of measurement error for this model provided a means for correcting radio source measurements.

  20. Model of glucose sensor error components: identification and assessment for new Dexcom G4 generation devices.

    PubMed

    Facchinetti, Andrea; Del Favero, Simone; Sparacino, Giovanni; Cobelli, Claudio

    2015-12-01

    It is clinically well-established that minimally invasive subcutaneous continuous glucose monitoring (CGM) sensors can significantly improve diabetes treatment. However, CGM readings are still not as reliable as those provided by standard fingerprick blood glucose (BG) meters. In addition to unavoidable random measurement noise, other components of sensor error are distortions due to the blood-to-interstitial glucose kinetics and systematic under-/overestimations associated with the sensor calibration process. A quantitative assessment of these components, and the ability to simulate them with precision, is of paramount importance in the design of CGM-based applications, e.g., the artificial pancreas (AP), and in their in silico testing. In the present paper, we identify and assess a model of sensor error for two sensors, the G4 Platinum (G4P) and the advanced G4 for artificial pancreas studies (G4AP), both belonging to the recently presented "fourth" generation of Dexcom CGM sensors but differing in their data processing. Results are also compared with those obtained by a sensor belonging to the previous, "third," generation by the same manufacturer, the SEVEN Plus (7P). For each sensor, the error model is derived from 12-h CGM recordings of two sensors used simultaneously and BG samples collected in parallel every 15 ± 5 min. Thanks to technological innovations, G4P outperforms 7P, with an average mean absolute relative difference (MARD) of 11.1% versus 14.2%, respectively, and a lowering of each error component by about 30%. Thanks to its more sophisticated data processing algorithms, G4AP proved more reliable than G4P, with a MARD of 10.0% and a further 20% decrease in the error due to blood-to-interstitial glucose kinetics.
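
    MARD, the headline accuracy metric in this comparison, is simply the mean of the absolute relative differences between paired CGM and reference BG readings. A minimal sketch, with invented example values in mg/dL:

    ```python
    import numpy as np

    def mard(cgm, bg):
        """Mean absolute relative difference (%) between paired CGM
        readings and reference BG measurements."""
        cgm = np.asarray(cgm, dtype=float)
        bg = np.asarray(bg, dtype=float)
        return 100.0 * np.mean(np.abs(cgm - bg) / bg)

    bg_ref = [100, 150, 200, 80]   # fingerprick references (invented)
    cgm    = [110, 140, 210, 85]   # paired sensor readings (invented)
    # Relative errors: 10%, 6.67%, 5%, 6.25% -> MARD about 6.98%
    score = mard(cgm, bg_ref)
    ```

    A lower MARD (e.g., the 10.0% reported for G4AP versus 14.2% for 7P) means the sensor tracks the reference more closely on average.
    
    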

  1. Chemical composition of French mimosa absolute oil.

    PubMed

    Perriot, Rodolphe; Breme, Katharina; Meierhenrich, Uwe J; Carenini, Elise; Ferrando, Georges; Baldovini, Nicolas

    2010-02-10

    For decades, mimosa (Acacia dealbata) absolute oil has been used in the flavor and perfume industry. Today, it finds an application in over 80 perfumes, and its worldwide industrial production is estimated at five tons per year. Here we report on the chemical composition of French mimosa absolute oil. Straight-chain analogues from C6 to C26 with different functional groups (hydrocarbons, esters, aldehydes, diethyl acetals, alcohols, and ketones) were identified in the volatile fraction. Most of them are long-chain molecules: (Z)-heptadec-8-ene, heptadecane, nonadecane, and palmitic acid are the most abundant, and constituents such as 2-phenethyl alcohol, methyl anisate, and ethyl palmitate are present in smaller amounts. The heavier constituents were mainly triterpenoids such as lupenone and lupeol, which were identified as two of the main components. (Z)-Heptadec-8-ene, lupenone, and lupeol were quantified by GC-MS in SIM mode using external standards and represent 6%, 20%, and 7.8% (w/w) of the absolute oil, respectively. Moreover, odorant compounds were extracted by SPME and analyzed by GC-sniffing, leading to the perception of 57 odorant zones, of which 37 compounds were identified by their odorant description, mass spectrum, retention index, and injection of the reference compound. PMID:20070087

  2. Measurement of absolute gravity acceleration in Firenze

    NASA Astrophysics Data System (ADS)

    de Angelis, M.; Greco, F.; Pistorio, A.; Poli, N.; Prevedelli, M.; Saccorotti, G.; Sorrentino, F.; Tino, G. M.

    2011-01-01

    This paper reports the results from the accurate measurement of the acceleration of gravity g taken at two separate premises in the Polo Scientifico of the University of Firenze (Italy). In these laboratories, two separate experiments aiming at measuring the Newtonian constant and testing the Newtonian law at short distances are in progress. Both experiments require independent knowledge of the local value of g. The only available datum, pertaining to the Italian zero-order gravity network, was taken more than 20 years ago at a distance of more than 60 km from the study site. Gravity measurements were conducted using an FG5 absolute gravimeter, and accompanied by seismic recordings for evaluating the noise condition at the site. The absolute accelerations of gravity at the two laboratories are (980 492 160.6 ± 4.0) μGal and (980 492 048.3 ± 3.0) μGal for the European Laboratory for Non-Linear Spectroscopy (LENS) and the Dipartimento di Fisica e Astronomia, respectively. Other than for the two referenced experiments, the data presented here will serve as a benchmark for any future study requiring accurate knowledge of the absolute value of the acceleration of gravity in the study region.

  3. A Methodology for Absolute Isotope Composition Measurement

    NASA Astrophysics Data System (ADS)

    Shen, J. J.; Lee, D.; Liang, W.

    2007-12-01

    The double spike technique is a well-established method for isotope composition measurement by TIMS of samples that show a natural mass fractionation effect, but defining the isotope composition of the double spike itself remains a problem. In this study, we modified the classical double spike technique and found that the modified technique can solve for the "true" isotope composition of the double spike itself. Given the true isotope composition of the double spike, we can measure the absolute isotope composition of a sample that has a natural fractionation effect. A new vector analytical method has been developed in order to obtain the true isotopic composition of a 42Ca-48Ca double spike; this is achieved by using two different sample-spike mixtures combined with the double spike and the natural Ca data. Because the natural sample, the two mixtures, and the spike should all lie on a single mixing line, we are able to constrain the true isotopic composition of our double spike using this new approach. This method can be used not only in the Ca system but also in the Ti, Cr, Fe, Ni, Zn, Mo, Ba and Pb systems. The absolute double spike isotopic ratio is important because it can save considerable time in checking different reference standards. This is especially true for Pb, a radiogenic isotope system, where the decay systems embodied in three of the four naturally occurring isotopes make it difficult to obtain true isotopic ratios for absolute dating.

  5. Proofreading for word errors.

    PubMed

    Pilotti, Maura; Chodorow, Martin; Agpawa, Ian; Krajniak, Marta; Mahamane, Salif

    2012-04-01

    Proofreading (i.e., reading text for the purpose of detecting and correcting typographical errors) is viewed as a component of the activity of revising text and thus is a necessary (albeit not sufficient) procedural step for enhancing the quality of a written product. The purpose of the present research was to test competing accounts of word-error detection which predict factors that may influence reading and proofreading differently. Word errors, which change a word into another word (e.g., from --> form), were selected for examination because they are unlikely to be detected by automatic spell-checking functions. Consequently, their detection still rests mostly in the hands of the human proofreader. Findings highlighted the weaknesses of existing accounts of proofreading and identified factors, such as length and frequency of the error in the English language relative to frequency of the correct word, which might play a key role in detection of word errors.

  6. Averaging of globally coupled oscillators

    NASA Astrophysics Data System (ADS)

    Swift, James W.; Strogatz, Steven H.; Wiesenfeld, Kurt

    1992-03-01

    We study a specific system of symmetrically coupled oscillators using the method of averaging. The equations describe a series array of Josephson junctions. We concentrate on the dynamics near the splay-phase state (also known as the antiphase state, ponies on a merry-go-round, or rotating wave). We calculate the Floquet exponents of the splay-phase periodic orbit in the weak-coupling limit, and find that all of the Floquet exponents are purely imaginary; in fact, all the Floquet exponents are zero except for a single complex conjugate pair. Thus, nested two-tori of doubly periodic solutions surround the splay-phase state in the linearized averaged equations. We numerically integrate the original system, and find startling agreement with the averaging results on two counts: The observed ratio of frequencies is very close to the prediction, and the solutions of the full equations appear to be either periodic or doubly periodic, as they are in the averaged equations. Such behavior is quite surprising from the point of view of generic dynamical systems theory-one expects higher-dimensional tori and chaotic solutions. We show that the functional form of the equations, and not just their symmetry, is responsible for this nongeneric behavior.

  7. Averaging inhomogeneous cosmologies - a dialogue.

    NASA Astrophysics Data System (ADS)

    Buchert, T.

    The averaging problem for inhomogeneous cosmologies is discussed in the form of a disputation between two cosmologists, one of them (RED) advocating the standard model, the other (GREEN) advancing some arguments against it. Technical explanations of these arguments as well as the conclusions of this debate are given by BLUE.

  9. Polyhedral Painting with Group Averaging

    ERIC Educational Resources Information Center

    Farris, Frank A.; Tsao, Ryan

    2016-01-01

    The technique of "group-averaging" produces colorings of a sphere that have the symmetries of various polyhedra. The concepts are accessible at the undergraduate level, without being well-known in typical courses on algebra or geometry. The material makes an excellent discovery project, especially for students with some background in…

  10. Absolute parameters of stars in semidetached eclipsing binary systems

    NASA Astrophysics Data System (ADS)

    Budding, E.

    1985-06-01

    A number of questions concerning the absolute parameters of stars in semidetached binary systems are addressed. Consideration is given to (1) similarities between Algol-type binaries and unevolved detached binaries with respect to the mass-luminosity law, and (2) the single-line classical Algol candidates with known mass functions and photometric solutions for the mass ratio. It is shown that (1) the validity of the mass-luminosity law cannot be verified for individual Algol-type binaries, though it does hold well on average, and (2) a definite class of sd-binaries not containing a proportion of significantly undersize types is apparent. The conclusions are found to be in general agreement with the observations of Hall and Neff (1979).

  11. A new approach to high-order averaging

    NASA Astrophysics Data System (ADS)

    Chartier, P.; Murua, A.; Sanz-Serna, J. M.

    2012-09-01

    We present a new approach to perform high-order averaging in oscillatory periodic or quasi-periodic dynamical systems. The averaged system is expressed in terms of (i) scalar coefficients that are universal, i.e. independent of the system under consideration and (ii) basis functions that may be written in an explicit, systematic way in terms of the derivatives of the Fourier coefficients of the vector field being averaged. The coefficients may be recursively computed in a simple fashion. This approach may be used to obtain exponentially small error estimates, such as those first derived by Neishtadt for the periodic case and by Simó in the quasi-periodic scenario.
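
    As a minimal numerical illustration of classical first-order averaging (the scheme the paper generalizes to high order), the sketch below integrates a textbook oscillatory system x' = eps*cos(t)^2*x alongside its averaged version x' = (eps/2)*x; the example system is an illustrative choice, not one from the paper.

```python
import math

def rk4(f, x0, t0, t1, n):
    """Classic fourth-order Runge-Kutta integrator."""
    x, t = x0, t0
    h = (t1 - t0) / n
    for _ in range(n):
        k1 = f(t, x)
        k2 = f(t + h / 2, x + h / 2 * k1)
        k3 = f(t + h / 2, x + h / 2 * k2)
        k4 = f(t + h, x + h * k3)
        x += h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        t += h
    return x

eps = 0.01
full = lambda t, x: eps * math.cos(t) ** 2 * x  # oscillatory vector field
avgd = lambda t, x: eps * 0.5 * x               # its first-order average

T = 2 * math.pi * 100  # a time span of order 1/eps
x_full = rk4(full, 1.0, 0.0, T, 20000)
x_avgd = rk4(avgd, 1.0, 0.0, T, 20000)
# The averaged flow tracks the oscillatory one to O(eps) on this time scale.
```

    Over this span the two solutions agree to about one percent, which is the O(eps) accuracy expected of first-order averaging.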

  12. The long-term average spectrum as a measure of voice stability.

    PubMed

    Mendoza, E; Muñoz, J; Valencia Naranjo, N

    1996-01-01

    This paper aims at providing a methodological answer to the topic of voice stability. Seventeen experimental subjects read a standard text 5 times during a 2-week period. The data were obtained through the Long-Term Average Spectrum and were analyzed with a twofold procedure: (i) analysis of the absolute energy values at different frequency points throughout the length of the spectrum, and (ii) analysis of the relative values obtained by subtracting the values at each pair of consecutive frequency points. The results obtained with the first procedure indicate that the differences that exist between the various sessions are centered mainly in the frequencies below 0.6 kHz and in the area of 4 kHz, following a linear tendency. The fact that the differences between sessions disappear when employing relative measures may indicate that the use of these measures eliminates the sources of systematic or aleatoric error that can be introduced during a recording or in the period of time between two consecutive recording sessions. PMID:8765550
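
    The second analysis procedure (relative values obtained by differencing consecutive frequency points) can be sketched in a few lines; the spectrum levels below are hypothetical, and the point is that a constant session-level offset cancels under differencing.

```python
import numpy as np

# Hypothetical long-term average spectrum levels (dB) at consecutive
# frequency points for one reading session.
session1 = np.array([62.0, 58.5, 55.0, 49.0, 47.5, 44.0])

# A second session with the same voice but a systematic recording offset
# (e.g. a different mouth-to-microphone distance) shifts every bin equally.
session2 = session1 + 3.2

# Relative measure: difference between consecutive frequency values.
rel1 = np.diff(session1)
rel2 = np.diff(session2)

print(np.allclose(rel1, rel2))  # the additive offset cancels: True
```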

  13. Measurement of absolute T cell receptor rearrangement diversity.

    PubMed

    Baum, Paul D; Young, Jennifer J; McCune, Joseph M

    2011-05-31

    T cell receptor (TCR) diversity is critical for adaptive immunity. Existing methods for measuring such diversity are qualitative, expensive, and/or of uncertain accuracy. Here, we describe a method and associated reagents for estimating the absolute number of unique TCR Vβ rearrangements present in a given number of cells or volume of blood. Compared to next generation sequencing, this method is rapid, reproducible, and affordable. Diversity of a sample is calculated based on three independent measurements of one Vβ-Jβ family of TCR rearrangements at a time. The percentage of receptors using the given Vβ gene is determined by flow cytometric analysis of T cells stained with anti-Vβ family antibodies. The percentage of receptors using the Vβ gene in combination with the chosen Jβ gene is determined by quantitative PCR. Finally, the absolute clonal diversity of the Vβ-Jβ family is determined with the AmpliCot method of DNA hybridization kinetics, by interpolation relative to PCR standards of known sequence diversity. These three component measurements are reproducible and linear. Using titrations of known numbers of input cells, we show that the TCR diversity estimates obtained by this approach approximate expected values within a two-fold error, have a coefficient of variation of 20%, and yield similar results when different Vβ-Jβ pairs are chosen. The ability to obtain accurate measurements of the total number of different TCR gene rearrangements in a cell sample should be useful for basic studies of the adaptive immune system as well as in clinical studies of conditions such as HIV disease, transplantation, aging, and congenital immunodeficiencies. PMID:21385585
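
    As a rough numerical illustration of how the three component measurements might combine, the sketch below scales one family's AmpliCot diversity by the fraction of the repertoire that the family represents. The numbers and the even-spread extrapolation are illustrative assumptions, not the paper's exact calculation.

```python
# Hypothetical measurements for one Vb-Jb family (illustrative values only).
pct_vb = 0.05           # flow cytometry: fraction of T cells using this Vb family
pct_jb_given_vb = 0.10  # qPCR: fraction of those also using the chosen Jb gene
family_diversity = 500  # AmpliCot: unique rearrangements in this Vb-Jb family

# Fraction of the whole repertoire covered by this Vb-Jb family.
fraction_of_repertoire = pct_vb * pct_jb_given_vb

# Assuming diversity were spread roughly evenly across families, extrapolate:
total_diversity = family_diversity / fraction_of_repertoire
print(round(total_diversity))  # 100000
```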

  14. Absolute doubly differential bremsstrahlung cross sections from rare gas atoms

    NASA Astrophysics Data System (ADS)

    Portillo, Salvador

    The absolute doubly differential bremsstrahlung cross section has been measured for 28 and 50 keV electrons incident on the rare gases Xe, Kr, Ar and Ne. The cross sections are differential with respect to photon energy and emission angle. A Si(Li) solid-state detector measured data at 90° with respect to the beam line. A thorough analysis of the experimental systematic error yielded a high degree of confidence in the experimental data. The absolute bremsstrahlung doubly differential cross sections provided a rigorous test of the normal bremsstrahlung theory tabulated by Kissel, Quarles and Pratt1 (KQP) and of the SA theory2 that includes the contribution from polarization bremsstrahlung. To test the theories, comparisons of the overall magnitude of the cross section as well as of the photon energy dependence were carried out. The KQP theoretical values underestimated the magnitude of the cross section for all targets and for both energies. The SA values were in excellent agreement with the 28 keV data. For the 50 keV data the fit was also very good; however, there were energy regions with a small discrepancy between the theory and the data. This suggests that the polarization bremsstrahlung (PB) mechanism does contribute to the overall spectrum and is detectable in this parameter space. 1Kissel, L., Quarles, C. A., Pratt, R. H., Atom. Data Nucl. Data Tables 28, 381 (1983). 2Avdonina, N. B., Pratt, R. H., J. Phys. B: At. Mol. Opt. Phys. 32, 4261 (1999).

  15. Long-term prediction of emergency department revenue and visitor volume using autoregressive integrated moving average model.

    PubMed

    Chen, Chieh-Fan; Ho, Wen-Hsien; Chou, Huei-Yin; Yang, Shu-Mei; Chen, I-Te; Shi, Hon-Yi

    2011-01-01

    This study analyzed meteorological, clinical and economic factors in terms of their effects on monthly ED revenue and visitor volume. Monthly data from January 1, 2005 to September 30, 2009 were analyzed. Spearman correlation and cross-correlation analyses were performed to identify the correlation between each independent variable, ED revenue, and visitor volume. An autoregressive integrated moving average (ARIMA) model was used to quantify the relationship between each independent variable, ED revenue, and visitor volume. The accuracies were evaluated by comparing model forecasts to actual values with the mean absolute percentage error. Sensitivity of prediction errors to model training time was also evaluated. The ARIMA models indicated that mean maximum temperature, relative humidity, rainfall, non-trauma, and trauma visits may correlate positively with ED revenue, but mean minimum temperature may correlate negatively with ED revenue. Moreover, mean minimum temperature and stock market index fluctuation may correlate positively with trauma visitor volume. Mean maximum temperature, relative humidity and stock market index fluctuation may correlate positively with non-trauma visitor volume. Mean maximum temperature and relative humidity may correlate positively with pediatric visitor volume, but mean minimum temperature may correlate negatively with pediatric visitor volume. The model also performed well in forecasting revenue and visitor volume. PMID:22203886
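
    The accuracy metric used above, the mean absolute percentage error, is straightforward to compute; the observed and forecast values below are hypothetical.

```python
def mape(actual, forecast):
    """Mean absolute percentage error, in percent."""
    assert len(actual) == len(forecast) and all(a != 0 for a in actual)
    return 100.0 * sum(abs(a - f) / abs(a)
                       for a, f in zip(actual, forecast)) / len(actual)

# Hypothetical monthly ED revenue: observed vs. ARIMA forecast (arbitrary units).
observed = [120.0, 135.0, 128.0, 140.0]
predicted = [126.0, 130.5, 130.56, 133.0]
print(round(mape(observed, predicted), 2))  # 3.83
```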


  17. Errors in neuroradiology.

    PubMed

    Caranci, Ferdinando; Tedeschi, Enrico; Leone, Giuseppe; Reginelli, Alfonso; Gatta, Gianluca; Pinto, Antonio; Squillaci, Ettore; Briganti, Francesco; Brunese, Luca

    2015-09-01

    Approximately 4% of radiologic interpretations in daily practice contain errors, and discrepancies occur in 2-20% of reports. Fortunately, most of them are minor errors or, if serious, are found and corrected with sufficient promptness; obviously, diagnostic errors become critical when misinterpretation or misidentification significantly delays medical or surgical treatment. Errors can be summarized into four main categories: observer errors, errors in interpretation, failure to suggest the next appropriate procedure, and failure to communicate in a timely and clinically appropriate manner. The misdiagnosis/misinterpretation rate rises in the emergency setting and in the early part of the learning curve, as during residency. Para-physiological and pathological pitfalls in neuroradiology include calcifications and brain stones, pseudofractures, enlargement of subarachnoid or epidural spaces, ventricular system abnormalities, vascular system abnormalities, intracranial lesions or pseudolesions, and, finally, neuroradiological emergencies. In order to minimize the possibility of error, it is important to be aware of the various presentations of pathology, obtain clinical information, know current practice guidelines, review after interpreting a diagnostic study, suggest follow-up studies when appropriate, and communicate significant abnormal findings appropriately and in a timely fashion directly with the treatment team.

  18. Uncorrected refractive errors.

    PubMed

    Naidoo, Kovin S; Jaggernath, Jyoti

    2012-01-01

    Global estimates indicate that more than 2.3 billion people in the world suffer from poor vision due to refractive error, of whom 670 million are considered visually impaired because they do not have access to corrective treatment. Refractive errors, if uncorrected, result in an impaired quality of life for millions of people worldwide, irrespective of their age, sex and ethnicity. Over the past decade, a series of studies using a survey methodology, referred to as the Refractive Error Study in Children (RESC), were performed in populations with different ethnic origins and cultural settings. These studies confirmed that the prevalence of uncorrected refractive errors is considerably high for children in low- and middle-income countries. Furthermore, uncorrected refractive error has been noted to have extensive social and economic impacts, such as limiting the educational and employment opportunities of economically active persons, healthy individuals and communities. The key public health challenges presented by uncorrected refractive errors, the leading cause of vision impairment across the world, require urgent attention. To address these issues, it is critical to focus on the development of human resources and sustainable methods of service delivery. This paper discusses three core pillars to addressing the challenges posed by uncorrected refractive errors: Human Resource (HR) Development, Service Development and Social Entrepreneurship. PMID:22944755

  19. The study of the nonlinear correction of the FMCW absolute distance measurement using frequency-sampling and precision analysis

    NASA Astrophysics Data System (ADS)

    Lu, Cheng; Gan, Yu; Chen, Fengdong; Liu, Bingguo; Zhuang, Zhitao; Xu, Xinke; Liu, Guodong

    2014-12-01

    This article uses an external-cavity laser, which offers a large frequency tuning range, to realize high-precision FMCW absolute distance measurement. First, to address the nonlinear tuning of the external-cavity laser, a frequency-sampling method is studied. Second, a mathematical model of the absolute distance measurement system is established, the error sources of the FMCW absolute distance measurement are analyzed, and an accuracy model is derived. Finally, a ball placed at a distance of about 3 meters is measured; the random error is 0.3479 μm, and the standard uncertainty of the measurement system is 0.3479 μm + 3.141R ppm.
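
    The frequency-sampling idea (sampling the interferometer beat signal at equal increments of optical frequency, so that the nonlinearity of the sweep drops out) can be sketched numerically. The sweep model, bandwidth and target distance below are made-up illustrative values, not the authors' setup.

```python
import numpy as np

c = 3e8          # speed of light, m/s
d_true = 3.0     # target distance, m; round-trip delay tau = 2 d / c
tau = 2 * d_true / c

# Toy model of a nonlinear external-cavity-laser sweep over bandwidth B.
B = 1e10
t = np.linspace(0.0, 1.0, 4096)
nu = B * (t + 0.1 * np.sin(2 * np.pi * t))  # optical frequency, non-uniform in t
beat = np.cos(2 * np.pi * tau * nu)         # interferometer beat signal

# Frequency-sampling: resample the beat on a uniform optical-frequency grid.
nu_uniform = np.linspace(nu[0], nu[-1], t.size)
beat_resampled = np.interp(nu_uniform, nu, beat)

# After resampling, the beat is a pure sinusoid with tau cycles per hertz,
# so the FFT peak index yields the distance.
spectrum = np.abs(np.fft.rfft(beat_resampled))
k = spectrum[1:].argmax() + 1
d_est = (k / (nu_uniform[-1] - nu_uniform[0])) * c / 2
```

    Without the resampling step, the sweep nonlinearity smears the beat tone across many FFT bins; with it, the peak lands at the bin corresponding to the round-trip delay.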

  20. Error Prevention Aid

    NASA Technical Reports Server (NTRS)

    1987-01-01

    In a complex computer environment there is ample opportunity for error, a mistake by a programmer, or a software-induced undesirable side effect. In insurance, errors can cost a company heavily, so protection against inadvertent change is a must for the efficient firm. The data processing center at Transport Life Insurance Company has taken a step to guard against accidental changes by adopting a software package called EQNINT (Equations Interpreter Program). EQNINT cross checks the basic formulas in a program against the formulas that make up the major production system. EQNINT assures that formulas are coded correctly and helps catch errors before they affect the customer service or its profitability.

  1. Regional absolute conductivity reconstruction using projected current density in MREIT

    NASA Astrophysics Data System (ADS)

    Sajib, Saurav Z. K.; Kim, Hyung Joong; In Kwon, Oh; Woo, Eung Je

    2012-09-01

    Using the reconstructed regional projected current density, we propose a direct non-iterative algorithm to reconstruct the absolute conductivity in the ROI. Numerical simulations in the presence of various degrees of noise, as well as a phantom MRI experiment, showed that the proposed method reconstructs the regional absolute conductivity in a ROI within a subject, including the defective regions. In the simulation experiment, the relative L2-mode errors of the reconstructed regional and global conductivities were 0.79 and 0.43, respectively, at a noise level of 50 dB in the defective region.
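
    The relative L2-mode error quoted above is a standard figure of merit; a minimal sketch with made-up conductivity maps:

```python
import numpy as np

def relative_l2_error(recon, truth):
    """Relative L2 error between reconstructed and true conductivity maps."""
    return np.linalg.norm(recon - truth) / np.linalg.norm(truth)

# Hypothetical 2 x 2 conductivity maps (S/m), for illustration only.
sigma_true = np.array([[1.0, 1.0], [1.0, 2.0]])
sigma_recon = np.array([[1.1, 0.9], [1.0, 2.2]])

err = relative_l2_error(sigma_recon, sigma_true)  # about 0.093
```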

  2. Averaging Robertson-Walker cosmologies

    NASA Astrophysics Data System (ADS)

    Brown, Iain A.; Robbers, Georg; Behrend, Juliane

    2009-04-01

    The cosmological backreaction arises when one directly averages the Einstein equations to recover an effective Robertson-Walker cosmology, rather than assuming a background a priori. While usually discussed in the context of dark energy, strictly speaking any cosmological model should be recovered from such a procedure. We apply the scalar spatial averaging formalism for the first time to linear Robertson-Walker universes containing matter, radiation and dark energy. The formalism employed is general and incorporates systems of multiple fluids with ease, allowing us to consider quantitatively the universe from deep radiation domination up to the present day in a natural, unified manner. Employing modified Boltzmann codes we evaluate numerically the discrepancies between the assumed and the averaged behaviour arising from the quadratic terms, finding the largest deviations for an Einstein-de Sitter universe, increasing rapidly with Hubble rate to a 0.01% effect for h = 0.701. For the ΛCDM concordance model, the backreaction is of the order of Ω_eff ≈ 4 × 10^-6, with those for dark energy models being within a factor of two or three. The impacts at recombination are of the order of 10^-8 and those in deep radiation domination asymptote to a constant value. While the effective equations of state of the backreactions in Einstein-de Sitter, concordance and quintessence models are generally dust-like, a backreaction with an equation of state w_eff < -1/3 can be found for strongly phantom models.

  3. LANDSAT-4 horizon scanner full orbit data averages

    NASA Technical Reports Server (NTRS)

    Stanley, J. P.; Bilanow, S.

    1983-01-01

    Averages taken over full-orbit data spans of the pitch and roll residual measurement errors of the two conical Earth sensors operating on the LANDSAT 4 spacecraft are described. The variability of these full-orbit averages over representative data throughout the year is analyzed to demonstrate the long-term stability of the sensor measurements. The data analyzed consist of 23 segments of sensor measurements made at 2- to 4-week intervals, each roughly 24 hours in length. The variation of the full-orbit average is examined both as a function of orbit within a day and as a function of day of year. The dependence on day of year is based on associating the start date of each segment with the mean full-orbit average for that segment. The peak-to-peak and standard deviation values of the averages for each data segment are computed, and their variation with day of year is also examined.

  4. Estimating Bias Error Distributions

    NASA Technical Reports Server (NTRS)

    Liu, Tian-Shu; Finley, Tom D.

    2001-01-01

    This paper formulates the general methodology for estimating the bias error distribution of a device in a measuring domain from less accurate measurements when a minimal number of standard values (typically two values) are available. A new perspective is that the bias error distribution can be found as a solution of an intrinsic functional equation in a domain. Based on this theory, the scaling- and translation-based methods for determining the bias error distribution are developed. These methods are applicable to virtually any device as long as the bias error distribution of the device can be sufficiently described by a power series (a polynomial) or a Fourier series in a domain. These methods have been validated through computational simulations and laboratory calibration experiments for a number of different devices.

  5. Achieving Climate Change Absolute Accuracy in Orbit

    NASA Technical Reports Server (NTRS)

    Wielicki, Bruce A.; Young, D. F.; Mlynczak, M. G.; Thome, K. J; Leroy, S.; Corliss, J.; Anderson, J. G.; Ao, C. O.; Bantges, R.; Best, F.; Bowman, K.; Brindley, H.; Butler, J. J.; Collins, W.; Dykema, J. A.; Doelling, D. R.; Feldman, D. R.; Fox, N.; Huang, X.; Holz, R.; Huang, Y.; Jennings, D.; Jin, Z.; Johnson, D. G.; Jucks, K.; Kato, S.; Kratz, D. P.; Liu, X.; Lukashin, C.; Mannucci, A. J.; Phojanamongkolkij, N.; Roithmayr, C. M.; Sandford, S.; Taylor, P. C.; Xiong, X.

    2013-01-01

    The Climate Absolute Radiance and Refractivity Observatory (CLARREO) mission will provide a calibration laboratory in orbit for the purpose of accurately measuring and attributing climate change. CLARREO measurements establish new climate change benchmarks with high absolute radiometric accuracy and high statistical confidence across a wide range of essential climate variables. CLARREO's inherently high absolute accuracy will be verified and traceable on orbit to Système International (SI) units. The benchmarks established by CLARREO will be critical for assessing changes in the Earth system and climate model predictive capabilities for decades into the future as society works to meet the challenge of optimizing strategies for mitigating and adapting to climate change. The CLARREO benchmarks are derived from measurements of the Earth's thermal infrared spectrum (5-50 micron), the spectrum of solar radiation reflected by the Earth and its atmosphere (320-2300 nm), and radio occultation refractivity from which accurate temperature profiles are derived. The mission has the ability to provide new spectral fingerprints of climate change, as well as to provide the first orbiting radiometer with accuracy sufficient to serve as the reference transfer standard for other space sensors, in essence serving as a "NIST [National Institute of Standards and Technology] in orbit." CLARREO will greatly improve the accuracy and relevance of a wide range of space-borne instruments for decadal climate change. Finally, CLARREO has developed new metrics and methods for determining the accuracy requirements of climate observations for a wide range of climate variables and uncertainty sources. These methods should be useful for improving our understanding of observing requirements for most climate change observations.

  6. Error awareness as evidence accumulation: effects of speed-accuracy trade-off on error signaling

    PubMed Central

    Steinhauser, Marco; Yeung, Nick

    2012-01-01

    Errors in choice tasks have been shown to elicit a cascade of characteristic components in the human event-related potential (ERP): the error-related negativity (Ne/ERN) and the error positivity (Pe). Despite the large number of studies concerned with these components, it is still unclear how they relate to error awareness as measured by overt error signaling responses. In the present study, we considered error awareness as a decision process in which evidence for an error is accumulated until a decision criterion is reached, and hypothesized that the Pe is a correlate of the accumulated decision evidence. To test the prediction that the amplitude of the Pe varies as a function of the strength and latency of the accumulated evidence for an error, we manipulated the speed-accuracy trade-off (SAT) in a brightness discrimination task while participants signaled the occurrence of errors. Based on a previous modeling study, we predicted that lower speed pressure should be associated with weaker evidence for an error and, thus, with smaller Pe amplitudes. As predicted, average Pe amplitude was decreased and error signaling was impaired in a low speed pressure condition compared to a high speed pressure condition. In further analyses, we derived single-trial Pe amplitudes using a logistic regression approach. Single-trial amplitudes robustly predicted the occurrence of signaling responses on a trial-by-trial basis. These results confirm the predictions of the evidence accumulation account, supporting the notion that the Pe reflects accumulated evidence for an error and that this evidence drives the emergence of error awareness. PMID:22905027

  7. Absolute calibration of the Auger fluorescence detectors

    SciTech Connect

    Bauleo, P.; Brack, J.; Garrard, L.; Harton, J.; Knapik, R.; Meyhandan, R.; Rovero, A.C.; Tamashiro, A.; Warner, D.

    2005-07-01

    Absolute calibration of the Pierre Auger Observatory fluorescence detectors uses a light source at the telescope aperture. The technique accounts for the combined effects of all detector components in a single measurement. The calibrated 2.5 m diameter light source fills the aperture, providing uniform illumination to each pixel. The known flux from the light source and the response of the acquisition system give the required calibration for each pixel. In the lab, light source uniformity is studied using CCD images and the intensity is measured relative to NIST-calibrated photodiodes. Overall uncertainties are presently 12%, and are dominated by systematics.

  8. Absolute rate theories of epigenetic stability

    NASA Astrophysics Data System (ADS)

    Walczak, Aleksandra M.; Onuchic, José N.; Wolynes, Peter G.

    2005-12-01

    Spontaneous switching events in most characterized genetic switches are rare, resulting in extremely stable epigenetic properties. We show how simple arguments lead to theories of the rate of such events much like the absolute rate theory of chemical reactions corrected by a transmission factor. Both the probability of the rare cellular states that allow epigenetic escape and the transmission factor depend on the rates of DNA binding and unbinding events and on the rates of protein synthesis and degradation. Different mechanisms of escape from the stable attractors occur in the nonadiabatic, weakly adiabatic, and strictly adiabatic regimes, characterized by the relative values of those input rates.

  9. Characterization of the DARA solar absolute radiometer

    NASA Astrophysics Data System (ADS)

    Finsterle, W.; Suter, M.; Fehlmann, A.; Kopp, G.

    2011-12-01

    The Davos Absolute Radiometer (DARA) prototype is an Electrical Substitution Radiometer (ESR) developed as a successor of the PMO6 type for future space missions and ground-based TSI measurements. The DARA implements an improved thermal design of the cavity detector and heat-sink assembly to minimize air-vacuum differences and to maximize thermal symmetry between the measuring and compensating cavities. The DARA also employs an inverted viewing geometry to reduce internal stray light. We will report on the characterization and calibration experiments which were carried out at PMOD/WRC and LASP (TRF).

  10. Absolute Priority for a Vehicle in VANET

    NASA Astrophysics Data System (ADS)

    Shirani, Rostam; Hendessi, Faramarz; Montazeri, Mohammad Ali; Sheikh Zefreh, Mohammad

    In today's world, traffic jams waste hundreds of hours of our lives. This has led many researchers to try to resolve the problem with the idea of an Intelligent Transportation System. For some applications, such as a travelling ambulance, it is important to reduce delay by even a second. In this paper, we propose a completely infrastructure-less approach for finding the shortest path and controlling traffic lights to provide absolute priority for an emergency vehicle. We use the idea of vehicular ad-hoc networking to reduce the imposed travelling time. We then simulate our proposed protocol and compare it with a centrally controlled traffic-light system.

  11. Alcohol and error processing.

    PubMed

    Holroyd, Clay B; Yeung, Nick

    2003-08-01

    A recent study indicates that alcohol consumption reduces the amplitude of the error-related negativity (ERN), a negative deflection in the electroencephalogram associated with error commission. Here, we explore possible mechanisms underlying this result in the context of two recent theories about the neural system that produces the ERN - one based on principles of reinforcement learning and the other based on response conflict monitoring.

  12. Quantum Error Correction

    NASA Astrophysics Data System (ADS)

    Lidar, Daniel A.; Brun, Todd A.

    2013-09-01

    Prologue; Preface; Part I. Background: 1. Introduction to decoherence and noise in open quantum systems Daniel Lidar and Todd Brun; 2. Introduction to quantum error correction Dave Bacon; 3. Introduction to decoherence-free subspaces and noiseless subsystems Daniel Lidar; 4. Introduction to quantum dynamical decoupling Lorenza Viola; 5. Introduction to quantum fault tolerance Panos Aliferis; Part II. Generalized Approaches to Quantum Error Correction: 6. Operator quantum error correction David Kribs and David Poulin; 7. Entanglement-assisted quantum error-correcting codes Todd Brun and Min-Hsiu Hsieh; 8. Continuous-time quantum error correction Ognyan Oreshkov; Part III. Advanced Quantum Codes: 9. Quantum convolutional codes Mark Wilde; 10. Non-additive quantum codes Markus Grassl and Martin Rötteler; 11. Iterative quantum coding systems David Poulin; 12. Algebraic quantum coding theory Andreas Klappenecker; 13. Optimization-based quantum error correction Andrew Fletcher; Part IV. Advanced Dynamical Decoupling: 14. High order dynamical decoupling Zhen-Yu Wang and Ren-Bao Liu; 15. Combinatorial approaches to dynamical decoupling Martin Rötteler and Pawel Wocjan; Part V. Alternative Quantum Computation Approaches: 16. Holonomic quantum computation Paolo Zanardi; 17. Fault tolerance for holonomic quantum computation Ognyan Oreshkov, Todd Brun and Daniel Lidar; 18. Fault tolerant measurement-based quantum computing Debbie Leung; Part VI. Topological Methods: 19. Topological codes Héctor Bombín; 20. Fault tolerant topological cluster state quantum computing Austin Fowler and Kovid Goyal; Part VII. Applications and Implementations: 21. Experimental quantum error correction Dave Bacon; 22. Experimental dynamical decoupling Lorenza Viola; 23. Architectures Jacob Taylor; 24. Error correction in quantum communication Mark Wilde; Part VIII. Critical Evaluation of Fault Tolerance: 25. Hamiltonian methods in QEC and fault tolerance Eduardo Novais, Eduardo Mucciolo and

  13. Thermodynamics of Error Correction

    NASA Astrophysics Data System (ADS)

    Sartori, Pablo; Pigolotti, Simone

    2015-10-01

    Information processing at the molecular scale is limited by thermal fluctuations. This can cause undesired consequences in copying information since thermal noise can lead to errors that can compromise the functionality of the copy. For example, a high error rate during DNA duplication can lead to cell death. Given the importance of accurate copying at the molecular scale, it is fundamental to understand its thermodynamic features. In this paper, we derive a universal expression for the copy error as a function of entropy production and work dissipated by the system during wrong incorporations. Its derivation is based on the second law of thermodynamics; hence, its validity is independent of the details of the molecular machinery, be it any polymerase or artificial copying device. Using this expression, we find that information can be copied in three different regimes. In two of them, work is dissipated to either increase or decrease the error. In the third regime, the protocol extracts work while correcting errors, reminiscent of a Maxwell demon. As a case study, we apply our framework to study a copy protocol assisted by kinetic proofreading, and show that it can operate in any of these three regimes. We finally show that, for any effective proofreading scheme, error reduction is limited by the chemical driving of the proofreading reaction.

  14. Model averaging in linkage analysis.

    PubMed

    Matthysse, Steven

    2006-06-01

    Methods for genetic linkage analysis are traditionally divided into "model-dependent" and "model-independent," but there may be a useful place for an intermediate class, in which a broad range of possible models is considered as a parametric family. It is possible to average over model space with an empirical Bayes prior that weights models according to their goodness of fit to epidemiologic data, such as the frequency of the disease in the population and in first-degree relatives (and correlations with other traits in the pleiotropic case). For averaging over high-dimensional spaces, Markov chain Monte Carlo (MCMC) has great appeal, but it has a near-fatal flaw: it is not possible, in most cases, to provide rigorous sufficient conditions to permit the user safely to conclude that the chain has converged. A way of overcoming the convergence problem, if not of solving it, rests on a simple application of the principle of detailed balance. If the starting point of the chain has the equilibrium distribution, so will every subsequent point. The first point is chosen according to the target distribution by rejection sampling, and subsequent points by an MCMC process that has the target distribution as its equilibrium distribution. Model averaging with an empirical Bayes prior requires rapid estimation of likelihoods at many points in parameter space. Symbolic polynomials are constructed before the random walk over parameter space begins, to make the actual likelihood computations at each step of the random walk very fast. Power analysis in an illustrative case is described. (c) 2006 Wiley-Liss, Inc. PMID:16652369
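
    The convergence workaround described above (draw the chain's first point exactly from the target by rejection sampling, then evolve it with a kernel satisfying detailed balance) can be sketched as follows; the one-dimensional Gaussian target and uniform proposal are illustrative stand-ins for the linkage-analysis parameter space.

```python
import math
import random

random.seed(1)

def target(x):
    """Unnormalized target density (standard normal kernel, bounded by 1)."""
    return math.exp(-0.5 * x * x)

def rejection_sample():
    """Draw the starting point exactly from the target distribution."""
    while True:
        x = random.uniform(-5.0, 5.0)    # envelope: uniform on [-5, 5]
        if random.random() < target(x):  # accept with probability target(x)
            return x

def metropolis_chain(n, step=1.0):
    """Random-walk Metropolis chain. Detailed balance w.r.t. the target
    means that if the first point is target-distributed, every point is."""
    x = rejection_sample()
    out = [x]
    for _ in range(n - 1):
        prop = x + random.uniform(-step, step)
        if random.random() < target(prop) / target(x):
            x = prop
        out.append(x)
    return out

chain = metropolis_chain(20000)
mean = sum(chain) / len(chain)
var = sum((xi - mean) ** 2 for xi in chain) / len(chain)
```

    Because the initial point already has the equilibrium distribution, no burn-in period or convergence diagnostic is needed, which is exactly the appeal of the construction described in the abstract.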

  15. Ensemble averaging of acoustic data

    NASA Technical Reports Server (NTRS)

    Stefanski, P. K.

    1982-01-01

    A computer program called Ensemble Averaging of Acoustic Data is documented. The program samples analog data, analyzes the data, and displays them in the time and frequency domains. Hard copies of the displays are the program's output. The documentation includes a description of the program and detailed user instructions for the program. This software was developed for use on the Ames 40- by 80-Foot Wind Tunnel's Dynamic Analysis System consisting of a PDP-11/45 computer, two RK05 disk drives, a Tektronix 611 keyboard/display terminal, an FPE-4 Fourier Processing Element, and an analog-to-digital converter.
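
    The core idea of ensemble averaging, averaging many records of a repeatable event so that uncorrelated noise cancels while the coherent signal remains, can be sketched as follows (a generic illustration, not the PDP-11 program itself):

```python
import math
import random

def ensemble_average(records):
    # Point-by-point mean across repeated records of the same event.
    n = len(records[0])
    return [sum(r[i] for r in records) / len(records) for i in range(n)]

random.seed(1)
n, m = 256, 100
signal = [math.sin(2 * math.pi * i / n) for i in range(n)]
records = [[s + random.gauss(0.0, 0.5) for s in signal] for _ in range(m)]

avg = ensemble_average(records)
rms_single = math.sqrt(sum((records[0][i] - signal[i]) ** 2 for i in range(n)) / n)
rms_avg = math.sqrt(sum((avg[i] - signal[i]) ** 2 for i in range(n)) / n)
# Residual noise shrinks roughly as 1/sqrt(m) relative to a single record.
```

    With 100 records the residual noise is about a tenth of that in any single record, which is why the technique suits noisy wind-tunnel acoustic data.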

  16. Sentinel-2/MSI absolute calibration: first results

    NASA Astrophysics Data System (ADS)

    Lonjou, V.; Lachérade, S.; Fougnie, B.; Gamet, P.; Marcq, S.; Raynaud, J.-L.; Tremas, T.

    2015-10-01

    Sentinel-2 is an optical imaging mission devoted to the operational monitoring of land and coastal areas. It is developed in partnership between the European Commission and the European Space Agency. The Sentinel-2 mission is based on a constellation of satellites deployed in polar sun-synchronous orbit. It will offer a unique combination of global coverage with a wide field of view (290km), a high revisit (5 days with two satellites), a high resolution (10m, 20m and 60m) and multi-spectral imagery (13 spectral bands in visible and shortwave infra-red domains). CNES is involved in the instrument commissioning in collaboration with ESA. This paper reviews all the techniques that will be used to ensure an absolute calibration of the 13 spectral bands better than 5% (target 3%), and will present the first available results. First, the nominal calibration technique, based on an on-board sun diffuser, is detailed. Then, we show how vicarious calibration methods based on acquisitions over natural targets (oceans, deserts, and Antarctica during winter) will be used to check and improve the accuracy of the absolute calibration coefficients. Finally, the verification scheme, exploiting in-situ photometer measurements over the La Crau plain, is described. A synthesis, including spectral coherence, inter-method agreement and temporal evolution, will conclude the paper.

  17. Absolute Electron Extraction Efficiency of Liquid Xenon

    NASA Astrophysics Data System (ADS)

    Kamdin, Katayun; Mizrachi, Eli; Morad, James; Sorensen, Peter

    2016-03-01

    Dual phase liquid/gas xenon time projection chambers (TPCs) currently set the world's most sensitive limits on weakly interacting massive particles (WIMPs), a favored dark matter candidate. These detectors rely on extracting electrons from liquid xenon into gaseous xenon, where they produce proportional scintillation. The proportional scintillation from the extracted electrons serves to internally amplify the WIMP signal; even a single extracted electron is detectable. Credible dark matter searches can proceed with electron extraction efficiency (EEE) lower than 100%. However, electrons systematically left at the liquid/gas boundary are a concern. Possible effects include spontaneous single or multi-electron proportional scintillation signals in the gas, or charging of the liquid/gas interface or detector materials. Understanding EEE is consequently a serious concern for this class of rare event search detectors. Previous EEE measurements have mostly been relative, not absolute, assuming efficiency plateaus at 100%. I will present an absolute EEE measurement with a small liquid/gas xenon TPC test bed located at Lawrence Berkeley National Laboratory.

  18. Why to compare absolute numbers of mitochondria.

    PubMed

    Schmitt, Sabine; Schulz, Sabine; Schropp, Eva-Maria; Eberhagen, Carola; Simmons, Alisha; Beisker, Wolfgang; Aichler, Michaela; Zischka, Hans

    2014-11-01

    Prompted by pronounced structural differences between rat liver and rat hepatocellular carcinoma mitochondria, we suspected these mitochondrial populations to differ massively in their molecular composition. Aiming to reveal these mitochondrial differences, we came across the issue of how to normalize such comparisons and decided to focus on the absolute number of mitochondria. To this end, fluorescently stained mitochondria were quantified by flow cytometry. For rat liver mitochondria, this approach resulted in mitochondrial protein contents comparable to earlier reports using alternative methods. We determined similar protein contents for rat liver, heart and kidney mitochondria. In contrast, however, lower protein contents were determined for rat brain mitochondria and for mitochondria from the rat hepatocellular carcinoma cell line McA 7777. This result challenges mitochondrial comparisons that rely on equal protein amounts as a typical normalization method. As an example, we therefore compared the activity and susceptibility toward inhibition of complex II of rat liver and hepatocellular carcinoma mitochondria and obtained significant discrepancies by either normalizing to protein amount or to absolute mitochondrial number. Importantly, the latter normalization, in contrast to the former, demonstrated a lower complex II activity and higher susceptibility toward inhibition in hepatocellular carcinoma mitochondria compared to liver mitochondria. These findings demonstrate that solely normalizing to protein amount may obscure essential molecular differences between mitochondrial populations.

  19. Relational versus absolute representation in categorization.

    PubMed

    Edwards, Darren J; Pothos, Emmanuel M; Perlman, Amotz

    2012-01-01

    This study explores relational-like and absolute-like representations in categorization. Although there is much evidence that categorization processes can involve information about both the particular physical properties of studied instances and abstract (relational) properties, there has been little work on the factors that lead to one kind of representation as opposed to the other. We tested 370 participants in 6 experiments, in which participants had to classify new items into predefined artificial categories. In 4 experiments, we observed a predominantly relational-like mode of classification, and in 2 experiments we observed a shift toward an absolute-like mode of classification. These results suggest 3 factors that promote a relational-like mode of classification: fewer items per group, more training groups, and the presence of a time delay. Overall, we propose that less information about the distributional properties of a category or weaker memory traces for the category exemplars (induced, e.g., by having smaller categories or a time delay) can encourage relational-like categorization.

  20. Transient absolute robustness in stochastic biochemical networks.

    PubMed

    Enciso, German A

    2016-08-01

    Absolute robustness allows biochemical networks to sustain a consistent steady-state output in the face of protein concentration variability from cell to cell. This property is structural and can be determined from the topology of the network alone regardless of rate parameters. An important question regarding these systems is the effect of discrete biochemical noise in the dynamical behaviour. In this paper, a variable freezing technique is developed to show that under mild hypotheses the corresponding stochastic system has a transiently robust behaviour. Specifically, after finite time the distribution of the output approximates a Poisson distribution, centred around the deterministic mean. The approximation becomes increasingly accurate, and it holds for increasingly long finite times, as the total protein concentrations grow to infinity. In particular, the stochastic system retains a transient, absolutely robust behaviour corresponding to the deterministic case. This result contrasts with the long-term dynamics of the stochastic system, which eventually must undergo an extinction event that eliminates robustness and is completely different from the deterministic dynamics. The transiently robust behaviour may be sufficient to carry out many forms of robust signal transduction and cellular decision-making in cellular organisms. PMID:27581485
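
    The Poisson-around-the-deterministic-mean behaviour can be illustrated with the simplest stochastic chemical system, a birth-death process (a generic example, not one of the absolutely robust networks analysed in the paper): its stationary distribution is exactly Poisson with mean k/gamma, so the sample variance should match the sample mean.

```python
import random

def gillespie_birth_death(k, gamma, t_end, n0=0, rng=random):
    # Exact stochastic simulation: birth at rate k, death at rate gamma*n.
    t, n = 0.0, n0
    while True:
        total = k + gamma * n
        t += rng.expovariate(total)
        if t > t_end:
            return n
        if rng.uniform(0.0, total) < k:
            n += 1
        else:
            n -= 1

random.seed(2)
k, gamma = 50.0, 1.0          # stationary mean k/gamma = 50
samples = [gillespie_birth_death(k, gamma, 20.0) for _ in range(400)]
mean = sum(samples) / len(samples)
var = sum((x - mean) ** 2 for x in samples) / len(samples)
# For a Poisson distribution the variance equals the mean (Fano factor = 1).
```

    The Fano factor near 1 is the stochastic signature of the Poissonian output distribution referred to above.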

  1. Human error in aviation operations

    NASA Technical Reports Server (NTRS)

    Nagel, David C.

    1988-01-01

    The role of human error in commercial and general aviation accidents and the techniques used to evaluate it are reviewed from a human-factors perspective. Topics addressed include the general decline in accidents per million departures since the 1960s, the increase in the proportion of accidents due to human error, methods for studying error, theoretical error models, and the design of error-resistant systems. Consideration is given to information acquisition and processing errors, visually guided flight, disorientation, instrument-assisted guidance, communication errors, decision errors, debiasing, and action errors.

  2. Error monitoring in musicians.

    PubMed

    Maidhof, Clemens

    2013-01-01

    To err is human, and hence even professional musicians make errors occasionally during their performances. This paper summarizes recent work investigating error monitoring in musicians, i.e., the processes and their neural correlates associated with the monitoring of ongoing actions and the detection of deviations from intended sounds. Electroencephalography (EEG) studies reported an early component of the event-related potential (ERP) occurring before the onsets of pitch errors. This component, which can be altered in musicians with focal dystonia, likely reflects processes of error detection and/or error compensation, i.e., attempts to cancel the undesired sensory consequence (a wrong tone) a musician is about to perceive. Thus, auditory feedback seems not to be a prerequisite for error detection, consistent with previous behavioral results. In contrast, when auditory feedback is externally manipulated and thus unexpected, motor performance can be severely distorted, although not all feedback alterations result in performance impairments. Recent studies investigating the neural correlates of feedback processing showed that unexpected feedback elicits an ERP component after note onsets, which shows larger amplitudes during music performance than during mere perception of the same musical sequences. Hence, these results stress the role of motor actions for the processing of auditory information. Furthermore, recent methodological advances like the combination of 3D motion capture techniques with EEG will be discussed. Such combinations of different measures can potentially help to disentangle the roles of different feedback types such as proprioceptive and auditory feedback, and in general to arrive at a better understanding of the complex interactions between the motor and auditory domain during error monitoring. Finally, outstanding questions and future directions in this context will be discussed. PMID:23898255

  3. A Conceptual Approach to Absolute Value Equations and Inequalities

    ERIC Educational Resources Information Center

    Ellis, Mark W.; Bryson, Janet L.

    2011-01-01

    The absolute value learning objective in high school mathematics requires students to solve far more complex absolute value equations and inequalities. When absolute value problems become more complex, students often do not have sufficient conceptual understanding to make any sense of what is happening mathematically. The authors suggest that the…

  4. Using, Seeing, Feeling, and Doing Absolute Value for Deeper Understanding

    ERIC Educational Resources Information Center

    Ponce, Gregorio A.

    2008-01-01

    Using sticky notes and number lines, a hands-on activity is shared that anchors initial student thinking about absolute value. The initial point of reference should help students successfully evaluate numeric problems involving absolute value. They should also be able to solve absolute value equations and inequalities that are typically found in…

  5. SAR image registration in absolute coordinates using GPS carrier phase position and velocity information

    SciTech Connect

    Burgett, S.; Meindl, M.

    1994-09-01

    It is useful in a variety of military and commercial applications to accurately register the position of synthetic aperture radar (SAR) imagery in absolute coordinates. The two basic SAR measurements, range and doppler, can be used to solve for the position of the SAR image. Imprecise knowledge of the SAR collection platform's position and velocity vectors introduces errors in the range and doppler measurements and can cause the apparent location of the SAR image on the ground to be in error by tens of meters. Recent advances in carrier phase GPS techniques can provide an accurate description of the collection vehicle's trajectory during the image formation process. In this paper, highly accurate carrier phase GPS trajectory information is used in conjunction with SAR imagery to demonstrate a technique for accurate registration of SAR images in WGS-84 coordinates. Flight test data will be presented that demonstrate SAR image registration errors of less than 4 meters.

  6. Errata: Papers in Error Analysis.

    ERIC Educational Resources Information Center

    Svartvik, Jan, Ed.

    Papers presented at the symposium on error analysis in Lund, Sweden, in September 1972, approach error analysis specifically in its relation to foreign language teaching and second language learning. Error analysis is defined as having three major aspects: (1) the description of the errors, (2) the explanation of errors by means of contrastive…

  7. Absolute paleointensity from Hawaiian lavas younger than 35 ka

    USGS Publications Warehouse

    Valet, J.-P.; Tric, E.; Herrero-Bervera, E.; Meynadier, L.; Lockwood, J.P.

    1998-01-01

    Paleointensity studies have been conducted in air and in argon atmosphere on nine lava flows with radiocarbon ages distributed between 3.3 and 28.2 ka from the Mauna Loa volcano on the Big Island of Hawaii. Determinations of paleointensity obtained at eight sites depict the same overall pattern as the previous results for the same period in Hawaii, although the overall average field intensity appears to be lower. Since the present results were determined at higher temperatures than in the previous studies, this discrepancy raises questions regarding the selection of low versus high-temperature segments that is usually made for absolute paleointensity. The virtual dipole moments are similar to those displayed by the worldwide data set obtained from dated lava flows. When averaged within finite time intervals, the worldwide values match the variations of the Sint-200 synthetic record of relative paleointensity nicely and confirm the overall decrease of the dipole field intensity during most of this period. The convergence between the existing records at Hawaii and the rest of the world does not favour the presence of persistent strong non-dipole components beneath Hawaii for this period.

  8. SU-E-T-152: Error Sensitivity and Superiority of a Protocol for 3D IMRT Quality Assurance

    SciTech Connect

    Gueorguiev, G; Cotter, C; Turcotte, J; Sharp, G; Crawford, B; Mah'D, M

    2014-06-01

    Purpose: To test whether the parameters included in our 3D QA protocol, with current tolerance levels, are able to detect certain errors, and to show the superiority of the 3D QA method over single ion chamber measurements and the 2D gamma test by detecting most of the introduced errors. The 3D QA protocol parameters are: TPS and measured average dose difference, 3D gamma test with 3mmDTA/3% test parameters, and the structure volume for which the TPS-predicted and measured absolute dose difference is greater than 6%. Methods: Two prostate and two thoracic step-and-shoot IMRT patients were investigated. The following errors were introduced to each original treatment plan: energy switched from 6MV to 10MV, linac jaws retracted to 15cmx15cm, 1, 2, or 3 central MLC leaf pairs retracted behind the jaws, a single central MLC leaf put in or out of the treatment field, Monitor Units (MU) increased and decreased by 1 and 3%, collimator off by 5 and 15 degrees, detector shifted by 5mm to the left and right, gantry treatment angle off by 5 and 15 degrees. QA was performed on each plan using a single ion chamber, a 2D ion chamber array for 2D gamma analysis, and IBA's COMPASS system for 3D QA. Results: Of the three tested QA methods, the single ion chamber performs worst, failing to detect subtle errors. 3D QA proves superior, detecting all of the introduced errors except the 10MV energy switch, the 1% MU change, and the MLC rotation (errors that none of the tested QA methods detected). Conclusion: As the way radiation is delivered evolves, so must the QA. We believe a diverse set of 3D statistical parameters, applied both to OAR and target plan structures, provides the highest level of QA.
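
    The 3mmDTA/3% gamma test mentioned in the protocol combines dose difference and distance-to-agreement into a single index; a point passes when gamma <= 1. A minimal 1D sketch of the metric follows (the protocol itself applies it in 3D over measured dose grids):

```python
import math

def gamma_index_1d(ref_pos, ref_dose, eval_pos, eval_dose,
                   dose_tol=0.03, dist_tol=3.0):
    # Gamma value at each reference point: minimum over evaluated points of
    # sqrt((dose diff / dose tol)^2 + (distance / dist tol)^2).
    # Dose tolerance is taken relative to the reference maximum (global gamma).
    d_ref_max = max(ref_dose)
    gammas = []
    for rp, rd in zip(ref_pos, ref_dose):
        best = min(
            math.sqrt(((ed - rd) / (dose_tol * d_ref_max)) ** 2
                      + ((ep - rp) / dist_tol) ** 2)
            for ep, ed in zip(eval_pos, eval_dose)
        )
        gammas.append(best)
    return gammas

# Identical profiles pass trivially (gamma = 0 everywhere).
pos = [i * 1.0 for i in range(11)]            # positions in mm
dose = [math.exp(-((p - 5.0) / 3.0) ** 2) for p in pos]
g_same = gamma_index_1d(pos, dose, pos, dose)

# A 1 mm spatial shift stays well inside 3%/3mm (all gamma <= 1).
shifted = [p + 1.0 for p in pos]
g_shift = gamma_index_1d(pos, dose, shifted, dose)
pass_rate = sum(1 for g in g_shift if g <= 1.0) / len(g_shift)
```

    The pass rate over all points is the figure usually reported; a plan-level tolerance is then set on that rate.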

  9. On-Orbit Absolute Radiance Standard for Future IR Remote Sensing Instruments

    NASA Astrophysics Data System (ADS)

    Best, F. A.; Adler, D. P.; Pettersen, C.; Revercomb, H. E.; Gero, P. J.; Taylor, J. K.; Knuteson, R. O.; Perepezko, J. H.

    2010-12-01

    Future NASA infrared remote sensing missions, including the climate benchmark CLARREO mission will require better absolute measurement accuracy than now available, and will most certainly rely on the emerging capability to fly SI traceable standards that provide irrefutable absolute measurement accuracy. As an example, instrumentation designed to measure spectrally resolved infrared radiances with an absolute brightness temperature error of better than 0.1 K will require high-emissivity (>0.999) calibration blackbodies with emissivity uncertainty of better than 0.06%, and absolute temperature uncertainties of better than 0.045K (3 sigma). Key elements of an On-Orbit Absolute Radiance Standard (OARS) meeting these stringent requirements have been demonstrated in the laboratory at the University of Wisconsin and are undergoing Technology Readiness Level (TRL) advancement under the NASA Instrument Incubator Program (IIP). We present the new technologies that underlie the OARS and the results of laboratory testing that demonstrate the required accuracy is being met. The underlying technologies include on-orbit absolute temperature calibration using the transient melt signatures of small quantities (<1g) of reference materials (gallium, water, and mercury) imbedded in the blackbody cavity; and on-orbit cavity spectral emissivity measurement using a heated halo. For these emissivity measurements, a carefully baffled heated cylinder is placed in front of a blackbody in the infrared spectrometer system, and the combined radiance of the blackbody and Heated Halo reflection is observed. Knowledge of key temperatures and the viewing geometry allow the blackbody cavity spectral emissivity to be calculated. This work will culminate with an integrated subsystem that can provide on-orbit end-to-end radiometric accuracy validation for infrared remote sensing instruments.
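
    The link between emissivity/temperature knowledge and brightness-temperature error can be sketched with the Planck function in wavenumber form (a back-of-the-envelope check, not the OARS error budget itself):

```python
import math

C1 = 1.191042e-8   # W m^-2 sr^-1 / cm^-4  (2hc^2, wavenumber form)
C2 = 1.4387769     # K cm                  (hc/k)

def planck(nu, T):
    # Blackbody spectral radiance at wavenumber nu (cm^-1), temperature T (K).
    return C1 * nu**3 / math.expm1(C2 * nu / T)

def brightness_temp(nu, L):
    # Invert the Planck function for the equivalent blackbody temperature.
    return C2 * nu / math.log1p(C1 * nu**3 / L)

nu, T = 1000.0, 300.0
L = planck(nu, T)
# An emissivity knowledge error of 0.06% scales the radiance by 0.9994; the
# resulting brightness-temperature error is a few hundredths of a kelvin,
# the scale of the 0.1 K accuracy budget quoted above.
dT = T - brightness_temp(nu, 0.9994 * L)
```

    This is why the stated emissivity uncertainty (0.06%) and temperature uncertainty (0.045 K) are of comparable weight in the 0.1 K budget.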

  10. Compact disk error measurements

    NASA Technical Reports Server (NTRS)

    Howe, D.; Harriman, K.; Tehranchi, B.

    1993-01-01

    The objectives of this project are as follows: provide hardware and software that will perform simple, real-time, high resolution (single-byte) measurement of the error burst and good data gap statistics seen by a photoCD player read channel when recorded CD write-once discs of variable quality (i.e., condition) are being read; extend the above system to enable measurement of the hard decision (i.e., 1-bit error flags) and soft decision (i.e., 2-bit error flags) decoding information that is produced/used by the Cross-Interleaved Reed-Solomon Code (CIRC) block decoder employed in the photoCD player read channel; construct a model that uses data obtained via the systems described above to produce meaningful estimates of output error rates (due to both uncorrected ECC words and misdecoded ECC words) when a CD disc having specific (measured) error statistics is read (completion date to be determined); and check the hypothesis that current adaptive CIRC block decoders are optimized for pressed (DAD/ROM) CD discs. If warranted, do a conceptual design of an adaptive CIRC decoder that is optimized for write-once CD discs.
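
    The burst and gap statistics mentioned above are simply run lengths of erroneous and error-free bytes; a minimal sketch of how they might be tallied from a stream of per-byte error flags (my own illustration, not the project's hardware):

```python
def burst_and_gap_stats(error_flags):
    # error_flags: sequence of 0/1 per byte (1 = byte in error).
    # Returns (burst_lengths, gap_lengths): run lengths of 1s and of 0s.
    bursts, gaps = [], []
    run_val, run_len = None, 0
    for flag in error_flags:
        if flag == run_val:
            run_len += 1
        else:
            if run_val == 1:
                bursts.append(run_len)
            elif run_val == 0:
                gaps.append(run_len)
            run_val, run_len = flag, 1
    if run_val == 1:
        bursts.append(run_len)
    elif run_val == 0:
        gaps.append(run_len)
    return bursts, gaps

flags = [0, 0, 1, 1, 1, 0, 1, 0, 0, 0, 1, 1]
bursts, gaps = burst_and_gap_stats(flags)
# bursts -> [3, 1, 2], gaps -> [2, 1, 3]
```

    Histograms of these two lists are exactly the burst/gap statistics a decoder model would consume.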

  11. Absolute position total internal reflection microscopy with an optical tweezer

    PubMed Central

    Liu, Lulu; Woolf, Alexander; Rodriguez, Alejandro W.; Capasso, Federico

    2014-01-01

    A noninvasive, in situ calibration method for total internal reflection microscopy (TIRM) based on optical tweezing is presented, which greatly expands the capabilities of this technique. We show that by making only simple modifications to the basic TIRM sensing setup and procedure, a probe particle’s absolute position relative to a dielectric interface may be known with better than 10 nm precision out to a distance greater than 1 μm from the surface. This represents roughly a 10× reduction in error and a 3× increase in measurement range over conventional TIRM methods. The technique’s advantage is in the direct measurement of the probe particle’s scattering intensity vs. height profile in situ, rather than relying on assumptions, inexact system analogs, or detailed knowledge of system parameters for calibration. To demonstrate the improved versatility of the TIRM method in terms of tunability, precision, and range, we show our results for the hindered near-wall diffusion coefficient for a spherical dielectric particle. PMID:25512542
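
    TIRM height sensing rests on the exponential decay of evanescent-wave scattering with particle height; once the intensity-height profile is calibrated in situ, heights follow by inverting it. A minimal sketch with illustrative parameter values (not the paper's calibration data):

```python
import math

def height_from_intensity(I, I0, delta):
    # Invert the exponential TIRM relation I = I0 * exp(-h / delta).
    # I0 and delta would come from an in-situ calibration like the one described.
    return delta * math.log(I0 / I)

delta = 100.0   # nm, evanescent-field penetration depth (illustrative)
I0 = 1.0        # scattering intensity at contact (h = 0)
h = 250.0       # nm, true height
I = I0 * math.exp(-h / delta)
# Inverting the measured intensity recovers the height.
```

    The calibration's contribution is precisely to pin down I0 and delta (or the full measured profile) so this inversion is absolute rather than relative.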

  12. Using absolute gravimeter data to determine vertical gravity gradients

    USGS Publications Warehouse

    Robertson, D.S.

    2001-01-01

    The position versus time data from a free-fall absolute gravimeter can be used to estimate the vertical gravity gradient in addition to the gravity value itself. Hipkin has reported success in estimating the vertical gradient value using a data set of unusually good quality. This paper explores techniques that may be applicable to a broader class of data that may be contaminated with "system response" errors of larger magnitude than were evident in the data used by Hipkin. This system response function is usually modelled as a sum of exponentially decaying sinusoidal components. The technique employed here involves combining the x0, v0 and g parameters from all the drops made during a site occupation into a single least-squares solution, and including the value of the vertical gradient and the coefficients of system response function in the same solution. The resulting non-linear equations must be solved iteratively and convergence presents some difficulties. Sparse matrix techniques are used to make the least-squares problem computationally tractable.
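
    The idea of recovering the gradient from drop trajectories can be sketched with a single noiseless synthetic drop: to first order in the gradient gamma, the trajectory is a quartic in time, and a polynomial fit separates g (the t^2 term) from gamma (the t^4 term). The gradient is exaggerated by orders of magnitude here so the toy fit is well conditioned; the paper's actual method combines many drops and a system-response model in one nonlinear least-squares solution.

```python
import numpy as np

# Free fall in a linearly varying gravity field, to first order in gamma:
#   x(t) = x0 + v0*t + g*t^2/2 + gamma*(x0*t^2/2 + v0*t^3/6 + g*t^4/24)
# Real vertical gradients are ~3e-6 s^-2; gamma is inflated for this sketch.
g_true, gamma_true = 9.81, 0.1
x0, v0 = 0.0, 0.0
t = np.linspace(0.01, 0.2, 200)
x = (x0 + v0 * t + 0.5 * g_true * t**2
     + gamma_true * (0.5 * x0 * t**2 + v0 * t**3 / 6 + g_true * t**4 / 24))

c = np.polyfit(t, x, 4)           # c[0]*t^4 + ... + c[4]
g_est = 2.0 * c[2]                # t^2 coefficient = (g + gamma*x0)/2, x0 = 0
gamma_est = 24.0 * c[0] / g_est   # t^4 coefficient = gamma*g/24
```

    With realistic gradients the t^4 term is nanometre-scale, which is why the paper needs many combined drops and careful treatment of system-response errors.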

  13. Weighted Wilcoxon-type Smoothly Clipped Absolute Deviation Method

    PubMed Central

    Wang, Lan; Li, Runze

    2009-01-01

    Summary Shrinkage-type variable selection procedures have recently seen increasing applications in biomedical research. However, their performance can be adversely influenced by outliers in either the response or the covariate space. This paper proposes a weighted Wilcoxon-type smoothly clipped absolute deviation (WW-SCAD) method, which deals with robust variable selection and robust estimation simultaneously. The new procedure can be conveniently implemented with the statistical software R. We establish that the WW-SCAD correctly identifies the set of zero coefficients with probability approaching one and estimates the nonzero coefficients at the n^(-1/2) rate. Moreover, with appropriately chosen weights the WW-SCAD is robust with respect to outliers in both the x and y directions. The important special case with constant weights yields an oracle-type estimator with high efficiency in the presence of heavier-tailed random errors. The robustness of the WW-SCAD is partly justified by its asymptotic performance under local shrinking contamination. We propose a BIC-type tuning parameter selector for the WW-SCAD. The performance of the WW-SCAD is demonstrated via simulations and by an application to a study that investigates the effects of personal characteristics and dietary factors on plasma beta-carotene level. PMID:18647294
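
    The SCAD penalty underlying the WW-SCAD criterion (Fan and Li's smoothly clipped absolute deviation) acts like the lasso near zero but levels off, so large coefficients are not over-shrunk. A direct transcription of its three-piece closed form:

```python
def scad_penalty(theta, lam, a=3.7):
    # Smoothly clipped absolute deviation penalty (Fan & Li), with the
    # conventional default a = 3.7. Piecewise in t = |theta|:
    #   t <= lam:            lam * t                      (lasso-like)
    #   lam < t <= a*lam:    quadratic transition
    #   t > a*lam:           constant (a + 1) * lam^2 / 2 (no further shrinkage)
    t = abs(theta)
    if t <= lam:
        return lam * t
    if t <= a * lam:
        return -(t * t - 2 * a * lam * t + lam * lam) / (2 * (a - 1))
    return (a + 1) * lam * lam / 2

lam = 1.0
p_small = scad_penalty(0.5, lam)   # lasso-like region
p_large = scad_penalty(10.0, lam)  # flat region: (a + 1) * lam**2 / 2
```

    The flat tail is what gives SCAD-type estimators their oracle property: sufficiently large coefficients are effectively left unpenalized.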

  14. Absolute calibration for complex-geometry biomedical diffuse optical spectroscopy

    NASA Astrophysics Data System (ADS)

    Mastanduno, Michael A.; Jiang, Shudong; El-Ghussein, Fadi; diFlorio-Alexander, Roberta; Pogue, Brian W.; Paulsen, Keith D.

    2013-03-01

    We have presented methodology to calibrate data in NIRS/MRI imaging versus an absolute reference phantom and results in both phantoms and healthy volunteers. This method directly calibrates data to a diffusion-based model, takes advantage of patient specific geometry from MRI prior information, and generates an initial guess without the need for a large data set. This method of calibration allows for more accurate quantification of total hemoglobin, oxygen saturation, water content, scattering, and lipid concentration as compared with other, slope-based methods. We found the main source of error in the method to be derived from incorrect assignment of reference phantom optical properties rather than initial guess in reconstruction. We also present examples of phantom and breast images from a combined frequency domain and continuous wave MRI-coupled NIRS system. We were able to recover phantom data within 10% of expected contrast and within 10% of the actual value using this method and compare these results with slope-based calibration methods. Finally, we were able to use this technique to calibrate and reconstruct images from healthy volunteers. Representative images are shown and discussion is provided for comparison with existing literature. These methods work towards fully combining the synergistic attributes of MRI and NIRS for in-vivo imaging of breast cancer. Complete software and hardware integration in dual modality instruments is especially important due to the complexity of the technology and success will contribute to complex anatomical and molecular prognostic information that can be readily obtained in clinical use.

  15. The Absolute Radiometric Calibration of Space-Based Sensors.

    NASA Astrophysics Data System (ADS)

    Holm, Ronald Gene

    1987-09-01

    The need for absolute radiometric calibration of space-based sensors will continue to increase as new generations of space sensors are developed. A reflectance-based in-flight calibration procedure is used to determine the radiance reaching the entrance pupil of the sensor. This procedure uses ground-based measurements coupled with a radiative transfer code to characterize the effects the atmosphere has on the signal reaching the sensor. The computed radiance is compared to the digital count output of the sensor associated with the image of a test site. This provides an update to the preflight calibration of the system and a check on the on-board internal calibrator. This calibration procedure was used to perform a series of five calibrations of the Landsat-5 Thematic Mapper (TM). For the 12 measurements made in TM bands 1-3, the RMS variation from the mean as a percentage of the mean is ±1.9%, and for measurements in the IR, TM bands 4, 5, and 7, the value is ±3.4%. The RMS variation for all 23 measurements is ±2.8%. The absolute calibration techniques were put to another test with a series of three calibrations of the SPOT-1 High Resolution Visible (HRV) sensors. The ratio, HRV-2/HRV-1, of absolute calibration coefficients compared very well with ratios of histogrammed data obtained when the cameras simultaneously imaged the same ground site. Bands PA, B1 and B3 agreed to within 3%, while band B2 showed a 7% difference. The procedure for performing a satellite calibration was then used to demonstrate how a calibrated satellite sensor can be used to quantitatively evaluate surface reflectance over a wide range of surface features. Predicted reflectance factors were compared to values obtained from aircraft-based radiometer data. This procedure was applied on four dates with two different surface conditions per date. A strong correlation, R^2 = 0.996, was shown between reflectance values determined from satellite imagery and from low-flying aircraft.
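
    The reflectance-based procedure ultimately ties mean image digital counts to code-predicted at-sensor radiance; with several such pairs, a band's calibration gain and offset follow from a straight-line fit. A sketch with made-up numbers (illustrative values, not Landsat-5 TM coefficients):

```python
import numpy as np

# Hypothetical pairs of code-predicted at-sensor radiance L (from ground
# reflectance plus radiative transfer) and mean image digital counts DN over
# the test site, for one band. A line fit gives DN = gain * L + offset.
L = np.array([40.0, 75.0, 110.0, 150.0])   # W m^-2 sr^-1 um^-1 (illustrative)
DN = np.array([52.1, 97.6, 143.1, 195.1])

gain, offset = np.polyfit(L, DN, 1)
# Calibrated radiance for a new pixel with DN = 180:
radiance = (180.0 - offset) / gain
```

    The same pairs, accumulated over repeat visits, yield the RMS variations of the calibration quoted above.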

  16. Comparison of Using Relative and Absolute PCV Corrections in Short Baseline GNSS Observation Processing

    NASA Astrophysics Data System (ADS)

    Dawidowicz, Karol

    2011-01-01

    GNSS antenna phase center variations (PCV) are defined as shifts in position that depend on the elevation angle and azimuth to the observed satellite. When identical antennas are used in relative measurement, the phase center variations cancel out, particularly over short baselines. When different antennas are used, even on short baselines, ignoring these phase center variations can lead to serious (up to 10 cm) vertical errors. The only way to avoid these errors when mixing different antenna types is to apply antenna phase center variation models in processing. Until 6 November 2006, the International GNSS Service used relative phase center models for GNSS receiver antennas. Then absolute calibration models, developed by the company "Geo++", came into use. The relative models implied significant differences in the scale of GNSS networks compared to VLBI and SLR measurements, differences that were due to the lack of GNSS satellite antenna calibration models. When this problem was sufficiently resolved, the IGS decided to switch from relative to absolute models for both satellites and receivers. This decision caused significant variations in the results of GNSS network solutions. The aim of this paper is to study the height differences in short baseline GNSS observation processing when different calibration models are used. The analysis was done using GNSS data collected on short baselines observed with different receiver antennas. The results show that switching from relative to absolute receiver antenna PCV models has a significant effect on GNSS network solutions, particularly in high-accuracy applications.

  17. Scientific Impacts of Wind Direction Errors

    NASA Technical Reports Server (NTRS)

    Liu, W. Timothy; Kim, Seung-Bum; Lee, Tong; Song, Y. Tony; Tang, Wen-Qing; Atlas, Robert

    2004-01-01

    An assessment was made of the scientific impact of random errors in wind direction (less than 45 deg) retrieved from space-based observations under weak-wind (less than 7 m/s) conditions; such weak winds cover most of the tropical, sub-tropical, and coastal oceans. Introducing these errors into the semi-daily winds causes, on average, 5% changes in the yearly mean Ekman and Sverdrup volume transports computed directly from the winds. These poleward movements of water are the main mechanisms for redistributing heat from the warmer tropical region to the colder high-latitude regions, and they are the major manifestations of the ocean's role in modifying Earth's climate. Simulation by an ocean general circulation model shows that the wind errors introduce a 5% error in the meridional heat transport at tropical latitudes. The simulation also shows that the erroneous winds cause a pile-up of warm surface water in the eastern tropical Pacific, similar to conditions during an El Niño episode. Similar wind direction errors cause significant changes in sea-surface temperature and sea-level patterns in a coastal model simulation. Previous studies have shown that assimilation of scatterometer winds improves 3-5 day weather forecasts in the Southern Hemisphere; when directional information below 7 m/s was withheld, approximately 40% of that improvement was lost.
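
    The sensitivity of the Ekman transport to wind direction can be sketched directly from the bulk formulas (illustrative constants; the study itself used an ocean general circulation model): a direction error rotates the wind stress without changing its magnitude, which changes the meridional transport.

```python
import math

RHO_AIR, RHO_SEA, CDRAG = 1.22, 1025.0, 1.3e-3

def ekman_volume_transport(u, v, lat_deg):
    # Ekman volume transport per unit width (m^2/s) from 10-m wind (m/s).
    # Stress via a constant drag coefficient; f is the Coriolis parameter.
    speed = math.hypot(u, v)
    tau_x = RHO_AIR * CDRAG * speed * u
    tau_y = RHO_AIR * CDRAG * speed * v
    f = 2 * 7.2921e-5 * math.sin(math.radians(lat_deg))
    return tau_y / (RHO_SEA * f), -tau_x / (RHO_SEA * f)  # (zonal, meridional)

# A 30-degree direction error in a 5 m/s wind leaves the wind speed (and
# hence the stress magnitude) unchanged, but rotates the transport vector.
mx1, my1 = ekman_volume_transport(5.0, 0.0, 15.0)
ang = math.radians(30.0)
mx2, my2 = ekman_volume_transport(5.0 * math.cos(ang), 5.0 * math.sin(ang), 15.0)
```

    In the Northern Hemisphere the transport is 90 degrees to the right of the wind, so an eastward wind drives it equatorward; rotating the wind by 30 degrees scales the meridional component by cos(30°).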

  18. Use of Absolute and Comparative Performance Feedback in Absolute and Comparative Judgments and Decisions

    ERIC Educational Resources Information Center

    Moore, Don A.; Klein, William M. P.

    2008-01-01

    Which matters more--beliefs about absolute ability or ability relative to others? This study set out to compare the effects of such beliefs on satisfaction with performance, self-evaluations, and bets on future performance. In Experiment 1, undergraduate participants were told they had answered 20% correct, 80% correct, or were not given their…

  19. Dialogues on prediction errors.

    PubMed

    Niv, Yael; Schoenbaum, Geoffrey

    2008-07-01

    The recognition that computational ideas from reinforcement learning are relevant to the study of neural circuits has taken the cognitive neuroscience community by storm. A central tenet of these models is that discrepancies between actual and expected outcomes can be used for learning. Neural correlates of such prediction-error signals have been observed now in midbrain dopaminergic neurons, striatum, amygdala and even prefrontal cortex, and models incorporating prediction errors have been invoked to explain complex phenomena such as the transition from goal-directed to habitual behavior. Yet, like any revolution, the fast-paced progress has left an uneven understanding in its wake. Here, we provide answers to ten simple questions about prediction errors, with the aim of exposing both the strengths and the limitations of this active area of neuroscience research.
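    The central quantity in these models is the temporal-difference prediction error, delta = r + gamma*V(s') - V(s): the discrepancy between actual and expected outcome that drives learning. A minimal tabular sketch (the states, learning rate, and reward values are illustrative):

```python
def td_update(V, s, r, s_next, alpha=0.1, gamma=0.9):
    """One TD(0) step: compute the prediction error and nudge V(s)."""
    delta = r + gamma * V.get(s_next, 0.0) - V.get(s, 0.0)  # prediction error
    V[s] = V.get(s, 0.0) + alpha * delta
    return delta

V = {}
# Repeatedly experience: 'cue' leads to 'reward' (worth r=1), then 'end'.
for _ in range(200):
    td_update(V, 'reward', 1.0, 'end')
    last_delta = td_update(V, 'cue', 0.0, 'reward')
# V('reward') approaches 1.0, V('cue') approaches gamma * 1.0 = 0.9,
# and the prediction error shrinks toward zero as outcomes become expected.
```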

  20. Experimental Quantum Error Detection

    PubMed Central

    Jin, Xian-Min; Yi, Zhen-Huan; Yang, Bin; Zhou, Fei; Yang, Tao; Peng, Cheng-Zhi

    2012-01-01

    Faithful transmission of quantum information is a crucial ingredient in quantum communication networks. To overcome the unavoidable decoherence in a noisy channel, to date, many efforts have been made to transmit one state by consuming large numbers of time-synchronized ancilla states. However, such huge demands of quantum resources are hard to meet with current technology and this restricts practical applications. Here we experimentally demonstrate quantum error detection, an economical approach to reliably protecting a qubit against bit-flip errors. Arbitrary unknown polarization states of single photons and entangled photons are converted into time bins deterministically via a modified Franson interferometer. Noise arising in both 10 m and 0.8 km fiber, which induces associated errors on the reference frame of time bins, is filtered when photons are detected. The demonstrated resource efficiency and state independence make this protocol a promising candidate for implementing a real-world quantum communication network. PMID:22953047

  1. Error Free Software

    NASA Technical Reports Server (NTRS)

    1985-01-01

    A mathematical theory for the development of "higher order" software to catch computer mistakes resulted from a Johnson Space Center contract for Apollo spacecraft navigation. Two women involved in the project formed Higher Order Software, Inc. to develop and market the system of error analysis and correction. They designed software that is logically error-free and that, in one instance, was found to increase productivity by 600%. Their product, USE.IT, defines its objectives using AXES: a user can write in English and the system converts this to computer languages. It is employed by several large corporations.

  2. Absolute calibration of ultraviolet filter photometry

    NASA Technical Reports Server (NTRS)

    Bless, R. C.; Fairchild, T.; Code, A. D.

    1972-01-01

    The essential features of the calibration procedure can be divided into three parts. First, the shape of the bandpass of each photometer was determined by measuring the transmissions of the individual optical components and also by measuring the response of the photometer as a whole. Second, each photometer was placed in the essentially collimated synchrotron radiation bundle, maintained at a constant intensity level, and the output signal was determined from about 100 points on the objective. Finally, two or three points on the objective were illuminated by synchrotron radiation at several different intensity levels covering the dynamic range of the photometers. The output signals were placed on an absolute basis by the electron counting technique described earlier.

  3. Absolute measurements of fast neutrons using yttrium

    SciTech Connect

    Roshan, M. V.; Springham, S. V.; Rawat, R. S.; Lee, P.; Krishnan, M.

    2010-08-15

    Yttrium is presented as an absolute neutron detector for pulsed neutron sources. It has high sensitivity for detecting fast neutrons. Yttrium has the property of generating monoenergetic secondary radiation in the form of a 909 keV gamma-ray produced by inelastic neutron interaction. It was calibrated numerically using MCNPX and does not need periodic recalibration. The total yttrium efficiency for detecting 2.45 MeV neutrons was determined to be f_n ≈ 4.1×10^-4 with an uncertainty of about 0.27%. The yttrium detector was employed in the NX2 plasma focus experiments and showed a neutron yield of the order of 10^8 neutrons per discharge.
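    Converting detected counts to an absolute yield is a single division by the simulated efficiency, with Poisson counting statistics and the quoted 0.27% efficiency uncertainty combined in quadrature. A sketch (the count number is a made-up example; the efficiency and its uncertainty are the figures quoted above):

```python
import math

def neutron_yield(counts, efficiency=4.1e-4, eff_rel_unc=0.0027):
    """Convert detected 909 keV gamma counts to total neutron yield,
    propagating Poisson counting error and the MCNPX efficiency
    uncertainty in quadrature."""
    y = counts / efficiency
    rel = math.sqrt(1.0 / counts + eff_rel_unc ** 2)
    return y, y * rel

# 41000 detected gammas -> yield of order 1e8 neutrons per discharge.
y, dy = neutron_yield(41000)
```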

  4. Absolute Measurement of Electron Cloud Density

    SciTech Connect

    Covo, M K; Molvik, A W; Cohen, R H; Friedman, A; Seidl, P A; Logan, G; Bieniosek, F; Baca, D; Vay, J; Orlando, E; Vujic, J L

    2007-06-21

    Beam interaction with background gas and walls produces ubiquitous clouds of stray electrons that frequently limit the performance of particle accelerators and storage rings. Counterintuitively, we obtained the electron cloud accumulation by measuring the expelled ions that originate from the beam-background gas interaction, rather than by measuring electrons that reach the walls. The kinetic ion energy measured with a retarding field analyzer (RFA) maps the depressed beam space-charge potential and provides the dynamic electron cloud density. Clearing electrode current measurements give the static electron cloud background that complements and corroborates the RFA measurements, providing an absolute measurement of electron cloud density during a 5 μs duration beam pulse in a drift region of the magnetic transport section of the High-Current Experiment (HCX) at LBNL.

  5. Micron Accurate Absolute Ranging System: Range Extension

    NASA Technical Reports Server (NTRS)

    Smalley, Larry L.; Smith, Kely L.

    1999-01-01

    The purpose of this research is to investigate Fresnel diffraction as a means of obtaining absolute distance measurements with micron or better accuracy. It is believed that such a system would prove useful to the Next Generation Space Telescope (NGST) as a non-intrusive, non-contact measuring system for use with secondary concentrator station-keeping systems. The present research attempts to validate past experiments and develop ways to apply the phenomenon of Fresnel diffraction to micron-accurate measurement. This report discusses past research on the phenomenon and the basis of the use of Fresnel diffraction for distance metrology. The apparatus used in the recent investigations, the experimental procedures, and preliminary results are discussed in detail. Continued research and equipment requirements for extending the effective range of Fresnel diffraction systems are also described.

  6. Absolute calibration of remote sensing instruments

    NASA Astrophysics Data System (ADS)

    Biggar, S. F.; Bruegge, C. J.; Capron, B. A.; Castle, K. R.; Dinguirard, M. C.; Holm, R. G.; Lingg, L. J.; Mao, Y.; Palmer, J. M.; Phillips, A. L.

    1985-12-01

    Source-based and detector-based methods for the absolute radiometric calibration of a broadband field radiometer are described. Using such a radiometer, calibrated by both methods, the calibration of the integrating sphere used in the preflight calibration of the Thematic Mapper was redetermined. The results are presented. The in-flight calibration of space remote sensing instruments is discussed. A method which uses the results of ground-based reflectance and atmospheric measurements as input to a radiative transfer code to predict the radiance at the instrument is described. A calibrated, helicopter-mounted radiometer is used to determine the radiance levels at intermediate altitudes to check the code predictions. Results of such measurements for the calibration of the Thematic Mapper on Landsat 5 and an analysis that shows the value of such measurements are described.
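    The reflectance-based prediction at the heart of the in-flight method is, in its simplest Lambertian form, L = rho * E0 * cos(theta_s) * T_down * T_up / pi. A deliberately simplified sketch that neglects path radiance and adjacency effects (all numbers are illustrative, not measured values from the campaign):

```python
import math

def predicted_radiance(rho, e0, sza_deg, t_down, t_up):
    """At-sensor radiance predicted from ground reflectance, assuming a
    Lambertian surface and simple two-way transmittances; path radiance
    (normally added by the radiative transfer code) is neglected here."""
    return rho * e0 * math.cos(math.radians(sza_deg)) * t_down * t_up / math.pi

# Hypothetical bright desert site: reflectance 0.4, exo-atmospheric
# irradiance 1550 W m-2 um-1, 30-deg solar zenith, transmittances 0.85/0.9.
L = predicted_radiance(0.4, 1550.0, 30.0, 0.85, 0.9)   # W m-2 sr-1 um-1
```

In the full method this prediction comes from a radiative transfer code driven by the measured reflectance and atmosphere; the sketch only shows the dominant first-order term.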

  7. Swarm's Absolute Scalar Magnetometer metrological performances

    NASA Astrophysics Data System (ADS)

    Leger, J.; Fratter, I.; Bertrand, F.; Jager, T.; Morales, S.

    2012-12-01

    The Absolute Scalar Magnetometer (ASM) has been developed for the ESA Earth Observation Swarm mission, planned for launch in November 2012. Like its Overhauser magnetometer forerunners flown on the Oersted and CHAMP satellites, it will deliver high-resolution scalar measurements for the in-flight calibration of the Vector Field Magnetometer manufactured by the Technical University of Denmark. The latest results of the ground tests carried out to fully characterize all parameters that may affect its accuracy, both at instrument and satellite level, will be presented. In addition to its baseline function, the ASM can be operated either at a much higher sampling rate (burst mode at 250 Hz) or in a dual mode where it also delivers vector field measurements as a by-product. The calibration procedure and the relevant vector performances will be discussed.

  8. MAGSAT: Vector magnetometer absolute sensor alignment determination

    NASA Technical Reports Server (NTRS)

    Acuna, M. H.

    1981-01-01

    A procedure is described for accurately determining the absolute alignment of the magnetic axes of a triaxial magnetometer sensor with respect to an external, fixed reference coordinate system. The method does not require that the magnetic field vector orientation, as generated by a triaxial calibration coil system, be known to better than a few degrees from its true position, and it minimizes the number of positions through which a sensor assembly must be rotated to obtain a solution. Computer simulations show that accuracies of better than 0.4 seconds of arc can be achieved under typical test conditions associated with existing magnetic test facilities. The basic approach is similar in nature to that presented by McPherron and Snare (1978), except that only three sensor positions are required and the system of equations to be solved is considerably simplified. Applications of the method to the MAGSAT Vector Magnetometer are presented and the problems encountered are discussed.

  9. Effects of confining pressure, pore pressure and temperature on absolute permeability. SUPRI TR-27

    SciTech Connect

    Gobran, B.D.; Ramey, H.J. Jr.; Brigham, W.E.

    1981-10-01

    This study investigates absolute permeability of consolidated sandstone and unconsolidated sand cores to distilled water as a function of the confining pressure on the core, the pore pressure of the flowing fluid and the temperature of the system. Since permeability measurements are usually made in the laboratory under conditions very different from those in the reservoir, it is important to know the effect of various parameters on the measured value of permeability. All studies on the effect of confining pressure on absolute permeability have found that when the confining pressure is increased, the permeability is reduced. The studies on the effect of temperature have shown much less consistency. This work contradicts the past Stanford studies by finding no effect of temperature on the absolute permeability of unconsolidated sand or sandstones to distilled water. The probable causes of the past errors are discussed. It has been found that inaccurate measurement of temperature at ambient conditions and non-equilibrium of temperature in the core can lead to a fictitious permeability reduction with temperature increase. The results of this study on the effect of confining pressure and pore pressure support the theory that as confining pressure is increased or pore pressure decreased, the permeability is reduced. The effects of confining pressure and pore pressure changes on absolute permeability are given explicitly so that measurements made under one set of confining pressure/pore pressure conditions in the laboratory can be extrapolated to conditions more representative of the reservoir.
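    The laboratory permeability measurements discussed above rest on Darcy's law for single-phase steady flow, k = q*mu*L/(A*dp). A minimal sketch with hypothetical core-flood numbers (not data from this study):

```python
def absolute_permeability(q, mu, length, area, dp):
    """Darcy's law in SI units: k = q * mu * L / (A * dp), giving k in m^2.
    q: volumetric rate (m^3/s), mu: viscosity (Pa s),
    length/area: core dimensions (m, m^2), dp: pressure drop (Pa)."""
    return q * mu * length / (area * dp)

# Hypothetical core flood: 1 cm^3/s of water (mu = 1e-3 Pa s) through a
# 5 cm long, 5 cm^2 cross-section core under a 2 bar differential pressure.
k = absolute_permeability(q=1e-6, mu=1e-3, length=0.05, area=5e-4, dp=2e5)
k_darcy = k / 9.869e-13     # convert m^2 to darcy for comparison
```

Confining pressure, pore pressure and temperature enter only through their effect on this measured k, which is why holding them (and the temperature measurement itself) under control matters.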

  10. In situ measurement of leaf chlorophyll concentration: analysis of the optical/absolute relationship.

    PubMed

    Parry, Christopher; Blonquist, J Mark; Bugbee, Bruce

    2014-11-01

    In situ optical meters are widely used to estimate leaf chlorophyll concentration, but non-uniform chlorophyll distribution causes optical measurements to vary widely among species for the same chlorophyll concentration. Over 30 studies have sought to quantify the in situ/in vitro (optical/absolute) relationship, but neither chlorophyll extraction nor measurement techniques for in vitro analysis have been consistent among studies. Here we: (1) review standard procedures for measurement of chlorophyll; (2) estimate the error associated with non-standard procedures; and (3) implement the most accurate methods to provide equations for conversion of optical to absolute chlorophyll for 22 species grown in multiple environments. Tests of five Minolta (model SPAD-502) and 25 Opti-Sciences (model CCM-200) meters, manufactured from 1992 to 2013, indicate that differences among replicate models are less than 5%. We thus developed equations for converting between units from these meter types. There was no significant effect of environment on the optical/absolute chlorophyll relationship. We derive the theoretical relationship between optical transmission ratios and absolute chlorophyll concentration and show how non-uniform distribution among species causes a variable, non-linear response. These results link in situ optical measurements with in vitro chlorophyll concentration and provide insight into strategies for radiation capture among diverse species.
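    The variable, non-linear response described above arises because the meter averages transmission over the sampled area, whereas extraction averages chlorophyll itself; for a non-uniform leaf these two averages differ. A toy Beer-Lambert sketch of the effect (the patch values and absorptivity are illustrative):

```python
import math

def apparent_chlorophyll(chl_patches, k=0.01):
    """Optical estimate when the beam averages *transmission* over patches
    of differing chlorophyll (Beer-Lambert with absorptivity k).
    Averaging transmission rather than chlorophyll biases the estimate
    low for non-uniform leaves (Jensen's inequality)."""
    t_mean = sum(10 ** (-k * c) for c in chl_patches) / len(chl_patches)
    return -math.log10(t_mean) / k

uniform = apparent_chlorophyll([400, 400])   # uniform leaf: no bias
patchy = apparent_chlorophyll([200, 600])    # same mean chl, lower estimate
```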

  11. Prelaunch absolute radiometric calibration of the reflective bands on the LANDSAT-4 protoflight Thematic Mapper

    NASA Technical Reports Server (NTRS)

    Barker, J. L.; Ball, D. L.; Leung, K. C.; Walker, J. A.

    1984-01-01

    The results of the absolute radiometric calibration of the LANDSAT 4 thematic mapper, as determined during pre-launch tests with a 122 cm integrating sphere, are presented. Detailed results for the best calibration of the protoflight TM are given, as well as summaries of other tests performed on the sensor. The dynamic range of the TM is within a few percent of that required in all bands, except bands 1 and 3. Three detectors failed to pass the minimum SNR specified for their respective bands: band 5, channel 3 (dead), and band 2, channels 2 and 4 (noisy or slow response). Estimates of the absolute calibration accuracy for the TM show that the detectors are typically calibrated to 5% absolute error for the reflective bands; 10% full-scale accuracy was specified. Ten tests performed to transfer the detector absolute calibration to the internal calibrator show a 5% range at full scale in the transfer calibration; however, in two cases band 5 showed a 10% and a 7% difference.

  12. Error Representation in Time For Compressible Flow Calculations

    NASA Technical Reports Server (NTRS)

    Barth, Timothy J.

    2010-01-01

    Time plays an essential role in most real-world fluid mechanics problems, e.g. turbulence, combustion, acoustic noise, moving geometries, and blast waves. Time-dependent calculations now dominate the computational landscape at the various NASA Research Centers, but the accuracy of these computations is often not well understood. In this presentation, we investigate error representation (and error control) for time-periodic problems as a prelude to investigating the feasibility of error control for stationary statistics and space-time averages. These statistics and averages (e.g. time-averaged lift and drag forces) are often the output quantities sought by engineers. For systems such as the Navier-Stokes equations, pointwise error estimates deteriorate rapidly with increasing Reynolds number, while statistics and averages may remain well behaved.

  13. Total-pressure-tube averaging in pulsating flows.

    NASA Technical Reports Server (NTRS)

    Krause, L. N.

    1973-01-01

    A number of total-pressure tubes were tested in a nonsteady flow generator in which the fraction of period that pressure is a maximum is approximately 0.8, thereby simulating turbomachine-type flow conditions. The tests were performed at a pressure level of 1 bar, for Mach numbers up to near 1, and frequencies up to 3 kHz. Most of the tubes indicated a pressure which was higher than the true average. Organ-pipe resonances which further increased the indicated pressure were encountered within the tubes at discrete frequencies. There was no obvious combination of tube diameter, length, and/or geometry variation used in the tests which resulted in negligible averaging error. A pneumatic-type probe was found to measure true average pressure, and is suggested as a comparison instrument to determine whether nonlinear averaging effects are serious in unknown pulsation profiles.

  14. Absolute reliability of hamstring to quadriceps strength imbalance ratios calculated using peak torque, joint angle-specific torque and joint ROM-specific torque values.

    PubMed

    Ayala, F; De Ste Croix, M; Sainz de Baranda, P; Santonja, F

    2012-11-01

    The main purpose of this study was to determine the absolute reliability of conventional (H/Q(CONV)) and functional (H/Q(FUNC)) hamstring to quadriceps strength imbalance ratios calculated using peak torque values, 3 different joint angle-specific torque values (10°, 20° and 30° of knee flexion) and 4 different joint ROM-specific average torque values (0-10°, 11-20°, 21-30° and 0-30° of knee flexion) adopting a prone position in recreational athletes. A total of 50 recreational athletes completed the study. H/Q(CONV) and H/Q(FUNC) ratios were recorded at 3 different angular velocities (60, 180 and 240°/s) on 3 different occasions with a 72-96 h rest interval between consecutive testing sessions. Absolute reliability was examined through the typical percentage error (CV(TE)), the percentage change in the mean (CM) and intraclass correlations (ICC), as well as their respective confidence limits. H/Q(CONV) and H/Q(FUNC) ratios calculated using peak torque values showed moderate reliability, with CM scores lower than 2.5%, CV(TE) values ranging from 16 to 20% and ICC values ranging from 0.3 to 0.7. However, poor absolute reliability scores were shown for H/Q(CONV) and H/Q(FUNC) ratios calculated using joint angle-specific torque values and joint ROM-specific average torque values, especially for H/Q(FUNC) ratios (CM: 1-23%; CV(TE): 22-94%; ICC: 0.1-0.7). Therefore, the present study suggests that the CV(TE) values reported for H/Q(CONV) and H/Q(FUNC) (≈18%) calculated using peak torque values may be sensitive enough to detect the large changes usually observed after rehabilitation programmes, but not acceptable for examining the effect of preventive training programmes in healthy individuals. The clinical reliability of hamstring to quadriceps strength ratios calculated using joint angle-specific torque values and joint ROM-specific average torque values is questioned and should be re-evaluated in future research studies.
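    The absolute reliability statistics used here can be computed from two test-retest sessions: the typical error is SD(difference)/sqrt(2), expressed as a percentage of the grand mean to give CV(TE), alongside the percentage change in the mean (CM). A sketch with hypothetical ratio data (the values are made up; the ICC is omitted for brevity):

```python
import math

def typical_error_stats(session1, session2):
    """Absolute reliability from two test-retest sessions:
    CV(TE) = 100 * (SD(diff)/sqrt(2)) / grand mean,
    CM     = 100 * mean(diff) / mean(session1)."""
    diffs = [b - a for a, b in zip(session1, session2)]
    n = len(diffs)
    mean_d = sum(diffs) / n
    sd_d = math.sqrt(sum((d - mean_d) ** 2 for d in diffs) / (n - 1))
    te = sd_d / math.sqrt(2)                       # typical error
    grand = (sum(session1) + sum(session2)) / (2 * n)
    cv_te = 100 * te / grand
    cm = 100 * mean_d / (sum(session1) / n)
    return cv_te, cm

# Hypothetical H/Q ratios for five athletes on two testing days.
cv, cm = typical_error_stats([0.60, 0.55, 0.70, 0.65, 0.58],
                             [0.62, 0.53, 0.72, 0.66, 0.57])
```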

  15. AN ACCURATE NEW METHOD OF CALCULATING ABSOLUTE MAGNITUDES AND K-CORRECTIONS APPLIED TO THE SLOAN FILTER SET

    SciTech Connect

    Beare, Richard; Brown, Michael J. I.; Pimbblet, Kevin

    2014-12-20

    We describe an accurate new method for determining absolute magnitudes, and hence also K-corrections, that is simpler than most previous methods, being based on a quadratic function of just one suitably chosen observed color. The method relies on the extensive and accurate new set of 129 empirical galaxy template spectral energy distributions from Brown et al. A key advantage of our method is that we can reliably estimate random errors in computed absolute magnitudes due to galaxy diversity, photometric error and redshift error. We derive K-corrections for the five Sloan Digital Sky Survey filters and provide parameter tables for use by the astronomical community. Using the NYU Value-Added Galaxy Catalog, we compare our K-corrections with those from kcorrect. Our K-corrections produce absolute magnitudes that are generally in good agreement with kcorrect. Absolute griz magnitudes differ by less than 0.02 mag and those in the u band by ∼0.04 mag. The evolution of rest-frame colors as a function of redshift is better behaved using our method, with relatively few galaxies being assigned anomalously red colors and a tight red sequence being observed across the whole 0.0 < z < 0.5 redshift range.
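    In outline, the method evaluates M = m - DM(z) - K(z, color) with K a quadratic in a single observed color. A sketch with placeholder coefficients and a crude low-redshift luminosity distance approximation (the published method uses redshift-dependent coefficient tables and a proper cosmology, neither of which is reproduced here):

```python
import math

def absolute_magnitude(m_app, z, color, coeffs, H0=70.0, c_kms=299792.458):
    """M = m - DM(z) - K(z, color). K is quadratic in one observed color
    (the form the method uses); the coefficients and the low-z luminosity
    distance d_L ~ (c z / H0)(1 + z/2) are illustrative placeholders."""
    a0, a1, a2 = coeffs
    k_corr = a0 + a1 * color + a2 * color ** 2
    d_l_mpc = (c_kms * z / H0) * (1 + 0.5 * z)
    dm = 5 * math.log10(d_l_mpc * 1e5)   # DM = 5 log10(d_L / 10 pc)
    return m_app - dm - k_corr

# Hypothetical galaxy: r = 18.0 at z = 0.1 with observed color 1.2.
M = absolute_magnitude(18.0, 0.1, 1.2, coeffs=(-0.02, 0.10, 0.05))
```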

  16. Correct averaging in transmission radiography: Analysis of the inverse problem

    NASA Astrophysics Data System (ADS)

    Wagner, Michael; Hampel, Uwe; Bieberle, Martina

    2016-05-01

    Transmission radiometry is frequently used in industrial measurement processes as a means to assess the thickness or composition of a material. A common problem encountered in such applications is the so-called dynamic bias error, which results from averaging beam intensities over time while the material distribution changes. We recently reported on a method to overcome the associated measurement error by solving an inverse problem, which in principle restores the exact average attenuation by considering the Poisson statistics of the underlying particle or photon emission process. In this paper we present a detailed analysis of the inverse problem and its optimal regularized numerical solution. As a result we derive an optimal parameter configuration for the inverse problem.
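    The dynamic bias error has a simple origin: the logarithm that converts intensity to attenuation does not commute with time averaging, so the attenuation of the mean intensity underestimates the mean attenuation whenever the material distribution fluctuates (Jensen's inequality). A two-state numerical sketch (all numbers illustrative):

```python
import math

# Time-varying material thickness modulates the transmitted intensity
# I(t) = I0 * exp(-mu * d(t)); here the thickness alternates between
# two states with equal dwell time.
I0, mu = 1.0e6, 0.5
thickness = [1.0, 3.0]                                   # cm
intensities = [I0 * math.exp(-mu * d) for d in thickness]

# Naive estimate: attenuation computed from the time-averaged intensity.
naive = -math.log(sum(intensities) / len(intensities) / I0)

# Exact value: time average of the instantaneous attenuation.
exact = sum(-math.log(i / I0) for i in intensities) / len(intensities)

bias = exact - naive   # dynamic bias error; always >= 0 by Jensen
```

The inverse problem analyzed in the paper restores `exact` from time-averaged count data by modeling the Poisson emission statistics; the sketch only exhibits the bias the naive average incurs.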

  17. We need to talk about error: causes and types of error in veterinary practice.

    PubMed

    Oxtoby, C; Ferguson, E; White, K; Mossop, L

    2015-10-31

    Patient safety research in human medicine has identified the causes and common types of medical error and subsequently informed the development of interventions which mitigate harm, such as the WHO's safe surgery checklist. No such evidence is available to the veterinary profession. This study therefore aims to identify the causes and types of errors in veterinary practice, and presents an evidence-based system for their classification. Causes of error were identified from a retrospective record review of 678 claims to the profession's leading indemnity insurer, and nine focus groups (average N per group=8) with vets, nurses and support staff were conducted using the critical incident technique. Reason's (2000) Swiss cheese model of error was used to inform the interpretation of the data. Types of error were extracted from 2978 claims records reported between 2009 and 2013. The major classes of error causation were identified, with mistakes involving surgery the most common type of error. The results were triangulated with findings from the medical literature and highlight the importance of cognitive limitations, deficiencies in non-technical skills, and a systems approach to veterinary error. PMID:26489997

  19. [Medical device use errors].

    PubMed

    Friesdorf, Wolfgang; Marsolek, Ingo

    2008-01-01

    Medical devices define our everyday patient treatment processes. But despite their beneficial effect, every use can also lead to harm. Use errors are thus often explained by human failure. But human errors can never be completely eliminated, especially in work processes as complex as those in medicine, which often involve time pressure. Therefore we need error-tolerant work systems in which potential problems are identified and solved as early as possible. In this context human engineering uses the TOP principle: technological before organisational and then person-related solutions. But especially in everyday medical work we realise that error-prone usability concepts can often only be counterbalanced by organisational or person-related measures, so human failure is pre-programmed. In addition, many medical workplaces are a somewhat chaotic accumulation of individual devices with totally different user interaction concepts. There is a lack not only of holistic workplace concepts, but of holistic process and system concepts as well. This can only be achieved through the co-operation of producers, healthcare providers and clinical users, by systematically analyzing and iteratively optimizing the underlying treatment processes from both a technological and an organizational perspective. What we need is a joint platform like medilab V of the TU Berlin, in which the entire medical treatment chain can be simulated in order to discuss, experiment and model it; this is a key to a safe and efficient healthcare system of the future. PMID:19213452

  20. Orwell's Instructive Errors

    ERIC Educational Resources Information Center

    Julian, Liam

    2009-01-01

    In this article, the author talks about George Orwell, his instructive errors, and the manner in which Orwell pierced worthless theory, faced facts and defended decency (with fluctuating success), and largely ignored the tradition of accumulated wisdom that has rendered him a timeless teacher--one whose inadvertent lessons, while infrequently…

  1. Help prevent hospital errors

    MedlinePlus


  2. The dynamics of error growth in a quasigeostrophic channel model

    NASA Technical Reports Server (NTRS)

    Straus, David M.

    1988-01-01

    The objective of the paper is to determine the extent to which baroclinic instability contributes to the growth of errors in simple, yet realistic models of atmospheric flow. The model used here is a two-level quasi-geostrophic channel model. Results of two predictability experiments are reported. In one experiment, the initial condition perturbation was confined to the highest wavenumbers and had an energy of 1 percent of the climatological energy of the model for these scales. In the other experiment, perturbations were put only in the planetary wave and had the same strength relative to climatology as in the first experiment, leading to much larger absolute errors.

  3. Challenge and Error: Critical Events and Attention-Related Errors

    ERIC Educational Resources Information Center

    Cheyne, James Allan; Carriere, Jonathan S. A.; Solman, Grayden J. F.; Smilek, Daniel

    2011-01-01

    Attention lapses resulting from reactivity to task challenges and their consequences constitute a pervasive factor affecting everyday performance errors and accidents. A bidirectional model of attention lapses (error ↔ attention-lapse: Cheyne, Solman, Carriere, & Smilek, 2009) argues that errors beget errors by generating attention…

  4. Ultraspectral Sounding Retrieval Error Budget and Estimation

    NASA Technical Reports Server (NTRS)

    Zhou, Daniel K.; Larar, Allen M.; Liu, Xu; Smith, William L.; Strow, L. Larrabee; Yang, Ping

    2011-01-01

    The ultraspectral infrared radiances obtained from satellite observations provide atmospheric, surface, and/or cloud information. Measurements of the thermodynamic state are intended for the initialization of weather and climate models. Great effort has been given to retrieving and validating these atmospheric, surface, and/or cloud properties. The Error Consistency Analysis Scheme (ECAS), through fast radiative transfer model (RTM) forward and inverse calculations, has been developed to estimate the error budget in terms of absolute and standard deviation of differences in both the spectral radiance and retrieved geophysical parameter domains. The retrieval error is assessed through ECAS without the assistance of other independent measurements such as radiosonde data. ECAS re-evaluates instrument random noise and establishes the link between radiometric accuracy and retrieved geophysical parameter accuracy. ECAS can be applied to measurements of any ultraspectral instrument and any retrieval scheme with an associated RTM. In this paper, ECAS is described and demonstrated with measurements of the METOP-A satellite Infrared Atmospheric Sounding Interferometer (IASI).

  5. Inborn Errors of Metabolism.

    PubMed

    Ezgu, Fatih

    2016-01-01

    Inborn errors of metabolism are single-gene disorders resulting from defects in the biochemical pathways of the body. Although these disorders are individually rare, collectively they account for a significant portion of childhood disability and deaths. Most of the disorders are inherited as autosomal recessive, whereas autosomal dominant and X-linked disorders are also present. The clinical signs and symptoms arise from the accumulation of the toxic substrate, deficiency of the product, or both. Depending on the residual activity of the deficient enzyme, the onset of the clinical picture may vary from the newborn period up until adulthood. Hundreds of disorders have been described so far, and there is considerable clinical overlap between certain inborn errors. As a result, the definite diagnosis of inborn errors depends on enzyme assays or genetic tests. Especially during recent years, significant achievements have been made in the biochemical and genetic diagnosis of inborn errors. Techniques such as tandem mass spectrometry and gas chromatography for biochemical diagnosis, and microarrays and next-generation sequencing for genetic diagnosis, have enabled rapid and accurate diagnosis. These achievements have also enabled newborn screening and prenatal diagnosis. Parallel to the development of diagnostic methods, significant progress has also been made in treatment. Treatment approaches such as special diets, enzyme replacement therapy, substrate inhibition, and organ transplantation have been widely used. It is obvious that, with the help of the preclinical and clinical research carried out on inborn errors, better diagnostic methods and better treatment approaches will very likely become available.

  6. Absolute Position Sensing Based on a Robust Differential Capacitive Sensor with a Grounded Shield Window.

    PubMed

    Bai, Yang; Lu, Yunfeng; Hu, Pengcheng; Wang, Gang; Xu, Jinxin; Zeng, Tao; Li, Zhengkun; Zhang, Zhonghua; Tan, Jiubin

    2016-01-01

    A simple differential capacitive sensor is provided in this paper to measure the absolute positions of length measuring systems. By utilizing a shield window inside the differential capacitor, the measurement range and linearity range of the sensor can reach several millimeters. What is more interesting is that this differential capacitive sensor is only sensitive to one translational degree of freedom (DOF) movement, and immune to the vibration along the other two translational DOFs. In the experiment, we used a novel circuit based on an AC capacitance bridge to directly measure the differential capacitance value. The experimental result shows that this differential capacitive sensor has a sensitivity of 2 × 10⁻⁴ pF/μm with 0.08 μm resolution. The measurement range of this differential capacitive sensor is 6 mm, and the linearity error is less than 0.01% over the whole absolute position measurement range. PMID:27187393
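    The quoted figures imply the capacitance scales involved, which can be checked with a line of arithmetic each. This is a back-of-the-envelope sketch from the abstract's numbers, not anything from the paper itself.

```python
# Published figures from the abstract
sensitivity_pF_per_um = 2e-4   # pF per micrometre of displacement
resolution_um = 0.08           # smallest resolvable displacement
range_um = 6000.0              # 6 mm absolute measurement range
linearity = 1e-4               # linearity error below 0.01 % of full range

# Total capacitance swing across the full 6 mm range
delta_C_range_pF = sensitivity_pF_per_um * range_um            # 1.2 pF
# Capacitance step for one resolution element, converted pF -> aF
delta_C_step_aF = sensitivity_pF_per_um * resolution_um * 1e6  # 16 aF
# Worst-case deviation from a straight-line response
max_linearity_dev_um = linearity * range_um                    # 0.6 um
```

    The 16 aF step illustrates why a bridge circuit is needed: the readout must resolve attofarad-level changes on a picofarad-level capacitor.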

  7. Absolute Position Sensing Based on a Robust Differential Capacitive Sensor with a Grounded Shield Window

    PubMed Central

    Bai, Yang; Lu, Yunfeng; Hu, Pengcheng; Wang, Gang; Xu, Jinxin; Zeng, Tao; Li, Zhengkun; Zhang, Zhonghua; Tan, Jiubin

    2016-01-01

    A simple differential capacitive sensor is provided in this paper to measure the absolute positions of length measuring systems. By utilizing a shield window inside the differential capacitor, the measurement range and linearity range of the sensor can reach several millimeters. What is more interesting is that this differential capacitive sensor is only sensitive to one translational degree of freedom (DOF) movement, and immune to the vibration along the other two translational DOFs. In the experiment, we used a novel circuit based on an AC capacitance bridge to directly measure the differential capacitance value. The experimental result shows that this differential capacitive sensor has a sensitivity of 2 × 10−4 pF/μm with 0.08 μm resolution. The measurement range of this differential capacitive sensor is 6 mm, and the linearity error is less than 0.01% over the whole absolute position measurement range. PMID:27187393

  8. The solar absolute spectral irradiance 1150-3173 A - May 17, 1982

    NASA Technical Reports Server (NTRS)

    Mount, G. H.; Rottman, G. J.

    1983-01-01

    The full-disk solar spectral irradiance in the spectral range 1150-3173 A was obtained from a rocket observation above White Sands Missile Range, NM, on May 17, 1982, halfway in time between solar maximum and solar minimum. Comparison with measurements made during solar maximum in 1980 indicates a large decrease in the absolute solar irradiance at wavelengths below 1900 A to approximately solar minimum values. No change above 1900 A from solar maximum to this flight was observed within the errors of the measurements. Irradiance values lower than the Broadfoot results are found in the 2100-2500 A spectral range, but agreement with Broadfoot between 2500 and 3173 A is excellent. The absolute calibration of the instruments for this flight was performed at the National Bureau of Standards Synchrotron Radiation Facility, which significantly improves the calibration of solar measurements made in this spectral region.

  9. Absolute response of Fuji imaging plate detectors to picosecond-electron bunches

    SciTech Connect

    Zeil, K.; Kraft, S. D.; Jochmann, A.; Kroll, F.; Jahr, W.; Schramm, U.; Karsch, L.; Pawelke, J.; Hidding, B.; Pretzler, G.

    2010-01-15

    The characterization of the absolute number of electrons generated by laser wakefield acceleration often relies on absolutely calibrated FUJI imaging plates (IP), although their validity in the regime of extreme peak currents is untested. Here, we present an extensive study of the dependence of the sensitivity of BAS-SR and BAS-MS IPs on picosecond electron bunches of varying charge of up to 60 pC, performed at the electron accelerator ELBE, using peak intensities about three orders of magnitude higher than in prior studies. We demonstrate that the response of the IPs shows no saturation effect and that the BAS-SR IP sensitivity of 0.0081 photostimulated luminescence per electron agrees surprisingly well with data from previous works. However, the use of an identical readout system and handling procedures turned out to be crucial and, if unnoticed, may be an important error source.

  10. Absolute brightness temperature measurements at 3.5-mm wavelength. [of sun, Venus, Jupiter and Saturn

    NASA Technical Reports Server (NTRS)

    Ulich, B. L.; Rhodes, P. J.; Davis, J. H.; Hollis, J. M.

    1980-01-01

    Careful observations have been made at 86.1 GHz to derive the absolute brightness temperatures of the sun (7914 ± 192 K), Venus (357.5 ± 13.1 K), Jupiter (179.4 ± 4.7 K), and Saturn (153.4 ± 4.8 K) with a standard error of about three percent. This is a significant improvement in accuracy over previous results at millimeter wavelengths. A stable transmitter and novel superheterodyne receiver were constructed and used to determine the effective collecting area of the Millimeter Wave Observatory (MWO) 4.9-m antenna relative to a previously calibrated standard gain horn. The thermal scale was set by calibrating the radiometer with carefully constructed and tested hot and cold loads. The brightness temperatures may be used to establish an absolute calibration scale and to determine the antenna aperture and beam efficiencies of other radio telescopes at 3.5-mm wavelength.
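    The quoted "about three percent" standard error can be verified directly from the four value/error pairs in the abstract; the sketch below is just that arithmetic.

```python
# (brightness temperature K, standard error K) from the abstract
measurements = {
    "Sun":     (7914.0, 192.0),
    "Venus":   (357.5, 13.1),
    "Jupiter": (179.4, 4.7),
    "Saturn":  (153.4, 4.8),
}

# Relative standard error of each measurement, in percent
relative_error_pct = {body: 100.0 * err / temp
                      for body, (temp, err) in measurements.items()}
mean_pct = sum(relative_error_pct.values()) / len(relative_error_pct)
```

    The individual values range from roughly 2.4 % (Sun) to 3.7 % (Venus), with a mean near 3 %, consistent with the stated figure.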

  11. Absolute Position Sensing Based on a Robust Differential Capacitive Sensor with a Grounded Shield Window.

    PubMed

    Bai, Yang; Lu, Yunfeng; Hu, Pengcheng; Wang, Gang; Xu, Jinxin; Zeng, Tao; Li, Zhengkun; Zhang, Zhonghua; Tan, Jiubin

    2016-05-11

    A simple differential capacitive sensor is provided in this paper to measure the absolute positions of length measuring systems. By utilizing a shield window inside the differential capacitor, the measurement range and linearity range of the sensor can reach several millimeters. What is more interesting is that this differential capacitive sensor is only sensitive to one translational degree of freedom (DOF) movement, and immune to the vibration along the other two translational DOFs. In the experiment, we used a novel circuit based on an AC capacitance bridge to directly measure the differential capacitance value. The experimental result shows that this differential capacitive sensor has a sensitivity of 2 × 10⁻⁴ pF/μm with 0.08 μm resolution. The measurement range of this differential capacitive sensor is 6 mm, and the linearity error is less than 0.01% over the whole absolute position measurement range.

  12. Absolute intensity calibration of the 32-channel heterodyne radiometer on experimental advanced superconducting tokamak

    SciTech Connect

    Liu, X.; Zhao, H. L.; Liu, Y. Li, E. Z.; Han, X.; Ti, A.; Hu, L. Q.; Zhang, X. D.; Domier, C. W.; Luhmann, N. C.

    2014-09-15

    This paper presents the results of the in situ absolute intensity calibration for the 32-channel heterodyne radiometer on the experimental advanced superconducting tokamak. The hot/cold load method is adopted, and the coherent averaging technique is employed to improve the signal-to-noise ratio. Measured spectra and electron temperature profiles are compared with those from an independently calibrated Michelson interferometer, and there is relatively good agreement between the results from the two different systems.

  13. Absolute value optimization to estimate phase properties of stochastic time series

    NASA Technical Reports Server (NTRS)

    Scargle, J. D.

    1977-01-01

    Most existing deconvolution techniques are incapable of determining phase properties of wavelets from time series data; to assure a unique solution, minimum phase is usually assumed. It is demonstrated, for moving average processes of order one, that deconvolution filtering using the absolute value norm provides an estimate of the wavelet shape that has the correct phase character when the random driving process is nonnormal. Numerical tests show that this result probably applies to more general processes.

  14. Energy dispersive X-ray analysis on an absolute scale in scanning transmission electron microscopy.

    PubMed

    Chen, Z; D'Alfonso, A J; Weyland, M; Taplin, D J; Allen, L J; Findlay, S D

    2015-10-01

    We demonstrate absolute scale agreement between the number of X-ray counts in energy dispersive X-ray spectroscopy using an atomic-scale coherent electron probe and first-principles simulations. Scan-averaged spectra were collected across a range of thicknesses with precisely determined and controlled microscope parameters. Ionization cross-sections were calculated using the quantum excitation of phonons model, incorporating dynamical (multiple) electron scattering, which is seen to be important even for very thin specimens.

  15. Assessment of errors in static electrical impedance tomography with adjacent and trigonometric current patterns.

    PubMed

    Kolehmainen, V; Vauhkonen, M; Karjalainen, P A; Kaipio, J P

    1997-11-01

    In electrical impedance tomography (EIT), difference imaging is often preferred over static imaging. This is because of the many unknowns in the forward modelling which make it difficult to obtain reliable absolute resistivity estimates. However, static imaging and absolute resistivity values are needed in some potential applications of EIT. In this paper we demonstrate by simulation the effects of different error components that are included in the reconstruction of static EIT images. All simulations are carried out in two dimensions with the so-called complete electrode model. Errors that are considered are the modelling error in the boundary shape of an object, errors in the electrode sizes and localizations and errors in the contact impedances under the electrodes. Results using both adjacent and trigonometric current patterns are given.

  16. THE ABSOLUTE MAGNITUDE OF RRc VARIABLES FROM STATISTICAL PARALLAX

    SciTech Connect

    Kollmeier, Juna A.; Burns, Christopher R.; Thompson, Ian B.; Preston, George W.; Crane, Jeffrey D.; Madore, Barry F.; Morrell, Nidia; Prieto, José L.; Shectman, Stephen; Simon, Joshua D.; Villanueva, Edward; Szczygieł, Dorota M.; Gould, Andrew; Sneden, Christopher; Dong, Subo

    2013-09-20

    We present the first definitive measurement of the absolute magnitude of RR Lyrae c-type variable stars (RRc) determined purely from statistical parallax. We use a sample of 242 RRc variables selected from the All Sky Automated Survey for which high-quality light curves, photometry, and proper motions are available. We obtain high-resolution echelle spectra for these objects to determine radial velocities and abundances as part of the Carnegie RR Lyrae Survey. We find that M_V,RRc = 0.59 ± 0.10 at a mean metallicity of [Fe/H] = −1.59. This is to be compared with previous estimates for RRab stars (M_V,RRab = 0.76 ± 0.12) and the only direct measurement of an RRc absolute magnitude (RZ Cephei, M_V,RRc = 0.27 ± 0.17). We find the bulk velocity of the halo relative to the Sun to be (W_π, W_θ, W_z) = (12.0, −209.9, 3.0) km s⁻¹ in the radial, rotational, and vertical directions, with dispersions (σ_Wπ, σ_Wθ, σ_Wz) = (150.4, 106.1, 96.0) km s⁻¹. For the disk, we find (W_π, W_θ, W_z) = (13.0, −42.0, −27.3) km s⁻¹ relative to the Sun, with dispersions (σ_Wπ, σ_Wθ, σ_Wz) = (67.7, 59.2, 54.9) km s⁻¹. Finally, as a byproduct of our statistical framework, we are able to demonstrate that UCAC2 proper-motion errors are significantly overestimated, as verified by UCAC4.
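    An absolute magnitude calibration like this is what turns apparent brightness into distance. The standard distance-modulus relation (not code from the paper; the apparent magnitude below is a hypothetical example) can be sketched as:

```python
def distance_pc(m_apparent, m_absolute):
    # Distance modulus: m - M = 5*log10(d) - 5, with d in parsecs
    return 10.0 ** ((m_apparent - m_absolute + 5.0) / 5.0)

# Hypothetical RRc star observed at V = 15.59; with the paper's M_V = 0.59
# the distance modulus is 15.0, i.e. a distance of 10 kpc
d_pc = distance_pc(15.59, 0.59)
```

    A 0.10 mag uncertainty in M_V corresponds to roughly a 5 % uncertainty in any distance derived this way.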

  17. Errors and mistakes in breast ultrasound diagnostics.

    PubMed

    Jakubowski, Wiesław; Dobruch-Sobczak, Katarzyna; Migda, Bartosz

    2012-09-01

    Sonomammography is often the first additional examination performed in the diagnostics of breast diseases. The development of ultrasound imaging techniques, particularly the introduction of high frequency transducers, matrix transducers, harmonic imaging and finally, elastography, has contributed to the improvement of breast disease diagnostics. Nevertheless, as in each imaging method, there are errors and mistakes resulting from the technical limitations of the method, breast anatomy (fibrous remodeling), insufficient sensitivity and, in particular, specificity. Errors in breast ultrasound diagnostics can be divided into those that are impossible to avoid and those that can potentially be reduced. In this article the most frequent errors in ultrasound are presented, including those caused by the presence of artifacts resulting from volumetric averaging in the near and far field, artifacts in cysts or in dilated lactiferous ducts (reverberations, comet tail artifacts, lateral beam artifacts), and improper setting of general enhancement, the time gain curve, or range. Errors dependent on the examiner, resulting in the wrong BIRADS-usg classification, are divided into negative and positive errors. The sources of these errors are listed. Methods of minimizing the number of errors are discussed, including those related to the appropriate examination technique, taking into account data from case history and the use of the greatest possible number of additional options such as: harmonic imaging, color and power Doppler, and elastography. Examples of errors resulting from the technical conditions of the method are presented, along with those dependent on the examiner, which are related to the great diversity and variation of ultrasound images of pathological breast lesions.

  18. On the averaging of ratios of specific heats in a multicomponent planetary atmosphere

    NASA Technical Reports Server (NTRS)

    Dubisch, R.

    1974-01-01

    The use of adiabatic relations in the calculation of planetary atmospheres requires knowledge of the ratio of specific heats of a mixture of gases under various pressure and temperature conditions. It is shown that errors introduced by simple averaging of the ratio of specific heats in a multicomponent atmosphere can be roughly 0.4%. Therefore, the gamma-averaging error can become important when integrating through the atmosphere to a large depth.
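    The averaging pitfall the abstract describes can be illustrated in a few lines: for an ideal-gas mixture the additive quantities are the specific heats themselves, so the correct γ is the ratio of averaged Cp and Cv, not the average of the component ratios. The H2/He composition below is a hypothetical Jupiter-like example, not the paper's case, so the size of the error differs from the quoted 0.4%.

```python
def gamma_mixture(x, cp, cv):
    # Average the heat capacities (the additive quantities), then take the ratio
    cp_mix = sum(xi * ci for xi, ci in zip(x, cp))
    cv_mix = sum(xi * ci for xi, ci in zip(x, cv))
    return cp_mix / cv_mix

# Hypothetical mixture: 90 % H2 (ideal diatomic) + 10 % He (monatomic),
# molar heat capacities in units of the gas constant R
x  = [0.9, 0.1]
cp = [3.5, 2.5]
cv = [2.5, 1.5]

gamma_correct = gamma_mixture(x, cp, cv)                        # 3.4/2.4
gamma_naive = sum(xi * (c / v) for xi, c, v in zip(x, cp, cv))  # mole-fraction average of ratios
rel_error = (gamma_naive - gamma_correct) / gamma_correct       # ~0.7 % here
```

    Even a sub-percent error in γ matters because it compounds when the adiabat is integrated over many scale heights of atmosphere.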

  19. Tropical errors and convection

    NASA Astrophysics Data System (ADS)

    Bechtold, P.; Bauer, P.; Engelen, R. J.

    2012-12-01

    Tropical convection is analysed in the ECMWF Integrated Forecast System (IFS) through tropical errors and their evolution during the last decade as a function of model resolution and model changes. As the characterization of these errors is particularly difficult over tropical oceans due to sparse in situ upper-air data, more weight is given in the analysis to the underlying forecast model than in the middle latitudes. Therefore, special attention is paid to available near-surface observations and to comparisons with analyses from other centers. There is a systematic lack of low-level wind convergence in the Intertropical Convergence Zone (ITCZ) in the IFS, leading to a spindown of the Hadley cell. Critical areas with strong cross-equatorial flow and large wind errors are the Indian Ocean, with large interannual variations in forecast errors, and the East Pacific, with persistent systematic errors that have evolved little during the last decade. The analysis quality in the East Pacific is affected by observation errors inherent to the atmospheric motion vector wind product. The model's tropical climate and its variability and teleconnections are also evaluated, with a particular focus on the Madden-Julian Oscillation (MJO) during the Year of Tropical Convection (YOTC). The model is shown to reproduce the observed tropical large-scale wave spectra and teleconnections, but overestimates the precipitation during the South-East Asian summer monsoon. The recent improvements in tropical precipitation, convectively coupled wave and MJO predictability are shown to be strongly related to improvements in the convection parameterization that realistically represents the convection sensitivity to environmental moisture, and the large-scale forcing due to the use of strong entrainment and a variable adjustment time-scale.
There is however a remaining slight moistening tendency and low-level wind imbalance in the model that is responsible for the Asian Monsoon bias and for too

  20. Speech Errors across the Lifespan

    ERIC Educational Resources Information Center

    Vousden, Janet I.; Maylor, Elizabeth A.

    2006-01-01

    Dell, Burger, and Svec (1997) proposed that the proportion of speech errors classified as anticipations (e.g., "moot and mouth") can be predicted solely from the overall error rate, such that the greater the error rate, the lower the anticipatory proportion (AP) of errors. We report a study examining whether this effect applies to changes in error…

  1. Control by model error estimation

    NASA Technical Reports Server (NTRS)

    Likins, P. W.; Skelton, R. E.

    1976-01-01

    Modern control theory relies upon the fidelity of the mathematical model of the system. Truncated modes, external disturbances, and parameter errors in linear system models are corrected by augmenting to the original system of equations an 'error system' which is designed to approximate the effects of such model errors. A Chebyshev error system is developed for application to the Large Space Telescope (LST).

  2. Marking Errors: A Simple Strategy

    ERIC Educational Resources Information Center

    Timmons, Theresa Cullen

    1987-01-01

    Indicates that using highlighters to mark errors produced a 76% class improvement in removing comma errors and a 95.5% improvement in removing apostrophe errors. Outlines two teaching procedures, to be followed before introducing this tool to the class, that enable students to remove errors at this effective rate. (JD)

  3. Automatic Error Analysis Using Intervals

    ERIC Educational Resources Information Center

    Rothwell, E. J.; Cloud, M. J.

    2012-01-01

    A technique for automatic error analysis using interval mathematics is introduced. A comparison to standard error propagation methods shows that in cases involving complicated formulas, the interval approach gives comparable error estimates with much less effort. Several examples are considered, and numerical errors are computed using the INTLAB…
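    The interval idea can be sketched in a few lines: every arithmetic operation returns bounds guaranteed to contain the exact result, so measurement or rounding error propagates automatically through a formula. This toy class is a stand-in for illustration only, not INTLAB or the authors' method.

```python
class Interval:
    # Minimal interval-arithmetic type: [lo, hi] bounds the true value
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi

    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __mul__(self, other):
        # The product's bounds are the extremes over all endpoint products
        p = [self.lo * other.lo, self.lo * other.hi,
             self.hi * other.lo, self.hi * other.hi]
        return Interval(min(p), max(p))

    def width(self):
        return self.hi - self.lo

# (2.0 +/- 0.1) * (3.0 +/- 0.2): the result interval bounds the error
z = Interval(1.9, 2.1) * Interval(2.8, 3.2)   # [5.32, 6.72]
```

    Unlike first-order error propagation, the bounds stay rigorous through complicated formulas, which is the advantage the abstract highlights.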

  4. Neural Correlates of Reach Errors

    PubMed Central

    Hashambhoy, Yasmin; Rane, Tushar; Shadmehr, Reza

    2005-01-01

    Reach errors may be broadly classified into errors arising from unpredictable changes in target location, called target errors, and errors arising from miscalibration of internal models, called execution errors. Execution errors may be caused by miscalibration of dynamics (e.g., when a force field alters limb dynamics) or by miscalibration of kinematics (e.g., when prisms alter visual feedback). While all types of errors lead to similar online corrections, we found that the motor system showed strong trial-by-trial adaptation in response to random execution errors but not in response to random target errors. We used fMRI and a compatible robot to study brain regions involved in processing each kind of error. Both kinematic and dynamic execution errors activated regions along the central and the post-central sulci and in lobules V, VI, and VIII of the cerebellum, making these areas possible sites of plastic changes in internal models for reaching. Only activity related to kinematic errors extended into parietal area 5. These results are inconsistent with the idea that kinematics and dynamics of reaching are computed in separate neural entities. In contrast, only target errors caused increased activity in the striatum and the posterior superior parietal lobule. The cerebellum and motor cortex were as strongly activated as with execution errors. These findings indicate a neural and behavioral dissociation between errors that lead to switching of behavioral goals, and errors that lead to adaptation of internal models of limb dynamics and kinematics. PMID:16251440

  5. The Insufficiency of Error Analysis

    ERIC Educational Resources Information Center

    Hammarberg, B.

    1974-01-01

    The position here is that error analysis is inadequate, particularly from the language-teaching point of view. Non-errors must be considered in specifying the learner's current command of the language, its limits, and his learning tasks. A cyclic procedure of elicitation and analysis, to secure evidence of errors and non-errors, is outlined.…

  6. A Simple Approach to Experimental Errors

    ERIC Educational Resources Information Center

    Phillips, M. D.

    1972-01-01

    Classifies experimental error into two main groups: systematic error (instrument, personal, inherent, and variational errors) and random errors (reading and setting errors) and presents mathematical treatments for the determination of random errors. (PR)

  7. Manson's triple error.

    PubMed

    Delaporte, F

    2008-09-01

    The author discusses the significance, implications and limitations of Manson's work. How did Patrick Manson resolve some of the major problems raised by the filarial worm life cycle? The Amoy physician showed that circulating embryos could only leave the blood via the percutaneous route, thereby requiring a bloodsucking insect. The discovery of a new autonomous, airborne, active host undoubtedly had a considerable impact on the history of parasitology, but the way in which Manson formulated and solved the problem of the transfer of filarial worms from the body of the mosquito to man resulted in failure. This article shows how the epistemological transformation operated by Manson was indissociably related to a series of errors and how a major breakthrough can be the result of a series of false proposals and, consequently, that the history of truth often involves a history of error. PMID:18814729

  8. Modular error embedding

    DOEpatents

    Sandford, II, Maxwell T.; Handel, Theodore G.; Ettinger, J. Mark

    1999-01-01

    A method of embedding auxiliary information into the digital representation of host data containing noise in the low-order bits. The method applies to digital data representing analog signals, for example digital images. The method reduces the error introduced by other methods that replace the low-order bits with auxiliary information. By a substantially reverse process, the embedded auxiliary data can be retrieved easily by an authorized user through use of a digital key. The modular error embedding method includes a process to permute the order in which the host data values are processed. The method doubles the amount of auxiliary information that can be added to host data values, in comparison with bit-replacement methods for high bit-rate coding. The invention preserves human perception of the meaning and content of the host data, permitting the addition of auxiliary data in the amount of 50% or greater of the original host data.
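    For context, the bit-replacement baseline that this method improves on can be sketched in a few lines. This is plain least-significant-bit (LSB) replacement, not the patented modular error embedding scheme itself; the pixel values are hypothetical.

```python
def embed_lsb(host, bits):
    # Overwrite each host value's least significant bit with one auxiliary bit
    return [(h & ~1) | b for h, b in zip(host, bits)]

def extract_lsb(stego):
    # Recover the auxiliary bits from the low-order bits
    return [s & 1 for s in stego]

host = [200, 201, 198, 50]   # e.g. 8-bit pixel values with noisy low bits
aux = [1, 0, 1, 1]
stego = embed_lsb(host, aux)
recovered = extract_lsb(stego)
```

    Each host value changes by at most 1, which is why embedding in noisy low-order bits preserves human perception of the host data; the patented method further reduces this introduced error and adds key-based permutation of the processing order.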

  9. Error-Free Software

    NASA Technical Reports Server (NTRS)

    1989-01-01

    001 is an integrated tool suited for automatically developing ultra reliable models, simulations and software systems. Developed and marketed by Hamilton Technologies, Inc. (HTI), it has been applied in engineering, manufacturing, banking and software tools development. The software provides the ability to simplify the complex. A system developed with 001 can be a prototype or fully developed with production quality code. It is free of interface errors, consistent, logically complete and has no data or control flow errors. Systems can be designed, developed and maintained with maximum productivity. Margaret Hamilton, President of Hamilton Technologies, also directed the research and development of USE.IT, an earlier product which was the first computer aided software engineering product in the industry to concentrate on automatically supporting the development of an ultrareliable system throughout its life cycle. Both products originated in NASA technology developed under a Johnson Space Center contract.

  10. Evaluating the Effect of Global Positioning System (GPS) Satellite Clock Error via GPS Simulation

    NASA Astrophysics Data System (ADS)

    Sathyamoorthy, Dinesh; Shafii, Shalini; Amin, Zainal Fitry M.; Jusoh, Asmariah; Zainun Ali, Siti

    2016-06-01

    This study is aimed at evaluating the effect of Global Positioning System (GPS) satellite clock error using GPS simulation. Two test conditions are used. In Case 1, all the GPS satellites have clock errors within the normal range of 0 to 7 ns, corresponding to pseudorange errors of 0 to 2.1 m; in Case 2, one GPS satellite suffers a critical failure, resulting in a clock error producing pseudorange errors of up to 1 km. It is found that increasing GPS satellite clock error increases the average positional error, because larger pseudorange errors in the GPS satellite signals translate into larger errors in the coordinates computed by the GPS receiver. Varying average positional error patterns are observed for each of the readings. This is because the GPS satellite constellation is dynamic, causing GPS satellite geometry to vary over location and time, making GPS accuracy location- and time-dependent. For Case 1, in general, the highest average positional error values are observed for readings with the highest PDOP values, while the lowest average positional error values are observed for readings with the lowest PDOP values. For Case 2, no correlation is observed between the average positional error values and PDOP, indicating that the error generated is random.
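    The clock-to-range conversion underlying both cases is a one-liner: a satellite clock offset dt biases the measured signal travel time, so the induced pseudorange error is c · dt. A minimal sketch using the abstract's figures:

```python
C = 299_792_458.0  # speed of light, m/s

def pseudorange_error_m(clock_error_ns):
    # A clock offset of dt seconds shifts the apparent travel time by dt,
    # so the range measurement is biased by c * dt metres
    return C * clock_error_ns * 1e-9

normal_case_m = pseudorange_error_m(7.0)   # Case 1 upper bound, ~2.1 m
# Clock error needed to produce the Case 2 pseudorange error of 1 km:
critical_ns = 1000.0 / C * 1e9             # ~3.3 microseconds
```

    This shows why the "normal" 0-7 ns clock errors map to the 0-2.1 m pseudorange range quoted above, while a kilometre-scale range error implies a microsecond-scale clock failure.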

  11. On the absolute alignment of GONG images

    NASA Astrophysics Data System (ADS)

    Toner, C. G.

    2001-01-01

    In order to combine data from the six instruments in the GONG network the alignment of all of the images must be known to a fairly high precision (~0°.1 for GONG Classic and ~0°.01 for GONG+). The relative orientation is obtained using the angular cross-correlation method described by (Toner & Harvey, 1998). To obtain the absolute orientation the Project periodically records a day of drift scans, where the image of the Sun is allowed to drift across the CCD repeatedly throughout the day. These data are then analyzed to deduce the direction of Terrestrial East-West as a function of hour angle (i.e., time) for that instrument. The transit of Mercury on Nov. 15, 1999, which was recorded by three of the GONG instruments, provided an independent check on the current alignment procedures. Here we present a comparison of the alignment of GONG images as deduced from both drift scans and the Mercury transit for two GONG sites: Tucson (GONG+ camera) and Mauna Loa (GONG Classic camera). The agreement is within ~0°.01 for both cameras; however, the scatter is substantially larger for GONG Classic: ~0°.03, compared to ~0°.01 for GONG+.

  12. Climate Absolute Radiance and Refractivity Observatory (CLARREO)

    NASA Technical Reports Server (NTRS)

    Leckey, John P.

    2015-01-01

    The Climate Absolute Radiance and Refractivity Observatory (CLARREO) is a mission, led and developed by NASA, that will measure a variety of climate variables with an unprecedented accuracy to quantify and attribute climate change. CLARREO consists of three separate instruments: an infrared (IR) spectrometer, a reflected solar (RS) spectrometer, and a radio occultation (RO) instrument. The mission will contain orbiting radiometers with sufficient accuracy, including on orbit verification, to calibrate other space-based instrumentation, increasing their respective accuracy by as much as an order of magnitude. The IR spectrometer is a Fourier Transform spectrometer (FTS) working in the 5 to 50 microns wavelength region with a goal of 0.1 K (k = 3) accuracy. The FTS will achieve this accuracy using phase change cells to verify thermistor accuracy and heated halos to verify blackbody emissivity, both on orbit. The RS spectrometer will measure the reflectance of the atmosphere in the 0.32 to 2.3 microns wavelength region with an accuracy of 0.3% (k = 2). The status of the instrumentation packages and potential mission options will be presented.

  13. Absolute flux measurements for swift atoms

    NASA Technical Reports Server (NTRS)

    Fink, M.; Kohl, D. A.; Keto, J. W.; Antoniewicz, P.

    1987-01-01

    While a torsion balance in vacuum can easily measure the momentum transfer from a gas beam impinging on a surface attached to the balance, this measurement depends on the accommodation coefficients of the atoms with the surface and the distribution of the recoil. A torsion balance is described for making absolute flux measurements independent of recoil effects. The torsion balance is a conventional taut suspension wire design and the Young modulus of the wire determines the relationship between the displacement and the applied torque. A compensating magnetic field is applied to maintain zero displacement and provide critical damping. The unique feature is to couple the impinging gas beam to the torsion balance via a Wood's horn, i.e., a thin wall tube with a gradual 90 deg bend. Just as light is trapped in a Wood's horn by specular reflection from the curved surfaces, the gas beam diffuses through the tube. Instead of trapping the beam, the end of the tube is open so that the atoms exit the tube at 90 deg to their original direction. Therefore, all of the forward momentum of the gas beam is transferred to the torsion balance independent of the angle of reflection from the surfaces inside the tube.

  14. Gyrokinetic Statistical Absolute Equilibrium and Turbulence

    SciTech Connect

    Jian-Zhou Zhu and Gregory W. Hammett

    2011-01-10

    A paradigm based on the absolute equilibrium of Galerkin-truncated inviscid systems to aid in understanding turbulence [T.-D. Lee, "On some statistical properties of hydrodynamical and magnetohydrodynamical fields," Q. Appl. Math. 10, 69 (1952)] is taken to study gyrokinetic plasma turbulence: A finite set of Fourier modes of the collisionless gyrokinetic equations are kept and the statistical equilibria are calculated; possible implications for plasma turbulence in various situations are discussed. For the case of two spatial and one velocity dimension, in the calculation with discretization also of velocity v with N grid points (where N + 1 quantities are conserved, corresponding to an energy invariant and N entropy-related invariants), the negative temperature states, corresponding to the condensation of the generalized energy into the lowest modes, are found. This indicates a generic feature of inverse energy cascade. Comparisons are made with some classical results, such as those of Charney-Hasegawa-Mima in the cold-ion limit. There is a universal shape for statistical equilibrium of gyrokinetics in three spatial and two velocity dimensions with just one conserved quantity. Possible physical relevance to turbulence, such as ITG zonal flows, and to a critical balance hypothesis are also discussed.

  15. Performance of multi level error correction in binary holographic memory

    NASA Technical Reports Server (NTRS)

    Hanan, Jay C.; Chao, Tien-Hsin; Reyes, George F.

    2004-01-01

    At the Optical Computing Lab in the Jet Propulsion Laboratory (JPL) a binary holographic data storage system was designed and tested with methods of recording and retrieving the binary information. Levels of error correction were introduced to the system, including pixel averaging, thresholding, and parity checks. Errors were artificially introduced into the binary holographic data storage system and were monitored as a function of the defect area fraction, which showed a strong influence on data integrity. Average area fractions exceeding one quarter of the bit area caused unrecoverable errors. Efficient use of the available data density was discussed.
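    One of the error-correction levels named, the parity check, can be sketched minimally. An even-parity bit detects (but cannot locate) any single flipped bit in a word; this is an illustration of the technique, not JPL's implementation.

```python
def add_parity(bits):
    # Append an even-parity bit so the total number of 1s is even
    return bits + [sum(bits) % 2]

def parity_ok(word):
    # A word with even parity passes; any single bit flip fails
    return sum(word) % 2 == 0

word = add_parity([1, 0, 1, 1])   # -> [1, 0, 1, 1, 1]
corrupted = word.copy()
corrupted[2] ^= 1                 # a single flipped bit breaks the parity
```

    In a layered scheme like the one described, pixel averaging and thresholding clean up analog read noise first, and parity then flags any residual bit errors.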

  16. Prospects for the Moon as an SI-Traceable Absolute Spectroradiometric Standard for Satellite Remote Sensing

    NASA Astrophysics Data System (ADS)

    Cramer, C. E.; Stone, T. C.; Lykke, K.; Woodward, J. T.

    2015-12-01

    The Earth's Moon has many physical properties that make it suitable for use as a reference light source for radiometric calibration of remote sensing satellite instruments. Lunar calibration has been successfully applied to many imagers in orbit, including both MODIS instruments and NPP-VIIRS, using the USGS ROLO model to predict the reference exoatmospheric lunar irradiance. Sensor response trending was developed for SeaWiFS with a relative accuracy better than 0.1 % per year with lunar calibration techniques. However, the Moon is rarely used as an absolute reference for on-orbit calibration, primarily due to uncertainties in the ROLO model absolute scale of 5%-10%. But this limitation lies only with the models - the Moon itself is radiometrically stable, and development of a high-accuracy absolute lunar reference is inherently feasible. A program has been undertaken by NIST to collect absolute measurements of the lunar spectral irradiance with absolute accuracy <1 % (k=2), traceable to SI radiometric units. Initial Moon observations were acquired from the Whipple Observatory on Mt. Hopkins, Arizona, elevation 2367 meters, with continuous spectral coverage from 380 nm to 1040 nm at ~3 nm resolution. The lunar spectrometer acquired calibration measurements several times each observing night by pointing to a calibrated integrating sphere source. The lunar spectral irradiance at the top of the atmosphere was derived from a time series of ground-based measurements by a Langley analysis that incorporated measured atmospheric conditions and ROLO model predictions for the change in irradiance resulting from the changing Sun-Moon-Observer geometry throughout each night. Two nights were selected for further study. An extensive error analysis, which includes instrument calibration and atmospheric correction terms, shows a combined standard uncertainty under 1 % over most of the spectral range. Comparison of these two nights' spectral irradiance measurements with predictions
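
    The Langley analysis mentioned above extrapolates ground-based measurements to the top of the atmosphere: ln(I) is linear in airmass m, so an ordinary least-squares line fit recovers the exoatmospheric irradiance from the intercept. A minimal sketch on synthetic, noise-free data (all values illustrative):

```python
import math

def langley_extrapolate(airmass, signal):
    """Ordinary least-squares fit of ln(signal) = ln(I0) - tau*m;
    returns the top-of-atmosphere irradiance I0 and optical depth tau."""
    x = airmass
    y = [math.log(s) for s in signal]
    n = len(x)
    xbar, ybar = sum(x) / n, sum(y) / n
    slope = (sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))
             / sum((xi - xbar) ** 2 for xi in x))
    intercept = ybar - slope * xbar
    return math.exp(intercept), -slope

# synthetic, noise-free Langley data (illustrative values)
m = [1.0, 1.5, 2.0, 3.0, 4.0]
I0_true, tau_true = 100.0, 0.2
I = [I0_true * math.exp(-tau_true * mi) for mi in m]
I0_est, tau_est = langley_extrapolate(m, I)
```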

  17. Absolute nuclear material assay using count distribution (LAMBDA) space

    SciTech Connect

    Prasad, Mano K.; Snyderman, Neal J.; Rowland, Mark S.

    2015-12-01

    A method of absolute nuclear material assay of an unknown source comprising counting neutrons from the unknown source and providing an absolute nuclear material assay utilizing a model to optimally compare to the measured count distributions. In one embodiment, the step of providing an absolute nuclear material assay comprises utilizing a random sampling of analytically computed fission chain distributions to generate a continuous time-evolving sequence of event-counts by spreading the fission chain distribution in time.

  18. Absolute nuclear material assay using count distribution (LAMBDA) space

    DOEpatents

    Prasad, Manoj K.; Snyderman, Neal J.; Rowland, Mark S.

    2012-06-05

    A method of absolute nuclear material assay of an unknown source comprising counting neutrons from the unknown source and providing an absolute nuclear material assay utilizing a model to optimally compare to the measured count distributions. In one embodiment, the step of providing an absolute nuclear material assay comprises utilizing a random sampling of analytically computed fission chain distributions to generate a continuous time-evolving sequence of event-counts by spreading the fission chain distribution in time.

  19. Antifungal activity of tuberose absolute and some of its constituents.

    PubMed

    Nidiry, Eugene Sebastian J; Babu, C S Bujji

    2005-05-01

    The antifungal activity of the absolute of tuberose (Polianthes tuberosa) and some of its constituents was evaluated against the mycelial growth of Colletotrichum gloeosporioides on potato-dextrose-agar medium. Tuberose absolute showed only mild activity at a concentration of 500 mg/L. However, three constituents present in the absolute, namely geraniol, indole and methyl anthranilate, exhibited significant activity, with total inhibition of the mycelial growth at this concentration.

  20. A rack-mounted precision waveguide-below-cutoff attenuator with an absolute electronic readout

    NASA Technical Reports Server (NTRS)

    Cook, C. C.

    1974-01-01

    A coaxial precision waveguide-below-cutoff attenuator is described which uses an absolute (unambiguous) electronic digital readout of displacement in inches in addition to the usual gear driven mechanical counter-dial readout in decibels. The attenuator is rack-mountable and has the input and output RF connectors in a fixed position. The attenuation rate for 55, 50, and 30 MHz operation is given along with a discussion of sources of errors. In addition, information is included to aid the user in making adjustments on the attenuator should it be damaged or disassembled for any reason.

  1. An absolute interval scale of order for point patterns

    PubMed Central

    Protonotarios, Emmanouil D.; Baum, Buzz; Johnston, Alan; Hunter, Ginger L.; Griffin, Lewis D.

    2014-01-01

    Human observers readily make judgements about the degree of order in planar arrangements of points (point patterns). Here, based on pairwise ranking of 20 point patterns by degree of order, we have been able to show that judgements of order are highly consistent across individuals and the dimension of order has an interval scale structure spanning roughly 10 just-noticeable differences (jnds) between disorder and order. We describe a geometric algorithm that estimates order to an accuracy of half a jnd by quantifying the variability of the size and shape of spaces between points. The algorithm is 70% more accurate than the best available measures. By anchoring the output of the algorithm so that Poisson point processes score on average 0, perfect lattices score 10 and unit steps correspond closely to jnds, we construct an absolute interval scale of order. We demonstrate its utility in biology by using this scale to quantify order during the development of the pattern of bristles on the dorsal thorax of the fruit fly. PMID:25079866
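
    The anchoring step described above is an affine rescaling of a raw order score; a minimal sketch, with hypothetical calibration values for the raw measure:

```python
def anchor(raw, raw_poisson, raw_lattice):
    """Affine rescaling so that the average raw score of Poisson point
    processes maps to 0 and a perfect lattice maps to 10."""
    return 10.0 * (raw - raw_poisson) / (raw_lattice - raw_poisson)

# hypothetical calibration values of the raw (unanchored) order measure
raw_poisson, raw_lattice = 0.32, 0.95
score_random = anchor(0.32, raw_poisson, raw_lattice)             # maps to 0
score_lattice = anchor(0.95, raw_poisson, raw_lattice)            # maps to 10
score_mid = anchor((0.32 + 0.95) / 2, raw_poisson, raw_lattice)   # maps to 5
```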

  2. The absolute disparity anomaly and the mechanism of relative disparities.

    PubMed

    Chopin, Adrien; Levi, Dennis; Knill, David; Bavelier, Daphne

    2016-06-01

    There has been a long-standing debate about the mechanisms underlying the perception of stereoscopic depth and the computation of the relative disparities that it relies on. Relative disparities between visual objects could be computed in two ways: (a) using the difference in the object's absolute disparities (Hypothesis 1) or (b) using relative disparities based on the differences in the monocular separations between objects (Hypothesis 2). To differentiate between these hypotheses, we measured stereoscopic discrimination thresholds for lines with different absolute and relative disparities. Participants were asked to judge the depth of two lines presented at the same distance from the fixation plane (absolute disparity) or the depth between two lines presented at different distances (relative disparity). We used a single stimulus method involving a unique memory component for both conditions, and no extraneous references were available. We also measured vergence noise using Nonius lines. Stereo thresholds were substantially worse for absolute disparities than for relative disparities, and the difference could not be explained by vergence noise. We attribute this difference to an absence of conscious readout of absolute disparities, termed the absolute disparity anomaly. We further show that the pattern of correlations between vergence noise and absolute and relative disparity acuities can be explained jointly by the existence of the absolute disparity anomaly and by the assumption that relative disparity information is computed from absolute disparities (Hypothesis 1).
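
    Arithmetically, the two hypotheses yield identical relative disparities; they differ in which quantities the visual system is assumed to compute. A short check with illustrative monocular positions:

```python
def absolute_disparity(x_left, x_right):
    """Absolute disparity of one object from its monocular positions."""
    return x_left - x_right

# illustrative monocular horizontal positions (deg) of objects A and B
xL_A, xR_A = 1.20, 1.00
xL_B, xR_B = -0.50, -0.80

# Hypothesis 1: difference of the two absolute disparities
rel_h1 = absolute_disparity(xL_A, xR_A) - absolute_disparity(xL_B, xR_B)
# Hypothesis 2: difference of the monocular separations between the objects
rel_h2 = (xL_A - xL_B) - (xR_A - xR_B)
```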

  3. The absolute disparity anomaly and the mechanism of relative disparities

    PubMed Central

    Chopin, Adrien; Levi, Dennis; Knill, David; Bavelier, Daphne

    2016-01-01

    There has been a long-standing debate about the mechanisms underlying the perception of stereoscopic depth and the computation of the relative disparities that it relies on. Relative disparities between visual objects could be computed in two ways: (a) using the difference in the object's absolute disparities (Hypothesis 1) or (b) using relative disparities based on the differences in the monocular separations between objects (Hypothesis 2). To differentiate between these hypotheses, we measured stereoscopic discrimination thresholds for lines with different absolute and relative disparities. Participants were asked to judge the depth of two lines presented at the same distance from the fixation plane (absolute disparity) or the depth between two lines presented at different distances (relative disparity). We used a single stimulus method involving a unique memory component for both conditions, and no extraneous references were available. We also measured vergence noise using Nonius lines. Stereo thresholds were substantially worse for absolute disparities than for relative disparities, and the difference could not be explained by vergence noise. We attribute this difference to an absence of conscious readout of absolute disparities, termed the absolute disparity anomaly. We further show that the pattern of correlations between vergence noise and absolute and relative disparity acuities can be explained jointly by the existence of the absolute disparity anomaly and by the assumption that relative disparity information is computed from absolute disparities (Hypothesis 1). PMID:27248566

  4. a Portable Apparatus for Absolute Measurements of the Earth's Gravity.

    NASA Astrophysics Data System (ADS)

    Zumberge, Mark Andrew

    We have developed a new, portable apparatus for making absolute measurements of the acceleration due to the earth's gravity. We use the method of interferometrically determining the acceleration of a freely falling corner-cube prism. The falling object is surrounded by a chamber which is driven vertically inside a fixed vacuum chamber. This falling chamber is servoed to track the falling corner-cube to shield it from drag due to background gas. In addition, the drag-free falling chamber removes the need for a magnetic release, shields the falling object from electrostatic forces, and provides a means of both gently arresting the falling object and quickly returning it to its start position, to allow rapid acquisition of data. A synthesized long-period isolation device reduces the noise due to seismic oscillations. A new type of Zeeman laser is used as the light source in the interferometer, and its wavelength is compared with that of an iodine-stabilized laser. The times of occurrence of 45 interference fringes are measured to within 0.2 nsec over a 20 cm drop and are fit to a quadratic by an on-line minicomputer. 150 drops can be made in ten minutes, resulting in a value of g having a precision of 3 to 6 parts in 10^9. Systematic errors have been determined to be less than 5 parts in 10^9 through extensive tests. Three months of gravity data have been obtained with a reproducibility ranging from 5 to 10 parts in 10^9. The apparatus has been designed to be easily portable. Field measurements are planned for the immediate future. An accuracy of 6 parts in 10^9 corresponds to a height sensitivity of 2 cm. Vertical motions in the earth's crust and tectonic density changes that may precede earthquakes are to be investigated using this apparatus.
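
    The quadratic fit of fringe times mentioned above determines g directly: for z(t) = z0 + v0*t + (g/2)*t^2, the second divided difference of any three (time, position) samples equals g/2 exactly. A sketch on synthetic fringe data (the half-wavelength fringe spacing below is an illustrative assumption):

```python
import math

def g_from_fringes(times, positions):
    """For z(t) = z0 + v0*t + (g/2)*t^2, the second divided difference of
    any three (time, position) samples equals g/2 exactly."""
    (t0, t1, t2), (z0, z1, z2) = times, positions
    d1 = (z1 - z0) / (t1 - t0)
    d2 = (z2 - z1) / (t2 - t1)
    return 2.0 * (d2 - d1) / (t2 - t0)

# synthetic drop from rest, g = 9.81 m/s^2; one fringe per half wavelength
# (the 316.5 nm half-wavelength is an illustrative assumption)
g_true, half_wavelength = 9.81, 316.5e-9
zs = [k * 1000 * half_wavelength for k in range(3)]   # every 1000th fringe
ts = [math.sqrt(2.0 * z / g_true) for z in zs]
g_est = g_from_fringes(ts, zs)
```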

  5. Subaperture test of wavefront error of large telescopes: error sources and stitching performance simulations

    NASA Astrophysics Data System (ADS)

    Chen, Shanyong; Li, Shengyi; Wang, Guilin

    2014-11-01

    The wavefront error of large telescopes needs to be measured to check the system quality and also to estimate the misalignment of the telescope optics, including the primary, the secondary and so on. It is usually measured by a focal plane interferometer and an autocollimator flat (ACF) of the same aperture as the telescope. However, this is challenging for meter-class telescopes due to the high cost and technological difficulty of producing the large ACF. Subaperture testing with a smaller ACF is hence proposed, in combination with advanced stitching algorithms. Major error sources include the surface error of the ACF, misalignment of the ACF and measurement noise. Different error sources have different impacts on the wavefront error. Basically, the surface error of the ACF behaves like a systematic error, and its astigmatism will accumulate and be enlarged if the azimuth of the subapertures remains fixed. It is difficult to accurately calibrate the ACF because it suffers considerable deformation induced by gravity or mechanical clamping force. Therefore a self-calibrated stitching algorithm is employed to separate the ACF surface error from the subaperture wavefront error. We suggest the ACF be rotated around the optical axis of the telescope for the subaperture test. The algorithm is also able to correct the subaperture tip-tilt based on the overlapping consistency. Since all subaperture measurements are obtained in the same imaging plane, the lateral shift of the subapertures is always known and the real overlapping points can be recognized in this plane. Therefore the lateral positioning error of the subapertures has no impact on the stitched wavefront. In contrast, the angular positioning error changes the azimuth of the ACF and hence changes the systematic error. We propose an angularly uneven layout of subapertures to minimize the stitching error, which is counterintuitive. At last, measurement noise can never be corrected, but it can be suppressed by means of averaging and
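
    The tip-tilt correction based on overlapping consistency amounts to a least-squares fit of piston, tip and tilt to the wavefront difference in the overlap region. A minimal sketch on a small synthetic grid (not the authors' actual algorithm; the closed form below assumes uncorrelated x and y samples, as on a regular grid):

```python
def fit_piston_tip_tilt(xs, ys, diff):
    """Least-squares fit of p + a*x + b*y to the wavefront difference in an
    overlap region; assumes x and y samples are uncorrelated (regular grid)."""
    n = len(diff)
    mx, my, md = sum(xs) / n, sum(ys) / n, sum(diff) / n
    a = (sum((x - mx) * (d - md) for x, d in zip(xs, diff))
         / sum((x - mx) ** 2 for x in xs))
    b = (sum((y - my) * (d - md) for y, d in zip(ys, diff))
         / sum((y - my) ** 2 for y in ys))
    return md - a * mx - b * my, a, b

# regular 3x3 overlap grid with a pure piston/tip/tilt mismatch
pts = [(x, y) for x in (-1, 0, 1) for y in (-1, 0, 1)]
xs, ys = [p[0] for p in pts], [p[1] for p in pts]
diff = [0.5 + 0.2 * x - 0.1 * y for x, y in pts]
piston, tip, tilt = fit_piston_tip_tilt(xs, ys, diff)
```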

  6. Evaluation of the Absolute Regional Temperature Potential

    NASA Technical Reports Server (NTRS)

    Shindell, D. T.

    2012-01-01

    The Absolute Regional Temperature Potential (ARTP) is one of the few climate metrics that provides estimates of impacts at a sub-global scale. The ARTP presented here gives the time-dependent temperature response in four latitude bands (90-28degS, 28degS-28degN, 28-60degN and 60-90degN) as a function of emissions based on the forcing in those bands caused by the emissions. It is based on a large set of simulations performed with a single atmosphere-ocean climate model to derive regional forcing/response relationships. Here I evaluate the robustness of those relationships using the forcing/response portion of the ARTP to estimate regional temperature responses to the historic aerosol forcing in three independent climate models. These ARTP results are in good accord with the actual responses in those models. Nearly all ARTP estimates fall within +/-20% of the actual responses, though there are some exceptions for 90-28degS and the Arctic, and in the latter the ARTP may vary with forcing agent. However, for the tropics and the Northern Hemisphere mid-latitudes in particular, the +/-20% range appears to be roughly consistent with the 95% confidence interval. Land areas within these two bands respond 39-45% and 9-39% more than the latitude band as a whole. The ARTP, presented here in a slightly revised form, thus appears to provide a relatively robust estimate for the responses of large-scale latitude bands and land areas within those bands to inhomogeneous radiative forcing and thus potentially to emissions as well. Hence this metric could allow rapid evaluation of the effects of emissions policies at a finer scale than global metrics without requiring use of a full climate model.
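
    The forcing/response relationship at the core of the ARTP is a band-by-band weighted sum: the temperature response of each latitude band is the sum over forcing regions of a regional sensitivity coefficient times the forcing there. A sketch with purely illustrative coefficients (not the paper's values):

```python
# Hypothetical regional climate sensitivities (K per W m^-2): rows are
# response bands, columns are forcing bands (all values illustrative).
rcs = {
    "90-28S":  [0.6, 0.3, 0.1, 0.0],
    "28S-28N": [0.2, 0.6, 0.3, 0.1],
    "28-60N":  [0.1, 0.4, 0.7, 0.3],
    "60-90N":  [0.1, 0.3, 0.6, 1.0],
}
forcing = [0.1, -0.4, -0.6, -0.2]   # W m^-2 per forcing band (illustrative)

def artp_band_response(band):
    """Band temperature response: forcing-weighted sum of sensitivities."""
    return sum(c * f for c, f in zip(rcs[band], forcing))

response = {band: artp_band_response(band) for band in rcs}
```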

  7. Orion Absolute Navigation System Progress and Challenge

    NASA Technical Reports Server (NTRS)

    Holt, Greg N.; D'Souza, Christopher

    2012-01-01

    The absolute navigation design of NASA's Orion vehicle is described. It has undergone several iterations and modifications since its inception, and continues as a work-in-progress. This paper seeks to benchmark the current state of the design and some of the rationale and analysis behind it. There are specific challenges to address when preparing a timely and effective design for the Exploration Flight Test (EFT-1), while still looking ahead and providing software extensibility for future exploration missions. The primary onboard measurements in a Near-Earth or Mid-Earth environment consist of GPS pseudo-range and delta-range, but for future exploration missions the use of star-tracker and optical navigation sources needs to be considered. Discussions are presented for state size and composition, processing techniques, and consider states. A presentation is given for the processing technique using the computationally stable and robust UDU formulation with an Agee-Turner Rank-One update. This allows for computational savings when dealing with many parameters which are modeled as slowly varying Gauss-Markov processes. Preliminary analysis shows up to a 50% reduction in computation versus a more traditional formulation. Several state elements are discussed and evaluated, including position, velocity, attitude, clock bias/drift, and GPS measurement biases, in addition to bias, scale factor, misalignment, and non-orthogonalities of the accelerometers and gyroscopes. Another consideration is the initialization of the EKF in various scenarios. Scenarios such as single-event upset, ground command, and cold start are discussed, as are strategies for whole and partial state updates as well as covariance considerations. Strategies are given for dealing with latent measurements and high-rate propagation using a multi-rate architecture. The details of the rate groups and the data flow between the elements are discussed and evaluated.
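
    A first-order Gauss-Markov process of the kind used here for the slowly varying sensor-bias states can be propagated with one scalar recursion per step; a minimal sketch (the time constant and steady-state sigma are illustrative, not Orion parameters):

```python
import math
import random

def propagate_gm_bias(b, dt, tau, sigma_ss, rng):
    """One step of a first-order Gauss-Markov bias: b' = phi*b + w, with
    process-noise variance q = sigma_ss^2 * (1 - phi^2) so the steady-state
    standard deviation stays at sigma_ss."""
    phi = math.exp(-dt / tau)
    q = sigma_ss * sigma_ss * (1.0 - phi * phi)
    return phi * b + rng.gauss(0.0, math.sqrt(q))

rng = random.Random(0)
b, samples = 0.0, []
for _ in range(20000):
    b = propagate_gm_bias(b, dt=1.0, tau=20.0, sigma_ss=0.05, rng=rng)
    samples.append(b)
mean = sum(samples) / len(samples)
std = (sum((s - mean) ** 2 for s in samples) / len(samples)) ** 0.5
```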

  8. Absolute determination of local tropospheric OH concentrations

    NASA Technical Reports Server (NTRS)

    Armerding, Wolfgang; Comes, Franz-Josef

    1994-01-01

    Long path absorption (LPA) according to the Lambert-Beer law is a method to determine absolute concentrations of trace gases such as tropospheric OH. We have developed a LPA instrument which is based on a rapid tuning of the light source, which is a frequency-doubled dye laser. The laser is tuned across two or three OH absorption features around 308 nm with a scanning speed of 0.07 cm^-1/microsecond and a repetition rate of 1.3 kHz. This high scanning speed greatly reduces the fluctuation of the light intensity caused by the atmosphere. To obtain the required high sensitivity, the laser output power is additionally made constant and stabilized by an electro-optical modulator. The present sensitivity is of the order of a few times 10^5 OH per cm^3 for an acquisition time of a minute and an absorption path length of only 1200 meters, so that a folding of the optical path in a multireflection cell was possible, leading to a lateral dimension of the cell of a few meters. This allows local measurements to be made. Tropospheric measurements were carried out in 1991, resulting in the determination of the OH diurnal variation on specific days in late summer. Comparisons with model calculations have been made. Interferences are mainly due to SO2 absorption. The problem of OH self-generation in the multireflection cell is of minor extent. This could be shown by using different experimental methods. The minimum-maximum signal-to-noise ratio is about 8 x 10^-4 for a single scan. Due to the small size of the absorption cell, the realization of an open-air laboratory is possible, in which, by use of an additional UV light source or additional fluxes of trace gases, the chemistry can be changed under controlled conditions, allowing kinetic studies of tropospheric photochemistry to be made in open air.
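
    Long path absorption reduces to the Lambert-Beer relation N = ln(I0/I)/(sigma*L); a sketch with an illustrative OH absorption cross-section (the real value near 308 nm is resolution-dependent, so treat all numbers below as assumptions):

```python
import math

def number_density(I0, I, sigma, path_cm):
    """Lambert-Beer: N = ln(I0/I) / (sigma * L)."""
    return math.log(I0 / I) / (sigma * path_cm)

sigma_oh = 1.0e-16      # cm^2, illustrative OH cross-section near 308 nm
path = 1200.0 * 100.0   # 1200 m absorption path, in cm
I0, I = 1.0, 0.999988   # 1.2e-5 fractional absorption (illustrative)
n_oh = number_density(I0, I, sigma_oh, path)   # ~1e6 molecules cm^-3
```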

  9. An Analysis of the Effect on the Data Processing of Korea GPS Network by the Absolute Phase Center Variations of GPS Antenna

    NASA Astrophysics Data System (ADS)

    Baek, Jeongho; Lim, Hyung-Chul; Jo, Jung Hyun; Cho, Sungki; Cho, Jung-Ho

    2006-12-01

    The International GNSS Service (IGS) has prepared for a transition from the relative phase center variation (PCV) to the absolute PCV, because the terrestrial scale problem of the absolute PCV was resolved by estimating the PCV of the GPS satellites. Thus, the GPS data will be processed using the absolute PCV, which will become an IGS standard model in the near future. It is necessary to compare and analyze the results between the relative PCV and the absolute PCV to establish a reliable processing strategy. This research analyzes the effect caused by the absolute PCV via GPS network data processing. First, four IGS stations, Daejeon, Suwon, Beijing and Wuhan, are selected to form baselines longer than 1000 km, and are processed using both the relative PCV and the absolute PCV to examine the effect of the antenna radome. The Beijing and Wuhan stations, whose baselines are longer than 1000 km, show an average difference of 1.33 cm in the vertical component, and 2.97 cm when the antenna radomes are considered. Second, the 7 permanent GPS stations among the total 9 stations operated by Korea Astronomy and Space Science Institute are processed by applying the relative PCV and the absolute PCV, and their results are compared and analyzed. An insignificant effect of the absolute PCV is shown in the Korean regional network, with an average difference of 0.12 cm in the vertical component.

  10. New identification method for Hammerstein models based on approximate least absolute deviation

    NASA Astrophysics Data System (ADS)

    Xu, Bao-Chang; Zhang, Ying-Dan

    2016-07-01

    Disorder and peak noises or large disturbances can deteriorate the identification of Hammerstein non-linear models when using the least-squares (LS) method. The least absolute deviation technique can be used to resolve this problem; however, the absolute-value function is not differentiable, as most algorithms require. To improve robustness and resolve the non-differentiability problem, an approximate least absolute deviation (ALAD) objective function is established by introducing a deterministic function that exhibits the characteristics of the absolute value under certain conditions. A new identification method for Hammerstein models based on ALAD is thus developed in this paper. The basic idea of this method is to apply stochastic approximation theory in deriving the recursive equations. After identifying the parameter matrix of the Hammerstein model via the new algorithm, the product terms in the matrix are separated by calculating their average values. Finally, algorithm convergence is proven by applying the ordinary differential equation method. The proposed algorithm has better robustness than other LS methods, particularly when abnormal points exist in the measured data. Furthermore, the proposed algorithm is easier to apply and converges faster. The simulation results demonstrate the efficacy of the proposed algorithm.
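
    One common deterministic surrogate with the required differentiability is sqrt(e^2 + delta), which approaches |e| as delta goes to 0 while keeping the gradient bounded; the abstract does not state the exact function used in the paper, so treat this as an illustrative choice:

```python
import math

def alad_loss(e, delta=1e-6):
    """Differentiable surrogate for |e|: sqrt(e^2 + delta) -> |e| as delta -> 0."""
    return math.sqrt(e * e + delta)

def alad_grad(e, delta=1e-6):
    """Gradient e / sqrt(e^2 + delta): defined at e = 0 and bounded by 1."""
    return e / math.sqrt(e * e + delta)

# A large outlier residual: the LS gradient 2*e grows without bound,
# while the ALAD gradient saturates, limiting the outlier's influence.
e_outlier = 100.0
ls_grad = 2.0 * e_outlier           # 200.0
robust_grad = alad_grad(e_outlier)  # just under 1.0
```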

  11. 40 CFR 91.1304 - Averaging.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 40 Protection of Environment 20 2010-07-01 2010-07-01 false Averaging. 91.1304 Section 91.1304... Averaging. (a) A manufacturer may use averaging across engine families to demonstrate a zero or positive credit balance for a model year. Positive credits to be used in averaging may be obtained from...

  12. 40 CFR 91.1304 - Averaging.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 40 Protection of Environment 21 2012-07-01 2012-07-01 false Averaging. 91.1304 Section 91.1304... Averaging. (a) A manufacturer may use averaging across engine families to demonstrate a zero or positive credit balance for a model year. Positive credits to be used in averaging may be obtained from...

  13. 40 CFR 91.1304 - Averaging.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 40 Protection of Environment 21 2013-07-01 2013-07-01 false Averaging. 91.1304 Section 91.1304... Averaging. (a) A manufacturer may use averaging across engine families to demonstrate a zero or positive credit balance for a model year. Positive credits to be used in averaging may be obtained from...

  14. Evaluation and Applications of the Prediction of Intensity Model Error (PRIME) Model

    NASA Astrophysics Data System (ADS)

    Bhatia, K. T.; Nolan, D. S.; Demaria, M.; Schumacher, A.

    2015-12-01

    Forecasters and end users of tropical cyclone (TC) intensity forecasts would greatly benefit from a reliable expectation of model error to counteract the lack of consistency in TC intensity forecast performance. As a first step towards producing error predictions to accompany each TC intensity forecast, Bhatia and Nolan (2013) studied the relationship between synoptic parameters, TC attributes, and forecast errors. In this study, we build on previous results of Bhatia and Nolan (2013) by testing the ability of the Prediction of Intensity Model Error (PRIME) model to forecast the absolute error and bias of four leading intensity models available for guidance in the Atlantic basin. PRIME forecasts are independently evaluated at each 12-hour interval from 12 to 120 hours during the 2007-2014 Atlantic hurricane seasons. The absolute error and bias predictions of PRIME are compared to their respective climatologies to determine their skill. In addition to these results, we will present the performance of the operational version of PRIME run during the 2015 hurricane season. PRIME verification results show that it can reliably anticipate situations where particular models excel, and therefore could lead to a more informed protocol for hurricane evacuations and storm preparations. These positive conclusions suggest that PRIME forecasts also have the potential to lower the error in the original intensity forecasts of each model. As a result, two techniques are proposed to develop a post-processing procedure for a multimodel ensemble based on PRIME. The first approach is to inverse-weight models using PRIME absolute error predictions (higher predicted absolute error corresponds to lower weights). The second multimodel ensemble applies PRIME bias predictions to each model's intensity forecast and the mean of the corrected models is evaluated. The forecasts of both of these experimental ensembles are compared to those of the equal-weight ICON ensemble, which currently
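
    The two proposed post-processing ensembles can be sketched directly from the description above; all intensity forecasts and PRIME predictions below are illustrative numbers, not results from the study:

```python
def inverse_error_weights(pred_abs_err, eps=1e-6):
    """Weights proportional to 1 / (PRIME-predicted absolute error)."""
    inv = [1.0 / (e + eps) for e in pred_abs_err]
    total = sum(inv)
    return [w / total for w in inv]

def prime_ensembles(forecasts, pred_abs_err, pred_bias):
    # Ensemble 1: inverse-error-weighted mean of the raw forecasts
    w = inverse_error_weights(pred_abs_err)
    weighted = sum(wi * f for wi, f in zip(w, forecasts))
    # Ensemble 2: remove each model's predicted bias, then average equally
    corrected = [f - b for f, b in zip(forecasts, pred_bias)]
    return weighted, sum(corrected) / len(corrected)

forecasts = [95.0, 100.0, 110.0]   # kt, illustrative model intensity forecasts
pred_err = [5.0, 10.0, 20.0]       # illustrative PRIME absolute-error predictions
pred_bias = [-5.0, 0.0, 10.0]      # illustrative PRIME bias predictions
weighted_mean, bias_corrected_mean = prime_ensembles(forecasts, pred_err, pred_bias)
```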

  15. Measurement of the Absolute Branching Fraction of D0 to K- pi+

    SciTech Connect

    Aubert, B.; Bona, M.; Boutigny, D.; Karyotakis, Y.; Lees, J.P.; Poireau, V.; Prudent, X.; Tisserand, V.; Zghiche, A.; Garra Tico, J.; Grauges, E.; Lopez, L.; Palano, A.; Eigen, G.; Ofte, I.; Stugu, B.; Sun, L.; Abrams, G.S.; Battaglia, M.; Brown, D.N.; Button-Shafer, J.; et al. (BABAR Collaboration)

    2007-04-25

    The authors measure the absolute branching fraction for D^0 → K^- π^+ using partial reconstruction of B̄^0 → D*^+ X ℓ^- ν̄_ℓ decays, in which only the charged lepton and the pion from the decay D*^+ → D^0 π^+ are used. Based on a data sample of 230 million BB̄ pairs collected at the Υ(4S) resonance with the BABAR detector at the PEP-II asymmetric-energy B Factory at SLAC, they obtain B(D^0 → K^- π^+) = (4.007 ± 0.037 ± 0.070)%, where the first error is statistical and the second error is systematic.
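
    Treating the quoted statistical and systematic errors as independent, the total uncertainty on the branching fraction follows by adding them in quadrature:

```python
import math

def combine_in_quadrature(stat, syst):
    """Total uncertainty for independent statistical and systematic errors."""
    return math.sqrt(stat * stat + syst * syst)

# B(D0 -> K- pi+) = (4.007 +/- 0.037(stat) +/- 0.070(syst)) %
total_err = combine_in_quadrature(0.037, 0.070)   # ~0.079 %
```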

  16. Mid-infrared absolute spectral responsivity scale based on an absolute cryogenic radiometer and an optical parametric oscillator laser

    NASA Astrophysics Data System (ADS)

    Zhao, Kun; Shi, Xueshun; Chen, Haidong; Liu, Yulong; Liu, Changming; Chen, Kunfeng; Li, Ligong; Gan, Haiyong; Ma, Chong

    2016-06-01

    We are reporting on a laser-based absolute spectral responsivity scale in the mid-infrared spectral range. By using a mid-infrared tunable optical parametric oscillator as the laser source, the absolute responsivity scale has been established by calibrating thin-film thermopile detectors against an absolute cryogenic radiometer. The thin-film thermopile detectors can be then used as transfer standard detectors. The extended uncertainty of the absolute spectral responsivity measurement has been analyzed to be 0.58%–0.68% (k  =  2).

  17. Mid-infrared absolute spectral responsivity scale based on an absolute cryogenic radiometer and an optical parametric oscillator laser

    NASA Astrophysics Data System (ADS)

    Zhao, Kun; Shi, Xueshun; Chen, Haidong; Liu, Yulong; Liu, Changming; Chen, Kunfeng; Li, Ligong; Gan, Haiyong; Ma, Chong

    2016-06-01

    We are reporting on a laser-based absolute spectral responsivity scale in the mid-infrared spectral range. By using a mid-infrared tunable optical parametric oscillator as the laser source, the absolute responsivity scale has been established by calibrating thin-film thermopile detectors against an absolute cryogenic radiometer. The thin-film thermopile detectors can be then used as transfer standard detectors. The extended uncertainty of the absolute spectral responsivity measurement has been analyzed to be 0.58%-0.68% (k  =  2).

  18. Absolute Thermal SST Measurements over the Deepwater Horizon Oil Spill

    NASA Astrophysics Data System (ADS)

    Good, W. S.; Warden, R.; Kaptchen, P. F.; Finch, T.; Emery, W. J.

    2010-12-01

    Climate monitoring and natural disaster rapid assessment require baseline measurements that can be tracked over time to distinguish anthropogenic versus natural changes to the Earth system. Disasters like the Deepwater Horizon Oil Spill require constant monitoring to assess the potential environmental and economic impacts. Absolute calibration and validation of Earth-observing sensors is needed to allow for comparison of temporally separated data sets and provide accurate information to policy makers. The Ball Experimental Sea Surface Temperature (BESST) radiometer was designed and built by Ball Aerospace to provide a well-calibrated measure of sea surface temperature (SST) from an unmanned aerial system (UAS). Currently, emissive skin SST observed by satellite infrared radiometers is validated by shipborne instruments that are expensive to deploy and can only take a few data samples along the ship track that overlap within a single satellite pixel. Implementation on a UAS will allow BESST to map the full footprint of a satellite pixel and perform averaging to remove any local variability due to the difference in footprint size of the instruments. It also enables the capability to study this sub-pixel variability to determine whether smaller-scale effects need to be accounted for in models to improve forecasting of ocean events. In addition to satellite sensor validation, BESST can distinguish meter-scale variations in SST, which could be used to remotely monitor and assess thermal pollution in rivers and coastal areas as well as study diurnal and seasonal changes to bodies of water that impact the ocean ecosystem. BESST was recently deployed on a conventional Twin Otter airplane for measurements over the Gulf of Mexico to assess the thermal properties of the ocean surface being affected by the oil spill. Results of these measurements will be presented along with ancillary sensor data used to eliminate false signals, including UV and Synthetic Aperture Radar (SAR).

  19. Predictive RANS simulations via Bayesian Model-Scenario Averaging

    SciTech Connect

    Edeling, W.N.; Cinnella, P.; Dwight, R.P.

    2014-10-15

    The turbulence closure model is the dominant source of error in most Reynolds-Averaged Navier–Stokes simulations, yet no reliable estimators for this error component currently exist. Here we develop a stochastic, a posteriori error estimate, calibrated to specific classes of flow. It is based on variability in model closure coefficients across multiple flow scenarios, for multiple closure models. The variability is estimated using Bayesian calibration against experimental data for each scenario, and Bayesian Model-Scenario Averaging (BMSA) is used to collate the resulting posteriors to obtain a stochastic estimate of a Quantity of Interest (QoI) in an unmeasured (prediction) scenario. The scenario probabilities in BMSA are chosen using a sensor which automatically weights those scenarios in the calibration set that are similar to the prediction scenario. The methodology is applied to the class of turbulent boundary layers subject to various pressure gradients. For all considered prediction scenarios the standard deviation of the stochastic estimate is consistent with the measured ground truth. Furthermore, the mean of the estimate is more consistently accurate than the individual model predictions.
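    The core BMSA step, collating per-scenario posteriors into one stochastic estimate, can be sketched as a posterior-weighted mixture. The sketch below combines hypothetical posterior predictive means and variances of a QoI over (model, scenario) pairs via the law of total variance; all names and numbers are illustrative, not from the paper.

```python
# Hedged sketch of the averaging step behind BMSA: given posterior
# predictive means and variances of a QoI for each (model, scenario)
# combination, plus weights summing to 1, form the mixture mean and
# variance.  All inputs below are invented for illustration.

def bmsa_estimate(means, variances, weights):
    """Combine component posteriors via the law of total variance."""
    assert abs(sum(weights) - 1.0) < 1e-12
    mean = sum(w * m for w, m in zip(weights, means))
    # total variance = weighted within-component variance
    #                + weighted spread of the component means
    var = sum(w * (v + (m - mean) ** 2)
              for w, m, v in zip(weights, means, variances))
    return mean, var

mu, var = bmsa_estimate(means=[1.0, 1.2, 0.9],
                        variances=[0.01, 0.02, 0.015],
                        weights=[0.5, 0.3, 0.2])
```

    The second variance term is what lets the combined estimate report a standard deviation that reflects disagreement between closure models, not just each model's own calibrated uncertainty.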

  20. Tackling modelling error in the application of electrical impedance tomography to the head.

    PubMed

    Ouypornkochagorn, Taweechai; McCann, Hugh; Polydorides, Nick

    2015-08-01

    In the head application of Electrical Impedance Tomography (EIT), reconstructing a conductivity distribution image from voltage measurements with the ordinary method, the absolute imaging approach, is impossible because modelling error is traditionally ignored. The modelling error stems from inaccuracy in geometry and structure, which cannot be known accurately in practice and are usually large in the head application of EIT. Difference imaging is an alternative approach that can reduce the size of this error, but it introduces other kinds of error. In this work, we demonstrate that in situations like head EIT, the nonlinear difference imaging approach can reconstruct difference conductivity effectively: the reduced modelling error and the newly arising errors can be neglected, because they are much smaller than the original modelling error. The magnitude of conductivity change in the head-like situation is also investigated, and a selection scheme for the initial guess in the reconstruction process is proposed.

  1. Spelling in adolescents with dyslexia: errors and modes of assessment.

    PubMed

    Tops, Wim; Callens, Maaike; Bijn, Evi; Brysbaert, Marc

    2014-01-01

    In this study we focused on the spelling of high-functioning students with dyslexia. We made a detailed classification of the errors in a word and sentence dictation task made by 100 students with dyslexia and 100 matched control students. All participants were in the first year of their bachelor's studies and had Dutch as mother tongue. Three main error categories were distinguished: phonological, orthographic, and grammatical errors (on the basis of morphology and language-specific spelling rules). The results indicated that higher-education students with dyslexia made on average twice as many spelling errors as the controls, with effect sizes of d ≥ 2. When the errors were classified as phonological, orthographic, or grammatical, we found a slight dominance of phonological errors in students with dyslexia. Sentence dictation did not provide more information than word dictation in the correct classification of students with and without dyslexia.

  2. Systematic errors for a Mueller matrix dual rotating compensator ellipsometer.

    PubMed

    Broch, Laurent; En Naciri, Aotmane; Johann, Luc

    2008-06-01

    The characterization of anisotropic materials and complex systems by ellipsometry has pushed instrument design to require measurement of the full reflection Mueller matrix of the sample with great precision. Mueller matrix ellipsometers have therefore emerged over the past twenty years. Some coefficients of the matrix can be very small, and noise or systematic errors can distort the analysis. We present a detailed characterization of the systematic errors for a Mueller matrix ellipsometer in the dual-rotating-compensator configuration. Starting from a general formalism, we derive explicit first-order expressions for the errors on all coefficients of the sample's Mueller matrix. The errors caused by inaccuracy in the azimuthal arrangement of the optical components and by residual ellipticity introduced by imperfect optical elements are shown. A new method based on a four-zone averaging measurement is proposed to cancel the systematic errors.

  3. Novalis' Poetic Uncertainty: A "Bildung" with the Absolute

    ERIC Educational Resources Information Center

    Mika, Carl

    2016-01-01

    Novalis, the Early German Romantic poet and philosopher, had at the core of his work a mysterious depiction of the "absolute." The absolute is Novalis' name for a substance that defies precise knowledge yet calls for a tentative and sensitive speculation. How one asserts a truth, represents an object, and sets about encountering things…

  4. Absolute Pitch in Infant Auditory Learning: Evidence for Developmental Reorganization.

    ERIC Educational Resources Information Center

    Saffran, Jenny R.; Griepentrog, Gregory J.

    2001-01-01

    Two experiments examined 8-month-olds' use of absolute and relative pitch cues in a tone-sequence statistical learning task. Results suggest that, given unsegmented stimuli that do not conform to rules of musical composition, infants are more likely to track patterns of absolute pitches than of relative pitches. A third experiment found that adult…

  5. Supplementary and Enrichment Series: Absolute Value. Teachers' Commentary. SP-25.

    ERIC Educational Resources Information Center

    Bridgess, M. Philbrick, Ed.

    This is one in a series of manuals for teachers using SMSG high school supplementary materials. The pamphlet includes commentaries on the sections of the student's booklet, answers to the exercises, and sample test questions. Topics covered include addition and multiplication in terms of absolute value, graphs of absolute value in the Cartesian…

  6. Supplementary and Enrichment Series: Absolute Value. SP-24.

    ERIC Educational Resources Information Center

    Bridgess, M. Philbrick, Ed.

    This is one in a series of SMSG supplementary and enrichment pamphlets for high school students. This series is designed to make material for the study of topics of special interest to students readily accessible in classroom quantity. Topics covered include absolute value, addition and multiplication in terms of absolute value, graphs of absolute…

  7. Absolute dimensions of unevolved O type close binaries

    SciTech Connect

    Doom, C.; de Loore, C.

    1984-03-15

    A method is presented to derive the absolute dimensions of early-type detached binaries by combining the observed parameters with results of evolutionary computations. The method is used to obtain the absolute dimensions of nine close binaries. We find that most systems have an initial mass ratio near 1.

  8. Absolute Humidity and the Seasonality of Influenza (Invited)

    NASA Astrophysics Data System (ADS)

    Shaman, J. L.; Pitzer, V.; Viboud, C.; Grenfell, B.; Goldstein, E.; Lipsitch, M.

    2010-12-01

    Much of the observed wintertime increase of mortality in temperate regions is attributed to seasonal influenza. A recent re-analysis of laboratory experiments indicates that absolute humidity strongly modulates the airborne survival and transmission of the influenza virus. Here we show that the onset of increased wintertime influenza-related mortality in the United States is associated with anomalously low absolute humidity levels during the prior weeks. We then use an epidemiological model, in which observed absolute humidity conditions temper influenza transmission rates, to successfully simulate the seasonal cycle of observed influenza-related mortality. The model results indicate that direct modulation of influenza transmissibility by absolute humidity alone is sufficient to produce this observed seasonality. These findings provide epidemiological support for the hypothesis that absolute humidity drives seasonal variations of influenza transmission in temperate regions. In addition, we show that variations of the basic and effective reproductive numbers for influenza, caused by seasonal changes in absolute humidity, are consistent with the general timing of pandemic influenza outbreaks observed for 2009 A/H1N1 in temperate regions. Indeed, absolute humidity conditions correctly identify the region of the United States vulnerable to a third, wintertime wave of pandemic influenza. These findings suggest that the timing of pandemic influenza outbreaks is controlled by a combination of absolute humidity conditions, levels of susceptibility and changes in population mixing and contact rates.

  9. Determination of Absolute Zero Using a Computer-Based Laboratory

    ERIC Educational Resources Information Center

    Amrani, D.

    2007-01-01

    We present a simple computer-based laboratory experiment for evaluating absolute zero in degrees Celsius, which can be performed in college and undergraduate physical sciences laboratory courses. With a computer, absolute zero apparatus can help demonstrators or students to observe the relationship between temperature and pressure and use…

  10. Absolute radiometric calibration of advanced remote sensing systems

    NASA Technical Reports Server (NTRS)

    Slater, P. N.

    1982-01-01

    The distinction between the uses of relative and absolute spectroradiometric calibration of remote sensing systems is discussed. The advantages of detector-based absolute calibration are described, and the categories of relative and absolute system calibrations are listed. The limitations and problems associated with three common methods used for the absolute calibration of remote sensing systems are addressed. Two methods are proposed for the in-flight absolute calibration of advanced multispectral linear array systems. One makes use of a sun-illuminated panel in front of the sensor, the radiance of which is monitored by a spectrally flat pyroelectric radiometer. The other uses a large, uniform, high-radiance reference ground surface. The ground and atmospheric measurements required as input to a radiative transfer program to predict the radiance level at the entrance pupil of the orbital sensor are discussed, and the ground instrumentation is described.

  11. Skylab water balance error analysis

    NASA Technical Reports Server (NTRS)

    Leonard, J. I.

    1977-01-01

    Estimates of the precision of the net water balance were obtained for the entire Skylab preflight and inflight phases as well as for the first two weeks of flight. Quantitative estimates of both total sampling errors and instrumentation errors were obtained. It was shown that measurement error is minimal in comparison to biological variability and little can be gained from improvement in analytical accuracy. In addition, a propagation-of-error analysis demonstrated that total water balance error could be accounted for almost entirely by the errors associated with body mass changes. Errors due to interaction between terms in the water balance equation (covariances) represented less than 10% of the total error. Overall, the analysis provides evidence that daily measurements of body water changes obtained from the indirect balance technique are reasonable, precise, and reliable. The method is not biased toward net retention or loss.

  12. [Dealing with errors in medicine].

    PubMed

    Schoenenberger, R A; Perruchoud, A P

    1998-12-24

    Iatrogenic disease is probably more often than assumed the consequence of errors and mistakes committed by physicians and other medical personnel. Traditionally, strategies to prevent errors in medicine focus on inspection and rely on the professional ethos of health care personnel. The increasingly complex nature of medical practice and the multitude of interventions that each patient receives increase the likelihood of error. More efficient approaches to dealing with errors have been developed. The methods include routine identification of errors (critical incident reporting), systematic monitoring of multiple-step processes in medical practice, system analysis, and system redesign. A search for the underlying causes of errors (rather than proximate causes) will enable organizations to learn collectively without denying the inevitable occurrence of human error. Errors and mistakes may become precious chances to increase the quality of medical care.

  13. Absolute and relative emissions analysis in practical combustion systems—effect of water vapor condensation

    NASA Astrophysics Data System (ADS)

    Richter, J. P.; Mollendorf, J. C.; DesJardin, P. E.

    2016-11-01

    Accurate knowledge of the absolute combustion gas composition is necessary in the automotive, aircraft, processing, heating and air conditioning industries where emissions reduction is a major concern. Those industries use a variety of sensor technologies. Many of these sensors are used to analyze the gas by pumping a sample through a system of tubes to reach a remote sensor location. An inherent characteristic with this type of sampling strategy is that the mixture state changes as the sample is drawn towards the sensor. Specifically, temperature and humidity changes can be significant, resulting in a very different gas mixture at the sensor interface compared with the in situ location (water vapor dilution effect). Consequently, the gas concentrations obtained from remotely sampled gas analyzers can be significantly different than in situ values. In this study, inherent errors associated with sampled combustion gas concentration measurements are explored, and a correction methodology is presented to determine the absolute gas composition from remotely measured gas species concentrations. For in situ (wet) measurements a heated zirconium dioxide (ZrO2) oxygen sensor (Bosch LSU 4.9) is used to measure the absolute oxygen concentration. This is used to correct the remotely sampled (dry) measurements taken with an electrochemical sensor within the remote analyzer (Testo 330-2LL). In this study, such a correction is experimentally validated for a specified concentration of carbon monoxide (5020 ppmv).
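    The water-vapor dilution correction described above can be sketched with two one-line conversions. This is a hedged illustration, assuming the remote analyzer reports fully dry-basis mole fractions after condensation; the function names and numbers are invented, not the paper's.

```python
# Hedged sketch of correcting a remotely sampled (dry-basis) concentration
# to its in situ (wet-basis) value.  Assumption: the only change between
# the in situ point and the analyzer is removal of water vapor, so
# x_wet = x_dry * (1 - x_H2O).  Names and values are illustrative.

def water_fraction(o2_wet, o2_dry):
    """Infer the in situ water-vapor mole fraction from wet-basis
    (in situ zirconia sensor) and dry-basis (remote analyzer) O2."""
    return 1.0 - o2_wet / o2_dry

def dry_to_wet(x_dry, x_h2o):
    """Convert any dry-basis species mole fraction to its wet-basis value."""
    return x_dry * (1.0 - x_h2o)

x_h2o = water_fraction(o2_wet=0.045, o2_dry=0.050)   # 10% water vapor
co_wet = dry_to_wet(5020e-6, x_h2o)                  # CO, wet basis (mole fraction)
```

    The in situ oxygen reading thus serves a double duty: it is a measurement in its own right and it calibrates the dilution factor applied to every other dry-sampled species.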

  14. Absolute astrometry with Pan-STARRS

    NASA Astrophysics Data System (ADS)

    Makarov, Valeri; Berghea, Ciprian; Dorland, Bryan; Hennessy, Greg; Zacharias, Norbert; Magnier, Eugene A.; Monet, David; Gaume, Ralph

    2015-08-01

    A small collaboration of USNO and IfA astronomers is working on an improved astrometric solution for the data collected by the Pan-STARRS project. The 3PI survey performed by the PS1 telescope is well suited for a global astrometric solution. The current approach used in the data reduction pipeline is strictly differential. The 2MASS positions were used as reference for field of view (FoV) and detector calibration procedures. The absence of proper motions in 2MASS results in significant sky-correlated errors up to 30 - 50 mas. Our approach is to solve a huge system of linear equations for a carefully selected set of ~1 million grid objects including the astrometric unknowns (positions, proper motions and parallaxes) and FoV calibration parameters. The grid catalog includes ~5000 extragalactic radio sources with VLBI-detected positions accurate to 1 mas or better, which are used as hard constraints to the astrometric unknowns in the global least-squares adjustment. If successful, this will be the first realization of a large optical astrometry catalog directly anchored to the ICRF. Numerical simulations indicated a 10 mas accuracy level for Pan-STARRS astrometry, but experimental solutions on real data have not yet reached this level.

  15. Below-Average, Average, and Above-Average Readers Engage Different and Similar Brain Regions while Reading

    ERIC Educational Resources Information Center

    Molfese, Dennis L.; Key, Alexandra Fonaryova; Kelly, Spencer; Cunningham, Natalie; Terrell, Shona; Ferguson, Melissa; Molfese, Victoria J.; Bonebright, Terri

    2006-01-01

    Event-related potentials (ERPs) were recorded from 27 children (14 girls, 13 boys) who varied in their reading skill levels. Both behavior performance measures recorded during the ERP word classification task and the ERP responses themselves discriminated between children with above-average, average, and below-average reading skills. ERP…

  16. Error Sources in Asteroid Astrometry

    NASA Technical Reports Server (NTRS)

    Owen, William M., Jr.

    2000-01-01

    Asteroid astrometry, like any other scientific measurement process, is subject to both random and systematic errors, not all of which are under the observer's control. To design an astrometric observing program or to improve an existing one requires knowledge of the various sources of error, how different errors affect one's results, and how various errors may be minimized by careful observation or data reduction techniques.

  17. Integrated Navigation System Design for Micro Planetary Rovers: Comparison of Absolute Heading Estimation Algorithms and Nonlinear Filtering.

    PubMed

    Ilyas, Muhammad; Hong, Beomjin; Cho, Kuk; Baeg, Seung-Ho; Park, Sangdeok

    2016-05-23

    This paper provides algorithms to fuse relative and absolute microelectromechanical systems (MEMS) navigation sensors, suitable for micro planetary rovers, to provide a more accurate estimation of navigation information, specifically, attitude and position. Planetary rovers have extremely slow speed (~1 cm/s) and lack conventional navigation sensors/systems, hence the general methods of terrestrial navigation may not be applicable to these applications. While relative attitude and position can be tracked in a way similar to those for ground robots, absolute navigation information is hard to achieve on a remote celestial body, like Moon or Mars, in contrast to terrestrial applications. In this study, two absolute attitude estimation algorithms were developed and compared for accuracy and robustness. The estimated absolute attitude was fused with the relative attitude sensors in a framework of nonlinear filters. The nonlinear Extended Kalman filter (EKF) and Unscented Kalman filter (UKF) were compared in pursuit of better accuracy and reliability in this nonlinear estimation problem, using only on-board low cost MEMS sensors. Experimental results confirmed the viability of the proposed algorithms and the sensor suite, for low cost and low weight micro planetary rovers. It is demonstrated that integrating the relative and absolute navigation MEMS sensors reduces the navigation errors to the desired level.
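    The relative/absolute fusion idea can be illustrated with a scalar Kalman filter for heading: a rate gyro propagates the estimate (relative, drifting), and an occasional absolute heading fix corrects it. This is a minimal linear sketch with invented noise values; the paper's filters are the nonlinear EKF and UKF over full attitude and position states.

```python
# Minimal 1-D Kalman-filter sketch of relative/absolute sensor fusion.
# theta: heading estimate (rad); P: its variance.  Noise parameters
# q (process) and r (measurement) are made up for illustration.

def predict(theta, P, gyro_rate, dt, q):
    """Propagate heading with the rate gyro; uncertainty grows by q*dt."""
    return theta + gyro_rate * dt, P + q * dt

def update(theta, P, z_abs, r):
    """Correct with an absolute heading measurement of variance r."""
    K = P / (P + r)                               # Kalman gain
    return theta + K * (z_abs - theta), (1.0 - K) * P

theta, P = 0.0, 1e-4
for _ in range(100):                              # 1 s of dead reckoning
    theta, P = predict(theta, P, gyro_rate=0.10, dt=0.01, q=1e-3)
theta, P = update(theta, P, z_abs=0.095, r=1e-4)  # absolute fix
```

    Between absolute fixes the variance grows without bound, which is exactly the drift problem the paper's absolute heading estimators are meant to arrest.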

  18. Integrated Navigation System Design for Micro Planetary Rovers: Comparison of Absolute Heading Estimation Algorithms and Nonlinear Filtering

    PubMed Central

    Ilyas, Muhammad; Hong, Beomjin; Cho, Kuk; Baeg, Seung-Ho; Park, Sangdeok

    2016-01-01

    This paper provides algorithms to fuse relative and absolute microelectromechanical systems (MEMS) navigation sensors, suitable for micro planetary rovers, to provide a more accurate estimation of navigation information, specifically, attitude and position. Planetary rovers have extremely slow speed (~1 cm/s) and lack conventional navigation sensors/systems, hence the general methods of terrestrial navigation may not be applicable to these applications. While relative attitude and position can be tracked in a way similar to those for ground robots, absolute navigation information is hard to achieve on a remote celestial body, like Moon or Mars, in contrast to terrestrial applications. In this study, two absolute attitude estimation algorithms were developed and compared for accuracy and robustness. The estimated absolute attitude was fused with the relative attitude sensors in a framework of nonlinear filters. The nonlinear Extended Kalman filter (EKF) and Unscented Kalman filter (UKF) were compared in pursuit of better accuracy and reliability in this nonlinear estimation problem, using only on-board low cost MEMS sensors. Experimental results confirmed the viability of the proposed algorithms and the sensor suite, for low cost and low weight micro planetary rovers. It is demonstrated that integrating the relative and absolute navigation MEMS sensors reduces the navigation errors to the desired level. PMID:27223293

  19. Integrated Navigation System Design for Micro Planetary Rovers: Comparison of Absolute Heading Estimation Algorithms and Nonlinear Filtering.

    PubMed

    Ilyas, Muhammad; Hong, Beomjin; Cho, Kuk; Baeg, Seung-Ho; Park, Sangdeok

    2016-01-01

    This paper provides algorithms to fuse relative and absolute microelectromechanical systems (MEMS) navigation sensors, suitable for micro planetary rovers, to provide a more accurate estimation of navigation information, specifically, attitude and position. Planetary rovers have extremely slow speed (~1 cm/s) and lack conventional navigation sensors/systems, hence the general methods of terrestrial navigation may not be applicable to these applications. While relative attitude and position can be tracked in a way similar to those for ground robots, absolute navigation information is hard to achieve on a remote celestial body, like Moon or Mars, in contrast to terrestrial applications. In this study, two absolute attitude estimation algorithms were developed and compared for accuracy and robustness. The estimated absolute attitude was fused with the relative attitude sensors in a framework of nonlinear filters. The nonlinear Extended Kalman filter (EKF) and Unscented Kalman filter (UKF) were compared in pursuit of better accuracy and reliability in this nonlinear estimation problem, using only on-board low cost MEMS sensors. Experimental results confirmed the viability of the proposed algorithms and the sensor suite, for low cost and low weight micro planetary rovers. It is demonstrated that integrating the relative and absolute navigation MEMS sensors reduces the navigation errors to the desired level. PMID:27223293

  20. Reducing nurse medicine administration errors.

    PubMed

    Ofosu, Rose; Jarrett, Patricia

    Errors in administering medicines are common and can compromise patient safety. This review discusses the causes of drug administration errors by student and registered nurses in hospitals, and the practical measures educators and hospitals can take to improve nurses' knowledge and skills in medicines management and reduce drug errors.

  1. Error Bounds for Interpolative Approximations.

    ERIC Educational Resources Information Center

    Gal-Ezer, J.; Zwas, G.

    1990-01-01

    Elementary error estimation in the approximation of functions by polynomials as a computational assignment, error-bounding functions and error bounds, and the choice of interpolation points are discussed. Precalculus and computer instruction are used on some of the calculations. (KR)

  2. Uncertainty quantification and error analysis

    SciTech Connect

    Higdon, Dave M; Anderson, Mark C; Habib, Salman; Klein, Richard; Berliner, Mark; Covey, Curt; Ghattas, Omar; Graziani, Carlo; Seager, Mark; Sefcik, Joseph; Stark, Philip

    2010-01-01

    UQ studies all sources of error and uncertainty, including: systematic and stochastic measurement error; ignorance; limitations of theoretical models; limitations of numerical representations of those models; limitations on the accuracy and reliability of computations, approximations, and algorithms; and human error. A more precise definition for UQ is suggested below.

  3. Discrete models of fluids: spatial averaging, closure and model reduction

    SciTech Connect

    Panchenko, Alexander; Tartakovsky, Alexandre M.; Cooper, Kevin

    2014-04-15

    We consider semidiscrete ODE models of single-phase fluids and two-fluid mixtures. In the presence of multiple fine-scale heterogeneities, the size of these ODE systems can be very large. Spatial averaging is then a useful tool for reducing computational complexity of the problem. The averages satisfy exact balance equations of mass, momentum, and energy. These equations do not form a satisfactory continuum model because evaluation of stress and heat flux requires solving the underlying ODEs. To produce continuum equations that can be simulated without resolving microscale dynamics, we recently proposed a closure method based on the use of regularized deconvolution. Here we continue the investigation of deconvolution closure with the long term objective of developing consistent computational upscaling for multiphase particle methods. The structure of the fine-scale particle solvers is reminiscent of molecular dynamics. For this reason we use nonlinear averaging introduced for atomistic systems by Noll, Hardy, and Murdoch-Bedeaux. We also consider a simpler linear averaging originally developed in large eddy simulation of turbulence. We present several simple but representative examples of spatially averaged ODEs, where the closure error can be analyzed. Based on this analysis we suggest a general strategy for reducing the relative error of approximate closure. For problems with periodic highly oscillatory material parameters we propose a spectral boosting technique that augments the standard deconvolution and helps to correctly account for dispersion effects. We also conduct several numerical experiments, one of which is a complete mesoscale simulation of a stratified two-fluid flow in a channel. In this simulation, the operation count per coarse time step scales sublinearly with the number of particles.

  4. Absolute frequency measurement at the 10⁻¹⁶ level based on International Atomic Time

    NASA Astrophysics Data System (ADS)

    Hachisu, H.; Fujieda, M.; Kumagai, M.; Ido, T.

    2016-06-01

    Referring to International Atomic Time (TAI), we measured the absolute frequency of the 87Sr lattice clock with an uncertainty of 1.1 × 10⁻¹⁵. Unless an optical clock is operated continuously over the five-day TAI grid, the dead-time uncertainty must be evaluated in order to use the available five-day average of the local frequency reference. We distributed intermittent measurements homogeneously over the five-day TAI grid, by which the dead-time uncertainty was reduced to the low 10⁻¹⁶ level. Three campaigns of five (or four)-day consecutive measurements yielded an absolute frequency of the 87Sr clock transition of 429 228 004 229 872.85 (47) Hz, where the systematic uncertainty of the 87Sr optical frequency standard amounts to 8.6 × 10⁻¹⁷.

  5. A Liquid-Helium-Cooled Absolute Reference Cold Load forLong-Wavelength Radiometric Calibration

    SciTech Connect

    Bensadoun, M.; Witebsky, C.; Smoot, George F.; De Amici,Giovanni; Kogut, A.; Levin, S.

    1990-05-01

    We describe a large (78-cm-diameter) liquid-helium-cooled black-body absolute reference cold load for the calibration of microwave radiometers. The load provides an absolute calibration near the liquid helium (LHe) boiling point, accurate to better than 30 mK for wavelengths from 2.5 to 25 cm (12-1.2 GHz). The emission (from non-LHe-temperature parts of the cold load) and reflection are small and well determined. Total corrections to the LHe boiling-point temperature are ≤ 50 mK over the operating range. This cold load has been used at several wavelengths at the South Pole and at the White Mountain Research Station. In operation, the average LHe loss rate was ≤ 4.4 l/hr. Design considerations, radiometric and thermal performance, and operational aspects are discussed. A comparison with other LHe-cooled reference loads, including the predecessor of this cold load, is given.

  6. Beta systems error analysis

    NASA Technical Reports Server (NTRS)

    1984-01-01

    The atmospheric backscatter coefficient, beta, measured with an airborne CO Laser Doppler Velocimeter (LDV) system operating in a continuous-wave, focused mode is discussed. The Single Particle Mode (SPM) algorithm was developed from concept through analysis of an extensive amount of data obtained with the system on board a NASA aircraft. The SPM algorithm is intended for situations where one particle at a time appears in the sensitive volume of the LDV. In addition to the backscatter coefficient, the SPM algorithm produces as intermediate results the aerosol density and the aerosol backscatter cross-section distribution. A second method, which measures only the atmospheric backscatter coefficient, is called the Volume Mode (VM) and was employed simultaneously. The results of the two methods differed by slightly less than an order of magnitude. The measurement uncertainties and other errors in the results of the two methods are examined.

  7. Errors inducing radiation overdoses.

    PubMed

    Grammaticos, Philip C

    2013-01-01

    There is no doubt that equipment that emits radiation and is used for therapeutic purposes should be checked often for possibly administering radiation overdoses to patients. Technologists, radiation safety officers, radiologists, medical physicists, healthcare providers and administrators should take proper care on this issue. "We must be beneficial and not harmful to the patients", according to the Hippocratic doctrine. A series of cases of radiation overdose have recently been reported, and the doctors responsible received heavy punishments. It is much better to prevent than to treat an error or a disease. A Personal Smart Card or Score Card has been suggested for every patient undergoing therapeutic and/or diagnostic procedures involving radiation. Taxonomy may also help. PMID:24251304

  8. Absolute localization of ground robots by matching LiDAR and image data in dense forested environments

    NASA Astrophysics Data System (ADS)

    Hussein, Marwan; Renner, Matthew; Iagnemma, Karl

    2014-06-01

    A method for the autonomous geolocation of ground vehicles in forest environments is discussed. The method provides an estimate of the global horizontal position of a vehicle strictly based on finding a geometric match between a map of observed tree stems, scanned in 3D by Light Detection and Ranging (LiDAR) sensors onboard the vehicle, and another stem map generated from the structure of tree crowns analyzed from high-resolution aerial orthoimagery of the forest canopy. Extraction of stems from 3D data is achieved by using Support Vector Machine (SVM) classifiers and height-above-ground filters that separate ground points from vertical stem features. Identification of stems from overhead imagery is achieved by finding the centroids of tree crowns extracted using a watershed segmentation algorithm. Matching of the two maps is achieved by using a robust Iterative Closest Point (ICP) algorithm that determines the rotation and translation vectors to align the datasets. The alignment is used to calculate the absolute horizontal location of the vehicle. The method has been tested with real-world data and has been able to estimate vehicle geoposition with an average error of less than 2 m. The algorithm's accuracy is currently limited by the accuracy and resolution of the aerial orthoimagery used. The method can be used in real time as a complement to the Global Positioning System (GPS) in areas where signal coverage is inadequate due to attenuation by the forest canopy, or due to intentionally denied access. The method has two significant properties: i) it does not require a priori knowledge of the area surrounding the robot; ii) it uses the geometry of detected tree stems as the only input to determine horizontal geoposition.
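    The map-matching step can be illustrated with a toy, translation-only ICP loop: repeatedly match each LiDAR-detected stem to its nearest stem in the aerial-imagery map, then shift by the mean residual. The robust ICP of the paper also estimates rotation and rejects outliers; the coordinates below are invented.

```python
# Toy, translation-only ICP sketch of the stem-map alignment idea.
# scan: stem positions (x, y) detected from vehicle LiDAR.
# ref:  stem positions derived from aerial orthoimagery crown centroids.
# Returns the translation (tx, ty) that best overlays scan onto ref.

def icp_translation(scan, ref, iters=20):
    tx, ty = 0.0, 0.0
    for _ in range(iters):
        dx_sum = dy_sum = 0.0
        for (sx, sy) in scan:
            px, py = sx + tx, sy + ty
            # nearest reference stem to the current transformed point
            rx, ry = min(ref, key=lambda r: (r[0] - px) ** 2 + (r[1] - py) ** 2)
            dx_sum += rx - px
            dy_sum += ry - py
        tx += dx_sum / len(scan)        # shift by the mean residual
        ty += dy_sum / len(scan)
    return tx, ty

ref  = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0), (7.0, 7.0)]
scan = [(x - 2.0, y + 1.5) for (x, y) in ref]    # same stems, offset
tx, ty = icp_translation(scan, ref)
```

    Once the offset is recovered, adding it to the vehicle's LiDAR-frame position in the georeferenced imagery frame yields the absolute horizontal geoposition.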

  9. 40 CFR 76.11 - Emissions averaging.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ...) ACID RAIN NITROGEN OXIDES EMISSION REDUCTION PROGRAM § 76.11 Emissions averaging. (a) General... averaging plan is in compliance with the Acid Rain emission limitation for NOX under the plan only if...

  10. 40 CFR 76.11 - Emissions averaging.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ...) ACID RAIN NITROGEN OXIDES EMISSION REDUCTION PROGRAM § 76.11 Emissions averaging. (a) General... averaging plan is in compliance with the Acid Rain emission limitation for NOX under the plan only if...

  11. Effect of Body Mass Index on Magnitude of Setup Errors in Patients Treated With Adjuvant Radiotherapy for Endometrial Cancer With Daily Image Guidance

    SciTech Connect

    Lin, Lilie L.; Hertan, Lauren; Rengan, Ramesh; Teo, Boon-Keng Kevin

    2012-06-01

Purpose: To determine the impact of body mass index (BMI) on daily setup variations and frequency of imaging necessary for patients with endometrial cancer treated with adjuvant intensity-modulated radiotherapy (IMRT) with daily image guidance. Methods and Materials: The daily shifts from a total of 782 orthogonal kilovoltage images from 30 patients who received pelvic IMRT between July 2008 and August 2010 were analyzed. The BMI, mean daily shifts, and random and systematic errors in each translational and rotational direction were calculated for each patient. Margin recipes were generated based on BMI. Linear regression and Spearman rank correlation analysis were performed. To simulate a less-than-daily image-guided radiotherapy (IGRT) protocol, the average shift of the first five fractions was applied to subsequent setups without IGRT for assessing the impact on setup error and margin requirements. Results: Median BMI was 32.9 (range, 23-62). Of the 30 patients, 16.7% (n = 5) were normal weight (BMI <25); 23.3% (n = 7) were overweight (BMI ≥25 to <30); 26.7% (n = 8) were mildly obese (BMI ≥30 to <35); and 33.3% (n = 10) were moderately to severely obese (BMI ≥35). On linear regression, mean absolute vertical, longitudinal, and lateral shifts positively correlated with BMI (p = 0.0127, p = 0.0037, and p < 0.0001, respectively). Systematic errors in the longitudinal and vertical directions were positively correlated with BMI category (p < 0.0001 for both). IGRT for the first five fractions, followed by correction of the mean error for all subsequent fractions, led to a substantial reduction in setup error and resultant margin requirement overall compared with no IGRT. Conclusions: Daily shifts, systematic errors, and margin requirements were greatest in obese patients. For women who are normal weight or overweight, a planning target margin of 7 to 10 mm may be sufficient without IGRT, but for patients who are moderately or severely obese, this is insufficient.
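
The per-patient shift statistics described above can be turned into population systematic (Σ) and random (σ) setup errors, and then into a margin. The recipe shown is the widely used van Herk formula M = 2.5Σ + 0.7σ; the abstract does not state which margin recipe the authors used, and the shift values below are invented for illustration.

```python
import numpy as np

def setup_error_stats(shifts_by_patient):
    """Population systematic error (Sigma) and random error (sigma) for one
    translational axis, from per-patient arrays of daily setup shifts (mm)."""
    means = np.array([np.mean(s) for s in shifts_by_patient])
    sds = np.array([np.std(s, ddof=1) for s in shifts_by_patient])
    Sigma = np.std(means, ddof=1)        # SD of per-patient mean shifts
    sigma = np.sqrt(np.mean(sds ** 2))   # RMS of per-patient daily SDs
    return Sigma, sigma

def van_herk_margin(Sigma, sigma):
    """Common PTV margin recipe M = 2.5*Sigma + 0.7*sigma (mm)."""
    return 2.5 * Sigma + 0.7 * sigma

# Hypothetical daily shifts (mm) for three patients on one axis.
shifts = [np.array([1.0, 2.0, 1.5]),
          np.array([-0.5, 0.0, 0.5]),
          np.array([3.0, 2.5, 3.5])]
Sigma, sigma = setup_error_stats(shifts)
margin = van_herk_margin(Sigma, sigma)   # -> 2.5*1.5 + 0.7*0.5 = 4.1 mm
```

Because the margin weights Σ far more heavily than σ, the BMI-correlated systematic errors reported above dominate the margin requirement.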

  12. Register file soft error recovery

    SciTech Connect

    Fleischer, Bruce M.; Fox, Thomas W.; Wait, Charles D.; Muff, Adam J.; Watson, III, Alfred T.

    2013-10-15

    Register file soft error recovery including a system that includes a first register file and a second register file that mirrors the first register file. The system also includes an arithmetic pipeline for receiving data read from the first register file, and error detection circuitry to detect whether the data read from the first register file includes corrupted data. The system further includes error recovery circuitry to insert an error recovery instruction into the arithmetic pipeline in response to detecting the corrupted data. The inserted error recovery instruction replaces the corrupted data in the first register file with a copy of the data from the second register file.

  13. Rapid mapping of volumetric errors

    SciTech Connect

    Krulewich, D.; Hale, L.; Yordy, D.

    1995-09-13

This paper describes a relatively inexpensive, fast, and easy-to-execute approach to mapping the volumetric errors of a machine tool, coordinate measuring machine, or robot. An error map is used to characterize a machine or to improve its accuracy by compensating for its systematic errors. The method consists of three steps: (1) modeling the relationship between the volumetric error and the current state of the machine; (2) acquiring error data based on length measurements throughout the work volume; and (3) fitting the model to the particular machine.
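
The three steps can be illustrated with a toy error model, assuming (hypothetically) that the only volumetric errors are per-axis linear scale errors; the paper's actual model is richer, but the fitting step reduces to the same linear least-squares problem.

```python
import numpy as np

# Step 1: model.  Assume the only volumetric errors are per-axis scale
# errors s = (sx, sy, sz).  To first order, a length measured between
# points p and q deviates by
#   dL = (sx*dx**2 + sy*dy**2 + sz*dz**2) / L,
# which is linear in s.
rng = np.random.default_rng(1)
s_true = np.array([2e-5, -1e-5, 3e-5])   # "true" scale errors, for simulation

# Step 2: acquire length measurements throughout the work volume (mm).
p = rng.uniform(0.0, 500.0, size=(40, 3))   # endpoints of 40 length
q = rng.uniform(0.0, 500.0, size=(40, 3))   # measurements
d = q - p
L = np.linalg.norm(d, axis=1)
A = d ** 2 / L[:, None]                     # design matrix, one row per length
dL = A @ s_true                             # simulated length errors

# Step 3: fit the model to the machine by linear least squares.
s_est, *_ = np.linalg.lstsq(A, dL, rcond=None)
```

In practice the measured lengths carry noise, so more measurements than parameters are taken and the least-squares residual indicates how well the chosen model explains the machine.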

  14. The predicted CLARREO sampling error of the inter-annual SW variability

    NASA Astrophysics Data System (ADS)

    Doelling, D. R.; Keyes, D. F.; Nguyen, C.; Macdonnell, D.; Young, D. F.

    2009-12-01

The NRC Decadal Survey has called for SI traceability of long-term hyper-spectral flux measurements in order to monitor climate variability. This mission is called the Climate Absolute Radiance and Refractivity Observatory (CLARREO) and is currently defining its mission requirements. The requirements are focused on the ability to measure decadal change of key climate variables at very high accuracy. The accuracy goals are set using anticipated climate change magnitudes, but the accuracy achieved for any given climate variable must take into account the temporal and spatial sampling errors based on satellite orbits and calibration accuracy. The time period to detect a significant trend in the CLARREO record depends on the magnitude of the sampling and calibration errors relative to the current inter-annual variability. The largest uncertainty in climate feedbacks remains the effect of changing clouds on planetary energy balance. Some regions on Earth have strong diurnal cycles, such as maritime stratus and afternoon land convection; other regions have strong seasonal cycles, such as the monsoon. However, when monitoring inter-annual variability these cycles are only important if their strength varies on decadal time scales. This study will attempt to determine the best satellite constellations to reduce sampling error and to compare the error with the current inter-annual variability signal to ensure the viability of the mission. The study will incorporate Clouds and the Earth's Radiant Energy System (CERES) (Monthly TOA/Surface Averages) SRBAVG product TOA LW and SW climate-quality fluxes. The fluxes are derived by combining Terra (10:30 local equator crossing time) CERES fluxes with 3-hourly broadband fluxes estimated from five geostationary satellites, normalized using the CERES fluxes, to complete the diurnal cycle. These fluxes were saved hourly during processing and are considered the truth dataset. 90°, 83° and 74° inclination precessionary orbits as

  15. 40 CFR 89.204 - Averaging.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... are defined as follows: (1) Eligible engines rated at or above 19 kW, other than marine diesel engines, constitute an averaging set. (2) Eligible engines rated under 19 kW, other than marine diesel engines, constitute an averaging set. (3) Marine diesel engines rated at or above 19 kW constitute an averaging...

  16. 40 CFR 89.204 - Averaging.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... are defined as follows: (1) Eligible engines rated at or above 19 kW, other than marine diesel engines, constitute an averaging set. (2) Eligible engines rated under 19 kW, other than marine diesel engines, constitute an averaging set. (3) Marine diesel engines rated at or above 19 kW constitute an averaging...

  17. 40 CFR 89.204 - Averaging.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... are defined as follows: (1) Eligible engines rated at or above 19 kW, other than marine diesel engines, constitute an averaging set. (2) Eligible engines rated under 19 kW, other than marine diesel engines, constitute an averaging set. (3) Marine diesel engines rated at or above 19 kW constitute an averaging...

  18. 40 CFR 89.204 - Averaging.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... are defined as follows: (1) Eligible engines rated at or above 19 kW, other than marine diesel engines, constitute an averaging set. (2) Eligible engines rated under 19 kW, other than marine diesel engines, constitute an averaging set. (3) Marine diesel engines rated at or above 19 kW constitute an averaging...

  19. 40 CFR 89.204 - Averaging.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... are defined as follows: (1) Eligible engines rated at or above 19 kW, other than marine diesel engines, constitute an averaging set. (2) Eligible engines rated under 19 kW, other than marine diesel engines, constitute an averaging set. (3) Marine diesel engines rated at or above 19 kW constitute an averaging...

  20. Spectral averaging techniques for Jacobi matrices

    SciTech Connect

    Rio, Rafael del; Martinez, Carmen; Schulz-Baldes, Hermann

    2008-02-15

    Spectral averaging techniques for one-dimensional discrete Schroedinger operators are revisited and extended. In particular, simultaneous averaging over several parameters is discussed. Special focus is put on proving lower bounds on the density of the averaged spectral measures. These Wegner-type estimates are used to analyze stability properties for the spectral types of Jacobi matrices under local perturbations.

  1. Averaging and Adding in Children's Worth Judgements

    ERIC Educational Resources Information Center

    Schlottmann, Anne; Harman, Rachel M.; Paine, Julie

    2012-01-01

    Under the normative Expected Value (EV) model, multiple outcomes are additive, but in everyday worth judgement intuitive averaging prevails. Young children also use averaging in EV judgements, leading to a disordinal, crossover violation of utility when children average the part worths of simple gambles involving independent events (Schlottmann,…

  2. Uranium isotopic composition and absolute ages of Allende chondrules

    NASA Astrophysics Data System (ADS)

    Brennecka, G. A.; Budde, G.; Kleine, T.

    2015-11-01

    A handful of events, such as the condensation of refractory inclusions and the formation of chondrules, represent important stages in the formation and evolution of the early solar system and thus are critical to understanding its development. Compared to the refractory inclusions, chondrules appear to have a protracted period of formation that spans millions of years. As such, understanding chondrule formation requires a catalog of reliable ages, free from as many assumptions as possible. The Pb-Pb chronometer has this potential; however, because common individual chondrules have extremely low uranium contents, obtaining U-corrected Pb-Pb ages of individual chondrules is unrealistic in the vast majority of cases at this time. Thus, in order to obtain the most accurate 238U/235U ratio possible for chondrules, we separated and pooled thousands of individual chondrules from the Allende meteorite. In this work, we demonstrate that no discernible differences exist in the 238U/235U compositions between chondrule groups when separated by size and magnetic susceptibility, suggesting that no systematic U-isotope variation exists between groups of chondrules. Consequently, chondrules are likely to have a common 238U/235U ratio for any given meteorite. A weighted average of the six groups of chondrule separates from Allende results in a 238U/235U ratio of 137.786 ± 0.004 (±0.016 including propagated uncertainty on the U standard [Richter et al. 2010]). Although it is still possible that individual chondrules have significant U isotope variation within a given meteorite, this value represents our best estimate of the 238U/235U ratio for Allende chondrules and should be used for absolute dating of these objects, unless such chondrules can be measured individually.

  3. Development of a graphite probe calorimeter for absolute clinical dosimetry.

    PubMed

    Renaud, James; Marchington, David; Seuntjens, Jan; Sarfehnia, Arman

    2013-02-01

The aim of this work is to present the numerical design optimization, construction, and experimental proof of concept of a graphite probe calorimeter (GPC) conceived for dose measurement in the clinical environment (U.S. provisional patent 61/652,540). A finite element method (FEM) based numerical heat transfer study was conducted using a commercial software package to explore the feasibility of the GPC and to optimize the shape, dimensions, and materials used in its design. A functioning prototype was constructed in-house and used to perform dose to water measurements under a 6 MV photon beam at 400 and 1000 MU/min, in a thermally insulated water phantom. Heat loss correction factors were determined using FEM analysis while the radiation field perturbation and the graphite to water absorbed dose conversion factors were calculated using Monte Carlo simulations. The difference in the average measured dose to water for the 400 and 1000 MU/min runs using the TG-51 protocol and the GPC was 0.2% and 1.2%, respectively. Heat loss correction factors ranged from 1.001 to 1.002, while the product of the perturbation and dose conversion factors was calculated to be 1.130. The combined relative uncertainty was estimated to be 1.4%, with the largest contributors being the specific heat capacity of the graphite (type B, 0.8%) and the reproducibility, defined as the standard deviation of the mean measured dose (type A, 0.6%). By establishing the feasibility of using the GPC as a practical clinical absolute photon dosimeter, this work lays the foundation for further device enhancements, including the development of an isothermal mode of operation and an overall miniaturization, making it potentially suitable for use in small and composite radiation fields. It is anticipated that, through the incorporation of isothermal stabilization provided by temperature controllers, a subpercent overall uncertainty will be achieved.
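
The quoted combined uncertainty follows from root-sum-square combination of the individual components. Only the two largest contributors are named in the abstract; the third entry below is a placeholder invented solely to show how components of this size combine to roughly 1.4%.

```python
import math

# Root-sum-square combination of relative uncertainty components (in %).
# The first two values are from the abstract; the third is an invented
# stand-in for all remaining, unlisted components.
budget_percent = {
    "specific heat capacity of graphite (type B)": 0.8,
    "reproducibility of measured dose (type A)": 0.6,
    "all remaining components (illustrative)": 1.0,
}
combined = math.sqrt(sum(u ** 2 for u in budget_percent.values()))  # ~1.4 %
```
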

  4. Extraordinary floods in early Chinese history and their absolute dates

    NASA Astrophysics Data System (ADS)

    Pang, Kevin D.

    1987-12-01

    The earliest extraordinary floods recorded in Chinese historical texts occurred shortly before the beginning of Xia, the first hereditary dynasty in China. Yu, the founder of Xia, is credited with having successfully controlled these floods. Three different methods have been applied here to absolutely date these events, using royal genealogies, and records of an ancient solar eclipse and a planetary conjunction. The genealogies of the predynastic Shang lords and dynastic Shang and Zhou kings, which have been confirmed by archeological data, have been used to calibrate the parallel but not yet confirmed Xia royal genealogy. Using 30 years as an average time interval between two generations and backtracking from known endpoints the beginning of the Xia dynasty was determined to be not earlier than 20th century B.C. Dating of a recorded solar eclipse placed the 5th year of the 4th Xia king at 1876 B.C. Textual records of the 1953 B.C. five-planet conjunction have been found, and the event was shown to have occurred in the lifetime of King Yu. The evidence taken together suggests that the Xia dynasty began in the middle of the 20th century B.C., and the extraordinary floods during the reigns of the sage kings Yao and Shun occurred shortly before that, i.e., in the first half of the 20th century B.C. Radiocarbon dates from the Erlitou and Gaocheng cultures, generally believed to be Xia cultures, are consistent with the results reported here. In view of this analysis and recent archeological discoveries the traditional dates for the beginning of Xia and the earliest-recorded extraordinary floods require drastic revision.

  5. Development of a graphite probe calorimeter for absolute clinical dosimetry

    SciTech Connect

    Renaud, James; Seuntjens, Jan; Sarfehnia, Arman; Marchington, David

    2013-02-15

The aim of this work is to present the numerical design optimization, construction, and experimental proof of concept of a graphite probe calorimeter (GPC) conceived for dose measurement in the clinical environment (U.S. provisional patent 61/652,540). A finite element method (FEM) based numerical heat transfer study was conducted using a commercial software package to explore the feasibility of the GPC and to optimize the shape, dimensions, and materials used in its design. A functioning prototype was constructed in-house and used to perform dose to water measurements under a 6 MV photon beam at 400 and 1000 MU/min, in a thermally insulated water phantom. Heat loss correction factors were determined using FEM analysis while the radiation field perturbation and the graphite to water absorbed dose conversion factors were calculated using Monte Carlo simulations. The difference in the average measured dose to water for the 400 and 1000 MU/min runs using the TG-51 protocol and the GPC was 0.2% and 1.2%, respectively. Heat loss correction factors ranged from 1.001 to 1.002, while the product of the perturbation and dose conversion factors was calculated to be 1.130. The combined relative uncertainty was estimated to be 1.4%, with the largest contributors being the specific heat capacity of the graphite (type B, 0.8%) and the reproducibility, defined as the standard deviation of the mean measured dose (type A, 0.6%). By establishing the feasibility of using the GPC as a practical clinical absolute photon dosimeter, this work lays the foundation for further device enhancements, including the development of an isothermal mode of operation and an overall miniaturization, making it potentially suitable for use in small and composite radiation fields. It is anticipated that, through the incorporation of isothermal stabilization provided by temperature controllers, a subpercent overall uncertainty will be achieved.

  6. Fabrication of capacitive absolute pressure sensors by thin film vacuum encapsulation on SOI substrates

    NASA Astrophysics Data System (ADS)

    Belsito, Luca; Mancarella, Fulvio; Roncaglia, Alberto

    2016-09-01

The paper reports on the fabrication and characterization of absolute capacitive pressure sensors fabricated by polysilicon low-pressure chemical vapour deposition vacuum packaging on silicon-on-insulator substrates. The proposed fabrication process is carried out at wafer level and yields a large number of miniaturized sensors per substrate on 1 × 2 mm² chips with high yield. The sensors show an average pressure sensitivity of 8.3 pF/bar and an average pressure resolution limit of 0.24 mbar within the measurement range 200-1200 mbar. The temperature drift of the sensor prototypes was also measured in the temperature range 25-45 °C, yielding an average temperature sensitivity of 67 fF K⁻¹ at ambient pressure.
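
The reported sensitivity and resolution limit jointly imply a capacitance noise floor; this inferred figure is not stated in the abstract itself.

```python
# Convert the reported average sensitivity (8.3 pF/bar) and pressure
# resolution limit (0.24 mbar) into the implied capacitance noise floor.
sensitivity_pf_per_bar = 8.3
resolution_bar = 0.24e-3                       # 0.24 mbar expressed in bar
cap_noise_pf = sensitivity_pf_per_bar * resolution_bar
# roughly 2e-3 pF, i.e. about 2 fF of resolvable capacitance change
```
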

  7. Fabrication of capacitive absolute pressure sensors by thin film vacuum encapsulation on SOI substrates

    NASA Astrophysics Data System (ADS)

    Belsito, Luca; Mancarella, Fulvio; Roncaglia, Alberto

    2016-09-01

The paper reports on the fabrication and characterization of absolute capacitive pressure sensors fabricated by polysilicon low-pressure chemical vapour deposition vacuum packaging on silicon-on-insulator substrates. The proposed fabrication process is carried out at wafer level and yields a large number of miniaturized sensors per substrate on 1 × 2 mm² chips with high yield. The sensors show an average pressure sensitivity of 8.3 pF/bar and an average pressure resolution limit of 0.24 mbar within the measurement range 200-1200 mbar. The temperature drift of the sensor prototypes was also measured in the temperature range 25-45 °C, yielding an average temperature sensitivity of 67 fF K⁻¹ at ambient pressure.

  8. Mini-implants and miniplates generate sub-absolute and absolute anchorage

    PubMed Central

    Consolaro, Alberto

    2014-01-01

The functional demand imposed on bone promotes changes in the spatial properties of osteocytes as well as in their extensions, which are uniformly distributed throughout the mineralized surface. Once spatial deformation is established, osteocytes create the need for structural adaptations that result in bone formation and resorption to meet the functional demands. The endosteum and the periosteum are the effectors responsible for stimulating adaptive osteocytes on the inner and outer surfaces. Changes in shape, volume and position of the jaws as a result of skeletal correction of the maxilla and mandible require anchorage to allow bone remodeling to redefine morphology, esthetics and function as a result of spatial deformation conducted by orthodontic appliances. Examining the degree of changes in shape, volume and structural relationship of areas where mini-implants and miniplates are placed allows us to classify mini-implants as devices of sub-absolute anchorage and miniplates as devices of absolute anchorage. PMID:25162561

  9. Estimation of time averages from irregularly spaced observations - With application to coastal zone color scanner estimates of chlorophyll concentration

    NASA Technical Reports Server (NTRS)

    Chelton, Dudley B.; Schlax, Michael G.

    1991-01-01

The sampling error of an arbitrary linear estimate of a time-averaged quantity constructed from a time series of irregularly spaced observations at a fixed location is quantified through a formalism. The method is applied to satellite observations of chlorophyll from the coastal zone color scanner. The two specific linear estimates under consideration are the composite average formed from the simple average of all observations within the averaging period and the optimal estimate formed by minimizing the mean squared error of the temporal average based on all the observations in the time series. The resulting suboptimal estimates are shown to be more accurate than composite averages. Suboptimal estimates are also found to be nearly as accurate as optimal estimates using the correct signal and measurement error variances and correlation functions for realistic ranges of these parameters, which makes them a viable practical alternative to the composite average method generally employed at present.
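
The contrast between the composite average and the minimum-mean-squared-error estimate can be sketched as follows. The Gaussian autocovariance, decorrelation scale, and noise level are illustrative assumptions, not the paper's formalism.

```python
import numpy as np

def gauss_cov(t1, t2, var=1.0, tau=5.0):
    """Assumed Gaussian signal autocovariance between two sets of times."""
    dt = np.subtract.outer(np.asarray(t1), np.asarray(t2))
    return var * np.exp(-(dt / tau) ** 2)

rng = np.random.default_rng(2)
t_obs = np.sort(rng.uniform(0.0, 30.0, size=15))   # irregular sample times
noise_var = 0.1                                    # white measurement noise

# Target: the signal averaged over [0, 30], approximated on a fine grid.
t_grid = np.linspace(0.0, 30.0, 301)
C = gauss_cov(t_obs, t_obs) + noise_var * np.eye(len(t_obs))
c = gauss_cov(t_obs, t_grid).mean(axis=1)  # cov(each sample, window average)
w = np.linalg.solve(C, c)                  # MSE-optimal weights: w = C^-1 c

# Expected mean squared error of any linear weight vector wts:
var_avg = gauss_cov(t_grid, t_grid).mean() # variance of the true time average
def mse(wts):
    return var_avg - 2.0 * wts @ c + wts @ C @ wts

composite_w = np.full(len(t_obs), 1.0 / len(t_obs))  # simple composite average
# mse(w) <= mse(composite_w) holds by construction of w.
```

The gap between the two MSEs widens as the samples become more irregular relative to the signal decorrelation scale, which is why the optimal and near-optimal estimates outperform the composite average.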

  10. Errors in potassium balance

    SciTech Connect

    Forbes, G.B.; Lantigua, R.; Amatruda, J.M.; Lockwood, D.H.

    1981-01-01

Six overweight adult subjects given a low calorie diet containing adequate amounts of nitrogen but subnormal amounts of potassium (K) were observed in the Clinical Research Center for periods of 29 to 40 days. Metabolic balance of potassium was measured together with frequent assays of total body K by ⁴⁰K counting. Metabolic K balance underestimated body K losses by 11 to 87% (average 43%); the intersubject variability is such as to preclude the use of a single correction value for unmeasured losses in K balance studies.

  11. An All Fiber White Light Interferometric Absolute Temperature Measurement System

    PubMed Central

    Kim, Jeonggon Harrison

    2008-01-01

Recently, the author of this article proposed a new signal processing algorithm for an all fiber white light interferometer. In this article, an all fiber white light interferometric absolute temperature measurement system is presented using the previously proposed signal processing algorithm. Stability and absolute temperature measurement tests were performed, demonstrating the feasibility of absolute temperature measurement with accuracies of 0.015 fringe and 0.0005 fringe, respectively. A hysteresis test from 373 K to 873 K is also presented. Finally, robustness of the sensor system to laser diode temperature drift, AFMZI temperature drift and PZT non-linearity was demonstrated.

  12. Measurement of Disintegration Rates and Absolute γ-ray Intensities

    SciTech Connect

    DeVries, Daniel J.; Griffin, Henry C.

    2006-03-13

The majority of practical radioactive materials decay by modes that include γ-ray emission. For questions of 'how much' or 'how pure', one must know the absolute intensities of the major radiations. We are using liquid scintillation counting (LSC) to measure disintegration rates, coupled with γ-ray spectroscopy to measure absolute γ-ray emission probabilities. Described is a study of the 227Th chain yielding absolute γ-ray intensities with ≈0.5% accuracy and information on LSC efficiencies.

  13. Absolute Antenna Calibration at the US National Geodetic Survey

    NASA Astrophysics Data System (ADS)

    Mader, G. L.; Bilich, A. L.

    2012-12-01

Geodetic GNSS applications routinely demand millimeter precision and extremely high levels of accuracy. To achieve these accuracies, measurement and instrument biases at the centimeter to millimeter level must be understood. One of these biases is the antenna phase center, the apparent point of signal reception for a GNSS antenna. It has been well established that phase center patterns differ between antenna models and manufacturers; additional research suggests that the addition of a radome or the choice of antenna mount can significantly alter those a priori phase center patterns. For the more demanding GNSS positioning applications and especially in cases of mixed-antenna networks, it is all the more important to know antenna phase center variations as a function of both elevation and azimuth in the antenna reference frame and incorporate these models into analysis software. Determination of antenna phase center behavior is known as "antenna calibration". Since 1994, NGS has computed relative antenna calibrations for more than 350 antennas. In recent years, the geodetic community has moved to absolute calibrations - the IGS adopted absolute antenna phase center calibrations in 2006 for use in their orbit and clock products, and NGS's CORS group began using absolute antenna calibration upon the release of the new CORS coordinates in IGS08 epoch 2005.00 and NAD 83(2011,MA11,PA11) epoch 2010.00. Although NGS relative calibrations can be and have been converted to absolute, it is considered best practice to independently measure phase center characteristics in an absolute sense. Consequently, NGS has developed and operates an absolute calibration system. These absolute antenna calibrations accommodate the demand for greater accuracy and for 2-dimensional (elevation and azimuth) parameterization. NGS will continue to provide calibration values via the NGS web site www.ngs.noaa.gov/ANTCAL, and will publish calibrations in the ANTEX format as well as the legacy ANTINFO format.

  14. Radar prediction of absolute rain fade distributions for earth-satellite paths and general methods for extrapolation of fade statistics to other locations

    NASA Technical Reports Server (NTRS)

    Goldhirsh, J.

    1982-01-01

The first absolute rain fade distribution method described establishes absolute fade statistics at a given site by means of a sampled radar database. The second method extrapolates absolute fade statistics from one location to another, given simultaneously measured fade and rain rate statistics at the former. Both methods employ similar conditional fade statistic concepts and long term rain rate distributions. Probability deviations in the 2-19% range, with an 11% average, were obtained upon comparison of measured and predicted levels at given attenuations. The extrapolation of fade distributions to other locations at 28 GHz showed very good agreement with measured data at three sites located in the continental temperate region.

  15. Dynamic and static error analyses of neutron radiography testing

    SciTech Connect

    Joo, H.; Glickstein, S.S.

    1999-03-01

Neutron radiography systems are being used for real-time visualization of dynamic behavior as well as for time-averaged measurements of spatial vapor fraction distributions in two-phase fluids. The data, in the form of video images, are typically recorded on videotape at 30 frames per second. Image analysis of the video images is used to extract time-dependent or time-averaged data. The determination of the average vapor fraction requires averaging the logarithm of time-dependent intensity measurements of the neutron beam (the gray-scale distribution of the image) that passes through the fluid. This can be significantly different from averaging the intensity of the transmitted beam and then taking the logarithm of that term. This difference is termed the dynamic error (error in the time-averaged vapor fractions due to the inherent time dependence of the measured data) and is separate from the static error (statistical sampling uncertainty). Detailed analyses of both sources of error are discussed.

  16. Improved Error Thresholds for Measurement-Free Error Correction

    NASA Astrophysics Data System (ADS)

    Crow, Daniel; Joynt, Robert; Saffman, M.

    2016-09-01

Motivated by limitations and capabilities of neutral atom qubits, we examine whether measurement-free error correction can produce practical error thresholds. We show that this can be achieved by extracting redundant syndrome information, giving our procedure extra fault tolerance and eliminating the need for ancilla verification. The procedure is particularly favorable when multiqubit gates are available for the correction step. Simulations of the bit-flip, Bacon-Shor, and Steane codes indicate that coherent error correction can produce threshold error rates on the order of 10⁻³ to 10⁻⁴, comparable with or better than measurement-based values, and much better than previous results for other coherent error correction schemes. This indicates that coherent error correction is worthy of serious consideration for achieving protected logical qubits.

  17. Errors in clinical laboratories or errors in laboratory medicine?

    PubMed

    Plebani, Mario

    2006-01-01

Laboratory testing is a highly complex process and, although laboratory services are relatively safe, they are not as safe as they could or should be. Clinical laboratories have long focused their attention on quality control methods and quality assessment programs dealing with analytical aspects of testing. However, a growing body of evidence accumulated in recent decades demonstrates that quality in clinical laboratories cannot be assured by merely focusing on purely analytical aspects. The more recent surveys on errors in laboratory medicine conclude that in the delivery of laboratory testing, mistakes occur more frequently before (pre-analytical) and after (post-analytical) the test has been performed. Most errors are due to pre-analytical factors (46-68.2% of total errors), while a high error rate (18.5-47% of total errors) has also been found in the post-analytical phase. Errors due to analytical problems have been significantly reduced over time, but there is evidence that, particularly for immunoassays, interference may have a serious impact on patients. A description of the most frequent and risky pre-, intra- and post-analytical errors and advice on practical steps for measuring and reducing the risk of errors is therefore given in the present paper. Many mistakes in the Total Testing Process are called "laboratory errors", although these may be due to poor communication, action taken by others involved in the testing process (e.g., physicians, nurses and phlebotomists), or poorly designed processes, all of which are beyond the laboratory's control. Likewise, there is evidence that laboratory information is only partially utilized. A recent document from the International Organization for Standardization (ISO) recommends a new, broader definition of the term "laboratory error" and a classification of errors according to different criteria. In a modern approach to total quality, centered on patients' needs and satisfaction, the risk of errors and mistakes

  18. Contour Error Map Algorithm

    NASA Technical Reports Server (NTRS)

    Merceret, Francis; Lane, John; Immer, Christopher; Case, Jonathan; Manobianco, John

    2005-01-01

    The contour error map (CEM) algorithm and the software that implements the algorithm are means of quantifying correlations between sets of time-varying data that are binarized and registered on spatial grids. The present version of the software is intended for use in evaluating numerical weather forecasts against observational sea-breeze data. In cases in which observational data come from off-grid stations, it is necessary to preprocess the observational data to transform them into gridded data. First, the wind direction is gridded and binarized so that D(i,j;n) is the input to CEM based on forecast data and d(i,j;n) is the input to CEM based on gridded observational data. Here, i and j are spatial indices representing 1.25-km intervals along the west-to-east and south-to-north directions, respectively; and n is a time index representing 5-minute intervals. A binary value of D or d = 0 corresponds to an offshore wind, whereas a value of D or d = 1 corresponds to an onshore wind. CEM includes two notable subalgorithms: One identifies and verifies sea-breeze boundaries; the other, which can be invoked optionally, performs an image-erosion function for the purpose of attempting to eliminate river-breeze contributions in the wind fields.
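    The binarization step described above can be sketched as follows. This is a toy illustration, not the CEM implementation: the onshore wind sector and the agreement metric are assumptions introduced here for demonstration.

```python
import numpy as np

# Toy sketch of the binarization described above: wind directions (degrees)
# on the 1.25-km grid are mapped to 0 (offshore) or 1 (onshore), and the
# forecast and observed binary fields are compared.  The onshore sector
# bounds and the agreement metric are illustrative assumptions.

def binarize_wind(direction_deg, onshore_min=45.0, onshore_max=225.0):
    """Return 1 where the direction falls in the (assumed) onshore sector."""
    d = np.asarray(direction_deg) % 360.0
    return ((d >= onshore_min) & (d <= onshore_max)).astype(int)

def binary_agreement(D, d):
    """Fraction of grid cells where forecast and observation agree."""
    return float((np.asarray(D) == np.asarray(d)).mean())

# Toy 4x4 grid at one 5-minute time step; the observation disagrees with
# the forecast in exactly one cell.
forecast_dir = np.array([[90, 100, 260, 270],
                         [80, 110, 250, 280],
                         [70, 120, 240, 290],
                         [60, 130, 230, 300]])
observed_dir = forecast_dir.copy()
observed_dir[0, 2] = 180            # onshore where the forecast was offshore

D = binarize_wind(forecast_dir)
d = binarize_wind(observed_dir)
print(binary_agreement(D, d))       # 15 of 16 cells agree -> 0.9375
```

    CEM itself goes further (boundary identification, optional image erosion), but the gridded binary fields D(i,j;n) and d(i,j;n) above are the inputs it works from.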

  19. Error Modeling of Multi-baseline Optical Truss. Part II; Application to SIM Metrology Truss Field Dependent Error

    NASA Technical Reports Server (NTRS)

    Zhang, Liwei Dennis; Milman, Mark; Korechoff, Robert

    2004-01-01

    The current design of the Space Interferometry Mission (SIM) employs a 19 laser-metrology-beam system (also called the L19 external metrology truss) to monitor changes of distances between the fiducials of the flight system's multiple baselines. The function of the external metrology truss is to aid in the determination of the time-variations of the interferometer baseline. The largest contributor to truss error occurs in SIM wide-angle observations, when the articulation of the siderostat mirrors (in order to gather starlight from different sky coordinates) brings to light systematic errors due to offsets at the level of instrument components (which include corner-cube retro-reflectors, etc.). This error is labeled external metrology wide-angle field-dependent error. A physics-based model of field-dependent error at the single-metrology-gauge level is developed and linearly propagated to errors in interferometer delay. In this manner, delay error sensitivity to various error parameters or their combinations can be studied using eigenvalue/eigenvector analysis. Validation of the physics-based field-dependent model on the SIM testbed also lends support to the present approach. As a first example, a dihedral error model is developed for the corner cubes (CC) attached to the siderostat mirrors. The delay errors due to this effect can then be characterized using the eigenvectors of the composite CC dihedral error. The essence of the linear error model is contained in an error-mapping matrix. A corresponding Zernike component matrix approach is developed in parallel, first for convenience of describing the RMS of errors across the field-of-regard (FOR), and second for convenience of combining with additional models. Average and worst-case residual errors are computed when various orders of field-dependent terms are removed from the delay error. Results of the residual errors are important in arriving at external metrology system component requirements. Double CCs with ideally co-incident vertices

  20. Sepsis: Medical errors in Poland.

    PubMed

    Rorat, Marta; Jurek, Tomasz

    2016-01-01

    Health, safety and medical errors are currently the subject of worldwide discussion. The authors analysed medico-legal opinions trying to determine types of medical errors and their impact on the course of sepsis. The authors carried out a retrospective analysis of 66 medico-legal opinions issued by the Wroclaw Department of Forensic Medicine between 2004 and 2013 (at the request of the prosecutor or court) in cases examined for medical errors. Medical errors were confirmed in 55 of the 66 medico-legal opinions. The age of victims varied from 2 weeks to 68 years; 49 patients died. The analysis revealed medical errors committed by 113 health-care workers: 98 physicians, 8 nurses and 8 emergency medical dispatchers. In 33 cases, an error was made before hospitalisation. Hospital errors occurred in 35 victims. Diagnostic errors were discovered in 50 patients, including 46 cases of sepsis being incorrectly recognised and insufficient diagnoses in 37 cases. Therapeutic errors occurred in 37 victims, organisational errors in 9 and technical errors in 2. In addition to sepsis, 8 patients also had a severe concomitant disease and 8 had a chronic disease. In 45 cases, the authors observed glaring errors, which could incur criminal liability. There is an urgent need to introduce a system for reporting and analysing medical errors in Poland. The development and popularisation of standards for identifying and treating sepsis across basic medical professions is essential to improve patient safety and survival rates. Procedures should be introduced to prevent health-care workers from administering incorrect treatment in such cases.

  1. Sampling errors in satellite estimates of tropical rain

    NASA Technical Reports Server (NTRS)

    Mcconnell, Alan; North, Gerald R.

    1987-01-01

    The GATE rainfall data set is used in a statistical study to estimate the sampling errors that might be expected for the type of snapshot sampling that a low earth-orbiting satellite makes. For averages over the entire 400-km square and for the duration of several weeks, strong evidence is found that sampling errors less than 10 percent can be expected in contributions from each of four rain rate categories which individually account for about one quarter of the total rain.
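    The snapshot-sampling question can be illustrated with a small simulation. The synthetic rain series, its diurnal-like cycle, and the overpass periods below are assumptions for demonstration only; this is not the GATE data set or the paper's statistical method.

```python
import math, random

# Illustrative sketch: a satellite sees the rain field only at widely
# spaced instants, and the sampling error is the gap between the mean of
# those snapshots and the true time mean.  All numbers are invented.

rng = random.Random(42)

# ~3 weeks of rain rate at 5-minute resolution: a positive noisy signal
# with a 24-hour cycle (288 five-minute steps per day).
steps = 3 * 7 * 24 * 12
rain = [max(0.0, 2.0 + 1.5 * math.sin(2 * math.pi * t / 288.0)
            + rng.gauss(0.0, 1.0)) for t in range(steps)]

true_mean = sum(rain) / len(rain)

def snapshot_error(period_steps):
    """Relative sampling error of a mean built from snapshots taken
    every `period_steps` samples (one per simulated overpass)."""
    snaps = rain[::period_steps]
    return abs(sum(snaps) / len(snaps) - true_mean) / true_mean

# Once-daily vs hourly snapshots; denser sampling generally helps.
print(snapshot_error(288), snapshot_error(12))
```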

  2. Error probabilities in optical PPM receivers with Gaussian mixture densities

    NASA Technical Reports Server (NTRS)

    Gagliardi, R. M.

    1982-01-01

    A Gaussian mixture density arises when a discrete variable (e.g., a photodetector count variable) is added to a continuous Gaussian variable (e.g., thermal noise). Making use of some properties of photomultiplier Gaussian mixture distributions, some approximate error probability formulas can be derived. These appear as averages of M-ary orthogonal Gaussian error probabilities. The use of a pure Gaussian assumption is considered, and when properly defined, appears as an accurate upper bound to performance.
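    The averaging idea can be sketched numerically: the error probability of the mixture channel is the photocount-weighted average of conditional Gaussian error probabilities. The union bound, the Poisson count model, and all parameter values below are illustrative assumptions, not the paper's exact formulas.

```python
from math import exp, sqrt, erfc

def q_function(x):
    """Gaussian tail probability Q(x) = P(Z > x) for standard normal Z."""
    return 0.5 * erfc(x / sqrt(2.0))

def ppm_error_union_bound(mean_count, sigma_thermal, M=4, kmax=200):
    """Average a union-bound M-ary orthogonal (PPM) symbol error
    probability over a Poisson photocount distribution."""
    p_err = 0.0
    p_k = exp(-mean_count)              # Poisson P(K = 0)
    for k in range(kmax + 1):
        if k > 0:
            p_k *= mean_count / k       # P(K = k) via a stable recurrence
        # Given k photoelectrons, the signal slot beats an empty slot by k
        # counts in Gaussian noise; the slot difference has std sqrt(2)*sigma.
        cond = min(1.0, (M - 1) * q_function(k / (sqrt(2.0) * sigma_thermal)))
        p_err += p_k * cond
    return p_err

# Stronger signals (more photoelectrons) give a lower error probability.
print(ppm_error_union_bound(50.0, 5.0) < ppm_error_union_bound(10.0, 5.0))  # True
```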

  3. Aerial measurement error with a dot planimeter: Some experimental estimates

    NASA Technical Reports Server (NTRS)

    Yuill, R. S.

    1971-01-01

    A shape analysis is presented which utilizes a computer to simulate a multiplicity of dot grids mathematically. Results indicate that measurement accuracy correlates almost entirely with the number of dots placed over the area to be measured; the indices of shape are of little significance. Equations and graphs are provided from which the average expected error, and the maximum range of error, for various numbers of dot points can be read.
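    The dot-planimeter finding can be reproduced in miniature: estimate the area of a figure (here a unit circle) by counting the dots of a randomly offset square grid that fall inside it, each dot standing for spacing² of area. The grid spacings and trial counts below are illustrative choices, not the paper's simulation design.

```python
import math, random

def dot_count_area(radius, spacing, rng):
    """One dot-grid area estimate of a circle of the given radius:
    count grid dots inside and weight each by spacing**2."""
    ox, oy = rng.uniform(0, spacing), rng.uniform(0, spacing)
    n = int(2 * radius / spacing) + 2   # grid extent covering the circle
    count = sum(1
                for i in range(-n, n + 1)
                for j in range(-n, n + 1)
                if (ox + i * spacing) ** 2 + (oy + j * spacing) ** 2
                   <= radius ** 2)
    return count * spacing ** 2

rng = random.Random(1)
true_area = math.pi
mean_abs_error = {}
for spacing in (0.5, 0.2, 0.1):
    errs = [abs(dot_count_area(1.0, spacing, rng) - true_area)
            for _ in range(200)]
    mean_abs_error[spacing] = sum(errs) / len(errs)
    print(spacing, round(mean_abs_error[spacing], 3))
```

    Finer grids (more dots over the figure) shrink the average expected error, consistent with the abstract's conclusion that dot count, not shape, drives accuracy.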

  4. Absolute pitch in infant auditory learning: evidence for developmental reorganization.

    PubMed

    Saffran, J R; Griepentrog, G J

    2001-01-01

    To what extent do infants represent the absolute pitches of complex auditory stimuli? Two experiments with 8-month-old infants examined the use of absolute and relative pitch cues in a tone-sequence statistical learning task. The results suggest that, given unsegmented stimuli that do not conform to the rules of musical composition, infants are more likely to track patterns of absolute pitches than of relative pitches. A 3rd experiment tested adults with or without musical training on the same statistical learning tasks used in the infant experiments. Unlike the infants, adult listeners relied primarily on relative pitch cues. These results suggest a shift from an initial focus on absolute pitch to the eventual dominance of relative pitch, which, it is argued, is more useful for both music and speech processing.

  5. Absolute calibration of sniffer probes on Wendelstein 7-X

    NASA Astrophysics Data System (ADS)

    Moseev, D.; Laqua, H. P.; Marsen, S.; Stange, T.; Braune, H.; Erckmann, V.; Gellert, F.; Oosterbeek, J. W.

    2016-08-01

    Here we report the first measurements of the power levels of stray radiation in the vacuum vessel of Wendelstein 7-X using absolutely calibrated sniffer probes. The absolute calibration is achieved by using calibrated sources of stray radiation and the implicit measurement of the quality factor of the Wendelstein 7-X empty vacuum vessel. Normalized absolute calibration coefficients agree with the cross-calibration coefficients that are obtained by the direct measurements, indicating that the measured absolute calibration coefficients and stray radiation levels in the vessel are valid. Close to the launcher, the stray radiation in the empty vessel reaches power levels up to 340 kW/m2 per MW injected beam power. Furthest away from the launcher, i.e., half a toroidal turn, still 90 kW/m2 per MW injected beam power is measured.

  6. Temporal Dynamics of Microbial Rhodopsin Fluorescence Reports Absolute Membrane Voltage

    PubMed Central

    Hou, Jennifer H.; Venkatachalam, Veena; Cohen, Adam E.

    2014-01-01

    Plasma membrane voltage is a fundamentally important property of a living cell; its value is tightly coupled to membrane transport, the dynamics of transmembrane proteins, and to intercellular communication. Accurate measurement of the membrane voltage could elucidate subtle changes in cellular physiology, but existing genetically encoded fluorescent voltage reporters are better at reporting relative changes than absolute numbers. We developed an Archaerhodopsin-based fluorescent voltage sensor whose time-domain response to a stepwise change in illumination encodes the absolute membrane voltage. We validated this sensor in human embryonic kidney cells. Measurements were robust to variation in imaging parameters and in gene expression levels, and reported voltage with an absolute accuracy of 10 mV. With further improvements in membrane trafficking and signal amplitude, time-domain encoding of absolute voltage could be applied to investigate many important and previously intractable bioelectric phenomena. PMID:24507604

  7. Absolute Value Boundedness, Operator Decomposition, and Stochastic Media and Equations

    NASA Technical Reports Server (NTRS)

    Adomian, G.; Miao, C. C.

    1973-01-01

    The research accomplished during this period is reported. Published abstracts and technical reports are listed. Articles presented include: boundedness of absolute values of generalized Fourier coefficients, propagation in stochastic media, and stationary conditions for stochastic differential equations.

  8. Absolute calibration of sniffer probes on Wendelstein 7-X.

    PubMed

    Moseev, D; Laqua, H P; Marsen, S; Stange, T; Braune, H; Erckmann, V; Gellert, F; Oosterbeek, J W

    2016-08-01

    Here we report the first measurements of the power levels of stray radiation in the vacuum vessel of Wendelstein 7-X using absolutely calibrated sniffer probes. The absolute calibration is achieved by using calibrated sources of stray radiation and the implicit measurement of the quality factor of the Wendelstein 7-X empty vacuum vessel. Normalized absolute calibration coefficients agree with the cross-calibration coefficients that are obtained by the direct measurements, indicating that the measured absolute calibration coefficients and stray radiation levels in the vessel are valid. Close to the launcher, the stray radiation in the empty vessel reaches power levels up to 340 kW/m2 per MW injected beam power. Furthest away from the launcher, i.e., half a toroidal turn, still 90 kW/m2 per MW injected beam power is measured. PMID:27587121

  9. Preparation of an oakmoss absolute with reduced allergenic potential.

    PubMed

    Ehret, C; Maupetit, P; Petrzilka, M; Klecak, G

    1992-06-01

    Oakmoss absolute, an extract of the lichen Evernia prunastri, is known to cause allergenic skin reactions due to the presence of certain aromatic aldehydes such as atranorin, chloratranorin, ethyl hematommate and ethyl chlorohematommate. In this paper it is shown that treatment of Oakmoss absolute with amino acids such as lysine and/or leucine considerably lowers the content of these allergenic constituents, including atranol and chloratranol. The resulting Oakmoss absolute, which exhibits an excellent olfactive quality, was tested extensively in comparative studies on guinea pigs and on man. The results of the Guinea Pig Maximization Test (GPMT) and Human Repeated Insult Patch Test (HRIPT) indicate that, in comparison with the commercial test sample, the allergenicity of this new quality of Oakmoss absolute was considerably reduced, and consequently better skin tolerance of this fragrance for man was achieved. PMID:19272096

  10. Absolute Free Energies for Biomolecules in Implicit or Explicit Solvent

    NASA Astrophysics Data System (ADS)

    Berryman, Joshua T.; Schilling, Tanja

    Methods for absolute free energy calculation by alchemical transformation of a quantitative model to an analytically tractable one are discussed. These absolute free energy methods are placed in the context of other methods, and an attempt is made to describe the best practice for such calculations given the current state of the art. Calculations of the equilibria between the four free energy basins of the dialanine molecule and the two right- and left-twisted basins of DNA are discussed as examples.

  11. Heat capacity and absolute entropy of iron phosphides

    SciTech Connect

    Dobrokhotova, Z.V.; Zaitsev, A.I.; Litvina, A.D.

    1994-09-01

    There is little or no data on the thermodynamic properties of iron phosphides despite their importance for several areas of science and technology. The information available is of a qualitative character and is based on assessments of the heat capacity and absolute entropy. In the present work, we measured the heat capacity over the temperature range of 113-873 K using a differential scanning calorimeter (DSC) and calculated the absolute entropy.

  12. Dopamine reward prediction error coding.

    PubMed

    Schultz, Wolfram

    2016-03-01

    Reward prediction errors consist of the differences between received and predicted rewards. They are crucial for basic forms of learning about rewards and make us strive for more rewards-an evolutionary beneficial trait. Most dopamine neurons in the midbrain of humans, monkeys, and rodents signal a reward prediction error; they are activated by more reward than predicted (positive prediction error), remain at baseline activity for fully predicted rewards, and show depressed activity with less reward than predicted (negative prediction error). The dopamine signal increases nonlinearly with reward value and codes formal economic utility. Drugs of addiction generate, hijack, and amplify the dopamine reward signal and induce exaggerated, uncontrolled dopamine effects on neuronal plasticity. The striatum, amygdala, and frontal cortex also show reward prediction error coding, but only in subpopulations of neurons. Thus, the important concept of reward prediction errors is implemented in neuronal hardware.
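    The quantity described above can be written down in a few lines. This is a minimal Rescorla-Wagner-style sketch, not the neuronal model: the prediction error is the received reward minus the predicted reward, and the prediction is nudged by a fraction of that error on each trial. The learning rate and reward magnitude are illustrative assumptions.

```python
def update_prediction(value, reward, learning_rate=0.2):
    """One learning step: return (new prediction, prediction error)."""
    delta = reward - value              # positive, zero, or negative RPE
    return value + learning_rate * delta, delta

value = 0.0
for trial in range(50):
    value, delta = update_prediction(value, reward=1.0)

print(round(value, 3))   # the prediction converges toward the reward (1.0)
print(round(delta, 3))   # the error decays toward baseline (0.0)
```

    This mirrors the abstract's description: a fully predicted reward yields no error signal, while surprising rewards drive learning.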

  13. Dopamine reward prediction error coding

    PubMed Central

    Schultz, Wolfram

    2016-01-01

    Reward prediction errors consist of the differences between received and predicted rewards. They are crucial for basic forms of learning about rewards and make us strive for more rewards—an evolutionary beneficial trait. Most dopamine neurons in the midbrain of humans, monkeys, and rodents signal a reward prediction error; they are activated by more reward than predicted (positive prediction error), remain at baseline activity for fully predicted rewards, and show depressed activity with less reward than predicted (negative prediction error). The dopamine signal increases nonlinearly with reward value and codes formal economic utility. Drugs of addiction generate, hijack, and amplify the dopamine reward signal and induce exaggerated, uncontrolled dopamine effects on neuronal plasticity. The striatum, amygdala, and frontal cortex also show reward prediction error coding, but only in subpopulations of neurons. Thus, the important concept of reward prediction errors is implemented in neuronal hardware. PMID:27069377

  14. Absolute beam flux measurement at NDCX-I using gold-melting calorimetry technique

    SciTech Connect

    Ni, P.A.; Bieniosek, F.M.; Lidia, S.M.; Welch, J.

    2011-04-01

    We report on an alternative way to measure the absolute beam flux at NDCX-I, the LBNL linear accelerator. To date, the beam flux has been determined from the analysis of the beam-induced optical emission from a ceramic scintillator (Al-Si). The new approach is based on a calorimetric technique, in which the energy flux is deduced from the melting dynamics of a gold foil. We estimate an average 260 kW/cm2 beam flux over 5 µs, which is consistent with values provided by the other methods. The described technique can be applied to various ion species and energies.
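    The calorimetric deduction can be illustrated with a back-of-the-envelope energy balance: the energy needed to heat a gold foil to its melting point and melt it, divided by the pulse duration, sets the flux. The foil thickness, initial temperature, and pulse length below are assumed values, and the gold properties are approximate handbook figures; none of this reproduces the NDCX-I analysis itself.

```python
# Approximate handbook properties of gold (assumed, not from the paper).
C_GOLD = 129.0        # specific heat, J/(kg*K)
L_FUSION = 63.7e3     # latent heat of fusion, J/kg
T_MELT = 1337.0       # melting point, K
RHO_GOLD = 19300.0    # density, kg/m^3

def beam_flux_w_per_cm2(foil_thickness_m, t_initial_k, pulse_s):
    """Flux (W/cm^2) needed to heat a gold foil of the given thickness
    from t_initial_k to the melting point and melt it within pulse_s,
    assuming uniform heating and no losses."""
    mass_per_m2 = RHO_GOLD * foil_thickness_m             # kg per m^2 of foil
    energy_per_m2 = mass_per_m2 * (C_GOLD * (T_MELT - t_initial_k) + L_FUSION)
    return energy_per_m2 / pulse_s / 1e4                  # W/m^2 -> W/cm^2

# A 1-micron foil melted from room temperature within a 5-microsecond pulse.
print(round(beam_flux_w_per_cm2(1e-6, 293.0, 5e-6)), "W/cm^2")
```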

  15. Evaluating Methods for Constructing Average High-Density Electrode Positions

    PubMed Central

    Richards, John E.; Boswell, Corey; Stevens, Michael; Vendemia, Jennifer M.C.

    2014-01-01

    Accurate analysis of scalp-recorded electrical activity requires the identification of electrode locations in 3D space. For example, source analysis of EEG/ERP (electroencephalogram, EEG; event-related-potentials, ERP) with realistic head models requires the identification of electrode locations on the head model derived from structural MRI recordings. Electrode systems must cover the entire scalp in sufficient density to discriminate EEG activity on the scalp and to complete accurate source analysis. The current study compares techniques for averaging electrode locations from 86 participants with the 128-channel “Geodesic Sensor Net” (GSN; EGI, Inc.), 38 participants with the 128-channel “Hydrocel Geodesic Sensor Net” (HGSN; EGI, Inc.), and 174 participants with the 81 channels in the 10-10 configuration. A point-set registration between the participants and an average MRI template resulted in an average configuration showing small standard errors, which could be transformed back accurately into the participants’ original electrode space. Average electrode locations are available for the GSN (86 participants), Hydrocel-GSN (38 participants), and 10-10 and 10-5 systems (174 participants). PMID:25234713

  16. Discrete Averaging Relations for Micro to Macro Transition

    NASA Astrophysics Data System (ADS)

    Liu, Chenchen; Reina, Celia

    2016-05-01

    The well-known Hill's averaging theorems for stresses and strains as well as the so-called Hill-Mandel principle of macrohomogeneity are essential ingredients for the coupling and the consistency between the micro and macro scales in multiscale finite element procedures (FE²). We show in this paper that these averaging relations hold exactly under standard finite element discretizations, even if the stress field is discontinuous across elements and the standard proofs based on the divergence theorem are no longer suitable. The discrete averaging results are derived for the three classical types of boundary conditions (affine displacement, periodic and uniform traction boundary conditions) using the properties of the shape functions and the weak form of the microscopic equilibrium equations. The analytical proofs are further verified numerically through a simple finite element simulation of an irregular representative volume element undergoing large deformations. Furthermore, the proofs are extended to include the effects of body forces and inertia, and the results are consistent with those in the smooth continuum setting. This work provides a solid foundation to apply Hill's averaging relations in multiscale finite element methods without introducing an additional error in the scale transition due to the discretization.
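    For orientation, the volume averages and the Hill–Mandel macrohomogeneity condition that the paper extends to the discrete setting read, in their standard continuum form (a textbook statement, not quoted from this paper):

```latex
\langle \boldsymbol{\sigma} \rangle = \frac{1}{|\Omega|} \int_{\Omega} \boldsymbol{\sigma} \,\mathrm{d}V ,
\qquad
\langle \dot{\boldsymbol{\varepsilon}} \rangle = \frac{1}{|\Omega|} \int_{\Omega} \dot{\boldsymbol{\varepsilon}} \,\mathrm{d}V ,
\qquad
\langle \boldsymbol{\sigma} : \dot{\boldsymbol{\varepsilon}} \rangle
  = \langle \boldsymbol{\sigma} \rangle : \langle \dot{\boldsymbol{\varepsilon}} \rangle .
```

    The paper's contribution is showing that these equalities survive standard finite element discretization for the three classical boundary conditions.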

  17. Global absolute gravity reference system as replacement of IGSN 71

    NASA Astrophysics Data System (ADS)

    Wilmes, Herbert; Wziontek, Hartmut; Falk, Reinhard

    2015-04-01

    The determination of precise gravity field parameters is of great importance in a period in which earth sciences are achieving the accuracy necessary to monitor and document global change processes. This is the reason why experts from geodesy and metrology joined in a successful cooperation to make absolute gravity observations traceable to SI quantities, to improve the metrological kilogram definition and to monitor mass movements and the smallest height changes for geodetic and geophysical applications. The international gravity datum is still defined by the International Gravity Standardization Net adopted in 1971 (IGSN 71). The network is based upon pendulum and spring gravimeter observations taken in the 1950s and 60s, supported by the early free-fall absolute gravimeters. Its gravity values agreed in every case to better than 0.1 mGal. Today, more than 100 absolute gravimeters are in use worldwide. The series of repeated international comparisons confirms the traceability of absolute gravity measurements to SI quantities and confirms the degree of equivalence of the gravimeters in the order of a few µGal. For applications in geosciences where, e.g., gravity changes over time need to be analyzed, the temporal stability of an absolute gravimeter is most important. Therefore, the proposition is made to replace IGSN 71 with an up-to-date gravity reference system based upon repeated absolute gravimeter comparisons and a global network of well-controlled gravity reference stations.

  18. Average-cost based robust structural control

    NASA Technical Reports Server (NTRS)

    Hagood, Nesbitt W.

    1993-01-01

    A method is presented for the synthesis of robust controllers for linear time invariant structural systems with parameterized uncertainty. The method involves minimizing quantities related to the quadratic cost (H2-norm) averaged over a set of systems described by real parameters such as natural frequencies and modal residues. Bounded average cost is shown to imply stability over the set of systems. Approximations for the exact average are derived and proposed as cost functionals. The properties of these approximate average cost functionals are established. The exact average and approximate average cost functionals are used to derive dynamic controllers which can provide stability robustness. The robustness properties of these controllers are demonstrated in illustrative numerical examples and tested in a simple SISO experiment on the MIT multi-point alignment testbed.

  19. On the absolute calibration of SO2 cameras

    USGS Publications Warehouse

    Lübcke, Peter; Bobrowski, Nicole; Illing, Sebastian; Kern, Christoph; Alvarez Nieves, Jose Manuel; Vogel, Leif; Zielcke, Johannes; Delgado Granados, Hugo; Platt, Ulrich

    2013-01-01

    This work investigates the uncertainty of results gained through the two commonly used, but quite different, calibration methods (DOAS and calibration cells). Measurements with three different instruments, an SO2 camera, an NFOV-DOAS system and an Imaging DOAS (I-DOAS), are presented. We compare the calibration-cell approach with the calibration from the NFOV-DOAS system. The respective results are compared with measurements from an I-DOAS to verify the calibration curve over the spatial extent of the image. The results show that calibration cells, while working fine in some cases, can lead to an overestimation of the SO2 CD by up to 60% compared with CDs from the DOAS measurements. Besides these calibration errors, radiative transfer effects (e.g. light dilution, multiple scattering) can significantly influence the results of both instrument types. The measurements presented in this work were taken at Popocatepetl, Mexico, between 1 March 2011 and 4 March 2011. Average SO2 emission rates between 4.00 and 14.34 kg s⁻¹ were observed.
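    The calibration comparison can be sketched numerically: camera apparent absorbances are converted to SO2 column densities either via a least-squares slope against co-located DOAS CDs or via a calibration-cell slope, and the mismatch between the two slopes is the kind of overestimation the paper quantifies. All numbers below are invented for illustration; they are not the paper's data.

```python
def fit_slope_through_origin(xs, ys):
    """Least-squares slope of y = s * x (no offset)."""
    return sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)

# Camera apparent absorbance vs simultaneous DOAS column density
# (molec/cm^2); all values are made up for this sketch.
tau = [0.05, 0.10, 0.15, 0.20]
cd_doas = [0.9e17, 2.1e17, 3.0e17, 4.1e17]

s_doas = fit_slope_through_origin(tau, cd_doas)

# A calibration cell of known CD defines an alternative slope; here it is
# assumed to imply systematically larger CDs than the DOAS fit.
s_cell = 2.5e18

overestimate = s_cell / s_doas - 1
print(round(100 * overestimate, 1), "% higher CDs with the cell calibration")
```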

  20. Medication Errors in Outpatient Pediatrics.

    PubMed

    Berrier, Kyla

    2016-01-01

    Medication errors may occur during parental administration of prescription and over-the-counter medications in the outpatient pediatric setting. Misinterpretation of medication labels and dosing errors are two types of errors in medication administration. Health literacy may play an important role in parents' ability to safely manage their child's medication regimen. There are several proposed strategies for decreasing these medication administration errors, including using standardized dosing instruments, using strictly metric units for medication dosing, and providing parents and caregivers with picture-based dosing instructions. Pediatric healthcare providers should be aware of these strategies and seek to implement many of them into their practices. PMID:27537086

  1. A theory of human error

    NASA Technical Reports Server (NTRS)

    Mcruer, D. T.; Clement, W. F.; Allen, R. W.

    1980-01-01

    Human error, a significant contributing factor in a very high proportion of civil transport, general aviation, and rotorcraft accidents, is investigated. Correction of the sources of human error requires that one attempt to reconstruct the underlying and contributing causes of error from the circumstantial causes cited in official investigative reports. A validated analytical theory of the input-output behavior of human operators involving manual control, communication, supervisory, and monitoring tasks relevant to aviation operations is presented. This theory of behavior, both appropriate and inappropriate, provides an insightful basis for investigating, classifying, and quantifying the needed cause-effect relationships governing the propagation of human error.

  2. Spatial limitations in averaging social cues

    PubMed Central

    Florey, Joseph; Clifford, Colin W. G.; Dakin, Steven; Mareschal, Isabelle

    2016-01-01

    The direction of social attention from groups provides stronger cueing than from an individual. It has previously been shown that both basic visual features such as size or orientation and more complex features such as face emotion and identity can be averaged across multiple elements. Here we used an equivalent noise procedure to compare observers’ ability to average social cues with their averaging of a non-social cue. Estimates of observers’ internal noise (uncertainty associated with processing any individual) and sample-size (the effective number of gaze-directions pooled) were derived by fitting equivalent noise functions to discrimination thresholds. We also used reverse correlation analysis to estimate the spatial distribution of samples used by participants. Averaging of head-rotation and cone-rotation was less noisy and more efficient than averaging of gaze direction, though presenting only the eye region of faces at a larger size improved gaze averaging performance. The reverse correlation analysis revealed greater sampling areas for head rotation compared to gaze. We attribute these differences in averaging between gaze and head cues to poorer visual processing of faces in the periphery. The similarity between head and cone averaging are examined within the framework of a general mechanism for averaging of object rotation. PMID:27573589
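    The equivalent-noise logic described above (fitting thresholds to recover internal noise and the effective number of samples pooled) can be demonstrated with synthetic data. The model form below is the standard equivalent-noise expression; the coarse grid search is an illustrative stand-in for proper curve fitting, and all parameter values are assumptions.

```python
import math

def en_threshold(sigma_ext, sigma_int, n_samp):
    """Equivalent-noise model: discrimination threshold as a function of
    external noise, internal noise, and effective samples pooled."""
    return math.sqrt((sigma_int ** 2 + sigma_ext ** 2) / n_samp)

# Synthetic thresholds generated from known parameters...
true_int, true_n = 4.0, 8.0
ext_levels = [0.0, 2.0, 4.0, 8.0, 16.0, 32.0]
thresholds = [en_threshold(s, true_int, true_n) for s in ext_levels]

# ...recovered by a coarse grid search over (sigma_int, n_samp).
best = None
for si10 in range(1, 101):          # sigma_int in 0.1 .. 10.0
    for n10 in range(1, 201):       # n_samp in 0.1 .. 20.0
        si, n = si10 / 10.0, n10 / 10.0
        sse = sum((en_threshold(s, si, n) - t) ** 2
                  for s, t in zip(ext_levels, thresholds))
        if best is None or sse < best[0]:
            best = (sse, si, n)

print(best[1], best[2])   # recovers sigma_int = 4.0, n_samp = 8.0
```

    At low external noise the threshold is dominated by internal noise; at high external noise it is dominated by the sample size, which is why the two parameters are separable from the threshold-vs-noise curve.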

  3. Cosmological ensemble and directional averages of observables

    SciTech Connect

    Bonvin, Camille; Clarkson, Chris; Durrer, Ruth; Maartens, Roy; Umeh, Obinna E-mail: chris.clarkson@gmail.com E-mail: roy.maartens@gmail.com

    2015-07-01

    We show that at second order, ensemble averages of observables and directional averages do not commute due to gravitational lensing—observing the same thing in many directions over the sky is not the same as taking an ensemble average. In principle this non-commutativity is significant for a variety of quantities that we often use as observables and can lead to a bias in parameter estimation. We derive the relation between the ensemble average and the directional average of an observable, at second order in perturbation theory. We discuss the relevance of these two types of averages for making predictions of cosmological observables, focusing on observables related to distances and magnitudes. In particular, we show that the ensemble average of the distance in a given observed direction is increased by gravitational lensing, whereas the directional average of the distance is decreased. For a generic observable, there exists a particular function of the observable that is not affected by second-order lensing perturbations. We also show that standard areas have an advantage over standard rulers, and we discuss the subtleties involved in averaging in the case of supernova observations.

  4. Spectral and parametric averaging for integrable systems

    NASA Astrophysics Data System (ADS)

    Ma, Tao; Serota, R. A.

    2015-05-01

    We analyze two theoretical approaches to ensemble averaging for integrable systems in quantum chaos: spectral averaging (SA) and parametric averaging (PA). For SA, we introduce a new procedure, namely, rescaled spectral averaging (RSA). Unlike traditional SA, it can describe the correlation function of spectral staircase (CFSS) and produce persistent oscillations of the interval level number variance (IV). PA, while not as accurate as RSA for the CFSS and IV, can also produce persistent oscillations of the global level number variance (GV) and better describes saturation level rigidity as a function of the running energy. Overall, it is the most reliable method for a wide range of statistics.

  5. Spatial limitations in averaging social cues.

    PubMed

    Florey, Joseph; Clifford, Colin W G; Dakin, Steven; Mareschal, Isabelle

    2016-01-01

    The direction of social attention from groups provides stronger cueing than from an individual. It has previously been shown that both basic visual features such as size or orientation and more complex features such as face emotion and identity can be averaged across multiple elements. Here we used an equivalent noise procedure to compare observers' ability to average social cues with their averaging of a non-social cue. Estimates of observers' internal noise (uncertainty associated with processing any individual) and sample-size (the effective number of gaze-directions pooled) were derived by fitting equivalent noise functions to discrimination thresholds. We also used reverse correlation analysis to estimate the spatial distribution of samples used by participants. Averaging of head-rotation and cone-rotation was less noisy and more efficient than averaging of gaze direction, though presenting only the eye region of faces at a larger size improved gaze averaging performance. The reverse correlation analysis revealed greater sampling areas for head rotation compared to gaze. We attribute these differences in averaging between gaze and head cues to poorer visual processing of faces in the periphery. The similarity between head and cone averaging are examined within the framework of a general mechanism for averaging of object rotation. PMID:27573589

  6. An ultrasonic system for measurement of absolute myocardial thickness using a single transducer.

    PubMed

    Pitsillides, K F; Longhurst, J C

    1995-03-01

    We have developed an ultrasonic instrument that can measure absolute regional myocardial wall motion throughout the cardiac cycle using a single epicardial piezoelectric transducer. The methods currently in place that utilize ultrasound to measure myocardial wall thickness are the transit-time sonomicrometer (TTS) and, more recently, the Doppler echo displacement method. Both methods have inherent disadvantages. To address the need for an instrument that can measure absolute dimensions of the myocardial wall at any depth, an ultrasonic single-crystal sonomicrometer (SCS) system was developed. This system can identify and track the boundary of the endocardial muscle-blood interface. With this instrument, it is possible to obtain, from a single epicardial transducer, measurement of myocardial wall motion that is calibrated in absolute dimensional units. The operating principles of the proposed myocardial dimension measurement system are as follows. A short-duration ultrasonic burst having a frequency of 10 MHz is transmitted from the piezoelectric transducer. Reflected echoes are sampled at two distinct time intervals to generate reference and interface sample volumes. During steady state, the two sample volumes are adjusted so that the reference volume remains entirely within the myocardium, whereas half of the interface sample volume is located within the myocardium. After amplification and filtering, the true root mean square values of both signals are compared and an error signal is generated. A closed-loop circuit uses the integrated error signal to continuously adjust the position of the two sample volumes. We have compared our system in vitro against a known signal and in vivo against the two-crystal TTS system during control, suppression (ischemia), and enhancement (isoproterenol) of myocardial function.
Results were obtained in vitro for accuracy (> 99%), signal linearity (r = 0.99), and frequency response to heart rates > 450 beats/min, and in vivo data were

  7. Pre-Launch Absolute Calibration of CCD/CBERS-2B Sensor

    PubMed Central

    Ponzoni, Flávio Jorge; Albuquerque, Bráulio Fonseca Carneiro

    2008-01-01

    Pre-launch absolute calibration coefficients for the CCD/CBERS-2B sensor have been calculated from radiometric measurements performed in a satellite integration and test hall at the Chinese Academy of Space Technology (CAST) headquarters, located in Beijing, China. An illuminated integrating sphere was positioned in the test hall facilities to allow CCD/CBERS-2B imaging of the entire sphere aperture. Calibration images were recorded, and a relative calibration procedure adopted exclusively in Brazil was applied to equalize the detectors' responses. Averages of digital numbers (DN) from these images were determined and correlated with their respective radiance levels in order to calculate the absolute calibration coefficients. This is the first time these pre-launch absolute calibration coefficients have been calculated using the Brazilian image-processing criteria. It will now be possible to compare them with coefficients calculated from vicarious calibration campaigns; this comparison will permit ongoing monitoring of CCD/CBERS-2B and frequent data updates for the user community.
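    The procedure described in the abstract above — averaging digital numbers from calibration images and correlating them with known sphere radiance levels — amounts to a linear least-squares fit. A minimal sketch with invented DN/radiance values (the numbers and function names are illustrative, not taken from the paper):

    ```python
    def calibration_fit(radiances, dn_means):
        """Least-squares fit of DN = gain * radiance + offset; returns (gain, offset)."""
        n = len(radiances)
        mean_x = sum(radiances) / n
        mean_y = sum(dn_means) / n
        sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(radiances, dn_means))
        sxx = sum((x - mean_x) ** 2 for x in radiances)
        gain = sxy / sxx
        offset = mean_y - gain * mean_x
        return gain, offset

    def dn_to_radiance(dn, gain, offset):
        """Invert the linear calibration to recover radiance from a raw DN."""
        return (dn - offset) / gain

    # Hypothetical integrating-sphere radiance levels and mean DNs per level.
    radiance_levels = [10.0, 20.0, 30.0, 40.0, 50.0]
    mean_dns = [105.0, 198.0, 305.0, 401.0, 498.0]
    gain, offset = calibration_fit(radiance_levels, mean_dns)
    ```

    Once `gain` and `offset` are known, any raw DN in an image can be mapped back to absolute radiance with `dn_to_radiance`, which is the sense in which the coefficients are "absolute".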

  8. Verification of 235U mass content in nuclear fuel plates by an absolute method

    NASA Astrophysics Data System (ADS)

    El-Gammal, W.

    2007-01-01

    Nuclear Safeguards refers to a verification system by which a State can control all nuclear materials (NM) and nuclear activities under its authority. An effective and efficient Safeguards system must include a system of measurements with capabilities sufficient to verify such NM. Measurements of NM using absolute methods could eliminate the dependency on NM standards, which are necessary for other relative or semi-absolute methods. In this work, an absolute method has been investigated to verify the 235U mass content in nuclear fuel plates of Material Testing Reactor (MTR) type. The most intense gamma-ray signature at 185.7 keV, emitted after α-decay of the 235U nuclei, was employed in the method. The measuring system (an HPGe spectrometer) was mathematically calibrated for efficiency using the general Monte Carlo transport code MCNP-4B. The calibration results and the measured net count rate were used to estimate the 235U mass content in fuel plates at different detector-to-fuel-plate distances. Two sets of fuel plates, containing natural and low enriched uranium, were measured at the Fuel Fabrication Facility. Average accuracies for the estimated 235U masses of about 2.62% and 0.3% were obtained for the fuel plates containing natural and low enriched uranium, respectively, with a precision of about 3%.

  9. Absolute protein quantification of the yeast chaperome under conditions of heat shock

    PubMed Central

    Mackenzie, Rebecca J.; Lawless, Craig; Holman, Stephen W.; Lanthaler, Karin; Beynon, Robert J.; Grant, Chris M.; Hubbard, Simon J.

    2016-01-01

    Chaperones are fundamental to regulating the heat shock response, mediating protein recovery from thermal‐induced misfolding and aggregation. Using the QconCAT strategy and selected reaction monitoring (SRM) for absolute protein quantification, we have determined copy per cell values for 49 key chaperones in Saccharomyces cerevisiae under conditions of normal growth and heat shock. This work extends a previous chemostat quantification study by including up to five Q‐peptides per protein to improve confidence in protein quantification. In contrast to the global proteome profile of S. cerevisiae in response to heat shock, which remains largely unchanged as determined by label‐free quantification, many of the chaperones are upregulated with an average two‐fold increase in protein abundance. Interestingly, eight of the significantly upregulated chaperones are direct gene targets of heat shock transcription factor‐1. By performing absolute quantification of chaperones under heat stress for the first time, we were able to evaluate the individual protein‐level response. Furthermore, this SRM data was used to calibrate label‐free quantification values for the proteome in absolute terms, thus improving relative quantification between the two conditions. This study significantly enhances the largely transcriptomic data available in the field and illustrates a more nuanced response at the protein level. PMID:27252046

  10. Relative and Absolute Availability of Healthier Food and Beverage Alternatives Across Communities in the United States

    PubMed Central

    Powell, Lisa M.; Rimkus, Leah; Isgor, Zeynep; Barker, Dianne C.; Ohri-Vachaspati, Punam; Chaloupka, Frank

    2014-01-01

    Objectives. We examined associations between the relative and absolute availability of healthier food and beverage alternatives at food stores and community racial/ethnic, socioeconomic, and urban–rural characteristics. Methods. We analyzed pooled, annual cross-sectional data collected in 2010 to 2012 from 8462 food stores in 468 communities spanning 46 US states. Relative availability was the ratio of 7 healthier products (e.g., whole-wheat bread) to less healthy counterparts (e.g., white bread); we based absolute availability on the 7 healthier products. Results. The mean healthier food and beverage ratio was 0.71, indicating that stores averaged 29% fewer healthier than less healthy products. Lower relative availability of healthier alternatives was associated with low-income, Black, and Hispanic communities. Small stores had the largest differences: relative availability of healthier alternatives was 0.61 and 0.60, respectively, for very low-income Black and very low-income Hispanic communities, and 0.74 for very high-income White communities. We found fewer associations between absolute availability of healthier products and community characteristics. Conclusions. Policies to improve the relative availability of healthier alternatives may be needed to improve population health and reduce disparities. PMID:25211721

  11. Predictive error analysis for a water resource management model

    NASA Astrophysics Data System (ADS)

    Gallagher, Mark; Doherty, John

    2007-02-01

    In calibrating a model, a set of parameters is assigned to the model which will be employed for the making of all future predictions. If these parameters are estimated through solution of an inverse problem, formulated to be properly posed through either pre-calibration or mathematical regularisation, then solution of this inverse problem will, of necessity, lead to a simplified parameter set that omits the details of reality, while still fitting historical data acceptably well. Furthermore, estimates of parameters so obtained will be contaminated by measurement noise. Both of these phenomena will lead to errors in predictions made by the model, with the potential for error increasing with the hydraulic property detail on which the prediction depends. Integrity of model usage demands that model predictions be accompanied by some estimate of the possible errors associated with them. The present paper applies theory developed in a previous work to the analysis of predictive error associated with a real world, water resource management model. The analysis offers many challenges, including the fact that the model is a complex one that was partly calibrated by hand. Nevertheless, it is typical of models which are commonly employed as the basis for the making of important decisions, and for which such an analysis must be made. The potential errors associated with point-based and averaged water level and creek inflow predictions are examined, together with the dependence of these errors on the amount of averaging involved. Error variances associated with predictions made by the existing model are compared with "optimized error variances" that could have been obtained had calibration been undertaken in such a way as to minimize predictive error variance. The contributions by different parameter types to the overall error variance of selected predictions are also examined.

  12. Measurement Error and Equating Error in Power Analysis

    ERIC Educational Resources Information Center

    Phillips, Gary W.; Jiang, Tao

    2016-01-01

    Power analysis is a fundamental prerequisite for conducting scientific research. Without power analysis the researcher has no way of knowing whether the sample size is large enough to detect the effect he or she is looking for. This paper demonstrates how psychometric factors such as measurement error and equating error affect the power of…

  13. Anxiety and Error Monitoring: Increased Error Sensitivity or Altered Expectations?

    ERIC Educational Resources Information Center

    Compton, Rebecca J.; Carp, Joshua; Chaddock, Laura; Fineman, Stephanie L.; Quandt, Lorna C.; Ratliff, Jeffrey B.

    2007-01-01

    This study tested the prediction that the error-related negativity (ERN), a physiological measure of error monitoring, would be enhanced in anxious individuals, particularly in conditions with threatening cues. Participants made gender judgments about faces whose expressions were either happy, angry, or neutral. Replicating prior studies, midline…

  14. Method to obtain absolute impurity density profiles combining charge exchange and beam emission spectroscopy without absolute intensity calibration

    NASA Astrophysics Data System (ADS)

    Kappatou, A.; Jaspers, R. J. E.; Delabie, E.; Marchuk, O.; Biel, W.; Jakobs, M. A.

    2012-10-01

    Investigation of impurity transport properties in tokamak plasmas is essential and a diagnostic that can provide information on the impurity content is required. Combining charge exchange recombination spectroscopy (CXRS) and beam emission spectroscopy (BES), absolute radial profiles of impurity densities can be obtained from the CXRS and BES intensities, electron density and CXRS and BES emission rates, without requiring any absolute calibration of the spectra. The technique is demonstrated here with absolute impurity density radial profiles obtained in TEXTOR plasmas, using a high efficiency charge exchange spectrometer with high etendue, that measures the CXRS and BES spectra along the same lines-of-sight, offering an additional advantage for the determination of absolute impurity densities.

  15. Zonal average earth radiation budget measurements from satellites for climate studies

    NASA Technical Reports Server (NTRS)

    Ellis, J. S.; Haar, T. H. V.

    1976-01-01

    Data from 29 months of satellite radiation budget measurements, taken intermittently over the period 1964 through 1971, are composited into mean month, season and annual zonally averaged meridional profiles. Individual months, which comprise the 29 month set, were selected as representing the best available total flux data for compositing into large scale statistics for climate studies. A discussion of spatial resolution of the measurements along with an error analysis, including both the uncertainty and standard error of the mean, are presented.
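    The error analysis mentioned in the abstract above distinguishes the uncertainty of individual measurements from the standard error of the composited mean. A minimal sketch of computing both the mean and its standard error for a set of monthly zonal-mean fluxes (the flux values are invented, not from the data set):

    ```python
    import math

    def mean_and_standard_error(samples):
        """Return (mean, standard error of the mean) for a list of samples."""
        n = len(samples)
        mean = sum(samples) / n
        var = sum((x - mean) ** 2 for x in samples) / (n - 1)  # sample variance
        return mean, math.sqrt(var / n)  # SEM = s / sqrt(n)

    # Hypothetical monthly zonal-mean net flux values (W m^-2).
    monthly_flux = [236.0, 241.0, 239.0, 244.0, 238.0]
    mean_flux, sem = mean_and_standard_error(monthly_flux)
    ```

    The standard error shrinks as more months are composited, which is why multi-month composites give tighter climate statistics than any single month.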

  16. A temperature error correction method for a naturally ventilated radiation shield

    NASA Astrophysics Data System (ADS)

    Yang, Jie; Liu, Qingquan; Dai, Wei; Ding, Renhui

    2016-11-01

    Due to solar radiation exposure, air flowing inside a naturally ventilated radiation shield may produce a measurement error of 0.8 °C or higher. To improve the air temperature observation accuracy, a temperature error correction method is proposed. The correction method is based on a Computational Fluid Dynamics (CFD) method and a Genetic Algorithm (GA) method. The CFD method is implemented to analyze and calculate the temperature errors of a naturally ventilated radiation shield under various environmental conditions. Then, a temperature error correction equation is obtained by fitting the CFD results using the GA method. To verify the performance of the correction equation, the naturally ventilated radiation shield and an aspirated temperature measurement platform are characterized in the same environment to conduct the intercomparison. The aspirated temperature measurement platform serves as an air temperature reference. The mean temperature error given by measurements is 0.36 °C, and the mean temperature error given by the correction equation is 0.34 °C. This correction equation allows the temperature error to be reduced by approximately 95%. The mean absolute error (MAE) and the root mean square error (RMSE) between the temperature errors given by the correction equation and the temperature errors given by the measurements are 0.07 °C and 0.08 °C, respectively.
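    The MAE and RMSE figures quoted above compare correction-equation errors against measured errors; both metrics are straightforward to compute. A sketch with invented values (not the paper's data):

    ```python
    import math

    def mae(predicted, observed):
        """Mean absolute error between paired sequences."""
        return sum(abs(p - o) for p, o in zip(predicted, observed)) / len(observed)

    def rmse(predicted, observed):
        """Root mean square error between paired sequences."""
        return math.sqrt(sum((p - o) ** 2 for p, o in zip(predicted, observed)) / len(observed))

    # Hypothetical temperature errors (°C): from the correction equation vs. measured.
    corrected = [0.30, 0.35, 0.40, 0.32]
    measured = [0.36, 0.30, 0.45, 0.33]
    ```

    RMSE penalizes large deviations more heavily than MAE, so RMSE ≥ MAE always holds; the paper's close 0.07 °C / 0.08 °C pair suggests the residuals are fairly uniform in size.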

  17. Improving medication administration error reporting systems. Why do errors occur?

    PubMed

    Wakefield, B J; Wakefield, D S; Uden-Holman, T

    2000-01-01

    Monitoring medication administration errors (MAE) is often included as part of the hospital's risk management program. While observation of actual medication administration is the most accurate way to identify errors, hospitals typically rely on voluntary incident reporting processes. Although incident reporting systems are more economical than other methods of error detection, incident reporting can also be a time-consuming process depending on the complexity or "user-friendliness" of the reporting system. Accurate incident reporting systems are also dependent on the ability of the practitioner to: 1) recognize an error has actually occurred; 2) believe the error is significant enough to warrant reporting; and 3) overcome the embarrassment of having committed a MAE and the fear of punishment for reporting a mistake (either one's own or another's mistake).

  18. Dynamic Multiscale Averaging (DMA) of Turbulent Flow

    SciTech Connect

    Richard W. Johnson

    2012-09-01

    A new approach called dynamic multiscale averaging (DMA) for computing the effects of turbulent flow is described. The new method encompasses multiple applications of temporal and spatial averaging, that is, multiscale operations. Initially, a direct numerical simulation (DNS) is performed for a relatively short time; it is envisioned that this short time should be long enough to capture several fluctuating time periods of the smallest scales. The flow field variables are subject to running time averaging during the DNS. After the relatively short time, the time-averaged variables are volume averaged onto a coarser grid. Both time and volume averaging of the describing equations generate correlations in the averaged equations. These correlations are computed from the flow field and added as source terms to the computation on the next coarser mesh. They represent coupling between the two adjacent scales. Since they are computed directly from first principles, there is no modeling involved. However, there is approximation involved in the coupling correlations as the flow field has been computed for only a relatively short time. After the time and spatial averaging operations are applied at a given stage, new computations are performed on the next coarser mesh using a larger time step. The process continues until the coarsest scale needed is reached. New correlations are created for each averaging procedure. The number of averaging operations needed is expected to be problem dependent. The new DMA approach is applied to a relatively low Reynolds number flow in a square duct segment. Time-averaged stream-wise velocity and vorticity contours from the DMA approach appear to be very similar to a full DNS for a similar flow reported in the literature. Expected symmetry for the final results is produced for the DMA method. The results obtained indicate that DMA holds significant potential in being able to accurately compute turbulent flow without modeling for practical
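    The running time averaging during the DNS described in the abstract above can be done incrementally, without storing the full time history. A minimal sketch of the standard incremental-mean update (the sample values are invented):

    ```python
    def running_average(samples):
        """Incrementally update a running mean over successive time samples."""
        avg = 0.0
        for n, value in enumerate(samples, start=1):
            avg += (value - avg) / n  # incremental-mean update: avg_n = avg_{n-1} + (x_n - avg_{n-1}) / n
        return avg

    # Hypothetical point-velocity samples from successive time steps.
    velocity_samples = [1.0, 2.0, 4.0, 5.0]
    time_averaged_velocity = running_average(velocity_samples)
    ```

    In a DMA-style workflow this update would run per grid point during the fine-scale simulation; the resulting time-averaged field is then volume-averaged onto the coarser mesh.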

  19. Retrieving sea surface salinity with multiangular L-band brightness temperatures: Improvement by spatiotemporal averaging

    NASA Astrophysics Data System (ADS)

    Camps, A.; Vall-Llossera, M.; Batres, L.; Torres, F.; Duffo, N.; Corbella, I.

    2005-04-01

    The Soil Moisture and Ocean Salinity (SMOS) mission was selected in May 1999 by the European Space Agency to provide global and frequent soil moisture and sea surface salinity maps. SMOS' single payload is Microwave Imaging Radiometer by Aperture Synthesis (MIRAS), an L band two-dimensional aperture synthesis interferometric radiometer with multiangular observation capabilities. Most geophysical parameter retrieval errors studies have assumed the independence of measurements both in time and space so that the standard deviation of the retrieval errors decreases with the inverse of square root of the number of measurements being averaged. This assumption is especially critical in the case of sea surface salinity (SSS), where spatiotemporal averaging is required to achieve the ultimate goal of 0.1 psu error. This work presents a detailed study of the SSS error reduction by spatiotemporal averaging, using the SMOS end-to-end performance simulator (SEPS), including thermal noise, all instrumental error sources, current error correction and image reconstruction algorithms, and correction of atmospheric and sky noises. The most important error sources are the biases that appear in the brightness temperature images. Three different sources of biases have been identified: errors in the noise injection radiometers, Sun contributions to the antenna temperature, and imaging under aliasing conditions. A calibration technique has been devised to correct these biases prior to the SSS retrieval at each satellite overpass. Simulation results show a retrieved salinity error of 0.2 psu in warm open ocean, and up to 0.7 psu at high latitudes and near the coast, where the external calibration method presents more difficulties.
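    The abstract above notes that retrieval error is often assumed to fall as the inverse square root of the number of averaged measurements, and that biases in the brightness temperature images break this assumption. A minimal simulation illustrating both effects (all numbers invented; this is a sketch, not the SEPS simulator):

    ```python
    import random
    import statistics

    random.seed(0)

    def averaged_retrieval(n, noise_std=1.0, bias=0.2):
        """Average n noisy measurements of a true value of 0.0, with a fixed bias."""
        return sum(random.gauss(bias, noise_std) for _ in range(n)) / n

    # The spread of the averaged estimate shrinks roughly as 1/sqrt(n)...
    spread_1 = statistics.stdev(averaged_retrieval(1) for _ in range(2000))
    spread_64 = statistics.stdev(averaged_retrieval(64) for _ in range(2000))
    # ...but the bias survives averaging, no matter how large n becomes.
    mean_64 = statistics.mean(averaged_retrieval(64) for _ in range(2000))
    ```

    This is why the paper's external calibration step, which removes the biases before retrieval, matters: spatiotemporal averaging alone cannot reach the 0.1 psu goal if a bias remains.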

  20. Low Average Sidelobe Slot Array Antennas for Radiometer Applications

    NASA Technical Reports Server (NTRS)

    Rengarajan, Sembiam; Zawardzki, Mark S.; Hodges, Richard E.

    2012-01-01

    In radiometer applications, it is required to design antennas that meet low average sidelobe levels and low average return loss over a specified frequency bandwidth. It is a challenge to meet such specifications over a frequency range when one uses resonant elements such as waveguide feed slots. In addition to their inherent narrow frequency band performance, the problem is exacerbated due to modeling errors and manufacturing tolerances. There was a need to develop a design methodology to solve the problem. An iterative design procedure was developed by starting with an array architecture, lattice spacing, aperture distribution, waveguide dimensions, etc. The array was designed using Elliott's technique with appropriate values of the total slot conductance in each radiating waveguide, and the total resistance in each feed waveguide. Subsequently, the array performance was analyzed by the full wave method of moments solution to the pertinent integral equations. Monte Carlo simulations were also carried out to account for amplitude and phase errors introduced for the aperture distribution due to modeling errors as well as manufacturing tolerances. If the design margins for the average sidelobe level and the average return loss were not adequate, array architecture, lattice spacing, aperture distribution, and waveguide dimensions were varied in subsequent iterations. Once the design margins were found to be adequate, the iteration was stopped and a good design was achieved. A symmetric array architecture was found to meet the design specification with adequate margin. The specifications were near 40 dB for angular regions beyond 30 degrees from broadside. Separable Taylor distribution with nbar=4 and 35 dB sidelobe specification was chosen for each principal plane. A non-separable distribution obtained by the genetic algorithm was found to have similar characteristics. The element spacing was obtained to provide the required beamwidth and close to a null in the E